The Rise of Deepfake Scams: A $25 Million Lesson from Arup 

In the last decade, deepfake technology has emerged as a powerful application of artificial intelligence (AI) and machine learning (ML). It allows for the creation of highly realistic and believable fake images, videos, and audio recordings.

The term "deepfake" is a portmanteau of "deep learning" and "fake," reflecting its foundation in deep learning techniques. At the core of this technology are neural networks, particularly Generative Adversarial Networks (GANs). They can manipulate or generate synthetic media that is difficult to distinguish from real ones.

Deepfake technology has evolved rapidly, fueled by advances in AI and ML algorithms. Originally notorious for producing realistic fake videos of public figures, it has since found applications across various domains, offering both opportunities and challenges for businesses. 


The Arup Deepfake Scam: A Case Study 

In a shocking turn of events, the British design and engineering company Arup fell victim to a deepfake scam. The scheme against the multinational firm, known for its work on iconic structures, started innocently enough, yet it ended with one of Arup's Hong Kong employees transferring $25 million to fraudsters. Let's take a closer look at the incident and the lessons we can learn from it. 

About Arup 

Arup is a globally renowned engineering and design consultancy, known for its innovative approach to architecture, engineering, and urban planning.

Founded in 1946 by Sir Ove Arup, the firm has evolved into a multinational organization that has delivered some of the world's most iconic and complex projects, including the Sydney Opera House, the Beijing National Stadium, and the High Line. Arup's commitment to excellence, sustainability, and innovation has made it a leader in the built environment sector. 

The Incident 

An Arup spokesperson confirmed that the incident involved fake voices and images, a hallmark of deepfake technology. 

“Unfortunately, we can’t go into details at this stage, as the incident is still the subject of an ongoing investigation. However, we can confirm that fake voices and images were used,” the spokesperson said.  

“Our financial stability and business operations were not affected, and none of our internal systems were compromised,” he added. 

How the Scam Unfolded 

The scam began when a finance worker at Arup received a phishing email, apparently from the company’s UK office, detailing the need for a secret transaction.  

The worker was initially skeptical, but a video call with individuals who looked and sounded just like their colleagues dispelled those doubts. The people on the call were, in fact, deepfake recreations.  

Convinced the call was legitimate, the worker transferred approximately 200 million Hong Kong dollars (about $25.6 million) across 15 transactions to the scammers’ bank accounts.

Implications for Businesses 

Scams targeting businesses have become a significant global threat with financial and reputational risks. These fraudulent activities often include phishing attacks, business email compromise (BEC), and invoice fraud. They have profound implications not only for the targeted companies but also for the wider economic landscape.

As the threats evolve, more businesses are turning to cyber insurance to mitigate potential losses from scams and cyberattacks; the global cyber insurance market is projected to grow from $7 billion in 2020 to $20 billion by 2025. While insurance provides a safety net, its rapid growth also underscores the need for comprehensive risk management strategies. 

Notable Scams 

The Arup deepfake scam is just one of many that have occurred in recent years. Other notable scams include:

  • Toyota BEC Scam (2019): Toyota Boshoku Corporation, a subsidiary of Toyota Group, lost $37 million. Scammers impersonated company executives and requested a fraudulent money transfer. 
  • Facebook and Google (2013-2015): A Lithuanian scammer tricked Facebook and Google into paying over $100 million by posing as a hardware vendor. 

Earlier this month, an executive at Ferrari NV received unusual WhatsApp messages, allegedly from CEO Benedetto Vigna, suggesting a major acquisition. Despite the convincing details, the executive grew suspicious and uncovered a sophisticated deepfake scam.

These incidents are becoming increasingly common, underscoring the need for organizations and individuals to remain vigilant and implement strong security measures. The AI-generated images of Taylor Swift that circulated on social media this year further highlight the risks of deepfake technology.  

Arup employs 18,500 people across 34 offices worldwide and manages landmark projects like the Bird’s Nest stadium, the site of the 2008 Beijing Olympic Games. Rob Greig, Arup’s global chief information officer, noted: 

“Like many other businesses around the globe, our operations are subject to regular attacks, including invoice fraud, phishing scams, WhatsApp voice spoofing, and deepfakes. The number and sophistication of these attacks have been rising sharply in recent months.” 

Preventive Measures for Businesses 

To protect against these scams, companies can implement several measures: 

Employee Training 

Educating employees about common scams and how to spot suspicious messages or requests can go a long way in preventing successful attacks. Training can include:

  • Awareness Programs: Regularly educate employees about the risks and signs of deepfake scams. Employees should be able to recognize unusual requests and verify their authenticity through secondary channels. 
  • Simulation Exercises: Conduct training sessions that include simulated deepfake attacks to help employees practice responding to potential threats. 
  • Reporting Mechanisms: Establish clear protocols for reporting suspected deepfake scams, ensuring that employees know how to escalate concerns promptly. 

Advanced Detection Technologies 

Certain technologies can help identify deepfakes and protect against them. These technologies include:

  • Deepfake Detection Software: Invest in state-of-the-art software that can analyze media content for signs of manipulation. These tools use algorithms to detect inconsistencies in video and audio files (a simplified illustration follows this list). 
  • AI-Powered Verification Tools: Implement tools that use artificial intelligence to cross-check and verify the authenticity of digital content in real time. 
  • Biometric Authentication: Use advanced biometric systems, such as voice recognition and facial recognition, to verify the identity of individuals in critical communications. 
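
To make the idea of automated screening concrete, here is a minimal Python sketch that uses OpenCV's bundled Haar cascade face detector to flag recorded video calls whose face detections appear and disappear erratically between sampled frames, a crude stand-in for the frame-level consistency checks commercial deepfake-detection tools perform. It is a toy illustration only; the file name and the 0.3 threshold are assumptions for the example, not a real product's logic.

    # Illustrative sketch only: flag videos with unstable per-frame face detections.
    # Requires: pip install opencv-python
    import cv2

    def detection_stability(video_path: str, sample_every: int = 5) -> float:
        """Return the fraction of sampled frame pairs whose face count changes."""
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
        )
        capture = cv2.VideoCapture(video_path)
        counts = []
        frame_index = 0
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            if frame_index % sample_every == 0:
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
                counts.append(len(faces))
            frame_index += 1
        capture.release()
        if len(counts) < 2:
            return 0.0
        changes = sum(1 for a, b in zip(counts, counts[1:]) if a != b)
        return changes / (len(counts) - 1)

    if __name__ == "__main__":
        # "recorded_call.mp4" and the 0.3 threshold are arbitrary assumptions.
        score = detection_stability("recorded_call.mp4")
        print("flag for manual review" if score > 0.3 else "no obvious instability")

Real detection products combine many stronger signals (lighting, lip-sync, audio artifacts), but the workflow is the same: score the media automatically and route anything suspicious to a human reviewer.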

Updated Security Protocols 

Finally, to safeguard against deepfakes, organizations should also consider updating their security protocols. These steps include:

  • Strong Employee Identification and Access Controls: Implement stringent identification protocols for employees accessing sensitive areas and systems, including biometric verification, smart cards, or other secure identification methods, so that only authorized personnel have access.
  • Multi-Factor Authentication (MFA): Require multiple forms of verification for access to sensitive information and systems, making it harder for scammers to gain unauthorized access (a minimal sketch follows this list). 
  • Secure Communication Platforms: Utilize secure and encrypted communication platforms for all business communications. Companies like RealTyme can help prevent unauthorized access and ensure the integrity of communications. 
  • Regular Security Audits: Conduct frequent security audits to identify and address vulnerabilities in the company's systems and protocols. 
  • Incident Response Plan: Develop and regularly update an incident response plan specifically for deepfake-related threats, ensuring that the company can respond swiftly and effectively to any incidents. 
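
As a concrete illustration of the MFA point, the sketch below shows one way a payment workflow could require a time-based one-time password (TOTP) from a separately enrolled authenticator app before releasing a transfer above a set threshold. It is a minimal example using the pyotp library; the threshold, function names, and enrollment step are assumptions for illustration, not a description of Arup's or RealTyme's systems.

    # Minimal sketch of a second-factor check before releasing a large transfer.
    # Requires: pip install pyotp
    import pyotp

    APPROVAL_THRESHOLD_USD = 50_000  # assumed policy threshold, for illustration only

    def enroll_approver() -> str:
        """Generate a TOTP secret to provision in the approver's authenticator app."""
        return pyotp.random_base32()

    def release_transfer(amount_usd: float, secret: str, submitted_code: str) -> bool:
        """Release the transfer only if it is below the threshold or the code verifies."""
        if amount_usd < APPROVAL_THRESHOLD_USD:
            return True
        totp = pyotp.TOTP(secret)
        # verify() checks the 6-digit code against the current time window.
        return totp.verify(submitted_code)

    if __name__ == "__main__":
        secret = enroll_approver()
        code = pyotp.TOTP(secret).now()  # in practice, typed in by the approver
        print(release_transfer(25_600_000, secret, code))      # approved
        print(release_transfer(25_600_000, secret, "000000"))  # rejected

Had a check like this been enforced through a channel the deepfaked callers could not control, the fraudulent video call alone would not have been enough to release the funds.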

While thinking about deepfake threats can make you feel vulnerable, taking these proactive steps can help protect your organization from falling victim to this increasingly sophisticated form of fraud.

Expert Insights 

As cybercriminals increasingly leverage sophisticated AI-driven tools to create highly convincing fake media, the need for robust security measures has never been more critical. Here is what experts have to say about protecting your organization from deepfakes:  

  • Sam Curry, Chief Security Officer at Cybereason: "Deepfakes are becoming increasingly sophisticated and accessible, making them a formidable tool for cybercriminals. Businesses need to be aware of this emerging threat and implement multi-factor authentication and verification processes to ensure the legitimacy of communications." 
  • Bruce Schneier, Security Technologist: "The rise of deepfakes poses a significant risk to organizations, especially in areas like social engineering and disinformation campaigns. It's crucial for companies to invest in advanced detection technologies and educate their employees about the potential dangers." 

Company Response and Actions 

Arup's immediate response to the $25 million deepfake scam involved several crucial steps to manage the situation and prevent future incidents. First, Arup notified the Hong Kong police in January upon discovering the fraud. Because multiple scammers had used deepfake AI to alter their faces and voices and pose as the board on a video conference call, the organization took urgent action to investigate and determine the origin of the scam. 

Even though the investigation is still ongoing, Arup emphasized that its financial stability and business operations were not compromised. The company has been working closely with law enforcement agencies to identify the perpetrators and recover the lost funds.  

Additionally, Arup's Chief Information Officer, Rob Greig, used this incident to highlight the increasing sophistication of cyberattacks, including invoice fraud, phishing scams, WhatsApp voice spoofing, and deepfakes. Greig stressed the importance of raising awareness about these evolving threats and implementing robust security measures to protect against them. 

RealTyme’s Role in Preventing Deepfake Scams 

Using secure communication platforms like RealTyme can significantly enhance a company's defense against deepfake scams. RealTyme offers robust security features, including end-to-end encryption, secure collaboration tools, identity protection, and secure access controls, ensuring that all communications remain confidential and protected from tampering.  

RealTyme’s advanced technology provides real-time detection and prevention measures. By integrating our solutions, organizations can enhance their security protocols, safeguard financial assets, and maintain their reputation in the face of evolving cyber threats. 

Conclusion 

The incident at Arup serves as a reminder of the growing threat posed by deepfake scams and other cyberattacks. As technology continues to advance, organizations must stay vigilant and take proactive measures to protect against emerging risks.  

By implementing robust security measures and using secure communication platforms like RealTyme, companies can mitigate the risk of falling victim to such scams and continue their operations without disruption.  

It is also essential for businesses to educate their employees about these evolving threats and regularly update their security protocols to stay ahead of scammers' tactics. With strong awareness, prevention, and detection measures in place, organizations can safeguard themselves from financial losses and maintain trust with their stakeholders.  

To learn more about how RealTyme can help protect your organization from deepfake scams and other cyber threats, visit our website or contact our team today. Let us work together to safeguard your business and reputation in the digital age.
