How to Detect Deepfakes
Deepfakes are a clear and present danger to businesses. According to MarketsandMarkets, the deepfake market will balloon from $564 million in 2024 to $5.1 billion by 2030, a 44.5% compound annual growth rate.
Deepfakes pose several types of threats, including corporate sabotage, identity spoofing, and enhanced social engineering attacks. Most commonly, bad actors use deepfakes to increase the effectiveness of social engineering.
“It’s no secret that deep fakes are a significant concern for businesses and individuals. With the advancement of AI-generated fakes, organizations must spot basic manipulations and stay ahead of techniques that convincingly mimic facial movements and voice patterns,” says Chris Borkenhagen, chief digital officer and chief information security officer at identity verification and fraud prevention solutions provider AuthenticID, in an email interview. “Detecting deep fakes requires advanced machine learning models, behavioral analysis, and forensic tools to identify subtle inconsistencies in images, videos, and audio. Mismatches in lighting, shadows, and eye movements can often expose even the most convincing deep fakes.”
Organizations should leverage visual and text fraud algorithms that utilize deep learning to detect anomalies in the data underpinning deepfakes. This approach should go beyond surface-level detection to analyze content structure for signs of manipulation.
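What that looks like in practice varies by vendor, but the frame-level idea can be sketched simply. The snippet below assumes an already-trained binary real-versus-manipulated classifier; the model interface, the sampling stride, and the 0.5 review threshold are illustrative assumptions, not any particular product:

```python
# Minimal sketch of frame-level deepfake screening with a deep learning model.
# Assumes `model` is a pretrained binary real-vs-fake classifier; nothing here
# is a specific vendor's tool.
import cv2
import torch
from torchvision import transforms

# Preprocessing matching common ImageNet-style classifiers.
preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def score_video(video_path: str, model: torch.nn.Module, stride: int = 15) -> float:
    """Return the mean per-frame probability that sampled frames are manipulated."""
    cap = cv2.VideoCapture(video_path)
    scores, i = [], 0
    model.eval()
    with torch.no_grad():
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if i % stride == 0:  # sample every Nth frame to keep scoring cheap
                rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
                x = preprocess(rgb).unsqueeze(0)  # shape (1, 3, 224, 224)
                scores.append(torch.sigmoid(model(x)).item())
            i += 1
    cap.release()
    return sum(scores) / max(len(scores), 1)  # e.g., route >0.5 to human review
```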
“The responsibility for detecting and mitigating deep fake threats should be shared across the organization, with CISOs leading the way. They must equip their teams with the right tools and training to recognize deep fake threats,” Borkenhagen says. “However, CEO and board-level involvement is important, as deep fakes pose risks that extend beyond fraud. They can damage a brand’s reputation and compromise sensitive communications. Organizations must incorporate deep fake detection into their broader fraud prevention strategies and stay informed about the latest advancements in AI technologies and detection tools.”
As deepfakes become more sophisticated, organizations must be prepared with both advanced detection tools and comprehensive response strategies.
“AI-powered solutions like Reality Defender and Sensity AI play a key role in detecting manipulated content by identifying subtle inconsistencies in visuals and audio,” says Ryan Waite, adjunct professor at Brigham Young University-Hawaii and VP of public affairs at digital advocacy firm Think Big. “Tools like FakeCatcher go further, analyzing physiological markers such as blood flow in the face to identify deep fakes. Amber Authenticate adds another layer of security by verifying the authenticity of media files through cryptographic techniques.”
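FakeCatcher’s internals aren’t public, but the physiological idea behind it, remote photoplethysmography (rPPG), can be illustrated: real faces show a faint periodic blood-flow signal in skin pixels that many synthetic faces lack. The sketch below is a minimal illustration of that general technique, not Intel’s implementation; the face detector and frequency band are assumptions:

```python
# Sketch of the rPPG idea behind physiological deepfake detection: look for a
# heartbeat-frequency signal in facial skin pixels. Illustrative only.
import cv2
import numpy as np

def pulse_band_power(video_path: str, fps: float = 30.0) -> float:
    """Fraction of the face signal's power in the heart-rate band (0.7-4 Hz).

    Low values suggest no blood-flow signal -- one hint of a synthetic face.
    """
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    samples = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        faces = detector.detectMultiScale(
            cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 1.3, 5)
        if len(faces) == 0:
            continue  # skip frames with no detected face
        x, y, w, h = faces[0]
        # Mean green-channel intensity of the face region: the channel in
        # which the subtle blood-flow signal is strongest.
        samples.append(frame[y:y + h, x:x + w, 1].mean())
    cap.release()
    sig = np.asarray(samples) - np.mean(samples)
    power = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return float(power[band].sum() / power.sum())
```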
Deepfake detection should be a priority, with CISOs, data science teams, and legal departments working together to manage these technologies. In addition to detection, companies must implement a deepfake response strategy, he says. This involves:
- Having clear protocols for identifying, managing, and mitigating deepfakes.
- Training employees to recognize manipulated content.
- Making sure the C-suite understands the risks of impersonation, fraud, and reputational damage, and plans accordingly.
- Staying informed on evolving AI and deepfake legislation. As regulatory frameworks develop, companies must be proactive in ensuring compliance and safeguarding their reputation.
“Combining cutting-edge tools, a robust response strategy, and legislative awareness is the best defense against this growing threat,” says Waite.
How Deepfakes Facilitate Social Engineering
Deepfakes are being used in elaborate scams against businesses, with threat actors leveraging synthetic videos, audio, and images to enhance social engineering attacks such as business email compromise (BEC) and phishing. AI has also made it incredibly easy to produce a deepfake and spread it far and wide, and there is a wealth of readily available tools on the dark web.
“We have seen evidence of deepfake videos being used in virtual meetings and audio in voicemail or live conversations, deceiving targets into revealing sensitive information or clicking malicious links,” says Azeem Aleem, managing director of client leadership, EMEA, and managing director of Northern Europe. Financial services firms are especially worried about AI- and generative AI-powered fraud, with Deloitte Insights showing a 700% rise in deepfake incidents in fintech in 2023.
Other examples of deepfake-enabled attack techniques include “vishing” (voice phishing), Zoom bombing, and biometric attacks.
“Hackers are now combining email and vishing with deepfake voice technology, enabling them to clone voices from just three seconds of recorded speech and conduct highly targeted social engineering fraud,” says Aleem. “This evolution makes it possible for attackers to impersonate C-level executives using their cloned voices, significantly enhancing their ability to breach corporate networks.”
Zoom bombing occurs when uninvited guests disrupt online meetings or when attackers impersonate trusted individuals to infiltrate them. Biometric attacks are another growing threat.
“Businesses frequently use biometric authentication systems, such as facial or voice recognition, for employee verification,” says Aleem. “However, deepfake technology has advanced to the point where it can deceive these systems to bypass customer verification processes, including commands like blinking or looking in specific directions.”
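A challenge-response liveness check of the kind Aleem describes can be sketched as below; his warning is precisely that real-time deepfakes can now satisfy such challenges, so they should be layered with out-of-band factors rather than trusted alone. The `session` object and its methods are hypothetical stand-ins for a vendor’s computer-vision checks:

```python
# Rough sketch of a challenge-response liveness flow. The session.prompt and
# session.observed calls are hypothetical stand-ins for real CV checks; the
# point above is that real-time deepfakes can pass these, so pair them with
# additional authentication factors.
import random
import time

CHALLENGES = ["blink_twice", "look_left", "look_right", "tilt_head_up"]

def run_liveness_check(session, rounds: int = 3, timeout_s: float = 5.0) -> bool:
    """Issue randomized challenges; fail closed on any miss or timeout."""
    for challenge in random.sample(CHALLENGES, rounds):
        session.prompt(challenge)              # e.g., "please blink twice"
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            if session.observed(challenge):    # vision model confirms the action
                break
            time.sleep(0.1)
        else:
            return False                       # timed out: treat as a spoof
    return True                                # still combine with other factors
```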
According to accounts payable automation solution provider Medius, 53% of businesses in the US and UK have been targets of a financial scam powered by deepfake technology, with 43% falling victim to such attacks.
“Beyond BEC, attackers use deepfakes to create convincing fake social media profiles and impersonate individuals in real-time conversations, making it easier to manipulate victims into compromising their security,” says Aleem. “It’s not necessarily targeted, but it does prey on natural vulnerabilities like human error and fear. As AI applications develop, deepfakes can also be produced to request profile changes with agents and to train voice bots to mimic IVRs. These deepfake voice techniques allow attackers to navigate IVR systems and steal basic account details, increasing the risk to organizations and their customers.”
The business risks include fraud, extortion, and market manipulation.
“Deepfakes are disrupting various industries in profound ways. Call centers at banks and financial institutions are grappling with deepfake voice cloning attacks aimed at unauthorized account access and fraudulent transactions,” says Aleem. “In the insurance sector, deepfakes are exploited to submit false evidence for fraudulent claims, causing significant financial losses. Media companies suffer reputational damage and revenue loss due to deepfake content that undermines their credibility and accuracy. Meanwhile, social media platforms are inundated with deepfake manipulation, resulting in misleading news stories and potential societal harm.”
Take a Breath
The accelerating velocity of business means that workers need to operate more efficiently. Given the number of communication channels used at work (email, Slack, Asana, Teams, Zoom, etc.), attention gets fragmented across tasks and processes in the attempt to multitask. This complex, rapid way of working makes pausing seem unthinkable. However, without constantly questioning what is “real,” one can easily fall victim to a scam.
“The first line of defense is to stop and think critically. Deepfake attacks often ask someone to do something unexpected or illogical. Building a culture of verification, regardless of who is asking for something, is key,” says Zack Schuler, founder and executive chairman of cybersecurity awareness training company NINJIO. “It’s as much about building the habit of slowing any action that is unexpected and requiring additional verification as it is about detecting a deepfake. This, along with just knowing that deepfakes exist, goes far to stop their attacks.”
For example, a Ferrari executive foiled an attack involving a deepfake of the company’s CEO. And Senator Ben Cardin (D-Md.), who serves as the Democratic chair of the Senate Foreign Relations Committee, was targeted in an advanced deepfake scheme that partially succeeded in duping the politician.
“Security leaders should think beyond the novel deepfake attack vector and recognize that this is just another social engineering tool for manipulation,” says Schuler. “Teaching employees what to look for in any kind of manipulation will build a more resilient workforce and a stronger security posture.”
As deepfakes continue to advance, they’ll become more interactive and therefore more convincing.
Carl Froggett, CISO at zero-day data security company Deep Instinct, says his company is seeing AI-generated fake identities and AI-driven avatars.
“The media and entertainment industry is worried about that, but banks and others should be worried, too,” says Froggett. “The person you think is a customer might not actually exist at all, so you’ve got to build that into your touch points.”
The popular targets are CEOs and CFOs because they have the power to move money. In one recent case, an investor went through MFA, executed some trades, and hung up. Immediately thereafter came a fraudulent call spoofing the investor’s number, and because the customer had just called, the firm did not require reauthentication.
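The mitigation that incident suggests is a policy in which trust is never inherited from caller ID or from a recent session, and every sensitive action requires fresh, in-call authentication. A minimal sketch of such a rule, with illustrative field names:

```python
# Sketch of a per-call authentication policy: trust is never carried over from
# a recent call or inferred from caller ID, which spoofing defeats.
# The dataclass fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class InboundCall:
    claimed_customer_id: str
    caller_id: str        # spoofable; deliberately never used as a factor
    mfa_passed: bool      # fresh MFA completed on *this* call

def may_transact(call: InboundCall) -> bool:
    """Allow sensitive actions only after fresh, in-call authentication."""
    # Ignore caller_id and any "customer called minutes ago" signal: the
    # fraudulent follow-up call above exploited exactly that inherited trust.
    return call.mfa_passed
```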
“The way I would approach this is not ‘I’ve got to have this new technology because of this generative AI stuff,’” says Froggett. “I’d say, if we don’t do this, I estimate our fraud and loss is going to go [to a higher level]. It’s a business risk decision, more than a technology-driven one.”
Lisa O’Connor, global security R&D lead for Accenture Labs, says her firm sees companies focusing more on education and awareness.
“This is the part where we need to pause our human brains, stop the natural responses and create the space to ask questions,” says O’Connor. “We find it’s more about creating human space and giving people the ability to pause — actually, giving them the responsibility to pause — and make sure the right controls are happening. Is this something outside of a normal workflow? Is something not following a pattern? Do I have a sense something feels off?”
A zero-trust culture must be supported with technology-enabled deepfake prevention and detection. Those tools need to be integrated into the workflow, the pipeline of calls, and video conferencing so those channels are hardened. Good identity and access management (IAM) is also a necessity.
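One concrete way such tooling gets wired into a media pipeline is cryptographic provenance checking, of the kind Waite attributed to Amber Authenticate earlier: fingerprint recordings at capture time and verify the fingerprint before the media is trusted downstream. A minimal sketch, in which the trusted-ledger lookup is an illustrative stand-in rather than any vendor’s API:

```python
# Minimal sketch of cryptographic media provenance: hash a file at capture
# time, record the digest in a trusted store, and verify before use.
# `trusted_ledger` is an illustrative stand-in, not a vendor's API.
import hashlib

def fingerprint(path: str, chunk_size: int = 1 << 20) -> str:
    """SHA-256 of a media file, streamed so large videos fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def is_authentic(path: str, trusted_ledger: dict[str, str]) -> bool:
    """True only if the file's digest matches the one recorded at capture."""
    return trusted_ledger.get(path) == fingerprint(path)
```

Any post-capture edit, splice, or face swap changes the digest, so a mismatch flags the file before it reaches a meeting, a claim, or a newsroom.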
“You need to think all across the reference architecture of deepfakes and how they can affect the enterprise. There probably isn’t an area that it doesn’t affect. It’s just the degree to which it does,” says O’Connor.
Education Is Key
Humans tend to be the weakest link in an organization, so it’s important to train the staff so they understand the risks and can spot the red flags.
“Rather than focusing solely on the technical signs of deepfakes, like odd facial movements or inconsistencies in speech, the emphasis should be on identifying suspicious situations. For example, if someone makes an unusual request for money or creates a sense of urgency, it’s a good time to verify the authenticity by using secure methods like a password or additional phone call,” says Yinglian Xie, CEO and co-founder of fraud and risk management platform DataVisor. “Both individuals and companies should focus on raising awareness and training teams to react quickly to potential threats. For financial institutions, advanced fraud detection technology, like behavioral biometrics and real-time machine learning, is crucial. These systems continuously monitor interactions and adapt to new threats, providing an extra layer of protection.”
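The real-time monitoring Xie describes can be reduced, in principle, to maintaining a per-user behavioral baseline and scoring each new interaction against it. The sketch below uses an exponentially weighted running mean and variance; the typing-cadence feature and the flag threshold are illustrative assumptions:

```python
# Sketch of streaming behavioral anomaly scoring: keep an exponentially
# weighted baseline per user and flag interactions that deviate sharply.
# The feature and threshold are illustrative assumptions.
import math
from collections import defaultdict

class BehaviorBaseline:
    def __init__(self, alpha: float = 0.05):
        self.alpha = alpha          # how quickly the baseline adapts
        self.mean, self.var, self.n = 0.0, 1.0, 0

    def score(self, value: float) -> float:
        """Return a z-score for this observation, then fold it into the baseline."""
        z = abs(value - self.mean) / math.sqrt(self.var) if self.n else 0.0
        self.mean += self.alpha * (value - self.mean)
        self.var += self.alpha * ((value - self.mean) ** 2 - self.var)
        self.n += 1
        return z

baselines = defaultdict(BehaviorBaseline)   # keyed by user ID

def flag(user_id: str, typing_interval_ms: float, threshold: float = 4.0) -> bool:
    """Flag an interaction that deviates more than `threshold` sigmas from baseline."""
    return baselines[user_id].score(typing_interval_ms) > threshold
```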
Jennifer Wilson, head of cyber at insurance broker Newfront, says one of her company’s clients hired someone who had created a deepfake, AI-generated profile. A few weeks after this person was hired, a news story broke about how he had defrauded several other companies, and he was fired on the spot. But because he had been given corporate credentials, he was able to install malware and then extort the companies that had hired him.
“Since the pandemic, we’ve been in this perpetual chase with the threat actors and they are far ahead of us because they’re implementing all these new tools at rapid speed, and the tools are helping them,” says Wilson. “They’re giving unskilled hackers the capability to pull off very sophisticated attacks, and they’re just running with it, sharing it with their actor groups, and those groups are multiplying.”
Bottom line: the threat of deepfakes is very real and becoming more common every day. Organizations need to train employees about the dangers while their cyber and IT teams work to keep pace with the rapidly evolving threat landscape.