For some time, deepfake technology has been capable of creating realistic yet false renderings of real people and their voices. Now, with rapid advances in generative artificial intelligence, it is poised to become significantly more sophisticated. How soon will it be before digital impersonators flood our screens with fake narratives and deplete our bank accounts?
Since deepfakes first appeared in film and video editing, experts have feared the technology could be misused to spread online misinformation or facilitate identity theft and fraud. Consequently, a market for deepfake detection tools quickly emerged. These tools use AI, informed by how deepfakes are made, to spot signs that content has been falsified: in a photo of a person, for example, inconsistencies in lighting, shadows, and angles, or traces of distortion and blurring, are typical giveaways.
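To make the idea concrete, here is a deliberately simplified sketch, in Python, of the kind of heuristic such tools build on: scoring a grayscale face crop for blur and for mismatched lighting between its two halves. The function names and thresholds here are hypothetical and purely illustrative; commercial detectors rely on far more sophisticated, learned models.

```python
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Crude sharpness score: variance of a discrete Laplacian.
    Low values can point to the blurring left by face-swap blending."""
    lap = (
        -4 * gray[1:-1, 1:-1]
        + gray[:-2, 1:-1] + gray[2:, 1:-1]
        + gray[1:-1, :-2] + gray[1:-1, 2:]
    )
    return float(lap.var())

def lighting_asymmetry(gray: np.ndarray) -> float:
    """Difference in mean brightness between the left and right halves
    of a face crop; a large gap can hint at inconsistent lighting."""
    mid = gray.shape[1] // 2
    return abs(float(gray[:, :mid].mean()) - float(gray[:, mid:].mean()))

def looks_suspicious(gray: np.ndarray,
                     blur_threshold: float = 50.0,       # hypothetical cut-off
                     lighting_threshold: float = 40.0):  # hypothetical cut-off
    """Flag a face crop that is unusually blurry or unevenly lit."""
    return (laplacian_variance(gray) < blur_threshold
            or lighting_asymmetry(gray) > lighting_threshold)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    face_crop = rng.uniform(0, 255, size=(128, 128))  # stand-in for a real face crop
    print(looks_suspicious(face_crop))
```

The weakness of such hand-crafted cues is exactly what the next wave of generative models exploits: as fakes get cleaner, these telltale artifacts fade.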
However, with the recent surge of generative AI models and consumer chatbots such as ChatGPT, deepfake technology has become more convincing and widely accessible. Hackers no longer require advanced technical skills. Michael Matias, CEO of deepfake detection start-up Clarity, notes: “More advanced AI models are being released in the open-source domain, making deepfakes more prevalent and pushing technology even further.” He warns that “the rise of easily accessible ‘killer apps’ empowers bad actors to generate super high-quality deepfakes quickly, easily, and at no cost.” This proliferation is already diminishing the effectiveness of some detection tools.
According to technology provider ExpressVPN, there are now millions of deepfakes online, up from fewer than 15,000 in 2019. In a survey by Regula, about 80% of companies reported that deepfakes—both voice and video—posed real threats to their operations. “Businesses need to view this as the next generation of cyber security concerns,” says Matthew Moynahan, CEO of authentication provider OneSpan. “We’ve pretty much solved the issues of confidentiality and availability; now it’s about authenticity.”
This is a pressing issue. A June report by Transmit Security found that AI-generated deepfakes can bypass biometric security, such as the facial recognition that protects customer accounts, and can be used to create counterfeit ID documents. Chatbots could also be programmed to mimic a trusted individual or a customer service representative, deceiving people into handing over personally identifiable information that can then be used in further attacks.
Only last year, the FBI reported an increase in complaints involving the use of deepfakes, alongside stolen personally identifiable information, to apply for jobs and remote work positions online. One way to combat this type of identity theft is through behavioral biometrics, says Haywood Talcove, CEO of LexisNexis Risk Solutions Government Group. The approach involves learning how a user typically interacts with a device, such as a smartphone or computer, and flagging any suspicious changes in that behavior.
“These behavioral biometric systems look for thousands of cues that someone might not be who they claim to be,” Talcove explains. For instance, if a user navigates a new part of a website with unusual familiarity and speed, it might indicate fraud. Start-ups such as BioCatch and larger companies such as LexisNexis are developing this technology to verify users continuously, in real time.
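As a rough illustration of the principle (not BioCatch’s or LexisNexis’s actual systems), the sketch below compares a handful of invented session features, such as keystroke timing and navigation speed, against a user’s learned baseline and flags large deviations. The features, numbers, and cut-off are all hypothetical.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class BehaviorBaseline:
    """Per-user statistics learned from past sessions (features are illustrative:
    keystroke interval in ms, seconds spent per page, mouse speed in px/s)."""
    mean: np.ndarray
    std: np.ndarray

def anomaly_score(session: np.ndarray, baseline: BehaviorBaseline) -> float:
    """Mean absolute z-score of the current session against the user's baseline.
    A high score means the session behaves unlike the account's usual owner."""
    z = (session - baseline.mean) / np.maximum(baseline.std, 1e-6)
    return float(np.abs(z).mean())

if __name__ == "__main__":
    baseline = BehaviorBaseline(mean=np.array([180.0, 25.0, 300.0]),
                                std=np.array([40.0, 8.0, 90.0]))
    # A session that races through unfamiliar pages far faster than this user ever has.
    session = np.array([60.0, 4.0, 900.0])
    score = anomaly_score(session, baseline)
    print(f"anomaly score: {score:.1f}",
          "-> flag for manual review" if score > 3.0 else "-> looks normal")
```

In production, such scores would feed into broader risk engines rather than trigger decisions on their own, but the underlying idea is the same: the fraudster may clone a face or a voice, yet rarely clones the victim’s habits.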
There is a risk of counter-attacks, though. “Traditional fraud detection systems often rely on rule-based algorithms or pattern-recognition techniques,” notes Transmit Security. “However, AI-powered fraudsters can employ deepfakes to evade these systems. By generating counterfeit data or manipulating patterns that AI models have learned from—a fraud technique known as adversarial attacks—fraudsters can trick algorithms into classifying fraudulent activities as legitimate.”
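The sketch below illustrates that idea in miniature, assuming a toy logistic-regression “fraud detector”: nudging input features along the model’s gradient lowers the fraud score enough to flip the decision. The weights, features, and threshold are invented for illustration; real adversarial attacks target far larger models.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fraud_probability(x: np.ndarray, w: np.ndarray, b: float) -> float:
    """Toy 'fraud detector': logistic regression over transaction features."""
    return float(sigmoid(x @ w + b))

def adversarial_nudge(x: np.ndarray, w: np.ndarray, b: float,
                      epsilon: float = 0.5) -> np.ndarray:
    """FGSM-style evasion sketch: shift each feature a little in the direction
    that lowers the fraud score. The gradient of the score w.r.t. x is p*(1-p)*w."""
    p = fraud_probability(x, w, b)
    grad = p * (1 - p) * w
    return x - epsilon * np.sign(grad)

if __name__ == "__main__":
    w = np.array([1.2, 0.8, 1.5])            # hypothetical learned weights
    b = -2.5
    transaction = np.array([1.0, 1.5, 1.2])  # features the model scores as fraudulent
    print("before:", round(fraud_probability(transaction, w, b), 2))  # ~0.85 -> blocked
    evasive = adversarial_nudge(transaction, w, b)
    print("after: ", round(fraud_probability(evasive, w, b), 2))      # ~0.49 -> waved through
```

The small, targeted change is invisible to a rule-based check but is enough to push the model’s output to the wrong side of its decision threshold, which is precisely what makes adversarial attacks so difficult to defend against.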