How Fraudsters Leverage AI and Deepfakes for Identity Fraud

Advancements in artificial intelligence (AI) and the proliferation of deepfakes have provided fraudsters with powerful tools to conduct identity fraud. Deepfakes, which are manipulated media created using AI algorithms, enable fraudsters to convincingly alter or fabricate images, videos and audio.

In this blog post, we will explore how fraudsters are leveraging AI and deepfakes to carry out identity fraud, highlighting the risks and challenges associated with these emerging techniques and how businesses can overcome these risks with an integrated identity security platform.

The rise of deepfakes

Although deepfakes are not new — the term was coined in 2017 in reference to face-swapping technology — they have become increasingly widespread over the past year as a tool for fraudsters to wage sophisticated attacks. With recent advances in generative AI, including large language models (LLMs) like ChatGPT, image generation applications like Midjourney, voice cloning tools like ElevenLabs and video deepfake software like FaceMagic, fraudsters no longer need advanced technical skills to create impressive deepfakes.

Generative AI software is intended for legitimate uses, like helping users with disabilities speak in their natural voice or improving business productivity through email and text creation, but its widespread availability and ease of use give even novice fraudsters the ability to wage sophisticated attacks that put user accounts at risk.

As a result, deepfake fraud is increasing at an alarming rate, with one-third of businesses already hit by video and audio deepfake attacks as of April 2023. To wage these attacks, fraudsters use a variety of tactics and techniques that leverage generative AI software to gain access to accounts and commit identity theft.

Deepfake and AI-assisted fraud tactics

Attacks on biometric systems

Biometric authentication lets users log in with their device’s native biometrics, such as fingerprint, face or iris scanning, combining inherence and device possession as multiple authentication factors. Because it is inherently multifactor and does not rely on knowledge-based credentials that fraudsters can easily obtain through data leaks, social engineering and other tactics, biometric authentication has long been considered one of the most secure login methods.

As a result of its stronger security, better user experience and the emergence of new technologies like passkeys that raise awareness of biometric authentication, businesses are increasingly adopting it. However, as adoption grows, biometric authentication becomes a bigger target for fraudsters as well, and deepfakes provide them with the means to circumvent this otherwise secure login method.
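To see how biometric login combines these factors in practice, consider a minimal sketch of a WebAuthn passkey registration in the browser. The relying party, user fields and inline challenge below are illustrative placeholders; in a real deployment the challenge is generated and verified server-side.

```typescript
// Minimal sketch: requesting a platform (biometric) credential via WebAuthn.
// All relying-party and user values are placeholders for illustration.
async function registerPasskey(): Promise<Credential | null> {
  const publicKey: PublicKeyCredentialCreationOptions = {
    // In production, fetch a random challenge from the server to prevent replay
    challenge: crypto.getRandomValues(new Uint8Array(32)),
    rp: { name: "Example Corp", id: "example.com" },
    user: {
      id: new TextEncoder().encode("user-1234"), // opaque, stable user handle
      name: "jane@example.com",
      displayName: "Jane Doe",
    },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
    authenticatorSelection: {
      authenticatorAttachment: "platform", // device's built-in authenticator
      userVerification: "required", // forces the biometric check (inherence)
      residentKey: "required", // discoverable credential, i.e. a passkey
    },
  };
  // Possession is proven by the device-bound private key; inherence by the
  // biometric gate the authenticator enforces before it will sign anything.
  return navigator.credentials.create({ publicKey });
}
```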

With the rise of deepfakes and generative AI, fraudsters can create synthetic biometric data, including facial features, to deceive biometric systems. This can enable unauthorized access to devices, secure areas or sensitive information, compromising the integrity of biometric-based identity verification.

Deepfake tactics to circumvent biometric authentication include:

  • Manipulating facial recognition systems: Facial recognition systems are widely used for identity verification, but they can be vulnerable to deepfake attacks. Fraudsters can use AI-generated deepfake images or videos to trick facial recognition algorithms into recognizing them as legitimate individuals. This can allow them to gain unauthorized access to accounts, bypass security measures or even gain entry into secure premises.
  • Exploiting voice cloning technology: Voice cloning, another application of AI, allows fraudsters to imitate someone’s voice with remarkable accuracy. By combining deepfake technology with voice cloning, fraudsters can gain access to user accounts that are protected with voice authentication and use it to authorize fraudulent transactions and conduct other malicious activities.
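A common countermeasure to both tactics is active liveness detection, in which the system issues a randomized challenge that a prerecorded or synthesized stream cannot anticipate. The sketch below shows only the verification logic; the `detect` callback stands in for a real computer-vision model and is an assumption for illustration, not a real API.

```typescript
// Hypothetical sketch of an active liveness check: the gesture sequence is
// randomized at request time, so a prerecorded deepfake cannot anticipate it.
type Gesture = "turn-left" | "turn-right" | "blink" | "smile";

const GESTURES: Gesture[] = ["turn-left", "turn-right", "blink", "smile"];

function issueChallenge(length = 3): Gesture[] {
  return Array.from(
    { length },
    () => GESTURES[Math.floor(Math.random() * GESTURES.length)]
  );
}

function verifyLiveness<Frame>(
  frames: Frame[],
  challenge: Gesture[],
  issuedAt: number,
  detect: (frame: Frame, expected: Gesture) => boolean // stand-in for a CV model
): boolean {
  // Stale responses suggest a replayed or injected stream
  if (Date.now() - issuedAt > 10_000) return false;
  // Every requested gesture must appear, in order, in the captured frames
  let next = 0;
  for (const frame of frames) {
    if (next < challenge.length && detect(frame, challenge[next])) next++;
  }
  return next === challenge.length;
}

// Toy usage: frames are plain labels here; a real detector would analyze video
const challenge = issueChallenge();
const passed = verifyLiveness(challenge.slice(), challenge, Date.now(),
  (frame, expected) => frame === expected);
console.log(passed); // true: every gesture was performed in order
```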

Advances in social engineering and phishing attacks

Social engineering has long been used by fraudsters as a method of getting users to give up sensitive information that can be used for identity theft. However, this risk is becoming even more pronounced with the rise of ChatGPT and other LLMs, which can generate human-sounding text, and audio and video AI tools that enable users to easily create deepfakes without any technical expertise. 

As a result, even novice fraudsters can wage sophisticated social engineering attacks with AI, using methods such as:

  • Realistic chatbots: AI-powered chatbots or virtual assistants can be programmed to mimic human interactions, making them valuable tools for fraudsters. They can engage in social engineering attacks by convincingly impersonating trusted individuals or customer service representatives. This enables fraudsters to manipulate victims into sharing personal information, passwords, or financial details, leading to identity theft or financial fraud.
  • Audio and video deepfakes: Another emerging tactic in social engineering is the use of voice cloning or deepfake videos that sound or look identical to a user’s trusted contacts. By impersonating family members, friends, bosses or financial advisors through cloned voices or videos, fraudsters can wage even more convincing scams that deceive individuals into giving up sensitive information or making transactions on the fraudster’s behalf.

Other fraud tactics that use generative AI

In addition to improving social engineering and circumventing biometric authentication systems, AI gives fraudsters the tools to commit other types of fraud and bypass even strong identity security measures, including identity verification systems. Some of these tactics include:

  1. Creating Authentic-Looking Identity Documents: One way fraudsters exploit AI and deepfakes is by creating counterfeit identity documents that appear genuine. With AI algorithms capable of generating highly realistic images, fraudsters can produce forged passports, driver’s licenses or other identification papers that pass visual inspections. These counterfeit documents can then be used to establish false identities and deceive identity verification systems.
  2. Impersonating Individuals with Deepfake Videos: Deepfake videos, which involve replacing a person’s face with someone else’s using AI algorithms, provide fraudsters with a powerful tool for impersonation. By using deepfake technology, fraudsters can create videos in which they appear to be someone else, potentially targeting individuals’ personal or professional relationships. In addition to social engineering, this technique can be used for financial fraud or even blackmail.
  3. Evading Fraud Detection Systems: Traditional fraud detection systems often rely on rule-based algorithms or pattern-recognition techniques. However, AI-powered fraudsters can employ deepfakes to evade these systems. By generating counterfeit data or manipulating patterns that AI models have learned from — a fraud technique known as adversarial attacks — fraudsters can trick algorithms into classifying fraudulent activities as legitimate. This poses challenges for fraud detection and increases the risk of undetected identity fraud.
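The third tactic, adversarial attacks, can be made concrete with a toy example: a linear fraud scorer whose inputs are nudged against the sign of its weights, a one-step, FGSM-style evasion. The weights, features and threshold below are invented for illustration; production models are far more complex, but the principle is the same.

```typescript
// Toy evasion attack on a linear fraud model. For a linear scorer, the
// gradient of the score with respect to each feature is proportional to that
// feature's weight, so stepping against sign(weight) is the most
// score-reducing change per unit of perturbation (the FGSM idea).
const sigmoid = (z: number): number => 1 / (1 + Math.exp(-z));

// Hypothetical trained weights: positive values push the score toward "fraud"
const weights = [1.8, -0.6, 2.3, 0.9];
const bias = -1.0;

const fraudScore = (x: number[]): number =>
  sigmoid(x.reduce((acc, xi, i) => acc + xi * weights[i], bias));

// Perturb each feature against the gradient to lower the fraud score
function evade(x: number[], epsilon: number): number[] {
  return x.map((xi, i) => xi - epsilon * Math.sign(weights[i]));
}

const flagged = [1.2, 0.4, 1.5, 0.8]; // made-up transaction features
console.log(fraudScore(flagged).toFixed(3));             // ~0.994, flagged as fraud
console.log(fraudScore(evade(flagged, 1.0)).toFixed(3)); // ~0.375, now "legitimate"
```

Real attacks must also keep perturbations small enough to preserve the fraudulent payload, but the example shows why detection models need adversarial robustness, not just accuracy.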

Mitigating the risk of deepfakes

To protect against the rising threat of deepfakes, businesses must employ stricter identity security measures that work together to protect individuals throughout the entire customer journey. However, stitching together risk signals can be complicated when they are scattered across multiple fraud tools, identity providers (IdPs) and customer databases.
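To illustrate that integration burden, here is a hypothetical sketch of the glue code a business might otherwise maintain to merge risk scores from separate vendors into a single decision. Every source name, weight and threshold below is invented for illustration.

```typescript
// Hypothetical glue code: normalizing and combining risk signals from
// separate tools into a single allow/challenge/deny decision.
interface RiskSignal {
  source: string; // e.g. "device-fingerprint", "doc-verification"
  score: number;  // normalized to 0 (safe) .. 1 (risky)
  weight: number; // how much this vendor's opinion counts
}

type Decision = "allow" | "challenge" | "deny";

function decide(signals: RiskSignal[]): Decision {
  const totalWeight = signals.reduce((acc, s) => acc + s.weight, 0);
  const combined =
    signals.reduce((acc, s) => acc + s.score * s.weight, 0) / totalWeight;
  if (combined < 0.3) return "allow";
  if (combined < 0.7) return "challenge"; // step up, e.g. re-verify biometrics
  return "deny";
}

console.log(decide([
  { source: "device-fingerprint", score: 0.1, weight: 1 },
  { source: "behavioral-biometrics", score: 0.8, weight: 2 },
  { source: "doc-verification", score: 0.4, weight: 1.5 },
])); // "challenge"
```

Each new vendor means another schema to normalize and another threshold to tune, which is why consolidating these signals in one place reduces both cost and blind spots.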

Transmit Security uses AI to detect adversarial attacks and other AI-based fraud tactics and provides a natively integrated platform of identity services that simplifies identity security. With it, businesses can strengthen their protection against deepfakes and consolidate identity security across the entire customer journey.

Conclusion

AI and deepfakes have given fraudsters unprecedented tools to conduct identity fraud, and with the rise of generative AI that is accessible to consumers and easy to use, these attacks pose a bigger threat than ever. These risks are significant and growing, and they require organizations, individuals and security professionals to remain vigilant and adapt their strategies accordingly.

Strengthening identity verification processes, educating users about the risks and employing advanced detection technologies are essential in combating the evolving threats of AI-powered identity fraud. By staying informed and proactive, we can strive to stay one step ahead of fraudsters and protect ourselves from these emerging risks.

To find out more about how to protect customers from deepfakes using the Transmit Security Platform, check out our platform service brief or contact Sales to schedule a demo.

Authors

  • Nimrod Margalit, Transmit Security Sr. Product Manager
  • Rachel Kempf, Senior Technical Copywriter

    Rachel Kempf is a Senior Technical Copywriter at Transmit Security who works closely with the Product Management team to create highly technical, narratively compelling assets for customers and prospects. Prior to joining the team at Transmit Security, she worked as Senior Technical Copywriter and Editor-in-Chief for Azion Technologies, a global edge computing company, and wrote and edited blog posts and third-party research reports for Bizety, a research and consulting company in the CDN industry.