Robinson Cole LLP
March 13, 2026 - Article

The Rise of Deepfakes: What to Know and How to Prepare

Deepfakes, synthetic media created with artificial intelligence and machine learning, are an increasing threat to organizations. According to Britannica.com, a deepfake is “synthetic media, including images, videos or audio, generated by artificial intelligence (AI) technology that portrays something that does not exist in reality or events that have never occurred.”

Keepnet estimates that deepfakes “are growing at an alarming rate,” from 500,000 in 2023 to approximately 8 million in 2025. More sobering is the prediction that deepfake content is “projected to increase by 900% annually.” This surge in deepfake content has been accompanied by a rise in phishing and fraud incidents against companies. Attempts to defraud companies with deepfake content increased 2,137% over the past three years, including deepfake “spear phishing attacks” against contact centers, in which threat actors call a contact center in a vishing scheme, impersonate an employee, and obtain information from the call center representative to change credentials so the threat actor can access the real employee’s account. Deepfake fraud rose 162%, voice deepfakes rose 680%, and contact center fraud accounted for $44.5 billion in losses in 2025. Deloitte estimates that fraud losses in the United States “facilitated by generative AI are projected to climb…to $40 billion by 2027, with a compound annual growth rate of 32%.”

Compounding the issue is that individuals have a difficult time detecting deepfakes. Although 60% of people believe they can spot a deepfake, the actual rate of detection is closer to 24.5%. Nearly three-quarters of people don’t believe they can tell a cloned voice from a real one.

All of these statistics point to an indisputable conclusion: deepfakes are here to stay. Threat actors now use them the way they have long used email phishing schemes (phishing), SMS text schemes (smishing), voice phishing (vishing), and QR code phishing (quishing). What distinguishes deepfakes from these more traditional methods of fraud is the use of a cloned face, voice, or image to trick the target into believing that a request is legitimate and has been authenticated. Deepfakes have become such a risk to organizations that the Department of Homeland Security published a report entitled “Increasing Threat of Deepfake Identities,” which provides a useful and easy-to-understand history of the technology and its use, and the “inherent risk of deepfakes by malign actors.” The report states, “Deepfakes and the misuse of synthetic content pose a clear, present, and evolving threat to the public across national security, law enforcement, financial, and societal domains.” These risks include the use of deepfakes by nation states, the use of deepfakes for non-consensual pornography, the use of synthetic content to carry out fraud schemes, and the “susceptibility of the public to believe what they see….As a result, we expect an emerging threat landscape wherein the attacks will become easier and more successful, and the efforts to counter and mitigate these threats will need orchestration and collaboration by governments, industry, and society.”

What are the Risks of Deepfakes to Companies?

Companies have expended vast amounts of time and resources to train employees on cybersecurity schemes, including phishing, smishing, vishing, and quishing. A basic tenet of combating these schemes is to train employees to spot a scheme and not trust anything that comes via email or text. What makes deepfakes so effective is that the threat actor removes that suspicion and skepticism by using a familiar voice or face to convince the target that the request is legitimate and comes from a trusted colleague or executive. The scheme may start with an email requesting a change in wiring instructions or a wire transfer. The target will follow company protocol to authenticate the request and ask for a telephone conference or video call with the individual who has requested or can approve the transaction. The threat actor provides the telephone number or sets up a video call with the target and uses deepfake technology, including voice and face cloning, to impersonate the individual authorized to approve the transaction. Because the target “sees” and “hears” the executive or colleague approve the transaction, the target believes the person is real, and the transaction proceeds.

In general, employees are unsuspecting individuals who are trying to do their jobs and may not anticipate that fraudsters are lurking around every corner. Threat actors exploit this trusting nature, and companies must address it to combat the exploding incidence of deepfake schemes and the billions of dollars in resulting losses.

Tips to Help Prevent Fraud from Deepfakes

  1. Educate employees on what deepfake schemes are and how to detect them.
  2. Consider showing employees, through a demonstration, how a deepfake is made (it’s very effective to use an executive as a guinea pig) and deployed.
  3. Educate employees about how the content they share online can be used to create a deepfake; provide tips to adjust their social media privacy settings to restrict access to photos and videos; and show them how content can be harvested to train deepfake models.
  4. Implement multi-factor authentication and authenticator apps on all critical applications.
  5. Train call center personnel on how deepfakes can be used to impersonate an employee to change credentials.
  6. Put detailed processes in place for the transfer of high-value funds, including multiple layers of approval and authentication, such as in-person meetings.
  7. Transition away from using voice recognition as a primary authentication practice.
  8. Embed difficult security questions into the authentication process, even when voice and facial recognition are being used. The security questions should elicit answers that are not readily available online.
  9. Consider implementing deepfake recognition tools, including detection systems to verify that a person is physically present, 3D depth sensing, multi-angle face scans, and voice authentication tools.
  10. Instill a healthy dose of skepticism into the organization to enhance prior education and training so employees will be less susceptible to a deepfake scheme.

Deepfake technology is developing at a rapid rate; it is becoming easier to use and more effective at carrying out fraud schemes. The use of deepfakes by threat actors has exploded in the past year and is expected to increase exponentially, making incidents more complex and difficult to detect. Identifying the risk that deepfakes pose to your organization, providing your employees with tools to identify, respond to, and mitigate a deepfake fraud scheme, and considering detection and behavior-monitoring tools will help prevent your organization from being victimized.

Linn Freedman is chair of the Data Privacy + Cybersecurity and AI Teams at Robinson+Cole, LLP. Freedman focuses her practice on compliance with all state and federal data privacy and security laws and regulations, as well as emergency data breach response, mitigation and litigation.

Reprinted with permission from the March 13, 2026 edition of Corporate Counsel. © 2026 ALM Global Properties, LLC. All rights reserved. Further duplication without permission is prohibited; contact 877-256-2472 or asset-and-logo-licensing@alm.com.