Robinson+Cole LLP

Data Privacy + Cybersecurity

Data privacy and cybersecurity increasingly affect all businesses and industries. To handle this complex and rapidly changing area of law, our Data Privacy + Cybersecurity practice group collaborates with lawyers throughout Robinson+Cole’s diverse practice areas.

Each member of our highly experienced team understands the spectrum of challenges businesses may face with evolving digital technologies. We are dedicated to helping you achieve success, providing you with the right resources to match your specific business needs.

Our Services

Our clients include public and private companies in all industries, including:

  • Software companies
  • Companies with websites and mobile apps
  • Health care providers and hospital systems
  • Retail and marketing companies
  • Higher education providers
  • Start-up companies
  • Tax-exempt organizations
  • Utilities, manufacturing, energy, and wireless telecommunications service providers

Our team regularly works with federal and state data privacy and security rules and regulations, including:

  • California Consumer Privacy Act (CCPA), California Privacy Rights Act (CPRA), and their implementing regulations, as well as other state data privacy laws and emerging privacy regulations
  • Laws and regulations applicable to tracking technology and pixels
  • Children's Online Privacy Protection Act (COPPA)
  • Controlling the Assault of Non-Solicited Pornography and Marketing Act (CAN-SPAM Act)
  • European Union (EU) General Data Protection Regulation (GDPR) and revised Standard Contractual Clauses (SCCs)
  • Fair Credit Reporting Act (FCRA)
  • Family Educational Rights and Privacy Act (FERPA)
  • Federal Aviation Administration’s (FAA) Small Unmanned Aircraft Systems (UAS) regulations (Part 107), and state and local laws related to the use of UAS and privacy concerns
  • Federal Trade Commission Act (FTC Act)
  • FTC's Telemarketing Sales Rule (TSR)
  • Gramm-Leach-Bliley Act (GLBA)
  • Health Insurance Portability and Accountability Act (HIPAA)
  • New York Department of Financial Services Cybersecurity Regulations
  • Consumer protection enforcement actions by state Attorneys General or federal agencies, including the Office for Civil Rights (OCR) and the Federal Trade Commission
  • SEC cybersecurity regulations
  • State data security laws and regulations, including implementation of statutorily required Written Information Security Programs
  • State and federal data privacy and security laws and regulations related to employee and workplace privacy
  • State-specific biometric information privacy laws and regulations
  • Telephone Consumer Protection Act (TCPA)
  • Video Privacy Protection Act (VPPA)

Our lawyers are knowledgeable about data collection technology, including the use of tracking technology like cookies and pixels for targeted advertising and behavioral advertising. We also understand the value and risks of collecting and using data for marketing and strategic purposes.

The team has a significant HIPAA compliance practice and assists covered entities and business associates in navigating the intricacies of HIPAA, guidance from the OCR, and OCR enforcement actions. We have extensive experience with HIPAA data breach response and its statutory requirements.

Our Financial Services Cyber-Compliance team helps our banking, insurance, and financial services clients address a wide range of issues, including implementation of enterprise-wide cybersecurity programs and adoption of required written cybersecurity policies and procedures to comply with state and federal laws.

Our Team

Our team is well-versed in incident and data breach response, mitigation, remediation, coordination, and litigation, including investigations by the HHS Office for Civil Rights and state Attorneys General (AGs). We coordinate forensic investigations and mitigation tactics in the event of ransomware or other cyberattacks.

Our attorneys advise clients on data mapping, development of enterprise-wide privacy and security plans, and compliance with privacy requirements and industry-specific regulations. We also advise on the sharing and transfer of collected data and assist with strategies to minimize risk associated with the collection, use, and disclosure of data. We regularly structure arrangements relating to data transfer and prepare technology contracts and information security addenda that outline appropriate protection obligations for the sharing and care of customer and patient data.

We promote practices, policies, and security programs to safeguard data against accidental or deliberate disclosure. We provide tailored education programs for employees, executives, and boards. We have conducted dozens of cybersecurity tabletop exercises, which are designed to simulate a live cybersecurity event and the response to it.

Our lawyers also work with clients to develop website and mobile app privacy policies and terms and conditions of use, and social media policies, practices, and procedures.

Our Robinson+Cole team members also author the Data Privacy + Cybersecurity Insider blog, providing clients with timely, thoughtful, and cutting-edge legal news and perspectives about data privacy and cybersecurity issues. The widely recognized blog has received multiple Readers' Choice Award distinctions from JD Supra and has been featured in FeedSpot's "100 Best Infosec Blogs and Websites in 2025."

We actively speak at industry-sponsored programs on data privacy and cybersecurity developments, cases, trends, and agendas. We proactively track updates to federal and state privacy and security laws and proposals.

Our Data Privacy + Cybersecurity team is here to help you navigate the ever-evolving complexities of nationwide laws and regulations, providing skilled legal services for businesses in the digital sphere.

Data Privacy + Cybersecurity Insider


Phishing Now Top Method for Initial Unauthorized Network Access

According to Cisco Talos researchers, phishing is the primary method threat actors use to gain unauthorized access to networks, accounting for more than one-third of all incidents in the first quarter of 2026. This increase is attributed to threat actors using legitimate AI tools to enhance phishing campaigns, particularly against health care and government sectors.... Continue Reading

Visit Blog

SCOTUS Hears the Next Big Fourth Amendment Fight Over Digital Location Data

Earlier this year, the Pennsylvania Supreme Court held that users generally lack a reasonable expectation of privacy in unprotected Google search records, underscoring how aggressively some courts are still applying third-party doctrine principles to digital data. Commonwealth v. Kurtz, 348 A.3d 133 (Pa. 2025) (our previous blog post on Kurtz is available here). The question... Continue Reading

Visit Blog

CCPA Employee Data Rulemaking Could Reshape Employer Privacy Compliance in California

The California Consumer Privacy Act (CCPA) continues to stand apart as the only comprehensive state privacy law in the U.S. that applies to personal information relating to employees, job applicants, and independent contractors. Since that coverage expanded in January 2023, many employers have had to navigate the difficult task of applying a consumer privacy framework... Continue Reading

Visit Blog

Tempus AI Faces Class Action Cases for Collection of Genetic Information in Acquisition

Multiple class action cases have been filed against Tempus AI alleging that, during its acquisition of Ambry Genetics, the company improperly collected and disclosed genetic information without obtaining prior written consent from individuals. Tempus acquired Ambry, a genetic testing firm, in February 2025 for $600 million. The acquisition included the... Continue Reading

Visit Blog

EU AI Act Update: Omnibus Talks Stall, but Clock Is Still Ticking

Talks between European Union legislators broke down on Wednesday as they tried to agree on proposed amendments to the EU AI Act. At the center of the debate is the Digital Omnibus on AI, first introduced in November 2025, which would delay several key compliance deadlines under the Act. If approved, the Digital Omnibus would... Continue Reading

Visit Blog

What Legal AI Is Really Changing in Law Firm Economics

Legal commentary on artificial intelligence in law practice often focuses on speed: drafts that once took days can now be produced in hours, and research that once took hours can now be narrowed in minutes. Those gains are real, but they do not resolve the more important operational questions. Many firms still don’t know whether... Continue Reading

Visit Blog

Privacy Tip #489 – Social Media Scams #1 in 2025

The Federal Trade Commission (FTC) recently reported that, in 2025, social media scams were the costliest of all scams against consumers, with a whopping $2.1 billion lost. Thirty percent of those who reported losing funds in 2025 indicated that the scam started over social media. The number of 2025 scams beginning on social media increased... Continue Reading

Visit Blog

DOJ’s Big Win in North Korean IT Worker Fraud Scheme

On April 15, 2026, the Department of Justice (DOJ) announced that two U.S. nationals, Kejia Wang and Zhenxing Wang, were sentenced for facilitating a North Korean IT worker scheme that compromised over 80 U.S. identities, with sentences of 108 and 92 months respectively, supervised release, and forfeiture orders. The scheme involved the defendants operating “laptop... Continue Reading

Visit Blog

California’s DROP Regime Will Change the Data Broker Risk Equation

California’s new Delete Request and Opt-Out Platform (DROP) goes live on August 1, 2026, and the compliance stakes are enormous. State officials have warned that a single missed deletion cycle could create theoretical penalty exposure of $1.5 billion for one data broker. That number reflects how aggressively the Delete Act is designed to work. One consumer request can... Continue Reading

Visit Blog

OpenAI’s New Privacy Filter: A Development with Limits

On April 22, 2026, OpenAI released its new Privacy Filter tool, designed to identify and mask sensitive information in text before that text is stored, shared, or used in downstream processing. OpenAI says the tool can detect items such as names, addresses, account numbers, private dates, and other personal data in documents, logs, and datasets... Continue Reading

Visit Blog

Legal AI Delivers More Value When It Is Tied to Business Outcomes

As corporate legal departments continue adopting AI, the conversation is shifting from experimentation to strategy. According to the Thomson Reuters Institute’s 2026 State of the Corporate Law Department Report, nearly half of legal departments now report department-wide AI adoption, and technology has become a top strategic priority for many general counsel. That momentum matters, but adoption... Continue Reading

Visit Blog

Privacy Tip #488 – Account Change Phishing Alerts from “Apple” Are Tricking Users

A new, yet old, scheme has been quite successful, and users should beware. If you get an account change message from Apple, be on high alert that it is fake and malicious. According to Bleeping Computer, the scheme involves a threat actor using an Apple support email (e.g., appleid@id.apple.com) to send phishing emails to unsuspecting... Continue Reading

Visit Blog

Social Engineering Schemes Target C-Suite Executives

March was a busy month for former Black Basta affiliates who are using old social engineering techniques to target executives in the manufacturing, professional, scientific, and technical services industries. According to Reliaquest, the activity of the threat actors indicates that these sectors “were likely direct targets.” According to its report, “Attackers are using automation to... Continue Reading

Visit Blog

Click to Join, Hard to Leave: FTC Reopens Negative Option Rulemaking

On March 11, 2026, the Federal Trade Commission (FTC) announced an Advance Notice of Proposed Rulemaking (ANPRM) highlighting its Rule Concerning the Use of Prenotification Negative Option Plans, seeking comment on whether the rule should be amended or supplemented to better address deceptive or unfair negative option practices. The FTC describes negative options as marketing... Continue Reading

Visit Blog

CNN Must Defend Privacy Suit Alleging Data Sharing with Microsoft and Adtech Firms 

A federal judge has ruled that CNN must face a proposed class action alleging that its website shared consumers’ personal information with Microsoft and adtech firms without consent, in alleged violation of the California Invasion of Privacy Act (CIPA). The lawsuit challenges CNN’s alleged use of online tracking tools and the downstream sharing of data in the digital advertising ecosystem.  According... Continue Reading

Visit Blog

Experience


Software + Technology Contract Negotiations

Represented multiple companies in the negotiation of software and technology contracts with third-party vendors.

Start-Up Policy Development

Worked with multiple start-up organizations in developing privacy policies and terms of use for websites and mobile applications, as well as privacy and security plans and compliance programs.

Data Breach Assistance

Assisted dozens of organizations with reportable data breaches, including notification, mitigation, and regulatory enforcement, as well as class action defense.



News


April 17, 2026

Kathryn Rattigan Joins the Beta Gamma Sigma Society as Honorary Inductee

Data Privacy + Cybersecurity team partner Kathryn Rattigan was invited to join the Beta Gamma Sigma (BGS) Society as an honorary inductee at the Leo J. Meehan School of Business at Stonehill College. Her honorary membership reflects her exceptional leadership skills, service to the legal profession, and impact on the business community. BGS is the international business honor society for AACSB-accredited schools, which represent the top 5% of business schools in the world, and its membership comprises individuals serving in critical leadership roles in corporate, entrepreneurial, government, non-profit, and academic sectors. In a ceremony on April 16, 2026, in Easton, Massachusetts, Kathryn provided brief remarks while accepting her invitation.

Beta Gamma Sigma Society
April 15, 2026

Robinson+Cole Presented with 2026 Law Firm Excellence in Innovation Award

Massachusetts Lawyers Weekly
March 19, 2026

Roma Patel Authors Article on Secondary Liability and AI

The Licensing Journal
March 18, 2026

Linn Freedman Sounds the Alarm About the Growth of Deepfake Content

Corporate Counsel
March 16, 2026

Kathryn Rattigan Quoted on Disney CCPA Opt-Out Settlement

Cybersecurity Law Report
February 25, 2026

Data Privacy + Cybersecurity Team Receives 2026 Readers' Choice Awards

JD Supra
February 19, 2026

Linn Freedman Receives Global Ranking in Chambers Global Guide 2026

Chambers & Partners
February 5, 2026

Linn Freedman Quoted in Cybersecurity Law Report on FTC Settlement

Cybersecurity Law Report
November 26, 2025

Linn Freedman Discusses AI in Education

Law 401 Podcast

Events


Past

Managing Matter Mobility - Setting Defensible Rules for Data Leaving or Entering the Firm

Mar 9 2026
Law.com Legalweek 2026
Past

Mastery of IG: Legal and Regulatory

Feb 19 2026
ARMA IG Mastery Session 4
Past

State AI Laws and the Federal EO: Effective Dates, Scope, Enforcement, Compliance Planning

Jan 27 2026
Barbri Webinar
Past

Deepfakes: A Demonstration of How They are Made and Used by Threat Actors

Nov 19 2025
Boston Bar Association 2025 Privacy, Cybersecurity & Digital Law Conference
Past

Fireside Chat | The Cyber Brief: Law, Liability & Response

Sep 19 2025
SCG Legal 2025 Annual Meeting
Past

CISO ExecNet National Symposium

Jul 29 2025
Chicago, IL

Publications


April 30, 2026

Data Privacy + Cybersecurity Insider

April 23, 2026

Data Privacy + Cybersecurity Insider

April 16, 2026

Data Privacy + Cybersecurity Insider

April 9, 2026

Data Privacy + Cybersecurity Insider

March 26, 2026

Data Privacy + Cybersecurity Insider

March 19, 2026

Data Privacy + Cybersecurity Insider

March 2026

Copy That: Secondary Liability in the Age of AI

The Licensing Journal

A republication of her Data Privacy + Cybersecurity Insider blog post, Roma's article explains that AI-related intellectual property risk is not limited to end users but can extend to the companies that develop, market, or deploy AI tools if those tools appear to encourage infringement, and it discusses how companies can best protect themselves from litigation.

March 13, 2026

The Rise of Deepfakes: What to Know and How to Prepare

Corporate Counsel

Deepfakes, images that are formed through the use of synthetic media, including artificial intelligence and machine learning, are increasingly becoming a threat to organizations. According to Britannica.com, a deepfake is “synthetic media, including images, videos or audio, generated by artificial intelligence (AI) technology that portrays something that does not exist in reality or events that have never occurred.” It is estimated by Keepnet that deepfakes “are growing at an alarming rate,” from 500,000 in 2023 to approximately 8 million in 2025. More sobering is the prediction that deepfake content is “projected to increase by 900% annually.”

This increase in deepfake content has been accompanied by a corresponding increase in phishing and fraud incidents against companies. Attempts to defraud companies with deepfake content increased 2,137% over the past three years, including deepfake “spear phishing attacks” against contact centers (when threat actors call a contact center in a vishing scheme, impersonating an employee to obtain information from the call center representative to change credentials so the threat actor can obtain access to the real employee’s account). Deepfake fraud rose 162%, voice deepfakes rose 680%, and contact center fraud accounted for $44.5 billion lost in 2025. Deloitte estimates that fraud losses in the United States “facilitated by generative AI are projected to climb…to $40 billion by 2027, with a compound annual growth rate of 32%.”

Compounding the issue is that individuals have a difficult time detecting deepfakes. Although 60% of people believe they can spot a deepfake, the actual rate of detection is closer to 24.5%. Nearly three-quarters of people don’t believe they can tell a cloned voice from a real one. All of these statistics point to an indisputable conclusion: deepfakes are here to stay. They are now being used by threat actors the way email phishing schemes (phishing), SMS text schemes (smishing), voice phishing (vishing), and QR code (quishing) attacks have been used in the past. What distinguishes deepfakes from these more traditional methods of fraud is the use of facial, voice, or image recognition to trick the user into believing that the request is legitimate, with confirmed authentication.

Deepfakes have become such a risk to organizations that the Department of Homeland Security published a report entitled “Increasing Threat of Deepfake Identities,” which provides a useful and easy-to-understand history of the technology and its use, and the “inherent risk of deepfakes by malign actors.” The report states, “Deepfakes and the misuse of synthetic content pose a clear, present, and evolving threat to the public across national security, law enforcement, financial, and societal domains.” These risks include deepfakes being used by nation states, the use of deepfakes for non-consensual pornography, the use of synthetic content to carry out fraud schemes, and the “susceptibility of the public to believe what they see….As a result, we expect an emerging threat landscape wherein the attacks will become easier and more successful, and the efforts to counter and mitigate these threats will need orchestration and collaboration by governments, industry, and society.”

What Are the Risks of Deepfakes to Companies?

Companies have expended vast amounts of time and resources to train employees on cybersecurity schemes, including phishing, smishing, vishing, and quishing. A basic tenet to combat these schemes is to train employees on how to spot a scheme and not trust anything that comes via email or text. What makes deepfakes so effective is that the threat actor has taken the suspicion and skepticism out of the mix by using a real voice or face to convince the target that the request is legitimate and coming from a trusted colleague or executive.

The scheme may start with an email requesting a change in wiring instructions or a wire transfer. The target will follow company protocol to authenticate the request and ask for a telephone conference or video call with the individual who has requested or can approve the transaction. The threat actor provides the telephone number or sets up a video call with the target and is able to use deepfake technology, including voice and facial recognition technology, to impersonate the individual authorized to approve the transaction. Since the target is “seeing” and “hearing” the executive or colleague’s approval of the transaction, the target believes the person is real and the transaction proceeds.

In general, employees are unsuspecting individuals who are trying to do their job and may not anticipate that fraudsters are lurking around every corner. This trusting nature is causing companies to become victims of fraud and must be addressed to combat the exploding incidents and billions of dollars in losses from deepfake schemes.

Tips to Help Prevent Fraud from Deepfakes

  • Educate employees on what deepfake schemes are and how to detect them. Consider showing employees, through a demonstration, how a deepfake is made (it’s very effective to use an executive as a guinea pig) and deployed.
  • Educate employees about how the content they share online can be used to create a deepfake; provide tips to adjust their social media privacy settings to restrict access to photos and videos; and show them how content can be harvested to train deepfake models.
  • Implement multi-factor authentication and authenticator apps on all critical applications.
  • Train call center personnel on how deepfakes can be used to impersonate an employee to change credentials.
  • Put detailed processes in place for the transmission of high-value funds, including multiple layers of approvals and authentication, including in-person meetings.
  • Transition away from using voice recognition as a primary authentication practice.
  • Embed difficult security questions into the authentication process, even when voice and facial recognition is being used. The security questions should elicit an answer that is not readily available online.
  • Consider implementing deepfake recognition tools, including detection systems to verify that a person is physically present, 3D depth sensing, multi-angle face scans, and voice authentication tools.
  • Instill a healthy dose of skepticism into the organization to enhance prior education and training so employees will be less susceptible to a deepfake scheme.

Deepfake technology is developing at a rapid rate. It is becoming easier to use and more effective at carrying out fraud schemes. The use of deepfakes by threat actors and fraudsters has exploded in the past year, is expected to increase exponentially, and makes incidents more complex and difficult to detect. Identifying the risk that deepfakes pose to your organization, providing your employees with tools to identify, respond to, and mitigate a deepfake fraud scheme, and considering implementing detection and behavior monitoring tools will help prevent your organization from being victimized.

Linn Freedman is chair of the Data Privacy + Cybersecurity and AI Teams at Robinson+Cole LLP. Freedman focuses her practice on compliance with all state and federal data privacy and security laws and regulations, as well as emergency data breach response, mitigation, and litigation.

Reprinted with permission from the March 13, 2026 edition of Corporate Counsel. © 2026 ALM Global Properties, LLC. All rights reserved. Further duplication without permission is prohibited; contact 877-256-2472 or asset-and-logo-licensing@alm.com.

March 12, 2026

Data Privacy + Cybersecurity Insider


