Robinson Cole LLP

Artificial Intelligence

Artificial intelligence (AI) is reshaping industries worldwide, creating opportunities as well as challenges involving privacy, security, intellectual property, bias, discrimination, and other fundamental rights. Robinson+Cole’s multidisciplinary AI team combines legal, regulatory, and business services to help clients navigate the complexities of AI law and address the evolving challenges it presents.

Our Services

Our team draws on deep knowledge and experience across litigation, corporate, health care, privacy and cybersecurity, antitrust, employment, intellectual property, energy, environmental, and transactional practices. By working collaboratively, we help clients navigate the legal and business complexities of developing and using AI in their organizations. In addition, our business professionals partner closely with our legal team to deliver tailored strategies for turning information into a valuable business asset that advances clients’ objectives.

Our lawyers stay at the forefront of AI development, use, and risk, guiding clients in building governance programs, evaluating AI tools, negotiating vendor contracts, and implementing AI platforms. We track evolving regulations to keep our clients informed and compliant, including developments involving:

  • Federal Trade Commission (FTC)
  • Equal Employment Opportunity Commission (EEOC)
  • European Union's Artificial Intelligence Act, Regulation (EU) 2024/1689 (EU AI Act)
  • Professional Ethical Standards and Regulation
  • Rapidly evolving state laws and regulations applicable to the development, deployment, and use of AI technology, including the California Privacy Rights Act, the Colorado Protections for Artificial Intelligence Act, and the Texas Responsible AI Governance Act, among others as they are implemented

Specifically, we can provide the following to our clients as part of an enterprise-wide AI governance program:

  • Mapping of AI use
  • Policies and procedures for design (as applicable), development (as applicable), and use of AI
  • Development of AI technology
  • Establishment of a cross-functional governance committee
  • Code of conduct, revisions to employee handbook, or other employee-facing policies
  • Training
  • Risk assessments
  • Vendor management and technology contracts
  • State law survey
  • Incident response plan and table-top exercises
  • Compliance with laws, regulations, and guidance

Data Privacy + Cybersecurity

Robinson+Cole offers a comprehensive suite of data privacy, cybersecurity, and health care compliance services, including those addressing the unique challenges posed by AI. As AI technology becomes increasingly susceptible to phishing scams, cyberattacks, and ransomware, our extensive knowledge of governance and related federal and state regulatory requirements allows us to provide clients with practical, actionable guidance to mitigate risk and maintain compliance.

Intellectual Property

We are also well-versed in the risks AI poses under intellectual property, copyright, and antitrust law. Our team includes lawyers and business professionals with extensive experience handling patent applications across diverse technologies. We recognize how AI can infringe existing intellectual property rights and are committed to helping clients safeguard their interests in this complex and continually shifting environment.

Our Team

Our Artificial Intelligence team is dedicated to understanding this complex and evolving technology and the many scenarios in which it may affect our clients. As active thought leaders, authors, and presenters, we consistently offer timely, thoughtful analysis on a wide range of AI-related topics.

Publications


April 30, 2026
Data Privacy + Cybersecurity Insider

April 23, 2026
Data Privacy + Cybersecurity Insider

April 16, 2026
Data Privacy + Cybersecurity Insider

April 9, 2026
Data Privacy + Cybersecurity Insider

March 19, 2026
Data Privacy + Cybersecurity Insider

March 2026

Copy That: Secondary Liability in the Age of AI

The Licensing Journal

A republication of her Data Privacy + Cybersecurity Insider blog post, Roma's article explains that AI-related intellectual property risk is not limited to end users but can extend to the companies that develop, market, or deploy AI tools if those tools appear to encourage infringement, and discusses how companies can best protect themselves from litigation.

March 13, 2026

The Rise of Deepfakes: What to Know and How to Prepare

Corporate Counsel

Deepfakes, synthetic media created with artificial intelligence and machine learning, are an increasingly serious threat to organizations. According to Britannica.com, a deepfake is “synthetic media, including images, videos or audio, generated by artificial intelligence (AI) technology that portrays something that does not exist in reality or events that have never occurred.” Keepnet estimates that deepfakes “are growing at an alarming rate,” from 500,000 in 2023 to approximately 8 million in 2025. More sobering is the prediction that deepfake content is “projected to increase by 900% annually.” This growth in deepfake content has been accompanied by a corresponding increase in phishing and fraud incidents against companies. Attempts to defraud companies with deepfake content increased 2,137% over the past three years, including deepfake “spear phishing attacks” against contact centers (when threat actors call a contact center in a vishing scheme, impersonating an employee to obtain information from the call center representative to change credentials so the threat actor can obtain access to the real employee’s account). Deepfake fraud rose 162%, voice deepfakes rose 680%, and contact center fraud accounted for $44.5 billion lost in 2025. Deloitte estimates that fraud losses in the United States “facilitated by generative AI are projected to climb…to $40 billion by 2027, with a compound annual growth rate of 32%.” Compounding the issue is that individuals have a difficult time detecting deepfakes. Although 60% of people believe they can spot a deepfake, the actual rate of detection is closer to 24.5%. Nearly three-quarters of people don’t believe they can tell a cloned voice from a real one. All of these statistics point to an indisputable conclusion: deepfakes are here to stay.
Deepfakes are now being used by threat actors the way email phishing (phishing), SMS text (smishing), voice phishing (vishing), and QR code (quishing) attacks have been used in the past. What distinguishes deepfakes from these more traditional methods of fraud is the use of facial, voice, or image recognition to trick the target into believing that the request is legitimate, with confirmed authentication. Deepfakes have become such a risk to organizations that the Department of Homeland Security published a report entitled “Increasing Threat of Deepfake Identities,” which provides a useful and easy-to-understand history of the technology and its use, and the “inherent risk of deepfakes by malign actors.” The report states, “Deepfakes and the misuse of synthetic content pose a clear, present, and evolving threat to the public across national security, law enforcement, financial, and societal domains.” These risks include deepfakes being used by nation states, the use of deepfakes for non-consensual pornography, the use of synthetic content to carry out fraud schemes, and the “susceptibility of the public to believe what they see….As a result, we expect an emerging threat landscape wherein the attacks will become easier and more successful, and the efforts to counter and mitigate these threats will need orchestration and collaboration by governments, industry, and society.”

What Are the Risks of Deepfakes to Companies?

Companies have expended vast amounts of time and resources to train employees on cybersecurity schemes, including phishing, smishing, vishing, and quishing. A basic tenet of combating these schemes is to train employees to spot a scheme and not trust anything that comes via email or text. What makes deepfakes so effective is that the threat actor has taken the suspicion and skepticism out of the mix by using a real voice or face to convince the target that the request is legitimate and coming from a trusted colleague or executive.
The scheme may start with an email requesting a change in wiring instructions or a wire transfer. The target follows company protocol to authenticate the request and asks for a telephone conference or video call with the individual who has requested or can approve the transaction. The threat actor provides the telephone number or sets up a video call with the target and uses deepfake technology, including voice and facial impersonation, to pose as the individual authorized to approve the transaction. Since the target is “seeing” and “hearing” the executive or colleague’s approval of the transaction, the target believes the person is real and the transaction proceeds. In general, employees are unsuspecting individuals who are trying to do their jobs and may not anticipate that fraudsters are lurking around every corner. This trusting nature is causing companies to become victims of fraud and must be addressed to combat the exploding incidents and billions of dollars in losses from deepfake schemes.

Tips to Help Prevent Fraud from Deepfakes

  • Educate employees on what deepfake schemes are and how to detect them. Consider showing employees, through a demonstration, how a deepfake is made (it’s very effective to use an executive as a guinea pig) and deployed.
  • Educate employees about how the content they share online can be used to create a deepfake; provide tips to adjust their social media privacy settings to restrict access to photos and videos; and show them how content can be harvested to train deepfake models.
  • Implement multi-factor authentication and authenticator apps on all critical applications.
  • Train call center personnel on how deepfakes can be used to impersonate an employee to change credentials.
  • Put detailed processes in place for the transmission of high-value funds, including multiple layers of approvals and authentication, including in-person meetings.
  • Transition away from using voice recognition as a primary authentication practice.
  • Embed difficult security questions into the authentication process, even when voice and facial recognition are being used. The security questions should elicit answers that are not readily available online.
  • Consider implementing deepfake recognition tools, including detection systems that verify a person is physically present, 3D depth sensing, multi-angle face scans, and voice authentication tools.
  • Instill a healthy dose of skepticism into the organization to reinforce prior education and training so employees will be less susceptible to a deepfake scheme.

Deepfake technology is developing at a rapid rate. It is becoming easier to use and more effective at carrying out fraud schemes. The use of deepfakes by threat actors has exploded in the past year and is expected to increase exponentially, making incidents more complex and difficult to detect. Identifying the risk that deepfakes pose to your organization, giving your employees tools to identify, respond to, and mitigate a deepfake fraud scheme, and considering detection and behavior monitoring tools will help prevent your organization from being victimized.

Linn Freedman is chair of the Data Privacy + Cybersecurity and AI Teams at Robinson+Cole, LLP. Freedman focuses her practice on compliance with all state and federal data privacy and security laws and regulations, as well as emergency data breach response, mitigation, and litigation. Reprinted with permission from the March 13, 2026 edition of Corporate Counsel. © 2026 ALM Global Properties, LLC. All rights reserved. Further duplication without permission is prohibited; contact 877-256-2472 or asset-and-logo-licensing@alm.com.

March 12, 2026
Data Privacy + Cybersecurity Insider

March 5, 2026
Data Privacy + Cybersecurity Insider



News


April 28, 2026

Linn Freedman Reacts to New CT Rule Proposed as a Result of Generative AI Hallucinations

Data Privacy + Cybersecurity team and AI practice chair Linn Freedman discussed a recently proposed rule from the Rules Committee of the Connecticut Superior Court in the Law360 article, “Conn.’s Proposed AI Rule Not A Shock To Attys,” published on April 23, 2026. The proposed rule would put attorneys at risk if they fail to verify citations and evidence produced by artificial intelligence (AI). Linn said the rule is likely being put in writing because the court is concerned that lawyers continue to file hallucinations. “I think they’re reiterating to the entire bar what their obligations are, specifically with regard to the use of gen AI. I think it makes sense, because more and more lawyers are not aware, which I don’t understand at this point. They’re not aware of the consequences and the many examples of lawyers that have been sanctioned as a result of wasting the resources of both opposing counsel and the court when they are not providing accurate citations in cases.” Read the article.

Law360
April 23, 2026

Linn Freedman Urges Heightened Awareness for All Critical Infrastructure

The Bond Buyer
April 15, 2026

Robinson+Cole Presented with 2026 Law Firm Excellence in Innovation Award

Massachusetts Lawyers Weekly
March 19, 2026

Roma Patel Authors Article on Secondary Liability and AI

The Licensing Journal
February 25, 2026

Data Privacy + Cybersecurity Team Receives 2026 Readers' Choice Awards

JD Supra
February 19, 2026

Linn Freedman Receives Global Ranking in Chambers Global Guide 2026

Chambers & Partners
February 5, 2026

Linn Freedman Quoted in Cybersecurity Law Report on FTC Settlement

Cybersecurity Law Report
January 27, 2026

Jim Merrifield’s Elevation to Chief Data Officer Featured in Law360 Pulse

Law360 Pulse
December 1, 2025

Robinson+Cole Advances Innovation as First Am Law 200 Firm to Partner with Newcode.ai in United States

Firm showcases innovative agentic AI solution in firmwide program with Newcode.ai CEO Maged Helmy

Events


Upcoming

Use of AI by Lawyers & Judges: Praise & Peril

Jun 11 2026
Rhode Island Bar Association 2026 Annual Meeting
Past

AI as a Friend, Not Foe: Welcoming AI to Master Information Governance

Apr 21 2026
ARMA InfoNEXT 2026
Past

The Regulatory Roadmap for AI in Employment

Mar 27 2026
38th Annual Labor & Employment Law Conference
Past

Can AI Be Patented? Navigating Patent Subject Matter Eligibility

Feb 9 2026
2026 AUTM Annual Meeting
Past

State AI Laws and the Federal EO: Effective Dates, Scope, Enforcement, Compliance Planning

Jan 27 2026
Barbri Webinar
Past

A CISO’s Guide to the Legalities of AI

Jan 14 2026
Top of MIND Webinar

Data Privacy + Cybersecurity Insider


Phishing Now Top Method for Initial Unauthorized Network Access

According to Cisco Talos researchers, phishing is the primary method threat actors use to gain unauthorized access to networks, accounting for more than one-third of all incidents in the first quarter of 2026. This increase is attributed to threat actors using legitimate AI tools to enhance phishing campaigns, particularly against the health care and government sectors....

SCOTUS Hears the Next Big Fourth Amendment Fight Over Digital Location Data

Earlier this year, the Pennsylvania Supreme Court held that users generally lack a reasonable expectation of privacy in unprotected Google search records, underscoring how aggressively some courts are still applying third-party doctrine principles to digital data. Commonwealth v. Kurtz, 348 A.3d 133 (Pa. 2025) (our previous blog post on Kurtz is available here). The question...

CCPA Employee Data Rulemaking Could Reshape Employer Privacy Compliance in California

The California Consumer Privacy Act (CCPA) continues to stand apart as the only comprehensive state privacy law in the U.S. that applies to personal information relating to employees, job applicants, and independent contractors. Since that coverage expanded in January 2023, many employers have had to navigate the difficult task of applying a consumer privacy framework...

Tempus AI Faces Class Action Cases for Collection of Genetic Information in Acquisition

Multiple class action cases have been filed against Tempus AI alleging that, during its acquisition of Ambry Genetics, the company improperly collected and disclosed genetic information without obtaining prior written consent from individuals. Tempus acquired Ambry, a genetic testing firm, in February 2025 for $600 million. The acquisition included the...

EU AI Act Update: Omnibus Talks Stall, but Clock Is Still Ticking

Talks between European Union legislators broke down on Wednesday as they tried to agree on proposed amendments to the EU AI Act. At the center of the debate is the Digital Omnibus on AI, first introduced in November 2025, which would delay several key compliance deadlines under the Act. If approved, the Digital Omnibus would...

What Legal AI Is Really Changing in Law Firm Economics

Legal commentary on artificial intelligence in law practice often focuses on speed: drafts that once took days can now be produced in hours, and research that once took hours can now be narrowed in minutes. Those gains are real, but they do not resolve the more important operational questions. Many firms still don’t know whether...

Privacy Tip #489 – Social Media Scams #1 in 2025

The Federal Trade Commission (FTC) recently reported that, in 2025, social media scams were the costliest of all scams against consumers, with a whopping $2.1 billion lost. Thirty percent of those who reported losing funds in 2025 indicated that the scam started over social media. The number of 2025 scams beginning on social media increased...

DOJ’s Big Win in North Korean IT Worker Fraud Scheme

On April 15, 2026, the Department of Justice (DOJ) announced that two U.S. nationals, Kejia Wang and Zhenxing Wang, were sentenced for facilitating a North Korean IT worker scheme that compromised over 80 U.S. identities, with sentences of 108 and 92 months respectively, supervised release, and forfeiture orders. The scheme involved the defendants operating “laptop...

California’s DROP Regime Will Change the Data Broker Risk Equation

California’s new Delete Request and Opt-Out Platform (DROP) goes live on August 1, 2026, and the compliance stakes are enormous. State officials have warned that a single missed deletion cycle could create theoretical penalty exposure of $1.5 billion for one data broker. That number reflects how aggressively the Delete Act is designed to work. One consumer request can...

OpenAI’s New Privacy Filter: A Development with Limits

On April 22, 2026, OpenAI released its new Privacy Filter tool, designed to identify and mask sensitive information in text before that text is stored, shared, or used in downstream processing. OpenAI says the tool can detect items such as names, addresses, account numbers, private dates, and other personal data in documents, logs, and datasets...

Legal AI Delivers More Value When It Is Tied to Business Outcomes

As corporate legal departments continue adopting AI, the conversation is shifting from experimentation to strategy. According to the Thomson Reuters Institute’s 2026 State of the Corporate Law Department Report, nearly half of legal departments now report department-wide AI adoption, and technology has become a top strategic priority for many general counsel. That momentum matters, but adoption...

Privacy Tip #488 – Account Change Phishing Alerts from “Apple” Are Tricking Users

A new, yet old, scheme has been quite successful and users should beware. If you get an account change message from Apple, be on high alert that it is fake and malicious. According to Bleeping Computer, the scheme involves a threat actor using an Apple support email (e.g., appleid@id.apple.com) to send phishing emails to unsuspecting...

Social Engineering Schemes Target C-Suite Executives

March was a busy month for former Black Basta affiliates who are using old social engineering techniques to target executives in the manufacturing, professional, scientific, and technical services industries. According to Reliaquest, the activity of the threat actors indicates that these sectors “were likely direct targets.” According to its report, “Attackers are using automation to...

Click to Join, Hard to Leave: FTC Reopens Negative Option Rulemaking

On March 11, 2026, the Federal Trade Commission (FTC) announced an Advance Notice of Proposed Rulemaking (ANPRM) highlighting its Rule Concerning the Use of Prenotification Negative Option Plans, seeking comment on whether the rule should be amended or supplemented to better address deceptive or unfair negative option practices. The FTC describes negative options as marketing...

CNN Must Defend Privacy Suit Alleging Data Sharing with Microsoft and Adtech Firms 

A federal judge has ruled that CNN must face a proposed class action alleging that its website shared consumers’ personal information with Microsoft and adtech firms without consent, in alleged violation of the California Invasion of Privacy Act (CIPA). The lawsuit challenges CNN’s alleged use of online tracking tools and the downstream sharing of data in the digital advertising ecosystem. According...