Robinson+Cole LLP

Nationally ranked in Chambers USA in Privacy & Data Security since 2012 and globally since 2019, Linn Freedman is a leader in the field of privacy and cybersecurity law. Her clients have said that Linn “is a great lawyer, focused on understanding the client and the business,” “is very responsive, pragmatic and very good to work with,” and “travels to the end and back for her clients,” and that she has “extraordinary integrity and a great mind for creativity, while still adhering to regulatory compliance.”

Linn is chair of our firm’s Data Privacy + Cybersecurity practice and of the Artificial Intelligence Team. She focuses on compliance with state and federal data privacy and security laws and regulations, emergency data breach response, mitigation, and complex litigation. She counsels clients on state and federal investigations and enforcement actions. She has a particular focus on health information technology and has assisted clients with navigating laws governing the access, use, and disclosure of protected health information and substance use disorder information, as well as HIPAA compliance.

Data Privacy + Cybersecurity Compliance

As a Certified Information Privacy Professional, Linn helps clients comply with all state and federal data privacy and security laws and regulations, and counsels them on investigations and enforcement actions.

Linn advises companies and organizations on best practices for the collection, maintenance, and sharing of high-risk data to help avoid breaches and cyber intrusions. She assists with data mapping and the development of privacy and security plans, and helps clients comply with constantly evolving, industry-specific privacy and data protection requirements.

Linn also assists clients with data security regulatory requirements, including implementing Written Information Security Plans (WISPs). She provides guidance on the privacy and data protection implications of deploying communication and data storage technologies, mobile applications, and location-based services, as well as the risks associated with the use of artificial intelligence. She works with clients to develop software and cloud vendor agreements; website and mobile app privacy policies and terms and conditions of use; and social media policies, practices, and procedures; to assess the risks of tracking and pixel technologies; and to build AI governance programs.

Linn has given presentations around the country on data privacy and cybersecurity, and she writes extensively on these topics, including for the firm’s Health Law Diagnosis and Data Privacy + Cybersecurity Insider blogs. The widely recognized Insider blog has received multiple Readers' Choice Award distinctions from JD Supra and was featured in FeedSpot's “100 Best Infosec Blogs and Websites in 2025.”

Security Incident + Data Breach Preparedness + Emergency Response

Linn assists clients with data breach preparedness, including vendor selection and pre-negotiating contracts for forensic, notification, and call center services. She also assists with the development and training of data breach response teams. If there is a security incident or data breach, Linn assists with all related investigation, negotiation, response, notification, remediation, coordination, and litigation. She has extensive experience responding to security incidents, including ransomware attacks and business email compromise incidents, and is well-versed in helping clients with post-breach investigations by state and federal authorities. She also provides live, hands-on security incident tabletop exercises for clients.

Privacy + Class Action Litigation + Enforcement

If a data breach or privacy issue results in litigation or an enforcement action, Linn works with clients to resolve the matter through the court system or before federal or state regulatory agencies. She also represents companies in privacy litigation involving the unauthorized access, use, or disclosure of personally identifiable and health information, and in recovering data transferred out of companies by employees without authorization. She represents companies responding to website pixel litigation. Linn is a former Assistant Attorney General for the State of Rhode Island and works with the attorneys general of multiple states on compliance and enforcement actions involving data breaches and data security.

HIPAA Compliance

Linn has extensive experience helping clients with HIPAA compliance. She regularly assists with HIPAA compliance programs and employee awareness training, cybersecurity in relation to websites and online portals, and data use and sharing agreements for health information exchanges.

She has deep experience helping clients defend enforcement actions by the Office for Civil Rights of the Department of Health and Human Services.

Linn is an Adjunct Professor of Law at Roger Williams University School of Law and a former Adjunct Professor in Brown University’s Executive Masters of Cybersecurity Program. Prior to joining our firm, Linn was a partner at Nixon Peabody, where she served as leader of the firm's Privacy & Data Protection Group. She also served as assistant attorney general and deputy chief of the Civil Division of the Attorney General's Office for the State of Rhode Island.  

  • Loyola University School of Law (J.D.)
  • Newcomb College of Tulane University (B.A., American Studies, with honors)

  • Commonwealth of Massachusetts
  • State of Rhode Island
  • U.S. Supreme Court
  • U.S. Court of Appeals, 1st Circuit
  • U.S. Court of Appeals, 5th Circuit
  • U.S. District Court, District of Massachusetts
  • U.S. District Court, District of Rhode Island

Named one of Providence Business News' 2024 Leaders & Achievers honorees

Ranked as a leader in Chambers USA: America's Leading Lawyers for Business in the area of Privacy & Data Security nationwide since 2012 and globally since 2019

Named one of the “Women to Know in Health IT” by Becker’s Hospital Review in 2024, 2023, 2022, and 2020

Lifetime Achievement Award recipient as part of the 2023 Tech10 and Next Tech Generation Awards, presented by Rhode Island Monthly and the Tech10 Advisory Group

Certified Information Privacy Professional/US (CIPP/US) by the International Association of Privacy Professionals (IAPP)

Recognized as a National Law Review Go-To Thought Leader

Recognized by Lexology as a "Legal Influencer" 

Selected by her peers for inclusion in The Best Lawyers in America© in the areas of Commercial Litigation and Privacy and Data Security Law since 2020 and in the area of Artificial Intelligence Law for 2026

2016-2026 JD Supra Readers' Choice Top Author and the #1 author in Cybersecurity

Recognized as one of 50 Top Healthcare IT Professionals by Health Data Management, 2015

Roger Williams University School of Law 2015 Champions for Justice award recipient

Rhode Island Department of Health Founder's Award recipient

Rhode Island Attorney General Justice Award recipient

Rhode Island Department of Health Award for Excellence in Public Health Promotion recipient

Profiled by Directors & Boards in their 2014 class of Directors to Watch

Providence Business News Business Women Industry Leader - Professional Services for 2012

Robinson+Cole Community Service Award Recipient, 2021

American Bar Association

Rhode Island Bar Association

Rhode Island Judiciary
Committee on Artificial Intelligence and the Courts

International Association of Privacy Professionals

American Health Lawyers Association

Defense Counsel of Rhode Island

CISO Executive Network

Foundation for Rhode Island Day Schools
President

Roger Williams University
Board Member, Secretary, Finance Committee, Chair, Governance Committee, University College Committee

Roger Williams University School of Law
Past Board Member, Member Pro Bono Advisory Committee, Adjunct Professor (Privacy Law)

Rhode Island Center for Justice
Board Member (2015 - present)

Professional Facilities Management
Board Member

Publications


Data Privacy + Cybersecurity Insider, April 16, 2026

Data Privacy + Cybersecurity Insider, April 9, 2026

Data Privacy + Cybersecurity Insider, March 26, 2026


Data Privacy + Cybersecurity Insider, March 19, 2026

March 13, 2026

The Rise of Deepfakes: What to Know and How to Prepare

Corporate Counsel

Deepfakes, images that are formed through the use of synthetic media, including artificial intelligence and machine learning, are increasingly becoming a threat to organizations. According to Britannica.com, a deepfake is “synthetic media, including images, videos or audio, generated by artificial intelligence (AI) technology that portrays something that does not exist in reality or events that have never occurred.” Keepnet estimates that deepfakes “are growing at an alarming rate,” from 500,000 in 2023 to approximately 8 million in 2025. More sobering is the prediction that deepfake content is “projected to increase by 900% annually.” This increase in deepfake content has been accompanied by a corresponding increase in phishing and fraud incidents against companies. Attempts to defraud companies with deepfake content increased 2,137% over the past three years, including deepfake “spear phishing attacks” against contact centers (when threat actors call a contact center in a vishing scheme, impersonating an employee to obtain information from the call center representative to change credentials so the threat actor can obtain access to the real employee’s account). Deepfake fraud rose 162%, voice deepfakes rose 680%, and contact center fraud accounted for $44.5 billion lost in 2025. Deloitte estimates that fraud losses in the United States “facilitated by generative AI are projected to climb…to $40 billion by 2027, with a compound annual growth rate of 32%.”

Compounding the issue is that individuals have a difficult time detecting deepfakes. Although 60% of people believe they can spot a deepfake, the actual rate of detection is closer to 24.5%. Nearly three-quarters of people don’t believe they can tell a cloned voice from a real one. All of these statistics point to an indisputable conclusion: deepfakes are here to stay.

Deepfakes are now being used by threat actors the way email phishing schemes (phishing), SMS text schemes (smishing), voice phishing (vishing), and QR code (quishing) attacks have been used in the past. What distinguishes deepfakes from these more traditional methods of fraud is the use of facial, voice, or image recognition to trick the user into believing that the request is legitimate, with confirmed authentication. Deepfakes have become such a risk to organizations that the Department of Homeland Security published a report entitled “Increasing Threat of Deepfake Identities,” which provides a useful and easy-to-understand history of the technology and its use, and the “inherent risk of deepfakes by malign actors.” The report states, “Deepfakes and the misuse of synthetic content pose a clear, present, and evolving threat to the public across national security, law enforcement, financial, and societal domains.” These risks include deepfakes being used by nation-states, the use of deepfakes for non-consensual pornography, the use of synthetic content to carry out fraud schemes, and the “susceptibility of the public to believe what they see….As a result, we expect an emerging threat landscape wherein the attacks will become easier and more successful, and the efforts to counter and mitigate these threats will need orchestration and collaboration by governments, industry, and society.”

What are the Risks of Deepfakes to Companies?

Companies have expended vast amounts of time and resources to train employees on cybersecurity schemes, including phishing, smishing, vishing, and quishing. A basic tenet of combating these schemes is to train employees on how to spot a scheme and not trust anything that comes via email or text. What makes deepfakes so effective is that the threat actor takes the suspicion and skepticism out of the mix by using a real voice or face to convince the target that the request is legitimate and coming from a trusted colleague or executive.

The scheme may start with an email requesting a change in wiring instructions or a wire transfer. The target follows company protocol to authenticate the request and asks for a telephone conference or video call with the individual who has requested or can approve the transaction. The threat actor provides the telephone number or sets up a video call with the target, and uses deepfake technology, including voice and facial recognition technology, to impersonate the individual authorized to approve the transaction. Since the target is “seeing” and “hearing” the executive or colleague’s approval of the transaction, the target believes the person is real and the transaction proceeds.

In general, employees are unsuspecting individuals who are trying to do their jobs and may not anticipate that fraudsters are lurking around every corner. This trusting nature is causing companies to become victims of fraud and must be addressed to combat the exploding incidents and billions of dollars in losses from deepfake schemes.

Tips to Help Prevent Fraud from Deepfakes

  • Educate employees on what deepfake schemes are and how to detect them. Consider showing employees, through a demonstration, how a deepfake is made (it’s very effective to use an executive as a guinea pig) and deployed.
  • Educate employees about how the content they share online can be used to create a deepfake; provide tips to adjust their social media privacy settings to restrict access to photos and videos; and show them how content can be harvested to train deepfake models.
  • Implement multi-factor authentication and authenticator apps on all critical applications.
  • Train call center personnel on how deepfakes can be used to impersonate an employee to change credentials.
  • Put detailed processes in place for the transmission of high-value funds, including multiple layers of approvals and authentication, as well as in-person meetings.
  • Transition away from using voice recognition as a primary authentication practice.
  • Embed difficult security questions into the authentication process, even when voice and facial recognition are being used. The security questions should elicit answers that are not readily available online.
  • Consider implementing deepfake recognition tools, including detection systems to verify that a person is physically present, 3D depth sensing, multi-angle face scans, and voice authentication tools.
  • Instill a healthy dose of skepticism into the organization, building on prior education and training, so employees will be less susceptible to a deepfake scheme.

Deepfake technology is developing at a rapid rate. It is becoming easier to use and more effective at carrying out fraud schemes. The use of deepfakes by threat actors has exploded in the past year, is expected to increase exponentially, and makes incidents more complex and difficult to detect. Identifying the risk that deepfakes pose to your organization, providing your employees with tools to identify, respond to, and mitigate a deepfake fraud scheme, and considering detection and behavior monitoring tools will help prevent your organization from being victimized.

Linn Freedman is chair of the Data Privacy + Cybersecurity and AI Teams at Robinson+Cole, LLP. Freedman focuses her practice on compliance with all state and federal data privacy and security laws and regulations, as well as emergency data breach response, mitigation, and litigation.

Reprinted with permission from the March 13, 2026 edition of Corporate Counsel. © 2026 ALM Global Properties, LLC. All rights reserved. Further duplication without permission is prohibited; contact 877-256-2472 or asset-and-logo-licensing@alm.com.

Data Privacy + Cybersecurity Insider, March 12, 2026

Data Privacy + Cybersecurity Insider, March 5, 2026

Data Privacy + Cybersecurity Insider, February 26, 2026

Data Privacy + Cybersecurity Insider, February 19, 2026





News


March 18, 2026

Linn Freedman Sounds the Alarm About the Growth of Deepfake Content

Data Privacy, Cybersecurity + AI practice chair Linn F. Freedman cautioned readers about deepfakes – images formed through the use of “synthetic media,” including artificial intelligence – and their increasing threat to businesses and organizations in her article, “The Rise of Deepfakes: What to Know and How to Prepare,” published in Corporate Counsel on March 13, 2026. Linn highlights that the difference between deepfakes and more traditional methods of fraud “is the use of facial, voice or image recognition to trick the user into believing the request is legitimate, with confirmed authentication.” She suggests that businesses provide employees with tools to identify, respond to, and mitigate a deepfake fraud scheme and consider implementing detection and behavior monitoring tools to prevent being victimized.

Corporate Counsel
February 25, 2026

Data Privacy + Cybersecurity Team Receives 2026 Readers' Choice Awards

JD Supra
February 19, 2026

Linn Freedman Receives Global Ranking in Chambers Global Guide 2026

Chambers & Partners
February 5, 2026

Linn Freedman Quoted in Cybersecurity Law Report on FTC Settlement

Cybersecurity Law Report
December 18, 2025

Business Transactions in Health Care Team Wins “Pharma & Devices Deal of the Year” at Global M&A Network’s 7th Annual USA Middle Markets M&A Atlas Awards Gala

Global M&A Network
November 26, 2025

Linn Freedman Discusses AI in Education

Law 401 Podcast
October 16, 2025

Linn Freedman Invited to Join Inaugural Cybersecurity Council at Rhode Island College

September 25, 2025

Linn Freedman Expresses Concerns About Cyber Threats Exposing Sensitive Judicial Documents

Rhode Island Lawyers Weekly
August 26, 2025

78 Robinson+Cole Lawyers Listed in The Best Lawyers in America© 2026

Firm receives top listing in Connecticut lawyer count in national peer review survey


Events


Past

The Regulatory Roadmap for AI in Employment

Mar 27 2026
38th Annual Labor & Employment Law Conference
Past

State AI Laws and the Federal EO: Effective Dates, Scope, Enforcement, Compliance Planning

Jan 27 2026
Barbri Webinar
Past

A CISO's Guide to the Legalities of AI

Jan 14 2026
Top of MIND Webinar
Past

Deepfakes: A Demonstration of How They are Made and Used by Threat Actors

Nov 19 2025
Boston Bar Association 2025 Privacy, Cybersecurity & Digital Law Conference
Past

CISO ExecNet National Symposium

Jul 29 2025
Chicago, IL
Past

Navigating Legal Implications in Litigation, Contractual Agreements, and Human Resources

Jun 16 2025
First Israel and Rhode Island Conference on AI and the Law

Data Privacy + Cybersecurity Insider


Below are excerpts from Data Privacy + Cybersecurity Insider blog posts authored by Linn.

Social Engineering Schemes Target C-Suite Executives

March was a busy month for former Black Basta affiliates who are using old social engineering techniques to target executives in the manufacturing, professional, scientific, and technical services industries. According to Reliaquest, the activity of the threat actors indicates that these sectors “were likely direct targets.” According to its report, “Attackers are using automation to...

Privacy Tip #487 – Eurail Notifies 300,000+ Individuals of Data Breach

I have very fond memories of using a Eurail pass back in the day while backpacking through Europe as a student. I was saddened to see that Eurail was the victim of a data breach in December 2025 when attackers obtained access to travelers’ full names and contact information, including email addresses, passport details, ID...

Joint Advisory Warns of Iran Cyber Actors Attacking U.S. Critical Infrastructure

Iran has always been a formidable cyber threat to the United States, but since the war in Iran commenced, the attacks have been coming frequently and in full force. According to the Joint Cybersecurity Advisory issued on April 7, 2026, by the FBI, CISA, NSA, EPA, DOE, and Cyber Command, Iran-based hackers are targeting operational technology...

Water Treatment Facility Downed with Ransomware Attack

Critical infrastructure operators at the water treatment plant in Minot, North Dakota, were forced to resort to manual processes when the plant's Supervisory Control and Data Acquisition (SCADA) system became inoperable as a result of a March 14, 2026, ransomware attack. The attackers are unidentified, but the attack comes in the wake of the war in Iran,...

Winona County Victim of Cyber Attack

Minnesota Governor Tim Walz issued an emergency executive order on April 7, 2026, dispatching the Minnesota National Guard after Winona County requested assistance following a cyber attack disrupting its “critical systems and digital services.” The attack occurred on April 6, 2026, and is “significantly impairing the county’s ability to deliver vital emergency and municipal services.”...

Privacy Tip #486 – “Stolen Credentials Are a Major Threat”

According to Security Week’s recent article, “Stolen Logins Are Fueling Everything from Ransomware to Nation-State Cyberattacks,” cybersecurity firm Ontinue’s 2H 2025 Threat Intelligence Report showcases that “Attackers aren’t breaking in anymore, they’re logging in.” According to Ontinue’s report, in the second half of 2025, “identity became the primary attack surface.” This means that users were...

FBI Warns: Iran Cyber Actors Using Telegram to Push Malware

The Federal Bureau of Investigation (FBI) recently released a FLASH warning highlighting malicious cyber activity conducted by threat actors operating on behalf of Iran’s Ministry of Intelligence and Security. According to the FBI, these threat actors are using Telegram as a command-and-control infrastructure to push malware “targeting Iranian dissidents, journalists opposed to Iran, and other...

Mandiant M-Trends 2026 Report: Threat Actors Using AI in Attacks

Mandiant recently issued its M-Trends 2026 Report, a must-read for all cybersecurity professionals. The report provides several conclusions and insights, including that both nation-states and run-of-the-mill financially motivated threat actors are “integrating AI to accelerate the attack lifecycle.” These threat actors are “increasingly relying on large language models (LLMs) as...