Robinson+Cole LLP

Nationally ranked in Chambers USA in Privacy & Data Security since 2012 and globally since 2019, Linn Freedman is a leader in the field of privacy and cybersecurity law. Her clients have said that Linn “is a great lawyer, focused on understanding the client and the business,” “is very responsive, pragmatic and very good to work with,” “travels to the end and back for her clients,” and has “extraordinary integrity and a great mind for creativity, while still adhering to regulatory compliance.”

Linn is chair of our firm’s Data Privacy + Cybersecurity practice and of the Artificial Intelligence Team. She focuses on compliance with state and federal data privacy and security laws and regulations, as well as emergency data breach response, mitigation, and complex litigation. She counsels clients on state and federal investigations and enforcement actions. She has a particular focus on health information technology and has assisted clients with navigating laws governing the access, use, and disclosure of protected health information and substance use disorder information, as well as with HIPAA compliance.

Data Privacy + Cybersecurity Compliance

As a Certified Information Privacy Professional, Linn helps clients comply with all state and federal data privacy and security laws and regulations, and counsels them on investigations and enforcement actions.

Linn advises companies and organizations on best practices for the collection, maintenance, and sharing of high-risk data to help avoid breaches and cyber intrusions. She assists with data mapping and the development of privacy and security plans, and helps clients comply with constantly evolving, industry-specific privacy and data protection regulatory requirements.

Linn also assists clients with data security regulatory requirements, including implementing Written Information Security Plans (WISPs). She provides guidance regarding privacy and data protection implications associated with the deployment of communication and data storage technologies, mobile applications, and location-based services, as well as risks associated with the use of artificial intelligence. She works with clients to develop software and cloud vendor agreements, website and mobile app privacy policies and terms and conditions of use, social media policies, practices, and procedures, and AI governance programs, and to assess the risks posed by tracking and pixel technologies.

Linn has given presentations around the country on data privacy and cybersecurity, and she writes extensively on these topics, including for the firm’s Health Law Diagnosis and Data Privacy + Cybersecurity Insider blogs. The widely recognized Insider blog has received multiple Readers’ Choice Award distinctions from JD Supra and has been featured in FeedSpot’s “100 Best Infosec Blogs and Websites in 2025.”

Security Incident + Data Breach Preparedness + Emergency Response

Linn assists clients with data breach preparedness, including assisting with vendor selection and pre-negotiating contracts for forensic, notification, and call center services. She also assists with the development and training of data breach response teams. If there is a security incident or data breach, Linn assists with all related investigation, negotiation, response, notification, remediation, coordination, and litigation. She has extensive experience responding to security incidents, including ransomware attacks and business email compromise incidents, and is well-versed in helping clients with post-breach investigations by state and federal authorities. She also provides live, hands-on security incident tabletop exercises for clients.

Privacy + Class Action Litigation + Enforcement

If a data breach or privacy issue results in litigation or an enforcement action, Linn works with clients to resolve the matter through the court system or before federal or state regulatory agencies. She represents companies in privacy litigation involving the unauthorized access, use, or disclosure of personally identifiable and health information, in recovering data transferred from companies by employees without authorization, and in responding to website pixel litigation. Linn is a former Assistant Attorney General for the State of Rhode Island and works with the attorneys general of multiple states on compliance and enforcement actions involving data breaches and data security.

HIPAA Compliance

Linn has extensive experience helping clients with HIPAA compliance. She regularly assists with HIPAA compliance programs and employee awareness training, cybersecurity in relation to websites and online portals, and data use and sharing agreements for health information exchanges.

She has deep experience helping clients defend enforcement actions by the Office for Civil Rights of the Department of Health and Human Services.

Linn is an Adjunct Professor of Law at Roger Williams University School of Law and a former Adjunct Professor in Brown University’s Executive Master in Cybersecurity Program. Prior to joining our firm, Linn was a partner at Nixon Peabody, where she served as leader of the firm's Privacy & Data Protection Group. She also served as Assistant Attorney General and Deputy Chief of the Civil Division of the Attorney General's Office for the State of Rhode Island.

Education

  • Loyola University School of Law (Juris Doctor)
  • Newcomb College of Tulane University (B.A., American Studies, with honors)

Admissions

  • Commonwealth of Massachusetts
  • State of Rhode Island
  • U.S. Supreme Court
  • U.S. Court of Appeals, 1st Circuit
  • U.S. Court of Appeals, 5th Circuit
  • U.S. District Court, District of Massachusetts
  • U.S. District Court, District of Rhode Island

Recognition

Named one of Providence Business News’ 2024 Leaders & Achievers honorees

Ranked as a leader in Chambers USA: America's Leading Lawyers for Business in the area of Privacy & Data Security nationwide since 2012 and globally since 2019

Named one of the “Women to Know in Health IT” by Becker’s Hospital Review in 2024, 2023, 2022, and 2020

Lifetime Achievement Award recipient as part of the 2023 Tech10 and Next Tech Generation Awards, presented by Rhode Island Monthly and the Tech10 Advisory Group

Certified Information Privacy Professional/US (CIPP/US) by the International Association of Privacy Professionals (IAPP)

Recognized as a National Law Review Go-To Thought Leader

Recognized by Lexology as a "Legal Influencer" 

Selected by her peers for inclusion in The Best Lawyers in America© in the areas of Commercial Litigation and Privacy and Data Security Law since 2020 and in the area of Artificial Intelligence Law for 2026

2016-2026 JD Supra Readers' Choice Top Author and the #1 author in Cybersecurity

Recognized as one of 50 Top Healthcare IT Professionals by Health Data Management, 2015

Roger Williams University School of Law 2015 Champions for Justice award recipient

Rhode Island Department of Health Founder's Award recipient

Rhode Island Attorney General Justice Award recipient

Rhode Island Department of Health Award for Excellence in Public Health Promotion recipient

Profiled by Directors & Boards in their 2014 class of Directors to Watch

Providence Business News Business Women Industry Leader - Professional Services for 2012

Robinson+Cole Community Service Award Recipient, 2021

Memberships + Affiliations

Rhode Island Bar Association

Rhode Island Judiciary
Committee on Artificial Intelligence and the Courts

International Association of Privacy Professionals

American Health Lawyers Association

CISO Executive Network

Roger Williams University
Board Member, Finance Committee, Chair, Governance Committee, Executive Committee, Extension School Committee

Roger Williams University School of Law
Past Board Member, Adjunct Professor (Privacy Law)

Rhode Island College - Advisory Council for the Institute for Cybersecurity & Emerging Technology

Rhode Island Center for Justice
Board Member (2015 - present)

Professional Facilities Management
Board Member, Secretary

American Bar Association

Defense Counsel of Rhode Island

Publications


May 7, 2026

Data Privacy + Cybersecurity Insider

April 30, 2026

Data Privacy + Cybersecurity Insider

April 23, 2026

Data Privacy + Cybersecurity Insider

April 16, 2026

Data Privacy + Cybersecurity Insider

April 9, 2026

Data Privacy + Cybersecurity Insider

March 26, 2026

Data Privacy + Cybersecurity Insider

March 19, 2026

Data Privacy + Cybersecurity Insider

March 13, 2026

The Rise of Deepfakes: What to Know and How to Prepare

Corporate Counsel

Deepfakes, synthetic media created through the use of artificial intelligence and machine learning, are increasingly becoming a threat to organizations. According to Britannica.com, a deepfake is “synthetic media, including images, videos or audio, generated by artificial intelligence (AI) technology that portrays something that does not exist in reality or events that have never occurred.” It is estimated by Keepnet that deepfakes “are growing at an alarming rate,” from 500,000 in 2023 to approximately 8 million in 2025. More sobering is the prediction that deepfake content is “projected to increase by 900% annually.”

This increase in deepfake content has been accompanied by a corresponding increase in phishing and fraud incidents against companies. Attempts to defraud companies with deepfake content increased 2,137% over the past three years, including deepfake “spear phishing attacks” against contact centers (when threat actors call a contact center in a vishing scheme, impersonating an employee to obtain information from the call center representative to change credentials so the threat actor can obtain access to the real employee’s account). Deepfake fraud rose 162%, voice deepfakes rose 680%, and contact center fraud accounted for $44.5 billion lost in 2025. Deloitte estimates that fraud losses in the United States “facilitated by generative AI are projected to climb…to $40 billion by 2027, with a compound annual growth rate of 32%.”

Compounding the issue is that individuals have a difficult time detecting deepfakes. Although 60% of people believe they can spot a deepfake, the actual rate of detection is closer to 24.5%. Nearly three-quarters of people don’t believe they can tell a cloned voice from a real one.

All of these statistics point to an indisputable conclusion: deepfakes are here to stay. They are now being used by threat actors the way email phishing schemes (phishing), SMS text schemes (smishing), voice phishing (vishing), and QR code (quishing) attacks have been used in the past. What distinguishes deepfakes from these more traditional methods of fraud is the use of facial, voice, or image recognition to trick the user into believing that the request is legitimate, with confirmed authentication.

Deepfakes have become such a risk to organizations that the Department of Homeland Security published a report entitled “Increasing Threat of Deepfake Identities,” which provides a useful and easy-to-understand history of the technology and its use, and the “inherent risk of deepfakes by malign actors.” The report states, “Deepfakes and the misuse of synthetic content pose a clear, present, and evolving threat to the public across national security, law enforcement, financial, and societal domains.” These risks include deepfakes being used by nation states, the use of deepfakes for non-consensual pornography, the use of synthetic content to carry out fraud schemes, and the “susceptibility of the public to believe what they see….As a result, we expect an emerging threat landscape wherein the attacks will become easier and more successful, and the efforts to counter and mitigate these threats will need orchestration and collaboration by governments, industry, and society.”

What are the Risks of Deepfakes to Companies?

Companies have expended vast amounts of time and resources to train employees on cybersecurity schemes, including phishing, smishing, vishing, and quishing. A basic tenet of combating these schemes is to train employees on how to spot a scheme and not to trust anything that comes via email or text. What makes deepfakes so effective is that the threat actor has taken the suspicion and skepticism out of the mix by using a real voice or face to convince the target that the request is legitimate and coming from a trusted colleague or executive.

The scheme may start with an email requesting a change in wiring instructions or a wire transfer. The target will follow company protocol to authenticate the request and ask for a telephone conference or video call with the individual who has requested or can approve the transaction. The threat actor provides the telephone number or sets up a video call with the target and is able to use deepfake technology, including voice and facial recognition technology, to impersonate the individual authorized to approve the transaction. Since the target is “seeing” and “hearing” the executive or colleague’s approval of the transaction, the target believes the person is real and the transaction proceeds.

In general, employees are unsuspecting individuals who are trying to do their jobs and may not anticipate that fraudsters are lurking around every corner. This trusting nature is causing companies to become victims of fraud and must be addressed to combat the exploding incidents and billions of dollars in losses from deepfake schemes.

Tips to Help Prevent Fraud from Deepfakes

  • Educate employees on what deepfake schemes are and how to detect them. Consider showing employees, through a demonstration, how a deepfake is made (it’s very effective to use an executive as a guinea pig) and deployed.
  • Educate employees about how the content they share online can be used to create a deepfake; provide tips to adjust their social media privacy settings to restrict access to photos and videos; and show them how content can be harvested to train deepfake models.
  • Implement multi-factor authentication and authenticator apps on all critical applications.
  • Train call center personnel on how deepfakes can be used to impersonate an employee to change credentials.
  • Put detailed processes in place for the transmission of high-value funds, including multiple layers of approvals and authentication, including in-person meetings.
  • Transition away from using voice recognition as a primary authentication practice.
  • Embed difficult security questions into the authentication process, even when voice and facial recognition is being used. The security questions should elicit an answer that is not readily available online.
  • Consider implementing deepfake recognition tools, including detection systems to verify that a person is physically present, 3D depth sensing, multi-angle face scans, and voice authentication tools.
  • Instill a healthy dose of skepticism into the organization to enhance prior education and training so employees will be less susceptible to a deepfake scheme.

Deepfake technology is developing at a rapid rate. It is becoming easier to use and more effective for carrying out fraud schemes. The use of deepfakes by threat actors has exploded in the past year and is expected to increase exponentially, making incidents more complex and difficult to detect.

Identifying the risk that deepfakes pose to your organization, providing your employees with tools to identify, respond to, and mitigate a deepfake fraud scheme, and considering implementing detection and behavior monitoring tools will help prevent your organization from being victimized.

Linn Freedman is chair of the Data Privacy + Cybersecurity and AI Teams at Robinson+Cole, LLP. Freedman focuses her practice on compliance with all state and federal data privacy and security laws and regulations, as well as emergency data breach response, mitigation, and litigation.

Reprinted with permission from the March 13, 2026 edition of Corporate Counsel. © 2026 ALM Global Properties, LLC. All rights reserved. Further duplication without permission is prohibited; contact 877-256-2472 or asset-and-logo-licensing@alm.com.

March 12, 2026

Data Privacy + Cybersecurity Insider

News


April 28, 2026

Linn Freedman Reacts to New CT Rule Proposed as a Result of Generative AI Hallucinations

Data Privacy + Cybersecurity practice and Artificial Intelligence Team chair Linn Freedman discussed a recently proposed rule from the Rules Committee of the Connecticut Superior Court in the Law360 article, “Conn.’s Proposed AI Rule Not A Shock To Attys,” published on April 23, 2026. The proposed rule would put attorneys at risk if they fail to verify citations and evidence produced by artificial intelligence (AI). Linn said the rule is likely being put in writing because the court is concerned that lawyers continue to file hallucinations. “I think they’re reiterating to the entire bar what their obligations are, specifically with regard to the use of gen AI. I think it makes sense, because more and more lawyers are not aware, which I don’t understand at this point. They’re not aware of the consequences and the many examples of lawyers that have been sanctioned as a result of wasting the resources of both opposing counsel and the court when they are not providing accurate citations in cases.” Read the article.

Law360
April 23, 2026

Linn Freedman Urges Heightened Awareness for All Critical Infrastructure

The Bond Buyer
March 18, 2026

Linn Freedman Sounds the Alarm About the Growth of Deepfake Content

Corporate Counsel
February 25, 2026

Data Privacy + Cybersecurity Team Receives 2026 Readers' Choice Awards

JD Supra
February 19, 2026

Linn Freedman Receives Global Ranking in Chambers Global Guide 2026

Chambers & Partners
February 5, 2026

Linn Freedman Quoted in Cybersecurity Law Report on FTC Settlement

Cybersecurity Law Report
December 18, 2025

Business Transactions in Health Care Team Wins “Pharma & Devices Deal of the Year” at Global M&A Network’s 7th Annual USA Middle Markets M&A Atlas Awards Gala

Global M&A Network
November 26, 2025

Linn Freedman Discusses AI in Education

Law 401 Podcast
October 16, 2025

Linn Freedman Invited to Join Inaugural Cybersecurity Council at Rhode Island College




Events


Upcoming

AI at Work: Real-World Lessons Learned

May 19 2026
Boston Bar Association Legal Hour
Upcoming

Use of AI by Lawyers & Judges: Praise & Peril

Jun 11 2026
Rhode Island Bar Association 2026 Annual Meeting
Past

The Regulatory Roadmap for AI in Employment

Mar 27 2026
38th Annual Labor & Employment Law Conference
Past

State AI Laws and the Federal EO: Effective Dates, Scope, Enforcement, Compliance Planning

Jan 27 2026
Barbri Webinar
Past

A CISO’s Guide to the Legalities of AI

Jan 14 2026
Top of MIND Webinar
Past

Deepfakes: A Demonstration of How They are Made and Used by Threat Actors

Nov 19 2025
Boston Bar Association 2025 Privacy, Cybersecurity & Digital Law Conference

Data Privacy + Cybersecurity Insider


Below are excerpts from Data Privacy + Cybersecurity Insider blog posts authored by Linn.

ShinyHunters Target Medical Device Company Medtronic

Global medical device company Medtronic recently confirmed that it had been attacked by the threat actor group ShinyHunters. According to Bleeping Computer, Medtronic is “the largest medical device maker in the world by revenue ($33.5 billion) and also develops healthcare technologies and therapies.” ShinyHunters alleges that it has stolen over nine million Medtronic records containing... Continue Reading

Visit Blog

CISA Warning: Firestarter Malware Persists in Cisco Devices

The Cybersecurity and Infrastructure Security Agency (CISA) and the UK National Cyber Security Centre (NCSC) have confirmed that threat actors are using FIRESTARTER malware to maintain persistence on Cisco network devices, allowing the threat actors to maintain access even after patching and reboots.  FIRESTARTER malware targets Cisco Firepower and Secure Firewall devices running Adaptive Security... Continue Reading

Visit Blog

Privacy Tip #490 – Dating App Hijacks + Repurposes College Student’s TikTok Video

In the category of how technology can be fun, yet dangerous, a 19-year-old college student alleges that the dating app Meete took a video she innocently posted on TikTok of her high school graduation, then “overlayed it with graphics advertising the app, and added a voiceover to make it appear she was saying... Continue Reading

Visit Blog

Phishing Now Top Method for Initial Unauthorized Network Access

According to Cisco Talos researchers, phishing is the primary method threat actors use to gain unauthorized access to networks, accounting for more than one-third of all incidents in the first quarter of 2026. This increase is attributed to threat actors using legitimate AI tools to enhance phishing campaigns, particularly against the health care and government sectors.... Continue Reading

Visit Blog

Tempus AI Faces Class Action Cases for Collection of Genetic Information in Acquisition

Multiple class action cases have been filed against Tempus AI alleging that, during its acquisition of Ambry Genetics, the company improperly collected and disclosed genetic information without obtaining prior written consent from individuals. Tempus acquired Ambry, a genetic testing firm, in February 2025 for $600 million. The acquisition included the... Continue Reading

Visit Blog

Privacy Tip #489 – Social Media Scams #1 in 2025

The Federal Trade Commission (FTC) recently reported that, in 2025, social media scams were the costliest of all scams against consumers, with a whopping $2.1 billion lost. Thirty percent of those who reported losing funds in 2025 indicated that the scam started over social media. The number of 2025 scams beginning on social media increased... Continue Reading

Visit Blog

DOJ’s Big Win in North Korean IT Worker Fraud Scheme

On April 15, 2026, the Department of Justice (DOJ) announced that two U.S. nationals, Kejia Wang and Zhenxing Wang, were sentenced for facilitating a North Korean IT worker scheme that compromised over 80 U.S. identities, with sentences of 108 and 92 months respectively, supervised release, and forfeiture orders. The scheme involved the defendants operating “laptop... Continue Reading

Visit Blog