California Enacts Landmark AI Safety Law But With Very Narrow Applicability

On September 29, 2025, Governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act (TFAIA, SB 53 or the Act)[1], establishing a comprehensive framework for transparency, safety and accountability in the development and deployment of the most advanced artificial intelligence models. Building upon existing California laws targeting AI such as AB 2013[2], the Act, which takes effect January 1, 2026 and imposes penalties up to $1 million per violation, creates immediate compliance obligations for AI developers of the most powerful frontier models.

The path to TFAIA was paved by failure. TFAIA’s predecessor, SB 1047[3], overwhelmingly passed the legislature last year but was ultimately blocked at the Governor’s desk. In his veto statement, Governor Newsom called for an approach to frontier model regulation “informed by an empirical trajectory analysis of AI systems and capabilities,” criticizing SB 1047 for applying stringent standards to even the most basic functions[4]. TFAIA thus represents a strategic pivot to regulation focused only on the most impactful AI models: it eliminates the kill switch requirement (which would have mandated full shutdown capabilities for noncompliant systems), the rigid testing and auditing regime, and the aggressive 72-hour incident reporting timeline that doomed its predecessor.

TFAIA represents California’s attempt to balance the advancement of AI innovation and competition with accountability for responsible AI development. The Act aims to bolster public trust and increase awareness of AI-specific risks by requiring developers to think critically about frontier AI capabilities.

Scope and Thresholds

Scoped narrowly to target the most powerful models capable of significant and catastrophic impact, TFAIA imposes certain requirements on “frontier models,” defined as foundation models (i.e., general purpose models trained on broad data sets) trained using, or intended to be trained using, a quantity of computing power greater than 10^26 integer or floating-point operations.[5]  In particular, all “frontier developers” (persons that “trained or initiated the training” of frontier models) face baseline transparency requirements, with more burdensome obligations imposed on “large frontier developers” (frontier developers that, together with affiliates, had annual gross revenues above $500 million in the preceding year).
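For businesses running an initial screen against these definitions, the analysis reduces to two numeric tests: the training-compute trigger and the prior-year revenue trigger. The sketch below is a minimal, purely illustrative encoding of those tests; the function and variable names are our own (nothing in the Act prescribes them), and any real determination would require legal analysis of the underlying facts.

```python
# Illustrative only: screens a developer against the TFAIA definitions described above.
# Function and variable names are hypothetical; the numeric thresholds come from the Act.

FRONTIER_COMPUTE_THRESHOLD_FLOPS = 1e26              # "greater than 10^26" operations
LARGE_DEVELOPER_REVENUE_THRESHOLD_USD = 500_000_000  # prior-year gross revenue, with affiliates


def classify_developer(training_flops: float, annual_gross_revenue_usd: float) -> str:
    """Return a rough TFAIA status label for the developer of a single model."""
    if training_flops <= FRONTIER_COMPUTE_THRESHOLD_FLOPS:
        return "not a frontier developer (model below compute threshold)"
    if annual_gross_revenue_usd > LARGE_DEVELOPER_REVENUE_THRESHOLD_USD:
        return "large frontier developer (heightened obligations)"
    return "frontier developer (baseline transparency obligations)"


print(classify_developer(training_flops=3e26, annual_gross_revenue_usd=2_000_000_000))
# -> large frontier developer (heightened obligations)
```

Note that, per the Act, the compute figure includes both initial training and subsequent fine-tuning or modifications (see footnote [5] below).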

Tailoring its scope even further, TFAIA focuses many of its requirements on prevention of “catastrophic risk,” defined as a foreseeable and material risk that a frontier model could (1) materially contribute to the death of, or serious injury to, 50 or more people or (2) cause at least $1 billion in damage to property, in either case arising from a single incident in which the frontier model does any of the following: (a) provides expert-level assistance in creating or releasing a chemical, biological, radiological or nuclear weapon; (b) engages, without meaningful human intervention, in a cyberattack or in conduct that would constitute murder, assault, extortion or theft; or (c) evades the control of its frontier developer or user.

Key Compliance Provisions

TFAIA imposes certain requirements on all frontier developers, with heightened obligations on large frontier developers:

  1. Transparency Reports. At or before the time of deploying a frontier model (or a substantially modified version of an existing frontier model), frontier developers must publish a transparency report on their website. Reports, which under the Act can be embedded in model or system cards, must include (a) the website of the frontier developer, (b) model details (e.g., release date, languages supported, intended uses, modalities, restrictions) and (c) mechanisms by which a person can communicate with the frontier developer.[6] (A hypothetical record layout illustrating these elements appears after this list.)
    Large frontier developers must further (x) include summaries of assessments of catastrophic risks resulting from use of the frontier model, the results of such assessments, the role of any third-party evaluators and the steps taken to fulfill the requirements of the frontier AI framework (see below) and (y) transmit to the Office of Emergency Services reports of any assessments of catastrophic risk resulting from internal use of their frontier models every three months or pursuant to another reasonable schedule specified by the developer.  The Act tasks the Office of Emergency Services with establishing a mechanism by which large frontier developers can confidentially submit such assessment reports of catastrophic risk.
  2. Critical Safety Incident Reporting. Frontier developers are required to report “critical safety incidents”[7] to the Office of Emergency Services within 15 days of discovery.  To the extent a critical safety incident poses imminent risk of death or serious physical injury, the reporting window is shortened to 24 hours, with disclosure required to an appropriate authority based on the nature of the incident and as required by law.  Note that critical safety incidents pertaining to foundation models that do not qualify as frontier models are not required to be reported.  Importantly, TFAIA exempts the following reports from disclosure under the California Public Records Act: reports regarding critical safety incidents, reports of assessments of catastrophic risk and covered employee reports made pursuant to the whistleblower protections described below.
  3. Frontier AI Frameworks for Large Frontier Developers. In addition to the above, large frontier developers must publish an annual (or, upon making a material modification to its framework, within 30 days of such modification) frontier AI framework describing the technical and organizational protocols relied upon to manage and assess how catastrophic risks are identified, mitigated, and governed. The framework must include documentation of a developer’s alignment with national/international standards, governance structures, thresholds used to identify and assess the frontier model’s capabilities to pose a catastrophic risk, mitigation processes (including independent review of potential for catastrophic risks and effectiveness of mitigation processes) and cybersecurity practices and processes for identifying and responding to critical safety incidents.  Large frontier developers are prohibited from making false or misleading claims about catastrophic risks from their frontier models or their compliance with their published frontier AI framework.  Additionally, these developers are permitted to redact information necessary to protect trade secrets, cybersecurity, public safety or national security or as required by law as long as they maintain records of unredacted versions for a period of at least five years.
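As a rough aid for mapping the transparency-report elements in item 1 onto internal documentation, here is a hypothetical record layout. The field names are illustrative only; the Act prescribes content, not a schema, and the optional fields track the additional items required of large frontier developers.

```python
# Hypothetical internal record for tracking TFAIA transparency-report content.
# Field names are illustrative; the Act prescribes the content, not a schema.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class TransparencyReport:
    developer_website: str                      # (a) website of the frontier developer
    release_date: str                           # (b) model details
    languages_supported: list[str]
    intended_uses: str
    modalities: list[str]
    restrictions: str
    contact_mechanisms: list[str]               # (c) how a person can reach the developer
    # Additional elements required of large frontier developers:
    catastrophic_risk_assessment_summary: Optional[str] = None
    third_party_evaluator_roles: list[str] = field(default_factory=list)
    framework_compliance_steps: Optional[str] = None
```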

Other Notable Provisions

In addition to the requirements imposed on frontier developers, TFAIA resurrects CalCompute, a consortium first envisioned under SB 1047 and tasked with developing a framework for the creation of a public cloud computing cluster, which would provide access to advanced computing capabilities to support safe, equitable and sustainable AI development and deployment in the public interest.

TFAIA also enhances protections for whistleblowers by (1) prohibiting frontier developers from adopting rules that would prevent employees from reporting catastrophic risks and from retaliating against employees who report such risks, (2) requiring frontier developers to provide notice to their employees once a year of their rights as whistleblowers and (3) requiring large frontier developers to implement and maintain anonymous internal reporting channels. Notably, whistleblowers are empowered to bring civil actions for injunctive relief (as well as recovery of attorneys’ fees) against frontier developers for violations of their rights under the Act.

Enforcement and Rulemaking

Large frontier developers that fail to publish TFAIA-compliant reports or other documentation, make a false statement about catastrophic risk or their compliance with their frontier AI framework, fail to report a critical safety incident or fail to comply with their frontier AI framework could face penalties up to $1 million per violation, scaled to the severity of the offense. Such penalties can only be recovered by the Attorney General bringing a civil action. 

To ensure that the applicability of the TFAIA reflects technological change, the Act empowers the California Department of Technology—as opposed to the Attorney General as envisioned under SB 1047—to assess technological developments, research and international standards and recommend updates to key statutory definitions (of “frontier model,” “frontier developer” and “large frontier developer”) on or before January 1, 2027 and annually thereafter. 

Key Takeaways

With TFAIA, California provides a blueprint for regulation focused on the most impactful and powerful AI technology, establishing transparency, disclosure and governance requirements for frontier model developers.  A similar frontier-model bill, the Responsible AI Safety and Education (RAISE) Act, awaits the signature of Governor Hochul in New York.  Although TFAIA and RAISE have similar applicability and frameworks,[8] RAISE imposes stricter requirements (including a 72-hour window for reporting safety incidents) and higher penalties (up to $10 million for a first violation and $30 million for subsequent violations), more in line with the failed SB 1047.  TFAIA’s success in navigating gubernatorial approval—where SB 1047 failed—demonstrates the effectiveness of a transparency-first approach over prescriptive mandates: TFAIA largely focuses on disclosure requirements for covered models, whereas RAISE does not require transparency reporting to the same extent or include whistleblower protections, instead emphasizing enforcement through strict liability and an outright prohibition on models that create an unreasonable risk of critical harms.  This contrast suggests the RAISE Act may be subject to further narrowing, or even a veto, by Governor Hochul.

Most businesses, including the vast majority of AI developers, will be relieved that TFAIA has such narrow applicability.  For the few businesses that might meet TFAIA’s applicability thresholds, the law represents both immediate compliance obligations and a preview of the regulatory landscape to come. These businesses should:

  1. Conduct a threshold analysis to determine frontier developer or large frontier developer status
  2. Review existing AI safety practices against TFAIA requirements, particularly focusing on safety framework documentation and incident reporting capabilities
  3. Develop comprehensive frontier AI frameworks addressing the law’s required elements, including governance structures, risk assessment thresholds and cybersecurity practices
  4. Implement robust documentation systems to support transparency reporting requirements for model releases and modifications
  5. Create incident response procedures to identify and report critical safety incidents within required timelines (15-day standard, 24-hour emergency; an illustrative deadline sketch follows this list)
  6. Update whistleblower reporting mechanisms and ensure employees receive notice of their rights under the law
  7. Develop scalable compliance frameworks accommodating varying state requirements as other states, including New York, consider similar AI safety laws
  8. Consider voluntary adoption of TFAIA-style frameworks as industry best practices, even for companies below current thresholds
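As referenced in item 5, the two reporting windows translate into simple deadline arithmetic. The sketch below is illustrative only; the names are hypothetical, and whether a given incident actually triggers the shorter 24-hour window is a legal judgment, not a computation.

```python
# Illustrative deadline arithmetic for the two TFAIA reporting windows:
# 15 days from discovery by default, 24 hours where the incident poses an
# imminent risk of death or serious physical injury. Names are hypothetical.
from datetime import datetime, timedelta


def reporting_deadline(discovered_at: datetime, imminent_physical_risk: bool) -> datetime:
    """Return the latest time a critical safety incident report would be due."""
    window = timedelta(hours=24) if imminent_physical_risk else timedelta(days=15)
    return discovered_at + window


discovery = datetime(2026, 2, 3, 9, 30)
print(reporting_deadline(discovery, imminent_physical_risk=False))  # 2026-02-18 09:30:00
print(reporting_deadline(discovery, imminent_physical_risk=True))   # 2026-02-04 09:30:00
```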

[1] The text of the Act can be found here.

[2] AB 2013 requires developers of generative AI systems to post documentation on their website describing the dataset(s) used for system training.

[3] The text of SB 1047 can be found here.

[4] https://www.gov.ca.gov/wp-content/uploads/2024/09/SB-1047-Veto-Message.pdf.

[5] The computing power minimum includes computing from both initial training and subsequent fine-tuning or modifications.

[6] Notably, frontier developers can redact portions of their transparency reports to protect trade secrets and guard against cybersecurity or public safety threats; however, any such redactions must be justified within the report, which must be maintained for five years.

[7] The Act defines a “critical safety incident” to mean any of the following: (1) unauthorized access to, modification of, or exfiltration of, the model weights of a frontier model that results in death or bodily injury; (2) harm resulting from the materialization of a catastrophic risk; (3) loss of control of a frontier model causing death or bodily injury; or (4) a frontier model that uses deceptive techniques against the frontier developer to subvert the controls or monitoring of its frontier developer outside the context of an evaluation designed to elicit this behavior and in a manner that demonstrates materially increased catastrophic risk.

[8] Unlike TFAIA, RAISE applies only to “large developers,” defined as persons that have (1) trained at least one frontier model and (2) spent over $100 million in aggregate compute costs in training frontier models.

Interpol Says 260 Suspects in Online Romance Scams Have Been Arrested in Africa

26 September 2025 at 10:08

The operation took place in July and August and focused on scams in which perpetrators build online romantic relationships to extract money from targets or blackmail them with explicit images, Interpol said.

Scattered Spider Suspect Arrested in US

23 September 2025 at 05:25

The juvenile suspect surrendered on September 17 and was booked on computer intrusion, extortion, and identity theft charges.

FBI Warns of Spoofed IC3 Website

22 September 2025 at 05:43

Threat actors likely spoofed the official government website for personal information theft and monetary fraudulent activity.

Enforcement Countdown: Is DOJ Ready for the Bulk Data Rule “Grace Period” to End?

As of July 8, the U.S. Department of Justice (“DOJ”) is scheduled to begin full enforcement of its Data Security Program (“DSP”) and the recently issued Bulk Data Rule after its 90-day limited enforcement policy expires, ushering in “full compliance” requirements for U.S. companies and individuals.[1] 

Although it remains to be seen whether DOJ’s National Security Division (“NSD”) will have the necessary infrastructure and personnel in place to launch comprehensive investigations to enforce such an expansive regulatory program, companies should be wary of waiting to verify the NSD’s operational readiness.  Instead, companies should bear in mind certain considerations, discussed below, when approaching this new and uncertain enforcement frontier.

The DSP is a brand new regulatory framework based on the Bulk Data Rule that imposes restrictions designed to prevent certain countries—China, Cuba, Iran, North Korea, Russia, and Venezuela—and covered persons from accessing Americans’ bulk sensitive personal data and U.S. government-related data.[2]  Violations of the Rule are subject to steep penalties.  Pursuant to the DSP and the International Emergency Economic Powers Act (“IEEPA”), DOJ is authorized to bring not only civil enforcement actions, but also criminal prosecutions for willful violations of the DSP’s requirements.  Civil penalties may reach up to the greater of $368,136 or twice the value of each violative transaction, while willful violations are punishable by up to 20 years imprisonment and a $1,000,000 fine.[3]
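Because civil exposure turns on a “greater of” formula, a short worked illustration may be useful. The snippet below simply applies the two figures quoted above to hypothetical transaction values; it is not a prediction of how DOJ would assess any particular penalty.

```python
# Illustrative application of the civil penalty formula quoted above:
# the greater of $368,136 or twice the value of each violative transaction.
STATUTORY_FIGURE_USD = 368_136  # inflation-adjusted amount cited in DOJ's enforcement policy


def max_civil_penalty(transaction_value_usd: float) -> float:
    """Return the maximum civil penalty for a single violative transaction."""
    return max(STATUTORY_FIGURE_USD, 2 * transaction_value_usd)


print(max_civil_penalty(100_000))    # 368136 -> the flat figure controls for smaller transactions
print(max_civil_penalty(5_000_000))  # 10000000.0 -> twice the transaction value controls
```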

Although the DSP largely went into effect on April 8, 2025, DOJ instituted a 90-day limited enforcement period.  During this period, NSD stated it would deprioritize civil enforcement actions for companies and individuals making a “good-faith effort” to come into compliance with the DSP.  This grace period comes to an end on July 8, 2025.  As detailed below, this broad grant of investigative and enforcement authority—especially the potential for both civil and criminal liability—creates a number of potential logistical and legal challenges for DOJ.

Investigation and Enforcement Challenges

Enforcement of the DSP falls to the NSD, and more specifically to a small, specialized section named the Foreign Investment Review Section (“FIRS”).  Historically, FIRS comprised approximately 10 to 20 attorneys with a niche portfolio of responsibilities that included representing DOJ on the Committee on Foreign Investment in the United States and on Team Telecom.  With this portfolio, FIRS generally enjoyed a comparatively lower profile than other sections within the Department, leaving most federal prosecutors and criminal defense attorneys unfamiliar with its activities.

However, that all could change in the near future given that FIRS has been tasked with implementing and enforcing an entirely new regulatory and enforcement regime.  Going forward, FIRS – a section traditionally without litigators or a litigating function – will have both civil and criminal authority to investigate, bring enforcement actions, and prosecute violations of the Rule. 

Complications Associated with Adding Criminal Prosecutors to FIRS

The availability of criminal penalties under the DSP will require a number of changes at FIRS.  Notably, unlike other NSD sections, FIRS’s work did not previously include criminal prosecutions; instead, the section maintained a regulatory focus.[4]

Given FIRS’s lack of experience with criminal cases, FIRS must now decide how it will staff enforcement matters going forward, including whether to hire federal prosecutors directly or to instead coordinate with U.S. Attorneys’ Offices or other sections of NSD in connection with criminal investigations and prosecutions.  It seems likely that NSD would consider staffing up FIRS in anticipation of its dual criminal and civil enforcement authority under the DSP.  But the introduction of criminal prosecutors into the same small section as civil regulators opens up potential risks in terms of parallel civil and criminal investigations:

  1. Due Process Considerations: While DOJ often conducts parallel criminal and civil investigations, such coordination is subject to limitations imposed by the Due Process Clause of the Fifth Amendment.[5]  In United States v. Kordel, the Supreme Court suggested that the Government may be found to have acted in bad faith in violation of the Fifth Amendment by bringing “a civil action solely to obtain evidence for its criminal prosecution” or by “fail[ing] to advise the defendant in its civil proceedings that it contemplates his criminal prosecution.”[6]  Lower courts have “occasionally suppressed evidence or dismissed indictments on due process grounds where the government made affirmative misrepresentations or conducted a civil investigation solely for purposes of advancing a criminal case.”[7]  In order to avoid such consequences, FIRS will have to ensure that any cooperation or coordination in parallel civil and criminal investigations of DSP violations complies with Due Process requirements.
  2. DOJ Internal Policy Limitations: In addition to Due Process requirements, internal DOJ guidance places guardrails around parallel or joint civil and criminal investigations.  Section 1-12.00 of the Justice Manual notes that “when conducted properly,” parallel investigations can “serve the best interests of law enforcement and the public.”[8]  However, the same section goes on to warn DOJ attorneys that “parallel proceedings must be handled carefully in order to avoid allegations of . . . abuse of civil process.”[9]  Section 1-12.100 addresses parallel or joint corporate investigations and similarly emphasizes that DOJ attorneys “should remain mindful of their ethical obligations not to use criminal enforcement authority unfairly to extract, or to attempt to extract, additional civil or administrative monetary payments.”[10]
  3. Maintaining the Secrecy of Rule 6(e) Grand Jury Materials: Finally, FIRS will need to implement precautions to ensure that its civil enforcement attorneys are walled off from the disclosure of materials covered by Federal Rule of Criminal Procedure 6(e).  Rule 6(e) establishes a general rule of secrecy for grand jury materials with limited exceptions.  Although Rule 6(e)(3)(A)(i) permits disclosure “to an attorney for the government for use in the performance of such attorney’s duty,” civil enforcement attorneys within FIRS could only view Rule 6(e) materials if they obtain a court order.[11]  Moreover, pursuant to DOJ guidance, even when disclosure is authorized for use in civil proceedings, it is considered a “better practice to forestall the disclosure until the criminal investigation is complete,” given the potential “danger of misuse, or the appearance thereof.”[12]  Given that none of the exceptions under Rule 6(e) appear readily applicable, criminal attorneys within FIRS will have to take particular precautions to ensure that grand jury material covered under Rule 6(e) is not disclosed to their civil colleagues.

Following July 8, as we wait to see whether FIRS initiates investigations and enforcement actions under the DSP, it will need to address the above limitations and potential pitfalls that come with parallel civil and criminal proceedings.  This will be especially important given the relatively small size of FIRS, its historic regulatory focus, and the addition of criminal prosecutors and criminal enforcement authority as it tries to administer an entirely new regulatory and enforcement regime.

Limited Investigative Resources

In addition to potential concerns associated with criminal enforcement of the DSP, there is also uncertainty about how FIRS will investigate potential violations.  Unlike traditional sanctions and export control enforcement, which relies on the Department of Treasury’s Office of Foreign Assets Control and the Department of Commerce’s Bureau of Industry and Security, respectively, it is unclear what, if any, dedicated investigative resources or interagency cooperation FIRS will have at its disposal.  While federal prosecutors typically investigate alongside agents from the Federal Bureau of Investigation and Homeland Security Investigations, such investigative resources historically were not allocated to FIRS, and it is unclear which federal investigating agency – if any – has been tasked with leading these investigations.  This raises questions about FIRS’s capacity to effectively investigate and bring enforcement actions for potential violations.

One option that could be considered is to have FIRS limit its role to civil enforcement and – to the extent it comes across potential criminal conduct – make criminal referrals to either (i) the appropriate United States Attorney’s Office, all of which have federal prosecutors who have been trained in national security investigations and have routine access to a grand jury, or (ii) NSD’s Counterintelligence and Export Control Section, which currently includes federal prosecutors that specialize in investigating criminal violations of sanctions and export control laws.

Alternatively, the Federal Trade Commission (“FTC”) could also provide investigative support regarding potential violations under the DSP given its enforcement authority under a related law: the Protecting Americans’ Data from Foreign Adversaries Act (“PADFA”).  The FTC has enforcement authority under PADFA to seek civil penalties but is first required to refer the matter to the DOJ.[13]  Given the potential overlap between the DSP and PADFA, the FTC may be particularly well-situated to investigate and refer cases of DSP violations to FIRS.

Seventh Amendment Implications: The Jarkesy Challenge

As noted above, the DOJ has broad authority to pursue both civil penalties and criminal prosecutions for non-compliance with the Bulk Data Rule under the DSP, but just how the DOJ plans to pursue civil penalties for violations is also unclear.  Specifically, to the extent the DOJ seeks to impose penalties in a way that implicates administrative proceedings, it is likely to face challenges following the Supreme Court’s decision in SEC v. Jarkesy.[14]  In Jarkesy, the Supreme Court held that the Seventh Amendment entitles a defendant to a jury trial when the SEC seeks civil penalties for securities fraud,[15] thereby limiting the SEC’s ability to adjudicate cases for civil penalties through its administrative proceedings.

Jarkesy’s reasoning regarding the Seventh Amendment’s application to actions seeking civil penalties could impact the DSP’s enforcement framework.[16]  Similar to the civil penalties at issue in Jarkesy, civil penalties imposed under the DSP and IEEPA serve to punish violations and deter future misconduct, as opposed to compensate victims.[17]  However, unlike antifraud provisions, the DSP arguably lacks clear common law analogies, and it is possible that the DSP and IEEPA could be viewed under the “public rights” exception given the links to national security.[18]

Going forward, Jarkesy is expected to affect how other federal agencies conduct enforcement actions seeking civil penalties.  The DOJ will have to consider these implications as it decides on an enforcement framework for imposing civil penalties for DSP violations.

Conclusion

The DSP represents the U.S.’s first data localization requirement ripe for enforcement, but its implementation faces substantial practical challenges that may hinder DOJ’s ability to take wide-ranging or swift action.  As companies work to ensure their activities are in compliance with the DSP and the Bulk Data Rule ahead of July 8, many are left wondering whether the DOJ will be ready to begin investigating and enforcing this Rule given its breadth and the clear potential challenges that lie ahead.  While we await DOJ’s next steps toward enforcement, companies should be prepared to document their good-faith efforts to demonstrate compliance with the DSP and the Rule, which may help head off early investigations and enforcement actions.  Additionally, as emphasized by the DOJ’s non-binding Compliance Guidance,[19] companies that proactively implement compliance programs will be better positioned to respond and adapt to this uncertain enforcement environment.


[1] U.S. Dep’t of Just., Nat’l Sec. Div., Data Security Program: Implementation and Enforcement Policy Through July 8, 2025 (Apr. 11, 2025), https://www.justice.gov/opa/media/1396346/dl?inline [hereinafter Enforcement Policy].

[2] Our prior alert memorandum on the DSP is available here, and our alert on DOJ’s 90-day limited enforcement policy of the DSP is available here.

[3] Enforcement Policy, at 1.

[4] U.S. Dep’t of Just., Nat’l Sec. Div., NSD Organizational Chart (June 16, 2023), https://www.justice.gov/nsd/national-security-division-organization-chart

[5] See, e.g., United States v. Stringer, 535 F.3d 929, 933 (9th Cir. 2008) (“There is nothing improper about the government undertaking simultaneous criminal and civil investigations.”).

[6] See United States v. Kordel, 397 U.S. 1, 11 (1970) (holding that the Government did not violate due process when it used evidence from a routine FDA civil investigation to convict defendants of criminal misbranding given that the agency made similar requests for information in 75% of civil cases and there was no suggestion the Government brought the civil case solely to obtain evidence for the criminal prosecution).

[7] Stringer, 535 F.3d at 940 (collecting cases).

[8] Justice Manual 1-12.00 – Coordination of Parallel Criminal, Civil, Regulatory, and Administrative Proceedings (May 2018), https://www.justice.gov/jm/jm-1-12000-coordination-parallel-criminal-civil-regulatory-and-administrative-proceedings

[9] Id.

[10] Justice Manual 1-12.100 – Coordination of Corporate Resolution Penalties in Parallel and/or Joint Investigations and Proceedings Arising from the Same Misconduct (May 2018), https://www.justice.gov/jm/jm-1-12000-coordination-parallel-criminal-civil-regulatory-and-administrative-proceedings

[11] See United States v. Sells Eng’g, Inc., 463 U.S. 418, 427 (1983) (rejecting the argument that all attorneys within the DOJ’s civil division are covered under (A)(i), and instead holding that “(A)(i) disclosure is limited to use by those attorneys who conduct the criminal matters to which the materials pertain”).

[12] U.S. Dep’t of Just., Crim. Resource Manual, 156. Disclosure of Matters Occurring Before the Grand Jury to Department of Justice Attorneys and Assistant United States Attorneys (Oct. 2012), https://www.justice.gov/archives/jm/criminal-resource-manual-156-disclosure-matters-occurring-grand-jury-department-justice-attys

[13] A violation of PADFA is treated as a violation of an FTC rule pursuant to 15 U.S.C. § 57a(a)(1)(B).

[14] 603 U.S. 109 (2024).

[15] Id. at 140.

[16] The Court in Jarkesy also established a two-part test for determining whether a cause of action implicates the Seventh Amendment.  First, courts must determine whether the cause of action is “legal in nature” and whether the remedy sought is traditionally obtained in courts of law.  Id. at 121–27.  If legal in nature, courts must then assess whether the “public rights” exception permits congressional assignment of adjudication to an agency.  Id. at 127–34.

[17] Id. at 121–27.

[18] Id. at 135.

[19] U.S. Dep’t of Just., Nat’l Sec. Div., Data Security Program: Compliance Guide (Apr. 11, 2025), https://www.justice.gov/opa/media/1396356/dl

How Each Pillar of the 1st Amendment is Under Attack

30 March 2025 at 21:22

“Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.” -U.S. Constitution, First Amendment.

Image: Shutterstock, zimmytws.

In an address to Congress this month, President Trump claimed he had “brought free speech back to America.” But barely two months into his second term, the president has waged an unprecedented attack on the First Amendment rights of journalists, students, universities, government workers, lawyers and judges.

This story explores a slew of recent actions by the Trump administration that threaten to undermine all five pillars of the First Amendment to the U.S. Constitution, which guarantees freedoms concerning speech, religion, the media, the right to assembly, and the right to petition the government and seek redress for wrongs.

THE RIGHT TO PETITION

The right to petition allows citizens to communicate with the government, whether to complain, request action, or share viewpoints — without fear of reprisal. But that right is being assaulted by this administration on multiple levels. For starters, many GOP lawmakers are now heeding their leadership’s advice to stay away from local town hall meetings and avoid the wrath of constituents affected by the administration’s many federal budget and workforce cuts.

Another example: President Trump recently fired most of the people involved in processing Freedom of Information Act (FOIA) requests for government agencies. FOIA is an indispensable tool used by journalists and the public to request government records, and to hold leaders accountable.

The biggest story by far this week was the bombshell from The Atlantic editor Jeffrey Goldberg, who recounted how he was inadvertently added to a Signal group chat with National Security Advisor Michael Waltz and 16 other Trump administration officials discussing plans for an upcoming attack on Yemen.

One overlooked aspect of Goldberg’s incredible account is that by planning and coordinating the attack on Signal — which features messages that can auto-delete after a short time — administration officials were evidently seeking a way to avoid creating a lasting (and potentially FOIA-able) record of their deliberations.

“Intentional or not, use of Signal in this context was an act of erasure—because without Jeffrey Goldberg being accidentally added to the list, the general public would never have any record of these communications or any way to know they even occurred,” Tony Bradley wrote this week at Forbes.

Petitioning the government, particularly when it ignores your requests, often requires challenging federal agencies in court. But that becomes far more difficult if the most competent law firms start to shy away from cases that may involve crossing the president and his administration.

On March 22, the president issued a memorandum that directs heads of the Justice and Homeland Security Departments to “seek sanctions against attorneys and law firms who engage in frivolous, unreasonable and vexatious litigation against the United States,” or in matters that come before federal agencies.

The POTUS recently issued several executive orders railing against specific law firms with attorneys who worked legal cases against him. On Friday, the president announced that the law firm of Skadden, Arps, Slate, Meagher & Flom had agreed to provide $100 million in pro bono work on issues that he supports.

Trump issued another order naming the firm Paul, Weiss, Rifkind, Wharton & Garrison, which ultimately agreed to pledge $40 million in pro bono legal services to the president’s causes.

Other Trump executive orders targeted law firms Jenner & Block and WilmerHale, both of which have attorneys that worked with special counsel Robert Mueller on the investigation into Russian interference in the 2016 election. But this week, two federal judges in separate rulings froze parts of those orders.

“There is no doubt this retaliatory action chills speech and legal advocacy, and that is qualified as a constitutional harm,” wrote Judge Richard Leon, who ruled against the executive order targeting WilmerHale.

President Trump recently took the extraordinary step of calling for the impeachment of federal judges who rule against the administration. Trump called U.S. District Judge James Boasberg a “Radical Left Lunatic” and urged he be removed from office for blocking deportation of Venezuelan alleged gang members under a rarely invoked wartime legal authority.

In a rare public rebuke to a sitting president, U.S. Supreme Court Chief Justice John Roberts issued a statement on March 18 pointing out that “For more than two centuries, it has been established that impeachment is not an appropriate response to disagreement concerning a judicial decision.”

The U.S. Constitution provides that judges can be removed from office only through impeachment by the House of Representatives and conviction by the Senate. The Constitution also states that judges’ salaries cannot be reduced while they are in office.

Undeterred, House Speaker Mike Johnson this week suggested the administration could still use the power of its purse to keep courts in line, and even floated the idea of wholesale eliminating federal courts.

“We do have authority over the federal courts as you know,” Johnson said. “We can eliminate an entire district court. We have power of funding over the courts, and all these other things. But desperate times call for desperate measures, and Congress is going to act, so stay tuned for that.”

FREEDOM OF ASSEMBLY

President Trump has taken a number of actions to discourage lawful demonstrations at universities and colleges across the country, threatening to cut federal funding for any college that supports protests he deems “illegal.”

A Trump executive order in January outlined a broad federal crackdown on what he called “the explosion of antisemitism” on U.S. college campuses. This administration has asserted that foreign students who are lawfully in the United States on visas do not enjoy the same free speech or due process rights as citizens.

Reuters reports that the acting civil rights director at the Department of Education on March 10 sent letters to 60 educational institutions warning they could lose federal funding if they don’t do more to combat anti-semitism. On March 20, Trump issued an order calling for the closure of the Education Department.

Meanwhile, U.S. Immigration and Customs Enforcement (ICE) agents have been detaining and trying to deport pro-Palestinian students who are legally in the United States. The administration is targeting students and academics who spoke out against Israel’s attacks on Gaza, or who were active in campus protests against U.S. support for the attacks. Secretary of State Marco Rubio told reporters Thursday that at least 300 foreign students have seen their visas revoked under President Trump, a far higher number than was previously known.

In his first term, Trump threatened to use the national guard or the U.S. military to deal with protesters, and in campaigning for re-election he promised to revisit the idea.

“I think the bigger problem is the enemy from within,” Trump told Fox News in October 2024. “We have some very bad people. We have some sick people, radical left lunatics. And I think they’re the big — and it should be very easily handled by, if necessary, by National Guard, or if really necessary, by the military, because they can’t let that happen.”

This term, Trump acted swiftly to remove the top judicial advocates in the armed forces who would almost certainly push back on any request by the president to use U.S. soldiers in an effort to quell public protests, or to arrest and detain immigrants. In late February, the president and Defense Secretary Pete Hegseth fired the top legal officers for the military services — those responsible for ensuring the Uniform Code of Military Justice is followed by commanders.

Military.com warns that the purge “sets an alarming precedent for a crucial job in the military, as President Donald Trump has mused about using the military in unorthodox and potentially illegal ways.” Hegseth told reporters the removals were necessary because he didn’t want them to pose any “roadblocks to orders that are given by a commander in chief.”

FREEDOM OF THE PRESS

President Trump has sued a number of U.S. news outlets, including 60 Minutes, CNN, The Washington Post, The New York Times and other smaller media organizations for unflattering coverage.

In a $10 billion lawsuit against 60 Minutes and its parent Paramount, Trump claims they selectively edited an interview with former Vice President Kamala Harris prior to the 2024 election. The TV news show last month published transcripts of the interview at the heart of the dispute, but Paramount is reportedly considering a settlement to avoid potentially damaging its chances of winning the administration’s approval for a pending multibillion-dollar merger.

The president sued The Des Moines Register and its parent company, Gannett, for publishing a poll showing Trump trailing Harris in the 2024 presidential election in Iowa (a state that went for Trump). The POTUS also is suing the Pulitzer Prize board over 2018 awards given to The New York Times and The Washington Post for their coverage of purported Russian interference in the 2016 election.

Whether or not any of the president’s lawsuits against news organizations have merit or succeed is almost beside the point. The strategy behind suing the media is to make reporters and newsrooms think twice about criticizing or challenging the president and his administration. The president also knows some media outlets will find it more expedient to settle.

Trump also sued ABC News and George Stephanopoulos for stating that the president had been found liable for “rape” in a civil case [Trump was found liable for sexually abusing and defaming E. Jean Carroll]. ABC parent Disney settled that claim by agreeing to donate $15 million to the Trump Presidential Library.

Following the attack on the U.S. Capitol on Jan. 6, 2021, Facebook blocked President Trump’s account. Trump sued Meta, and after the president’s victory in 2024 Meta settled and agreed to pay Trump $25 million: $22 million would go to his presidential library, and the rest to legal fees. Meta CEO Mark Zuckerberg also announced Facebook and Instagram would get rid of fact-checkers and rely instead on reader-submitted “community notes” to debunk disinformation on the social media platform.

Brendan Carr, the president’s pick to run the Federal Communications Commission (FCC), has pledged to “dismantle the censorship cartel and restore free speech rights for everyday Americans.” But on January 22, 2025, the FCC reopened complaints against ABC, CBS and NBC over their coverage of the 2024 election. The previous FCC chair had dismissed the complaints as attacks on the First Amendment and an attempt to weaponize the agency for political purposes.

According to Reuters, the complaints call for an investigation into how ABC News moderated the pre-election TV debate between Trump and Harris, and appearances of then-Vice President Harris on 60 Minutes and on NBC’s “Saturday Night Live.”

Since then, the FCC has opened investigations into NPR and PBS, alleging that they are breaking sponsorship rules. The Center for Democracy & Technology (CDT), a think tank based in Washington, D.C., noted that the FCC is also investigating KCBS in San Francisco for reporting on the location of federal immigration authorities.

“Even if these investigations are ultimately closed without action, the mere fact of opening them – and the implicit threat to the news stations’ license to operate – can have the effect of deterring the press from news coverage that the Administration dislikes,” the CDT’s Kate Ruane observed.

Trump has repeatedly threatened to “open up” libel laws, with the goal of making it easier to sue media organizations for unfavorable coverage. But this week, the U.S. Supreme Court declined to hear a challenge brought by Trump donor and Las Vegas casino magnate Steve Wynn to overturn the landmark 1964 decision in New York Times v. Sullivan, which insulates the press from libel suits over good-faith criticism of public figures.

The president also has insisted on picking which reporters and news outlets should be allowed to cover White House events and participate in the press pool that trails the president. He barred the Associated Press from the White House and Air Force One over their refusal to call the Gulf of Mexico by another name.

And the Defense Department has ordered a number of top media outlets to vacate their spots at the Pentagon, including CNN, The Hill, The Washington Post, The New York Times, NBC News, Politico and National Public Radio.

“Incoming media outlets include the New York Post, Breitbart, the Washington Examiner, the Free Press, the Daily Caller, Newsmax, the Huffington Post and One America News Network, most of whom are seen as conservative or favoring Republican President Donald Trump,” Reuters reported.

FREEDOM OF SPEECH

Shortly after Trump took office again in January 2025, the administration began circulating lists of hundreds of words that government staff and agencies shall not use in their reports and communications.

The Brookings Institution notes that in moving to comply with this anti-speech directive, federal agencies have purged countless taxpayer-funded data sets from a swathe of government websites, including data on crime, sexual orientation, gender, education, climate, and global development.

The New York Times reports that in the past two months, hundreds of terabytes of digital resources analyzing data have been taken off government websites.

“While in many cases the underlying data still exists, the tools that make it possible for the public and researchers to use that data have been removed,” The Times wrote.

On Jan. 27, Trump issued a memo (PDF) that paused all federally funded programs pending a review of those programs for alignment with the administration’s priorities. Among those was ensuring that no funding goes toward advancing “Marxist equity, transgenderism, and green new deal social engineering policies.”

According to the CDT, this order is a blatant attempt to force government grantees to cease engaging in speech that the current administration dislikes, including speech about the benefits of diversity, climate change, and LGBTQ issues.

“The First Amendment does not permit the government to discriminate against grantees because it does not like some of the viewpoints they espouse,” the CDT’s Ruane wrote. “Indeed, those groups that are challenging the constitutionality of the order argued as much in their complaint, and have won an injunction blocking its implementation.”

On January 20, the same day Trump issued an executive order on free speech, the president also issued an executive order titled “Reevaluating and Realigning United States Foreign Aid,” which froze funding for programs run by the U.S. Agency for International Development (USAID). Among those were programs designed to empower civil society and human rights groups, journalists and others responding to digital repression and Internet shutdowns.

According to the Electronic Frontier Foundation (EFF), this includes many freedom technologies that use cryptography, fight censorship, protect freedom of speech, privacy and anonymity for millions of people around the world.

“While the State Department has issued some limited waivers, so far those waivers do not seem to cover the open source internet freedom technologies,” the EFF wrote about the USAID disruptions. “As a result, many of these projects have to stop or severely curtail their work, lay off talented workers, and stop or slow further development.”

On March 14, the president signed another executive order that effectively gutted the U.S. Agency for Global Media (USAGM), which oversees or funds media outlets including Radio Free Europe/Radio Liberty and Voice of America (VOA). The USAGM also oversees Radio Free Asia, which supporters say has been one of the most reliable tools used by the government to combat Chinese propaganda.

But this week, U.S. District Court Judge Royce Lamberth, a Reagan appointee, temporarily blocked USAGM’s closure by the administration.

“RFE/RL has, for decades, operated as one of the organizations that Congress has statutorily designated to carry out this policy,” Lamberth wrote in a 10-page opinion. “The leadership of USAGM cannot, with one sentence of reasoning offering virtually no explanation, force RFE/RL to shut down — even if the President has told them to do so.”

FREEDOM OF RELIGION

The Trump administration rescinded a decades-old policy that instructed officers not to take immigration enforcement actions in or near “sensitive” or “protected” places, such as churches, schools, and hospitals.

That directive was immediately challenged in a case brought by a group of Quakers, Baptists and Sikhs, who argued the policy reversal was keeping people from attending services for fear of being arrested on civil immigration violations. On Feb. 24, a federal judge agreed and blocked ICE agents from entering churches or targeting migrants nearby.

The president’s executive order allegedly addressing antisemitism came with a fact sheet that described college campuses as “infested” with “terrorists” and “jihadists.” Multiple faith groups expressed alarm over the order, saying it attempts to weaponize antisemitism and promote “dehumanizing anti-immigrant policies.”

The president also announced the creation of a “Task Force to Eradicate Anti-Christian Bias,” to be led by Attorney General Pam Bondi. Never mind that Christianity is easily the largest faith in America and that Christians are well-represented in Congress.

The Rev. Paul Brandeis Raushenbush, a Baptist minister and head of the progressive Interfaith Alliance, issued a statement accusing Trump of hypocrisy in claiming to champion religion by creating the task force.

“From allowing immigration raids in churches, to targeting faith-based charities, to suppressing religious diversity, the Trump Administration’s aggressive government overreach is infringing on religious freedom in a way we haven’t seen for generations,” Raushenbush said.

A statement from Americans United for Separation of Church and State said the task force could lead to religious persecution of those with other faiths.

“Rather than protecting religious beliefs, this task force will misuse religious freedom to justify bigotry, discrimination, and the subversion of our civil rights laws,” said Rachel Laser, the group’s president and CEO.

Where is President Trump going with all these blatant attacks on the First Amendment? The president has made no secret of his affection for autocratic leaders and “strongmen” around the world, and he is particularly enamored with Hungary’s far-right Prime Minister Viktor Orbán, who has visited Trump’s Mar-a-Lago resort twice in the past year.

A March 15 essay in The Atlantic by Hungarian investigative journalist András Pethő recounts how Orbán rose to power by consolidating control over the courts, and by building his own media universe while simultaneously placing a stranglehold on the independent press.

“As I watch from afar what’s happening to the free press in the United States during the first weeks of Trump’s second presidency — the verbal bullying, the legal harassment, the buckling by media owners in the face of threats — it all looks very familiar,” Pethő wrote. “The MAGA authorities have learned Orbán’s lessons well.”

CPPA Enforcement Action Against Honda Underscores Need for CCPA Compliant Privacy Practices

On March 12, the California Privacy Protection Agency (“CPPA”) announced an enforcement action against American Honda Motor Co. (“Honda”), with a $632,500 fine for violating the California Consumer Privacy Act and its implementing regulations (“CCPA”).[1]  This action, the CPPA’s first against a business that is not a data broker, arose in connection with the Enforcement Division’s ongoing investigative sweep of connected vehicle manufacturers and related technologies, and serves as a cautionary tale for companies handling consumer personal information, highlighting the stringent requirements of the CCPA and the consequences of non-compliance.

Alleged CCPA Violations

In connection with its review of Honda’s data privacy practices, the CPPA’s Enforcement Division concluded that Honda violated the CCPA’s requirements by:

  1. Placing an undue burden on consumers, requiring Californians to verify their identity and provide excessive personal information to exercise certain privacy rights, such as the right to opt-out of sale or sharing and the right to limit;
  2. Making it difficult for Californians to authorize other individuals or organizations (known as “authorized agents”) to exercise their privacy rights;
  3. Employing dark patterns, by using an online privacy management tool that failed to offer Californians their privacy choices in a symmetrical or equal way; and
  4. Sharing consumers’ personal information with ad tech companies without contracts that contain the necessary terms to protect privacy.

Below, we summarize the conduct giving rise to the alleged violations, and provide practical tips for businesses to consider for implementation.

1. Undue Burden on Requests to Opt-Out of Sale/Sharing and Requests to Limit

According to the Stipulated Final Order, Honda provided consumers with the same webform to submit all of their CCPA privacy rights requests irrespective of whether the requests required identity verification or not, in violation of the CCPA. Specifically, the CCPA distinguishes between privacy rights that permit a business to conduct prior identity verification (e.g., rights to know/access, correct and delete) and those that do not (e.g., rights to opt-out of data sales or “sharing” and to limit the use and disclosure of sensitive personal information), meaning businesses are prohibited from requiring consumers to verify their identities before actioning opt-out or limitation requests.[2] 
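The distinction the CCPA draws can be thought of as a routing rule: verification may gate know/access, correct and delete requests, but never opt-out or limitation requests. The sketch below encodes that rule with hypothetical names; it is not Honda’s system nor a design endorsed by the CPPA.

```python
# Illustrative routing of CCPA consumer requests. Verification may precede
# requests to know/access, correct or delete, but requests to opt out of
# sale/sharing or to limit sensitive-data use must be honored without it.
# Request-type labels and function names are hypothetical.
VERIFIABLE = {"know", "access", "correct", "delete"}
NON_VERIFIABLE = {"opt_out_sale_sharing", "limit_sensitive_pi"}


def route_request(request_type: str, identity_verified: bool) -> str:
    if request_type in NON_VERIFIABLE:
        return "process without identity verification"
    if request_type in VERIFIABLE:
        return "process" if identity_verified else "collect only the information needed to verify"
    return "unrecognized request type"


print(route_request("opt_out_sale_sharing", identity_verified=False))  # process without identity verification
print(route_request("delete", identity_verified=False))                # collect only the information needed to verify
```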

In reviewing Honda’s practices, the CPPA found that, by using the same webform for all privacy rights requests and in turn requiring personal information to be provided before honoring opt-out and limitation requests, Honda imposed an unlawful verification standard on California consumers.  In addition, the CPPA found that the webform required consumers to provide more information than necessary[3] for Honda to verify requests to access, delete and change their data.  Accordingly, the CPPA found that Honda’s webform was unduly burdensome, interfering with the ability of consumers to exercise their rights and thereby violating the CCPA.

  • Practice Tip.  Businesses covered by the CCPA should review their consumer rights requests processes and methods to confirm that they do not require verification in order for consumers to submit consumer opt-out and limitation requests, and should further limit the information required to be provided by consumers in order to submit other privacy rights requests to only the information truly necessary to confirm the identity of the requestor.

2. Undue Burden on Exercise of CCPA Rights through Authorized Agents

Similar to the allegations above, the second alleged violation arose in connection with Honda’s practice of requiring consumers to directly confirm that they had given permission to their authorized agents to submit opt-out and limitation requests on their behalf. 

Under the CCPA, consumers can authorize other persons or entities to exercise their aforementioned rights, and, as above, the CCPA prohibits verification requirements for rights to opt-out and limit.  While businesses may require authorized agents to provide proof of authorization, the CCPA prohibits requiring consumers to directly confirm that authorized agents have their permission.  Instead, businesses are only allowed to contact consumers directly to check authorization, provided this relates to requests to know/access, correct or delete personal information.

Despite these requirements, Honda’s process for submitting CCPA privacy rights requests did not distinguish between verifiable and non-verifiable requests, and Honda sent confirmatory correspondence directly to consumers to confirm they had given permission to the authorized agent for all such privacy requests; the CPPA therefore found Honda in violation of the CCPA.

  • Practice Tip.  As above, businesses should audit their consumer rights requests procedures and mechanisms to ensure that they do not impose verification requirements, including those related to the use of authorized agents, in connection with opt-out and limitation requests.

3. Asymmetry in Cookie Management Tool

The third alleged violation concerns the cookie consent management tool on Honda’s website, used to effectuate consumer requests to opt out of personal information “sharing,” which was configured to opt consumers in by default.

Specifically, through the OneTrust cookie consent management tool utilized on Honda’s websites, consumers were automatically opted in to the “sharing” of their personal information by default.  To opt out, consumers were required to take multiple steps (i.e., to toggle the button to turn off cookies and then confirm their choices), while opting in required either no steps or, for a consumer who decided to opt back in after opting out, only one step to “allow all” cookies.

The CCPA requires businesses to design and implement methods for submitting CCPA requests that are easy to understand and easy to execute, that provide symmetrical choices, and that avoid confusing language, interactive elements or choice architecture that impairs the consumer’s ability to make a choice.  Here, the CPPA focused specifically on symmetry in choice, meaning that the path for a consumer to exercise a more privacy-protective option cannot be longer, more difficult or more time-consuming than the path to exercise a less privacy-protective option, because that would impair or interfere with the consumer’s ability to make a choice.  The Stipulated Final Order went further to confirm that a website banner that provides only two options when seeking consumers’ consent to use their personal information—such as “Accept All” and “More Information,” or “Accept All” and “Preferences”—is not equal or symmetrical.
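One rough way to reason about symmetry is to compare the number of steps each path requires. The sketch below, with hypothetical names and step counts drawn from the Order’s description, encodes that comparison; it is not an implementation of OneTrust or any other consent tool.

```python
# Illustrative symmetry check: the privacy-protective path (opt out / reject)
# should not take more steps than the permissive path (opt in / accept all).
# Function name and step counts are hypothetical encodings of the Order's description.
def is_symmetric(steps_to_opt_out: int, steps_to_opt_in: int) -> bool:
    return steps_to_opt_out <= steps_to_opt_in


# Configuration described in the Order: opted in by default, two steps to opt out
# (toggle cookies off, then confirm), one step to opt back in ("allow all").
print(is_symmetric(steps_to_opt_out=2, steps_to_opt_in=1))  # False -> asymmetric

# A two-button banner offering "Accept All" / "Reject All": one step each way.
print(is_symmetric(steps_to_opt_out=1, steps_to_opt_in=1))  # True
```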

  • Practice Tip.  Businesses must audit their cookie consent management tools to ensure that consumers are not opted-in to data “sales” or “sharing” by default, and that the tool does not require a consumer to take more steps to effectuate consumer opt-out requests than to opt-in.  Moreover, cookie consent management tools that present only two options should allow consumers to either “accept” or “reject” all cookies (rather than presenting the option to “accept” and another option that is not full rejection (such as to receive more information or go to a “preferences” page)).

4. Absence of Contractual Safeguards with Vendors

Finally, the CPPA alleged that although Honda disclosed consumer personal information to third-party advertising technology vendors in situations where such disclosure was a “sale” or “sharing” under the CCPA, it failed to enter into a CCPA-compliant contract with such vendors.  Specifically, businesses that “sell” or “share” personal information to or with a third party must enter into agreements containing explicit provisions prescribed by the CCPA to ensure protection of consumers’ personal information. The CPPA found that by failing to implement such contractual safeguards, Honda placed consumers’ personal information at risk.

  • Practice Tip.  Businesses should audit all contracts pursuant to which consumer personal information is disclosed or otherwise made available to third parties, particularly third-party advertising technology vendors, to ensure the provisions required by the CCPA are included.

Enforcement Remedies

In addition to a $632,500 fine[4], the Stipulated Final Order requires Honda to (1) modify its methods for consumers to submit CCPA requests, including with respect to its method for the submission and confirmation of CCPA requests by authorized agents, (2) change its cookie preference tool to avoid dark patterns and ensure symmetry in choice, (3) ensure all personnel handling CCPA requests are adequately trained and (4) enter into compliant contracts with all external recipients of consumer personal information within 180 days.

Conclusion

The enforcement action against Honda underscores the importance of strict compliance with the CCPA. Businesses must ensure that their processes for handling consumer privacy requests are straightforward, do not require unnecessary information and provide symmetrical choices, and must enter into CCPA-compliant contracts prior to and in connection with the disclosure of consumer personal information to third parties.


[1] The Stipulated Final Order (the “Stipulated Final Order”) can be found here.

[2] Under the CCPA, businesses can verify requests to delete, correct and know personal information of consumers because of the potential harm to consumers from imposters accessing, deleting or changing their personal information; conversely, requests to opt-out of sale or sharing and requests to limit use and disclosure are prohibited from having a verification requirement because of the minimal potential harm to consumers.  Accordingly, while businesses may ask for additional information in connection with such requests to identify the relevant data in their systems, they cannot ask for more information than necessary to process such requests and, to the extent they can comply without additional information, they must do so.

[3] Specifically, the form required consumers to provide their first name, last name, address, city, state, zip code, email address and phone number, although Honda “need[ed] only two data points from [the relevant] consumer to identify [them] within its database.” 

[4] Notably, the Stipulated Final Order details the number of consumers whose rights were implicated by some of Honda’s practices, serving as a reminder to businesses that CCPA fines apply on a per violation basis.

Data Act FAQs – Key Takeaways for Manufacturers and Data Holders

On 3 February 2025, the European Commission (“EC”) published an updated version of its frequently asked questions (“FAQs”) on the EU Data Act.[1]  The Data Act, which is intended to make data more accessible to users of IoT devices in the EU, entered into force on 11 January 2024 and will become generally applicable as of 12 September 2025.

The FAQs, first published in September 2024, address the key concepts of “connected product” and “related service.” The latest iteration of the FAQs contains incremental updates which provide greater insight into how the EC believes that manufacturers and data holders should interpret their obligations under the Data Act.

Key Takeaways for Manufacturers and Data Holders

  1. “Connected Products” include a wide range of smart devices, such as smartphones and TVs.[2]  The FAQs acknowledge the broad definition of connected products under the Data Act and provide examples of devices that fall under this category. In particular, despite ambiguity created by previous iterations of the Data Act, the EC has confirmed its view in the FAQs that devices such as smartphones, smart home devices and TVs are in scope as connected products.
  2. Two conditions must be satisfied for a digital service to constitute a “Related Service.”[3]  It is expressly noted that the following conditions must be satisfied for a digital service to be a related service: (a) there must be a two-way exchange of data between the connected product and the service provider, and (b) the service must affect the connected product’s functions, behaviour, or operation. The FAQs also provide several factors that could help businesses determine whether a digital service is a related service, including user expectations for that product category, replaceability of the digital service, and pre-installation of the digital service on the connected product. Although these factors are not determinative, they may provide helpful guidance to businesses assessing whether their services fall within this definition (for example, if the service can easily be replaced by a third-party alternative, it may not meet the threshold of a related service). Ultimately, the EC has noted that practice and courts’ interpretations will play an essential role in further delineating if a digital service is a related service – so time will tell.
  3. Manufacturers have some discretion as to whether data will be directly or indirectly accessible.[4]  Importantly, the FAQs suggest that manufacturers/providers have a significant degree of discretion as to whether to design or redesign their connected products or related services to provide direct access to data. The FAQs list certain criteria that can be taken into account when determining whether to design for direct access[5] or indirect access.[6] In this respect, the FAQs note that the wording of Article 3(1) (access by design) leaves flexibility as to whether design changes need to be implemented, and it is acknowledged that data holders may prefer to offer indirect access to the data. It is also noted that the manufacturer may implement a solution that “works best for them” and consider, as part of its assessment, whether direct access is technically possible, the costs of potential technical modifications, and the difficulty of protecting trade secrets or intellectual property or of ensuring the connected product’s security.
  4. Readily available data without disproportionate effort.[7]  The FAQs confirm the position that readily available data is “product data and related service data that a data holder can obtain without disproportionate effort going beyond a simple operation.”  The EC provided some further clarity by highlighting that only data generated or collected after the entry into application of the Data Act (i.e., after 12 September 2025) should be considered “readily available data” as the definition does not include a reference to the time of their generation or collection. However, the FAQs do not provide further clarity on what would constitute “disproportionate effort” – arguably leaving businesses with further discretion to interpret this in the context of their products and services.
  5. Data made available under the Data Act should be ‘easily usable and understandable’ by users and third parties.[8]  The FAQs expressly note that data holders are required to share data of the same quality as they make available to themselves to facilitate the use of the data across the data economy. This indicates that raw and pre-processed data may require some additional investment to be usable. However, the FAQs make clear that there is no requirement for data holders to make substantial investments into such processes. Indeed, it may be the case that where the level of investment into processing the data is substantial, the Chapter II obligations may not apply to that data.
  6. Data generated outside of the EU may be subject to the Data Act.[9]  The EC’s position is that when a connected product is placed on the market in the EU, all the data generated by that connected product both inside and outside the EU will be subject to the Data Act. For example, if a user purchases a smart appliance in the EU and subsequently takes it to the US with them on vacation, any data generated by the use of the appliance in the US would also fall within the scope of the Data Act.
  7. Manufacturers will not be data holders if they do not control access to the data.[10]  It is explained in the FAQs that determining who is the data holder depends on who “controls access to the readily available data”. In particular, the FAQs acknowledge that manufacturers may contract out the role of “data holder” to a third party for all or part of their connected products. This seems to suggest that where the manufacturer does not control access to the readily available data, it will not be a data holder. In addition, a related service provider that is not the manufacturer of the connected product may also be a data holder if it controls access to readily available data that is generated by the related service it provides to the user. The FAQs further confirm that there may be instances where there is no data holder, i.e., in the case of direct access, where only the user has access to data stored directly on the connected product without the involvement of the manufacturer.
  8. Data holders can use non-personal data for any purpose agreed with the user (subject to limited exceptions).[11]  The FAQs reaffirm the position that a data holder can use the non-personal data generated by the user for any purpose, provided that this is agreed with the user.[12]  Furthermore, the data holder must not use such data to derive insights about the economic situation, assets or production methods of the user, or in any other manner that could undermine the user’s commercial position. Where data generated by the user includes personal data, data holders should ensure any use of such data is in compliance with the EU GDPR. To ensure compliance with the GDPR, data holders may apply privacy-enhancing technologies (“PETs”); however, the EC’s view is that applying PETs does not necessarily mean that the resulting data will be considered ‘derived’ or ‘inferred’ such that it would fall out of scope of the Data Act.
  9. Users may be able to request access to data from previous users of their connected product.[13]  The FAQs note that the Data Act “can be read as giving users the right to access and port readily available data generated by the use of connected objects, including data generated by other users before them.” Subsequent users may therefore have a legitimate interest in such data, for example, in respect of updates or incidents. However, the rights of previous users and other applicable law (e.g., the right to be forgotten under the EU GDPR) must be respected. Moreover, data holders are able to delete certain historical data after a reasonable retention period.[14] 

Although the initial set of FAQs, and the subsequent incremental updates, provide further guidance for businesses whose products or services may fall in scope of the Data Act, there are still areas of uncertainty that are yet to be addressed. As the FAQs are a “living document”, they may continue to be updated as and when the EC deems it necessary. It is also important to note that while the FAQs provide some useful guidance on Data Act interpretation, the Data Act is subject to supplemental domestic implementation and enforcement by national competent authorities of EU member states. Businesses should therefore pay careful attention to guidance published by national authorities in the member states and sectoral areas in which they operate.


[1] See https://digital-strategy.ec.europa.eu/en/library/commission-publishes-frequently-asked-questions-about-data-act.

[2] See Question 7 of the FAQs.

[3] See Question 10 of the FAQs.

[4] See Question 17 and 22 of the FAQs.

[5] I.e., ‘where relevant and technically feasible’ the user has the technical means to access, stream or download the data without the involvement of the data holder. For further information, see Article 3(1) of the Data Act.

[6] I.e., the connected product or related service is designed in such a way that the user must ask the data holder for access. For further information, see Article 4(1) of the Data Act.

[7] See Question 4 of the FAQs.

[8] See Question 5 of the FAQs.

[9] See Question 9 of the FAQs.

[10] See Question 21 of the FAQs.

[11] See Questions 13 and 29 of the FAQs.

[12] See also Article 4(13) of the Data Act.

[13] See Question 33 of the FAQs.

[14] See Recital 24 of the Data Act.

New York Legislature Passes Health Data Privacy Bill

Last week, the New York legislature passed the New York Health Information Privacy Act (S929) (“NYHIPA” or the “Act”)[1]. The Act, which is currently awaiting the Governor’s signature, seeks to regulate the collection, sale and processing of healthcare information, akin to Washington’s My Health My Data Act.

Importantly, the Act as currently drafted is very broad and may have far-reaching consequences, giving rise to extensive compliance obligations, particularly because it (i) extends to non-health-related data, (ii) does not contain applicability thresholds based on the number of individuals whose data is processed, or the type of activity carried out, by the regulated entity, (iii) requires only a minimal nexus to New York and applies to non-New York entities that process non-New York residents’ data, and (iv) applies to information collected in the context of employment and business-to-business relationships. If signed by the Governor, the Act will go into effect one year after it becomes law.

Below, we provide an overview of the broad categories of entities and data subject to NYHIPA, the key compliance obligations and consumer rights provided, and what businesses need to know in order to comply.

Who and What is Covered by the Act?

Regulated Health Information.  The Act covers a wide range of data given the broad definition of “regulated health information.” Specifically, “regulated health information” includes “any information that is reasonably linkable to an individual, or a device, and is collected or processed in connection with the physical or mental health of an individual” (the foregoing referred to herein as “RHI”); by definition, RHI does not include “deidentified information,”[2] protected health information (“PHI”) governed by HIPAA or information collected as part of a clinical trial. The Act’s provisions also apply to seemingly non-health related data, such as location information and payment information collected in connection with health-related products or services, as well any inference that can be drawn or derived therefrom.

Accordingly, the Act as drafted implicates a significant amount of information and, as further discussed below, given the absence of applicability thresholds (e.g., based on the number of New York residents whose data is processed), applies to a vast number of entities. RHI is not limited to medical records, but covers biometric data, genetic information, and even information that could identify a person indirectly. Additionally, since the Act lacks a definition of “individual,” it arguably applies, unlike typical U.S. state privacy laws, to information collected in the context of commercial and employment relationships, expanding the compliance obligations of entities both within and outside New York’s borders.

Regulated Entities. In stark contrast to the processing thresholds advanced by other U.S. privacy laws, the Act defines a regulated entity as any entity that:

  1. Controls the processing of RHI of an individual who is a New York resident;
  2. Controls the processing of RHI of an individual who is physically present in New York; or
  3. Is located in New York and controls the processing of RHI.

Excluded from coverage are local, state, and federal governments and municipal corporations (given that any information they process is exempt from the Act’s reach), as well as HIPAA covered entities, solely to the extent they maintain patient information in the same manner as PHI. Additionally, there is no exemption for nonprofits or entities regulated by the GLBA, meaning additional restrictions may be imposed on the financial information they collect (e.g., payment transactions relating to physical or mental health, or from which inferences can be drawn) to the extent processed in connection with health-related purposes.

Unlike other state privacy laws enacted to date, the Act’s extraterritorial application will impact many organizations beyond those that conduct business in New York as, even if the entity itself is located outside the state, its activities will be subject to the Act so long as it processes RHI regarding individuals (not even necessarily state residents) physically present in New York.   Further, individuals beyond New York residents may benefit from the Act’s protections, given that any entity located in New York will be covered by the Act regardless of where the individual whose RHI is processed is domiciled.

Compliance Obligations

Entities subject to typical U.S. consumer privacy laws will recognize a number of familiar obligations imposed by NYHIPA, including:

1. Obligations to provide a publicly available privacy policy through a regularly used interface (e.g., a website or platform) informing individuals what RHI will be collected, the nature and purpose of processing, to whom and for what purposes RHI will be disclosed, and how individuals can request access to or deletion of their RHI;

2. Restrictions on “selling”[3] RHI;

    • Notably, based on the current drafting of the Act, it is unclear whether all “sales” of RHI are expressly prohibited (other than in the context of business transactions), as the exceptions that would seem appropriate (i.e., where an individual provides a valid authorization or the processing is otherwise necessary for a permitted purpose) are not clearly provided with respect to RHI sales and instead appear to be tied only to other types of RHI processing.  Reading the Act without such exceptions would suggest that any sharing of RHI is prohibited where valuable consideration is provided in exchange.  For example, if no such exceptions apply, there is a risk that regulated entities would be prohibited from providing RHI to their service providers if that would be considered, under a broad interpretation of “sale,” sharing RHI for “valuable consideration” (i.e., the relevant services).

    3. Restrictions on otherwise processing RHI unless (a) the regulated entity obtains valid authorization as governed by the Act (detailed further below), which must be easily revocable at any time, or (b) the processing is “strictly necessary” for one of seven specific purposes enumerated in the Act (e.g., to provide the product or service requested, to comply with legal obligations, or for internal business operations excluding marketing);

    4. Providing individuals with access and deletion rights, including by providing an easy mechanism by which individuals can effectuate such rights (and allowing such requests to be made by an individual’s authorized agent), and complying with such requests within 30 days;

    • Deletion requests must also be passed to and honored by a regulated entity’s third party service providers.

    5. Implementing reasonable administrative, physical, and technical safeguards to protect the confidentiality and security of RHI;

    6. Securely disposing of RHI pursuant to a publicly available retention schedule, where disposal must occur no later than 60 days after retention is no longer necessary for the permissible purposes or for which consent was given; and

    7. Entering into contracts with third party service providers, imposing equivalent confidentiality, information security, access and deletion obligations, as well as processing restrictions, as those imposed on the regulated entity under the Act.

    Valid Authorization

    While many U.S. state privacy laws contain prescriptive requirements regarding what constitutes consumer consent, NYHIPA goes a step further in providing not only a number of requirements on how an authorization must be presented to be valid, but also substantive requirements to include in authorization request forms. 

    In order for an authorization to be considered valid, it must meet specific criteria including that the request: (i) must be made separately from any other transaction or part of a transaction, (ii) cannot be sought until at least twenty-four hours after an individual creates an account or first uses the requested product or service, (iii) cannot be obtained through a dark pattern, (iv) if made for multiple processing activities, must allow for specific authorization for each specific activity, and (v) cannot relate to an activity for which the individual has revoked or withheld consent in the past year.  Following trends set by recent privacy-related litigations, such as California wiretapping litigation, the Act makes clear that requests for consent must be specific to the particular processing activity, and cannot be bundled with other disclosures or consent requests.  Further, consent must be clearly communicated to the relevant individual, and freely revocable.

    In terms of substantive requirements, the Act further requires that valid authorizations disclose: the RHI to be collected and the purposes for which it will be processed; the names or categories of third parties to whom RHI will be disclosed (similar to the approaches taken in the Oregon and Delaware consumer privacy laws); any monetary or other valuable consideration that may be received by the regulated entity; assurances that failure to consent will not affect the individual’s experience; the expiration date of the authorization (which may be up to one year from when authorization was provided); how the individual can revoke consent; how the individual can request access to or deletion of their RHI; and any other information material to the individual’s decision-making. Authorizations must also be executed by the individual, though this can be done electronically.
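
For illustration only, the following TypeScript sketch maps the content elements described above onto a simple data structure.  The interface and field names are hypothetical assumptions rather than statutory terms, and the presentation requirements discussed above (a separate request, no dark patterns, the 24-hour waiting period) would sit in the request flow rather than in the form contents.

```typescript
// Hypothetical sketch only: field names are illustrative, not drawn from the Act.
interface NyhipaAuthorizationRequest {
  rhiToBeCollected: string[];            // the RHI to be collected
  processingPurposes: string[];          // purposes for processing (specific authorization per activity)
  thirdPartyRecipients: string[];        // names or categories of third-party recipients
  considerationReceived?: string;        // any monetary or valuable consideration to the regulated entity
  noDetrimentAssurance: string;          // assurance that declining will not affect the individual's experience
  expirationDate: Date;                  // up to one year from when authorization is provided
  revocationInstructions: string;        // how the individual can revoke consent
  accessAndDeletionInstructions: string; // how to request access to or deletion of RHI
  otherMaterialInformation?: string;     // anything else material to the individual's decision
  executedElectronically: boolean;       // execution by the individual may be electronic
}
```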

    Enforcement

    Enforcement rights under the Act are primarily vested in the New York AG, who has broad authority to investigate violations and impose civil penalties on entities that engage, or are about to engage, in unlawful acts or practices under the NYHIPA.  The New York AG can commence an action within 6 years of becoming aware of the alleged violation and, in addition to seeking an injunction, can seek civil penalties of not more than $15,000 per violation or 20% of revenue obtained from New York consumers within the past fiscal year, whichever is greater, as well as any other and further relief as the court may deem proper. The Act also contemplates rulemaking authority for the New York AG.

    Conclusion

    The applicability of NYHIPA is broad, covering a wide array of entities involved in the collection, use, and management of RHI within New York. To determine whether NYHIPA applies, an organization must evaluate its role in handling health information, the nature of the data it processes, and its geographic operations. Until now, state consumer privacy laws have been focused on comprehensive data privacy, designed on the Washington model. Perhaps New York is showing us a shift back to sectoral laws instead. At this current juncture, it is unclear whether Governor Hochul will sign the law as drafted given it is likely to be subject to a number of challenges, including on First Amendment grounds; Cleary Gottlieb will keep monitoring for updates.


    [1] The text of the bill can be found here.

    [2] “Deidentified information” under the Act has the same meaning provided under comprehensive U.S. state privacy laws (i.e., information that cannot reasonably be used to infer information about, or otherwise be linked to, a particular individual, household or device, provided that the regulated entity or service provider (i) implements reasonable measures to prevent reidentification, (ii) publicly commits to process the information in deidentified form and not attempt to reidentify the information and (iii) imposes contractual obligations on third-party recipients consistent with the foregoing).

    [3] “Sell” under the Act is defined as sharing RHI for monetary or other valuable consideration, exempting only sharing of RHI in the context of a business transaction in which a third party assumes control of all or part of the covered entity’s assets.

    Cybersecurity Disclosure and Enforcement Developments and Predictions

    The following is part of our annual publication Selected Issues for Boards of Directors in 2025. Explore all topics or download the PDF.


    The SEC pursued multiple high-profile enforcement actions in 2024, alongside issuing additional guidance on compliance with the new cybersecurity disclosure rules. Together these developments demonstrate a continued focus by the SEC on robust disclosure frameworks for cybersecurity incidents. Public companies will need to bear these developments in mind as they continue to grapple with cybersecurity disclosure requirements going into 2025.

    SEC Disclosure Rules and Guidance

    The SEC’s cybersecurity disclosure rules became effective in late 2023, and 2024 marked the first full year of required compliance. The rules added Item 1.05 to Form 8-K, requiring domestic public companies to disclose certain information within four business days of determining that they have experienced a material cybersecurity incident, including the material aspects of the nature, scope and timing of an incident and the material impact or reasonably likely impact of the incident on the company.

    Read the full post

    SEC Charges Four Companies Impacted by Data Breach with Misleading Cyber Disclosures

    On October 22, 2024, the SEC announced settled enforcement actions charging four companies with making materially misleading disclosures regarding cybersecurity risks and intrusions. These cases are the first to bring charges against companies that were downstream victims of the well-known cyber-attack on software company SolarWinds. The four companies were providers of IT services and digital communications products and settled the charges for amounts ranging from $990,000 to $4 million.

    In 2023, the SEC sued SolarWinds and its Chief Information Security Officer for allegedly misleading disclosures and deficient controls. Most of the SEC’s claims in that case were dismissed by a judge in the Southern District of New York, in part because the judge ruled that SolarWinds’ post-incident disclosures did not misleadingly minimize the severity of the intrusion. This new round of charges indicates the SEC’s intent to continue to enforce disclosure and reporting requirements surrounding cybersecurity breaches. The SEC’s recent charges focus on the companies’ continued use of generic and hypothetical language following significant data breaches, as well as allegations of downplaying the severity of the breaches by omitting material information about their nature and extent. Public companies should carefully consider the lessons from these actions when making disclosures following a cybersecurity breach.  

    Background

    According to the SEC’s allegations, which the companies neither admitted nor denied, in December 2020, each of the four companies charged last week learned that its systems had been affected by the SolarWinds data breach. Public reporting at the time indicated that the breach was likely performed by a state-sponsored threat actor. Each of the companies performed investigations of the breach, determining that the threat actor had been active in their systems for some period of time and accessed certain company or customer information.[1]

    The SEC brought negligent fraud charges against all four companies, charging two primary types of materially misleading disclosures. Two companies, Check Point[2] and Unisys,[3] were charged because the SEC believed their post-breach risk factor disclosures—containing generic and hypothetical language about the risk of cybersecurity breaches similar to their pre-breach disclosures—were misleading given that the companies had become aware of the actual SolarWinds-related breaches. The SEC alleged that the other two companies, Avaya[4] and Mimecast,[5] while they did make specific disclosures that they had been affected by cybersecurity breaches, misleadingly omitted details that the SEC asserted would be material to investors. The SEC noted that all four companies were in the information technology industry, with large private and government customers, and therefore their reputation and ability to attract and retain customers would be affected by disclosure of a data breach.

     The Charges

    There were two categories of charges.

    Charges for disclosing hypothetical cyber risks in wake of actual cyber attack. The SEC has repeatedly brought charges against companies for allegedly using generic and/or hypothetical language in their risk factors after a known data breach.[6] That trend has continued with the recent actions against Check Point and Unisys.

    i. Check Point

    Check Point’s Form 20-F disclosures in 2021 and 2022 stated, “We regularly face attempts by others to gain unauthorized access…” and “[f]rom time to time we encounter intrusions or attempts at gaining unauthorized access to our products and network. To date, none have resulted in any material adverse impact to our business or operations.”[7] These filings were virtually unchanged before and after the data breach. The SEC alleged that these risk disclosures were materially misleading because the company’s risk profile materially changed as a result of the SolarWinds compromise-related activity for two reasons: the threat actor was likely a nation-state and the threat actor “persisted in the network unmonitored for several months and took steps, including deployment and removal of unauthorized software and attempting to move laterally” in the company’s environment.[8]

    ii. Unisys

    The company’s risk factors in its Form 10-Ks following the breach were substantially unchanged from 2019. The risk factor language was hypothetical: cyberattacks “could … result in the loss … or the unauthorized disclosure or misuse of information…” and “if our systems are accessed ….”[9] The SEC alleged that hypothetical language is insufficient when the company is aware that a material breach occurred. The SEC also alleged that the company did not maintain adequate disclosure controls and procedures because they had no procedures to ensure that, in the event of a known cybersecurity incident, information was escalated to senior management, which in this case did not happen for several months. The SEC’s order also alleged that the company’s investigative process after the breach “suffered from gaps that prevented it from identifying the full scope of the compromise,” and that these gaps constituted a material change to the company’s risk profile that should have been disclosed.[10]

    Charges for allegedly failing to disclose material information. Two of the charged companies did disclose that their systems had been affected by suspicious activity, but the SEC nevertheless found fault with those disclosures.

    i. Avaya

    In its Form 10-Q filed two months after learning of the breach, the company disclosed that it was investigating suspicious activity that it “believed resulted in unauthorized access to our email system,” with evidence of access to a “limited number of Company email messages.”[11] The SEC alleged that these statements were materially misleading because they “minimized the compromise and omitted material facts” that were known to the company “regarding the scope and potential impact of the incident,”[12] namely, omitting: (i) that the intrusions were likely the work of a state actor, and (ii) that the company had only been able to access 44 of the 145 files compromised by the threat actor and therefore could not determine whether these additional files contained sensitive information.[13]

    ii. Mimecast

    In its Form 8-Ks filed in the months after learning of the breach, Mimecast disclosed that an authentication certificate had been compromised by a sophisticated threat actor, that a small number of customers were targeted, that the incident was related to SolarWinds, and that some of the company’s source code had been downloaded. The company stated that the code was “incomplete and would be insufficient to build and run” any aspect of the company’s service.[14] The SEC alleged that these statements were materially misleading “by providing quantification regarding certain aspects of the compromise but not disclosing additional material information on the scope and impact of the incident,” such as the fact that the threat actor had accessed a database containing encrypted credentials for some 31,000 customers and another database with systems and configuration information for 17,000 customers, and by not disclosing that the threat actor had exported source code amounting to more than half of the source code of the affected projects, or information about the importance of that code.[15]

    Dissenting Statement

    The two Republican Commissioners, Hester Peirce and Mark Uyeda, voted against the actions and issued a dissenting statement accusing the Commission of “playing Monday morning quarterback.”[16] The dissenters noted two key issues across the orders. First, the dissenters viewed the cases as requiring disclosure of details about the cybersecurity incident itself, despite previous Commission statements that disclosures should instead be focused on the “impact” of the incident.[17] Second, the dissenters argued that many of the statements the SEC alleged to be material would not be material to the reasonable investor, such as the specific percentage of code exfiltrated by the threat actor.[18]  

    The SEC Is Not Backing Off After SolarWinds

    These enforcement actions come months after the Southern District of New York rejected several claims the SEC brought against SolarWinds for the original breach.[19] The recent actions show that the SEC is not backing away from aggressively reviewing incident and other related cybersecurity disclosures. Notably, the SEC did not allege that any of the companies’ cybersecurity practices violated the Exchange Act’s internal controls provision.  In an issue of first impression, the SolarWinds court held that the internal controls provisions focus on accounting controls and do not encompass the kind of cyber defenses at issue in that case.  It is not clear whether the absence of such charges here represents the SEC adopting a new position after the SolarWinds ruling, or rather a reflection of these cases involving different cybersecurity practices and intrusions. The SEC did allege failure to maintain proper disclosure controls in one of the four new orders, which was another allegation rejected by the SolarWinds court as insufficiently pled.[20] Moreover, the SolarWinds court dismissed claims that the company had misled its investors by making incomplete disclosures after its cyber intrusion, finding that the company adequately conveyed the severity of the intrusion and that any alleged omissions were not material or misleading.  While the dissenters questioned whether the allegedly misleading disclosures here were any different than those in SolarWinds, at a minimum these cases show that the SEC will continue to closely scrutinize post-incident disclosures, notwithstanding its loss in SolarWinds.

    Takeaways

    There are several takeaways from these charges.

    • The SEC is signaling an aggressive enforcement environment and continuing to bring claims against companies for deficient disclosure controls, despite similar charges being rejected in SolarWinds. The Unisys order shows that the SEC will continue to pursue disclosure controls charges where, in its view, a company did not adequately escalate incidents to management, consider the aggregate impact of related incidents, or adopt procedures to guide materiality determinations, among other things.
    • The SEC will reliably bring charges against companies that use generic or hypothetical risk factor language to describe the threat of cybersecurity incidents when the company’s “risk profile changed materially”[21] due to a known breach.
    • The SEC will give heightened scrutiny to disclosures by companies in sectors such as information technology and data security, because in the SEC’s view cybersecurity breaches are more likely to affect the reputation and ability to attract customers for these types of companies.
    • Companies should take care in crafting disclosures about the potential impact of cybersecurity breaches, including in Form 8-K and risk factor disclosure, and consider factors such as:
      • Whether the threat actor is likely affiliated with a nation-state.
      • Whether, or the extent to which, the threat actor persisted in the company’s environment.
      • If the company seeks to quantify the impact of the intrusion, such as by the number of files or customers affected, the SEC will scrutinize whether the company selectively disclosed quantitative information in a misleading way.
      • Whether the company should disclose not only the number of files or amount of customer data compromised, but also the importance of the files or data and the uses that can be made of them.
      • If the company quantifies the impact of the intrusion but is aware of gaps in its investigation or in the available data that mean the severity of the impact could have been worse, the SEC may consider it misleading not to disclose those facts.

    [1] For information on the four orders, see Press Release, SEC Charges Four Companies With Misleading Cyber Disclosures, SEC, https://www.sec.gov/newsroom/press-releases/2024-174.

    [2] Check Point Software Technologies Ltd., Securities Act Release No. 11321, Exchange Act release No. 101399, SEC File No. 3-22270 (Oct. 22, 2024).

    [3] Unisys Corporation, Securities Act Release No. 11323, Exchange Act Release No. 101401, SEC File No. 3-22272 (Oct. 22, 2024).

    [4] Avaya Holdings Corp., Securities Act Release No. 11320, Exchange Act Release No. 101398, SEC File No. 3-22269 (Oct. 22, 2024).

    [5] Mimecast Limited, Securities Act Release No. 11322, Exchange Act Release No. 101400, SEC File No. 3-22271 (Oct. 22, 2024).

    [6] Press Release, Altaba, Formerly Known as Yahoo!, Charged With Failing to Disclose Massive Cybersecurity Breach; Agrees To Pay $35 Million, SEC, https://www.sec.gov/newsroom/press-releases/2018-71; Press Release, SEC Charges Software Company Blackbaud Inc. for Misleading Disclosures About Ransomware Attack That Impacted Charitable Donors, SEC, https://www.sec.gov/newsroom/press-releases/2023-48.

    [7] Check Point, supra note 2, at 2–4.

    [8] Id.

    [9] Unisys Corporation, supra note 3, at 6.

    [10] Id. at 5–7.

    [11] Avaya Holdings Corp, supra note 4, at 4.

    [12] Id. at 2.

    [13] Id. at 4.

    [14] Mimecast Limited, supra note 5, at 4.

    [15] Id.

    [16] Statement, Comm’rs Peirce and Uyeda, Statement Regarding Administrative Proceedings Against SolarWinds Customers (Oct. 22, 2024), https://www.sec.gov/newsroom/speeches-statements/peirce-uyeda-statement-solarwinds-102224.

    [17] Id.

    [18] Id.

    [19] See Cleary Alert Memo, SDNY Court Dismisses Several SEC Claims Against SolarWinds and its CISO (July 26, 2024).

    [20] Id.

    [21] Unisys Corporation, supra note 3, at 5.

    DOJ Brings Lawsuit Against TikTok Over Alleged Violations of the Children’s Online Privacy Protection Act

    Following on the heels of major developments coming out of the Senate last week to advance privacy protections for children online, the Department of Justice (“DOJ”) officially filed a lawsuit on Friday against TikTok, Inc., its parent company, ByteDance, and certain affiliates (collectively, “TikTok”), over alleged violations of the Children’s Online Privacy Protection Act (“COPPA”) and its implementing rule (the “COPPA Rule”), as well as an existing 2019 FTC consent order (the “2019 Order”) that resolved earlier allegations of the same.[1]

    After an investigation by the Federal Trade Commission (“FTC”) into TikTok’s compliance with the 2019 Order allegedly revealed a flagrant, continued disregard for children’s privacy protections, the FTC took the rare step of releasing a public statement referring the complaint to the DOJ, which subsequently filed suit in the Central District of California last week.  “TikTok knowingly and repeatedly violated kids’ privacy, threatening the safety of millions of children across the country,” said FTC Chair Lina M. Khan.  “The FTC will continue to use the full scope of its authorities to protect children online—especially as firms deploy increasingly sophisticated digital tools to surveil kids and profit from their data.”

    According to the complaint, TikTok is alleged to have violated not only COPPA and the COPPA Rule but also the 2019 Order by:

    1. Knowingly allowing millions of children under thirteen to create and use TikTok accounts that are not reserved for children, enabling full access to the TikTok platform to view, make and share content without verifiable parental consent;
    2. Collecting extensive data, including personal information, from children without justification and sharing it with third parties without verifiable parental consent;
    3. Failing to comply with parents’ requests to delete their children’s accounts or personal information; and
    4. Failing to delete the accounts and information of users TikTok knows are children in direct violation of the 2019 Order. 

    In highlighting a number of actions undertaken by TikTok, which allegedly led to “unlawful, massive-scale invasions of children’s privacy”, the DOJ’s complaint contains several allegations that TikTok knowingly disregarded its obligations under applicable law and under the 2019 Order requiring TikTok to prevent child users from accessing its platform without verifiable parental consent and to take measures to protect, safeguard and ensure the privacy of the information of its child users once obtained. Among others, the DOJ alleged the following illegal practices:

    • Insufficient Age Identification Practices.  Despite implementing age gates on its platform since March 2019 in an effort to direct users under thirteen to TikTok Kids Mode (a version of the app designed for younger users that allows users to view videos but not create or upload videos, post information publicly or message other users), the complaint alleges that TikTok continued to knowingly create accounts for child users that were not on Kids Mode, without requesting parental consent, by allowing child users to evade the age gate.  Specifically, upon entering their birthdates and being directed to Kids Mode, under-age users could simply restart the account creation process and provide a new birthdate to gain access to the general TikTok platform without restriction (even though TikTok knew it was the same person); alternatively, users could avoid the age gate entirely by logging in via third-party online services, in which case TikTok did not verify the user’s age at all (see the illustrative sketch following this list).
    • Unlawful and Overinclusive Data Collection from Child Users. Even where child users were directed to Kids Mode, the complaint alleges that personal information was collected from children, such as username, password and birthday as well as other persistent identifiers such as IP addresses or unique device IDs, without providing notice to parents and receiving consent as required under COPPA.  TikTok also collected voluminous account activity data which was then combined with persistent identifiers to amass profiles on child users and widely shared with third parties without justification.  For example, until at least mid-2020, TikTok is alleged to have shared information collected via Kids Mode accounts with Facebook and AppsFlyer, a third-party marketing analytics firm, to increase user engagement; the collection and sharing of persistent identifiers without parental consent was unlawful under the COPPA Rule because use of such data was not limited to the purpose of providing “support” for TikTok’s “internal operations”.
    • Failures to Honor Deletion Requests.  Though the COPPA Rule and the 2019 Order required TikTok to delete personal information collected from children at their parents’ request, TikTok failed to inform parents of this right and, separately, failed to act upon such requests.  TikTok allegedly employed an unreasonable and burdensome process for requesting deletion, oftentimes requiring parents to undertake a series of convoluted administrative actions before TikTok would act, including scrolling through multiple webpages to find and click on a series of links and menu options that gave no clear indication that they applied to such a request.  Even where parents successfully navigated this process, their requests were infrequently honored due to rigid policies maintained by TikTok related to account deletion.[2]  The complaint also suggests that even where such accounts were deleted, TikTok maintained certain personal information related to such users, such as application activity log data, for up to eighteen months without justification.
    • Failures to Delete Accounts Independently Identified by TikTok as Children’s Accounts. In clear violation of the 2019 Order, TikTok is also alleged to have employed deficient technologies, processes and procedures to identify children’s accounts for deletion, and even appears to have ignored accounts flagged by its own human content moderators as belonging to a child and ripe for deletion.  Instead, despite strict mandates to delete such accounts, TikTok’s internal policies permitted account deletion only if rigid criteria were satisfied—such as explicit admissions by the user of their age—and provided human reviewers with insufficient resources or time to conduct even the limited review permitted under such policies.[3]
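
Purely for illustration, the short TypeScript sketch below shows one way an age gate could persist the first birthdate entered from a device so that restarting account creation with a “corrected” birthdate does not bypass an under-13 determination.  The function names, the device-keyed store and the routing values are hypothetical assumptions, not a description of TikTok’s systems or of any specific requirement of the 2019 Order.

```typescript
// Hypothetical sketch: remember the first birthdate entered per device so a
// restarted sign-up flow cannot evade an earlier under-13 determination.

const firstSeenBirthdate = new Map<string, string>(); // deviceId -> ISO birthdate

function isUnder13(birthdateIso: string, now: Date = new Date()): boolean {
  const thirteenthBirthday = new Date(birthdateIso);
  thirteenthBirthday.setFullYear(thirteenthBirthday.getFullYear() + 13);
  return now < thirteenthBirthday;
}

function ageGate(deviceId: string, enteredBirthdate: string): "kidsMode" | "general" {
  // Persist only the first birthdate seen from this device and ignore later,
  // "corrected" entries made during re-registration attempts.
  if (!firstSeenBirthdate.has(deviceId)) {
    firstSeenBirthdate.set(deviceId, enteredBirthdate);
  }
  const effectiveBirthdate = firstSeenBirthdate.get(deviceId)!;
  return isUnder13(effectiveBirthdate) ? "kidsMode" : "general";
}

// A restarted sign-up with a different birthdate still routes to Kids Mode:
ageGate("device-123", "2015-06-01"); // "kidsMode"
ageGate("device-123", "1990-01-01"); // still "kidsMode"
```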

    In addition to a permanent injunction to cease the infringing acts and prevent further violations of COPPA, the complaint requests that the court impose civil penalties against TikTok under the FTC Act, which allows civil penalties of up to $51,744 per violation, per day.  Given the uptick in recent enforcement related to children’s privacy issues and potential for material fines, entities should carefully consider the scope of COPPA’s coverage to their existing products and services, as well as their existing policies, practices and product functionality, to ensure compliance and avoid regulatory scrutiny.


    [1] Specifically, the 2019 Order (i) imposed a $5.7 million civil penalty, (ii) required TikTok to destroy personal information of users under the age of thirteen and, by May 2019, remove accounts of users whose age could not be identified, (iii) enjoined TikTok from violating the COPPA Rule and (iv) required TikTok to retain certain records related to compliance with the COPPA Rule and the 2019 Order.

    [2] According to the complaint, in a sample of approximately 1,700 children’s TikTok accounts about which TikTok received complaints and deletion requests between March 21, 2019, and December 14, 2020, approximately 500 (30%) remained active as of November 1, 2021, and several hundred were still active in March 2023.

    [3] For example, despite having tens of millions of monthly active users at times since the entry of the 2019 Order, TikTok’s content moderation team included fewer than two dozen full-time human moderators responsible for identifying and removing material that violated all of its content-related policies, including identifying and deleting accounts of unauthorized users under thirteen.  Further, during at least some periods since 2019, TikTok human moderators spent an average of only five to seven seconds reviewing each flagged account to determine if it belonged to a child.

    Cybersecurity Law Enters Into Force

    On July 17, 2024, Law No. 90/2024 containing provisions for strengthening national cybersecurity and addressing cybercrime (the “Cybersecurity Law”) entered into force.

    The new legislation strengthens national cybersecurity, at a time when cyber-attacks have increased significantly.[1]

    The Cybersecurity Law:

    1. seeks to strengthen the resilience of (a) public administrations, (b) operators that are subject to the application of the Italian National Cybersecurity Perimeter (“Perimeter”) legislation, (c) operators of essential services and providers of digital services, as defined in Italian Legislative Decree No. 65/2018, which implements EU Directive 2016/1148 on security of network and information systems (the first NIS Directive) (“NIS 1 Operators”), and (d) operators providing public communications networks or publicly accessible electronic communications services (“Telecommunication Operators”), by establishing detailed rules on public procurement of IT goods and services that are essential for the protection of national strategic interests;
    2. imposes new incident reporting obligations;
    3. increases the role of the National Cybersecurity Agency (the “NCA”);
    4. enhances data security measures by establishing the National Cryptographic Center; and
    5. significantly focuses on the fight against cybercrime by increasing penalties for existing criminal offenses and introducing new criminal offenses in relation to individuals and entities under Italian Legislative Decree No. 231/2001 (“Decree 231”).

    The Cybersecurity Law provisions are in addition to the existing Italian cybersecurity regulatory framework, which includes, as mentioned, the Perimeter legislation (Decree Law No. 105/2019),[2]  the Digital Operational Resilience Act (Regulation (EU) 2022/2554, “DORA”), and Italian Legislative Decree No. 65/2018, which implements the NIS 1 Directive.[3]

    1. Scope

    The Cybersecurity Law imposes obligations on Public Administrations[4] and on in-house companies that provide Public Administrations with: IT services; transportation services; urban, domestic or industrial wastewater collection, disposal or treatment services; and waste management services (“Public Operators”). These in-house companies are included within the scope of the law as they are considered to be critical infrastructure providers, in relation to which cybersecurity vulnerabilities may impact the entire supply chain of goods and services.

    In addition, the Cybersecurity Law increases some of the obligations imposed on NIS 1 Operators, Telecommunication Operators and operators included in the Perimeter.

    2. Incident reporting obligation

    According to Article 1 of the Cybersecurity Law, Public Operators are required to report to the NCA all incidents impacting networks, information systems, and IT services listed in the taxonomy included in the NCA Resolution.[5]

    Public Operators must submit an initial report within 24 hours of becoming aware of the incident and a complete report within 72 hours, using the channels available on the NCA website.

    Public Operators may also voluntarily report incidents not included in the NCA Resolution taxonomy. These voluntary reports are processed only after mandatory ones to avoid unduly burdening the Italian Computer Security Incident Response Team. Furthermore, submitting a voluntary report does not impose any new obligations on the notifying party beyond those that would apply had the report not been submitted.[6]

    In the case of non-compliance with the reporting obligation, Article 1(5) of the Cybersecurity Law requires the NCA to issue a notice to the Public Operator, informing it that repeated non-compliance over a 5-year period will result in an administrative fine ranging from €25,000 to €125,000. Additionally, the NCA may conduct inspections within 12 months of identifying a delay or omission in compliance with the reporting obligation to verify that the Public Operator has taken steps to enhance resilience against the risk of incidents.

    The incident reporting obligation takes effect immediately for central public administrations included in the Italian National Institute of Statistics (“ISTAT”) list, as well as for regions, the autonomous provinces of Trento and Bolzano, and metropolitan cities. For all other Public Operators, this obligation will take effect 180 days after the law enters into force.

    Under Article 1 of the Cybersecurity Law, the reporting obligation is extended to more entities than those included in the Perimeter. In addition, the amendment to Article 1(3-bis) of Italian Decree-Law No. 105/2019 (establishing the Perimeter) extends the reporting procedure and timeframes set out in the Cybersecurity Law (initial reporting within 24 hours and complete reporting within 72 hours) to incidents that affect networks, information systems, and IT services other than ICT Assets[7] of entities included in the Perimeter.

    The reporting obligation under Article 1 of the Cybersecurity Law does not apply to (i) NIS 1 Operators; (ii) operators included in the Perimeter, in relation to incidents affecting ICT Assets (for which the provisions of the Perimeter legislation remain applicable); (iii) State bodies in charge of public and military security; (iv) the Department of Security Information; and (v) the External and Internal Information and Security Agencies.

    3. Addressing cybersecurity vulnerabilities reported by the NCA

    The Cybersecurity Law outlines how to handle reports of the NCA addressed to Public Operators, entities included in the Perimeter, and NIS 1 and Telecommunication Operators.

    In particular, the NCA may identify specific cybersecurity vulnerabilities that could affect the abovementioned recipients. These entities are required to promptly address the identified vulnerabilities within a maximum of 15 days, unless justified technical or organizational constraints prevent them from doing so immediately or necessitate postponement beyond the specified deadline.

    Failure to comply with this provision will result in an administrative fine ranging from €25,000 to €125,000.

    4. Contact person and cybersecurity structure

    Public Operators must establish a cybersecurity structure and designate a cybersecurity contact person (with specific expertise). This contact person, whose name must be communicated to the NCA, will be the NCA’s contact point for cybersecurity matters.

    The obligations introduced for Public Operators are similar to those provided for the entities included in the Perimeter. For instance, Public Operators are required to: (i) implement internal information security policies; (ii) maintain an information risk management plan; (iii) set out the roles and responsibilities of the parties involved; (iv) implement actions to enhance information risk management based on NCA guidelines; and (v) continuously monitor security threats and system vulnerabilities to ensure timely security updates when necessary.

    5. Enhancing data security measures

    Public Operators, as well as operators included in the Perimeter and NIS 1 Operators, must verify that computer and electronic communication programs and applications use cryptographic solutions that comply with the guidelines on encryption and password storage issued by the NCA and the Data Protection Authority. In particular, in order to prevent encrypted data from being accessible to third parties, these entities must also ensure that the applications and programs specified in the regulation are free from known vulnerabilities.

    Within the framework of the national cybersecurity strategy, the NCA has an increased role in promoting cryptography. This involves the development of standards, guidelines, and recommendations to strengthen information system security. Furthermore, the NCA conducts evaluations of cryptographic system security and coordinates initiatives aimed at advocating for cryptography as a critical cybersecurity tool.

    For this purpose, the Cybersecurity Law provides for the creation of a National Cryptographic Center within the NCA, which operates under the guidelines set out by the NCA’s General Director.

    6. Public procurement of ICT goods, systems and services

    When procuring certain categories of ICT goods, systems and services for activities involving the protection of strategic national interests, public administrations, public service operators, publicly controlled companies,[8] and entities included in the Perimeter must ensure that the ICT goods and services acquired comply with particular criteria and technical standards, thereby safeguarding the confidentiality, integrity, and availability of processed data. These essential cybersecurity standards will be set out in a DPCM, to be adopted within 120 days of the Cybersecurity Law coming into force.

    This new obligation stands alongside the existing requirement for entities included in the Perimeter to carry out an evaluation process through the Centre for National Evaluation and Certification (the “CVCN”) to ensure the security of ICT Assets intended for deployment under the Perimeter, as set out in the DPCM dated June 15, 2021. Accordingly, entities included in the Perimeter must additionally assess compliance with the essential cybersecurity standards set out in the abovementioned DPCM for ICT goods and services that are not subject to CVCN evaluation.

    7. Restrictions on personnel recruitment

    The Cybersecurity Law introduces several restrictions on the hiring by private entities of individuals who have held specific roles within certain central public administrations; if these restrictions are breached, the contract entered into will be null and void (Articles 12 and 13).

    For instance, the Cybersecurity Law prohibits NCA employees who have attended specific specialized training courses, in the interest and at the expense of the NCA, from taking positions with private entities to perform cybersecurity-related tasks for a period of two years from the last training course.

    8. Amendments to the DORA Regulation scope

    Lastly, the Cybersecurity Law amends the law implementing the DORA Regulation to bring within its scope, in addition to “financial entities”, financial intermediaries[9] and Poste Italiane S.p.A. in relation to its Bancoposta business.

    The objective of this amendment is to ensure a high level of digital operational resilience and to maintain stability across the financial sector. Consequently, in the exercise of the delegated power, the Government will make the appropriate adjustments and additions to the regulations governing these entities to align their operational resilience measures with those outlined in the DORA Regulation. These changes will apply to the activities undertaken by each entity concerned. Additionally, the Bank of Italy will assume supervisory, investigative, and sanctioning responsibilities over these entities.

    9. Main amendments to the regulation on cybercrime

    The Cybersecurity Law strengthens the fight against cybercrime by introducing significant amendments to both the Italian Criminal Code (the “ICC”) and the Italian Code of Criminal Procedure (the “ICCP”).

    In particular, the Cybersecurity Law:

    • Increases criminal penalties for a range of cybercrimes, including the crime of unauthorized access to computer systems and the crime of destruction of computer data, information, and programs;
    • Introduces new aggravating circumstances.  It extends the aggravating circumstance which applies when the crime is committed “by a public official or a person in charge of a public service, through abuse of power or in violation of the duties of his or her position or service, by a person who, also abusively, exercises the profession of private investigator, or by abuse of the position of computer system operator”, to apply to all cybercrimes covered by the Cybersecurity Law.  It introduces a new aggravating circumstance for the crime of fraud in cases where the act is committed remotely by means of computer or telematic tools capable of impeding one’s own or another’s identification.[10] It also increases the penalties provided for the existing aggravating circumstances;
    • Introduces two new mitigating circumstances (Articles 623-quater and 639-ter ICC), applicable to specific cybercrimes,[11] which can reduce penalties by (i) up to one-third if the crime can be considered to be “minor” because of the manner in which it was committed, or if the damage or risk is particularly insignificant;  (ii) from one-half to two-thirds if the offender takes steps to prevent further consequences of the crime. This includes actively assisting the authorities in gathering evidence or recovering the proceeds of the crime or the instruments used to commit the crime;
    • Repeals Article 615-quinquies ICC, which punishes the unlawful possession, distribution and installation of instruments, devices or programs designed to damage or interrupt a computer or telematic system, and replaces it with the new criminal offense outlined in Article 635-quater.1 ICC; [12]
    • Introduces the new crime of cyber-extortion (Article 629(3) ICC), which punishes by imprisonment of 6 to 12 years and a fine of € 5,000 to € 10,000 (penalties that may be increased if certain aggravating circumstances are met)[13] anyone who, by committing or threatening to commit specific cybercrimes,[14] forces another person to do or refrain from doing something in order to obtain an unjust benefit for himself or herself or for others to the detriment of others. For example, the new crime could apply in cases where a person, having hacked into a computer system and manipulated or damaged information, data or programs, demands a ransom for the restoration of the computer system and its data.

    In addition, the Cybersecurity Law provides for: (i) the allocation of the preliminary investigation of cybercrimes to the district prosecutor’s office; (ii) the application of a “simplified” system for granting an extension of the preliminary investigation period for cybercrimes;[15] and (iii) the extension of the maximum period for preliminary investigation to two years.

    10. Amendments to Decree 231 and next steps for companies

    The Cybersecurity Law introduces significant amendments to Decree 231. In particular, the Cybersecurity Law:

    • Increases the penalties for cybercrimes established by Article 24-bis of Decree 231, providing for (i) a maximum fine of € 1,084,300 for the offenses referred to in Article 24-bis(1) of Decree 231,[16] and (ii) a maximum fine of € 619,600 for the offenses referred to in Article 24-bis(2)[17] of Decree 231;[18]
    • Expands the list of crimes that may trigger liability for companies and other legal entities under Decree 231, by including the new crime of cyber-extortion (new Article 24-bis(1-bis) of Decree 231), which is subject to the following penalties: (i) a maximum fine of € 1,239,200, and (ii) disqualification penalties set out in Article 9(2) of Decree 231 (i.e., disqualification from conducting business; suspension or revocation of authorizations, licenses or concessions instrumental to the commission of the crime; prohibition from entering into contracts with the public administration; exclusion from grants, loans, contributions and subsidies with the possible revocation of those already granted; and ban on advertising goods and services) for a period of at least two years.

    In light of these developments, companies should consider reviewing and updating their policies and procedures to ensure that they are adequate to prevent new offenses that may trigger liability under Decree 231. In particular, companies should consider implementing new and more specific control measures, in addition to those already in place to prevent the commission of cybercrimes (which may already constitute a safeguard, even with respect to the newly introduced crime of cyber-extortion). Measures may include ensuring the proper use of IT tools, maintaining security standards for user identity, data integrity and confidentiality, monitoring employee network usage, and providing targeted information and training to company personnel.

    11. Conclusion

    The new Cybersecurity Law, while fitting into a complex regulatory framework that will require further changes in the short term (notably, the NIS 2 Directive must be implemented by October 2024), nevertheless represents a concrete response to the sudden and substantial increase in cyber threats. In particular, the expansion of incident reporting requirements to include new stakeholders and the introduction of stricter reporting deadlines for incidents not affecting ICT Assets aim to enhance national cyber resilience and security. This approach should give critical infrastructure providers better control over cybersecurity incidents.

    The increased penalties for cybercrimes, the introduction of new criminal offenses, and the developments regarding corporate liability under Decree 231 are also consistent with the above objectives. These measures are intended to tackle the increasing threat of cybercrime, although their effectiveness in practice remains to be seen.


    [1] According to the 2024 Report published by the Italian Association for Information Security (“CLUSIT”), in 2023 cyber-attacks increased by 11% globally and by 65% in Italy.

    [2] Together with the relevant implementing decrees: Italian President of the Council of Ministers’ Decree (“DPCM”) No. 131 of July 30, 2020; Italian Presidential Decree (“DPR”) No. 54 of February 5, 2021; DPCM No. 81 of April 14, 2021; Italian Legislative Decree No. 82 of June 14, 2021; DPCM of June 15, 2021; DPCM No. 92 of May 18, 2022; and the NCA Resolution of January 3, 2023 (the “NCA Resolution”).

    [3] However, the Cybersecurity Law does not specifically refer to Directive (EU) 2022/2555 (the “NIS 2 Directive”), which Member States are required to implement by October 17, 2024.

    [4] Specifically, according to the Cybersecurity Law, the following are considered public administrations: central public administrations included in the ISTAT annual list of public administrations; regions and the autonomous provinces of Trento and Bolzano; metropolitan cities; municipalities with a population of more than 100,000 inhabitants and, in any case, regional capitals; urban public transportation companies with a catchment area of not less than 100,000 inhabitants; suburban public transportation companies operating within metropolitan cities; and local health care companies.

    [5] See https://www.gazzettaufficiale.it/eli/id/2023/01/10/23A00114/sg.

    [6] See Article 18, paragraphs 3, 4 and 5 of Italian Legislative Decree No. 65/2018.

    [7] Defined, in accordance with Art. 1, letter m) of DPCM 131/2020, as a “set of networks, information systems and information services, or parts thereof, of any nature, considered unitarily for the purpose of performing essential functions of the State or for the provision of essential services.”

    [8] Operators referred to in Article 2(2) of the Digital Administration Code (Italian Legislative Decree No. 82/2005).

    [9] Listed in the register provided for in Article 106 of the Consolidated Law on Banking and Credit, referred to in Italian Legislative Decree No. 385/1993.

    [10] New paragraph 2-ter of Article 640 ICC.

    [11] In particular, Article 623-quater ICC applies to the criminal offenses set out in Articles 615-ter (Unauthorized access to a computer or telematic system), 615-quater (Possession, distribution and unauthorized installation of tools, codes and other means of access to computer or telematic systems), 617-quater (Unlawful interception, obstruction, or disruption of computer or telematic communications), 617-quinquies (Possession, distribution and unauthorized installation of tools and other means to intercept, obstruct or interrupt computer or telematic communications) and 617-sexies ICC (Falsifying, altering or suppressing the content of computer or telematic communications). Article 639-ter ICC instead applies to the criminal offenses set out in Articles 629(3) (new crime of cyber-extortion), 635-ter (Damage to information, data and computer programs of a public nature or interest), 635-quater.1 (Unauthorized possession, distribution, or installation of tools, devices, or programs designed to damage or interfere with a computer or telematic system) and 635-quinquies ICC (Damage to public utility computer or telematic systems).

    [12] The new provision addresses the same conduct for which penalties were provided for under former Article 615-quinquies ICC and provides for the same penalties, with the addition of the aggravating circumstances set out in Article 615-ter(2.1) and Article 615-ter(3) ICC.

    [13] In particular, a penalty of imprisonment of 8 to 22 years and a fine of € 6,000 to € 18,000 applies if the aggravating circumstances referred to in paragraph 3 of Article 628 ICC (i.e., the aggravating circumstances provided for the crime of robbery) are met, or where the crime is committed against a person incapacitated by age or infirmity.

    [14] That is, those set out in Articles 615-ter, 617-quater, 617-sexies, and 635-bis (Damage to computer information, data and programs), 635-quater (Damage to computer or telematic systems) and 635-quinquies ICC.

    [15] In particular, the “simplified” regime is provided for under Article 406(5-bis) ICCP, which provides that the judge shall issue an order within ten days from the submission of the request for extension of the preliminary investigation period by the public prosecutor. This provision, which is reserved for particularly serious crimes, is intended to allow a more timely and effective investigation of the commission of the crime.

    [16] That is, the crimes under Articles 615-ter, 617-quater, 617-quinquies, 635-bis, 635-ter, 635-quater and 635-quinquies ICC.

    [17] That is, the crimes under Articles 615-quater and 635-quater(1) ICC.

    [18] The disqualification penalties provided for these cybercrimes remain unchanged.

    EU Court of Justice confirms earlier case law on broad interpretation of “personal data” and offers extensive interpretation of “joint controllership”, with possible broad ramifications in the AdTech industry and beyond

    On March 7, 2024, the Court of Justice of the European Union (the “CJEU”) handed down its judgment in the IAB Europe case, answering a request for a preliminary ruling under Article 267 TFEU from the Brussels Market Court.[1]  The case revolves around IAB Europe’s Transparency and Consent Framework (“TCF”) and has been closely monitored by the AdTech industry ever since the Belgian DPA investigated and subsequently imposed a 250,000 euro fine on IAB Europe for alleged breaches of GDPR and e-Privacy rules back in 2022.[2]

    Factual Background

    IAB Europe is a European-level standard-setting association for the digital marketing and advertising ecosystem.  Back in 2018, when the GDPR became applicable, it designed the TCF as a set of rules and guidelines that addresses challenges posed by GDPR and e-Privacy rules in the context of online advertising auctions (such as real-time bidding).  The goal was to help AdTech companies that do not have any direct interaction with the website user (i.e., any company in the AdTech ecosystem that is not the website publisher, such as ad-networks, ad-exchanges, demand-side platforms) to ensure that the consent that the website publisher obtained (through cookies or similar technologies) is valid under the GDPR (i.e., freely given, specific, informed and unambiguous) and that, therefore, those AdTech companies can rely on that consent to serve ads to those users in compliance with GDPR and e-Privacy rules.

    On a technical level, in simplified terms, the TCF is used to record consent (or lack thereof), or objections to the reliance on legitimate interests under GDPR, among IAB’s members by storing the information on consents and objections in a Transparency and Consent String (the “TC String”).  The TC String is a coded representation (a string of letters and numbers) of a user’s preferences, which is shared with data brokers and advertising platforms participating in the TCF auction protocol who would not otherwise have a way to know whether users have consented or objected to the processing of their personal data.[3]
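    For readers curious how such a string works mechanically, the toy sketch below illustrates the general idea: a handful of preference flags packed into a bit field and serialized as a URL-safe base64 string that can be passed between AdTech participants. The six-bit version field and the per-purpose layout are invented for illustration; this is not the official IAB TCF encoding, which additionally records, broadly speaking, vendor-level choices and metadata such as timestamps.

```python
# Toy illustration of the idea behind a consent string: pack a user's per-purpose
# choices into a bit field and serialize it as a compact, URL-safe base64 string.
# This is NOT the official IAB TCF encoding; the layout below is hypothetical.
import base64

def encode_preferences(version: int, purpose_consents: list[bool]) -> str:
    """Pack a version number (6 bits, toy layout) and per-purpose consent bits."""
    bits = format(version, "06b")
    bits += "".join("1" if consent else "0" for consent in purpose_consents)
    bits += "0" * (-len(bits) % 8)  # pad to a whole number of bytes
    raw = int(bits, 2).to_bytes(len(bits) // 8, "big")
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode("ascii")

def decode_preferences(tc_string: str, num_purposes: int) -> tuple[int, list[bool]]:
    """Reverse of encode_preferences: recover the version and the consent bits."""
    padded = tc_string + "=" * (-len(tc_string) % 4)
    raw = base64.urlsafe_b64decode(padded)
    bits = "".join(format(byte, "08b") for byte in raw)
    version = int(bits[:6], 2)
    consents = [bit == "1" for bit in bits[6:6 + num_purposes]]
    return version, consents

if __name__ == "__main__":
    s = encode_preferences(2, [True, False, True, True, False])
    print(s)                         # compact string, e.g. "CsA"
    print(decode_preferences(s, 5))  # (2, [True, False, True, True, False])
```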

    First Question: Does the TC String constitute Personal Data?

    The CJEU now ruled, echoing its earlier decision in Breyer,[4] that the TC String may constitute personal data under the GDPR to the extent those data may, by “reasonable means”, be associated with an identifier such as an IP address, allowing the data subject to be (re-)identified.  The fact that IAB Europe can neither access the data that are processed by its members under its membership rules without an external contribution, nor combine the TC String with other factors itself, did not preclude the TC String from potentially being considered personal data according to the CJEU.[5] 

    Second Question: Does IAB Europe act as Data Controller?

    Secondly, the Court decided that IAB Europe, as a sectoral organization proposing a framework of rules regarding consent to personal data processing, which contains not only binding technical rules but also rules setting out in detail the arrangements for storing and disseminating personal data, should be deemed a joint controller together with its members if and to the extent it exerts influence over the processing “for its own purposes” and, together with its members, determines the means behind such operations (e.g., through technical standards).  In the IAB Europe case, this concerns in particular the facilitation by IAB of the sale and purchase of advertising space among its members and its enforcement of rules on TC String content and handling.  It also seemed particularly relevant to the Court that IAB Europe could suspend membership in case of breach of the TC String rules and technical requirements by one of its members, which may result in the exclusion of that member from the TCF.

    Further, in keeping with earlier CJEU case-law[6], the Court found it irrelevant that IAB Europe does not itself have direct access to the personal data processed by its members.  This does not in and of itself preclude IAB Europe from holding the status of joint controller under GDPR.

    However, the Court also reiterated that joint controllership does not automatically extend to subsequent processing by third parties, such as – in this case – website or application providers further processing the TC String following its initial creation, unless the joint controller continues to (jointly) determine the purpose and means of that subsequent processing.  This is in line with the Court’s 2019 Fashion ID judgment.[7]  In addition, the Court opined that the existence of joint controllership “does not necessarily imply equal responsibility” of the various operators engaged in the processing of personal data. The level of responsibility of each individual operator must be assessed in the light of all the relevant circumstances of a particular case, including the extent to which the different operators are involved at different stages of the data processing or to different degrees.  So not all joint controllers are created equal.

    Key Takeaways

    In our view, the first finding is not groundbreaking.  It largely confirms the Court’s previous case-law establishing that “personal data” must be interpreted broadly under GDPR, meaning the standard for truly “anonymized data” continues to be very high.  It will now be for the Brussels Market Court to determine whether, based on the specific facts of the IAB Europe case, the TC String indeed constitutes personal data.

    The second finding may have caught more people off guard.  While it will again be up to the Brussels Market Court to determine whether IAB Europe is actually a joint controller in respect of the personal data alleged to be included in the TC String, the Court’s expansive interpretation of the concept of joint controllership (i.e., where “two or more controllers jointly determine the purposes and means of processing” (Article 26 GDPR)) could have broader ramifications beyond the AdTech industry. 

    Organizations who until now have consistently taken the position that they do not qualify as a data controller in respect of data processing activities of their members, users or customers, may need to re-assess that position and, based on the specific factual circumstances relevant to them, consider whether they might in fact be subject to GDPR’s onerous obligations imposed on data controllers.  This may be particularly relevant for standard-setting bodies and industry associations active or established in Europe, potentially hampering their ability to continue developing relevant standards and rules.  Arguably, this could even capture certain providers or deployers of software and other computer systems, including those developing or deploying AI models and systems, in case they would be found to issue “binding technical rules” and “rules setting out in detail the arrangements for storing and disseminating personal data”, and they would actually enforce those rules against third parties using their models and systems to process personal data. 

    Even if some solace can be found from a liability perspective in the confirmation by the Court that joint controllership relating to the initial collection of personal data does not automatically extend to the subsequent processing activities carried out by third parties, and that not all joint controllers are created equal, the compliance burden on “newfound joint controllers” may nevertheless be significant because key obligations on lawfulness, transparency, data security and accountability are triggered irrespective of the “degree” of controllership in question.

    In our view, such an expansive reading would take the concept of “joint controllership” too far beyond its literal meaning and originally intended purpose, but it remains to be seen which other enforcement actions will be taken and which other cases raising similar questions may find their way through the European courts in the coming months and years.


    [1]           CJEU, judgment of March 7, 2024, IAB Europe, C-604/22, ECLI:EU:C:2024:214 (https://curia.europa.eu/juris/document/document.jsf?text=&docid=283529&pageIndex=0&doclang=FR&mode=req&dir=&occ=first&part=1&cid=167405).

    [2]           For more information on the original case in front of the Belgian DPA, see the DPA’s dedicated landing page: https://www.dataprotectionauthority.be/iab-europe-held-responsible-for-a-mechanism-that-infringes-the-gdpr.

    [3]           For more information, see the IAB Europe website: https://iabeurope.eu/.

    [4]           CJEU, judgment of 19 October 2016, Breyer, C‑582/14, EU:C:2016:779, paragraphs 41-49 (https://curia.europa.eu/juris/document/document.jsf?text=&docid=184668&pageIndex=0&doclang=EN&mode=lst&dir=&occ=first&part=1&cid=1303370).

    [5]           Recital 26 of GDPR further clarifies that, “to ascertain whether means are reasonably likely to be used to identify the natural person, account should be taken of all objective factors, such as the costs of and the amount of time required for identification, taking into consideration the available technology at the time of the processing and technological developments.”  This will always require a fact-intensive, case-by-case inquiry, but it is now even more clear that “it is not required that all the information enabling the identification of the data subject must be in the hands of one person” (CJEU, IAB Europe judgment, §40).

    [6]           CJEU, judgment of July 10, 2018, Jehovan todistajat, C‑25/17, EU:C:2018:551, paragraph 69 (https://curia.europa.eu/juris/document/document.jsf?text=&docid=203822&pageIndex=0&doclang=EN&mode=lst&dir=&occ=first&part=1&cid=1305431), and CJEU; judgment of June 5, 2018, Wirtschaftsakademie Schleswig-Holstein, C‑210/16, EU:C:2018:388, paragraph 38 (https://curia.europa.eu/juris/document/document.jsf?text=&docid=202543&pageIndex=0&doclang=EN&mode=lst&dir=&occ=first&part=1&cid=1305548).

    [7]           CJEU, judgment of July 29, 2019, Fashion ID, C‑40/17, EU:C:2019:629, paragraph 74 (https://curia.europa.eu/juris/document/document.jsf?text=&docid=216555&pageIndex=0&doclang=EN&mode=lst&dir=&occ=first&part=1&cid=1305826), as commented on in our earlier blog post here: https://www.clearycyberwatch.com/2019/08/cjeu-judgment-in-the-fashion-id-case-the-role-as-controller-under-eu-data-protection-law-of-the-website-operator-that-features-a-facebook-like-button/; See also the EDPB Guidelines 07/2020 on the concepts of controller and processor in the GDPR (version 2.1, adopted on July 7, 2021), in relation to the concept of “converging decisions”, at paragraphs 54-58 (https://www.edpb.europa.eu/system/files/2023-10/EDPB_guidelines_202007_controllerprocessor_final_en.pdf).

    Biden Administration Executive Order Targets Bulk Data Transactions

    The Biden administration recently issued Executive Order 14117 (the “Order”) on “Preventing Access to Americans’ Bulk Sensitive Personal Data and United States Government-Related Data by Countries of Concern.”  Building upon earlier Executive Orders[1], the Order was motivated by growing fears that “countries of concern” may use artificial intelligence and other advanced technologies to analyze and manipulate bulk sensitive personal data for nefarious purposes.  In particular, the Order notes that unfettered access to Americans’ bulk sensitive personal data and United States governmental data by countries of concern, whether via data brokers, third-party vendor agreements or otherwise, may pose heightened national security risks. To address these possibilities, the Order directs the Attorney General to issue regulations prohibiting or restricting U.S. persons from entering into certain transactions that pose an unacceptable risk to the national security of the United States.  Last week, the Department of Justice (“DOJ”) issued an Advance Notice of Proposed Rulemaking, outlining its preliminary approach to the rulemaking and seeking comments on dozens of issues ranging from the definition of bulk U.S. sensitive personal data to mitigation of compliance costs.

    The forthcoming proposed rule will apply to transactions that (i) involve bulk sensitive personal data or U.S. Government-related data; (ii) are part of a class of transactions determined by the Attorney General to pose an unacceptable risk to the national security of the U.S.; (iii) were initiated, are pending, or will be completed after the effective date of the regulations; (iv) do not qualify for an exemption and are not authorized by a license as set forth in the regulations; and (v) are not “incident to and part of the provision of financial services, including banking, capital markets, and financial insurance services, or required for compliance with any Federal statutory or regulatory requirements.”  The proposed rule will be published for public notice and comment by August 26, 2024.  Notably, the Order specifically does not impose generalized data localization requirements or prohibit commercial transactions with countries of concern, but rather is tailored to the types of transactions described above.

    The proposed rule will also (i) identify classes of prohibited transactions; (ii) identify classes of restricted transactions; (iii) identify countries of concern and other covered persons; (iv) establish mechanisms to provide further clarity regarding the Order and any implementing regulations; (v) establish a process to issue licenses authorizing transactions that would otherwise be prohibited or restricted; (vi) define relevant terms; (vii) address coordination with other government entities; and (viii) address the need for recordkeeping and reporting of transactions to inform investigative, enforcement, and regulatory efforts.  Among other factors, the proposed regulations will consider both the nature of the class of transaction and the volume of bulk sensitive personal data involved.  Any proposed regulations will also “establish thresholds and due diligence requirements for entities to use in assessing whether a transaction is a prohibited transaction or a restricted transaction.”  Additionally, the Secretary of Homeland Security is directed to propose and seek public comment on security requirements to mitigate the risk posed by restricted transactions.  The security requirements will be based on the National Institute of Standards and Technology Cybersecurity and Privacy Frameworks.  The Secretary of Homeland Security will also issue interpretive guidance regarding such security requirements and the Attorney General will issue enforcement guidance.

    Several other agencies are also directed or advised by the Order to address risks relating to network infrastructure, health data and human genomic data, and the data brokerage industry.  The Order also requires the  Attorney General, the Secretary of Homeland Security, and the Director of National Intelligence to make recommendations as to how to mitigate risks from transfers of bulk sensitive personal data to countries of concern that have already occurred.

    Many of the key concepts in the Order, including “countries of concern” and prohibited and restricted transactions will be further defined and clarified through the rulemaking process. However, it is clear that transactions involving cross-border transfers of large quantities of sensitive personal information will be the enhanced focus of regulatory scrutiny and eventual enforcement, particularly if it involves countries of concern.  The DOJ is accepting comments to the Advance Notice of Proposed Rulemaking until April 19, 2024.  The public will also have the opportunity to comment on the DOJ’s proposed rule later this year.


    [1] Executive Order 13873 of May 15, 2019 (Securing the Information and Communications Technology and Services Supply Chain) and Executive Order 14034 of June 9, 2021 (Protecting Americans’ Sensitive Data from Foreign Adversaries).

    New Privacy Laws Enacted in New Jersey and New Hampshire

    On January 16, 2024, New Jersey officially became one of a growing number of states with comprehensive privacy laws, as Governor Phil Murphy signed Senate Bill 332 (the “New Jersey Privacy Act”) into law.[1]  New Hampshire followed closely behind, with its own comprehensive privacy law, Senate Bill 255 (the “New Hampshire Privacy Act” and, together with the New Jersey Privacy Act, the “Acts”), signed into law by Governor Chris Sununu on March 6, 2024.[2]

    As with many of the other comprehensive privacy laws enacted around the country in the past few years, the Acts are based on the Washington Privacy Act model, containing many familiar consumer rights and protections, though with some notable differences highlighted below.  Joining all currently enacted comprehensive U.S. state privacy laws with the exception of California, the New Jersey Privacy Act and the New Hampshire Privacy Act do not include a private right of action and do not apply to New Jersey or New Hampshire residents acting in a commercial or employment context.  The New Jersey Privacy Act will come into effect 365 days from enactment, or January 15, 2025, with certain provisions, including those regarding universal opt-out mechanisms discussed below, coming into effect later in 2025, while the New Hampshire Privacy Act will come into effect on January 1, 2025.

    Applicability

    Processing Thresholds.  Following the trend set by other comprehensive state privacy laws, such as those in Connecticut and Colorado, the New Jersey Privacy Act applies to controllers that (i) conduct business in New Jersey or produce products or services that are targeted to New Jersey residents and (ii) during a calendar year either control or process the personal data of (a) at least 100,000 consumers (i.e., New Jersey residents acting in an individual or household context), excluding personal data processed solely for the purpose of completing a payment transaction or (b) at least 25,000 consumers and derive revenue, or receive a discount on the price of any goods or services, from the sale[3] of personal data.

    The New Hampshire Privacy Act similarly follows the applicability standards of many prior state privacy laws, though with a few changes to account for the smaller population of the state.  The New Hampshire Privacy Act applies to persons that (i) conduct business in New Hampshire or produce products or services that are targeted to New Hampshire residents and (ii) during a one-year period either control or process the personal data of (a) not less than 35,000 unique consumers (i.e., New Hampshire residents acting in an individual or household context), excluding personal data controlled or processed solely for the purpose of completing a payment transaction or (b) not less than 10,000 unique consumers and derive more than 25 percent of gross revenue from the sale of personal data.
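    For readers who prefer to see the two applicability tests side by side, the sketch below restates the thresholds described above as simple boolean checks. The structure and names are ours, and the sketch deliberately ignores the exemptions, the statutory definition of “sale” and other facts a real applicability analysis must consider, so treat it as a reading aid rather than a compliance tool.

```python
# Reading aid only: the New Jersey and New Hampshire applicability thresholds
# described above, restated as boolean checks. Names and structure are ours;
# exemptions and definitional nuances are intentionally omitted.
from dataclasses import dataclass

@dataclass
class ControllerProfile:
    does_business_or_targets_state: bool  # prong (i) for the relevant state
    consumers_processed: int              # unique consumers, excluding payment-only processing
    derives_revenue_from_sales: bool      # NJ: any revenue or discount from sales of personal data
    share_of_revenue_from_sales: float    # NH: fraction of gross revenue from sales (0.0 - 1.0)

def nj_threshold_met(p: ControllerProfile) -> bool:
    """New Jersey: 100,000 consumers, or 25,000 consumers plus revenue/discount from sales."""
    return p.does_business_or_targets_state and (
        p.consumers_processed >= 100_000
        or (p.consumers_processed >= 25_000 and p.derives_revenue_from_sales)
    )

def nh_threshold_met(p: ControllerProfile) -> bool:
    """New Hampshire: 35,000 consumers, or 10,000 consumers plus >25% of gross revenue from sales."""
    return p.does_business_or_targets_state and (
        p.consumers_processed >= 35_000
        or (p.consumers_processed >= 10_000 and p.share_of_revenue_from_sales > 0.25)
    )
```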

    Exceptions.  While the New Jersey Privacy Act contains some common exceptions to applicability, such as for protected health information collected by a covered entity or business associate under the Health Insurance Portability and Accountability Act or financial institutions and their affiliates or data subject to the Gramm-Leach-Bliley Act, there is no exception for non-profit organizations or higher education institutions.  Non-profit organizations that may be exempt under many other state privacy laws (i.e., Colorado, Delaware (which only exempts nonprofits dedicated to preventing and addressing insurance crime) and Oregon (where the non-profit applicability exemption will expire in July of 2025)) will need to pay close attention to the New Jersey Privacy Act, since such an organization will need to meet the standard requirements of the New Jersey Privacy Act if it meets the general applicability threshold by either processing or selling the personal data of the relevant number of New Jersey-based consumers. 

    The New Hampshire Privacy Act also contains many of the familiar exceptions to applicability, including for non-profit organizations and higher education institutions.  However, the exception for financial institutions or data subject to Title V of the Gramm-Leach-Bliley Act, does not include affiliates of such institutions.  Entities that have some affiliates that are subject to the Gramm-Leach-Bliley Act but others that are not will need to carefully consider applicability under the New Hampshire Privacy Act.

    Data Protected

    Both Acts apply to a similar set of data as other state comprehensive privacy laws, applying to personal data that is “linked or reasonably linkable to an identified or identifiable individual.”[4] However, there are a few notable expansions in the types of data the Acts cover and the protections afforded to certain data when compared with other similar state privacy laws.

    Sensitive Data.  The definition of sensitive data under the New Jersey Privacy Act includes not only typical information such as personal data revealing racial or ethnic origin, religious beliefs, mental or physical health condition, etc., but also a few more unique categories.  First, like California, the definition encompasses financial information, which includes a consumer’s account number, account log-in, financial account or credit or debit card number in combination with any required security or access code or password that would permit access to a consumer’s financial account.  Following Oregon and Delaware’s definitions, sensitive data also includes personal data revealing status as transgender or non-binary.  Conversely, the New Hampshire Privacy Act’s sensitive data definition largely aligns with other state laws, without such additions.  Like other state privacy laws with the exception of California, both Acts require consumer consent to process sensitive data, and such processing additionally requires controllers to conduct data protection assessments, as discussed later in this post. 

    Children’s and Minors’ Data.  In addition to requirements to process personal data of children under the age of 13 in accordance with the Children’s Online Privacy Protection Act, the New Jersey Privacy Act requires controllers to obtain consent before processing personal data for purposes of targeted advertising, selling personal data or profiling in furtherance of decisions that produce legal or similarly significant effects where the controller has actual knowledge, or willfully disregards, that the consumer is at least 13 years old but younger than 17 years old.  The New Hampshire Privacy Act has a similar requirement as regards the processing of a minor’s data, but consent is only required where a controller is processing personal data for purposes of targeted advertising or selling personal data (and not profiling) and the requirement applies when a controller both has actual knowledge and willfully disregards that the consumer is at least 13 years old but younger than 16.

    Other Notable Provisions

    While this post does not attempt to cover all provisions of the Acts, there are a few additional provisions that differentiate the New Jersey Privacy Act and the New Hampshire Privacy Act from similar state privacy acts.

    Website Link.  Similar to California, the New Hampshire Privacy Act requires that controllers provide a “conspicuous link” on the controller’s website that enables a consumer or their agent to opt-out of targeted advertising or the sale of personal data.

    Data Protection Assessments.  Like other state privacy laws, both Acts require controllers to conduct data protection assessments for processing activities that present a heightened risk of harm to a consumer.  The New Jersey Privacy Act is unique, however, in that it makes clear that such assessments must be conducted before the relevant processing activity requiring such assessment can occur.  In other words, controllers are expressly prohibited from conducting processing activities that present a heightened risk of harm to consumers without first conducting and documenting a data protection assessment of each of its processing activities involving personal data acquired on or after the New Jersey Privacy Act’s effective date.  Fortunately, in line with the requirements set forth under other state regimes, including New Hampshire, “heightened risk” is defined to include processing personal data for targeted advertising,  profiling if it presents certain reasonably foreseeable risks, selling personal data and processing sensitive data, and the items required to be considered in the data protection assessments, including weighing benefits of processing against rights of the consumer and using de-identified data, are also in line with other states’ requirements.  Accordingly, to the extent controllers covered by the Acts who engage in the aforementioned processing activities are also subject to the requirements to conduct data protection assessments under other currently effective privacy regimes, such controllers should be able to leverage such assessments for compliance purposes.

    Universal Opt-Out.  Both Acts require controllers to recognize universal opt-out signals if controllers undertake certain processing activities.  The New Jersey Privacy Act provides that no later than 6 months after the New Jersey Privacy Act’s effective date, controllers that process personal data for targeted advertising or that sell personal data must allow consumers to exercise their rights to opt-out of such processing through a user-selected universal opt-out mechanism (the technical specifications for which will be subject to further regulation as discussed below).  Under the New Hampshire Privacy Act, controllers that process personal data for targeted advertising or sell personal data must allow consumers to opt-out through an opt-out preference signal no later than January 1, 2025, which is the same as the New Hampshire Privacy Act’s effective date.  Both Acts set forth a number of requirements for the universal opt-out mechanisms, with New Hampshire’s aligning more closely with terms used in other state privacy laws that contain universal opt-out mechanisms such as Colorado and Connecticut; however, both Acts instruct that the universal opt-out mechanisms should be “as consistent as possible” with similar mechanisms required by federal or state law or regulation, highlighting the intent to encourage standard opt-out mechanisms. 
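    Neither Act prescribes a particular signal by name (and New Jersey leaves the technical specifications to future rulemaking), but the Global Privacy Control (“GPC”), transmitted by participating browsers as the Sec-GPC request header, is one widely deployed example of such an opt-out preference signal. The sketch below, which assumes a GPC-style header, shows only the general shape of server-side detection and is not a statement of what either Act requires.

```python
# Minimal sketch of server-side detection of a universal opt-out signal. The Global
# Privacy Control ("GPC"), sent by participating browsers as the "Sec-GPC: 1" request
# header, is used as a stand-in; neither Act mandates GPC by name, so this is
# illustrative only, not a compliance recipe.
from typing import Mapping

def opt_out_signal_present(headers: Mapping[str, str]) -> bool:
    """Return True if the request carries a GPC-style opt-out preference signal."""
    normalized = {name.lower(): value.strip() for name, value in headers.items()}
    return normalized.get("sec-gpc") == "1"

def apply_opt_out(headers: Mapping[str, str], preferences: dict) -> dict:
    """Honor the signal by switching off targeted advertising and sales for this consumer."""
    if opt_out_signal_present(headers):
        preferences["targeted_advertising"] = False
        preferences["sale_of_personal_data"] = False
    return preferences
```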

    Rulemaking.  New Jersey becomes only the third state with a comprehensive privacy law to specifically contemplate rulemaking by a state agency, joining California and Colorado.  Here, the Director of the Division of Consumer Affairs in the Department of Law and Public Safety is empowered to promulgate rules and regulations necessary to effectuate the purposes of the New Jersey Privacy Act, including with regard to universal opt-out mechanisms as discussed above.  No timeline is given for the enactment of such rules, but as seen in the rulemaking process occurring in California, such rules could have significant impacts on privacy requirements in the state.  The New Hampshire Privacy Act provides for only limited rulemaking by the secretary of state with respect to establishing standards for “clear and meaningful” privacy notices and the means by which consumers may submit requests to exercise their rights.

    Sunsetting Cure Periods.  Both Acts contain cure periods before actions are brought against controllers (30 days in New Jersey and 60 days in New Hampshire), but these cure periods are set to expire under each of the Acts.  The New Jersey Privacy Act requires the Division of Consumer Affairs in the Department of Law and Public Safety to issue a notice to the controller in violation if cure is deemed possible, up until 18 months after the effective date of the act (July 2026), whereas the New Hampshire Privacy Act requires the attorney general to issue a notice of violation to the controller if cure is possible only until December 31, 2025, after which the notice of violation is discretionary.  The sunsetting cure periods indicate that the states expect entities to come into compliance with the new requirements reasonably quickly.

    Conclusion

    The New Jersey Privacy Act and the New Hampshire Privacy Act do not break the mold when it comes to comprehensive privacy laws in the United States.  However, differences in applicability, scope of protection and requirements on data controllers mean that businesses must pay close attention to the nuances of each new privacy law enacted to ensure continued compliance.


    [1] The full text of Senate Bill 332 is available here.

    [2] The full text of Senate Bill 255 is available here.

    [3] Note that both the New Jersey Privacy Act and New Hampshire Privacy Act define “sales” to include exchanges of personal data to a third party for monetary or other valuable consideration. 

    [4] This definition in both Acts also carves out de-identified and publicly available information which follow the definitions set forth under other state privacy laws; however, the New Jersey Privacy Act is silent with respect to pseudonymous data, suggesting that such data may qualify as personal data subject to the New Jersey Privacy Act’s requirements and restrictions. By contrast, the New Hampshire Privacy Act provides that certain of the rights afforded to consumers do not apply to pseudonymized data in cases where the controller is able to demonstrate that any information necessary to identify the consumer is kept separately and subject to effective controls to prevent the controller from accessing it.
