
One House Democrat is pressing Commerce on the government’s spyware use

7 May 2026 at 06:00

A House Democrat who’s been at the forefront of congressional efforts to scrutinize the federal government’s use of commercial spyware wants the Commerce Department to brief Capitol Hill amid apprehension that the Trump administration might further embrace the technology.

Rep. Summer Lee, D-Pa., sent a letter to the department Thursday seeking a briefing on several developments stemming from Immigration and Customs Enforcement acknowledging its use of Paragon’s Graphite spyware, as well as an American company purchasing a controlling stake in Israel’s NSO Group. The Commerce Department sanctioned NSO Group under former President Joe Biden after widespread abuse allegations, including eavesdropping on government officials, activists and journalists.

“The Trump Administration appears to be broadly receptive to using commercial spyware to infiltrate cell phones and allowing U.S. investment in sanctioned spyware companies like NSO Group,” Lee wrote in her letter to Commerce Secretary Howard Lutnick, which CyberScoop is first reporting.

NSO Group’s new executive chairman, David Friedman, is a former Trump ambassador to Israel and served as his bankruptcy attorney. He said in November that he expects the administration will be “receptive” to using NSO Group tech.

“Given those close ties between NSO Group and the Trump Administration, and the serious concerns about how NSO’s technology could be used to spy on Americans, we write to request information regarding the purchase of NSO Group by an American company and the potential usage of NSO Group spyware by federal law enforcement,” wrote Lee, who sits on the Oversight and Government Reform panel and is the top Democrat on its Federal Law Enforcement Subcommittee.

Lee was one of the authors of a recent Democratic letter seeking confirmation of ICE’s use of Paragon’s Graphite, which ICE acknowledged. But the lawmakers, beyond expressing outrage, criticized the administration for not answering all of their questions.

In her latest letter, Lee asked the Commerce Department to brief Oversight and Government Reform Committee staff about internal department deliberations, Commerce communication with the White House and any outside conversations — including with Friedman — about government use of NSO Group technology or any other commercial spyware, and American investment in NSO.

NSO Group “appears to view the Trump administration as friendly to its interests in the United States, pitching itself as a vital tool for the U.S. government to safeguard national security,” Lee wrote, citing company court filings stating that it “is reasonably foreseeable that a law enforcement or intelligence agency of the United States will use Pegasus.”

The Biden administration sanctions, and court losses in litigation with Meta, represented setbacks for NSO Group’s ambitions. And prior to the U.S. investment firm’s purchase of a controlling stake last fall, the Commerce Department under Trump rebuffed efforts to remove NSO Group from its sanctions list.

But the tens of millions of dollars’ worth of investment, which followed news that Israel had used Pegasus to track people kidnapped or murdered by Hamas, was a boon.

NSO Group maintains that its products are designed only to help law enforcement and intelligence fight terrorism and crime, and that it vets its customers in advance as well as investigates misuse. News accounts and other investigations have turned up a multitude of abuses.

There have been scattered reports of U.S. flirtation with using NSO Group technology. The FBI acknowledged it had bought a Pegasus license, but stopped short of deploying it. The Times of London reported that “it is believed” the Central Intelligence Agency used Pegasus spyware as part of a rescue mission last month for a U.S. airman downed in Iran.

You can read the full letter below.


Key Takeaways From the EDPB’s Draft Guidelines on Scientific Research

On April 15, 2026, the European Data Protection Board (EDPB) adopted guidelines on the processing of personal data for scientific research purposes.[1] The guidelines aim to clarify GDPR compliance requirements for scientific research involving personal data.

The concepts addressed by the EDPB are of particular relevance to companies active in life sciences, artificial intelligence (AI), and advanced technology R&D.

The guidelines are open for public consultation until June 25, 2026.

The most significant aspect of the guidelines is the EDPB’s clarification of what constitutes “genuine” scientific research. The guidelines set out six key indicative factors to be considered alongside the nature, scope, context, and purposes of the processing. These factors appear to restrict the scope of processing that can be classified as scientific research, meaning that researchers may need to re-evaluate whether their activities genuinely qualify for the GDPR’s more flexible treatment of scientific research.

Six-Factor Test to Define “Scientific Research” Under GDPR

The six key indicative factors are as follows:[2]

  1. Methodical and systematic approach: The research activities, including formulation and testing of a hypothesis, follow a methodical and systematic approach in the relevant field, for example in accordance with a comprehensive research plan.
  2. Adherence to ethical standards: The research activities adhere to ethical standards in the relevant field, including respect for human autonomy and consent, transparency, accountability, and (human) oversight.
  3. Verifiability and transparency: The research activities aim to achieve verifiable results, with hypotheses, methods, data and conclusions open to criticism (normally through peer review), and results shared with other parties, for example by publication.
  4. Autonomy and independence: The research activities are conducted autonomously and independently, with the research team having the freedom to define research questions, identify methods, choose scientific theories, and disseminate results. The researchers have academic or scientific qualifications in the relevant field.
  5. Objectives of the research: The research activities aim to contribute to the growth of society’s general knowledge and wellbeing. This does not exclude research that may also further commercial interests, but the EDPB does suggest in one of the examples included in the guidelines that research “solely concerned with furthering […] commercial interests” would not qualify.
  6. Potential to contribute to existing scientific knowledge or apply existing knowledge in novel ways: The research activities have the potential to contribute to existing scientific knowledge or apply existing knowledge in novel ways, and their scientific merits can be subject to assessment, review or approval by independent experts or committees.

If all six factors are met, the activities can be presumed to constitute scientific research. If not, the controller must justify and demonstrate why the activities should nonetheless qualify.
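Because the test works as a checklist with a rebuttable presumption, it lends itself to a simple illustration. The following Python sketch is purely illustrative: the attribute and function names are ours, not the EDPB’s, and it assumes a controller records a yes/no assessment for each factor.

    from dataclasses import dataclass

    # Hypothetical encoding of the EDPB's six key indicative factors; the
    # attribute names are illustrative, not taken from the guidelines.
    @dataclass
    class ResearchAssessment:
        methodical_and_systematic: bool
        adheres_to_ethical_standards: bool
        verifiable_and_transparent: bool
        autonomous_and_independent: bool
        serves_general_knowledge: bool      # not solely commercial interests
        advances_scientific_knowledge: bool

        def presumed_scientific_research(self) -> bool:
            # If all six factors are met, the activities can be presumed
            # to constitute scientific research.
            return all(vars(self).values())

    assessment = ResearchAssessment(True, True, True, True, True, False)
    if not assessment.presumed_scientific_research():
        # Absent the presumption, the controller must justify and demonstrate
        # why the activities should nonetheless qualify.
        print("No presumption: document why the activities still qualify.")

In practice the factors call for documented, qualitative assessments rather than simple booleans, but the presumption-plus-justification structure is as shown.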

Anonymization and Pseudonymization in the Context of Scientific Research

The remainder of the guidelines address GDPR compliance more generally in the context of scientific research, including with respect to: data protection principles, lawfulness of processing, transparency, data subjects’ rights, attribution of responsibility, and appropriate safeguards.

While these sections largely restate existing principles (albeit with useful clarifications on “broad” and “dynamic” consent, including through specific examples of how organizations can navigate the tension with the principles of specificity and purpose limitation as part of their overall data protection governance structure), the EDPB’s views on data minimization merit highlighting.[3] The EDPB takes the view that, because personal data must be “adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed”,[4] anonymization should be the default approach for scientific research. Once data is truly anonymized, it falls outside the scope of the GDPR entirely, although the anonymization process itself must still comply with GDPR requirements.[5] Where research aims cannot be achieved using anonymized data, personal data should be pseudonymized.[6] Processing data that can directly identify individuals should only occur where “strictly” necessary and proportionate to the research purpose.[7] Controllers will welcome the clarity provided by the guidelines, though ongoing compliance may require updates to internal processes. The full practical implications will become clearer once the dedicated guidance on anonymization and pseudonymization is published later this year.

Data subjects must be transparently informed about whether their data is processed in identifiable or pseudonymized form, and must not be misled into believing that their data is anonymized when it is not.[8]
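The minimization guidance can be read as a decision cascade: anonymize by default, pseudonymize where anonymized data cannot achieve the research aims, and use directly identifying data only where strictly necessary. The sketch below is a minimal illustration of that reading, with hypothetical function and parameter names; it is not a compliance tool.

    from enum import Enum

    class DataForm(Enum):
        ANONYMIZED = "anonymized"        # outside GDPR once truly anonymized
        PSEUDONYMIZED = "pseudonymized"  # still personal data under GDPR
        IDENTIFIABLE = "identifiable"    # only where strictly necessary

    def select_data_form(aims_met_with_anonymized: bool,
                         aims_met_with_pseudonymized: bool) -> DataForm:
        # Decision cascade reflecting the EDPB's reading of data minimization.
        if aims_met_with_anonymized:
            return DataForm.ANONYMIZED
        if aims_met_with_pseudonymized:
            return DataForm.PSEUDONYMIZED
        return DataForm.IDENTIFIABLE

    form = select_data_form(False, True)
    # Transparency duty: data subjects must not be misled into believing
    # pseudonymized data is anonymous.
    print(f"Process data in {form.value} form and disclose this to data subjects.")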

Other Recent EDPB Updates

In addition to adopting these guidelines, the EDPB established a dedicated “sprint team” to finalize its upcoming and much anticipated guidelines on anonymization by summer 2026.[9] The questions of when personal data qualifies as “anonymous” under the GDPR and under what circumstances personal data (including sensitive personal data) can be used to train AI models are currently also the subject of ongoing negotiations at EU level on the Digital Omnibus Package.[10]

Finally, the EDPB adopted two opinions approving two sets of Europrivacy certification criteria as a European Data Protection Seal, simplifying the data transfer process and enhancing accountability in high-risk sectors. The first approves an updated set of criteria whose scope now includes controllers and processors established outside Europe that are subject to Article 3(2) GDPR.[11] The second recognizes Europrivacy certification criteria as a European Data Protection Seal that can be used as a transfer mechanism under Articles 42 and 46 GDPR.[12] This will allow data importers outside Europe that are not subject to the GDPR to seek Europrivacy certification for transferred data they receive.


[1] EDPB Press Release, April 16, 2026, available here.

[2] EDPB Guidelines, section 2.1.

[3] EDPB Guidelines, section 8.3.

[4] GDPR Article 5(1)(c).

[5] EDPB Guidelines, para. 156.

[6] EDPB Guidelines, paras. 157-158.

[7] EDPB Guidelines, para. 159.

[8] EDPB Guidelines, para. 164.

[9] EDPB Press Release, April 16, 2026, available here.

[10] Cleary AI and Technology Insights, “Reset or rollback: Unpacking the EU’s Digital Omnibus Package”, November 21, 2025, available here.

[11] Opinion 14/2026 on the Europrivacy certification criteria regarding their approval by the Board as European Data Protection Seal pursuant to Article 42.5 GDPR, adopted April 15, 2026, available here.

[12] Opinion 15/2026 on the Europrivacy certification criteria regarding their approval by the Board as European Data Protection Seal to be used as tool for transfers pursuant to Articles 42 and 46 GDPR, adopted April 15, 2026, available here.

U.S. companies hit with record fines for privacy in 2025

By: djohnson
28 April 2026 at 03:30

U.S. states issued $3.45 billion in privacy-related fines to companies in 2025, a total larger than the last five years combined, according to research and advisory firm Gartner.

The increase is driven in part by stronger, more established privacy laws in states like California, new interstate partnerships built around enforcing laws across state lines, and a renewed focus on how AI and automation affect privacy.

The data indicates that “regulators are shifting their efforts away from awareness to full scale enforcement,” marking a significant shift from even the last few years in how aggressively states are investigating and penalizing companies for privacy law violations.

“This is increasingly becoming the standard in 2026 and for the coming two years,” Gartner’s analysis concludes.

Privacy-related fines have gone up significantly in recent years. (Source: Gartner)

The California Consumer Privacy Act’s consumer privacy provisions went live in 2023, but for years enforcement was largely dormant. According to Nader Heinen, a data protection and AI analyst at Gartner and co-author of the research, that enforcement lag mirrors the way other major privacy laws, like Europe’s General Data Protection Regulation, have been rolled out: regulators “lead with a bit of guidance” for companies while using enforcement sparingly.

But that era appears to be over. In 2025, the California Privacy Protection Agency used the law to pursue violators across a wide range of industries — not just large conglomerates, but smaller and mid-sized companies in tech, the auto industry, and consumer products, including off-the-shelf goods and apparel.

Heinen said some businesses “weren’t paying attention” and may have been lulled into complacency as regulators spun up their enforcement teams, leading to a harsh 2025.

“Unfortunately what happens when so much time passes between the legislation and starting enforcement regularly, is a lot of organizations let their privacy program atrophy,” he said.

States have also sought to combine their resources to target and penalize privacy violators across state lines. Last year, ten states came together to form the Consortium of Privacy Regulators, pledging to coordinate investigations and enforcement of common privacy laws around accessing, deleting and preventing the sale of personal information.

Beyond laws like the CCPA, states have been updating existing privacy and data-protection laws to more directly address harms from automated decision-making technologies, including AI. State privacy regulators are especially focused on how personal or private data is used to train AI systems and help them make inferences.

Gartner expects privacy fines to further increase in the coming years, and Heinen said states will likely again lead the way in building the legal infrastructure to enforce data privacy in the AI age, as they become the main conduit for lingering anxiety about the technology’s potential negative impacts.

“You have to put yourself in the position of these state legislatures,” Heinen said. “Their constituencies – the voting public – is telling them we’re worried about AI. AI anxiety is a thing. Everybody’s worried about whether AI is going to take their job or impact their capacity to find a job, so they want to see legislation in place to protect them.”

This past month, House Republicans unveiled their latest attempt to pass comprehensive federal privacy legislation with a bill that would preempt tougher state laws like those in California. The CCPA, notably, gives residents a private right of action – the legal right to sue companies directly – for violations of privacy laws.

On Monday, Tom Kemp, executive director of the California Privacy Protection Agency, wrote to House Energy and Commerce Chair Brett Guthrie, R-Ky., to oppose the bill, arguing it would provide “a ceiling” for Americans’ data privacy protections rather than a “floor” to build on.

“Preemption would strip away important existing state privacy provisions that protect tens of millions of Americans now,” Kemp wrote. “That would be a significant step backward in privacy protection at a time when individuals are increasingly concerned about their privacy and security online, and when challenges from data-intensive new technologies such as AI are developing quickly.”


Lawmakers ponder terrorism designations, homicide charges over hospital ransomware attacks

21 April 2026 at 14:49

Lawmakers at a hearing Tuesday explored ways to beef up punishments for ransomware attacks against hospitals, possibly by labeling them as more severe crimes.

One proposal floated at the House Homeland Security Committee hearing, to treat ransomware attacks as terrorism, is an idea Congress has flirted with before. Another would be to press prosecutors to pursue homicide charges in attacks on hospitals where death resulted — something German authorities also once pondered.

A former top FBI cyber official, Cynthia Kaiser, put forward both ideas at the hearing, a joint meeting on cybercrime of the Border Security and Enforcement and the Cybersecurity and Infrastructure Protection subcommittees, drawing questions and interest from members.

“I believe there are no penalties too severe for individuals that would target our health care system,” said Mississippi Rep. Michael Guest, chair of the border subcommittee, whose home state saw health care clinics close following a February ransomware attack.

The suggestions stem from ransomware attackers’ growing focus on the health care sector, with incidents nearly doubling from 238 in 2024 to 460 in 2025, according to FBI statistics, making it the top targeted sector.

Kaiser, now senior vice president of the Halcyon ransomware research center, said terrorism designations from the State, Treasury and Justice departments could lead to further sanctions, restricted travel and other punishments. Justice Department guidance on homicide charges could clarify its authorities, she said.

“It sounds like the language is there, it just has not been applied in these circumstances,” said Rep. Lou Correa of California, the top Democrat on Guest’s subpanel.

The notion of more closely entwining cyberattacks and terrorism is something both Congress and the executive branch have examined recently.

The fiscal 2025 Senate intelligence authorization bill would have directly linked ransomware to terrorism, although the final version of the bill that became law was less explicit than the original Senate language. The Treasury Department last month asked for public feedback on changing a terrorism risk insurance program to address cyber-related losses.

A University of Minnesota study from 2023 estimated that hospital ransomware attacks were responsible for dozens of deaths of Medicare patients. German authorities in 2020 opened a negligent homicide investigation following a death in the aftermath of a ransomware attack, but ultimately decided against charges.

The Trump administration’s national cyber strategy advocates for taking a more offensive approach to hackers. It released an executive order on cybercrime and fraud the same day it published the strategy. Kaiser said the proposals are in line with those approaches.

Hackers know their attacks could end lives, she said. “They have simply decided these deaths are someone else’s problem,” Kaiser said.


Officials seize 53 DDoS-for-hire domains in ongoing crackdown

16 April 2026 at 13:26

Authorities from 21 countries took down 53 domains and arrested four people allegedly involved in distributed denial-of-service operations used by more than 75,000 cybercriminals, Europol said Thursday. 

The globally coordinated effort dubbed “Operation PowerOFF” disrupted booter services and seized and dismantled infrastructure, including servers and databases, that supported the DDoS-for-hire services, officials said.

Law enforcement agencies obtained data on more than 3 million alleged criminal user accounts from the seized databases, and ultimately sent more than 75,000 emails and letters to participants, warning them to halt their activities.

Officials from the countries involved in the operation also served 25 search warrants, removed more than 100 URLs advertising DDoS-for-hire services in search engine results and created search engine ads to target young people searching for DDoS-for-hire tools.

The operation, which is ongoing, primarily targets IP stressers, or DDoS booters, that cybercriminals use to inundate websites, servers and networks with junk traffic, rendering legitimate services inaccessible.

Officials described DDoS-for-hire tools as prolific and easily accessible, often including tutorials that allow non-tech-savvy people to initiate attacks on various organizations.

“Attacks are often regionally focused, with users targeting servers and websites within their continent, and directed at a wide range of targets including online marketplaces, telecommunications providers and other web-based services,” Europol said in a news release. “Motivations vary from curiosity to ideological purposes linked to hacktivism, as well as financial gain through extortion or the disruption of competitors’ services.”

Operation PowerOFF is supported by multiple law enforcement agencies from the United States, United Kingdom, Australia, Austria, Belgium, Brazil, Bulgaria, Denmark, Estonia, Finland, Germany, Japan, Latvia, Lithuania, Luxembourg, the Netherlands, Norway, Poland, Portugal, Sweden and Thailand.

The international crackdown disrupted other popular DDoS-for-hire services in late 2024, netting three arrests and 27 domain takedowns. Authorities in Poland in May arrested four alleged administrators of DDoS-for-hire tools that cybercriminals used to launch thousands of attacks from 2022 to 2025.


New York’s RAISE Act vs. California’s TFAIA: What Companies Need to Know

As states continue to grapple with establishing regulatory frameworks for the most powerful artificial intelligence (“AI”) systems, New York has joined California in targeting frontier AI models with the Responsible AI Safety and Education Act (the “RAISE Act” or the “Act”).[1] Signed into law on December 19, 2025 by Governor Hochul, the Act creates a comprehensive regulatory framework for developers of the most advanced AI systems, marking New York’s entry into the vanguard of state AI safety regulation.

The final version of the Act[2] is narrower than the version of the Act enacted by the legislature in June, reflecting negotiations that more closely align the Act with California’s SB 53 (the “TFAIA”), which took effect January 1. However, while the Act shares California’s focus on transparency and safety, it diverges in critical ways, particularly regarding enforcement mechanisms and reporting timelines. Additional chapter amendments (expected to be finalized in early 2026) will further align New York with California by substituting a $500 million revenue threshold for compute-cost triggers and adjusting reporting timelines, penalties and oversight mechanisms. Below, we discuss the RAISE Act’s requirements at a high level, while also flagging key distinctions from the TFAIA, and anticipated revisions before the law takes effect on January 1, 2027.

Applicability Thresholds and Scope

As enacted, the RAISE Act applies to (1) frontier models with a certain compute intensity and cost and (2) large developers with a certain aggregate compute spend.

Specifically, the RAISE Act currently defines “frontier model” as an AI model trained using greater than 10^26 computational operations with a compute cost exceeding $100 million, or a model produced through “knowledge distillation”[3], and applies to “large developers” meaning persons that have trained at least one frontier model (the compute cost of which exceeds $5 million) and spent over $100 million in aggregate compute costs training frontier models.[4]

However, significant changes are expected that will bring the RAISE Act in line with the applicability thresholds set forth under the TFAIA. While California’s TFAIA is likewise limited to “frontier models” using computing power greater than 10^26 operations, the TFAIA distinguishes “large frontier developers” using a revenue threshold: developers (together with affiliates) with annual gross revenues above $500 million in the preceding year face heightened obligations. The California regime thus layers a compute-based model definition with a revenue-based developer trigger, creating a narrower class of entities subject to more stringent transparency and governance documentation.

Although the RAISE Act, as signed, uses compute-cost thresholds to define covered entities, public reporting suggests that Governor Hochul has secured legislative agreement to replace those provisions with a revenue-based trigger that mirrors California’s approach. Specifically, New York policymakers have publicly signaled an intent to align the “large developer” trigger with California’s $500 million revenue threshold to materially harmonize coverage with the TFAIA, simplifying compliance for companies operating in both jurisdictions. The revisions would have the effect of narrowing applicability given that many emerging AI developers (particularly those attracting substantial venture capital to fund compute-intensive model development) may quickly exceed compute-cost thresholds while generating little or no revenue, and that international competitors operating at lower revenue levels could otherwise face disproportionate regulatory burdens under a compute-only framework.
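To make the contrast concrete, the sketch below encodes the enacted compute-based triggers alongside the anticipated revenue-based trigger, using the figures quoted above. It is a simplification for illustration only: the function names are ours, the knowledge-distillation prong is omitted, and the enacted “large developer” test is reduced to its aggregate-spend element (dropping the $5 million per-model condition).

    COMPUTE_OPS_THRESHOLD = 10**26         # training operations
    FRONTIER_COMPUTE_COST = 100_000_000    # USD, "frontier model" cost trigger
    AGGREGATE_COMPUTE_SPEND = 100_000_000  # USD, enacted "large developer" trigger
    REVENUE_THRESHOLD = 500_000_000        # USD, anticipated (mirrors TFAIA)

    def is_frontier_model(training_ops: float, compute_cost_usd: float) -> bool:
        # Enacted definition, ignoring the knowledge-distillation prong.
        return (training_ops > COMPUTE_OPS_THRESHOLD
                and compute_cost_usd > FRONTIER_COMPUTE_COST)

    def is_large_developer_enacted(aggregate_compute_usd: float) -> bool:
        return aggregate_compute_usd > AGGREGATE_COMPUTE_SPEND

    def is_large_developer_anticipated(annual_revenue_usd: float) -> bool:
        return annual_revenue_usd > REVENUE_THRESHOLD

    # A well-funded, pre-revenue lab: caught by the enacted compute-spend test,
    # but outside the anticipated revenue-based trigger.
    print(is_large_developer_enacted(150_000_000))     # True
    print(is_large_developer_anticipated(20_000_000))  # False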

Key Operative Requirements

The RAISE Act imposes three core obligations on large developers:

  1. Safety and Security Protocols. Before deploying a frontier model, developers must implement a written safety and security protocol similar in nature to the frontier AI framework required under the TFAIA. Specifically, the protocol must consist of documented technical and organizational protocols that (a) specify reasonable protections to reduce the risk of “critical harm”[5], (b) describe reasonable cybersecurity protections against unauthorized access to or misuse of frontier models that could lead to “critical harm”, (c) outline detailed testing procedures and assessment measures to evaluate unreasonable risk of “critical harm” (including how the frontier model could be misused or modified, how it could evade control of the large developer or user, etc.), (d) state compliance requirements with specificity to allow for confirmation of adoption and otherwise describe how the developer will comply with the Act and (e) designate senior personnel responsible for ensuring compliance. The protocol must be conspicuously posted (though the posted version may be appropriately redacted) and transmitted to the NY Attorney General and Division of Homeland Security and Emergency Services (with redactions only where required by federal law) upon request. Frontier model developers must further annually review and, where applicable, modify and republish the protocol to account for changes in model capabilities and industry best practices. Finally, developers are also required to implement appropriate safeguards to prevent unreasonable risk of “critical harm” and are prohibited from deploying a frontier model if doing so would create an unreasonable risk of “critical harm” (although this last requirement is anticipated to be removed in the chapter amendments).
  2. Safety Incident Reporting. The most significant operational difference between New York and California’s regimes lies in incident reporting timelines. Under the RAISE Act, large developers must disclose reportable safety incidents[6] to the Division of Homeland Security and Emergency Services within 72 hours of learning of the incident or within 72 hours of learning facts sufficient to establish a reasonable belief that a safety incident has occurred. California’s TFAIA, by contrast, requires frontier developers to report “critical safety incidents” within 15 days of discovery, with a shortened 24-hour window only for incidents posing imminent risk of death or serious physical injury. New York’s uniform 72-hour requirement thus represents a middle ground (i.e., stricter than California’s standard timeline but more flexible than the 24-hour emergency threshold); a simple deadline comparison is sketched after this list.
  3. Recordkeeping. Large developers must record and retain (a) copies of their unredacted safety and security protocol, including records and dates of any updates or revisions and (b) information on specific tests and test results with sufficient detail for third parties to replicate the testing procedure, in each case, for as long as the frontier model is deployed plus 5 years.
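As a rough illustration of the deadline arithmetic for a developer covered in both states, the sketch below applies the timelines described above. It assumes a single clean discovery timestamp and is not legal advice; the function and parameter names are hypothetical.

    from datetime import datetime, timedelta

    def earliest_report_deadline(discovered_at: datetime,
                                 imminent_physical_risk: bool) -> datetime:
        # NY RAISE Act: 72 hours. CA TFAIA: 15 days, shortened to 24 hours
        # where there is imminent risk of death or serious physical injury.
        ny_deadline = discovered_at + timedelta(hours=72)
        ca_window = timedelta(hours=24) if imminent_physical_risk else timedelta(days=15)
        ca_deadline = discovered_at + ca_window
        # Building response processes around the earliest applicable deadline
        # ensures dual compliance.
        return min(ny_deadline, ca_deadline)

    deadline = earliest_report_deadline(datetime(2027, 3, 1, 9, 0),
                                        imminent_physical_risk=False)
    print(deadline)  # 2027-03-04 09:00: the NY 72-hour window governs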

In addition, the Act confirms that large developers violate the Act where they “knowingly make false or materially misleading statements or omissions in or regarding documents produced” under the Act and, unless removed by the chapter amendments, requires annual, independent third-party compliance audits with detailed reporting that must also be conspicuously published and provided to regulatory authorities.

Enforcement

In addition to oversight by an AI office to be established within the New York Department of Financial Services, the RAISE Act grants the Attorney General authority to bring civil actions for violations of the Act. Following anticipated chapter amendments, penalties will be capped at $1 million for initial violations and $3 million for repeat offenses (substantially reduced from the $10 million and $30 million figures in the originally signed statute). The Attorney General may also pursue injunctive or declaratory relief. Critically, the Act does not establish a private right of action.

By comparison, California’s TFAIA authorizes the California Attorney General to seek civil penalties up to $1 million per violation, scaled to the severity of the offense, and also contains provisions that empower whistleblowers to bring civil actions for injunctive relief and recovery of attorneys’ fees for violations of their rights.[7]

Key Takeaways

Most businesses, including the vast majority of AI developers, will be relieved that the RAISE Act has narrow applicability. With thresholds targeting only frontier models and anticipated chapter amendments further narrowing coverage, the Act is unlikely to materially impact most organizations’ operations. However, compliance remains a moving target, and businesses must therefore stay abreast of legislative developments (particularly in light of the recently issued Executive Order aimed at state AI law preemption)[8].

For the few businesses that may meet the RAISE Act’s applicability thresholds, the alignment between New York and California’s frameworks offers a welcome development in what is otherwise slated to be a fragmented regulatory environment. Just as state privacy laws have created a challenging patchwork of requirements that businesses have learned to navigate, the harmonization of New York’s revenue threshold with California’s TFAIA represents a step toward more coherent multi-state compliance. However, where requirements diverge (such as New York’s stricter 72-hour incident reporting window compared to California’s 15-day standard), covered entities should draw upon the strategies and infrastructure developed through their privacy compliance programs. The same disciplined approach to documentation, risk assessment and incident response that businesses have refined while managing obligations under state privacy laws and the GDPR can be effectively adapted to address the RAISE Act’s nuanced requirements.

To prepare for compliance:

  • Prepare for Threshold Alignment: Businesses should (a) anticipate January amendments replacing New York’s compute-cost thresholds with California’s $500 million revenue standard and (b) conduct threshold analyses to determine whether they will qualify as large frontier developers under the harmonized framework.
  • Implement Dual-Compliant Safety Protocols: While awaiting confirmation of New York’s amendments, covered entities should develop safety and security protocols that satisfy both states’ requirements, combining New York’s emphasis on pre-deployment implementation with California’s focus on annual public disclosure and risk assessment reporting.
  • Prioritize Incident Response Capabilities: New York’s 72-hour reporting window demands robust incident detection and response systems. Covered entities operating in both jurisdictions should build compliance infrastructure around the stricter New York timeline to ensure dual compliance, including by revising contracts where relevant to shorten reporting timelines for third-party vendors.
  • Account for Enforcement Risk: With penalties up to $3 million for repeat violations, New York’s RAISE Act presents potentially higher financial exposure than California’s framework. Risk management strategies should reflect this disparity, with particular attention to documentation practices and compliance verification to avoid repeat violations.

[1] A copy of the RAISE Act can be accessed here.

[2] This article reflects the RAISE Act as it will be implemented following expected chapter amendments that Governor Hochul and legislative leaders committed to enacting in January 2026, including substituting a $500 million revenue threshold for the compute-cost triggers in the enacted text, reducing enforcement penalties and establishing a Department of Financial Services oversight office.

[3] Defined in the Act as “any supervised learning technique that uses a larger artificial intelligence model or the output of a larger artificial intelligence model to train a smaller artificial intelligence model with similar or equivalent capabilities as the larger artificial intelligence model.”

[4] Notably, the Act applies to frontier models “developed, deployed or operating in whole or in part in New York State”, and exempts accredited colleges and universities conducting academic research or persons that subsequently transfer full intellectual property rights in their frontier model to a third party.

[5] The Act defines “critical harm” to mean the death or serious injury of at least 100 people or at least $1 billion of damages to rights in money or property caused or materially enabled by a large developer’s use, storage, or release of a frontier model, through either of the following: (a) the creation or use of a chemical, biological, radiological, or nuclear weapon; or (b) an AI model engaging in conduct that does both of the following: (i) acts with no meaningful human intervention; and (ii) would, if committed by a human, constitute a crime specified in the penal law that requires intent, recklessness, or gross negligence, or the solicitation or aiding and abetting of such a crime.

[6] The Act defines “safety incident” broadly to include known incidences of critical harm, autonomous model behavior, theft or unauthorized access to model weights, critical failure of technical controls or unauthorized use of a frontier model.

[7] Notably, the RAISE Act does expressly (a) prohibit large developers, or their contractors or subcontractors, from preventing an employee from disclosing or attempting to disclose information to the large developer or the NY Attorney General, if the employee has reasonable cause to believe that the large developer’s activities pose an unreasonable or substantial risk of “critical harm”, regardless of the employer’s compliance with applicable law and (b) permit an employee to seek injunctive relief for any harms caused by such retaliation.

[8] For our Firm’s detailed analysis of the Executive Order, see here.

President Trump Signs Executive Order Seeking to Preempt State AI Regulation

For more insights and analysis from Cleary lawyers on policy and regulatory developments from a legal perspective, visit What to Expect From a Second Trump Administration.

On December 11, 2025, President Donald Trump signed an executive order titled Establishing A National Policy Framework For Artificial Intelligence (the “Order”)[1]. The Order’s policy objective is to “enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI”[2] and comes after Congress considered but did not advance federal legislation that would have preempted state AI regulation earlier this year. The Order justifies federal intervention on three grounds:

  1. The growing number of different state regulatory frameworks has created a fragmented and inconsistent compliance landscape, particularly for small and medium-sized businesses.
  2. Some state laws require AI developers to incorporate “ideological bias” into their model outputs. For example, the Order alleges that Colorado’s ban on algorithmic discrimination could pressure AI models to produce inaccurate results in order to avoid differential treatment or impact on protected groups.
  3. Certain state laws may go beyond their proper authority by regulating conduct outside their borders, raising concerns about interference with interstate commerce.

Below we summarize the key elements of the Order, followed by key takeaways for businesses to consider as they develop AI governance programs mapped to an ever-shifting regulatory framework.

Key Provisions of the Order

Building upon Executive Order 14179 of January 23, 2025 (Removing Barriers to American Leadership in Artificial Intelligence), which revoked the Biden Administration’s attempt to regulate the AI industry, the Order escalates federal efforts to prevent state-level AI regulation through forthcoming litigation, funding restrictions and agency preemption proceedings.

Specifically, the Order sets out a multi-pronged federal effort to challenge state AI laws and promote a single federal regime:

  • AI Litigation Task Force: The Order directs the Attorney General to establish an AI Litigation Task Force within 30 days of the Order to challenge state AI laws that conflict with the spirit of the Order, such as laws that unconstitutionally regulate interstate commerce, are preempted by existing federal regulations or are otherwise unlawful (e.g., laws that compel disclosures by AI developers or deployers in violation of the First Amendment).
  • State Law Evaluation: The Order requires the Secretary of Commerce to publish an evaluation of existing state AI laws, within 90 days of the Order, that identifies: (i) onerous laws conflicting with the Order’s main goals (e.g., laws that force AI systems to alter truthful outputs or compel disclosure in violation of the First Amendment), (ii) laws that should be referred to the AI Litigation Task Force, and (iii) laws that promote the development of AI consistent with the policy of the Order.
  • Federal Funding Restrictions: Within 90 days of the Order, the Secretary of Commerce must issue a policy notice specifying the conditions under which States may be eligible for certain remaining federal funding. Specifically, states with onerous AI laws, as identified by the Secretary of Commerce, lose eligibility for certain non-deployment funds under the Broadband Equity Access and Deployment program to the maximum extent allowed by federal law. Executive agencies are also directed to consider conditioning discretionary grants on states not enacting conflicting AI laws or agreeing not to enforce existing ones during the funding period.
  • Federal Preemption Standards: The Order also tasks key agencies with developing federal standards. Within 90 days of the Order, (i) the Federal Communications Commission is required to consider adopting a federal reporting and disclosure standard for AI models that preempts conflicting state laws and (ii) the Federal Trade Commission (“FTC”) is required to issue a policy statement on the application of the FTC Act’s prohibition on unfair and deceptive acts to AI models.  In particular, the Order directs the FTC guidance to explain the circumstances when state laws requiring “alterations to the truthful outputs of AI models” are preempted by the FTC Act’s prohibition on deceptive practices.
  • Legislative Framework: The Order calls for the Special Advisor for AI and Crypto and the Assistant to the President for Science and Technology to prepare a legislative proposal to establish a uniform federal policy framework preempting conflicting state legislation (with limited carve-outs for child safety, AI compute and data center infrastructure, state government use of AI and other topics, as to be determined).

Key Takeaways

The Order marks the Trump Administration’s most substantial effort to date in shaping the federal approach to AI regulation, prioritizing industry flexibility over prescriptive requirements and potentially creating tension with state regulatory frameworks across various jurisdictions, while leaving open questions about how AI safety, accountability and consumer protection will be addressed at the federal level. With Congress deadlocked on comprehensive AI legislation and states determined to defend their regulatory authority, the resulting legal battles will likely determine the future of AI legislation in the United States for years to come.

Given the ever-evolving legal landscape and in the absence of federal legislation, businesses should consider aligning their AI governance programs with industry standards, such as NIST’s AI Risk Management Framework and ISO 42001, which are likely to persist even as the regulatory environment continues to shift. Businesses should also focus compliance efforts on ensuring transparency and truthfulness in their use and development of AI, given the FTC’s recent focus on those principles in line with the Order’s directive.[3] Thoughtful AI governance aligned with NIST and ISO principles, coupled with transparent disclosures and documentation, will help businesses navigate the evolving AI compliance landscape.


[1] The text of the Order can be found here.

[2] See Section 2 of the Order.

[3] By directing the FTC to consider application of its deceptive practices prohibition to AI, the Order dovetails with the FTC’s recent enforcement efforts, which have largely focused on appropriate and sufficient disclosure of AI usage and businesses’ alleged misrepresentations of AI capabilities.

GDPR vs. the hosting defence: How wary should online platforms be of the EU Court of Justice Russmedia judgment?

CJEU ruling heralded as “landmark” GDPR judgment turns on a specific set of facts and requires careful interpretation in the post-DSA regulatory reality.

The judgment of the Court of Justice of the European Union (CJEU) in the Russmedia case is a significant ruling for online platforms. Caution is needed, however, when making inferences from the specific facts and circumstances of that case, which involved a severe breach of privacy, the processing of sensitive personal data, and an operator of an online marketplace that the CJEU deemed a “data controller” in respect of its processing of that sensitive personal data.

Key facts and findings

The case can be traced back to August 2018, when an anonymous third party published a false advertisement on an online marketplace operated by Russmedia Digital.[1] The ad falsely and maliciously presented a woman as offering sexual services and included photographs of the woman and her personal telephone number. When contacted by the woman, Russmedia took down the ad within the hour, but at that point it had already been reproduced on other websites and the damage was done.

On these facts, the Court found that Russmedia, as operator of the online marketplace, should be qualified as a “controller” under GDPR in respect of the processing of the sensitive personal data contained in the ad and that, in that specific capacity, Russmedia should have taken the following actions, in each case “by means of appropriate technical and organisational measures” (within the meaning of GDPR), to prevent the harm caused (an illustrative sketch of this workflow follows the list):

  • Proactively screen ads proposed to be placed on its platform to identify ads that contain sensitive personal data (i.e., special categories of personal data within the meaning of Article 9 of GDPR).[2]
  • If an ad containing sensitive data is identified during the screening, perform an identity check – before publishing the ad – to verify if the advertiser is the person whose sensitive data appear in the ad.
  • If the advertiser is not the person whose sensitive data are included, refuse publication unless the advertiser can prove that the relevant person has given his or her explicit consent to the publication of the ad on the online marketplace.[3]
  • Prevent ads containing sensitive personal data from being scraped (copied) from the online marketplace and unlawfully published on other websites.[4]
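A minimal, purely hypothetical sketch of that pre-publication control flow might look as follows. The keyword screen stands in for whatever far more sophisticated detection an operator would actually deploy, and the anti-scraping duty is noted only as a comment.

    # Hypothetical pre-publication workflow reflecting the Court's findings.
    SENSITIVE_MARKERS = {"health", "sex life", "sexual orientation"}  # toy screen

    def contains_special_category_data(ad_text: str) -> bool:
        # Real screening would be far more sophisticated than keyword matching.
        return any(marker in ad_text.lower() for marker in SENSITIVE_MARKERS)

    def may_publish(ad_text: str, advertiser_is_data_subject: bool,
                    explicit_consent_proven: bool) -> bool:
        if not contains_special_category_data(ad_text):
            return True                   # no Article 9 data: normal flow
        if advertiser_is_data_subject:
            return True                   # identity check passed
        # Refuse publication unless explicit consent of the data subject
        # (or another Article 9(2) exception) is demonstrated.
        return explicit_consent_proven

    print(may_publish("Used bicycle for sale", False, False))              # True
    print(may_publish("Ad disclosing a person's sex life", False, False))  # False
    # Once published, the operator must also apply available state-of-the-art
    # measures to prevent scraping of ads containing sensitive personal data.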

The Court also held that Russmedia could not rely on the hosting liability safe harbour provisions of the e-Commerce Directive. Russmedia had successfully invoked the safe harbour before the Romanian court. The CJEU disagreed, however, and held that the application of the liability exemptions provided for by the e-Commerce Directive safe harbour in a case where a breach of GDPR was (allegedly) at issue and where – crucially – the operator in question qualified as a “controller” in relation to the processing of the sensitive personal data in question would “interfere with the GDPR regime” (at §131). Therefore, in this specific instance, Russmedia could not invoke the e-Commerce Directive hosting liability safe harbour provisions to defend against the claim for breach of its obligations as a controller under the GDPR.

Why the precedential value of the judgment should not be overstated

A number of findings of the Court require a detailed analysis and raise some challenging interpretations of the GDPR and the e-Commerce Directive. For example:

  • The Court adopted a broad interpretation of the concept of “controller” under GDPR and applied it to the very specific set of facts and circumstances of the case. The fact that Russmedia’s general terms and conditions gave it “considerable freedom to exploit the information published on [its] marketplace […] for its own advertising and commercial purposes” (at §§67), in combination with the specific architecture of the online marketplace, seem to have been determining factors. In reaching its conclusion, the Court did not clearly differentiate between the roles of the key actors during the different stages of processing of the personal data in question (e.g., the placement of the ad by the third-party advertiser vs. any subsequent processing by the marketplace operator for its own purposes).[5] This stands in stark contrast to a seemingly more measured approach taken by Advocate General (AG) Szpunar in his opinion. The AG opined that the third-party advertiser alone determined the purpose of the ad, since Russmedia had no knowledge of why the advertiser would post the ad. The AG also more clearly distinguished the role of the marketplace operator when processing sensitive personal data contained in ads from its role when processing personal data of advertisers (e.g., when creating or managing their accounts) and, on that basis, concluded that Russmedia qualified as a processor (not a controller) in relation to the processing of sensitive personal data contained in ads posted on the online marketplace.[6]
  • The Court appears to have moved very quickly from qualifying the online marketplace operator as “controller” to subsequently grounding several potentially far-reaching and highly specific ex-ante screening and due diligence obligations for data controllers processing sensitive personal data, in the much more general GDPR principles of accountability, data protection by design and by default, and data security (in particular Articles 5(2), 24, 25 and 32 of GDPR).
  • The exclusion of GDPR breaches from the hosting liability safe harbour is dealt with only briefly – almost in passing (at §§129-136) – and could have benefited from more elaborate analysis, in particular regarding the potential impact of the exclusion on the careful balance struck by the EU legislator in respect of the liability of intermediary service providers under the e-Commerce Directive.[7]

Moreover, the judgment is fundamentally predicated on several highly specific facts, which were highlighted by the Court itself:

  • The Court went out of its way to stress the particular sensitivity of the personal data in question and the severity of the consequences for the data subject (see, for example, at §§47-53 and 90-96). The judgment should be read in a context where the Court had already signalled that it would be a champion of European data protection rights in a world where the harmful effects of online harassment are becoming increasingly severe and visible. The findings of the Court should therefore not necessarily be extrapolated to apply to all types of personal data or all data processing activities subject to GDPR.
  • To come to the conclusion that Russmedia was a “joint controller” in relation to the processing of the sensitive personal data included in the harmful ad in question, the Court analysed in considerable detail the specific manner in which Russmedia operated its online marketplace. Relevant elements taken into account by the Court included – as set out above – the broad rights Russmedia reserved for itself in relation to further processing of personal data included in ads, the specific architecture of the online marketplace, as well as the fact that there appear to have been few constraints on anonymous advertisers placing potentially harmful and false ads on the online marketplace in a way that means injured parties have no recourse to, or way of identifying, such malicious third-party advertisers (see, for example, at §§69-73).
  • The Court was asked to rule on the e-Commerce Directive, which governed the underlying facts back in 2018. The hosting liability safe harbour provisions of the e-Commerce Directive have since been replaced by the Digital Services Act.[8]

The precedential value of the judgment should therefore not be overstated:

  • Other online marketplaces may be operated in a different manner, have a different architecture and content limitations, and may therefore not qualify as “controller” in relation to the processing of sensitive personal data included in ads placed on their platforms by third parties.
  • Most ads will not contain any sensitive personal data, and are therefore much less likely to cause the type of severe harm to data subjects which was at issue here. Those ads would not trigger the same requirements that the Court seems to impose on Russmedia in this specific case.
  • The e-Commerce Directive has been replaced by the DSA. Although the DSA incorporated hosting liability safe harbour provisions that mirror to a large extent the equivalent language in the e-Commerce Directive, there are some important textual differences that may provide scope for broader protection under the DSA. If the same facts as those at issue in this case were to occur today, the analysis under the DSA may be different and more nuanced.[9] Case law on the hosting liability safe harbour (even some of the other recent e-Commerce Directive rulings from the CJEU) appears to be evolving to take into account technological advancements and the practical architectural realities of today’s online marketplaces and content hosting platforms.

Practical takeaways for operators which are nevertheless impacted by the judgment

The Court’s findings were limited to general conclusions of law, since the judgment was in response to a request for a preliminary ruling from the Romanian court of appeal. It therefore remains to be seen how these findings will be applied by national courts and data protection authorities to specific fact patterns sufficiently similar to the ones at issue in Russmedia.

For example, the Court did not specify how operators of online marketplaces should operationalise the requirements summarised above. Several of those requirements – such as preventing ads from being scraped or pre-screening ads for sensitive personal data before they are published – indeed appear difficult to reconcile with how online marketplaces and the AdTech ecosystem operate in reality and, even if they were to operate differently, what is (and may in the future become) technically feasible at scale.

Moreover, the GDPR neither compels organisations to do the impossible nor requires absolute data protection in any and all circumstances. The GDPR allows due account to be taken of “the state of the art, the cost of implementation and the nature, scope, context and purposes of processing as well as the risks of varying likelihood and severity for rights and freedoms of natural persons posed by the processing” of personal data (Articles 25 and 32 of GDPR).[10] Accordingly, we expect that a key battleground will remain the issue of what measures are technically feasible and proportionate considering the “state of the art”. The Russmedia judgment still offers considerable leeway on how to ensure GDPR compliance, even for operators whose online platforms may fall within the specific scope of the judgment.


[1] See §§30 and 31 of the Judgment of December 2, 2025, Russmedia Digital and Inform Media Press, Case C-492/23, available here.

[2] The Court came to the unsurprising conclusion that the data in question qualified as special category personal data since they concerned the data subject’s sex life and sexual orientation. The fact that the data was untrue and harmful did not change that conclusion (see Judgment, § 53). There is an active debate, however, on how broadly the concept of special category personal data should be interpreted under the GDPR, including in the context of the preparation of the EU’s proposed Digital Omnibus Package (which we commented on in an earlier blog post “Reset or rollback: Unpacking the EU’s Digital Omnibus Package”).

[3] Or that another exception under Article 9(2) of GDPR is satisfied that can be relied on to justify the publication without consent, which seems rather theoretical in the context of an online marketplace such as the one operated by Russmedia as described in the Judgment.

[4] The Court held that, to this end, the operator “must consider in particular all technical measures available in the current state of technical knowledge that are apt to block the copying and reproduction of online content” (§122).

[5] The Court held that the anonymous third-party advertiser was also a “joint controller”, together with Russmedia (see Judgment, §§54-75), and clarified that “the existence of joint responsibility does not necessarily imply equal responsibility” (§63), leaving it to the national court to determine the exact extent of Russmedia’s responsibility in the case at hand; on earlier CJEU case-law adopting a comparably extensive interpretation of joint controllership, see our earlier blog post “EU Court of Justice confirms earlier case law on broad interpretation of “personal data” and offers extensive interpretation of “joint controllership”, with possible broad ramifications in the AdTech industry and beyond”.

[6] See §111 and following of the AG opinion of February 6, 2025, available here.

[7] For example, even though the Court held that the requirements imposed on Russmedia “cannot, in any event, be classified as […] a general monitoring obligation” prohibited by Article 15 of the e-Commerce Directive, this can certainly be debated.

[8] Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC (Digital Services Act); in accordance with Article 89 of the Digital Services Act (DSA), references to Articles 12 to 15 of the e-Commerce Directive (Directive 2000/31/EC) are now to be construed as references to Articles 4, 5, 6 and 8 of the DSA.

[9] The AG also hinted at this in §160 of his opinion, by pointing to the textual differences between the e-Commerce Directive and the DSA.

[10] Even the Court admitted, in respect of the anti-scraping measures referenced above, that “the unlawful dissemination of personal data initially published online is [not] sufficient to conclude that the measures adopted by the controller concerned were not appropriate” (at §123).

California Enacts Landmark AI Safety Law But With Very Narrow Applicability

On September 29, 2025, Governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act (TFAIA, SB 53 or the Act)[1], establishing a comprehensive framework for transparency, safety and accountability in the development and deployment of the most advanced artificial intelligence models. Building upon existing California laws targeting AI such as AB 2013[2], the Act, which takes effect January 1, 2026 and imposes penalties up to $1 million per violation, creates immediate compliance obligations for AI developers of the most powerful frontier models.

The path to TFAIA was paved by failure. TFAIA’s predecessor, SB 1047[3], passed the legislature by a wide margin last year but was ultimately vetoed at the Governor’s desk. In his veto statement, Governor Newsom called for an approach to frontier model regulation “informed by an empirical trajectory analysis of AI systems and capabilities,” criticizing SB 1047 for applying stringent standards to even the most basic functions[4]. TFAIA thus represents a strategic pivot to regulation focused only on the most impactful AI models, eliminating the features that doomed its predecessor: the kill-switch requirement (which would have mandated full shutdown capabilities for noncompliant systems), the rigid testing and auditing regime, and the aggressive 72-hour incident-reporting timeline.

TFAIA serves as California’s attempt to strike the balance of advancing AI innovation and competition while underscoring accountability for responsible AI development. The Act aims to bolster public trust and increase awareness of AI-specific risks by requiring developers to think critically about frontier AI capabilities.

Scope and Thresholds

Scoped narrowly to target the most powerful models capable of significant and catastrophic impact, TFAIA imposes certain requirements on “frontier models,” defined as foundation models (general-purpose models trained on broad data sets) trained using, or intended to be trained using, a quantity of computing power greater than 10^26 integer or floating-point operations.[5] In particular, all “frontier developers” (persons that “trained or initiated the training” of frontier models) face baseline transparency requirements, with more burdensome obligations imposed on “large frontier developers” (frontier developers that, together with affiliates, had annual gross revenues above $500 million in the preceding year).
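
To make the two scoping triggers concrete, the sketch below encodes them as a rough classification helper: the 10^26-operation compute threshold for a “frontier model” and the $500 million prior-year revenue threshold for a “large frontier developer.” It is a minimal sketch only; the function name and structure are our own illustrative assumptions, not anything prescribed by the Act.

```python
# Illustrative sketch only: encodes TFAIA's two scoping thresholds as
# described above. Names and structure are our own, not from the statute.

FLOP_THRESHOLD = 10**26          # compute trigger for a "frontier model"
REVENUE_THRESHOLD = 500_000_000  # prior-year gross revenue trigger (with affiliates)

def classify_developer(training_flops: float, annual_gross_revenue: float) -> str:
    """Rough TFAIA status check for the developer of a single model."""
    if training_flops <= FLOP_THRESHOLD:
        return "out of scope (not a frontier model)"
    if annual_gross_revenue > REVENUE_THRESHOLD:
        return "large frontier developer (heightened obligations)"
    return "frontier developer (baseline transparency obligations)"

print(classify_developer(3e26, 7.5e8))  # -> large frontier developer (...)
```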

Tailoring its scope even further, TFAIA focuses many of its requirements on the prevention of “catastrophic risk,” defined as a foreseeable and material risk that a frontier model could (1) materially contribute to the death or serious injury of 50 or more people or (2) cause at least $1 billion in damage to property, in either case arising from a single incident in which the frontier model does any of the following: (a) provides expert-level assistance in creating or releasing a chemical, biological, radiological or nuclear weapon; (b) engages in criminal conduct (conduct that would constitute murder, assault, extortion or theft) or a cyberattack without meaningful human intervention; or (c) evades the control of its frontier developer or user.

Key Compliance Provisions

TFAIA imposes certain requirements on all frontier developers, with heightened obligations on large frontier developers:

  1. Transparency Reports. At or before the time of deploying a frontier model (or a substantially modified version of an existing frontier model), frontier model developers must implement and publish a transparency report on their website. Reports, which can under the Act be embedded in model or system cards, must include (a) the website of the frontier developer, (b) model details (e.g., release date, languages supported, intended uses, modalities, restrictions) and (c) mechanisms by which a person can communicate with the frontier developer.[6]
    Large frontier developers must further (x) include summaries of assessments of catastrophic risks resulting from use of the frontier model, the results of such assessments, the role of any third-party evaluators and the steps taken to fulfill the requirements of the frontier AI framework (see below) and (y) transmit to the Office of Emergency Services reports of any assessments of catastrophic risk resulting from internal use of their frontier models every three months or pursuant to another reasonable schedule specified by the developer.  The Act tasks the Office of Emergency Services with establishing a mechanism by which large frontier developers can confidentially submit such assessment reports of catastrophic risk.
  2. Critical Safety Incident Reporting. Frontier developers are required to report “critical safety incidents”[7] to the Office of Emergency Services within 15 days of discovery. To the extent a critical safety incident poses imminent risk of death or serious physical injury, the reporting window is shortened to 24 hours, with disclosure required to an appropriate authority based on the nature of the incident and as required by law (see the timing sketch after this list). Note that critical safety incidents pertaining to foundation models that do not qualify as frontier models are not required to be reported. Importantly, TFAIA exempts the following reports from disclosure under the California Public Records Act: reports regarding critical safety incidents, reports of assessments of catastrophic risk and covered employee reports made pursuant to the whistleblower protections described below.
  3. Frontier AI Frameworks for Large Frontier Developers. In addition to the above, large frontier developers must publish a frontier AI framework annually (and, upon making a material modification to the framework, within 30 days of such modification) describing the technical and organizational protocols relied upon to assess and manage how catastrophic risks are identified, mitigated and governed. The framework must include documentation of the developer’s alignment with national and international standards, governance structures, the thresholds used to identify and assess a frontier model’s capability to pose a catastrophic risk, mitigation processes (including independent review of the potential for catastrophic risks and of the effectiveness of mitigation processes) and cybersecurity practices and processes for identifying and responding to critical safety incidents. Large frontier developers are prohibited from making false or misleading claims about catastrophic risks from their frontier models or about their compliance with their published frontier AI framework. Additionally, these developers are permitted to redact information as necessary to protect trade secrets, cybersecurity, public safety or national security, or as required by law, as long as they maintain records of the unredacted versions for a period of at least five years.
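
As referenced in item 2 above, the incident-reporting windows reduce to simple date arithmetic. The sketch below is a hypothetical helper, not an official compliance tool; the function name and inputs are our own assumptions.

```python
# Hypothetical sketch of TFAIA's critical safety incident reporting windows:
# 15 days from discovery as the default, shortened to 24 hours where the
# incident poses imminent risk of death or serious physical injury.

from datetime import datetime, timedelta

def reporting_deadline(discovered: datetime, imminent_physical_risk: bool) -> datetime:
    window = timedelta(hours=24) if imminent_physical_risk else timedelta(days=15)
    return discovered + window

discovery = datetime(2026, 2, 1, 9, 0)
print(reporting_deadline(discovery, imminent_physical_risk=False))  # 2026-02-16 09:00:00
print(reporting_deadline(discovery, imminent_physical_risk=True))   # 2026-02-02 09:00:00
```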

Other Notable Provisions

In addition to the requirements imposed on frontier developers, TFAIA resurrects CalCompute, a consortium first envisioned under SB 1047 and tasked with developing a framework for the creation of a public cloud computing cluster that would provide access to advanced computing capabilities to support safe, equitable and sustainable AI development and deployment in the public interest.

TFAIA also enhances protections for whistleblowers by (1) prohibiting frontier developers from adopting rules that would prevent employees from reporting catastrophic risks and from retaliating against employees who report such risks, (2) requiring frontier developers to notify their employees once a year of their rights as whistleblowers and (3) requiring large frontier developers to implement and maintain anonymous internal reporting channels. Notably, whistleblowers are empowered to bring civil actions for injunctive relief (as well as recovery of attorneys’ fees) against frontier developers for violations of their rights under the Act.

Enforcement and Rulemaking

Large frontier developers that fail to publish TFAIA-compliant reports or other documentation, make a false statement about catastrophic risk or about their compliance with their frontier AI framework, fail to report a critical safety incident or fail to comply with their frontier AI framework face penalties of up to $1 million per violation, scaled to the severity of the offense. Such penalties may be recovered only through a civil action brought by the Attorney General.

To ensure that the applicability of the TFAIA reflects technological change, the Act empowers the California Department of Technology—as opposed to the Attorney General as envisioned under SB 1047—to assess technological developments, research and international standards and recommend updates to key statutory definitions (of “frontier model,” “frontier developer” and “large frontier developer”) on or before January 1, 2027 and annually thereafter. 

Key Takeaways

With TFAIA, California provides a blueprint for regulation focused on the most impactful and powerful AI technology, establishing transparency, disclosure and governance requirements for frontier model developers. A similar bill regulating frontier models, the Responsible AI Safety and Education (RAISE) Act, awaits the signature of Governor Hochul in New York. Although TFAIA and RAISE have similar applicability and frameworks,[8] RAISE imposes stricter requirements (a 72-hour window for reporting safety incidents) and higher penalties (up to $10 million for a first violation and $30 million for subsequent ones), similar to the failed SB 1047. RAISE also does not require transparency reporting to the same extent, and it omits whistleblower protections, focusing instead on enforcement by imposing strict liability and flatly prohibiting models that create an unreasonable risk of critical harms. TFAIA’s success in navigating gubernatorial approval where SB 1047 failed demonstrates the effectiveness of a transparency-first approach over prescriptive mandates, and suggests the RAISE Act may be subject to further narrowing, or even a veto, by Governor Hochul.

Most businesses, including the vast majority of AI developers, will be relieved that TFAIA has such narrow applicability.  For the few businesses that might meet TFAIA’s applicability thresholds, the law represents both immediate compliance obligations and a preview of the regulatory landscape to come. These businesses should:

  1. Conduct a threshold analysis to determine frontier developer or large frontier developer status
  2. Review existing AI safety practices against TFAIA requirements, particularly focusing on safety framework documentation and incident reporting capabilities
  3. Develop comprehensive frontier AI frameworks addressing the law’s required elements, including governance structures, risk assessment thresholds and cybersecurity practices
  4. Implement robust documentation systems to support transparency reporting requirements for model releases and modifications
  5. Create incident response procedures to identify and report critical safety incidents within required timelines (15-day standard, 24-hour emergency)
  6. Update whistleblower reporting mechanisms and ensure employees receive notice of their rights under the law
  7. Develop scalable compliance frameworks accommodating varying state requirements as other states, including New York, consider similar AI safety laws
  8. Consider voluntary adoption of TFAIA-style frameworks as industry best practices, even for companies below current thresholds

[1] The text of the Act can be found here.

[2] AB 2013 requires developers of generative AI systems to post documentation on their website describing the dataset(s) used for system training.

[3] The text of SB 1047 can be found here.

[4] https://www.gov.ca.gov/wp-content/uploads/2024/09/SB-1047-Veto-Message.pdf.

[5] The computing power minimum includes computing from both initial training and subsequent fine-tuning or modifications.

[6] Notably, frontier developers can redact portions of their transparency reports to protect trade secrets and guard against cybersecurity or public safety threats; however, any such redactions must be justified within the report, and unredacted versions must be maintained for five years.

[7] The Act defines a “critical safety incident” to mean any of the following: (1) unauthorized access to, modification of, or exfiltration of, the model weights of a frontier model that results in death or bodily injury; (2) harm resulting from the materialization of a catastrophic risk; (3) loss of control of a frontier model causing death or bodily injury; or (4) a frontier model that uses deceptive techniques against the frontier developer to subvert the controls or monitoring of its frontier developer outside the context of an evaluation designed to elicit this behavior and in a manner that demonstrates materially increased catastrophic risk.

[8] Unlike TFAIA, RAISE applies only to “large developers,” defined as persons that have (1) trained at least one frontier model and (2) spent over $100 million in aggregate compute costs in training frontier models.

Enforcement Countdown: Is DOJ Ready for the Bulk Data Rule “Grace Period” to End?

As of July 8, the U.S. Department of Justice (“DOJ”) is scheduled to begin full enforcement of its Data Security Program (“DSP”) and the recently issued Bulk Data Rule after its 90-day limited enforcement policy expires, ushering in “full compliance” requirements for U.S. companies and individuals.[1] 

Although it remains to be seen whether DOJ’s National Security Division (“NSD”) will have the necessary infrastructure and personnel in place to launch comprehensive investigations to enforce such an expansive regulatory program, companies should be wary of waiting to see whether NSD is operationally ready. Instead, companies should bear in mind certain considerations, discussed below, when approaching this new and uncertain enforcement frontier.

The DSP is a brand-new regulatory framework, based on the Bulk Data Rule, that imposes restrictions designed to prevent certain countries—China, Cuba, Iran, North Korea, Russia, and Venezuela—and covered persons from accessing Americans’ bulk sensitive personal data and U.S. government-related data.[2]  Violations of the Rule are subject to steep penalties.  Pursuant to the DSP and the International Emergency Economic Powers Act (“IEEPA”), DOJ is authorized to bring not only civil enforcement actions, but also criminal prosecutions for willful violations of the DSP’s requirements.  Civil penalties may reach up to the greater of $368,136 or twice the value of each violative transaction, while willful violations are punishable by up to 20 years’ imprisonment and a $1,000,000 fine.[3]
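
The civil penalty ceiling is a simple greater-of calculation. A minimal arithmetic sketch, assuming (as described above) the penalty is assessed per violative transaction:

```python
# Minimal arithmetic sketch of the DSP's civil penalty ceiling: the greater
# of $368,136 or twice the value of each violative transaction.

STATUTORY_MAX = 368_136

def max_civil_penalty(transaction_value: float) -> float:
    return max(STATUTORY_MAX, 2 * transaction_value)

print(max_civil_penalty(100_000))    # 368136  (statutory figure controls)
print(max_civil_penalty(1_000_000))  # 2000000 (twice the transaction value controls)
```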

Although the DSP largely went into effect on April 8, 2025, DOJ instituted a 90-day limited enforcement period.  During this period, NSD stated it would deprioritize civil enforcement actions for companies and individuals making a “good-faith effort” to come into compliance with the DSP.  This grace period comes to an end on July 8, 2025.  As detailed below, this broad grant of investigative and enforcement authority—especially the potential for both civil and criminal liability—creates a number of potential logistical and legal challenges for DOJ.

Investigation and Enforcement Challenges

Enforcement of the DSP falls to the NSD, and more specifically to a small, specialized section called the Foreign Investment Review Section (“FIRS”).  Historically, FIRS comprised approximately 10-20 attorneys with a niche portfolio of responsibilities, including representing DOJ on the Committee on Foreign Investment in the United States and on Team Telecom.  With this portfolio, FIRS generally enjoyed a comparatively lower profile than other sections within the Department, leaving most federal prosecutors and criminal defense attorneys unfamiliar with its activities.

However, that all could change in the near future given that FIRS has been tasked with implementing and enforcing an entirely new regulatory and enforcement regime.  Going forward, FIRS – a section traditionally without litigators or a litigating function – will have both civil and criminal authority to investigate, bring enforcement actions, and prosecute violations of the Rule. 

Complications Associated with Adding Criminal Prosecutors to FIRS

The availability of criminal penalties under the DSP will require a number of changes at FIRS.  Notably, unlike other NSD sections, FIRS’s work did not previously include criminal prosecutions; the section instead maintained a regulatory focus.[4]

Given FIRS’s lack of experience with criminal cases, FIRS must now decide how it will staff enforcement matters going forward, including whether to hire federal prosecutors directly or to instead coordinate with U.S. Attorneys’ Offices or other sections of NSD in connection with criminal investigations and prosecutions.  It seems likely that NSD would consider staffing up FIRS in anticipation of its dual criminal and civil enforcement authority under the DSP.  But the introduction of criminal prosecutors into the same small section as civil regulators opens up potential risks in terms of parallel civil and criminal investigations:

  1. Due Process Considerations: While DOJ often conducts parallel criminal and civil investigations, such coordination is subject to limitations imposed by the Due Process Clause of the Fifth Amendment.[5]  In United States v. Kordel, the Supreme Court suggested that the Government may be found to have acted in bad faith in violation of the Fifth Amendment by bringing “a civil action solely to obtain evidence for its criminal prosecution” or by “fail[ing] to advise the defendant in its civil proceedings that it contemplates his criminal prosecution.”[6]  Lower courts have “occasionally suppressed evidence or dismissed indictments on due process grounds where the government made affirmative misrepresentations or conducted a civil investigation solely for purposes of advancing a criminal case.”[7]  In order to avoid such consequences, FIRS will have to ensure that any cooperation or coordination in parallel civil and criminal investigations of DSP violations complies with Due Process requirements.
  2. DOJ Internal Policy Limitations: In addition to Due Process requirements, internal DOJ guidance places guardrails around parallel or joint civil and criminal investigations.  Section 1-12.00 of the Justice Manual notes that “when conducted properly,” parallel investigations can “serve the best interests of law enforcement and the public.”[8]  However, the same section goes on to warn DOJ attorneys that “parallel proceedings must be handled carefully in order to avoid allegations of . . . abuse of civil process.”[9]  Section 1-12.100 addresses parallel or joint corporate investigations and similarly emphasizes that DOJ attorneys “should remain mindful of their ethical obligations not to use criminal enforcement authority unfairly to extract, or to attempt to extract, additional civil or administrative monetary payments.”[10]
  3. Maintaining the Secrecy of Rule 6(e) Grand Jury Materials: Finally, FIRS will need to implement precautions to ensure that its civil enforcement attorneys are walled off from the disclosure of materials covered by Federal Rule of Criminal Procedure 6(e).  Rule 6(e) establishes a general rule of secrecy for grand jury materials with limited exceptions.  Although Rule 6(e)(3)(A)(i) permits disclosure “to an attorney for the government for use in the performance of such attorney’s duty,” civil enforcement attorneys within FIRS could only view Rule 6(e) materials if they obtain a court order.[11]  Moreover, pursuant to DOJ guidance, even when disclosure is authorized for use in civil proceedings, it is considered a “better practice to forestall the disclosure until the criminal investigation is complete,” given the potential “danger of misuse, or the appearance thereof.”[12]  Given that none of the exceptions under Rule 6(e) appear readily applicable, criminal attorneys within FIRS will have to take particular precautions to ensure that grand jury material covered under Rule 6(e) is not disclosed to their civil colleagues.

Following July 8, as we wait to see whether FIRS initiates investigations and enforcement actions under the DSP, it will need to address the above limitations and potential pitfalls that come with parallel civil and criminal proceedings.  This will be especially important given the relatively small size of FIRS, its historic regulatory focus, and the addition of criminal prosecutors and criminal enforcement authority as it tries to administer an entirely new regulatory and enforcement regime.

Limited Investigative Resources

In addition to potential concerns associated with criminal enforcement of the DSP, there is also uncertainty about how FIRS will investigate potential violations.  Unlike traditional sanctions and export control enforcement, which relies on the Department of Treasury’s Office of Foreign Assets Control and the Department of Commerce’s Bureau of Industry and Security, respectively, it is unclear what, if any, dedicated investigative resources or interagency cooperation FIRS will have at its disposal.  While federal prosecutors typically investigate alongside agents from the Federal Bureau of Investigation and Homeland Security Investigations, such investigative resources historically were not allocated to FIRS, and it is unclear which federal investigating agency – if any – has been tasked with leading these investigations.  This raises questions about FIRS’s capacity to effectively investigate and bring enforcement actions for potential violations.

One option that could be considered is to have FIRS limit its role to civil enforcement and – to the extent it comes across potential criminal conduct – make criminal referrals to either (i) the appropriate United States Attorney’s Office, all of which have federal prosecutors who have been trained in national security investigations and have routine access to a grand jury, or (ii) NSD’s Counterintelligence and Export Control Section, which currently includes federal prosecutors that specialize in investigating criminal violations of sanctions and export control laws.

Alternatively, the Federal Trade Commission (“FTC”) could also provide investigative support regarding potential violations under the DSP given its enforcement authority under a related law: the Protecting Americans’ Data from Foreign Adversaries Act (“PADFA”).  The FTC has enforcement authority under PADFA to seek civil penalties but is first required to refer the matter to the DOJ.[13]  Given the potential overlap between the DSP and PADFA, the FTC may be particularly well-situated to investigate and refer cases of DSP violations to FIRS.

Seventh Amendment Implications: The Jarkesy Challenge

As noted above, the DOJ has broad authority to pursue both civil penalties and prosecute criminal offenses for non-compliance with the Bulk Data Rule under the DSP, but just how the DOJ plans to pursue civil penalties for violations is also unclear.  Specifically, to the extent the DOJ seeks to impose penalties in a way that implicates administrative proceedings, it is likely to face challenges following the Supreme Court’s decision in SEC v. Jarkesy.[14]  In Jarkesy, the Supreme Court held that the Seventh Amendment entitles a defendant to a jury trial when the SEC seeks civil penalties for securities fraud,[15] thereby limiting the SEC’s ability to adjudicate cases for civil penalties through its administrative proceedings.

Jarkesy’s reasoning regarding the Seventh Amendment’s application to actions seeking civil penalties could impact the DSP’s enforcement framework.[16]  Similar to the civil penalties at issue in Jarkesy, civil penalties imposed under the DSP and IEEPA serve to punish violations and deter future misconduct rather than to compensate victims.[17]  However, unlike antifraud provisions, the DSP arguably lacks clear common law analogies, and it is possible that the DSP and IEEPA could be viewed as falling within the “public rights” exception given their links to national security.[18]

Going forward, Jarkesy is expected to affect how other federal agencies conduct enforcement actions seeking civil penalties.  The DOJ will have to consider these implications as it decides on an enforcement framework for imposing civil penalties for DSP violations.

Conclusion

The DSP represents the U.S.’s first data localization requirement ripe for enforcement, but its implementation faces substantial practical challenges that may hinder DOJ’s capacity for wide-ranging or swift action.  As companies work to ensure their activities comply with the DSP and the Bulk Data Rule ahead of July 8, many are left wondering whether DOJ will be ready to begin investigating and enforcing the Rule given its breadth and the clear potential challenges that lie ahead.  While we await DOJ’s next steps toward enforcement, companies should be prepared to document their good-faith compliance efforts to head off early investigations and enforcement actions.  Additionally, as emphasized by DOJ’s non-binding Compliance Guidance,[19] companies that proactively implement compliance programs will be better positioned to respond and adapt to this uncertain enforcement environment.


[1] U.S. Dep’t of Just., Nat’l Sec. Div., Data Security Program: Implementation and Enforcement Policy Through July 8, 2025 (Apr. 11, 2025), https://www.justice.gov/opa/media/1396346/dl?inline [hereinafter Enforcement Policy].

[2] Our prior alert memorandum on the DSP is available here, and our alert on DOJ’s 90-day limited enforcement policy of the DSP is available here.

[3] Enforcement Policy, at 1.

[4] U.S. Dep’t of Just., Nat’l Sec. Div., NSD Organizational Chart (June 16, 2023), https://www.justice.gov/nsd/national-security-division-organization-chart

[5] See, e.g., United States v. Stringer, 535 F.3d 929, 933 (9th Cir. 2008) (“There is nothing improper about the government undertaking simultaneous criminal and civil investigations.”).

[6] See United States v. Kordel, 397 U.S. 1, 11 (1970) (holding that the Government did not violate due process when it used evidence from a routine FDA civil investigation to convict defendants of criminal misbranding given that the agency made similar requests for information in 75% of civil cases and there was no suggestion the Government brought the civil case solely to obtain evidence for the criminal prosecution).

[7] Stringer, 535 F.3d at 940 (collecting cases).

[8] Justice Manual 1-12.00 – Coordination of Parallel Criminal, Civil, Regulatory, and Administrative Proceedings (May 2018), https://www.justice.gov/jm/jm-1-12000-coordination-parallel-criminal-civil-regulatory-and-administrative-proceedings

[9] Id.

[10] Justice Manual 1-12.100 – Coordination of Corporate Resolution Penalties in Parallel and/or Joint Investigations and Proceedings Arising from the Same Misconduct (May 2018), https://www.justice.gov/jm/jm-1-12000-coordination-parallel-criminal-civil-regulatory-and-administrative-proceedings

[11] See United States v. Sells Eng’g, Inc., 463 U.S. 418, 427 (1983) (rejecting the argument that all attorneys within the DOJ’s civil division are covered under (A)(i), and instead holding that “(A)(i) disclosure is limited to use by those attorneys who conduct the criminal matters to which the materials pertain”).

[12] U.S. Dep’t of Just., Crim. Resource Manual, 156. Disclosure of Matters Occurring Before the Grand Jury to Department of Justice Attorneys and Assistant United States Attorneys (Oct. 2012), https://www.justice.gov/archives/jm/criminal-resource-manual-156-disclosure-matters-occurring-grand-jury-department-justice-attys

[13] A violation of PADFA is treated as a violation of an FTC rule pursuant to 15 U.S.C. § 57a(a)(1)(B).

[14] 603 U.S. 109 (2024).

[15] Id. at 140.

[16] The Court in Jarkesy also established a two-part test for determining whether a cause of action implicates the Seventh Amendment.  First, courts must determine whether the cause of action is “legal in nature” and whether the remedy sought is traditionally obtained in courts of law.  Id. at 121–27.  If legal in nature, courts must then assess whether the “public rights” exception permits congressional assignment of adjudication to an agency.  Id. at 127–34.

[17] Id. at 121–27.

[18] Id. at 135.

[19] U.S. Dep’t of Just., Nat’l Sec. Div., Data Security Program: Compliance Guide (Apr. 11, 2025), https://www.justice.gov/opa/media/1396356/dl

CPPA Enforcement Action Against Honda Underscores Need for CCPA Compliant Privacy Practices

On March 12, the California Privacy Protection Agency (“CPPA”) announced an enforcement action against American Honda Motor Co. (“Honda”), including a $632,500 fine, for violating the California Consumer Privacy Act and its implementing regulations (“CCPA”).[1]  This action, the CPPA’s first against a company that is not a data broker, arose in connection with the Enforcement Division’s ongoing investigative sweep of connected vehicle manufacturers and related technologies.  It serves as a cautionary tale for companies handling consumer personal information, highlighting the stringent requirements of the CCPA and the consequences of non-compliance.

Alleged CCPA Violations

In connection with its review of Honda’s data privacy practices, the CPPA’s Enforcement Division concluded that Honda violated the CCPA’s requirements by:

  1. Placing an undue burden on consumers, requiring Californians to verify their identity and provide excessive personal information to exercise certain privacy rights, such as the right to opt-out of sale or sharing and the right to limit;
  2. Making it difficult for Californians to authorize other individuals or organizations (known as “authorized agents”) to exercise their privacy rights;
  3. Employing dark patterns, by using an online privacy management tool that failed to offer Californians their privacy choices in a symmetrical or equal way; and
  4. Sharing consumers’ personal information with ad tech companies without contracts that contain the necessary terms to protect privacy.

Below, we summarize the conduct giving rise to the alleged violations, and provide practical tips for businesses to consider for implementation.

1. Undue Burden on Requests to Opt-Out of Sale/Sharing and Requests to Limit

According to the Stipulated Final Order, Honda provided consumers with the same webform to submit all of their CCPA privacy rights requests, irrespective of whether the requests required identity verification, in violation of the CCPA. Specifically, the CCPA distinguishes between privacy rights for which a business may conduct prior identity verification (e.g., the rights to know/access, correct and delete) and those for which it may not (e.g., the rights to opt-out of data sales or “sharing” and to limit the use and disclosure of sensitive personal information), meaning businesses are prohibited from requiring consumers to verify their identities before actioning opt-out or limitation requests.[2]

In reviewing Honda’s practices, the CPPA found that, by using the same webform for all privacy rights requests and in turn requiring personal information to be provided before honoring opt-out and limitation requests, Honda imposed an unlawful verification standard on California consumers.  In addition, the CPPA found that the webform required consumers to provide more information than necessary[3] for Honda to verify requests to access, delete and change their data.  Accordingly, the CPPA concluded that Honda’s webform was unduly burdensome and interfered with consumers’ ability to exercise their rights, thereby violating the CCPA.

  • Practice Tip.  Businesses covered by the CCPA should review their consumer rights request processes and methods to confirm that they do not require verification before consumers can submit opt-out and limitation requests, and should limit the information consumers must provide to submit other privacy rights requests to the information truly necessary to confirm the requestor’s identity.
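
To illustrate the distinction the CPPA drew, the sketch below routes request types through the verification rule described above: verification may precede the verifiable rights (know/access, correct, delete) but must not gate opt-out or limitation requests. The category labels and function are our own illustrative shorthand, not anything taken from the CCPA or the Order.

```python
# Hedged sketch of the verification routing the CPPA's reasoning implies.
# Request-type labels are illustrative shorthand for the CCPA rights.

VERIFIABLE = {"know_access", "correct", "delete"}
NON_VERIFIABLE = {"opt_out_sale_sharing", "limit_sensitive_use"}

def requires_identity_verification(request_type: str) -> bool:
    if request_type in NON_VERIFIABLE:
        return False  # verification may not be required before honoring these
    if request_type in VERIFIABLE:
        return True   # verification permitted, collecting only what is necessary
    raise ValueError(f"unknown request type: {request_type}")

print(requires_identity_verification("opt_out_sale_sharing"))  # False
print(requires_identity_verification("delete"))                # True
```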

2. Undue Burden on Exercise of CCPA Rights through Authorized Agents

Similar to the allegations above, the second alleged violation arose in connection with Honda’s practice of requiring consumers to directly confirm that they had given permission to their authorized agents to submit opt-out and limitation requests on their behalf. 

Under the CCPA, consumers can authorize other persons or entities to exercise their aforementioned rights, and, as above, the CCPA prohibits verification requirements for rights to opt-out and limit.  While businesses may require authorized agents to provide proof of authorization, the CCPA prohibits requiring consumers to directly confirm that authorized agents have their permission.  Instead, businesses are only allowed to contact consumers directly to check authorization, provided this relates to requests to know/access, correct or delete personal information.

Despite these requirements, because Honda’s process for submitting CCPA privacy rights requests did not distinguish between verifiable and non-verifiable requests, and Honda sent confirmatory correspondence directly to consumers to confirm they had given permission to the authorized agent for all such privacy requests, the CPPA found Honda in violation of the CCPA.

  • Practice Tip.  As above, businesses should audit their consumer rights requests procedures and mechanisms to ensure that they do not impose verification requirements, including those related to the use of authorized agents, in connection with opt-out and limitation requests.

3. Asymmetry in Cookie Management Tool

The third alleged violation concerns Honda’s use of a cookie consent management tool on its website to effectuate consumer requests to opt-out of personal information “sharing”, which was configured to opt consumers in by default.

Specifically, through the OneTrust cookie consent management tool utilized on Honda’s websites, consumers were automatically opted in to the “sharing” of their personal information by default.  To opt out, consumers were required to take multiple steps (i.e., to toggle the button to turn off cookies and then confirm their choices), while opting in required either no steps or, if a consumer decided to opt back in after opting out, only one step to “allow all” cookies.

The CCPA requires businesses to design and implement methods for submitting CCPA requests that are easy to understand and execute, provide symmetrical choices, and avoid confusing language, interactive elements or choice architecture that impairs a consumer’s ability to make a choice.  Here, the CPPA focused specifically on symmetrical choices: the path for a consumer to exercise a more privacy-protective option cannot be longer, more difficult or more time-consuming than the path to exercise a less privacy-protective option, because that would impair or interfere with the consumer’s ability to make a choice.  The Stipulated Final Order went further, confirming that a website banner that provides only two options when seeking consumers’ consent to use their personal information—such as “Accept All” and “More Information,” or “Accept All” and “Preferences”—is not equal or symmetrical.

  • Practice Tip.  Businesses must audit their cookie consent management tools to ensure that consumers are not opted in to data “sales” or “sharing” by default, and that the tool does not require more steps to effectuate an opt-out request than to opt in.  Moreover, cookie consent management tools that present only two options should allow consumers to either “accept” or “reject” all cookies, rather than pairing an “accept” option with one that falls short of full rejection, such as receiving more information or visiting a “preferences” page.
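
A toy check of the symmetry principle discussed above: the privacy-protective path may not take more steps than the less protective one. The step counts below reflect the configuration described in the Stipulated Final Order; everything else is an illustrative assumption.

```python
# Toy sketch of the CPPA's "symmetry in choice" principle: opting out must
# not take more steps than opting in.

def is_symmetrical(steps_to_opt_out: int, steps_to_opt_in: int) -> bool:
    return steps_to_opt_out <= steps_to_opt_in

# As described above: two steps to opt out (toggle, then confirm) versus
# one step ("allow all") to opt back in.
print(is_symmetrical(steps_to_opt_out=2, steps_to_opt_in=1))  # False -> asymmetrical
```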

4. Absence of Contractual Safeguards with Vendors

Finally, the CPPA alleged that although Honda disclosed consumer personal information to third-party advertising technology vendors in situations where such disclosure was a “sale” or “sharing” under the CCPA, it failed to enter into a CCPA-compliant contract with such vendors.  Specifically, businesses that “sell” or “share” personal information to or with a third party must enter into agreements containing explicit provisions prescribed by the CCPA to ensure protection of consumers’ personal information. The CPPA found that by failing to implement such contractual safeguards, Honda placed consumers’ personal information at risk.

  • Practice Tip.  Businesses should audit all contracts pursuant to which consumer personal information is disclosed or otherwise made available to third parties, particularly third-party advertising technology vendors, to ensure the provisions required by the CCPA are included.

Enforcement Remedies

In addition to a $632,500 fine[4], the Stipulated Final Order requires Honda to (1) modify its methods for consumers to submit CCPA requests, including with respect to its method for the submission and confirmation of CCPA requests by authorized agents, (2) change its cookie preference tool to avoid dark patterns and ensure symmetry in choice, (3) ensure all personnel handling CCPA requests are adequately trained and (4) enter into compliant contracts with all external recipients of consumer personal information within 180 days.

Conclusion

The enforcement action against Honda underscores the importance of strict compliance with the CCPA. Businesses must ensure that their processes for handling consumer privacy requests are straightforward, do not require unnecessary information, and provide equal choice options, and must enter into CCPA compliant contracts prior to and in connection with the disclosure of consumer personal information to third parties.


[1] The Stipulated Final Order (the “Stipulated Final Order”) can be found here.

[2] Under the CCPA, businesses can verify requests to delete, correct and know personal information of consumers because of the potential harm to consumers from imposters accessing, deleting or changing their personal information; conversely, requests to opt-out of sale or sharing and requests to limit use and disclosure are prohibited from having a verification requirement because of the minimal potential harm to consumers.  Accordingly, while businesses may ask for additional information in connection with such requests to identify the relevant data in their systems, they cannot ask for more information than necessary to process such requests and, to the extent they can comply without additional information, they must do so.

[3] Specifically, the form required consumers to provide their first name, last name, address, city, state, zip code, email address and phone number, although Honda “need[ed] only two data points from [the relevant] consumer to identify [them] within its database.” 

[4] Notably, the Stipulated Final Order details the number of consumers whose rights were implicated by some of Honda’s practices, serving as a reminder to businesses that CCPA fines apply on a per violation basis.

Data Act FAQs – Key Takeaways for Manufacturers and Data Holders

On 3 February 2025, the European Commission (“EC”) published an updated version of its frequently asked questions (“FAQs”) on the EU Data Act.[1]  The Data Act, which is intended to make data more accessible to users of IoT devices in the EU, entered into force on 11 January 2024 and will become generally applicable as of 12 September 2025.

The FAQs, first published in September 2024, address the key concepts of “connected product” and “related service.” The latest iteration of the FAQs contains incremental updates which provide greater insight into how the EC believes that manufacturers and data holders should interpret their obligations under the Data Act.

Key Takeaways for Manufacturers and Data Holders

  1. “Connected products” include various smart devices, such as smartphones and TVs.[2]  The FAQs acknowledge the broad definition of connected products under the Data Act and provide examples of devices that would fall under this category. In particular, despite ambiguity created by previous iterations of the Data Act, the EC has confirmed its view in the FAQs that devices such as smartphones, smart home devices and TVs are in scope as connected products.
  2. Two conditions must be satisfied for a digital service to constitute a “Related Service.”[3]  It is expressly noted that the following conditions must be satisfied for a digital service to be a related service: (a) there must be a two-way exchange of data between the connected product and the service provider, and (b) the service must affect the connected product’s functions, behaviour or operation (see the short sketch after this list). The FAQs also provide several factors that could help businesses determine whether a digital service is a related service, including user expectations for that product category, replaceability of the digital service, and pre-installation of the digital service on the connected product. Although these factors are not determinative, they may provide helpful guidance to businesses assessing whether their services fall within this definition (for example, if the service can easily be replaced by a third-party alternative, it may not meet the threshold of a related service). Ultimately, the EC has noted that practice and courts’ interpretations will play an essential role in further delineating whether a digital service is a related service – so time will tell.
  3. Manufacturers have some discretion as to whether data will be directly or indirectly accessible.[4]  Importantly, the FAQs suggest that manufacturers/providers have a significant degree of discretion as to whether to design or redesign their connected products or related services to provide direct access to data. The FAQs list certain criteria that can be taken into account when deciding between direct access[5] and indirect access.[6] In this respect, the FAQs note that the wording of Article 3(1) (access by design) leaves flexibility as to whether design changes need to be implemented, and acknowledge that data holders may prefer to offer indirect access to the data. It is also noted that a manufacturer may implement the solution that “works best for them” and consider, as part of its assessment, whether direct access is technically possible, the costs of potential technical modifications, and the difficulty of protecting trade secrets or intellectual property or of ensuring the connected product’s security.
  4. Readily available data without disproportionate effort.[7]  The FAQs confirm the position that readily available data is “product data and related service data that a data holder can obtain without disproportionate effort going beyond a simple operation.”  The EC provided some further clarity by highlighting that only data generated or collected after the entry into application of the Data Act (i.e., after 12 September 2025) should be considered “readily available data” as the definition does not include a reference to the time of their generation or collection. However, the FAQs do not provide further clarity on what would constitute “disproportionate effort” – arguably leaving businesses with further discretion to interpret this in the context of their products and services.
  5. Data made available under the Data Act should be ‘easily usable and understandable’ by users and third parties.[8]  The FAQs expressly note that data holders are required to share data of the same quality as they make available to themselves, to facilitate the use of the data across the data economy. This indicates that raw and pre-processed data may require some additional investment to be usable. However, the FAQs make clear that there is no requirement for data holders to make substantial investments into such processes. Indeed, it may be the case that, where the level of investment in processing the data is substantial, the Chapter II obligations may not apply to that data.
  6. Data generated outside of the EU may be subject to the Data Act.[9]  The EC’s position is that when a connected product is placed on the market in the EU, all the data generated by that connected product both inside and outside the EU will be subject to the Data Act. For example, if a user purchases a smart appliance in the EU and subsequently takes it to the US with them on vacation, any data generated by the use of the appliance in the US would also fall within the scope of the Data Act.
  7. Manufacturers will not be data holders if they do not control access to the data.[10]  It is explained in the FAQs that determining who is the data holder depends on who “controls access to the readily available data”. In particular, the FAQs acknowledge that manufacturers may contract out the role of “data holder” to a third party for all or part of their connected products. This seems to suggest that where the manufacturer does not control access to the readily available data, it will not be a data holder. In addition, a related service provider that is not the manufacturer of the connected product may also be a data holder if it controls access to readily available data that is generated by the related service it provides to the user. The FAQs further confirm that there may be instances where there is no data holder, i.e., in the case of direct access, where only the user has access to data stored directly on the connected product without the involvement of the manufacturer.
  8. Data holders can use non-personal data for any purpose agreed with the user (subject to limited exceptions).[11]  The FAQs reaffirm the position that a data holder can use the non-personal data generated by the user for any purpose, provided that this is agreed with the user.[12]  Furthermore, the data holder must not derive from such data any insights about the economic situation, assets and production methods of the user in any other manner that could undermine the commercial position of the user. Where data generated by the user includes personal data, data holders should ensure any use of such data is in compliance with the EU GDPR. To ensure compliance with the GDPR, data holders may apply privacy-enhancing technologies (“PETs”); however, the EC’s view is that applying PETs does not necessarily mean that the resulting data will be considered ‘derived’ or ‘inferred’ such that they would fall out-of-scope of the Data Act.
  9. Users may be able to request access to data from previous users of their connected product.[13]  The FAQs note that the Data Act “can be read as giving users the right to access and port readily available data generated by the use of connected objects, including data generated by other users before them.” Subsequent users may therefore have a legitimate interest in such data, for example, in respect of updates or incidents. However, the rights of previous users and other applicable law (e.g., the right to be forgotten under the EU GDPR) must be respected. Moreover, data holders are able to delete certain historical data after a reasonable retention period.[14] 
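
As flagged in item 2 above, the EC’s two-part “related service” test can be read as a simple conjunction. A minimal sketch follows, with the caveat that the FAQs’ secondary factors (user expectations, replaceability, pre-installation) remain judgment calls that code cannot settle; the function and examples are our own assumptions.

```python
# Minimal sketch of the FAQs' two-part "related service" test: (a) two-way
# data exchange with the connected product, and (b) an effect on the
# product's functions, behaviour or operation. Both must hold.

def is_related_service(two_way_data_exchange: bool,
                       affects_product_behaviour: bool) -> bool:
    return two_way_data_exchange and affects_product_behaviour

# A companion app that receives device telemetry and pushes settings back:
print(is_related_service(True, True))   # True
# A read-only dashboard that merely displays device data:
print(is_related_service(True, False))  # False
```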

Although the initial set of FAQs, and the subsequent incremental updates, provide further guidance for businesses whose products or services may fall in scope of the Data Act, there are still areas of uncertainty that are yet to be addressed. As the FAQs are a “living document”, they may continue to be updated as and when the EC deems it necessary. It is also important to note that while the FAQs provide some useful guidance on Data Act interpretation, the Data Act is subject to supplemental domestic implementation and enforcement by national competent authorities of EU member states. Businesses should therefore pay careful attention to guidance published by national authorities in the member states and sectoral areas in which they operate.


[1] See https://digital-strategy.ec.europa.eu/en/library/commission-publishes-frequently-asked-questions-about-data-act.

[2] See Question 7 of the FAQs.

[3] See Question 10 of the FAQs.

[4] See Question 17 and 22 of the FAQs.

[5] I.e., ‘where relevant and technically feasible’ the user has the technical means to access, stream or download the data without the involvement of the data holder. For further information, see Article 3(1) of the Data Act.

[6] I.e., the connected product or related service is designed in such a way that the user must ask the data holder for access. For further information, see Article 4(1) of the Data Act.

[7] See Question 4 of the FAQs.

[8] See Question 5 of the FAQs.

[9] See Question 9 of the FAQs.

[10] See Question 21 of the FAQs.

[11] See Question 29 of the FAQs and Question 13 of the FAQs.

[12] See also Article 4(13) of the Data Act.

[13] See Question 33 of the FAQs.

[14] See Recital 24 of the Data Act.
