Cleary Cybersecurity and Privacy Watch

California Enacts Landmark AI Safety Law But With Very Narrow Applicability

On September 29, 2025, Governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act (TFAIA, SB 53 or the Act)[1], establishing a comprehensive framework for transparency, safety and accountability in the development and deployment of the most advanced artificial intelligence models. Building upon existing California laws targeting AI such as AB 2013[2], the Act, which takes effect January 1, 2026 and imposes penalties up to $1 million per violation, creates immediate compliance obligations for AI developers of the most powerful frontier models.

The path to TFAIA was paved by failure. TFAIA’s predecessor SB 1047[3] overwhelmingly passed the legislature last year, but was ultimately blocked at the Governor’s desk. In his veto statement, Governor Newsom called for an approach to frontier model regulation “informed by an empirical trajectory analysis of AI systems and capabilities,” criticizing SB 1047 for applying stringent standards to even the most basic functions[4]. TFAIA thus represents a strategic pivot to regulation focused only on the most impactful AI models, eliminating the kill switch requirement (which would have mandated full shutdown capabilities for noncompliant systems), the rigid testing and auditing regime, and the aggressive 72-hour incident reporting timeline that doomed its predecessor.

TFAIA is California’s attempt to balance advancing AI innovation and competition with accountability for responsible AI development. The Act aims to bolster public trust and increase awareness of AI-specific risks by requiring developers to think critically about frontier AI capabilities.

Scope and Thresholds

Scoped narrowly to target the most powerful models capable of significant and catastrophic impact, TFAIA imposes certain requirements on “frontier models,” defined as foundation models (or general purpose models that are trained on broad data sets) trained using or intending to use a quantity of computing power greater than 10^26 integer or floating-point operations.[5]  In particular, all “frontier developers” (or persons that “trained or initiated the training” of frontier models) face baseline transparency requirements, with more burdensome obligations imposed on “large frontier developers” (namely, frontier developers that, together with affiliates, had annual gross revenues above $500 million in the preceding year).
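For illustration only, the two headline applicability thresholds described above can be reduced to a pair of simple numeric tests. The sketch below assumes a purely mechanical reading of the definitions (training compute greater than 10^26 operations, including fine-tuning, and prior-year gross revenues above $500 million together with affiliates); the function and parameter names are hypothetical, and the actual legal analysis is more nuanced.

    # Illustrative sketch only; thresholds are taken from the Act, names are hypothetical.
    COMPUTE_THRESHOLD_OPS = 10**26               # integer or floating-point operations,
                                                 # including initial training and fine-tuning
    LARGE_DEVELOPER_REVENUE_USD = 500_000_000    # prior-year gross revenues, with affiliates

    def classify_developer(training_compute_ops: float, prior_year_revenue_usd: float) -> str:
        """Rough classification under TFAIA; not a substitute for the statutory analysis."""
        if training_compute_ops <= COMPUTE_THRESHOLD_OPS:
            return "not a frontier developer"
        if prior_year_revenue_usd > LARGE_DEVELOPER_REVENUE_USD:
            return "large frontier developer"
        return "frontier developer"

    print(classify_developer(3e26, 750_000_000))   # -> large frontier developer
    print(classify_developer(3e26, 100_000_000))   # -> frontier developer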

Tailoring its scope even further, TFAIA focuses many of its requirements on prevention of “catastrophic risk,” defined as a foreseeable and material risk that a frontier model could (1) materially contribute to the death or serious injury of 50 or more people or (2) cause at least $1 billion in damages to property, in either case arising from a single incident involving a frontier model doing any of the following: (a) providing expert-level assistance in creating or releasing a chemical, biological, radiological, or nuclear weapon; (b) engaging in criminal conduct (conduct that would constitute murder, assault, extortion or theft) or a cyberattack, without meaningful human intervention; or (c) evading the control of its frontier developer or user.

Key Compliance Provisions

TFAIA imposes certain requirements on all frontier developers, with heightened obligations on large frontier developers:

  1. Transparency Reports. At or before the time of deploying a frontier model (or a substantially modified version of an existing frontier model), frontier developers must publish a transparency report on their website. Reports, which can under the Act be embedded in model or system cards, must include (a) the website of the frontier developer, (b) model details (e.g., release date, languages supported, intended uses, modalities, restrictions) and (c) mechanisms by which a person can communicate with the frontier developer.[6]
    Large frontier developers must further (x) include summaries of assessments of catastrophic risks resulting from use of the frontier model, the results of such assessments, the role of any third-party evaluators and the steps taken to fulfill the requirements of the frontier AI framework (see below) and (y) transmit to the Office of Emergency Services reports of any assessments of catastrophic risk resulting from internal use of their frontier models every three months or pursuant to another reasonable schedule specified by the developer.  The Act tasks the Office of Emergency Services with establishing a mechanism by which large frontier developers can confidentially submit such assessment reports of catastrophic risk.
  2. Critical Safety Incident Reporting. Frontier developers are required to report “critical safety incidents”[7] to the Office of Emergency Services within 15 days of discovery.  To the extent a critical safety incident poses imminent risk of death or serious physical injury, the reporting window is shortened to 24 hours, with disclosure required to an appropriate authority based on the nature of the incident and as required by law.  Note, critical safety incidents pertaining to foundation models that do not qualify as frontier models are not required to be reported.  Importantly, TFAIA exempts the following reports from disclosure under the California Public Records Act: reports regarding critical safety incidents, reports of assessments of catastrophic risk and covered employee reports made pursuant to the whistleblower protections described below.
  3. Frontier AI Frameworks for Large Frontier Developers. In addition to the above, large frontier developers must publish annually (and, upon making a material modification to the framework, within 30 days of such modification) a frontier AI framework describing the technical and organizational protocols relied upon to identify, assess, mitigate and govern catastrophic risks. The framework must include documentation of the developer’s alignment with national/international standards, governance structures, thresholds used to identify and assess the frontier model’s capabilities to pose a catastrophic risk, mitigation processes (including independent review of the potential for catastrophic risks and the effectiveness of mitigation processes) and cybersecurity practices and processes for identifying and responding to critical safety incidents.  Large frontier developers are prohibited from making false or misleading claims about catastrophic risks from their frontier models or their compliance with their published frontier AI framework.  Additionally, these developers are permitted to redact information necessary to protect trade secrets, cybersecurity, public safety or national security or as required by law, as long as they maintain records of unredacted versions for a period of at least five years.

Other Notable Provisions

In addition to the requirements imposed on frontier developers, TFAIA resurrects CalCompute, a consortium first envisioned under SB 1047 and tasked with developing a framework for the creation of a public cloud computing cluster that would provide access to advanced computing capabilities to support safe, equitable and sustainable AI development and deployment in the public interest.

TFAIA also enhances protections for whistleblowers by (1) prohibiting frontier developers from adopting rules that would prevent employees from reporting catastrophic risks, and from retaliating against employees who report such risks, (2) requiring frontier developers to provide notice to their employees once a year of their rights as whistleblowers and (3) requiring large frontier developers to implement and maintain anonymous internal reporting channels. Notably, whistleblowers are empowered to bring civil actions for injunctive relief (as well as recovery of attorneys’ fees) against frontier developers for violations of their rights under the Act.

Enforcement and Rulemaking

Large frontier developers that fail to publish TFAIA-compliant reports or other documentation, make a false statement about catastrophic risk or their compliance with their frontier AI framework, fail to report a critical safety incident or fail to comply with their frontier AI framework could face penalties up to $1 million per violation, scaled to the severity of the offense. Such penalties can only be recovered by the Attorney General bringing a civil action. 

To ensure that the applicability of the TFAIA reflects technological change, the Act empowers the California Department of Technology—as opposed to the Attorney General as envisioned under SB 1047—to assess technological developments, research and international standards and recommend updates to key statutory definitions (of “frontier model,” “frontier developer” and “large frontier developer”) on or before January 1, 2027 and annually thereafter. 

Key Takeaways

With TFAIA, California provides a blueprint for regulations focused on the most impactful and powerful AI technology, establishing transparency, disclosure and governance requirements for frontier model developers.  A similar bill regulating frontier models, the Responsible AI Safety and Education (RAISE) Act, awaits the signature of Governor Hochul in New York.  Although TFAIA and RAISE have similar applicability and frameworks,[8] RAISE imposes stricter requirements (a 72-hour window for reporting safety incidents) and higher penalties (up to $10 million for a first violation and $30 million for subsequent ones), similar to the failed SB 1047.  TFAIA’s success in navigating gubernatorial approval, where SB 1047 failed, demonstrates the effectiveness of a transparency-first approach over prescriptive mandates: TFAIA largely focuses on disclosure requirements for covered models, whereas RAISE does not require transparency reporting to the same extent, does not include whistleblower protections and instead focuses on enforcement by imposing strict liability and prohibiting models that create an unreasonable risk of critical harms.  This suggests the RAISE Act may be subject to further narrowing, or even a veto, by Governor Hochul.

Most businesses, including the vast majority of AI developers, will be relieved that TFAIA has such narrow applicability.  For the few businesses that might meet TFAIA’s applicability thresholds, the law represents both immediate compliance obligations and a preview of the regulatory landscape to come. These businesses should:

  1. Conduct a threshold analysis to determine frontier developer or large frontier developer status
  2. Review existing AI safety practices against TFAIA requirements, particularly focusing on safety framework documentation and incident reporting capabilities
  3. Develop comprehensive frontier AI frameworks addressing the law’s required elements, including governance structures, risk assessment thresholds and cybersecurity practices
  4. Implement robust documentation systems to support transparency reporting requirements for model releases and modifications
  5. Create incident response procedures to identify and report critical safety incidents within required timelines (15-day standard, 24-hour emergency)
  6. Update whistleblower reporting mechanisms and ensure employees receive notice of their rights under the law
  7. Develop scalable compliance frameworks accommodating varying state requirements as other states, including New York, consider similar AI safety laws
  8. Consider voluntary adoption of TFAIA-style frameworks as industry best practices, even for companies below current thresholds

[1] The text of the Act can be found here.

[2] AB 2013 requires developers of generative AI systems to post documentation on their website describing the dataset(s) used for system training.

[3] The text of SB 1047 can be found here.

[4] https://www.gov.ca.gov/wp-content/uploads/2024/09/SB-1047-Veto-Message.pdf.

[5] The computing power minimum includes computing from both initial training and subsequent fine-tuning or modifications.

[6] Notably, frontier developers can redact portions of their transparency reports to protect trade secrets and guard against cybersecurity or public safety threats; however, any such redactions must be justified within the report, which must be maintained for five years.

[7] The Act defines a “critical safety incident” to mean any of the following: (1) unauthorized access to, modification of, or exfiltration of, the model weights of a frontier model that results in death or bodily injury; (2) harm resulting from the materialization of a catastrophic risk; (3) loss of control of a frontier model causing death or bodily injury; or (4) a frontier model that uses deceptive techniques against the frontier developer to subvert the controls or monitoring of its frontier developer outside the context of an evaluation designed to elicit this behavior and in a manner that demonstrates materially increased catastrophic risk.

[8] Unlike TFAIA, RAISE instead applies only to “large developers” defined as persons that have (1) trained at least one frontier model and (2) spent over $100 million in aggregate compute costs in training frontier models.

Enforcement Countdown: Is DOJ Ready for the Bulk Data Rule “Grace Period” to End?

As of July 8, the U.S. Department of Justice (“DOJ”) is scheduled to begin full enforcement of its Data Security Program (“DSP”) and the recently issued Bulk Data Rule after its 90-day limited enforcement policy expires, ushering in “full compliance” requirements for U.S. companies and individuals.[1] 

Although it remains to be seen whether DOJ’s National Security Division (“NSD”) will have the necessary infrastructure and personnel in place to launch comprehensive investigations to enforce such an expansive regulatory program, companies should be wary of waiting to see whether NSD is operationally ready.  Instead, companies should bear in mind certain considerations, discussed below, when approaching this new and uncertain enforcement frontier.

The DSP is a brand new regulatory framework based on the Bulk Data Rule that imposes restrictions designed to prevent certain countries—China, Cuba, Iran, North Korea, Russia, and Venezuela—and covered persons from accessing Americans’ bulk sensitive personal data and U.S. government-related data.[2]  Violations of the Rule are subject to steep penalties.  Pursuant to the DSP and the International Emergency Economic Powers Act (“IEEPA”), DOJ is authorized to bring not only civil enforcement actions, but also criminal prosecutions for willful violations of the DSP’s requirements.  Civil penalties may reach up to the greater of $368,136 or twice the value of each violative transaction, while willful violations are punishable by up to 20 years imprisonment and a $1,000,000 fine.[3]
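As a purely arithmetic illustration of the civil penalty ceiling described above (the greater of $368,136 or twice the value of the violative transaction), the following sketch applies the formula mechanically; it says nothing about how DOJ would exercise its discretion, and the names are hypothetical.

    # Illustrative only: maximum civil penalty per violative transaction under the DSP/IEEPA.
    FIXED_AMOUNT_USD = 368_136

    def max_civil_penalty(transaction_value_usd: float) -> float:
        # The ceiling is the greater of the fixed statutory amount or twice the transaction value.
        return max(FIXED_AMOUNT_USD, 2 * transaction_value_usd)

    print(max_civil_penalty(100_000))     # -> 368136 (fixed amount governs)
    print(max_civil_penalty(5_000_000))   # -> 10000000.0 (twice the transaction value governs)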

Although the DSP largely went into effect on April 8, 2025, DOJ instituted a 90-day limited enforcement period.  During this period, NSD stated it would deprioritize civil enforcement actions for companies and individuals making a “good-faith effort” to come into compliance with the DSP.  This grace period comes to an end on July 8, 2025.  As detailed below, this broad grant of investigative and enforcement authority—especially the potential for both civil and criminal liability—creates a number of potential logistical and legal challenges for DOJ.

Investigation and Enforcement Challenges

Enforcement of the DSP falls to the NSD, and more specifically to a small, specialized section named the Foreign Investment Review Section (“FIRS”).  Historically, FIRS comprised approximately 10 to 20 attorneys with a niche portfolio of responsibilities that included representing DOJ on the Committee on Foreign Investment in the United States and Team Telecom.  With this portfolio, FIRS generally enjoyed a comparatively lower profile than other sections within the Department, leaving most federal prosecutors and criminal defense attorneys unfamiliar with its activities.

However, that all could change in the near future given that FIRS has been tasked with implementing and enforcing an entirely new regulatory and enforcement regime.  Going forward, FIRS – a section traditionally without litigators or a litigating function – will have both civil and criminal authority to investigate, bring enforcement actions, and prosecute violations of the Rule. 

Complications Associated with Adding Criminal Prosecutors to FIRS

The availability of criminal penalties under the DSP will require a number of changes at FIRS.  Notably, unlike other NSD sections, FIRS’s work did not previously include criminal prosecutions; the section instead maintained a regulatory focus.[4]

Given FIRS’s lack of experience with criminal cases, FIRS must now decide how it will staff enforcement matters going forward, including whether to hire federal prosecutors directly or to instead coordinate with U.S. Attorneys’ Offices or other sections of NSD in connection with criminal investigations and prosecutions.  It seems likely that NSD would consider staffing up FIRS in anticipation of its dual criminal and civil enforcement authority under the DSP.  But the introduction of criminal prosecutors into the same small section as civil regulators opens up potential risks in terms of parallel civil and criminal investigations:

  1. Due Process Considerations: While DOJ often conducts parallel criminal and civil investigations, such coordination is subject to limitations imposed by the Due Process Clause of the Fifth Amendment.[5]  In United States v. Kordel, the Supreme Court suggested that the Government may be found to have acted in bad faith in violation of the Fifth Amendment by bringing “a civil action solely to obtain evidence for its criminal prosecution” or by “fail[ing] to advise the defendant in its civil proceedings that it contemplates his criminal prosecution.”[6]  Lower courts have “occasionally suppressed evidence or dismissed indictments on due process grounds where the government made affirmative misrepresentations or conducted a civil investigation solely for purposes of advancing a criminal case.”[7]  In order to avoid such consequences, FIRS will have to ensure that any cooperation or coordination in parallel civil and criminal investigations of DSP violations complies with Due Process requirements.
  2. DOJ Internal Policy Limitations: In addition to Due Process requirements, internal DOJ guidance places guardrails around parallel or joint civil and criminal investigations.  Section 1-12.00 of the Justice Manual notes that “when conducted properly,” parallel investigations can “serve the best interests of law enforcement and the public.”[8]  However, the same section goes on to warn DOJ attorneys that “parallel proceedings must be handled carefully in order to avoid allegations of . . . abuse of civil process.”[9]  Section 1-12.100 addresses parallel or joint corporate investigations and similarly emphasizes that DOJ attorneys “should remain mindful of their ethical obligations not to use criminal enforcement authority unfairly to extract, or to attempt to extract, additional civil or administrative monetary payments.”[10]
  3. Maintaining the Secrecy of Rule 6(e) Grand Jury Materials: Finally, FIRS will need to implement precautions to ensure that its civil enforcement attorneys are walled off from the disclosure of materials covered by Federal Rule of Criminal Procedure 6(e).  Rule 6(e) establishes a general rule of secrecy for grand jury materials with limited exceptions.  Although Rule 6(e)(3)(A)(i) permits disclosure “to an attorney for the government for use in the performance of such attorney’s duty,” civil enforcement attorneys within FIRS could only view Rule 6(e) materials if they obtain a court order.[11]  Moreover, pursuant to DOJ guidance, even when disclosure is authorized for use in civil proceedings, it is considered a “better practice to forestall the disclosure until the criminal investigation is complete,” given the potential “danger of misuse, or the appearance thereof.”[12]  Given that none of the exceptions under Rule 6(e) appear readily applicable, criminal attorneys within FIRS will have to take particular precautions to ensure that grand jury material covered under Rule 6(e) is not disclosed to their civil colleagues.

Following July 8, as we wait to see whether FIRS initiates investigations and enforcement actions under the DSP, it will need to address the above limitations and potential pitfalls that come with parallel civil and criminal proceedings.  This will be especially important given the relatively small size of FIRS, its historic regulatory focus, and the addition of criminal prosecutors and criminal enforcement authority as it tries to administer an entirely new regulatory and enforcement regime.

Limited Investigative Resources

In addition to potential concerns associated with criminal enforcement of the DSP, there is also uncertainty about how FIRS will investigate potential violations.  Traditional sanctions and export control enforcement rely on the Department of the Treasury’s Office of Foreign Assets Control and the Department of Commerce’s Bureau of Industry and Security, respectively; by contrast, it is unclear what, if any, dedicated investigative resources or interagency cooperation FIRS will have at its disposal.  While federal prosecutors typically investigate alongside agents from the Federal Bureau of Investigation and Homeland Security Investigations, such investigative resources historically were not allocated to FIRS, and it is unclear which federal investigating agency – if any – has been tasked with leading these investigations.  This raises questions about FIRS’s capacity to effectively investigate and bring enforcement actions for potential violations.

One option that could be considered is to have FIRS limit its role to civil enforcement and – to the extent it comes across potential criminal conduct – make criminal referrals to either (i) the appropriate United States Attorney’s Office, all of which have federal prosecutors who have been trained in national security investigations and have routine access to a grand jury, or (ii) NSD’s Counterintelligence and Export Control Section, which currently includes federal prosecutors that specialize in investigating criminal violations of sanctions and export control laws.

Alternatively, the Federal Trade Commission (“FTC”) could also provide investigative support regarding potential violations under the DSP given its enforcement authority under a related law: the Protecting Americans’ Data from Foreign Adversaries Act (“PADFA”).  The FTC has enforcement authority under PADFA to seek civil penalties but is first required to refer the matter to the DOJ.[13]  Given the potential overlap between the DSP and PADFA, the FTC may be particularly well-situated to investigate and refer cases of DSP violations to FIRS.

Seventh Amendment Implications: The Jarkesy Challenge

As noted above, the DOJ has broad authority both to pursue civil penalties and to prosecute criminal offenses for non-compliance with the Bulk Data Rule under the DSP, but just how the DOJ plans to pursue civil penalties for violations is also unclear.  Specifically, to the extent the DOJ seeks to impose penalties in a way that implicates administrative proceedings, it is likely to face challenges following the Supreme Court’s decision in SEC v. Jarkesy.[14]  In Jarkesy, the Supreme Court held that the Seventh Amendment entitles a defendant to a jury trial when the SEC seeks civil penalties for securities fraud,[15] thereby limiting the SEC’s ability to adjudicate cases for civil penalties through its administrative proceedings.

Jarkesy’s reasoning regarding the Seventh Amendment’s application to actions seeking civil penalties could impact the DSP’s enforcement framework.[16]  Similar to the civil penalties at issue in Jarkesy, civil penalties imposed under the DSP and IEEPA serve to punish violations and deter future misconduct, as opposed to compensate victims.[17]  However, unlike antifraud provisions, the DSP arguably lacks clear common law analogies, and it is possible that the DSP and IEEPA could be viewed under the “public rights” exception given the links to national security.[18]

Going forward, Jarkesy is expected to affect how other federal agencies conduct enforcement actions seeking civil penalties.  The DOJ will have to consider these implications as it decides on an enforcement framework for imposing civil penalties for DSP violations.

Conclusion

The DSP represents the U.S.’s first data localization requirement ripe for enforcement, but its implementation faces substantial practical challenges that may hinder DOJ’s ability to take wide-ranging or swift action.  As companies work to ensure their activities are in compliance with the DSP and the Bulk Data Rule ahead of July 8, many are left wondering whether the DOJ will be ready to begin investigating and enforcing this Rule given its breadth and the clear potential challenges that lie ahead.  While we await DOJ’s next steps toward enforcement, companies should be prepared to document their good-faith efforts to demonstrate compliance with the DSP and the Rule, which may help forestall early investigations and enforcement actions.  Additionally, as emphasized by the DOJ’s non-binding Compliance Guidance,[19] companies that proactively implement compliance programs will be better positioned to respond and adapt to this uncertain enforcement environment.


[1] U.S. Dep’t of Just., Nat’l Sec. Div., Data Security Program: Implementation and Enforcement Policy Through July 8, 2025 (Apr. 11, 2025), https://www.justice.gov/opa/media/1396346/dl?inline [hereinafter Enforcement Policy].

[2] Our prior alert memorandum on the DSP is available here, and our alert on DOJ’s 90-day limited enforcement policy of the DSP is available here.

[3] Enforcement Policy, at 1.

[4] U.S. Dep’t of Just., Nat’l Sec. Div., NSD Organizational Chart (June 16, 2023), https://www.justice.gov/nsd/national-security-division-organization-chart

[5] See, e.g., United States v. Stringer, 535 F.3d 929, 933 (9th Cir. 2008) (“There is nothing improper about the government undertaking simultaneous criminal and civil investigations.”).

[6] See United States v. Kordel, 397 U.S. 1, 11 (1970) (holding that the Government did not violate due process when it used evidence from a routine FDA civil investigation to convict defendants of criminal misbranding given that the agency made similar requests for information in 75% of civil cases and there was no suggestion the Government brought the civil case solely to obtain evidence for the criminal prosecution).

[7] Stringer, 535 F.3d at 940 (collecting cases).

[8] Justice Manual 1-12.00 – Coordination of Parallel Criminal, Civil, Regulatory, and Administrative Proceedings (May 2018), https://www.justice.gov/jm/jm-1-12000-coordination-parallel-criminal-civil-regulatory-and-administrative-proceedings

[9] Id.

[10] Justice Manual 1-12.100 – Coordination of Corporate Resolution Penalties in Parallel and/or Joint Investigations and Proceedings Arising from the Same Misconduct (May 2018), https://www.justice.gov/jm/jm-1-12000-coordination-parallel-criminal-civil-regulatory-and-administrative-proceedings

[11] See United States v. Sells Eng’g, Inc., 463 U.S. 418, 427 (1983) (rejecting the argument that all attorneys within the DOJ’s civil division are covered under (A)(i), and instead holding that “(A)(i) disclosure is limited to use by those attorneys who conduct the criminal matters to which the materials pertain”).

[12] U.S. Dep’t of Just., Crim. Resource Manual, 156. Disclosure of Matters Occurring Before the Grand Jury to Department of Justice Attorneys and Assistant United States Attorneys (Oct. 2012), https://www.justice.gov/archives/jm/criminal-resource-manual-156-disclosure-matters-occurring-grand-jury-department-justice-attys

[13] A violation of PADFA is treated as a violation of an FTC rule pursuant to 15 U.S.C. § 57a(a)(1)(B).

[14] 603 U.S. 109 (2024).

[15] Id. at 140.

[16] The Court in Jarkesy also established a two-part test for determining whether a cause of action implicates the Seventh Amendment.  First, courts must determine whether the cause of action is “legal in nature” and whether the remedy sought is traditionally obtained in courts of law.  Id. at 121–27.  If legal in nature, courts must then assess whether the “public rights” exception permits congressional assignment of adjudication to an agency.  Id. at 127–34.

[17] Id. at 121–27.

[18] Id. at 135.

[19] U.S. Dep’t of Just., Nat’l Sec. Div., Data Security Program: Compliance Guide (Apr. 11, 2025), https://www.justice.gov/opa/media/1396356/dl

CPPA Enforcement Action Against Honda Underscores Need for CCPA Compliant Privacy Practices

On March 12, the California Privacy Protection Agency (“CPPA”) announced an enforcement action against American Honda Motor Co. (“Honda”), with a $632,500 fine for violating the California Consumer Privacy Act and its implementing regulations (“CCPA”).[1]  This action, which is the CPPA’s first enforcement action against a company that is not a data broker, arose in connection with the Enforcement Division’s ongoing investigative sweep of connected vehicle manufacturers and related technologies, and serves as a cautionary tale for companies handling consumer personal information, highlighting the stringent requirements of the CCPA and the consequences of non-compliance.

Alleged CCPA Violations

In connection with its review of Honda’s data privacy practices, the CPPA’s Enforcement Division concluded that Honda violated the CCPA’s requirements by:

  1. Placing an undue burden on consumers, requiring Californians to verify their identity and provide excessive personal information to exercise certain privacy rights, such as the right to opt-out of sale or sharing and the right to limit;
  2. Making it difficult for Californians to authorize other individuals or organizations (known as “authorized agents”) to exercise their privacy rights;
  3. Employing dark patterns, by using an online privacy management tool that failed to offer Californians their privacy choices in a symmetrical or equal way; and
  4. Sharing consumers’ personal information with ad tech companies without contracts that contain the necessary terms to protect privacy.

Below, we summarize the conduct giving rise to the alleged violations, and provide practical tips for businesses to consider for implementation.

1. Undue Burden on Requests to Opt-Out of Sale/Sharing and Requests to Limit

According to the Stipulated Final Order, Honda provided consumers with the same webform to submit all of their CCPA privacy rights requests irrespective of whether the requests required identity verification or not, in violation of the CCPA. Specifically, the CCPA distinguishes between privacy rights that permit a business to conduct prior identity verification (e.g., rights to know/access, correct and delete) and those that do not (e.g., rights to opt-out of data sales or “sharing” and to limit the use and disclosure of sensitive personal information), meaning businesses are prohibited from requiring consumers to verify their identities before actioning opt-out or limitation requests.[2] 

In reviewing Honda’s practices, the CPPA found that, by using the same webform for all privacy rights requests and in turn requiring personal information to be provided before honoring opt-out and limitation requests, Honda imposed an unlawful verification standard on California consumers.  In addition, the CPPA found that the webform required consumers to provide more information than necessary[3] for Honda to verify requests to access, delete and change their data.  Accordingly, the CPPA found that Honda’s webform was unduly burdensome, interfering with the ability of consumers to exercise their rights and thereby violating the CCPA.

  • Practice Tip.  Businesses covered by the CCPA should review their consumer rights request processes and methods to confirm that they do not require verification for consumers to submit opt-out and limitation requests, and should limit the information consumers must provide to submit other privacy rights requests to only what is truly necessary to confirm the identity of the requestor.

2. Undue Burden on Exercise of CCPA Rights through Authorized Agents

Similar to the allegations above, the second alleged violation arose in connection with Honda’s practice of requiring consumers to directly confirm that they had given permission to their authorized agents to submit opt-out and limitation requests on their behalf. 

Under the CCPA, consumers can authorize other persons or entities to exercise their aforementioned rights, and, as above, the CCPA prohibits verification requirements for rights to opt-out and limit.  While businesses may require authorized agents to provide proof of authorization, the CCPA prohibits requiring consumers to directly confirm that authorized agents have their permission.  Instead, businesses may contact consumers directly to confirm authorization only in connection with requests to know/access, correct or delete personal information.

Despite these requirements, because Honda’s process for submitting CCPA privacy rights requests did not distinguish between verifiable and non-verifiable requests, and Honda sent confirmatory correspondence directly to consumers to confirm they had given permission to the authorized agent for all such privacy requests, the CPPA found Honda in violation of the CCPA.

  • Practice Tip.  As above, businesses should audit their consumer rights requests procedures and mechanisms to ensure that they do not impose verification requirements, including those related to the use of authorized agents, in connection with opt-out and limitation requests.

3. Asymmetry in Cookie Management Tool

The third alleged violation concerns Honda’s use of a cookie consent management tool on its website to effectuate consumer requests to opt out of personal information “sharing,” which was configured to opt consumers in by default.

Specifically, through the OneTrust cookie consent management tool utilized on Honda’s websites, consumers were automatically opted in to the “sharing” of their personal information by default.  To opt out, consumers were required to take multiple steps (i.e., to toggle the button to turn off cookies and then confirm their choices), while opting in required either no steps or, if a consumer decided to opt back in after opting out, only one step to “allow all” cookies.

The CCPA requires businesses to design and implement methods for submitting CCPA requests that are easy to understand, provide symmetrical choices, avoid confusing language, interactive elements or choice architecture that impairs the consumer’s ability to make a choice, and are easy to execute.  Here, the CPPA focused specifically on the requirement to provide symmetrical choices, meaning that the path for a consumer to exercise a more privacy-protective option cannot be longer, more difficult or more time-consuming than the path to exercise a less privacy-protective option, because that would impair or interfere with the consumer’s ability to make a choice.  The Stipulated Final Order went further to confirm that a website banner that provides only two options when seeking consumers’ consent to use their personal information, such as “Accept All” and “More Information” or “Accept All” and “Preferences,” is not equal or symmetrical.

  • Practice Tip.  Businesses must audit their cookie consent management tools to ensure that consumers are not opted in to data “sales” or “sharing” by default, and that the tool does not require a consumer to take more steps to effectuate an opt-out request than to opt in.  Moreover, cookie consent management tools that present only two options should allow consumers to either “accept” or “reject” all cookies, rather than presenting an “accept” option alongside another option that is not a full rejection (such as receiving more information or going to a “preferences” page).

4. Absence of Contractual Safeguards with Vendors

Finally, the CPPA alleged that although Honda disclosed consumer personal information to third-party advertising technology vendors in situations where such disclosure was a “sale” or “sharing” under the CCPA, it failed to enter into a CCPA-compliant contract with such vendors.  Specifically, businesses that “sell” or “share” personal information to or with a third party must enter into agreements containing explicit provisions prescribed by the CCPA to ensure protection of consumers’ personal information. The CPPA found that by failing to implement such contractual safeguards, Honda placed consumers’ personal information at risk.

  • Practice Tip.  Businesses should audit all contracts pursuant to which consumer personal information is disclosed or otherwise made available to third parties, particularly third-party advertising technology vendors, to ensure the provisions required by the CCPA are included.

Enforcement Remedies

In addition to a $632,500 fine[4], the Stipulated Final Order requires Honda to (1) modify its methods for consumers to submit CCPA requests, including with respect to its method for the submission and confirmation of CCPA requests by authorized agents, (2) change its cookie preference tool to avoid dark patterns and ensure symmetry in choice, (3) ensure all personnel handling CCPA requests are adequately trained and (4) enter into compliant contracts with all external recipients of consumer personal information within 180 days.

Conclusion

The enforcement action against Honda underscores the importance of strict compliance with the CCPA. Businesses must ensure that their processes for handling consumer privacy requests are straightforward, do not require unnecessary information, and provide equal choice options, and must enter into CCPA compliant contracts prior to and in connection with the disclosure of consumer personal information to third parties.


[1] The Stipulated Final Order (the “Stipulated Final Order”) can be found here.

[2] Under the CCPA, businesses can verify requests to delete, correct and know personal information of consumers because of the potential harm to consumers from imposters accessing, deleting or changing their personal information; conversely, requests to opt-out of sale or sharing and requests to limit use and disclosure are prohibited from having a verification requirement because of the minimal potential harm to consumers.  Accordingly, while businesses may ask for additional information in connection with such requests to identify the relevant data in their systems, they cannot ask for more information than necessary to process such requests and, to the extent they can comply without additional information, they must do so.

[3] Specifically, the form required consumers to provide their first name, last name, address, city, state, zip code, email address and phone number, although Honda “need[ed] only two data points from [the relevant] consumer to identify [them] within its database.” 

[4] Notably, the Stipulated Final Order details the number of consumers whose rights were implicated by some of Honda’s practices, serving as a reminder to businesses that CCPA fines apply on a per violation basis.

Data Act FAQs – Key Takeaways for Manufacturers and Data Holders

On 3 February 2025, the European Commission (“EC”) published an updated version of its frequently asked questions (“FAQs”) on the EU Data Act.[1]  The Data Act, which is intended to make data more accessible to users of IoT devices in the EU, entered into force on 11 January 2024 and will become generally applicable as of 12 September 2025.

The FAQs, first published in September 2024, address the key concepts of “connected product” and “related service.” The latest iteration of the FAQs contains incremental updates which provide greater insight into how the EC believes that manufacturers and data holders should interpret their obligations under the Data Act.

Key Takeaways for Manufacturers and Data Holders

  1. “Connected Products” includes various smart devices, including smartphones and TVs.[2]  The FAQs acknowledge the broad definition of connected products under the Data Act and provide examples of devices that would fall under this category. In particular, despite ambiguity created from previous iterations of the Data Act, the EC has confirmed its view in the FAQs that devices such as smartphones, smart home devices and TVs are in-scope as connected products.
  2. Two conditions must be satisfied for a digital service to constitute a “Related Service.”[3]  It is expressly noted that the following conditions must be satisfied for a digital service to be a related service: (a) there must be a two-way exchange of data between the connected product and the service provider, and (b) the service must affect the connected product’s functions, behaviour, or operation. The FAQs also provide several factors that could help businesses determine whether a digital service is a related service, including user expectations for that product category, replaceability of the digital service, and pre-installation of the digital service on the connected product. Although these factors are not determinative, they may provide helpful guidance to businesses assessing whether their services fall within this definition (for example, if the service can easily be replaced by a third-party alternative, it may not meet the threshold of a related service). Ultimately, the EC has noted that practice and courts’ interpretations will play an essential role in further delineating if a digital service is a related service – so time will tell.
  3. Manufacturers have some discretion as to whether data will be directly or indirectly accessible.[4]  Importantly, the FAQs suggest that manufacturers/providers have a significant degree of discretion whether or not to design or redesign their connected products or related services to provide direct access to data. The FAQs list out certain criteria which can be taken into account when determining whether to design for direct access[5] or indirect access.[6] In this respect, the FAQs note that the wording of Article 3(1) (access by design) leaves flexibility as to whether design changes need to be implemented and it is acknowledged that data holders may prefer to offer indirect access to the data. It is also noted that the manufacturer may implement a solution that “works best for them” and consider, as part of its assessment, whether direct access is technically possible, the costs of potential technical modifications, and the difficulty of protecting trade secrets or intellectual property or of ensuring the connected product’s security.
  4. Readily available data without disproportionate effort.[7]  The FAQs confirm the position that readily available data is “product data and related service data that a data holder can obtain without disproportionate effort going beyond a simple operation.”  The EC provided some further clarity by highlighting that only data generated or collected after the entry into application of the Data Act (i.e., after 12 September 2025) should be considered “readily available data” as the definition does not include a reference to the time of their generation or collection. However, the FAQs do not provide further clarity on what would constitute “disproportionate effort” – arguably leaving businesses with further discretion to interpret this in the context of their products and services.
  5. Data made available under the Data Act should be ‘easily usable and understandable’ by users and third parties.[8]  The FAQs expressly note that data holders are required to share data of the same quality as they make available to themselves to facilitate the use of the data across the data economy. This indicates that raw and pre-processed data may require some additional investment to be usable. However, the FAQs make clear that there is no requirement for data holders to make substantial investments into such processes. Indeed, it may be the case that where the level of investment into processing the data is substantial, the Chapter II obligations may not apply to that data.
  6. Data generated outside of the EU may be subject to the Data Act.[9]  The EC’s position is that when a connected product is placed on the market in the EU, all the data generated by that connected product both inside and outside the EU will be subject to the Data Act. For example, if a user purchases a smart appliance in the EU and subsequently takes it to the US with them on vacation, any data generated by the use of the appliance in the US would also fall within the scope of the Data Act.
  7. Manufacturers will not be data holders if they do not control access to the data.[10]  It is explained in the FAQs that determining who is the data holder depends on who “controls access to the readily available data”. In particular, the FAQs acknowledge that manufacturers may contract out the role of “data holder” to a third party for all or part of their connected products. This seems to suggest that where the manufacturer does not control access to the readily available data, it will not be a data holder. In addition, a related service provider that is not the manufacturer of the connected product may also be a data holder if it controls access to readily available data that is generated by the related service it provides to the user. The FAQs further confirm that there may be instances where there is no data holder, i.e., in the case of direct access, where only the user has access to data stored directly on the connected product without the involvement of the manufacturer.
  8. Data holders can use non-personal data for any purpose agreed with the user (subject to limited exceptions).[11]  The FAQs reaffirm the position that a data holder can use the non-personal data generated by the user for any purpose, provided that this is agreed with the user.[12]  Furthermore, the data holder must not derive from such data any insights about the economic situation, assets and production methods of the user in any other manner that could undermine the commercial position of the user. Where data generated by the user includes personal data, data holders should ensure any use of such data is in compliance with the EU GDPR. To ensure compliance with the GDPR, data holders may apply privacy-enhancing technologies (“PETs”); however, the EC’s view is that applying PETs does not necessarily mean that the resulting data will be considered ‘derived’ or ‘inferred’ such that they would fall out-of-scope of the Data Act.
  9. Users may be able to request access to data from previous users of their connected product.[13]  The FAQs note that the Data Act “can be read as giving users the right to access and port readily available data generated by the use of connected objects, including data generated by other users before them.” Subsequent users may therefore have a legitimate interest in such data, for example, in respect of updates or incidents. However, the rights of previous users and other applicable law (e.g., the right to be forgotten under the EU GDPR) must be respected. Moreover, data holders are able to delete certain historical data after a reasonable retention period.[14] 

Although the initial set of FAQs, and the subsequent incremental updates, provide further guidance for businesses whose products or services may fall in scope of the Data Act, there are still areas of uncertainty that are yet to be addressed. As the FAQs are a “living document”, they may continue to be updated as and when the EC deems it necessary. It is also important to note that while the FAQs provide some useful guidance on Data Act interpretation, the Data Act is subject to supplemental domestic implementation and enforcement by national competent authorities of EU member states. Businesses should therefore pay careful attention to guidance published by national authorities in the member states and sectoral areas in which they operate.


[1] See https://digital-strategy.ec.europa.eu/en/library/commission-publishes-frequently-asked-questions-about-data-act.

[2] See Question 7 of the FAQs.

[3] See Question 10 of the FAQs.

[4] See Question 17 and 22 of the FAQs.

[5] I.e., ‘where relevant and technically feasible’ the user has the technical means to access, stream or download the data without the involvement of the data holder. For further information, see Article 3(1) of the Data Act.

[6] I.e., the connected product or related service is designed in such a way that the user must ask the data holder for access. For further information, see Article 4(1) of the Data Act.

[7] See Question 4 of the FAQs.

[8] See Question 5 of the FAQs.

[9] See Question 9 of the FAQs.

[10] See Question 21 of the FAQs.

[11] See Question 29 of the FAQs and Question 13 of the FAQs.

[12] See also Article 4(13) of the Data Act.

[13] See Question 33 of the FAQs.

[14] See Recital 24 of the Data Act.

New York Legislature Passes Health Data Privacy Bill

Last week, the New York legislature passed the New York Health Information Privacy Act (S929) (“NYHIPA” or the “Act”)[1]. The Act, which is currently awaiting the Governor’s signature, seeks to regulate the collection, sale and processing of healthcare information, akin to Washington’s My Health My Data Act.

Importantly, the Act as currently drafted is very broad and may have far-reaching consequences giving rise to extensive compliance obligations, including as a result of the fact that it (i) extends to non-health related data, (ii) does not contain applicability thresholds based on the number of individuals whose data is processed, or the type of activity carried out, by the regulated entity, (iii) requires minimal nexus to New York and applies to non-New York entities that process non-New York residents’ data, and (iv) applies to information collected in the context of employment and business-to business relationships. If signed by the Governor, the Act will go into effect one year after it becomes law.

Below, we provide an overview of the broad categories of entities and data subject to NYHIPA, the key compliance obligations and consumer rights provided, and what businesses need to know in order to comply.

Who and What is Covered by the Act?

Regulated Health Information.  The Act covers a wide range of data given the broad definition of “regulated health information.” Specifically, “regulated health information” includes “any information that is reasonably linkable to an individual, or a device, and is collected or processed in connection with the physical or mental health of an individual” (the foregoing referred to herein as “RHI”); by definition, RHI does not include “deidentified information,”[2] protected health information (“PHI”) governed by HIPAA or information collected as part of a clinical trial. The Act’s provisions also apply to seemingly non-health related data, such as location information and payment information collected in connection with health-related products or services, as well as any inference that can be drawn or derived therefrom.

Accordingly, the Act as drafted implicates a significant amount of information and, as further discussed below, given the absence of applicability thresholds (e.g., based on the number of New York residents whose data is processed), applies to a vast number of entities. RHI is not limited to medical records, but covers biometric data, genetic information, and even information that could identify a person indirectly. Additionally, since the Act lacks a definition of “individual,” it arguably applies to information collected in the context of commercial and employment relationships unlike typical U.S. state privacy laws, expanding the compliance obligations of entities both within and outside New York’s borders.

Regulated Entities. In stark contrast to the processing thresholds advanced by other U.S. privacy laws, the Act defines a regulated entity as any entity that:

  1. Controls the processing of RHI of an individual who is a New York resident;
  2. Controls the processing of RHI of an individual who is physically present in New York; or
  3. Is located in New York and controls the processing of RHI.

Excluded from coverage are local, state, and federal governments and municipal corporations (given that any information they process is exempt from the Act’s reach), as well as HIPAA covered entities solely to the extent they maintain patient information in the same manner as PHI. Additionally, there is no exemption for nonprofits or entities regulated by the GLBA, meaning additional restrictions may be imposed on the financial information they collect (e.g., payment transactions relating to physical or mental health, or from which inferences can be drawn) to the extent processed in connection with health-related purposes.

Unlike other state privacy laws enacted to date, the Act has extraterritorial application that will impact many organizations beyond those that conduct business in New York: even if an entity is located outside the state, its activities will be subject to the Act so long as it processes RHI regarding individuals (not even necessarily state residents) physically present in New York.  Further, individuals beyond New York residents may benefit from the Act’s protections, given that any entity located in New York will be covered by the Act regardless of where the individual whose RHI is processed is domiciled.

Compliance Obligations

Entities subject to typical U.S. consumer privacy laws will recognize a number of familiar obligations imposed by NYHIPA, including:

  1. Obligations to provide a publicly available privacy policy through a regularly used interface (e.g., a website or platform) informing individuals what RHI will be collected, the nature and purpose of processing, to whom and for what purposes RHI will be disclosed, and how consumers can request access to or deletion of their RHI;
  2. Restrictions on “selling”[3] RHI;
    • Notably, it is unclear whether, based on the current drafting of the Act, all “sales” of RHI are expressly prohibited (other than in the context of business transactions), as the exceptions that would seem to be appropriate (i.e., where an individual provides a valid authorization or the processing is otherwise necessary for a permitted purpose) are not clearly provided with respect to RHI sales and instead only appear to be tied to other types of RHI processing.  Such exceptions would appear to be appropriate in the context of “sales,” given that reading the Act any other way appears to suggest that any sharing of RHI is prohibited where valuable consideration is provided in exchange.  By way of example, if no such exceptions apply, there is a risk that regulated entities would be prohibited from providing RHI to their service providers if doing so would be considered, under a broad interpretation of “sale,” sharing RHI for “valuable consideration” (i.e., the relevant services).
  3. Restrictions on otherwise processing RHI unless (a) the regulated entity obtains a valid authorization as governed by the Act and detailed further below (which must be easily revocable at any time) or (b) the processing is “strictly necessary” for one of seven specific purposes enumerated in the Act (e.g., to provide the product or service requested, to comply with legal obligations, for internal business operations excluding marketing);
  4. Providing individuals access and deletion rights, including by providing an easy mechanism by which individuals can effectuate such rights and allowing such requests to be made by an individual’s authorized agent, with which regulated entities must comply within 30 days;
    • Deletion requests must also be passed to and honored by a regulated entity’s third party service providers.
  5. Implementing reasonable administrative, physical, and technical safeguards to protect the confidentiality and security of RHI;
  6. Securely disposing of RHI pursuant to a publicly available retention schedule, where disposal must occur no later than 60 days after retention is no longer necessary for the permissible purposes or the purposes for which consent was given; and
  7. Entering into contracts with third party service providers that impose confidentiality, information security, access and deletion obligations, as well as processing restrictions, equivalent to those imposed on the regulated entity under the Act.

    Valid Authorization

    While many U.S. state privacy laws contain prescriptive requirements regarding what constitutes consumer consent, NYHIPA goes a step further in providing not only a number of requirements on how an authorization must be presented to be valid, but also substantive requirements to include in authorization request forms. 

    In order for an authorization to be considered valid, it must meet specific criteria, including that the request: (i) must be made separately from any other transaction or part of a transaction, (ii) cannot be sought until at least twenty-four hours after an individual creates an account or first uses the requested product or service, (iii) cannot be obtained through a dark pattern, (iv) if made for multiple processing activities, must allow for specific authorization for each specific activity, and (v) cannot relate to an activity for which the individual has revoked or withheld consent in the past year.  Following trends set by recent privacy-related litigation, such as the California wiretapping cases, the Act makes clear that requests for consent must be specific to the particular processing activity and cannot be bundled with other disclosures or consent requests.  Further, consent must be clearly communicated to the relevant individual, and freely revocable.

    In terms of substantive requirements, the Act further requires that valid authorizations disclose: the RHI to be collected and the purposes for which it will be processed; the names or categories of third parties to whom RHI will be disclosed (similar to the approaches taken in the Oregon and Delaware consumer privacy laws); any monetary or valuable consideration that may be received by the regulated entity; assurances that failure to consent will not affect the individual’s experience; the expiration date of the authorization, which may be up to one year from when authorization was provided; how the individual can revoke consent; how the individual can request access to or deletion of their RHI; and any other information material to the individual’s decision-making.  Authorizations must also be executed by the individual, though this can be done electronically.

    Enforcement

    Enforcement rights under the Act are primarily vested in the New York AG, who has broad authority to investigate violations and impose civil penalties on entities that engage, or are about to engage, in unlawful acts or practices under NYHIPA.  The New York AG can commence an action within six years of becoming aware of the alleged violation and, in addition to seeking an injunction, can seek civil penalties of not more than $15,000 per violation or 20% of revenue obtained from New York consumers within the past fiscal year, whichever is greater, as well as any such other and further relief as the court may deem proper.  The Act also contemplates rulemaking authority for the New York AG.

    Conclusion

    The applicability of NYHIPA is broad, covering a wide array of entities involved in the collection, use, and management of RHI within New York. To determine whether NYHIPA applies, an organization must evaluate its role in handling health information, the nature of the data it processes, and its geographic operations. Until now, state consumer privacy laws have focused on comprehensive data privacy, designed on the Washington model; perhaps New York is signaling a shift back to sectoral laws instead. At this juncture, it is unclear whether Governor Hochul will sign the law as drafted, given it is likely to be subject to a number of challenges, including on First Amendment grounds; Cleary Gottlieb will continue monitoring for updates.


    [1] The text of the bill can be found here.

    [2] “Deidentified information” under the Act has the same meaning provided under comprehensive U.S. state privacy laws (i.e., information that cannot reasonably be used to infer information about, or otherwise be linked to, a particular individual, household or device, provided that the regulated entity or service provider (i) implements reasonable measures to prevent reidentification, (ii) publicly commits to process the information in deidentified form and not attempt to reidentify the information and (iii) imposes contractual obligations on third party recipients consistent with the foregoing).

    [3] “Sell” under the Act is defined as sharing RHI for monetary or other valuable consideration, exempting only sharing of RHI in the context of a business transaction in which a third party assumes control of all or part of the covered entity’s assets.

    Cybersecurity Disclosure and Enforcement Developments and Predictions

    The following is part of our annual publication Selected Issues for Boards of Directors in 2025.


    The SEC pursued multiple high profile enforcement actions in 2024, alongside issuing additional guidance around compliance with the new cybersecurity disclosure rules. Together these developments demonstrate a continued focus by the SEC on robust disclosure frameworks for cybersecurity incidents. Public companies will need to bear these developments in mind as they continue to grapple with cybersecurity disclosure requirements going into 2025.

    SEC Disclosure Rules and Guidance

    The SEC’s cybersecurity disclosure rules became effective in late 2023, and 2024 marked the first full year of required compliance. The rules added Item 1.05 to Form 8-K, requiring domestic public companies to disclose certain information within four business days of determining that they have experienced a material cybersecurity incident, including the material aspects of the nature, scope and timing of an incident and the material impact or reasonably likely impact of the incident on the company.


    SEC Charges Four Companies Impacted by Data Breach with Misleading Cyber Disclosures

    On October 22, 2024, the SEC announced settled enforcement actions charging four companies with making materially misleading disclosures regarding cybersecurity risks and intrusions. These cases mark the first charges brought against companies that were downstream victims of the well-known cyber-attack on software company SolarWinds. The four companies, providers of IT services and digital communications products, settled the charges for amounts ranging from $990,000 to $4 million.

    In 2023, the SEC sued SolarWinds and its Chief Information Security Officer for allegedly misleading disclosures and deficient controls. Most of the SEC’s claims in that case were dismissed by a judge in the Southern District of New York, in part because the judge ruled that SolarWinds’ post-incident disclosures did not misleadingly minimize the severity of the intrusion. This new round of charges indicates the SEC’s intent to continue to enforce disclosure and reporting requirements surrounding cybersecurity breaches. The SEC’s recent charges focus on the companies’ continued use of generic and hypothetical language following significant data breaches, as well as allegations of downplaying the severity of the breaches by omitting material information about their nature and extent. Public companies should carefully consider the lessons from these actions when making disclosures following a cybersecurity breach.  

    Background

    According to the SEC’s allegations, which the companies neither admitted nor denied, in December 2020, each of the four companies charged last week learned that its systems had been affected by the SolarWinds data breach. Public reporting at the time indicated that the breach was likely performed by a state-sponsored threat actor. Each of the companies performed investigations of the breach, determining that the threat actor had been active in their systems for some period of time and accessed certain company or customer information.[1]

    The SEC brought negligent fraud charges against all four companies, alleging two primary types of materially misleading disclosures. Two companies, Check Point[2] and Unisys,[3] were charged because the SEC believed their post-breach risk factor disclosures—containing generic and hypothetical language about the risk of cybersecurity breaches similar to their pre-breach disclosures—were misleading given that the companies had become aware of the actual SolarWinds-related breaches. The other two companies, Avaya[4] and Mimecast,[5] did make specific disclosures that they had been affected by cybersecurity breaches, but the SEC alleged that they misleadingly omitted details that would be material to investors. The SEC noted that all four companies were in the information technology industry, with large private and government customers, and therefore their reputation and ability to attract and retain customers would be affected by disclosure of a data breach.

     The Charges

    There were two categories of charges.

    Charges for disclosing hypothetical cyber risks in the wake of an actual cyber attack. The SEC has repeatedly brought charges against companies for allegedly using generic and/or hypothetical language in their risk factors after a known data breach.[6] That trend has continued with the recent actions against Check Point and Unisys.

    i. Check Point

    Check Point’s Form 20-F disclosures in 2021 and 2022 stated, “We regularly face attempts by others to gain unauthorized access…” and “[f]rom time to time we encounter intrusions or attempts at gaining unauthorized access to our products and network. To date, none have resulted in any material adverse impact to our business or operations.”[7] These filings were virtually unchanged before and after the data breach. The SEC alleged that these risk disclosures were materially misleading because the company’s risk profile materially changed as a result of the SolarWinds compromise-related activity for two reasons: the threat actor was likely a nation-state and the threat actor “persisted in the network unmonitored for several months and took steps, including deployment and removal of unauthorized software and attempting to move laterally” in the company’s environment.[8]

    ii. Unisys

    The company’s risk factors in its Form 10-Ks following the breach were substantially unchanged from 2019. The risk factor language was hypothetical: cyberattacks “could … result in the loss … or the unauthorized disclosure or misuse of information…” and “if our systems are accessed ….”[9] The SEC alleged that hypothetical language is insufficient when the company is aware that a material breach occurred. The SEC also alleged that the company did not maintain adequate disclosure controls and procedures because it had no procedures to ensure that, in the event of a known cybersecurity incident, information was escalated to senior management, which in this case did not happen for several months. The SEC’s order also alleged that the company’s investigative process after the breach “suffered from gaps that prevented it from identifying the full scope of the compromise,” and that these gaps constituted a material change to the company’s risk profile that should have been disclosed.[10]

    Charges for allegedly failing to disclose material information. Two of the charged companies did disclose that their systems had been affected by suspicious activity, but the SEC nevertheless found fault with those disclosures.

    i. Avaya

    In its Form 10-Q filed two months after learning of the breach, the company disclosed that it was investigating suspicious activity that it “believed resulted in unauthorized access to our email system,” with evidence of access to a “limited number of Company email messages.”[11] The SEC alleged that these statements were materially misleading because they “minimized the compromise and omitted material facts” that were known to the company “regarding the scope and potential impact of the incident,”[12] namely: (i) that the intrusions were likely the work of a state actor, and (ii) that the company had only been able to access 44 of the 145 files compromised by the threat actor and therefore could not determine whether the remaining files contained sensitive information.[13]

    ii. Mimecast

    In its Form 8-Ks filed in the months after learning of the breach, Mimecast disclosed that an authentication certificate had been compromised by a sophisticated threat actor, that a small number of customers were targeted, that the incident was related to SolarWinds, and that some of the company’s source code had been downloaded. The company stated that the code was “incomplete and would be insufficient to build and run” any aspect of the company’s service.[14] The SEC alleged that these statements were materially misleading “by providing quantification regarding certain aspects of the compromise but not disclosing additional material information on the scope and impact of the incident,” such as the fact that the threat actor had accessed a database containing encrypted credentials for some 31,000 customers and another database with systems and configuration information for 17,000 customers, and by not disclosing that the threat actor had exported source code amounting to more than half of the source code of the affected projects, or information about the importance of that code.[15]

    Dissenting Statement

    The two Republican Commissioners, Hester Peirce and Mark Uyeda, voted against the actions and issued a dissenting statement accusing the Commission of “playing Monday morning quarterback.”[16] The dissenters noted two key issues across the orders. First, the dissenters viewed the cases as requiring disclosure of details about the cybersecurity incident itself, despite previous Commission statements that disclosures should instead be focused on the “impact” of the incident.[17] Second, the dissenters argued that many of the statements the SEC alleged to be material would not be material to the reasonable investor, such as the specific percentage of code exfiltrated by the threat actor.[18]  

    The SEC Is Not Backing Off After SolarWinds

    These enforcement actions come months after the Southern District of New York rejected several claims the SEC brought against SolarWinds for the original breach.[19] The recent actions show that the SEC is not backing away from aggressively reviewing incident and other related cybersecurity disclosures. Notably, the SEC did not allege that any of the companies’ cybersecurity practices violated the Exchange Act’s internal controls provision.  In an issue of first impression, the SolarWinds court held that the internal controls provisions focus on accounting controls and do not encompass the kind of cyber defenses at issue in that case.  It is not clear whether the absence of such charges here represents the SEC adopting a new position after the SolarWinds ruling, or rather a reflection of these cases involving different cybersecurity practices and intrusions. The SEC did allege failure to maintain proper disclosure controls in one of the four new orders, which was another allegation rejected by the SolarWinds court as insufficiently pled.[20] Moreover, the SolarWinds court dismissed claims that the company had misled its investors by making incomplete disclosures after its cyber intrusion, finding that the company adequately conveyed the severity of the intrusion and that any alleged omissions were not material or misleading.  While the dissenters questioned whether the allegedly misleading disclosures here were any different from those in SolarWinds, at a minimum these cases show that the SEC will continue to closely scrutinize post-incident disclosures, notwithstanding its loss in SolarWinds.

    Takeaways

    There are several takeaways from these charges.

    • The SEC is signaling an aggressive enforcement environment and continuing to bring claims against companies for deficient disclosure controls, despite similar charges being rejected in SolarWinds. The Unisys order shows that the SEC will continue to pursue disclosure controls charges where, in its view, a company did not adequately escalate incidents to management, consider the aggregate impact of related incidents, or adopt procedures to guide materiality determinations, among other things.
    • The SEC will reliably bring charges against companies that use generic or hypothetical risk factor language to describe the threat of cybersecurity incidents when the company’s “risk profile changed materially”[21] due to a known breach.
    • The SEC will give heightened scrutiny to disclosures by companies in sectors such as information technology and data security, because in the SEC’s view cybersecurity breaches are more likely to affect the reputation and ability to attract customers for these types of companies.
    • Companies should take care in crafting disclosures about the potential impact of cybersecurity breaches, including in Form 8-K and risk factor disclosure, and consider factors such as:
      • Whether the threat actor is likely affiliated with a nation-state.
      • Whether, or the extent to which, the threat actor persisted in the company’s environment.
      • If the company seeks to quantify the impact of the intrusion, such as by the number of files or customers affected, the SEC will scrutinize whether the company selectively disclosed quantitative information in a misleading way.
      • Whether the company should disclose not only the number of files or amount of customer data compromised, but also the importance of the files or data and the uses that can be made of them.
      • If the company quantifies the impact of the intrusion but is aware of gaps in its investigation or in the available data that mean the severity of the impact could have been worse, the SEC may consider it misleading not to disclose those facts.

    [1] For information on the four orders, see Press Release, SEC Charges Four Companies With Misleading Cyber Disclosures, SEC, https://www.sec.gov/newsroom/press-releases/2024-174.

    [2] Check Point Software Technologies Ltd., Securities Act Release No. 11321, Exchange Act release No. 101399, SEC File No. 3-22270 (Oct. 22, 2024).

    [3] Unisys Corporation, Securities Act Release No. 11323, Exchange Act Release No. 101401, SEC File No. 3-22272 (Oct. 22, 2024).

    [4] Avaya Holdings Corp., Securities Act Release No. 11320, Exchange Act Release No. 101398, SEC File No. 3-22269 (Oct. 22, 2024).

    [5] Mimecast Limited, Securities Act Release No. 11322, Exchange Act Release No. 101400, SEC File No. 3-22271 (Oct. 22, 2024).

    [6] Press Release, Altaba, Formerly Known as Yahoo!, Charged With Failing to Disclose Massive Cybersecurity Breach; Agrees To Pay $35 Million, SEC, https://www.sec.gov/newsroom/press-releases/2018-71; Press Release, SEC Charges Software Company Blackbaud Inc. for Misleading Disclosures About Ransomware Attack That Impacted Charitable Donors, SEC, https://www.sec.gov/newsroom/press-releases/2023-48.

    [7] Check Point, supra note 2, at 2–4.

    [8] Id.

    [9] Unisys Corporation, supra note 3, at 6.

    [10] Id. at 5–7.

    [11] Avaya Holdings Corp., supra note 4, at 4.

    [12] Id. at 2.

    [13] Id. at 4.

    [14] Mimecast Limited, supra note 5, at 4.

    [15] Id.

    [16] Statement, Comm’rs Peirce and Uyeda, Statement Regarding Administrative Proceedings Against SolarWinds Customers (Oct. 22, 2024), https://www.sec.gov/newsroom/speeches-statements/peirce-uyeda-statement-solarwinds-102224.

    [17] Id.

    [18] Id.

    [19] See Cleary Alert Memo, SDNY Court Dismisses Several SEC Claims Against SolarWinds and its CISO (July 26, 2024).

    [20] Id.

    [21] Unisys Corporation, supra note 3, at 5.

    New York Department of Financial Services Issues Guidance on Cybersecurity Risks Arising from Artificial Intelligence

    Last week, the New York Department of Financial Services (“DFS”) issued guidance addressed to executives and information security personnel of entities regulated by DFS to assist them in understanding and assessing cybersecurity risks associated with the use of artificial intelligence (“AI”), and implementing appropriate controls to mitigate such risks (the “Guidance”).[1] In particular, and to address inquiries received by DFS regarding AI’s impact on cyber risk, the Guidance is intended to explain how the framework set forth in DFS’ Cybersecurity Regulation (23 NYCRR Part 500) should be used to assess and address such risks.

    Below, we provide a high-level overview of the cyber risks identified by DFS related to the use of AI as well as the mitigating controls DFS recommends covered entities adopt to minimize the likelihood and impact of such risks.  Even for entities that are not regulated by DFS, the Guidance provides a roadmap for how other regulators may view AI-related cyber risks. 

    Cybersecurity Risks Related to the Use of AI.  The Guidance identifies two categories of AI-related cybersecurity risks:

    • Risks caused by threat actors’ use of AI (e.g., AI-enabled social engineering and AI-enhanced cybersecurity attacks):

    AI has enabled threat actors to create highly personalized and sophisticated social engineering attacks that are more convincing, and therefore more successful. In particular, threat actors are using AI to create audio, video and text “deepfakes” that target specific individuals, convincing employees to disclose sensitive information about themselves and their employers or share credentials enabling access to their organization’s information systems and nonpublic information. Deepfakes have also been used to mimic an individual’s appearance or voice to circumvent IT verification procedures as well as biometric verification technology.

    AI has also allowed threat actors to amplify the “potency, scale, and speed of existing types of cyberattacks.” For example, AI can be used to more efficiently identify and exploit security vulnerabilities, allowing broader access to protected information and systems at a faster rate. It can also accelerate the development of new malware variants and enhance ransomware such that it can bypass defensive security controls, evading detection. Even threat actors who are not technically skilled may now be able to launch attacks using AI products and services, resulting in a potential increase in the number and severity of cyberattacks.

    • Risks caused by a covered entity’s use or reliance upon AI.

    Products that use AI require the collection and processing of substantial amounts of data, including non-public information (“NPI”). Covered entities that develop or deploy AI are at risk because threat actors have a greater incentive to target these entities to extract NPI for malicious purposes and/or financial gain. AI tools that require storage of biometric data, like facial and fingerprint recognition, pose heightened risk, as stolen biometric data can be used to generate deepfakes, imitate authorized users, bypass multi-factor authentication (“MFA”) and gain access to NPI.

    Working with third party vendors in gathering data for AI-powered tools exposes organizations to additional vulnerabilities. For example, if a covered entity’s vendors or suppliers are compromised in a cybersecurity incident, its NPI could be exposed and become a gateway for broader attacks on its network.

    Measures to Mitigate AI-related Threats

    Using its Cybersecurity Regulation as a framework, DFS suggests a number of controls and measures to help entities combat the aforementioned AI-related cybersecurity risks. Such controls include:

    • Designing cybersecurity risk assessments that account for AI-related risks in the use of AI by the covered entity and its vendors and suppliers;
    • Applying robust access controls to combat deepfakes and other AI-enhanced social engineering attacks;[2]
    • Maintaining defensive cybersecurity programs to protect against deepfakes and other AI threats;
    • Implementing third party vendor and supplier policies and management procedures that include due diligence on threats facing such vendors and suppliers from the use of AI and how such threats, if exploited, could impact the covered entity;
    • Enforcing data minimization policies to limit NPI a threat actor can access in case MFA fails; and
    • Training AI development personnel on securing and defending AI systems as well as other personnel on drafting queries to avoid disclosing NPI.

    Conclusion

    As AI continues to evolve, so too will AI-related cybersecurity risks, meaning it is of critical importance that all companies are proactive in identifying, assessing and mitigating the risks applicable to their businesses. To ensure speedy detection of, and response to, such threats, and to attempt to avoid regulatory scrutiny or enforcement, covered entities should review, and where necessary update, their existing cybersecurity policies and procedures and implement mitigating controls using the Cybersecurity Regulation as a framework in line with DFS’ Guidance.


    [1] A copy of the DFS Guidance can be found here.

    [2] Notably, DFS encourages entities to consider using authentication factors that can withstand AI-manipulated deepfakes and other AI-enhanced attacks, for example by avoiding authentication via SMS text, voice or video and instead using forms of authentication that AI deepfakes cannot impersonate, such as digital-based certificates and physical security keys. Additionally, DFS recommends using technology with liveness detection or texture analysis, or requiring authentication via more than one biometric modality at the same time, to protect against AI impersonation.

    DOJ Brings Lawsuit Against TikTok Over Alleged Violations of the Children’s Online Privacy Protection Act

    Following on the heels of major developments coming out of the Senate last week to advance privacy protections for children online, the Department of Justice (“DOJ”) officially filed a lawsuit on Friday against TikTok, Inc., its parent company, ByteDance, and certain affiliates (collectively, “TikTok”), over alleged violations of the Children’s Online Privacy Protection Act (“COPPA”) and its implementing rule (the “COPPA Rule”), as well as of an existing 2019 FTC consent order (the “2019 Order”) that resolved earlier allegations of the same.[1]

    After an investigation by the Federal Trade Commission (“FTC”) into TikTok’s compliance with the 2019 Order allegedly revealed a flagrant, continued disregard for children’s privacy protections, the FTC took the rare step of releasing a public statement referring the complaint to the DOJ, which subsequently filed suit in the Central District of California last week.  “TikTok knowingly and repeatedly violated kids’ privacy, threatening the safety of millions of children across the country,” said FTC Chair Lina M. Khan.  “The FTC will continue to use the full scope of its authorities to protect children online—especially as firms deploy increasingly sophisticated digital tools to surveil kids and profit from their data.”

    According to the complaint, TikTok is alleged to have violated not only COPPA and the COPPA Rule but also the 2019 Order by:

    1. Knowingly allowing millions of children under thirteen to create and use TikTok accounts that are not reserved for children, enabling full access to the TikTok platform to view, make and share content without verifiable parental consent;
    2. Collecting extensive data, including personal information, from children without justification and sharing it with third parties without verifiable parental consent;
    3. Failing to comply with parents’ requests to delete their children’s accounts or personal information; and
    4. Failing to delete the accounts and information of users TikTok knows are children in direct violation of the 2019 Order. 

    In highlighting a number of actions undertaken by TikTok, which allegedly led to “unlawful, massive-scale invasions of children’s privacy”, the DOJ’s complaint contains several allegations that TikTok knowingly disregarded its obligations under applicable law and under the 2019 Order requiring TikTok to prevent child users from accessing its platform without verifiable parental consent and to take measures to protect, safeguard and ensure the privacy of the information of its child users once obtained. Among others, the DOJ alleged the following illegal practices:

    • Insufficient Age Identification Practices.  Although TikTok has implemented age gates on its platform since March 2019 in an effort to direct users under thirteen to TikTok Kids Mode (a version of the app designed for younger users that allows them to view videos but not create or upload videos, post information publicly or message other users), the complaint alleges that TikTok continued to knowingly create accounts for child users that were not on Kids Mode, without requesting parental consent, by allowing child users to evade the age gate.  Specifically, upon entering their birthdates and being directed to Kids Mode, underage users could simply restart the account creation process and provide a new birthdate to gain unrestricted access to the general TikTok platform (even though TikTok knew it was the same person); alternatively, users could avoid the age gate entirely by logging in via third-party online services, in which case TikTok did not verify the user’s age at all.
    • Unlawful and Overinclusive Data Collection from Child Users. Even where child users were directed to Kids Mode, the complaint alleges that personal information was collected from children, such as username, password and birthday, as well as other persistent identifiers such as IP addresses or unique device IDs, without providing notice to parents and receiving consent as required under COPPA.  TikTok also collected voluminous account activity data, which was then combined with persistent identifiers to amass profiles on child users and widely shared with third parties without justification.  For example, until at least mid-2020, TikTok is alleged to have shared information collected via Kids Mode accounts with Facebook and AppsFlyer, a third party marketing analytics firm, to increase user engagement; the collection and sharing of persistent identifiers without parental consent was unlawful under the COPPA Rule because use of such data was not limited to the purpose of providing “support” for TikTok’s “internal operations”.
    • Failures to Honor Deletion Requests.  Though the COPPA Rule and the 2019 Order required TikTok to delete personal information collected from children at their parent’s request, TikTok failed to inform parents of this right and, separately, failed to act upon such requests.  TikTok allegedly employed an unreasonable and burdensome deletion-request process, oftentimes requiring parents to undertake a series of convoluted administrative actions before TikTok would delete their child’s account, including scrolling through multiple webpages to find and click on a series of links and menu options that gave no clear indication that they applied to such a request.  Even where parents successfully navigated this process, their requests were infrequently honored due to rigid policies maintained by TikTok related to account deletion.[2]  The complaint also suggests that even where such accounts were deleted, TikTok maintained certain personal information related to such users, such as application activity log data, for up to eighteen months without justification.
    • Failures to Delete Accounts Independently Identified by TikTok as Children’s Accounts. In clear violation of the 2019 Order, TikTok is also alleged to have employed deficient technologies, processes and procedures to identify children’s accounts for deletion, and even appears to have ignored accounts flagged by its own human content moderators as belonging to a child and ripe for deletion.  Instead, despite strict mandates to delete such accounts, TikTok’s internal policies permitted account deletion only if rigid criteria were satisfied—such as explicit admissions by the user of their age—and provided human reviewers with insufficient resources or time to conduct even the limited review permitted under such policies.[3]

    In addition to a permanent injunction to cease the infringing acts and prevent further violations of COPPA, the complaint requests that the court impose civil penalties against TikTok under the FTC Act, which allows civil penalties of up to $51,744 per violation, per day.  Given the uptick in recent enforcement related to children’s privacy issues and the potential for material fines, entities should carefully consider the applicability of COPPA to their existing products and services, as well as their existing policies, practices and product functionality, to ensure compliance and avoid regulatory scrutiny.


    [1] Specifically, the 2019 Order (i) imposed a $5.7 million civil penalty, (ii) required TikTok to destroy personal information of users under the age of thirteen and, by May 2019, remove accounts of users whose age could not be identified, (iii) enjoined TikTok from violating the COPPA Rule and (iv) required TikTok to retain certain records related to compliance with the COPPA Rule and the 2019 Order.

    [2] According to the complaint, in a sample of approximately 1,700 children’s TikTok accounts about which TikTok received complaints and deletion requests between March 21, 2019, and December 14, 2020, approximately 500 (30%) remained active as of November 1, 2021, and several hundred were still active in March 2023.

    [3] For example, despite having tens of millions of monthly active users at times since the entry of the 2019 Order, TikTok’s content moderation team included fewer than two dozen full-time human moderators responsible for identifying and removing material that violated all of its content-related policies, including identifying and deleting accounts of unauthorized users under thirteen.  Further, during at least some periods since 2019, TikTok human moderators spent an average of only five to seven seconds reviewing each flagged account to determine if it belonged to a child.

    Cybersecurity Law Enters Into Force

    On July 17, 2024, Law No. 90/2024 containing provisions for strengthening national cybersecurity and addressing cybercrime (the “Cybersecurity Law”) entered into force.

    The new legislation strengthens national cybersecurity, at a time when cyber-attacks have increased significantly.[1]

    The Cybersecurity Law:

    1. seeks to strengthen the resilience of (a) public administrations, (b) operators that are subject to the application of the Italian National Cybersecurity Perimeter (“Perimeter”) legislation, (c) operators of essential services and providers of digital services, as defined in Italian Legislative Decree No. 65/2018, which implements EU Directive 2016/1148 on security of network and information systems (the first NIS Directive) (“NIS 1 Operators”) and (d) operators providing public communications networks or publicly accessible electronic communications services (“Telecommunication Operators”), by establishing detailed rules on public procurement of IT goods and services that are essential for the protection of national strategic interests;
    2. imposes new incident reporting obligations;
    3. increases the role of the National Cybersecurity Agency (the “NCA”);
    4. enhances data security measures by establishing the National Cryptographic Center; and
    5. significantly focuses on the fight against cybercrime by increasing penalties for existing criminal offenses and introducing new criminal offenses in relation to individuals and entities under Italian Legislative Decree No. 231/2001 (“Decree 231”).

    The Cybersecurity Law provisions are in addition to the existing Italian cybersecurity regulatory framework, which includes, as mentioned, the Perimeter legislation (Decree Law No. 105/2019),[2]  the Digital Operational Resilience Act (Regulation (EU) 2022/2554, “DORA”), and Italian Legislative Decree No. 65/2018, which implements the NIS 1 Directive.[3]

    1. Scope

    The Cybersecurity Law imposes obligations on Public Administrations[4] and on in-house companies that provide Public Administrations with: IT services; transportation services; urban, domestic or industrial wastewater collection, disposal or treatment services; and waste management services (“Public Operators”). These in-house companies are included within the scope of the law as they are considered to be critical infrastructure providers, in relation to which cybersecurity vulnerabilities may impact the entire supply chain of goods and services.

    In addition, the Cybersecurity Law increases some of the obligations imposed on NIS 1 Operators, Telecommunication Operators and operators included in the Perimeter.

    2. Incident reporting obligation

    According to Article 1 of the Cybersecurity Law, Public Operators are required to report to the NCA all incidents impacting networks, information systems, and IT services listed in the taxonomy included in the NCA Resolution.[5]

    Public Operators must submit an initial report within 24 hours of becoming aware of the incident and a complete report within 72 hours, using the channels available on the NCA website.

    Public Operators may also voluntarily report incidents not included in the NCA Resolution taxonomy. These voluntary reports are processed only after mandatory ones to avoid unduly burdening the Italian Computer Security Incident Response Team. Furthermore, submitting a voluntary report shall not impose any new obligations on the notifying party beyond what would have been required had the report not been submitted.[6]

    In the case of non-compliance with the reporting obligation, Article 1(5) of the Cybersecurity Law requires the NCA to issue a notice to the Public Operator, informing it that repeated non-compliance over a 5-year period will result in an administrative fine ranging from €25,000 to €125,000. Additionally, the NCA may conduct inspections within 12 months of identifying a delay or omission in compliance with the reporting obligation to verify that the Public Operator has taken steps to enhance resilience against the risk of incidents.

    The incident reporting obligation takes effect immediately for central public administrations included in the Italian National Institute of Statistics (“ISTAT”) list, as well as for regions, the autonomous provinces of Trento and Bolzano, and metropolitan cities. For all other Public Operators, this obligation will take effect 180 days after the law enters into force.

    Under Article 1 of the Cybersecurity Law, the reporting obligation is extended to more entities than those included in the Perimeter. In addition, the amendment to Article 1(3-bis) of Italian Decree-Law No. 105/2019 (establishing the Perimeter) extends the reporting procedure and timeframes set out in the Cybersecurity Law (initial reporting within 24 hours and complete reporting within 72 hours) to incidents that affect networks, information systems, and IT services other than ICT Assets[7] of entities included in the Perimeter.

    The reporting obligation under Article 1 of the Cybersecurity Law does not apply to (i) NIS 1 Operators; (ii) operators included in the Perimeter in relation to incidents affecting ICT Assets (for which the provisions of the Perimeter legislation remain applicable); (iii) State bodies in charge of public and military security; (iv) the Department of Security Information; and (v) the External and Internal Information and Security Agencies.

    3. Addressing cybersecurity vulnerabilities reported by the NCA

    The Cybersecurity Law outlines how reports from the NCA addressed to Public Operators, entities included in the Perimeter, and NIS 1 and Telecommunication Operators must be handled.

    In particular, the NCA may identify specific cybersecurity vulnerabilities that could affect the abovementioned recipients. These entities are required to promptly address the identified vulnerabilities within a maximum of 15 days, unless justified technical or organizational constraints prevent them from doing so immediately or necessitate postponement beyond the specified deadline.

    Failure to comply with this provision will result in an administrative fine ranging from €25,000 to €125,000.

    4. Contact person and cybersecurity structure

    Public Operators must establish a cybersecurity structure and designate a cybersecurity contact person (with specific expertise). This contact person, whose name must be communicated to the NCA, will be the NCA’s contact point for cybersecurity matters.

    The obligations introduced for Public Operators are similar to those provided for entities included in the Perimeter. For instance, Public Operators are required to: (i) implement internal information security policies; (ii) maintain an information risk management plan; (iii) set out the roles and responsibilities of the parties involved; (iv) implement actions to enhance information risk management based on NCA guidelines; and (v) continuously monitor security threats and system vulnerabilities to ensure timely security updates when necessary.

    5. Enhancing data security measures

    Public Operators, as well as operators included in the Perimeter and NIS 1 Operators, must verify that computer and electronic communication programs and applications use cryptographic solutions that comply with the guidelines on encryption and password storage issued by the NCA and the Data Protection Authority. In particular, in order to prevent encrypted data from being accessible to third parties, these entities must also ensure that the applications and programs specified in the regulation are free from known vulnerabilities.

    Within the framework of the national cybersecurity strategy, the NCA has an increased role in promoting cryptography. This involves the development of standards, guidelines, and recommendations to strengthen information system security. Furthermore, the NCA conducts evaluations of cryptographic system security and coordinates initiatives aimed at advocating for cryptography as a critical cybersecurity tool.

    For this purpose, the Cybersecurity Law provides for the creation of a National Cryptographic Center within the NCA, which operates under the guidelines set out by the NCA’s General Director.

    6. Public procurement of ICT goods, systems and services

    When procuring certain categories of ICT goods, systems and services for activities involving the protection of strategic national interests, public administrations, public service operators, publicly controlled companies,[8] and entities included in the Perimeter must ensure that the ICT goods and services acquired comply with particular criteria and technical standards, thereby safeguarding the confidentiality, integrity, and availability of processed data. These essential cybersecurity standards will be set out in a DPCM, to be adopted within 120 days of the Cybersecurity Law coming into force.

    This new obligation stands alongside the existing requirement for entities included in the Perimeter to carry out an evaluation process through the Centre for National Evaluation and Certification (the “CVCN”) to ensure the security of ICT Assets intended for deployment under the Perimeter, as set out in the DPCM dated June 15, 2021. Accordingly, entities under the Perimeter are required, in addition, to assess compliance with essential cybersecurity standards outlined in the abovementioned DPCM for ICT goods and services that are not subject to CVCN evaluation.

    7. Restrictions on personnel recruitment

    The Cybersecurity Law introduces several restrictions on private entities hiring individuals who have held specific roles within certain central public administrations; if these restrictions are breached, the contract entered into is null and void (Articles 12 and 13).

    For instance, the Cybersecurity Law precludes NCA employees who have attended, in the interest and at the expense of the NCA, specific specialized training courses from taking cybersecurity-related positions with private entities for a period of two years from the last training course.

    8. Amendments to the DORA Regulation scope

    Lastly, the Cybersecurity Law amends the law implementing the DORA Regulation to bring within its scope, in addition to “financial entities”, financial intermediaries[9] and Poste Italiane S.p.A. in relation to its Bancoposta business.

    The objective of this amendment is to ensure a high level of digital operational resilience and to maintain stability across the financial sector. Consequently, in the exercise of the delegated power, the Government will make the appropriate adjustments and additions to the regulations governing these entities to align their operational resilience measures with those outlined in the DORA Regulation. These changes will apply to the activities undertaken by each entity concerned. Additionally, the Bank of Italy will assume supervisory, investigative, and sanctioning responsibilities over these entities.

    9. Main amendments to the regulation on cybercrime

    The Cybersecurity Law strengthens the fight against cybercrime by introducing significant amendments to both the Italian Criminal Code (the “ICC”) and the Italian Code of Criminal Procedure (the “ICCP”).

    In particular, the Cybersecurity Law:

    • Increases criminal penalties for a range of cybercrimes, including the crime of unauthorized access to computer systems and the crime of destruction of computer data, information, and programs;
    • Introduces new aggravating circumstances.  It extends the aggravating circumstance which applies when the crime is committed “by a public official or a person in charge of a public service, through abuse of power or in violation of the duties of his or her position or service, by a person who, also abusively, exercises the profession of private investigator, or by abuse of the position of computer system operator”, to apply to all cybercrimes covered by the Cybersecurity Law.  It introduces a new aggravating circumstance for the crime of fraud in cases where the act is committed remotely by means of computer or telematic tools capable of impeding one’s own or another’s identification.[10] It also increases the penalties provided for the existing aggravating circumstances;
    • Introduces two new mitigating circumstances (Articles 623-quater and 639-ter ICC), applicable to specific cybercrimes,[11] which can reduce penalties by (i) up to one-third if the crime can be considered to be “minor” because of the manner in which it was committed, or if the damage or risk is particularly insignificant;  (ii) from one-half to two-thirds if the offender takes steps to prevent further consequences of the crime. This includes actively assisting the authorities in gathering evidence or recovering the proceeds of the crime or the instruments used to commit the crime;
    • Repeals Article 615-quinquies ICC, which punishes the unlawful possession, distribution and installation of instruments, devices or programs designed to damage or interrupt a computer or telematic system, and replaces it with the new criminal offense outlined in Article 635-quater.1 ICC; [12]
    • Introduces the new crime of cyber-extortion (Article 629(3) ICC), which punishes by imprisonment of 6 to 12 years and a fine of € 5,000 to € 10,000 (penalties that may be increased if certain aggravating circumstances are met)[13] anyone who, by committing or threatening to commit specific cybercrimes,[14] forces another person to do or refrain from doing something in order to obtain an unjust benefit for himself or herself or for others to the detriment of others. For example, the new crime could apply in cases where a person, having hacked into a computer system and manipulated or damaged information, data or programs, demands a ransom for the restoration of the computer system and its data.

    In addition, the Cybersecurity Law provides for: (i) the allocation of the preliminary investigation of cybercrimes to the district prosecutor’s office; (ii) the application of a “simplified” system for granting an extension of the preliminary investigation period for cybercrimes;[15] and (iii) the extension of the maximum period for preliminary investigation to two years.

    10. Amendments to Decree 231 and next steps for companies

    The Cybersecurity Law introduces significant amendments to Decree 231. In particular, the Cybersecurity Law:

    • Increases the penalties for cybercrimes established by Article 24-bis of Decree 231, providing for (i) a maximum fine of € 1,084,300 for the offenses referred to in Article 24-bis(1)  of Decree 231,[16] and (ii) a maximum fine of € 619,600 for the offenses referred to in Article 24-bis(2) [17]  of Decree 231;[18]
    • Expands the list of crimes that may trigger liability for companies and other legal entities under Decree 231, by including the new crime of cyber-extortion (new Article 24-bis(1-bis) of Decree 231) which is subject to the following penalties (i) a maximum fine of € 1,239,200, and (ii) disqualification penalties set out in Article 9(2) of Decree 231 (i.e., disqualification from conducting business; suspension or revocation of authorizations, licenses or concessions instrumental to the commission of the crime; prohibition from entering into contracts with the public administration; exclusion from grants, loans, contributions and subsidies with the possible revocation of those already granted; and ban on advertising goods and services) for a period of at least two years.

    In light of these developments, companies should consider reviewing and updating their policies and procedures to ensure that they are adequate to prevent new offenses that may trigger liability under Decree 231. In particular, companies should consider implementing new and more specific control measures, in addition to those already in place to prevent the commission of cybercrimes (which may already constitute a safeguard, even with respect to the newly introduced crime of cyber-extortion). Measures may include ensuring the proper use of IT tools, maintaining security standards for user identity, data integrity and confidentiality, monitoring employee network usage, and providing targeted information and training to company personnel.

    11. Conclusion

    The new Cybersecurity Law, while fitting into a complex regulatory framework that will need further changes, including in the short term (consider, in this regard, that as early as October 2024 the NIS 2 Directive will have to be implemented), nevertheless represents a concrete response to the sudden and substantial increase in cyber threats. In particular, the expansion of incident reporting requirements to include new stakeholders and the introduction of stricter reporting deadlines for incidents not affecting ICT Assets aim to enhance national cyber resilience and security. This approach ensures that critical infrastructure providers have better control over cybersecurity incidents.

    The increased penalties for cybercrimes, the introduction of new criminal offenses, and the developments regarding corporate liability under Decree 231 are also consistent with the above objectives. These measures are intended to tackle the increasing threat of cybercrime, although their effectiveness in practice remains to be seen.


    [1] According to the 2024 report published by the Italian Association for Information Security (“CLUSIT”), in 2023 cyber-attacks increased by 11% globally and by 65% in Italy.

    [2] Together with the relevant implementing decrees: Italian President of the Council of Ministers’ Decree (“DPCM”) No. 131 of July 30, 2020; Italian Presidential Decree (“DPR”) No. 54 of February 5, 2021; DPCM No. 81 of April 14, 2021; Italian Legislative Decree No. 82 of June 14, 2021; DPCM of June 15, 2021; DPCM No. 92 of May 18, 2022; and the NCA Resolution of January 3, 2023 (the “NCA Resolution”).

    [3] However, the Cybersecurity Law does not specifically refer to EU Directive 2022/2555 (the “NIS 2 Directive”), which Member States are required to implement by October 17, 2024.

    [4] Specifically, according to the Cybersecurity Law, the following are considered public administrations: central public administrations included in ISTAT annual list of public administrations; regions and autonomous provinces of Trento and Bolzano; metropolitan cities; municipalities with a population of more than 100,000 inhabitants and in any case, regional capitals; urban public transportation companies with a catchment area of not less than 100,000 inhabitants; suburban public transportation companies operating within metropolitan cities; and local health care companies.

    [5] See https://www.gazzettaufficiale.it/eli/id/2023/01/10/23A00114/sg.

    [6] See Article 18, paragraphs 3, 4 and 5 of Italian Legislative Decree No. 65/2018.

    [7] Defined, in accordance with Art. 1, letter m) of DPCM 131/2020, as a “set of networks, information systems and information services, or parts thereof, of any nature, considered unitarily for the purpose of performing essential functions of the State or for the provision of essential services.”

    [8] Operators referred to in Article 2(2) of the Digital Administration Code (Italian Legislative Decree No. 82/2005).

    [9] Listed in the register provided for in Article 106 of the Consolidated Law on Banking and Credit, referred to in Italian Legislative Decree No. 385/1993.

    [10] New paragraph 2-ter of Article 640 ICC.

    [11] In particular, Article 623-quater ICC applies to the criminal offenses set out in Articles 615-ter (Unauthorized access to a computer or telematic system), 615-quater (Possession, distribution and unauthorized installation of tools, codes and other means of access to computer or telematic systems), 617-quater (Unlawful interception, obstruction, or disruption of computer or telematic communications), 617-quinquies (Possession, distribution and unauthorized installation of tools and other means to intercept, obstruct or interrupt computer or telematic communications) and 617-sexies ICC (Falsifying, altering or suppressing the content of computer or telematic communications). Article 639-ter ICC instead applies to the criminal offenses set out in Articles 629(3) (new crime of cyber-extortion), 635-ter (Damage to information, data and computer programs of a public nature or interest), 635-quater.1 (Unauthorized possession, distribution, or installation of tools, devices, or programs designed to damage or interfere with a computer or telematic system) and 635-quinquies ICC (Damage to public utility computer or telematic systems).

    [12] The new provision addresses the same conduct for which penalties were provided for under former Article 615-quinquies ICC and provides for the same penalties, with the addition of the aggravating circumstances set out in Article 615-ter(2.1) and Article 615-ter(3) ICC.

    [13] In particular, a penalty of imprisonment of 8 to 22 years and a fine of € 6,000 to € 18,000 applies if the aggravating circumstances referred to in paragraph 3 of Article 628 ICC (i.e., the aggravating circumstances provided for the crime of robbery) are met, or where the crime is committed against a person incapacitated by age or infirmity.

    [14] That is, those set out in Articles 615-ter, 617-quater, 617-sexies, and 635-bis (Damage to computer information, data and programs), 635-quater (Damage to computer or telematic systems) and 635-quinquies ICC.

    [15] In particular, the “simplified” regime is provided for under Article 406(5-bis) ICCP, which provides that the judge shall issue an order within ten days from the submission of the request for extension of the preliminary investigation period by the public prosecutor. This provision, which is reserved for particularly serious crimes, is intended to allow a more timely and effective investigation of the commission of the crime.

    [16] That is, the crimes under Articles 615-ter, 617-quater, 617-quinquies, 635-bis, 635-ter, 635-quater and 635-quinquies ICC.

    [17] That is, the crimes under Articles 615-quater and 635-quater.1 ICC.

    [18] The disqualification penalties provided for these cybercrimes remain unchanged.

    FTC Announces Reforms to the Health Breach Notification Rule

On April 26, 2024, the Federal Trade Commission (“FTC” or the “Commission”) announced changes to the Health Breach Notification Rule (“HBNR”), which requires certain entities not covered by the Health Insurance Portability and Accountability Act (“HIPAA”) to notify consumers, the FTC, and, in some cases, the media of breaches of unsecured personally identifiable health data.[1]  The final rule seeks to address technological and industry advancements since the original HBNR was adopted in 2009 by clarifying the rule’s applicability to direct-to-consumer health technologies (such as fitness trackers), which have proliferated in recent years.  The final rule also expands the information that covered entities must provide to consumers when notifying individuals of a data breach.

    The Health Breach Notification Rule

    Section 13407 of the American Recovery and Reinvestment Act of 2009 (“the Act”) created certain protections for personal health records (“PHRs”), electronic records of individually identifiable health information “that can be drawn from multiple sources and that [are] managed, shared, and controlled by or primarily for the individual.”[2]  Since vendors of PHRs and PHR related entities (defined below) were collecting consumers’ health information but were not subject to HIPAA’s security requirements, the Act directed the FTC to issue a rule requiring such entities, and their third party service providers, to provide notification of any breach of unsecured PHR identifiable health information.

    • Businesses qualify as vendors of PHRs if they offer or maintain an electronic record of identifiable health information on an individual that can be drawn from multiple sources and that is managed, shared and controlled by or primarily for the individual.  Thus, if a company offers a health app that collects information from a consumer and can sync with a consumer’s fitness tracker, remote blood pressure cuffs, connected blood glucose monitor, etc., then that company would qualify as a vendor of PHRs.  The health app itself would qualify as a PHR.
    • Businesses qualify as PHR related entities if they interact with a vendor of PHRs either by offering products or services through the vendor’s website, app, or other online service – even if the vendor’s service is covered by HIPAA – or by accessing information in a PHR or sending information to a PHR.  For instance, if a company offers fitness trackers, remote blood pressure cuffs, connected blood glucose monitors, etc., that can send information to health apps, then that company would qualify as a PHR related entity.
    • Businesses qualify as third-party service providers if they offer services involving the use, maintenance, disclosure, or disposal of health information to vendors of PHRs or PHR related entities.  For example, if a vendor of PHRs hires a company to provide billing, debt collection, or data storage services related to health information, then that company would qualify as a third party service provider.

    The rule that the Commission issued in 2009 requires vendors of PHRs and related entities not covered by HIPAA to notify consumers, the FTC, and, in some cases, the media of breaches of unsecured personally identifiable health data impacting 500 or more individuals.  Third party service providers must also notify covered entities of any data breaches of unsecured PHR identifiable health information. 

    While the core of this rule remains the same, the FTC has updated the HBNR in light of the increasing amount of health data that companies collect from consumers and the growing incentive for companies to disclose that data for marketing or other purposes.

    Modifications to the Rule

    The finalized changes to the rule include:

    • Revising or creating definitions for “PHR identifiable health information,” “covered health care provider,” and “health care services or supplies” to underscore that the rule covers health apps and similar technologies not covered by HIPAA.  Under the revised rule, developers of health apps and similar technologies providing “health care services or supplies” are considered “covered health care providers” and data collected or used by their products constitutes “PHR identifiable health information.” 
    • Updating the definition of “personal health record” to clarify what it means for a PHR to “draw information from multiple sources.” Under the updated rule, a product qualifies as a PHR if it has the technical capacity to draw information (not just health data) from multiple sources, regardless of whether the consumer enables such syncing features. 
    • Modifying the definition of “breach of security” to cover not only data breaches, but also unauthorized disclosures.  A company thus commits a breach of security when it shares or sells consumers’ information to third parties in a manner inconsistent with the company’s representations to consumers (i.e., without disclosure and without affirmative express consent).
• Amending the definition of “PHR related entities” in two ways to clarify the HBNR’s scope.  First, the final rule broadens the definition to cover entities that offer products and services through a PHR vendor’s online services (including mobile device apps).  Second, the rule narrows the definition to cover only entities that access or send “unsecured PHR identifiable health information” to a PHR, as opposed to entities that access or send any other information to a PHR.  For example, remote blood pressure cuffs or fitness trackers could qualify as a PHR related entity when individuals sync them with a health app (a PHR).  However, a grocery delivery service that sends information about food purchases to a diet and fitness app is unlikely to qualify as a PHR related entity.
    • Authorizing electronic notification for individuals who have specified electronic mail as their primary contact method.  The rule defines “electronic mail” to mean email in combination with text messaging, within-app messaging, and/or an electronic banner.  Any notification delivered via electronic mail must be clear and conspicuous.
    • Expanding the content included in consumer notices to incorporate four additions.  First, the notice must include the full name or identity (or a description, if providing the name or identity would pose a risk to individuals or the notifying entity) of any known third parties that acquired unsecured PHR identifiable health information as a result of a breach.  Second, the updated rule expands the exemplar list of data that should be included in a notice’s description of affected information.  The updated list of potential data now includes, among other information, health diagnosis or condition, lab results, medications, other treatment information, the individual’s use of a health-related app, and device identifiers.  Third, the notice must also now include a brief description of what the breached entity is doing to protect affected individuals.  Finally, the new rule requires that the notice provide two or more of the following methods of contacting the notifying entity: toll-free phone number, email address, website, within-app, or postal address.
    • Changing the timing of FTC notification under the HBNR.  Under the original rule, covered entities had to notify the FTC within ten business days of discovering a breach involving 500 or more individuals.  Now, covered entities who experience a breach involving 500 or more people must notify the FTC at the same time they notify affected individuals and in no case later than 60 calendar days after the discovery of the breach.
    • Improving the readability of the HBNR by including explanatory parentheticals for internal cross-references, adding statutory citations, consolidating notice and timing requirements, and revising the Enforcement section to state the penalties for non-compliance more plainly.

    The final rule will go into effect 60 days after its publication in the Federal Register.


    [1] FTC Finalizes Changes to the Health Breach Notification Rule, (Apr. 26, 2024), https://www.ftc.gov/news-events/news/press-releases/2024/04/ftc-finalizes-changes-health-breach-notification-rule.

    [2] 42 U.S.C. 17921(11).

    EHDS – The EU Parliament formally adopts the Provisional Agreement: Key Takeaways and Next Steps

    In our Alert Memorandum of 19 July 2022 (available here), we outlined the European Commission’s (the “Commission”) proposal for a regulation on the “European Health Data Space” (the “Regulation” or the “EHDS”). The proposal, which was published in May 2022, is the first of nine European sector- and domain-specific data spaces set out by the Commission in 2020 in the context of its “European strategy for data”.

    The EU is now reportedly aiming to conclude the EHDS dossier and adopt the Regulation before the end of the EU Parliament’s current term (June 2024). To this end, on 15 March 2024, the EU Council and the EU Parliament announced that they had reached a provisional agreement on the text of the Regulation (the text is available here). And on 24 April 2024, the EU Parliament formally adopted the text of the provisional agreement.

    Background:

    The proposed Regulation is an initiative that attempts to create a “European Health Union” to make it easier to exchange and access health data at EU level. The Regulation builds on other recent EU reforms such as the recently enacted Data Act and the proposed AI Act. It seeks to tackle legacy systemic issues that have hindered lawful access to electronic health data. It promotes the electronic exchange of health data by enhancing individuals’ access to and portability of these data and by enabling innovators and researchers to process these data through reliable and secure mechanisms. It contains rules that govern both primary use (i.e., use of such data in the context of healthcare) and secondary use of health data (e.g. use for non-healthcare purposes such as research, innovation, policy-making, statistics).

    Recent Proposals:

    On 6 December 2023, the EU Council issued a press release (available here) confirming the agreement on the EU Council’s position and its mandate to start negotiations with the EU Parliament as soon as possible in order to reach a provisional agreement on the proposed Regulation (see the EU Council’s proposed amendments here). Subsequently, on 13 December 2023, the EU Parliament finalised its proposed amendments to the Regulation (see the EU Parliament’s proposed amendments here).

Following the inter-institutional trilogue negotiations between the EU Parliament, the EU Council and the Commission, on 15 March 2024, the EU Council and the EU Parliament issued a press release (available here) confirming that they had reached a provisional agreement on the text of the Regulation. They introduced new rules and also modified or clarified some of the rules that were originally proposed (some of which were outlined in our Alert Memorandum of 19 July 2022).

    Some of the highlights from the provisional agreement are as follows:

• Scope of Prohibited Purposes: The new text seeks to expand and clarify the scope of prohibited purposes for secondary use of health data. For instance, the Regulation now provides that the secondary use of health data to take decisions that will produce economic or social effects should be prohibited – this provides an additional prohibition on top of the original proposal, which intended to prohibit secondary use of health data only where the decisions produced “legal” effects. In addition, the Regulation further includes within the scope of the prohibited purposes: (i) decisions in relation to job offers; (ii) offering less favourable terms in the provision of goods or services; and (iii) decisions regarding the conditions for taking out loans, or any other discriminatory decisions taken on the basis of health data.
• Categories of Personal Data subject to Secondary Use: As noted above, electronic health data can be subject to “secondary use”, and health data holders must make certain categories of electronic data available for secondary use. The EU Parliament and the EU Council confirmed in their provisional agreement that Member States will be able to establish trusted data holders that can securely process requests for access to health data in order to reduce the administrative burden. The text also includes a number of amendments to the categories of electronic data that can be made available for secondary use.
    • IP and Trade Secrets:
  • The EU Commission’s first draft of the Regulation did not include specific measures to preserve the confidentiality of IP rights and trade secrets; however, the Regulation now includes a set of new provisions on the protection of IP rights and trade secrets (Recital 40c, Article 33a). Accordingly, where health data is protected by IP rights or trade secrets, the Regulation should not be used to reduce or circumvent such protection. The provisions impose, among other things, an obligation on the “health data access bodies”[1] to take all specific measures, including legal, organisational and technical measures, that are necessary to preserve the confidentiality of data entailing IP rights or trade secrets. Such measures could include common contractual arrangements for electronic health data access, specific obligations in relation to the rights that fall within the data permit, pre-processing the data to generate derived data that protects a trade secret but still has utility for the user, or configuring the secure processing environment so that such data is not accessible to the health data user. Where granting access to electronic health data for secondary use would entail a serious risk of infringing IP rights, trade secrets and/or regulatory data protection rights that cannot be addressed in a satisfactory manner, the health data access body must refuse access and explain the reasons to the health data user (see Article 33a(1)(d) of the Regulation).
  • In addition, the Regulation now imposes additional obligations on health data holders[2] with respect to electronic health data that entail IP rights or trade secrets. For example, the original proposals required a health data holder to make the electronic data they hold available upon request to the health data access body in certain circumstances. The Regulation now requires health data holders, when communicating to the health data access body the dataset descriptions for the datasets they hold (or, at the latest, following a request from the health data access body), to identify any such IP rights or trade secrets, to indicate which parts of the datasets are concerned and to justify why the data require the specific protection from which they benefit.
  • The Regulation also requires health data access bodies to apply certain criteria when deciding to grant or refuse access to health data. These criteria include whether the request demonstrates sufficient safeguards to protect the health data holder and the natural persons concerned; whether there is a lawful basis under the GDPR in the case of access to pseudonymised health data; and whether the requested data is necessary for the purpose described in the access application. In addition, the health data access body must take into account certain risks when deciding on the request. The health data access body must permit the data access where it concludes that the above-mentioned criteria are met and the risks that it must take into account are sufficiently mitigated.
    • Transparency: The Regulation now intends to impose an additional obligation on the data holders to provide certain information to natural persons about their processing of personal health data. This information obligation is intended to supplement the transparency obligations that the data holders may have under the GDPR.
• Right to access personal electronic health data: The Regulation now adds the individuals’ right to download their electronic health data and specifies that the right of access to personal electronic health data in the context of the EHDS complements the right to data portability under Article 20 of the GDPR (see Recital 11). In this context it should be noted that the GDPR right to data portability is limited to data processed based on consent or contract – which excludes data processed under other legal bases, such as when the processing is based on law – and only concerns data provided by the data subject to a controller, excluding many inferred or indirect data, such as diagnoses or tests.
• Right to opt-out and need to obtain consent: New Recital 37c and Article 35f provide patients with a right to opt out of the processing of all their health data for secondary use, except for purposes of public interest, policy making, statistics and research in the public interest. In addition, individuals shall be provided with sufficient information on their right to opt out, including on the benefits and drawbacks of exercising this right. Member States may also put in place stricter measures governing access to certain kinds of sensitive data, such as genetic data, for research purposes.
• Data localisation: Data localisation requirements are imposed in Articles 60a and 60aa. These provisions are intended to require that personal electronic health data be stored, for the purposes of primary and secondary use of personal electronic health data, exclusively within the territory of the EU or in a third country, territory or one or more specified sectors within that third country covered by an adequacy decision pursuant to Article 45 of the GDPR. These proposed changes are seemingly intended to address some of the concerns expressed by the European Data Protection Board (the “EDPB”) and the European Data Protection Supervisor (the “EDPS”) in their joint opinion of 12 July 2022. However, in certain ways the provisions do seem to go beyond the recommendations of the EDPB / EDPS (for example, with respect to the localisation of data, the EDPB/EDPS opinion actually proposed to require that electronic health data be stored in the EEA, but to allow for transfers under Chapter V of the GDPR, i.e. including, for example, transfers under standard contractual clauses or under the derogations provided for in Article 49 of the GDPR).

    Next steps:

    The provisional agreement will now have to be endorsed by the EU Council. It has been reported that the aim of the institutions is to conclude the EHDS dossier and adopt the Regulation before the end of the EU Parliament’s term (June 2024).

    Once formally adopted and published in the Official Journal of the EU, the EHDS will be directly applicable following a grace-period (currently, two years) after the entry into force of the Regulation (with the exception of certain provisions which will have different application dates).


[1] This is a body that Member States will set up to be responsible for granting access to electronic health data for secondary use.

    [2] This means the natural or legal person that has the ability to make available data; however note that negotiations between the EU Parliament, the EU Council and the EU Commission are still ongoing on the definition of “data holders”.

    Congress Releases American Privacy Rights Act Discussion Draft

After years of fits and starts—including failed attempts to pass the American Data Privacy and Protection Act in 2022—Congress has renewed its attempt to nationalize privacy protections for American consumers with the introduction of the American Privacy Rights Act (the “APRA” or “Act”).[1]  The APRA, a new bipartisan, bicameral proposal for comprehensive data protection legislation introduced by the House Committee on Energy and Commerce and the Senate Committee on Commerce, Science and Transportation in early April, is a direct response to a flurry of activity at the state level over the past few years and attempts to harmonize the resulting patchwork of privacy legislation that has created a burdensome and costly labyrinth of shifting compliance obligations for covered organizations that collect and process personal data.

    Several core provisions of the APRA—including strict data minimization obligations; consent requirements for certain data transfers; and consumer rights of access, correction, deletion and portability and to opt-out of certain processing activities—parallel recently enacted foreign and state privacy laws, including those currently in effect in California, Colorado, Connecticut, Utah and Virginia. In establishing these protections for consumers nationwide, the APRA creates a comprehensive, and in some ways more restrictive, framework to serve as the U.S. counterpart to Europe’s General Data Protection Regulation (the “GDPR”) that adjusts—and in some respects expands—the compliance burden on organizations that collect and use personal data. Most notably, and as those following Congress’ efforts to bring federal privacy legislation to fruition will recall, the APRA addresses the two most contentious aspects of federal privacy legislation by broadly preempting state and local data privacy laws and providing consumers a private right of action for violations of their privacy rights. If enacted, the Act would come into effect 180 days after its passing.

    Key Takeaways of the Act:

    1. Broad Preemption.  The Act as currently drafted contains broad preemption provisions that will largely do away with the patchwork of comprehensive privacy laws at the state level with some carve outs for certain state laws on discrete subjects related to privacy—notably, provisions of the California Consumer Privacy Act related to employee personal information are likely to remain in effect. 
2. Consumer Private Right of Action.  In addition to enforcement by the FTC and state attorneys general, individuals are provided with a private right of action that permits claims against covered entities for failures to comply with certain of the Act’s obligations.  Actions alleging substantial privacy harms or actions by a minor cannot be subject to mandatory arbitration, and individuals can recover actual damages, injunctive relief, declaratory relief and reasonable attorney fees and costs.
    3. Strict Data Minimization Requirements.  In line with recent heightened regulatory scrutiny of organizations’ data collection practices, the Act imposes strict data minimization obligations, prohibiting the collection, processing, retention and transfer of personal data, unless such activity meets general data-minimization principles (e.g., such processing is necessary, proportionate and limited to a specific purpose) or one of fifteen (15) specific permitted purposes.
    4. Broad Coverage.  Unlike recently enacted state privacy laws, the APRA does not contain any revenue or processing thresholds when it comes to applicability—broadly applying instead to any entity that determines the means and purposes of processing covered data and is subject to the Federal Trade Commission’s (“FTC”) jurisdiction, as well as to non-profits and common carriers.  Large data holders, high impact social media companies and data brokers have heightened, bespoke obligations under the Act, and even small businesses are subject to the Act to the extent such businesses engage in data sales. Data covered by the Act includes any data that identifies or is linked or reasonably linkable to an individual or device, but does not include de-identified data, employee data or publicly available information, amongst other carve outs.
    5. Sensitive Data Transfers and Express Consent.  Affirmative express consent is required before sensitive data—which is defined far more broadly under the Act than any current state privacy law and includes any data related to individuals under the age of seventeen (17)—can be transferred to a third party, unless the transfer is necessary, proportionate and limited to one of the permitted purposes.  Additional considerations are required for transfers of biometric and genetic data.

    Summary of the APRA

Applicability.  The Act broadly applies to covered entities that alone or jointly with others determine the purposes and means of collecting or processing covered data and (i) are subject to FTC jurisdiction, (ii) qualify as a common carrier subject to Title II of the Communications Act of 1934 or (iii) are a non-profit organization. Affiliates who share common branding with a covered entity are also in scope, while small businesses[2], governments and their service providers, the National Center for Missing and Exploited Children and, except for data security obligations, fraud-fighting non-profits are excluded.  There are additional heightened requirements for large data holders[3], covered high-impact social media companies[4] and data brokers.

Covered Data. Covered data is defined as any information that identifies or is linked to or reasonably linkable to an individual or device, excluding (i) deidentified data[5], (ii) employee data, (iii) publicly available information, (iv) inferences made from multiple sources of publicly available information that do not meet the definition of sensitive covered data and are not combined with covered data and (v) information in a library, archive, or museum collection subject to specific limitations.  The Act contains an extremely expansive definition of publicly available information, which serves to significantly narrow the Act’s scope.  Specifically, in addition to defining publicly available information to include information from government records or made available to the general public via widely distributed media, the definition also includes information lawfully made available from “a website or online service made available to all members of the public, for free or for a fee, including where all members of the public can log-in to the website or online service,” provided that the individual to whom the information pertains did not restrict the information to a specific audience.

    Sensitive covered data, the transfer of which requires affirmative opt-in consent unless expressly permitted under the Act, is a subset of covered data that generally includes any data relating to “covered minors” (i.e., individuals under the age of seventeen (17)), as well as government identifiers; health information; biometric information; genetic information; financial account and payment data; precise geolocation information; log-in credentials; private communications; information revealing sexual behavior; calendar or address book data, phone logs, photos and recordings for private use; any medium showing a naked or private area of an individual; video programming viewing information; an individual’s race, ethnicity, national origin, religion, or sex, in a manner inconsistent with a reasonable expectation of disclosure; online activities over time and across third party websites or over time on a high-impact social media company website or service[6]; and other data the FTC defines as sensitive covered data by rule.

    Obligations of Entities Subject to the APRA.  Broadly speaking, covered entities are subject to the obligations and restrictions under the Act set forth below. Notably, while the APRA does not contain specific revenue or processing thresholds to determine the Act’s applicability, it does impose specific, heightened compliance obligations on certain types of covered entities (such as large data holders and covered high-impact social media companies) based on annual revenues or the volume of covered data processed thereby. 

• Data Minimization.  The Act prescribes strict data minimization requirements, prohibiting covered entities (or service providers acting on their behalf) from collecting, processing, retaining or transferring any covered data unless doing so is (i) necessary, proportionate and limited to provide or maintain a specific product or service requested by the consumer or to communicate with a consumer in a manner reasonably anticipated within the context of the relationship or (ii) for an expressly permitted purpose (e.g., data security, compliance with legal obligations, preventing fraud, de-identification of data for product or service development or improvement).  Furthermore, covered entities are expressly prohibited from transferring (i) any sensitive covered data or (ii) biometric or genetic information, in each case, to a third party without express affirmative consent unless expressly permitted by the Act.
  • Business Transfers. Notably, the transfer of covered data as an asset to a third party in the context of a business transaction or bankruptcy is also set forth as a permitted purpose under the Act; provided that the covered entity provides each affected individual, within a reasonable time prior to such transfer, with (a) a notice describing such transfer, including the name of the entity receiving the individual’s data and its privacy policy and (b) a reasonable opportunity to withdraw any previously given consent or request deletion of their data.
• Transparency.  In a deviation from current requirements under state privacy laws, not only would covered entities be required to provide publicly available privacy policies detailing their data processing and security practices, but service providers would now also incur such obligations.  The privacy policy must be made available in each language in which the covered entity or service provider provides a product or service and disclose (i) the categories of covered data collected, (ii) the purposes for processing and (iii) to whom the information is transferred (including a list of any data broker transfers), as well as (iv) how consumers can exercise their privacy rights.  Material changes to a covered entity’s privacy policy—i.e., a change that would likely impact an individual’s decision to provide affirmative consent for or opt out of the entity’s data processing—require advance notice to consumers and the provision of a means to opt out. Uniquely, privacy policies must also disclose whether any covered data is transferred to, processed in or otherwise accessible to a foreign adversary.
      • Large Data Holders. Large data holders must further provide all copies of their privacy policies for the previous ten (10) years, including a log of all material changes (excluding for versions that predate the Act), as well as provide a short-form notice of their policies to consumers not to exceed 500 words in length.
    • Consumer Rights.  Like state privacy laws, the Act provides consumers with rights to access, correct and delete their data, as well as rights to data portability.  With respect to opt-out rights, consumers have rights to opt-out of (i) transfers of non-sensitive covered data and (ii) use of their data for targeted advertising, in each case, made through an opt-out mechanism. Not later than two (2) years after the Act’s enactment the FTC is directed to establish requirements and technical specifications for a privacy protective, centralized mechanism (including global privacy signals, such as browser or device privacy settings and registries of identifiers) for individuals to exercise their opt-out rights.  In addition, covered entities are prohibited from retaliating against any individual for exercising their APRA rights, provided that covered entities may offer bona fide loyalty programs or market research opportunities upon receipt of opt-in consent from the individual. Finally, users must be provided an “easy-to-execute” means to withdraw any affirmative express consent provided (i.e., in connection with the processing of their sensitive covered data).
  • Dark Patterns. The Act further prohibits covered entities from using any dark patterns—i.e., a user interface designed or manipulated with the substantial effect of subverting or impairing user autonomy, decision-making or choice—to divert an individual’s attention from any required notice, impair the ability of any individual to exercise their rights or to obtain, infer or facilitate consent.
    • Data Security and Executive Responsibility. Covered entities and service providers would be required to implement and maintain reasonable data security practices to protect the confidentiality, integrity and accessibility of, and prevent unauthorized access to, covered data, taking into account the size and complexity of the relevant business and the context, volume and sensitivity of the data to be processed. Entities must routinely assess vulnerabilities and take preventative and corrective actions to mitigate any reasonably foreseeable internal or external risk to, or vulnerability of, covered data. Additionally, covered entities must designate a privacy or data security officer to implement and facilitate ongoing review of the entity’s data privacy and security program, while large data holders must further (i) designate both a privacy and separate data security officer, (ii) beginning one year after the Act’s enactment, file annual certifications by the entity’s chief executive officer and each of its privacy and data security officers to the FTC detailing its internal controls and internal reporting structures for compliance with the Act, (iii) conduct privacy impact assessments on a biennial basis and (iv) develop a program to educate and train employees, amongst other responsibilities.
• Additional Service Provider and Third Party Obligations. In addition to the obligation to enter into data processing agreements discussed below, the Act places similar requirements on service providers as those existing under current state privacy legislation, including requirements to (i) refrain from collecting, processing or transferring covered data other than to the extent necessary and proportionate to provide a service requested by the covered entity, or where the service provider has actual knowledge that the covered entity violated the Act with respect to such data, (ii) assist the covered entity in responding to consumers attempting to exercise their APRA rights, (iii) upon request by the covered entity, make available the necessary information to demonstrate the service provider’s compliance with the Act, (iv) delete or return covered data, as determined by the covered entity, upon the end of provision of services unless retention is required by law, (v) engage other service providers only after exercising reasonable due diligence, providing notice to the covered entity and entering into a written contract satisfying the disclosing service provider’s obligations under the Act, (vi) develop, implement and maintain reasonable administrative, technical and physical safeguards to protect covered data and (vii) allow for and cooperate with reasonable audits by the covered entity.
  • Data Processing Agreements. Akin to the Article 28 requirements under the GDPR, the APRA mandates that covered entities and service providers enter into data processing agreements in order to establish a service provider relationship.  Such agreement governs the data processing procedures of the service provider with respect to any such data collection, processing or transfer performed on behalf of the covered entity or primary service provider and must clearly define the instructions for collecting, processing, retaining or transferring data, the nature, purpose and duration of the processing, the type of data subject to the processing and the rights and obligations of each party.  Finally, the contract must specifically prohibit the service provider from combining its own data with covered data it receives from or on behalf of another covered entity or person. Notably, not only must covered entities enter into contracts with their service providers, but they are also required under the Act to conduct reasonable due diligence in selecting a service provider, as well as when deciding to transfer covered data to a third party.
      • Third Party Processing Restrictions. The Act expressly prohibits third parties from processing the covered data transferred to it for any purpose other than (i) in the case of sensitive covered data, the processing purpose for which the consumer gave affirmative express consent or (ii) in the case of non-sensitive covered data, the processing purpose for which the third party made a disclosure in its privacy policy.
    • Data Brokers. Borrowing from the obligations imposed under the California Delete Act (previously discussed here), the APRA imposes a set of requirements on data brokers, including obligations to register with the FTC (which will be subsequently used to create a public-facing, searchable data broker registry) and maintain a publicly accessible website that contains a clear, conspicuous notice informing individuals that the entity is a data broker using language to be prescribed by the FTC.  The notice must further include a tool for individuals to exercise their individual controls and opt-out rights and a link to the FTC’s data broker registry website. The FTC is further directed to include a “Do Not Collect” mechanism on the data broker registry website that permits an individual to submit a request to all registered data brokers, subject to certain exceptions, that results in registered data brokers no longer collecting covered data related to such individual without the affirmative express consent of such individual.
• Civil Rights and Covered Algorithms.  With respect to race, gender and other protected characteristics, the APRA would prohibit a covered entity or service provider from collecting, processing or transferring covered data “in a manner that discriminates in or otherwise makes unavailable the equal enjoyment of goods or services” on a discriminatory basis, subject to certain exceptions such as using such data for self-testing by the covered entity to prevent or mitigate unlawful discrimination or diversifying an applicant, participant or customer pool.  An additional requirement for large data holders would align with restrictions on algorithmic decision-making introduced by the GDPR, pursuant to which large data holders that use covered algorithms in a manner that would pose a consequential risk of harm to an individual or group of individuals and use such covered algorithms to collect, process or transfer covered data would be required to produce “algorithm impact assessments”.  The Act sets forth a list of prescriptive requirements for what must be included in such assessments, including detailed descriptions of the design process and methodologies, the data used by the covered algorithm, the steps taken to mitigate potential harms and an assessment of the necessity and proportionality of the covered algorithm in relation to its stated purpose.  Conversely, covered entities and service providers are only required to conduct such impact assessments where such entity knowingly develops a covered algorithm that is designed, solely or in part, to collect, process, or transfer covered data in furtherance of a consequential decision.  In each case, however, such assessments must (i) be submitted to the FTC for evaluation, (ii) upon request, be made available to Congress and (iii) be summarized and made publicly available.
  • Opt-Out Rights.  Any entity (not just covered entities as defined) that uses a covered algorithm to make or facilitate a consequential decision (e.g., related to access to or equal enjoyment of housing, employment, education enrollment or opportunity, healthcare, insurance or credit or use of or access to any place of public accommodation) must provide the relevant individual with notice and an opportunity to opt out of such use of the covered algorithm.

Enforcement. Departing from the approach adopted by most states (other than California), the APRA permits consumers to file private lawsuits against covered entities that violate certain of their APRA rights (e.g., failures to receive consent for transfers of sensitive data or collection or transfer of biometric or genetic data, failures to provide privacy notices or to permit consumers to exercise their privacy rights), pursuant to which they may recover actual damages, injunctive relief, declaratory relief and reasonable attorney fees and costs.[7]  Where injunctive relief or actual damages are sought, consumers must provide the covered entity with thirty (30) days’ written notice of the alleged violation[8], unless the alleged violation resulted in a substantial privacy harm.[9]

In addition to the private right of action, the APRA delegates primary enforcement authority to the FTC and permits state attorneys general, chief consumer protection officers and other state or federal officials authorized to enforce privacy or data security laws, including the California Privacy Protection Agency, to bring enforcement actions after notification to the FTC, subject to certain exclusions. The FTC is also provided the authority to promulgate regulations under a variety of provisions of the Act, as well as tasked with establishing a new bureau, comparable in nature to the existing bureaus within the FTC related to consumer protection and competition, to assist the FTC in carrying out its duties under the Act.

Violations of the Act will be treated as violations of a rule defining an unfair or deceptive trade practice under the FTC Act, carrying a maximum civil penalty of $51,744 per violation.  Civil penalties obtained are to be deposited in the Privacy and Security Victims Relief Fund to provide redress, payment, compensation or other monetary relief to individuals affected by an APRA violation.  States may further seek injunctive relief; civil penalties, damages, restitution, or other consumer compensation; attorneys’ fees and other litigation costs; and other relief, as appropriate.

    Preemption of State and Local Privacy Laws. The APRA would generally preempt states from adopting, maintaining or enforcing any law or regulation covered by provisions of the Act with the exception of an enumerated list of state laws, rendering moot most aspects of the privacy legislation recently passed at the state level.   Despite its wide-ranging preemptive effects, there are a few notable exceptions to the APRA’s broad preemption provisions, including privacy laws related to the protection of employee data (meaning the California Consumer Privacy Act would remain in effect with respect to employee data) as well as carve outs for certain state laws on discrete subjects related to privacy (e.g., provisions of laws that address privacy rights or other protections of students or student information, data breach notification laws, general consumer protection or civil rights laws).  Similarly, entities subject to and in compliance with other specified federal privacy laws, including the Gramm-Leach-Bliley Act and Health Insurance Portability and Accountability Act, or federal data security requirements shall be deemed in compliance with the related provisions of the APRA.

    State law preemption under the APRA has drawn heavy criticism from legislators and consumer advocacy groups who have criticized Congress’ approach as creating a ceiling for individual privacy rights rather than a floor.  Opponents of state law preemption argue that the federal government is ill-equipped to quickly respond to technological advancements that impact consumer privacy as compared with the states, which are often better positioned to respond to rapid changes in the digital environment. On the other hand, small and medium businesses and large corporations from around the country have expressed support for the APRA’s broad preemption provisions, citing the untenable compliance obligations imposed by the current patchwork of privacy legislation.

    Conclusion

    Because of its nationwide scope and potential to preempt state law, the APRA would markedly change the regulatory framework for entities that collect and process data of U.S. individuals. However, given the APRA’s uncertain future, covered entities should continue to monitor legal developments at the federal and state levels.


    [1] A copy of the discussion draft APRA can be found here.

[2] Defined as entities and their affiliates whose average annual gross revenue for the previous three (3) years did not exceed $40 million, that, on average, did not process the covered data of more than 200,000 individuals (excluding payment transactions) and that do not transfer covered data to third parties for value (i.e., entities that do not “sell” data).

[3] Covered entities or service providers that have $250 million or more in annual revenue and collect, process, retain, or transfer the covered data of more than 5 million individuals (or 15 million portable devices or 35 million connected devices that are linkable to an individual) or the sensitive data of more than 200,000 individuals (or 300,000 portable devices or 700,000 connected devices), subject to certain exemptions for data such as personal mailing or email addresses, personal telephone numbers, log-in information or, in the case of a covered entity that is a seller of goods or services (other than payment processors or platforms), credit, debit, or mobile payment information strictly necessary to initiate, render, bill for, finalize, complete, or otherwise facilitate payments for goods or services.

    [4] Covered entities that provide any internet-accessible platform and generate $3 billion or more in global annual revenue, have 300 million global monthly active users and constitute an online product that is primarily used by individuals to access or share user-generated content.

[5] Similar to comprehensive state privacy laws passed to date, “de-identified data” is defined as information that cannot reasonably be used to infer or derive the identity of an individual, does not identify and is not linked or reasonably linkable to an individual or a device that identifies or is linked or reasonably linkable to such individual, regardless of whether the information is aggregated, if the relevant covered entity or service provider (i) takes reasonable physical, administrative, or technical measures to ensure that the information cannot, at any point, be used to re-identify any individual or device that identifies or is linked or reasonably linkable to an individual; (ii) publicly commits in a clear and conspicuous manner to: (A) process, retain, or transfer the information solely in a de-identified form without any reasonable means for re-identification; and (B) not attempt to re-identify the information with any individual or device that identifies or is linked or reasonably linkable to an individual; and (iii) contractually obligates any entity that receives the information from the covered entity or service provider to: (A) comply with clauses (i) and (ii) with respect to the information; and (B) require that such contractual obligations be included in all subsequent contracts under which the data may be received.

    [6] Notably, this would mean any browsing data on such platforms, even without cross-site tracking, would require affirmative consent for third party transfers.

    [7] Notably, (i) California residents are further entitled to recover statutory damages consistent with the CCPA for an action related to a data breach and (ii) consumers may recover statutory damages consistent with Illinois’s Biometric Information Privacy Act and Genetic Information Privacy Act for an action involving a violation of the affirmative express consent provisions for biometric and genetic information where the conduct occurred substantially and primarily in Illinois.

    [8] If a cure for the alleged violation is possible within thirty (30) days, and the entity in fact cures and provides written notice of such cure to the individual, an action for injunctive relief will not be permitted.

    [9] Substantial privacy harms include financial harms of $10,000 or more and physical and mental harms that involve (i) treatment by a licensed health care provider, (ii) physical injury, (iii) highly offensive intrusions into the privacy expectations of a reasonable consumer or (iv) discrimination on the basis of a protected characteristic.

    EU Court of Justice confirms earlier case law on broad interpretation of “personal data” and offers extensive interpretation of “joint controllership”, with possible broad ramifications in the AdTech industry and beyond

    On March 7, 2024, the Court of Justice of the European Union (the “CJEU”) handed down its judgment in the IAB Europe case, answering a request for a preliminary ruling under Article 267 TFEU from the Brussels Market Court.[1]  The case revolves around IAB Europe’s Transparency and Consent Framework (“TCF”) and has been closely monitored by the AdTech industry ever since the Belgian DPA investigated and subsequently imposed a 250,000 euro fine on IAB Europe for alleged breaches of GDPR and e-Privacy rules back in 2022.[2]

    Factual Background

IAB Europe is a European-level standard setting association for the digital marketing and advertising ecosystem.  Back in 2018, when the GDPR became applicable, it designed the TCF as a set of rules and guidelines that addresses challenges posed by GDPR and e-Privacy rules in the context of online advertising auctions (such as real-time bidding).  The goal was to help AdTech companies that do not have any direct interaction with the website user (i.e., any company in the AdTech ecosystem that is not the website publisher, such as ad-networks, ad-exchanges, demand-side platforms) to ensure that the consent that the website publisher obtained (through cookies or similar technologies) is valid under the GDPR (i.e., freely given, specific, informed and unambiguous) and that, therefore, those AdTech companies can rely on that consent to serve ads to those users in compliance with GDPR and e-Privacy rules.

At a technical level, in simplified terms, the TCF is used to record consent (or lack thereof) or objections to the reliance on legitimate interests under GDPR among IAB’s members by storing the information on consents and objections in a Transparency and Consent String (the “TC String”).  The TC String is a coded representation (a string of letters and numbers) of a user’s preferences, which is shared with data brokers and advertising platforms participating in the TCF auction protocol who would not otherwise have a way to know whether users have consented or objected to the processing of their personal data.[3]
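For readers less familiar with consent strings, the short Python sketch below illustrates the general idea of packing per-purpose and per-vendor consent signals into a compact, shareable token. It is a purely hypothetical, simplified illustration of the concept; the function names and encoding are our own and do not reflect the actual TC String binary format published in the TCF specification.

import base64

# Hypothetical, simplified illustration of a consent string.
# This is NOT the real IAB TCF TC String format, which follows a
# published binary specification; it only shows the general idea of
# encoding consent signals into a compact token.

def encode_consent(purpose_consents: dict, vendor_consents: dict) -> str:
    """Pack per-purpose and per-vendor consent booleans into a base64url token."""
    def to_bits(consents: dict) -> str:
        # One character per ID: "1" for consent, "0" for no consent or objection.
        width = max(consents, default=0)
        return "".join("1" if consents.get(i, False) else "0" for i in range(1, width + 1))

    payload = f"{to_bits(purpose_consents)}.{to_bits(vendor_consents)}".encode()
    return base64.urlsafe_b64encode(payload).decode().rstrip("=")

# Example: the user consents to purposes 1 and 3, and to vendor 2 only.
token = encode_consent({1: True, 2: False, 3: True}, {1: False, 2: True})
print(token)  # an opaque token that downstream participants could decode and check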

    First Question: Does the TC String constitute Personal Data?

    The CJEU now ruled, echoing its earlier decision in Breyer,[4] that the TC String may constitute personal data under the GDPR to the extent those data may, by “reasonable means”, be associated with an identifier such as an IP address, allowing the data subject to be (re-)identified.  The fact that IAB Europe can neither access the data that are processed by its members under its membership rules without an external contribution, nor combine the TC String with other factors itself, did not preclude the TC String from potentially being considered personal data according to the CJEU.[5] 

    Second Question: Does IAB Europe act as Data Controller?

    Secondly, the Court decided that IAB Europe, as a sectoral organization proposing a framework of rules regarding consent to personal data processing, which contains not only binding technical rules but also rules setting out in detail the arrangements for storing and disseminating personal data, should be deemed a joint controller together with its members if and to the extent it exerts influence over the processing “for its own purposes” and, together with its members, determines the means behind such operations (e.g., through technical standards).  In the IAB Europe case, this concerns in particular the facilitation by IAB of the sale and purchase of advertising space among its members and its enforcement of rules on TC String content and handling.  It also seemed particularly relevant to the Court that IAB Europe could suspend membership in case of breach of the TC String rules and technical requirements by one of its members, which may result in the exclusion of that member from the TCF.

    Further, in keeping with earlier CJEU case-law[6], the Court found it irrelevant that IAB Europe does not itself have direct access to the personal data processed by its members.  This does not in and of itself preclude IAB Europe from holding the status of joint controller under GDPR.

    However, the Court also reiterated that joint controllership doesn’t automatically extend to subsequent processing by third parties, such as – in this case – website or application providers further processing the TC String following its initial creation, unless the joint controller continues to (jointly) determine the purpose and means of that subsequent processing.  This is in line with the Court’s 2019 Fashion ID judgment.[7]  In addition, the Court opined that the existence of joint controllership “does not necessarily imply equal responsibility” of the various operators engaged in the processing of personal data. The level of responsibility of each individual operator must be assessed in the light of all the relevant circumstances of a particular case, including the extent to which the different operators are involved at different stages of the data processing or to different degrees.  So not all joint controllers are created equal.

    Key Takeaways

    In our view, the first finding is not groundbreaking.  It largely confirms the Court’s previous case-law establishing that “personal data” must be interpreted broadly under GDPR, meaning the standard for truly “anonymized data” continues to be very high.  It will now be for the Brussels Market Court to determine whether, based on the specific facts of the IAB Europe case, the TC String indeed constitutes personal data.

    The second finding may have caught more people off guard.  While it will again be up to the Brussels Market Court to determine whether IAB Europe is actually a joint controller in respect of the personal data alleged to be included in the TC String, the Court’s expansive interpretation of the concept of joint controllership (i.e., where “two or more controllers jointly determine the purposes and means of processing” (Article 26 GDPR)) could have broader ramifications beyond the AdTech industry. 

    Organizations who until now have consistently taken the position that they do not qualify as a data controller in respect of data processing activities of their members, users or customers, may need to re-assess that position and, based on the specific factual circumstances relevant to them, consider whether they might in fact be subject to GDPR’s onerous obligations imposed on data controllers.  This may be particularly relevant for standard-setting bodies and industry associations active or established in Europe, potentially hampering their ability to continue developing relevant standards and rules.  Arguably, this could even capture certain providers or deployers of software and other computer systems, including those developing or deploying AI models and systems, in case they would be found to issue “binding technical rules” and “rules setting out in detail the arrangements for storing and disseminating personal data”, and they would actually enforce those rules against third parties using their models and systems to process personal data. 

Even if some solace can be found from a liability perspective in the confirmation by the Court that joint controllership relating to the initial collection of personal data does not automatically extend to the subsequent processing activities carried out by third parties, and that not all joint controllers are created equal, the compliance burden on “newfound joint controllers” may nevertheless be significant because key obligations on lawfulness, transparency, data security and accountability are triggered irrespective of the “degree” of controllership in question.

    In our view that would take the concept of “joint controllership” too far beyond its literal meaning and originally intended purpose, but it remains to be seen which other enforcement actions will be taken and which other cases raising similar questions may find their way through the European courts in the coming months and years.


    [1]           CJEU, judgment of March 7, 2024, IAB Europe, C-604/22, ECLI:EU:C:2024:214 (https://curia.europa.eu/juris/document/document.jsf?text=&docid=283529&pageIndex=0&doclang=FR&mode=req&dir=&occ=first&part=1&cid=167405).

    [2]           For more information on the original case in front of the Belgian DPA, see the DPA’s dedicated landing page: https://www.dataprotectionauthority.be/iab-europe-held-responsible-for-a-mechanism-that-infringes-the-gdpr.

    [3]           For more information, see the IAB Europe website: https://iabeurope.eu/.

[4]           CJEU, judgment of October 19, 2016, Breyer, C‑582/14, EU:C:2016:779, paragraphs 41-49 (https://curia.europa.eu/juris/document/document.jsf?text=&docid=184668&pageIndex=0&doclang=EN&mode=lst&dir=&occ=first&part=1&cid=1303370).

    [5]           Recital 26 of GDPR further clarifies that, “to ascertain whether means are reasonably likely to be used to identify the natural person, account should be taken of all objective factors, such as the costs of and the amount of time required for identification, taking into consideration the available technology at the time of the processing and technological developments.”  This will always require a fact-intensive, case-by-case inquiry, but it is now even more clear that “it is not required that all the information enabling the identification of the data subject must be in the hands of one person” (CJEU, IAB Europe judgment, §40).

[6]           CJEU, judgment of July 10, 2018, Jehovan todistajat, C‑25/17, EU:C:2018:551, paragraph 69 (https://curia.europa.eu/juris/document/document.jsf?text=&docid=203822&pageIndex=0&doclang=EN&mode=lst&dir=&occ=first&part=1&cid=1305431), and CJEU, judgment of June 5, 2018, Wirtschaftsakademie Schleswig-Holstein, C‑210/16, EU:C:2018:388, paragraph 38 (https://curia.europa.eu/juris/document/document.jsf?text=&docid=202543&pageIndex=0&doclang=EN&mode=lst&dir=&occ=first&part=1&cid=1305548).

[7]           CJEU, judgment of July 29, 2019, Fashion ID, C‑40/17, EU:C:2019:629, paragraph 74 (https://curia.europa.eu/juris/document/document.jsf?text=&docid=216555&pageIndex=0&doclang=EN&mode=lst&dir=&occ=first&part=1&cid=1305826), as commented on in our earlier blog post here: https://www.clearycyberwatch.com/2019/08/cjeu-judgment-in-the-fashion-id-case-the-role-as-controller-under-eu-data-protection-law-of-the-website-operator-that-features-a-facebook-like-button/; see also the EDPB Guidelines 07/2020 on the concepts of controller and processor in the GDPR (version 2.1, adopted on July 7, 2021), in relation to the concept of “converging decisions”, at paragraphs 54-58 (https://www.edpb.europa.eu/system/files/2023-10/EDPB_guidelines_202007_controllerprocessor_final_en.pdf).

    Biden Administration Executive Order Targets Bulk Data Transactions

The Biden administration recently issued Executive Order 14117 (the “Order”) on “Preventing Access to Americans’ Bulk Sensitive Personal Data and United States Government-Related Data by Countries of Concern.”  Building upon earlier Executive Orders[1], the Order was motivated by growing fears that “countries of concern” may use artificial intelligence and other advanced technologies to analyze and manipulate bulk sensitive personal data for nefarious purposes.  In particular, the Order notes that unfettered access to Americans’ bulk sensitive personal data and United States government-related data by countries of concern, whether via data brokers, third-party vendor agreements or otherwise, may pose heightened national security risks. To address these possibilities, the Order directs the Attorney General to issue regulations prohibiting or restricting U.S. persons from entering into certain transactions that pose an unacceptable risk to the national security of the United States.  Last week, the Department of Justice (“DOJ”) issued an Advance Notice of Proposed Rulemaking, outlining its preliminary approach to the rulemaking and seeking comments on dozens of issues ranging from the definition of bulk U.S. sensitive personal data to mitigation of compliance costs.

The forthcoming proposed rule will apply to transactions that (i) involve bulk sensitive personal data or U.S. Government-related data; (ii) are part of a class of transactions determined by the Attorney General to pose an unacceptable risk to the national security of the U.S.; (iii) were initiated, are pending, or will be completed after the effective date of the regulations; (iv) do not qualify for an exemption and are not authorized by a license as set forth in the regulations; and (v) are not “incident to and part of the provision of financial services, including banking, capital markets, and financial insurance services, or required for compliance with any Federal statutory or regulatory requirements.”  The proposed rule will be published for public notice and comment by August 26, 2024.  Notably, the Order specifically does not impose generalized data localization requirements or prohibit commercial transactions with countries of concern, but rather is tailored to the types of transactions described above.

    The proposed rule will also (i) identify classes of prohibited transactions; (ii) identify classes of restricted transactions; (iii) identify countries of concern and other covered persons; (iv) establish mechanisms to provide further clarity regarding the Order and any implementing regulations; (v) establish a process to issue licenses authorizing transactions that would otherwise be prohibited or restricted; (vi) define relevant terms; (vii) address coordination with other government entities; and (viii) address the need for recordkeeping and reporting of transactions to inform investigative, enforcement, and regulatory efforts.  Among other factors, the proposed regulations will consider both the nature of the class of transaction and the volume of bulk sensitive personal data involved.  Any proposed regulations will also “establish thresholds and due diligence requirements for entities to use in assessing whether a transaction is a prohibited transaction or a restricted transaction.”  Additionally, the Secretary of Homeland Security is directed to propose and seek public comment on security requirements to mitigate the risk posed by restricted transactions.  The security requirements will be based on the National Institute of Standards and Technology Cybersecurity and Privacy Frameworks.  The Secretary of Homeland Security will also issue interpretive guidance regarding such security requirements and the Attorney General will issue enforcement guidance.

Several other agencies are also directed or advised by the Order to address risks relating to network infrastructure, health data and human genomic data, and the data brokerage industry.  The Order also requires the Attorney General, the Secretary of Homeland Security, and the Director of National Intelligence to make recommendations as to how to mitigate risks from transfers of bulk sensitive personal data to countries of concern that have already occurred.

Many of the key concepts in the Order, including “countries of concern” and prohibited and restricted transactions, will be further defined and clarified through the rulemaking process. However, it is clear that transactions involving cross-border transfers of large quantities of sensitive personal information will face enhanced regulatory scrutiny and eventual enforcement, particularly where countries of concern are involved.  The DOJ is accepting comments to the Advance Notice of Proposed Rulemaking until April 19, 2024.  The public will also have the opportunity to comment on the DOJ’s proposed rule later this year.


    [1] Executive Order 13873 of May 15, 2019 (Securing the Information and Communications Technology and Services Supply Chain) and Executive Order 14034 of June 9, 2021 (Protecting Americans’ Sensitive Data from Foreign Adversaries).

    New Privacy Laws Enacted in New Jersey and New Hampshire

On January 16, 2024, New Jersey officially became one of a growing number of states with comprehensive privacy laws, as Governor Phil Murphy signed Senate Bill 332 (the “New Jersey Privacy Act”) into law.[1]  New Hampshire followed closely behind, with its own comprehensive privacy law, Senate Bill 255 (the “New Hampshire Privacy Act” and, together with the New Jersey Privacy Act, the “Acts”), signed into law by Governor Chris Sununu on March 6, 2024.[2]

As with many of the other comprehensive privacy laws enacted around the country in the past few years, the Acts are based on the Washington Privacy Act model, containing many familiar consumer rights and protections, though with some notable differences highlighted below.  Like all currently enacted comprehensive U.S. state privacy laws other than California’s, the New Jersey Privacy Act and the New Hampshire Privacy Act do not include a private right of action and do not apply to New Jersey or New Hampshire residents acting in a commercial or employment context.  The New Jersey Privacy Act will come into effect 365 days from enactment, or January 15, 2025, with certain provisions, including those regarding universal opt-out mechanisms discussed below, coming into effect later in 2025, while the New Hampshire Privacy Act will come into effect on January 1, 2025.

    Applicability

    Processing Thresholds.  Following the trend set by other comprehensive state privacy laws, such as those in Connecticut and Colorado, the New Jersey Privacy Act applies to controllers that (i) conduct business in New Jersey or produce products or services that are targeted to New Jersey residents and (ii) during a calendar year either control or process the personal data of (a) at least 100,000 consumers (i.e., New Jersey residents acting in an individual or household context), excluding personal data processed solely for the purpose of completing a payment transaction or (b) at least 25,000 consumers and derive revenue, or receive a discount on the price of any goods or services, from the sale[3] of personal data.

    The New Hampshire Privacy Act similarly follows the applicability standards of many prior state privacy laws, though with a few changes to account for the smaller population of the state.  The New Hampshire Privacy Act applies to persons that (i) conduct business in New Hampshire or produce products or services that are targeted to New Hampshire residents and (ii) during a one year period either control or process the personal data of (a) not less than 35,000 unique consumers (i.e., New Hampshire residents acting in an individual or household context), excluding personal data controlled or processed solely for the purpose of completing a payment transaction or (b) not less than 10,000 unique consumers and derived more than 25 percent of gross revenue from the sale of personal data. 
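
    For illustration only, the two-prong applicability tests summarized above can be expressed as a short check in code.  The Python sketch below encodes only the headline thresholds described in this post; the function names and inputs are hypothetical simplifications rather than statutory terms, and the sketch ignores the statutory exclusions (such as data processed solely to complete a payment transaction) and the entity-level exemptions discussed in the next section.

```python
# Illustrative sketch of the headline applicability thresholds summarized above.
# Simplified: ignores statutory exclusions (e.g., payment-transaction data) and
# entity-level exemptions, and is not legal advice.


def nj_act_applies(targets_nj_residents: bool,
                   consumers_processed: int,
                   revenue_or_discount_from_sales: bool) -> bool:
    """Two-prong test under the New Jersey Privacy Act (simplified)."""
    if not targets_nj_residents:
        return False
    # Prong (a): 100,000+ consumers in a calendar year, or
    # Prong (b): 25,000+ consumers plus revenue (or a discount) from selling personal data.
    return consumers_processed >= 100_000 or (
        consumers_processed >= 25_000 and revenue_or_discount_from_sales
    )


def nh_act_applies(targets_nh_residents: bool,
                   consumers_processed: int,
                   pct_gross_revenue_from_sales: float) -> bool:
    """Two-prong test under the New Hampshire Privacy Act (simplified)."""
    if not targets_nh_residents:
        return False
    # Prong (a): 35,000+ unique consumers in a one-year period, or
    # Prong (b): 10,000+ unique consumers plus more than 25% of gross revenue from sales.
    return consumers_processed >= 35_000 or (
        consumers_processed >= 10_000 and pct_gross_revenue_from_sales > 25.0
    )


# Example: a business targeting New Hampshire residents that processes 12,000
# consumers' data and derives 30% of gross revenue from selling personal data.
print(nh_act_applies(True, 12_000, 30.0))  # True
```

    As the example suggests, a relatively modest data set can still trigger the New Hampshire Privacy Act where a meaningful share of revenue comes from selling personal data.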

Exceptions.  While the New Jersey Privacy Act contains some common exceptions to applicability, such as for protected health information collected by a covered entity or business associate under the Health Insurance Portability and Accountability Act, and for financial institutions and their affiliates or data subject to the Gramm-Leach-Bliley Act, there is no exception for non-profit organizations or higher education institutions.  Non-profit organizations that may be exempt under many other state privacy laws (e.g., Colorado, Delaware (which only exempts nonprofits dedicated to preventing and addressing insurance crime) and Oregon (where the non-profit exemption will expire in July 2025)) will need to pay close attention to the New Jersey Privacy Act, since such an organization will be subject to the Act’s standard requirements if it meets the general applicability threshold by processing or selling the personal data of the relevant number of New Jersey-based consumers.

The New Hampshire Privacy Act also contains many of the familiar exceptions to applicability, including for non-profit organizations and higher education institutions.  However, the exception for financial institutions or data subject to Title V of the Gramm-Leach-Bliley Act does not include affiliates of such institutions.  Entities that have some affiliates that are subject to the Gramm-Leach-Bliley Act but others that are not will need to carefully consider applicability under the New Hampshire Privacy Act.

    Data Protected

Both Acts apply to a similar set of data as other state comprehensive privacy laws, covering personal data that is “linked or reasonably linkable to an identified or identifiable” individual.[4] However, there are a few notable expansions in the types of data the Acts cover and the protections afforded to certain data when compared with other similar state privacy laws.

Sensitive Data.  The definition of sensitive data under the New Jersey Privacy Act includes not only typical information such as personal data revealing racial or ethnic origin, religious beliefs, mental or physical health condition, etc., but also a few more unique categories.  First, like California, the definition encompasses financial information, which includes a consumer’s account number, account log-in, financial account or credit or debit card number in combination with any required security or access code or password that would permit access to a consumer’s financial account.  Following Oregon’s and Delaware’s definitions, sensitive data also includes personal data revealing status as transgender or non-binary.  By contrast, the New Hampshire Privacy Act’s sensitive data definition largely aligns with other state laws, without such additions.  Like other state privacy laws with the exception of California, both Acts require consumer consent to process sensitive data, and such processing additionally requires controllers to conduct data protection assessments, as discussed later in this post.

    Children’s and Minors’ Data.  In addition to requirements to process personal data of children under the age of 13 in accordance with the Children’s Online Privacy Protection Act, the New Jersey Privacy Act requires controllers to obtain consent before processing personal data for purposes of targeted advertising, selling personal data or profiling in furtherance of decisions that produce legal or similarly significant effects where the controller has actual knowledge, or willfully disregards, that the consumer is at least 13 years old but younger than 17 years old.  The New Hampshire Privacy Act has a similar requirement as regards the processing of a minor’s data, but consent is only required where a controller is processing personal data for purposes of targeted advertising or selling personal data (and not profiling) and the requirement applies when a controller both has actual knowledge and willfully disregards that the consumer is at least 13 years old but younger than 16.

    Other Notable Provisions

While this post does not attempt to cover all provisions of the Acts, there are a few additional provisions that differentiate the New Jersey Privacy Act and the New Hampshire Privacy Act from similar state privacy acts.

    Website Link.  Similar to California, the New Hampshire Privacy Act requires that controllers provide a “conspicuous link” on the controller’s website that enables a consumer or their agent to opt-out of targeted advertising or the sale of personal data.

Data Protection Assessments.  Like other state privacy laws, both Acts require controllers to conduct data protection assessments for processing activities that present a heightened risk of harm to a consumer.  The New Jersey Privacy Act is unique, however, in that it makes clear that such assessments must be conducted before the relevant processing activity can occur.  In other words, controllers are expressly prohibited from conducting processing activities that present a heightened risk of harm to consumers without first conducting and documenting a data protection assessment of each of their processing activities involving personal data acquired on or after the New Jersey Privacy Act’s effective date.  Fortunately, in line with the requirements set forth under other state regimes, including New Hampshire’s, “heightened risk” is defined to include processing personal data for targeted advertising, profiling if it presents certain reasonably foreseeable risks, selling personal data and processing sensitive data, and the items required to be considered in the data protection assessments, including weighing the benefits of processing against the rights of the consumer and the use of de-identified data, are also in line with other states’ requirements.  Accordingly, to the extent controllers covered by the Acts that engage in the aforementioned processing activities are also subject to requirements to conduct data protection assessments under other currently effective privacy regimes, such controllers should be able to leverage those assessments for compliance purposes.

Universal Opt-Out.  Both Acts require controllers to recognize universal opt-out signals if controllers undertake certain processing activities.  The New Jersey Privacy Act provides that, no later than 6 months after its effective date, controllers that process personal data for targeted advertising or that sell personal data must allow consumers to exercise their rights to opt out of such processing through a user-selected universal opt-out mechanism (the technical specifications for which will be subject to further regulation as discussed below).  Under the New Hampshire Privacy Act, controllers that process personal data for targeted advertising or sell personal data must allow consumers to opt out through an opt-out preference signal no later than January 1, 2025, which is the same as the New Hampshire Privacy Act’s effective date.  Both Acts set forth a number of requirements for the universal opt-out mechanisms, with New Hampshire’s aligning more closely with terms used in other state privacy laws that contain universal opt-out mechanisms, such as those of Colorado and Connecticut; however, both Acts instruct that the universal opt-out mechanisms should be “as consistent as possible” with similar mechanisms required by federal or state law or regulation, highlighting the intent to encourage standard opt-out mechanisms.
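
    Neither Act names a specific signal, and in New Jersey the technical specifications remain subject to rulemaking.  As a concrete but hedged illustration, the Python sketch below assumes the Global Privacy Control (GPC) signal, which participating browsers transmit as a “Sec-GPC: 1” request header; whether GPC or another mechanism ultimately satisfies either Act will depend on the forthcoming regulations and guidance.

```python
# Minimal sketch of honoring a browser-based opt-out preference signal.
# Assumes the Global Privacy Control (GPC) signal, sent as the "Sec-GPC: 1"
# request header; neither Act mandates GPC by name, so treat this as one
# plausible implementation rather than a compliance recipe.

from typing import Mapping


def opt_out_signal_present(headers: Mapping[str, str]) -> bool:
    """Return True if the request carries a recognized opt-out preference signal."""
    # HTTP header names are case-insensitive, so normalize before the lookup.
    normalized = {name.lower(): value.strip() for name, value in headers.items()}
    return normalized.get("sec-gpc") == "1"


def targeted_advertising_allowed(headers: Mapping[str, str],
                                 consumer_opted_out_on_site: bool) -> bool:
    """Treat a universal signal the same as an opt-out submitted on the site itself."""
    return not (opt_out_signal_present(headers) or consumer_opted_out_on_site)


# Example: a request from a GPC-enabled browser should be treated as an opt-out
# of targeted advertising and sales, even absent an on-site request.
print(targeted_advertising_allowed({"Sec-GPC": "1"}, False))  # False
```

    The design point is simply that a recognized universal signal and an on-site opt-out request feed the same decision, rather than being handled through separate workflows.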

Rulemaking.  New Jersey becomes only the third state with a comprehensive privacy law to specifically contemplate rulemaking by a state agency, joining California and Colorado.  Here, the Director of the Division of Consumer Affairs in the Department of Law and Public Safety is empowered to promulgate rules and regulations necessary to effectuate the purposes of the New Jersey Privacy Act, including with regard to universal opt-out mechanisms as discussed above.  No timeline is given for the enactment of such rules, but as seen in the rulemaking process occurring in California, such rules could have significant impacts on privacy requirements in the state.  The New Hampshire Privacy Act provides for only limited rulemaking by the Secretary of State with respect to establishing standards for “clear and meaningful” privacy notices and the means by which consumers may submit requests to exercise their rights.

Sunsetting Cure Periods.  Both Acts contain cure periods before actions are brought against controllers (30 days in New Jersey and 60 days in New Hampshire), but these cure periods are set to expire under each of the Acts.  The New Jersey Privacy Act requires the Division of Consumer Affairs in the Department of Law and Public Safety to issue a notice of violation to the controller if a cure is deemed possible, but only until 18 months after the effective date of the Act (July 2026), whereas the New Hampshire Privacy Act requires the attorney general to issue a notice of violation to the controller if a cure is possible only until December 31, 2025, after which issuing a notice of violation is discretionary.  The sunsetting cure periods indicate that the states expect entities to come into compliance with the new requirements reasonably quickly.

Conclusion

    The New Jersey Privacy Act and the New Hampshire Privacy Act do not break the mold when it comes to comprehensive privacy laws in the United States.  However, differences in applicability, scope of protection and requirements on data controllers mean that businesses must pay close attention to the nuances of each new privacy law enacted to ensure continued compliance.


    [1] The full text of Senate Bill 332 is available here.

    [2] The full text of Senate Bill 255 is available here.

    [3] Note that both the New Jersey Privacy Act and New Hampshire Privacy Act define “sales” to include exchanges of personal data to a third party for monetary or other valuable consideration. 

    [4] This definition in both Acts also carves out de-identified and publicly available information which follow the definitions set forth under other state privacy laws; however, the New Jersey Privacy Act is silent with respect to pseudonymous data, suggesting that such data may qualify as personal data subject to the New Jersey Privacy Act’s requirements and restrictions. By contrast, the New Hampshire Privacy Act provides that certain of the rights afforded to consumers do not apply to pseudonymized data in cases where the controller is able to demonstrate that any information necessary to identify the consumer is kept separately and subject to effective controls to prevent the controller from accessing it.
