California Enacts Landmark AI Safety Law But With Very Narrow Applicability
On September 29, 2025, Governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act (TFAIA, SB 53 or the Act)[1], establishing a comprehensive framework for transparency, safety and accountability in the development and deployment of the most advanced artificial intelligence models. Building upon existing California laws targeting AI, such as AB 2013[2], the Act takes effect January 1, 2026, imposes penalties of up to $1 million per violation and creates immediate compliance obligations for developers of the most powerful frontier AI models.
The path to TFAIA was paved by failure. TFAIA’s predecessor, SB 1047[3], overwhelmingly passed the legislature last year but was ultimately blocked at the Governor’s desk. In his veto statement, Governor Newsom called for an approach to frontier model regulation “informed by an empirical trajectory analysis of AI systems and capabilities,” criticizing SB 1047 for applying stringent standards to even the most basic functions[4]. TFAIA thus represents a strategic pivot to regulation focused only on the most impactful AI models: it eliminates the kill-switch requirement (which would have mandated full shutdown capabilities for noncompliant systems), the rigid testing and auditing regime and the aggressive 72-hour incident-reporting timeline that doomed its predecessor.
TFAIA is California’s attempt to strike a balance between advancing AI innovation and competition and holding developers accountable for responsible AI development. The Act aims to bolster public trust and increase awareness of AI-specific risks by requiring developers to think critically about frontier AI capabilities.
Scope and Thresholds
Scoped narrowly to target the most powerful models capable of significant and catastrophic impact, TFAIA imposes certain requirements on “frontier models,” defined as foundation models (or general purpose models that are trained on broad data sets) trained using or intending to use a quantity of computing power greater than 10^26 integer or floating-point operations.[5] In particular, all “frontier developers” (or persons that “trained or initiated the training” of frontier models) face baseline transparency requirements, with more burdensome obligations imposed on “large frontier developers” (namely, frontier developers that, together with affiliates, had annual gross revenues above $500 million in the preceding year).
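To make these numeric triggers concrete, the sketch below shows how a developer might screen its own status against the Act’s two thresholds (the 10^26-operation training-compute threshold and the $500 million prior-year revenue threshold). This is a minimal, illustrative sketch only; the function and constant names are hypothetical, not drawn from the Act, and it is not legal advice.

```python
# Illustrative screen against TFAIA's numeric thresholds; names are hypothetical.
# The 10^26 figure counts integer or floating-point operations used in training,
# including subsequent fine-tuning or modifications (see footnote 5).

FRONTIER_COMPUTE_THRESHOLD = 1e26                 # training operations
LARGE_DEVELOPER_REVENUE_THRESHOLD = 500_000_000   # prior-year gross revenue, USD, with affiliates

def classify_developer(total_training_ops: float, prior_year_revenue_usd: float) -> str:
    """Rough, non-authoritative screen of status under TFAIA's definitions."""
    if total_training_ops <= FRONTIER_COMPUTE_THRESHOLD:
        return "not a frontier developer"
    if prior_year_revenue_usd > LARGE_DEVELOPER_REVENUE_THRESHOLD:
        return "large frontier developer"
    return "frontier developer"

# Example: a model trained with 3e26 operations by a developer with $750M in revenue
print(classify_developer(3e26, 750_000_000))  # -> "large frontier developer"
```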
Tailoring its scope even further, TFAIA focuses many of its requirements on the prevention of “catastrophic risk,” defined as a foreseeable and material risk that a frontier model could (1) materially contribute to the death or serious injury of 50 or more people or (2) cause at least $1 billion in damages to property, in either case arising from a single incident in which the frontier model does any of the following: (a) provides expert-level assistance in creating or releasing a chemical, biological, radiological or nuclear weapon; (b) engages in criminal conduct (conduct that would constitute murder, assault, extortion or theft) or a cyberattack, without meaningful human intervention; or (c) evades the control of its frontier developer or user.
Key Compliance Provisions
TFAIA imposes certain requirements on all frontier developers, with heightened obligations on large frontier developers:
- Transparency Reports. At or before the time of deploying a frontier model (or a substantially modified version of an existing frontier model), frontier developers must publish a transparency report on their website. Reports, which under the Act may be embedded in model or system cards, must include (a) the website of the frontier developer, (b) model details (e.g., release date, languages supported, intended uses, modalities, restrictions) and (c) mechanisms by which a person can communicate with the frontier developer.[6]
Large frontier developers must further (x) include summaries of assessments of catastrophic risks resulting from use of the frontier model, the results of such assessments, the role of any third-party evaluators and the steps taken to fulfill the requirements of the frontier AI framework (see below) and (y) transmit to the Office of Emergency Services, every three months or pursuant to another reasonable schedule specified by the developer, reports of any assessments of catastrophic risk resulting from internal use of their frontier models. The Act tasks the Office of Emergency Services with establishing a mechanism by which large frontier developers can confidentially submit such assessment reports.
- Critical Safety Incident Reporting. Frontier developers are required to report “critical safety incidents”[7] to the Office of Emergency Services within 15 days of discovery. To the extent a critical safety incident poses imminent risk of death or serious physical injury, the reporting window is shortened to 24 hours, with disclosure required to an appropriate authority based on the nature of the incident and as required by law (see the illustrative sketch following this list). Note that critical safety incidents pertaining to foundation models that do not qualify as frontier models need not be reported. Importantly, TFAIA exempts the following from disclosure under the California Public Records Act: reports of critical safety incidents, reports of assessments of catastrophic risk and covered employee reports made pursuant to the whistleblower protections described below.
- Frontier AI Frameworks for Large Frontier Developers. In addition to the above, large frontier developers must publish a frontier AI framework annually (or, upon making a material modification to the framework, within 30 days of such modification) describing the technical and organizational protocols relied upon to identify, assess, mitigate and govern catastrophic risks. The framework must include documentation of the developer’s alignment with national and international standards, governance structures, thresholds used to identify and assess the frontier model’s capability to pose a catastrophic risk, mitigation processes (including independent review of the potential for catastrophic risks and of the effectiveness of mitigation processes), cybersecurity practices and processes for identifying and responding to critical safety incidents. Large frontier developers are prohibited from making false or misleading claims about catastrophic risks from their frontier models or about their compliance with their published frontier AI framework. Additionally, these developers may redact information as necessary to protect trade secrets, cybersecurity, public safety or national security, or as required by law, so long as they maintain records of unredacted versions for at least five years.
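The incident-reporting windows above lend themselves to simple deadline tracking. The following is a minimal sketch of the deadline arithmetic under the 15-day standard window and the 24-hour window for incidents posing imminent risk of death or serious physical injury; the function name and structure are hypothetical and illustrative only, not a description of the Office of Emergency Services intake process.

```python
from datetime import datetime, timedelta

# Illustrative deadline math for TFAIA critical safety incident reporting;
# names are hypothetical, not drawn from the Act.

def reporting_deadline(discovered_at: datetime, imminent_risk: bool) -> datetime:
    """Return the outside reporting deadline for a critical safety incident.

    15 days from discovery in the standard case; 24 hours if the incident
    poses imminent risk of death or serious physical injury.
    """
    window = timedelta(hours=24) if imminent_risk else timedelta(days=15)
    return discovered_at + window

# Example: incident discovered at noon on January 2, 2026, with no imminent risk
print(reporting_deadline(datetime(2026, 1, 2, 12, 0), imminent_risk=False))
# -> 2026-01-17 12:00:00
```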
Other Notable Provisions
In addition to the requirements imposed on frontier developers, TFAIA resurrects CalCompute, a consortium first envisioned under SB 1047 and tasked with developing a framework for the creation of a public cloud computing cluster that would provide access to advanced computing capabilities to support safe, equitable and sustainable AI development and deployment in the public interest.
TFAIA also enhances protections for whistleblowers by (1) prohibiting frontier developers from adopting rules that would prevent employees from reporting catastrophic risks and from retaliating against employees who report such risks, (2) requiring frontier developers to notify their employees once a year of their rights as whistleblowers and (3) requiring large frontier developers to implement and maintain anonymous internal reporting channels. Notably, whistleblowers are empowered to bring civil actions for injunctive relief (as well as recovery of attorneys’ fees) against frontier developers for violations of their rights under the Act.
Enforcement and Rulemaking
Large frontier developers that fail to publish TFAIA-compliant reports or other documentation, make a false statement about catastrophic risk or about their compliance with their frontier AI framework, fail to report a critical safety incident or fail to comply with their frontier AI framework could face penalties of up to $1 million per violation, scaled to the severity of the offense. Such penalties may be recovered only by the Attorney General in a civil action.
To ensure that the applicability of TFAIA reflects technological change, the Act empowers the California Department of Technology (rather than the Attorney General, as envisioned under SB 1047) to assess technological developments, research and international standards and to recommend updates to key statutory definitions (of “frontier model,” “frontier developer” and “large frontier developer”) on or before January 1, 2027 and annually thereafter.
Key Takeaways
With TFAIA, California provides a blueprint for regulation focused on the most impactful and powerful AI technology, establishing transparency, disclosure and governance requirements for frontier model developers. In New York, a similar bill regulating frontier models, the Responsible AI Safety and Education (RAISE) Act, awaits Governor Hochul’s signature. Although TFAIA and RAISE have similar applicability and frameworks,[8] RAISE imposes stricter requirements (a 72-hour window for reporting safety incidents) and higher penalties (up to $10 million for a first violation and $30 million for subsequent ones), similar to the failed SB 1047. TFAIA’s success in securing gubernatorial approval where SB 1047 failed demonstrates the appeal of a transparency-first approach over prescriptive mandates: TFAIA largely focuses on disclosure requirements for covered models, whereas RAISE does not require transparency reporting to the same extent or include whistleblower protections, focusing instead on enforcement by imposing strict liability and prohibiting models that create unreasonable risk of critical harms. This contrast suggests the RAISE Act may be subject to further narrowing, or even a veto, by Governor Hochul.
Most businesses, including the vast majority of AI developers, will be relieved that TFAIA has such narrow applicability. For the few businesses that might meet TFAIA’s applicability thresholds, the law represents both immediate compliance obligations and a preview of the regulatory landscape to come. These businesses should:
- Conduct a threshold analysis to determine frontier developer or large frontier developer status
- Review existing AI safety practices against TFAIA requirements, particularly focusing on safety framework documentation and incident reporting capabilities
- Develop comprehensive frontier AI frameworks addressing the law’s required elements, including governance structures, risk assessment thresholds and cybersecurity practices
- Implement robust documentation systems to support transparency reporting requirements for model releases and modifications
- Create incident response procedures to identify and report critical safety incidents within required timelines (15-day standard, 24-hour emergency)
- Update whistleblower reporting mechanisms and ensure employees receive notice of their rights under the law
- Develop scalable compliance frameworks accommodating varying state requirements as other states, including New York, consider similar AI safety laws
- Consider voluntary adoption of TFAIA-style frameworks as industry best practices, even for companies below current thresholds
[1] The text of the Act can be found here.
[2] AB 2013 requires developers of generative AI systems to post documentation on their website describing the dataset(s) used for system training.
[3] The text of SB 1047 can be found here.
[4] https://www.gov.ca.gov/wp-content/uploads/2024/09/SB-1047-Veto-Message.pdf.
[5] The computing power minimum includes computing from both initial training and subsequent fine-tuning or modifications.
[6] Notably, frontier developers can redact portions of their transparency reports to protect trade secrets and guard against cybersecurity or public safety threats; however, any such redactions must be justified within the report, and unredacted versions must be retained for at least five years.
[7] The Act defines a “critical safety incident” to mean any of the following: (1) unauthorized access to, modification of, or exfiltration of, the model weights of a frontier model that results in death or bodily injury; (2) harm resulting from the materialization of a catastrophic risk; (3) loss of control of a frontier model causing death or bodily injury; or (4) a frontier model that uses deceptive techniques against the frontier developer to subvert the controls or monitoring of its frontier developer outside the context of an evaluation designed to elicit this behavior and in a manner that demonstrates materially increased catastrophic risk.
[8] Unlike TFAIA, RAISE applies only to “large developers,” defined as persons that have (1) trained at least one frontier model and (2) spent over $100 million in aggregate compute costs in training frontier models.