5 easy tips to make your Sora 2 videos pop
The company's AI Security Engineer autonomously keeps enterprise data protected across devices and environments.
The post Matters.AI Raises $6.25 Million to Safeguard Enterprise Data appeared first on SecurityWeek.
On September 29, 2025, Governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act (TFAIA, SB 53 or the Act)[1], establishing a comprehensive framework for transparency, safety and accountability in the development and deployment of the most advanced artificial intelligence models. Building upon existing California laws targeting AI such as AB 2013[2], the Act, which takes effect January 1, 2026, and imposes penalties of up to $1 million per violation, creates immediate compliance obligations for developers of the most powerful frontier AI models.
The path to TFAIA was paved by failure. TFAIA's predecessor, SB 1047[3], overwhelmingly passed the legislature last year but was ultimately blocked at the Governor's desk. In his veto statement, Governor Newsom called for an approach to frontier model regulation "informed by an empirical trajectory analysis of AI systems and capabilities," criticizing SB 1047 for applying stringent standards even to the most basic functions[4]. TFAIA thus represents a strategic pivot to regulation focused only on the most impactful AI models, eliminating the kill-switch requirement (which would have mandated full shutdown capabilities for noncompliant systems), the rigid testing and auditing regime, and the aggressive 72-hour incident-reporting timeline that doomed its predecessor.
TFAIA is California's attempt to balance advancing AI innovation and competition with accountability for responsible AI development. The Act aims to bolster public trust and increase awareness of AI-specific risks by requiring developers to think critically about frontier AI capabilities.
Scoped narrowly to target the most powerful models capable of significant and catastrophic impact, TFAIA imposes certain requirements on "frontier models," defined as foundation models (general-purpose models trained on broad data sets) trained using, or intended to be trained using, a quantity of computing power greater than 10^26 integer or floating-point operations.[5] In particular, all "frontier developers" (persons that "trained or initiated the training" of frontier models) face baseline transparency requirements, with more burdensome obligations imposed on "large frontier developers" (frontier developers that, together with affiliates, had annual gross revenues above $500 million in the preceding year).
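For illustration only (and not as legal guidance), the sketch below applies the two bright-line thresholds described above: the 10^26-operation training-compute test for a "frontier model" and the $500 million prior-year revenue test for a "large frontier developer." The function names and sample figures are hypothetical.

```python
# Illustrative sketch of TFAIA's bright-line applicability tests (not legal advice).
# Thresholds come from the Act as described above; everything else is hypothetical.

FRONTIER_COMPUTE_THRESHOLD = 10**26              # integer or floating-point operations,
                                                 # including initial training and fine-tuning
LARGE_DEVELOPER_REVENUE_THRESHOLD = 500_000_000  # USD, prior-year gross revenue with affiliates

def is_frontier_model(training_ops: float) -> bool:
    """A foundation model trained using (or intended to use) more than 10^26 operations."""
    return training_ops > FRONTIER_COMPUTE_THRESHOLD

def developer_tier(training_ops: float, annual_gross_revenue: float) -> str:
    """Classify a developer under TFAIA's two-tier structure."""
    if not is_frontier_model(training_ops):
        return "out of scope"
    if annual_gross_revenue > LARGE_DEVELOPER_REVENUE_THRESHOLD:
        return "large frontier developer"   # heightened obligations
    return "frontier developer"             # baseline transparency obligations

# Example: a model trained with 3e26 operations by a developer with $2B in revenue
print(developer_tier(3e26, 2_000_000_000))  # -> "large frontier developer"
```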
Tailoring its scope even further, TFAIA focuses many of its requirements on the prevention of "catastrophic risk," defined as a foreseeable and material risk that a frontier model could (1) materially contribute to the death of, or serious injury to, 50 or more people or (2) cause at least $1 billion in damage to property, in either case arising from a single incident in which the frontier model does any of the following: (a) provides expert-level assistance in creating or releasing a chemical, biological, radiological or nuclear weapon; (b) engages in criminal conduct (conduct that would constitute murder, assault, extortion or theft) or a cyberattack without meaningful human intervention; or (c) evades the control of its frontier developer or user.
TFAIA imposes certain requirements on all frontier developers, with heightened obligations on large frontier developers:
In addition to the requirements imposed on frontier developers, TFAIA resurrects CalCompute, a consortium first envisioned under SB 1047 that is tasked with developing a framework for the creation of a public cloud computing cluster, providing access to advanced computing capabilities to support safe, equitable and sustainable AI development and deployment in the public interest.
TFAIA also enhances protections for whistleblowers by (1) prohibiting frontier developers from adopting rules that would prevent employees from reporting catastrophic risks and from retaliating against employees who report such risks, (2) requiring frontier developers to notify their employees once a year of their rights as whistleblowers and (3) requiring large frontier developers to implement and maintain anonymous internal reporting channels. Notably, whistleblowers are empowered to bring civil actions for injunctive relief (as well as recovery of attorneys' fees) against frontier developers for violations of their rights under the Act.
Large frontier developers that fail to publish TFAIA-compliant reports or other documentation, make a false statement about catastrophic risk or about their compliance with their frontier AI framework, fail to report a critical safety incident or fail to comply with their frontier AI framework could face penalties of up to $1 million per violation, scaled to the severity of the offense. Such penalties can be recovered only through a civil action brought by the Attorney General.
To ensure that the applicability of TFAIA reflects technological change, the Act empowers the California Department of Technology (rather than the Attorney General, as envisioned under SB 1047) to assess technological developments, research and international standards and to recommend updates to key statutory definitions (of "frontier model," "frontier developer" and "large frontier developer") on or before January 1, 2027 and annually thereafter.
With TFAIA, California provides a blueprint for regulation focused on the most impactful and powerful AI technology, establishing transparency, disclosure and governance requirements for frontier model developers. A similar bill regulating frontier models, the Responsible AI Safety and Education (RAISE) Act, awaits the signature of Governor Hochul in New York. Although TFAIA and RAISE have similar applicability and frameworks,[8] RAISE imposes stricter requirements (a 72-hour window for reporting safety incidents) and higher penalties (up to $10 million for a first violation and $30 million for subsequent violations), similar to the failed SB 1047. TFAIA's success in navigating gubernatorial approval where SB 1047 failed demonstrates the effectiveness of a transparency-first approach over prescriptive mandates: TFAIA largely focuses on disclosure requirements for covered models, whereas RAISE does not require transparency reporting to the same extent or include whistleblower protections, instead focusing on enforcement by imposing strict liability and flatly prohibiting models that create an unreasonable risk of critical harms. This suggests the RAISE Act may be subject to further narrowing, or even a veto, by Governor Hochul.
Most businesses, including the vast majority of AI developers, will be relieved that TFAIA has such narrow applicability. For the few businesses that might meet TFAIA's applicability thresholds, the law represents both immediate compliance obligations and a preview of the regulatory landscape to come. These businesses should:
[1] The text of the Act can be found here.
[2] AB 2013 requires developers of generative AI systems to post documentation on their website describing the dataset(s) used for system training.
[3] The text of SB 1047 can be found here.
[4] https://www.gov.ca.gov/wp-content/uploads/2024/09/SB-1047-Veto-Message.pdf.
[5] The computing power minimum includes computing from both initial training and subsequent fine-tuning or modifications.
[6] Notably, frontier developers can redact portions of their transparency reports to protect trade secrets and guard against cybersecurity or public safety threats; however, any such redactions must be justified within the report, which must be maintained for five years.
[7] The Act defines a "critical safety incident" to mean any of the following: (1) unauthorized access to, modification of, or exfiltration of, the model weights of a frontier model that results in death or bodily injury; (2) harm resulting from the materialization of a catastrophic risk; (3) loss of control of a frontier model causing death or bodily injury; or (4) a frontier model that uses deceptive techniques against the frontier developer to subvert the controls or monitoring of its frontier developer outside the context of an evaluation designed to elicit this behavior and in a manner that demonstrates materially increased catastrophic risk.
[8] Unlike TFAIA, RAISE applies only to "large developers," defined as persons that have (1) trained at least one frontier model and (2) spent over $100 million in aggregate compute costs in training frontier models.
Resistant AI will use the funding to expand its fraud detection and transaction monitoring offerings to new markets.
The post Fraud Prevention Firm Resistant AI Raises $25 Million appeared first on SecurityWeek.
Balancing innovation with ethical governance is crucial for ensuring fairness, accountability, and public trust in the age of intelligent machines.
The post Beyond the Black Box: Building Trust and Governance in the Age of AI appeared first on SecurityWeek.
Cybersecurity today is defined by complexity. Threats evolve in real time, driven by AI-generated malware, autonomous reconnaissance, and adversaries capable of pivoting faster than ever.
In a recent Darktrace survey of more than 1,500 cybersecurity professionals worldwide, nearly 74% said AI-powered threats are a major challenge for their organization, and 90% expect these threats to have a significant impact over the next one to two years.
Meanwhile, many organizations are still operating with defensive models that were built for a more static world. These outdated training environments are ad hoc, compliance-driven, and poorly suited for the ever-changing nature of today's security risks.
What's needed now within organizations and cybersecurity teams is a transformation from occasional simulations to a daily, threat-informed practice. This means moving from fragmented roles to cross-functional synergy and from reactive defense to operational resilience.
At the heart of that transformation lies Continuous Threat Exposure Management (CTEM), a discipline, not a tool or a project, that enables organizations to evolve in step with the threats they face.
Legacy training models that include annual penetration tests, semi-annual tabletop exercises, and isolated red vs. blue events are no longer sufficient. They offer limited visibility, simulate too narrow a scope of attack behavior, and often check a compliance box without building lasting and strategic capabilities.
Even worse, they assume adversaries are predictable and unchanging. But as we know, AI-generated malware and autonomous reconnaissance have raised the bar. Threat actors are now faster, more creative, and harder to detect.
Today's attackers are capable of developing evasive malware and launching attacks that shift in real time. To meet this evolving threat environment, organizations must shift their mindset before they can shift their tactics.
CTEM offers a fundamentally different approach. It calls for operationalized resilience, where teams systematically test, refine, and evolve their defensive posture every day.
This is done not through broad-stroke simulations, but through atomic, context-aware exercises targeting individual techniques relevant to their specific threat landscape, one sub-technique at a time. Teams look at one scenario, then iterate, refine, and move to the next.
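As a rough sketch of that cadence, the hypothetical loop below pulls one sub-technique at a time from a backlog, runs a single atomic exercise against it, and re-queues anything that went undetected for refinement. The technique IDs, data structures, and pass/fail logic are illustrative assumptions, not a prescribed CTEM implementation.

```python
# Hypothetical sketch of a daily CTEM cadence: one atomic exercise per sub-technique,
# iterate, refine, move on. Technique IDs and the detection logic are illustrative.
from collections import deque
from dataclasses import dataclass

@dataclass
class Exercise:
    technique_id: str       # e.g. an ATT&CK sub-technique relevant to this organization
    description: str
    detected: bool = False  # outcome of the most recent run

def run_exercise(ex: Exercise) -> bool:
    """Placeholder for executing one atomic, context-aware simulation."""
    # In practice this would trigger a breach-and-attack-simulation job
    # and collect detection/containment telemetry.
    return ex.detected

def daily_ctem_cycle(backlog: deque) -> None:
    """Run one exercise per day; undetected techniques go back for refinement."""
    if not backlog:
        return
    ex = backlog.popleft()
    if run_exercise(ex):
        print(f"{ex.technique_id}: detected, rotate to periodic re-test")
    else:
        print(f"{ex.technique_id}: missed, refine detections and re-queue")
        backlog.append(ex)

backlog = deque([
    Exercise("T1566.001", "Spearphishing attachment"),
    Exercise("T1059.001", "PowerShell execution", detected=True),
])
daily_ctem_cycle(backlog)
```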
This level of precision ensures organizations are training for the threats that actually matter: attacks that target their sector, their infrastructure, and their business logic. It also creates a steady rhythm of learning that helps build enduring security reflexes.
What separates CTEM from traditional testing is not just frequency, but authenticity. Real-time breach simulations aren't hypothetical. These simulations are designed to replicate real adversarial behavior, intensity, and tactics. If they are done right, they mirror the sneakiness and ferocity of live attacks.
We should keep in mind that authenticity doesn't just come from tools but also from the people designing the simulations. You can only replicate real-world threats if your SOC teams are keeping current with today's threat landscape. Without that, simulations risk becoming just another theoretical exercise.
These complex scenarios don't just test defenses; they reveal how teams collaborate under pressure, how fast they detect threats, and whether their response protocols are aligned with actual threat behavior.
What happens after a simulation is just as important as the exercise itself. The post-simulation analytics loop offers critical insights into what worked, what didn't, and where systemic weaknesses lie.
Granular reporting is essential, as it allows organizations to identify issues with skills, processes, or coordination. By learning the specifics and gaining meaningful metrics, including detection latency, containment success, and coverage gaps, they can turn simulations into actionable intelligence.
Over time, recurring exercises using similar tradecraft help measure progress with precision and determine if improvements are taking hold or if additional refinements are needed.
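To make that kind of granular reporting concrete, the short sketch below computes three of the metrics mentioned above (detection latency, containment success, and coverage gaps) from hypothetical simulation results; the record format and technique list are assumptions, not any particular product's schema.

```python
# Hypothetical post-simulation analytics: detection latency, containment success,
# and coverage gaps. The result records and technique catalog are illustrative.
from statistics import mean

results = [
    # technique, seconds from execution to detection (None = never detected), contained?
    {"technique": "T1566.001", "detect_seconds": 120,  "contained": True},
    {"technique": "T1059.001", "detect_seconds": 45,   "contained": True},
    {"technique": "T1021.002", "detect_seconds": None, "contained": False},
]
planned_techniques = {"T1566.001", "T1059.001", "T1021.002", "T1003.001"}

detected = [r for r in results if r["detect_seconds"] is not None]
detection_latency = mean(r["detect_seconds"] for r in detected) if detected else None
containment_rate = sum(r["contained"] for r in results) / len(results)
coverage_gaps = planned_techniques - {r["technique"] for r in results}

print(f"Mean detection latency: {detection_latency}s")
print(f"Containment success:    {containment_rate:.0%}")
print(f"Coverage gaps:          {sorted(coverage_gaps)}")
```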
For CISOs and security leaders, adopting CTEM is not just about adding more tools; it's about changing culture, structure, and strategy.
This is a blueprint for embedding CTEM into an organization's security protocols:
While attackers are already using AI to their advantage, defenders can use it too, but with care.
AI isn't a replacement for real-world training scenarios. Relying on it alone to create best-practice content is a mistake. What AI can do well is speed up content delivery, adapt to different learners, and personalize the experience.
It can also identify each person's weaknesses and guide them through custom learning paths that fill real skill gaps. In 2026, expect AI-driven personalization to become standard in professional development, aligning learner needs with the most relevant simulations and modules.
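As a simplified illustration of that gap-driven personalization, the sketch below ranks hypothetical skill areas by assessed proficiency and builds a learning path from the weakest ones; the skills, scores, threshold, and module catalog are all assumptions rather than any specific platform's model.

```python
# Hypothetical gap-driven learning path: weakest assessed skills surface first.
# Skill scores, the gap threshold, and the module catalog are illustrative assumptions.
skill_scores = {            # 0.0 (novice) to 1.0 (expert), from prior assessments
    "cloud incident response": 0.35,
    "malware triage": 0.80,
    "active directory attacks": 0.50,
    "detection engineering": 0.60,
}
module_catalog = {
    "cloud incident response": "Module: Responding to cloud credential abuse",
    "malware triage": "Module: Advanced malware triage",
    "active directory attacks": "Module: Detecting Kerberos abuse",
    "detection engineering": "Module: Writing resilient detections",
}
GAP_THRESHOLD = 0.7  # treat anything below this as a gap worth addressing

learning_path = [
    module_catalog[skill]
    for skill, score in sorted(skill_scores.items(), key=lambda kv: kv[1])
    if score < GAP_THRESHOLD
]
print(learning_path)  # weakest skills come first in the recommended path
```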
Ultimately, CTEM succeeds when it's embraced not as a feature or a product but as a discipline woven into the daily practices of the organization.
It also requires careful development. Red and blue teams must be open, transparent, and aligned. It's not enough to simulate the threat; security teams must also simulate at an adversary's intensity in order to build reflexes strong enough to withstand the real thing.
The organizations that take this path won't just respond faster to incidents; they'll be able to anticipate, adapt, and cultivate resilience that evolves as quickly as the threats do.
Dimitrios Bougioukas is vice president of training at Hack The Box, where he leads the development of advanced training initiatives and certifications that equip cybersecurity professionals worldwide with mission-ready skills.
The post Red, blue, and now AI: Rethinking cybersecurity training for the 2026 threat landscape appeared first on CyberScoop.