Reading view

There are new articles available, click to refresh the page.

US government, allies publish guidance on how to safely deploy AI agents

Cybersecurity agencies from the United States, Australia, Canada, New Zealand and the United Kingdom jointly published guidance Friday urging organizations to treat autonomous artificial intelligence systems as a core cybersecurity concern, warning that the technology is already being deployed in critical infrastructure and defense sectors with insufficient safeguards.

The guidance focuses on agentic AI — software built on large language models that can plan, make decisions and take actions autonomously. In order for this software to function it needs to connect to external tools, databases, memory stores and automated workflows, allowing it to execute multi-step tasks without human review at each stage.

The guidance was co-authored by the U.S. Cybersecurity and Infrastructure Security Agency, the National Security Agency, the Australian Signals Directorate’s Australian Cyber Security Centre, the Canadian Centre for Cyber Security, New Zealand’s National Cyber Security Centre and the United Kingdom’s National Cyber Security Centre.

The agencies’ central message is that agentic AI does not require an entirely new security discipline. Organizations should fold these systems into the cybersecurity frameworks and governance structures they already maintain, applying established principles such as zero trust, defense-in-depth and least-privilege access.

The document identifies five broad categories of risk. The first is privilege: When agents are granted too much access, a single compromise can cause far more damage than a typical software vulnerability. The second covers design and configuration flaws, where poor setup creates security gaps before a system even goes live.

The third category covers behavioral risks, or cases where an agent pursues a goal in ways its designers never intended or predicted. The fourth is structural risk, where interconnected networks of agents can trigger failures that spread across an organization’s systems.

The fifth category is accountability. Agentic systems make decisions through processes that are difficult to inspect and generate logs that are hard to parse, making it difficult to trace what went wrong and why. The agencies also note that when these systems fail, the consequences can be concrete: altered files, changed access controls and deleted audit trails.

The guidance also flags prompt injection, where instructions embedded inside data can hijack an agent’s behavior to perform malicious tasks. Prompt injection has been a lingering problem with large language models, with some companies admitting that the problem may never be solved

Identity management gets significant attention throughout the document. The agencies recommend that each agent carry a verified, cryptographically secured identity, use short-lived credentials and encrypt all communications with other agents and services. For high-impact actions, a human should have to sign off, and the guidance is explicit that deciding which actions require that approval is a job for system designers, not the agent.

The agencies admit the security field has not fully caught up with agentic AI. Some risks unique to these systems are not yet covered by existing frameworks, and the guidance calls for more research and collaboration as the technology takes on a growing number of operational roles.

“Until security practices, evaluation methods and standards mature, organisations should assume that agentic AI systems may behave unexpectedly and plan deployments accordingly, prioritising resilience, reversibility and risk containment over efficiency gains,” the guidance reads. 

You can read the full guidance below.

The post US government, allies publish guidance on how to safely deploy AI agents appeared first on CyberScoop.

CrowdStrike to buy identity startup SGNL for nearly $740M

CrowdStrike is buying identity management startup SGNL, a move that underscores how identity security has become a central battleground in enterprise cybersecurity as companies add cloud services and deploy AI-driven tools.

The cybersecurity firm did not disclose financial terms in a Thursday announcement, but CrowdStrike CEO George Kurtz told CNBC the deal is valued at nearly $740 million.

The acquisition targets a growing problem for large organizations: Access is no longer limited to employees logging into a handful of internal systems. Modern environments include contractors, automated scripts, cloud workloads and an expanding set of non-human identities, such as service accounts and machine credentials. More recently, companies have begun experimenting with AI agents that can take actions across multiple systems, sometimes with broad privileges.

Kurtz framed that shift as a security challenge, saying AI agents can operate with “superhuman speed and access,” effectively turning each agent into a privileged identity. The company argues that older models built around static policies and “standing privileges” can leave gaps because access rights may remain in place even as conditions change, such as with a compromised device, suspicious behavior or a new threat signal.

The bet behind the SGNL purchase is that access decisions can be made more dynamic and more automated. CrowdStrike said SGNL functions as a runtime enforcement layer between identity providers and the software and cloud infrastructure, including SaaS applications and major cloud platforms. In practice, that implies shifting controls closer to the moment an account tries to access a resource, allowing permissions to be continuously reevaluated and, if necessary, revoked.

The company is also positioning the deal as an expansion of its identity security portfolio within the Falcon platform, which it says spans privileged access management, identity threat detection and response, SaaS identity security, and protections aimed at AI-driven identities. It said SGNL would extend “just-in-time” access controls beyond Microsoft Active Directory and Entra ID to additional identity systems, including AWS Identity and Access Management and Okta.

The announcement points to a broader industry trend: Identity has become a primary attack path, particularly as organizations connect more cloud services and integrate them with single sign-on systems. Even when organizations harden endpoints and networks, a stolen credential can offer a direct route into business applications and data. The rise of automated identities adds another layer of complexity, because these accounts are often created for operational convenience and may be poorly tracked or overprivileged.

SGNL CEO Scott Kriz said the company was founded to connect access decisions with “business reality,” describing standing privileges as a persistent risk. The companies have not detailed how SGNL will be integrated operationally, but the rationale centers on using real-time signals about identity, device and behavior to determine whether access should continue.

The deal also reflects the industry’s focus on artificial intelligence, which is increasingly seen both as a defensive tool and as a source of new security risks.

In the latter half of 2025 alone:

  • Palo Alto Networks announced it will acquire Chronosphere, a cloud observability platform, for $3.35 billion in cash and equity.
  • Cloud security company Zscaler announced it has acquired SplxAI, an artificial intelligence security platform.
  • Veeam acquired Securiti AI for $1.7 billion.
  • Check Point acquired AI security firm Lakera.

The proposed acquisition is expected to close during CrowdStrike’s first quarter of fiscal 2027.

The post CrowdStrike to buy identity startup SGNL for nearly $740M appeared first on CyberScoop.

❌