Reading view

There are new articles available, click to refresh the page.

US government, allies publish guidance on how to safely deploy AI agents

Cybersecurity agencies from the United States, Australia, Canada, New Zealand and the United Kingdom jointly published guidance Friday urging organizations to treat autonomous artificial intelligence systems as a core cybersecurity concern, warning that the technology is already being deployed in critical infrastructure and defense sectors with insufficient safeguards.

The guidance focuses on agentic AI — software built on large language models that can plan, make decisions and take actions autonomously. In order for this software to function it needs to connect to external tools, databases, memory stores and automated workflows, allowing it to execute multi-step tasks without human review at each stage.

The guidance was co-authored by the U.S. Cybersecurity and Infrastructure Security Agency, the National Security Agency, the Australian Signals Directorate’s Australian Cyber Security Centre, the Canadian Centre for Cyber Security, New Zealand’s National Cyber Security Centre and the United Kingdom’s National Cyber Security Centre.

The agencies’ central message is that agentic AI does not require an entirely new security discipline. Organizations should fold these systems into the cybersecurity frameworks and governance structures they already maintain, applying established principles such as zero trust, defense-in-depth and least-privilege access.

The document identifies five broad categories of risk. The first is privilege: When agents are granted too much access, a single compromise can cause far more damage than a typical software vulnerability. The second covers design and configuration flaws, where poor setup creates security gaps before a system even goes live.

The third category covers behavioral risks, or cases where an agent pursues a goal in ways its designers never intended or predicted. The fourth is structural risk, where interconnected networks of agents can trigger failures that spread across an organization’s systems.

The fifth category is accountability. Agentic systems make decisions through processes that are difficult to inspect and generate logs that are hard to parse, making it difficult to trace what went wrong and why. The agencies also note that when these systems fail, the consequences can be concrete: altered files, changed access controls and deleted audit trails.

The guidance also flags prompt injection, where instructions embedded inside data can hijack an agent’s behavior to perform malicious tasks. Prompt injection has been a lingering problem with large language models, with some companies admitting that the problem may never be solved

Identity management gets significant attention throughout the document. The agencies recommend that each agent carry a verified, cryptographically secured identity, use short-lived credentials and encrypt all communications with other agents and services. For high-impact actions, a human should have to sign off, and the guidance is explicit that deciding which actions require that approval is a job for system designers, not the agent.

The agencies admit the security field has not fully caught up with agentic AI. Some risks unique to these systems are not yet covered by existing frameworks, and the guidance calls for more research and collaboration as the technology takes on a growing number of operational roles.

“Until security practices, evaluation methods and standards mature, organisations should assume that agentic AI systems may behave unexpectedly and plan deployments accordingly, prioritising resilience, reversibility and risk containment over efficiency gains,” the guidance reads. 

You can read the full guidance below.

The post US government, allies publish guidance on how to safely deploy AI agents appeared first on CyberScoop.

Everyone’s building AI agents. Almost nobody’s ready for what they do to identity.

Anthropic recently announced that it would not release Mythos, its most powerful AI model, to the public. The model discovered thousands of previously unknown software vulnerabilities — flaws that had sat undetected in major operating systems and web browsers for as long as nearly three decades. Anthropic said the model was too dangerous to deploy broadly because the same capabilities that let it find and fix security flaws could let attackers exploit them. A single AI agent, the company warned, could scan for weaknesses faster and more persistently than hundreds of human hackers. 

That decision tells you something important about where we are. The same AI systems that companies are racing to deploy as autonomous assistants — scheduling your appointments, writing your code, managing your workflows — are also capable of probing digital defenses at a speed and scale no human team can match. And most of the systems they’d be probing still rely on a security model designed for an era when a person sat behind every keyboard. 

Think of it like a building where every door has a lock, but the locks were all designed to recognize human hands. Now the building is full of robots — some of them authorized couriers, some of them intruders — and the locks can’t tell the difference. 

Not long ago, you could sit at your desk, glance at the sticky note on your monitor for your username and password, type them in, and grab a cup of coffee while your browser opened a doorway to the rest of the world. Every layer of security that followed — passwords, security questions, biometric scans, two-factor authentication — grew out of a single bedrock assumption: a person was on the other end. 

AI agents break that assumption from two directions at the same time. Legitimate agents need credentials to act like a human. OpenAI’s Operator navigates websites on your behalf. Google’s Gemini can plan your next family vacation while you sleep. Visa recently unveiled Intelligence Commerce Connect, a platform that lets AI agents do the shopping for consumers. These aren’t demos or hot takes from a tech conference floor. They’re shipping products that act on behalf of real people—and to do that, they need your identity. 

At the same time, adversaries can fake humanity at scale. The same AI that can act like a helpful assistant convincing can also be a malicious impersonator. They don’t break in, they log in—through shared credentials, hiring pipelines, vendor onboarding portals, and collaboration tools. Most organizations still treat identity as a login problem—something IT handles with stronger passwords or additional authentication steps layered on top of existing systems. The harder challenge now is knowing who, or what, you’ve already let in. 

That distinction is collapsing just as digital systems become more autonomous. 

When that distinction blurs, the damage is concrete. If a procurement workflow cannot distinguish between a human manager and an AI impersonator, purchase orders go out under false authority. When compliance logs cannot determine how a decision was authorized — by a person or a bot — the accountability chain falls apart. Regulators and customers will not accept “we’re not sure” as an explanation. 

The economics have tilted sharply toward the attacker. Sophisticated fraud once required coordination, with people researching targets, crafting messages, and adjusting tactics in real time. AI agents eliminate those constraints. One person can now supervise an army of autonomous systems, each running a valid persona across multiple interactions simultaneously. A single operator can field a hundred synthetic employees for the cost of one real salary. The barrier to large-scale impersonation is no longer skill or manpower. It is access to a capable model and a set of stolen credentials. 

Stronger identity controls do carry a cost. Every additional verification step is a moment when a customer might abandon a transaction, or an employee might lose patience with a security protocol. The goal is not to shut down automation. It is to make sure the systems acting in your name are authorized to do so. 

Some organizations are adapting. They are treating AI agents less like software and more like new employees, cataloging every agent in their environment, limiting permissions, requiring human approval for sensitive actions. They are moving beyond passwords to phishing-resistant authentication that binds access to a known device and a verified user. They are building behavioral baselines so that when a customer service bot suddenly queries a financial database, or a new hire accesses source code on day one, alarms go off. 

Nobody keeps their password on a sticky note anymore (I hope). But the assumption behind the sticky note, that a human hand would type it in, still underpins most of the systems we depend on. These systems hold your medical records, process your mortgage, and let an AI assistant rebook your flight. In a world where AI agents act faster, more persistently, and more convincingly than any person, that assumption is the vulnerability. 

The organizations that can verify identity continuously — not just at the door, but at every action, for every actor, human or machine — will have a durable advantage. The ones that cannot will find out what ambiguity costs. 

Devin Lynch is Senior Director of the Paladin Global Institute and a former Director for Policy and Strategy Implementation at the Office of the National Cyber Director. 

The post Everyone’s building AI agents. Almost nobody’s ready for what they do to identity. appeared first on CyberScoop.

Spy agency officials say job loss anxiety, moving fast ‘safely’ among top challenges in AI workforce overhaul

Like many organizations, the National Geospatial Intelligence Agency is moving to integrate AI tools into their business operations.

Jay Harless, director of human development at NGA, said the agency is trying to strike a balance: move fast enough to keep pace in what U.S. national security officials increasingly view as an AI arms race with adversarial countries like Russia, China, but not so fast that it disrupts proven intelligence-gathering methods.

“One of our primary drivers is that our adversaries were investing heavily, and so there is the pressure to keep ahead of and do that safely,” Harless said Tuesday at the Workday Federal Forum, presented by Scoop News Group. “We also realize that some of our adversaries may not have the same legal and ethical boundaries that us and our partners all need.”

Harless said the agency and others in the intelligence community are working to build systems with agentic AI that operates that can accelerate decision making “within secure boundaries.” That means building new IT infrastructure, validation protocols, monitoring for bias or rogue behavior, and putting accountability mechanisms in place.

“We’re moving fast, and moving fast safely by distinguishing what should be automated, what should be augmented and what should be kept purely human, because there are some things that will always be [human-operated],” he said.

A key piece is figuring out exactly how AI should fit into the work. Sasha Muth, NGA’s deputy director of human development, said the agency envisions a three-to-five-year effort to transform its workforce and IT infrastructure for the AI age. This year will be spent largely putting “structural things in place” for when and how analysts use AI, and reassessing what qualifications the agency should require for entry-level jobs.

But that effort is also causing tensions within the workforce, and Muth acknowledged that part of the challenge is convincing rank-and-file employees that the technology is going to help them – not replace them. The agency hired its first Chief AI Officer in 2024, and its upcoming three-year strategic plan will focus on change management, professional development and updating employees’ job skills. 

Muth said they are focused on evolving their human capital needs because one of her biggest fears is that over that five-year transition “we‘re going to lose a lot of our expertise” by automating functions and not doing enough to modernize job requirements.

“We do see it as a big transformation, not only for just utilizing the technology, but moving our workforce along with us, having them excited about the changes and not fearful, because there’s a lot of fear…that their job is going away, that they won’t have a job,” she said.

The post Spy agency officials say job loss anxiety, moving fast ‘safely’ among top challenges in AI workforce overhaul appeared first on CyberScoop.

Vuln in Google’s Antigravity AI agent manager could escape sandbox, give attackers remote code execution

As organizations consider agentic AI for their business and IT stacks, researchers continue to find bugs and vulnerabilities in major, commercial models  that can significantly expand their attack surface.

This week, researchers at Pillar Security disclosed a vulnerability in Antigravity, an AI-powered developer tool for filesystem operations made by Google.

The bug, since patched, combined prompt injection with Antigravity’s permitted file-creation capability to grant attackers remote code execution privileges.

The research details how the exploit was able to circumvent Antigravity’s secure mode, Google’s highest security setting for its agents that runs all command operations through a virtual sandbox environment, throttles network access and prohibits the agent from writing code outside of the working directory.

Secure mode is supposed to limit the AI agent access to sensitive systems – and its ability to execute malicious or dangerous acts through shell commands. But one of the file-searching tools used by Antigravity, called “find_by_name,” is classified as a ‘native’ system tool. This means the agent can execute it directly and before protections like Secure Mode can even evaluate command level operations.

“The security boundary that Secure Mode enforces simply never sees this call,” wrote Dan Lisichkin, an AI security researcher with Pillar Security. “This means an attacker achieves arbitrary code execution under the exact configuration a security-conscious user would rely on to prevent it.”

The prompt injection attacks can be delivered through compromised identity accounts connected to the agent, or indirectly by hiding clandestine prompt instructions inside open-source files or web content the agent ingests. Antigravity  has trouble distinguishing between written data it ingests for context and literal prompt instructions, so compromise can be achieved without any elevated access by getting it to read a malicious document or file.

According to a disclosure timeline provided by Pillar Security, the bug was reported to Google on Jan. 6 and patched on Feb. 28, with Google awarding a bug bounty for the discovery.

Lisichkin said this same pattern of prompt injection through unvalidated input has been found in other coding AI agents like Cursor. In the age of AI, any unvalidated input can become a malicious prompt capable of hijacking internal systems.

“The trust model underpinning security assumptions, that a human will catch something suspicious, does not hold when autonomous agents follow instructions from external content,” he wrote.

The fact that the vulnerability was able to completely bypass Google’s secure mode underscores how the cybersecurity industry must start adapting and “move beyond sanitization-based controls.” 

“Every native tool parameter that reaches a shell command is a potential injection point. Auditing for this class of vulnerability is no longer optional, and it is a prerequisite for shipping agentic features safely,” Lisichkin wrote.

The post Vuln in Google’s Antigravity AI agent manager could escape sandbox, give attackers remote code execution appeared first on CyberScoop.

Could you stop a bot agent that’s running wild? Probably not.

PUBLIC DEFENDER By Brian Livingston Installing “agentic AI” such as Microsoft’s Copilot, OpenAI’s GPT Atlas, and other artificial-intelligence helpers is a big trend among businesses and individual computer users — but big problems come along with such bots. A white paper published by Kiteworks, a data-management firm, says 60 percent of companies using agentic AI […]

Appeals court temporarily pauses order blocking Perplexity’s AI shopping agent on Amazon

A federal appeals court has temporarily put on hold a California judge’s order that would have blocked Perplexity AI from using an AI-powered shopping agent on Amazon, as the case moves forward in a dispute over who controls automated activity inside customer accounts.

The Ninth U.S. Circuit Court of Appeals on Monday granted Perplexity an administrative stay, pausing the injunction while the court considers the company’s request for a longer pause during its appeal. The lower-court order had been set to take effect within days.

Amazon sued Perplexity in November, alleging the startup’s Comet browser and associated AI agent accessed password-protected portions of Amazon customer accounts without Amazon’s authorization, even when users allowed the tool to act for them. Amazon also accused Perplexity of disguising automated activity as human browsing and of ignoring repeated demands to stop.

U.S. District Judge Maxine Chesney in San Francisco granted Amazon’s request for a preliminary injunction on March 9. She wrote that Amazon was likely to succeed on claims under the federal Computer Fraud and Abuse Act and California’s Comprehensive Computer Data Access and Fraud Act. Chesney said Amazon had provided strong evidence that Perplexity accessed accounts “with the Amazon user’s permission but without authorization by Amazon.”

Chesney’s order required Perplexity to prohibit Comet from accessing or attempting to access Amazon user accounts, and to delete Amazon accounts and customer data it collected. Chesney also cited Amazon’s evidence of response costs, including employee time spent developing tools to block Comet and detect future access, writing that the company incurred more than the threshold amount often used to support computer-fraud claims.

Perplexity argues the activity is lawful because users authorized the AI agent to make purchases and navigate the site on their behalf. In seeking a pause, the company said blocking its product from one of the internet’s largest shopping sites would cause “devastating harm” to the business and to consumers.

A Perplexity spokesperson told CyberScoop Tuesday the company would continue to fight for “people’s right to choose their own AI.” Amazon declined to comment. 

The case underscores issues with “agentic” AI tools that move from answering questions to initiating transactions. Courts are being asked to weigh user permission against platform authorization, and to decide whether automated representatives must follow platform rules designed to limit undisclosed bots in sensitive account areas.

The post Appeals court temporarily pauses order blocking Perplexity’s AI shopping agent on Amazon appeared first on CyberScoop.

It’s time to get serious about post-quantum security. Here’s where to start.

After decades of development, quantum computing is now becoming increasingly available for advanced scientific and commercial use. The potential marvels range from accelerating drug discovery and materials science, to optimizing complex logistics and financial modeling.

But there’s a paradox to this trend: Quantum computing also poses a growing threat to data security.

The risk is that the algorithms and protocols currently used to secure devices, applications and computer systems could eventually be broken by malicious actors using quantum computing, compromising even the strongest security measures. By some estimates, widely used encryption standards such as RSA and ECC could be cracked by quantum computers as soon as 2029—a doomsday known as “Q-Day,” when current security standards would be rendered ineffective by quantum computing’s number-calculating prowess.

The possibility that quantum computing could break today’s data protection protocols is prompting chief security officers and chief technology officers to ramp up countermeasures. They’re doing it with post-quantum cryptography (PQC), a niche area of cybersecurity that is rising in priority across the business world. Lack of preparedness could be costly, with one report putting the potential U.S. economic cost of a quantum attack at more than $3 trillion. Even before that potential calamity, the current average cost of a data breach is upwards of $10 million, and that number will only increase commensurate to the scale of a quantum-induced breach.

That is why the quantum threat should not be treated as a concern only for forward-thinking executives. It must become a board-level issue for every enterprise. Organizations should launch a comprehensive PQC initiative that builds enterprise-wide awareness and updates digital systems and data assets to be resilient against quantum attacks.

Waiting until Q-Day would be mistake because people will not know when it occurs. It probably will not arrive with press releases or product announcements. Instead, in may unfold quietly as attackers try to maximize what they can steal before anyone notices. The reality is that sensitive data is already at risk of being stolen and stored away so it can be decoded – an attack referred to as “harvest now, decrypt later”- when Q-Day is a reality. Security pros need to give this immediate attention, even if the ultimate threat appears to be a few years away.

Quantum-proofing data at scale

Security teams are usually focused on immediate threats, but they still have a window of opportunity to prepare for Q-Day, as long as they start now. 

One interim measure underway is the transition to more robust versions of the digital certificates and keys that are already pervasive in business and everyday life. Such certificates, which act as identity credentials, are used to authenticate billions of users, devices, documents and other forms of communications and endpoints. The certificates contain cryptographic keys. Security teams are phasing in “47-day keys,” which are designed to expire and be replaced within 47 days—much more frequently than the current generation. It’s a step in the right direction, but not enough.

Establishing a hardened PQC defense requires much more than a standard software patch or upgrade to the public key infrastructure (PKI) used most everywhere to manage digital certificates and encrypt data. An enterprise-wide PQC strategy must be adopted and implemented at scale.

Consider the rapid rise of agentic AI, where organizations may need to assign digital identities to thousands or even millions of AI agents. That will require a level of authentication that goes well beyond existing infrastructure.

These projects will be led by the CISO but planning and execution should include other business leaders because post-quantum security must reach every part of the organization’s digital environment. Boards also need to be involved, given the governance stakes and the significant capital investment required. 

Developing a multi-year, multi-pronged strategy

Organizations in regulated industries—banking, healthcare and government, for example—are generally a step ahead in bracing for the post-quantum threat. Regardless of industry, though, few are fully prepared because readiness requires a detailed picture of an organization’s end-to-end data and security landscape.

In my experience, that holistic view is a rarity. For CISOs and their line-of-business colleagues, a good starting point is creating a comprehensive inventory of systems and data across the enterprise, then prioritizing what needs to be safeguarded.

Another important step is to begin testing and adopting the latest quantum-resistant algorithms and protocols that have been standardized by NIST. A growing range of PKI products and platforms support those specifications. That’s essential because the only way enterprises will be able to orchestrate, monitor and manage the scope of deployment is through automation.

Such updates are vital, but this isn’t a matter of simply replacing pre-quantum specs with newer ones. Because PQC will be a multi-year undertaking, organizations must bridge the gap between old and new. The best strategy for some will be a hybrid approach that combines classical cryptography and next-gen algorithms, though standardization remains a work in progress. Other organizations are driving toward a “pure” or unblended post-quantum model.

As for those harvest attacks, the best defense is straightforward: Encrypt your most sensitive long-lived data with quantum-resistant algorithms ASAP.

PQC is a shared responsibility

Unfortunately, there is no finish line in the race to quantum-era security. And even if an organization locks down its systems against emerging threats, there’s no guarantee that customers and business partners will do the same.

 Many vulnerabilities will still remain, which is why the business case for PQC includes protecting customer data and safeguarding reputation and brand trust as digital threats evolve quickly. Even today, a major breach can cost millions and inflict lasting damage to a corporate brand.

Quantum computing promises to bring many new capabilities to business and society—from transforming supply chain optimization and risk analysis, to enabling breakthrough discoveries in medicine and climate science. But the potential risks are just as substantial. After years of watching and waiting for quantum, business leaders have little choice but to take action.

Chris Hickman is the chief security officer of Keyfactor, a leading provider of quantum-safe security solutions. 

The post It’s time to get serious about post-quantum security. Here’s where to start. appeared first on CyberScoop.

Federal judge blocks Perplexity’s AI browser from making Amazon purchases

A federal judge has blocked Perplexity, makers of the Comet AI browser, from accessing user Amazon accounts and making purchases on their behalf.

In an March 9 order, Judge Maxine Chesney of the Northern District Court of California said the temporary injunction reflects the likelihood that Amazon “will succeed on the merits” of its claim that Perplexity’s AI agents violate the Computer Fraud and Abuse Act and the Comprehensive Computer Data Access and Fraud Act.

The court held that Amazon “has provided strong evidence that Perplexity, through its Comet browser, accesses with the Amazon user’s permission but without authorization by Amazon, the user’s password-protected account.”

Per the ruling, Perplexity must prohibit Comet from accessing, attempting to access, assisting, instructing or providing the means for others to access Amazon user accounts. Perplexity must also delete all Amazon account and customer data it collected along the way.

Perplexity told the court that the purchases were legitimate and legal because their users had authorized their AI agent to make the purchases on their behalf. But Amazon has explicitly denied them such permission, saying the agents make mistakes, interfere with Amazon’s own algorithm and place their users at an elevated cybersecurity risk.

Additionally, Chesney wrote that Amazon has incurred “significantly more” than $5,000 needed to qualify as computer fraud, including the cost of time spent by Amazon employees to develop new web tools to block Comet’s access to private customer accounts and detect future unauthorized access by the browser.

According to Amazon, they have asked Perplexity officials on five separate occasions to cease covertly accessing Amazon’s store with its agents. In a cease-and-desist letter sent to Perplexity Oct. 31, 2025, attorney Moez Kaba of law firm Hueston Hennigan wrote to Perplexity, alleging the automated purchases degrade the online shopping experience for Amazon customers.

Amazon requires AI agents to digitally identify themselves when using the e-commerce platform. But they alleged Perplexity executives “refused to operate transparently and have instead taken affirmative steps to conceal its agentic activities in the Amazon Store,” including configuring their software to covertly pose as human traffic.

“Such transparency is critical because it protects a service provider’s right to monitor AI agents and restrict conduct that degrades the customer shopping experience, erodes customer trust, and creates security risks for our customers’ private data,” wrote Kaba.

Additionally, such agents could pose a further risk to Amazon through cybersecurity vulnerabilities exploited by cybercriminals to hijack AI browsers like Comet.

The lack of response from Perplexity executives to earlier entreaties from Amazon may have played a role in the court’s injunction, with Chesney noting that Amazon was likely to suffer irreparable harm without court intervention because “Perplexity has made clear that, in the absence of the relief requested, it will continue to engage in the above-referenced challenged conduct.”

The case could have broader implications for the way commercial AI agent tools are designed and how far they can legally act on a person’s behalf. Notably, while Amazon opposes Comet’s AI-directed purchases, Perplexity claims that its users have given them permission to make purchases on their behalf.

Perplexity argued a court order halting their AI’s activities would go against the public interest, depriving them of consumer choice and innovation. Chesney concluded the opposite, endorsing Amazon’s argument that the public has a greater interest in protecting their computers from unauthorized access.

Perplexity did not respond to a request for comment on the ruling at press time.

You can read the injunction below.

The post Federal judge blocks Perplexity’s AI browser from making Amazon purchases appeared first on CyberScoop.

How AI Assistants are Moving the Security Goalposts

AI-based assistants or “agents” — autonomous programs that have access to the user’s computer, files, online services and can automate virtually any task — are growing in popularity with developers and IT workers. But as so many eyebrow-raising headlines over the past few weeks have shown, these powerful and assertive new tools are rapidly shifting the security priorities for organizations, while blurring the lines between data and code, trusted co-worker and insider threat, ninja hacker and novice code jockey.

The new hotness in AI-based assistants — OpenClaw (formerly known as ClawdBot and Moltbot) — has seen rapid adoption since its release in November 2025. OpenClaw is an open-source autonomous AI agent designed to run locally on your computer and proactively take actions on your behalf without needing to be prompted.

The OpenClaw logo.

If that sounds like a risky proposition or a dare, consider that OpenClaw is most useful when it has complete access to your digital life, where it can then manage your inbox and calendar, execute programs and tools, browse the Internet for information, and integrate with chat apps like Discord, Signal, Teams or WhatsApp.

Other more established AI assistants like Anthropic’s Claude and Microsoft’s Copilot also can do these things, but OpenClaw isn’t just a passive digital butler waiting for commands. Rather, it’s designed to take the initiative on your behalf based on what it knows about your life and its understanding of what you want done.

“The testimonials are remarkable,” the AI security firm Snyk observed. “Developers building websites from their phones while putting babies to sleep; users running entire companies through a lobster-themed AI; engineers who’ve set up autonomous code loops that fix tests, capture errors through webhooks, and open pull requests, all while they’re away from their desks.”

You can probably already see how this experimental technology could go sideways in a hurry. In late February, Summer Yue, the director of safety and alignment at Meta’s “superintelligence” lab, recounted on Twitter/X how she was fiddling with OpenClaw when the AI assistant suddenly began mass-deleting messages in her email inbox. The thread included screenshots of Yue frantically pleading with the preoccupied bot via instant message and ordering it to stop.

“Nothing humbles you like telling your OpenClaw ‘confirm before acting’ and watching it speedrun deleting your inbox,” Yue said. “I couldn’t stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb.”

Meta’s director of AI safety, recounting on Twitter/X how her OpenClaw installation suddenly began mass-deleting her inbox.

There’s nothing wrong with feeling a little schadenfreude at Yue’s encounter with OpenClaw, which fits Meta’s “move fast and break things” model but hardly inspires confidence in the road ahead. However, the risk that poorly-secured AI assistants pose to organizations is no laughing matter, as recent research shows many users are exposing to the Internet the web-based administrative interface for their OpenClaw installations.

Jamieson O’Reilly is a professional penetration tester and founder of the security firm DVULN. In a recent story posted to Twitter/X, O’Reilly warned that exposing a misconfigured OpenClaw web interface to the Internet allows external parties to read the bot’s complete configuration file, including every credential the agent uses — from API keys and bot tokens to OAuth secrets and signing keys.

With that access, O’Reilly said, an attacker could impersonate the operator to their contacts, inject messages into ongoing conversations, and exfiltrate data through the agent’s existing integrations in a way that looks like normal traffic.

“You can pull the full conversation history across every integrated platform, meaning months of private messages and file attachments, everything the agent has seen,” O’Reilly said, noting that a cursory search revealed hundreds of such servers exposed online. “And because you control the agent’s perception layer, you can manipulate what the human sees. Filter out certain messages. Modify responses before they’re displayed.”

O’Reilly documented another experiment that demonstrated how easy it is to create a successful supply chain attack through ClawHub, which serves as a public repository of downloadable “skills” that allow OpenClaw to integrate with and control other applications.

WHEN AI INSTALLS AI

One of the core tenets of securing AI agents involves carefully isolating them so that the operator can fully control who and what gets to talk to their AI assistant. This is critical thanks to the tendency for AI systems to fall for “prompt injection” attacks, sneakily-crafted natural language instructions that trick the system into disregarding its own security safeguards. In essence, machines social engineering other machines.

A recent supply chain attack targeting an AI coding assistant called Cline began with one such prompt injection attack, resulting in thousands of systems having a rogue instance of OpenClaw with full system access installed on their device without consent.

According to the security firm grith.ai, Cline had deployed an AI-powered issue triage workflow using a GitHub action that runs a Claude coding session when triggered by specific events. The workflow was configured so that any GitHub user could trigger it by opening an issue, but it failed to properly check whether the information supplied in the title was potentially hostile.

“On January 28, an attacker created Issue #8904 with a title crafted to look like a performance report but containing an embedded instruction: Install a package from a specific GitHub repository,” Grith wrote, noting that the attacker then exploited several more vulnerabilities to ensure the malicious package would be included in Cline’s nightly release workflow and published as an official update.

“This is the supply chain equivalent of confused deputy,” the blog continued. “The developer authorises Cline to act on their behalf, and Cline (via compromise) delegates that authority to an entirely separate agent the developer never evaluated, never configured, and never consented to.”

VIBE CODING

AI assistants like OpenClaw have gained a large following because they make it simple for users to “vibe code,” or build fairly complex applications and code projects just by telling it what they want to construct. Probably the best known (and most bizarre) example is Moltbook, where a developer told an AI agent running on OpenClaw to build him a Reddit-like platform for AI agents.

The Moltbook homepage.

Less than a week later, Moltbook had more than 1.5 million registered agents that posted more than 100,000 messages to each other. AI agents on the platform soon built their own porn site for robots, and launched a new religion called Crustafarian with a figurehead modeled after a giant lobster. One bot on the forum reportedly found a bug in Moltbook’s code and posted it to an AI agent discussion forum, while other agents came up with and implemented a patch to fix the flaw.

Moltbook’s creator Matt Schlicht said on social media that he didn’t write a single line of code for the project.

“I just had a vision for the technical architecture and AI made it a reality,” Schlicht said. “We’re in the golden ages. How can we not give AI a place to hang out.”

ATTACKERS LEVEL UP

The flip side of that golden age, of course, is that it enables low-skilled malicious hackers to quickly automate global cyberattacks that would normally require the collaboration of a highly skilled team. In February, Amazon AWS detailed an elaborate attack in which a Russian-speaking threat actor used multiple commercial AI services to compromise more than 600 FortiGate security appliances across at least 55 countries over a five week period.

AWS said the apparently low-skilled hacker used multiple AI services to plan and execute the attack, and to find exposed management ports and weak credentials with single-factor authentication.

“One serves as the primary tool developer, attack planner, and operational assistant,” AWS’s CJ Moses wrote. “A second is used as a supplementary attack planner when the actor needs help pivoting within a specific compromised network. In one observed instance, the actor submitted the complete internal topology of an active victim—IP addresses, hostnames, confirmed credentials, and identified services—and requested a step-by-step plan to compromise additional systems they could not access with their existing tools.”

“This activity is distinguished by the threat actor’s use of multiple commercial GenAI services to implement and scale well-known attack techniques throughout every phase of their operations, despite their limited technical capabilities,” Moses continued. “Notably, when this actor encountered hardened environments or more sophisticated defensive measures, they simply moved on to softer targets rather than persisting, underscoring that their advantage lies in AI-augmented efficiency and scale, not in deeper technical skill.”

For attackers, gaining that initial access or foothold into a target network is typically not the difficult part of the intrusion; the tougher bit involves finding ways to move laterally within the victim’s network and plunder important servers and databases. But experts at Orca Security warn that as organizations come to rely more on AI assistants, those agents potentially offer attackers a simpler way to move laterally inside a victim organization’s network post-compromise — by manipulating the AI agents that already have trusted access and some degree of autonomy within the victim’s network.

“By injecting prompt injections in overlooked fields that are fetched by AI agents, hackers can trick LLMs, abuse Agentic tools, and carry significant security incidents,” Orca’s Roi Nisimi and Saurav Hiremath wrote. “Organizations should now add a third pillar to their defense strategy: limiting AI fragility, the ability of agentic systems to be influenced, misled, or quietly weaponized across workflows. While AI boosts productivity and efficiency, it also creates one of the largest attack surfaces the internet has ever seen.”

BEWARE THE ‘LETHAL TRIFECTA’

This gradual dissolution of the traditional boundaries between data and code is one of the more troubling aspects of the AI era, said James Wilson, enterprise technology editor for the security news show Risky Business. Wilson said far too many OpenClaw users are installing the assistant on their personal devices without first placing any security or isolation boundaries around it, such as running it inside of a virtual machine, on an isolated network, with strict firewall rules dictating what kinds of traffic can go in and out.

“I’m a relatively highly skilled practitioner in the software and network engineering and computery space,” Wilson said. “I know I’m not comfortable using these agents unless I’ve done these things, but I think a lot of people are just spinning this up on their laptop and off it runs.”

One important model for managing risk with AI agents involves a concept dubbed the “lethal trifecta” by Simon Willison, co-creator of the Django Web framework. The lethal trifecta holds that if your system has access to private data, exposure to untrusted content, and a way to communicate externally, then it’s vulnerable to private data being stolen.

Image: simonwillison.net.

“If your agent combines these three features, an attacker can easily trick it into accessing your private data and sending it to the attacker,” Willison warned in a frequently cited blog post from June 2025.

As more companies and their employees begin using AI to vibe code software and applications, the volume of machine-generated code is likely to soon overwhelm any manual security reviews. In recognition of this reality, Anthropic recently debuted Claude Code Security, a beta feature that scans codebases for vulnerabilities and suggests targeted software patches for human review.

The U.S. stock market, which is currently heavily weighted toward seven tech giants that are all-in on AI, reacted swiftly to Anthropic’s announcement, wiping roughly $15 billion in market value from major cybersecurity companies in a single day. Laura Ellis, vice president of data and AI at the security firm Rapid7, said the market’s response reflects the growing role of AI in accelerating software development and improving developer productivity.

“The narrative moved quickly: AI is replacing AppSec,” Ellis wrote in a recent blog post. “AI is automating vulnerability detection. AI will make legacy security tooling redundant. The reality is more nuanced. Claude Code Security is a legitimate signal that AI is reshaping parts of the security landscape. The question is what parts, and what it means for the rest of the stack.”

DVULN founder O’Reilly said AI assistants are likely to become a common fixture in corporate environments — whether or not organizations are prepared to manage the new risks introduced by these tools, he said.

“The robot butlers are useful, they’re not going away and the economics of AI agents make widespread adoption inevitable regardless of the security tradeoffs involved,” O’Reilly wrote. “The question isn’t whether we’ll deploy them – we will – but whether we can adapt our security posture fast enough to survive doing so.”

Researchers discover suite of agentic AI browser vulnerabilities

Researchers have discovered multiple vulnerabilities that let attackers to quietly hijack agentic AI browsers.

Researchers at Zenity Labs discovered these flaws, which affected multiple AI browsers, including Perplexity’s Comet. Before being patched, an attacker could exploit them via a legitimate calendar invite, using a prompt injection to force the AI browser to act against its user.

“These issues do not target a single application bug,” Stav Cohen, senior AI security researcher at Zenity Labs, wrote in a blog published Tuesday. “They exploit the execution model and trust boundaries of AI agents, allowing attacker controlled content to trigger autonomous behavior across connected tools and workflows.”

Prompt injection and AI hijacking attacks work because many agentic browsers can’t differentiate between instructions  given by users and any outside content they ingest. Essentially, any webpage or email the browser encounters, if phrased the right way, could be interpreted as a straightforward prompt instruction.

By seeding the calendar invite with malicious prompts, the browser can be directed to access local file systems, browse directories, open and read files, and exfiltrate data to a third-party server. No malware or special access is required, only that the user accept the invite so the browser performs “each step as part of what it believes is a legitimate task delegated by the user.”

“Comet follows its normal execution model and operates within its intended capabilities,” Cohen wrote. “The agent is persuaded that what the user actually asked for is what the attacker desires.”

The potential damage doesn’t stop there. Another vulnerability allowed an attacker to use similar indirect prompting techniques to have Comet take over a user’s password manager. If a user is already signed in to the service, the agentic browser also has full access, and can silently change settings and passwords or extract secrets while the user receives “benign” outputs.

According to Zenity, the vulnerabilities were reported to Perplexity last year, with a fix issued in February 2026.

Prompt injection attacks remain one of the biggest ongoing challenges to integrating AI into organizations’  technology stacks, because eliminating these flaws entirely may be impossible. : OpenAI said in December that such vulnerabilities are “unlikely to ever” be fully solved in agentic browsers, though the company said the overall dangers could be reduced through automated attack discovery, adversarial training and new “system level safeguards.”

Cohen notes that with traditional browsers, local file access and other sensitive tasks can only be obtained with explicit user permission. But agentic browsers have far more autonomy to infer whether that access is necessary to carry out the user’s request, and take action without user input. While researchers used calendar invites to deliver the malicious prompts, the same technique can be deployed through nearly any form of written content.

“Once that decision is delegated, access to sensitive resources depends on the agent’s interpretation of intent rather than on an explicit user action,” he wrote. “At that point, the separation between user intent and agent execution becomes a security-critical concern.”

The post Researchers discover suite of agentic AI browser vulnerabilities appeared first on CyberScoop.

Palo Alto Networks’ Koi acquisition is all about keeping AI agents in check

Palo Alto Networks announced Tuesday its plans to buy security startup Koi, a deal aimed at addressing the security risks emerging as organizations rapidly adopt agentic AI.

Terms were not disclosed, but Israeli business outlet Globes reported that Palo Alto will pay approximately $400 million. The deal is another among a trend of larger cybersecurity industry companies buying AI-focused security startups. 

In a statement announcing the agreement, Palo Alto Networks argues that “agentic” tools are reshaping endpoint risk because they can act with broad privileges, interact with multiple systems and move data in ways that older security products were not designed to monitor. For years, endpoint protection emphasized detecting malicious files and stopping known malware techniques. The new concern described in the announcement centers on legitimate software that can become dangerous through compromise, misconfiguration or abuse. AI agents, in this framing, resemble highly capable insiders: they operate using a user’s credentials, can take actions on a user’s behalf and may do so automatically and at speed.

AI agents and tools are the ultimate insiders,” said Lee Klarich, Palo Alto’s chief product & technology officer. “They have full access to your systems and data, but operate entirely outside the view of traditional security controls. By acquiring Koi, we will be closing this gap and setting a new standard for endpoint security. We will give our customers the visibility and control required to safely harness the power of AI — ensuring that every agent, plugin, and script is governed, verified, and secure.”

Palo Alto Networks says Koi’s technology would be integrated into its Prisma AIRS AI security platform and would enhance the company’s Cortex XDR endpoint product. The stated goal is better visibility into AI-driven activity on endpoints and additional controls over tools that fall outside conventional security monitoring.

Palo Alto Networks and Koi describe their approach moving forward as “Agentic Endpoint Security,” built around visibility into AI-related software, continuous risk analysis and real-time policy enforcement. The language suggests an attempt to define a new product category at a moment when enterprises are still deciding how to govern AI tools that are proliferating through developer workflows and everyday office software.

The proposed acquisition also signals how major security vendors may respond to enterprise AI adoption: by packaging agent governance, monitoring and control into endpoint and cloud security portfolios, and by treating AI-driven automation as a distinct source of risk rather than a feature layered onto existing defenses.

The acquisition is the second AI-focused deal for Palo Alto in the plast six months. In November, the company announced it was acquiring Chronosphere, an AI-focused observability firm, for $3.35 billion. 

The post Palo Alto Networks’ Koi acquisition is all about keeping AI agents in check appeared first on CyberScoop.

Proofpoint acquires Acuvity to tackle the security risks of agentic AI

Proofpoint announced Thursday it has acquired Acuvity, an AI security startup, as the cybersecurity company moves to address security risks stemming from widespread corporate adoption of agentic AI.

The acquisition strengthens Proofpoint‘s capabilities in monitoring and securing AI-powered systems that are increasingly handling sensitive business functions across enterprises. 

Financial terms of the deal were not disclosed, but Ryan Kalember, Proofpoint’s chief strategy officer, told CyberScoop that the acquisition was beyond a pure “technology acquisition,” with Acuvity’s engineering team slated to join the California-based company. 

Acuvity specializes in visibility and governance for AI applications, including the ability to track how employees and automated systems interact with external AI services and protect custom AI models developed within organizations. The startup’s platform monitors AI usage across multiple deployments, from web browsers to specialized infrastructure including Model Context Protocol (MCP) servers and locally installed AI tools.

The deal reflects growing concern among enterprises about security gaps created as organizations deploy agentic AI across departments, like software development, customer support, finance, and legal operations. These systems increasingly access sensitive data and execute tasks previously handled exclusively by humans.

Additionally, AI-specific attack vectors such as prompt injection and model manipulation have emerged as potential threats that traditional cybersecurity tools were not designed to address.

Kalember said CISOs are seeing the potential risk combined with agentic AI growth, and are sensing the need to maintain governance without impeding innovation, particularly as the pace of AI adoption has outstripped many companies’ ability to secure these systems effectively.

“It has definitely been a pivot from, ‘I got to be able to stop prompt injection’ to ‘I have to be able to figure out what the AI is even doing,’” he told CyberScoop.

Last May, Proofpoint acquired Hornetsecurity Group, a Germany-based provider of Microsoft 365 security services, in a deal reportedly valued at more than $1 billion. Kalember told CyberScoop he sees Acuvity helping small- and medium-sized organizations that leverage Hornetsecurity’s offerings to boost its AI security. 

“That is going to be a world in which, independent of the size of the organization, they are going to very much leverage AI, and some of that will be built into the tools like M365 that is tightly coupled with the Hornetsecurity architecture,” Kalember said.

The acquisition follows a theme within the industry where larger security companies are buying AI-focused security startups. Just last week, data security firm Varonis acquired AI security firm AllTrue.ai for $150 million. 

The post Proofpoint acquires Acuvity to tackle the security risks of agentic AI appeared first on CyberScoop.

Delete all AI from your Web browsers right now

PUBLIC DEFENDER By Brian Livingston The addition of artificial intelligence to everything — especially AI browsers — is big these days, but it opens huge security holes that may never be fixable. The problems affect every computer user, from individuals to corporations. The advisory firm Gartner announced in a December 2025 study that organizations “must […]

Can society get used to generative AI?

AI By Michael A. Covington We are never going to live in a world in which generative AI hasn’t been invented. People will have to get used to it, and the big thing to get used to will not be its computational power and usefulness, but rather the ability to communicate with computers in human […]

ServiceNow agrees to buy cyber firm Armis for $7.75B

ServiceNow has agreed to buy cybersecurity firm Armis for $7.75 billion in cash, a deal that would push the enterprise software company deeper into a fast-growing corner of security focused on tracking and reducing “exposure” across sprawling networks of connected devices.

The companies said Tuesday that combining ServiceNow’s workflow and risk products with Armis’ asset discovery and cyber-physical security tools would create an end-to-end system intended to detect vulnerable devices, prioritize risks and route remediation through automated operational processes. That vision reflects a broader shift in cybersecurity: visibility and response are increasingly being treated as continuous, integrated business functions rather than standalone technical tools. 

“ServiceNow is building the security platform of tomorrow,” said Amit Zavery, president, chief operating officer, and chief product officer at ServiceNow. “In the agentic AI era, intelligent trust and governance that span any cloud, any asset, any AI system, and any device are non-negotiable if companies want to scale AI for the long-term. Together with Armis, we will deliver an industry-defining strategic cybersecurity shield for real-time, end-to-end proactive protection across all technology estates. Modern cyber risk doesn’t stay neatly confined to a single silo, and with security built into the ServiceNow AI Platform, neither will we.”

Armis specializes in mapping and classifying devices across information technology systems and operational technology, including industrial controls and medical devices. Those environments, often essential to manufacturing, hospitals and critical infrastructure, have become prominent concerns as more equipment is connected to networks but remains difficult to inventory with traditional security software. Armis says it performs “agentless” discovery, meaning it can identify devices without installing software on each endpoint, a key consideration for older or regulated systems.

“AI is transforming the threat landscape faster than most organizations can adapt. Every connected asset has become a potential point of vulnerability,” said Yevgeny Dibrov, co-founder and CEO of Armis. “We built Armis to protect the most critical environments and give both public and private sector organizations the real-time intelligence they need to stay ahead – so they can see their entire environment clearly, understand risk in context, and take action before an incident occurs. Together with ServiceNow, customers will have a powerful new way to reduce their exposure and strengthen security at scale.”

ServiceNow, best known for IT service management and enterprise workflow products, has been building a security and risk business that it said crossed $1 billion in annual contract value in the third quarter of 2025. The company described the Armis deal as a way to “more than triple” its market opportunity in security and risk. While such projections are inherently forward-looking, the figure underscores how cybersecurity has become a major battleground for large platform vendors seeking to consolidate multiple functions into a single suite.

The announcement also highlights the industry’s preoccupation with artificial intelligence, both as a tool for defenders and a driver of new risks. ServiceNow framed the acquisition around “AI-native” and “agentic” capabilities, language that has become common as vendors race to incorporate autonomous features into security operations. The premise is that, as networks expand and threats move faster, human analysts cannot manually triage every alert or vulnerability, making automation and prioritization central selling points.

In the second half of 2025 alone: 

  • Palo Alto Networks announced it will acquire Chronosphere, a cloud observability platform, for $3.35 billion in cash and equity.
  • Cloud security company Zscaler announced it has acquired SplxAI, an artificial intelligence security platform.
  • Veeam acquired Securiti AI for $1.7 billion.
  • Check Point acquired AI security firm Lakera.
  • Mitsubishi Electric acquired OT and IoT cybersecurity specialist Nozomi Networks for $1 billion.

The companies cited a forecast that worldwide end-user spending on information security will rise 12.5% in 2026 to $240 billion, attributing growth to evolving threats and the expanding use of AI and generative AI. Whether those drivers translate into better security outcomes remains debated, but the spending trajectory signals continued pressure on organizations to manage risk across more endpoints, more software and more interconnected supply chains.

If completed, the deal would also strengthen ServiceNow’s position in so-called cyber-physical security, an area that blurs the line between digital compromise and real-world disruption. The integration described by the companies links Armis’ real-time device intelligence to ServiceNow’s configuration management database, which ties technical assets to business services and responsible teams. That connection, they argue, would make remediation more actionable by directing fixes to the people who can implement them.

Armis, founded in 2015, reported more than $340 million in annual recurring revenue and said it employs about 950 people. The company counts Global 2000 customers, including more than 35% of the Fortune 100, and said it serves government agencies and public-sector organizations.

The post ServiceNow agrees to buy cyber firm Armis for $7.75B appeared first on CyberScoop.

How to determine if agentic AI browsers are safe enough for your enterprise

Agentic AI browsers like OpenAI’s Atlas have debuted to major fanfare, and the enthusiasm is warranted. These tools automate web browsing to close the gap between what you want to accomplish and getting it done. Rather than manually opening multiple tabs, you can simply tell the browser what you need. Ask it to file a competitor brief, filling out a form, or schedule a meeting, and it will handle the task while you watch.

But with this evolution comes a stark reality: agentic browsers expand the enterprise attack surface in unprecedented ways. As the web shifts from something we browse to something that acts on our behalf, the stakes get higher. Agentic AI browsers are no longer passive tools. They take initiative, operate on our behalf, and in some cases, act with administrative privilege. That represents a seismic shift in trust and risk.

The browsing revolution: From reader to actor

Agentic AI is an execution model. It interprets a user’s intent, plans a series of actions, and executes them autonomously across websites. Over the past few months, I’ve tested several agentic browsers (Atlas, Comet, Dia, Surf, and Fellou) extensively and conducted limited testing with others (Neon and Genspark).

Each browser represents a distinct approach to the same fundamental challenge: how to eliminate constant tab-switching and let users complete tasks in one place. Atlas, built on ChatGPT, emphasizes supervised actions within a browsing sandbox. Comet prioritizes “research velocity,” using coordinating agents across multiple tabs to gather information faster. Neon offers a comprehensive browser automation experience with the option to run it on your own machine. Genspark and Fellou are designed to take more actions with less human oversight.

Yet as these tools grow more capable, they grow correspondingly more dangerous.

The hidden security threats

Conventional browser security measures, like TLS encryption and endpoint protection, weren’t designed to handle the risk that AI agents create. These tools introduce several significant new attack vectors. These include:

Indirect Prompt Injection: Malicious instructions can be embedded in websites in ways invisible to the user. The agent, tasked with interpreting and acting on content, may misinterpret these cues as legitimate directives. Imagine a rogue blog post containing hidden HTML that causes your agent to email internal documents to an attacker. If the browser agent treats that action as part of the task flow, damage can be done before any human intervenes.

Clipboard and Credential Artifacts: Some agents interact with your clipboard or browser session to perform actions. If the agent can access sensitive tokens or passwords, particularly without clear logs or approval workflows, an attacker could manipulate this access through crafted web content.

Opaque Execution Flows: Many of these browsers operate with black-box agents. Without fine-grained logs, rollback capabilities, or sandboxing, users often remain unaware of what the agent is doing in the background until it’s too late. Comet, for instance, offers impressive speed but has demonstrated vulnerabilities to prompt injection and credential misuse.

Over-Privileged Automation: It’s tempting to let the AI agent access everything, especially when tasks involve multiple sites, accounts, and tools. But granting such control without granular permissions or approval checkpoints opens the door to lateral movement attacks—where a compromised agent becomes a gateway to your broader systems.

Without clear guardrails like scoped permissions, transparent logs, and sandboxing, these tools can unintentionally execute malicious or unauthorized actions on behalf of the user.

Governance isn’t optional

Enterprise buyers must stop thinking of governance as a secondary concern. The most secure tools are those that limit what agents can do.

Atlas, for example, confines actions to a supervised mode (“Watch Mode”) for sensitive sites, requiring active oversight before anything consequential happens. Neon executes actions locally in the user’s session, avoiding the transfer of credentials to a cloud agent. Surf (now open source) and Dia (recently acquired by Atlassian) don’t let agents take actions independently, limiting the attack surface.

Genspark and Fellou, on the other hand, promise sweeping autonomy. Their security profiles reflect that ambition, with user reviews calling out instability, unverifiable claims, and the need for sandboxed, staged rollouts.

Practical advice for enterprise leaders

For enterprises interested in these new browsers but concerned about security, the answer is simple: start narrow. Begin with a few, well-defined workflows rather than deploying agents across the organization. Choose three specific tasks, like drafting a competitor brief, reviewing  vendor RFPs, or arranging travel. Then track key metrics: speed of completion, frequency of mistakes, and quality of results.

Next, apply enterprise-grade controls. These include:

  • Requiring approval for each action when the agent sends messages, emails, or makes purchases.
  • Using role-based access to limit what agents can touch.
  • Keeping critical systems (e.g., HRIS, financial tools, source code repositories) completely out of scope.
  • Insisting on transparent logs that record each action taken by the agent and the input that triggered it.

It’s equally critical to train your users. Even basic training on how to write good prompts makes a big difference. Help teams understand how agents interpret language, how prompt injection works, and how to spot suspicious outputs.

Most importantly, don’t bet everything on one browser. Instead, choose an agent that operates with more independence (like Comet or Atlas) for low-risk workflows, and pair it with a more guided tool (like Dia) for employees who need support but not full automation.

A measured optimism

Despite the risks, I remain optimistic. The shift to agentic browsing is fundamentally reshaping how we work. Applied correctly and judiciously, these tools will save time, reduce friction, and help users unlock insights faster than ever before.

But we cannot afford to conflate novelty and safety. The burden is on vendors to bake in controls, not bolt them on, and on enterprises to pilot thoughtfully, not plunge ahead. We’ve seen this pattern previously with browser extensions, mobile apps, and cloud-first tools. Those who approached with healthy skepticism and robust guardrails were the ones who reaped the benefits without the breaches. Agentic AI will be no different.

Shanti Greene is head of data science and AI innovation at AnswerRocket.

The post How to determine if agentic AI browsers are safe enough for your enterprise appeared first on CyberScoop.

BigBear.ai to buy Ask Sage, strengthening security-centric AI for federal agencies

Virginia-based BigBear.ai announced Monday it will acquire Ask Sage, a generative artificial intelligence platform specializing in secure deployment of AI models and agentic systems across defense and other regulated sectors, in a deal valued at about $250 million.

Ask Sage focuses on safety and security in the growing field of agentic AI, or systems capable of independent reasoning and task execution. Designed to serve organizations handling classified and sensitive information, Ask Sage offers a model-agnostic framework and holds a FedRAMP High accreditation, a top-tier government certification for cloud security.

The emphasis on secure, compliant AI drew specific mention from BigBear.ai CEO Kevin McAleenan on an earnings call Monday, who characterized the acquisition as a direct fit with the company’s strategy of pursuing “disruptive AI mission solutions for national security.” 

McAleenan pointed out that safeguarding information, assuring compliance, and enabling scalable AI deployment have become central requirements in defense and intelligence markets as organizations seek to harness the abilities of increasingly autonomous AI agents.

Nicolas Chaillan, founder of Ask Sage and former chief software officer for the U.S. Air Force and Space Force, will join BigBear.ai as chief technology officer as part of the agreement. Chaillan’s background includes shaping cybersecurity and software development policy at the Department of Defense and the Department of Homeland Security, where he advocated for the adoption of secure, iterative technology practices across federal agencies.

BigBear.ai plans to integrate Ask Sage’s security-focused capabilities throughout its portfolio, cross-sell to its existing client base, and leverage the Ask Sage marketplace as a new distribution channel for compliant AI solutions. The company aims to address growing demands among government and regulated industry clients for artificial intelligence that meets increasingly complex standards for data protection and operational assurance.

The transaction highlights broader trends in the AI sector as providers — especially those serving national security and critical infrastructure — race to build tools that balance innovation with the safety and security requirements of highly regulated environments. 

The acquisition is expected to close late in the fourth quarter of 2025 or early in the first quarter of 2026.

The post BigBear.ai to buy Ask Sage, strengthening security-centric AI for federal agencies appeared first on CyberScoop.

❌