
Flaw in Claude’s Chrome extension allowed ‘any’ other plugin to hijack victims’ AI

As businesses and governments turn to AI agents to access the internet and perform higher-level tasks, researchers continue to find serious flaws in large language models that can be exploited by bad actors.

The latest discovery comes from browser security firm LayerX, involving a bug in the Chrome extension for Anthropic’s Claude AI model that allows any other plugin – even ones without special permissions – to embed hidden instructions that can take over the agent.

“The flaw stems from an instruction in the extension’s code that allows any script running in the origin browser to communicate with Claude’s LLM, but does not verify who is running the script,” wrote LayerX senior researcher Aviad Gispan. “As a result, any extension can invoke a content script (which does not require any special permissions) and issue commands to the Claude extension.”
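The failure mode LayerX describes, accepting messages without checking who sent them, maps to a well-known Chrome extension pitfall: message listeners receive a `sender` object that the receiving extension must validate itself. The sketch below illustrates the defensive pattern with a pure allowlist check; the extension IDs and listener wiring are hypothetical placeholders, not Anthropic's actual code.

```javascript
// Hypothetical allowlist of extension IDs permitted to issue commands.
// A 32-character lowercase string is Chrome's extension ID format.
const TRUSTED_EXTENSION_IDS = new Set([
  "abcdefghijklmnopabcdefghijklmnop", // placeholder companion extension
]);

// Returns true only when the message sender is a known, allowlisted extension.
// Without a check like this, ANY extension's script can drive the agent.
function isTrustedSender(sender) {
  return Boolean(sender && sender.id && TRUSTED_EXTENSION_IDS.has(sender.id));
}

// In a real extension's background script, the listener would gate on it:
// chrome.runtime.onMessageExternal.addListener((msg, sender, sendResponse) => {
//   if (!isTrustedSender(sender)) return; // drop messages from unknown senders
//   handleCommand(msg, sendResponse);
// });
```

The key point is that the verification lives in the receiver: Chrome delivers the sender's identity, but does nothing with it unless the extension checks it.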

Gispan said he was able to execute any prompt he wanted, blow through Claude’s safety guardrails, evade user confirmation and perform cross-site actions across multiple Google tools. As a proof of concept, LayerX was able to exploit the flaw to extract files from Google Drive folders and share them with unauthorized parties, surveil recent email activity and send emails on behalf of a user, and pilfer private source code from a connected GitHub repository.

The vulnerability “effectively breaks Chrome’s extension security” by creating “a privilege escalation primitive across extensions, something Chrome’s security model is explicitly designed to prevent,” Gispan wrote.

A graphic depicting how a vulnerability exploits the trust boundaries in Claude’s Chrome extension. (Source: LayerX)


Claude relies on text, user interface semantics, and interpretation of screenshots to make decisions, all things that an attacker can control on the input side. The researchers modified Claude’s user interface to remove labels and indicators around sensitive information, like passwords and sharing feedback, then prompted Claude to share the files with an outside server.

That means cybersecurity defenders often have nothing obviously malicious to detect. Where there is visible activity, the model can be prompted to cover its tracks by deleting emails and other evidence of its actions.

Ax Sharma, Head of Research at Manifold Security, called the vulnerability “a useful demonstration of why monitoring AI agents at the prompt layer is fundamentally insufficient.”

“The most sophisticated part of this attack isn’t the injection, but that the agent’s perceived environment was manipulated to produce actions that looked legitimate from the inside,” said Sharma. “That’s the class of threat the industry needs to be building defenses for.”

Gispan said LayerX reported the flaw to Anthropic on April 27, but claimed the company only issued a “partial” fix to the problem. According to LayerX, Anthropic responded a day later to say that the bug was a duplicate of another vulnerability already being addressed in a future update.   

While that fix, issued May 6, introduced new approval flows for privileged actions that made it harder to exploit the same flaw, Gispan said he was still able to take over Claude’s agent in some scenarios.

“Switching to ‘privileged’ mode, even without the user’s notification or consent, enabled circumventing these security checks and injecting prompts into the Claude extension, as before,” Gispan wrote.

Anthropic did not respond to a request for comment from CyberScoop on the research and mitigation efforts.

The post Flaw in Claude’s Chrome extension allowed ‘any’ other plugin to hijack victims’ AI appeared first on CyberScoop.

Federal judge blocks Perplexity’s AI browser from making Amazon purchases

A federal judge has blocked Perplexity, makers of the Comet AI browser, from accessing user Amazon accounts and making purchases on their behalf.

In a March 9 order, Judge Maxine Chesney of the U.S. District Court for the Northern District of California said the temporary injunction reflects the likelihood that Amazon “will succeed on the merits” of its claim that Perplexity’s AI agents violate the federal Computer Fraud and Abuse Act and California’s Comprehensive Computer Data Access and Fraud Act.

The court held that Amazon “has provided strong evidence that Perplexity, through its Comet browser, accesses, with the Amazon user’s permission but without authorization by Amazon, the user’s password-protected account.”

Per the ruling, Perplexity must prohibit Comet from accessing, attempting to access, assisting, instructing or providing the means for others to access Amazon user accounts. Perplexity must also delete all Amazon account and customer data it collected along the way.

Perplexity told the court that the purchases were legitimate and legal because its users had authorized the AI agent to make them on their behalf. But Amazon has explicitly denied the agent such permission, saying AI agents make mistakes, interfere with Amazon’s own algorithms and place users at elevated cybersecurity risk.

Additionally, Chesney wrote that Amazon has incurred “significantly more” than the $5,000 in losses needed to qualify as computer fraud, including the cost of employee time spent developing new web tools to block Comet’s access to private customer accounts and to detect future unauthorized access by the browser.

Amazon says it has asked Perplexity officials on five separate occasions to stop covertly accessing its store with AI agents. In a cease-and-desist letter sent to Perplexity on Oct. 31, 2025, attorney Moez Kaba of the law firm Hueston Hennigan alleged that the automated purchases degrade the online shopping experience for Amazon customers.

Amazon requires AI agents to digitally identify themselves when using the e-commerce platform. But the company alleged that Perplexity executives “refused to operate transparently and have instead taken affirmative steps to conceal its agentic activities in the Amazon Store,” including configuring their software to covertly pose as human traffic.

“Such transparency is critical because it protects a service provider’s right to monitor AI agents and restrict conduct that degrades the customer shopping experience, erodes customer trust, and creates security risks for our customers’ private data,” wrote Kaba.

Additionally, such agents could pose a further risk to Amazon through cybersecurity vulnerabilities exploited by cybercriminals to hijack AI browsers like Comet.

The lack of response from Perplexity executives to earlier entreaties from Amazon may have played a role in the court’s injunction, with Chesney noting that Amazon was likely to suffer irreparable harm without court intervention because “Perplexity has made clear that, in the absence of the relief requested, it will continue to engage in the above-referenced challenged conduct.”

The case could have broader implications for the way commercial AI agent tools are designed and how far they can legally act on a person’s behalf. Notably, while Amazon opposes Comet’s AI-directed purchases, Perplexity claims that its users have given it permission to make purchases on their behalf.

Perplexity argued that a court order halting its AI’s activities would go against the public interest by depriving consumers of choice and innovation. Chesney concluded the opposite, endorsing Amazon’s argument that the public has a greater interest in protecting computers from unauthorized access.

Perplexity did not respond to a request for comment on the ruling at press time.


The post Federal judge blocks Perplexity’s AI browser from making Amazon purchases appeared first on CyberScoop.
