Reading view

There are new articles available, click to refresh the page.

Sen. Schumer seeks DHS plan on AI cyber coordination with state, local governments

The Senate’s top Democrat called on the Department of Homeland Security Friday to work closely with state and local governments to defend against artificial intelligence-strengthened hacks. 

Senate Minority Leader Chuck Schumer, D-N.Y., wrote to DHS Secretary Markwayne Mullin to make sure state, local, tribal and territorial (SLTT) governments aren’t left behind as AI models advance, posing new hacking threats.

“There is a race between cybersecurity defenders and AI-enabled hacking — and there’s no time to waste,” Schumer wrote.

“While the White House has reportedly begun hosting meetings about its internal security priorities following these frontier AI cyber breakthroughs, it is glaringly obvious that the Department of Homeland Security needs an updated plan for coordinating these efforts with [state, local, tribal and territorial] governments and implementing procedures to reduce the risk of disruptive cyberattacks enabled by frontier AI,” he stated.

Schumer said he was worried about the capabilities of DHS and its Cybersecurity and Infrastructure Security Agency to carry out that coordination, given federal funding cuts to the Multistate Information Sharing and Analysis Center, and the lack of a Senate-confirmed CISA director for the duration of the second Trump administration.

Schumer wants a plan from DHS by July 1 on coordinating with state and local governments on a range of questions, such as how to identify top AI talent, carry out rapid patching and conduct risk assessments.

“AI is changing the cyber battlefield fast — and we cannot let hackers get there first,” Schumer said in comments accompanying the letter. “Hospitals, power grids, water systems, schools, elections, and emergency services cannot be left exposed while criminal gangs and state-backed hackers race to exploit new AI tools. DHS must immediately help states and localities find and fix vulnerabilities before Americans are hit with outages, disruptions, and attacks that could put lives and livelihoods at risk.”

CISA is using AI to help on the defensive side internally, agency officials recently said.

The post Sen. Schumer seeks DHS plan on AI cyber coordination with state, local governments appeared first on CyberScoop.

Flaw in Claude’s Chrome extension allowed ‘any’ other plugin to hijack victims’ AI

As businesses and governments turn to AI agents to access the internet and perform higher-level tasks, researchers continue to find serious flaws in large language models that can be exploited by bad actors.

The latest discovery comes from browser security firm LayerX, involving a bug in the Chrome extension for Anthropic’s Claude AI model that allows any other plugin – even ones without special permissions – to embed hidden instructions that can take over the agent

“The flaw stems from an instruction in the extension’s code that allows any script running in the origin browser to communicate with Claude’s LLM, but does not verify who is running the script,” wrote LayerX senior researcher Aviad Gispan. “As a result, any extension can invoke a content script (which does not require any special permissions) and issue commands to the Claude extension.”

Gispan said he was able to execute any prompt he wanted, blow through Claude’s safety guardrails, evade user confirmation and perform cross-site actions across multiple Google tools. As a proof of concept, LayerX was able to exploit the flaw to extract files from Google Drive folders and share them with unauthorized parties, surveil recent email activity and send emails on behalf of a user, and pilfer private source code from a connected GitHub repository.

The vulnerability “effectively breaks Chrome’s extension security” by creating “a privilege escalation primitive across extensions, something Chrome’s security model is explicitly designed to prevent,” Gispan wrote.

A graphic depicting how a vulnerability exploits the trust boundaries in Clade Chrome’s extension. (Source: LayerX)


Claude relies on text, user interface semantics, and interpretation of screenshots to make decisions, all things that an attacker can control on the input side. The researchers modified Claude’s user interface to remove labels and indicators around sensitive information, like passwords and sharing feedback, then prompted Claude to share the files with an outside server.

That means cybersecurity defenders often have nothing obviously malicious to detect. Where there is visible activity, the model can be prompted to cover its tracks by deleting emails and other evidence of its actions.

Ax Sharma, Head of Research at Manifold Security, called the vulnerability “a useful demonstration of why monitoring AI agents at the prompt layer is fundamentally insufficient.”

“The most sophisticated part of this attack isn’t the injection, but that the agent’s perceived environment was manipulated to produce actions that looked legitimate from the inside,” said Sharma. “That’s the class of threat the industry needs to be building defenses for.”

Gispan said LayerX reported the flaw to Anthropic on April 27, but claimed the company only issued a “partial” fix to the problem. According to LayerX, Anthropic responded a day later to say that the bug was a duplicate of another vulnerability already being addressed in a future update.   

While that fix, issued May 6, introduced new approval flows for privileged actions that made it harder to exploit the same flaw, Gispan said he was still able to take over Claude’s agent in some scenarios.

“Switching to ‘privileged’ mode, even without the user’s notification or consent, enabled circumventing these security checks and injecting prompts into the Claude extension, as before,” Gispan wrote.

Anthropic did not respond to a request for comment from CyberScoop on the research and mitigation efforts.

The post Flaw in Claude’s Chrome extension allowed ‘any’ other plugin to hijack victims’ AI appeared first on CyberScoop.

CISA wants critical infrastructure to operate ‘weeks to months’ in isolation during conflict

The Cybersecurity and Infrastructure Security Agency is urging critical infrastructure owners and operators to plan for delivering essential services under emergency conditions – potentially for months at a time.

The federal government’s top cybersecurity agency warned that state-sponsored hackers, particularly two Chinese groups known as Salt Typhoon and Volt Typhoon, continue to threaten critical sectors like electricity, water, and internet. 

The agency is now working with the private sector to protect operational technology – the systems that control the heavy machinery and equipment that powers most critical infrastructure – from attacks that enter through business IT systems or third-party vendor products.

The initiative  — known as CI Fortify – will include CISA conducting targeted technical assessments of critical infrastructure entities and aims to create plans that “allow for safe operations for weeks to months while isolated” from IT networks and third-party tools, according to the agency’s website.

Nick Andersen, CISA’s acting director, told reporters that the goal is “service delivery [that] can still reach critical infrastructure after the asset owner has disconnected with IT and OT, disconnected from third party vendors and service provider connections and disconnected from third party telecommunications equipment.”

Over the past two years, wars in Ukraine, Gaza, Iran and elsewhere have seen water plants, power substations, data centers and other critical infrastructure targeted by kinetic or cyberattacks.

Andersen said the agency has already begun engaging with some companies to pilot the assessments and expects that work to ramp up considerably as CISA hires additional staff in the coming months.

He declined to name the entities involved in the pilot program, but said they will focus on organizations that support national security, defense, public health and safety and economic continuity. He added that CISA’s assessments will vary from sector to sector depending on their unique needs.

“Water isn’t necessarily designed to prioritize specific customer needs outside of recovery periods, while energy and transportation have more immediate tradeoffs for selecting one load or one set of cargo over another,” Andersen said as an example.

One pillar of CISA’s strategy is isolation: essentially turning off all third-party and business network connections to an OT network when facing an emergency or unknown vulnerability.

Organizations also need to develop an internal plan for what acceptable service levels look like under those conditions and reach understandings with their critical customers, like U.S. military installations and lifeline services.

The second pillar, recovery, involves best practices for organizations: backing up files, documenting systems and having manual backups for operations when normal computer systems are down.

In conversations with cybersecurity specialists who focus on critical infrastructure and operational technology, it is widely assumed that China is not the only nation to have broadly compromised Americans critical infrastructure. That hacking groups tied to other nations have almost surely noticed and exploited the same basic vulnerabilities and hygiene issues found by the Typhoons.

Agencies like the FBI and Federal Communications Commission have touted efforts to purge Chinese hackers and work voluntarily with telecoms to harden their network security. But U.S. national security officials and cybersecurity defenders have consistently said both Salt Typhoon and Volt Typhoon remain active threats to U.S. critical infrastructure.

The post CISA wants critical infrastructure to operate ‘weeks to months’ in isolation during conflict appeared first on CyberScoop.

NYC Public Schools Lack Central Inventory to Track Vendors Used By Schools — NYS Auditor

Audit conducted by NYS Comptroller’s Office between 2020-2025 found multiple concerns leaving students and employees at risk of privacy and data security breaches. The auditor also criticized the city for failing to cooperate in a timely manner with the auditor’s requests for information.  In June 2014, a decade after the NYC Education Department had been...

Source

Spotify Adds 'Verified' Badges To Distinguish Human Artists From AI

Spotify is adding "Verified by Spotify" badges to distinguish human artists from AI-generated personas, using signals like linked social accounts, consistent listener activity, merchandise, and concert dates. The BBC reports: The world's most-used music streaming service said the 'Verified by Spotify' text and green checkmark icon would appear next to artist names when they meet "defined standards demonstrating authenticity." This could include having linked social accounts on their artist profile, consistent listener activity or other "signals of a real artist behind the profile," the company said, such as merchandise or concert dates. In its blog post, Spotify said "more than 99%" of the artists listeners actively search for will be verified, representing "hundreds of thousands of artists." It said the process would prioritize acts with "important contributions to music culture and history", rather than "content farms," with the platform rolling out verification and badges over the coming weeks.

Read more of this story at Slashdot.

❌