Flaw in Claude’s Chrome extension allowed ‘any’ other plugin to hijack victims’ AI

As businesses and governments turn to AI agents to access the internet and perform higher-level tasks, researchers continue to find serious flaws in large language models that can be exploited by bad actors.

The latest discovery comes from browser security firm LayerX: a bug in the Chrome extension for Anthropic’s Claude AI model allows any other plugin – even ones without special permissions – to embed hidden instructions that can take over the agent.

“The flaw stems from an instruction in the extension’s code that allows any script running in the origin browser to communicate with Claude’s LLM, but does not verify who is running the script,” wrote LayerX senior researcher Aviad Gispan. “As a result, any extension can invoke a content script (which does not require any special permissions) and issue commands to the Claude extension.”
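The pattern Gispan describes can be modeled in a few lines. The sketch below is purely illustrative – the function and identifier names (`vulnerableHandler`, `TRUSTED_SENDERS`, the message shape) are hypothetical and not taken from Anthropic’s actual extension code – but it shows the class of bug: a message handler that dispatches commands without checking who sent them, versus one that validates the sender against an allowlist.

```javascript
// Hypothetical model of the trust-boundary failure LayerX describes.
// All names here are illustrative, not Anthropic's real code.

// Vulnerable pattern: any script that can post a message gets its
// command forwarded to the agent -- there is no sender check.
function vulnerableHandler(message, executePrompt) {
  if (message.type === "agent-command") {
    return executePrompt(message.prompt); // runs for ANY sender
  }
  return null;
}

// Hardened pattern: verify the sender's identity against an allowlist
// before forwarding anything to the agent.
const TRUSTED_SENDERS = new Set(["claude-extension-id"]);

function hardenedHandler(message, executePrompt) {
  if (message.type !== "agent-command") return null;
  if (!TRUSTED_SENDERS.has(message.senderId)) {
    return { rejected: true, reason: "untrusted sender" };
  }
  return executePrompt(message.prompt);
}

// A hostile extension's content script needs no special permissions
// to craft such a message:
const hostile = {
  type: "agent-command",
  senderId: "evil-extension",
  prompt: "exfiltrate files",
};

console.log(vulnerableHandler(hostile, p => `executing: ${p}`)); // command runs
console.log(hardenedHandler(hostile, p => `executing: ${p}`));   // rejected
```

In real Chrome extensions the equivalent fix is checking the sender identity that the browser attaches to cross-extension messages, rather than trusting fields inside the message body.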

Gispan said he was able to execute any prompt he wanted, blow through Claude’s safety guardrails, evade user confirmation and perform cross-site actions across multiple Google tools. As a proof of concept, LayerX was able to exploit the flaw to extract files from Google Drive folders and share them with unauthorized parties, surveil recent email activity and send emails on behalf of a user, and pilfer private source code from a connected GitHub repository.

The vulnerability “effectively breaks Chrome’s extension security” by creating “a privilege escalation primitive across extensions, something Chrome’s security model is explicitly designed to prevent,” Gispan wrote.

A graphic depicting how the vulnerability exploits trust boundaries in Claude’s Chrome extension. (Source: LayerX)


Claude relies on text, user interface semantics, and interpretation of screenshots to make decisions, all things that an attacker can control on the input side. The researchers modified Claude’s user interface to remove labels and indicators around sensitive information, like passwords and sharing feedback, then prompted Claude to share the files with an outside server.

That means cybersecurity defenders often have nothing obviously malicious to detect. Where there is visible activity, the model can be prompted to cover its tracks by deleting emails and other evidence of its actions.

Ax Sharma, Head of Research at Manifold Security, called the vulnerability “a useful demonstration of why monitoring AI agents at the prompt layer is fundamentally insufficient.”

“The most sophisticated part of this attack isn’t the injection, but that the agent’s perceived environment was manipulated to produce actions that looked legitimate from the inside,” said Sharma. “That’s the class of threat the industry needs to be building defenses for.”

Gispan said LayerX reported the flaw to Anthropic on April 27, but claimed the company only issued a “partial” fix to the problem. According to LayerX, Anthropic responded a day later to say that the bug was a duplicate of another vulnerability already being addressed in a future update.   

While that fix, issued May 6, introduced new approval flows for privileged actions that made it harder to exploit the same flaw, Gispan said he was still able to take over Claude’s agent in some scenarios.

“Switching to ‘privileged’ mode, even without the user’s notification or consent, enabled circumventing these security checks and injecting prompts into the Claude extension, as before,” Gispan wrote.

Anthropic did not respond to a request for comment from CyberScoop on the research and mitigation efforts.

The post Flaw in Claude’s Chrome extension allowed ‘any’ other plugin to hijack victims’ AI appeared first on CyberScoop.

CISA wants critical infrastructure to operate ‘weeks to months’ in isolation during conflict

The Cybersecurity and Infrastructure Security Agency is urging critical infrastructure owners and operators to plan for delivering essential services under emergency conditions – potentially for months at a time.

The federal government’s top cybersecurity agency warned that state-sponsored hackers, particularly two Chinese groups known as Salt Typhoon and Volt Typhoon, continue to threaten critical sectors like electricity, water, and internet. 

The agency is now working with the private sector to protect operational technology – the systems that control the heavy machinery and equipment that powers most critical infrastructure – from attacks that enter through business IT systems or third-party vendor products.

The initiative – known as CI Fortify – will involve CISA conducting targeted technical assessments of critical infrastructure entities, and aims to create plans that “allow for safe operations for weeks to months while isolated” from IT networks and third-party tools, according to the agency’s website.

Nick Andersen, CISA’s acting director, told reporters that the goal is “service delivery [that] can still reach critical infrastructure after the asset owner has disconnected with IT and OT, disconnected from third party vendors and service provider connections and disconnected from third party telecommunications equipment.”

Over the past two years, wars in Ukraine, Gaza, Iran and elsewhere have seen water plants, power substations, data centers and other critical infrastructure targeted by kinetic or cyberattacks.

Andersen said the agency has already begun engaging with some companies to pilot the assessments and expects that work to ramp up considerably as CISA hires additional staff in the coming months.

He declined to name the entities involved in the pilot program, but said they will focus on organizations that support national security, defense, public health and safety and economic continuity. He added that CISA’s assessments will vary from sector to sector depending on their unique needs.

“Water isn’t necessarily designed to prioritize specific customer needs outside of recovery periods, while energy and transportation have more immediate tradeoffs for selecting one load or one set of cargo over another,” Andersen said as an example.

One pillar of CISA’s strategy is isolation: essentially turning off all third-party and business network connections to an OT network when facing an emergency or unknown vulnerability.

Organizations also need to develop an internal plan for what acceptable service levels look like under those conditions and reach understandings with their critical customers, like U.S. military installations and lifeline services.

The second pillar, recovery, involves best practices for organizations: backing up files, documenting systems and having manual backups for operations when normal computer systems are down.

Among cybersecurity specialists who focus on critical infrastructure and operational technology, it is widely assumed that China is not the only nation to have broadly compromised American critical infrastructure – and that hacking groups tied to other nations have almost surely noticed and exploited the same basic vulnerabilities and hygiene issues found by the Typhoons.

Agencies like the FBI and Federal Communications Commission have touted efforts to purge Chinese hackers and work voluntarily with telecoms to harden their network security. But U.S. national security officials and cybersecurity defenders have consistently said both Salt Typhoon and Volt Typhoon remain active threats to U.S. critical infrastructure.


A college student is suing a dating app that allegedly used her TikTok videos to target men in her dormitory

A 19-year-old woman is suing the makers of a dating app, alleging they took a video she posted online, repurposed it without her consent into an advertisement for the app, then used geofencing to target that ad to people in her area. 

According to the lawsuit filed Apr. 28 in Tennessee and an interview with her lawyer, the company allegedly used geotargeting to serve the ads on platforms like Snapchat to users near her, including men in her own dormitory. 

The allegations, if proven, offer another example of how modern technology has made it easier than ever for bad actors to imitate, objectify, profit off and harass individuals, often women. Recent laws like the Take It Down Act have focused particularly on the use of AI to create sexualized imagery of victims. In this case, the lawsuit alleges that Meete used not AI, but simple video editing, a voiceover and geofencing to create the same kind of deception.

 On the day of her high school graduation, Kaelyn Lunglhofer posted a brief video to TikTok, wearing an orange outfit and saying a few words to her followers over background music. She went on to attend the University of Tennessee in the fall, where she began building a following as a TikTok influencer.

The complaint alleges that the makers behind the dating app Meete took that video without Lunglhofer’s consent, overlaid it with graphics advertising the app, and added a voiceover to make it appear she was saying “Are you looking for a friend with benefits? This app shows you women around you who are looking for some fun. You can video chat with them.”

Abe Pafford, Lunglhofer’s attorney, told CyberScoop that his client had no idea Meete was using her likeness until a male student in her dormitory told her he had repeatedly seen her in ads for the app on his Snapchat shortly after the two had met. 

Pafford called it “implausible” that this was a coincidence, pointing to Meete’s premise of connecting users with nearby women and the precision of geofencing technology. Before filing the case, Pafford’s law firm hired an investigative firm to gather additional evidence.

“I think the idea is they want[ed] viewers of these advertisements – and candidly this is pretty clearly targeted at male viewers – to have their eye caught by someone they may know or recognize or think they may have seen around, and that’s part of what makes it so disturbing,” he said.

Pafford said he believes Lunglhofer is far from the only person whose image Meete has misappropriated, and that most victims likely have no idea it’s happening. Lunglhofer herself only had evidence because the student who told her had saved recordings and screenshots of the ads featuring her video.

“The bottom line is we think there are likely others that have been victimized in a similar way, but finding out who they are and landing on tangible proof of that can be challenging,” he said.

After this story was published, Snap told CyberScoop it is investigating.

“Snap’s advertising policies require that advertisers have all necessary rights to the content in their ads, including the rights to any individuals featured,” Snap spokesperson Ahrim Nam said in an email. “Using someone’s likeness without their consent is a violation of our policies. Upon learning of these allegations, we are actively reviewing the matter and will take appropriate action.”

The lawsuit alleges violations of multiple federal and state laws, including the Lanham Act, the primary U.S. law governing trademark rights. The suit also cites Tennessee’s ELVIS Act, which prohibits the unauthorized use of artists’ and musicians’ images or likenesses, as well as Tennessee common-law claims for defamation and right of publicity.

Lunglhofer is seeking $750,000 in punitive damages, as well as any revenue tied to the ads featuring her likeness. Pafford said that the advertisements damaged her online brand and reputation while also putting her at risk of harassment or falsely implying she was endorsing a local dating service and was open to casual hookups.

“It’s really kind of grotesque and it’s also kind of dangerous,” he said. “Someone may not be aware that this is happening and they’re targeted in this way, but you can put people at risk in ways that are really troubling if you stop to think about it.”

The suit names Quantum Communications Development Unlimited, based in the Virgin Islands, as well as Chinese companies Starpool Data Limited and Guangzhou Yuedong Interconnection Technology, as defendants. A judge has ordered representatives from all three to appear for depositions in the United States.

Quantum Communications Development Unlimited has a sparse internet footprint: its website consists of a single page with a message written in broken English and an email address that no longer appears to work. Efforts by CyberScoop to reach the company and other defendants for comment were not successful. The company is listed as Meete’s publisher on Apple’s App Store, where it describes the app as “a space where you can be yourself and meet people” and promises “safety and respect first” — adding that “Meete provides a secure environment where your privacy and safety are our top concerns.”

The description also claims the app adheres to Apple’s safety standards, citing a “Zero-Tolerance Policy regarding objectionable content and abusive behavior.” Listed safeguards include “24/7” manual reviews by moderation teams, instant reporting and blocking of other users, and AI filtering “to detect and prevent harassment before it happens.”

On Meete’s Google Play Store page, user reviews accuse the app of failing to match them to nearby users and being largely populated by bots posing as women to sell in-app currency.

Pafford acknowledged that the defendants being based overseas complicates efforts to hold them accountable under U.S. law, but argued that Meete is clearly designed to operate in the United States. The companies behind the app have filed U.S. patents and trademarks for their business, and distribute their app through the Apple and Google Play stores while advertising on major U.S. social media platforms like Snapchat.

Apple and Google did not respond to a request for comment.

5/05/26: This story was updated to include comment from Snap received after publication.


FCC tightens KYC rules for telecoms, closes loophole for banned foreign services

The Federal Communications Commission approved new regulations Wednesday designed to crack down on robocalling, protect telecommunications networks from cyberattacks and further vet equipment-testing labs based overseas.

Commissioners unanimously passed a measure to strengthen telecom companies’ “Know Your Customer” requirements for verifying callers’ identities. Among the potential solutions under consideration is requiring telecoms to verify a customer’s name, address, government ID and alternative phone numbers before enabling their service.

In a statement ahead of the vote, FCC Chair Brendan Carr said that under current rules some telecoms “do the bare minimum” to verify callers and have “become complicit in illegal robocalling schemes.”

“As we have continued to investigate the problem of illegal robocalls over the last year, it has become clear that some originating providers are not doing enough to vet their customers, allowing bad actors to infiltrate our U.S. phone networks,” he said.

Current rules require telecoms to take “affirmative, effective” measures to verify callers and block illegal calls, but in practice this system has largely relied on self-attestation from the companies. Because a single call can traverse multiple networks, carriers must also often rely on identity verification performed by other telecoms.
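The weakness in that chain of trust can be sketched in a few lines. This is an illustrative model only – the function name, data shapes and the three-tier ranking below are assumptions for the example, not the FCC’s or any carrier’s actual scheme – but it captures the problem: when downstream carriers inherit an upstream attestation instead of re-verifying the caller, a self-attested claim propagates end to end with no independently verified link behind it.

```python
# Illustrative sketch of attestation relay across carriers (hypothetical
# names and ranking, not an actual regulatory scheme).
ATTESTATION_RANK = {"A": 3, "B": 2, "C": 1}  # "A" = full confidence in caller identity

def effective_confidence(hops):
    """Each hop is (carrier, claimed_level, independently_verified).

    The chain's trustworthy confidence is only as strong as its weakest
    *verified* link; unverified claims merely propagate whatever the
    originating provider asserted about itself.
    """
    verified = [ATTESTATION_RANK[level] for _, level, checked in hops if checked]
    if not verified:
        return "unverified (self-attested only)"
    return min(verified)

# A spoofed-caller scenario: the originating provider self-attests "A"
# and no downstream carrier re-checks the claim.
call_path = [
    ("originating-telecom", "A", False),  # self-attestation, no KYC check
    ("transit-carrier", "A", False),      # inherits the upstream claim
    ("terminating-carrier", "A", False),
]
print(effective_confidence(call_path))  # -> "unverified (self-attested only)"
```

Under this model, tighter KYC rules amount to forcing the originating hop’s `independently_verified` flag to actually mean something before an “A”-level claim enters the network.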

For example, the telecom that transmitted thousands of false robocalls imitating then-President Joe Biden during the 2024 New Hampshire presidential primary initially reported to the FCC that it had the highest level of confidence in the identity of those using the phone numbers. That turned out to be false: the robocallers spoofed a well-known former state Democratic Party official.

Unsurprisingly, the commission is also interested in finding ways to better enforce Know Your Customer rules, including tying penalties to the number of illegal calls that were placed.

Since 1999, the FCC has traditionally granted blanket authorization for domestic carriers to operate interstate telecommunications services within U.S. borders. Another rule passed by the commission today would formally end that practice for foreign companies on the FCC’s covered entity list.  

The list bans a small number of foreign companies based in Russia or China from selling their equipment in the U.S. on national security grounds, but Carr said equipment from those companies often winds up in U.S. products through services that don’t fall under the current legal definition of international telecommunications authority.

Commissioner Olivia Trusty, who helped lead the development of the rule, said cybersecurity threats facing telecom networks today “exceed those of any recent era” and that updates must be made to modernize and harden networks.

“In response to these growing hostilities, it is imperative that we re-examine policies that permit access to U.S. networks to ensure that frameworks originally designed to promote economic growth are not exploited in ways that jeopardize our national and economic security,” Trusty said in a statement after the vote passed.

The FCC also passed a third measure that would refuse to recognize any testing or equipment lab based overseas that does not have a reciprocity agreement in place with U.S.-based labs. The rule builds off efforts last year to prohibit telecoms from relying on testing and certification labs that are owned or operated by foreign adversarial countries like China or Russia, which led to the FCC withdrawing or denying certification of 23 overseas labs.


Spy agency officials say job loss anxiety, moving fast ‘safely’ among top challenges in AI workforce overhaul

Like many organizations, the National Geospatial-Intelligence Agency is moving to integrate AI tools into its business operations.

Jay Harless, director of human development at NGA, said the agency is trying to strike a balance: move fast enough to keep pace in what U.S. national security officials increasingly view as an AI arms race with adversarial countries like Russia and China, but not so fast that it disrupts proven intelligence-gathering methods.

“One of our primary drivers is that our adversaries were investing heavily, and so there is the pressure to keep ahead of and do that safely,” Harless said Tuesday at the Workday Federal Forum, presented by Scoop News Group. “We also realize that some of our adversaries may not have the same legal and ethical boundaries that us and our partners all need.”

Harless said the agency and others in the intelligence community are working to build systems with agentic AI that can accelerate decision-making “within secure boundaries.” That means building new IT infrastructure, validation protocols, monitoring for bias or rogue behavior, and putting accountability mechanisms in place.

“We’re moving fast, and moving fast safely by distinguishing what should be automated, what should be augmented and what should be kept purely human, because there are some things that will always be [human-operated],” he said.

A key piece is figuring out exactly how AI should fit into the work. Sasha Muth, NGA’s deputy director of human development, said the agency envisions a three-to-five-year effort to transform its workforce and IT infrastructure for the AI age. This year will be spent largely putting “structural things in place” for when and how analysts use AI, and reassessing what qualifications the agency should require for entry-level jobs.

But that effort is also causing tensions within the workforce, and Muth acknowledged that part of the challenge is convincing rank-and-file employees that the technology is going to help them – not replace them. The agency hired its first Chief AI Officer in 2024, and its upcoming three-year strategic plan will focus on change management, professional development and updating employees’ job skills. 

Muth said she is focused on evolving the agency’s human capital needs because one of her biggest fears is that over that five-year transition “we’re going to lose a lot of our expertise” by automating functions and not doing enough to modernize job requirements.

“We do see it as a big transformation, not only for just utilizing the technology, but moving our workforce along with us, having them excited about the changes and not fearful, because there’s a lot of fear…that their job is going away, that they won’t have a job,” she said.


U.S. companies hit with record fines for privacy in 2025

U.S. states issued $3.45 billion in privacy-related fines to companies in 2025, a total larger than the last five years combined, according to research and advisory firm Gartner.

The increase is driven in part by stronger, more established privacy laws in states like California, new interstate partnerships built around enforcing laws across state lines, and a renewed focus on how AI and automation affect privacy.

The data indicates that “regulators are shifting their efforts away from awareness to full scale enforcement,” marking a significant shift from even the last few years in how aggressively states are investigating and penalizing companies for privacy law violations.

“This is increasingly becoming the standard in 2026 and for the coming two years,” Gartner’s analysis concludes.

Privacy-related fines have gone up significantly in recent years. (Source: Gartner)

The California Consumer Privacy Act’s consumer privacy provisions went live in 2023, but for years enforcement was largely dormant. According to Nader Heinen, a data protection and AI analyst at Gartner and co-author of the research, that enforcement lag mirrors the way other major privacy laws, like Europe’s General Data Protection Regulation, have been rolled out: regulators “lead with a bit of guidance” for companies while using enforcement sparingly.

But that era appears to be over. In 2025, the California Privacy Protection Agency used the law to pursue violators across a wide range of industries – not just large conglomerates, but smaller and mid-sized companies in tech, the auto industry and consumer products, including off-the-shelf goods and apparel.

Heinen said some businesses “weren’t paying attention” and may have been lulled into complacency as regulators spun up their enforcement teams, leading to a harsh 2025.

“Unfortunately what happens when so much time passes between the legislation and starting enforcement regularly, is a lot of organizations let their privacy program atrophy,” he said.

States have also sought to combine their resources to target and penalize privacy violators across state lines. Last year, ten states came together to form the Consortium of Privacy Regulators, pledging to coordinate investigations and enforcement of common privacy laws around accessing, deleting and preventing the sale of personal information.

Beyond laws like the CCPA, states have been updating existing privacy and data-protection laws to more directly address harms from automated decision-making technologies, including AI. State privacy regulators are especially focused on how personal or private data is used to train AI systems and help them make inferences.

Gartner expects privacy fines to increase further in the coming years, and Heinen said states will likely again lead the way on building the legal infrastructure to enforce data privacy in the AI age, as they become the main conduit for lingering anxiety about the technology’s potential negative impacts.

“You have to put yourself in the position of these state legislatures,” Heinen said. “Their constituencies – the voting public – is telling them we’re worried about AI. AI anxiety is a thing. Everybody’s worried about whether AI is going to take their job or impact their capacity to find a job, so they want to see legislation in place to protect them.”

This past month, House Republicans unveiled their latest attempt to pass comprehensive federal privacy legislation with a bill that would preempt tougher state laws like those in California. In particular, the CCPA gives residents a private right of action – the legal right to sue companies directly – for violations of privacy laws.

On Monday, Tom Kemp, executive director of the California Privacy Protection Agency, wrote to House Energy and Commerce Chair Brett Guthrie, R-Ky., to oppose the bill, arguing it would provide “a ceiling” for Americans’ data privacy protections rather than a “floor” to build on.

“Preemption would strip away important existing state privacy provisions that protect tens of millions of Americans now,” Kemp wrote. “That would be a significant step backward in privacy protection at a time when individuals are increasingly concerned about their privacy and security online, and when challenges from data-intensive new technologies such as AI are developing quickly.”


Dragos: Despite AI use, new malware targeting water plants is ‘hype’

One day AI may be capable of creating malware that threatens critical infrastructure.

But that day was not earlier this month, when reports surfaced of a new piece of malware seemingly configured to search for and sabotage Israeli water infrastructure, according to industrial cybersecurity firm Dragos. 

The malware, called ZionSiphon, was first identified by AI cybersecurity firm Darktrace, which said it was designed to target operational technology and industrial control system environments. The code scans the internet for IP addresses tied to water treatment and desalination plants owned or operated in Israel, with the goal of compromising them to sabotage chlorine levels and poison water supplies.

Strings in the malware’s binary code included the names of different components of the Israeli water sector, as well as politically-themed messaging, such as “In support of our brothers in Iran, Palestine, and Yemen against Zionist aggression.”

But Jimmy Wylie, a technical lead malware analyst at Dragos, called the malware nothing more than “hype,” claiming it poses no threat to water plants in Israel or anywhere else.

For instance, whoever wrote the malware appears to have little knowledge of how operational technology works at Israeli water plants.

“The code is broken and shows little to no knowledge of dam desalination or ICS protocols,” wrote Wylie.

The developers also appeared to use AI to generate significant portions of the code, leading to hallucinations and errors. All the Windows-based process names and directory paths designed to confirm that a target was related to water desalination were filled with “fictional and likely LLM generated guesses.” The configuration files purportedly designed to manipulate chlorine levels were also fake and likely created using AI. 

Darktrace’s analysis notes that the malware sample they tested appears to be dysfunctional, citing an incorrect configuration in the code’s country targeting functions.

But Wylie wrote that the malware still would have been harmless to water treatment plants even when correctly configured, because the rest of the code was so riddled with “logic errors and invalid assumptions” that it would have been inoperable.

Similar maturity and logic issues were found in the malware’s USB infection and self-destruction capabilities. Wylie said Dragos was withholding additional technical analysis of the flaws plaguing ZionSiphon because they’re “not in the business of fixing malware for adversaries.”

The episode highlights an ongoing dispute around how much attention defenders – particularly those who work with operational technology – should give to more novel threats like AI-enabled hacking, versus more established tactics, techniques and procedures that have been successfully wielded by foreign hacking groups.

Operational technology – the systems that control or manipulate the machinery used in water facilities, electrical power plants and other industrial sectors – differs substantially from information technology environments. That presents challenges for both cybersecurity defenders and malicious hackers who often lack the industry-specific knowledge or skillset to design effective exploits.

To wit, Dragos claims there are fewer than 10 publicly known malware samples capable of threatening industrial control systems. ZionSiphon is not one of them.

Wylie was critical of the way threat intelligence companies and media outlets initially framed the danger posed by the malware, saying it was overblown and likely diverted water sector cybersecurity resources away from more tangible threats, like Volt Typhoon, the Chinese-backed hacking group that U.S. intelligence officials say has burrowed deep into American critical infrastructure.

“Those responsible for protecting water treatment facilities and other critical infrastructure have finite time and attention,” Wylie wrote. “Spending either on ZionSiphon means spending less on threat groups like [Volt Typhoon], which have a demonstrated history of intrusions into those environments and are a far more pressing concern.”


House Republicans roll out national privacy bill

House Republicans on Wednesday unveiled Congress’ latest effort to pass comprehensive digital privacy legislation for Americans.

The Secure Data Act would allow consumers to opt out of data collection by individual businesses for the purposes of targeted advertising, selling to third parties or use in automated decision-making.

It would also require companies to inform consumers when their personal data is being collected or used, provide them with a portable version of that data, and give consent rights to parents over the data collection of teenagers.

“This bill establishes clear, enforceable protections so that Americans remain in charge of their own data and companies are held accountable for its safekeeping,” Brett Guthrie, R-Ky., chair of the House Energy and Commerce Committee, and Rep. John Joyce, R-Pa., who led a working group charged with developing the draft legislation, said in a joint statement.

The draft bill also imposes new requirements on businesses and other organizations to limit their collection of personal consumer data to what is “adequate, relevant and reasonably necessary” and only for purposes that are disclosed to consumers in advance. They must also adopt new safeguards for customers’ personal data and disclose any third parties they share it or sell it to, including adversarial foreign governments like Russia and China.

The Federal Trade Commission would be given greater oversight of data brokers that buy, collect, repackage and sell personal data to the highest bidder. The draft bill requires data brokers to register with the FTC, comply with data minimization, disclosure and data security mandates, and creates a new national data broker registry.

Cobun Zwiefel-Keegan, managing director at the International Association of Privacy Professionals, told CyberScoop that based on the released draft and conversations on the Hill, the bill most resembles privacy laws passed by Virginia or Kentucky (the home state of Guthrie) in recent years, with an emphasis on providing notice and opt-out rights to individual consumers and often tying business compliance to “reasonable” standards of evidence that they acted to protect consumer data.  

At the same time, Zwiefel-Keegan said it could potentially further empower the Federal Trade Commission and state Attorneys General to investigate and sanction bad actors.

The bill is the product of more than 16 months of internal discussion and consensus-building within the GOP majority. While drafting it, a working group led by Joyce and other House Republicans solicited feedback from 170 organizations and received more than 250 responses from the public to a Request for Information released last year.

While they have worked to achieve consensus within their own caucus, House Republicans did not involve Democratic members in the working group or drafting process, something observers said could make it difficult to attract bipartisan support.

Zwiefel-Keegan said the Republican drafters of the bill “would challenge Democrats to explain why they can’t support the type of bill that has been passed in blue states.”

But he also noted that there are “plenty of ways that people will point to how it’s weaker than a lot of blue state privacy laws,” including federal preemption of more robust state privacy laws like those in California, the lack of a private right of action allowing individuals to sue companies directly and a mandatory 45-day “curing” period that allows companies in violation of the law to come into compliance and avoid formal sanctions.  

“I think the privacy working group and the leadership of the committee thinks there’s a pretty strong chance of passing it out of committee,” Zwiefel-Keegan said. After that, the bill’s chances are likely dependent on other factors, like getting some Democrats on board and working with “red state representatives who may not like their own laws being preempted.”

Shortly after the draft bill was released, Rep. Frank Pallone, D-N.J., ranking member on the House Energy and Commerce Committee, said he was opposed and accused House Republicans of having “lost the plot” on passing national privacy legislation.

“This Republican privacy bill protects corporations and their bottom line, not people’s privacy,” Pallone said in a statement. “We should be protecting the little guy with a bill that empowers consumers, not one that preempts consumer protections at the behest of Big Tech.”

Eric Null, director of the privacy and data project at the Center for Democracy and Technology, indicated that the Secure Data Act falls short, calling it full of “easily exploitable loopholes” that let companies “hide behind cookie banners and lengthy terms of service rather than establishing meaningful privacy protections.”

Null was also critical of the bill’s lack of substance around AI, saying that Large Language Models pose significant privacy challenges today that will only worsen over time.

“Any federal privacy law discussed in 2026 should be future-proofed by protecting against growing AI-related privacy harms, namely by limiting data collection for AI training and preventing use of the technology to discriminate against protected classes, but this bill does neither sufficiently,” he said.

The American Civil Liberties Union also came out against the bill, with senior staff attorney Cody Venzke saying the GOP-led bill “places the onus on regular people” to sift through complex privacy policies created by businesses to request opt out or deletion of their data.

“And it leaves us without real recourse – even blocking us from going to court – if our requests go unanswered,” said Venzke in a statement.

In their joint statement, Guthrie and Joyce said they “look forward to working with our colleagues to build support for this bill and advance data privacy protections fit for our 21st century economy.”

The post House Republicans roll out national privacy bill appeared first on CyberScoop.

Vuln in Google’s Antigravity AI agent manager could escape sandbox, give attackers remote code execution

As organizations consider agentic AI for their business and IT stacks, researchers continue to find bugs and vulnerabilities in major commercial models that can significantly expand their attack surface.

This week, researchers at Pillar Security disclosed a vulnerability in Antigravity, Google’s AI-powered agent development tool.

The bug, since patched, combined prompt injection with Antigravity’s permitted file-creation capability to grant attackers remote code execution privileges.

The research details how the exploit circumvented Antigravity’s Secure Mode, Google’s highest security setting for its agents, which runs all command operations through a virtual sandbox environment, throttles network access and prohibits the agent from writing code outside of the working directory.

Secure Mode is supposed to limit the AI agent’s access to sensitive systems – and its ability to execute malicious or dangerous acts through shell commands. But one of the file-searching tools used by Antigravity, called “find_by_name,” is classified as a “native” system tool, meaning the agent can execute it directly, before protections like Secure Mode can evaluate command-level operations.

“The security boundary that Secure Mode enforces simply never sees this call,” wrote Dan Lisichkin, an AI security researcher with Pillar Security. “This means an attacker achieves arbitrary code execution under the exact configuration a security-conscious user would rely on to prevent it.”

The prompt injection attacks can be delivered through compromised identity accounts connected to the agent, or indirectly by hiding clandestine prompt instructions inside open-source files or web content the agent ingests. Antigravity has trouble distinguishing between written data it ingests for context and literal prompt instructions, so compromise can be achieved without any elevated access simply by getting the agent to read a malicious document or file.
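The underlying pattern is a familiar one: a tool parameter that can be steered by model output – and therefore by injected prompts – flows into a shell command without validation. The sketch below is purely illustrative; the function names and command are hypothetical, not Antigravity’s actual code. It shows how an unvalidated “filename” parameter becomes an injection point when interpolated into a shell string, and how passing arguments as a list sidesteps shell interpretation entirely:

```python
import subprocess

# Hypothetical sketch of the vulnerable pattern. In an agentic tool, the
# "pattern" argument may originate from model output, which in turn may be
# steered by prompt-injected content the agent ingested.

def find_by_name_unsafe(pattern: str) -> str:
    # Interpolating untrusted input into a shell string lets a crafted
    # "filename" break out of the quotes and run arbitrary commands.
    return subprocess.run(
        f"find . -name '{pattern}'",
        shell=True, capture_output=True, text=True,
    ).stdout

def find_by_name_safe(pattern: str) -> str:
    # Passing arguments as a list (no shell) means the pattern is only ever
    # a literal argument to find, never interpreted as shell syntax.
    return subprocess.run(
        ["find", ".", "-name", pattern],
        capture_output=True, text=True,
    ).stdout

# A prompt-injected "filename" such as  x'; echo pwned; echo '
# executes the embedded command in the unsafe version, but is treated as a
# literal (non-matching) file name in the safe one.
```

This is the class of flaw Lisichkin describes: the injection happens in a parameter the policy layer never inspects, so sandboxing the shell after the fact does not help.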

According to a disclosure timeline provided by Pillar Security, the bug was reported to Google on Jan. 6 and patched on Feb. 28, with Google awarding a bug bounty for the discovery.

Lisichkin said this same pattern of prompt injection through unvalidated input has been found in other coding AI agents like Cursor. In the age of AI, any unvalidated input can become a malicious prompt capable of hijacking internal systems.

“The trust model underpinning security assumptions, that a human will catch something suspicious, does not hold when autonomous agents follow instructions from external content,” he wrote.

The fact that the vulnerability was able to completely bypass Google’s secure mode underscores how the cybersecurity industry must start adapting and “move beyond sanitization-based controls.” 

“Every native tool parameter that reaches a shell command is a potential injection point. Auditing for this class of vulnerability is no longer optional, and it is a prerequisite for shipping agentic features safely,” Lisichkin wrote.

The post Vuln in Google’s Antigravity AI agent manager could escape sandbox, give attackers remote code execution appeared first on CyberScoop.

The FTC’s AI portfolio is about to get bigger

The Federal Trade Commission is poised to deepen its involvement in curbing the use of AI for malicious purposes, including the spread of nonconsensual sexualized deepfakes and voice cloning scams.

Last year, Congress passed the Take It Down Act, a law that allowed for criminal prosecution of individuals who share or distribute nonconsensual, intimate images and digital forgeries, including those that are AI-generated.

At a Senate oversight hearing last week, FTC Chair Andrew Ferguson called the new law one of the “greatest legislative achievements” of the current Congress and President Donald Trump’s administration, and said the FTC was preparing for “robust enforcement.”

Earlier this month, the Department of Justice scored its first successful conviction under the new law, when 37-year-old Columbus, Ohio resident James Strahler pleaded guilty to using AI-generated deepfake nudes as part of a harassment campaign targeting at least six women.

Another section of the law, set to take effect in May, will permit individuals to file “take down” notices with websites that publish or host sexual deepfakes. Companies will have 48 hours to remove the content or be subject to FTC investigation and enforcement.

Commissioner Mark Meador said at a March 30 conference in Washington D.C. that while he hopes they “never have to enforce it,” the FTC is treating Take It Down enforcement as a top priority and “actively spinning everything up that we need” to enforce the take down provision.   

That could quickly set up one of the first major confrontations with the tech sector – especially companies like xAI, whose Grok tool continues to be used to create and host nonconsensual deepfake images of real people, even after the scandal it faced earlier this year.

Following his speech, CyberScoop asked Meador how the take down provisions might apply to Grok’s mass generation of nonconsensual images of its users. He said the law specifies that the commission can’t take action against a company until it receives formal complaints starting in May.

“This is coming into place, and then if they don’t [remove the content] we would get the complaints and then we would go after them at that point,” Meador said. “So, we kind of have to wait and see how…companies respond to complaints and requests being made, and my hope would be that every company that gets a request to take something down would immediately take it down.”

xAI’s press office did not respond to CyberScoop’s request for comment on its preparations to comply with the Take It Down Act.

Strahler, who has yet to be sentenced, also admitted to using photos of children in his neighborhood to create deepfake pornography. An FTC strategic plan published earlier this month flagged protecting children online as a “key concern” for the commission that merits more consumer tools and resources.

The commission is “dedicated to exploring other ways the FTC can protect children and support families, including through its new authority under the Take It Down Act,” the plan states.

Casey Waughn, a privacy lawyer and senior associate at Armstrong Teasdale, told CyberScoop that the current commission’s focus on child online safety leaves ample room for the law to be brought to bear in creative ways.

“We’ve seen enforcing technology and privacy violations related to youth children is a priority, so I think it’s relatively easy to parlay that into some Take it Down Act enforcement,” she said.

Waughn said the one-year delay in the provision’s enforcement was intended to give platforms time to prepare, but she said the FTC could do more to publicly signal to companies what lawful compliance looks like, similar to the resources it provides around major privacy laws.

“I think what would be helpful for all organizations…would be guidance explaining what constitutes a good faith effort, for example, to attempt to address a take down request,” said Waughn.

Living in a scammer’s paradise

The FTC is also grappling with the impact of AI on criminal scams targeting Americans online.

Ferguson told lawmakers that AI is “increasing both the sophistication of the actual mechanisms by which the scams are accomplished, but it’s also making it easier for scammers to choose their targets.”

But the FTC’s powers are limited, as the Federal Communications Commission regulates the telephone and internet providers that transmit most scams. Ferguson also noted that many call center scams are located overseas “where they don’t bat an eye at the risk of civil enforcement from the FTC.” He said the commission was open to additional legislative authorities to tackle the problem.

At the March conference, Meador said AI-fueled deception was something the commission thinks about “daily” and that the technology is lowering the barrier to entry for many criminal schemes.

“The biggest place that we’ve seen [in] the way that some of these AI tools are being used to triple charge scams, to be honest,” he said.

Last year, the FBI reported that voice cloning scams impersonating distressed family members had bilked Americans out of nearly $900 million, and the technology has been used to impersonate high level Trump administration officials in conversations with businesses and political leaders.

Sen. Maggie Hassan, D-N.H., wrote to four AI voice cloning companies – ElevenLabs, LOVO, Speechify and VEED – asking what policies and programs they had in place to prevent or deter fraud enabled by their tools.

But Meador said that when it comes to deceptive claims, it’s particularly difficult to define credulity around the use of AI. Many deepfakes, he said, are seen and consumed by many people online with the same sort of “willing suspension of disbelief” that they bring to computer-generated effects in movies.

As such, the FTC will likely have to adjudicate on a case-by-case basis rather than through “broad brush strokes.”

“I think we’ll see a lot of that in the AI context, where if you know something wasn’t meant to be real or authentic, that’s not a concern,” he said. “The question is then, what are those situations where there is an expectation that you’re being shown something authentic and quote, unquote ‘real’ as opposed to being AI generated, and was there misrepresentation or material omission” in failing to disclose that?

The post The FTC’s AI portfolio is about to get bigger appeared first on CyberScoop.

OpenAI expands Trusted Access for Cyber program with new GPT 5.4 Cyber model 

OpenAI said it is expanding its Trusted Access for Cyber program to “thousands of individuals and organizations,” who will use the company’s technology to root out bugs and vulnerabilities in their products.

The program will also incorporate GPT 5.4 Cyber, a new variant of ChatGPT that OpenAI says is specifically optimized for cybersecurity tasks. OpenAI’s goal with this release is to make advanced cybersecurity tools more widely accessible.

The company said access to the program and cybersecurity-focused model will still be governed by “strong” Know-Your-Customer and identity verification rules to help prevent the model’s spread to bad actors.

“Our goal is to make these tools as widely available as possible while preventing misuse,” the company said in a blog posted Tuesday. “We design mechanisms which avoid arbitrarily deciding who gets access for legitimate use and who doesn’t.”

OpenAI’s announcement comes one week after Anthropic rolled out Project Glasswing, a similar effort that seeks to provide major tech companies with Claude Mythos, an unreleased model that Anthropic officials have claimed is too dangerous to sell commercially.

OpenAI officials noted that they publicly announced the Trusted Access for Cyber program months earlier, and they have quietly avoided direct comparisons between GPT 5.4 Cyber and Mythos.

Cybersecurity experts in the U.S. and UK have described Mythos as a significant improvement from previous frontier models around identifying (and potentially exploiting) cybersecurity vulnerabilities, though there remains debate and speculation about the model’s ultimate impact on information security.  

Similarly, GPT 5.4 Cyber has been fine-tuned for testing and vulnerability research, though OpenAI says it wants to make iterative improvements to the program as lessons are learned.

The company has plans to allow a broader group of cyber operators to use the model to protect critical infrastructure, public services and other digital systems. It also said it is leery of having too much influence over which industries or sectors ultimately take part in the program.

“We don’t think it’s practical or appropriate to centrally decide who gets to defend themselves,” the blog stated. “Instead, we aim to enable as many legitimate defenders as possible, with access grounded in verification, trust signals, and accountability.”

The post OpenAI expands Trusted Access for Cyber program with new GPT 5.4 Cyber model  appeared first on CyberScoop.

Space Force official touts AI’s impact on cyber compliance

Seth Whitworth, who is both acting Associate Deputy Chief of Space Operations for Cyber and Data and acting chief information security officer, said he believes AI tools are shifting the way defenders review cyber risk, both for individual systems and more holistically throughout an enterprise.  

In particular, Large Language Models can be used to systematically implement fixes for the smaller but critical weaknesses that have allowed state-sponsored hackers and cybercriminals to get inside victim networks and live off the land.

“Our adversaries are not looking for the massive cybersecurity vulnerabilities – we’re actually pretty good at [defending] that,” said Whitworth Tuesday at AI Talks, presented by Scoop News Group. “They’re looking for a misconfiguration, a failed update, a tiny little thing that allows them an entry point into a very connected network.”

Many of these basic cyber hygiene problems tend to fall under existing compliance programs, but it can take more than legal mandates to fix them. Many enterprise IT networks – particularly older ones – build up technical debt over time, leading to forgotten systems, hidden routers and other forms of shadow IT that get more insecure over time.

Cybersecurity experts say agents and the Large Language Models that power them – which operate around the clock – are particularly well-suited to finding these smaller flaws and quickly exploiting them.

But Whitworth argued that the same technology can be used to reshape how organizations measure and track cyber compliance, from a sluggish box-checking exercise to something more nimble and substantive. He claimed that Space Force’s internal process for obtaining Authorities to Operate and other formal security certifications used to take three to 18 months, and “can now be done in weeks and days.”

That in turn can empower program managers to “pull in all of that massive amount of data, allow the AI – who doesn’t get tired, who doesn’t miss patterns, who doesn’t miss these components – to churn on those items and [then] deliver something” that can inform real-time changes to cybersecurity, he said.

Whitworth also acknowledged the “fear” that many organizations still have around the use of AI, as well as lingering concerns about some of the technology’s enduring limitations like hallucinations and data poisoning. He said he still gives AI-generated outputs “extra scrutiny, because I haven’t seen the trusted validation” yet.

But he also said he gets more valuable insight on the Space Force’s holistic cyber risk from using Large Language Models than he does from other security control assessments, which tend to narrowly focus on the risk of single systems or assets in isolation.

“We are operating in a highly connected, highly orchestrated world, and so moderate risk that’s accepted in one program immediately becomes moderate risk that is accepted in another program,” said Whitworth. “AI can take that whole picture and understand that when this system change impacts this system, it also impacts this [other] system.”

The post Space Force official touts AI’s impact on cyber compliance appeared first on CyberScoop.

Here’s how cyber heavyweights in the US and UK are dealing with Claude Mythos

A joint report from the Cloud Security Alliance (CSA), the SANS Institute and the Open Worldwide Application Security Project (OWASP) concludes that in the near term, organizations are “likely to be overwhelmed” by threat actors using AI to find and exploit vulnerabilities faster than defenders can patch them.

While those organizations can use AI tools to speed up their own defenses, defenders “still face a heavier relative burden due to the inherent limitations of patching.” This in turn leads to “asymmetric benefits” for attackers, who can afford to adopt the technology without the same caution and bureaucracy as a multi-billion-dollar business.

“The cost and capability floor to exploit discovery is dropping, the time between disclosure and weaponization is compressing toward zero, and capabilities that previously required nation-state resources are now becoming broadly accessible,” wrote Robert Lee, SANS Institute’s chief AI officer, Gadi Evron, CEO of Knostic, and Rich Mogull, chief analyst at CSA, who served as the primary authors.

The report marks one of the first comprehensive responses to the capabilities of Claude Mythos from the U.S., boasting cybersecurity luminaries who have set policy at the highest levels as contributing authors, including Jen Easterly, former director of the Cybersecurity and Infrastructure Security Agency, Rob Joyce, a former top White House and NSA cybersecurity official, and Chris Inglis, former National Cyber Director.

It also includes private sector stalwarts like Heather Adkins, Google’s CISO, Katie Moussouris, CEO of Luta Security, and Sounil Yu, chief technology officer at Knostic. Another 70 CISOs, CTOs and other security executives are named as editors and reviewers.

Also this week, the UK’s AI Security Institute (AISI) detailed the results of tests it performed on a preview version of Claude Mythos, calling it a “step up” from past Anthropic models in the cybersecurity arena and able to “execute multi-stage attacks on vulnerable networks and discover and exploit vulnerabilities autonomously.”

Using a mix of Capture the Flag exercises and cyber range testing, AISI researchers found that Mythos not only raised the capability ceiling for technical non-experts and apprentice-level users, it narrowed the overall gap in hacking proficiency between the two. In other words, the distinction between the capabilities of amateur “script kiddies” and mid-level hackers with technical knowledge is shrinking.

Claude Mythos and other Large Language Models are increasing the capabilities of both lower and mid-level hackers when it comes to solving cybersecurity-specific tasks and challenges. (Source: AISI)

Before April 2025, no Large Language Model could complete a single expert-level CTF problem. Mythos successfully solved nearly three quarters (73%) of them.

In cyber range tests – which are meant to simulate more complex, multi-chain attacks – the results were uneven, but also represented meaningful progress over prior Claude models.

Mythos was subjected to a 32-step attack playbook modeled on corporate networks, spanning initial network access to full network takeover. In three of the 10 simulations, the model completed an average of 24 of the 32 steps. Older versions of Claude and other frontier models never averaged more than 16.

Claude Mythos improved on other models’ ability to complete a 32-step cyberattack targeting a simulated corporate network environment. (Source: AISI)

Mythos flunked its test against a simulated operational technology cooling tower, but researchers noted that this doesn’t mean AI is bad at exploiting OT: the model actually faltered during the IT section of the exercise.

UK researchers were more measured in their analysis of Mythos, noting that their testing indicates it is “at least capable” of autonomously taking down smaller, weakly defended enterprise networks.

But they also note that their cyber ranges lack security features – like active defenders and defensive tooling – that would be common in many real-world networks and would present additional obstacles; nor did they penalize the model for triggering security alerts.

“This means we cannot say for sure whether Mythos Preview would be able to attack well-defended systems,” the researchers concluded.

Technical debt coming due

Both the US and UK reports agree that large language models are broadly moving in a similar direction of lowering the technical barrier. The US authors call for organizations to more quickly adopt AI for cyber defense while overhauling their incident response playbooks and corporate policies to account for more automated defense postures.

For its part, Anthropic has said it is not selling Mythos commercially, and last week it announced the model would be made available to Project Glasswing, a consortium of major tech companies that will use it to root out and patch vulnerabilities in commonly used products and services.

But other experts have warned that businesses and governments are not well-positioned to either absorb the influx of expected vulnerability exploitation or deftly harness AI tools of their own to counter them.

Casey Ellis, CTO and founder of Bugcrowd, wrote that recent advances in AI cyber tools have succeeded largely by “living in the places we stopped looking a decade ago.”

While the cybersecurity community has spent years focusing on application security, vulnerability triage and other “top layer” security problems, AI tools and apex-level hacking groups have been feasting on vulnerabilities in forgotten firmware, or routers whose manufacturers went out of business long ago.

This reality – that tools like Mythos can endlessly weaponize the massive technical debt of large organizations – has, as Ellis wrote, taken the traditional defender’s dilemma and “the knob that used to go to ten and turned it to seven hundred.”

Additionally, corporations and governments run on consensus-building, multiple layers of hierarchy and legal compliance. While those are all necessary when handing your cybersecurity over to automated tooling, they can also lead to a slower process and more asymmetry against defenders in the short term.

“Integration into actual production becomes the battlezone,” wrote Ellis. “Lag is real. Bureaucracy is real. Supply chains are real.”

The post Here’s how cyber heavyweights in the US and UK are dealing with Claude Mythos appeared first on CyberScoop.

Commerce setting up new AI export regime to push adoption of ‘American AI’ abroad

The Department of Commerce is putting together a catalog of AI tools that will be given special export status by the federal government to be sold abroad.

The department issued a call for proposals to participating companies in the Federal Register, looking to create a “menu of priority AI export packages that the U.S. Government will promote to allies and partners around the world.”

The companies and technologies included “will be presented by U.S. Government representatives as a standing, full-stack American AI export package and may receive priority government advocacy, export licensing review and processing, interagency coordination, and financing referrals, subject to applicable law,” the department said in a Federal Register notice Friday.

The export package was mandated through President Donald Trump’s AI executive order last year, which described the export packages as part of a larger effort to “ensure that American AI technologies, standards, and governance models are adopted worldwide” and “secure our continued technological dominance.”

“The American AI Exports Program delivers on President Trump’s directive to ensure that American AI systems – built on trusted hardware, secure data, and world-leading innovation – are deployed at scale around the world,” Secretary of Commerce Howard Lutnick said in a statement earlier this month. “By promoting full-stack American solutions, we are strengthening our economic and national security, deepening ties with allies and partners, and ensuring that the future of AI is led by the United States.”

The executive order called for certain technologies to be included in the package, including AI models and systems but also computer chips, data center storage, cloud services and networking services, along with unspecified “measures” to ensure security and cybersecurity of AI systems.

The Commerce notice envisions offering multiple packages of AI technology from “standing teams of AI companies organized to offer a complete American AI technology stack to foreign markets on an ongoing basis.” There is no limit on the number of companies that participate in a consortium, and Commerce said there isn’t “any particular legal structure” required.

While the proposal at several points refers to these packages as “American AI,” the notice does specify that foreign companies can participate.

In fact, for certain categories like hardware, the total level of U.S.-made content only needs to be 51% or greater. Member companies providing data, software, cybersecurity or application layer services can’t be incorporated or primarily based in countries like China or Russia, where national security laws may compel them to work with foreign governments or hand over sensitive data.

The potential business would be broad, covering foreign public and private sector buyers in global, regional, and country-specific markets. It also includes the potential formation of separate, “on demand” packages of companies and products meant for “specific foreign opportunities.”

But the notice also states that final decisions will be made on the basis of “national interest” by principals at the Departments of Commerce, State, Defense and Energy, as well as the White House Office of Science and Technology Policy.

Commerce does not intend to formally rank proposals or use fixed scoring formulas to approve packages of technology for the export program, and the language in the notice appears to give wide latitude to federal decision-makers to determine whether a particular proposal meets the “national interest” threshold.

“A proposal that undertakes reasonable efforts to satisfy the 51 percent hardware U.S.-content presumption is not automatically entitled to designation, and a proposal that does not satisfy that presumption is not automatically disqualified,” the notice said. 

The post Commerce setting up new AI export regime to push adoption of ‘American AI’ abroad appeared first on CyberScoop.

Why is the timeline to quantum-proof everything constantly shrinking?

When Google announced last month it was moving up its own internal timeline for migrating to quantum-resistant forms of encryption, it started a broader conversation in the cybersecurity and cryptography communities: Just what was pushing one of the largest tech companies in the world to significantly accelerate its adoption of post-quantum protections for its systems, devices and data?

In the weeks since, new research has lent weight to those concerns. A joint research paper from the California Institute of Technology, the tech startup Oratomic and the University of California concluded that technological advancements in neutral atom arrays indicate a quantum computer capable of breaking classical encryption may require as few as 10,000 quantum bits (or qubits), not millions as previously thought.

Qian Xu, a Caltech researcher and coauthor of the paper, said the findings are significant and indicate that such a computer could potentially be operational by the end of the decade.

“For decades, qubit count has been viewed as the main obstacle to fault-tolerant quantum computing,” Xu said in a statement. “I hope our work helps shift that perspective.”

Google’s Quantum AI division released its own research paper around the same time, outlining a twenty-fold decrease in the number of physical qubits believed to be needed to break some of the most popular 256-bit elliptic curve encryption algorithms currently used to protect cryptocurrencies.

“We note that while viable solutions like [post-quantum cryptography] exist, they will take time to implement, bringing increasing urgency to act,” wrote Ryan Babbush, director of research, and Hartmut Neven, vice president of engineering at Google.

Google’s decision to accelerate its shift to post-quantum encryption reflects a growing consensus. Over the past year, CyberScoop has heard similar concerns from tech and government officials, typically centered on two quantum-related threats facing governments and businesses today.

One is the capability of foreign nations and cybercriminals to collect sensitive, encrypted data today in the hopes of breaking it later with a quantum computer. This “harvest now, decrypt later” technique is one of the main reasons proponents push for faster adoption of post-quantum encryption.

The second stems from a string of notable quantum computing breakthroughs over the past two years, many led by researchers in China.

Andrew McLaughlin, chief operating officer of SandboxAQ, a Software-as-a-Service company focused on AI and quantum computing technologies, said the concerns can be summed up as “hardware, math and China.”

Advancements in areas like neutral atom arrays have given scientists more powerful hardware, while breakthroughs in mathematics like that in the Google research paper have found ways to use that hardware more efficiently. 

But he also pointed to what he described as exciting (and worrying) advancements in the field from some of America’s greatest international rivals.

Beijing has invested heavily in quantum computing, empowering top scientists like Pan Jianwei, a professor at China’s University of Science and Technology, with the resources and support to push the boundaries of technological development and position China as a world leader in quantum science.

Late last year, Chinese state media reported that Huanyuan 1, a 100-qubit quantum computer developed by researchers at Wuhan University under a Chinese government grant program, had been approved for commercial use. The reports claim that orders worth more than 40 million yuan (or $5.6 million) have already been processed, including sales to subsidiaries of domestic telecom China Mobile and the government of Pakistan.

Experts say quantum computers pose a potentially exceptional threat to blockchain-based cryptocurrencies.

Nathaniel Szerezla, chief growth officer at Naoris Protocol, a company that develops quantum-resistant encryption for blockchain infrastructure, said the paper from Oratomic and Caltech has “shifted the timeline” for planning around quantum encryption, particularly for cryptocurrency and blockchain platforms.

The underlying assumption had been that a “fault-tolerant” quantum computer (i.e., one capable of threatening classical encryption) would require millions of qubits, but the paper suggests it may need as few as 10,000.

“Ultimately, we have gone from planning for a threat two decades out to one that overlaps with systems actively being deployed and funded,” Szerezla said.

For digital assets like cryptocurrency, the implications are “immediate” because the private-key cryptography underpinning billions of dollars on the blockchain was never designed to withstand attacks from a quantum computer.

“Migrating a live blockchain to post-quantum standards is a different problem entirely from upgrading a centralized system,” Szerezla continued. “You are dealing with immutable ledgers, billions in locked liquidity, and decentralized governance that cannot mandate a coordinated upgrade.”

Not everyone believes that we are on the cusp of a quantum hacking apocalypse.

On Bluesky, Matthew Green, a computer science professor and cryptography expert at Johns Hopkins University, called the Google and Oratomic papers a good “precautionary” analysis of the long-term challenge of quantum-resistant encryption.

However, he expressed skepticism that quantum computing had enough “lucrative immediate applications” to push the field beyond its foundational research stage to more practical applications. He also questioned whether some of the newer quantum-resistant algorithms vetted by NIST would truly stand up to a real quantum computer. They were designed to protect against a threat that is still largely theoretical, and several of the post-quantum algorithms initially evaluated by NIST have turned out to contain vulnerabilities that could be exploited by classical computers.

That’s if one does indeed arrive in the next decade. Green said this week that he’s not convinced quantum-enabled hacks will be something to worry about in his lifetime, though he acknowledged that prediction might “haunt him” someday.

Nevertheless, “I’d bet huge amounts of money against a relevant quantum computer by 2029 or even 2035,” he wrote.

The post Why is the timeline to quantum-proof everything constantly shrinking? appeared first on CyberScoop.

Wyden warns Social Security chief: Trump’s voter database is ‘blatant voter suppression’

Sen. Ron Wyden, D-Ore., warned Social Security Administration chief Frank Bisignano that any follow-through on President Donald Trump’s executive order creating a new database of U.S. voters using agency data would be viewed by Democrats as a conscious choice on the part of SSA officials to participate in “blatant voter suppression.”

“Facilitating Donald Trump’s directive to create a flawed voter database would be willing participation in blatant voter suppression ahead of consequential midterm elections,” Wyden, the top Democrat on the Senate Finance Committee, wrote in a letter to Bisignano sent Friday.

The executive order, issued March 31, directs the Homeland Security secretary, the director of U.S. Citizenship and Immigration Services and the commissioner of the Social Security Administration to compile lists of American voters for each state, including their supposed citizenship status.

To build the lists, the agencies would rely on the controversial Systematic Alien Verification for Entitlements database that DHS has been building under the Trump administration, as well as Social Security and federal citizenship and naturalization records.

Those lists would then be transmitted to states, most of which have already rejected previous Trump administration efforts to collect voter data or dictate voter registration lists. Another section of the order would direct the postmaster general to develop a similar state-by-state list of voters eligible to vote by mail.

“The clear intent of this executive order is to undermine vote-by-mail and disenfranchise eligible voters,” Wyden wrote. “SSA has a duty to ensure its data is not misused as part of this effort.”

Wyden echoed numerous state officials and election experts in calling the Trump administration’s executive order an unconstitutional encroachment by the executive branch on election authorities that the U.S. Constitution clearly assigns to Congress and the states.

The White House’s executive order has already been challenged in lawsuits from state officials and voting rights advocates, and a previous, less ambitious executive order issued last year that attempted to assert similar executive branch authorities was largely overturned by U.S. courts.

Wyden’s missive essentially asks Bisignano to consider whether following the Trump administration’s order would conflict with his responsibility to safeguard Social Security records under laws like the Privacy Act and the Social Security Act.

He asks how the agency will ensure it’s not disenfranchising voters, and whether it sought permission from citizens to use their Social Security data for a federal elections list, noting that the agency’s own regulations limit the sharing of Social Security data to “routine use for determining eligibility or amount of benefit in a health or income maintenance program.”

Expanding the agency’s role to elections — an area it has no background or experience in — would be in direct conflict with those rules.

“Simply put, sharing Americans’ personal data to DHS for creating a ‘state citizenship’ list does not meet this standard,” Wyden wrote.

Akira ransomware group can achieve initial access to data encryption in less than an hour

The Akira ransomware group has compromised hundreds of victims over the past year with a well-honed attack lifecycle that has whittled down the time from initial access to data encryption to less than four hours, according to cybersecurity firm Halcyon.

Akira has been active since 2023, racking up at least $245 million in ransom payments from victims through September 2025. The cybercriminal outfit likely includes former members and affiliates of the now-defunct Conti ransomware group, and is known for its polished approach to digital extortion.

A primary example can be found in the efficiency of Akira’s infection cycle, which has reduced incident response times to hours. According to Halcyon, Akira is known for using zero-day vulnerabilities, buying exploits from initial access brokers and exploiting VPNs lacking multifactor authentication to infect their victims. Akira also uses a process known as “intermittent encryption,” whereby large files can be encrypted faster in smaller blocks.
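To make the intermittent-encryption idea concrete, the sketch below scrambles only every third fixed-size block of a buffer, leaving the rest untouched. The chunk size, skip interval and XOR stand-in cipher are all hypothetical choices for illustration, not Akira's actual parameters.

```python
# Illustrative sketch of "intermittent" (partial) encryption: only every
# Nth fixed-size block is touched, so a large file is corrupted beyond use
# in a fraction of the time full encryption would take. The XOR "cipher",
# chunk size and skip interval are stand-ins, not Akira's real parameters.

CHUNK = 64 * 1024   # hypothetical 64 KB block size
SKIP = 3            # encrypt one block, leave the next two untouched

def intermittent_encrypt(data: bytes, key: bytes) -> bytes:
    out = bytearray(data)
    for i, start in enumerate(range(0, len(data), CHUNK)):
        if i % SKIP != 0:
            continue  # skipped blocks are the source of the speedup
        for j in range(start, min(start + CHUNK, len(data))):
            out[j] ^= key[j % len(key)]
    return bytes(out)

blob = bytes(range(256)) * 1536                    # 384 KB of sample data
scrambled = intermittent_encrypt(blob, b"not-a-real-key")
changed = sum(a != b for a, b in zip(blob, scrambled))
print(f"{changed / len(blob):.0%} of bytes modified")  # prints "33% of bytes modified"
```

Even with two-thirds of the bytes untouched, the interleaved damage makes the file unusable, which is why the technique trades so little effectiveness for so much speed.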

“Akira is more stealthy and less aggressive allowing the ransomware to move swiftly through the entire ransomware attack kill chain from initial access to exfiltration, and encryption in as little as 1 hour without detection,” Halcyon wrote in a blog published Thursday. “In most cases, the time from initial access to encryption was less than four hours.” 

Additionally, while most ransomware operators tend to spend “about 90-95%” of their time developing their encryption malware and 5-10% on crafting decryptors, Halcyon said Akira has made “extensive efforts to ensure the recovery of large files, like server images,” going so far as to temporarily auto-save files with custom .akira extensions to ensure they can be recovered if the encryption process is interrupted.

Halcyon’s blog notes that these efforts are likely less due to ethical principles than because the group believes offering functional decryptors increases the chance that a business will pay the ransom. Akira’s combination of rapid infection while offering firms a more reliable way to recover their data is something that “sets it apart from many ransomware operators.”

“The group’s ability to move from initial access to full encryption in under an hour, while maintaining recovery guarantees that incentivize victim payment, reflects a mature, business-driven criminal enterprise,” Halcyon said.

The group has been observed exploiting vulnerabilities in Veeam backup and replication servers, Cisco VPNs and SonicWall appliances. Like other ransomware groups, Akira uses a double-extortion model against victims, stealing their data before encrypting it, then threatening to publish the stolen data online if businesses don’t pay.

Last year, the FBI and the Cybersecurity and Infrastructure Security Agency flagged Akira as one of the top ransomware criminal groups in the world, primarily targeting small- and medium-sized businesses in the manufacturing, education, IT, health care, financial and agricultural sectors.

White House executive order purports to limit mail-in voting, mandate federal voter lists 

President Donald Trump signed an executive order Tuesday that purports to limit mail-in voting, though critics say the move will almost certainly be challenged in court on constitutional grounds.

The order instructs the Homeland Security secretary, the director of U.S. Citizenship and Immigration Services and the commissioner of the Social Security Administration to compile lists of American voters for each state, including their supposed citizenship status.

To build the lists, the agencies would rely on the controversial Systematic Alien Verification for Entitlements database that DHS has been building under the Trump administration, as well as Social Security and federal citizenship and naturalization records.

Those lists would then be transmitted to states, most of which have already rejected previous Trump administration efforts to collect voter data or dictate voter registration lists. The White House order instructs the Department of Justice to prioritize the investigation and prosecution of state and local officials or any others involved in the administration of federal elections who issue federal ballots to individuals not eligible to vote in a federal election.  

The order also directs the postmaster general to issue new proposed regulations that require mail-in ballots to be mailed in special envelopes that include barcodes for tracking. Crucially, it asks states ahead of time whether they intend to submit a list of voters eligible to vote by mail, and attempts to assert the authority to deny sending ballots to states that do not participate. It also claims the attorney general is entitled to withhold federal funding from noncompliant states.

The Trump administration’s previous efforts to aggressively assert executive branch authority over elections have been rebuffed by courts, with judges noting the U.S. Constitution explicitly empowers states and Congress to set the time, manner and place for elections. 

The order justifies White House involvement by claiming it has “an unavoidable duty” under Article II of the Constitution to maintain confidence in election outcomes by preventing violations of criminal law. But numerous post-election audits, investigations and recounts have consistently confirmed over decades that criminal non-citizen voting is infinitesimally rare in U.S. elections, and that among the small number of cases that do occur, most turn out to be accidents or decades-old administrative errors.

Criticism from election officials, experts and Democrats in Congress was swift.

Minnesota Secretary of State Steve Simon, who has resisted demands by the DOJ to hand over state voter data, predicted the order “will meet the same fate” as previous executive orders in being struck down by courts. Other secretaries of state have issued similar statements rejecting the order’s constitutionality. 

“Our office has helped stop his actions before and we are now exploring our legal options to stop this new order from taking effect,” Simon said in a statement to CyberScoop.

He also stumped for mail-in voting, calling it a secure, trustworthy and convenient way for citizens to exercise their right to vote. Local election officials “track every ballot” sent by mail and have a range of checks and safeguards to ensure ballots are sent only to eligible voters and that each voter can cast only one ballot.

“Absentee voters who choose to vote by mail must provide a matching ID number, sign their signature envelope, and have a witness sign their ballot envelope before returning their ballot,” Simon said. “All of that information is tracked digitally by election administrators. Voters are able to track the status of their ballot using our online ballot tracker tool. Any attempt to register or cast a ballot while ineligible is referred for investigation and potential prosecution.”

Sen. Alex Padilla, D-Calif., called the order a “blatant, unconstitutional abuse of power” and said he expected “immediate” lawsuits challenging its legality.

“The President and the Department of Homeland Security have no authority to commandeer federal elections or direct the independent Postal Service to undermine mail and absentee voting that nearly 50 million Americans relied on in 2024,” Padilla said in a statement. “A decade of lies about election fraud does not change the Constitution.”

David Becker, executive director for the Center for Election Innovation and Research, said the administration’s latest mandates are so far outside the constitutional limits of the executive branch they will almost certainly be halted through lawsuits. 

“Some may freak out about this, but honestly, this is hilarious,” Becker wrote on Bluesky. “It’s clearly unconstitutional, will be blocked immediately, and the only thing it will accomplish is to make liberal lawyers wealthier. He might as well sign an EO banning gravity.”

However, while lower courts have consistently struck down previous orders and lawsuits from the White House, election experts have expressed concerns that the Supreme Court’s conservative majority — which has clashed with lower courts over the Trump administration’s constitutional authority — appeared receptive to the administration’s position in a recent oral argument.

Alexandra Chandler, director of the Free and Fair Elections program at nonprofit Protect Democracy, said in a statement that the White House order “is more like an attempted executive override” of state authority over elections.

“Meant to solve for a problem that exists only in the false rhetoric of the Trump administration and its political fortunes, the [order] is a classic example of their playbook to deceive the American people and disrupt the election process in order to deny any future results that don’t suit them,” Chandler said.

Researchers say credential-stealing campaign used AI to build evasion ‘at every stage’

A new malware-based credential-stealing campaign, which researchers are calling “DeepLoad,” has been infecting enterprise business IT environments.

In a report released Monday, ReliaQuest AI researchers Thassanai McCabe and Andrew Currie say the most relevant feature of this attack is the way it uses artificial intelligence and other engineering “to defeat the controls most organizations rely on, turning one user action into persistent, credential-stealing access.”

DeepLoad is delivered to victims via “QuickFix” social-engineering techniques, such as fake browser prompts or error pages. If the user falls for the scheme, the malware developers — or more likely their AI tools — put a lot of work into building evasion of security technology “at every stage” of the attack chain.

The loader “buries functional code under thousands of meaningless variable assignments,” and the payload runs behind a Windows lock screen process that is “overlooked by security tools” monitoring for threats. ReliaQuest said “the sheer volume” of code padding likely rules out human-only involvement.

“We assess with high confidence that AI was used to build this obfuscation layer,” McCabe and Currie write. “If so, organizations should expect frequent updates to the malware and less time to adapt detection coverage between waves.”
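One crude way defenders can surface that kind of padding is to measure what fraction of assigned variables are never read again; heavily padded loaders score near 1.0 while ordinary scripts score low. The heuristic below is purely illustrative, operates on Python source rather than the loaders seen in the wild, and is not a method from the ReliaQuest report.

```python
import ast

def junk_assignment_ratio(source: str) -> float:
    """Fraction of assigned names that are never subsequently read.
    Heavily padded code scores near 1.0; ordinary scripts score low.
    An illustrative heuristic only, not a production detector."""
    tree = ast.parse(source)
    assigned, read = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                assigned.add(node.id)
            elif isinstance(node.ctx, ast.Load):
                read.add(node.id)
    if not assigned:
        return 0.0
    return len(assigned - read) / len(assigned)

# Simulate a padded loader: 1,000 junk assignments burying one real call.
padded = "\n".join(f"x{i} = {i}" for i in range(1000)) + "\nprint('payload')"
print(f"{junk_assignment_ratio(padded):.2f}")   # prints "1.00"
```

Because an AI-generated loader can trivially randomize its junk names between waves, a ratio-based signal like this degrades far more slowly than any fixed file signature, which is the broader point the researchers make about behavioral detection.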

DeepLoad can steal credentials through real-time keylogging, and even if security teams block the initial loader, it can persist through backup contingencies.

“In the incidents we investigated, the loader spread to connected USB drives, which means the initial host is unlikely to be the only impacted system,” McCabe and Currie wrote. “Even after cleanup, a hidden persistence mechanism not addressed by standard remediation workflows re-executed the attack three days later.”

ReliaQuest’s research offers more evidence that over the past year, some traditional static cybersecurity practices — such as searching for malware signatures or file-based patterns — may be fast becoming obsolete, as AI models can spin out endless variations of attack tooling with unique signatures.

Other organizations like Google and Anthropic have been sounding the alarm that AI-enhanced cyberattacks are dramatically shrinking the time defenders have to respond to a compromise.

At the RSA Conference in San Francisco this year, experts told CyberScoop that the next two years are set to be a “perfect storm” favoring AI-powered offense, with cybercriminals and nation-states more quickly adapting the technology to add greater speed and scale to their attacks than their defensive counterparts.

McCabe and Currie say the likely continued use of AI to frustrate static analysis monitoring means that defenders will need to shift focus to other indicators of compromise.

“Based on what we’ve observed, organizations must prioritize behavioral, runtime detection—not file-based scanning—to catch this campaign (and similar ones) early,” they wrote. 

FCC pushes new rules to crack down on robocallers, foreign call centers

The Federal Communications Commission is moving to crack down on illegal robocalls and the use of foreign call centers.

At a meeting Thursday, the three-member commission unanimously approved a new proposed regulation to increase certification and disclosure requirements for obtaining phone numbers, while also expanding those same requirements to all providers seeking phone numbers from the North American Numbering Plan Administrator and resellers.

The rule – which will be shaped through public comments – is meant to make it more difficult for spammers, scammers and other illegal robocallers to obtain legitimate phone numbers. The FCC’s Office of Communications said a majority of the agency’s investigations into illegal robocalling have involved resold numbers.

It would also impose stricter disclosure requirements on telecoms about the callers on their networks and their identities, information that will help organizations like the Industry Traceback Group track and identify robocallers as their calls hop across the nation’s patchwork of decentralized telephone networks.

Commissioner Anna Gomez said the proposed rules would help raise the bar for bad actors to obtain valid phone numbers and help close gaps in reporting that make it harder for industry and regulators to find and expunge robocallers from networks.

“Right now, bad actors are exploiting gaps in a phone number system that was designed for a simpler time,” Gomez said.

The commission plans to explore a range of solutions to strengthen numbering requirements and policies, including cracking down on common tactics that rely heavily on resold numbers — like number cycling, where “service providers churn through large quantities of telephone numbers [on] a rotating and even single-use basis to evade detection.”

Commissioner Olivia Trusty said that while changes in technology and the marketplace have brought significant benefits to consumers, they have also “made it more difficult to identify who is using telephone numbers and for what purposes, complicating both robocall enforcement and numbering administration.”

Last month, the FCC finalized regulations that require telecoms to annually certify that their caller information is accurate and provide updated information to the agency’s Robocall Mitigation Database. 

A separate proposed regulation passed by the commission Thursday would place new restrictions on the ability of U.S. telephone providers to outsource their call-center services to foreign countries. It specifically asks about the feasibility of giving consumers the option to require that their calls be routed to U.S.-based call centers, requiring calls involving “certain types of sensitive information” to be processed at U.S. locations, requiring providers to disclose the use of overseas centers to callers during a call and requiring operators to speak proficient English.

FCC Chair Brendan Carr touted the initiative as part of the Trump administration’s stated efforts to convince American companies to onshore more of their services in the U.S.

But organizations like the AARP have also found that overseas call centers operating outside of U.S. or international law play a big role in the nation’s robocalling epidemic. In a press conference after the meeting, Carr echoed that sentiment, claiming that some criminal scammers plaguing Americans today first broke into the industry by working at outsourced call centers.

“I think it also helps us crack down on some of the illegal robocallers,” Carr said about the new onshoring rules. “At the end of the day, I think American callers should expect and deserve to reach American call centers.”
