Reading view

There are new articles available, click to refresh the page.

Pressure mounts on Canvas as data leak extortion deadline looms

Pressure is mounting on Instructure, the company behind Canvas, as cybercriminals threaten to leak a trove of sensitive data they claim was stolen during a prolonged cyberattack on the widely used education tech platform.

Widespread outages left schools, students and teachers temporarily unable to access critical data late last week after the company took Canvas offline following additional malicious activity, including a defacement of the platform’s login page. By Friday, the company said Canvas — a central hub for K-12 and university coursework, exams, grades and communication — was back online and fully operational. 

ShinyHunters, a decentralized crew of prolific cybercriminals affiliated with The Com, claimed responsibility for the attack on its data leak site and is attempting to extort the company for an unknown ransom amount. Instructure hasn’t confirmed the existence of a ransom demand and declined to answer questions about its response.

The threat group initially set a deadline of May 6 — four days after Instructure previously said the incident was contained soon after it disclosed the attack — claiming it stole 3.65 terabytes of data spanning 275 million records across 8,809 school systems. 

When that deadline passed without payment, ShinyHunters escalated its pressure on the company by “injecting an extortion message directly into the Canvas login pages of roughly 330 institutions, and pivoted to school-by-school extortion with a current deadline of May 12,” Cynthia Kaiser, senior vice president of Halcyon’s Ransomware Research Center, told CyberScoop.

“The scope makes this one of the largest single education-sector exposures we’ve tracked,” she added.

The additional public pressure prompted Infrastructure to take Canvas offline, disrupting schoolwork and access to critical systems nationwide. 

Instructure CEO Steve Daly apologized over the weekend for the company’s inconsistent communication and deficient public response to the cyberattack. 

“Over the past few days, many of you dealt with real disruption. Stress on your teams. Missed moments in the classroom. Questions you couldn’t get answered. You deserved more consistent communication from us, and we didn’t deliver it. I’m sorry for that,” he said in a statement.

Daly acknowledged that the attack, which remains under investigation aided by CrowdStrike, exposed usernames, email addresses, course names, enrollment information and messages. He insisted that course content, submissions and credentials were not compromised.

The temporary but widespread disruption caused has spurred broad concern across the education sector as ransomware experts and threat hunters continue to track developments. The cyberattack also caught the attention of lawmakers on Capitol Hill. 

The House Homeland Security Committee on Monday published a letter to Daly seeking a briefing with him or a senior leader at Instructure by May 21. 

“The recurrence of an intrusion within days of an initial breach disclosure, and Instructure’s apparent failure to fully remediate the underlying vulnerabilities during that window, raise serious questions about the company’s incident response capabilities and its obligations to the institutions and individuals whose data it holds,” House Homeland Security Chairman Andrew Garbarino, R-N.Y., wrote in the letter to Daly.

The committee wants to learn more about the “circumstances of both intrusions, the the nature and volume of data accessed, the steps Instructure has taken and is taking to contain the threat and notify affected institutions, and the adequacy of the company’s coordination with federal law enforcement and the Cybersecurity and Infrastructure Security Agency,” he added. 

CISA did not describe the extent of its involvement in Instructure’s response. “CISA is aware of a potential cyber incident affecting Canvas. As the nation’s cyber defense agency, we provide voluntary support and cybersecurity services to organizations in responding to and recovering from incidents,” Chris Butera, the agency’s acting executive assistant director for cybersecurity, said in a statement.

Instructure’s timeline of the attack has changed and remains incomplete. The company said it first detected unauthorized activity in Canvas on April 29 and immediately revoked the attacker’s access and initiated an incident response. Researchers not directly involved with the formal investigation said ShinyHunters gained access to Canvas at least a few days earlier.

The follow-on malicious activity on May 7 — the defacement of public login pages — was tied to the same incident, the company said. 

“We have since confirmed that the unauthorized actor carried out this activity by exploiting an issue related to our Free-For-Teacher accounts. This is the same issue that led to the unauthorized access the prior week. As a result, we have made the difficult decision to temporarily shut down Free-For-Teacher accounts,” the company said in an updated post about the incident.

Instructure did not answer questions about the vulnerability or explain how attackers intruded its systems. The company said it also revoked privileged credentials and access tokens for affected systems, rotated internal keys, restricted token creation pathways, and deployed additional security controls and monitoring.

Canvas is fully operational and safe to use, the company said, adding that CrowdStrike has reviewed known indicators of compromise and “found no evidence that the threat actor currently has access to the platform.”

Access still remains spotty and unavailable for some Canvas users as school districts restore the platform in phases after conducting their own internal checks.

Halcyon published an alert about the attack Friday, including a screenshot of the message that some school staff, guardians and students encountered before Instructure took the learning management system offline.

ShinyHunters threatened Instructure and all affected schools to contact the threat group and reach a resolution by end of day Tuesday. The cybercrime group, which has a “known pattern of removing victim entries once communications and negotiations have started,” removed Instructure from its data leak site after it defaced the Canvas login pages, Halcyon said. 

ShinyHunters is a notorious data theft extortion group that previously hit major cloud platforms, including Salesforce and Snowflake, via voice phishing, credential theft and supply-chain attacks. 

“Historically, their claims of compromise typically hold up, but they often exaggerate the impact, scale, and type of data stolen,” Kaiser said.

Education is a recurring and consistent target for cybercriminals. Researchers at Halcyon tracked more than 250 ransomware attacks on education institutions globally last year. Yet, the attack on Canvas stands apart from most of these attacks because of its widespread use and downstream impact.

“This is student, parent, and staff data, including minors, which creates downstream phishing and impersonation risk that will outlast the immediate incident,” Kaiser said. 

“By compromising a shared platform used across thousands of schools, ShinyHunters hit the entire education sector in one move, which is the same playbook Clop ran against Oracle EBS customers last fall,” she added. “Among 2026 incidents against critical infrastructure, this is at or near the top for education-sector impact, and it highlights a trend of third-party software vendors now being part of an attack surface, and causing cascading effects across an entire sector.”

Cybersecurity professionals focused on ransomware and data theft extortion consistently encourage victims to not pay ransoms, but they also often acknowledge that companies have to make tough decisions based on their own interests and the security of their customers or users caught up in the aftermath.

Allison Nixon, chief research officer at Unit 221B, said the threat group claiming responsibility for the attack should not be trusted. 

“They are claiming they will delete the data after they are paid, and if they are not paid that they will leak the data,” she told CyberScoop. “This is in line with the past data extortion scams run by the same and related Com actors, who have made false statements to victims and to the public in the past.”

Instructure hasn’t indicated what it plans to do as part of any effort to prevent the leak of stolen data. 

Daly — a longtime security executive who was previously CEO at Ivanti — ended his mea culpa with a pledge to improve communications and provide a summary of a forensics report soon.

“Last week, we made a call to get the facts right before speaking publicly. That instinct isn’t wrong, but we got the balance wrong. We focused on fact-finding and went quiet when you needed consistent updates. You’ve been clear about that, and it’s fair feedback. We will change that moving forward,” he said. 

“Rebuilding trust takes time,” Daly added. “We’re going to earn it back through consistent action and honest communication.”

The post Pressure mounts on Canvas as data leak extortion deadline looms appeared first on CyberScoop.

Google spotted an AI-developed zero-day before attackers could use it

Google researchers found a zero-day exploit developed by artificial intelligence and alerted the susceptible vendor to the imminent threat before a well-known cybercrime group initiated a mass-exploitation campaign, the company said in a report released Monday.

The averted disaster probably isn’t the first time attackers used AI to build a zero-day, but it is the first time Google Threat Intelligence Group found compelling evidence that this long-predicted and worrying escalation in vulnerability-exploit development is underway.

“We finally uncovered some evidence this is happening,” John Hultquist, chief analyst at GTIG, told CyberScoop. “This is probably the tip of the iceberg and it’s certainly not going to be the last.”

Google declined to identify the specific vulnerability, which has been patched, or name the “popular open-source, web-based administration tool” it affected. It did, however, note that the defect impacted a Python script that allows attackers to bypass two-factor authentication for the service.

Researchers also withheld details about how they discovered the zero-day exploit or the cybercrime group that was preparing to use it for a large-scale attack spree.

The threat group has a “strong record of high-profile incidents and mass exploitation,” Hultquist said, suggesting the attackers are prominent and well-known among cybersecurity practitioners. 

GTIG is fairly confident the threat group was using AI in a meaningful way throughout the entire process, but it has yet to determine if the technology also discovered the vulnerability it ultimately developed into an exploit.

Whichever AI model the attackers used — Google is confident it wasn’t Gemini or Anthropic’s Mythos — left artifacts throughout the exploit code that are inconsistent with human developers. This evidence, which included documentation strings in Python, highly annotated code and a hallucinated but non-existent CVSS score, tipped Google off to the fact AI was heavily involved, Hultquist said. 

GTIG has been warning about and expecting AI-developed exploits to hit systems in the wild, especially after its Big Sleep AI agent found a zero-day vulnerability in late 2024.

“I think the watershed moment was two years ago when we proved this was possible,” Hultquist said, adding that there are probably several other AI developed zero-days in play now. 

Yet, to him, the discovery of a zero-day exploit developed by AI is less concerning than what this single instance forebodes even further.

“The game’s already begun and we expect the capability trajectory is pretty sharp,” Hultquist said. “We do expect that this will be a much bigger problem, that there will be more devastating zero-day attacks done over this, especially as capabilities grow.”

The post Google spotted an AI-developed zero-day before attackers could use it appeared first on CyberScoop.

Flaw in Claude’s Chrome extension allowed ‘any’ other plugin to hijack victims’ AI

As businesses and governments turn to AI agents to access the internet and perform higher-level tasks, researchers continue to find serious flaws in large language models that can be exploited by bad actors.

The latest discovery comes from browser security firm LayerX, involving a bug in the Chrome extension for Anthropic’s Claude AI model that allows any other plugin – even ones without special permissions – to embed hidden instructions that can take over the agent

“The flaw stems from an instruction in the extension’s code that allows any script running in the origin browser to communicate with Claude’s LLM, but does not verify who is running the script,” wrote LayerX senior researcher Aviad Gispan. “As a result, any extension can invoke a content script (which does not require any special permissions) and issue commands to the Claude extension.”

Gispan said he was able to execute any prompt he wanted, blow through Claude’s safety guardrails, evade user confirmation and perform cross-site actions across multiple Google tools. As a proof of concept, LayerX was able to exploit the flaw to extract files from Google Drive folders and share them with unauthorized parties, surveil recent email activity and send emails on behalf of a user, and pilfer private source code from a connected GitHub repository.

The vulnerability “effectively breaks Chrome’s extension security” by creating “a privilege escalation primitive across extensions, something Chrome’s security model is explicitly designed to prevent,” Gispan wrote.

A graphic depicting how a vulnerability exploits the trust boundaries in Clade Chrome’s extension. (Source: LayerX)


Claude relies on text, user interface semantics, and interpretation of screenshots to make decisions, all things that an attacker can control on the input side. The researchers modified Claude’s user interface to remove labels and indicators around sensitive information, like passwords and sharing feedback, then prompted Claude to share the files with an outside server.

That means cybersecurity defenders often have nothing obviously malicious to detect. Where there is visible activity, the model can be prompted to cover its tracks by deleting emails and other evidence of its actions.

Ax Sharma, Head of Research at Manifold Security, called the vulnerability “a useful demonstration of why monitoring AI agents at the prompt layer is fundamentally insufficient.”

“The most sophisticated part of this attack isn’t the injection, but that the agent’s perceived environment was manipulated to produce actions that looked legitimate from the inside,” said Sharma. “That’s the class of threat the industry needs to be building defenses for.”

Gispan said LayerX reported the flaw to Anthropic on April 27, but claimed the company only issued a “partial” fix to the problem. According to LayerX, Anthropic responded a day later to say that the bug was a duplicate of another vulnerability already being addressed in a future update.   

While that fix, issued May 6, introduced new approval flows for privileged actions that made it harder to exploit the same flaw, Gispan said he was still able to take over Claude’s agent in some scenarios.

“Switching to ‘privileged’ mode, even without the user’s notification or consent, enabled circumventing these security checks and injecting prompts into the Claude extension, as before,” Gispan wrote.

Anthropic did not respond to a request for comment from CyberScoop on the research and mitigation efforts.

The post Flaw in Claude’s Chrome extension allowed ‘any’ other plugin to hijack victims’ AI appeared first on CyberScoop.

CISA wants critical infrastructure to operate ‘weeks to months’ in isolation during conflict

The Cybersecurity and Infrastructure Security Agency is urging critical infrastructure owners and operators to plan for delivering essential services under emergency conditions – potentially for months at a time.

The federal government’s top cybersecurity agency warned that state-sponsored hackers, particularly two Chinese groups known as Salt Typhoon and Volt Typhoon, continue to threaten critical sectors like electricity, water, and internet. 

The agency is now working with the private sector to protect operational technology – the systems that control the heavy machinery and equipment that powers most critical infrastructure – from attacks that enter through business IT systems or third-party vendor products.

The initiative  — known as CI Fortify – will include CISA conducting targeted technical assessments of critical infrastructure entities and aims to create plans that “allow for safe operations for weeks to months while isolated” from IT networks and third-party tools, according to the agency’s website.

Nick Andersen, CISA’s acting director, told reporters that the goal is “service delivery [that] can still reach critical infrastructure after the asset owner has disconnected with IT and OT, disconnected from third party vendors and service provider connections and disconnected from third party telecommunications equipment.”

Over the past two years, wars in Ukraine, Gaza, Iran and elsewhere have seen water plants, power substations, data centers and other critical infrastructure targeted by kinetic or cyberattacks.

Andersen said the agency has already begun engaging with some companies to pilot the assessments and expects that work to ramp up considerably as CISA hires additional staff in the coming months.

He declined to name the entities involved in the pilot program, but said they will focus on organizations that support national security, defense, public health and safety and economic continuity. He added that CISA’s assessments will vary from sector to sector depending on their unique needs.

“Water isn’t necessarily designed to prioritize specific customer needs outside of recovery periods, while energy and transportation have more immediate tradeoffs for selecting one load or one set of cargo over another,” Andersen said as an example.

One pillar of CISA’s strategy is isolation: essentially turning off all third-party and business network connections to an OT network when facing an emergency or unknown vulnerability.

Organizations also need to develop an internal plan for what acceptable service levels look like under those conditions and reach understandings with their critical customers, like U.S. military installations and lifeline services.

The second pillar, recovery, involves best practices for organizations: backing up files, documenting systems and having manual backups for operations when normal computer systems are down.

In conversations with cybersecurity specialists who focus on critical infrastructure and operational technology, it is widely assumed that China is not the only nation to have broadly compromised Americans critical infrastructure. That hacking groups tied to other nations have almost surely noticed and exploited the same basic vulnerabilities and hygiene issues found by the Typhoons.

Agencies like the FBI and Federal Communications Commission have touted efforts to purge Chinese hackers and work voluntarily with telecoms to harden their network security. But U.S. national security officials and cybersecurity defenders have consistently said both Salt Typhoon and Volt Typhoon remain active threats to U.S. critical infrastructure.

The post CISA wants critical infrastructure to operate ‘weeks to months’ in isolation during conflict appeared first on CyberScoop.

A college student is suing a dating app that allegedly used her TikTok videos to target men in her dormitory

A 19-year-old woman is suing the makers of a dating app, alleging they took a video she posted online, repurposed it without her consent into an advertisement for the app, then used geofencing to target that ad to people in her area. 

According to the lawsuit filed Apr. 28 in Tennessee and an interview with her lawyer, the company allegedly used geotargeting to serve the ads on platforms like Snapchat to users near her, including men in her own dormitory. 

The allegations, if proven, offer another example of how modern technology has made it easier than ever today for bad actors to imitate, objectify, profit off and harass individuals, often women. Recent laws like the Take It Down Act have focused particularly on the use of AI to create sexualized imagery of their victims. In this case, the lawsuit alleges that Meete used not AI, but simple video editing, a voiceover and geofencing to create the same kind of deception. 

 On the day of her high school graduation, Kaelyn Lunglhofer posted a brief video to TikTok, wearing an orange outfit and saying a few words to her followers over background music. She went on to attend the University of Tennessee in the fall, where she began building a following as a TikTok influencer.

The complaint alleges that the makers behind the dating app Meete took that video without Lunglhofer’s consent, overlayed it with graphics advertising the app, and added a voiceover to make it appear she was saying “Are you looking for a friend with benefits? This app shows you women around you who are looking for some fun. You can video chat with them.”

Abe Pafford, Lunglhofer’s attorney, told CyberScoop that his client had no idea Meete was using her likeness until a male student in her dormitory told her he had repeatedly seen her in ads for the app on his Snapchat shortly after the two had met. 

Pafford called it “implausible” that this was a coincidence, pointing to Meete’s premise of connecting users with nearby women and the precision of geofencing technology. Before filing the case, Pafford’s law firm hired an investigative firm to gather additional evidence.

“I think the idea is they want[ed] viewers of these advertisements – and candidly this is pretty clearly targeted at male viewers – to have their eye caught by someone they may know or recognize or think they may have seen around, and that’s part of what makes it so disturbing,” he said.

Pafford said he believes Lunglhofer is far from the only person whose image Meete has misappropriated, and that most victims likely have no idea it’s happening. Lunglhofer herself only had evidence because the student who told her had saved recordings and screenshots of the ads featuring her video.

“The bottom line is we think there are likely others that have been victimized in a similar way, but finding out who they are and landing on tangible proof of that can be challenging,” he said.

After this story was published, Snap told CyberScoop it is investigating.

“Snap’s advertising policies require that advertisers have all necessary rights to the content in their ads, including the rights to any individuals featured,” Snap spokesperson Ahrim Nam said in an email. “Using someone’s likeness without their consent is a violation of our policies. Upon learning of these allegations, we are actively reviewing the matter and will take appropriate action.”

The lawsuit cites alleged violation of multiple federal and state laws, including the Lanham Act, the primary U.S. law governing trademark rights. The suit also alleges violations of Tennessee state law under the ELVIS Act, which prevents the unauthorized use of image or likeness for artists and musicians, and Tennessee common laws for defamation and right of publicity.

Lunglhofer is seeking $750,000 in punitive damages, as well as any revenue tied to the ads featuring her likeness. Pafford said that the advertisements damaged her online brand and reputation while also putting her at risk of harassment or falsely implying she was endorsing a local dating service and was open to casual hookups.

“It’s really kind of grotesque and it’s also kind of dangerous,” he said. “Someone may not be aware that this is happening and they’re targeted in this way, but you can put people at risk in ways that are really troubling if you stop to think about it.”

The suit names Quantum Communications Development Unlimited, based in the Virgin Islands, as well as Chinese companies Starpool Data Limited and Guangzhou Yuedong Interconnection Technology, as defendants. A judge has ordered representatives from all three to appear for depositions in the United States.

Quantum Communications Development Unlimited has a sparse internet footprint: their website consists of a single page with a message written in broken English and an email address that no longer appears to work. Efforts by CyberScoop to reach the company and other defendants for comment were not successful. The company is listed as Meete’s publisher on Apple’s App Store, where it describes the app as “a space where you can be yourself and meet people” and promises “safety and respect first” — adding that “Meete provides a secure environment where your privacy and safety are our top concerns.”

The description also claims the app adheres to Apple’s safety standards, citing a “Zero-Tolerance Policy regarding objectionable content and abusive behavior.” Listed safeguards include “24/7” manual reviews by moderation teams, instant reporting and blocking of other users, and AI filtering “to detect and prevent harassment before it happens.”

On Meete’s Google Play Store page, user reviews accuse the app of failing to match them to nearby users and being largely populated by bots posing as women to sell in-app currency.

Pafford acknowledged that the defendants being based overseas complicates efforts to hold them accountable under U.S. law, but argued that Meete is clearly designed to operate in the United States. The companies behind the app have filed U.S. patents and trademarks, for their business, and distribute their app through the Apple and Google Play Stores while advertising on major U.S. social media platforms like Snapchat.

Apple and Google did not respond to a request for comment.

You can read the full lawsuit below.


5/05/26: This story was updated to include comment from Snap received after publication.

The post A college student is suing a dating app that allegedly used her TikTok videos to target men in her dormitory appeared first on CyberScoop.

US government, allies publish guidance on how to safely deploy AI agents

Cybersecurity agencies from the United States, Australia, Canada, New Zealand and the United Kingdom jointly published guidance Friday urging organizations to treat autonomous artificial intelligence systems as a core cybersecurity concern, warning that the technology is already being deployed in critical infrastructure and defense sectors with insufficient safeguards.

The guidance focuses on agentic AI — software built on large language models that can plan, make decisions and take actions autonomously. In order for this software to function it needs to connect to external tools, databases, memory stores and automated workflows, allowing it to execute multi-step tasks without human review at each stage.

The guidance was co-authored by the U.S. Cybersecurity and Infrastructure Security Agency, the National Security Agency, the Australian Signals Directorate’s Australian Cyber Security Centre, the Canadian Centre for Cyber Security, New Zealand’s National Cyber Security Centre and the United Kingdom’s National Cyber Security Centre.

The agencies’ central message is that agentic AI does not require an entirely new security discipline. Organizations should fold these systems into the cybersecurity frameworks and governance structures they already maintain, applying established principles such as zero trust, defense-in-depth and least-privilege access.

The document identifies five broad categories of risk. The first is privilege: When agents are granted too much access, a single compromise can cause far more damage than a typical software vulnerability. The second covers design and configuration flaws, where poor setup creates security gaps before a system even goes live.

The third category covers behavioral risks, or cases where an agent pursues a goal in ways its designers never intended or predicted. The fourth is structural risk, where interconnected networks of agents can trigger failures that spread across an organization’s systems.

The fifth category is accountability. Agentic systems make decisions through processes that are difficult to inspect and generate logs that are hard to parse, making it difficult to trace what went wrong and why. The agencies also note that when these systems fail, the consequences can be concrete: altered files, changed access controls and deleted audit trails.

The guidance also flags prompt injection, where instructions embedded inside data can hijack an agent’s behavior to perform malicious tasks. Prompt injection has been a lingering problem with large language models, with some companies admitting that the problem may never be solved

Identity management gets significant attention throughout the document. The agencies recommend that each agent carry a verified, cryptographically secured identity, use short-lived credentials and encrypt all communications with other agents and services. For high-impact actions, a human should have to sign off, and the guidance is explicit that deciding which actions require that approval is a job for system designers, not the agent.

The agencies admit the security field has not fully caught up with agentic AI. Some risks unique to these systems are not yet covered by existing frameworks, and the guidance calls for more research and collaboration as the technology takes on a growing number of operational roles.

“Until security practices, evaluation methods and standards mature, organisations should assume that agentic AI systems may behave unexpectedly and plan deployments accordingly, prioritising resilience, reversibility and risk containment over efficiency gains,” the guidance reads. 

You can read the full guidance below.

The post US government, allies publish guidance on how to safely deploy AI agents appeared first on CyberScoop.

Two new extortion crews are speedrunning the Scattered Spider playbook

A pair of persistent and problematic threat groups affiliated with The Com are actively targeting organizations across multiple critical infrastructure sectors for rapid data theft and extortion attacks, according to CrowdStrike.

The financially-motivated attackers, which CrowdStrike tracks as Cordial Spider and Snarky Spider, have used voice-phishing and social engineering attacks to break into victims’ identity platforms and traverse SaaS environments since at least October 2025, the company said in a report Thursday, which it shared exclusively with CyberScoop prior to release. 

Adam Meyers, senior vice president of counter adversary operations at CrowdStrike, said the subgroups composed of native English speakers primarily target U.S.-based organizations in the academic, aviation, retail, hospitality, automotive, financial services, legal and technology sectors.

This “new wave of ecrime threat actors” are closely aligned with Scattered Spider and linked to other subsets of The Com, including SLSH and ShinyHunters, Meyers said. 

Because these attacks target identity systems and can expose data in other connected services beyond the initial breach point, it’s difficult to determine how many victims have been caught up in these campaigns. 

CrowdStrike’s warning closely follows research Palo Alto Networks’ Unit 42 and the Retail & Hospitality Information Sharing and Analysis Center shared last week about Cordial Spider’s string of attacks targeting organizations in the retail and hospitality industry, among others. 

Cordial and Snarky Spider have set lures via voice calls, text messages and emails directing targeting employees to phishing pages posing as their employer’s legitimate single sign-on page or primary identity provider, researchers said. 

These phishing pages, which capture credentials, session keys or tokens, depending on the workflow, provide attackers an entry point into systems, which they exploit for widespread access across victims’ entire SaaS ecosystems.

Attackers use these initial hooks to remove and establish multi-factor authentication devices, then delete emails and other alerts that would otherwise warn organizations of potential malicious activity, researchers said. 

The data theft for extortion campaigns share striking similarities, but CrowdStrike said the tactics, techniques and procedures for each subgroup are distinct. These variances include hours of operation, different phishing domain providers, preferred operating systems, data leak sites, and the tools or devices they used to register for multi-factor authentication. 

The domain for BlackFile, Cordial Spider’s data-leak site, was offline as of Wednesday, according to Meyers.

CrowdStrike declined to put a range on the groups’ extortion demands, but Unit 42 previously said Cordial Spider, which is also tracked as CL-CRI-1116 and UNC6671, are typically in the seven-figure range.

Some victims that didn’t pay extortion demands have been subjected to DDoS attacks, and Snarky Spider has used more aggressive follow-on harassment tactics, including the swatting of victim organizations’ employees, Meyers said. 

CrowdStrike said Cordial and Snarky Spider also use residential proxy networks — including Mullvad, Oxylabs, NetNut, 9Proxy, Infatica and NSOCKS — to evade IP-based detection and blend in with typical traffic. 

Residential proxy networks, which rely on IP addresses assigned to real home users, can serve a legitimate purpose, but researchers have been warning that unethical or outright criminal operators are abusing these networks to build and support botnets, cybercrime campaigns, espionage and other malicious activity.

Cordial and Snarky Spider haven’t achieved the impact or technical capability of Scattered Spider, but the groups share many commonalities and objectives, Meyers said. 

“They’ve kind of taken their playbook and they’re using a lot of their techniques, but we haven’t really seen the technical sophistication demonstrated by them that we saw from Scattered Spider,” he said. “It’s kind of the new generation of Scattered Spider.”

The post Two new extortion crews are speedrunning the Scattered Spider playbook appeared first on CyberScoop.

Latest spy power reauthorization bill leaves critics unimpressed

The latest attempt to re-up a controversial expiring surveillance law has failed to placate vocal critics on both the left and right of the political spectrum.

Two House votes failed last week to extend the spying powers under Section 702 of the Foreign Intelligence Surveillance Act (FISA) for 18 months without changes, leading to Congress instead passing a 10-day reauthorization. GOP leaders have been scrambling to find a bill they can pass since with the April 30 deadline approaching.

House Speaker Mike Johnson, R-La., introduced a bill Thursday to extend it for three years, with a section stating that government officials can’t use Section 702 to target Americans. Under Section 702, U.S. spies and law enforcement agencies can warrantlessly search electronic communications of foreign targets. But those targets are sometimes communicating with U.S. persons, and officials can search the communications database using their personal information.

But critics of the latest Johnson proposal say the language about targeting Americans is window dressing.

“On the whole, it is an empty-calories bill and nothing more that does not engage in reform,” Jake Laperruque, deputy director of the center’s security and surveillance project at the Center for Democracy and Technology, said in a call with reporters Friday.

Civil liberties groups have long called for a warrant requirement for U.S. person-based searches.

“It doesn’t require a warrant or any kind of court process for U.S. person searches,” said Kia Hamadanchy, senior policy counsel for the American Civil Liberties Union’s political advocacy division. “The main reform just restates existing law… . It’s also completely irrelevant to the issue at hand, because backdoor searches have never been the product of the government intentionally targeting U.S. persons under 702. The problem is that they are incidentally collecting U.S. person communications and searching the communications of Americans.”

Gene Schaerr, general counsel of the conservative Project for Privacy and Surveillance Accountability, called the proposal “smoke and mirrors.”

The legislation did win over at least one key lawmaker, however: Rep. Warren Davidson, who had earlier introduced an amendment to attach a ban on the government buying American’s information from third-party data brokers, and who was a chief co-sponsor of legislation requiring a warrant for U.S. person searches under Section 702.

“Collectively, this set of reforms provides robust privacy protections for American citizens. Congress should bank this win and reauthorize Section 702,” Davidson said on X. “Then, we should swiftly begin gutting the unmitigated surveillance state left growing unchecked during these 702 fights.”

But it doesn’t look like it has yet won over enough conservative House Freedom Caucus members, and few Democrats have been on board with Johnson’s plans.

Rep. Ted Lieu, D-Calif., indicated on X in harsh terms that he doesn’t trust FBI Director Kash Patel with current Section 702 powers.

The post Latest spy power reauthorization bill leaves critics unimpressed appeared first on CyberScoop.

Surveillance campaigns use commercial surveillance tools to exploit long-known telecom vulnerabilities

Campaigns employing commercial surveillance vendors tracked targets by exploiting mobile phone network vulnerabilities in what researchers said Thursday was the first-ever linking of “real-world attack traffic to mobile operator signalling infrastructure.”

The two unknown parties behind the campaigns mimicked the identities of mobile phone operators with customized surveillance tools, and manipulated signaling protocols and steered traffic through network pathways to hide, according to research from the University of Toronto’s Citizen Lab.

“Our findings highlight a systemic issue at the core of global telecommunications: operator infrastructure designed to enable seamless international connectivity is being leveraged to support covert surveillance operations that are difficult to monitor, attribute, and regulate,” a report published Thursday reads.

“Despite repeated public reporting, this activity continues unabated and without consequence,” Gary Miller and Swantje Lange wrote for Citizen Lab. “The continued use of mobile networks, built on a close inter-operator trust model and relied upon by users worldwide, raises broader questions for national regulators, policymakers, and the telecom industry about accountability, oversight, and global security.”

The attackers relied on identifiers and infrastructure associated with operators around the world, including networks based in Cambodia, China, the self-governing Island of Jersey, Israel, Italy, Lesotho, Liechtenstein, Morocco, Mozambique, Namibia, Poland, Rwanda, Sweden, Switzerland, Thailand, Uganda and the United Kingdom.

They shifted between SS7 and Diameter protocols, the signalling protocols known for 3G and 4G/most of 5G, respectively, according to the report. While Diameter was meant to be more secure than SS7, the Federal Communications Commission in 2024 opened a probe into both its vulnerabilities and SS7’s, and Sen. Ron Wyden, D-Ore., has asked for a Cybersecurity and Information Security Agency report about telecommunications vulnerabilities rooted in both protocols.

But identifying the vendors used in the two surveillance campaigns, or who was behind them, was beyond the researchers’ reach.

“The reality is that there are a number of known surveillance vendors and bad actors in this space, but given the opaque nature of telecommunications signalling protocols, those vendors are able to operate without revealing exactly who they really are,” Ron Deibert, director of Citizen Lab, wrote in his newsletter. “Much of the malicious things they are doing blend into the otherwise voluminous flow of billions of normal messages and roaming signals. They are ‘ghost operators’ within the global telecom ecosystem.”

One of the operators mentioned in Citizen Lab’s report, Israel-based 019 Mobile, wrote back that it didn’t recognize the hostnames referenced in the report as 019 Mobile’s network nodes, and couldn’t attribute the signaling activity it represents to 019 Mobile-operated infrastructure.

Another operator, Sure, said it has taken preventative measures to defend against misuse.

“Sure acknowledges that digital services can be misused, which is why we take a number of
steps to mitigate this risk,” CEO Alistair Beak said in a statement to CyberScoop. “Sure has implemented several protective measures to prevent the misuse of signalling services, including monitoring and blocking inappropriate signalling. Any evidence or valid complaint relating to the misuse of Sure’s network results in the service being immediately suspended and, where malicious or inappropriate activity is confirmed following investigation, permanently terminated.”

019 Mobile and a third operator, Tango Networks UK, didn’t respond to requests for comment from CyberScoop. The Citizen Lab report afforded some grace to the operators.

“It is important to note that the operator signalling addresses observed in the attacks do not necessarily imply direct operator involvement,” it states. “In some cases, access to the signalling ecosystem can be obtained through third-party providers, commercial leasing arrangements, or other intermediary services that allow actors to send messages using operator identifiers from legitimate networks.”

Updated 4/24/26: to include quote from Alistair Beak.

The post Surveillance campaigns use commercial surveillance tools to exploit long-known telecom vulnerabilities appeared first on CyberScoop.

Vuln in Google’s Antigravity AI agent manager could escape sandbox, give attackers remote code execution

As organizations consider agentic AI for their business and IT stacks, researchers continue to find bugs and vulnerabilities in major, commercial models  that can significantly expand their attack surface.

This week, researchers at Pillar Security disclosed a vulnerability in Antigravity, an AI-powered developer tool for filesystem operations made by Google.

The bug, since patched, combined prompt injection with Antigravity’s permitted file-creation capability to grant attackers remote code execution privileges.

The research details how the exploit was able to circumvent Antigravity’s secure mode, Google’s highest security setting for its agents that runs all command operations through a virtual sandbox environment, throttles network access and prohibits the agent from writing code outside of the working directory.

Secure mode is supposed to limit the AI agent access to sensitive systems – and its ability to execute malicious or dangerous acts through shell commands. But one of the file-searching tools used by Antigravity, called “find_by_name,” is classified as a ‘native’ system tool. This means the agent can execute it directly and before protections like Secure Mode can even evaluate command level operations.

“The security boundary that Secure Mode enforces simply never sees this call,” wrote Dan Lisichkin, an AI security researcher with Pillar Security. “This means an attacker achieves arbitrary code execution under the exact configuration a security-conscious user would rely on to prevent it.”

The prompt injection attacks can be delivered through compromised identity accounts connected to the agent, or indirectly by hiding clandestine prompt instructions inside open-source files or web content the agent ingests. Antigravity  has trouble distinguishing between written data it ingests for context and literal prompt instructions, so compromise can be achieved without any elevated access by getting it to read a malicious document or file.

According to a disclosure timeline provided by Pillar Security, the bug was reported to Google on Jan. 6 and patched on Feb. 28, with Google awarding a bug bounty for the discovery.

Lisichkin said this same pattern of prompt injection through unvalidated input has been found in other coding AI agents like Cursor. In the age of AI, any unvalidated input can become a malicious prompt capable of hijacking internal systems.

“The trust model underpinning security assumptions, that a human will catch something suspicious, does not hold when autonomous agents follow instructions from external content,” he wrote.

The fact that the vulnerability was able to completely bypass Google’s secure mode underscores how the cybersecurity industry must start adapting and “move beyond sanitization-based controls.” 

“Every native tool parameter that reaches a shell command is a potential injection point. Auditing for this class of vulnerability is no longer optional, and it is a prerequisite for shipping agentic features safely,” Lisichkin wrote.

The post Vuln in Google’s Antigravity AI agent manager could escape sandbox, give attackers remote code execution appeared first on CyberScoop.

The FTC’s AI portfolio is about to get bigger

The Federal Trade Commission is poised to deepen its involvement in curbing the use of AI for malicious purposes, including the spread of nonconsensual sexualized deepfakes and voice cloning scams.

Last year, Congress passed the Take It Down Act, a law that allowed for criminal prosecution of individuals who share or distribute nonconsensual, intimate images and digital forgeries, including those that are AI-generated.

At a Senate oversight hearing last week, FTC Chair Andrew Ferguson called the new law one of the “greatest legislative achievements” of the current Congress and President Donald Trump’s administration, and said the FTC was preparing for “robust enforcement.”

Earlier this month, the Department of Justice scored its first successful conviction under the new law, when 37-year-old Columbus, Ohio resident James Strahler pleaded guilty to using AI-generated deepfake nudes as part of a harassment campaign targeting at least six women.

Another section of the law – set to become active in May, will permit individuals to file “take down” notices with websites that publish or host sexual deepfakes. Companies will have 48 hours to remove the content or be subject to FTC investigation and enforcement.

Commissioner Mark Meador said at a March 30 conference in Washington D.C. that while he hopes they “never have to enforce it,” the FTC is treating Take It Down enforcement as a top priority and “actively spinning everything up that we need” to enforce the take down provision.   

That could quickly set up one of the first major confrontations with the tech sector— especially companies like xAI. Its Grok tool continues to be used to create and host nonconsensual deepfake images of real people, even after the scandal it faced earlier this year.

Following his speech, CyberScoop asked Meador how the take down provisions might apply to Grok’s mass nudification spree of its users. He said the law specifies that the commission can’t take action against a company until they receive formal complaints starting in May.  

“This is coming into place, and then if they don’t [remove the content] we would get the complaints and then we would go after them at that point,” Meador said. “So, we kind of have to wait and see how…companies respond to complaints and requests being made, and my hope would be that every company that gets a request to take something down would immediately take it down.”

xAI’s press office did not respond to CyberScoop’s request for comment on its preparations to comply with the Take It Down act. 

Strahler, who has yet to be sentenced, also admitted to using photos of children in his neighborhood to create deepfake pornography. A strategic plan published earlier this month flagged protecting children online as a “key concern” for the commission that merits more consumer tools and resources.

The commission is “dedicated to exploring other ways the FTC can protect children and support families, including through its new authority under the Take It Down Act,” the plan states.

Casey Waughn, a privacy lawyer and senior associate at Armstrong Teasdale, told CyberScoop that the current commission’s focus on child online safety leaves ample room for the law to be brought to bear in creative ways.

“We’ve seen enforcing technology and privacy violations related to youth children is a priority, so I think it’s relatively easy to parlay that into some Take it Down Act enforcement,” she said.

Waughn said the one-year delay for provision’s enforcement was so that platforms could prepare, but also said the FTC could do more to publicly signal to companies what lawful compliance looks like, similar to the resources they provide around major privacy laws.  

“I think what would be helpful for all organizations…would be guidance explaining what constitutes a good faith effort, for example, to attempt to address a take down request,” said Waughn.

Living in a scammer’s paradise

The FTC is also grappling with the impact of AI on criminal scams targeting Americans online.

Ferguson told lawmakers that AI is “increasing both the sophistication of the actual mechanisms by which the scams are accomplished, but it’s also making it easier for scammers to choose their targets.”

But the FTC’s powers are limited, as the Federal Communications Commission regulates the telephone and internet providers that transmit most scams. Ferguson also noted that many call center scams are located overseas “where they don’t bat an eye at the risk of civil enforcement from the FTC.” He said the commission was open to additional legislative authorities to tackle the problem.

At the March conference, Meador was said AI-fueled deception was something the commission thinks about “daily” and is lowering the barrier to entry for many criminal schemes.

“The biggest place that we’ve seen [in] the way that some of these AI tools are being used to triple charge scams, to be honest,” he said.

Last year, the FBI reported that voice cloning scams impersonating distressed family members had bilked Americans out of nearly $900 million, and the technology has been used to impersonate high level Trump administration officials in conversations with businesses and political leaders.

Senator Maggie Hassan wrote to four AI voice cloning companies – ElevenLabs, LOVO, Speechify and VEED – asking what policies and programs they had in place to prevent or deter fraud enabled by their tools.

But Meador said that when it comes to deceptive claims, it’s particularly difficult to define credulity around the use of AI. Many deepfakes, he said, are seen and consumed by many people online with the same sort of “willing suspension of disbelief” that they bring to computer-generated effects in movies.

As such, the FTC will likely have to adjudicate on a case-by-case basis rather than through “broad brush strokes.”

“I think we’ll see a lot of that in the AI context, where if you know something wasn’t meant to be real or authentic, that’s not a concern,” he said. “The question is then, what are those situations where there is an expectation that you’re being shown something authentic and quote, unquote ‘real’ as opposed to being AI generated and was there misrepresentation or material omission” to disclose that?”

The post The FTC’s AI portfolio is about to get bigger appeared first on CyberScoop.

Network ‘background noise’ may predict the next big edge-device vulnerability

Attackers rarely exploit an edge-device vulnerability indiscriminately. Typically, they first test how widely the flaw can be used and how much access it can provide, then move on to steal data or disrupt operations.

Pre-attack surveillance and planning leaves a lot of noise in its wake. These signals — particularly spikes in traffic that are hitting specific vendors — can act as an early-warning system, often preceding public vulnerability disclosures, according to research GreyNoise shared exclusively with CyberScoop prior to its release. 

Roughly half of every activity surge GreyNoise detected during a 103-day study last winter was followed by a vulnerability disclosure from the same targeted vendor within three weeks, GreyNoise said in its report.

Researchers determined that the median warning of an impending vulnerability disclosure arrived nine days before the targeted vendor issued a public alert to its customers.

“Virtually every time we see large scale spikes in reconnaissance and inventory activity looking for a certain device, it’s because somebody knows about a vulnerability,” Andrew Morris, founder and chief architect at GreyNoise, told CyberScoop.

“Within a few days or weeks — usually within the responsible disclosure timeline — a new very bad vulnerability comes out,” he added.

GreyNoise insists that every day of advance notice matters, giving defenders an opportunity to defend against and thwart potential attacks before they occur. 

The real-time network edge scanning platform spotted 104 distinct activity surges across 18 vendors during its study period. These embedded systems, including routers, VPNs, firewalls and other security systems, consistently account for the most commonly exploited vulnerabilities.

“Attackers love hacking security devices like security appliances. The irony of that is just not lost on me at all,” Morris said.

“It hasn’t gotten bad enough for us to start taking the security of these devices seriously,” he added. “It’s not bad enough for us to take it seriously enough to start ripping these things out and replacing them with new devices or new vendors.”

GreyNoise linked traffic surges to a swarm of vulnerabilities disclosed by vendors across the market, including Cisco, Palo Alto Networks, Fortinet, Ivanti, HPE, MicroTik, TP-Link, VMware, Juniper, F5, Netgear and others.

“It’s becoming scientifically empirical, and it’s becoming more like meteorology than mysticism,” Morris said. “This is like clockwork now.”

GreyNoise breaks these traffic surges down to measure intensity and breadth. Session counts indicate how hard existing sources are hammering a specific vendor and unique source IP counts demonstrate how widely new infrastructure is joining the activity, researchers wrote in the report.

“When both the intensity and breadth of targeting increase simultaneously, it signals a coordinated escalation,” the report said. 

“When you see a session spike against one of your vendors and new source IPs joining at the same time, treat it as a high-confidence reason to look harder. When you see only an IP spike, do not assume a vulnerability is coming,” researchers added. 

The study bolsters other research from Verizon, Google Threat Intelligence Group and Mandiant — landing during what GreyNoise calls “the most aggressive period of edge device exploitation on record.”

This activity doesn’t happen in a vacuum and threat groups aren’t flooding edge devices with traffic for free or for fun, according to Morris.

“People tend to treat internet background noise like it’s this unexplainable phenomenon,” he said. “They’re clearly trying to test the existence of a vulnerability in order to compromise the systems.”

The post Network ‘background noise’ may predict the next big edge-device vulnerability appeared first on CyberScoop.

The surveillance law Congress can’t quit — and can’t explain

Congress is grappling with renewal of a surveillance law set to expire at the end of this month that critics say is a mystery on how much of a difference it has made for controversial government spying authorities — for better or worse.

The 2024 law reauthorized so-called Section 702 powers of the Foreign Intelligence Surveillance Act (FISA), which authorizes warrantless surveillance of electronic communications of foreign targets. Most controversially, the law allows U.S. officials to search (“query”) those communications databases using Americans’ personal information, as long as the American is  in contact with someone overseas, which raises significant privacy concerns.

Backers of the 2024 law, known as the Reforming Intelligence and Securing America Act (RISAA), point to 56 changes it made to deal with criticisms of Section 702, following a period where abuses came to light, including hundreds of thousands of improper searches. At the same time, the law made changes that some feared could actually expand Section 702 powers.

The House voted to extend the law as-is for 10 days early Friday. The Senate then did the same. The Trump administration has sought a 180-day “clean” reauthorization.

As Congress weighs potential extensions of the 2024 law without making changes to it, “I don’t think we know” what good has come of it, said Elizabeth Goitein, senior director of the Brennan Center for Justice’s liberty and national security program. By the same token, it’s difficult to know whether some of the expansion fears have come to fruition, she said: “We don’t have reliable information on this.”

Added Jake Laperruque of the Center for Democracy and Technology: “There’s a lot of black boxes here.”

Examining Past Changes

Both Goitein and Laperruque are skeptical of any positive change from RISAA, though, and have long advocated for a warrant requirement for U.S. person searches. Intelligence agencies have resisted that addition, claiming that it would dramatically slow down time-sensitive national security investigations.

By contrast, Glenn Gerstell, former general counsel at the National Security Agency, said RISAA constituted “the most significant set of reforms to the statute since its adoption in 2008.” and that “those reforms have had a dramatic effect.” 

One major point of dispute is to what degree the number of U.S. person searches dropped, particularly because of a conclusion in last year’s Justice Department inspector general report finding that an “advanced filtering tool generated queries that were not tracked by the FBI.” 

As the report outlines, an FBI system has an “‘advanced filter function’ that allows users to select a specific FBI casefile number or ‘facility’ (e.g., a phone number or email address), using a drop-down menu or search bar, to review communications with targeted facilities.

“This functionality enables users to select from lists of ‘participants’ in communication with targeted facilities and review communications of those participants.In or around August 2024,” the report continues. The National Security Division of the Justice Department “became aware of the participants filter function in [the system] and was concerned that searches conducted through use of the participants filter constituted separate queries that must satisfy the query standard and comply with all query procedural requirements.”

By the intelligence community’s count, the number of U.S. person searches has otherwise mostly declined even going back to before the 2024 law’s passage: 119,383 in 2022, 57,094 in 2023, 5,518 in 2024 and 7,413 in 2025.

“It is quite clear that the searches that were run using this filter function met the statutory definition of queries, and yet the FBI for some significant period of time decided to not count them as queries,” Goitein said.

Laperruque, deputy director of CDT’s security and surveillance project, said an audit mandate in the 2024 law was potentially useful, but hasn’t proven to be in reality.

“At least it should mean that it should help try to detect abuse if it is happening,” he said. “The problem there, though, is you’re still relying on the FBI to properly log all of its quarries and hand them over for DOJ to be checked, which hasn’t happened. You’re trusting DOJ and the executive to engage in self-policing, and that’s something where folks rightfully have a lot of skepticism based on how DOJ has conducted itself recently.”

Gerstell, a senior adviser at the Center for Strategic and International Studies, points to numerous reviews — including a staff report from the Privacy and Civil Liberties Oversight Board (PCLOB) — that indicate a drop in U.S. person searches. It’s the biggest change of RISAA, he said.

“The most significant one is a very substantial drop in the number of queries of the database for U.S. person information, which has been a big focus for privacy advocates, and there’s been a dramatic drop, so much so that both the Inspector General for the Department of Justice and the staff of the PCLOB have said, ‘I wonder if we’re overdoing it.’ … Every single one of them presents those numbers, without caveat.”

On the advanced filter function count, Gerstell acknowledged the ambiguity, but referred to reports that said, as he summarized, “If they had been considered queries, it appears that most would have been compliant anyway… because they were a subset of something that was already compliant. But we don’t know if any of them were noncompliant, and we don’t have the data.”

On the other side of the RISAA debate, critics argued that its revised definition of “electronic communications service provider” could dramatically expand surveillance to include businesses like coffee shops or landlords. The reported, but formally undisclosed, real target of the change was data centers.

“That was a pretty big expansion with a lot of potential abuse,” Laperruque said. But “we don’t really know much about how it’s changed” anything, he said.

Virginia Sen. Mark Warner, the top Democrat on the Intelligence Committee, sought to advance clarifying language about that subject after RISAA’s passage, and the Biden administration said it would confine the provision’s use to the kind of undisclosed businesses that prompted the provision in the first place. Laperreque noted that the Trump administration has made no such promises, and Warner’s clarifying language never became law.

The Foreign Intelligence Surveillance Court (FISC) has issued its annual opinion re-certifying the Section 702 program for another year. However, the court reportedly took issue with the program’s f filtering systems, saying that when such a system is used to look for information on Americans it must be counted as a query, subjecting it to additional restrictions. The Trump administration plans to appeal the ruling.

Other critiques of the 2024 law include that many of its biggest changes weren’t changes at all, but instead codifications of changes that then-FBI Director Christopher Wray had implemented. Abuses continued after those changes, Goitein said.

Gerstell said enshrining those changes into law wasn’t a bad thing. “The statute expressly codified some but not all of Wray reforms — and some went beyond that in many ways,” he said. Those changes included requiring FBI deputy director approval of U.S. person queries that target elected officials, government appointees, political candidates or organizations, or media. Those were some of the more criticized prior targeting abuses.

The fight still ahead

Republicans remain divided over extending the law. Some who had reservations about a clean reauthorization have come on board, such as Senate Judiciary Chairman Chuck Grassley, R-Iowa, who had taken issue with limitations on congressional attendance of FISC proceedings but since has had that concern resolved.

Others may have been swayed by direct lobbying from the Trump administration, including a social media post from Trump himself this week, where he wrote, “I am willing to risk the giving up of my Rights and Privileges as a Citizen for our Great Military and Country!” Still others have had their position against a clean extension hardened by the FISC court opinion and additional concerns.

Other issues have become enmeshed in the reauthorization debate, such as calls to block government agencies from purchasing information from data brokers. But “this has nothing to do with this authority,” said George Barnes, former deputy director of the NSA. 

But lawmakers of both parties have complained for months that the administration was silent for too long as the law’s expiration loomed.

Only recently did the Trump administration share new examples of the law’s successes, including that it had thwarted a 2024 terrorist attack on a Taylor Swift concert. Barnes said releasing such examples might offer a public case for the law, but has its downsides, too.

“I was always understanding but frustrated by the need to release examples just because they choreographed to the adversary what we could do,” said Barnes, now Red Cell’s cyber practice president. 

Reauthorizing Section 702 is urgent, though, for cybersecurity purposes, he said.

“A lot of the impact that I saw the authority having over my time was in cybersecurity as well,” he said. “And so when you have foreign entities that are targeting the U.S., or U.S. interests overseas, that authority can be positioned to help eliminate those activities.”

The post The surveillance law Congress can’t quit — and can’t explain appeared first on CyberScoop.

Researchers Induce Smells With Ultrasound, No Chemical Cartridges Required

An anonymous reader quotes a report from UploadVR: A group of independent researchers built a device that can artificially induce smell using ultrasound, with no consumable cartridges required. [...] The team of four are Lev Chizhov, Albert Yan-Huang, Thomas Ribeiro, Aayush Gupta. Chizhov is a neurotech entrepreneur with a background in math and physics, Yan-Huang is a researcher at Caltech with a background in computation and neural systems, and Ribeiro and Gupta are co-researchers on the project with software engineering and AI expertise. Instead of targeting your nose at all, the device directly targets the olfactory bulb in your brain with "focused ultrasound through the skull." The researchers say that as far as they're aware, no one has ever done this before, even in animals. A challenge in targeting the olfactory bulb is that it's buried behind the top of your nose, and your nose doesn't provide a flat surface for an emitter. Ultrasound also doesn't travel well through air. The solution the researchers came up with was to place the emitter on your forehead instead, with a "solid, jello-like pad for stability and general comfort," and the ultrasound directed downward towards the olfactory bulb. To determine the best placement, they say they used an MRI of one of their skulls to "roughly determine where the transducer would point and how the focal region (where ultrasound waves actually concentrate) aligned with the olfactory bulb (the target for stimulation)". [...] According to the researchers, they were able to induce the sensation of fresh air "with a lot of oxygen", the smell of garbage "like few-day-old fruit peels," an ozone-like sensation "like you're next to an air ionizer," and a campfire smell of burning wood. While technically head-mounted, the current device does require being held up with two hands. But as with all such prototypes, it likely could be significantly miniaturized.

Read more of this story at Slashdot.

NIST narrows scope of CVE analysis to keep up with rising tide of vulnerabilities

The federal agency tasked with analyzing security vulnerabilities is overwhelmed as it and other authorities struggle to keep pace with a flood of defects that grows every year. The National Institute of Standards and Technology announced Wednesday that it has capitulated to that deluge and narrowed the priorities for its National Vulnerability Database.

NIST said it will only prioritize analysis for CVEs that appear in the Cybersecurity and Infrastructure Security Agency’s known exploited vulnerabilities catalog, software used in the federal government and critical software defined under Executive Order 14028.

The federal agency’s goal with the change is to achieve long-term sustainability and stabilize the NVD program, which has encountered previous challenges, notably a funding lapse in early 2024 that forced NIST to temporarily stop providing key metadata for many vulnerabilities in the database.

The agency still hasn’t cleared a backlog of unenriched CVEs that built up during that pause and grew since then. 

NIST said it analyzed nearly 42,000 vulnerabilities last year, adding that CVE submissions surged 263% from 2020 to 2025. “We don’t expect this trend to let up anytime soon. Submissions during the first three months of 2026 are nearly one-third higher than the same period last year,” the agency said in a blog post announcing the change. 

Indeed, vulnerabilities are increasing across the board. For instance, Microsoft addressed 165 vulnerabilities Tuesday, its second-largest monthly batch of defects on record.

NIST said CVEs that don’t fit its more narrow criteria will still be listed in the NVD, but they won’t be automatically enriched with additional details. 

“This will allow us to focus on CVEs with the greatest potential for widespread impact,” the agency said. “While CVEs that do not meet these criteria may have a significant impact on affected systems, they generally do not present the same level of systemic risk as those in the prioritized categories.”

Researchers and threat hunters who analyze vulnerabilities for CVE Numbering Authorities (CNA) and vendors that publish their own assessments view NIST’s new approach as inevitable.

“They had to do something. NIST was woefully behind on classifying CVEs and would likely never have caught up,” Dustin Childs, head of threat awareness at Trend Micro’s Zero Day Initiative, told CyberScoop.

“I’m not sure if it was a herculean task or a sisyphean one, but either way, they were set up for failure under their previous system. This change allows them to prioritize their work,” he added.

NIST’s new approach will impact the vulnerability research community at large, but also put more private companies and organizations in a position to gain more authority as defenders seek out more alternative sources.

Caitlin Condon, vice president of security research at VulnCheck, previously told CyberScoop that prioritization remains a problem, with too many defenders paying attention to vulnerabilities that aren’t worth their time. 

Of the more than 40,000 newly published vulnerabilities that VulnCheck cataloged last year, only 1% of those defects, just 422, were exploited in the wild

NIST is also trying to reduce other duplicitous efforts with its new approach, effectively leaning even more on CNAs. CVEs that are submitted with a severity rating will no longer receive a separate CVSS score from NIST, the agency said. 

While the agency remains the ultimate authority providing a government-backed catalog of vulnerability assessments, it acknowledged these changes will affect its users.

“This risk-based approach is necessary to manage the current surge in CVE submissions while we work to align our efforts with the needs of the NVD community,” the agency said. “By evolving the NVD to meet today’s challenges, we can ensure that the database remains a reliable, sustainable and publicly available source of information about cybersecurity vulnerabilities.”

The post NIST narrows scope of CVE analysis to keep up with rising tide of vulnerabilities appeared first on CyberScoop.

OpenAI expands Trusted Access for Cyber program with new GPT 5.4 Cyber model 

OpenAI said it is expanding its Trusted Access for Cyber program to “thousands of individuals and organizations,” who will use the company’s technology to root out bugs and vulnerabilities in their products.

The program will also incorporate  GPT 5.4 Cyber, a new variant of ChatGPT that OpenAI says is specifically optimized for cybersecurity tasks. OpenAI’s goal with this release is to make advanced cybersecurity tools more widely accessible.

The company said access to the program and cybersecurity-focused model will still be governed by “strong” Know-Your-Customer and identity verification rules to help prevent the model’s spread to bad actors.

“Our goal is to make these tools as widely available as possible while preventing misuse,” the company said in a blog posted Tuesday. “We design mechanisms which avoid arbitrarily deciding who gets access for legitimate use and who doesn’t.”

OpenAI’s announcement comes one week after Anthropic rolled out Project Glasswing, a similar effort that seeks to provide major tech companies with Claude Mythos, an unreleased model that Anthropic officials have claimed is too dangerous to sell commercially.

OpenAI officials noted they publicly announced Trusted Access for Cyber program months earlier. They have also quietly avoided direct comparisons to Mythos, and GPT 5.4 Cyber.

Cybersecurity experts in the U.S. and UK have described Mythos as a significant improvement from previous frontier models around identifying (and potentially exploiting) cybersecurity vulnerabilities, though there remains debate and speculation about the model’s ultimate impact on information security.  

Similarly, GPT 5.4 Cyber has been finetuned for testing and vulnerability research, though OpenAI wants to make iterative improvements to the program as lessons are learned.

The company has plans to allow  a broader group of cyber operators to use the model to protect critical infrastructure, public services and other digital systems. The company said it is also leery of having too much influence over which industries or sectors ultimately take part in the program.

“We don’t think it’s practical or appropriate to centrally decide who gets to defend themselves,” the blog stated. “Instead, we aim to enable as many legitimate defenders as possible, with access grounded in verification, trust signals, and accountability.”

The post OpenAI expands Trusted Access for Cyber program with new GPT 5.4 Cyber model  appeared first on CyberScoop.

❌