
‘Mini Shai-Hulud’ malware compromises hundreds of open-source packages in sprawling supply-chain attack

By: Greg Otto
12 May 2026 at 17:38

A rapidly spreading malware campaign has infected hundreds of software packages across major open-source registries, embedding credential-stealing code into development tools downloaded millions of times a week.

The attack, referred to as “mini Shai-Hulud,” targeted prominent software libraries, including TanStack, UiPath, and MistralAI. TanStack’s React Router package alone accounts for more than 12 million weekly downloads, placing the malicious code deep within the software supply chain of modern enterprise applications.

In a blog post, TanStack said security teams have pulled all compromised software versions from the registry. While there is no evidence that registry passwords were stolen, experts urge anyone who downloaded the affected tools Monday to immediately change all connected cloud, server, and developer credentials — including Amazon Web Services, Google Cloud, and GitHub.

The incident highlights a systemic vulnerability in automated software publishing. The compromised updates successfully bypassed two-factor authentication and carried cryptographically valid provenance signatures. These signatures verified that the packages originated from the correct continuous integration pipelines, but failed to detect that the pipelines themselves had been manipulated to authorize malicious code.

Security researchers attribute the campaign to TeamPCP, a cloud-focused cybercriminal group that emerged in late 2025 and specializes in automating supply-chain attacks and exploiting cloud-native infrastructure, including Docker and Kubernetes environments. The group, alleged to be responsible for the earlier development of Shai-Hulud, quietly slips its malware into trusted software updates, allowing it to infect thousands of companies at once without triggering security alarms.

The group is notorious for its advanced ability to hide its tracks — such as disguising stolen data as anonymous messaging traffic — and its aggressive extortion tactics, which include threatening to completely erase victims’ computers if they attempt to remove the hackers’ access.

Attackers triggered the automated release process using an “orphaned commit” — code pushed to a repository fork without a corresponding branch. This allowed them to exploit overly broad permissions in GitHub Actions workflows. The malware was then delivered via a concealed dependency that fetched a heavily obfuscated 2.3-megabyte payload disguised as an initialization module.
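Defenders can screen for the workflow misconfiguration that enabled this step. The sketch below is a rough heuristic, not a complete audit: it flags workflows that combine a privileged trigger such as `pull_request_target` with blanket `write-all` token permissions, two standard GitHub Actions constructs that together create the kind of overly broad automation this attack exploited.

```python
import re
from pathlib import Path

# Heuristic patterns for risky GitHub Actions configurations. These are
# illustrative: a privileged trigger plus blanket write permissions is a
# common ingredient in workflow abuse, not proof of compromise.
RISKY_TRIGGER = re.compile(r"^\s*(pull_request_target|workflow_run)\s*:", re.M)
BROAD_PERMS = re.compile(r"^\s*permissions\s*:\s*write-all\s*$", re.M)

def audit_workflows(repo_root: str) -> list[str]:
    """Return workflow files pairing a privileged trigger with write-all permissions."""
    findings = []
    for wf in Path(repo_root).glob(".github/workflows/*.y*ml"):
        text = wf.read_text(encoding="utf-8", errors="replace")
        if RISKY_TRIGGER.search(text) and BROAD_PERMS.search(text):
            findings.append(str(wf))
    return findings
```

A plain-text scan like this misses permissions granted per-job or per-step; a real audit would parse the YAML rather than pattern-match it.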

Upon execution, the malware uses Bun — a high-speed software engine designed to run JavaScript — to systematically steal security keys and passwords. It targets high-level cloud infrastructure, including AWS, Google Cloud Platform, Kubernetes, and HashiCorp Vault. The code is engineered to infiltrate highly secure Amazon cloud networks. At the same time, it scours the developer’s local computer for secret files and SSH keys used to unlock other corporate systems.

Operating as a self-propagating worm, the malware publishes copies of itself to other packages its victims maintain, spoofing its activity to appear as automated commits from the Anthropic Claude bot. In a secondary extortion measure, it generates a new registry token containing a ransom note in its description, threatening a destructive computer wipe if the victim attempts to revoke the compromised access.

Despite the malware’s self-propagating design, researchers told CyberScoop they have seen little evidence of it spreading.

“We saw very limited community spread,” said Charlie Eriksen, a security researcher with application security firm Aikido Security.

To maintain continuous access to developer workstations, the malware embeds itself into the configuration files of popular developer tools, notably Visual Studio Code and Anthropic’s Claude Code. This ensures the malicious scripts execute automatically every time a developer opens a project or initiates an AI coding session.

Stephen Thoemmes, senior developer advocate at Snyk, told CyberScoop this is a particular blind spot for these types of attacks. 

“Directories like .claude/ and .vscode/ are typically excluded from version control via .gitignore and are rarely scrutinized as viable attack surfaces,” Thoemmes said. “While these hook and task systems provide valuable automation for legitimate work, they offer a silent execution environment for malicious code. To counter this, developers must move away from treating these local configurations as benign and begin applying the same rigorous security auditing to their tooling directories as they would to their production infrastructure.”
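The kind of auditing Thoemmes describes can be partly automated. The sketch below is a minimal example built on assumptions about common tool layouts: it checks a project’s `.vscode/tasks.json` for tasks configured to run when the folder opens, and lists any hook-like files under `.claude/` for manual review. It is a starting point for scrutiny, not a definitive detection rule.

```python
import json
from pathlib import Path

def audit_tooling_dirs(project: str) -> list[str]:
    """Surface auto-executing entries in per-project tooling directories.

    Assumes common layouts: VS Code tasks with runOptions.runOn set to
    "folderOpen" execute when the folder is opened, and files under
    .claude/ with "hook" in the name may run during AI coding sessions.
    """
    findings = []
    root = Path(project)
    tasks_file = root / ".vscode" / "tasks.json"
    if tasks_file.exists():
        try:
            data = json.loads(tasks_file.read_text(encoding="utf-8"))
        except json.JSONDecodeError:
            data = {}
        for task in data.get("tasks", []):
            if task.get("runOptions", {}).get("runOn") == "folderOpen":
                findings.append(f"auto-run VS Code task: {task.get('label', '<unnamed>')}")
    claude_dir = root / ".claude"
    if claude_dir.is_dir():
        for path in sorted(claude_dir.rglob("*")):
            if path.is_file() and "hook" in path.name.lower():
                findings.append(f"review Claude hook file: {path}")
    return findings
```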

To avoid detection, the stolen data is exfiltrated using Session — an anonymous messaging app that bounces data across a decentralized network. By disguising the theft as ordinary, encrypted chat traffic, the hackers blend in with normal network activity. This allows the attackers to completely ditch the traditional “command” servers that corporate security teams usually hunt for and block.

The success of the “mini Shai-Hulud” campaign exposes a major blind spot in software security: current defenses check where an update comes from, but not whether the code inside is actually safe. By hijacking developers’ own automated systems, attackers were able to stamp their malware with official digital signatures — proof that attackers can bypass modern safeguards simply by turning a company’s own tools against them.

Socket CEO Feross Aboukhadijeh told CyberScoop that organizations should look for signs that a compromised package version was installed in CI/CD or developer environments, unexpected outbound connections to campaign infrastructure, suspicious changes in package lockfiles, unusual package publishes from their own maintainers or CI systems, and persistence artifacts in developer tooling directories. 

“There is no single centralized kill switch for this kind of campaign,” Aboukhadijeh said. “The hard part is that by the time a malicious package is confirmed, it may already have been installed inside the exact environments attackers want most: developer machines and CI runners. You can pull a package from the registry, but you cannot automatically pull back the credentials it may have already stolen.”
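One of the indicators Aboukhadijeh lists, suspicious changes in package lockfiles, lends itself to a mechanical check. The sketch below scans an npm `package-lock.json` against a blocklist of known-bad versions; the package names and versions in the blocklist are hypothetical placeholders, and a real response would pull them from an advisory feed.

```python
import json
from pathlib import Path

# Hypothetical blocklist of (package, version) pairs; in practice this
# would come from a vulnerability or advisory feed, not a hardcoded set.
COMPROMISED = {("example-router", "9.9.9"), ("example-sdk", "1.2.3")}

def scan_lockfile(lockfile: str) -> list[str]:
    """Return compromised package@version entries found in an npm lockfile."""
    data = json.loads(Path(lockfile).read_text(encoding="utf-8"))
    hits = []
    # npm v2/v3 lockfiles record installed packages under "packages",
    # keyed by their node_modules path ("" is the project root).
    for path, meta in data.get("packages", {}).items():
        name = path.rsplit("node_modules/", 1)[-1] if path else data.get("name", "")
        if (name, meta.get("version")) in COMPROMISED:
            hits.append(f"{name}@{meta['version']}")
    return hits
```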

While these packages are maintained by volunteers, Eriksen said the incident is a huge issue for enterprises due to how many development teams use the software in their products and services. 

“This is not a ‘volunteer’ vs corporate thing,” Eriksen told CyberScoop. “This is an all-of-society problem.”

Aboukhadijeh told CyberScoop that these continuing attacks on popular open-source software packages are part of “a larger reckoning over how the software industry consumes open source.”

“This campaign shows how thin the line has become between a developer tool and critical infrastructure,” he said. “When attackers compromise tools that are already trusted inside build systems, they do not have to break into every company directly. They can ride the trust those tools already have.”


The post ‘Mini Shai-Hulud’ malware compromises hundreds of open-source packages in sprawling supply-chain attack appeared first on CyberScoop.

Major world economies spell out key elements of AI ‘ingredients list’

12 May 2026 at 17:09

A group of international government agencies released guidance Tuesday on what they believe any artificial intelligence “ingredients list” tool should include to make AI more secure.

The concept of such a list, known as a software bill of materials, or SBOM, is to document everything that goes into a particular piece of software so that any supply-chain risks are easier to identify. Cyber experts have focused growing attention on how SBOMs should interact with AI.

The guidance produced by agencies from the G7 group of nations, including the Cybersecurity and Infrastructure Security Agency, is aimed at setting minimum voluntary standards for what SBOMs for AI should look like. It builds on past efforts to produce other kinds of SBOM guidance.

“While not exhaustive or mandatory, the supplemental minimal elements outlined in this guidance reflect the consensus of G7 experts and will expand over time to keep pace with the rapid advancement of AI technology,” CISA stated. (Some refer to SBOMs for AI as AIBOMs.)

The elements fall under seven categories: information about the SBOM for AI itself; about the AI system as a whole; for identifying the models the AI system uses; about datasets used throughout the model’s life cycle; about the physical and virtual infrastructure needed to operate and support the AI system; about cybersecurity measures that apply to AI models and systems; and about the AI system’s key performance indicators.
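As a concrete illustration, a minimal record covering those element categories might look like the sketch below. The field names and values are hypothetical, chosen only to mirror the categories; they are not the G7 guidance’s actual schema.

```python
# Illustrative only: one entry per element category from the guidance,
# with hypothetical field names and values.
minimal_aibom = {
    "aibom_metadata": {"spec_version": "0.1", "created": "2026-05-12"},
    "ai_system": {"name": "support-chat", "version": "2.4.0", "supplier": "ExampleCorp"},
    "models": [{"name": "example-llm", "version": "7b-instruct", "origin": "third-party"}],
    "datasets": [{"name": "support-tickets-2025", "lifecycle_stage": "fine-tuning"}],
    "infrastructure": {"training": "gpu-cluster-a", "serving": "managed-kubernetes"},
    "security_measures": ["model-signing", "input-filtering", "access-logging"],
    "performance_indicators": {"accuracy_target": 0.9, "latency_p95_ms": 500},
}
```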

A trio of industry professionals who have worked on the topic of AIBOMs told CyberScoop they welcomed the guidance, in each case praising it as a good step that could nonetheless be improved upon.

“Pretty much every piece of software out there is now going to have AI incorporated into it, and when a hospital is buying an AI-enabled medical device, or the Department of War is buying an AI-enabled weapon system, or auto manufacturers are putting AI into cars, we need to be able to trust what AI is in those systems,” said Daniel Bardenstein, CEO of Manifest Cyber. “And the first step to trust is to identify what is this AI, where did it come from? How is it trained?”

“This is a strong, applaudable step towards getting everybody on the same page that this is the future of how we need to think about trusting AI,” said Bardenstein, who has built an AIBOM generator and worked on the topic in the past with CISA and the OWASP Foundation.

Dmitry Raidman, co-founder and chief technology officer at Cybeats — and someone who, like Bardenstein, has built his own AIBOM generator and worked on AIBOMs with CISA and OWASP — said the G7 guidance was “amazing” because it covers 80 to 90% of what’s needed.

“There was no baseline, but it now will put out a clear baseline,” he said.

On the downside, Bardenstein said he had concerns with how easily organizations can implement the guidance, and Raidman said it doesn’t adequately tackle the issue of runtime.

Allan Friedman, sometimes called the “godfather of SBOMs,” said the guidance was a good document, but probably mislabeled because it states that the elements it identifies are not mandatory.

“This document is laying out sets of types of data that could be useful,” said Friedman, who worked on SBOMs in multiple U.S. government roles and is now senior technical adviser at the Institute for Security and Technology and technologist in residence at TPO Group. “And so it is a great, great piece to advance AI transparency and AI system transparency, but it lists potential elements. These aren’t the minimum elements.”

Friedman said the next steps could include mapping the guidance into what is being implemented today, and talking about aligning it with policies in the European Union and G7 governments to make sure there are minimal conflicts.


Microsoft addresses 137 vulnerabilities in May’s Patch Tuesday, including 13 rated critical

12 May 2026 at 17:00

Microsoft addressed another triple-digit batch of vulnerabilities cutting across its various enterprise products, components and underlying systems. Yet despite the high number of defects, the vendor reported no actively exploited zero-days in this month’s Patch Tuesday update.

Thirteen of the 137 vulnerabilities Microsoft disclosed were assigned critical CVSS ratings, including a pair of vulnerabilities affecting Azure — CVE-2026-33109 and CVE-2026-42823 — and CVE-2026-42898 in Microsoft Dynamics 365 with 9.9 CVSS scores. 

The company designated 13 vulnerabilities as more likely to be exploited, and 113 defects as less likely or unlikely to be exploited.

The high volume of vulnerabilities reflects a growing trend researchers have been anticipating as artificial intelligence models are deployed to find previously undiscovered defects in code.

“While not all of these bugs were found by AI, it’s likely they had an AI-related component — even if it was just AI writing the submission,” Dustin Childs, head of threat awareness at Trend Micro’s Zero Day Initiative, wrote in a blog post Tuesday.

Childs was especially intrigued by CVE-2026-41096, which he described as a “nasty-looking bug” in Microsoft Windows DNS that allows unauthorized attackers to run code remotely. 

“No authentication or user interaction needed, and since the DNS Client runs on virtually every Windows machine, the attack surface is enormous. An attacker with a position to influence DNS responses could achieve unauthenticated remote-code execution across your enterprise,” he added. 

Childs also described CVE-2026-41089, a Windows Netlogon defect that allows unauthenticated remote attackers to run code, as the “highest-impact bug that requires immediate patching,” adding that a “compromised domain controller is a compromised domain.”

Jack Bicer, director of vulnerability research at Action1, called out CVE-2026-42898, the critical vulnerability affecting Microsoft Dynamics 365. 

“With no user interaction required, and the potential to impact systems beyond the vulnerable component’s original security scope, this vulnerability poses serious enterprise risk: an attacker with only basic access could turn a business application server into a remote execution platform,” he said in a blog post.

“Compromise of Dynamics 365 infrastructure can expose customer records, operational workflows, financial information, and integrated business systems. Since CRM environments often connect with identity services, databases, and enterprise applications, successful exploitation could lead to broader organizational compromise and operational disruption,” Bicer added.

The full list of vulnerabilities addressed this month is available in Microsoft’s Security Response Center.


Google and Amnesty International teamed up to make it harder for spyware vendors to hide

12 May 2026 at 13:00

Google on Tuesday launched a feature for Android phones that keeps dedicated forensic logs of intrusions from sophisticated attacks, like those by spyware vendors, in what design partners at Amnesty International hailed as an important first.

The tech giant has been ramping up the new feature, Intrusion Logging, since last year, and has now begun rolling it out.

“The new intrusion logging feature promises to be a major aid to digital forensics researchers undertaking investigations into sophisticated attacks on Android devices,” Amnesty International said in a Tuesday technical briefing. “This is the first time a major device vendor has released a feature specifically to enhance the ability to forensically detect and respond to advanced digital threats.”

To date, independent investigators have relied on records and often short-lived log files that weren’t meant for forensic use, and Amnesty said surveillance groups have grown increasingly aware of those forensic efforts. Intrusion Logging, a feature of Android Advanced Protection Mode, is designed specifically to keep track of possible intrusions for forensic purposes. It keeps records of security incidents like device unlocking, physical access and spyware installation and removal.

Google’s annual security and privacy update for Android phones mentions the feature and its development with Amnesty International, Reporters Without Borders and others. It also touts new protections against banking scam calls, other features for detecting suspicious activity on Android phones, additional privacy safeguards and more.

The firm has been working on the feature since announcing it last year.

“Intrusion Logging enables persistent and privacy-preserving forensics logging to allow for investigation of devices in the event of a suspected compromise,” wrote Eugene Liderman, director of Android security and privacy.

Intrusion Logging joins an expanding slate of features from tech companies to fight sophisticated attacks like those from commercial spyware, among them Apple’s Lockdown Mode and Memory Integrity Enforcement and WhatsApp’s Strict Account Settings.

Intrusion Logging “promises to help shift the balance to the advantage of defenders, providing civil society investigators with the key evidence needed to detect and expose some of the most advanced attacks facing journalists and activists,” said Donncha Ó Cearbhaill, head of the Amnesty International Security Lab. “With Intrusion Logging, Google is the first major vendor to proactively address the challenge of detecting advanced attacks on devices. By making more consensual forensic data available for researchers, we can make life more difficult for attackers and help civil society seek accountability when their devices are unlawfully targeted by spyware and mobile data extraction tools.”

The feature has some limitations, though, Amnesty said in its technical briefing. It requires Android 16 and is only available for now on Pixel devices; the device has to be linked to a Google account, and the logs may include sensitive information, like browser navigation history, so secure sharing of the logs is important.

The logs may also be deletable by attackers, Ó Cearbhaill told CyberScoop, but he said he understands there are plans to strengthen protections against that in future versions. And many attacks would be detectable in the logs even when the attackers lack the root access needed to delete them, he said.

To enable Intrusion Logging, users need to be using Android Advanced Protection Mode, and can find the feature at Settings > Security & privacy > Advanced Protection > Intrusion Logging. If users suspect some kind of security incident, they’ll need to export and share the logs with a forensic analyst.


AI is separating the companies built to scale from the ones built to sell

By: Greg Otto
12 May 2026 at 06:00

Anyone who had time to walk the expo floor at this year’s RSA Conference couldn’t miss the shift in our industry. Artificial intelligence has moved from an emerging layer to the foundation of what powers cybersecurity companies. But from our vantage point as investors who work closely with founders and operators, the bigger shift is how AI is changing how these companies are formed, funded and scaled.

The past year marked an inflection point. A surge in venture funding and headline acquisitions underscored a market moving faster than many expected. Startups that once spent years iterating toward product-market fit are now emerging from stealth with mature products and raising large early rounds almost immediately. Meanwhile, the traditional progression from seed to Series A is compressing into a much shorter, higher-stakes window, and legacy companies are being forced to move faster than ever to stay relevant in today’s landscape.  

Venture funding is concentrating around fewer, larger AI bets

The acceleration reflects real capability. AI has cut the time and cost of building and iterating on cybersecurity products, allowing small teams to move at unprecedented speed. But faster development doesn’t change the basics: durable businesses still require clear differentiation, strong go-to-market execution and proven customer demand.

What has changed is how capital is being deployed. Venture funding in cybersecurity is increasingly concentrated into fewer companies, with larger rounds and higher valuations. The market is increasingly binary: startups are expected to either secure AI systems or use AI to deliver clear, measurable improvements in security outcomes. Companies that can’t clearly stake out one of those positions are finding it harder to attract attention from both investors and acquirers.

Higher valuations can accelerate momentum, but they also raise the bar for performance. When growth does not materialize as expected, the path forward becomes more difficult, particularly in a market that is moving as quickly as this one.

AI-native startups are operating with smaller, more technical teams

AI is also reshaping how cybersecurity companies are staffed and operated. The most effective teams today are smaller and more technical, relying heavily on automation to extend their capabilities. Engineers are increasingly focused on orchestrating AI systems rather than building every component from scratch, shifting the nature of technical work toward higher-level problem solving and system design. They can iterate faster than ever before, raising expectations for rapid innovation and high output.

This is creating a widening gap between companies that are built around AI from the start and those trying to retrofit it into existing models. For newer startups, this approach is often foundational. For incumbents, it can require significant changes to both technology and culture, feeding an M&A wave that is already in its early innings.

Threat actors are using AI to scale attacks and lower barriers to entry

At the same time, the threat landscape is evolving. AI is lowering the barrier to entry for offensive cyber capabilities, enabling less sophisticated actors to execute attacks that previously required significant expertise. This is increasing both the volume and complexity of threats facing organizations. We’re seeing early responses to that with things like Anthropic’s Project Glasswing, which aims to bring together leading organizations to protect critical software.

The expansion is not limited to traditional network or endpoint attacks. AI is introducing new attack surfaces, from machine identities to autonomous agents and decision-making systems. It is also unleashing new forms of risk, including more advanced disinformation campaigns and other narrative-driven attacks that can impact markets and corporate reputations as much as technical systems.

Cyber defense is shifting toward autonomous, machine-driven models

As attackers scale their use of AI, defenders are being forced to do the same. Cybersecurity is moving toward a model where machine-driven systems play a central role in both detecting and responding to threats. In many cases, the dynamic is moving from human vs. machine, to machine vs. machine.

This shift is driving innovation across the market. New categories are emerging around securing AI systems and workloads, while established areas like endpoint security, data protection and vulnerability management are being rebuilt with AI at their core. These changes are enabling new capabilities but also increasing the pace of competition across the industry.

M&A and platform strategies are accelerating alongside AI innovation

The speed of innovation is also reshaping consolidation across cybersecurity. Larger platforms are moving to incorporate AI capabilities more quickly, while startups are building toward platform strategies earlier in their lifecycle. This is compressing timelines for both growth and acquisition. When incumbents can’t innovate quickly enough, they can buy instead.

Capital continues to play a central role in this dynamic. Strong funding environments are enabling companies to scale quickly, but they are also introducing risk when valuations outpace underlying performance. Some of the largest rounds are functioning as signals of market leadership as much as sources of operating capital.

There is growing awareness that not all these companies will meet expectations. The same conditions that enable rapid growth can also expose weaknesses quickly, particularly if customer adoption and revenue do not keep pace.

What founders and investors are watching for the rest of 2026

The defining characteristic of the current market is speed. The gap between companies that can adapt to these changes and those that cannot is widening quickly.

For founders, that means balancing urgency with discipline – building AI-native products while staying focused on real customer problems. For investors, it means identifying teams that can execute in a rapidly changing environment and build companies that endure beyond the current cycle.

The cybersecurity landscape has always evolved alongside technology and threat activity, but the pace of change today is different. The companies that emerge as leaders in the next phase of the market will be those that can operate effectively in that reality, where AI is foundational, competition is global, and the timeline for success is shorter than ever.


Instructure claims hackers returned stolen Canvas data after an extortion standoff

11 May 2026 at 19:31

Instructure, the company behind Canvas, said it reached an agreement with the cybercriminals who threatened to leak a trove of sensitive data they claim was stolen during a prolonged cyberattack on the widely used education tech platform.

Pressure was mounting on the company late last week as widespread outages left schools, students and teachers temporarily unable to access critical data, after the company took Canvas offline when attackers defaced the platform’s login page. By Friday, the company said Canvas — a central hub for K-12 and university coursework, exams, grades and communication — was back online and fully operational.

ShinyHunters, a decentralized crew of prolific cybercriminals that researchers affiliate with The Com, claimed responsibility for the attack on its data leak site and was attempting to extort the company for an unknown ransom amount. 

Instructure didn’t outright say it paid a ransom, but insisted the agreement provided all necessary assurances. “The data was returned to us. We received digital confirmation of data destruction (shred logs),” the company said in an update Monday.

“We have been informed that no Instructure customers will be extorted as a result of this incident, publicly or otherwise,” the company added. “This agreement covers all impacted Instructure customers, and there is no need for individual customers to attempt to engage with the unauthorized actor.”

The threat group initially set a deadline of May 6 — four days after Instructure previously said the incident was contained — claiming it stole 3.65 terabytes of data spanning 275 million records across 8,809 school systems. 

When that deadline passed without payment, ShinyHunters escalated its pressure on the company by “injecting an extortion message directly into the Canvas login pages of roughly 330 institutions, and pivoted to school-by-school extortion with a current deadline of May 12,” Cynthia Kaiser, senior vice president of Halcyon’s Ransomware Research Center, told CyberScoop.

The additional public pressure prompted Instructure to take Canvas offline, disrupting schoolwork and access to critical systems nationwide.

Instructure CEO Steve Daly apologized over the weekend for the company’s inconsistent communication and deficient public response to the cyberattack. 

“Over the past few days, many of you dealt with real disruption. Stress on your teams. Missed moments in the classroom. Questions you couldn’t get answered. You deserved more consistent communication from us, and we didn’t deliver it. I’m sorry for that,” he said in a statement.

Daly acknowledged that the attack, which remains under investigation aided by CrowdStrike, exposed usernames, email addresses, course names, enrollment information and messages. He insisted that course content, submissions and credentials were not compromised.

The temporary but widespread disruption has spurred broad concern across the education sector as ransomware experts and threat hunters continue to track developments. The cyberattack also caught the attention of lawmakers on Capitol Hill. 

The House Homeland Security Committee on Monday published a letter to Daly seeking a briefing with him or a senior leader at Instructure by May 21. 

“The recurrence of an intrusion within days of an initial breach disclosure, and Instructure’s apparent failure to fully remediate the underlying vulnerabilities during that window, raise serious questions about the company’s incident response capabilities and its obligations to the institutions and individuals whose data it holds,” House Homeland Security Chairman Andrew Garbarino, R-N.Y., wrote in the letter to Daly.

The committee wants to learn more about the “circumstances of both intrusions, the nature and volume of data accessed, the steps Instructure has taken and is taking to contain the threat and notify affected institutions, and the adequacy of the company’s coordination with federal law enforcement and the Cybersecurity and Infrastructure Security Agency,” he added.

CISA did not describe the extent of its involvement in Instructure’s response. “CISA is aware of a potential cyber incident affecting Canvas. As the nation’s cyber defense agency, we provide voluntary support and cybersecurity services to organizations in responding to and recovering from incidents,” Chris Butera, the agency’s acting executive assistant director for cybersecurity, said in a statement.

Instructure’s timeline of the attack has changed and remains incomplete. The company said it first detected unauthorized activity in Canvas on April 29 and immediately revoked the attacker’s access and initiated an incident response. Researchers not directly involved with the formal investigation said ShinyHunters gained access to Canvas at least a few days earlier.

The follow-on malicious activity on May 7 — the defacement of public login pages — was tied to the same incident, the company said. 

“We have since confirmed that the unauthorized actor carried out this activity by exploiting an issue related to our Free-For-Teacher accounts. This is the same issue that led to the unauthorized access the prior week. As a result, we have made the difficult decision to temporarily shut down Free-For-Teacher accounts,” the company said in an updated post about the incident.

Instructure did not answer questions about the vulnerability or explain how attackers broke into its systems. The company said it also revoked privileged credentials and access tokens for affected systems, rotated internal keys, restricted token creation pathways, and deployed additional security controls and monitoring.

Canvas is fully operational and safe to use, the company said, adding that CrowdStrike has reviewed known indicators of compromise and “found no evidence that the threat actor currently has access to the platform.”

Access still remains spotty and unavailable for some Canvas users as school districts restore the platform in phases after conducting their own internal checks.

Halcyon published an alert about the attack Friday, including a screenshot of the message that some school staff, guardians and students encountered before Instructure took the learning management system offline.

ShinyHunters is a notorious data theft extortion group that previously hit major cloud platforms, including Salesforce and Snowflake, via voice phishing, credential theft and supply-chain attacks. 

Education is a recurring and consistent target for cybercriminals, accounting for more than 250 ransomware attacks globally last year, according to Halcyon. 

Yet, the scope of the attack on Canvas “makes this one of the largest single education-sector exposures we’ve tracked,” Kaiser said.

“By compromising a shared platform used across thousands of schools, ShinyHunters hit the entire education sector in one move, which is the same playbook Clop ran against Oracle EBS customers last fall,” she added. “Among 2026 incidents against critical infrastructure, this is at or near the top for education-sector impact, and it highlights a trend of third-party software vendors now being part of an attack surface, and causing cascading effects across an entire sector.”

Cybersecurity professionals focused on ransomware and data theft extortion consistently encourage victims to not pay ransoms, but they also often acknowledge that companies have to make tough decisions based on their own interests and the security of their customers or users caught up in the aftermath.

Allison Nixon, chief research officer at Unit 221B, said the threat group claiming responsibility for the attack should not be trusted. 

“They are claiming they will delete the data after they are paid, and if they are not paid that they will leak the data,” she told CyberScoop. “This is in line with the past data extortion scams run by the same and related Com actors, who have made false statements to victims and to the public in the past.”

Instructure acknowledged that its agreement with the attackers isn’t ironclad. “While there is never complete certainty when dealing with cybercriminals, we believe it was important to take every step within our control to give customers additional peace of mind, to the extent possible,” the company said.

Daly — a longtime security executive who was previously CEO at Ivanti — ended his mea culpa with a pledge to improve communications and provide a summary of a forensics report soon.

“Last week, we made a call to get the facts right before speaking publicly. That instinct isn’t wrong, but we got the balance wrong. We focused on fact-finding and went quiet when you needed consistent updates. You’ve been clear about that, and it’s fair feedback. We will change that moving forward,” he said. 

“Rebuilding trust takes time,” Daly added. “We’re going to earn it back through consistent action and honest communication.”

Update: May 12, 11:00 am: This story has been updated to reflect that Instructure announced they have reached a deal with ShinyHunters.

The post Instructure claims hackers returned stolen Canvas data after an extortion standoff appeared first on CyberScoop.

Google spotted an AI-developed zero-day before attackers could use it

11 May 2026 at 09:00

Google researchers found a zero-day exploit developed by artificial intelligence and alerted the affected vendor to the imminent threat before a well-known cybercrime group could launch a mass-exploitation campaign, the company said in a report released Monday.

The averted disaster probably isn’t the first time attackers used AI to build a zero-day, but it is the first time Google Threat Intelligence Group found compelling evidence that this long-predicted and worrying escalation in vulnerability-exploit development is underway.

“We finally uncovered some evidence this is happening,” John Hultquist, chief analyst at GTIG, told CyberScoop. “This is probably the tip of the iceberg and it’s certainly not going to be the last.”

Google declined to identify the specific vulnerability, which has been patched, or name the “popular open-source, web-based administration tool” it affected. It did, however, note that the defect impacted a Python script that allows attackers to bypass two-factor authentication for the service.

Researchers also withheld details about how they discovered the zero-day exploit or the cybercrime group that was preparing to use it for a large-scale attack spree.

The threat group has a “strong record of high-profile incidents and mass exploitation,” Hultquist said, suggesting the attackers are prominent and well-known among cybersecurity practitioners. 

GTIG is fairly confident the threat group was using AI in a meaningful way throughout the entire process, but it has yet to determine if the technology also discovered the vulnerability it ultimately developed into an exploit.

Whichever AI model the attackers used — Google is confident it wasn’t Gemini or Anthropic’s Mythos — left artifacts throughout the exploit code that are inconsistent with human developers. This evidence, which included Python documentation strings, heavily annotated code and a hallucinated, nonexistent CVSS score, tipped Google off that AI was deeply involved, Hultquist said.
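The artifacts Hultquist describes are stylistic rather than functional. A hypothetical, benign Python sketch of what such machine-generated tells can look like — the verbose docstring, over-commenting, and the plausible-looking but fabricated CVSS/CVE references are illustrative inventions, not the actual exploit code:

```python
# Hypothetical illustration of AI-authored code artifacts; NOT the real exploit.

def validate_token(supplied: str, expected: str) -> bool:
    """
    Validate a two-factor authentication token.

    Args:
        supplied: Token provided by the client.
        expected: Token computed server-side.

    Returns:
        True if the tokens match, False otherwise.

    References:
        CVE-2026-00000 (CVSS 9.9)  # fabricated identifiers, as in the hallucinated score
    """
    # Compare the supplied token against the server-side expected value.
    return supplied == expected
```

A human writing throwaway exploit code rarely produces exhaustive docstrings, comments on trivial lines, or invented severity metadata; the combination of all three is what flagged the code as machine-generated.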

GTIG has been warning about and expecting AI-developed exploits to hit systems in the wild, especially after its Big Sleep AI agent found a zero-day vulnerability in late 2024.

“I think the watershed moment was two years ago when we proved this was possible,” Hultquist said, adding that there are probably several other AI-developed zero-days in play now. 

Yet, to him, the discovery of a single zero-day exploit developed by AI is less concerning than what it portends.

“The game’s already begun and we expect the capability trajectory is pretty sharp,” Hultquist said. “We do expect that this will be a much bigger problem, that there will be more devastating zero-day attacks done over this, especially as capabilities grow.”

The missing cybersecurity leader in small business

By: Greg Otto
11 May 2026 at 06:00

The average cyberattack costs a small- or medium-size business more than $250,000. The salary for a chief information security officer (CISO) is about the same, between $250,000 and $400,000, according to the annual 2026 CISO Report from Sophos and Cybersecurity Ventures. Small- and medium-size businesses (SMBs) know they cannot afford the salary, so they roll the dice, hoping they will not be attacked. This is a dangerous gamble that these businesses, which make up the backbone of the American economy, should not have to take. A virtual CISO (vCISO) or fractional CISO (fCISO) can provide a practical solution.

As the American economy goes digital, SMBs now rely on the same building blocks as big enterprises — cloud services, payment systems, remote access, customer data, and other third-party vendors. But without senior cyber leadership, cybersecurity often becomes a patchwork of tools, checklists, insurance paperwork, and whatever guidance a vendor offers. That may get these companies through a questionnaire; it will not build real resilience. Nearly half of all reported cyber incidents involve smaller firms, and cybercrime is projected to cost the global economy $12.2 trillion annually by 2031.

The threat is growing in both size and sophistication. Adversaries are deploying AI to automate reconnaissance, develop malware, and run phishing campaigns at scale.  This reduces the cost and skill needed to target smaller firms at volume. Adversaries are also collecting encrypted data with the intent to decrypt it later when they have access to large enough quantum computers. SMBs in defense, healthcare, and financial supply chains often hold sensitive credentials that provide access into larger enterprise environments, but most are not prepared to adopt quantum-resistant encryption.

SMBs generally understand they face cyber risk. The real gap is leadership: someone who can turn technical vulnerabilities into business decisions, set priorities, brief executives, prepare for audits, and hold vendors accountable. For most SMBs, hiring a full-time CISO is financially unrealistic.

A Virtual CISO provides remote, on-demand cybersecurity leadership and advice, typically supporting several organizations at the same time. A fractional CISO is a dedicated, part-time executive who is more deeply integrated into one organization’s governance, security planning, and day-to-day operations. Both models give smaller organizations access to senior-level cybersecurity expertise in a flexible, more affordable way than hiring a full-time CISO.

Washington should make it easier for SMBs to hire fractional cybersecurity leaders, because the private market is not closing this gap on its own. The Cybersecurity and Infrastructure Security Agency (CISA) and the Small Business Administration (SBA) could help by publishing buyer guidance: vetted criteria for evaluating providers, example scopes of work and deliverables, and real-world case studies that show SMB owners what a high-quality vCISO or fCISO engagement should look like.

Clear guidance matters because many smaller firms cannot easily tell the difference between true cybersecurity leadership and a tool reseller, compliance-only consultant, or a generic managed services contract. Any vetted provider criteria should emphasize proven experience building and running security programs, independence from vendor incentives and product quotas, and the ability to tie security investment to real business risk, not just a list of certifications. Model scopes of work should also spell out the basics every engagement should deliver: an initial risk assessment, a prioritized remediation roadmap, and simple metrics that show whether security is improving over time. Without clear buyer criteria, federal efforts could end up funding low-quality services that add cost and paperwork without making companies safer.

The National Institute for Standards and Technology (NIST) should recognize these CISO models in its SMB-focused Cybersecurity Framework guidance. That would help smaller firms turn the framework’s Govern, Identify, Protect, Detect, Respond, and Recover functions into a clear, accountable leadership structure. This would make these roles less abstract: the point is not merely providing advice, but taking executive-level ownership of risk priorities, vendor oversight, incident readiness, and communication with the owner or board.

Congress and the Treasury Department should consider targeted tax incentives or credits for qualified cybersecurity leadership services, tied to measurable risk-reduction outcomes. Eligible activities could include completing a risk assessment, building an incident response plan, conducting vendor security reviews, running employee training, and producing a remediation roadmap. SMBs often defer cybersecurity because every dollar competes with payroll, inventory, and growth. A targeted incentive would make security leadership easier to justify as a business investment rather than an optional add-on.

Federal acquisition officials should require contractors that handle sensitive government data to show they have executive-level cybersecurity oversight, whether full-time, virtual, or fractional, and should extend that expectation down to relevant subcontractors and suppliers. This is necessary because SMBs serve as entry points into defense, healthcare, financial, and critical infrastructure supply chains.

Finally, CISA and the SBA should support vCISO- and fractional-CISO-led workforce training. Employees improve security when training comes with leadership, regular reinforcement, and clear accountability, not just annual awareness training. The aim is not to turn every SMB into a Fortune 500 security shop. It should be to give smaller firms access to the leadership they need before the next incident forces the issue.

Georgianna Shea, who is a Doctor of Computer Science, is chief technologist at the Foundation for Defense of Democracies’ Center on Cyber and Technology Innovation and its Transformative Cyber Innovation Lab, where Cason Smith served as a summer 2025 intern. Cason is studying integrated information technology at the University of South Carolina.

Sen. Schumer seeks DHS plan on AI cyber coordination with state, local governments

8 May 2026 at 13:20

The Senate’s top Democrat called on the Department of Homeland Security Friday to work closely with state and local governments to defend against artificial intelligence-strengthened hacks. 

Senate Minority Leader Chuck Schumer, D-N.Y., wrote to DHS Secretary Markwayne Mullin to make sure state, local, tribal and territorial (SLTT) governments aren’t left behind as AI models advance, posing new hacking threats.

“There is a race between cybersecurity defenders and AI-enabled hacking — and there’s no time to waste,” Schumer wrote.

“While the White House has reportedly begun hosting meetings about its internal security priorities following these frontier AI cyber breakthroughs, it is glaringly obvious that the Department of Homeland Security needs an updated plan for coordinating these efforts with [state, local, tribal and territorial] governments and implementing procedures to reduce the risk of disruptive cyberattacks enabled by frontier AI,” he stated.

Schumer said he was worried about the capabilities of DHS and its Cybersecurity and Infrastructure Security Agency to carry out that coordination, given federal funding cuts to the Multistate Information Sharing and Analysis Center, and the lack of a Senate-confirmed CISA director for the duration of the second Trump administration.

Schumer wants a plan from DHS by July 1 on coordinating with state and local governments on a range of questions, such as how to identify top AI talent, carry out rapid patching and conduct risk assessments.

“AI is changing the cyber battlefield fast — and we cannot let hackers get there first,” Schumer said in comments accompanying the letter. “Hospitals, power grids, water systems, schools, elections, and emergency services cannot be left exposed while criminal gangs and state-backed hackers race to exploit new AI tools. DHS must immediately help states and localities find and fix vulnerabilities before Americans are hit with outages, disruptions, and attacks that could put lives and livelihoods at risk.”

CISA is using AI to help on the defensive side internally, agency officials recently said.

Flaw in Claude’s Chrome extension allowed ‘any’ other plugin to hijack victims’ AI

By: djohnson
8 May 2026 at 09:06

As businesses and governments turn to AI agents to access the internet and perform higher-level tasks, researchers continue to find serious flaws in large language models that can be exploited by bad actors.

The latest discovery comes from browser security firm LayerX, involving a bug in the Chrome extension for Anthropic’s Claude AI model that allows any other plugin – even one without special permissions – to embed hidden instructions that can take over the agent.

“The flaw stems from an instruction in the extension’s code that allows any script running in the origin browser to communicate with Claude’s LLM, but does not verify who is running the script,” wrote LayerX senior researcher Aviad Gispan. “As a result, any extension can invoke a content script (which does not require any special permissions) and issue commands to the Claude extension.”
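The root cause is a message handler that never checks who sent the message. A minimal sketch of that pattern — written here as a Python analogy for illustration, not the extension’s actual JavaScript, with all names invented — contrasts the flawed dispatch with a sender-checked one:

```python
# Hypothetical analogy of the missing sender check described by LayerX;
# names and structure are invented, not Claude's actual extension code.

TRUSTED_SENDER_ID = "claude-extension"

def vulnerable_dispatch(message: dict) -> str:
    # Flawed: executes the command for ANY caller, because the sender
    # field is never inspected -- any co-installed extension can inject prompts.
    return f"executed: {message['command']}"

def hardened_dispatch(message: dict) -> str:
    # Fixed: refuse messages that do not come from the trusted extension ID.
    if message.get("sender") != TRUSTED_SENDER_ID:
        raise PermissionError("untrusted sender")
    return f"executed: {message['command']}"
```

In a real Chrome extension, the equivalent fix would be inspecting the `sender` argument of a `chrome.runtime.onMessage` listener before acting on the payload, rather than trusting any content script that can post a message.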

Gispan said he was able to execute any prompt he wanted, blow through Claude’s safety guardrails, evade user confirmation and perform cross-site actions across multiple Google tools. As a proof of concept, LayerX was able to exploit the flaw to extract files from Google Drive folders and share them with unauthorized parties, surveil recent email activity and send emails on behalf of a user, and pilfer private source code from a connected GitHub repository.

The vulnerability “effectively breaks Chrome’s extension security” by creating “a privilege escalation primitive across extensions, something Chrome’s security model is explicitly designed to prevent,” Gispan wrote.

A graphic depicting how the vulnerability exploits the trust boundaries in Claude’s Chrome extension. (Source: LayerX)


Claude relies on text, user interface semantics, and interpretation of screenshots to make decisions, all things that an attacker can control on the input side. The researchers modified Claude’s user interface to remove labels and indicators around sensitive information, like passwords and sharing feedback, then prompted Claude to share the files with an outside server.

That means cybersecurity defenders often have nothing obviously malicious to detect. Where there is visible activity, the model can be prompted to cover its tracks by deleting emails and other evidence of its actions.

Ax Sharma, head of research at Manifold Security, called the vulnerability “a useful demonstration of why monitoring AI agents at the prompt layer is fundamentally insufficient.”

“The most sophisticated part of this attack isn’t the injection, but that the agent’s perceived environment was manipulated to produce actions that looked legitimate from the inside,” said Sharma. “That’s the class of threat the industry needs to be building defenses for.”

Gispan said LayerX reported the flaw to Anthropic on April 27, but claimed the company only issued a “partial” fix to the problem. According to LayerX, Anthropic responded a day later to say that the bug was a duplicate of another vulnerability already being addressed in a future update.   

While that fix, issued May 6, introduced new approval flows for privileged actions that made it harder to exploit the same flaw, Gispan said he was still able to take over Claude’s agent in some scenarios.

“Switching to ‘privileged’ mode, even without the user’s notification or consent, enabled circumventing these security checks and injecting prompts into the Claude extension, as before,” Gispan wrote.

Anthropic did not respond to a request for comment from CyberScoop on the research and mitigation efforts.

Ivanti customers confront yet another actively exploited zero-day

7 May 2026 at 17:50

Attackers are hitting Ivanti customers yet again — circling back to a common target and consistently susceptible vendor in the network edge space — by exploiting a zero-day vulnerability in one of the company’s most besieged products. 

Ivanti warned customers that attackers have successfully exploited CVE-2026-6973, an improper input validation defect in Ivanti Endpoint Manager Mobile (EPMM) that allows authenticated users with administrative privileges to run code remotely. The company alerted customers to the threat in a security advisory Thursday while also disclosing four additional high-severity vulnerabilities in the same product.

“At the time of disclosure, Ivanti is aware of very limited exploitation in the wild of CVE-2026-6973, which requires authenticated administrative access to implement,” a spokesperson for Ivanti said in a statement.

Ivanti did not say when the first instance of exploitation occurred, or precisely how many customers have already been impacted.

The Cybersecurity and Infrastructure Security Agency added the zero-day to its known exploited vulnerabilities catalog within hours of Ivanti’s disclosure.

The company released patches for all five vulnerabilities Thursday, including the additional defects — CVE-2026-5787, CVE-2026-5788 and CVE-2026-7821 — which it said haven’t been exploited in the wild.

“Ivanti discovered these vulnerabilities in recent weeks through internal detection processes which are supported by advanced AI, customer collaboration, and responsible disclosure,” the company spokesperson said. One of the defects was discovered and responsibly reported to Ivanti by a former employee.

The company suggested at least one of the root causes for the latest zero-day may be traced to lingering risk posed by a pair of separate, critical zero-days — CVE-2026-1281 and CVE-2026-1340 — that were exploited starting in late January. The fallout from those exploited vulnerabilities in Ivanti EPMM spread to nearly 100 victims, including The Netherlands’ Dutch Data Protection Authority and the Council for the Judiciary, by early February.

The latest Ivanti EPMM zero-day “requires authenticated administrative access to exploit, which is why customers who followed Ivanti’s recommendation in January to rotate EPMM credentials are at significantly reduced risk. Customers unaffected by the prior vulnerability are also at a much lower risk,” the company spokesperson said.

Caitlin Condon, vice president of security research at VulnCheck, said the administrative privileges required to exploit CVE-2026-6973 indicate it was possibly exploited as part of an attack chain relying on another method for initial access. 

“No attribution was shared on threat actor exploitation of CVE-2026-6973, but two other 2026 CVEs in Ivanti EPMM — CVE-2026-1281 and CVE-2026-1340 — have been exploited by a range of threat actors, including China- and Iran-attributed groups,” Condon told CyberScoop. 

“Those vulnerabilities notably were code-injection vulnerabilities that were remotely exploitable without authentication, unlike CVE-2026-6973,” she added. “Both CVE-2026-1281 and CVE-2026-1340 appear to have been fixed in today’s Ivanti release. Comparatively, these earlier vulns were of higher initial concern than today’s fresh zero-day vulnerability, which requires admin authentication.”

Attacks involving Ivanti defects are a recurring problem for the vendor’s customers and security practitioners at large, including many vulnerabilities that attackers exploited before the company caught or fixed the errors. 

CISA has flagged 34 Ivanti defects on its known exploited vulnerabilities catalog since late 2021. At least 22 defects across Ivanti products have been exploited in the past two years, including five vulnerabilities in Ivanti EPMM in the last year.

During an interview with CyberScoop in March at the RSAC Conference, Ivanti Chief Security Officer Daniel Spicer said the company’s transparency partly explains the high number of vulnerabilities reported and disclosed in its products. 

“My position here at Ivanti is it doesn’t do our customers any good to be quiet about this,” he said, describing the company’s communication stance with the public, CISA and global partners as “very aggressive.”

That’s not always the case with other vendors, Spicer said. “I don’t know that transparency is a core tenet of all other organizations.”

The company, which serves many government agencies and critical infrastructure operators, also routinely notes that highly skilled and resourced attackers, including those backed by nation-states, are often responsible for these waves of attacks on its customers.

Ivanti maintains that it’s trying to consistently improve the security of its products. “Through continued investment in its product security program, including the use of advanced AI paired with human verification, Ivanti is strengthening its ability to identify, remediate, and disclose issues quickly, helping customers stay ahead of an increasingly compressed threat landscape,” the spokesperson said.

The way Spicer put it in March: “We want to make sure that people understand that we are trying to do the right thing.”

Trump officials are steering a cybersecurity scholarship program toward AI

7 May 2026 at 15:57

The Trump administration is redirecting a cybersecurity scholarship program that requires recipients to work in government service toward artificial intelligence, leaving some current program scholars dismayed and bewildered.

In an email to participating school program coordinators obtained by CyberScoop, the Office of Personnel Management and National Science Foundation said the CyberCorps Scholarship For Service program would now be known as CyberAI SFS.

“The SFS students we enroll today will not be employable when they graduate in 2-3 years without significant AI background,” the email reads. “Any SFS student in this new program must be proficient in using AI in cybersecurity or providing security and resilience for AI systems. Therefore, new students in the legacy CyberCorps program must learn to acquire AI expertise to augment their cybersecurity expertise.”

“Effective immediately, new SFS scholars will not be accepted to the Legacy CyberCorps(C) program without a description on how they will develop competencies at the intersection of cybersecurity and AI,” the email continues. “The description of the competency development could include, but are not limited to, formal program of study, experimental learning, research activities, capstone projects, competitions, certifications, and/or no-credit professional development via external providers.”

One current program scholar graduating soon said they were “disappointed” by the change for several reasons. For one: as of earlier this week, the agencies collectively running the program — OPM, NSF and the Department of Homeland Security — hadn’t notified any program participants that any changes were on the horizon.

For another: “I was a little bit surprised that it was coming out as so blatantly disregarding the people that haven’t graduated yet, that everyone in my cohort is already considered ‘legacy,’ and the fact that it said people in the program that I’m currently in will not be employable in the coming years,” they said.

The email leaves scholars uncertain about what will happen as they try to fulfill their side of the agreement, especially since doing so has already been difficult amid cyber job cutbacks and other concerns about how the program has recently been administered. The scholar told CyberScoop there are around 300 people in this current group.

“I assume it will affect placements,” they said. “I can’t say for sure one way or another, because placements are already so impacted by everything that’s been going on. I don’t know what’s due to lack of AI background and what’s due to everything else.”

Another scholar said it was wrong for OPM “to keep claiming repeatedly that they’re acting in our best interests,” when “we’re left out to dry.” Already, the current group of scholars has been frustrated by their inability to get questions answered.

“If we’re legacy CyberCorps, then how does that address anything?” the scholar asked. “We’re just kind of being shoved into a closet and forgotten about. Now in that email, they were saying that we were going to be unhireable in two years time without all this AI stuff under our belt. But at the same time, almost all of our universities were actively discouraging the use of AI.”

Another part of the email brought welcome news to those scholars: a temporary easing of the program’s requirements, including the 70-20-10 rule that sets targets for jobs in the federal government, state and local governments, and the education sector, as well as the rules for securing an internship. Even so, scholars say they still haven’t received any direct information about the changes.

A spokesperson for NSF said there have been some misunderstandings about the email to school program coordinators (known as principal investigators), but didn’t address current scholars’ concerns about communication.

“The guidance does not require scholars to possess these competencies upon entry,” said the spokesperson, Michael Englund. “Rather, it requires principal investigators (PIs) to clearly describe how their programs will prepare scholars to develop AI-related competencies by the time they graduate (typically within two to three years). In other words, programs must have a concrete and immediate plan to ensure scholars gain these skills during the course of their studies, not prior to admission.”

A spokesperson for OPM addressed the two biggest concerns of current participants.

“There are no changes to placement requirements,” the spokesperson said. “As noted, NSF’s updates are forward-looking to ensure future cohorts are prepared for evolving workforce needs. NSF has encouraged institutions to use professional development funds to expand AI-related training where needed. At OPM, we are also expanding AI training and have introduced AI ambassadors to support adoption.”

On communication: “Principal investigators (PIs) remain the primary point of contact for scholars, but OPM plans to increase direct outreach and plans to issue follow-up communication to scholars on placement efforts,” the spokesperson said.

Last week’s email is the latest turn for the program, with the Cybersecurity and Infrastructure Security Agency last month declaring that it was canceling summer internships due to the lapse in funding for some DHS agencies. Congress has since provided funding for CISA. 

The agency didn’t answer a question about whether that cancellation decision has been reversed as a result.

American duo sentenced for hosting laptop farms for North Korean IT workers

By: Greg Otto
7 May 2026 at 09:56


Two U.S. nationals were sentenced to 18 months in prison for running laptop farms that facilitated North Korea’s expansive remote IT workers scheme, the Justice Department said Wednesday.

Matthew Isaac Knoot and Erick Ntekereze Prince both received and hosted laptops at their residences to dupe U.S. companies into thinking remote IT workers they hired were located in the country. The pair’s separate schemes impacted almost 70 U.S. companies and generated a combined $1.2 million in revenue for the North Korean regime.

“The FBI and our partners will continue to disrupt North Korea’s ability to circumvent sanctions and fund its totalitarian regime,” Brett Leatherman, lead of the FBI’s Cyber Division, said in a statement. “These cases should leave no doubt that Americans who choose to facilitate these schemes will be identified and held accountable. Hosting laptops for DPRK IT workers is a federal crime which directly impacts our national security, and these sentences should serve as a warning to anyone considering it.”

Knoot, of Nashville, Tennessee, and Prince, of New York, received the laptops from unsuspecting U.S. companies and installed remote desktop applications on the machines to enable co-conspirators to work from anywhere while appearing to be based at their respective residences.

Prince’s company Taggcar was contracted to supply IT workers to victim U.S. companies from June 2020 through August 2024. He pleaded guilty in November 2025 to wire fraud conspiracy for his yearslong involvement in the North Korean IT worker scheme. 

Prince was indicted and charged in January 2025 along with his alleged co-conspirators, who collectively obtained work for North Korean IT workers at 64 U.S. companies, earning nearly $950,000 in salary payments. 

A federal judge sentenced Prince Wednesday and ordered him to forfeit $89,000, which is the amount he netted personally. 

Knoot was arrested in August 2024, a year after the FBI searched his home. Officials said he made multiple false and misleading statements and destroyed evidence to obstruct the investigation at that time. 

Victim companies paid North Korean workers linked to Knoot’s laptop farm more than $250,000 from July 2022 to August 2023. The remote IT workers transferred those funds to Knoot and accounts associated with North Korean and Chinese nationals, officials said. 

Knoot was sentenced May 1 and ordered to pay $15,100 in restitution to the victim companies and forfeit an additional $15,100, which is equivalent to the amount of his direct take from the scheme.

The two Americans join a growing list of people who have been charged and jailed for supporting the regime’s scheme, which generates hundreds of millions of dollars annually for the country’s military and organizations involved in its weapons programs.

Authorities have been cracking down on the malicious insider activity by seizing cryptocurrency linked to the theft, and targeting U.S.-based facilitators who provided forged or stolen identities and hosted laptop farms for North Korean operatives. 

The countermeasures are stacking up, but the scheme is widespread and has infiltrated an undetermined number of businesses, including hundreds of Fortune 500 companies.

Federal judges previously sentenced other people to prison for their involvement in the scheme, including Kejia Wang and Zhenxing Wang; Audricus Phagnasay, Jason Salazar and Alexander Paul Travis; and Oleksandr Didenko and Christina Chapman.

“These sentences hold accountable U.S. nationals who enabled North Korea’s illicit efforts to infiltrate U.S. networks and profit on the back of U.S. companies,” John A. Eisenberg, assistant attorney general for national security, said in a statement. 

“These defendants helped North Korean ‘IT workers’ masquerade as legitimate employees, compromising U.S. corporate networks and helping generate revenue for a heavily sanctioned and rogue regime,” he added. “The National Security Division will continue to pursue those who, through deception and cyber-enabled fraud, threaten our national security.”

The post American duo sentenced for hosting laptop farms for North Korean IT workers appeared first on CyberScoop.

One House Democrat is pressing Commerce on the government’s spyware use

7 May 2026 at 06:00

A House Democrat who’s been at the forefront of congressional efforts to scrutinize the federal government’s use of commercial spyware wants the Commerce Department to brief Capitol Hill amid apprehension that the Trump administration might further embrace the technology.

Rep. Summer Lee, D-Pa., sent a letter to the department Thursday seeking a briefing on several developments stemming from Immigration and Customs Enforcement acknowledging its use of Paragon’s Graphite spyware, as well as an American company purchasing a controlling stake in Israel’s NSO Group. The Commerce Department sanctioned NSO Group under former President Joe Biden after widespread abuse allegations, including eavesdropping on government officials, activists and journalists.

“The Trump Administration appears to be broadly receptive to using commercial spyware to infiltrate cell phones and allowing U.S. investment in sanctioned spyware companies like NSO Group,” Lee wrote in her letter to Commerce Secretary Howard Lutnick, which CyberScoop is first reporting.

NSO Group’s new executive chairman, David Friedman, is a former Trump ambassador to Israel and was his bankruptcy attorney. He said in November that he expects the administration will be “receptive” to using NSO Group tech.

“Given those close ties between NSO Group and the Trump Administration, and the serious concerns about how NSO’s technology could be used to spy on Americans, we write to request information regarding the purchase of NSO Group by an American company and the potential usage of NSO Group spyware by federal law enforcement,” wrote Lee, who sits on the Oversight and Government Reform panel and is the top Democrat on its Federal Law Enforcement Subcommittee.

Lee was one of the authors of a recent Democratic letter seeking confirmation of ICE’s use of Paragon’s Graphite, which ICE acknowledged. But the lawmakers, in addition to voicing outrage, criticized the administration for not answering all of their questions.

In her latest letter, Lee asked the Commerce Department to brief Oversight and Government Reform Committee staff about internal department deliberations, Commerce communication with the White House and any outside conversations — including with Friedman — about government use of NSO Group technology or any other commercial spyware, and American investment in NSO.

NSO Group “appears to view the Trump administration as friendly to its interests in the United States, pitching itself as a vital tool for the U.S. government to safeguard national security,” Lee wrote, citing company court filings that it “is reasonably foreseeable that a law enforcement or intelligence agency of the United States will use Pegasus.”

The Biden administration sanctions, and court losses in a case against Meta, represented setbacks for NSO Group’s ambitions. And prior to the U.S. investment firm’s purchase of a controlling stake last fall, the Commerce Department under Trump rebuffed efforts to remove NSO Group from its sanctions list.

But the tens of millions of dollars’ worth of investment, following news that Israel had used Pegasus to track people kidnapped or murdered by Hamas, was a boon.

NSO Group maintains that its products are designed only to help law enforcement and intelligence fight terrorism and crime, and that it vets its customers in advance as well as investigates misuse. News accounts and other investigations have turned up a multitude of abuses.

There have been scattered reports of U.S. flirtation with using NSO Group technology. The FBI acknowledged it had bought a Pegasus license, but stopped short of deploying it. The Times of London reported that “it is believed” the Central Intelligence Agency used Pegasus spyware as part of a rescue mission last month for a U.S. airman downed in Iran.

You can read the full letter below.

The post One House Democrat is pressing Commerce on the government’s spyware use appeared first on CyberScoop.

A DOD contractor’s API flaw exposed military course data and service member records

By: Greg Otto
6 May 2026 at 17:15

A defense technology company with Department of Defense contracts exposed user records and military training materials through API endpoints that lacked meaningful authorization checks, according to an account published by Strix, an open-source autonomous security testing project.

The issue affected Schemata, an AI-powered virtual training platform used in military and defense settings. According to Strix, an ordinary low-privilege account was able to access data across multiple tenants, including user listings, organization records, course information, training metadata and direct links to documents hosted on Schemata’s Amazon Web Services instances.

Strix said the exposed materials included a 3D virtual training course for naval maintenance personnel with documentation marked confidential and proprietary, a course containing Army field manuals on explosive ordnance handling and tactical deployment, and hundreds of user records linked to bases and training enrollments. Additionally, the exposed information included names, email addresses, enrollment details and the military bases where U.S. service members were stationed. 

Schemata acknowledged the affected endpoints were exposed May 1, after what Strix described as a 150-day disclosure process. Strix said it verified remediation before publication and published its account earlier this week, 152 days after its initial disclosure attempt.

The reported vulnerability did not require a complex exploit. Strix said it used a low-privilege account to watch normal browser traffic, identify API endpoints exposed through the application, and request high-value data using the same session. According to Strix, those requests returned records from outside the account’s own organization, suggesting the API was not properly enforcing tenant boundaries or user permissions.

In multi-tenant software, authorization controls are intended to ensure users can access only the data and functions assigned to their account or organization. The failure described by Strix would represent a basic breakdown in that model. The firm said some routes also appeared “write-enabled,” meaning a malicious actor could potentially modify or delete courses through update or delete requests, though the account does not say Strix performed destructive testing.
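The missing control can be sketched in a few lines. The example below is a hypothetical Python illustration only; the record store, field names and session shape are invented for the sketch and are not Schemata’s actual code or API.

```python
# Hypothetical sketch of a tenant-scoped authorization check in a
# multi-tenant API. All names and data here are illustrative.

class AuthorizationError(Exception):
    pass

# Toy data store: each record is tagged with the tenant (organization)
# that owns it.
RECORDS = {
    101: {"tenant_id": "org-a", "title": "Maintenance course"},
    102: {"tenant_id": "org-b", "title": "Field manual module"},
}

def get_record(record_id, session):
    """Return a record only if it belongs to the caller's own tenant.

    The failure described by Strix corresponds to skipping the tenant
    comparison below and returning records to any authenticated session.
    """
    record = RECORDS.get(record_id)
    if record is None:
        raise KeyError(record_id)
    # Tenant boundary: compare the record's owner to the session's tenant.
    if record["tenant_id"] != session["tenant_id"]:
        raise AuthorizationError("cross-tenant access denied")
    return record

session = {"user": "low-priv", "tenant_id": "org-a"}
print(get_record(101, session)["title"])   # own-tenant read succeeds
try:
    get_record(102, session)               # cross-tenant read is rejected
except AuthorizationError as exc:
    print(exc)
```

Without that single comparison, every authenticated session can read every tenant’s records, which matches the behavior Strix described.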

Strix did not respond to CyberScoop’s request for comment. 

Schemata’s platform serves military and defense training environments, where user identities, assignments and course enrollments can reveal sensitive operational context. Even when information is not classified, records showing where service members are based, what training they are enrolled in and which materials they can access may create risks if exposed outside intended channels.

In a statement posted on the company’s website, Schemata said it did not have “evidence that any third party exploited the vulnerability to access customer data.”

The disclosure timeline also raises questions about how companies handling sensitive government-related data receive and respond to vulnerability reports. Strix said it first contacted Schemata on Dec. 2, 2025. According to the account, Schemata’s CEO initially responded, “I would love to hear what the vulnerability is, but I assume you want to get paid for it. Is that the play?”

Strix said it clarified the same day that compensation was not required and that its priority was user safety. It said it sent multiple follow-ups from Dec. 8-29, warning that the vulnerability was critical and asking where to send details. Five months later, after Strix told the company that researchers would publish the information publicly, Schemata responded, acknowledged the exposed endpoints and said it would patch the issue immediately.

“After we received actionable details about the vulnerability and confirmed the security researcher appeared to be legitimate, our team remediated the vulnerability the same day, and the researcher independently verified the fix before publishing their findings,” Schemata’s statement reads. “We appreciate the security researcher bringing this to our attention and their contribution to the security of our platform.”

Schemata said it’s working with cybersecurity consultants to assist with its response and improve its security posture. The company also said it is in contact with government authorities about the vulnerability.

Defense contractors that handle Controlled Unclassified Information, or CUI, must report cyber incidents to the Department of Defense Cyber Crime Center (DC3). The center did not respond to CyberScoop’s request for comment. 

According to contracting data, the company holds $3.4 million in contracts with the Department of Defense. In May 2025, Schemata announced $5 million in venture funding from several firms, including Andreessen Horowitz. 

The post A DOD contractor’s API flaw exposed military course data and service member records appeared first on CyberScoop.

A critical Palo Alto PAN-OS zero-day is being exploited in the wild

6 May 2026 at 15:48

Attackers are actively exploiting a zero-day vulnerability affecting some Palo Alto Networks’ customers’ firewalls, the security vendor said in an advisory Tuesday.

The critical memory corruption vulnerability — CVE-2026-0300 — affects the authentication portal of PAN-OS, and allows unauthenticated attackers to run code with root privileges on the vendor’s PA-Series and VM-Series firewalls, the company said.

Palo Alto Networks did not say when or how it became aware of active exploitation, nor when the earliest known exploitation occurred. The Cybersecurity and Infrastructure Security Agency added the defect to its known exploited vulnerabilities catalog Wednesday.

The company hasn’t released a patch for the vulnerability or described the scope and objective of confirmed attacks.

“This vulnerability is specific to a limited number of customers with their User-ID Authentication Portal (Captive Portal) exposed to the public internet or untrusted IP addresses. We have observed limited exploitation of this issue and are working to release software fixes, with the first updates expected to be available on May 13,” a Palo Alto Networks spokesperson told CyberScoop.

The company said firewalls susceptible to the buffer-overflow vulnerability, which carries a CVSS rating of 9.3, are broadly exposed in real-world deployments, and it described the attack complexity as low.

Shadowserver scans found more than 5,800 publicly exposed VM-Series firewalls running PAN-OS as of Tuesday, yet it’s unknown how many of those instances have restricted authentication access to trusted internal IP addresses or disabled the feature altogether.
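Restricting a portal to trusted internal addresses is the core of the interim mitigation. As a generic sketch of that idea only (the network ranges below are placeholders, and this is not Palo Alto Networks configuration guidance), the check amounts to a source-IP allowlist:

```python
# Generic illustration of source-IP allowlisting using Python's standard
# ipaddress module. The trusted range is a placeholder, not vendor guidance.
import ipaddress

TRUSTED_NETS = [ipaddress.ip_network("10.0.0.0/8")]

def portal_access_allowed(source_ip: str) -> bool:
    """Allow portal access only from trusted internal ranges."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in TRUSTED_NETS)

print(portal_access_allowed("10.12.0.5"))    # internal client: True
print(portal_access_allowed("203.0.113.7"))  # internet client: False
```

In practice this filtering happens in firewall policy rather than application code, but the effect is the same: an unauthenticated attacker on the public internet never reaches the vulnerable portal.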

“We have provided clear mitigation guidance to our customers to secure their environments immediately. This issue does not impact Cloud NGFW or Panorama appliances. We remain committed to a transparent, security-first approach to protect our global customer base,” Palo Alto Networks’ spokesperson added.

Benjamin Harris, CEO and founder of watchTowr, noted that Palo Alto Networks proactively alerted customers to the zero-day, a step that allowed defenders to take action on potentially exposed instances. 

“In a bad situation, that is the best they can do immediately. However, that also alerts everyone to the existence of a vulnerability,” he told CyberScoop.

Despite the risk, Harris said watchTowr expects attacks linked to the zero-day exploit to be “very limited.” 

Palo Alto Networks and its impacted customers remain the only parties to have observed exploitation in the wild, but researchers warn that will likely change soon. 

“It’s likely rules will also start to fire in third-party organizations and honeypots shortly,” Caitlin Condon, vice president of security research at VulnCheck, told CyberScoop. 

“Management interfaces, login pages, and authentication portals have been common adversary targets for both opportunistic and targeted campaigns in recent years,” she added. “With researcher and community eyes on the vulnerability, it’s likely that we’ll see public exploits and broader exploitation quickly, provided the issue isn’t prohibitively difficult to exploit.”

Palo Alto Networks has yet to attribute the attacks to any known threat group, publish indicators of compromise, or disclose the types of organizations that have been targeted and impacted. 

Researchers are hunting for malicious activity and advise customers to apply patches upon release.

The post A critical Palo Alto PAN-OS zero-day is being exploited in the wild appeared first on CyberScoop.

CISA wants critical infrastructure to operate ‘weeks to months’ in isolation during conflict

By: djohnson
5 May 2026 at 17:47

The Cybersecurity and Infrastructure Security Agency is urging critical infrastructure owners and operators to plan for delivering essential services under emergency conditions – potentially for months at a time.

The federal government’s top cybersecurity agency warned that state-sponsored hackers, particularly two Chinese groups known as Salt Typhoon and Volt Typhoon, continue to threaten critical sectors like electricity, water, and internet. 

The agency is now working with the private sector to protect operational technology – the systems that control the heavy machinery and equipment that powers most critical infrastructure – from attacks that enter through business IT systems or third-party vendor products.

The initiative — known as CI Fortify — will include CISA conducting targeted technical assessments of critical infrastructure entities and aims to create plans that “allow for safe operations for weeks to months while isolated” from IT networks and third-party tools, according to the agency’s website.

Nick Andersen, CISA’s acting director, told reporters that the goal is “service delivery [that] can still reach critical infrastructure after the asset owner has disconnected with IT and OT, disconnected from third party vendors and service provider connections and disconnected from third party telecommunications equipment.”

Over the past two years, wars in Ukraine, Gaza, Iran and elsewhere have seen water plants, power substations, data centers and other critical infrastructure targeted by kinetic or cyberattacks.

Andersen said the agency has already begun engaging with some companies to pilot the assessments and expects that work to ramp up considerably as CISA hires additional staff in the coming months.

He declined to name the entities involved in the pilot program, but said they will focus on organizations that support national security, defense, public health and safety and economic continuity. He added that CISA’s assessments will vary from sector to sector depending on their unique needs.

“Water isn’t necessarily designed to prioritize specific customer needs outside of recovery periods, while energy and transportation have more immediate tradeoffs for selecting one load or one set of cargo over another,” Andersen said as an example.

One pillar of CISA’s strategy is isolation: essentially turning off all third-party and business network connections to an OT network when facing an emergency or unknown vulnerability.

Organizations also need to develop an internal plan for what acceptable service levels look like under those conditions and reach understandings with their critical customers, like U.S. military installations and lifeline services.

The second pillar, recovery, involves best practices for organizations: backing up files, documenting systems and having manual backups for operations when normal computer systems are down.

In conversations with cybersecurity specialists who focus on critical infrastructure and operational technology, it is widely assumed that China is not the only nation to have broadly compromised American critical infrastructure, and that hacking groups tied to other nations have almost surely noticed and exploited the same basic vulnerabilities and hygiene issues found by the Typhoons.

Agencies like the FBI and Federal Communications Commission have touted efforts to purge Chinese hackers and work voluntarily with telecoms to harden their network security. But U.S. national security officials and cybersecurity defenders have consistently said both Salt Typhoon and Volt Typhoon remain active threats to U.S. critical infrastructure.

The post CISA wants critical infrastructure to operate ‘weeks to months’ in isolation during conflict appeared first on CyberScoop.

CISA boasts AI automation improvements to threat analysis, mission support

5 May 2026 at 15:18

The Cybersecurity and Infrastructure Security Agency has gotten “by far” the biggest gains from artificial intelligence automation in its security operations unit to help analysts sift through threats, but it’s also proven valuable elsewhere within the agency, CISA officials said Tuesday.

It’s “really allowing those analysts to do triage very fast, so they focus on what matters versus the noise,” Tammy Barbour, acting chief of application management at CISA, said. “They’re able to do a lot of real-time, quick looks before events happen in most places.”

Barbour, speaking at the UiPath FUSION Public Sector event hosted by Scoop News Group, said automation has also been a boon to CISA’s Technology Operations Center.

“The top analysts are able to quickly respond to customers who are reaching out to talk and asking questions, and be able to get real-time efficiencies with that,” she said. 

And it’s been a big help for data migration, Barbour said.

Lauren Wind, acting deputy chief technology officer at CISA, said from her wing of the department, it’s focused on finding benefits from automation in areas like human resources, contracting and finance.

“So we can continue to drive mission, but also accelerate the mission-supporting functions,” she said. “We really want to ensure that our cyber analysts are focusing on the things that matter, like malware.”

But there are some barriers to adoption of the technology, both said.

“We’re still kind of in our infancy,” Barbour said. “But we still struggle with the legacy workflows, processes. We still have some systems that need to be modernized, that we’re currently working towards adoption. People love their spreadsheets. I just can’t force it out of their hands, especially the — sorry, all the accountants in the room, I apologize, but you’ve got to let it go.”

AI governance needs to be laid out in advance, too, and transparently, Wind said.

“One of the biggest things is ensuring that the CTO is driving governance, whether that’s for data, whether that’s for AI,” she said. “I think we’re pretty good on generative, and everyone’s a little bit catching up to industry on agentic.”

How to handle data is another consideration, Wind said.

“Whether you’re on the cloud and you’re serverless or you’re still on prem, if you haven’t figured out what your structure of your data platform looks like, it makes automation a lot more difficult,” she said. 

The comments from Barbour and Wind offered a window into how CISA is viewing AI internally. Much of the agency’s recent work related to AI is focused on advice for safe deployment of agentic AI at other organizations, or examination of the way AI is deepening threats.

The post CISA boasts AI automation improvements to threat analysis, mission support appeared first on CyberScoop.

Latvian national sentenced for ransomware attacks run by former Conti leaders

5 May 2026 at 12:28

A federal judge sentenced a Latvian national to 102 months in prison for his involvement in a series of ransomware attacks for more than two years prior to his arrest in 2023, the Justice Department said Monday.

Deniss Zolotarjovs, a resident of Moscow at the time, helped an organization led by former leaders of the Conti ransomware group extort payments from more than 54 companies. 

The 35-year-old was mostly tasked with putting pressure on the crew’s victims. In one case, Zolotarjovs urged co-conspirators to leak or sell children’s health records stolen from a pediatric healthcare company and ultimately sent a collection of sensitive data to “hundreds of patients,” according to court records. 

The ransomware crew identified itself in ransom notes under multiple names during Zolotarjovs’ involvement, including Conti, Karakurt, Royal, TommyLeaks, SchoolBoys Ransomware, Akira and others. 

Zolotarjovs and his co-conspirators extorted nearly $16 million in confirmed ransom payments from their victims. Officials estimate the group’s crimes resulted in hundreds of millions of dollars in losses, not including the psychological and future financial exposure confronting tens of thousands of people whose personal data was stolen.

“Deniss Zolotarjovs helped his ransomware gang profit from hacks of dozens of companies, and even of a government entity whose 911 system was forced offline,” A. Tysen Duva, assistant attorney general of the Justice Department’s Criminal Division, said in a statement. 

Officials said Zolotarjovs searched for points of leverage after researching victim companies and analyzing stolen data. Many of the victims impacted during his active participation between June 2021 and August 2023 were based in the United States.

Zolotarjovs was arrested in the country of Georgia in December 2023 and extradited to the United States in August 2024. He pleaded guilty to money laundering and wire fraud in July 2025. 

“Cybercriminals might think they are invulnerable by hiding behind anonymizing tools and complex cryptocurrency patterns while they attack American victims from non-extradition countries,” Dominick S. Gerace II, U.S. attorney for the Southern District of Ohio, said in a statement. “But Zolotarjovs’s prosecution shows that federal law enforcement also has a global reach, and we will hold accountable bad actors like Zolotarjovs, who will now spend significant time in prison.”

The Russian ransomware crew was prolific and spread across multiple teams, relying on companies registered in Russia, Europe and the United States to conceal its operations. Authorities said the group included former Russian law enforcement officers whose connections allowed members to access Russian government databases to harass detractors and identify potential new recruits.

Conti was among the most prolific ransomware groups globally for a time, impacting hundreds of critical infrastructure providers, Costa Rica’s government in 2022, and ultimately leading the State Department to offer a $10 million reward for information related to Conti’s leaders. The group was notoriously resilient, bouncing back with new infrastructure and hitting new targets after a massive leak exposed chats between the group’s members in 2022.

Conti disbanded later that year, but members of the Cyrillic-language group splintered into three subgroups: Zeon, Black Basta and Quantum. Quantum quickly rebranded to Royal before rebranding again to BlackSuit in 2024.

The post Latvian national sentenced for ransomware attacks run by former Conti leaders appeared first on CyberScoop.
