AI is changing vulnerability discovery, and your software supply chain strategy has to change with it

Wade Woolwine is Senior Director, Product Security at Rapid7.

The headlines around Glasswing have focused on how quickly AI can surface vulnerabilities, which has naturally caught the attention of security leaders. In my conversations with teams and customers, the more useful discussion has been about what that speed means in practice for business protection, especially across open source risk, dependency choices, and software supply chain resilience. The deeper issue for security leaders sits elsewhere. 

Software risk is becoming harder to manage across the full lifecycle, especially in open source dependencies, build pipelines, developer environments, and the operational processes that sit between disclosure and remediation. When vulnerabilities can be found faster and at greater depth, security teams need more than another source of findings. They need a stronger way to understand what they run, what they trust, what they can patch quickly, and where a single weak dependency can create disproportionate risk.

Faster discovery makes software supply chain resilience a more immediate leadership issue. CISOs need a clearer view of how dependencies are chosen, monitored, validated, and governed across production, build, and developer environments, especially as open source remains essential to modern software development.

Organizations already struggle to absorb vulnerability disclosures at the pace they are coming in, because when discovery gets faster, the operational gap widens between knowing there is a problem and being able to do something useful about it. That gap is especially serious in the software supply chain, where a single dependency can introduce risk into build systems, production workloads, developer endpoints, and the tools used to secure them.

This is why I would frame AI-driven vulnerability discovery risk as a lifecycle challenge. The pressure does not sit in one place, but across inventory, dependency decisions, threat intelligence, patching discipline, and validation – with people, process, and visibility shaping how well an organization can respond. Technology matters, but it cannot compensate for a weak operating model underneath it.

Open source still matters. Dependency choices matter more.

Open source remains essential to modern software development because it helps teams move faster and get products to market without rebuilding common functionality from scratch. Abandoning it is not a realistic answer to rising risk; the better response is to be more deliberate about where and how third-party code enters the environment.

Open source has always involved a trade-off between speed, efficiency, flexibility, and inherited risk, and that trade-off becomes harder to manage as AI makes code review deeper and faster. More flaws and supply chain compromises will likely be found in packages that teams have trusted for years, including transitive dependencies most developers did not knowingly choose. One only needs to look back a few weeks to find that the widely used Axios package suffered a supply chain compromise that bundled a Remote Access Trojan (RAT) designed to steal secrets. That raises the value of understanding which dependencies are essential, which ones can be removed, which ones pull in large chains of transitives, and which ones are maintained by too few people to inspire confidence.

That work starts with a more disciplined question than “Is there a package that does this?” It starts with “Do we need this dependency, and do we understand the risk that comes with it?” The safest dependency is often the one that never enters the environment in the first place.

Why inventory has to go deeper than package lists

Supply chain resilience begins with knowing what you are actually running, which sounds straightforward until a critical disclosure lands in a package no one realized was in the environment three layers deep. Dependency graphs are deeper than most teams think, and transitive risk is where a lot of operational pain begins. A package chosen directly by a developer may bring in dozens of additional packages, each with its own maintainers, release cadence, security posture, and potential failure points.

A mature approach to inventory needs to move beyond a static package list, because CISOs need confidence in three views at once: What is declared in source, what is resolved and built, and what is actually running in production? Those views often drift apart over time, which means a package can be patched in source and still remain unpatched in a deployed container or runtime environment. An SBOM on its own will not close that gap; continuous, usable inventory will.
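As a concrete illustration, those three views can be compared mechanically. The sketch below uses hypothetical package names, with in-memory dicts standing in for a source manifest, a build-time lockfile or SBOM, and a runtime scan; a real pipeline would parse those artifacts rather than hard-code them.

```python
# Hypothetical sample data: the same service seen from three angles.
declared = {"left-pad": "1.3.0", "axios": "1.7.4"}  # what source declares
built = {"left-pad": "1.3.0", "axios": "1.7.4", "follow-redirects": "1.15.6"}  # what the build resolved
running = {"left-pad": "1.3.0", "axios": "1.6.0", "follow-redirects": "1.15.6"}  # what production runs

def inventory_drift(declared, built, running):
    """Return (packages whose built vs. running versions disagree,
    packages in the build that no one directly declared)."""
    drifted = {
        name: (built[name], running[name])
        for name in built.keys() & running.keys()
        if built[name] != running[name]
    }
    undeclared = sorted(built.keys() - declared.keys())
    return drifted, undeclared

drifted, undeclared = inventory_drift(declared, built, running)
print(drifted)     # axios was patched in source and build but is stale in production
print(undeclared)  # transitive packages no developer chose directly
```

Even this toy comparison surfaces the two failure modes described above: a fix that never reached the deployed artifact, and a transitive dependency that exists in no one's mental inventory.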

That inventory also needs clear ownership attached to it, because the moment a critical dependency is identified, someone has to decide what happens next, coordinate the change, and absorb the operational consequences. Security teams cannot do that well if responsibility is unclear, which is why ownership needs to be treated as part of resilience rather than an administrative detail.

Build pipelines and developer environments deserve the same scrutiny as production

Supply chain conversations still tend to start with production systems, even though recent incidents have shown how quickly compromise can move through the build layer, developer tooling, or the security tooling inside the pipeline itself. Those environments hold code, secrets, and trust relationships that attackers know how to exploit, while developer workstations often carry a rich mix of credentials and elevated privileges because speed matters to the business. Build systems are predictable and privileged, which makes them both valuable and vulnerable, but also easier to monitor.

Seeing those layers as part of the same attack surface means asking harder questions about how code enters the build, how package updates are governed, how actions and dependencies are pinned, what secrets exist in CI/CD, and what controls are in place on developer endpoints to detect anomalous behavior or stop high-risk package activity before it goes unnoticed.

You can gauge the maturity of the operating model with the answers to a few basic questions:

  • How tightly are dependencies controlled in CI?

  • How are package lifecycle scripts governed?

  • What secrets exist in CI/CD, and what protections surround them?

  • What visibility exists into anomalous behavior on developer endpoints?

  • How would the team detect or prevent high-risk package activity before it spreads?

If those answers are unclear, important parts of the model are still missing.
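The first of those questions can be turned into an automated gate. The sketch below is a minimal, hypothetical check that flags dependencies whose version specifiers allow drift in CI; the range markers and the manifest dict are simplified assumptions, not a full semver-range parser.

```python
# Illustrative CI gate: flag dependencies not pinned to an exact version.
# A real check would parse package.json or requirements.txt and fail the build.
RANGE_MARKERS = ("^", "~", ">", "<", "*")

def unpinned(dependencies):
    """Return dependency names whose version spec allows drift."""
    return sorted(
        name for name, spec in dependencies.items()
        if spec.startswith(RANGE_MARKERS)
        or spec in ("*", "latest")
        or "x" in spec.split(".")
    )

deps = {"axios": "^1.7.0", "left-pad": "1.3.0", "lodash": "latest"}
print(unpinned(deps))  # caret range and floating tag both flagged
```

The point is not the parser; it is that "how tightly are dependencies controlled in CI?" should have a machine-enforced answer rather than a tribal one.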

Why prioritization matters more as scanning accelerates

When software risk rises, the instinct is often to add another scanner because more visibility feels like progress. What matters more over time, though, is how well teams can prioritize the findings that follow, assign them to the right owner, choose the right mitigation, and prove that exposure actually went down. Broader scanning and faster discovery mostly add to the pile unless the operating model behind them is strong enough to turn findings into action. Feed more issues into a process that is already stretched and the backlog grows, priorities become harder to sort, and remediation slows in the places where speed matters most. The organizations that come through this period well will be the ones that treat supply chain resilience as a systems problem, with stronger intake, clearer governance, better intelligence, and faster paths from alert to action.

What stronger software supply chain resilience looks like in practice

A stronger response starts with a deeper inventory of dependencies across source, build, and runtime, so teams can see both direct and transitive packages and connect them back to real environments and real owners. Once that picture is in place, intelligence monitoring becomes far more useful when it runs continuously against credible signals on vulnerabilities, package risk, maintainer health, end-of-life software, and unusual changes in dependency behavior.

The same level of care needs to carry through into dependency governance, where better decisions depend on asking whether a new package is necessary, how much transitive risk it introduces, whether its maintenance model is healthy, and what policy governs its path into production. Build and developer controls belong in that same conversation, because version pinning, private registries, secret handling, script restrictions, immutable builds, ephemeral runners, and stronger endpoint monitoring all reduce the attack surface around the software supply chain.
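Transitive risk in particular can be quantified from the dependency graph itself. The sketch below walks a toy graph (all package names hypothetical) to count what each direct dependency pulls in; in practice the graph would be built from a lockfile or SBOM.

```python
# Illustrative measure of how much transitive surface a direct dependency adds.
from collections import deque

graph = {
    "app": ["axios", "left-pad"],
    "axios": ["follow-redirects", "form-data"],
    "form-data": ["mime-types"],
    "left-pad": [],
    "follow-redirects": [],
    "mime-types": [],
}

def transitive_closure(graph, root):
    """Breadth-first walk returning every package reachable from root."""
    seen, queue = set(), deque([root])
    while queue:
        node = queue.popleft()
        for dep in graph.get(node, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

for direct in graph["app"]:
    print(direct, len(transitive_closure(graph, direct)))
```

A direct package that looks like one decision may in fact be several: here the first dependency pulls in three further packages, each with its own maintainers and release cadence, while the second pulls in none.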

Monitoring threat intelligence for notifications about new vulnerabilities and compromised packages, and having a well-defined and practiced process for scoping and remediating emerging threats, becomes critical. Your supply chain vulnerability and compromise response should be practiced – just like your incident response plan – through tabletop exercises and simulated threat events. You don’t want to wait until the house is on fire to learn how to execute an effective response.

Similarly, Engineering, DevOps, and Security teams should collaborate on establishing a trust and reputation scoring mechanism for supply chain dependencies. Being able to evaluate the speed of response, transparency of communication and updates, and ultimate resolution of a vulnerability or compromise speaks volumes about how much you can trust the maintainers of the software you depend on. The OpenSSF Scorecard project offers a great place to start evaluating the open source packages you’re already using.
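A scoring mechanism like that can start very simply. The sketch below is a hypothetical weighted score over boolean trust signals, far cruder than the OpenSSF Scorecard; the signal names and weights are illustrative assumptions that each team would calibrate against its own incident history.

```python
# Illustrative trust score for a dependency. Signals and weights are
# hypothetical; a real scheme would draw on Scorecard-style checks.
WEIGHTS = {
    "maintainers": 0.3,        # more than one active maintainer
    "signed_releases": 0.2,    # releases signed / provenance attested
    "responds_quickly": 0.3,   # past advisories fixed within SLA
    "transparent_comms": 0.2,  # public advisories and changelogs
}

def trust_score(signals):
    """Weighted sum of boolean trust signals, 0.0 (avoid) to 1.0 (strong)."""
    return round(sum(WEIGHTS[k] for k, ok in signals.items() if ok), 2)

score = trust_score({
    "maintainers": True,
    "signed_releases": False,
    "responds_quickly": True,
    "transparent_comms": True,
})
print(score)  # 0.8
```

Even a crude score forces the useful conversation: which signals matter, who owns collecting them, and what threshold blocks a package from entering production.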

Organizations should also have a fallback plan for when a security patch is not available. Options to consider include exploring other open source packages that perform similar functions, applying other mitigations such as application firewalling, or even forking the package and contributing a security patch back to the community.

Validation closes the loop by showing whether the artifact came from where it was supposed to, whether the package has drifted in unexpected ways, and whether the mitigations applied are reducing live risk rather than simply documenting the process.

How CISOs should think about the next 12 months

The strain on security teams is only growing, and the potential for AI to relieve some of that pressure is understandably compelling, especially when boards, CEOs, and CFOs are asking how the organization plans to adopt it. That makes this a leadership question as much as a technology one. CISOs need a clear point of view on where AI can genuinely improve resilience, where it still introduces too much uncertainty, and how to explain those choices in business terms.

If software engineering teams are already adopting AI-assisted development, security teams should be part of that conversation early, especially around dependency management. I have seen teams begin connecting AI coding agents to vulnerability management workflows so those agents can interpret vulnerabilities found in the code base, assess reachability with more context, help plan remediation, and validate updates much faster than traditional handoffs usually allow. Used well, that can reduce drag across the workflow and help teams move faster on classes of issues that are currently slowing them down.

Getting there safely still depends on the foundation underneath it. A more resilient path starts with a clearer picture of the environment and a more complete inventory of dependencies across source, build, and runtime. From there, ownership needs to be explicit, threat and vulnerability intelligence needs to be embedded into how the organization prioritizes, and dependency sprawl needs to be reduced with more discipline around what actually enters production. The same mindset should carry through to the build layer and developer endpoints, where tighter controls and better visibility help reduce unnecessary exposure, while faster and more repeatable paths from disclosure to action make it easier for teams to respond before risk compounds.

That foundation will matter regardless of which AI model or platform becomes dominant six or twelve months from now. It will also matter if the next wave of AI makes backlog reduction, lower-tier remediation, or patch validation more practical. Organizations that know what they run and how they operate will be in a much better position to adopt those capabilities with intent.

The shift security leaders should make now

Security in an AI-accelerated world needs to be managed as a systems challenge, with supply chain resilience shaped by how well organizations connect software composition, exposure visibility, dependency governance, threat intelligence, build integrity, endpoint controls, remediation workflows, and validation. When those layers are treated separately, gaps open quickly; when they are tied together through a stronger operating model, teams are in a much better position to absorb faster discovery without losing control of the response.

For CISOs, that means continuing to use open source with a more deliberate view of dependency risk, reducing unnecessary packages where possible, knowing what is running and who owns it, and monitoring threat and vulnerability intelligence with enough discipline to act before the queue overwhelms the team. It also means paying closer attention to the attack surface across production, build, and developer environments, while treating AI as something that will amplify both the strengths and the weaknesses already present in the program. Faster discovery is here, and the organizations that handle it best will be the ones that can respond with the same level of discipline.

Vercel attack fallout expands to more customers and third-party systems

Vercel said the fallout from an attack on its internal systems hit more customers than previously known, as ongoing analysis uncovered additional evidence of compromise

The company, which makes tools and hosts cloud infrastructure for developers, maintains a “small number” of accounts were impacted, but it has yet to share a number or range of known incidents linked to the attack. Vercel created and maintains Next.js, a widely used React framework that’s downloaded more than 9 million times per week, along with other popular open-source projects.

Vercel CEO Guillermo Rauch said the company and partners have analyzed nearly a petabyte of logs across the Vercel network and API, and learned malicious activity targeting the company and its customers extends beyond an initial attack that originated at Context.ai. 

“Threat intel points to the distribution of malware to computers in search of valuable tokens like keys to Vercel accounts and other providers,” Rauch said in a post on X.

“Once the attacker gets ahold of those keys, our logs show a repeated pattern: rapid and comprehensive API usage, with a focus on enumeration of non-sensitive environment variables,” he added.

The attack exemplifies the widespread and compounded risk posed by interconnected systems that rely on OAuth tokens, trusted relationships and overly privileged permissions linking multiple services together.

“The real vulnerability was trust, not technology,” Munish Walther-Puri, head of critical digital infrastructure at TPO Group, told CyberScoop. “OAuth turned a productivity app into a backdoor. Every AI tool an employee connects to their work account is now a potential attack surface.”

An attacker traversed Vercel’s internal systems to steal and decrypt customer data, including environment variables it stored, posing significant downstream risk. 

The company insists the breach originated at Context.ai, a third-party AI tool used by one of its employees. Researchers at Hudson Rock previously said the seeds of that attack were planted in February when a Context.ai employee’s computer was infected with Lumma Stealer malware after they searched for Roblox game exploits, a common vector for infostealer deployments. 

Vercel has not specified the systems and customer data compromised, nor has it said whether the threat has been eradicated or contained. The company said it’s found no evidence of tampering across the software packages it publishes, concluding “we believe the supply chain remains safe.”

The company fueled further intrigue in its updated security bulletin, noting that it also identified a separate “small number of customers” that were compromised in attacks unrelated to the breach of its systems. 

“These compromises do not appear to have originated on Vercel systems,” the company said. “This activity does not appear to be a continuation or expansion of the April incident, nor does it appear to be evidence of an earlier Vercel security incident.”

It’s unclear how Vercel became aware of those attacks and why it’s disclosing them publicly. 

Vercel declined to answer questions, and Mandiant, which is running incident response and an investigation into the attack, referred questions back to Vercel. 

Vercel has not attributed the breach to any named threat group or described the attackers’ objectives. 

An online persona identifying themselves as ShinyHunters took responsibility for the attack and is attempting to sell the stolen data, which they claim includes access keys, source code and databases. Austin Larsen, principal threat analyst at Google Threat Intelligence Group, said the attacker is “likely an imposter,” but emphasized the risk of exposure is real.

Walther-Puri warned that the downstream blast radius from the attack on its systems remains undefined. “Stolen API keys and source code snippets from internal views are potentially keys to customer production environments,” he said.

The stolen data attackers claim to have “sounds almost boring … but it’s infrastructure intelligence,” Walther-Puri added. “The right environment variable doesn’t just unlock a system — it lets adversaries become that system, silently, from the inside.”

The post Vercel attack fallout expands to more customers and third-party systems appeared first on CyberScoop.

Vercel’s security breach started with malware disguised as Roblox cheats

Vercel customers are at risk of compromise after an attacker hopped through multiple internal systems to steal credentials and other sensitive data, the company said in a security bulletin Sunday. 

The attack, which didn’t originate at Vercel, showcases the pitfalls of interconnected cloud applications and SaaS integrations with overly privileged permissions. 

An attacker traversed third-party systems and connections left exposed by employees before it hit the San Francisco-based company that created and maintains Next.js and other popular open-source libraries. 

Researchers at Hudson Rock said the seeds of the attack were planted in February when a Context.ai employee’s computer was infected with Lumma Stealer malware after they searched for Roblox game exploits, a common vector for infostealer deployments.

Each of the companies is pinning at least some of the blame for the attack on the other.

Context.ai on Sunday said that breach allowed the attacker to access its AWS environment and OAuth tokens for some users, including a token for a Vercel employee’s Google Workspace account. Vercel is not a Context customer, but the Vercel employee was using Context AI Office Suite and granted it full access, the artificial intelligence agent company said. 

“The attacker used that access to take over the employee’s Vercel Google Workspace account, which enabled them to gain access to some Vercel environments and environment variables that were not marked as sensitive,” Vercel said in its bulletin. 

The company said a limited number of its customers are impacted and were immediately advised to rotate credentials. Vercel, which declined to answer questions, did not specify which internal systems were accessed or fully explain how the attacker gained access to Vercel customers’ credentials. 

Vercel CEO Guillermo Rauch said customer data stored by the company is fully encrypted, yet the attacker got further access through enumeration, or by counting and inventorying specific variables. 

“We believe the attacking group to be highly sophisticated and, I strongly suspect, significantly accelerated by AI,” he said in a post on X. “They moved with surprising velocity and in-depth understanding of Vercel.”

A threat group identifying itself as ShinyHunters took responsibility for the attack in a post on Telegram and is attempting to sell the stolen data, which they claim includes access keys, source code and databases.

The attacker “is likely an imposter attempting to use an established name to inflate their notoriety,” Austin Larsen, principal threat analyst at Google Threat Intelligence, wrote in a LinkedIn post. “Regardless of the threat actor involved, the exposure risk is real.”

Vercel also warned that Context’s Google Workspace OAuth app “was the subject of a broader compromise, potentially affecting its hundreds of users across many organizations.” It published indicators of compromise and encouraged customers to review activity logs and to review and rotate variables containing secrets.

Context and Vercel said their separate and coordinated investigations into the attack aided by CrowdStrike and Mandiant remain underway.

The post Vercel’s security breach started with malware disguised as Roblox cheats appeared first on CyberScoop.

Why the Axios attack proves AI is mandatory for supply chain security

Two weeks ago, a suspected North Korean threat actor slipped malicious code into Axios, a widely used JavaScript library. The immediate concern was the blast radius: roughly 100 million weekly downloads spanning enterprises, startups, and government systems. But beyond the sheer scale, the attack’s speed was just as worrisome – a stark reminder of the tempo modern adversaries now operate at.

The Axios compromise was identified within minutes of publication by an Elastic researcher using an AI-powered monitoring tool that analyzed package registry changes in real time. The approach was right: AI classifying code changes at machine speed, at the moment of publication, before the damage compounds. By any standard, it was a fast response. The compromised package was removed in about three hours. But even in those three hours, the widely used package may have been downloaded over half a million times.
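One heuristic such a registry monitor might apply is flagging newly added lifecycle scripts, a common malware insertion point in npm packages. The sketch below is an illustrative, simplified check over hypothetical package manifests; it is not a description of Elastic’s actual tooling.

```python
# Illustrative registry-monitoring heuristic: flag a new package version
# that adds install-time lifecycle scripts relative to the prior version.
SUSPICIOUS_SCRIPTS = {"preinstall", "install", "postinstall"}

def added_lifecycle_scripts(previous, current):
    """Return lifecycle scripts present in the new version but not the old."""
    old = set(previous.get("scripts", {})) & SUSPICIOUS_SCRIPTS
    new = set(current.get("scripts", {})) & SUSPICIOUS_SCRIPTS
    return sorted(new - old)

prev = {"name": "example-lib", "version": "1.2.0",
        "scripts": {"test": "jest"}}
curr = {"name": "example-lib", "version": "1.2.1",
        "scripts": {"test": "jest", "postinstall": "node ./setup.js"}}

print(added_lifecycle_scripts(prev, curr))  # flagged for review before trust
```

A single heuristic like this is easy to evade on its own; the value of AI-driven monitoring is combining many such weak signals at publication time, faster than a human review queue could.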

This underscores a new reality. Enterprises and the public sector are being overwhelmed with attacks that are increasing in both speed and complexity, driven in part by AI. Adversaries are probing every link in the supply chain, and they are doing it at a pace that human-speed defenses cannot match.

This project is one example of using AI to tackle a security problem, but it also makes a broader case: AI-powered security can dramatically improve SOC efficiency, especially when organizations across the public sector and beyond are drowning in attacks.

The direct threat to the public sector

Government agencies increasingly rely on the same open-source JavaScript frameworks as the private sector, so a poisoned package can give an adversary access to sensitive systems before anyone realizes the supply chain has been compromised. This is a direct threat to national security and critical infrastructure, especially when the payloads are cross-platform, affecting macOS, Windows, and Linux.

What is most critical now is understanding and correctly preparing for the frequency and speed at which these attacks occur.

AI has fundamentally lowered the barrier to sophisticated cyber operations, granting relatively unsophisticated bad actors and small nation-states capabilities once reserved for elite criminal groups and countries. Adversaries now leverage AI to automate reconnaissance, craft convincing social engineering, and develop evasive malware. With a new vulnerability discovered every few minutes, the pace is accelerating.

For the public sector, the threat model has expanded. Defending against known nation-state playbooks is no longer sufficient—that’s just the baseline. Groups that couldn’t execute at nation-state levels five years ago now operate with comparable sophistication, while state-sponsored actors operate with unprecedented speed and automation. Staying ahead means moving beyond traditional defense to meet a threat landscape that is increasingly automated and ubiquitous.

AI is not optional

Adversarial AI is the defining threat of the current operating environment. Automated reconnaissance. AI-generated obfuscation. Machine-speed deployment across multiple vectors simultaneously. The adversary has implemented AI faster and more aggressively than most defensive teams.

It is rapidly becoming unquestionable in security: if you are not using AI to battle AI, you will lose.

That does not mean buying into the autonomous SOC fantasy. That approach treats AI in isolation, as if defenders are the only ones with access to the technology. Defensive AI is not a win button, but the minimum entry fee to stay level with the attacker. You still need business context, mission knowledge, and human judgment.

The agentic SOC transformation

The Axios compromise should serve as a clear signal. Nation-state actors are targeting the software supply chain with increasing frequency and sophistication. The government agencies and organizations that will defend successfully against these threats are the ones building security operations that can move just as fast as the threat actors they face.

AI-driven security operations that can match the speed of modern threats – agentic workflows that automatically triage, investigate, and contain suspicious activity – are operationally necessary. An agentic SOC mindset and approach to how these centers work will empower analysts, with agents operating on their behalf automatically and transparently.

The traditional SOC pyramid puts humans at the bottom doing the highest-volume work. A wide analyst tier triaging alerts, feeding a narrower senior tier handling investigations. Adversarial AI has made that base layer untenable. The volume is too high, the speed too fast, the surface area too broad. The pyramid inverts into a diamond – AI takes the base while analysts rise to become threat engineers: managing, validating, and improving the agents working on their behalf.

AI agents handle the high-volume work of alert correlation, investigation enrichment, and initial containment while human analysts focus on strategic decisions and mission context. These agents amplify the expertise that government security professionals bring, delivering pre-investigated, correlated findings rather than a flood of disconnected alerts.

The rapid acceleration of sophisticated attacks calls for this essential change across the SOC. The public sector and industry are undergoing a significant transformation, shifting away from eyes-on-glass alert triage toward a high-impact era of threat engineering. In doing so, public sector teams will have the ability to greatly reduce mean time to detect/respond, in turn reducing SOC analyst fatigue and compressing investigation timelines.

Mike Nichols is the GM of Security at Elastic.

The post Why the Axios attack proves AI is mandatory for supply chain security appeared first on CyberScoop.

OpenAI’s Mac apps need updates thanks to the Axios hack

OpenAI updated its security certificates and is requiring all macOS users to update to the latest versions after determining its products, along with many others, were impacted by a widespread supply-chain attack that briefly infected a popular open-source library in late March, the company said in a blog post Friday.

The artificial intelligence vendor said it “found no evidence that OpenAI user data was accessed, that our systems or intellectual property was compromised, or that our software was altered.”

Yet, because a GitHub workflow the company uses to sign certificates for macOS applications downloaded and executed a malicious version of Axios, the company is treating the soon-to-be defunct certificate as compromised.

A North Korean hacking group injected malware into two versions of Axios after it compromised the lead maintainer’s computer via social engineering and took over his npm and GitHub accounts. Jason Saayman, the lead maintainer for Axios, said the malicious versions of the software were live for about three hours before removal. 

Google Threat Intelligence Group, which tracks the threat group as UNC1069, said the impact of the attack was broad, with ripple effects potentially exposing other popular packages. The affected JavaScript libraries flow into dependent downstream software through more than 100 million and 83 million weekly downloads, respectively.

The attack was discovered just weeks after a series of other open-source tools, including Trivy, were compromised by UNC6780, also known as TeamPCP, resulting in aggressive extortion attempts. 

OpenAI insists the malware that infected Axios did not directly impact its certificate, which is designed to help customers confirm they are downloading legitimate software. 

“The signing certificate present in this workflow was likely not successfully exfiltrated by the malicious payload due to the timing of the payload execution, certificate injection into the job, sequencing of the job itself, and other mitigating factors,” the company said in the blog post. “Nevertheless, out of an abundance of caution we are treating the certificate as compromised, and are revoking and rotating it.”

Older versions of OpenAI’s macOS apps may lose functionality and will no longer be supported when the certificate is fully revoked May 8, the company said.

OpenAI, which hired a third-party digital forensics and incident response firm to aid its investigation and response, pinned the root cause of the security issue on a misconfiguration in its GitHub workflow. The company said it corrected that error and worked with Apple to ensure fraudulent apps posing as OpenAI cannot use the impacted certificate.

The 30-day window is designed to minimize disruption for users, but OpenAI said it will speed up the revocation deadline if it identifies any malicious activity. The company did not immediately respond to a request for comment.

The post OpenAI’s Mac apps need updates thanks to the Axios hack appeared first on CyberScoop.

Tech giants launch AI-powered ‘Project Glasswing’ to identify critical software vulnerabilities

Major technology companies have joined forces in an effort to use advanced artificial intelligence to identify and address security flaws in the world’s most critical software systems, marking a significant shift in how the industry approaches cybersecurity threats.

Anthropic announced Project Glasswing on Tuesday, bringing together Amazon, Apple, Broadcom, Cisco, CrowdStrike, the Linux Foundation, Microsoft, and Palo Alto Networks. The initiative centers on Claude Mythos Preview, an unreleased AI model that Anthropic will make available exclusively to project partners and approximately 40 additional organizations responsible for critical software infrastructure.

The model has already identified thousands of previously unknown vulnerabilities in its initial testing phase, including security flaws that have existed in widely used systems for decades, according to Anthropic. Among the discoveries is a 27-year-old bug in OpenBSD, an operating system known primarily for its security focus, and a 16-year-old vulnerability in FFmpeg, a widely used video software program that automated testing tools had failed to detect despite running the affected code line five million times. The company has been in contact with the maintainers of the relevant software, and all found vulnerabilities have been patched. 

Anthropic will commit up to $100 million in usage credits for the project, along with $4 million in direct donations to open-source security organizations. The company has stated it does not plan to make Mythos Preview available to the general public, citing concerns about the model’s potential misuse.

The initiative reflects growing concerns within the technology sector about the dual-use nature of advanced AI systems. While Mythos Preview was not trained specifically for cybersecurity purposes, its coding and reasoning capabilities have proven effective at identifying subtle security flaws that have eluded human analysts and conventional automated tools.

“Although the risks from AI-augmented cyberattacks are serious, there is reason for optimism: the same capabilities that make AI models dangerous in the wrong hands make them invaluable for finding and fixing flaws in important software—and for producing new software with far fewer security bugs,” the company said in a blog post. “Project Glasswing is an important step toward giving defenders a durable advantage in the coming AI-driven era of cybersecurity.”

The project comes as industry observers predict that similar AI capabilities will soon become more widespread. Anthropic executives have indicated that without coordinated action, such tools could eventually reach actors who might deploy them for malicious purposes rather than defensive security work.

Participating organizations will be required to share their findings with the broader industry. The project places particular emphasis on open-source software, which forms the foundation of most modern systems, including critical infrastructure, yet whose maintainers have historically lacked access to sophisticated security resources.

“Open source software constitutes the vast majority of code in modern systems, including the very systems AI agents use to write new software. By giving the maintainers of these critical open source codebases access to a new generation of AI models that can proactively identify and fix vulnerabilities at scale, Project Glasswing offers a credible path to changing that equation,” said Jim Zemlin, CEO of the Linux Foundation. “This is how AI-augmented security can become a trusted sidekick for every maintainer, not just those who can afford expensive security teams.” 

Additionally, Anthropic says it has engaged in ongoing discussions with U.S. government officials regarding Mythos Preview’s capabilities. The company has framed the project in national security terms, arguing that maintaining leadership in AI technology represents a strategic priority for the United States and its allies. Anthropic has been locked in a high-stakes dispute with the Department of Defense about the U.S. military’s use of the startup’s Claude AI model in real-world operations. 

The project’s success will depend partly on whether the collaborative approach can keep pace with rapid advances in AI capabilities. Anthropic has indicated that frontier AI systems are likely to advance substantially within months, potentially creating a dynamic environment where defensive and offensive capabilities evolve in parallel.

“Project Glasswing is a starting point,” Anthropic wrote in a blog post. “No one organization can solve these cybersecurity problems alone: frontier AI developers, other software companies, security researchers, open-source maintainers, and governments across the world all have essential roles to play. The work of defending the world’s cyber infrastructure might take years; frontier AI capabilities are likely to advance substantially over just the next few months. For cyber defenders to come out ahead, we need to act now.”

The post Tech giants launch AI-powered ‘Project Glasswing’ to identify critical software vulnerabilities appeared first on CyberScoop.

Experts warn of a ‘loud and aggressive’ extortion wave following Trivy hack

SAN FRANCISCO — Mandiant is responding to a major, ongoing supply-chain attack involving the compromise of Trivy, a widely used open-source tool from Aqua Security that’s designed to find vulnerabilities and misconfigurations in code repositories.

The fallout from the attack spree, which was first detected March 19, is extensive and poses substantial risk of follow-on compromises and aggressive extortion attempts.

“We know over 1,000 impacted SaaS environments right now that are actively dealing with this particular threat campaign,” Charles Carmakal, chief technology officer at Mandiant Consulting, said during a threat briefing held in conjunction with the RSAC 2026 Conference. “That thousand-plus downstream victims will probably expand into another 500, another 1,000, maybe another 10,000.”

Attackers stole a privileged access token and established a foothold in Trivy’s repository automation process by exploiting a misconfiguration in the tool’s GitHub Actions environment in late February, Aqua Security said in a blog post.

On March 1, the company tried to shut down the ongoing breach by rotating its credentials. It later realized the attempt had failed, allowing the attacker to remain in the system using valid logins. Attackers published malicious releases of Trivy on March 19.

“While this activity initially appeared to be an isolated event, it was the result of a broader, multi-stage supply-chain attack that began weeks earlier,” Aqua Security said in the blog post.

By compromising the tool, attackers gained access to secrets for many organizations, Carmakal said. “There will likely be many other software packages, supply-chain attacks and a variety of other compromises as a result of what’s playing out right now.”

Mandiant expects widespread breach disclosures, follow-on attacks and a variety of downstream impacts to play out over the next several months. 

The attackers, which the incident response firm has yet to name, are collaborating with multiple threat groups mostly based in the United States, Canada and the United Kingdom. These cybercriminals “are known for being exceptionally aggressive with their extortion,” Carmakal said. “They’re very loud, they’re very aggressive.”

Mandiant is still working to identify the root of the initial attack. “We can’t quite tell how those credentials were stolen, because it is our belief that those credentials were not stolen from that victim’s environment,” Carmakal said. 

The credentials were likely stolen from another cloud environment, a business process outsourcer, partner or the personal computer of an engineer, he added. 

Aqua said Sygnia, which is investigating the attack and assisting in remediation efforts, identified additional suspicious activity Sunday involving unauthorized repository changes — activity that is consistent with the attacker’s previously observed behavior.

“This development suggests that the incident is part of an ongoing and evolving attack, with the threat actor reestablishing access. Our investigation is actively focused on validating that all access paths have been identified and fully closed,” the company said.

Aqua, in its latest update Tuesday, said it is continuing to revoke and rotate credentials across all environments and claimed there is still no indication its commercial products are affected. 

Many attackers are currently weaponizing access and likely targeting additional victims, which could lead to extortion attempts and the compromise of additional software, Carmakal said. 

“It’s going to be a different outcome for a lot of different organizations,” he said. “This will be a very concentrated focus of the adversaries and their expansion group of partners that they’re collaborating with right now.”

The post Experts warn of a ‘loud and aggressive’ extortion wave following Trivy hack appeared first on CyberScoop.

Critical defect in Java security engine poses serious downstream security risks

A maximum-severity vulnerability in pac4j, an open-source library integrated into hundreds of software packages and repositories, poses a significant security threat, but has thus far received scant attention.

The defect in the Java security engine, which handles authentication across multiple frameworks, has not been exploited in the wild, even after code review firm CodeAnt AI published a proof-of-concept exploit last week. The company discovered the vulnerability and privately reported it to pac4j’s maintainer, which disclosed the defect and released patches for affected versions of the library within two days.

Some researchers told CyberScoop they are concerned about the vulnerability — CVE-2026-29000 — because it affects a widely deployed Java security engine that attackers can exploit with relative ease.

“A threat actor only needs to access a server’s public RSA key to attempt exploitation,” researchers at Arctic Wolf Labs said in an email. 

These public keys, which are shared openly, are used to encrypt data and enable identity authentication. Attackers can trigger the defect and bypass authentication by forging a JSON Web Token (JWT) or deploying raw JSON claims via JSON Web Encryption (JWE) in pac4j-jwt to break into a system with the highest privileges.
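To see why a skipped or broken signature check is so damaging, consider a deliberately minimal, stdlib-only Python sketch. This illustrates the JWT trust model in general; it is not pac4j’s actual code nor the CVE-2026-29000 exploit path. A verifier that checks the HMAC signature rejects a token forged with the wrong key, while a naive decoder that merely base64-decodes the claims accepts it:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def sign_token(claims: dict, key: bytes) -> str:
    """Build a minimal HS256-style JWT: header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    sig = hmac.new(key, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    return f"{header}.{payload}.{b64url(sig)}"

def verify_token(token: str, key: bytes):
    """Correct pattern: recompute and compare the signature before trusting claims."""
    header, payload, sig = token.split(".")
    expected = hmac.new(key, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(b64url(expected), sig):
        return None  # signature mismatch: forged token rejected
    return json.loads(b64url_decode(payload))

def naive_decode(token: str) -> dict:
    # Broken "validation": trusts the claims without checking the signature.
    return json.loads(b64url_decode(token.split(".")[1]))

key = b"server-secret"
forged = sign_token({"sub": "admin", "role": "superuser"}, b"attacker-key")

assert verify_token(forged, key) is None             # proper check rejects it
assert naive_decode(forged)["role"] == "superuser"   # naive path grants access
```

A real engine like pac4j performs this validation across multiple algorithms and key types, which is exactly where a logic flaw spread over several code paths, rather than one bad line, can slip past rule-based scanners.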

“It is currently too early into the lifecycle of this vulnerability to tell if it will materialize into a major threat but the fact that it is a vulnerability in a library makes it more challenging to assess the potential risk,” researchers at Arctic Wolf Labs said. “Downstream consumers of the library may end up needing to issue their own advisories, as we’ve seen with other similar vulnerabilities in the past.”

Amartya Jha, co-founder and CEO at CodeAnt AI, warned that anyone with basic JWT knowledge can achieve exploitation. The vulnerability is a “logic flaw that no pattern-matching scanner or rule-based static application security testing tool would surface, because there’s no single line of code that’s wrong.”

The downstream security risk, as is often the case with open-source software, is widespread. The authentication module for pac4j is integrated into multiple frameworks, including Spring Security, Play Framework, Vert.x, Javalin and others, Jha said.

Many organizations may not realize they depend on pac4j-jwt because it’s not always declared in build files, he added. CodeAnt said it has contacted hundreds of maintainers in the past week to warn them that their packages and repositories are impacted by the vulnerability, which has a CVSS rating of 10.

Researchers haven’t observed any additional PoC exploit code, but they noted the exploit path is easy to reproduce. 

“The conditions for exploitation are favorable,” Jha said. “It’s pre-authentication, requires no secrets, the PoC is public, and the attack surface includes any internet-facing application or API gateway using the affected configuration. The window between public PoC and patch adoption is where the risk is highest.”

The post Critical defect in Java security engine poses serious downstream security risks appeared first on CyberScoop.

Senate Intel chair urges national cyber director to safeguard against open-source software threats

Senate Intelligence Committee Chairman Tom Cotton is raising the specter of foreign adversaries playing too heavy a role in open-source software, and is asking the national cyber director to counter the risks.

The Arkansas Republican wrote to National Cyber Director Sean Cairncross Thursday, saying he was concerned about reports that “state-sponsored software developers and cyber espionage groups have started to exploit this communal environment, which assumes that contributors are benevolent, to insert malicious code into widely used open source codebases.”

Cotton cited last year’s alarms about a shadowy suspected nation-state hacker, Jia Tan, inserting a backdoor into a beta version of the compression utility XZ Utils. He also noted a Russia-based developer being the sole maintainer of a piece of open-source software (OSS) that’s in Defense Department software packages, and reports that Chinese tech companies Alibaba and Huawei are among the top OSS contributors.

“As the Office of the National Cyber Director holds responsibility for coordinating implementation of national cyber policy and government-wide cybersecurity, you are well-positioned to lead the U.S. government in addressing this cross-cutting vulnerability,” Cotton wrote. “I respectfully request that you take steps to build up the federal government’s capability to maintain awareness of provenance and foreign influence on OSS and track contributions from developers in adversary nations.”

Cotton’s letter adds to warnings from the Hill this year about the risks that Chinese involvement in open-source tech poses, following a letter from the House select committee on China on the subject to Biden-era Commerce Secretary Gina Raimondo. Legislation designed to improve open-source cybersecurity didn’t advance in the Senate after leading lawmakers introduced it in 2023.

The senator noted that open-source software is part of critical government and defense systems. Defense Secretary Pete Hegseth in July ordered the Pentagon’s chief information officer to take steps to guard against foreign influence in department technology.

“The DoD will not procure any hardware or software susceptible to adversarial foreign influence that presents risk to mission accomplishment and must prevent such adversaries from introducing malicious capabilities into the products and services that are utilized by the Department,” he wrote.

At the same time, a Trump administration executive order this year puzzled experts by deleting language from a previous Biden administration executive order emphasizing the importance of open-source software.

The post Senate Intel chair urges national cyber director to safeguard against open-source software threats appeared first on CyberScoop.

Attackers hit React defect as researchers quibble over proof

Attackers of varying origins and motivations swiftly exploited React2Shell, a critical vulnerability affecting React Server Components, shortly after Meta and the React team publicly disclosed the flaw with a patch Wednesday. 

Multiple security firms are responding to active exploitation in the wild even as a flurry of reports concludes the malicious activity is limited to scanning and exploitation attempts rather than actual attacks. Yet official word from the Cybersecurity and Infrastructure Security Agency is clear — the agency added CVE-2025-55182 to its known exploited vulnerabilities catalog Friday. 

Reaction to the deserialization vulnerability, which has a CVSS rating of 10 and allows unauthenticated attackers to achieve remote-code execution, has revealed a chasm in the cybersecurity research community. Threat analysts are mostly growing more concerned about downstream impacts, but some are urging defenders to respond with restraint and less urgency.

A debate over actual exploitation is muddying response efforts, as some researchers say they’ve observed working proofs of concept and others assert that legitimate PoCs are lacking. Nonetheless, real organizations have been impacted by attacks, according to multiple researchers investigating the fallout. 

Palo Alto Networks’ incident response firm Unit 42, watchTowr and Wiz told CyberScoop they’ve observed successful exploitation and follow-on malicious activity.

As of late Friday, Unit 42 has confirmed more than 30 organizations across various sectors are impacted. 

“Unit 42 observed threat activity we assess with high confidence is consistent with CL-STA-1015, also known as UNC5174, a group suspected to be an initial access broker with ties to the Chinese Ministry of State Security,” said Justin Moore, senior manager of threat intel research at Unit 42. 

“In this activity, we observed the deployment of Snowlight and Vshell malware, both highly consistent with Unit 42 knowledge of CL-STA-1015,” he added. 

More broadly, Moore said Unit 42 has “observed scanning for vulnerable remote-code execution, reconnaissance activity, attempted theft of Amazon Web Services configuration and credential files, as well [as] installation of downloaders to retrieve payloads from attacker command and control infrastructure.”

Ben Harris, CEO and founder of watchTowr, said his team has observed indiscriminate exploitation, describing the malicious activity as rapid and prolific.

“Post-exploitation we’ve seen everything from basic extraction of credentials through to webshell deployments as a stepping stone to further activities,” Harris said. 

Multiple Wiz customer environments have been impacted by successful exploitation as well, according to Amitai Cohen, the company’s threat vector intel lead. 

“So far, we’ve observed deployments of cryptojacking malware and attempts to extract cloud credentials from compromised machines,” he said. “These early-stage activities are consistent with common post-exploitation objectives like resource hijacking and establishing further access.”

Researchers from multiple firms said attempted and successful exploitation has increased following the release of public PoCs. The potential scope of impact is significant, as 39% of cloud environments contain instances of React or Next.js, a separate open-source library that depends on React Server Components, running versions vulnerable to CVE-2025-55182, according to Wiz Research.

“The Next.js framework itself is present in 69% of environments, and 44% of all cloud environments have publicly exposed Next.js instances — regardless of the version running,” Cohen said.

Further complicating matters, Vercel, the company behind Next.js, disclosed and issued a patch Wednesday for its own maximum-severity vulnerability — CVE-2025-66478 — but the CVE was rejected because it’s a duplicate of the React defect, the root cause. 

Multiple threat groups are mobilizing resources to exploit the vulnerability for various objectives. 

“There are remote-code execution PoCs around now. It’s definitely already started, which means ransomware gangs follow. They don’t ignore opportunities for money,” Harris said.

Within hours of the public disclosure of the vulnerability, “Amazon threat intelligence teams observed active exploitation attempts by multiple China state-nexus threat groups, including Earth Lamia and Jackpot Panda,” CJ Moses, chief information security officer of Amazon Integrated Security, said in a blog post Thursday.

Unit 42 said it, too, is tracking attempted exploitation from several possible China-linked threat actors and cybercriminals. 

Automated, opportunistic exploitation attempts based on a publicly released PoC have been widespread, said Noah Stone, head of content at GreyNoise Intelligence. The firm’s sensors have captured malicious traffic originating from infrastructure in China, Hong Kong, the United States, Japan and Singapore targeting services based in the United States, Pakistan, India, Singapore and the United Kingdom, he said. 

VulnCheck’s decoy systems, which act as an early warning sign of vulnerability exploitation, have also observed exploitative scanning, said Caitlin Condon, the company’s vice president of research. “VulnCheck has been looking at patch rates on exposed Next.js apps, and we didn’t see a lot of patched systems,” she added.

Patching and mitigating the vulnerability isn’t without risk, either. Cloudflare said it experienced a temporary outage that was triggered by changes it made to its body parsing logic to detect and mitigate the vulnerability Friday.

As security researchers debate the viability of PoCs for the React vulnerability and visibility into actual attacks differs across the community, there’s no doubt the defect, which affects one of the most extensively used application frameworks, has captured sweeping interest and attention.

“This whole story is wild,” said Dustin Childs, head of threat awareness at Trend Micro’s Zero Day Initiative. “This has been a real rollercoaster.”

The post Attackers hit React defect as researchers quibble over proof appeared first on CyberScoop.

Developers scramble as critical React flaw threatens major apps

Security researchers and code developers are scrambling to patch and investigate a critical vulnerability affecting React Server Components, an open-source library used widely across the internet and embedded into many essential software frameworks.

The rapid response underscores the potential consequences of exploitation. Although no attacks have been observed or reported, researchers expect them soon and are urgently mobilizing resources to address the defect.

The vulnerability – CVE-2025-55182 – was discovered by Lachlan Davidson, a developer and lead of security innovation at Carapace, and reported to Meta on Saturday. Meta and the React team created a patch and worked with affected hosting providers to address the defect Monday before the public disclosure on Wednesday.

“The reason there’s been such a measured response to this vulnerability is because exploitation is inevitable,” Ben Harris, CEO and founder of watchTowr, told CyberScoop. “We should be expecting attackers to start exploiting this vulnerability truly imminently.” 

React is one of the most extensively used application frameworks, putting large swaths of web applications at risk. “Our data shows that these libraries can be found in vulnerable versions in around 39% of cloud environments,” said Amitai Cohen, threat vector intel lead at Wiz.

Researchers warn that exploitation of the deserialization defect is trivial and allows unauthenticated attackers to achieve remote code execution in default configurations, enabling privilege escalation or pivots into other parts of a network. “The impact on the resources stored on that system could be devastating should things like access keys or other secrets or sensitive information be present,” said Stephen Fewer, senior principal researcher at Rapid7.
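This class of bug is easiest to grasp with a simplified analogy. The sketch below uses Python’s pickle module as a stand-in; React’s flaw lives in its own server-component payload handling, not pickle. The point it illustrates is general: deserializing attacker-controlled bytes can execute code the attacker chose, while a data-only format such as JSON only parses values.

```python
import json
import pickle

# pickle lets a payload name a callable (via __reduce__) that runs
# during deserialization -- here eval(), standing in for attacker code.
class Payload:
    def __reduce__(self):
        return (eval, ("40 + 2",))  # executed automatically by pickle.loads

untrusted_bytes = pickle.dumps(Payload())

# Vulnerable pattern: the act of deserializing runs attacker-chosen code.
result = pickle.loads(untrusted_bytes)
print(result)  # 42 -- proof the embedded expression executed

# Safer pattern for untrusted input: a data-only format such as JSON,
# which parses values but never invokes code.
safe = json.loads('{"user": "alice"}')
print(safe["user"])
```

In a real attack the embedded callable would not compute a number; it would spawn a shell or fetch a second-stage payload, which is why deserialization flaws in default configurations rate a CVSS 10.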

Prior to public disclosure, security researchers from Meta, which initially created and maintained React before moving the open-source library to the React Foundation in October, worked behind the scenes to notify affected organizations of the defect and shared temporary steps for mitigation such as web application firewall rules.

“While we are actively investigating and have no evidence that this vulnerability has been exploited at this time, we want to make all developers aware of this issue so they can implement the appropriate mitigations quickly,” a Meta spokesperson said in a statement.

The vulnerability affects multiple React frameworks and bundlers, including Next.js, React Router, Waku, Parcel RSC plugin, Vite RSC plugin, RedwoodJS and likely others that haven’t been identified yet, according to researchers. Vercel, the company behind Next.js, disclosed and issued a patch for its own maximum-severity vulnerability — CVE-2025-66478 — due to its dependency on React Server Components. 

Researchers from Wiz, Rapid7, watchTowr and other security firms warned that ensuing fallout from other frameworks or libraries that depend on React Server Components is likely, and long-tail impacts will persist in environments that are less maintained or difficult to update.

It’s unclear why Vercel assigned a separate CVE for Next.js since the upstream defect in React, CVE-2025-55182, is the root cause, but the vendor could be tracking impact on its own product, Fewer said. “It should not be necessary to assign a new CVE for each React-dependent framework, so long as the root cause remains the same as the original CVE-2025-55182 issue,” he added.

Cale Black, senior researcher at VulnCheck, said upstream dependency vulnerabilities tend to be handled on a per-project basis. “Projects with more mature security processes will release their own remediation guidance, and potentially [their] own CVEs,” he said.

Meanwhile, threat hunters are steeling themselves for active exploitation and expect technical details and exploit code to be publicly available shortly. 

“With the entire internet looking at a solution that’s used everywhere to understand this vulnerability, someone will figure it out,” Harris said. “I have no doubt that by tomorrow morning, when I wake up, there will be easily one, if not more ways to reproduce this vulnerability.”

The post Developers scramble as critical React flaw threatens major apps appeared first on CyberScoop.

Shai-Hulud worm returns stronger and more automated than ever before

Security researchers and authorities are warning about a fresh wave of supply-chain attacks linked to a self-replicating worm that attackers have injected into almost 500 npm (Node.js package manager) software packages, exposing more than 26,000 open-source repositories on GitHub.

The trojanized npm packages, which were first discovered late Sunday by Charlie Eriksen, security researcher at Aikido Security, were uploaded during a three-day period starting Friday and reference a new version of Shai-Hulud, malware that previously infected npm packages in September.

The campaign remains active and is compromising additional repositories, while others have been removed. Researchers haven’t observed downstream attacks originating from credentials stolen by the malware.

“However, because these credentials were publicly exposed on GitHub, it is highly likely that multiple threat actors already have access to them or will soon. This significantly increases the probability of downstream exploitation even if it has not yet appeared at scale,” Eriksen told CyberScoop.

The malware is propagating rapidly, using stolen npm tokens to infect additional packages at a level of automation and scale that is substantially higher than its previous version, approaching near self-sufficiency, Eriksen added. 

Major packages including Zapier, ENS Domains, PostHog and Postman were trojanized, allowing the attackers to populate GitHub repositories with stolen victim data, according to Wiz. Some of the packages are present in about 27% of cloud and code environments, the security firm said in a blog post Monday.

“We’ve observed multiple environments where these trojanized packages were downloaded before their removal from npm, suggesting active real-world exposure,” Merav Bar, threat researcher at Wiz, told CyberScoop. “As we saw in past attacks, we expect to see a long tail of exploitation of the exposure across both the initial and opportunistic attackers.”

The previous and current wave of Shai-Hulud attacks appear to be focused on stealing developer secrets that can be used for deeper supply-chain compromise, Bar added.

“Both waves of Shai-Hulud show how easy it is for attackers to weaponize trusted distribution paths, push malicious versions at scale, and reach thousands of downstream developers before anyone realizes something is wrong,” she said.

The timing of the latest Shai-Hulud campaign was opportunistic as well, hitting repositories just weeks before npm, a company GitHub acquired in 2020, plans to revoke classic tokens as part of a broad push toward stricter security practices. “This campaign would be significantly limited if these security implementations were in place,” Eriksen said.

The latest variant of Shai-Hulud creates malicious files during the preinstall phase, including a randomly named public repository containing stolen data. While the attacker references Shai-Hulud and activities resemble the previous worm, researchers at Wiz said there are some differences and attribution has not been fully confirmed.

Ron Peled, chief operating officer and co-founder of Sola Security, described npm as a low-friction package ecosystem, which makes it an appealing target for attackers. Moreover, he said, developers’ endpoints and CI/CD environments are often a blind spot for endpoint detection and response and anti-malware tools.

“Developers often store GitHub tokens, npm tokens or cloud secrets in environment variables,” Peled said. “Build systems almost always have access to powerful tokens and the malware only needs one of them to propagate.”

Attackers target open-source software for supply-chain attacks frequently, and the latest campaign marks yet another attack specifically targeting npm. Attacks are gaining maturity and complexity, building upon previous success, said Melissa Bischoping, senior director of security and product design research at Tanium. 

“Last year, everyone zeroed in on the XZ Utils compromise and how supply-chain compromise of a single open-source project could potentially hijack the entire world. In early September we had simple cryptojacking which was mostly a non-issue, but then that was quickly followed by the Shai-Hulud worm which stole credentials and further compromised security,” she added. 

“The pattern emerging is that attackers have identified open-source developers as high-value targets and have had massive success in just the last year,” Bischoping said. “Developers, even hobbyist ones, need to be prepared for continued attacks and escalation.”

The post Shai-Hulud worm returns stronger and more automated than ever before appeared first on CyberScoop.

Hackers turn open-source AI framework into global cryptojacking operation

Malicious hackers have been attacking the development environment of an open-source AI framework, twisting its functions into a global cryptojacking bot for profit, according to researchers at cybersecurity firm Oligo.

The flaw exists in an application programming interface for Ray, an open-source framework for automating, scaling and optimizing compute resources that Oligo researchers called “Kubernetes for AI” due to its popularity. The vulnerability allows unauthenticated remote code execution.

The attackers “have turned Ray’s legitimate orchestration features into tools for a self-propagating, globally cryptojacking operation, spreading autonomously across exposed Ray clusters,” Oligo researchers Avi Lumelsky and Gal Elbaz wrote.

Oligo’s report details multiple cybercriminal groups jostling with each other and with legitimate users for LLM compute resources. The attackers also used several techniques to hide their presence: limiting CPU usage, disguising malicious processes as legitimate services, and masking GPU usage from Ray’s monitoring to avoid detection while leveraging “premium compute resources,” researchers wrote.

The potential attack surface for exploitation is large: Oligo researchers say there are more than 200,000 exposed Ray servers online, though only a portion have been confirmed thus far as vulnerable or compromised.

“Many of the exposed servers belong to active startups, research labs, and cloud-hosted AI environments, while some are honeypots,” Lumelsky and Elbaz wrote.

The researchers said the latest campaign is a “major evolution” from prior exploitation of the same vulnerability in Ray, initially discovered in 2023, and they believe an entirely separate group of actors is responsible this time around. The report says available evidence indicates the attackers “could” have been lurking in Ray since September 2024, more than a year, while migrating between the development platforms GitLab and GitHub.

The actors gained initial access to exposed Ray nodes through the Job Submission API flaw, then sent commands to Ray’s dashboard in the form of fraudulent tasks for the system to process. Although the dashboard is intended for use only within internal networks, Oligo notes it is frequently exposed to the public internet. This allowed attackers to explore networks further and deploy malware payloads.

“Instead of exploiting CVEs or using network attacks, attackers used Ray’s own scheduling API to spread,” the researchers wrote. “It’s essentially using the victim’s infrastructure as intended, using python code – like the applications that are already running, just for malicious purposes.”
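Because the campaign depends on dashboards being reachable, defenders can start with a basic exposure check. The stdlib Python sketch below is a minimal example, assuming Ray’s default dashboard port of 8265 (verify the port and hostname for your own deployment):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run this from an *external* vantage point against your Ray head node.
# If the dashboard port answers, the Job Submission API is likely
# reachable too and should be restricted to trusted networks.
host = "127.0.0.1"  # placeholder; substitute your Ray head node's address
if port_open(host, 8265):
    print(f"{host}:8265 is reachable - restrict access to trusted networks")
else:
    print(f"{host}:8265 is not reachable from this host")
```

Reachability is only the first question; actual remediation means binding the dashboard to an internal interface and putting authentication in front of it, since Ray’s deployment model assumes a strictly controlled network.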

Once the attackers took control of Ray clusters, they manipulated the system that manages compute resources. They specifically searched for NVIDIA A100 GPUs, calculated the best way to use those resources, and then submitted takeover jobs with the exact resource requirements. Oligo noted that A100 chips are valuable to cryptominers because they rent for $3 to $4 per hour on most cloud platforms, allowing attackers to hide their presence in the cloud “while stealing premium compute resources.”

The attack occurred in two phases. First, the attackers used GitLab to develop and deliver their malware, but this operation was taken down after being discovered on November 5. A few days later, the attackers reappeared on GitHub, creating a new repository to continue their campaign. Lumelsky and Elbaz noted that whenever their activity was discovered, the attackers simply created new GitHub repositories. As of November 17, the campaign was still ongoing.

In response to a Nov. 18 request for comment on the research, a public relations firm representing GitHub said the company is “committed to investigating reported security issues.”

“In response to malicious activity, we have removed the accounts that violate GitHub’s Acceptable Use Policies, which prohibit content that supports malware campaigns,” a spokesperson said through email.

Artifacts drawn from the attackers’ attempts to obfuscate their presence contained code that “strongly implies” it was generated with a large language model.

It’s important to note that the underlying API flaw exploited in both the 2023 attacks and this more recent campaign (CVE-2023-48022) has never been fully mitigated.

According to the flaw’s MITRE CVE record, the bug “remains unpatched and has been disputed by the vendor as they maintain that Ray is not intended for use outside of a strictly controlled network environment.”

“In practice however, users often deploy Ray without heeding this warning, which creates an extended window for exploitation, evidenced by its continued and expanded weaponization by attackers in the wild,” Lumelsky and Elbaz wrote.

The post Hackers turn open-source AI framework into global cryptojacking operation appeared first on CyberScoop.

OpenAI releases ‘Aardvark’ security and patching model 

A new security-focused AI model released Thursday by OpenAI aims to automate bug hunting, patching and remediation.

The model, powered by GPT-5 and given the name Aardvark, has been used internally at OpenAI and among external partners. Currently offered in an invite-only beta, it’s designed to continuously scan source code repositories to find known vulnerabilities and bugs, assess and prioritize their potential severity, and then patch and remediate them.

In a blog post published on the company’s website, OpenAI claims that Aardvark “does not rely on traditional program analysis techniques like fuzzing or software composition analysis.”

“Instead, it uses LLM-powered reasoning and tool-use to understand code behavior and identify vulnerabilities,” the blog stated. “Aardvark looks for bugs as a human security researcher might: by reading code, analyzing it, writing and running tests, using tools, and more.”

An illustration of how Aardvark, OpenAI’s new security model, works to identify, analyze and then remediate vulnerabilities. (Source: OpenAI)

OpenAI says Aardvark can also develop threat models based on a repository’s contents and the project’s security goals and design, sandbox vulnerabilities to test their exploitability, annotate problematic code, and submit proposed patches for human review.

In addition to finding security vulnerabilities, the company said Aardvark has shown the potential to spot logic and privacy bugs in code bases, and identified 92% of known and synthetically introduced vulnerabilities in unspecified “golden” repositories. Members of the open source community who operate noncommercial repositories will be able to use the scanner for free.

In September, the company updated its coordinated vulnerability disclosure process, rolling out changes that include no longer committing to strict disclosure timelines, which OpenAI said can “pressure developers,” and a greater emphasis on broader ecosystem security. The beta version of the model is currently open to select research partners, and OpenAI said it plans to broaden the tool’s use over time as it refines detection, validation and reporting capabilities.

“By catching vulnerabilities early, validating real-world exploitability, and offering clear fixes, Aardvark can strengthen security without slowing innovation,” the blog stated.

Aardvark’s release reflects OpenAI’s desire to leverage its technology for automated vulnerability scanning and remediation, a field where large language models have shown increasing promise over the past year. The company said Aardvark has so far identified 10 vulnerabilities that have received Common Vulnerabilities and Exposures (CVE) entries.

Other companies, such as the startup XBOW, have spent the past year developing AI security models that can rise to the top of bug bounty leaderboards at HackerOne and Bugcrowd, run around the clock, and identify and fix hundreds of vulnerabilities.

XBOW founder Oege de Moor, who previously led GitHub Next, the company’s software research and development division, told CyberScoop in July that the model receives some human guidance on the front end and manual validation on the back end, but otherwise runs autonomously during its bug hunting.

While vulnerability research experts have described models like XBOW as more useful for high-volume, low-impact bugs, the company has attempted to showcase the evolving model’s ability to tackle higher complexity bugs and exploits.

An automated program to address the thousands of low-severity bugs plaguing the internet, while freeing up human operators to tackle higher complexity vulnerabilities, would still have tremendous value. Some security experts point out that large cyber intrusions and multi-stage malware attacks are often less about exploiting zero days or high severity bugs and more about chaining together lower- and medium-impact flaws that exist in unpatched systems.

But another consideration around these models is the sheer energy they consume. De Moor said that while XBOW had solved thousands of bugs and received bug bounties and awards for its work, those earnings aren’t enough to cover the total compute costs to run XBOW over that time.

The post OpenAI releases ‘Aardvark’ security and patching model  appeared first on CyberScoop.
