
Open Source Project Shuts Down Over Legal Threats from 3D Printer Company Bambu Lab

The free/open-source project OrcaSlicer is a popular fork of 3D printer slicing software from Bambu Lab. But on Tuesday independent developer Pawel Jarczak shuttered his own fork of OrcaSlicer "following legal threats from Bambu Lab," reports Tom's Hardware. Jarczak's fork would have allowed users to bypass Bambu Connect, a middleware application that severely limits OrcaSlicer's access to remote printer functions in the name of security. Jarczak said in a note on GitHub that Bambu Lab threatened him with a cease-and-desist letter and accused him of reverse engineering its software in order to impersonate Bambu Studio.

From Bambu Lab's blog post: "Bambu Studio is an open-source project under the AGPL-3.0 license. Anyone can take its code, modify it, and distribute it... That's what OrcaSlicer does, and 734 other forks do as well. We have no issue with that and never have. At the same time, a license for code is not a pass to our cloud infrastructure... Our cloud is a private service. Access to it is governed by a user agreement, not the AGPL license... [T]he modification in question worked by injecting falsified identity metadata into network communication. In simple terms: it pretended to be the official Bambu Studio client when communicating with our servers... If this method were widely adopted or incorrectly configured, thousands of clients could simultaneously hit our servers while impersonating the official client."

"User-Agent is not authentication," counters Jarczak. "It is only self-declared client metadata. Any program can set any User-Agent." And "the User-Agent construction comes directly from Bambu Lab's own public AGPL Bambu Studio code... So on what basis can anyone claim that I am not allowed to use this specific part of AGPL-licensed code under the AGPL license...? My work was based on publicly available Bambu Studio source code together with my own integration layer."

But the bottom line, Jarczak says, is that Bambu Lab "contacted me directly and demanded removal of the solution. I asked whether I could publish the private correspondence in full for transparency. That request was refused... They also referred to legal materials and stated that a cease and desist letter had been prepared... I removed the repository voluntarily. That removal should not be interpreted as an admission that all legal or technical allegations made against the project were correct. I removed it because I have no interest in maintaining a prolonged dispute around this particular implementation, and no interest in continuing to distribute it."

YouTuber and right-to-repair advocate Louis Rossmann reviewed the correspondence from Bambu Lab, then pledged $10,000 for legal expenses if the developer returned his code online. ("I think that their legal claim is bullshit," Rossmann said Saturday in a YouTube video for his 2.5 million subscribers. "I'm not a lawyer, but I'm willing to put my money where my mouth is.") "Rossmann has not started a crowdfunding site yet," Tom's Hardware notes, "stating in the comments that he wants to prove to Jarczak that he has supporters willing to put their money where their mouth is. The video had over 129,000 views so far, with commenters vowing to back the case as requested."
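The developer's point that a User-Agent header is self-declared metadata is easy to demonstrate: in any HTTP library, the header is simply a field the client fills in. A minimal Python sketch (the endpoint URL and identity string here are hypothetical, for illustration only):

```python
import urllib.request

# The User-Agent header is self-declared client metadata: any program can
# claim any identity string, so the header alone cannot authenticate a client.
req = urllib.request.Request(
    "https://example.com/api/printer/status",  # hypothetical endpoint
    headers={"User-Agent": "SomeOfficialClient/1.0"},  # arbitrary self-declared value
)

# urllib normalizes stored header keys to capitalized form ("User-agent").
print(req.get_header("User-agent"))  # SomeOfficialClient/1.0
```

Servers that want to restrict access to genuine clients therefore need real authentication (tokens, signed requests), not header inspection.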

Read more of this story at Slashdot.

Open Source Registries Join Linux Foundation Working Group to Address Machine-Generated Traffic

Under the nonprofit Linux Foundation, "a new Sustaining Package Registries Working Group will seek to identify concrete funding, governance, and security practices," reports ZDNet, "to keep code flowing as download counts grow." Because software builds, continuous integration pipelines, and AI systems hammer registries at machine speed rather than human speed, the sites can't keep up. That growth has brought "a surge in bot traffic, automated publishing, security reports, and outright abuse, exposing what the working group bluntly calls a 'sustainability gap.'"

Sonatype CTO Brian Fox, who oversees the Maven Central Java registry, estimates open-source registries saw 10 trillion downloads in 2025: "The same pattern is appearing across ecosystems. More machine traffic. More automation. More scanning. More expectations around uptime, integrity, provenance, and policy enforcement. More cost. More support burden. More dependency on infrastructure that the industry still talks about as though it runs on goodwill and spare time."

To tackle that, ZDNet reports, "Sonatype has teamed up with the Linux Foundation and other package registry leaders, including Alpha-Omega, Eclipse Foundation (OpenVSX), OpenJS Foundation, OpenSSF, Packagist, Python Software Foundation, Ruby Central (RubyGems), and the Rust Foundation (Crates)." The idea is to give operators a neutral forum to discuss money, governance, and shared operational burdens openly. Once that's dealt with, they'll coordinate how to explain those realities back to companies and organizations that have long assumed registries are "free." No, they're not. They never were. As the Linux Foundation pointed out: "Registries today run primarily on two things: (1) infrastructure donations and credits; and (2) heroic efforts from small paid teams (themselves funded by donations and grants) and unpaid volunteers that operate and maintain registry services. The bulk of donations and grants comes from a small set of donors and doesn't scale with demands on the registry."

The working group is explicitly positioned as a venue where registry leaders and ecosystem stakeholders can align on "practical, community-minded" ways to sustain that infrastructure, rather than each operator improvising its own survival plan in isolation. ZDNet says the group will also coordinate security practices and information, and craft frameworks "that make it politically and legally possible to introduce sustainable funding models without fracturing communities." And it will "align messaging and educational content so developers, companies, and policymakers finally understand what it costs to run these services."


Trump officials are steering a cybersecurity scholarship program toward AI

The Trump administration is redirecting a cybersecurity scholarship program that requires recipients to work in government service toward artificial intelligence, leaving some current program scholars dismayed and bewildered.

In an email to participating school program coordinators obtained by CyberScoop, the Office of Personnel Management and National Science Foundation said the CyberCorps Scholarship For Service program would now be known as CyberAI SFS.

“The SFS students we enroll today will not be employable when they graduate in 2-3 years without significant AI background,” the email reads. “Any SFS student in this new program must be proficient in using AI in cybersecurity or providing security and resilience for AI systems. Therefore, new students in the legacy CyberCorps program must learn to acquire AI expertise to augment their cybersecurity expertise.”

“Effective immediately, new SFS scholars will not be accepted to the Legacy CyberCorps(C) program without a description on how they will develop competencies at the intersection of cybersecurity and AI,” the email continues. “The description of the competency development could include, but are not limited to, formal program of study, experimental learning, research activities, capstone projects, competitions, certifications, and/or no-credit professional development via external providers.”

One current program scholar graduating soon said they were “disappointed” by the change for several reasons. For one, as of earlier this week, the agencies collectively running the program — OPM, NSF and the Department of Homeland Security — hadn’t notified any program participants that any changes were on the horizon.

For another: “I was a little bit surprised that it was coming out as so blatantly disregarding the people that haven’t graduated yet, that everyone in my cohort is already considered ‘legacy,’ and the fact that it said people in the program that I’m currently in will not be employable in the coming years,” they said.

The email leaves scholars uncertain about what will happen as they try to fulfill their side of the agreement, especially since doing so has already been difficult amid cyber job cutbacks and other concerns about how the program has recently been administered. The scholar told CyberScoop there are around 300 people in this current group.

“I assume it will affect placements,” they said. “I can’t say for sure one way or another, because placements are already so impacted by everything that’s been going on. I don’t know what’s due to lack of AI background and what’s due to everything else.”

Another scholar said it was wrong for OPM “to keep claiming repeatedly that they’re acting in our best interests,” when “we’re left out to dry.” Already, the current group of scholars has been frustrated by their inability to get questions answered.

“If we’re legacy CyberCorps, then how does that address anything?” the scholar asked. “We’re just kind of being shoved into a closet and forgotten about. Now in that email, they were saying that we were going to be unhireable in two years time without all this AI stuff under our belt. But at the same time, almost all of our universities were actively discouraging the use of AI.”

Another part of the email brought welcome news to those scholars: a temporary easing of the program’s requirements, including the 70-20-10 rule that sets targets for jobs in the federal government, state and local governments, and the education sector, as well as the rules for securing an internship. Even so, scholars say they still haven’t received any direct information about the changes.

A spokesperson for NSF said there have been some misunderstandings about the email to school program coordinators (known as principal investigators), but didn’t address current scholars’ concerns about communication.

“The guidance does not require scholars to possess these competencies upon entry,” said the spokesperson, Michael Englund. “Rather, it requires principal investigators (PIs) to clearly describe how their programs will prepare scholars to develop AI-related competencies by the time they graduate (typically within two to three years). In other words, programs must have a concrete and immediate plan to ensure scholars gain these skills during the course of their studies, not prior to admission.”

A spokesperson for OPM addressed the two biggest concerns of current participants.

“There are no changes to placement requirements,” the spokesperson said. “As noted, NSF’s updates are forward-looking to ensure future cohorts are prepared for evolving workforce needs. NSF has encouraged institutions to use professional development funds to expand AI-related training where needed. At OPM, we are also expanding AI training and have introduced AI ambassadors to support adoption.”

On communication: “Principal investigators (PIs) remain the primary point of contact for scholars, but OPM plans to increase direct outreach and plans to issue follow-up communication to scholars on placement efforts,” the spokesperson said.

Last week’s email is the latest turn for the program, with the Cybersecurity and Infrastructure Security Agency last month declaring that it was canceling summer internships due to the lapse in funding for some DHS agencies. Congress has since provided funding for CISA. 

The agency didn’t answer a question about whether that cancellation decision has been reversed as a result.

The post Trump officials are steering a cybersecurity scholarship program toward AI appeared first on CyberScoop.

Microsoft Open-Sources 'Earliest DOS Source Code Discovered To Date'

An anonymous reader quotes a report from Ars Technica: Several times in the last couple of decades, Microsoft has released source code for the original MS-DOS operating system that kicked off its decades-long dominance of consumer PCs. This week, the company has reached further back than ever, releasing "the earliest DOS source code discovered to date" along with other documentation and notes from its developer. Today's source release is so old that it predates the MS-DOS branding, and it includes "sources to the 86-DOS 1.00 kernel, several development snapshots of the PC-DOS 1.00 kernel, and some well-known utilities such as CHKDSK," write Microsoft's Stacey Haffner and Scott Hanselman in their co-authored post about the release. [...] This source code is old enough that it hadn't been stored digitally. "A dedicated team of historians and preservationists led by Yufeng Gao and Rich Cini," calling itself the "DOS Disassembly Group," painstakingly transcribed and scanned in code from paper printouts provided by 86-DOS creator Tim Paterson. This process was made even more difficult because modern OCR software struggled with the quality of the decades-old printouts.


Spy agency officials say job loss anxiety, moving fast ‘safely’ among top challenges in AI workforce overhaul

Like many organizations, the National Geospatial-Intelligence Agency is moving to integrate AI tools into its business operations.

Jay Harless, director of human development at NGA, said the agency is trying to strike a balance: move fast enough to keep pace in what U.S. national security officials increasingly view as an AI arms race with adversarial countries like Russia and China, but not so fast that it disrupts proven intelligence-gathering methods.

“One of our primary drivers is that our adversaries were investing heavily, and so there is the pressure to keep ahead of and do that safely,” Harless said Tuesday at the Workday Federal Forum, presented by Scoop News Group. “We also realize that some of our adversaries may not have the same legal and ethical boundaries that us and our partners all need.”

Harless said the agency and others in the intelligence community are working to build systems with agentic AI that can accelerate decision-making “within secure boundaries.” That means building new IT infrastructure, validation protocols, monitoring for bias or rogue behavior, and putting accountability mechanisms in place.

“We’re moving fast, and moving fast safely by distinguishing what should be automated, what should be augmented and what should be kept purely human, because there are some things that will always be [human-operated],” he said.

A key piece is figuring out exactly how AI should fit into the work. Sasha Muth, NGA’s deputy director of human development, said the agency envisions a three-to-five-year effort to transform its workforce and IT infrastructure for the AI age. This year will be spent largely putting “structural things in place” for when and how analysts use AI, and reassessing what qualifications the agency should require for entry-level jobs.

But that effort is also causing tensions within the workforce, and Muth acknowledged that part of the challenge is convincing rank-and-file employees that the technology is going to help them – not replace them. The agency hired its first Chief AI Officer in 2024, and its upcoming three-year strategic plan will focus on change management, professional development and updating employees’ job skills. 

Muth said they are focused on evolving their human capital needs because one of her biggest fears is that over that five-year transition “we‘re going to lose a lot of our expertise” by automating functions and not doing enough to modernize job requirements.

“We do see it as a big transformation, not only for just utilizing the technology, but moving our workforce along with us, having them excited about the changes and not fearful, because there’s a lot of fear…that their job is going away, that they won’t have a job,” she said.


Rep. Delia Ramirez takes over as top House cybersecurity Dem

Illinois Rep. Delia Ramirez is taking over as the top Democrat on the House Homeland Security panel’s cybersecurity subcommittee, replacing former Rep. Eric Swalwell after his resignation.

Committee Democrats approved the change Tuesday at a meeting prior to a “shadow hearing” without the GOP majority, focused on protecting elections from Trump administration interference.

Ramirez first won election to Congress in 2022 and was reelected in 2024. She has served as the vice ranking member of the committee since 2023. She is now the ranking member of the Subcommittee on Cybersecurity and Infrastructure Protection.

She has leveled criticisms during committee hearings about the Trump administration’s personnel cutbacks at the Cybersecurity and Infrastructure Security Agency, and was critical of how data was secured under the administration’s Department of Government Efficiency initiative led by Elon Musk.

“Under a Musk and Trump presidency, it’s clear that the security of Americans’ information is not a priority. I mean, a private civilian with no security clearance bullied his way into the Treasury, set up private servers, and stole sensitive information from an agency. If that isn’t a national security crisis, a cybersecurity crisis – then I don’t know what is,” Ramirez said at an early 2025 hearing. “The true threat to our homeland security is ‘fElon’ Musk, Trump, and their blatant misuse of power to steal information and coerce employees to leave agencies.”

She cosponsored legislation last year meant to strengthen the cybersecurity workforce by promoting measures to help workers from underrepresented and disadvantaged communities to join the field.

But she also had criticisms of U.S. cybersecurity under the Biden administration, including of Microsoft’s role in the SolarWinds breach.

In a statement about her appointment Tuesday, Ramirez took aim at Trump, Vice President JD Vance, Department of Homeland Security Secretary Markwayne Mullin and White House homeland security adviser Stephen Miller.

“It’s clear that the security of our communities’ information, federal networks, and critical infrastructure have not been priorities” under them, she said. “Between the security failures of DOGE, the abuses of immigrant families’ data, and the decimation of CISA’s workforce and resources, Republicans have demonstrated a lack of interest in safeguarding our nation’s cybersecurity and our residents’ civil rights and privacy. In neglecting necessary oversight, Republicans have deregulated emerging technologies, allowed bad actors to profit from violations of our civil rights, and consented to the weaponization of government systems. It is more critical than ever that we assert our Congressional authority and disrupt the blatant corruption making us all less safe.”

Swalwell left the position following his resignation from Congress as a representative from California amid allegations of sexual misconduct.

Her ascension completes a full leadership turnover for the subcommittee. Rep. Andy Ogles, R-Tenn., took over the gavel late last year after former chairman Andrew Garbarino, R-N.Y., took over as chairman of the full committee.

The subcommittee is set to hold a hearing Wednesday on CISA and its role as the sector risk management agency for a number of critical infrastructure sectors.

Updated 4/28/26 to include comment from Ramirez.


AI Is Changing Vulnerability Discovery, and Your Software Supply Chain Strategy Has to Change With It

Wade Woolwine is Senior Director, Product Security at Rapid7.

The headlines around Glasswing have focused on how quickly AI can surface vulnerabilities, which has naturally caught the attention of security leaders. In my conversations with teams and customers, the more useful discussion has been about what that speed means in practice for business protection, especially across open source risk, dependency choices, and software supply chain resilience. The deeper issue for security leaders sits elsewhere. 

Software risk is becoming harder to manage across the full lifecycle, especially in open source dependencies, build pipelines, developer environments, and the operational processes that sit between disclosure and remediation. When vulnerabilities can be found faster and at greater depth, security teams need more than another source of findings. They need a stronger way to understand what they run, what they trust, what they can patch quickly, and where a single weak dependency can create disproportionate risk.

Faster discovery makes software supply chain resilience a more immediate leadership issue. CISOs need a clearer view of how dependencies are chosen, monitored, validated, and governed across production, build, and developer environments, especially as open source remains essential to modern software development.

Organizations already struggle to absorb vulnerability disclosures at the pace they are coming in. When discovery gets faster, the operational gap widens between knowing there is a problem and being able to do something useful about it. That gap is especially serious in the software supply chain, where a single dependency can introduce risk into build systems, production workloads, developer endpoints, and the tools used to secure them.

This is why I would frame AI-driven vulnerability discovery risk as a lifecycle challenge. The pressure does not sit in one place, but across inventory, dependency decisions, threat intelligence, patching discipline, and validation – with people, process, and visibility shaping how well an organization can respond. Technology matters, but it cannot compensate for a weak operating model underneath it.

Open source still matters. Dependency choices matter more.

Open source remains essential to modern software development because it helps teams move faster and get products to market without rebuilding common functionality from scratch. Retreating from it isn't realistic; the better response is to be more deliberate about where and how third-party code enters the environment.

Open source has always involved a trade-off between speed, efficiency, flexibility, and inherited risk, and that trade-off becomes harder to manage as AI makes code review deeper and faster. More flaws and supply chain compromises will likely be found in packages that teams have trusted for years, including transitive dependencies most developers did not knowingly choose. One only needs to look back a few weeks to find that the widely used Axios package suffered a supply chain compromise that bundled a Remote Access Trojan (RAT) designed to steal secrets. That raises the value of understanding which dependencies are essential, which ones can be removed, which ones pull in large chains of transitives, and which ones are maintained by too few people to inspire confidence.

That work starts with a more disciplined question than “Is there a package that does this?” It starts with “Do we need this dependency, and do we understand the risk that comes with it?” The safest dependency is often the one that never enters the environment in the first place.

Why inventory has to go deeper than package lists

Supply chain resilience begins with knowing what you are actually running, which sounds straightforward until a critical disclosure lands in a package no one realized was in the environment three layers deep. Dependency graphs are deeper than most teams think, and transitive risk is where a lot of operational pain begins. A package chosen directly by a developer may bring in dozens of additional packages, each with its own maintainers, release cadence, security posture, and potential failure points.

A mature approach to inventory needs to move beyond a static package list, because CISOs need confidence in three views at once: what is declared in source, what is resolved and built, and what is actually running in production. Those views often drift apart over time, which means a package can be patched in source and still remain unpatched in a deployed container or runtime environment. An SBOM on its own will not close that gap; continuous, usable inventory will.
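The drift between those three views can be made concrete with a toy comparison. A minimal Python sketch (the package names and versions are illustrative, not real advisories):

```python
# Three views of the same application's dependencies: declared in source,
# resolved at build time, and observed at runtime. (Hypothetical data.)
declared = {"axios": "1.6.0", "left-pad": "1.3.0"}
built    = {"axios": "1.6.0", "left-pad": "1.3.0", "follow-redirects": "1.15.2"}
running  = {"axios": "1.5.1", "left-pad": "1.3.0", "follow-redirects": "1.15.2"}

# Packages present in the build but never declared: transitive dependencies
# nobody knowingly chose.
transitive = set(built) - set(declared)

# Packages whose deployed version differs from the build: patched in source,
# still stale in the running environment.
drift = {name for name in built if running.get(name) not in (None, built[name])}

print(sorted(transitive))  # ['follow-redirects']
print(sorted(drift))       # ['axios']
```

Real inventory tooling works from lockfiles, SBOMs, and runtime observation rather than hand-built dictionaries, but the gap it has to close is exactly this one.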

That inventory also needs clear ownership attached to it, because the moment a critical dependency is identified, someone has to decide what happens next, coordinate the change, and absorb the operational consequences. Security teams cannot do that well if responsibility is unclear, which is why ownership needs to be treated as part of resilience rather than an administrative detail.

Build pipelines and developer environments deserve the same scrutiny as production

Supply chain conversations still tend to start with production systems, even though recent incidents have shown how quickly compromise can move through the build layer, developer tooling, or the security tooling inside the pipeline itself. Those environments hold code, secrets, and trust relationships that attackers know how to exploit, while developer workstations often carry a rich mix of credentials and elevated privileges because speed matters to the business. Build systems are predictable and privileged, which makes them both valuable and vulnerable, but also easier to monitor.

Seeing those layers as part of the same attack surface means asking harder questions about how code enters the build, how package updates are governed, how actions and dependencies are pinned, what secrets exist in CI/CD, and what controls are in place on developer endpoints to detect anomalous behavior or stop high-risk package activity before it goes unnoticed.

You can gauge the maturity of the operating model with the answers to a few basic questions:

  • How tightly are dependencies controlled in CI?

  • How are package lifecycle scripts governed?

  • What secrets exist in CI/CD, and what protections surround them?

  • What visibility exists into anomalous behavior on developer endpoints?

  • How would the team detect or prevent high-risk package activity before it spreads?

If those answers are unclear, important parts of the model are still missing.

Why prioritization matters more as scanning accelerates

When software risk rises, the instinct is often to add another scanner because more visibility feels like progress. What matters more over time, though, is how well teams can prioritize the findings that follow, assign them to the right owner, choose the right mitigation, and prove that exposure actually went down. Broader scanning and faster discovery mostly add to the pile unless the operating model behind them is strong enough to turn findings into action. Feed more issues into a process that is already stretched and the backlog grows, priorities become harder to sort, and remediation slows in the places where speed matters most. The organizations that come through this period well will be the ones that treat supply chain resilience as a systems problem, with stronger intake, clearer governance, better intelligence, and faster paths from alert to action.
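What "prioritize the findings" means in practice can be sketched with a toy scoring function. The fields and weights below are invented for illustration; they are not a standard, just one way reachability and fixability can outrank raw severity:

```python
# Hypothetical triage sketch: rank findings by exploitability-weighted
# severity and whether a fix path exists, so the queue reflects reducible risk.
findings = [
    {"id": "CVE-A", "cvss": 9.8, "reachable": False, "fix_available": True},
    {"id": "CVE-B", "cvss": 7.5, "reachable": True,  "fix_available": True},
    {"id": "CVE-C", "cvss": 9.1, "reachable": True,  "fix_available": False},
]

def priority(finding):
    score = finding["cvss"]
    score *= 2.0 if finding["reachable"] else 0.5      # reachability dominates raw severity
    score *= 1.5 if finding["fix_available"] else 1.0  # actionable findings come first
    return score

ranked = sorted(findings, key=priority, reverse=True)
print([f["id"] for f in ranked])  # ['CVE-B', 'CVE-C', 'CVE-A']
```

The point of the sketch is the inversion it produces: the highest-CVSS finding ends up last because it is not reachable, which is the kind of reordering a stretched team needs from its intake process.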

What stronger software supply chain resilience looks like in practice

A stronger response starts with a deeper inventory of dependencies across source, build, and runtime, so teams can see both direct and transitive packages and connect them back to real environments and real owners. Once that picture is in place, intelligence monitoring becomes far more useful when it runs continuously against credible signals on vulnerabilities, package risk, maintainer health, end-of-life software, and unusual changes in dependency behavior.

The same level of care needs to carry through into dependency governance, where better decisions depend on asking whether a new package is necessary, how much transitive risk it introduces, whether its maintenance model is healthy, and what policy governs its path into production. Build and developer controls belong in that same conversation, because version pinning, private registries, secret handling, script restrictions, immutable builds, ephemeral runners, and stronger endpoint monitoring all reduce the attack surface around the software supply chain.

Monitoring threat intelligence for notifications about new vulnerabilities and compromised packages, and having a well-defined and practiced process for scoping and remediating emerging threats, becomes critical. Your supply chain vulnerability and compromise response should be practiced – just like your incident response plan – through tabletop exercises and simulated threat events. You don’t want to wait until the house is on fire to know how to execute an effective response.

Similarly, Engineering, DevOps, and Security teams should collaborate on establishing a trust and reputation scoring mechanism for supply chain dependencies. Being able to evaluate the speed of response, the transparency of communication and updates, and the ultimate resolution of a vulnerability or compromise speaks volumes about how much you can trust the maintainers of the software you depend on. The OpenSSF Scorecard project offers a great place to start evaluating the open source packages you’re already using.
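A homegrown version of that scoring idea might start as simply as this. The signal names and thresholds are invented for illustration; OpenSSF Scorecard uses its own, much richer set of checks:

```python
# Hypothetical maintainer-health signals for one dependency.
signals = {
    "days_since_last_release": 40,   # recent release cadence
    "active_maintainers": 3,         # bus-factor proxy
    "median_days_to_fix_cve": 14,    # responsiveness to disclosures
}

# Award points per signal against illustrative thresholds.
score = 0
score += 2 if signals["days_since_last_release"] < 180 else 0
score += 2 if signals["active_maintainers"] >= 2 else 0
score += 2 if signals["median_days_to_fix_cve"] <= 30 else 0

print(f"trust score: {score}/6")  # trust score: 6/6
```

Even a crude score like this gives teams a shared vocabulary for comparing dependencies and a threshold to gate new packages behind.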

Organizations should also have a fallback plan for when a security patch is not available. Options include exploring other open source packages that perform similar functions, applying other mitigations such as application firewalling, or even forking the project and contributing a security patch back to the community.

Validation closes the loop by showing whether the artifact came from where it was supposed to, whether the package has drifted in unexpected ways, and whether the mitigations applied are reducing live risk rather than simply documenting the process.
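The first of those validation questions often reduces to checksum or signature verification before anything is deployed. A minimal sketch using a stand-in artifact (the bytes and expected digest here are illustrative; in practice the expected value comes from the publisher's signed release metadata):

```python
import hashlib

# Published checksum for the artifact (here, the SHA-256 of the stand-in bytes).
expected = "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae"

artifact = b"foo"  # stand-in for the downloaded package bytes

# Refuse to trust the artifact unless its digest matches what was published.
digest = hashlib.sha256(artifact).hexdigest()
assert digest == expected, "artifact hash mismatch -- do not deploy"
print("checksum verified")
```

Signature schemes such as Sigstore go further by binding the digest to an identity, but the drift check at the heart of validation looks like this.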

How CISOs should think about the next 12 months

The strain on security teams is only growing, and the potential for AI to relieve some of that pressure is understandably compelling, especially when boards, CEOs, and CFOs are asking how the organization plans to adopt it. That makes this a leadership question as much as a technology one. CISOs need a clear point of view on where AI can genuinely improve resilience, where it still introduces too much uncertainty, and how to explain those choices in business terms.

If software engineering teams are already adopting AI-assisted development, security teams should be part of that conversation early, especially around dependency management. I have seen teams begin connecting AI coding agents to vulnerability management workflows so those agents can interpret vulnerabilities found in the code base, assess reachability with more context, help plan remediation, and validate updates much faster than traditional handoffs usually allow. Used well, that can reduce drag across the workflow and help teams move faster on classes of issues that are currently slowing them down.

Getting there safely still depends on the foundation underneath it. A more resilient path starts with a clearer picture of the environment and a more complete inventory of dependencies across source, build, and runtime. From there, ownership needs to be explicit, threat and vulnerability intelligence needs to be embedded into how the organization prioritizes, and dependency sprawl needs to be reduced with more discipline around what actually enters production. The same mindset should carry through to the build layer and developer endpoints, where tighter controls and better visibility help reduce unnecessary exposure, while faster and more repeatable paths from disclosure to action make it easier for teams to respond before risk compounds.

That foundation will matter regardless of which AI model or platform becomes dominant six or twelve months from now. It will also matter if the next wave of AI makes backlog reduction, lower-tier remediation, or patch validation more practical. Organizations that know what they run and how they operate will be in a much better position to adopt those capabilities with intent.

The shift security leaders should make now

Security in an AI-accelerated world needs to be managed as a systems challenge, with supply chain resilience shaped by how well organizations connect software composition, exposure visibility, dependency governance, threat intelligence, build integrity, endpoint controls, remediation workflows, and validation. When those layers are treated separately, gaps open quickly; when they are tied together through a stronger operating model, teams are in a much better position to absorb faster discovery without losing control of the response.

For CISOs, that means continuing to use open source with a more deliberate view of dependency risk, reducing unnecessary packages where possible, knowing what is running and who owns it, and monitoring threat and vulnerability intelligence with enough discipline to act before the queue overwhelms the team. It also means paying closer attention to the attack surface across production, build, and developer environments, while treating AI as something that will amplify both the strengths and the weaknesses already present in the program. Faster discovery is here, and the organizations that handle it best will be the ones that can respond with the same level of discipline.

Vercel attack fallout expands to more customers and third-party systems

Vercel said the fallout from an attack on its internal systems hit more customers than previously known, as ongoing analysis uncovered additional evidence of compromise.

The company, which makes tools and hosts cloud infrastructure for developers, maintains that a “small number” of accounts were impacted, but it has yet to share a number or range of known incidents linked to the attack. Vercel created and maintains Next.js, an open-source React framework that’s downloaded more than 9 million times per week, and other popular open-source projects.

Vercel CEO Guillermo Rauch said the company and partners have analyzed nearly a petabyte of logs across the Vercel network and API, and learned malicious activity targeting the company and its customers extends beyond an initial attack that originated at Context.ai. 

“Threat intel points to the distribution of malware to computers in search of valuable tokens like keys to Vercel accounts and other providers,” Rauch said in a post on X.

“Once the attacker gets ahold of those keys, our logs show a repeated pattern: rapid and comprehensive API usage, with a focus on enumeration of non-sensitive environment variables,” he added.
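The pattern Rauch describes, rapid and comprehensive API usage from a single stolen credential, lends itself to a simple log-based detection heuristic. A minimal sketch (the event shape, window, and threshold are made up for illustration): flag any token that touches many distinct endpoints within a short window.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical thresholds: a legitimate client rarely sweeps many
# distinct endpoints in a few minutes; an enumeration script does.
WINDOW = timedelta(minutes=5)
DISTINCT_ENDPOINT_THRESHOLD = 10

def flag_enumeration(events):
    """events: iterable of (token, endpoint, datetime). Returns flagged tokens."""
    seen = defaultdict(list)  # token -> [(timestamp, endpoint), ...]
    for token, endpoint, ts in events:
        seen[token].append((ts, endpoint))
    flagged = set()
    for token, hits in seen.items():
        hits.sort()
        for i, (start, _) in enumerate(hits):
            # distinct endpoints hit within WINDOW of this starting event
            window_eps = {ep for ts, ep in hits[i:] if ts - start <= WINDOW}
            if len(window_eps) >= DISTINCT_ENDPOINT_THRESHOLD:
                flagged.add(token)
                break
    return flagged
```

Real detections would also weight endpoint sensitivity and compare against each token's historical baseline; this only captures the burst-of-breadth signature.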

The attack exemplifies the widespread and compounded risk posed by interconnected systems that rely on OAuth tokens, trusted relationships and overly privileged permissions linking multiple services together.

“The real vulnerability was trust, not technology,” Munish Walther-Puri, head of critical digital infrastructure at TPO Group, told CyberScoop. “OAuth turned a productivity app into a backdoor. Every AI tool an employee connects to their work account is now a potential attack surface.”

An attacker traversed Vercel’s internal systems to steal and decrypt customer data, including environment variables it stored, posing significant downstream risk. 

The company insists the breach originated at Context.ai, a third-party AI tool used by one of its employees. Researchers at Hudson Rock previously said the seeds of that attack were planted in February when a Context.ai employee’s computer was infected with Lumma Stealer malware after they searched for Roblox game exploits, a common vector for infostealer deployments. 

Vercel has not specified the systems and customer data compromised, nor has it said whether the threat has been eradicated or contained. The company said it’s found no evidence of tampering across the software packages it publishes, concluding “we believe the supply chain remains safe.” 

The company fueled further intrigue in its updated security bulletin, noting that it also identified a separate “small number of customers” that were compromised in attacks unrelated to the breach of its systems. 

“These compromises do not appear to have originated on Vercel systems,” the company said. “This activity does not appear to be a continuation or expansion of the April incident, nor does it appear to be evidence of an earlier Vercel security incident.”

It’s unclear how Vercel became aware of those attacks and why it’s disclosing them publicly. 

Vercel declined to answer questions, and Mandiant, which is running incident response and an investigation into the attack, referred questions back to Vercel. 

Vercel has not attributed the breach to any named threat group or described the attackers’ objectives. 

An online persona identifying themselves as ShinyHunters took responsibility for the attack and is attempting to sell the stolen data, which they claim includes access keys, source code and databases. Austin Larsen, principal threat analyst at Google Threat Intelligence Group, said the attacker is “likely an imposter,” but emphasized the risk of exposure is real.

Walther-Puri warned that the downstream blast radius from the attack on its systems remains undefined. “Stolen API keys and source code snippets from internal views are potentially keys to customer production environments,” he said.

The stolen data attackers claim to have “sounds almost boring … but it’s infrastructure intelligence,” Walther-Puri added. “The right environment variable doesn’t just unlock a system — it lets adversaries become that system, silently, from the inside.”

The post Vercel attack fallout expands to more customers and third-party systems appeared first on CyberScoop.

CISA director pick Sean Plankey withdraws his nomination

Sean Plankey, the long-sidelined nominee to lead the Cybersecurity and Infrastructure Security Agency, asked President Donald Trump on Wednesday to withdraw his nomination.

“At this point in time, I am asking the President to remove my nomination from consideration,” he said in a notification letter seen by CyberScoop. “After thirteen months since my initial nomination, it has become clear that the Senate will not confirm me.”

Plankey’s request comes weeks after the Senate confirmed Markwayne Mullin to lead the Department of Homeland Security, CISA’s parent agency.

“The Nation and Department of Homeland Security Secretary Markwayne Mullin requires a confirmed director of CISA without further delay,” Plankey wrote, adding thanks to Trump himself. “While I humbly request the removal of my nomination, I wholeheartedly support President Trump’s upcoming nomination for CISA and look forward to the continued success of the United States of America.”

Plankey’s nomination was considered dead by most at the end of last year. His renomination this year caught many by surprise, with CBS reporting the paperwork filing was an accident. The White House denied that.

Numerous senators had placed holds on his nomination, including GOP senators who blocked him over matters unrelated to cybersecurity. Most prominently, Sen. Rick Scott, R-Fla., held up the nomination over a Coast Guard contract with a Florida company that DHS had partially canceled.

Plankey had been serving as an adviser to then-DHS Secretary Kristi Noem on Coast Guard matters. He retired from the Coast Guard last month.

While Plankey awaited confirmation, Bridget Bean, then Madhu Gottumukkala, served as acting director. Gottumukkala recently left the position for another at DHS amid widespread complaints about his leadership. Nick Andersen is currently serving as acting director.

Plankey told CyberScoop he had discussed withdrawing his nomination with Mullin. He said he has a “positive relationship” with Mullin and supported his leadership of DHS. And Plankey called Andersen “one of the most competent cybersecurity people in the country.”

Politico first reported Plankey’s withdrawal request. The White House and CISA did not respond to requests for comment. Asked for comment, a DHS spokesperson said the department doesn’t comment on personnel matters.

Plankey’s withdrawal leaves the agency facing yet more upheaval. Trump has dramatically cut personnel and budget at CISA, with many top officials pushed out or otherwise departing. He has proposed still deeper budget cuts for fiscal year 2027.

Updated 4/22/26: to include DHS response.

The post CISA director pick Sean Plankey withdraws his nomination appeared first on CyberScoop.

Vercel’s security breach started with malware disguised as Roblox cheats

Vercel customers are at risk of compromise after an attacker hopped through multiple internal systems to steal credentials and other sensitive data, the company said in a security bulletin Sunday. 

The attack, which didn’t originate at Vercel, showcases the pitfalls of interconnected cloud applications and SaaS integrations with overly privileged permissions. 

An attacker traversed third-party systems and connections left exposed by employees before it hit the San Francisco-based company that created and maintains Next.js and other popular open-source libraries. 

Researchers at Hudson Rock said the seeds of the attack were planted in February when a Context.ai employee’s computer was infected with Lumma Stealer malware after they searched for Roblox game exploits, a common vector for infostealer deployments.

Each of the companies is pinning at least some of the blame for the attack on the other vendor.

Context.ai on Sunday said that breach allowed the attacker to access its AWS environment and OAuth tokens for some users, including a token for a Vercel employee’s Google Workspace account. Vercel is not a Context customer, but the Vercel employee was using Context AI Office Suite and granted it full access, the artificial intelligence agent company said. 

“The attacker used that access to take over the employee’s Vercel Google Workspace account, which enabled them to gain access to some Vercel environments and environment variables that were not marked as sensitive,” Vercel said in its bulletin. 

The company said a limited number of its customers are impacted and were immediately advised to rotate credentials. Vercel, which declined to answer questions, did not specify which internal systems were accessed or fully explain how the attacker gained access to Vercel customers’ credentials. 

Vercel CEO Guillermo Rauch said customer data stored by the company is fully encrypted, yet the attacker got further access through enumeration, or by counting and inventorying specific variables. 

“We believe the attacking group to be highly sophisticated and, I strongly suspect, significantly accelerated by AI,” he said in a post on X. “They moved with surprising velocity and in-depth understanding of Vercel.”

A threat group identifying itself as ShinyHunters took responsibility for the attack in a post on Telegram and is attempting to sell the stolen data, which they claim includes access keys, source code and databases.

The attacker “is likely an imposter attempting to use an established name to inflate their notoriety,” Austin Larsen, principal threat analyst at Google Threat Intelligence, wrote in a LinkedIn post. “Regardless of the threat actor involved, the exposure risk is real.”

Vercel also warned that Context’s Google Workspace OAuth app “was the subject of a broader compromise, potentially affecting its hundreds of users across many organizations.” It published indicators of compromise and encouraged customers to review activity logs and to review and rotate variables containing secrets.
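Acting on the rotate-your-secrets advice can be partially automated. A hedged sketch (the prefixes and thresholds are illustrative; production scanners are far more thorough): flag environment variables whose values look like credentials, either by a known key prefix or by being long, high-entropy opaque strings.

```python
import math

# Hypothetical heuristics: a few widely recognized credential prefixes
# plus a Shannon-entropy check for long opaque strings.
KNOWN_PREFIXES = ("AKIA", "sk-", "ghp_", "xoxb-")

def shannon_entropy(s):
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

def looks_like_secret(value):
    if value.startswith(KNOWN_PREFIXES):
        return True
    # long strings with many distinct characters are likely keys/tokens
    return len(value) >= 20 and shannon_entropy(value) > 4.0

def vars_to_rotate(env):
    """env: {name: value}. Return names whose values look like secrets."""
    return sorted(name for name, v in env.items() if looks_like_secret(v))
```

Entropy checks produce false positives (and miss short PINs), so results like these are a review queue, not an automatic rotation list.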

Context and Vercel said their separate, coordinated investigations into the attack, aided by CrowdStrike and Mandiant, remain underway.

The post Vercel’s security breach started with malware disguised as Roblox cheats appeared first on CyberScoop.

Why the Axios attack proves AI is mandatory for supply chain security

Two weeks ago, a suspected North Korean threat actor slipped malicious code into Axios, a widely used JavaScript library. The immediate concern was the blast radius: roughly 100 million weekly downloads spanning enterprises, startups, and government systems. But beyond the sheer scale, the attack’s speed was just as worrisome – a stark reminder of the tempo modern adversaries now operate at.

The Axios compromise was identified within minutes of publication by an Elastic researcher using an AI-powered monitoring tool that analyzed package registry changes in real time. The approach was right: AI classifying code changes at machine speed, at the moment of publication, before the damage compounds. By any standard, it was a fast response. The compromised package was removed in about three hours. But even in those three hours, the widely used package may have been downloaded over half a million times.
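The real-time classification idea can be illustrated with a toy heuristic scorer (hypothetical patterns and function names, not Elastic's actual tooling): when a new version is published, score its manifest and changed source for install-time hooks and payload-like content.

```python
import re

# Heuristic red flags often seen in hijacked npm releases: lifecycle
# scripts that execute on install, and obfuscated or exfil-style code.
SUSPICIOUS_SCRIPT_KEYS = {"preinstall", "install", "postinstall"}
PAYLOAD_PATTERNS = [
    re.compile(r"eval\s*\("),             # dynamic code execution
    re.compile(r"child_process"),         # a library spawning processes
    re.compile(r"[A-Za-z0-9+/=]{120,}"),  # long base64-like blob
]

def score_release(manifest, changed_source):
    """Return human-readable flags for a newly published version."""
    flags = []
    for key in SUSPICIOUS_SCRIPT_KEYS & set(manifest.get("scripts", {})):
        flags.append(f"new lifecycle script: {key}")
    for pat in PAYLOAD_PATTERNS:
        if pat.search(changed_source):
            flags.append(f"payload pattern: {pat.pattern}")
    return flags
```

A real classifier would also diff against the previous version and weigh maintainer-account anomalies; the point is that this kind of check runs at publication time, not at download time.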

This underscores a new reality. Enterprises and the public sector are being overwhelmed with attacks that are increasing in both speed and complexity, driven in part by AI. Adversaries are probing every link in the supply chain, and they are doing it at a pace that human-speed defenses cannot match.

That detection effort is one example of using AI to tackle a security problem, but it also makes a broader case: AI-powered security can dramatically improve SOC efficiency, especially when organizations across the public sector and beyond are drowning in attacks.

The direct threat to the public sector

Government agencies increasingly rely on the same open-source JavaScript frameworks as the private sector, so a poisoned package can give an adversary access to sensitive systems before anyone realizes the supply chain has been compromised. This is a direct threat to national security and critical infrastructure, especially when the payloads are cross-platform, affecting macOS, Windows, and Linux.

What is most critical now is understanding and correctly preparing for the frequency and speed at which these attacks occur.

AI has fundamentally lowered the barrier to sophisticated cyber operations, granting relatively unsophisticated bad actors and small nation-states capabilities once reserved for elite criminal groups and countries. Adversaries now leverage AI to automate reconnaissance, craft convincing social engineering, and develop evasive malware. With a new vulnerability discovered every few minutes, the pace is accelerating.

For the public sector, the threat model has expanded. Defending against known nation-state playbooks is no longer sufficient—that’s just the baseline. Groups that couldn’t execute at nation-state levels five years ago now operate with comparable sophistication, while state-sponsored actors operate with unprecedented speed and automation. Staying ahead means moving beyond traditional defense to meet a threat landscape that is increasingly automated and ubiquitous.

AI is not optional

Adversarial AI is the defining threat of the current operating environment. Automated reconnaissance. AI-generated obfuscation. Machine-speed deployment across multiple vectors simultaneously. The adversary has implemented AI faster and more aggressively than most defensive teams.

One principle is rapidly becoming unquestionable in security: if you are not using AI to battle AI, you will lose.

That does not mean buying into the autonomous SOC fantasy. That approach treats AI in isolation, as if defenders are the only ones with access to the technology. Defensive AI is not a win button, but the minimum entry fee to stay level with the attacker. You still need business context, mission knowledge, and human judgment.

The agentic SOC transformation

The Axios compromise should serve as a clear signal. Nation-state actors are targeting the software supply chain with increasing frequency and sophistication. The government agencies and organizations that will defend successfully against these threats are the ones building security operations that can move just as fast as the threat actors they face.

AI-driven security operations that can match the speed of modern threats, such as agentic workflows that automatically triage, investigate, and contain suspicious activity, are operationally necessary. An agentic SOC mindset and approach will amplify analysts’ work: agents will operate on behalf of the analyst automatically and transparently.

The traditional SOC pyramid puts humans at the bottom doing the highest-volume work. A wide analyst tier triaging alerts, feeding a narrower senior tier handling investigations. Adversarial AI has made that base layer untenable. The volume is too high, the speed too fast, the surface area too broad. The pyramid inverts into a diamond – AI takes the base while analysts rise to become threat engineers: managing, validating, and improving the agents working on their behalf.

AI agents handle the high-volume work of alert correlation, investigation enrichment, and initial containment while human analysts focus on strategic decisions and mission context. These agents amplify the expertise that government security professionals bring, delivering pre-investigated, correlated findings rather than a flood of disconnected alerts.

The rapid acceleration of sophisticated attacks calls for this essential change across the SOC. The public sector and industry are undergoing a significant transformation, shifting away from eyes-on-glass alert triage toward a high-impact era of threat engineering. In doing so, public sector teams will have the ability to greatly reduce mean time to detect/respond, in turn reducing SOC analyst fatigue and compressing investigation timelines.

Mike Nichols is the GM of Security at Elastic.

The post Why the Axios attack proves AI is mandatory for supply chain security appeared first on CyberScoop.

FSF to OnlyOffice: You Can't Use the GNU (A)GPL to Take Software Freedom Away

Nextcloud joined a project to create a sovereign replacement for Microsoft Office called "Euro-Office". But after that project forked OnlyOffice, OnlyOffice suspended its partnership with Nextcloud. "They removed all references to our brand/attribute as required by our license," argued OnlyOffice CEO Lev Bannov on March 30th. ("The core issue here isn't just about what the AGPL license states, but about the additional provisions we, as the authors, have included... If the Euro-Office team believes our approach conflicts with the AGPLv3 license, we invite them to submit an official request to FSF for review.") But this week the FSF responded (as "the steward of the GNU family of General Public Licenses"), criticizing OnlyOffice's "attempt to impose an additional restriction on the AGPLv3" and calling it "inconsistent with the freedoms granted by the license," in a blog post from FSF licensing/compliance manager Krzysztof Siewicz: It is possible to modify the (A)GPLv3 with additional terms, but only by adhering to the terms of the license... The (A)GPLv3 makes it clear that it permits all licensees to remove any additional terms that are "further restrictions" under the (A)GPLv3. It states, "[i]f the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term"... We urge OnlyOffice to clarify the situation by making it unambiguous that OnlyOffice is licensed under the AGPLv3, and that users who already received copies of the software are allowed to remove any further restrictions. Additionally, if they intend to continue to use the AGPLv3 for future releases, they should state clearly that the program is licensed under the AGPLv3 and make sure they remove any further restrictions from their program documentation and source code. 
Confusing users by attaching further restrictions to any of the FSF's family of GNU General Public Licenses is not in line with free software. "If FSF determines that our license and project align with AGPLv3, we will continue as an open-source initiative," OnlyOffice's CEO had written in March. "However, if the decision goes against us, we are ready to consider other options."

Read more of this story at Slashdot.

CISA cancels summer internships for cyber scholarship students amid DHS funding lapse

The Cybersecurity and Infrastructure Security Agency has informed participants of the federal government’s Scholarship for Service program that it has canceled this year’s summer internship programs due to the current funding issues at the Department of Homeland Security. 

Emails from CISA recently obtained by CyberScoop informed applicants that the agency will not bring any CyberCorps: Scholarship for Service interns onboard this summer due to the impacts of the federal funding lapse and the current administrative situation at DHS. For some applicants, agency representatives acknowledged that the cancellations represent a second consecutive year of disrupted placement efforts.

The National Science Foundation (NSF) leads and manages the program, in coordination with the Office of Personnel Management (OPM) and DHS. The program covers tuition and provides stipends for students specializing in cybersecurity and artificial intelligence. In exchange, graduates must complete an internship and subsequently work in federal service for a period equal to the duration of their scholarship. 

An OPM official told CyberScoop the agency is “actively in contact with all Federal cabinet agencies on this topic, and are confident that we will place nearly all eligible Scholarship for Service participants within the next couple months.”

An NSF spokesperson declined to comment. CISA did not respond to CyberScoop’s request for comment. 

The sudden closure of agency pipelines highlights how federal job seekers are navigating a paralyzed hiring environment, exacerbated by budget turmoil at DHS and proposed workforce reductions under the Trump administration. The White House’s fiscal 2027 budget would slash CISA’s budget by $707 million, according to a summary released earlier this month, deepening cuts at an agency that already took a big hit in President Donald Trump’s first year.

Sources told CyberScoop Tuesday that CISA has been reaching out to internship applicants who had participated in a virtual job fair held in February, where they were told that the agency would have 100 internship roles available. However, applicants were warned that the agency would not be able to hire anyone until the agency was funded. 

Program participants expressed regret to CyberScoop last November over taking part in an initiative that binds them to an employer currently unable to hire them. Program administrators have reportedly advised students to get creative in their job searches, a directive that caused frustration among participants who rely on standard federal placement pipelines.

In response to the growing backlog of unplaced graduates, OPM announced plans to collaborate with the National Science Foundation on a mass deferment. OPM Director Scott Kupor stated that the deferment will be implemented after the government shutdown resolves, providing graduates additional time to secure qualifying positions.

The structural breakdown of the CyberCorps pipeline presents long-term challenges for the federal government’s ability to recruit technical talent. The United States currently faces an estimated 500,000 open cybersecurity positions. The scholarship program was historically viewed as a reliable mechanism to bypass private-sector wage competition and secure early-career talent for the federal government.

Lawmakers are currently battling over bills that would end the DHS shutdown. 

Tim Starks contributed to this story. 

The post CISA cancels summer internships for cyber scholarship students amid DHS funding lapse appeared first on CyberScoop.

Space Force official touts AI’s impact on cyber compliance

Seth Whitworth, who is both acting Associate Deputy Chief of Space Operations for Cyber and Data and acting chief information security officer, said he believes AI tools are shifting the way defenders review cyber risk, both for individual systems and more holistically throughout an enterprise.  

In particular, he said, large language models can be used to systematically implement fixes for the smaller but critical weaknesses that have allowed state-sponsored hackers and cybercriminals to get inside victim networks and live off the land.

“Our adversaries are not looking for the massive cybersecurity vulnerabilities – we’re actually pretty good at [defending] that,” said Whitworth Tuesday at AI Talks, presented by Scoop News Group. “They’re looking for a misconfiguration, a failed update, a tiny little thing that allows them an entry point into a very connected network.”

Many of these basic cyber hygiene problems tend to fall under existing compliance programs, but it can take more than legal mandates to fix them. Many enterprise IT networks – particularly older ones – build up technical debt over time, leading to forgotten systems, hidden routers and other forms of shadow IT that get more insecure over time.
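One way to surface those hygiene gaps is a straight diff of observed settings against an approved baseline, treating anything unknown as shadow IT until reviewed. This is a minimal, hypothetical sketch (keys and values are illustrative; real tooling compares against hardened baselines and asset inventories):

```python
# Hypothetical drift check: compare a host's observed settings to an
# approved baseline and report every mismatch.
def config_drift(baseline, observed):
    drift = {}
    for key, expected in baseline.items():
        actual = observed.get(key, "<missing>")
        if actual != expected:
            drift[key] = {"expected": expected, "actual": actual}
    for key in observed.keys() - baseline.keys():
        # present on the host but absent from the baseline: shadow IT
        drift[key] = {"expected": "<not in baseline>", "actual": observed[key]}
    return drift
```

Run continuously, a check like this turns "a failed update, a tiny little thing" from an invisible entry point into a ticket.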

Cybersecurity experts say AI agents and the large language models that power them, which can operate around the clock, are particularly well-suited to finding these smaller flaws and quickly exploiting them.

But Whitworth argued that the same technology can be used to reshape how organizations measure and track cyber compliance, from a sluggish box-checking exercise into something more nimble and substantive. He claimed that the Space Force’s internal process for obtaining Authorities to Operate and other formal security certifications used to take three to 18 months, but “can now be done in weeks and days.”

That in turn can empower program managers to “pull in all of that massive amount of data, allow the AI – who doesn’t get tired, who doesn’t miss patterns, who doesn’t miss these components – to churn on those items and then deliver something” that can inform real-time changes to cybersecurity, he said.

Whitworth also acknowledged the “fear” that many organizations still have around the use of AI, as well as lingering concerns about some of the technology’s enduring limitations like hallucinations and data poisoning. He said he still gives AI-generated outputs “extra scrutiny, because I haven’t seen the trusted validation” yet.

But he also said he gets more valuable insight into the Space Force’s holistic cyber risk from using large language models than he does from other security control assessments, which tend to focus narrowly on the risk of single systems or assets in isolation.

“We are operating in a highly connected, highly orchestrated world, and so moderate risk that’s accepted in one program immediately becomes moderate risk that is accepted in another program,” said Whitworth. “AI can take that whole picture and understand that when this system change impacts this system, it also impacts this [other] system.”

The post Space Force official touts AI’s impact on cyber compliance appeared first on CyberScoop.

OpenAI’s Mac apps need updates thanks to the Axios hack

OpenAI updated its security certificates and is requiring all macOS users to update to the latest versions after determining its products, along with many others, were impacted by a widespread supply-chain attack that briefly infected a popular open-source library in late March, the company said in a blog post Friday.

The artificial intelligence vendor said it “found no evidence that OpenAI user data was accessed, that our systems or intellectual property was compromised, or that our software was altered.”

Yet, because a GitHub workflow the company uses to sign its macOS applications downloaded and executed a malicious version of Axios, the company is treating the soon-to-be-defunct certificate as compromised.

A North Korean hacking group injected malware into two versions of Axios after it compromised the lead maintainer’s computer via social engineering and took over his npm and GitHub accounts. Jason Saayman, the lead maintainer for Axios, said the malicious versions of the software were live for about three hours before removal. 

Google Threat Intelligence Group, which tracks the threat group as UNC1069, said the impact of the attack was broad, with ripple effects potentially exposing other popular packages. The affected JavaScript libraries, downloaded more than 100 million and 83 million times per week respectively, flow into dependent downstream software. 

The attack was discovered just weeks after a series of other open-source tools, including Trivy, were compromised by UNC6780, also known as TeamPCP, resulting in aggressive extortion attempts. 

OpenAI insists the malware that infected Axios did not directly impact its certificate, which is designed to help customers confirm they are downloading legitimate software. 

“The signing certificate present in this workflow was likely not successfully exfiltrated by the malicious payload due to the timing of the payload execution, certificate injection into the job, sequencing of the job itself, and other mitigating factors,” the company said in the blog post. “Nevertheless, out of an abundance of caution we are treating the certificate as compromised, and are revoking and rotating it.”

Older versions of OpenAI’s macOS apps may lose functionality and will no longer be supported when the certificate is fully revoked May 8, the company said.

OpenAI, which hired a third-party digital forensics and incident response firm to aid its investigation and response, pinned the root cause of the security issue on a misconfiguration in its GitHub workflow. The company said it corrected that error and worked with Apple to ensure fraudulent apps posing as OpenAI cannot use the impacted certificate.

The 30-day window is designed to minimize disruption for users, but OpenAI said it will speed up the revocation deadline if it identifies any malicious activity. The company did not immediately respond to a request for comment.

The post OpenAI’s Mac apps need updates thanks to the Axios hack appeared first on CyberScoop.

Commerce setting up new AI export regime to push adoption of ‘American AI’ abroad

The Department of Commerce is putting together a catalog of AI tools that will be given special export status by the federal government to be sold abroad.

The department issued a call for proposals to participating companies in the Federal Register, looking to create a “menu of priority AI export packages that the U.S. Government will promote to allies and partners around the world.”

The companies and technologies included “will be presented by U.S. Government representatives as a standing, full-stack American AI export package and may receive priority government advocacy, export licensing review and processing, interagency coordination, and financing referrals, subject to applicable law,” the department said in a Federal Register notice Friday.

The export package was mandated through President Donald Trump’s AI executive order last year, which described the export packages as part of a larger effort to “ensure that American AI technologies, standards, and governance models are adopted worldwide” and “secure our continued technological dominance.”

“The American AI Exports Program delivers on President Trump’s directive to ensure that American AI systems – built on trusted hardware, secure data, and world-leading innovation – are deployed at scale around the world,” Secretary of Commerce Howard Lutnick said in a statement earlier this month. “By promoting full-stack American solutions, we are strengthening our economic and national security, deepening ties with allies and partners, and ensuring that the future of AI is led by the United States.”

The executive order called for certain technologies to be included in the package, including AI models and systems but also computer chips, data center storage, cloud services and networking services, along with unspecified “measures” to ensure security and cybersecurity of AI systems.

The Commerce notice envisions offering multiple packages of AI technology from “standing teams of AI companies organized to offer a complete American AI technology stack to foreign markets on an ongoing basis.” There is no limit on the number of companies that can participate in a consortium, and Commerce said there isn’t “any particular legal structure” required.

While the proposal at several points refers to these packages as “American AI,” the notice does specify that foreign companies can participate.

In fact, for certain categories like hardware, U.S.-made content needs to account for only 51% or more of the total. Member companies providing data, software, cybersecurity or application layer services can’t be incorporated or primarily based in countries like China or Russia, where national security laws may compel them to work with foreign governments or hand over sensitive data.

The potential business would be broad, covering foreign public and private sector buyers in global, regional, and country-specific markets. It also includes the potential formation of separate, “on demand” packages of companies and products meant for “specific foreign opportunities.”

But the notice also states that final decisions will be made on the basis of “national interest” by principals at the Departments of Commerce, State, Defense and Energy, as well as the White House Office of Science and Technology Policy.

Commerce does not intend to formally rank proposals or use fixed scoring formulas to approve packages of technology for the export program, and the language in the notice appears to give wide latitude to federal decisionmakers to determine whether a particular proposal meets the “national interest” threshold.

“A proposal that undertakes reasonable efforts to satisfy the 51 percent hardware U.S.-content presumption is not automatically entitled to designation, and a proposal that does not satisfy that presumption is not automatically disqualified,” the notice said. 

Tech giants launch AI-powered ‘Project Glasswing’ to identify critical software vulnerabilities

Major technology companies have joined forces to use advanced artificial intelligence to identify and address security flaws in the world’s most critical software systems, marking a significant shift in how the industry approaches cybersecurity threats.

Anthropic announced Project Glasswing on Tuesday, bringing together Amazon, Apple, Broadcom, Cisco, CrowdStrike, the Linux Foundation, Microsoft, and Palo Alto Networks. The initiative centers on Claude Mythos Preview, an unreleased AI model that Anthropic will make available exclusively to project partners and approximately 40 additional organizations responsible for critical software infrastructure.

The model has already identified thousands of previously unknown vulnerabilities in its initial testing phase, including security flaws that have existed in widely used systems for decades, according to Anthropic. Among the discoveries are a 27-year-old bug in OpenBSD, an operating system known primarily for its security focus, and a 16-year-old vulnerability in FFmpeg, a widely used multimedia software library, which automated testing tools had failed to detect despite running the affected line of code five million times. The company has been in contact with the maintainers of the relevant software, and all of the discovered vulnerabilities have been patched.

Anthropic will commit up to $100 million in usage credits for the project, along with $4 million in direct donations to open-source security organizations. The company has stated it does not plan to make Mythos Preview available to the general public, citing concerns about the model’s potential misuse.

The initiative reflects growing concerns within the technology sector about the dual-use nature of advanced AI systems. While Mythos Preview was not trained specifically for cybersecurity purposes, its coding and reasoning capabilities have proven effective at identifying subtle security flaws that have eluded human analysts and conventional automated tools.

“Although the risks from AI-augmented cyberattacks are serious, there is reason for optimism: the same capabilities that make AI models dangerous in the wrong hands make them invaluable for finding and fixing flaws in important software—and for producing new software with far fewer security bugs,” the company said in a blog post. “Project Glasswing is an important step toward giving defenders a durable advantage in the coming AI-driven era of cybersecurity.”

The project comes amid industry predictions that similar AI capabilities will soon become more widespread. Anthropic executives have indicated that without coordinated action, such tools could eventually reach actors who might deploy them for malicious purposes rather than defensive security work.

Participating organizations will be required to share their findings with the broader industry. The project places particular emphasis on open-source software, which forms the foundation of most modern systems, including critical infrastructure, yet whose maintainers have historically lacked access to sophisticated security resources.

“Open source software constitutes the vast majority of code in modern systems, including the very systems AI agents use to write new software. By giving the maintainers of these critical open source codebases access to a new generation of AI models that can proactively identify and fix vulnerabilities at scale, Project Glasswing offers a credible path to changing that equation,” said Jim Zemlin, CEO of the Linux Foundation. “This is how AI-augmented security can become a trusted sidekick for every maintainer, not just those who can afford expensive security teams.” 

Additionally, Anthropic says it has engaged in ongoing discussions with U.S. government officials regarding Mythos Preview’s capabilities. The company has framed the project in national security terms, arguing that maintaining leadership in AI technology represents a strategic priority for the United States and its allies. Anthropic has been locked in a high-stakes dispute with the Department of Defense about the U.S. military’s use of the startup’s Claude AI model in real-world operations. 

The project’s success will depend partly on whether the collaborative approach can keep pace with rapid advances in AI capabilities. Anthropic has indicated that frontier AI systems are likely to advance substantially within months, potentially creating a dynamic environment where defensive and offensive capabilities evolve in parallel.

“Project Glasswing is a starting point,” Anthropic wrote in a blog post. “No one organization can solve these cybersecurity problems alone: frontier AI developers, other software companies, security researchers, open-source maintainers, and governments across the world all have essential roles to play. The work of defending the world’s cyber infrastructure might take years; frontier AI capabilities are likely to advance substantially over just the next few months. For cyber defenders to come out ahead, we need to act now.”
