
Project Glasswing and the Next Challenge for Defenders: Turning Faster Discovery into Faster Action

20 April 2026 at 12:20

Anthropic’s Project Glasswing has sparked plenty of discussion about what AI might soon do for vulnerability discovery, but the more useful question for most security teams is how to prepare for what comes next, and, more importantly, how to seize the opportunity it creates.

As we wrote in our earlier blog, “What Project Glasswing Means for Security Leaders,” AI is becoming more capable of finding software flaws. The pressure that follows lands on the teams responsible for deciding what matters, validating risk, assigning ownership, and getting remediation moving across environments that were already hard to manage. We believe that the organizations that will benefit most from the next wave of AI will be the ones that understand their environment well enough to use these emerging AI models with intent, rather than layering them onto immature processes and hoping that speed alone will solve the backlog.

What this moment means for security teams

The number of publicly tracked software vulnerabilities has broken records almost every year over the last decade, while supply chain risk has continued to rise. Most teams were already feeling the strain of more findings than they could process cleanly. The Common Vulnerabilities and Exposures (CVE) program, the standard system for identifying and tracking known vulnerabilities, recorded 48,185 disclosures in 2025, a 20% increase over 2024, with roughly 40% of those disclosed vulnerabilities rated high or critical. 

When those figures were cited, the pace was already working out to well over a hundred new CVEs per day, with 2026 on track to push that higher. That tells you something important about the current environment: the challenge has not been a lack of findings, but converting a growing stream of findings into measurable risk reduction.
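
For readers who want the arithmetic behind that estimate, a quick back-of-the-envelope sketch follows, using only the figures cited above; the 2026 line is an assumption that the 20% year-over-year growth simply holds.

```python
# Back-of-the-envelope CVE disclosure rates from the figures cited above.
disclosures_2025 = 48_185
high_or_critical = disclosures_2025 * 0.40    # ~40% rated high or critical

per_day_2025 = disclosures_2025 / 365         # ~132 new CVEs per day

# Assumption (not a cited figure): 2026 repeats the ~20% year-over-year growth.
projected_2026 = disclosures_2025 * 1.20
per_day_2026 = projected_2026 / 365           # ~158 new CVEs per day

print(f"2025: {per_day_2025:.0f} CVEs/day, ~{high_or_critical:,.0f} high/critical")
print(f"2026 (projected): {per_day_2026:.0f} CVEs/day")
```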

The reality is that very few organizations are going to hand a model free rein over their most sensitive environments the minute those capabilities become more widely available. Trust will be built in stages: early adoption is much more likely to focus on backlog reduction, triage support, patch testing, and repetitive lower-tier remediation work that consumes time without carrying the same level of operational risk as the most critical systems in the business. That is a more realistic starting point, and it points to a more useful prerequisite: before teams apply AI more broadly, they need to understand their environment well enough to use it intentionally.

Establish the foundation before layering in AI

The promise from Project Glasswing and almost every other AI-powered security initiative is quite similar: leverage AI to identify patterns, summarize risk, suggest fixes, and speed up repetitive work. Regardless of technology, success still depends on how well an organization understands its environment, the context around each finding, and the process used to act on it.

A model can generate more output than a team ever could on its own, but that output becomes noise if the organization cannot answer basic questions about scope, ownership, criticality, and exposure. Teams need a clear, continuously updated picture of the environment before they can decide where AI should be applied, what should remain human-led, and which parts of the backlog are safe to push through more automated workflows.

The AI landscape is already shifting fast, and it will keep shifting, which is why this moment should prompt a more preemptive and resilient strategy rather than another round of tooling hype. Chasing each new capability as it arrives will inevitably force teams to keep reorganizing around the latest announcement. A stronger path is to get the foundation right first: understand the environment, the attack paths, and the assets that matter most, and, most importantly, establish the process and the people behind those decisions. Then use AI where it meaningfully improves speed, consistency, and focus.

Why Attack Surface Management should be part of that foundation

A strong foundation starts with visibility. Security teams need a live picture of what exists in the environment, what is exposed, how assets connect to one another, and which systems carry the greatest business impact if something goes wrong. That is where Attack Surface Management becomes central. Rapid7’s approach through Surface Command is built around a continuous view of the attack surface across the digital estate, which helps teams understand where exposures sit and how they relate to internet-facing, business-critical, or otherwise high-impact systems.

That matters for AI adoption just as much as it matters for day-to-day security operations. Teams cannot apply AI strategically if they are guessing about which parts of the environment are lower priority, which assets belong to which owners, or where a newly disclosed flaw could create real business risk. A better view of the attack surface gives organizations the context they need to segment the problem properly. That makes it far easier to start with the right use cases, whether that is backlog reduction in lower-impact systems, targeted prioritization of exposed assets, or faster triage where the risk picture is already well understood.

Ownership is part of that foundation too. Remediation slows down when no one can quickly identify who owns the affected application, environment, or workflow. Security teams already lose time there today, and AI will only make that bottleneck more visible if it starts surfacing issues faster than organizations can assign them. Attack Surface Management helps turn that ambiguity into something more actionable by tying exposure to environment context and likely ownership.
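
In practice, that can start as something very small: a lookup from asset to owning team with an explicit fallback queue, so unknown assets surface immediately instead of stalling. The sketch below is hypothetical; the asset names, team names, and queue are invented for illustration rather than drawn from any product.

```python
# Hypothetical asset-to-owner mapping, e.g. assembled from CMDB tags or an
# attack surface inventory. Every name here is invented for the example.
ASSET_OWNERS = {
    "payments-api": "team-payments",
    "marketing-site": "team-web",
    "build-runner-03": "platform-engineering",
}

def assign_owner(asset: str) -> str:
    """Route a finding to the owning team; unknown assets land in a visible
    triage queue rather than silently blocking remediation."""
    return ASSET_OWNERS.get(asset, "security-triage-queue")

print(assign_owner("payments-api"))   # -> team-payments
print(assign_owner("legacy-ftp-01"))  # -> security-triage-queue
```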

How Vulnerability and Exposure Management turns visibility into action

Once the environment is understood, teams still need a way to move from findings to outcomes. That is where Vulnerability and Exposure Management becomes the operating layer that keeps the work grounded.

The biggest value here is not simply collecting more vulnerability data. It is targeted prioritization and validation. When a disclosure lands, teams need to know whether the issue affects an exposed asset, whether there is evidence of exploitation or attacker interest, whether the impacted system is business-critical, and whether existing controls already reduce some of the risk. That is the kind of context that helps organizations decide what deserves immediate attention and what can be handled through a normal remediation cycle.
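
As an illustration of that decision logic, here is a minimal triage sketch. Everything in it is hypothetical: the field names, categories, and routing rules are illustrative stand-ins for the questions above, not a Rapid7 API or a recommended policy.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    asset_exposed: bool           # does the issue affect an exposed asset?
    exploitation_evidence: bool   # exploitation in the wild or attacker interest?
    business_critical: bool       # is the impacted system business-critical?
    compensating_controls: bool   # do existing controls already reduce the risk?

def triage(f: Finding) -> str:
    """Route a new disclosure using the four questions described above."""
    if f.asset_exposed and f.exploitation_evidence:
        # Exposed and actively targeted: only existing controls soften urgency.
        return "expedited" if f.compensating_controls else "immediate"
    if f.business_critical and (f.asset_exposed or f.exploitation_evidence):
        return "expedited"
    return "normal-remediation-cycle"

print(triage(Finding("CVE-2026-0001", True, True, True, False)))  # -> immediate
```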

This is where artificial intelligence can help move remediation forward faster. Instead of asking teams to manually connect exploit signals, asset criticality, and vulnerability intelligence on their own, AI can distill that context directly into the remediation workflow. That makes it easier to understand why an issue matters, what the likely impact is, and what to do next, which shortens the gap between discovery and a confident decision on how to respond.

We expect most organizations to use AI to assist with, or in some cases take over, lower-tier triage, backlog cleanup, summary generation, and patch support in areas where the workflow is already established and the blast radius is more manageable. Human experts still stay closest to the most critical business logic, the most sensitive environments, and the most complex remediation paths. That is a practical adoption model, and it only works when the organization already has enough structure in place to know where those boundaries are.

Curated vulnerability intelligence changes the quality of decisions

That kind of deliberate adoption only works when teams can make better decisions, faster. Security teams need more than severity scores and a long list of CVEs. They need enough context to understand what matters, what can wait, and where action will reduce real risk fastest. As Rapid7 outlined in The Power of Curated Vulnerability Intelligence, the goal is to identify the vulnerabilities that actually matter and give teams enough context to act with confidence.

That intelligence provides a form of validation that most teams need badly as disclosure volume rises. It helps answer whether a finding is tied to active attacker interest, whether proof-of-concept activity is public, whether the asset is exposed, and whether delaying a patch creates unacceptable risk. It also supports the decisions that happen in the gap between discovery and full remediation. When a patch is delayed because of change controls, testing constraints, or lack of a vendor fix, teams still need to reduce exposure. Curated intelligence helps them decide whether to use segmentation, access restrictions, configuration changes, added monitoring, or virtual patching while the longer-term fix is being worked through.
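
As a sketch of how those interim decisions could be made systematic, the snippet below maps common patch-delay reasons to candidate compensating controls. It is purely illustrative: the taxonomy of delay reasons is an assumption, and the control options are simply the ones named above.

```python
# Illustrative mapping from why a patch is delayed to candidate interim
# controls; the delay-reason taxonomy here is invented for the example.
INTERIM_CONTROLS = {
    "change-control-window": ["added monitoring", "virtual patching"],
    "testing-constraints": ["segmentation", "access restrictions"],
    "no-vendor-fix": ["configuration changes", "segmentation", "added monitoring"],
}

def interim_plan(delay_reason: str, internet_exposed: bool) -> list[str]:
    """Suggest compensating controls while the long-term fix is worked through."""
    controls = list(INTERIM_CONTROLS.get(delay_reason, ["added monitoring"]))
    if internet_exposed:
        # Exposed assets warrant cutting reachability before anything else.
        controls.insert(0, "access restrictions")
    return controls

print(interim_plan("no-vendor-fix", internet_exposed=True))
# -> ['access restrictions', 'configuration changes', 'segmentation', 'added monitoring']
```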

That is one of the clearest ways Rapid7 helps customers move from data to outcomes. Intelligence is fused into the workflow so teams can prioritize with more precision and validate their actions against real threat context, not just generalized scores.

How runtime and remediation fit into the broader AI story

There is another part of this story that matters as organizations think more seriously about AI-driven security operations. As AI shapes the way teams handle exposures earlier in the lifecycle, the runtime context of applications matters more too.

To make that foundation complete, organizations need to look beyond static posture and bring runtime validation into the picture. When teams can identify which vulnerabilities and misconfigurations are actively exploitable in production, and map sensitive data and identity access to real-world attack paths, they get a much clearer view of actual risk. Security teams need to understand what is vulnerable, how systems behave when live, and where unusual activity may suggest a problem is moving toward exploitation. With that runtime context in place, teams can spend less time chasing theoretical vulnerabilities and more time focusing on the exposures that are actively creating risk in live environments. 

That connection between exposure, intelligence, remediation, and runtime behavior is where AI starts to become genuinely useful rather than simply impressive. It supports a more intentional model of security decision-making, one that narrows the gap between what is found, what matters, and what happens next.

What security leaders should do now

This is a good time for security leaders to step back and ask a more disciplined set of questions.

  • Do we understand our environment well enough to direct AI toward the right problems? 

  • Can we clearly separate higher-risk, higher-impact assets from the parts of the backlog that are mostly operational drag? 

  • Is threat intelligence embedded in how we interpret findings, or are we still depending too heavily on raw severity? 

  • Can we identify ownership fast enough for AI-assisted triage to result in meaningful action? 

  • Are compensating controls part of the plan when remediation cannot happen immediately?

Those questions shape the quality of everything that follows.

Glasswing creates a real opportunity for security teams that are ready to use AI with more intention. AI can move work forward faster, reduce manual drag, and absorb classes of issues that currently consume time without improving outcomes. The teams that benefit most will not be the ones that rush to apply new models everywhere. They will be the ones that understand their environment, have a clear view of their attack surface, have mature enough workflows to apply AI where it makes sense, and can measure whether the actions taken actually reduced exposure.

Rapid7’s approach to building resilience is grounded in those same needs. Attack Surface Management provides the environmental foundation, Vulnerability and Exposure Management drives prioritization and action, curated vulnerability intelligence strengthens validation and decision-making, AI-generated remediation insights compress the time from discovery to the next step, and runtime security adds context where live behavior matters. Together, those pieces help customers build a security program that is ready for AI rather than constantly reacting to it.

Here’s how cyber heavyweights in the US and UK are dealing with Claude Mythos

By: djohnson
13 April 2026 at 17:43

A joint report from the Cloud Security Alliance (CSA), the SANS Institute and the Open Worldwide Application Security Project (OWASP) concludes that in the near term, organizations are “likely to be overwhelmed” by threat actors using AI to find and exploit vulnerabilities faster than defenders can patch them.

While those organizations can use AI tools to speed up their own defenses, defenders “still face a heavier relative burden due to the inherent limitations of patching.” This in turn leads to “asymmetric benefits” for attackers, who can afford to adopt the technology without the same caution and bureaucracy as a multi-billion-dollar business.

“The cost and capability floor to exploit discovery is dropping, the time between disclosure and weaponization is compressing toward zero, and capabilities that previously required nation-state resources are now becoming broadly accessible,” wrote Robert Lee, SANS Institute’s Chief AI Officer; Gadi Evron, CEO of Knostic; and Rich Mogull, chief analyst at CSA, who served as the primary authors.

The report marks one of the first comprehensive responses to the capabilities of Claude Mythos from the U.S., boasting cybersecurity luminaries who have set policy at the highest levels as contributing authors, including Jen Easterly, former director of the Cybersecurity and Infrastructure Security Agency, Rob Joyce, a former top White House and NSA cybersecurity official, and Chris Inglis, former National Cyber Director.

It also includes private sector stalwarts like Heather Adkins, Google’s CISO, Katie Moussouris, CEO of Luta Security, and Sounil Yu, chief technology officer at Knostic. Another seventy CISOs, CTOs and other security executives are named as editors and reviewers.

Also this week, the UK’s AI Security Institute (AISI) detailed the results of tests it performed on a preview version of Claude Mythos, calling it a “step up” from past Anthropic models in the cybersecurity arena and able to “execute multi-stage attacks on vulnerable networks and discover and exploit vulnerabilities autonomously.”

Using a mix of Capture the Flag exercises and cyber range testing, AISI researchers found that Mythos not only raised the ceiling for technical non-experts and apprentice-level users, it narrowed the overall gap in hacking proficiency between the two. In other words, the distinction between the capabilities of amateur “script kiddies” and mid-level hackers with technical knowledge is shrinking.

Claude Mythos and other Large Language Models are increasing the capabilities of both lower and mid-level hackers when it comes to solving cybersecurity-specific tasks and challenges. (Source: AISI)

Before April 2025, no Large Language Model could complete a single expert-level CTF problem. Mythos successfully solved nearly three quarters (73%) of them.

In cyber range tests, which are meant to simulate more complex, multi-stage attack chains, the results were uneven, but they also represented meaningful progress over prior Claude models.

Mythos was subjected to a 32-step attack playbook modeled on corporate networks, spanning initial network access to full network takeover. In three of the 10 simulations, the model completed an average of 24 of the 32 steps. Older versions of Claude and other frontier models never averaged more than 16.

Claude Mythos improved on other models’ ability to complete a 32-step cyber attack targeting a simulated corporate network environment. (Source: AISI)

Mythos flunked its test against a simulated operational technology cooling tower, but researchers noted that this doesn’t mean AI is bad at exploiting OT: the model actually faltered during the IT section of the exercise.

UK researchers were more measured in their analysis of Mythos, noting that their testing indicates it is “at least capable” of autonomously taking down smaller, weakly defended enterprise networks.

But they also note that their cyber ranges lack security features, such as active defenders and defensive tooling, that would be common in many real-world networks and would present additional obstacles. Nor did the tests penalize the model for triggering security alerts.

“This means we cannot say for sure whether Mythos Preview would be able to attack well-defended systems,” the researchers concluded.

Technical debt coming due

Both the US and UK reports agree that large language models are broadly moving in the same direction: lowering the technical barrier to entry. The US authors call for organizations to more quickly adopt AI for cyber defense while overhauling their incident response playbooks and corporate policies to account for more automated defense postures.

For its part, Anthropic has said it is not selling Mythos commercially, and last week it announced the model would be made available to Project Glasswing, a consortium of major tech companies that will use it to root out and patch vulnerabilities in commonly used products and services.

But other experts have warned that businesses and governments are not well-positioned to either absorb the influx of expected vulnerability exploitation or deftly harness AI tools of their own to counter them.

Casey Ellis, CTO and founder of Bugcrowd, wrote that recent advances in AI cyber tools have succeeded largely by “living in the places we stopped looking a decade ago.”

While the cybersecurity community has spent years focusing on application security, vulnerability triage and other “top layer” security problems, AI tools and apex-level hacking groups have been feasting on vulnerabilities in forgotten firmware and in routers whose manufacturers went out of business long ago.

This reality, that tools like Mythos can endlessly weaponize the massive technical debt of large organizations, has taken the traditional defender’s dilemma and turned “the knob that used to go to ten” up to seven hundred, Ellis wrote.

Additionally, corporations and governments run on consensus-building, multiple layers of hierarchy and legal compliance. While all of those are necessary when handing cybersecurity over to automated tooling, they can also slow the process and, in the short term, deepen the asymmetry against defenders.

“Integration into actual production becomes the battlezone,” wrote Ellis. “Lag is real. Bureaucracy is real. Supply chains are real.”

The post Here’s how cyber heavyweights in the US and UK are dealing with Claude Mythos appeared first on CyberScoop.

What Project Glasswing Means for Security Leaders

Anthropic’s Project Glasswing matters because it offers an early look at how quickly software flaws may soon be found, validated, and potentially turned into viable attack paths, even if that capability is currently limited to a closed partner program. Anthropic says its restricted Claude Mythos Preview model has already identified thousands of high-severity vulnerabilities, including flaws in major operating systems and browsers, and in some cases developed related exploits autonomously.

Some early coverage has emphasized the risks and need for restraint in deploying capabilities like this, and for most organizations, it won’t immediately change day-to-day security operations. What it does offer is a signal of where the industry may be heading: a future where discovery moves faster, and where the pressure shifts to everything that follows, including prioritization, remediation, validation, and response. Glasswing feels less like the storm itself and more like the first sign that the radar is getting better faster than the emergency plan. How well can we handle what comes next?

What is Project Glasswing?

Project Glasswing is Anthropic’s new defensive security initiative built around Claude Mythos Preview, a model the company is not releasing publicly because of its cyber capabilities. Anthropic says the preview is being provided to a limited set of organizations, including AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, Nvidia, and Palo Alto Networks, with access also extended to more than 40 additional organizations. Anthropic has also committed up to $100 million in usage credits and additional support for open-source security work. 

That makes this more than another AI feature release. Anthropic is effectively signaling two things at once. First, there is a meaningful backlog of serious, undisclosed vulnerabilities still out there. Second, capabilities like this are sensitive enough that broad public release would be irresponsible right now. For security leaders, the message is not that AI replaces human researchers. It is that AI is becoming materially more useful in vulnerability research, and defenders should be thinking now about how they will handle what comes next.

Why this matters to vulnerability management

It would be easy to read this as a story about faster vulnerability discovery alone. That misses the more important point. If Anthropic’s claims are directionally right, the immediate pressure does not land on discovery alone. It lands on everything downstream of discovery: asset context, exploitability analysis, ownership, compensating controls, patching, exception handling, validation, and detection coverage. In other words, the harder part of security becomes more obvious.

That matters because most enterprise programs do not struggle to generate findings. They struggle to decide which findings matter first, who should act, what can wait, and whether remediation actually reduced exposure. If AI pushes vulnerability discovery into a new gear, weak operating models will feel that pressure first. Backlogs get bigger. Teams drown in queues. Fix rates do not keep pace. Risk stays put. That is not a model problem. It is an execution problem. 

This is why security leaders should be careful with the framing. The headline is not “AI found bugs, therefore security improves.” The headline is that the bottleneck may be moving downstream even faster than expected. That raises the value of programs that connect exposure management, remediation, and runtime defense instead of treating them as separate activities. 

What Anthropic’s examples really tell us

Some of the reported examples are striking. Anthropic and media reports say Mythos Preview found a 27-year-old OpenBSD vulnerability, a 16-year-old FFmpeg flaw that reportedly evaded millions of automated test executions, and multiple Linux kernel vulnerabilities that could be chained together. Anthropic has also said the model reproduced vulnerabilities and built proof-of-concept exploits at a high success rate in testing. Even if individual examples get debated over time, the pattern is the important part. The model appears to compress several human steps into one workflow, from discovery to validation to exploit construction. 

Security has seen faster discovery before. Fuzzing changed the game. Better automation changed the game. Large-scale bug bounty operations changed the game. What is different here is the combination of reasoning, coding, persistence, and iteration inside a single model loop. If that loop becomes reliable, then defender workflows built for human-speed intake and triage will come under more strain. That does not make coordinated disclosure obsolete. It makes today’s processes look slow.

What CISOs should ask right now

CISOs do not need to decide this week whether Anthropic’s model changes the entire market. They do need to ask a more practical question: if my environment starts surfacing materially more vulnerabilities tomorrow, what happens next?

For many organizations, that answer is uncomfortable. Findings land in multiple tools. Asset inventory is incomplete. Internet exposure is only partly understood. Ownership is fragmented. Patch cycles are slow. Exceptions pile up. Security teams cannot easily prove that a fix changed reachable risk in the real environment.

That is where this news becomes relevant. AI-driven discovery does not reduce the need for an exposure-led security model. It increases it. The organizations that benefit most will not be the ones with the biggest pile of findings. They will be the ones that can connect those findings to business-critical assets, internet exposure, identity paths, existing detections, remediation workflows, and validation. 

A good board-level translation is that faster discovery only has value if the organization can prioritize effectively, remediate quickly, and prove that the fix reduced real exposure. Otherwise, the result is more volume and more noise.

What engineers should take away from Project Glasswing

For engineers, this announcement is less a reason to either celebrate or dismiss the technology than it is a sign that defensive research workflows may change quickly if capabilities like this spread more broadly. Today, Glasswing is still limited to a small group of trusted partners, so this is not yet a shift most engineering teams will feel directly in their daily work. What it does offer is an early look at where software security may be heading.

AI-assisted discovery is likely to become more common across secure development, code review, infrastructure testing, and open-source maintenance. That creates real opportunities. Models can help explore deep code paths faster, challenge assumptions earlier, improve reproduction, and generate more detailed reports than many conventional workflows produce today.

The harder question is what comes next. If AI can generate more findings and more exploit hypotheses, engineering teams will need stronger intake, validation, and prioritization discipline, not less. Triage quality, deduplication, severity context, reproducibility, and ownership all become more important as discovery speeds up. Many maintainers and internal product security teams already struggle with volume, and machine-generated reporting could make that problem worse if workflows do not mature alongside the tooling.

At the same time, that is only one side of the equation. If models can help find bugs faster, they may also help defenders confirm impact, suggest code changes, support patch development, and reduce some of the manual effort that slows remediation today. In the longer run, the same AI shift that increases pressure on defenders may also help them absorb some of that pressure. The real issue is not whether AI adds more findings. It is whether teams can use it to shorten the full path from discovery to decision to verified fix.

The best engineering response, then, is not to argue about whether these models are impressive. It is to improve the operating path around them. Can the team confirm impact quickly, tie a flaw to reachable attack surface, deploy a patch or control change, and verify that exposure is actually reduced in production? If that chain does not improve, faster discovery alone will not deliver much value.

What this means for the next phase of security

Anthropic’s decision to restrict access is understandable, but it also underscores a harder truth: capabilities like this rarely stay contained for long. Whether through competitors, open customization, or less restrained releases, the broader industry should assume similar models will become more widely available in the near term. For most organizations, this is not a market-wide operational shift today. It is a warning of what may be closer than it appears.

That signal arrives at a time when many security operations teams are already under strain. Most can investigate only a fraction of the alerts and exposures their environments generate, which keeps them in reactive mode, manually triaging high-priority signals across fragmented telemetry while scale and consistency remain difficult to achieve. Many promises of AI super-productivity have not yet translated into day-to-day operational relief. That is part of what makes Glasswing worth paying attention to. It points to a future where discovery may improve faster than most response models do.

It also points to an opportunity. If AI can compress parts of vulnerability research, the same broader class of capabilities may eventually help defenders improve prioritization, investigation, remediation, and validation as well. That is where the next phase of security is likely to be decided. Not in whether organizations can generate more findings, but in whether they can use AI to make response workflows faster, more consistent, and more precise.

From our perspective, that raises the operational bar for defenders. If discovery gets faster, organizations will need to shorten time to detect, accelerate time to patch, and manage vulnerability backlogs with far more urgency than they do today. That starts with a threat-led view of the environment. Teams need to understand which weaknesses are most exposed, most exploitable, and most likely to matter in real attack paths so they can prioritize action based on actual risk, not just queue depth.

That is the practical lesson from Glasswing: the radar is improving faster than the emergency plan. For most organizations, the announcement does not change the queue tomorrow morning. What it does change is the urgency of preparing for a future in which discovery, triage, and response may all begin moving at a very different pace.

Tech giants launch AI-powered ‘Project Glasswing’ to identify critical software vulnerabilities

By: Greg Otto
7 April 2026 at 14:00

Major technology companies have joined forces in an effort to use advanced artificial intelligence to identify and address security flaws in the world’s most critical software systems, marking a significant shift in how the industry approaches cybersecurity threats.

Anthropic announced Project Glasswing on Tuesday, bringing together Amazon, Apple, Broadcom, Cisco, CrowdStrike, the Linux Foundation, Microsoft, and Palo Alto Networks. The initiative centers on Claude Mythos Preview, an unreleased AI model that Anthropic will make available exclusively to project partners and approximately 40 additional organizations responsible for critical software infrastructure.

The model has already identified thousands of previously unknown vulnerabilities in its initial testing phase, including security flaws that have existed in widely used systems for decades, according to Anthropic. Among the discoveries are a 27-year-old bug in OpenBSD, an operating system known primarily for its security focus, and a 16-year-old vulnerability in FFmpeg, a widely used video software program that automated testing tools had failed to detect despite running the affected code line five million times. The company has been in contact with the maintainers of the relevant software, and all of the vulnerabilities it found have been patched.

Anthropic will commit up to $100 million in usage credits for the project, along with $4 million in direct donations to open-source security organizations. The company has stated it does not plan to make Mythos Preview available to the general public, citing concerns about the model’s potential misuse.

The initiative reflects growing concerns within the technology sector about the dual-use nature of advanced AI systems. While Mythos Preview was not trained specifically for cybersecurity purposes, its coding and reasoning capabilities have proven effective at identifying subtle security flaws that have eluded human analysts and conventional automated tools.

“Although the risks from AI-augmented cyberattacks are serious, there is reason for optimism: the same capabilities that make AI models dangerous in the wrong hands make them invaluable for finding and fixing flaws in important software—and for producing new software with far fewer security bugs,” the company said in a blog post. “Project Glasswing is an important step toward giving defenders a durable advantage in the coming AI-driven era of cybersecurity.”

The project comes as the industry has predicted that similar AI capabilities will soon become more widespread. Anthropic executives have indicated that without coordinated action, such tools could eventually reach actors who might deploy them for malicious purposes rather than defensive security work.

Participating organizations will be required to share their findings with the broader industry. The project places particular emphasis on open-source software, which forms the foundation of most modern systems, including critical infrastructure, yet whose maintainers have historically lacked access to sophisticated security resources.

“Open source software constitutes the vast majority of code in modern systems, including the very systems AI agents use to write new software. By giving the maintainers of these critical open source codebases access to a new generation of AI models that can proactively identify and fix vulnerabilities at scale, Project Glasswing offers a credible path to changing that equation,” said Jim Zemlin, CEO of the Linux Foundation. “This is how AI-augmented security can become a trusted sidekick for every maintainer, not just those who can afford expensive security teams.” 

Additionally, Anthropic says it has engaged in ongoing discussions with U.S. government officials regarding Mythos Preview’s capabilities. The company has framed the project in national security terms, arguing that maintaining leadership in AI technology represents a strategic priority for the United States and its allies. Anthropic has been locked in a high-stakes dispute with the Department of Defense about the U.S. military’s use of the startup’s Claude AI model in real-world operations. 

The project’s success will depend partly on whether the collaborative approach can keep pace with rapid advances in AI capabilities. Anthropic has indicated that frontier AI systems are likely to advance substantially within months, potentially creating a dynamic environment where defensive and offensive capabilities evolve in parallel.

“Project Glasswing is a starting point,” Anthropic wrote in a blog post. “No one organization can solve these cybersecurity problems alone: frontier AI developers, other software companies, security researchers, open-source maintainers, and governments across the world all have essential roles to play. The work of defending the world’s cyber infrastructure might take years; frontier AI capabilities are likely to advance substantially over just the next few months. For cyber defenders to come out ahead, we need to act now.”

The post Tech giants launch AI-powered ‘Project Glasswing’ to identify critical software vulnerabilities appeared first on CyberScoop.
