This week on Experts on Experts, I'm joined by Christiaan Beek, Rapid7's VP of Threat Analytics, to talk through what we're seeing in the 2026 threat landscape and how it connects to recent research coming out of Rapid7 Labs.
We start with the report, but quickly move into what's already playing out in active campaigns. What stands out is not a change in attacker technique, but the pace. Weak credentials, missing MFA, exposed services, and unpatched systems still drive most intrusions. What has changed is how quickly those conditions are identified and exploited, and that shift is forcing security teams to rethink how they prioritize and respond.
The window to act is disappearing
One of the clearest themes in the conversation is timing. The issue is no longer how many vulnerabilities exist, but how quickly they are being used. The gap between disclosure and exploitation has narrowed to a matter of days in many cases, which removes the buffer teams used to rely on.
At the same time, most intrusions still begin with familiar conditions. Identity and access remain consistent weaknesses, with missing MFA and exposed remote access continuing to provide reliable entry points. What has changed is how those weaknesses are used. Access is now packaged and sold through a broader ecosystem, which increases both the speed and scale of attacks.
Access, persistence, and trusted systems
We also look at how attacker behavior is evolving beyond initial access. In some environments, the goal is no longer immediate disruption but long-term presence. That changes how teams should think about detection, because finding activity is only the starting point. Understanding how long access has existed and what has already happened becomes just as important.
At the same time, attacks are concentrating inside systems organizations rely on every day. Identity platforms, cloud environments, and collaboration tools are all becoming key targets. The challenge is that activity in these systems often looks legitimate, which makes it harder to distinguish between normal behavior and something that requires investigation.
AI is accelerating what already works
AI is part of this shift, but not because it introduces entirely new attack paths. What it does is make existing techniques faster and easier to scale, particularly in areas like social engineering and reconnaissance. Attackers can generate and adapt campaigns quickly, while defenders are dealing with increasing volumes of data.
That creates a simple but important shift. Security teams are not falling behind because they lack tools, but because the timing of attacks has changed and their processes have not kept up. The focus now is on understanding exposure earlier, prioritizing what matters, and preparing actions in advance.
Watch the full episode below to hear Christiaan's perspective on how these trends are evolving and what they mean for security leaders heading into 2026.
Anthropic's Project Glasswing has sparked plenty of discussion about what AI might soon do for vulnerability discovery, but the more useful question for most security teams is how to prepare for what comes next and, more importantly, how to seize the opportunity it creates.
As we wrote in our earlier blog, What Project Glasswing Means for Security Leaders, AI is becoming more capable of finding software flaws. The pressure that follows lands on the teams responsible for deciding what matters, validating risk, assigning ownership, and getting remediation moving across environments that were already hard to manage. We believe that the organizations that will benefit most from the next wave of AI will be the ones that understand their environment well enough to use these emerging AI models with intent, rather than layering them onto immature processes and hoping that speed alone will solve the backlog.
What this moment means for security teams
The number of publicly tracked software vulnerabilities has broken records almost every year over the last decade, while supply chain risk has continued to rise. Most teams were already feeling the strain of more findings than they could process cleanly. The Common Vulnerabilities and Exposures (CVE) program, the standard system for identifying and tracking known vulnerabilities, recorded 48,185 disclosures in 2025, a 20% increase over 2024, with roughly 40% of those disclosed vulnerabilities rated high or critical.
Heading into 2026, that pace already works out to hundreds of new CVEs per day. That tells you something important about the current environment: the challenge is not so much a lack of findings as converting a growing stream of them into measurable risk reduction.
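For scale, the figures quoted above can be turned into a quick back-of-the-envelope calculation. Only the 48,185 total, the 20% year-over-year increase, and the roughly 40% high/critical share come from the report; everything else is simple division.

```python
# Back-of-the-envelope arithmetic behind the disclosure figures quoted above.
total_2025 = 48_185                          # CVE disclosures recorded in 2025
total_2024 = round(total_2025 / 1.20)        # 2025 was ~20% above 2024
high_or_critical = round(0.40 * total_2025)  # ~40% rated high or critical
per_day = total_2025 / 365                   # average daily disclosure rate

print(f"Implied 2024 baseline: ~{total_2024:,}")
print(f"High/critical in 2025: ~{high_or_critical:,}")
print(f"Average: ~{per_day:.0f} new CVEs per day")
```

Even as a flat average, that is well over a hundred new disclosures landing every day, before accounting for the days when volume spikes.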
The reality is that very few organizations are going to hand a model free rein over their most sensitive environments the minute those capabilities become more widely available. Trust will be built in stages: early adoption is much more likely to focus on backlog reduction, triage support, patch testing, and repetitive lower-tier remediation work that consumes time without carrying the same level of operational risk as the most critical systems in the business. That is a more realistic starting point, and it leads to a more useful question. Before teams apply AI more broadly, they need to understand their environment well enough to use it intentionally.
Establish the foundation before layering in AI
The promise from Project Glasswing and almost every other AI-powered security initiative is quite similar: leverage AI to identify patterns, summarize risk, suggest fixes, and speed up repetitive work. Regardless of technology, success still depends on how well an organization understands its environment, the context around each finding, and the process used to act on it.
A model can generate more output than a team ever could on its own, but that output becomes noise if the organization cannot answer basic questions about scope, ownership, criticality, and exposure. Teams need a clear, continuously updated picture of the environment before they can decide where AI should be applied, what should remain human-led, and which parts of the backlog are safe to push through more automated workflows.
The AI landscape is already shifting fast, and it will keep shifting, which is why this moment should prompt a more preemptive and resilient strategy rather than another round of tooling hype. Chasing each new capability as it arrives will inevitably force teams to keep reorganizing around the latest announcement. A stronger path is to get the foundation right first: understand the environment, the attack paths, and the assets that matter most, and, above all, establish the process and the people behind those decisions. Then use AI where it meaningfully improves speed, consistency, and focus.
Why Attack Surface Management should be part of that foundation
A strong foundation starts with visibility. Security teams need a live picture of what exists in the environment, what is exposed, how assets connect to one another, and which systems carry the greatest business impact if something goes wrong. That is where Attack Surface Management becomes central. Rapid7's approach through Surface Command is built around a continuous view of the attack surface across the digital estate, which helps teams understand where exposures sit and how they relate to internet-facing, business-critical, or otherwise high-impact systems.
That matters for AI adoption just as much as it matters for day-to-day security operations. Teams cannot apply AI strategically if they are guessing about which parts of the environment are lower priority, which assets belong to which owners, or where a newly disclosed flaw could create real business risk. A better view of the attack surface gives organizations the context they need to segment the problem properly. That makes it far easier to start with the right use cases, whether that is backlog reduction in lower-impact systems, targeted prioritization of exposed assets, or faster triage where the risk picture is already well understood.
Ownership is part of that foundation too. Remediation slows down when no one can quickly identify who owns the affected application, environment, or workflow. Security teams already lose time there today, and AI will only make that bottleneck more visible if it starts surfacing issues faster than organizations can assign them. Attack Surface Management helps turn that ambiguity into something more actionable by tying exposure to environment context and likely ownership.
How Vulnerability and Exposure Management turns visibility into action
Once the environment is understood, teams still need a way to move from findings to outcomes. That is where Vulnerability and Exposure Management becomes the operating layer that keeps the work grounded.
The biggest value here is not simply collecting more vulnerability data. It is targeted prioritization and validation. When a disclosure lands, teams need to know whether the issue affects an exposed asset, whether there is evidence of exploitation or attacker interest, whether the impacted system is business-critical, and whether existing controls already reduce some of the risk. That is the kind of context that helps organizations decide what deserves immediate attention and what can be handled through a normal remediation cycle.
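To make the triage logic above concrete, here is a minimal sketch of a context-driven decision function. It is an illustration only: the field names, the rules, and the handling lanes are hypothetical and do not reflect Rapid7's actual scoring logic.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """Context attached to one finding (field names are hypothetical)."""
    asset_exposed: bool          # affected asset is internet-facing
    exploited_in_wild: bool      # evidence of exploitation or attacker interest
    business_critical: bool      # system carries high business impact
    compensating_controls: bool  # existing controls already reduce the risk

def triage(f: Finding) -> str:
    """Map finding context to a handling lane rather than a raw severity score."""
    if f.exploited_in_wild and f.asset_exposed and f.business_critical:
        return "immediate"
    if f.exploited_in_wild or (f.asset_exposed and not f.compensating_controls):
        return "expedited"
    return "normal-cycle"

print(triage(Finding(True, True, True, False)))    # → immediate
print(triage(Finding(False, False, False, True)))  # → normal-cycle
```

The point of the sketch is the shape of the decision, not the specific rules: each question the paragraph above asks (exposure, exploitation, criticality, controls) becomes an input, and the output is a handling lane instead of a number.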
This is where artificial intelligence can help move remediation forward faster. Instead of asking teams to manually connect exploit signals, asset criticality, and vulnerability intelligence on their own, AI can distill that context directly in the remediation workflow. That makes it easier to understand why an issue matters, what the likely impact is, and what to do next, which shortens the gap between discovery and a confident decision on how to respond.
We expect most organizations to use AI to assist with, or in some cases take over, lower-tier triage, backlog cleanup, summary generation, and patch support in areas where the workflow is already established and the blast radius is more manageable. Human experts still stay closest to the most critical business logic, the most sensitive environments, and the most complex remediation paths. That is a practical adoption model, and it only works when the organization already has enough structure in place to know where those boundaries are.
Curated vulnerability intelligence changes the quality of decisions
That kind of deliberate adoption only works when teams can make better decisions, faster. Security teams need more than severity scores and a long list of CVEs. They need enough context to understand what matters, what can wait, and where action will reduce real risk fastest. As Rapid7 outlined in The Power of Curated Vulnerability Intelligence, the goal is to identify the vulnerabilities that actually matter and give teams enough context to act with confidence.
That intelligence provides a form of validation that most teams need badly as disclosure volume rises. It helps answer whether a finding is tied to active attacker interest, whether proof-of-concept activity is public, whether the asset is exposed, and whether delaying a patch creates unacceptable risk. It also supports the decisions that happen in the gap between discovery and full remediation. When a patch is delayed because of change controls, testing constraints, or lack of a vendor fix, teams still need to reduce exposure. Curated intelligence helps them decide whether to use segmentation, access restrictions, configuration changes, added monitoring, or virtual patching while the longer-term fix is being worked through.
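The in-between state described above, where a patch is delayed but exposure still has to come down, can be sketched as a lookup from the reason remediation is blocked to candidate interim mitigations. The mapping below is hypothetical and only reuses the options the paragraph names; real choices depend on asset and threat context.

```python
# Hypothetical mapping from "why the patch is delayed" to the interim
# exposure-reduction options named above; real choices need asset context.
INTERIM_MITIGATIONS = {
    "change_control_window": ["added monitoring", "virtual patching"],
    "testing_constraints": ["segmentation", "added monitoring"],
    "no_vendor_fix": ["access restrictions", "configuration changes",
                      "virtual patching"],
}

def interim_plan(blocker: str) -> list[str]:
    """Candidate steps to reduce exposure while the long-term fix is pending."""
    return INTERIM_MITIGATIONS.get(blocker, ["segmentation", "added monitoring"])

print(interim_plan("no_vendor_fix"))
```

Curated intelligence is what makes a table like this usable in practice: it supplies the evidence (active exploitation, public proof of concept, exposure) that tells a team which blocker they are actually in and how aggressive the interim controls need to be.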
That is one of the clearest ways Rapid7 helps customers move from data to outcomes. Intelligence is fused into the workflow so teams can prioritize with more precision and validate their actions against real threat context, not just generalized scores.
How runtime and remediation fit into the broader AI story
There is another part of this story that matters as organizations think more seriously about AI-driven security operations. As AI shapes the way teams handle exposures earlier in the lifecycle, the context of how applications behave at runtime matters more, too.
To make that foundation complete, organizations need to look beyond static posture and bring runtime validation into the picture. When teams can identify which vulnerabilities and misconfigurations are actively exploitable in production, and map sensitive data and identity access to real-world attack paths, they get a much clearer view of actual risk. Security teams need to understand what is vulnerable, how systems behave when live, and where unusual activity may suggest a problem is moving toward exploitation. With that runtime context in place, teams can spend less time chasing theoretical vulnerabilities and more time focusing on the exposures that are actively creating risk in live environments.
That connection between exposure, intelligence, remediation, and runtime behavior is where AI starts to become genuinely useful rather than simply impressive. It supports a more intentional model of security decision-making, one that narrows the gap between what is found, what matters, and what happens next.
What security leaders should do now
This is a good time for security leaders to step back and ask a more disciplined set of questions.
Do we understand our environment well enough to direct AI toward the right problems?
Can we clearly separate higher-risk, higher-impact assets from the parts of the backlog that are mostly operational drag?
Is threat intelligence embedded in how we interpret findings, or are we still depending too heavily on raw severity?
Can we identify ownership fast enough for AI-assisted triage to result in meaningful action?
Are compensating controls part of the plan when remediation cannot happen immediately?
Those questions shape the quality of everything that follows.
Glasswing creates a real opportunity for security teams that are ready to use AI with more intention. AI can move work forward faster, reduce manual drag, and absorb classes of issues that currently consume time without improving outcomes. The teams that benefit most will not be the ones that rush to apply new models everywhere. They will be the ones that understand their environment, have a clear view of their attack surface, have mature enough workflows to apply AI where it makes sense, and can measure whether the actions taken actually reduced exposure.
Rapid7's approach to building resilience is grounded in those same needs. Attack Surface Management provides the environmental foundation, Vulnerability Management drives prioritization and action, curated vulnerability intelligence strengthens validation and decision-making, AI-generated remediation insights compress the time from discovery to the next step, and runtime security adds context where live behavior matters. Together, those pieces help customers build a security program that is ready for AI rather than constantly reacting to it.
In the latest episode of Rapid7's Experts on Experts, I'm joined by Rapid7 CEO Corey Thomas for a candid conversation about where AI is genuinely changing security operations, and where the hype still outruns reality. The short version is that AI is already improving productivity in software development, but the bigger shift for security leaders is what it can do with telemetry at scale. As Corey puts it, no team of humans can process all security telemetry, all the time, across an entire environment. That gap is where AI can help, but only if the inputs are right.
We also dig into what this means for Managed Detection and Response (MDR), and why the market is moving from "watch a subset of signals" toward monitoring the full environment, 24/7. The catch is that raw volume is not the goal. The goal is a comprehensive data set that enables decision-making under pressure, with enough context to act early.
AI is only as good as the context behind it
One theme that kept coming up in our conversation is trust. Corey explains why earlier automation and SOAR efforts struggled. They followed strict rules, but security rarely behaves in strict patterns. When something looked similar but required a different response, teams hesitated to rely on automation. The dynamic rule making that newer AI models provide can help, but only if fueled with the right context.
Corey breaks "context" into practical components: understanding what technologies are deployed, how they are configured, what controls exist, what vulnerabilities are present, and what activity is actually happening across those systems. Without that full picture, teams spend time chasing the wrong risks. He compares it to buying earthquake insurance without knowing where you live. If you are in California, it might make sense. If you are in Florida, hurricane coverage is the real concern. Context tells you which risk actually matters.
Preemptive MDR is the shift CISOs should plan for now
Where the conversation gets especially relevant for 2026 is the move from reactive to preemptive security. To frame the change in plain terms: a reactive posture waits for alerts, while leaders want partners who anticipate and identify risks earlier.
Corey describes preemptive MDR as an attack surface discipline. It starts with understanding the full attack surface, spotting where attacks are likely to occur, and identifying the most attractive exposures in the environment. The operational step is what matters: identifying those exposures quickly, prioritizing realistically, and having preset remediation and response plans ready before the moment hits. Corey is direct about constraints, too. No organization can remediate everything all the time, but better planning and efficiency are still possible, and business expectations of security leaders are rising. He also notes that government and regulators are pushing in the same direction, and that Gartner and other analysts are reinforcing the shift toward anticipation rather than after-the-fact response.
Cloud scale forces MDR to evolve, especially around identity
We also spent time on the cloud, because it continues to reshape how security programs operate. Most organizations are building more, faster, across more cloud technologies and identities, and AI only accelerates that pace. Corey's view is that MDR has to mirror that technology reality. At a baseline, teams need to monitor what their cloud providers already offer. He calls out identity as the harder requirement: understanding identity traffic across the environment, separating legitimate from malicious behavior, and tracking roles and responsibilities so investigations do not happen in a vacuum. If an MDR program is not looking across the cloud landscape, it cannot confidently say it is monitoring the right things, especially in the areas where new bugs and misconfigurations show up first.
Transparency becomes a differentiator when AI enters the loop
As AI becomes more present in triage and investigation, Corey argues that transparency will matter even more. He shares that Rapid7 built MDR with the assumption that customers should be able to log in at any time and audit what is happening in their environment. That level of visibility can be uncomfortable, but it becomes more important as AI plays a larger role in how decisions are made. The presence of AI in MDR programs does not reduce the need for trust, but increases it. And that trust is built through transparency and auditability, not assumption.
That also means being able to show where AI is actually making a difference. It is not enough to say it is working. Teams need to see the impact in real terms.
Corey contrasts that with what he sees as the market default: black box approaches that ask customers to trust the output until something goes wrong. His prediction is blunt and practical. As buyers mature, RFPs will demand the ability to inspect how alerts are processed and how investigations are run, because that is what trust looks like at scale.
Watch the full episode below to hear Corey's take on what is changing, what is still missing, and why the strongest MDR programs in 2026 will be the ones that plan for preemptive action, not just faster reaction.