Let's be honest: the patching window has shrunk to something no practitioner or organization can keep up with. Organizations must now operate in an environment that assumes breach, which means fundamentals like attack surface management, micro-segmentation, identity management, and attack path validation (a few core pillars of Continuous Threat Exposure Management, or CTEM) have become the most important initiatives within the cybersecurity department. Rapid7 is the only vendor that provides a truly unified platform to master CTEM.
How Rapid7 satisfies all 5 steps of the CTEM Framework
Steps 1 and 2: Scoping and Discovery
Achieving full visibility
Rapid7 eliminates "unknown unknowns" by providing line-of-sight into 100% of your hybrid attack surface.
Surface Command (CAASM): We establish a single source of truth by unifying asset and identity inventory from over 200 third-party vendors and native sources.
Vulnerability Management: Our full-stack active scanning discovers shadow IT hidden within your enterprise network.
External Attack Surface Management (EASM): We scan the entire IPv4 space of the internet to automatically track changes to registered domains and public networks so you can map your external kingdom.
Unified CNAPP (Cloud Security): Our platform provides real-time, agentless visibility into every resource running across your multi-cloud environment (AWS, Azure, GCP, and Kubernetes). Through Event-Driven Harvesting (EDH), we identify infrastructure changes in under 60 seconds. This allows us to map not just the assets, but the complex identities and permissions that define your cloud risk.
Step 3: Prioritization
Moving beyond static scores
We replace generic risk scores with Active Risk and Threat-Aware Context. Our platform automatically prioritizes vulnerabilities based on real-world exploitability data from Rapid7 Labs and the Exploit Prediction Scoring System (EPSS). We are also able to incorporate your own organization’s tagging infrastructure to properly contextualize your enterprise so you focus on what matters most.
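As a rough illustration of the idea (not Rapid7's actual scoring model), a context-aware priority score might blend an EPSS probability with known-exploited status and the organization's own asset tags. The field names, weights, and sample findings below are invented for the sketch:

```python
# Hypothetical sketch: rank findings by combining exploitability signals
# with asset context rather than a static severity score alone.
# Field names, weights, and data are illustrative, not a real schema.

def active_risk(finding: dict) -> float:
    """Blend exploit likelihood with business context into one score."""
    score = finding["epss"]                      # EPSS probability, 0.0-1.0
    if finding.get("known_exploited"):           # e.g. seen exploited in the wild
        score += 0.5                             # arbitrary illustrative boost
    # Organization-supplied tags weight the business impact of the asset.
    weights = {"crown-jewel": 2.0, "internet-facing": 1.5, "internal": 1.0}
    return score * weights.get(finding.get("tag", "internal"), 1.0)

findings = [
    {"cve": "CVE-A", "epss": 0.92, "known_exploited": True,  "tag": "internet-facing"},
    {"cve": "CVE-B", "epss": 0.04, "known_exploited": False, "tag": "crown-jewel"},
    {"cve": "CVE-C", "epss": 0.31, "known_exploited": False, "tag": "internal"},
]

ranked = sorted(findings, key=active_risk, reverse=True)
print([f["cve"] for f in ranked])
```

Note how the known-exploited, internet-facing finding outranks the higher-value asset whose vulnerability shows no attacker interest, which is the behavior a threat-aware model is meant to produce.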
Step 4: Validation
Continuous human-led red teaming
This is where Rapid7 truly stands apart from automated-only vendors and point-in-time pen tests. Vector Command provides the expert human logic needed to bypass compensating controls, like WAFs, that stop automated tools cold. This gives Rapid7 the ability to answer the question: “How would an attacker get in?” We fully map the attack chain from external to internal so you can see where your controls are weakest. Ed Montgomery at Rapid7 has written extensively about the power of Vector Command – you can find his blogs here. Here’s a sampling of those stories:
The Telerik UI Example: While a scanner simply flags an old version of Telerik, our operators discovered they could bypass a WAF by splitting a malicious payload into 118 individual, "harmless" fragments, achieving full remote code execution that a time-boxed, two-week pentest would never have uncovered. An automated scan might have flagged the outdated Telerik library as notable, but it was really the WAF's configuration that allowed the bypass – something an automated scan would never have found.
SaaS Phishing: Our team exploited a misconfigured public Jira instance that allowed self-registration, hijacked an Office 365 session, and moved laterally through internal trust relationships. This validated that the true risk was a SaaS misconfiguration, not a patchable CVE.
Step 5: Mobilization
Instant response and remediation
We don't just find problems; we close the loop with integrated action.
Cloud Runtime Security (CADR): Powered by our partnership with ARMO, our eBPF-based sensor can shut down an attack in seconds by killing malicious processes or pausing containers at the moment of detection.
Automation (SOAR): InsightConnect and our "Bot Factory" in CNAPP trigger automated remediation workflows to lock down S3 buckets or disable compromised users instantly.
Remediation Hub: We provide a centralized, vendor-agnostic, action-driven list of prioritized fixes to coordinate seamlessly with IT teams.
The new standard: From weeks to minutes
If your CTEM strategy relies on static tools and annual checkboxes, you are not just behind the curve. You are operating in a completely different era. By unifying the full visibility of Surface Command with the critical thinking of Vector Command and the instant response of our Cloud Runtime capabilities, Rapid7 empowers you to take command of your attack surface.
Do not wait for a 118-fragment payload bypass to prove your defenses are porous. Move from a posture of passive observation to one of preemptive security.
Security teams want more from their data than APIs and one-off reports.
They want to ask better questions, move faster, and bring security context into the workflows they are already building. That’s especially true as more organizations experiment with private AI assistants, internal copilots, and LLM-powered automation. Part of this experimentation is, of course, an attempt to lower the pressure on teams that have to figure out how to prioritize the sheer number of actionable vulnerabilities that efforts like Project Glasswing are quickly becoming hyper-skilled at spotting.
That’s why Rapid7 is introducing a free, open-source MCP Server and Agent Skill for Bulk Export. Bulk Export is a highly efficient way to access all your Rapid7 data: no more paging APIs, no more verbose output. Bulk Export creates a local offline replica of your data that the LLM can efficiently and quickly interrogate, reducing token cost and time to answer questions.
This new MCP and Agent Skill gives customers a standardized way to connect Rapid7 vulnerability and exposure data to AI assistants and custom AI workflows. Built as an open-source bridge, it helps customers bring their Rapid7 data into the tools and experiences that work best for their teams.
Why this matters now
Security teams are no longer just buying tools. They’re connecting systems, shaping workflows, and testing how AI can help analysts, IT teams, and leaders get to answers faster. For many teams, the path from raw security data to usable AI context is still manual. It often means exporting data, building wrappers, shaping queries, and managing custom integrations.
Rather than leave every team to solve that challenge from scratch, we wanted to provide a stronger foundation that is flexible, practical, and easy to extend over time. With projects like Metasploit and Velociraptor, Rapid7 is committed to open source, and by sharing with the broader community we hope to accelerate development and incorporate more use cases and fixes. Open development also gives customers full visibility into the code and tools in use, ensuring data privacy and allowing users to do with their data what they please.
What MCP does
Model Context Protocol, or MCP, is an emerging standard for helping AI systems interact with external data and tools in a structured way.
In practical terms, it gives AI assistants a cleaner way to ask questions, retrieve data, and work with systems beyond the model itself. For customers, that means less custom glue code and a more consistent way to use security telemetry in AI-driven workflows.
That matters because many security reporting and analysis workflows still assume a high technical bar. Answering a simple question can require custom queries, SQL knowledge, or dashboard work. But the people who need those answers aren’t always security specialists. They may be IT partners, compliance stakeholders, or executives who want clarity but might not need to understand the underlying query logic.
The MCP server helps lower that barrier: Instead of starting with raw exports and working backward, teams can start with the question they need answered.
CTEM is about helping teams move beyond point-in-time findings toward a more continuous, contextual understanding of risk. That requires security data that can be accessed, connected, and used across the workflows teams rely on.
Bulk Export helps make that possible by giving customers more flexibility in how they use Rapid7 data. The open-source MCP server makes it easier to bring that data into AI-assisted and custom workflows.
That can support more continuous exposure management workflows by making it easier for teams to triage vulnerability and exposure data. For example, an analyst facing a large queue of new vulnerabilities could use LLM assistance to quickly narrow in on the findings most likely to need attention first. Instead of manually working through exports and queries, they could ask natural-language questions to surface the exposures tied to critical assets, unresolved remediation work, or other signals available in the data.
From data portability to AI-ready interoperability
Bulk Export was already an important step toward giving customers more control over their data. It made it easier to extract and use security telemetry in external tools and analytics environments.
The open-source MCP server builds on that foundation: Instead of using exported data only for dashboards or custom reporting, customers can now use that same data in AI-native experiences. That includes internal assistants, private copilots, workflow automation, and natural-language exploration of vulnerability and exposure data. This makes existing security data easier to use in the environments customers are already investing in.
How it works
At a high level, the architecture is straightforward. Using the Agent Skill, your LLM runs the MCP server locally and automatically prepares the environment by performing the bulk export and loading the data into a local file store. The Agent Skill provides the schemas and knowledge, while the MCP server provides the tools to access this data. The LLM then answers any question by querying, summarizing, and synthesizing data locally – an extremely fast and simple process for the LLM.
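The pattern is easy to picture with standard tools. The sketch below uses SQLite as a stand-in for the local file store, and an invented table layout rather than the actual Bulk Export schema; the point is that once the replica exists, a question becomes one cheap local query instead of many paged API calls:

```python
# Illustrative sketch of the local-replica pattern. SQLite stands in for
# whatever local store is used; the table and column names are hypothetical,
# not the real Bulk Export schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE findings (asset TEXT, cve TEXT, severity TEXT)")

exported_rows = [  # stand-in for rows loaded from a bulk-export file
    ("web-01", "CVE-2024-0001", "critical"),
    ("web-01", "CVE-2024-0002", "critical"),
    ("db-01",  "CVE-2024-0003", "critical"),
]
conn.executemany("INSERT INTO findings VALUES (?, ?, ?)", exported_rows)

# The kind of question an assistant might translate into SQL:
# "Which assets have the most critical findings?"
rows = conn.execute(
    "SELECT asset, COUNT(*) FROM findings "
    "WHERE severity = 'critical' GROUP BY asset ORDER BY 2 DESC"
).fetchall()
print(rows)
```

Because the query runs against local data, the LLM only has to emit and interpret a short query and a small result set, which is where the token and latency savings come from.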
Depending on the data a customer exports, answers can include vulnerability records, asset data, remediated vulnerabilities, and policy-related results.
The point here isn't just that a model can access the data; it’s that an open-source layer lets customers inspect, adapt, and extend the integration over time, empowering teams to control how that connection works in their own environment.
What customers can do with it
This opens the door to practical use cases, including:
Using LLM assistance to triage vulnerability data faster
Asking natural-language questions to spot exposure and remediation trends
Investigating which assets are tied to the most urgent vulnerabilities
Understanding what changed over time without manual analysis
Exploring policy failures without building manual queries
Feeding Rapid7 telemetry into private AI assistants and internal workflows
Making reporting more accessible for non-technical stakeholders
For teams already trying to operationalize AI, this creates a lower-friction path. Instead of building every integration from the ground up, they can start with a reusable bridge and focus on the workflows they want to enable.
A better path from data to action
Security data only creates value when teams can use it. For many organizations, turning raw telemetry into timely answers is still harder than it should be. Analysts need speed. Leaders need clarity. Builders need flexibility. And more customers want security data that works inside the tools and workflows they already rely on.
The open-source MCP server for Bulk Export is designed to help make that possible.
Bulk Export helps customers take control of their data. This is the next step: helping them put that data to work in AI-ready security workflows.
Security leaders know that reducing risk is not just about finding the right exposures, but helping the organization act on them before known issues turn into real incidents.
That is often where remediation gets harder. Security teams may know which actions matter most, but progress can slow when infrastructure, cloud, endpoint, and IT teams do not have the context needed to execute. Teams need clear asset detail to scope the work, trusted status signals to validate remediation, and usable reporting to track progress and stay aligned.
This is exactly the challenge Exposure Command is built to help solve. Exposure Command helps customers understand and prioritize the exposures that matter most, while Remediation Hub (a prioritized remediation view within Exposure Command) helps teams turn that prioritization into action. With new enhancements to Remediation Hub, customers can now do that with more context and confidence, along with better visibility into progress over time through exportable reports.
Why remediation work slows down
Prioritization is an important step, but remediation rarely happens in one place or with one team. Security, infrastructure, cloud, endpoint, and IT operations all need enough context to understand what is being asked of them.
When that context is hard to access, progress slows. Security teams may know what should be fixed, but asset owners still need the information required to assess impact, plan the work, and take action. Teams also need to understand whether assets are actually protected, whether patching has fully taken effect, and how remediation progress should be tracked over time. Without that clarity, remediation becomes harder to coordinate and harder to validate.
Making remediation more actionable
The Top Remediations Report helps close that gap by adding a comprehensive asset-level breakdown for each remediation. In addition to summary remediation information, customers can see source-specific metadata such as operating system, IP address, cloud provider, tags, endpoint protection, and patch management.
It can be used as a high-level summary of remediation priorities; many security teams use it to define remediation goals and share clear, actionable guidance with teams that may not work directly in security tools.
That gives teams a clearer view of the work behind each remediation and makes it easier to move from prioritization to execution.
Customers can also tailor reports to match the way they work, with customizable filters for specific environments, tags, or ownership groups. Reports can be exported in CSV, HTML, and PDF formats, shared with the teams responsible for action, and automatically generated and emailed on a schedule.
Building clearer visibility into patching and endpoint coverage
Action is only part of the equation, since teams also need clear, trustworthy context around asset posture.
Remediation Hub now shows the source of patch management and endpoint protection coverage directly in remediation details, giving customers clearer visibility into where that data comes from and which tools are protecting a given asset. This is especially helpful in environments with multiple solutions in use, and reduces confusion when missing integrations would otherwise make assets appear unprotected.
The update also surfaces whether an asset still requires a reboot after patching, helping explain why vulnerabilities may persist even when remediation work has already started. Together, these additions make it easier for teams to assess true exposure, validate remediation progress, and identify where follow-up is still needed.
Extending remediation data into operational workflows
Remediation does not stop once a team has identified what to fix or validated that a change has been made. Security leaders also need ways to track progress, measure performance, and share remediation outcomes across the organization. That is where exported remediation data becomes important.
By extending access to remediation data outside the platform, customers can more easily support the workflows across reporting, operations, and leadership teams. This makes it easier to analyze remediation activity over time, align to internal reporting needs, and give stakeholders a clearer view of progress.
For security leaders, that means better visibility into whether remediation efforts are moving the organization in the right direction. For operational teams, it means less manual work to assemble and share updates, and more flexibility in how remediation data is used.
What this looks like in practice
For vulnerability management teams, this means faster handoff. Instead of sharing a remediation recommendation and then answering follow-up questions, they can send a report that already includes the asset context needed to begin planning.
For infrastructure and cloud teams, it becomes easier to focus on the parts of the environment they own. Filters help narrow remediation data to the assets, environments, and ownership groups that matter most, reducing noise and making action more straightforward.
For endpoint and patching teams, greater visibility into coverage source and reboot status helps explain why exposure may still remain, even when remediation work is already underway. That makes validation easier and helps teams troubleshoot more effectively.
For security and IT leaders, scheduled reporting and exported remediation data improve shared visibility. Rather than relying on one-off exports or manual updates, teams can more consistently track prioritized remediation work and measure progress over time.
Better context, faster progress
These enhancements help customers do more than identify top remediation priorities. They help teams act on them, validate them, and track them with more confidence.
By bringing together deeper asset context, clearer patch and endpoint visibility, and more usable remediation reporting, Remediation Hub helps reduce the friction that often slows remediation down. The result is smoother collaboration across functional teams, less manual effort, and quicker progress on the remediation work that matters most.
Read more about how Exposure Command helps teams share remediation context, improve coordination, and move faster on risk reduction.
If product releases had a runway moment, Q1 at Rapid7 would’ve walked out in Cloud Dancer: crisp, confident, and quietly powerful, before breaking into a full gallop in the Year of the Horse. Our first-quarter launches combined velocity with refinement: meaningful enhancements designed to move security teams faster without adding complexity. Let’s cover the key launches, one by one.
Detection and response
MDR for Microsoft
Getting more value from the tools you already have is an objective shared by all of us. For many of you, that translates to achieving greater security operations outcomes and resilience from your Microsoft technology. With MDR for Microsoft, organizations correlate their Microsoft, Rapid7, and third-party telemetry with prioritized risk context so the service can anticipate attacks before they start.
AI-powered triage and investigations – backed by unlimited incident response that ensures threats are fully eradicated – delivers certainty in an uncertain attack environment. Dedicated advisory provides strategic recommendations and program hardening guidance that drives long-term security resilience. Customers ultimately experience security operations excellence and achieve stronger outcomes from their existing Microsoft foundation.
The acquisition of Kenzo Security marks another step forward for the Rapid7 Command Platform and Rapid7’s vision for preemptive, AI-powered security operations. In an environment where most security teams are forced to leave large volumes of alerts uninvestigated, Kenzo’s agentic AI capabilities are expected to help accelerate Rapid7 from AI-assisted workflows toward AI-driven, machine-speed operations. Designed around specialized AI agents that work together across security operations tasks, this technology has the potential to reduce manual strain, broaden investigative coverage, and deliver more consistent, precise outcomes.
On average, Kenzo customers reported a 94% reduction in investigation time, with alert coverage increasing from 12% to 100%. As these capabilities are brought into MDR, Managed Threat Complete, InsightIDR, and Incident Command, customers will benefit from a stronger, more scalable approach to cyber defense.
Incident Command
User to Identity mapping
Connecting user activity to full identity context is critical for faster, more confident investigations. With User to Identity mapping in Incident Command, analysts can seamlessly link SIEM users to their corresponding identity profiles, gaining instant visibility into MFA status, account posture, and group memberships. By unifying detection and exposure data, teams eliminate manual reconciliation and close visibility gaps across the identity attack surface. This enables faster triage, deeper insight into user risk, and a complete, connected view of identity-driven threats.
User to Identity mapping within Incident Command
AI-Powered Log Entry Summary
AI-powered Log Entry Summary brings instant clarity to even the most complex log data. By translating raw log lines into a simple “who, what, when, where, and why” framework, analysts can quickly uncover insights without needing to interpret vendor-specific syntax or business logic. This removes the cognitive burden from investigations and hunts, allowing teams to spot threats faster across all data sources. Teams benefit from accelerated triage, more efficient investigations, and smarter decisions driven by clear, actionable context.
Instant context with AI Log Entry summary
Exposure management
Cloud Runtime Security (application detection and response)
Earlier this year, we made a significant announcement that Rapid7 had partnered with ARMO to add AI-powered cloud application detection and response (CADR) – or cloud runtime security – to our cloud security portfolio. We are thrilled to announce that these capabilities are now integrated with Rapid7 Exposure Command Ultimate. For our customers, this milestone represents our ability to deliver on the promise of a complete cloud-native application protection platform (CNAPP) that helps security teams preemptively identify and proactively thwart attacks. If you’re interested in learning more about this latest innovation to our cloud security portfolio, reach out to one of our account executives.
Runtime security delivering real-time visibility across cloud-native and containerized workloads
Top Remediation Report in Remediation Hub
Understanding which remediations to prioritize is only part of the process; teams also need asset-level detail to act. The Top Remediations Report adds that context in Remediation Hub, with customizable filters, shared visibility across teams, and automated scheduling for recurring delivery to key stakeholders in CSV, HTML, or PDF. The result is faster coordination, clearer ownership, and quicker remediation progress.
Remediation Bulk Export API
We understand that organizations need to customize reporting for various stakeholders and levels across their business to drive effective vulnerability remediation and communicate security posture. One of the ways that organizations address this need is through our powerful cloud-based API, which enables teams to extract and export large amounts of security data into external tools like Tableau or Power BI. Customers can export security data at scale, including assets, vulnerabilities, remediations, and agent-based policy data, resulting in more flexible reporting and querying.
Data Security Posture Management (DSPM)
Understanding which exposures threaten sensitive data is difficult when data security and exposure insights live in separate tools. A partnership between Rapid7 and Symmetry Systems brings those perspectives together on Exposure Command, aligning sensitive data intelligence with real attacker reachability. DSPM capabilities discover sensitive data and map identity access, helping teams prioritize remediation based on breach impact.
Read the blog to learn how aligning data and exposure reduces breach risk.
Automated Sensitive Data Discovery: See how PII, PHI, and financial data are flagged
Attack surface management
Dynamic External Attack Surface Discovery
Your attack surface doesn’t stand still, and point-in-time visibility can leave teams chasing what’s already changed. Dynamic EASM Discovery helps Surface Command automatically identify and track changes across the external attack surface by ingesting domain and IP data from across the environment. The result is more current visibility, fewer blind spots, and stronger confidence that teams are prioritizing and validating the exposures that matter most.
Read the blog to see how Dynamic EASM Discovery helps teams keep pace with a changing attack surface.
The Rapid7 Command Platform displaying your EASM seed data
Platform and Labs
Rapid7 Command Platform
We’re excited to introduce a centralized way to programmatically access data across all managed tenants with new multi-tenant API keys. For organizations managing multiple environments, tenants, or customers, integrating with each one individually has traditionally required significant manual effort: creating, maintaining, and rotating separate API keys for every tenant. This not only slows down development but also increases operational overhead and the risk of inconsistency.
With this new capability, you can build a single integration that seamlessly “loops” through tenants automatically, enabling consistent data access and streamlined workflows at scale. Whether you’re aggregating data for reporting, powering automation, or integrating with third-party tools, multi-tenant API keys simplify the process and reduce complexity, freeing up your teams to focus on higher-value tasks instead of repetitive configuration. Read all about it in our blog.
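In pseudocode terms, the integration reduces to a single loop. The client below is a stand-in with invented method names, not the real Rapid7 API, but it shows the control flow a multi-tenant key enables:

```python
# Hedged sketch of the "loop through tenants" pattern with one multi-tenant
# key. Endpoints, method names, and data are hypothetical; FakeClient stands
# in for real HTTP calls so the control flow is the focus.

class FakeClient:
    """Stand-in for an HTTP client authenticated with one multi-tenant key."""

    def __init__(self, api_key: str):
        self.api_key = api_key  # one credential for every tenant

    def list_tenants(self):
        # In reality this would be an API call; hard-coded for the sketch.
        return [{"id": "t1", "name": "EMEA"}, {"id": "t2", "name": "APAC"}]

    def get_asset_count(self, tenant_id: str) -> int:
        return {"t1": 120, "t2": 45}[tenant_id]

client = FakeClient(api_key="ONE-KEY-FOR-ALL-TENANTS")

# One integration, one key: iterate tenants instead of maintaining and
# rotating a separate credential per environment.
report = {t["name"]: client.get_asset_count(t["id"])
          for t in client.list_tenants()}
print(report)  # aggregated view across all managed tenants
```

The design win is that tenant enumeration and data access share a single credential, so adding a tenant changes the loop's data, not the integration's code or key management.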
Rapid7 Labs
The latest threat research reports from Rapid7 Labs
This quarter Rapid7 Labs continued to deliver critical insights into the evolving threat landscape, uncovering how attackers are adapting their tactics – from stealthy, long-term intrusions to increasingly targeted and data-driven attacks. Our latest research reports highlight the growing complexity of modern threats and the real-world risks facing organizations today. Explore the findings below to better understand what’s changing and what it means for your security strategy.
BPFdoor in Telecom Networks: Sleeper Cells in the Backbone: Rapid7 uncovered a long-running espionage campaign in which a China-nexus threat actor, Red Menshen, embedded stealthy “sleeper cells” inside global telecommunications networks using the BPFdoor backdoor. Operating at the Linux kernel level, this malware enables persistent, hard-to-detect access without typical network signals, allowing attackers to monitor communications, subscriber data, and critical infrastructure over time. The research highlights a shift from opportunistic attacks to deliberate, long-term pre-positioning inside core systems that underpin global connectivity, raising national-level risk.
2026 Global Threat Landscape Report: The latest report from Rapid7 Labs delivers an in-depth analysis of global adversary behavior, drawing on telemetry from Rapid7 MDR investigations, vulnerability intelligence, and frontline incident response. This year’s findings highlight a rapidly evolving threat environment, marked by the collapse of the window between vulnerability disclosure and exploitation, the continued industrialization of ransomware operations, and the acceleration of modern attacks through the use of AI.
Executives’ Digital Footprints Threat Report: Today, 60% of an executive’s digital risk exposure is retrievable through surface web searches, including public records, professional history, and social media activity — all of which can be weaponized for highly targeted attacks. The Executive Digital Footprints Threat Report from Rapid7 Labs details how these executive digital footprints are an often overlooked threat vector that can be exploited, posing risks to the executive, their families, and organizations.
Exposing the Chrysalis Backdoor
Last month, Rapid7 uncovered the Chrysalis backdoor, a sophisticated supply chain attack that leveraged the Notepad++ update mechanism to selectively target organizations with a stealthy, persistent backdoor. This discovery highlights the growing risk of trusted software being weaponized and the real-world impact of advanced, targeted campaigns that can evade traditional defenses, reinforcing the importance of continuous monitoring and validating third-party software behavior in today’s threat landscape. Learn more about the Chrysalis backdoor here, and see more details on its impact and what you can do next here.
Cyber threat activity related to the Iran conflict
Rapid7 is actively monitoring cyber threat activity related to the Iran conflict, providing support for our customers and the cybersecurity community. Review observed activity, official advisories, and recommended defensive actions here.
Announcing Metasploit Pro 5.0.0
We’re excited to announce the launch of Metasploit Pro 5.0.0, a major evolution in red-team and penetration testing. Built to address today’s dynamic threat landscape, this release delivers a significantly improved UI along with usability, validation, and workflow improvements that empower security teams to validate vulnerabilities faster and more effectively. Learn more in our blog post here.
Newly designed interface of Metasploit Pro
We’re just getting started
The innovation doesn’t stop here. We have a strong pipeline of product enhancements and new capabilities rolling out all year long. Be sure to follow our blog and release notes to see how Rapid7 continues to advance our platform and deliver greater value.
For years, cybersecurity professionals have relied on a familiar metric to dictate their day-to-day priorities: the Common Vulnerability Scoring System (CVSS). In today’s hyper-connected, sprawling IT environments, utilizing a static severity score as the ultimate arbiter of risk creates opportunities for threat actors. While defenders chase down theoretical, high-scoring alerts, adversaries are quietly targeting the truly exploitable, business-critical exposures that slip through the cracks.
In a recent report, Gartner® highlighted a projection:
"By 2028, organizations that prioritize exposures using threat intelligence, asset context, exploitability modeling and security control validation will reduce breach likelihood by at least 70% compared to peers relying primarily on CVSS-based vulnerability prioritization." [1]
This affirms what many seasoned practitioners have suspected for years: there’s an abundance of vulnerability findings, but a lack of actionable context.
Static scores. Reactive security.
Most vulnerability management programs evolved during a time when the attack surface was relatively static, adversary tooling was rudimentary, and remediation capacity generally exceeded the volume of new disclosures. Today, enterprises are confronted with vulnerabilities scattered across complex cloud architectures, SaaS applications, and intricate supply chains.
In this modern threat landscape, CVSS alone is insufficient: it measures theoretical severity, but does not factor in whether attackers are actually exploiting the vulnerability in the wild, nor does it consider the business value of the affected assets. According to Gartner®, fewer than 10% of vulnerabilities are exploited, yet most are treated as urgent [1]. This leads to prioritization paralysis, where security teams spend countless hours patching vulnerabilities that pose little material risk to the business. The legacy approach rewards what is auditable rather than what is genuinely impactful.
The path toward smarter prioritization
To break free from endless patching and ineffective risk reduction practices, security professionals are shifting toward a context-driven model. As Gartner notes, strong exposure prioritization requires integrating four critical elements: threat intelligence, asset context, data science, and security control validation. Organizations are approaching these elements in a few practical ways:
Threat intelligence to establish relevance
Instead of just asking how severe a vulnerability is, modern exposure management asks whether an exposure is relevant to a threat actor capable of exploiting it right now. By embedding threat intelligence into each vulnerability finding, teams shift the focus from theoretical risk to active exploitation. It introduces the adversary's perspective by identifying known exploited vulnerabilities, public or private exploit availability, and targeted campaigns. By filtering out exposures with no evidence of attacker interest, organizations can instantly collapse large vulnerability backlogs and focus only on relevant threats.
Asset context and business criticality to define impact
Not all assets are created equal. A critical vulnerability on an isolated, internal test server is vastly different from the same vulnerability on a public-facing cloud workload processing sensitive customer data. Asset context enriches exposure data with crucial business information: what the asset is, its external accessibility, and its relationship to core business functions. Without this context, security teams waste disproportionate effort on low-impact systems, treating every critical alert as an equal emergency.
Exploitability modeling for predicting breach likelihood
Security analysts often struggle to assess exploitability given the overwhelming volume of vulnerabilities. By using predictive models like the Exploit Prediction Scoring System (EPSS), organizations can analyze large datasets of historical exploitation to identify latent risks. Exposure assessment platforms should display this data alongside each exposure finding to make it easier to predict the vulnerabilities most likely to become attacks.
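As a concrete illustration, EPSS scores are published by FIRST.org through a free JSON API. The sketch below pulls and ranks those scores; the endpoint and response shape follow the public EPSS API, while the urgency threshold is an arbitrary example, not a vendor recommendation:

```python
import json
import urllib.request

EPSS_API = "https://api.first.org/data/v1/epss"

def fetch_epss(cve_ids):
    """Query the public FIRST.org EPSS API for a list of CVE IDs."""
    url = f"{EPSS_API}?cve={','.join(cve_ids)}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

def rank_by_epss(payload, threshold=0.1):
    """Sort CVEs by exploitation probability and flag those above a cutoff.

    The EPSS API returns probabilities as strings, e.g. "0.97543".
    """
    scored = [
        (row["cve"], float(row["epss"]), float(row["percentile"]))
        for row in payload.get("data", [])
    ]
    scored.sort(key=lambda r: r[1], reverse=True)  # most likely to be exploited first
    return [
        {"cve": cve, "epss": epss, "percentile": pct, "urgent": epss >= threshold}
        for cve, epss, pct in scored
    ]
```

In practice, an exposure assessment platform would surface this probability alongside each finding rather than as a standalone score, so analysts see likelihood next to severity and asset context.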
Security control validation
An exposure that appears highly exploitable in theory might be neutralized by existing defenses. By integrating security and policy controls, you can evaluate exposures in the context of endpoint protection and identity management. This passive validation confirms whether an attacker can realistically exploit the exposure in your specific environment.
Unified exposure management
Individually, each element highlighted above provides incremental value, but when integrated, they fundamentally transform how prioritization decisions are made. This integrated model ensures that remediation efforts are mobilized only after priorities have been validated in the context of the business and the current threat landscape. It transitions vulnerability management from a purely technical, tool-centric exercise into a strategic, process-driven risk decision.
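To make the integration concrete, here is a minimal, hypothetical sketch of how the four elements might combine into a single prioritization decision. The field names, weights, and thresholds are illustrative assumptions, not any vendor's actual model:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    known_exploited: bool    # threat intelligence (e.g. appears on a KEV list)
    epss: float              # exploitability model: probability of exploitation
    internet_facing: bool    # asset context: external accessibility
    business_critical: bool  # asset context: business criticality
    mitigated: bool          # control validation (e.g. blocked by EDR/WAF)

def prioritize(findings):
    """Filter and rank findings by applying the four elements in sequence."""
    actionable = []
    for f in findings:
        if f.mitigated:
            continue  # control validation: a compensating control neutralizes it
        if not (f.known_exploited or f.epss >= 0.1):
            continue  # threat intel / modeling: no evidence of attacker interest
        # Remaining findings are ranked by likelihood, boosted by asset context.
        score = f.epss + (0.5 if f.known_exploited else 0.0)
        score += 0.2 * f.internet_facing + 0.2 * f.business_critical
        actionable.append((score, f))
    actionable.sort(key=lambda t: t[0], reverse=True)
    return [f for _, f in actionable]
```

The point of the sketch is the shape of the decision, not the numbers: two gates collapse the backlog before any ranking happens, which is why integrated prioritization produces a short, defensible list instead of a sorted copy of the original queue.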
Security leaders must measure success not by the sheer number of vulnerabilities closed, but by the demonstrable reduction of exploitable exposures and the alignment of remediation efforts with actual attacker behavior. Operationalizing these four elements requires a unified platform that eliminates the silos between vulnerability management, cloud security, and threat intelligence. You cannot manually stitch together disconnected spreadsheets and hope to outpace modern adversaries. This is where forward-thinking organizations are leaning on comprehensive, end-to-end solutions like Rapid7 Exposure Command that seamlessly aggregate visibility across on-premises and dynamic cloud environments. With deep, native integration of Rapid7 Cloud Security capabilities, teams can instantly map asset criticality and external accessibility within complex, ephemeral cloud architectures. Furthermore, by infusing world-class threat intelligence and active exploit data directly into exposure findings, Rapid7 enables security teams to cut through the noise, validate security controls, and pinpoint the exact exposures that matter most—all with minimal friction.
[1] Gartner, Prioritize What Attackers Will Exploit: 4 Elements of Strong Exposure Prioritization, Jonathan Nunez, 5 March 2026.
Earlier this year, we made a significant announcement: Rapid7 partnered with ARMO to add AI-powered cloud application detection and response (CADR) – or cloud runtime security – to our cloud security portfolio. At the time, I published a blog highlighting this two-part approach for modern cloud security that combines preemptive exposure management (understanding the threats that could exist) with proactive runtime security (detecting the threats that are happening).
Today, we are thrilled to announce that this vision is fully realized and integrated with Rapid7 Exposure Command. For our customers, this milestone represents our ability to deliver on the promise of a complete Cloud-Native Application Protection Platform (CNAPP) that helps security teams preemptively identify and proactively thwart attacks.
Exploring the possibilities of this unified CNAPP
At Rapid7, we believe that a CNAPP is unified if it operates from a single, objective source of truth. By integrating cloud runtime security directly into Exposure Command, we are seamlessly merging the preemptive (posture, configurations, identities, and vulnerabilities) with the proactive (runtime behavior and active threats). The table below summarizes this enhancement:
| | Today’s Rapid7 Cloud Security solution | What cloud runtime adds |
| --- | --- | --- |
| Primary Focus | Prevention, risk reduction, and preemptive response | Real-time exposure detection and proactive response |
| Core Question | “What is vulnerable and could be attacked?” | “Is an attacker exploiting our environment now?” |
| Lifecycle Stage | Pre-deployment, continuous scanning, or periodic intervals | Continuous monitoring of live (in-production) workloads |
| Threats Detected | | Active exploits, lateral movement, unauthorized process execution, SQL injection |
The true power of this unified architecture is best understood through the lens of a security practitioner’s daily battle against cloud threats. The previous blog post discussed this in theory; let’s use this blog to talk about the reality.
The baseline
Exposure Command continuously scans and assesses your cloud posture to identify whether a container exposure exists in a production cluster. Traditional scanners would stop here, leaving you to prioritize this vulnerability against others. In Exposure Command, this detection is not reduced to a static score; it is part of an attack path. Our preemptive security platform tells you, for instance, whether this specific container has internet access and an over-privileged IAM role, making it highly reachable and exploitable. This means that you are not just looking at a CVE; you are looking at the potential blueprint behind a major breach.
The proactive validation
This is where cloud runtime security turns theory into reality. Instead of treating the vulnerability as just a potential risk, the platform utilizes eBPF sensors to provide continuous, direct kernel-level observability and application L7 visibility. Exposure Command analyzes this sensor data, uses AI to establish baseline workload behavior, and uncovers anomalies in real time. For example, security analysts gain instant visibility when that vulnerable container suddenly spawns a reverse shell and initiates an external connection to a known malicious IP, rather than executing its standard database queries.
The response
When a runtime anomaly is detected on a high-priority asset, the platform instantly aggregates these events into streamlined alerts. It links the initial application-layer exploit to the infrastructure-level change, such as the attacker attempting a container escape using that over-privileged IAM role. More importantly, the platform can trigger an automated response. By automatically terminating the malicious process, pausing the compromised container, or isolating the namespace, Exposure Command effectively stops an attacker's lateral movement in seconds.
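To illustrate what namespace isolation can look like mechanically, the Kubernetes-native way to cut off traffic is a deny-all NetworkPolicy. The sketch below builds such a manifest as plain data; this uses the standard `networking.k8s.io` schema and is not Rapid7's internal implementation. Applying it would require a client such as kubectl:

```python
def deny_all_policy(namespace, name="quarantine-deny-all"):
    """Build a NetworkPolicy manifest that blocks all ingress and egress
    for every pod in the given namespace (standard networking.k8s.io/v1 schema)."""
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {
            "podSelector": {},  # empty selector matches every pod in the namespace
            "policyTypes": ["Ingress", "Egress"],
            # No ingress/egress rules are listed, so no traffic is allowed.
        },
    }
```

An orchestration layer would serialize and apply a manifest like this the moment a high-confidence runtime detection fires, cutting off lateral movement and egress while the compromised workload is investigated.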
The investigation
Stopping the threat, understanding how it happened, and proving you resolved it is what creates a truly resilient security program. Rapid7 Exposure Command does not just block the attack and leave you sifting through raw kernel logs to truly remediate the threat. Instead, it uses AI-generated remediation summaries to translate complex runtime telemetry into a clear, actionable remediation narrative. It explains exactly how the attacker bypassed initial defenses, what lateral movement they attempted, and the precise root-cause misconfigurations that allowed it. This empowers security teams to confidently report to leadership on the active threats they've neutralized, while providing developers with the exact context and code-level recommendations they need to patch the underlying exposure.
Amplifying signal vs. noise
When you combine predictive exposure analytics with deep application-layer and kernel-level visibility, you fundamentally change your operational efficiency. You stop chasing every theoretical risk and start focusing on what matters most. Exposure Command is a unified solution that eliminates the noisy alerts that tend to overwhelm security operations teams. Teams are able to prioritize remediation not just by CVSS score, but by real-time validation of what is actively loaded into memory and what is currently being exploited (i.e., risk and exposure). This means your developers spend less time patching vulnerabilities that fail to pose an immediate risk, and SecOps spends less time investigating benign container behavior.
With the general availability of cloud runtime security as part of Exposure Command, Rapid7 delivers a strategic, engineering-driven platform that achieves the mission of true CNAPP. We provide the precise answer to, "Could I be compromised?" through preemptive exposure management, and the definitive answer to, "Am I currently compromised?" through proactive runtime security. By closing the loop between these two questions, we allow enterprises to secure their cloud environments with accuracy, speed, and confidence. This is a great example of the wider approach to preemptive security that Rapid7 is delivering across different use cases through the Command Platform’s comprehensive exposure management and threat detection & response capabilities.
Visit Rapid7's CNAPP hub page to learn more about how the fully integrated Rapid7 Exposure Command with cloud runtime security can transform your cloud defense.
This blog was written in collaboration with Symmetry Systems' Claude Mandy.
Rapid7 and Symmetry Systems are partnering to help organizations reduce breach impact by aligning sensitive data intelligence with real-world exposure paths across both human and machine identities.
Breaches are measured in data, not vulnerabilities
Vulnerabilities are one thing, but the breaches that follow are rarely just technical incidents. More often, they become business events with far-reaching consequences, driven by something far simpler than a sophisticated exploit.
According to the 2025 Verizon Data Breach Investigations Report, 98% of system intrusion breaches involved the use of stolen credentials or brute force attacks against easily guessable passwords. Attackers are not just exploiting vulnerabilities; they are leveraging identity access to move through environments and reach sensitive data.
The financial impact of these breaches is staggering. IBM’s 2025 Cost of a Data Breach Report found the global average cost of a data breach is $4.44 million. In highly regulated regions and industries, that cost climbs significantly higher. Those figures reflect detection and response costs, regulatory fines, lost business, and operational disruption. Those figures also rarely capture the longer-term impact on brand trust and customer confidence.
Ultimately, sensitive data defines breach impact. Yet most organizations still evaluate exposure and data risk in isolation. Security teams understand where vulnerabilities exist. Data teams understand where sensitive data lives. But leadership often lacks a unified answer to the most important question: If an attacker compromises an identity or gains a foothold in our environment, what sensitive data could they realistically reach?
That gap is exactly what Rapid7 and Symmetry Systems are addressing through this new partnership.
Knowing where your data lives is only the beginning
“DSPM is an all-seeing, all-feeling nervous system for data security. It creates awareness of data vulnerabilities and enables mitigation before those are exploited.”
That awareness is foundational. Organizations need continuous visibility into where sensitive data lives, how it is classified, and who can access it. Without that foundation, security and risk decisions are based on assumptions rather than evidence. Awareness alone does not account for how attackers move through an environment.
Exposure management shows how adversaries move across cloud, SaaS, and on-prem environments, while DSPM shows what data is at stake and the potential impact for a compromised identity. Connecting the two is what turns visibility into impact-driven prioritization.
AI agents, copilots, and the new exposure multiplier
As organizations deploy AI agents and copilots across collaboration platforms and cloud systems, identity-driven exposure expands even further. These systems operate with delegated permissions, often aggregating and surfacing data across repositories. If misconfigured or compromised, they can amplify blast radius by inheriting privileged access to sensitive data. AI dramatically increases the scale and speed at which identity-based access can affect data exposure.
This makes the alignment between sensitive data context and attacker reachability even more critical, and that alignment is exactly what this partnership is designed to deliver.
Where sensitive data meets attacker reality
Rapid7 Exposure Command brings attacker context into focus by correlating signals across the attack surface, including:
Internet-facing exposure
Identity-driven access paths
Vulnerabilities and exploitability signals
Reachability across cloud and on-prem environments
Symmetry DataGuard delivers sensitive data and identity context. It provides:
Continuous sensitive data discovery and classification across cloud and SaaS environments
Identity and permission mapping to understand who can access sensitive data
Over-privileged, dormant, and risky access detection to reduce blast radius
Anomalous activity monitoring to surface data misuse and policy violations
Actionable data vulnerability insights to drive targeted remediation
Sensitive data insights from Symmetry are surfaced directly within Rapid7 workflows, showing whether high-value data is actually reachable through real-world attack paths.
Instead of asking “What is vulnerable?”, organizations can confidently answer “What sensitive data could actually get breached?”
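One way to reason about that question is as graph reachability: starting from a compromised identity, walk the access and role-assumption edges and collect every sensitive data store that can be reached. The sketch below is a toy model; the graph shape and node labels are invented for illustration and do not reflect either product's internal data model:

```python
from collections import deque

def blast_radius(graph, sensitive, start):
    """Breadth-first search from a compromised identity over access edges.

    graph: dict mapping each node to the nodes it can access or assume.
    sensitive: set of nodes that hold sensitive data.
    Returns the set of sensitive stores reachable from `start`.
    """
    seen, queue, reached = {start}, deque([start]), set()
    while queue:
        node = queue.popleft()
        if node in sensitive:
            reached.add(node)
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return reached
```

For example, if a low-privilege identity can assume a CI role that reads a customer-data bucket, the bucket lands in that identity's blast radius even though no direct permission exists; that transitive reach is precisely what correlating exposure paths with data context surfaces.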
Reduce breach impact before it disrupts the business
Every organization faces exposure, and AI only increases the scale and speed at which data can be accessed. This partnership brings together two focused capabilities through a strategic reseller agreement and an integrated experience between Rapid7 and Symmetry Systems.
Customers can access full DSPM capability through Rapid7, with sensitive data insights surfaced directly within Exposure Command. From there, teams can seamlessly pivot into Symmetry DataGuard for deeper investigation, governance, and remediation workflows.
Rapid7 provides attacker-aware exposure modeling across hybrid environments. Symmetry delivers deep data security posture management, including sensitive data discovery, identity-to-data mapping, and visibility into AI and machine identities. Together, they create a unified view of exposure and data risk while preserving the depth and specialization of each platform. By connecting sensitive data intelligence with exposure reachability, organizations gain clarity into what is truly at risk and which actions will have the greatest impact.
The result is measurable: reduced blast radius, a stronger regulatory posture, and remediation aligned to business consequences. If you are ready to bring sensitive data and identity-driven access (human and machine) into your exposure strategy, Rapid7 and Symmetry are working together to help you prioritize with clarity and confidence.
Cisco’s announcement that it will sunset Cisco Vulnerability Management (Kenna) marks a clear inflection point for many security teams. With end-of-sale and end-of-life timelines now defined, and no replacement offering on the roadmap, Kenna customers face an unavoidable decision window.
Beyond the practical need to replace a tool, Kenna’s exit raises a bigger question for security leaders: what should vulnerability management look like moving forward?
Not just a tool change
For many organizations, Kenna wasn't "just another scanner." Before its acquisition by Cisco in 2021, Kenna Security helped pioneer a shift away from chasing raw CVSS scores and toward prioritization based on real-world risk, influencing how many teams approach risk-based vulnerability management. Security teams invested years building workflows, reporting, and executive trust around that model.
That’s why this moment feels different. Replacing Kenna isn’t about checking a feature box; it’s about protecting the integrity of the progress teams have already made while using this moment to elevate programs past traditional vulnerability management.
Security leaders are rightly cautious. No one wants to:
Rush into a short-term replacement rather than a platform that suits current and future needs
Trade proven prioritization for untested promises
Disrupt remediation workflows that engineering teams finally trust
At the same time, few teams believe traditional vulnerability management – isolated scanners, static scoring, endless ticket queues – is sufficient on its own anymore.
So where does that leave you?
“Risk-based vulnerability management is dead” doesn’t tell the full story
In response to Kenna’s end-of-life, much of the market has rushed to frame this as the end of risk-based vulnerability management (RBVM) altogether. The message is often loud and binary: RBVM is outdated, jump straight to exposure management.
In practice, that framing doesn’t match how security programs actually evolve.
Most organizations are not abandoning vulnerability management. They are expanding it:
From on-prem to hybrid and cloud
From isolated findings to broader attack surface context
From vulnerability lists to exposure-driven decisions
From static to continuous
The mistake is assuming this evolution requires a hard reset, or that exposure management is completely separate and not part of that evolution.
For CISOs and hands-on leaders alike, the smarter question is: how do we preserve what works today, while building toward what we know we’ll need tomorrow?
What Kenna customers should prioritize next
As you evaluate what comes after Kenna, the right decision comes down to which platform can consistently deliver security outcomes and measurable risk reduction:
Continuity without disruption
Your team already understands risk-based prioritization. The next platform should strengthen that muscle, not force you back to severity-only thinking or one-dimensional scoring models that ignore business context and threat intelligence.
See risk clearly across on-prem, cloud, and external environments
Risk doesn’t live exclusively on-prem or in the cloud. Vulnerability data needs to reflect the reality of modern environments – endpoints, cloud workloads, external-facing assets – without fragmenting visibility. It needs to build on what teams already have by supporting findings from a broad range of existing tools and services, so risk can be understood in one place instead of scattered across platforms.
Customizable remediation workflows
Prioritization only matters if it leads to action. Look for platforms that help security and IT teams collaborate, track ownership, and measure progress without creating more friction.
A credible path forward
Exposure management is valuable only when it’s grounded in accurate data, operational context, and day-to-day usability. Security teams are already drowning in findings across tools, and without context that explains what matters and why, exposure management adds more noise instead of helping teams make decisions and reduce risk. That noise shows up in familiar ways: duplicate findings that are never reconciled, conflicting risk scores between tools, unclear ownership for remediation, and long lists of issues with no clear path to action.
Why this moment favors steady platforms, not big bets
Kenna’s exit creates pressure, but pressure shouldn’t drive risky or forced decisions. Security leaders are accountable not just for vision, but for outcomes, such as:
Are we reducing real risk this quarter?
Can we explain prioritization decisions to the board?
Will this platform still support us two or three years from now?
This is where vendor stability, roadmap clarity, and operational proof start to matter more than bold claims.
The strongest next steps are coming from platforms that already deliver visibility across hybrid environments; mature, threat-informed vulnerability prioritization; and integrated remediation workflows that teams actually use. From there, exposure management becomes an evolution, not a leap of faith.
A measured path forward
Kenna’s EOL doesn’t signal the end of risk-based vulnerability management. It signals that security programs are ready to expect more from it. For security leaders this is an opportunity to reaffirm what has worked in your program, close real visibility and workflow gaps, and choose a platform that supports both near-term continuity and long-term growth.
The goal isn’t to chase the next trend. It’s to make a confident, practical decision – one that protects today’s outcomes while positioning your team for what’s next.
Looking ahead
If you’re navigating what comes after Cisco Kenna, the most important step is understanding your options early, before timelines force rushed decisions. Explore what a confident transition can look like and how teams are approaching continuity today while preparing for exposure management tomorrow.
Rapid7 has partnered with ARMO, a leader in cloud infrastructure and application security based on runtime data, to offer Cloud Runtime Security. The new offering, currently in beta, extends our vulnerability and exposure management solution, Exposure Command, into the moment where cloud risk becomes real: while applications and workloads are running. The solution does this with several differentiators that map directly to what security leaders need most: signal accuracy and response speed.
Introducing Rapid7 Cloud Runtime Security
Rapid7 Cloud Runtime Security combines kernel-level observability with AI-powered behavioral analysis to create a continuous, threat-aware defense layer within all cloud environments.
The solution provides:
AI-driven behavioral baselines for container activity. Because services, teams, and software releases create constant change, static policies can quickly become irrelevant and overly noisy. Cloud runtime security augmented by AI helps establish a behavioral baseline of what “normal” looks like for workload activity. This baseline becomes the standard for identifying deviations that indicate active exploits. This becomes even more critical for AI workloads in which runtime is the only place to understand behavior.
Root-cause in every risk finding. When a threat is detected, the platform does not just create noise by firing an alert. Instead, it reconstructs the entire event with root-cause insights by linking application-layer activity (like a SQL injection) to infrastructure-level changes (like a container escape). It also provides a natural-language narrative of the attack, showing exactly what happened, which credentials were used, and which resources were accessed.
Connected dots across the entire cloud ecosystem. Rapid7 Cloud Runtime displays the entire attack story, from cloud and Kubernetes events and cluster APIs, to container and workload processes and individual lines of code. Instead of sifting through siloed, disparate security tools that each present different alerts, teams gain a single source of objective truth for faster forensic analysis.
Deep application-layer visibility. Instantly detect and respond to common attacks, including SQL injections, command injections, local file inclusion (LFIs), and server-side request forgery (SSRF) that regular endpoint detection and response (EDR) tools overlook because their visibility is limited to the host and process level.
Orchestrated automated response to detected anomalies. Detection is only part of the battle. Speed is the difference between a contained event and a disruptive, expensive data breach. The solution automatically terminates malicious processes, pauses compromised containers, isolates namespaces, or blocks egress to prevent an attacker’s lateral movement.
Rapid7 Cloud Runtime Security orchestrates automated response when anomalies are detected, so teams can quickly mobilize and contain threats.
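As a simplified illustration of the baselining idea behind the first capability above (not ARMO's or Rapid7's actual model), one can record the set of (process, destination) pairs a workload exhibits during a learning window and flag anything outside that set afterward:

```python
class WorkloadBaseline:
    """Learn 'normal' (process, destination) activity for one workload,
    then flag any deviation after the learning window closes."""

    def __init__(self):
        self.normal = set()
        self.learning = True

    def observe(self, process, destination):
        """Record events while learning; afterwards, return an anomaly dict
        for unseen behavior, or None for expected behavior."""
        event = (process, destination)
        if self.learning:
            self.normal.add(event)
            return None
        if event not in self.normal:
            return {"anomaly": True, "process": process, "destination": destination}
        return None

    def freeze(self):
        """Close the learning window; subsequent deviations become anomalies."""
        self.learning = False
```

In this toy model, a container that has only ever run database queries and then suddenly spawns a shell connecting to an external address would immediately fall outside its baseline; a production system would of course use far richer signals (syscalls, L7 requests, file access) and a statistical rather than exact-match notion of "normal."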
Security amidst the chaos
Chaos is the natural state of cloud environments; it isn't a deficiency, but an inherent characteristic of distributed systems. Containers spin up and down constantly, deployments change multiple times per day, images get rebuilt and redeployed, identities and permissions drift, and workloads inherit misconfigurations at scale.
Traditional vulnerability management (VM) was designed to protect static, on-prem technology architectures. Periodic scans, CVSS scores, and reactive patching have been effective here, but point-in-time snapshots and reactive remediation strategies collapse in dynamic, highly-distributed cloud environments for the following reasons:
Blind spots. Ephemeral cloud resources can spin up, perform a task, and disappear in minutes. If a vulnerable container exists for only 10 minutes between scheduled scans, traditional VM tools will miss it, while an automated attacker script can find and exploit it in seconds.
Missing context. Network scanners find CVEs, but they often lack contextual awareness. For instance, a ‘critical’ vulnerability may represent a low risk in a library that exists on an isolated container with no internet access. Conversely, a ‘medium’ vulnerability on a public-facing server with an over-privileged IAM role can be a catastrophic exploit.
Misconfigurations. In the cloud, vulnerabilities can live on unpatched software, but also arise from misconfigured systems. Consider a fully patched server that is compromised because of an open S3 bucket or a broad IAM policy. According to Gartner, “through 2026, nonpatchable attack surfaces will grow from less than 10% to more than half of the enterprise’s total exposure, reducing the impact of automated remediation practices.” [1]
AI-driven complexity. AI is accelerating innovation cycles, and as organizations push out more code, AI has introduced several new dimensions to the attack surface. These can include vulnerabilities that trick LLM models into revealing sensitive data or bypassing security controls.
The new baseline for modern cloud security
As modern cloud environments are constantly changing, security teams need to know in real time when exposures become active threats. Rather than toiling over a ‘high’ or ‘critical’ vulnerability, they prioritize remediation actions based on the paths that lead to compromise. This is because a vulnerability can become a critical exposure when the conditions around it make it reachable, exploitable, and high impact. Savvy security teams use exposure management solutions to assess whether they are likely to get compromised, then lean on cloud runtime platforms to identify, in real time, whether they are actively compromised. As a result, the best security programs now run on a “two-engine” model:
Predictive and preemptive with exposure management. This risk-forecasting layer discovers, prioritizes, and guides action on the exposures most likely to lead to material impact. Organizations utilize exposure management solutions to identify which exposures should be addressed first, the shortest paths to breach, and the remediation activities that most reduce risk.
Real-time and proactive with runtime security. This threat-reality layer detects anomalous behavior as it happens and supports immediate containment actions. Organizations use runtime security solutions to assess whether an exposure is actively being exploited, the configuration changes that may have led to the exposure, and the actions that need to be taken to contain the threat.
On its own, each engine is valuable, but exposure management without runtime can cause teams to overlook active threats; runtime without exposure context can drown teams in noisy alerts. Together, these solutions enable teams to prioritize what matters most and respond instantly when it becomes active.
Visit our cloud security pages to learn more about how Rapid7 empowers teams to proactively manage risk, accelerate DevSecOps, and enforce compliance across multi-cloud environments.