
From Bulk Export to AI-ready Security Workflows: Introducing Rapid7’s Open-Source MCP Server and Agent Skill

Security teams want more from their data than APIs and one-off reports.

They want to ask better questions, move faster, and bring security context into the workflows they are already building. That’s especially true as more organizations experiment with private AI assistants, internal copilots, and LLM-powered automation. Part of that experimentation, of course, is aimed at easing the pressure on teams that must prioritize the sheer number of actionable vulnerabilities that efforts like Project Glasswing are quickly becoming hyper-skilled at spotting.

That’s why Rapid7 is introducing a free, open-source MCP Server and Agent Skill for Bulk Export. Bulk Export is a highly efficient way to access all your Rapid7 data: no more paging APIs, no more verbose output. It creates a local, offline replica of your data that the LLM can interrogate quickly and efficiently, reducing token cost and the time it takes to answer questions.

This new MCP and Agent Skill gives customers a standardized way to connect Rapid7 vulnerability and exposure data to AI assistants and custom AI workflows. Built as an open-source bridge, it helps customers bring their Rapid7 data into the tools and experiences that work best for their teams.


Why this matters now

Security teams are no longer just buying tools. They’re connecting systems, shaping workflows, and testing how AI can help analysts, IT teams, and leaders get to answers faster. For many teams, the path from raw security data to usable AI context is still manual. It often means exporting data, building wrappers, shaping queries, and managing custom integrations.

Rather than leave every team to solve that challenge from scratch, we wanted to provide a stronger foundation that is flexible, practical, and easy to extend over time. With projects like Metasploit and Velociraptor, Rapid7 is committed to open source, and by sharing with the broader community we hope to accelerate development and incorporate more use cases and fixes. Open source also gives customers full visibility into the code and tools they run, preserving data privacy and leaving them free to use their data however they choose.

What MCP does

Model Context Protocol, or MCP, is an emerging standard for helping AI systems interact with external data and tools in a structured way.

In practical terms, it gives AI assistants a cleaner way to ask questions, retrieve data, and work with systems beyond the model itself. For customers, that means less custom glue code and a more consistent way to use security telemetry in AI-driven workflows.
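
To make that concrete, here is a minimal sketch of what an MCP tool looks like from the model's side, written with the official Python MCP SDK (the "mcp" package). The server name, tool, and data below are illustrative assumptions, not Rapid7's actual implementation:

# Minimal MCP server sketch (illustrative only -- not Rapid7's implementation).
# Assumes the official Python MCP SDK is installed: pip install mcp
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("vuln-data")  # server name the AI assistant sees

@mcp.tool()
def count_critical_vulns(asset_name: str) -> int:
    """Return the number of critical vulnerabilities recorded for an asset."""
    # A real server would query a local data store here; this stub only
    # illustrates the structured tool contract exposed to the model.
    sample = {"web-01": 3, "db-02": 7}
    return sample.get(asset_name, 0)

if __name__ == "__main__":
    mcp.run()  # serve the tool over stdio to a locally running assistant

Once a tool like this is registered, an assistant can call it directly instead of being handed raw exports, which is what "less custom glue code" looks like in practice.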

That matters because many security reporting and analysis workflows still assume a high technical bar. Answering a simple question can require custom queries, SQL knowledge, or dashboard work. But the people who need those answers aren’t always security specialists. They may be IT partners, compliance stakeholders, or executives who want clarity but might not need to understand the underlying query logic.

The MCP server helps lower that barrier: Instead of starting with raw exports and working backward, teams can start with the question they need answered.

The bigger picture: MCP and CTEM

This approach also aligns with the broader shift toward continuous threat exposure management, or CTEM. 

CTEM is about helping teams move beyond point-in-time findings toward a more continuous, contextual understanding of risk. That requires security data that can be accessed, connected, and used across the workflows teams rely on. 

Bulk Export helps make that possible by giving customers more flexibility in how they use Rapid7 data. The open-source MCP server makes it easier to bring that data into AI-assisted and custom workflows.


That can support more continuous exposure management workflows by making it easier for teams to triage vulnerability and exposure data. For example, an analyst facing a large queue of new vulnerabilities could use LLM assistance to quickly narrow in on the findings most likely to need attention first. Instead of manually working through exports and queries, they could ask natural-language questions to surface the exposures tied to critical assets, unresolved remediation work, or other signals available in the data.

From data portability to AI-ready interoperability

Bulk Export was already an important step toward giving customers more control over their data. It made it easier to extract and use security telemetry in external tools and analytics environments.

The open-source MCP server builds on that foundation: Instead of using exported data only for dashboards or custom reporting, customers can now use that same data in AI-native experiences. That includes internal assistants, private copilots, workflow automation, and natural-language exploration of vulnerability and exposure data. This makes existing security data easier to use in the environments customers are already investing in.

How it works

At a high level, the architecture is straightforward. Using the Agent Skill, your LLM runs the MCP server locally and automatically prepares the environment by performing the bulk export and loading the data into a local file store. The Agent Skill provides the schemas and knowledge, while the MCP server provides the tools to access the data. The LLM then answers questions by querying, summarizing, and synthesizing data locally, an extremely fast and simple process for the model.
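
As a rough illustration of the local-replica idea, the sketch below loads an exported file into an in-memory SQLite table that a query tool could then expose to the model. The file layout and field names are assumptions for illustration; the real schemas ship with the Agent Skill:

# Hypothetical sketch of the local replica (field names are assumptions;
# the Agent Skill provides the real schemas).
import json
import sqlite3

def load_replica(export_path: str) -> sqlite3.Connection:
    """Load a bulk-export JSON Lines file into an in-memory SQLite table."""
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE vulns (asset TEXT, cve TEXT, severity TEXT, status TEXT)"
    )
    with open(export_path) as f:
        for line in f:
            rec = json.loads(line)
            conn.execute(
                "INSERT INTO vulns VALUES (?, ?, ?, ?)",
                (rec["asset"], rec["cve"], rec["severity"], rec["status"]),
            )
    conn.commit()
    return conn

# A query tool can then run whatever SQL the LLM composes and return a few
# compact rows instead of paging a remote API, e.g.:
# conn = load_replica("bulk_export.jsonl")
# conn.execute("SELECT asset, COUNT(*) FROM vulns WHERE severity = 'Critical' "
#              "GROUP BY asset ORDER BY 2 DESC LIMIT 10").fetchall()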

Depending on the data a customer exports, answers can include vulnerability records, asset data, remediated vulnerabilities, and policy-related results.

The point here isn’t just that a model can access the data; it’s that an open-source layer lets customers inspect, adapt, and extend that access over time, giving teams control over how the connection works in their own environment.

What customers can do with it

This opens the door to practical use cases, including:

  • Using LLM assistance to triage vulnerability data faster 

  • Asking natural-language questions to spot exposure and remediation trends

  • Investigating which assets are tied to the most urgent vulnerabilities

  • Understanding what changed over time without manual analysis

  • Exploring policy failures without building manual queries

  • Feeding Rapid7 telemetry into private AI assistants and internal workflows

  • Making reporting more accessible for non-technical stakeholders


For teams already trying to operationalize AI, this creates a lower-friction path. Instead of building every integration from the ground up, they can start with a reusable bridge and focus on the workflows they want to enable.

A better path from data to action

Security data only creates value when teams can use it. For many organizations, turning raw telemetry into timely answers is still harder than it should be. Analysts need speed. Leaders need clarity. Builders need flexibility. And more customers want security data that works inside the tools and workflows they already rely on.

The open-source MCP server for Bulk Export is designed to help make that possible.

Bulk Export helps customers take control of their data. This is the next step: helping them put that data to work in AI-ready security workflows.

Ready to explore it for yourself? Visit the Rapid7 Bulk Export MCP Server project on GitHub to learn more and get started.

A Clearer Path from Prioritized Exposures to Remediation Progress

Security leaders know that reducing risk is not just about finding the right exposures, but helping the organization act on them before known issues turn into real incidents. 

That is often where remediation gets harder. Security teams may know which actions matter most, but progress can slow when infrastructure, cloud, endpoint, and IT teams do not have the context needed to execute. Teams need clear asset detail to scope the work, trusted status signals to validate remediation, and usable reporting to track progress and stay aligned.

This is exactly the challenge Exposure Command is built to help solve. Exposure Command helps customers understand and prioritize the exposures that matter most, while Remediation Hub (a prioritized remediation view within Exposure Command) helps teams turn that prioritization into action. With new enhancements to Remediation Hub, customers can now do that with more context and confidence, along with better visibility into progress over time through exportable reports. 

Why remediation work slows down

Prioritization is an important step, but remediation rarely happens in one place or with one team. Security, infrastructure, cloud, endpoint, and IT operations all need enough context to understand what is being asked of them.

When that context is hard to access, progress slows. Security teams may know what should be fixed, but asset owners still need the information required to assess impact, plan the work, and take action. Teams also need to understand whether assets are actually protected, whether patching has fully taken effect, and how remediation progress should be tracked over time. Without that clarity, remediation becomes harder to coordinate and harder to validate.

Making remediation more actionable

The Top Remediations Report helps close that gap by adding a comprehensive asset-level breakdown for each remediation. In addition to summary remediation information, customers can see source-specific metadata such as operating system, IP address, cloud provider, tags, endpoint protection, and patch management.

The report can also serve as a high-level summary of remediation priorities; many security teams use it to define remediation goals and share clear, actionable guidance with teams that may not work directly in security tools.

That gives teams a clearer view of the work behind each remediation and makes it easier to move from prioritization to execution.

Customers can also tailor reports to match the way they work, with customizable filters for specific environments, tags, or ownership groups. Reports can be exported in CSV, HTML, and PDF formats, shared with the teams responsible for action, and automatically generated and emailed on a schedule.
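
Exported reports also lend themselves to light post-processing. As a sketch (the column names below are assumptions for illustration; match them to the headers in your actual CSV export), a team could slice the report down to just the remediations they own:

# Sketch: filter a Top Remediations CSV export to one ownership group.
# Column names ("Owner", "Remediation", "Asset Count") are assumptions.
import csv

def rows_for_owner(csv_path: str, owner: str) -> list[dict]:
    """Return report rows whose ownership tag matches the given team."""
    with open(csv_path, newline="") as f:
        return [row for row in csv.DictReader(f) if row.get("Owner") == owner]

# Example: hand the cloud team only the remediations they own.
# for row in rows_for_owner("top_remediations.csv", "cloud-infra"):
#     print(row["Remediation"], row["Asset Count"])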

[Image: Top Remediations Report in Exposure Command]

Building clearer visibility into patching and endpoint coverage

Action is only part of the equation, since teams also need clear, trustworthy context around asset posture.

Remediation Hub now shows the source of patch management and endpoint protection coverage directly in remediation details, giving customers clearer visibility into where that data comes from and which tools are protecting a given asset. This is especially helpful in environments with multiple solutions in use, and reduces confusion when missing integrations would otherwise make assets appear unprotected.

The update also surfaces whether an asset still requires a reboot after patching, helping explain why vulnerabilities may persist even when remediation work has already started. Together, these additions make it easier for teams to assess true exposure, validate remediation progress, and identify where follow-up is still needed.

[Image: Package update progress in Remediation Hub]

Extending remediation data into operational workflows

Remediation does not stop once a team has identified what to fix or validated that a change has been made. Security leaders also need ways to track progress, measure performance, and share remediation outcomes across the organization. That is where exported remediation data becomes important.

By extending access to remediation data outside the platform, customers can more easily support the workflows across reporting, operations, and leadership teams. This makes it easier to analyze remediation activity over time, align to internal reporting needs, and give stakeholders a clearer view of progress.

For security leaders, that means better visibility into whether remediation efforts are moving the organization in the right direction. For operational teams, it means less manual work to assemble and share updates, and more flexibility in how remediation data is used.

What this looks like in practice

For vulnerability management teams, this means faster handoff. Instead of sharing a remediation recommendation and then answering follow-up questions, they can send a report that already includes the asset context needed to begin planning.

For infrastructure and cloud teams, it becomes easier to focus on the parts of the environment they own. Filters help narrow remediation data to the assets, environments, and ownership groups that matter most, reducing noise and making action more straightforward.

For endpoint and patching teams, greater visibility into coverage source and reboot status helps explain why exposure may still remain, even when remediation work is already underway. That makes validation easier and helps teams troubleshoot more effectively.

For security and IT leaders, scheduled reporting and exported remediation data improve shared visibility. Rather than relying on one-off exports or manual updates, teams can more consistently track prioritized remediation work and measure progress over time.

Better context, faster progress

These enhancements help customers do more than identify top remediation priorities. They help teams act on them, validate them, and track them with more confidence.

By bringing together deeper asset context, clearer patch and endpoint visibility, and more usable remediation reporting, Remediation Hub helps reduce the friction that often slows remediation down. The result is smoother collaboration across functional teams, less manual effort, and quicker progress on the remediation work that matters most.

Read more about how Exposure Command helps teams share remediation context, improve coordination, and move faster on risk reduction.

Protect What Matters Most: Aligning Sensitive Data with Exposure Risk

This blog was written in collaboration with Symmetry Systems' Claude Mandy.

Rapid7 and Symmetry Systems are partnering to help organizations reduce breach impact by aligning sensitive data intelligence with real-world exposure paths across both human and machine identities.

Breaches are measured in data, not vulnerabilities

Vulnerabilities are one thing, but the breaches that follow are rarely just technical incidents. More often, they become business events with far-reaching consequences, driven by something far simpler than a sophisticated exploit.

According to the 2025 Verizon Data Breach Investigations Report, 98% of system intrusion breaches involved the use of stolen credentials or brute force attacks against easily guessable passwords. Attackers are not just exploiting vulnerabilities; they are leveraging identity access to move through environments and reach sensitive data. 

The financial impact of these breaches is staggering. IBM’s 2025 Cost of a Data Breach Report found the global average cost of a data breach is $4.44 million. In highly regulated regions and industries, that cost climbs significantly higher. Those figures reflect detection and response costs, regulatory fines, lost business, and operational disruption. Those figures also rarely capture the longer-term impact on brand trust and customer confidence.

Ultimately, sensitive data defines breach impact. Yet most organizations still evaluate exposure and data risk in isolation. Security teams understand where vulnerabilities exist. Data teams understand where sensitive data lives. But leadership often lacks a unified answer to the most important question: If an attacker compromises an identity or gains a foothold in our environment, what sensitive data could they realistically reach?

That gap is exactly what Rapid7 and Symmetry Systems are addressing through this new partnership. 

Knowing where your data lives is only the beginning

The Gartner® Market Guide for Data Security Posture Management (DSPM) describes DSPM in clear terms:

“DSPM is an all-seeing, all-feeling nervous system for data security. It creates awareness of data vulnerabilities and enables mitigation before those are exploited.” 

That awareness is foundational. Organizations need continuous visibility into where sensitive data lives, how it is classified, and who can access it. Without that foundation, security and risk decisions are based on assumptions rather than evidence. Awareness alone does not account for how attackers move through an environment.

Exposure management shows how adversaries move across cloud, SaaS, and on-prem environments, while DSPM shows what data is at stake and the potential impact for a compromised identity. Connecting the two is what turns visibility into impact-driven prioritization. 

AI agents, copilots, and the new exposure multiplier

As organizations deploy AI agents and copilots across collaboration platforms and cloud systems, identity-driven exposure expands even further. These systems operate with delegated permissions, often aggregating and surfacing data across repositories. If misconfigured or compromised, they can amplify blast radius by inheriting privileged access to sensitive data. AI dramatically increases the scale and speed at which identity-based access can affect data exposure.

This makes the alignment between sensitive data context and attacker reachability even more critical, and that alignment is exactly what this partnership is designed to deliver.

Where sensitive data meets attacker reality

Rapid7 Exposure Command brings attacker context into focus by correlating signals across the attack surface, including: 

  • Internet-facing exposure

  • Identity-driven access paths

  • Vulnerabilities and exploitability signals 

  • Reachability across cloud and on-prem environments

Symmetry DataGuard delivers sensitive data and identity context. It provides: 

  • Continuous sensitive data discovery and classification across cloud and SaaS environments

  • Identity and permission mapping to understand who can access sensitive data

  • Over-privileged, dormant, and risky access detection to reduce blast radius

  • Anomalous activity monitoring to surface data misuse and policy violations

  • Actionable data vulnerability insights to drive targeted remediation

Sensitive data insights from Symmetry are surfaced directly within Rapid7 workflows, showing whether high-value data is actually reachable through real-world attack paths.

Instead of asking “What is vulnerable?”, organizations can confidently answer “What sensitive data could actually get breached?”

Reduce breach impact before it disrupts the business

Every organization faces exposure, and AI only increases the scale and speed at which data can be accessed. This partnership brings together two focused capabilities through a strategic reseller and integrated experience between Rapid7 and Symmetry Systems.

Customers can access full DSPM capability through Rapid7, with sensitive data insights surfaced directly within Exposure Command. From there, teams can seamlessly pivot into Symmetry DataGuard for deeper investigation, governance, and remediation workflows.

Rapid7 provides attacker-aware exposure modeling across hybrid environments. Symmetry delivers deep data security posture management, including sensitive data discovery, identity-to-data mapping, and visibility into AI and machine identities. Together, they create a unified view of exposure and data risk while preserving the depth and specialization of each platform. By connecting sensitive data intelligence with exposure reachability, organizations gain clarity into what is truly at risk and which actions will have the greatest impact.

The result is measurable: reduced blast radius, a stronger regulatory posture, and remediation aligned to business consequences. If you are ready to bring sensitive data and identity-driven access (human and machine) into your exposure strategy, Rapid7 and Symmetry are working together to help you prioritize with clarity and confidence.

The End of the Road for Cisco Kenna: Take a Measured Path into Exposure Management

Cisco’s announcement that it will sunset Cisco Vulnerability Management (Kenna) marks a clear inflection point for many security teams. With end-of-sale and end-of-life timelines now defined, and no replacement offering on the roadmap, Kenna customers face an unavoidable decision window. 

Beyond the practical need to replace a tool, Kenna’s exit raises a bigger question for security leaders: what should vulnerability management look like moving forward? 

Not just a tool change

For many organizations, Kenna wasn’t “just another scanner.” Before its acquisition by Cisco in 2021, Kenna Security helped pioneer a shift away from chasing raw CVSS scores and toward prioritization based on real-world risk, influencing how many teams approach risk-based vulnerability management. Security teams invested years building workflows, reporting, and executive trust around that model.

That’s why this moment feels different. Replacing Kenna isn’t about checking a feature box; it’s about protecting the integrity of the progress teams have already made while using the transition to elevate programs beyond traditional vulnerability management.

Security leaders are rightly cautious. No one wants to: 

  • Rush into a short-term replacement rather than a platform that suits current and future needs

  • Trade proven prioritization for untested promises 

  • Disrupt remediation workflows that engineering teams finally trust 

At the same time, few teams believe traditional vulnerability management – isolated scanners, static scoring, endless ticket queues – is sufficient on its own anymore. 

So where does that leave you? 

“Risk-based vulnerability management is dead” doesn’t tell the full story

In response to Kenna’s end-of-life, much of the market has rushed to frame this as the end of risk-based vulnerability management (RBVM) altogether. The message is often loud and binary: RBVM is outdated, jump straight to exposure management.

In practice, that framing doesn’t match how security programs actually evolve. 

Most organizations are not abandoning vulnerability management. They are expanding it:

  • From on-prem to hybrid and cloud

  • From isolated findings to broader attack surface context 

  • From vulnerability lists to exposure-driven decisions 

  • From static to continuous

The mistake is assuming this evolution requires a hard reset, or that exposure management is completely separate and not part of that evolution.  

For CISOs and hands-on leaders alike, the smarter question is: how do we preserve what works today, while building toward what we know we’ll need tomorrow?

What Kenna customers should prioritize next 

As you evaluate what comes after Kenna, the right decision comes down to which platform can consistently deliver security outcomes and measurable risk reduction: 

Continuity without disruption

Your team already understands risk-based prioritization. The next platform should strengthen that muscle, not force you back to severity-only thinking or one-dimensional scoring models that ignore business context and threat intelligence. 

See risk clearly across on-prem, cloud, and external environments

Risk doesn’t live exclusively on-prem or in the cloud. Vulnerability data needs to reflect the reality of modern environments – endpoints, cloud workloads, external-facing assets – without fragmenting visibility. It needs to build on what teams already have by supporting findings from a broad range of existing tools and services, so risk can be understood in one place instead of scattered across platforms. 

Customizable remediation workflows

Prioritization only matters if it leads to action. Look for platforms that help security and IT teams collaborate, track ownership, and measure progress without creating more friction. 

A credible path forward

Exposure management is valuable only when it’s grounded in accurate data, operational context, and day-to-day usability. Security teams are already drowning in findings across tools, and without context that explains what matters and why, exposure management adds more noise instead of helping teams make decisions and reduce risk. That noise shows up in familiar ways: unreconciled duplicate findings, conflicting risk scores between tools, unclear ownership for remediation, and long lists of issues with no clear path to action.

Why this moment favors steady platforms, not big bets

Kenna’s exit creates pressure, but pressure shouldn’t drive risky or forced decisions. Security leaders are accountable not just for vision, but for outcomes, such as: 

  • Are we reducing real risk this quarter? 

  • Can we explain prioritization decisions to the board? 

  • Will this platform still support us two or three years from now? 

This is where vendor stability, roadmap clarity, and operational proof start to matter more than bold claims. 

The strongest next steps are coming from platforms that already deliver visibility across hybrid environments; mature, threat-informed vulnerability prioritization; and integrated remediation workflows that teams actually use. From there, exposure management becomes an evolution, not a leap of faith.

A measured path forward

Kenna’s EOL doesn’t signal the end of risk-based vulnerability management. It signals that security programs are ready to expect more from it. For security leaders, this is an opportunity to reaffirm what has worked in your program, close real visibility and workflow gaps, and choose a platform that supports both near-term continuity and long-term growth.

The goal isn’t to chase the next trend. It’s to make a confident, practical decision – one that protects today’s outcomes while positioning your team for what’s next. 

Looking ahead

If you’re navigating what comes after Cisco Kenna, the most important step is understanding your options early, before timelines force rushed decisions. Explore what a confident transition can look like and how teams are approaching continuity today while preparing for exposure management tomorrow. 

Explore a confident path forward.
