
A Practical Guide to BloodHound Data Collection

This blog will not dive too deeply into BloodHound itself; instead, we will focus on various methods of collecting AD data to provide as input to BloodHound.

The post A Practical Guide to BloodHound Data Collection appeared first on Black Hills Information Security, Inc.

Getting Started In Pentesting – Advice From The BHIS Pentest Lead

Advice about getting started in pentesting from the BHIS pentest lead, including a learning path and why you should go all in on offensive security skills.

The post Getting Started In Pentesting – Advice From The BHIS Pentest Lead appeared first on Black Hills Information Security, Inc.

Security leaders say the next two years are going to be ‘insane’

SAN FRANCISCO — Every RSA Conference has its buzzwords. Cloud. Ransomware. Zero trust. Plastered across the 87-acre Moscone Center complex on every booth, banner and bar. This year it was AI, with vendors pitching AI-powered solutions to every security problem imaginable. But 2026 stood out for a different reason: Industry leaders spent the conference warning about disruption from the very technology everyone was selling.

In an exclusive discussion with CyberScoop at this year’s conference, Kevin Mandia, founder of AI security company Armadin, Morgan Adamski, former executive director of U.S. Cyber Command, and Alex Stamos, a researcher and former chief security officer at several major technology companies, said the industry is entering what they described as an unprecedented two- to three-year period of upheaval, driven by AI systems that are discovering vulnerabilities exponentially faster than defenders can respond and threatening to render decades of security practices obsolete.

“We are just at the inflection point that is going to be pretty insane, at least two to three years,” Stamos said, describing a near-term future in which AI systems flood the threat landscape with working exploits while organizations struggle to patch vulnerabilities faster than attackers can weaponize them.

Mandia put the timeline more bluntly. “It’s a perfect storm for offense over the next year or two,” he said.

The core problem, according to the executives, is speed. AI has made vulnerability discovery almost trivial, while remediation takes time and effort, creating a widening gap that favors attackers across every stage of the kill chain.

“Because of the asymmetry in the cyber domain, where one person on offense can create work for millions of defenders, speed leverages that asymmetry,” Mandia said. “In the near term, there’s an advantage to the attackers as they start to use models and agents to do a lot of the offense.”

Bug discovery goes exponential

The shift is already underway. Stamos, who is currently chief security officer at Corridor, said foundation model companies are sitting on thousands of bugs discovered through AI-assisted analysis that they lack the capacity to verify or patch. 

“The exploit discovery has gone exponential,” Stamos said. “What we haven’t seen go exponential yet is plugging that into working shellcode that bypasses protections on modern processors. But maybe six months or a year from now” AI will be generating sophisticated exploits on demand.

He pointed to examples of AI systems discovering vulnerabilities in decades-old code that had been reviewed by thousands of developers and professional security researchers. In one case, he said, an AI system identified a flaw in foundational Linux kernel code that humans had overlooked for years.

“This superintelligent system was able to figure out a way to manipulate the machine into a place that, when you look at the bug, I’m not sure how a human could have found that,” Stamos said.

The pace of discovery is creating what Stamos called “a massive collective action problem.” Each successive generation of AI models could surface hundreds of new vulnerabilities in the same foundational software. “It’s quite possible that all this development we’ve done in memory-unsafe languages, without formal methods, that none of that is actually secure in the presence of superintelligent bug-finding machines,” he said. “In which case we need to be massively rebuilding the base infrastructure we all work on. And nobody is doing that.”

The timeline for when those capabilities become widely accessible is measured in months. When Chinese open-source models, like DeepSeek or Alibaba’s Qwen, reach current American foundation model capability levels, Stamos said, “you’re going to have every 19-year-old in St. Petersburg with the same capability” as elite vulnerability researchers.

Models trained on existing shellcode are already “reasonably good” at generating exploit code, he said, and may be capable of producing EternalBlue-level exploits within a year. That NSA-developed exploit, leaked in 2017, was used in the WannaCry and NotPetya attacks and remained effective for years because of how difficult such capabilities were to develop. 

“Imagine when that becomes available on demand,” Stamos said.

Agents already operating beyond human scale

Mandia’s company Armadin has built AI agents capable of autonomous network penetration that he said would be devastating if deployed maliciously. Unlike human attackers who must manually type commands and wait for results, AI agents operate across hundreds of threads simultaneously, interpolating command outputs before they arrive and launching follow-on actions in microseconds.

“The scale and scope and total recall of an AI agent compromising you and swarming you is not humanly comprehensible,” said Mandia, who founded Mandiant and served as CEO from 2016 to 2024. “If the old way was a red team that would get in, there’s a human on a keyboard typing commands. That’s a joke compared to” what AI agents can do.

Those agents can evade endpoint detection and response systems in under an hour, he said, and operate at human speed to avoid rate-limiting detection mechanisms. Once inside a network, an AI agent can analyze documentation, packet captures and technical manuals faster than humans can read them, designing attacks tailored to specific control systems on the fly.

“When you build the offense, it scares the heck out of you,” Mandia said. “If we let the animal out of the cage today, nobody’s ready for it.”

He said Armadin recently tested a Fortune 150 company with a strong security team and found either remote code execution vulnerabilities or data leakage paths in every application tested. “Both of us were shocked,” he said.

The shift changes the fundamental question boards ask after penetration tests. Historically, directors wanted to know the probability a demonstrated attack would occur in the real world. “In the age of humans, you could never really answer,” Mandia said. “But with AI, it’s 100 percent. It’s coming and it’s going to get cheaper and more effective at the same time.”

Defenders face impossible timelines

The compression of attack timelines is colliding with organizational realities that are moving in the opposite direction. Adamski, who is now the U.S. lead for PwC’s Cyber, Data & Technology Risk business, said chief information security officers face pressure from boards to adopt AI rapidly, often with explicit goals of reducing headcount, even as compliance requirements remain unchanged and the threat landscape accelerates.

“CISOs are getting squeezed in that they cannot stop adoption because of demand from the board, from the CEO,” Adamski said. “None of the SOC 2 requirements have changed. ISO 27000, anything that helps people get through from a compliance perspective, all those rules are exactly the same.”

Stamos said patch cycles illustrate the mismatch. Where previously only sophisticated adversaries could reverse-engineer Microsoft’s Patch Tuesday updates to develop exploits, AI will democratize that capability. “You’re going to be able to drop the patch into Ghidra, driven by an agent, and come up with [an exploit],” he said. “Patch Tuesday, exploit Wednesday.”

Many CISOs are trying to bolt AI capabilities onto existing security operations, an approach the executives said is insufficient. “They’re not stepping back and looking at the bigger picture, that we have a fundamental, much more holistic problem in terms of how to reimagine and redo an entire cyber defense ecosystem that is solely driven by AI machine to machine,” Adamski said.

Avoiding Pandora’s box

The national security implications compound the problem. While other former government leaders at the conference talked about what they saw as the United States slipping in offensive cyber capability, the three industry leaders spoke to what they believe nation-states have already developed using AI.

“I think we’re seeing less than 50 percent of the AI capability from modern nation-states right now,” Mandia said. “They’re not pressing. Nobody wants to be the first one to open that door.”

Stamos said the operational tempo favors U.S. adversaries. Russian intelligence services can observe and record data from the hundreds of businesses hit by ransomware daily, using that operational experience to train offensive AI models. “We don’t have that kind of operational pace in the U.S.,” he said.

Adamski said any AI capability the United States develops for offensive cyber operations carries inherent risks. “Anything you introduce, you’re introducing it to an ecosystem that they can use back at us,” she said.

Stamos said AI’s impact on cybersecurity will likely produce harmful consequences before other domains because the threshold for cyber operations is already low. “We allow on a Tuesday to happen in the cyber world what we would consider an act of war if it was in any other context,” he said. “I think this is where AI will be used first to hurt people, will be in cyber.”

Two years, maybe

The executives offered limited optimism that AI could also accelerate defensive capabilities, primarily by making security testing affordable at scale and enabling autonomous response systems. But the timeline for when defensive capabilities might catch up depends on immediate action. 

“Two years if we’re good,” Stamos said. “Two years is the minimum if we actually start really fixing code and refactoring stuff into type-safe languages using formal methods.”

Mandia offered optimism “a few years out” if offensive AI built by defenders successfully trains autonomous defensive systems. But he acknowledged the current state is dire. Organizations will need autonomous systems capable of immediately quarantining anomalous behavior, he said, because traditional detection and response timelines will collapse.

“You’re not going to have time to call Mandiant on a Thursday afternoon, get people in, sign a contract,” Mandia said. “You’re going to have to be able to respond at machine speed.”

Stamos said defenders must assume they cannot patch their way out of the problem and focus instead on defense in depth, particularly around lateral movement and persistence, which remain more difficult for AI to automate than initial exploitation.

But even that assumes organizations have time to prepare. The executives suggested that window is closing rapidly, if it hasn’t already shut for good.

Adamski summed up the reckoning facing the industry: “AI is going to potentially make us pay for the sins of yesterday.”

The post Security leaders say the next two years are going to be ‘insane’ appeared first on CyberScoop.

Social Engineering and Microsoft SSPR: The Road to Pwnage is Paved with Good Intentions 

This scenario simultaneously tests identity confirmation tooling (SSPR, MFA, Conditional Access), how users act under pressure, and the organization's ability to detect and follow up on social engineering attacks.

The post Social Engineering and Microsoft SSPR: The Road to Pwnage is Paved with Good Intentions appeared first on Black Hills Information Security, Inc.

Bypassing CSP with JSONP: Introducing JSONPeek and CSP B Gone

A Content Security Policy (CSP) is a security mechanism implemented by web servers and enforced by browsers to prevent various types of attacks, primarily cross-site scripting (XSS). CSP works by restricting resources (scripts, stylesheets, images, etc.) on a webpage to only execute if they come from approved sources. However, like most things in security, CSP isn't bulletproof.
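The weakness such bypasses rely on can be sketched with a toy JSONP endpoint (the function and callback names below are hypothetical illustrations, not part of the JSONPeek or CSP B Gone tooling the post introduces): because a JSONP response is a script served from an origin the CSP already trusts, an endpoint that does not validate its callback parameter effectively lets an attacker execute arbitrary JavaScript from a whitelisted source.

```python
import json

def jsonp_response(callback: str, data: dict) -> str:
    """Simulate a JSONP endpoint: the server wraps its JSON payload in
    whatever callback name the client supplied via a query parameter."""
    return f"{callback}({json.dumps(data)})"

# Intended use: the page loads
#   <script src="https://trusted.example/api?callback=handleUsers"></script>
# and the response invokes the page's own handler.
print(jsonp_response("handleUsers", {"users": ["alice"]}))

# If the endpoint does not validate the callback name, an attacker who can
# inject a <script> tag pointing at the *whitelisted* origin smuggles
# arbitrary JavaScript past the CSP: the response body now begins with
# attacker-chosen code, and the trailing JSON is commented out.
evil = jsonp_response("alert(document.domain);//", {"users": []})
print(evil)
```

This is only a conceptual sketch of why an allow-listed JSONP endpoint undermines `script-src`; the post's tools automate finding such endpoints in real policies.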

The post Bypassing CSP with JSONP: Introducing JSONPeek and CSP B Gone appeared first on Black Hills Information Security, Inc.

Getting Started with NetExec: Streamlining Network Discovery and Access

One tool that I can't live without when performing a penetration test in an Active Directory environment is called NetExec. Being able to efficiently authenticate against multiple systems in the network is crucial, and NetExec is an incredibly powerful tool that helps automate a lot of this activity.

The post Getting Started with NetExec: Streamlining Network Discovery and Access appeared first on Black Hills Information Security, Inc.

How to Design and Execute Effective Social Engineering Attacks by Phone

Social engineering is the manipulation of individuals into divulging confidential information, granting unauthorized access, or performing actions that benefit the attacker, all without the victim realizing they are being tricked.

The post How to Design and Execute Effective Social Engineering Attacks by Phone appeared first on Black Hills Information Security, Inc.

Abusing S4U2Self for Active Directory Pivoting

TL;DR: If you only have access to a valid machine hash, you can leverage the Kerberos S4U2Self extension for local privilege escalation, which allows reopening and expanding potential local-to-domain pivoting paths, such as SeImpersonate!

The post Abusing S4U2Self for Active Directory Pivoting appeared first on Black Hills Information Security, Inc.

Augmenting Penetration Testing Methodology with Artificial Intelligence – Part 1: Burpference

Burpference is a Burp Suite plugin that takes requests and responses to and from in-scope web applications and sends them off to an LLM for inference. In the context of artificial intelligence, inference is taking a trained model, providing it with new information, and asking it to analyze this new information based on its training.

The post Augmenting Penetration Testing Methodology with Artificial Intelligence – Part 1: Burpference appeared first on Black Hills Information Security, Inc.

Offline Memory Forensics With Volatility

Volatility is a memory forensics tool that can pull SAM hashes from a vmem file. These hashes can be used to escalate from a local user (or no user) to a domain user, leading to further compromise.

The post Offline Memory Forensics With Volatility appeared first on Black Hills Information Security, Inc.

Why Your Org Needs a Penetration Test Program

This webcast originally aired on February 27, 2025. Join us for a very special free one-hour Black Hills Information Security webcast with Corey Ham & Kelli Tarala on why your […]

The post Why Your Org Needs a Penetration Test Program appeared first on Black Hills Information Security, Inc.

Gone Phishing: Installing GoPhish and Creating a Campaign

GoPhish provides a nice platform for creating and running phishing campaigns. This blog will guide you through installing GoPhish and creating a campaign. 

The post Gone Phishing: Installing GoPhish and Creating a Campaign appeared first on Black Hills Information Security, Inc.

5 Things We Are Going to Continue to Ignore in 2025

In this video, John Strand discusses the complexities and challenges of penetration testing, emphasizing that it goes beyond just finding and exploiting vulnerabilities.

The post 5 Things We Are Going to Continue to Ignore in 2025 appeared first on Black Hills Information Security, Inc.

Attack Tactics 9: Shadow Creds for PrivEsc w/ Kent & Jordan

In this video, Kent Ickler and Jordan Drysdale discuss Attack Tactics 9: Shadow Credentials for PrivEsc, focusing on a specific technique used in penetration testing services at Black Hills Information Security.

The post Attack Tactics 9: Shadow Creds for PrivEsc w/ Kent & Jordan appeared first on Black Hills Information Security, Inc.

DLL Hijacking – A New Spin on Proxying your Shellcode

This webcast was originally published on October 4, 2024. In this video, experts delve into the intricacies of DLL hijacking and new techniques for malicious code proxying, featuring a comprehensive […]

The post DLL Hijacking – A New Spin on Proxying your Shellcode appeared first on Black Hills Information Security, Inc.

Blue Team, Red Team, and Purple Team: An Overview

By Erik Goldoff, Ray Van Hoose, and Max Boehner || Guest Authors. This post comprises three articles that were originally published in the second edition of the InfoSec […]

The post Blue Team, Red Team, and Purple Team: An Overview appeared first on Black Hills Information Security, Inc.
