The missing cybersecurity leader in small business

The average cost of a cyberattack on a small- or medium-size business is more than $250,000. The salary of a chief information security officer (CISO) is about the same, between $250,000 and $400,000, according to the annual 2026 CISO Report from Sophos and Cybersecurity Ventures. Small- and medium-size businesses (SMBs) know they cannot afford the salary, so they roll the dice, hoping they will not be attacked. This is a dangerous gamble that these businesses, which make up the backbone of the American economy, should not have to take. A virtual CISO (vCISO) or fractional CISO (fCISO) can provide a practical solution.

As the American economy goes digital, SMBs now rely on the same building blocks as big enterprises — cloud services, payment systems, remote access, customer data, and third-party vendors. But without senior cyber leadership, cybersecurity often becomes a patchwork of tools, checklists, insurance paperwork, and whatever guidance a vendor offers. That may get these companies through a questionnaire; it will not build real resilience. Nearly half of all reported cyber incidents, part of a toll projected to cost the global economy $12.2 trillion annually by 2031, involve smaller firms.

The threat is growing in both size and sophistication. Adversaries are deploying AI to automate reconnaissance, develop malware, and run phishing campaigns at scale.  This reduces the cost and skill needed to target smaller firms at volume. Adversaries are also collecting encrypted data with the intent to decrypt it later when they have access to large enough quantum computers. SMBs in defense, healthcare, and financial supply chains often hold sensitive credentials that provide access into larger enterprise environments, but most are not prepared to adopt quantum-resistant encryption.

SMBs generally understand they face cyber risk. The real gap is leadership: someone who can turn technical vulnerabilities into business decisions, set priorities, brief executives, prepare for audits, and hold vendors accountable. For most SMBs, hiring a full-time CISO is financially unrealistic.

A Virtual CISO provides remote, on-demand cybersecurity leadership and advice, typically supporting several organizations at the same time. A fractional CISO is a dedicated, part-time executive who is more deeply integrated into one organization’s governance, security planning, and day-to-day operations. Both models give smaller organizations access to senior-level cybersecurity expertise in a flexible, more affordable way than hiring a full-time CISO.

Washington should make it easier for SMBs to hire fractional cybersecurity leaders, because the private market is not closing this gap on its own. The Cybersecurity and Infrastructure Security Agency (CISA) and the Small Business Administration (SBA) could help by publishing buyer guidance: vetted criteria for evaluating providers, example scopes of work and deliverables, and real-world case studies that show SMB owners what a high-quality vCISO or fCISO engagement should look like.

Clear guidance matters because many smaller firms cannot easily tell the difference between true cybersecurity leadership and a tool reseller, compliance-only consultant, or a generic managed services contract. Any vetted provider criteria should emphasize proven experience building and running security programs, independence from vendor incentives and product quotas, and the ability to tie security investment to real business risk, not just a list of certifications. Model scopes of work should also spell out the basics every engagement should deliver: an initial risk assessment, a prioritized remediation roadmap, and simple metrics that show whether security is improving over time. Without clear buyer criteria, federal efforts could end up funding low-quality services that add cost and paperwork without making companies safer.

The National Institute of Standards and Technology (NIST) should recognize these CISO models in its SMB-focused Cybersecurity Framework guidance. That would help smaller firms turn the framework’s Govern, Identify, Protect, Detect, Respond, and Recover functions into a clear, accountable leadership structure. This would make these roles less abstract: the point is not merely providing advice, but taking executive-level ownership of risk priorities, vendor oversight, incident readiness, and communication with the owner or board.

Congress and the Treasury Department should consider targeted tax incentives or credits for qualified cybersecurity leadership services, tied to measurable risk-reduction outcomes. Eligible activities could include completing a risk assessment, building an incident response plan, conducting vendor security reviews, running employee training, and producing a remediation roadmap. SMBs often defer cybersecurity because every dollar competes with payroll, inventory, and growth. A targeted incentive would make security leadership easier to justify as a business investment rather than an optional add-on.

Federal acquisition officials should require contractors that handle sensitive government data to show they have executive-level cybersecurity oversight, whether full-time, virtual, or fractional, and should extend that expectation down to relevant subcontractors and suppliers. This is necessary because SMBs serve as entry points into defense, healthcare, financial, and critical infrastructure supply chains.

Finally, CISA and the SBA should support vCISO- and fractional-CISO-led workforce training. Employees improve security when training comes with leadership, regular reinforcement, and clear accountability, not just annual awareness training. The aim is not to turn every SMB into a Fortune 500 security shop. It should be to give smaller firms access to the leadership they need before the next incident forces the issue.

Georgianna Shea, who is a Doctor of Computer Science, is chief technologist at the Foundation for Defense of Democracies’ Center on Cyber and Technology Innovation and its Transformative Cyber Innovation Lab, where Cason Smith served as a summer 2025 intern. Cason is studying integrated information technology at the University of South Carolina.

Why data centers now belong on the critical infrastructure list

Missile and drone attacks that took out cloud data centers in the Middle East underscored a critical vulnerability in the modern economy: reliance on digital infrastructure that sustains competitive advantage and operational continuity for corporations, nations, and militaries. 

The outages and downstream disruption were a preview of a new form of strategic and operational risk. Data centers have long been the backbone of the digital economy. What is changing is the scale of dependence as AI workloads dramatically increase the compute power required to run businesses, supply chains, and national security systems. 

Artificial intelligence has moved beyond business applications and into the core of warfare and national security. Last month, The New York Times reported that AI is “totally integrated” into the collection of intelligence and its use in strategic decision-making and military operations. Even if AI models are not directly firing weapons, AI-enabled analysis now plays a central role in how modern militaries gain visibility, find insights, and drive action.

That matters because it changes what should be considered critical infrastructure. If AI is a competitive advantage for companies and a battlefield advantage for warfighters, then the infrastructure that trains, hosts and runs AI becomes a high-value target. Attacks on the digital infrastructure organizations rely on can do more than inflict financial damage. They can slow decision-making, degrade logistics and reduce military effectiveness without ever engaging a conventional force.

Historically, nation-state campaigns targeting data centers and service providers focused on cyber intrusions for espionage or pre-positioning. What is different now is the emergence of physical attacks on digital infrastructure during active conflict. Russian military intelligence has been linked to campaigns aimed at digital infrastructure and managed services, often as part of a supply chain attack to compromise organizations at scale. Iran-aligned groups have repeatedly demonstrated willingness to target private sector entities to advance geopolitical goals. In many cases, the objective was access: steal data, implant persistence, map networks, and maintain a foothold that could be used later for espionage or disruption. 

What’s clearer now than ever before is that data centers and the AI workloads they support have become so vital to modern society that our adversaries will seek to degrade or destroy their efficacy as a tactic of both kinetic and cyber warfare.

We have already seen how quickly a digital incident can become real-world disruption. On March 11, reports surfaced of thousands of servers and endpoints wiped inside Stryker, a U.S.-based medical device manufacturer. A hacktivist group sympathetic to Iran, known as Handala, claimed responsibility. The incident reportedly halted Stryker’s global production after attackers accessed its Microsoft environment and issued a wipe command via Intune. Even without a single missile, the outcome looked like a strategic disruption: operations stopped and downstream customers felt it.

For business leaders, the imperative is clear: treat operational resilience as a board-level priority in the AI era.

In the world of corporate IT, cybersecurity prioritizes confidentiality: preventing theft of sensitive information. Resilience is a different discipline. It is the ability to sustain operations when systems are degraded, disrupted or actively under attack. For data centers and the businesses that depend on them, resilience comes down to preventing cascading failures and reducing the consequence when something inevitably goes wrong.

These developments carry an important implication for the private sector. Digital infrastructure is increasingly a strategic target, making resilience a core business priority rather than a narrow IT issue. For business leaders, the impact of data center disruption extends into multiple, often overlooked areas of cybersecurity risk.

For example, AI’s growth is colliding with a power wall in many regions where grid capacity cannot scale fast enough. That is driving facilities toward new power dependencies, including on-site generation through distributed energy and renewables, yielding more complex power management environments. This power infrastructure becomes a pressure point as interruptions to power supply or management systems can quickly force a data center offline. Russia demonstrated the ability to target and disrupt power generation and distribution in Ukraine in both 2015 and 2016.

Building management and automation systems, including HVAC and physical access controls, are another. These systems are essential to creating safe, supportive operational environments, but they typically have long capital depreciation cycles and inconsistent security safeguards. Frequently exposed to the Internet, and commonly misconfigured or improperly secured, they can give an attacker a pathway to causing outages.

With an increasing density of computing infrastructure, thermal management has become a core environment control in data centers. As the industry adopts liquid cooling for dense AI loads, interference with cooling is no longer a niche technical issue. It is a risk vector that can cause downtime and potential equipment damage if breached by attackers.

Remote access creates another major exposure. Data centers rely on vendors, contractors, and systems integrators for maintenance, monitoring, and support, and each remote connection can become an entry point if it isn’t tightly controlled, centrally managed, and well secured. Adversaries often target these trusted access routes because they can be easier to compromise than a well-defended perimeter, allowing attackers to bypass standard controls and safeguards.

All of this has broader economic implications because data center disruption does not stay inside the technology sector. It cascades into the industries that keep society functioning and supply chains moving: hospitals, electric utilities, chemical production, food and beverage, oil and gas, and transportation. An extended outage becomes missed shipments, halted production, delayed care, safety concerns and lost trust.

What should leaders do now?

Start by defining resilience targets that match business reality: what must stay running, what can degrade, what cannot fail. Then invest in the controls that limit the impact of an incident. Segmentation between IT and OT assets should be non-negotiable. Remote access should be treated as a critical risk pathway with least privilege, strong authentication and continuous monitoring.

Manage facilities systems such as building management systems, power, and cooling controls as critical operational technology, with asset inventories, vulnerability management, logging, and incident response plans that anticipate disruption.

Finally, train to operate under degraded conditions. Tabletop exercises should include scenarios like loss of a cloud region, partial failure of a facility, or compromise of a management plane. Use these exercises to validate that the organization can maintain essential operations and recover quickly when disruptions occur. 

Policy is moving in this direction as well. Governments are increasingly treating data centers as critical infrastructure. Policies and frameworks such as the National Cybersecurity Strategy, CISA’s Secure by Design principles, and international standards like IEC 62443 all reflect a growing recognition that digital infrastructure is a national security issue. Companies that get ahead of this shift will not only reduce risk, they will build competitive advantage in a world where downtime can become a strategic weapon.

In the AI era, data centers are essential infrastructure for modern economies and national security. Their rising importance also makes them attractive targets in cyber and physical conflict. Protecting them is no longer just about safeguarding company operations, it is about protecting the systems society depends on every day. 

Grant Geyer is the chief strategy officer at Claroty.

Everyone’s building AI agents. Almost nobody’s ready for what they do to identity.

Anthropic recently announced that it would not release Mythos, its most powerful AI model, to the public. The model discovered thousands of previously unknown software vulnerabilities — flaws that had sat undetected in major operating systems and web browsers for as long as nearly three decades. Anthropic said the model was too dangerous to deploy broadly because the same capabilities that let it find and fix security flaws could let attackers exploit them. A single AI agent, the company warned, could scan for weaknesses faster and more persistently than hundreds of human hackers. 

That decision tells you something important about where we are. The same AI systems that companies are racing to deploy as autonomous assistants — scheduling your appointments, writing your code, managing your workflows — are also capable of probing digital defenses at a speed and scale no human team can match. And most of the systems they’d be probing still rely on a security model designed for an era when a person sat behind every keyboard. 

Think of it like a building where every door has a lock, but the locks were all designed to recognize human hands. Now the building is full of robots — some of them authorized couriers, some of them intruders — and the locks can’t tell the difference. 

Not long ago, you could sit at your desk, glance at the sticky note on your monitor for your username and password, type them in, and grab a cup of coffee while your browser opened a doorway to the rest of the world. Every layer of security that followed — passwords, security questions, biometric scans, two-factor authentication — grew out of a single bedrock assumption: a person was on the other end. 

AI agents break that assumption from two directions at the same time. Legitimate agents need credentials to act like a human. OpenAI’s Operator navigates websites on your behalf. Google’s Gemini can plan your next family vacation while you sleep. Visa recently unveiled Intelligence Commerce Connect, a platform that lets AI agents do the shopping for consumers. These aren’t demos or hot takes from a tech conference floor. They’re shipping products that act on behalf of real people—and to do that, they need your identity. 

At the same time, adversaries can fake humanity at scale. The same AI that can convincingly act like a helpful assistant can also be a malicious impersonator. They don’t break in, they log in—through shared credentials, hiring pipelines, vendor onboarding portals, and collaboration tools. Most organizations still treat identity as a login problem—something IT handles with stronger passwords or additional authentication steps layered on top of existing systems. The harder challenge now is knowing who, or what, you’ve already let in. 

That distinction is collapsing just as digital systems become more autonomous. 

When that distinction blurs, the damage is concrete. If a procurement workflow cannot distinguish between a human manager and an AI impersonator, purchase orders go out under false authority. When compliance logs cannot determine how a decision was authorized — by a person or a bot — the accountability chain falls apart. Regulators and customers will not accept “we’re not sure” as an explanation. 

The economics have tilted sharply toward the attacker. Sophisticated fraud once required coordination, with people researching targets, crafting messages, and adjusting tactics in real time. AI agents eliminate those constraints. One person can now supervise an army of autonomous systems, each running a valid persona across multiple interactions simultaneously. A single operator can field a hundred synthetic employees for the cost of one real salary. The barrier to large-scale impersonation is no longer skill or manpower. It is access to a capable model and a set of stolen credentials. 

Stronger identity controls do carry a cost. Every additional verification step is a moment when a customer might abandon a transaction, or an employee might lose patience with a security protocol. The goal is not to shut down automation. It is to make sure the systems acting in your name are authorized to do so. 

Some organizations are adapting. They are treating AI agents less like software and more like new employees, cataloging every agent in their environment, limiting permissions, requiring human approval for sensitive actions. They are moving beyond passwords to phishing-resistant authentication that binds access to a known device and a verified user. They are building behavioral baselines so that when a customer service bot suddenly queries a financial database, or a new hire accesses source code on day one, alarms go off. 
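
To make that concrete, here is a minimal sketch, in Python, of what cataloging agents, limiting permissions, and requiring human approval for sensitive actions can look like. The class names, actions, and approval rules are illustrative assumptions, not any particular vendor's product.

```python
from dataclasses import dataclass, field

# Hypothetical registry entry for a single AI agent, treated like a new hire:
# a named human owner, an explicit allow-list of actions, and the subset of
# actions that always require a human sign-off.
@dataclass
class AgentRecord:
    agent_id: str
    owner: str                                  # human accountable for this agent
    allowed_actions: set = field(default_factory=set)
    requires_approval: set = field(default_factory=set)

class AgentRegistry:
    def __init__(self):
        self._agents = {}

    def register(self, record: AgentRecord):
        self._agents[record.agent_id] = record

    def authorize(self, agent_id: str, action: str, human_approved: bool = False) -> bool:
        record = self._agents.get(agent_id)
        if record is None:
            return False                        # unknown agent: deny by default
        if action not in record.allowed_actions:
            return False                        # least privilege: not on the list, not allowed
        if action in record.requires_approval and not human_approved:
            return False                        # sensitive action: a person must sign off
        return True

# Example: a support bot may read tickets on its own, but issuing refunds
# needs explicit human approval.
registry = AgentRegistry()
registry.register(AgentRecord(
    agent_id="support-bot-01",
    owner="jane.doe",
    allowed_actions={"read_ticket", "issue_refund"},
    requires_approval={"issue_refund"},
))

assert registry.authorize("support-bot-01", "read_ticket")
assert not registry.authorize("support-bot-01", "issue_refund")
assert registry.authorize("support-bot-01", "issue_refund", human_approved=True)
```

The pattern is deliberately boring: unknown agents are denied by default, permissions are explicit allow-lists, and the riskiest actions always route through a person.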

Nobody keeps their password on a sticky note anymore (I hope). But the assumption behind the sticky note, that a human hand would type it in, still underpins most of the systems we depend on. These systems hold your medical records, process your mortgage, and let an AI assistant rebook your flight. In a world where AI agents act faster, more persistently, and more convincingly than any person, that assumption is the vulnerability. 

The organizations that can verify identity continuously — not just at the door, but at every action, for every actor, human or machine — will have a durable advantage. The ones that cannot will find out what ambiguity costs. 

Devin Lynch is Senior Director of the Paladin Global Institute and a former Director for Policy and Strategy Implementation at the Office of the National Cyber Director. 

The AI era demands a different kind of CISO

Many security leaders are still operating with frameworks built for a different era. For years, success was measured by fixed checkpoints, such as passing audits, closing vulnerabilities, and maintaining compliance. Those markers still have value, but they were designed for a threat landscape that moved in predictable, linear ways.

Today, that landscape is shifting in real time. AI is accelerating how attackers can identify and exploit weaknesses, while cloud environments and autonomous systems are constantly changing the terrain. The result is a gap between how risk is measured and how it actually unfolds, where static signals can’t keep up with dynamic threats.

CISOs are under pressure from two directions: risk is growing, and the tools meant to measure it are struggling to keep up. Traditional indicators often reflect yesterday’s threat landscape, leaving security leaders with an incomplete picture of where they actually stand.

The Mythos signal

Recent reports about Anthropic’s Claude Mythos Preview, described as so effective at vulnerability discovery that access has been restricted, offer a clear signal of where cybersecurity is headed. AI models like this one demonstrate that the speed and scale of exploitation have fundamentally changed. What once took skilled attackers days or weeks can now happen in minutes, and increasingly without human intervention.

That shift matters because attacker capabilities are accelerating faster than most organizations can measure them. The gap between how risk unfolds and how security teams track it is widening. A “passed” audit tells you where you’ve been, not where you are. A posture dashboard reflects a moment in time, not a continuously changing environment. And a pen test is a snapshot, in a world where conditions evolve constantly.

Sharpening the conversation this quarter

If your conversations haven’t evolved to match this new reality, your organization has a significant blind spot. Here are five questions CISOs should be using to turn the current shift into action:

What can we see at runtime without waiting for a report?
Configuration tools tell you what should be true. Runtime visibility tells you what is true right now. (Follow up: If an attacker starts moving laterally in our cloud environment today, how fast do we know, in minutes or days?)

Do we have a complete inventory of identities, including non-human?
Business environments are full of identities beyond employees. Vendors, contractors, service accounts, API keys, automations, machine identities, and cloud principals sprawl across systems. Attackers love that sprawl because stealing credentials is often easier than writing malware.
(Follow up: How many human and non-human identities do we have, and which ones can access sensitive data or modify critical infrastructure?)

Where are we over-permissioned, and how quickly can we reduce it?
Over-permissioned accounts act like master keys: convenient until they’re compromised. Least privilege must be measurable, not aspirational. (Follow up: Can you show me the highest-risk access paths and what we can remove or tighten in 30 days?)

Are we using AI to reduce noise and speed decisions or just adding another screen?
Many teams are drowning in alerts. AI can help by adding context (connecting a risky identity + vulnerable workload + exposed secret) so responders can act quickly, instead of chasing disconnected warnings. (Follow up: What’s our alert volume, what percentage is actionable, and what’s improved response time?)

Can you walk me through a realistic incident end to end, with decision points?
Prevention matters, but resilience is what separates organizations when something gets through. Incidents are inevitable. What matters is detection speed, containment, recovery, and communications. (Follow up: Pick a scenario — credential theft, ransomware, vendor compromise — what happens here, who decides what, and when does executive leadership need to know? What do customers need to know?)

What to do with the answers

If these questions surface gaps, the path forward is usually practical. Start by prioritizing runtime visibility on systems that support critical services and hold sensitive data. Treat identity like infrastructure — inventory it, right-size permissions, and monitor continuously. Shift measurement toward outcomes like time to detect, contain, and restore, rather than activity metrics like tickets closed or controls checked. And rehearse the hard day with both technical teams and leadership, including communications.
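
As a rough illustration of outcome-based measurement, the sketch below computes mean time to detect, contain, and restore from incident timestamps. The records and field names are hypothetical placeholders; the point is that the inputs are timestamps from real incidents and exercises, not counts of tickets closed.

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records: each carries the timestamps needed to compute
# outcome metrics (detect, contain, restore) instead of activity metrics.
incidents = [
    {
        "started":   datetime(2026, 1, 4, 9, 10),
        "detected":  datetime(2026, 1, 4, 9, 42),
        "contained": datetime(2026, 1, 4, 11, 5),
        "restored":  datetime(2026, 1, 4, 16, 30),
    },
    {
        "started":   datetime(2026, 2, 11, 22, 0),
        "detected":  datetime(2026, 2, 12, 1, 15),
        "contained": datetime(2026, 2, 12, 2, 40),
        "restored":  datetime(2026, 2, 12, 9, 0),
    },
]

def mean_hours(deltas):
    # Average a collection of timedeltas and report the result in hours.
    return round(mean(d.total_seconds() for d in deltas) / 3600, 1)

mttd = mean_hours(i["detected"] - i["started"] for i in incidents)    # mean time to detect
mttc = mean_hours(i["contained"] - i["detected"] for i in incidents)  # mean time to contain
mttr = mean_hours(i["restored"] - i["contained"] for i in incidents)  # mean time to restore

print(f"MTTD {mttd}h, MTTC {mttc}h, MTTR {mttr}h")
```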

In an era where threats move at AI speed, the advantage belongs to teams that can see clearly and act immediately. The defining question now is how quickly you can identify a risk, understand its impact, and respond before it escalates.

Rinki Sethi is the chief security & strategy officer at Upwind Security, holding over two decades of cybersecurity leadership experience from roles at Twitter, Rubrik, BILL, Palo Alto Networks, IBM, and eBay. She is a founding partner at Lockstep Ventures, serves on the boards of ForgeRock and Vaultree, and is widely recognized for her contributions to the cybersecurity community, including developing the first national cybersecurity curriculum for the Girl Scouts of USA.

Mythos can find the vulnerability. It can’t tell you what to do about it.

Mythos matters. It is a significant step forward in AI-assisted vulnerability discovery. But it does not mean cybersecurity changed overnight, nor does it mean enterprises are suddenly facing fully automated exploitation at internet scale tomorrow.

It does mean the offensive side of AI is continuing to improve. The defensive side needs to catch up now.

Mythos is the latest step in a longer trend. Over the next several years, expect the same pattern to repeat: incremental progress, then a jump; incremental progress, then a jump. Models will get more capable and cheaper with each cycle, and each jump will put more pressure on security teams still operating at human speed.

Mythos demonstrated that AI can find software vulnerabilities with unprecedented depth. That is real progress and should be taken seriously. However, this was not a case where AI suddenly made enterprise compromise cheap, easy, or automatic. Even in Anthropic’s own examples, the cost of discovering a critical vulnerability was significant. One example cited roughly $20,000 in token costs to identify a significant OpenBSD issue. 

Mythos made vulnerability discovery cheaper to scale by replacing bodies with dollars. But finding a vulnerability is only one part of the operational reality.

An attacker still has to determine whether that vulnerability is exploitable in a specific enterprise, identify a viable attack path, gain the necessary access, and successfully operationalize the exploit in a real environment. None of that became easy just because a model found a software bug.

And on the defensive side, Mythos does not yet solve the much harder enterprise problem: How do I know whether this vulnerability is actually exploitable in my environment, and what is the most efficient way to remediate it without breaking the business?

The real enterprise problem is not discovery. It is prioritization and action. Security leaders do not struggle only because vulnerabilities exist. They struggle because the operational cost of deciding what matters, what is exploitable, what can wait, and what can be fixed safely is enormous.

If a large enterprise learns that a critical vulnerability has been found in widely used software, the next step is not magic. It is a painful chain of operational questions focused on where they run the software, what version it is, whether there is a realistic attack path, and many more.

Mythos leaves the defensive cost of answering those questions inside a real enterprise largely unchanged. The right lesson is preparation.

One of the mistakes the market often makes with AI is assuming every new capability is the moment everything changes. The right move is to start now with defensive AI systems that are useful today and positioned to improve over time. For most enterprises, that means looking for AI products that help improve alert investigation, threat hunting, and vulnerability management, offer full audit capabilities, connect to enterprise data and reason to provide organizational context, and evolve as the model landscape matures.

The goal is to build the operational foundation now for a future in which more of the work can be automated safely.

Today, defenders need systems that let humans remain involved while the machine helps them scale. Over time, that involvement will change. Analysts will spend less time doing repetitive work themselves and more time orchestrating, reviewing, and improving how automated work gets done.

Eventually, some workflows will need to be reviewed in bulk rather than one action at a time. When response moves at machine speed, a human may not approve every individual remediation action. Instead, they will need a control center view into patterns: what the system did today, what worked, what did not, and what should be adjusted tomorrow.

That is a very different future from the simplistic idea of “replace the analyst.”

The real future is one where humans move from doing every task manually to supervising systems, shaping policy, reviewing patterns, and controlling how increasingly capable agents operate.

Mythos is a warning. Not because it means the sky is falling. Because it shows where the offensive side is heading. Defenders should move accordingly and with urgency.

Alex Thaman is the chief technology officer at Andesite. Over a 20+ year career, Alex has been an engineering leader at Microsoft, Unity Software, and Scale AI.

Why the Axios attack proves AI is mandatory for supply chain security

Two weeks ago, a suspected North Korean threat actor slipped malicious code into a package within Axios, a widely used JavaScript library. The immediate concern was the blast radius: roughly 100 million weekly downloads spanning enterprises, startups, and government systems. But beyond the sheer scale, the attack’s speed was just as worrisome – a stark reminder of the tempo modern adversaries now operate at.

The Axios compromise was identified within minutes of publication by an Elastic researcher using an AI-powered monitoring tool that analyzed package registry changes in real time. The approach was right: AI classifying code changes at machine speed, at the moment of publication, before the damage compounds. By any standard, it was a fast response. The compromised package was removed in about three hours. But even in those three hours, the widely used package may have been downloaded over half a million times.
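
The researcher's actual tooling is not described here, but the underlying pattern, watching the registry at the moment of publication and flagging new versions for classification, is simple to sketch. In the illustrative Python below, the watchlist, polling interval, and review hook are assumptions, and the AI classification step is stubbed out.

```python
import time
import requests

# Packages to watch and the last version we have already reviewed.
# The package list and the review hook are placeholders for illustration.
WATCHLIST = {"axios": "1.7.0"}
REGISTRY = "https://registry.npmjs.org"

def latest_version(package: str) -> str:
    # The npm registry exposes package metadata, including dist-tags, as JSON.
    resp = requests.get(f"{REGISTRY}/{package}", timeout=10)
    resp.raise_for_status()
    return resp.json()["dist-tags"]["latest"]

def flag_for_review(package: str, version: str) -> None:
    # In a real pipeline this is where a classifier (or an LLM) would diff the
    # new release against the previous one and score the change.
    print(f"[review] {package}@{version} published; diff and classify before trusting it")

while True:
    for package, reviewed in list(WATCHLIST.items()):
        current = latest_version(package)
        if current != reviewed:
            flag_for_review(package, current)
            WATCHLIST[package] = current
    time.sleep(60)  # simple polling; a production monitor would follow the registry change feed
```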

This underscores a new reality. Enterprises and the public sector are being overwhelmed with attacks that are increasing in both speed and complexity, driven in part by AI. Adversaries are probing every link in the supply chain, and they are doing it at a pace that human-speed defenses cannot match.

That detection effort is one example of using AI to tackle a security problem, but it also makes a broader case: AI-powered security can dramatically improve SOC efficiency, especially when organizations across the public sector and beyond are drowning in attacks.

The direct threat to the public sector

Government agencies increasingly rely on the same open-source JavaScript frameworks as the private sector, so a poisoned package can give an adversary access to sensitive systems before anyone realizes the supply chain has been compromised. This is a direct threat to national security and critical infrastructure, especially when the payloads are cross-platform, affecting macOS, Windows, and Linux.

What is most critical now is understanding and correctly preparing for the frequency and speed at which these attacks occur.

AI has fundamentally lowered the barrier to sophisticated cyber operations, granting relatively unsophisticated bad actors and small nation-states capabilities once reserved for elite criminal groups and countries. Adversaries now leverage AI to automate reconnaissance, craft convincing social engineering, and develop evasive malware. With a new vulnerability discovered every few minutes, the pace is accelerating.

For the public sector, the threat model has expanded. Defending against known nation-state playbooks is no longer sufficient—that’s just the baseline. Groups that couldn’t execute at nation-state levels five years ago now operate with comparable sophistication, while state-sponsored actors operate with unprecedented speed and automation. Staying ahead means moving beyond traditional defense to meet a threat landscape that is increasingly automated and ubiquitous.

AI is not optional

Adversarial AI is the defining threat of the current operating environment. Automated reconnaissance. AI-generated obfuscation. Machine-speed deployment across multiple vectors simultaneously. The adversary has implemented AI faster and more aggressively than most defensive teams.

It is rapidly becoming unquestionable in security: if you are not using AI to battle AI, you will lose.

That does not mean buying into the autonomous SOC fantasy. That approach treats AI in isolation, as if defenders are the only ones with access to the technology. Defensive AI is not a win button, but the minimum entry fee to stay level with the attacker. You still need business context, mission knowledge, and human judgment.

The agentic SOC transformation

The Axios compromise should serve as a clear signal. Nation-state actors are targeting the software supply chain with increasing frequency and sophistication. The government agencies and organizations that will defend successfully against these threats are the ones building security operations that can move just as fast as the threat actors they face.

AI-driven security operations that can match the speed of modern threats, such as agentic workflows that automatically triage, investigate, and contain suspicious activity, are operationally necessary. An agentic SOC mindset and approach to how these centers work will empower analysts, with agents operating on their behalf automatically and transparently.

The traditional SOC pyramid puts humans at the bottom doing the highest-volume work: a wide analyst tier triages alerts, feeding a narrower senior tier that handles investigations. Adversarial AI has made that base layer untenable. The volume is too high, the speed too fast, the surface area too broad. The pyramid inverts into a diamond: AI takes the base while analysts rise to become threat engineers, managing, validating, and improving the agents working on their behalf.

AI agents handle the high-volume work of alert correlation, investigation enrichment, and initial containment while human analysts focus on strategic decisions and mission context. These agents amplify the expertise that government security professionals bring, delivering pre-investigated, correlated findings rather than a flood of disconnected alerts.

The rapid acceleration of sophisticated attacks calls for this essential change across the SOC. The public sector and industry are undergoing a significant transformation, shifting away from eyes-on-glass alert triage toward a high-impact era of threat engineering. In doing so, public sector teams will have the ability to greatly reduce mean time to detect/respond, in turn reducing SOC analyst fatigue and compressing investigation timelines.

Mike Nichols is the GM of Security at Elastic.

Maine’s moratorium

On April 14, 2026, Maine’s legislature passed a bill prohibiting the “construction or permitting” of any new datacenter that consumes more than 20 megawatts of power. If signed by the governor, it would take immediate effect but expire on November 1, 2027. Other states have proposed similar bans, but Maine’s is the first to pass […]

Ghost breaches: How AI-mediated narratives have become a new threat vector


A company wakes up to a news story claiming it has suffered a major data breach. The details are specific, technical and convincing. But the breach didn’t happen. No systems were compromised. No data was taken. A language model generated the entire story, filling in plausible details from scratch. And before the company can figure out what’s going on, a reporter at a reputable outlet picks up the story and requests comment. Within hours, the company is drafting statements and mobilizing its communications team to address a fictional event.

A second incident begins with something real. Years earlier, a company had suffered a genuine breach that received wide media coverage. The incident was investigated, resolved and closed. Then one of the outlets that originally reported on it redesigned its website. Old articles received new URLs and updated timestamps, and search engines re-indexed them as fresh content. AI-powered news aggregators picked up the signal and flagged it as a developing story. The company found itself fielding inquiries about an incident that had been resolved years before.

[Ed. note: The authors are withholding full specifics about the incidents because full disclosure could cause harm, yet CyberScoop confirmed with the authors that the incidents did in fact take place].

A third incident introduces yet another dimension. A cybersecurity publication ran a story about a business email compromise attack that cost a UK company close to a billion pounds. The article quoted a well-known security researcher, yet in reality, he had not spoken to the publication. AI generated the quotes, assigned them to him with full confidence, and the publication ran them as fact.

Together, these three cases expose a threat that most organizations have yet to prepare for. AI has developed the ability to fabricate convincing security incidents from nothing, complete with technical detail, named sources, and enough credibility to trigger full-scale crisis responses. Any organization that treats this as a distant or theoretical problem risks learning the hard way just how fast AI-generated fiction can become a real-world emergency.

The assumption that no longer holds

Cyber crisis response has always been built on a simple premise: something real happens, then you respond. That premise is breaking. AI systems now generate, amplify, and validate claims before security teams have confirmed anything. Once a narrative enters the ecosystem, it can be ingested into threat intelligence feeds, risk scoring platforms, and automated workflows. Fiction becomes signal.

For security teams, this creates a new class of false positive. Not a noisy alert from a misconfigured tool, but a fully formed external narrative that appears credible. A hallucinated breach can trigger internal investigations, executive escalation, and defensive actions. Time and resources get diverted toward disproving something that never happened.

Worse, it can influence real attacker behavior. Threat actors can weaponize fabricated breach narratives as pretext. Phishing emails referencing a “known incident” become more believable. Impersonation of IT or incident response teams becomes more effective. The narrative becomes part of the attack surface.

What this means for security teams

Security teams are used to monitoring for indicators of compromise. They now need to monitor for indicators of narrative. Open source intelligence pipelines are increasingly automated. If those pipelines ingest false information, downstream systems will act on it. That includes SIEM enrichment, third-party risk scoring, and even automated containment decisions in some environments.

The practical implication is that security teams need visibility into how their organization is being represented externally, not just what is happening internally. This is not traditional threat intelligence, but it behaves like it. Early detection changes outcomes.

There is also a need for tighter integration with communications. When a false narrative emerges, the technical reality and the external perception diverge. Both need to be managed in parallel.

What this means for communications teams

For communications teams, the timeline has collapsed. The first signal of a “breach” may not come from the SOC. It may come from a journalist, a customer, or an automated alert.

Silence is no longer neutral. If a narrative exists, AI systems will fill gaps with whatever information is available. That can reinforce inaccuracies with each iteration. Responses need to be designed for machine consumption as well as human audiences. Clear, declarative language. Verifiable facts. Structured statements that can be easily parsed and reused. The goal is to establish a competitive presence in the information supply chain.

Preparation becomes critical. Pre-approved language that can be deployed quickly. Established coordination with legal and security before something surfaces.

Shared implications

Both security and communications teams are now operating in the same environment, whether they recognize it or not. A hallucinated breach can trigger real operational disruption. Vendor relationships may be paused, connections to third-party systems may be severed, regulators may take interest, and markets may react. None of that requires an actual compromise. And this creates a feedback loop. External narratives drive internal actions. Internal actions, if visible, reinforce external narratives.

Breaking that loop requires speed, coordination, and clarity.

AI audits as a control mechanism

One of the most effective controls in this new environment is systematic AI auditing. Regularly testing how AI systems describe your organization, your security posture, and any alleged incidents. This provides visibility into what machines “believe” before that belief spreads. It allows organizations to identify and correct false narratives early, before they propagate into tooling, decision-making, and attacker behavior. It also highlights where accurate information needs to exist. Not just anywhere online, but in sources that AI systems prioritize.
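
A minimal version of such an audit can be automated. The sketch below assumes the OpenAI Python client and a hypothetical “Example Corp”; it periodically asks a model about the organization and flags any answer that mentions an incident for human review. The prompts, model name, and keywords are placeholders.

```python
from openai import OpenAI

# Minimal sketch of an "AI audit": ask a model what it believes about your
# organization and flag any claimed security incidents for verification.
client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

AUDIT_PROMPTS = [
    "Has Example Corp suffered any data breaches or security incidents? List them with dates.",
    "Summarize Example Corp's current cybersecurity posture and any known vulnerabilities.",
]

INCIDENT_KEYWORDS = ("breach", "ransomware", "leak", "compromised", "exfiltrat")

def audit_once(model: str = "gpt-4o-mini") -> list[str]:
    findings = []
    for prompt in AUDIT_PROMPTS:
        reply = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        answer = reply.choices[0].message.content or ""
        # Crude triage: anything that sounds like an incident goes to a human.
        if any(keyword in answer.lower() for keyword in INCIDENT_KEYWORDS):
            findings.append(f"Prompt: {prompt}\nModel said: {answer}")
    return findings

# Route anything flagged to communications and security for verification; the
# point is to see what machines "believe" before that belief spreads.
for finding in audit_once():
    print(finding)
```

A real program would run this on a schedule, across several models and aggregators, and track how the answers drift over time.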

The mindset shift

This marks a shift from incident response to narrative response. Security teams need to treat every alert as potentially fabricated. Communications teams need to prepare for narratives that form independently of what actually happened. Both must operate with the understanding that perception alone can trigger real consequences. In this environment, the ability to detect and respond to false narratives matters as much as the ability to detect and respond to actual breaches.

Mary Catherine Sullivan, who holds a Ph.D. in political science from Vanderbilt University, is a senior director of Data Science for Digital & Insights, within FTI’s Strategic Communications segment. She is a communications and data science leader specializing in message testing, audience research, digital communications analytics, and reputational risk assessment. As part of FTI Consulting’s Data Science team, she develops state-of-the-art artificial intelligence, natural language processing, machine learning, and statistical models to analyze media ecosystems, stakeholder discourse, and audience response—supporting informed, defensible decision-making for clients navigating complex reputational environments.

Brett Callow is a senior advisor in Cybersecurity and Data Privacy Communications at FTI Consulting. With more than two decades of understanding of cybersecurity policy and legislation, along with extensive cybersecurity communications experience, Brett’s expertise is widely recognized within the industry and by policymakers and the media. He has been involved in some of the most high-profile ransomware incidents and has participated in panels and policy-related discussions, including at the Office of the Director of National Intelligence and the Aspen Institute, and has served on the Advisory Board of the Royal United Services Institute’s Ransomware Harms project.

We’re only seeing the tip of the chip-smuggling iceberg

Last year, Nvidia CEO Jensen Huang repeatedly denied that China was obtaining America’s most advanced chips. ‘There’s no evidence of any AI chip diversion,’ he said, dismissing such reports on another occasion as ‘tall tales.’

Federal prosecutors would beg to differ. They’ve charged six men over the past three weeks with smuggling billions of dollars’ worth of AI chips to China. The indictments, while a tactical victory, are a warning of how pervasive the problem has become, thanks both to loopholes in federal law and a failure to support existing laws with serious enforcement.

Both Washington and Beijing have tried to reshape AI chip supply chains to bolster their respective national security agendas ahead of an expected trade-focused summit in May. While the United States has imposed export controls on advanced chips to cut off China’s military modernization efforts, China has pushed its firms to adopt domestically produced components to secure its self-reliance.

But neither side can fully avoid the Willie Sutton rule. Why smuggle chips? Because that’s where the profit is — particularly without enough resources dedicated to enforcement. 

A closed Chinese market grasping for more powerful alternatives to their own products offers a prime incentive for American firms to provide components to Beijing. Smuggling has also transformed an emerging network of data center infrastructure across Southeast Asia into a source of illicit computing power for U.S. adversaries.

The recent cases highlight these features in detail. In March, prosecutors charged three people connected to Super Micro Computer, an American computing firm, with smuggling an estimated $2.5 billion in chips to Chinese customers by shipping servers to the company’s offices in Taiwan and elsewhere in the region. In the meantime, the trio designed warehouses full of fake products to fool U.S. authorities. A week later, prosecutors unveiled charges against another three individuals accused of conspiring to ship advanced chips to China via business contacts in Thailand.

This string of prosecutions suggests that despite some high-profile successes, smuggling remains a pervasive issue across the industry. While this is partially a problem of professed ignorance, it can also be solved with a combination of policy, personnel, and policing. 

The United States must strengthen controls over emerging technologies at the factory floor rather than the airport gate. While Washington has strong export control laws, these regulations are intended to prevent components from leaving the country. They do not, however, block Chinese firms from purchasing these technologies inside the country.

This divergence in intentions produces difficulties for prosecution, as smugglers are often solely indicted for evading customs enforcement rather than charged with illicitly obtaining the components while still on American soil. However, Congress can close this loophole via stronger due diligence laws that require greater scrutiny of potential customers ahead of the customs enforcement process.

Washington is also in an arms race with AI firms to properly fund enforcement mechanisms, a race it is currently losing. While one smuggling case alone involved $2.5 billion, federal spending on policing export controls amounted to $122 million in all of 2025.

Moreover, this surge of investment in computer hardware is increasingly global in scope, magnifying the current shortage of federal agents responsible for enforcing export controls at the exact moment both allies and adversaries are seeking to purchase ever larger batches of advanced chips.

Even with stronger policies and more personnel, prosecuting AI chip smuggling must also remain a policing priority for federal law enforcement. While these cases are often complex due to a range of technical and jurisdictional challenges, as well as an array of shifting export control regimes, the FBI and the Commerce Department should remain committed to tracking and disrupting these smuggling networks.

It will be key for the administration to separate enforcement actions from its ongoing diplomatic exchanges with Beijing — dropping domestic prosecutions should not be used as a bargaining chip to deliver trade concessions during President Donald Trump’s upcoming travels to Beijing.

We need stronger enforcement so that the next billion-dollar smuggling case marks real progress, rather than exposing just how much slipped through.

Jack Burnham is a senior research analyst at the Foundation for Defense of Democracies’ China Program, focusing on China’s military, emerging technologies, and science and technology policy. Follow Jack on X @JackBurnham802.

Secretary Mullin must help finish the job: Urge the Senate to confirm Plankey

On March 23, the Senate confirmed Senator Markwayne Mullin as the next homeland security secretary, marking an important step in strengthening leadership during a critical moment for our nation’s security.

But only half of the job is done.

The Cybersecurity and Infrastructure Security Agency (CISA), the federal government’s main civilian cyber defense agency, still lacks a Senate-confirmed director. As global cyber threats escalate,  this prolonged leadership gap poses a growing national security risk.

As Executive Director of the National Technology Security Coalition (NTSC), I represent Chief Information Security Officers who are responsible for protecting the systems that sustain America’s economy and critical infrastructure. In every sector, from energy and healthcare to financial services, manufacturing, and transportation, there is a common concern: the threat landscape is growing more aggressive, and our defenses must stay ahead.

Our enemies are not waiting.

Since the start of the conflict with Iran, cybersecurity experts have reported increased malicious cyber activity targeting U.S. and allied systems. Iran-linked actors have shown their ability to disrupt operations and exploit vulnerabilities. Meanwhile, China continues its long-term effort to infiltrate American networks and position itself for possible disruption of critical infrastructure. Russia and its affiliated groups remain persistent, probing Western systems for weaknesses and exerting constant pressure.

This is the reality of modern conflict. Cyber operations have emerged as a primary domain of competition. In some cases, they can rival the effects of traditional military action, disrupting economies, communications, and public safety through code alone. 

Leadership is important in this environment.

CISA plays a key role in coordinating federal cyber defense, sharing threat intelligence with the private sector, and supporting state and local governments. It serves as the link between government and industry in protecting the nation’s digital infrastructure. Without a Senate-confirmed director, the agency’s ability to set priorities, coordinate efforts, and respond quickly is limited.

That challenge is growing more urgent. The President’s fiscal year 2027 budget plan proposes significant cuts to CISA’s funding. At a time when the agency faces increasing operational pressure, fewer resources make strong, steady leadership even more crucial.

This is the moment when Secretary Mullin’s leadership is critical.

As a former member of the Senate, Secretary Mullin understands the institution, its dynamics, and how to build consensus. He is uniquely positioned to connect with past colleagues and help advance Sean Plankey’s nomination as Director of CISA.

Plankey is highly qualified and widely respected in the cybersecurity community. His experience in the U.S. Coast Guard, at the Department of Energy securing the nation’s energy infrastructure, and in the private sector provides him with a clear understanding of both the threat landscape and the importance of public-private collaboration. At a time when coordination between government and industry is vital, these qualities are essential.

The Senate has already signaled that it takes cyberthreats seriously. It recently confirmed Lt. Gen. Joshua Rudd to lead U.S. Cyber Command and serve as director of the National Security Agency, ensuring strong leadership of America’s military cyber defense team.

Now it needs to do the same on the civilian side.

Confirming Plankey matters because the country’s main civilian cyber defense agency needs established leadership to combat adversaries who are already inside our networks, probing our systems, and preparing for the next phase of conflict.

The leadership gap at CISA has gone on long enough.

Secretary Mullin must engage. The Senate needs to act. And Sean Plankey should be confirmed without further delay.

America’s cyber defenses depend on it.

Chris Sullivan is the executive director of the National Technology Security Coalition, a nonprofit, non-partisan organization that serves as an advocacy voice for chief information security officers across the nation.

Don’t just fight fraud, hunt it

Our nation has entered a new fraud arms race fueled by AI.

With billions of dollars in fraud losses mounting in both the private and public sectors, it’s clear the old ways of deterring fraud aren’t working. That’s why we need a new playbook that starts with understanding how fraudsters operate, evolving our defenses, and shifting to a proactive posture that doesn’t just fight fraud but actively hunts it down. 

In the AI era, treating fraud as just a front-door problem won’t work. This moment requires industry, government, and consumers to work together, reduce silos, and share real-time intelligence. The goal is to move beyond reactive detection by understanding the lifecycle of a threat—from its formation to its spread—so we can intervene before it establishes a foothold.

For decades, fraud has been treated like a series of isolated incidents. This false assumption has underpinned nearly every past effort to crack down on it. Those efforts, while well-intentioned, have missed the mark. 

Now, in light of the Trump Administration’s Cyber Strategy for America and accompanying executive order, it’s critical to understand the modern fraud landscape and the central role that digital identity exploitation plays within it.

New research from Socure reveals just how dramatically the landscape is evolving. 

Fraud has become industrialized, with organized crime syndicates running operations that are global, systemic, automated, and powered by AI. No organization, service, or program is safe. Fraudsters target government programs, banks, fintech platforms, telecom companies, and more, blurring the lines between public sector fraud, financial crime, and cybercrime.

It used to be that fraud could be detected through the reuse of identity elements across multiple applications: the same email, device, phone number, or IP address used over and over. 

But the data is clear: these links are declining fast. Today’s sophisticated fraudsters are now engineering their attacks to avoid traditional fraud detection patterns. Our research demonstrates that emails will be completely unique within fraud populations as soon as 2027, so we won’t be able to rely on email to identify patterns.

Speed is another defining feature of modern identity fraud. Fraudsters use AI to create clean, durable, synthetic and stolen identities at scale. In one observed campaign, 24,148 synthetic identities were built and launched in under a month, with many attacks occurring within 48 hours. What once took weeks or even months can now be completed in days. 

The rapid rise of identity farms is another indicator of the industrialization of fraud. Identity farms are operated by crime rings that systematically create and mature synthetic or stolen identities over time until they closely resemble legitimate ones. Matured identities are used to open bank, credit, and money-movement accounts, siphon government benefits, launder funds, and more. These identity farms focus on durable identities that can bypass traditional verification controls.

So what should we do? Simply put, we must go on offense. 

This means treating identity as critical infrastructure and implementing strategies that track how identities were created before the moment of application; expanding signals monitoring to include elements like residential proxies, ISP behavior, and domain registration activity; evaluating velocity and orchestration in real-time; and treating continuous measurement, rapid model iteration, and cross-industry intelligence as core capabilities.
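To make the velocity-and-orchestration point concrete, here is a minimal sketch, in Python, of flagging bursts of applications that share an infrastructure signal inside a short window. The event fields, window, and threshold are illustrative assumptions, not any vendor’s production model.

```python
from collections import defaultdict
from datetime import timedelta

# Illustrative velocity check: flag infrastructure signals (IP subnet, device)
# shared by an unusually dense burst of applications. Thresholds are assumptions.
WINDOW = timedelta(hours=48)
BURST_THRESHOLD = 25

def velocity_flags(applications):
    """applications: dicts with 'timestamp' (datetime), 'ip_subnet', 'device_id'."""
    buckets = defaultdict(list)
    for app in applications:
        buckets[("subnet", app["ip_subnet"])].append(app["timestamp"])
        buckets[("device", app["device_id"])].append(app["timestamp"])

    flagged = []
    for signal, times in buckets.items():
        times.sort()
        start = 0
        for end in range(len(times)):
            # shrink the window until it spans at most 48 hours
            while times[end] - times[start] > WINDOW:
                start += 1
            if end - start + 1 >= BURST_THRESHOLD:
                flagged.append(signal)
                break
    return flagged
```

In practice a signal like this would be one input among many, scored alongside device, behavioral, and network telemetry rather than used as a hard rule.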

Additionally, given the rapid scaling of fraud, we need more analysis of the complete ecosystem, including dynamic factors like device information, digital footprints, and behavioral biometrics so organizations can effectively distinguish genuine humans from machines. Ultimately, this layered and interconnected approach makes it significantly harder for malicious actors to recreate or steal identities at scale.

Fraud is no longer a series of isolated acts. It is a coordinated, global enterprise built on the exploitation of identity. Until our efforts reflect this new reality, we will continue to fight an imminent and ongoing threat with outdated tools and fall further behind. 

Now is the time to make this strategic shift and finally put fraudsters on their heels. 

Mike Cook serves as head of fraud insights at Socure, the identity and risk platform for the AI age.

The post Don’t just fight fraud, hunt it appeared first on CyberScoop.

It’s time to get serious about post-quantum security. Here’s where to start.

After decades of development, quantum computing is now becoming increasingly available for advanced scientific and commercial use. The potential marvels range from accelerating drug discovery and materials science, to optimizing complex logistics and financial modeling.

But there’s a paradox to this trend: Quantum computing also poses a growing threat to data security.

The risk is that the algorithms and protocols currently used to secure devices, applications and computer systems could eventually be broken by malicious actors using quantum computing, compromising even the strongest security measures. By some estimates, widely used encryption standards such as RSA and ECC could be cracked by quantum computers as soon as 2029—a doomsday known as “Q-Day,” when current security standards would be rendered ineffective by quantum computing’s number-calculating prowess.

The possibility that quantum computing could break today’s data protection protocols is prompting chief security officers and chief technology officers to ramp up countermeasures. They’re doing it with post-quantum cryptography (PQC), a niche area of cybersecurity that is rising in priority across the business world. Lack of preparedness could be costly, with one report putting the potential U.S. economic cost of a quantum attack at more than $3 trillion. Even before that potential calamity, the current average cost of a data breach is upwards of $10 million, and that number will only increase commensurate to the scale of a quantum-induced breach.

That is why the quantum threat should not be treated as a concern only for forward-thinking executives. It must become a board-level issue for every enterprise. Organizations should launch a comprehensive PQC initiative that builds enterprise-wide awareness and updates digital systems and data assets to be resilient against quantum attacks.

Waiting until Q-Day would be a mistake because no one will know when it occurs. It probably will not arrive with press releases or product announcements. Instead, it may unfold quietly as attackers try to maximize what they can steal before anyone notices. The reality is that sensitive data is already being stolen and stored away so it can be decrypted once Q-Day arrives, an attack referred to as “harvest now, decrypt later.” Security pros need to give this immediate attention, even if the ultimate threat appears to be a few years away.

Quantum-proofing data at scale

Security teams are usually focused on immediate threats, but they still have a window of opportunity to prepare for Q-Day, as long as they start now. 

One interim measure underway is the transition to more robust versions of the digital certificates and keys that are already pervasive in business and everyday life. Such certificates, which act as identity credentials, are used to authenticate billions of users, devices, documents and other forms of communications and endpoints. The certificates contain cryptographic keys. Security teams are phasing in 47-day certificates, whose keys are designed to expire and be replaced within 47 days, much more frequently than the current generation. It’s a step in the right direction, but not enough.
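As certificate lifetimes shrink, rotation has to be automated, and automation starts with knowing how long each certificate has left. The sketch below, assuming Python’s standard library and an illustrative hostname, reports the days remaining on a server’s TLS certificate; a real program would feed this into a renewal pipeline rather than print it.

```python
import socket
import ssl
import time

def days_until_expiry(host: str, port: int = 443) -> float:
    """Return days remaining on the TLS certificate presented by host:port."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    return (expires - time.time()) / 86400

if __name__ == "__main__":
    # "example.com" is a placeholder; point this at your own endpoints.
    print(f"{days_until_expiry('example.com'):.1f} days remaining")
```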

Establishing a hardened PQC defense requires much more than a standard software patch or upgrade to the public key infrastructure (PKI) used nearly everywhere to manage digital certificates and encrypt data. An enterprise-wide PQC strategy must be adopted and implemented at scale.

Consider the rapid rise of agentic AI, where organizations may need to assign digital identities to thousands or even millions of AI agents. That will require a level of authentication that goes well beyond existing infrastructure.

These projects will be led by the CISO but planning and execution should include other business leaders because post-quantum security must reach every part of the organization’s digital environment. Boards also need to be involved, given the governance stakes and the significant capital investment required. 

Developing a multi-year, multi-pronged strategy

Organizations in regulated industries—banking, healthcare and government, for example—are generally a step ahead in bracing for the post-quantum threat. Regardless of industry, though, few are fully prepared because readiness requires a detailed picture of an organization’s end-to-end data and security landscape.

In my experience, that holistic view is a rarity. For CISOs and their line-of-business colleagues, a good starting point is creating a comprehensive inventory of systems and data across the enterprise, then prioritizing what needs to be safeguarded.

Another important step is to begin testing and adopting the latest quantum-resistant algorithms and protocols that have been standardized by NIST. A growing range of PKI products and platforms support those specifications. That’s essential because the only way enterprises will be able to orchestrate, monitor and manage the scope of deployment is through automation.

Such updates are vital, but this isn’t a matter of simply replacing pre-quantum specs with newer ones. Because PQC will be a multi-year undertaking, organizations must bridge the gap between old and new. The best strategy for some will be a hybrid approach that combines classical cryptography and next-gen algorithms, though standardization remains a work in progress. Other organizations are driving toward a “pure” or unblended post-quantum model.
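For teams exploring the hybrid route, the sketch below shows the general shape of the idea in Python: derive the session key from both a classical X25519 exchange and a post-quantum KEM secret, so the key survives a break of either primitive. It uses the cryptography library for the classical half; the ML-KEM output is stubbed with random bytes as a stated assumption, since library support for the NIST algorithms is still uneven.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Classical half of the hybrid exchange (the peer key is generated locally
# here only to keep the sketch self-contained).
ours = X25519PrivateKey.generate()
theirs = X25519PrivateKey.generate()
classical_secret = ours.exchange(theirs.public_key())

# Placeholder for the post-quantum half: in a real deployment this would be
# the shared secret from an ML-KEM encapsulation, not random bytes.
pq_secret = os.urandom(32)

# Bind both secrets into one session key; breaking either primitive alone
# does not reveal the key.
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"hybrid-x25519-plus-mlkem-demo",
).derive(classical_secret + pq_secret)
```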

As for those harvest attacks, the best defense is straightforward: Encrypt your most sensitive long-lived data with quantum-resistant algorithms ASAP.

PQC is a shared responsibility

Unfortunately, there is no finish line in the race to quantum-era security. And even if an organization locks down its systems against emerging threats, there’s no guarantee that customers and business partners will do the same.

Many vulnerabilities will remain, which is why the business case for PQC includes protecting customer data and safeguarding reputation and brand trust as digital threats evolve. Even today, a major breach can cost millions and inflict lasting damage on a corporate brand.

Quantum computing promises to bring many new capabilities to business and society—from transforming supply chain optimization and risk analysis, to enabling breakthrough discoveries in medicine and climate science. But the potential risks are just as substantial. After years of watching and waiting for quantum, business leaders have little choice but to take action.

Chris Hickman is the chief security officer of Keyfactor, a leading provider of quantum-safe security solutions. 

The post It’s time to get serious about post-quantum security. Here’s where to start. appeared first on CyberScoop.

Washington is right: Cybercrime is organized crime. Now we need to shut down the business model

The recently released executive order targeting cybercrime, fraud, and predatory schemes uses language the federal government has often avoided. Now, for the first time, the Trump administration is echoing what the cybersecurity industry has been shouting for years: cyber-enabled fraud is a product of transnational organized crime.

That distinction matters because organized crime requires an organized response.

Cybercrime is now the world’s fastest-growing criminal economy, built on stealing from everyday people. It is no longer a loose collection of hoodie-wearing hackers in basements or misfits trading malware in online forums. It is a mature global industry operating at scale. There has not been a transfer of wealth of this magnitude since the era of pillaging empires. We have just gotten so used to it that it feels like background noise.

Modern cybercrime groups look less like street gangs and more like corporations. They run structured operations, complete with HR departments, training pipelines, performance metrics, and technology stacks that rival most enterprise companies. Their attackers don’t rely on sophisticated exploits — they think like expert investigators, systematically probing for weaknesses, exploiting psychological pressure, manipulating insiders, and using deception to move through gaps that defenders left open. They operate around the clock, in every time zone, and increasingly use AI to automate attacks at a scale that once required highly skilled operators.

Worse yet is that many of these operations rely on forced labor. Scam compounds in Southeast Asia run like factory floors, with rows of trafficked workers carrying out romance scams, cryptocurrency fraud, and impersonation schemes under threat of violence.

Their goal is to make fraud faster and more profitable. The result is a global criminal ecosystem that extends far beyond online scams. It fuels human trafficking, weapons smuggling, political corruption, organ trafficking, and even nuclear programs.

If the federal government is ready to recognize what the industry has known — that cybercrime truly operates like an organized global industry — then responding to it solely through traditional law enforcement is not enough. The question goes beyond how governments apply sanctions, coordinate investigations, or pressure jurisdictions that harbor these operations. The greater question is whether the private sector is willing to help dismantle the infrastructure that allows this industry to thrive.

One word changes everything

I want to be specific about why this executive order is different, because the language is not accidental.

The order doesn’t just call these groups “hackers” or “organized crime.” It calls them transnational criminal organizations (TCOs). That word carries legal and operational weight that most coverage has glossed over. Transnational is the jurisdictional framing that authorizes an entirely different class of response. It is the same threshold that moves a case from local law enforcement to federal jurisdiction and beyond.

Pair that with what follows – “law enforcement, diplomacy, and potential offensive actions” – and you are reading something that goes well beyond a policy memo. Notice the sequence: diplomacy before offensive action is proportionality doctrine. But the administration did not rule out offensive action. The document also calls for deploying the “full suite of U.S. government defensive and offensive cyber operations” and uses the word “shape” as its first pillar of action. In military doctrine, shaping an adversary’s behavior does not mean gentle persuasion. It means force is part of the calculus.

This is not the language of a consumer protection policy. Whoever wrote this has studied the opposition.

An organized threat demands an organized response

The executive order draws a line in the sand: cybercrime has outgrown its origins as a consumer protection issue. It’s now a fundamental threat to economic stability and national security. But tackling an industry operating at this scale requires more than government action alone. The order’s answer is to mobilize the private sector – giving companies the green light to identify and disrupt adversary networks.

That framing matters.

The private sector sees the machinery of cybercrime every day. Security vendors, major platforms, and infrastructure providers spot the command-and-control servers, malicious domains, and payment pipelines that keep these operations moving. Too often, that intelligence is used only to defend commercial interests, when in reality, it should also be used to disrupt the networks behind the attacks. When criminal groups lose core infrastructure, they have to rebuild. That costs time. That costs money. That creates pressure.

At the same time, the order puts a question squarely before the private sector: How far is it willing to go, and under what terms? I spent my career believing “minimal force” matters. Precise, proportionate action prevents escalation and avoids creating cascading problems. As we move beyond a defense-only approach, those principles matter more than ever.

There is another question that sits underneath all of this: How far does “potential offensive actions” actually go? Does it stop at cyberspace? Financial sanctions? Asked bluntly, “Will leaders and shareholders know whether providing threat intelligence ends with a measured network take-down or an all-out drone strike on the fraudulent call center?”

Organizations need to fix the security weaknesses criminals are exploiting for profit. Most attacks in 2026 do not succeed because criminals are brilliant. They succeed because the basics are missing. No multifactor authentication. Weak identity controls. Unpatched vulnerabilities left open for months. Criminals don’t care about your industry or company size. They go where it’s easiest.

When organizations ignore basic security controls, they are doing more than accepting risk. They’re subsidizing the criminal infrastructure that exploits those gaps.

Governments must keep pressure on nations that harbor these operations. Large-scale cybercrime thrives where enforcement is weak or non-existent. The order specifically calls out “nations that tolerate predatory activity”—a signal that safe havens won’t be ignored. Stronger coordination across governments, law enforcement, and private industry can make it much harder for criminals to operate at scale.

The order also targets “foreign TCOs and associated networks,” with “associated networks” being a deliberately broad phrase. Defining who qualifies will be critical. Draw the lines too narrowly and the policy won’t work. Too broadly and you risk dangerous escalation.

Simply put, cybercriminal groups are disciplined because discipline pays. Disrupting them will require the same. It will demand pressure on countries that act as safe havens. It will take dismantling the infrastructure behind these schemes. It will require better basic security across every organization that criminals target.

The executive order is right: cybercrime is organized. It is industrial. It is ruthless. For the first time in a long time, the response looks like it might be, too. Whether the government, private sector, and public can align around what this actually demands, and what it risks, remains an unanswered question.

After years of watching policy documents gather dust while victim numbers grow, I will take action over perfection every time.

Kyle Hanslovan is a former NSA cyberwarfare operator and CEO of Huntress Labs.

The post Washington is right: Cybercrime is organized crime. Now we need to shut down the business model appeared first on CyberScoop.

If consequences matter, they should apply to vendors, too

Washington has rediscovered consequences. Just not consistently.

The March 6 executive order rests on a simple, correct idea: cyber-enabled fraud persists because it is profitable, scalable, and too often tolerated. So the government’s answer is to raise the cost. More coordination. More disruption. More prosecutions. More diplomatic pressure on the states that shelter these operations.

Good.

But weeks ago, an OMB Memo rescinded earlier federal software supply chain memos issued during the Biden administration. In practice, that pulled back from the prior attestation-centered model and made tools like the Secure Software Development Attestation Form and SBOM requests optional rather than durable expectations.

Put plainly, we are getting tougher on the people exploiting digital systems while getting softer on the conditions that make those systems so easy to exploit.

The executive order gets something important right. Cyber-enabled fraud is not a collection of random online annoyances. It is an industrialized form of predation: ransomware, phishing, impersonation, sextortion, and financial fraud run as repeatable business models, often transnational and sometimes protected by permissive states. The order responds with a more centralized federal posture built around disruption, coordination, intelligence sharing, prosecution, resilience, and international pressure.

That is directionally correct. Criminal ecosystems do not retreat because we publish better guidance. They retreat when the cost of doing business rises.

But then we arrive at software.

The critique of the old federal assurance regime is not entirely wrong. Compliance can become theater. Bureaucracies are very good at turning legitimate security goals into rituals of form collection and checkbox management. Some skepticism was warranted. OMB says as much explicitly, arguing the prior model became burdensome and prioritized compliance over genuine security investment.

Still, the failure of bad compliance is not proof that accountability itself was the problem.

That is where the logic breaks. The administration is clearly willing to believe that criminal actors respond to deterrence. It is willing to use prosecutions, sanctions, visa restrictions, and coordinated pressure downstream. But upstream, where insecure technology shapes the terrain those criminals exploit, the theory suddenly changes. There, we are told to trust discretion. Local judgment. Flexible, risk-based decisions.

Sometimes that is wisdom. Often it is just a more elegant way of saying no one wants a hard requirement.

This is also why my own position has not changed. In a post I wrote in 2024, I argued that the industry did not need softer expectations or another round of polite encouragement. It needed more concrete action and consequences strong enough to change incentives. The problem was never that we were demanding too much accountability. The problem was that insecure software remained too cheap to ship.

That is the deeper issue. Cybercrime at scale does not thrive only because criminals exist. It thrives because the environment rewards them. Weak identity systems, brittle software, sprawling dependency chains, poor visibility, and diffuse accountability all make predation cheaper. The people who ship avoidable risk rarely absorb the full cost of it. Everyone else does.

So these two policy moves, taken together, reveal something uncomfortable. The government seems to believe in consequences for cybercriminals, but not quite in consequences for insecure production. It wants deterrence for the scammer, but discretion for the supplier.

A coherent cyber strategy would do both. It would aggressively disrupt criminal networks and also create meaningful pressure for secure-by-design production and procurement. It would recognize that punishing attackers matters, but so does changing the terrain that keeps making attack profitable.

The administration is right about one thing: cybercrime will not shrink until the costs of predation rise.

The unanswered question is why that logic should stop at the edge of the scam center.

Brian Fox is the co-founder and CTO of Sonatype.

The post If consequences matter, they should apply to vendors, too appeared first on CyberScoop.

No, it’s not ‘unnecessarily burdensome’ to control your own data

According to a recent report, the State Department sent a cable urging U.S. diplomats to oppose international data sovereignty regulations like GDPR, characterizing these guardrails as “unnecessarily burdensome.” 

In the cable, the State Department claims that data sovereignty regulations “disrupt global data flows, increase costs and cybersecurity risks, limit Artificial Intelligence (AI) and cloud services, and expand government control in ways that can undermine civil liberties and enable censorship.”

Underpinning this argument is both a legitimate concern and a critical misconception.

The truth is that actual data sovereignty is technical, not territorial. 

Data localization is a blunt instrument trying to solve a sophisticated problem. Mandating that data stay within geographic boundaries doesn’t actually ensure that data owners retain control over how their information is accessed, used, or shared. People move; endpoints move; data must move.

European regulators have already defined what digital sovereignty actually requires. Specifically, in the aftermath of Schrems II, the European Data Protection Board made clear that sovereignty is preserved when data is strongly encrypted and the encryption keys remain solely under the control of the data owner in Europe. That clarity is often lost in broader geopolitical debates. 

True data sovereignty requires governments, enterprises, and citizens to retain cryptographic authority over who can access their information, regardless of where it is processed. Forcing data to sit inside national borders accomplishes little if foreign vendors still hold the keys. Sovereignty is fundamentally a technical challenge: it depends on controlling access through encryption and authentication, not simply controlling physical location.

There is a widespread belief that data sovereignty is disruptive to innovation, commerce, and national security. This is a misconception.

The memo presents a false choice: That we must either accept unfettered cross-border data flows with minimal protections in place for the data owner, or implement burdensome localization requirements that stifle innovation and collaboration.

This is simply not true, and the rise of data-centric security proves it: From the U.S., to Five Eyes nations, to the Indo-Pacific, security leaders are embracing this model. Rather than focusing efforts solely on building a strong perimeter boundary, controls and policies must instead follow the data itself, wherever it moves — providing more resilient and contextual security for the data itself. This is the central pillar of the DoW’s own Zero Trust strategy, and the model for agencies across the U.S. federal government and beyond. 

Even the Department of State’s own ITAR (the U.S. International Traffic in Arms Regulations) treat sensitive munitions data with location-specific requirements. There are good reasons for some types of sensitive information to be shielded from external eyes.

Context matters. We should not dismantle well-established data sovereignty standards without clear technical alternatives in place. Instead, we need to evaluate how to more effectively protect and govern sensitive data, without impeding the free flow of information. 

Data-centric security fortifies data sovereignty and liberates secure data flows. 

By shifting the focus from walls — border-specific protections, localization, and perimeters — to the data itself, you can fundamentally transform global data flows. When data is actually governed, tagged, and understood, it can move safely, through trusted channels, to achieve mission success.

In a data-centric security environment, a government agency can leverage cloud services from any provider while maintaining sovereign control over sensitive information by managing and hosting its own encryption keys, which also provides resilience against third-party breaches at cloud service providers or other partners.
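A minimal sketch of that arrangement, assuming Python and the cryptography library: the agency generates and holds the key, binds a governance tag to each record as authenticated data, and hands the provider only ciphertext. Key custody, rotation, and attribute-based access policies are omitted here.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# The key is generated and held by the data owner, never by the cloud provider.
key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(key)

def protect(record: bytes, policy_tag: bytes) -> bytes:
    """Encrypt a record; the policy tag is bound as authenticated data."""
    nonce = os.urandom(12)
    return nonce + aead.encrypt(nonce, record, policy_tag)

def unprotect(blob: bytes, policy_tag: bytes) -> bytes:
    """Decrypt only if the ciphertext and policy tag are intact."""
    nonce, ciphertext = blob[:12], blob[12:]
    return aead.decrypt(nonce, ciphertext, policy_tag)
```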

This isn’t theoretical. Modern data-centric security architectures are in production today, with open standards like the Trusted Data Format enabling platform-agnostic, global data sharing among partners. It’s the antithesis of a data silo, allowing data to travel under very specific conditions and with governance attached to each data object itself. The U.K.’s Operation Highmast is a prime example of the success that comes from dynamic, intelligent data sharing among trusted partners. 

In an era defined by AI acceleration and geopolitical competition, sovereignty and interoperability must be engineered to reinforce one another — not framed as tradeoffs.

Angel Smith is the president of global public sector for Virtru.

The post No, it’s not ‘unnecessarily burdensome’ to control your own data appeared first on CyberScoop.

We’ve seen ransomware cost American lives. Here’s what it will actually take to stop it.

Flights canceled. Emergency rooms shut down. Centuries-old companies shuttered.

Ransomware and other similar cyberattacks have become so routine that even those serious human and economic consequences are often overlooked or easily forgotten.

This lack of focus is dangerous.

As former leaders of FBI and CISA cyber units, we’ve seen cybercrime ripple through communities – disrupting critical services, destroying jobs, and sometimes costing lives. Today’s ransomware numbers tell a stark story. The Department of Homeland Security reported more than 5,600 publicly disclosed ransomware attacks worldwide in 2024, nearly half of them in the United States. The FBI found that ransomware incidents increased nearly nine percent year over year, with almost half targeting critical infrastructure. Attacks on these organizations pose the greatest threat to national security and public safety.

Despite this trend, we’re cautiously optimistic about the administration’s new National Cyber Strategy. It focuses on protecting critical infrastructure and stopping ransomware and cybercrime—threats it correctly elevates to top-tier national security threats.

But success requires sustained action across government and industry. Adversaries are evolving faster than defenses: ransomware attacks now average $2.73 million per incident, driving annual losses into the billions. Attackers have compressed their operations from weeks to hours, disabling Endpoint Detection and Response (EDR) tools and leaving defenders almost no time to stop an attack.

Basic cyber hygiene still matters. But it’s no longer sufficient. Attackers steal valid credentials, exploit known vulnerabilities, disable tools, and move laterally at machine speed, now accelerated by AI. They need a stunningly low level of technical expertise to do so, and AI tools are increasing the speed and scale of their actions.

Our defenses must keep pace with evolving threats. Protecting national security requires immediate action. Automating cyber threat information sharing offers clear benefits, but government agencies need significant structural and technological upgrades before they can effectively share data. This requires sustained investment and oversight.

The government does not have to do this alone. Industry and academia possess tools that could mean the difference between progress and revisiting this same conversation four, eight, or twelve years from now. Forums like CISA’s Joint Cyber Defense Collaborative (JCDC), the National Cyber Investigative Joint Task Force (NCIJTF), and NSA’s Cyber Collaboration Center (CCC) have demonstrated that information fusion and joint operational planning can work. But overlapping missions and unclear playbooks leave companies guessing what to share, when to share it, and with whom. These forums and underlying collaboration mechanisms must be resourced, deconflicted, and made predictable.

Despite the noble efforts of government agencies to share behind-the-scenes and interact with industry with one voice, the current structure remains fragile and dependent on personal relationships. We simply cannot afford this fragility or inefficiency, particularly in an era of constrained government cyber resources and escalating threats.

Effective protection of critical infrastructure requires focused collaboration. The administration’s strategy rightly emphasizes this, but narrowing this focus will not be easy. For years, the government has tried to cover sixteen sectors and hundreds of thousands of entities equally—an impossible task. Equal attention for all is unrealistic. Looking back, we wish we had prioritized more strategically during our time in government.

Prioritization is politically difficult, but operationally necessary. When everything is critical, nothing truly is. For the most important critical infrastructure, we must focus on resilience—ensuring systems can withstand attacks and recover quickly—rather than assuming we can prevent every breach.

The government can take concrete steps now to disrupt the ransomware ecosystem. Ransomware has cost American lives; designating certain ransomware actors and their enablers as Foreign Terrorist Organizations could unlock more powerful sanctions, diplomatic action, and intelligence operations. Sensible regulation holding cryptocurrency exchanges accountable for knowingly laundering ransomware proceeds could weaken criminal business models while strengthening legitimate digital asset markets in the U.S. and allied nations.

The technology and cybersecurity industry has responsibilities, as well. Industry must share actionable intelligence where legally permitted, pressure-test government programs with candid feedback, and support reauthorization of the Cybersecurity Information Sharing Act of 2015.

We all must do our part. Every day that passes without us confronting these critical questions is a gift to our adversaries. This will only be exacerbated by advancements in AI. We are hopeful that the release of this administration’s National Cyber Strategy will spark much-needed debate and decisions about the role of the government and industry in advancing our nation’s cybersecurity and resilience.

Cynthia Kaiser is senior vice president of Halcyon’s Ransomware Research Center. She was formerly Deputy Director of the FBI’s cyber division.

Matt Hartman serves as chief strategy officer at Merlin Group, where he is focused on identifying, accelerating, and scaling the delivery of transformative cyber technologies to the public sector and critical industries. Prior to this role, Matt spent the last five years serving as the senior career cybersecurity official at the Cybersecurity and Infrastructure Security Agency (CISA) within the Department of Homeland Security.

The post We’ve seen ransomware cost American lives. Here’s what it will actually take to stop it. appeared first on CyberScoop.

How ‘silent probing’ can make your security playbook a liability

For years, cyberattacks followed a familiar pattern: reconnaissance, exploitation, persistence, impact. Defenders built their strategies around that cycle, patching vulnerabilities, monitoring indicators, and working to reduce dwell time. But a quieter shift is underway.

Today’s most sophisticated adversaries are using AI to study how organizations defend themselves. They run what we call “silent probing campaigns”: long-term, subtle operations designed to map how a team detects threats, escalates issues, and responds under pressure. These campaigns focus on learning the defender’s habits, workflow and decision points so attackers can time and tailor follow-on actions to evade detection. This reframes cyber risk, turning it from a technical problem into a behavioral one.

From finding vulnerabilities to studying defenders

Historically, attackers focused solely on technical gaps, whether from an unpatched server, exposed credentials or a misconfigured cloud. The objective was to find the weakness and exploit it before someone else did. Silent probing adds a new “learning” phase to that playbook.

Attackers study how an organization responds as carefully as they study its systems. Using AI over weeks or months, they quietly measure detection and escalation speed, learn which alerts get ignored, and infer patterns like shift coverage, alert fatigue, and process bottlenecks.

Over time, these subtle probes generate data that feeds adaptive models. Those models help attackers learn what triggers a response, how quickly teams react, and where detection tends to falter. This means when a major attack finally unfolds, it has already been optimized against the organization’s real defensive patterns.

At the same time, organizations are embedding AI into their security operations, from automated triage to autonomous response orchestration. However, this shift introduces a new risk: the very systems designed to defend the enterprise can become part of the attack surface.

As organizations rely more heavily on AI to run their security operations, these systems need wide visibility and access to work properly. They often connect to cloud platforms, identity systems, and endpoint controls so they can detect threats and act quickly. But that level of access creates a substantial amount of power. If one of these AI-driven systems is compromised or manipulated, it doesn’t just expose a single tool, it can give an attacker broad reach across the environment. In that scenario, the technology designed to protect the organization can accelerate the damage.

Automation increases risk when AI systems can take action without human approval, such as isolating devices, resetting passwords, or changing configurations. Clear limits and guardrails are required, since manipulated inputs or faulty interpretations can trigger rapid, wide-reaching disruption. Risk depends on the system’s authority and the controls around it.

AI hallucination in security operations can cause systems to misidentify threats, isolate the wrong assets or overlook the real threat. Repeated errors can erode trust in the system, or worse, create a false sense of confidence in its automated decisions. This affects judgment, decision-making, and how risk is understood in real time.

The risk of predictable defenses

Silent probing reveals how predictable an organization’s defenses are. Attackers are now looking for patterns in defensive behavior: response consistency across shifts, routinely ignored alerts, predictable incident response steps, and whether noisy tools accidentally hide slow-moving threats.

When defensive behavior becomes visible and predictable, it can be studied and exploited. Organizations need to understand how their defenses appear from the outside and assess their behavioral exposure the same way red teams test technical controls. This includes understanding how easily an outsider can identify detection thresholds, how clearly response times can be measured, and how much operational routine can be learned through quiet, repeated probing. The key question is whether patterns of response are unintentionally teaching attackers how to succeed.

Readiness in the age of AI

As AI plays a bigger role in security operations, oversight has to evolve alongside it. Strong governance starts with clearly defining what AI systems are allowed to do. Organizations need to be explicit about which actions can happen automatically and which require human approval. Conversely, least-privilege principles should apply not only to people, but also to machines. AI-driven tools should be tested regularly, reviewed for drift, bias, and inaccurate conclusions. Wherever possible, detection and response authority should be separated to avoid concentrating too much power in a single system. Centralization without control may feel efficient, but in practice, it creates fragility.
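One way to make that explicit-allow posture concrete is a small action gate along the lines of the hedged sketch below. The action names and tiers are hypothetical; a production system would also record every decision for audit.

```python
# Actions the AI system may take on its own versus those that need a human.
AUTONOMOUS = {"open_ticket", "quarantine_file"}
NEEDS_HUMAN = {"isolate_host", "reset_password", "change_firewall_rule"}

def authorize(action: str, approved_by: str | None = None) -> bool:
    """Default-deny gate: unlisted actions are refused outright."""
    if action in AUTONOMOUS:
        return True
    if action in NEEDS_HUMAN:
        return approved_by is not None  # require a named human approver
    return False
```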

Still, policies and guardrails alone are not enough. As attackers use AI to understand defenders, defenders must sharpen their own ability to think like their adversaries. Security professionals need to evaluate how their tools perform and how they might be observed, manipulated, or misled. This requires questioning automated decisions, stepping in when necessary, and investigating anomalies—especially when the system appears confident in its conclusions.

This is why hands-on simulations and AI-focused red teaming matter. Teams need experience in environments that simulate adaptive adversaries who adjust their tactics based on defensive responses, not just textbook attack scenarios. They need to understand AI’s detection capabilities and the risks introduced by poor configurations or blind trust. The gap organizations face has become more cognitive than technological, and closing that gap requires continuous, measurable skill development, including AI literacy, offensive AI awareness, and the ability to critically evaluate automated outputs.

In an AI-first era, resilience depends on an organization defending itself as if it is being watched. Silent probing allows attackers to understand detection thresholds, escalation speed, and response consistency over weeks or months. This quiet observation can now serve as a precursor to a major attack on an enterprise.

Security leaders need to focus on what their organizations reveal through day-to-day defensive behavior. When attackers can observe, learn, and adapt over time, predictable responses become a liability because they are easy to study and exploit.

Dimitrios Bougioukas is senior vice president of training at Hack The Box, where he leads the development of advanced training initiatives and certifications that equip cybersecurity professionals worldwide with mission-ready skills.

The post How ‘silent probing’ can make your security playbook a liability appeared first on CyberScoop.

Why ‘secure-by-design’ systems are non-negotiable in the AI era

Moody’s recently reported that global investment in data centers will surpass $3 trillion over the next five years, driven by AI capacity growth and hyperscaler demand. As big tech companies, banks, and institutional investors pour capital into these projects, data center developers and their financial sponsors must prioritize cybersecurity.

Moody’s said that data center investments made by the six largest U.S. cloud computing providers  — Microsoft, Amazon, Alphabet, Oracle, Meta, and CoreWeave — approached $400 billion last year. The firm anticipates that annual global investment will grow by $200 billion over the next two years.

Real estate firm Jones Lang LaSalle forecasted similar investment flows in a separate report published earlier this year, projecting that “nearly 100 GW of new data centers will be added between 2026 and 2030, doubling global capacity.” JLL said that this infrastructure investment “supercycle,” one of the largest in the modern era, will result in $1.2 trillion in real estate asset value creation and the need for roughly $870 billion of new debt financing.

In concert, these reports reflect a growing reality: Data centers are strategic, interconnected infrastructure supporting our manufacturing, national security, and communication systems. Cyber disruptions, whether through ransomware, supply-chain compromise, or operational technology (OT) attacks, can cascade beyond a single facility, threatening grid stability, cloud services, economic activity, and public safety.

Data centers are now critical hubs of energy demand and digital dependency. Their cybersecurity posture is directly tied to the resilience of the industrial and energy ecosystem that supports them. For investors and stakeholders, cybersecurity should be fundamental to asset value and risk management. Strong cybersecurity directly affects uptime guarantees, regulatory exposure, insurance coverage, financing terms, and long-term valuation.

The most significant cybersecurity risks now center on three critical areas: data center-grid convergence, supply-chain vulnerabilities, and secure-by-design considerations. Data center operators and their financial backers must address these interconnected threats to protect both individual facilities and the broader system they support.  

Hardwired for risk

The cybersecurity challenge facing the data center supercycle stems from how these campuses are tightly coupled with both the public power grid and their own industrial control systems. As hyperscale and AI‑optimized facilities proliferate, their constant demand for high‑quality electricity shapes grid planning and reliability. These large campuses function less like traditional real estate and more like critical energy infrastructure nodes.

This shift comes as grid capacity tightens. The North American Electric Reliability Corporation (NERC) has warned that demand from new data centers will outpace energy supply growth in the coming years. A cyber incident that disrupts a major data center or degrades its industrial control systems can propagate into regional grid reliability issues, contract penalties, and broader economic disruption.

At the same time, the OT running these sites — building management systems, cooling controls, battery and generator management — creates dense cyber‑physical exposure. Global insurer Marsh notes that events in these systems, whether from human error or cyberattack, can cause physical damage and significant business interruption. The 2021 OVHcloud data center fire in Strasbourg, France, destroyed an entire facility and disrupted services for thousands of customers, showing how failures in fire protection and cooling systems rapidly escalate into catastrophic loss. Those safety functions now run through interconnected, remote-access-enabled OT systems.

Secure‑by‑design architectures for both grid‑side interfaces and on‑site OT are prerequisites for preventing this rapidly expanding energy–data infrastructure from becoming a single, converged point of failure.

Supply-chain integrity first

AI‑optimized campuses depend on massive volumes of GPUs, high‑density servers, network appliances, OT controllers, and edge devices. Many of these components are designed, manufactured, or assembled in jurisdictions at the center of great‑power competition, particularly China. Reports warn that state-aligned actors could introduce backdoors, malicious firmware, or weaponize delivery timelines to create strategic outages.

Secure‑by‑design must start at procurement. Security-conscious procurement requires stringent vendor due diligence, diversification away from single‑country dependencies, hardware and firmware validation before deployment, and alignment with export controls and national‑security guidance on high‑risk equipment. The bill of materials (BoM) for a modern data center must be treated like a living threat surface, with traceability from chip manufacture through installation, including approved vendor lists, tamper‑evident logistics, and mandatory firmware attestation.

Procurement teams need escalation paths for opaque supply chains, unexplained cost changes, or “gray‑market” alternatives, plus playbooks for rapidly substituting vendors when geopolitical shocks or sanctions make a product line unacceptable.

Governance around supply‑chain risk must reach the same level as power, cooling, and uptime guarantees in contracts with hyperscalers and large tenants. Secure‑by‑design campuses will embed requirements for hardware provenance, firmware update hygiene, and ongoing vulnerability disclosure into master service agreements and construction/operations contracts, with clear accountability when a supplier is implicated in espionage or sabotage.

Data center sponsors who cannot prove supply‑chain integrity will face growing pressure from regulators, insurers, and investors who see hardware trust as a prerequisite for AI and cloud infrastructure resilience.

Securing the infrastructure supply chain pipeline

Engineering secure-by-design campuses begins with assuming adversaries will target internet‑exposed and OT edge devices. Security architects must design environments that prevent any foothold at the edge from escalating into grid‑scale disruption or safety‑critical failure.

Geopolitically motivated campaigns against energy infrastructure are accelerating. Recent Russia-nexus attacks on the Polish power system and Romania’s national oil pipeline demonstrate that state‑linked and criminal groups see energy and digital infrastructure as leverage points. Last December, actors linked to Russia’s Sandworm APT compromised remote terminal units (RTUs), firewalls, and communications gateways at Polish substations and distributed energy facilities.

This precedent-setting cyberattack—the first to directly target distributed energy resources in a NATO member’s power system—is indicative of the current threat landscape. Sandworm’s campaign underscores how fragile edge devices are and how vital it is to harden the gateways at the OT boundary. The first pillar of secure-by-design campuses is disciplined network segmentation that treats OT as a distinct, high‑consequence domain.

OT networks should be carved into functional and geographic zones—separating building management from generator controls, from battery systems, from grid‑interconnection protection—with tightly controlled conduits between them, enforced by OT‑aware firewalls and protocol‑constrained paths.

Hardware‑enforced unidirectional gateways and data diodes offer uniquely strong protection at key boundaries. Data diodes allow telemetry and process data to flow outward from OT to IT and monitoring systems while physically blocking any return path, sharply reducing the chances that a web-based intrusion can reach OT systems.

Data diodes should be placed at key demarcation points—between the data center’s OT and corporate IT, between on‑site generation controls and the broader campus, and at interfaces with utility systems—so operators preserve visibility without exposing those domains to bidirectional network risk.

A second foundational element of secure‑by‑design campuses is a clear, continuously maintained OT asset inventory capturing every PLC, RTU, relay, drive, building controller, gateway, sensor, and engineering workstation, along with its network location, firmware version, vendor, and criticality. Effective segmentation depends on knowing what you have and how it communicates.
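To make the inventory concrete, one possible record shape is sketched below in Python; the fields mirror the attributes described above, and the sample entry is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class OTAsset:
    asset_id: str
    asset_type: str        # PLC, RTU, relay, drive, building controller, gateway
    network_zone: str      # e.g. "generator-controls" or "grid-interconnect"
    firmware_version: str
    vendor: str
    criticality: str       # e.g. "safety-critical", "high", "medium"
    remote_access: bool    # flags high-risk access paths for prioritization

# Hypothetical entry; a real inventory would be generated from discovery tools.
inventory = [
    OTAsset("rtu-014", "RTU", "grid-interconnect", "2.4.1",
            "ExampleVendor", "safety-critical", True),
]
```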

Operators cannot isolate critical power and cooling functions, or confidently place diodes and firewalls, without understanding which devices participate in those functions and which paths they rely on. This inventory must fully cover the same class of gateways and field devices abused in the Polish grid attack.

When asset inventories are linked to configuration and vulnerability management, operators can quickly identify exposed OT devices when they are approaching end of life or when new flaws are disclosed. A comprehensive OT asset inventory also enables security teams to quickly locate high‑risk remote access paths and prioritize segments for additional hardening.

Secure‑by‑design engineering mandates the mitigation of accelerating cyber risks posed by remote access gateways and the mass automation of industrial functions. Every orchestration platform, management API, and remote session is a potential high‑impact attack vector. This threat model requires consolidating OT access through hardened jump hosts with strong authentication and just‑in‑time privileges; sharply limiting what automation tools can change on OT networks; enforcing strict segregation between automation platforms and safety‑critical functions; continuously monitoring automated and remote actions; and hardening configuration‑management workflows.

Lastly, secure‑by‑design architecture demands OT‑aware visibility that can actually see and understand what is happening on control networks. This means instrumenting OT segments with monitoring tuned to industrial protocols and behaviors, correlating alerts with asset context, and wiring those insights into playbooks that can quickly isolate, triage, and physically replace compromised edge devices before an intrusion escalates.

Resilience is the only path to funding

The threat modeling, procurement, and design best practices detailed here directly constrain the blast radius of geopolitically charged campaigns that threaten data center reliability and safety. Data center developers, operators, and investors need this systems‑level blueprint for building AI‑era campuses that remain resilient as the energy and threat landscape becomes more contested.

Banks and institutional sponsors are deploying trillions of dollars in construction, fit‑out, and power capacity on the assumption that AI demand will translate into durable, high‑availability cash flows. Underinvesting in cybersecurity directly threatens covenants, refinancing options, insurance coverage, and asset valuation. Outages, safety incidents, or regulatory findings will capsize the investment thesis.

The campuses that secure the best financing over the next decade will be those that can point to secure‑by‑design architectures, campus-wide OT governance, and defensible supply‑chain practices. In this intertwined infrastructure supercycle and macro OT threat environment, power usage effectiveness (PUE) metrics and fast build schedules will matter less than proven security safeguards.

The stakes are escalating rapidly. Developers and utilities are pairing energy‑hungry data centers with small modular reactors (SMRs) and other non‑traditional power generation. These campuses will converge with the security and risk profile of nuclear and high‑hazard industrial facilities, bringing heightened regulation and adversary interest.

SMR data centers fundamentally change the threat model. When nuclear systems sit alongside AI clusters, secure-by-design takes on a new dimension. Operators, investors, regulators, and security professionals must prepare for this convergence. The integration of compute and power generation creates a dynamic that demands the security rigor of both digital infrastructure and nuclear facilities. The window to build these protections into design is closing.

Jeffrey Knight is Director of Global Critical Infrastructure Services at InfraShield. Jeff brings more than 35 years of experience in nuclear engineering and cybersecurity across the Department of Defense (DoD), SWIFT, the NRC, and the Department of Energy (DOE) National Laboratory complex.

The post Why ‘secure-by-design’ systems are non-negotiable in the AI era appeared first on CyberScoop.

AI security’s ‘Great Wall’ problem

The Great Wall of China was built to slow northern raiders and prevent steppe armies from riding straight into the empire’s heart. Yet in 1644, its most impregnable fortress fell without a siege.

At Shanhai Pass, where the wall meets the Bohai Sea, General Wu Sangui commanded the eastern gate. Behind him: a rebel army had just taken Beijing, the emperor was dead, and the Ming Dynasty was buckling under internal crisis. Ahead: Manchu forces who had spent decades probing for weakness. Wu faced the oldest dilemma in fortress warfare: who is the greater threat?

He opened the gate. The Manchus poured through, defeated the rebels, and never left. They founded the Qing Dynasty and ruled China for the next 268 years, the last imperial dynasty before the republic.

The wall didn’t fail. The stone held. What broke was the human system it depended on.

Walls do not fail because the bricks are weak. They fail because the system around the wall is weak. Underpaid guards get bribed, gate procedures degrade, supply lines break. The attacker does not need to knock the wall down when they can walk through the gate.

That is why I disagree with the increasingly popular framing that AI security is fundamentally a cloud infrastructure problem. Cloud security matters. Identity, telemetry, and incident response are table stakes. But treating AI risk as something you can solve primarily by hardening the hosting layer is a comforting simplification, not a complete threat model.

Palo Alto Networks recently reported that 99% of organizations experienced at least one attack on an AI system in the past year. If nearly everyone is getting hit, the right conclusion is not “build a higher wall.” It is “we are defending the wrong boundaries.”

The fortress fallacy

A fortress mindset starts with an implicit assumption: secure the infrastructure and you secure the system. That mental model can work when the system boundary is clean and the workload is deterministic, but AI breaks both assumptions. Modern AI stacks are ecosystems that depend on components sitting outside the neat perimeter even when a model runs inside your cloud tenant: open-source libraries, data pipelines, evaluation tools, plug-ins, agent frameworks, and the humans who can change any of the above. If your security plan begins and ends with cloud controls, you will build excellent defenses around a system that attackers are not planning to assault head-on. Attackers route around strength and target weakness. Why scale the wall when someone at the gate is underpaid, overworked, or facing an impossible choice?

Consider the trust problem. Most organizations are consuming models trained elsewhere, fine-tuning them, and shipping them into production with limited internal expertise. Even if you self-host, you did not write the model or curate the training data. Cloud hardening does not change that reliance; it just makes the box look more secure. The most dangerous failures are not always intrusions but permissioned outcomes. AI agents turn mixed-trust inputs into execution, and when an agent can read internal content, open tickets, or trigger workflows, the attack surface moves to whatever the agent is allowed to do. If an attacker can influence what the agent consumes, they can influence what it executes, often without malware or exploit chains. Real-world AI incidents frequently involve the glue: telemetry, orchestration, plug-ins, and vendor services beside the model.

Humans remain the soft underbelly. AI security discussions obsess over architecture while threat actors obsess over access paths. Cloud controls cannot prevent coercion, social engineering, or insider risk. If a small set of people can approve changes to an AI system’s tools or policies, that is where the attacker will focus. 

History teaches this lesson clearly: the Great Wall’s garrison soldiers took payments to allow traders and smugglers to pass, to skip patrols, or to neglect watch duties. Irregular pay and delayed wages made them susceptible. Today, the “bribe” often isn’t cash. It might be a phishing link, a fake vendor request, or pressure to cut corners during a production crisis. The strongest wall doesn’t matter if the person guarding the gate is persuaded to open it.

Security from within 

Begin with a premise security leaders should already accept: everything can be breached. Your job is to reduce the odds, detect threats quickly, and limit the damage when prevention fails.

Threat model the entire AI system, not just where it’s hosted. Include data supply chains, tooling, evaluation pipelines, plug-ins, agents, and the people who can change them. If your threat model leaves out upstream and downstream dependencies, it’s incomplete.

Treat non-human identities as serious risks. Agents, service accounts, and tool credentials should be managed like privileged user accounts. Apply zero standing privileges (no always-on access), just-in-time access, contextual approvals for high-impact actions, and monitoring for unusual tool behavior. The moment you give an agent broad permissions “for productivity,” you’ve created a new control plane that attackers can hijack.
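A hedged sketch of what zero standing privileges might look like for a non-human identity follows; the scopes, approver field, and 15-minute lifetime are illustrative assumptions.

```python
import secrets
import time

def issue_jit_token(agent_id: str, scope: str, approver: str,
                    ttl_seconds: int = 900) -> dict:
    """Mint a short-lived, narrowly scoped, attributable grant for an agent."""
    return {
        "agent": agent_id,
        "scope": scope,              # e.g. "read:tickets", never "*"
        "approved_by": approver,     # contextual approval is recorded
        "expires_at": time.time() + ttl_seconds,
        "token": secrets.token_urlsafe(32),
    }

def is_valid(grant: dict, requested_scope: str) -> bool:
    """Reject expired grants and any scope the grant was not issued for."""
    return grant["scope"] == requested_scope and time.time() < grant["expires_at"]
```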

Build audit-grade change control. You should be able to answer: who changed agent permissions, retrieval sources, tool bindings, or policy gates? If those controls can be modified quietly, you’re running a system that’s one rushed change away from becoming a security incident.
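
One hedged way to get to “audit-grade” is an append-only, hash-chained log of every change to agent permissions, retrieval sources, tool bindings, and policy gates, so a quiet modification breaks verification. The schema below is an illustrative assumption, not a prescribed format.

# Minimal sketch: append-only, hash-chained change log for AI control-plane changes.
# Field names and the chaining scheme are illustrative assumptions.
import hashlib
import json
import time

class ChangeLog:
    def __init__(self) -> None:
        self.entries: list[dict] = []

    def record(self, actor: str, target: str, change: str, approved_by: str) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        entry = {
            "ts": time.time(),
            "actor": actor,            # who changed it
            "target": target,          # e.g. "agent_permissions", "retrieval_sources"
            "change": change,
            "approved_by": approved_by,
            "prev_hash": prev_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; any silent edit breaks verification."""
        prev = "genesis"
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev_hash"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True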

Invest in detection that assumes manipulation, not just compromise. The hard part of AI misuse is that malicious actions can look legitimate in traditional logs. You need traceability from input to outcome: what context was fed in, what tool calls happened, what policy was active, and what boundary was crossed.
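
A minimal sketch of that input-to-outcome traceability: one trace record per agent action that ties together the context the agent consumed, the tool calls it made, the policy version in force, and any boundary it crossed. Field names here are assumptions for illustration, not a standard telemetry format.

# Minimal sketch: one trace record per agent action, linking input to outcome.
# Field names are illustrative assumptions, not a standard telemetry schema.
import hashlib
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class ActionTrace:
    agent_id: str
    context_digest: str                 # hash of the exact context the agent consumed
    policy_version: str                 # which policy gates were active
    tool_calls: list[dict] = field(default_factory=list)
    boundaries_crossed: list[str] = field(default_factory=list)
    started_at: float = field(default_factory=time.time)

def digest_context(context: str) -> str:
    """Hash the input so a manipulated prompt or document is provably what was seen."""
    return hashlib.sha256(context.encode()).hexdigest()

trace = ActionTrace(
    agent_id="support-agent-7",
    context_digest=digest_context("customer email + retrieved KB article"),
    policy_version="2025-11-03",
)
trace.tool_calls.append({"tool": "ticketing.create", "args": {"queue": "billing"}})
trace.boundaries_crossed.append("internal->vendor_api")

print(json.dumps(asdict(trace), indent=2))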

Cloud security is necessary—but not sufficient. When vendors say AI security is mainly a cloud infrastructure problem, I hear the modern version of “the wall is tall, so we’re safe.” Wu Sangui’s wall was tall too, stretching to the sea at the empire’s edge. None of that mattered when the system around it collapsed. Empires fall from within.

AI security isn’t solved by building a better fortress. It’s solved by governing delegated authority, hardening the supply chain, and building systems that can prove what happened when—not if—something goes wrong.

David Schwed is the COO of SVRN, an AI infrastructure company focused on building decentralized, sovereign intelligence systems that empower users with data privacy and autonomous computational power.

The post AI security’s ‘Great Wall’ problem appeared first on CyberScoop.

Why boards should be obsessed with their most ‘boring’ systems

Following a series of high-profile cyberattacks, boards of directors are now requiring their organizations to take greater responsibility for the risks posed by enterprise resource planning (ERP) systems. The Jaguar Land Rover (JLR) incident in September 2025 illustrates the severe consequences of such attacks. The cyberattack forced JLR to halt production for six weeks, making it the costliest cyberattack in Britain’s history. The company’s revenue declined 24% that quarter, a drop potentially worth more than $1.2 billion in earnings, and wholesale sales volume fell 43.3% the following quarter.

For decades, organizations have treated ERP systems like SAP as back-office workhorses. However, the JLR incident, carried out by the cybercrime group ShinyHunters, has thrust ERP systems into the spotlight. That shift in attention is critical: today, 90% of the Fortune 500 use SAP, making these systems “crown jewel” assets that require the highest level of protection.

The threat is escalating. A recent Google Cloud Security report forecasts that ransomware operations specifically designed to target critical enterprise applications such as ERP systems will emerge in 2026, forcing organizations to make quick ransom payments and sacrifice business resilience. 

In our roles as board members, advisers, and cybersecurity CEOs, we’re witnessing a fundamental shift in how organizations approach ERP security: the conversation has moved from compliance to survival. Organizations are grappling with critical questions: Who owns the risk? What is our recovery time? Can we patch critical ERP vulnerabilities within 72 hours? Do we have visibility inside the application?

ERP risks are an existential threat

To understand the severity of ERP security risks, the C-suite must first recognize how critical these systems are. ERP systems are the operating system of modern businesses: They process invoices, manage supply chains, record revenue, pay employees, ship products, and more. The scale is staggering: SAP’s customers alone are responsible for 84% of the world’s commerce. Given this ubiquity, if your organization’s leadership can’t confirm whether you’re using SAP, you almost certainly are.

In 2025, more than 500 companies fell victim to the SAP NetWeaver zero-day vulnerability. This attack underscores what many security practitioners have warned: ERP application security has evolved from a ‘nice to have’ to a business-critical necessity.

When Stoli Group’s US subsidiaries filed for bankruptcy in 2024 following a ransomware attack on its ERP system, it demonstrated a stark reality: losing these systems can lead to a company shutting its doors. When an organization’s central nervous system goes offline, the entire business stops functioning.

Unfortunately, the adversaries understand this inherent leverage better than we do. According to Onapsis research, SAP vulnerabilities grew by 39 percent in 2025. The cybercriminal marketplace price for SAP exploits has grown 400% (to more than $250,000) since 2020, which reflects the immense ROI of holding a Fortune 500 company’s operational capacity hostage.

The timeline for defense has become critically compressed. In 2025, threat actors are exploiting SAP security vulnerabilities within 72 hours of patch releases. Unprotected ERP systems deployed in the cloud are discovered and compromised in less than 3 hours. Meanwhile, the average enterprise patch cycle takes weeks or even months due to the rigorous testing required for complex, customized ERP environments. This mismatch creates a dangerous window of vulnerability.

The regulatory compliance vise

Boards face mounting pressure from an increasingly stringent global regulatory environment focused on securing critical data and infrastructure. ERP systems house multiple types of highly regulated data simultaneously—including financial records, personal employee information, customer data, and supply chain details—making them a focal point for regulatory scrutiny.

For public companies in the United States, Sarbanes-Oxley (SOX) requires attestation of financial reporting. The security of ERP systems is a SOX control issue because a breach could compromise the reliability of the financial reporting those systems produce.

In the European Union (EU), GDPR regulations penalize companies that fail to protect personally identifiable information (PII). ERP systems house the vast majority of employee and customer data.

SEC disclosure rules in the United States, along with the EU’s NIS2 and DORA regulations, have introduced personal liability for board members and executives who fail to oversee cybersecurity risks. A director can no longer say, “I didn’t understand the technical details.” Ignorance is now a legal liability.

A boardroom playbook for ERP resilience

As board members and advisors to multiple companies and audit committees, we have three key expectations for how organizations should approach ERP security.

First, boards need risk presented in dollar terms. Instead of asking for money to “patch technical vulnerabilities,” CISOs should tell the board exactly how much revenue is at risk. When requesting budget to secure SAP, frame it as an investment to protect specific revenue streams. This helps boards understand what they stand to lose, not just what they need to spend.

Second, stop treating security and productivity as opposing forces. Yes, patching systems might cause a brief disruption. But that minor inconvenience is nothing compared to the catastrophic impact of a total system lockout like the one ShinyHunters executed against JLR. CISOs should partner with CIOs to deploy automated monitoring tools that can detect potential exploits and prioritize patches for the most critical ERP vulnerabilities.

Third, someone must own responsibility for protecting these “crown jewel” systems. Too often, there’s a gray area between the CISO (who sets security policy), the CIO (who manages the technology infrastructure), and the ERP vendor. Boards must demand a clear shared responsibility model that defines who is accountable for what. It’s important to note that ERP vendors are not responsible for securing the application and data once deployed—which makes clear internal ownership even more critical.

Board members should be demanding answers to these questions: Do we have visibility into our ERP risk? Would we have visibility into an active attack?

We must assume a breach will happen. The only way to validate resilience is to test it. Boards should mandate tabletop exercises specifically designed around an ERP ransomware scenario, asking further questions like, “How do we communicate with suppliers?,” “How do we build and ship our products?,” “How do we make payroll?,” and “How do we restore from immutable backups if the primary data is compromised?”  Organizations must test their resilience before a crisis strikes, not during one.

A license to operate

The Jaguar Land Rover compromise was a watershed moment because it stripped away the illusion that our core systems are safe behind firewalls. Attackers have shifted their focus to critical business systems. They’ve professionalized their operations and dramatically increased the speed of their attacks.

For the C-suite and boards, the era of plausible deniability is over. Security is no longer just an IT expense; it’s what keeps your doors open. If you cannot protect the integrity of your financial data and the continuity of your supply chain, you do not have a viable business.

Just as boards have visibility into risk, CISOs should have visibility into all ERP instances. Organizations require four critical capabilities: discovery (identifying all ERP systems), assessment (finding vulnerabilities such as missing patches, weak configurations, and insecure custom code), real-time monitoring (detecting suspicious activity that may indicate an attack), and incident response (being able to quickly investigate and contain an ERP incident).
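
As a rough illustration of how those four capabilities can be tracked, the hypothetical sketch below scores each ERP instance against them and reports what is missing; the instance names and fields are invented for the example, not drawn from any real environment.

# Minimal sketch: check each ERP instance against the four capabilities named above.
# Instance names and capability flags are hypothetical examples.
CAPABILITIES = ("discovery", "assessment", "monitoring", "incident_response")

erp_instances = {
    "sap-prod-finance": {"discovery": True, "assessment": True,
                         "monitoring": False, "incident_response": False},
    "sap-dev-supplychain": {"discovery": True, "assessment": False,
                            "monitoring": False, "incident_response": False},
}

for name, caps in erp_instances.items():
    missing = [c for c in CAPABILITIES if not caps.get(c)]
    status = "covered" if not missing else "missing: " + ", ".join(missing)
    print(f"{name}: {status}")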

The decisions made in the boardroom today will affect the outcomes tomorrow. The next JLR-like event is most likely already unfolding. The only variable is whether your organization will be the next cautionary tale or the defender that held the line.

Dave DeWalt is the founder and CEO of NightDragon. Mariano Nunez is the CEO and co-founder of Onapsis.

The post Why boards should be obsessed with their most ‘boring’ systems appeared first on CyberScoop.
