
Why data centers now belong on the critical infrastructure list

By: Greg Otto
4 May 2026 at 06:00

Missile and drone attacks that took out cloud data centers in the Middle East underscored a critical vulnerability in the modern economy: reliance on digital infrastructure that sustains competitive advantage and operational continuity for corporations, nations, and militaries. 

The outages and downstream disruption were a preview of a new form of strategic and operational risk. Data centers have long been the backbone of the digital economy. What is changing is the scale of dependence as AI workloads dramatically increase the compute power required to run businesses, supply chains, and national security systems. 

Artificial intelligence has moved beyond business applications and into the core of warfare and national security. Last month, The New York Times reported that AI is “totally integrated” into the collection of intelligence and its use in strategic decision-making and military operations. Even if AI models are not directly firing weapons, AI-enabled analysis now plays a central role in how modern militaries gain visibility, find insights, and drive action.

That matters because it changes what should be considered critical infrastructure. If AI is a competitive advantage for companies and a battlefield advantage for warfighters, then the infrastructure that trains, hosts and runs AI becomes a high-value target. Attacks on the digital infrastructure organizations rely on can do more than inflict financial damage. They can slow decision-making, degrade logistics and reduce military effectiveness without ever engaging a conventional force.

Historically, nation-state campaigns targeting data centers and service providers focused on cyber intrusions for espionage or pre-positioning. What is different now is the emergence of physical attacks on digital infrastructure during active conflict. Russian military intelligence has been linked to campaigns aimed at digital infrastructure and managed services, often as part of a supply chain attack to compromise organizations at scale. Iran-aligned groups have repeatedly demonstrated willingness to target private sector entities to advance geopolitical goals. In many cases, the objective was access: steal data, implant persistence, map networks, and maintain a foothold that could be used later for espionage or disruption. 

What’s clearer now than ever before is that data centers and the AI workloads they support have become so vital to modern society that our adversaries will seek to degrade or destroy their efficacy as a tactic of both kinetic and cyber warfare.

We have already seen how quickly a digital incident can become real-world disruption. On March 11, reports surfaced of thousands of servers and endpoints wiped inside Stryker, a U.S.-based medical device manufacturer. A hacktivist group sympathetic to Iran, known as Handala, claimed responsibility. The incident reportedly halted Stryker’s global production after attackers accessed its Microsoft environment and issued a wipe command via Intune. Even without a single missile, the outcome looked like a strategic disruption: operations stopped and downstream customers felt it.

For business leaders, the imperative is clear: treat operational resilience as a board-level priority in the AI era.

In the world of corporate IT, cybersecurity prioritizes confidentiality: preventing theft of sensitive information. Resilience is a different discipline. It is the ability to sustain operations when systems are degraded, disrupted or actively under attack. For data centers and the businesses that depend on them, resilience comes down to preventing cascading failures and reducing the consequence when something inevitably goes wrong.

These developments carry an important implication for the private sector. Digital infrastructure is increasingly a strategic target, making resilience a core business priority rather than a narrow IT issue. For business leaders, the impact of data center disruption extends into multiple, often overlooked areas of cybersecurity risk.

For example, AI’s growth is colliding with a power wall in many regions where grid capacity cannot scale fast enough. That is driving facilities toward new power dependencies, including on-site generation through distributed energy and renewables, yielding more complex power management environments. This power infrastructure becomes a pressure point, as interruptions to power supply or management systems can quickly force a data center offline. Russia demonstrated the ability to target and disrupt power generation and distribution in Ukraine in both 2015 and 2016.

Building management and automation systems, including HVAC and physical access controls, are another. These systems are essential to creating safe and supportive operational environments, but they typically have long capital depreciation cycles and inconsistent security safeguards. Frequently exposed to the internet, and commonly misconfigured or poorly secured, they can give an attacker a pathway to causing outages.

With an increasing density of computing infrastructure, thermal management has become a core environmental control in data centers. As the industry adopts liquid cooling for dense AI loads, interference with cooling is no longer a niche technical issue. It is a risk vector that can cause downtime and potential equipment damage if breached by attackers.

Remote access creates another major exposure. Data centers rely on vendors, contractors, and systems integrators for maintenance, monitoring, and support, and each remote connection can become an entry point if it isn’t tightly controlled, centrally managed, and well secured. Adversaries often target these trusted access routes because they can be easier to compromise than a well-defended perimeter, allowing attackers to bypass standard controls and safeguards.

All of this has broader economic implications because data center disruption does not stay inside the technology sector. It cascades into the industries that keep society functioning and supply chains moving: hospitals, electric utilities, chemical production, food and beverage, oil and gas, and transportation. An extended outage becomes missed shipments, halted production, delayed care, safety concerns and lost trust.

What should leaders do now?

Start by defining resilience targets that match business reality: what must stay running, what can degrade, what cannot fail. Then invest in the controls that limit the impact of an incident. Segmentation between IT and OT assets should be non-negotiable. Remote access should be treated as a critical risk pathway with least privilege, strong authentication and continuous monitoring.
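The remote-access posture described above can be reduced to a deny-by-default policy check. The sketch below is illustrative only: the vendor names, system labels, and maintenance windows are hypothetical, and a real deployment would enforce this at the network and identity layer rather than in application code.

```python
from datetime import time

# Approved vendor connections: which systems each vendor may touch and
# during which maintenance window. All names and windows are hypothetical.
APPROVED = {
    "hvac-vendor": {"window": (time(2, 0), time(4, 0)), "systems": {"bms"}},
}

def allow_session(vendor, system, mfa_passed, at):
    """Deny by default; allow only an approved vendor reaching an approved
    system, with MFA, inside its maintenance window."""
    policy = APPROVED.get(vendor)
    if policy is None or not mfa_passed:
        return False
    start, end = policy["window"]
    return system in policy["systems"] and start <= at <= end

print(allow_session("hvac-vendor", "bms", True, time(3, 0)))     # True
print(allow_session("hvac-vendor", "bms", True, time(12, 0)))    # False
print(allow_session("unknown-vendor", "bms", True, time(3, 0)))  # False
```

The point of the default-deny shape is that an unlisted vendor, a missing MFA factor, or an off-hours connection all fail closed, which is the behavior the "critical risk pathway" framing calls for.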

Manage facilities systems such as building management systems, power, and cooling controls as critical operational technology, with asset inventories, vulnerability management, logging, and incident response plans that anticipate disruption.

Finally, train to operate under degraded conditions. Tabletop exercises should include scenarios like loss of a cloud region, partial failure of a facility, or compromise of a management plane. Use these exercises to validate that the organization can maintain essential operations and recover quickly when disruptions occur. 

Policy is moving in this direction as well. Governments are increasingly treating data centers as critical infrastructure. Policies and frameworks such as the National Cybersecurity Strategy, CISA’s Secure by Design principles, and international standards like IEC 62443 all reflect a growing recognition that digital infrastructure is a national security issue. Companies that get ahead of this shift will not only reduce risk, they will build competitive advantage in a world where downtime can become a strategic weapon.

In the AI era, data centers are essential infrastructure for modern economies and national security. Their rising importance also makes them attractive targets in cyber and physical conflict. Protecting them is no longer just about safeguarding company operations, it is about protecting the systems society depends on every day. 

Grant Geyer is the chief strategy officer at Claroty.

The post Why data centers now belong on the critical infrastructure list appeared first on CyberScoop.

Everyone’s building AI agents. Almost nobody’s ready for what they do to identity.

By: Greg Otto
30 April 2026 at 06:00

Anthropic recently announced that it would not release Mythos, its most powerful AI model, to the public. The model discovered thousands of previously unknown software vulnerabilities — flaws that had sat undetected in major operating systems and web browsers for nearly three decades. Anthropic said the model was too dangerous to deploy broadly because the same capabilities that let it find and fix security flaws could let attackers exploit them. A single AI agent, the company warned, could scan for weaknesses faster and more persistently than hundreds of human hackers. 

That decision tells you something important about where we are. The same AI systems that companies are racing to deploy as autonomous assistants — scheduling your appointments, writing your code, managing your workflows — are also capable of probing digital defenses at a speed and scale no human team can match. And most of the systems they’d be probing still rely on a security model designed for an era when a person sat behind every keyboard. 

Think of it like a building where every door has a lock, but the locks were all designed to recognize human hands. Now the building is full of robots — some of them authorized couriers, some of them intruders — and the locks can’t tell the difference. 

Not long ago, you could sit at your desk, glance at the sticky note on your monitor for your username and password, type them in, and grab a cup of coffee while your browser opened a doorway to the rest of the world. Every layer of security that followed — passwords, security questions, biometric scans, two-factor authentication — grew out of a single bedrock assumption: a person was on the other end. 

AI agents break that assumption from two directions at the same time. Legitimate agents need credentials to act like a human. OpenAI’s Operator navigates websites on your behalf. Google’s Gemini can plan your next family vacation while you sleep. Visa recently unveiled Intelligence Commerce Connect, a platform that lets AI agents do the shopping for consumers. These aren’t demos or hot takes from a tech conference floor. They’re shipping products that act on behalf of real people—and to do that, they need your identity. 

At the same time, adversaries can fake humanity at scale. The same AI that can convincingly act like a helpful assistant can also be a malicious impersonator. They don’t break in, they log in—through shared credentials, hiring pipelines, vendor onboarding portals, and collaboration tools. Most organizations still treat identity as a login problem—something IT handles with stronger passwords or additional authentication steps layered on top of existing systems. The harder challenge now is knowing who, or what, you’ve already let in. 

That distinction is collapsing just as digital systems become more autonomous. 

When that distinction blurs, the damage is concrete. If a procurement workflow cannot distinguish between a human manager and an AI impersonator, purchase orders go out under false authority. When compliance logs cannot determine how a decision was authorized — by a person or a bot — the accountability chain falls apart. Regulators and customers will not accept “we’re not sure” as an explanation. 

The economics have tilted sharply toward the attacker. Sophisticated fraud once required coordination, with people researching targets, crafting messages, and adjusting tactics in real time. AI agents eliminate those constraints. One person can now supervise an army of autonomous systems, each running a valid persona across multiple interactions simultaneously. A single operator can field a hundred synthetic employees for the cost of one real salary. The barrier to large-scale impersonation is no longer skill or manpower. It is access to a capable model and a set of stolen credentials. 

Stronger identity controls do carry a cost. Every additional verification step is a moment when a customer might abandon a transaction, or an employee might lose patience with a security protocol. The goal is not to shut down automation. It is to make sure the systems acting in your name are authorized to do so. 

Some organizations are adapting. They are treating AI agents less like software and more like new employees, cataloging every agent in their environment, limiting permissions, requiring human approval for sensitive actions. They are moving beyond passwords to phishing-resistant authentication that binds access to a known device and a verified user. They are building behavioral baselines so that when a customer service bot suddenly queries a financial database, or a new hire accesses source code on day one, alarms go off. 
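The behavioral-baseline idea above can be sketched simply: each machine identity gets an allowlisted set of resources learned from normal activity, and anything outside that set raises an alert for human review. The agent names and resource labels below are illustrative, not from any real product.

```python
# Baseline of resources each known agent identity normally touches.
# In practice this would be learned from access logs, not hardcoded.
baselines = {
    "customer-service-bot": {"ticket_db", "kb_articles"},
    "ci-runner": {"source_repo", "artifact_store"},
}

def check_access(agent, resource):
    """Return None when the access fits the agent's baseline,
    otherwise an alert string for human review."""
    allowed = baselines.get(agent)
    if allowed is None:
        return f"ALERT: unknown agent identity '{agent}'"
    if resource not in allowed:
        return f"ALERT: '{agent}' accessed '{resource}' outside its baseline"
    return None

print(check_access("customer-service-bot", "ticket_db"))    # None
print(check_access("customer-service-bot", "financial_db"))
print(check_access("shadow-agent", "financial_db"))
```

This is the "customer service bot suddenly queries a financial database" scenario in miniature: both an out-of-baseline access and an uncataloged agent fail loudly instead of silently succeeding.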

Nobody keeps their password on a sticky note anymore (I hope). But the assumption behind the sticky note, that a human hand would type it in, still underpins most of the systems we depend on. These systems hold your medical records, process your mortgage, and let an AI assistant rebook your flight. In a world where AI agents act faster, more persistently, and more convincingly than any person, that assumption is the vulnerability. 

The organizations that can verify identity continuously — not just at the door, but at every action, for every actor, human or machine — will have a durable advantage. The ones that cannot will find out what ambiguity costs. 

Devin Lynch is Senior Director of the Paladin Global Institute and a former Director for Policy and Strategy Implementation at the Office of the National Cyber Director. 

The post Everyone’s building AI agents. Almost nobody’s ready for what they do to identity. appeared first on CyberScoop.

Mythos can find the vulnerability. It can’t tell you what to do about it.

By: Greg Otto
21 April 2026 at 06:00

Mythos matters. It is a significant step forward in AI-assisted vulnerability discovery. But it does not mean cybersecurity changed overnight, nor does it mean enterprises are suddenly facing fully automated exploitation at internet scale tomorrow.

It does mean the offensive side of AI is continuing to improve. The defensive side needs to catch up now.

Mythos is the latest step in a longer trend. Over the next several years, expect the same pattern to repeat: incremental progress, then a jump; incremental progress, then a jump. Models will get more capable and cheaper with each cycle, and each jump will put more pressure on security teams still operating at human speed.

Mythos demonstrated that AI can find software vulnerabilities with unprecedented depth. That is real progress and should be taken seriously. However, this was not a case where AI suddenly made enterprise compromise cheap, easy, or automatic. Even in Anthropic’s own examples, the cost of discovering a critical vulnerability was significant. One example cited roughly $20,000 in token costs to identify a significant OpenBSD issue. 

Mythos made vulnerability discovery cheaper to scale by replacing bodies with dollars. But finding a vulnerability is only one part of the operational reality.

An attacker still has to determine whether that vulnerability is exploitable in a specific enterprise, identify a viable attack path, gain the necessary access, and successfully operationalize the exploit in a real environment. None of that became easy just because a model found a software bug.

And on the defensive side, Mythos does not yet solve the much harder enterprise problem: How do I know whether this vulnerability is actually exploitable in my environment, and what is the most efficient way to remediate it without breaking the business?

The real enterprise problem is not discovery. It is prioritization and action. Security leaders do not struggle only because vulnerabilities exist. They struggle because the operational cost of deciding what matters, what is exploitable, what can wait, and what can be fixed safely is enormous.

If a large enterprise learns that a critical vulnerability has been found in widely used software, the next step is not magic. It is a painful chain of operational questions focused on where they run the software, what version it is, whether there is a realistic attack path, and many more.

Mythos leaves the defensive cost of answering those questions inside a real enterprise largely unchanged. The right lesson is preparation.

One of the mistakes the market often makes with AI is assuming every new capability is the moment everything changes. The right move is to start now with defensive AI systems that are useful today and positioned to improve over time. For most enterprises, that means looking for AI products that help improve alert investigation, threat hunting, and vulnerability management, offer full audit capabilities, connect to enterprise data and reason to provide organizational context, and evolve as the model landscape matures.

The goal is to build the operational foundation now for a future in which more of the work can be automated safely.

Today, defenders need systems that let humans remain involved while the machine helps them scale. Over time, that involvement will change. Analysts will spend less time doing repetitive work themselves and more time orchestrating, reviewing, and improving how automated work gets done.

Eventually, some workflows will need to be reviewed in bulk rather than one action at a time. When response moves at machine speed, a human may not approve every individual remediation action. Instead, they will need a control center view into patterns: what the system did today, what worked, what did not, and what should be adjusted tomorrow.

That is a very different future from the simplistic idea of “replace the analyst.”

The real future is one where humans move from doing every task manually to supervising systems, shaping policy, reviewing patterns, and controlling how increasingly capable agents operate.

Mythos is a warning. Not because it means the sky is falling. Because it shows where the offensive side is heading. Defenders should move accordingly and with urgency.

Alex Thaman is the chief technology officer at Andesite. Over a 20+ year career, Alex has been an engineering leader at Microsoft, Unity Software, and Scale AI.

The post Mythos can find the vulnerability. It can’t tell you what to do about it. appeared first on CyberScoop.

Why the Axios attack proves AI is mandatory for supply chain security

By: Greg Otto
20 April 2026 at 09:17

Two weeks ago, a suspected North Korean threat actor slipped malicious code into a package within Axios, a widely used JavaScript library. The immediate concern was the blast radius: roughly 100 million weekly downloads spanning enterprises, startups, and government systems. But beyond the sheer scale, the attack’s speed was just as worrisome – a stark reminder of the tempo modern adversaries now operate at.

The Axios compromise was identified within minutes of publication by an Elastic researcher using an AI-powered monitoring tool that analyzed package registry changes in real time. The approach was right: AI classifying code changes at machine speed, at the moment of publication, before the damage compounds. By any standard, it was a fast response. The compromised package was removed in about three hours. But even in those three hours, the widely used package may have been downloaded over half a million times.
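A minimal building block of that kind of registry monitoring is simply noticing new publishes fast. The npm registry exposes per-version publish timestamps in a package's metadata (the `time` field); the sketch below flags versions published after a cutoff, using a hardcoded sample shaped like that response rather than a live HTTP call. It illustrates the trigger, not Elastic's actual classifier.

```python
from datetime import datetime, timezone

def versions_published_since(registry_metadata, cutoff):
    """Return versions whose publish timestamp (ISO 8601 strings in the
    npm registry's 'time' field) is at or after `cutoff`."""
    flagged = []
    for version, stamp in registry_metadata.get("time", {}).items():
        if version in ("created", "modified"):  # bookkeeping keys, not versions
            continue
        published = datetime.fromisoformat(stamp.replace("Z", "+00:00"))
        if published >= cutoff:
            flagged.append(version)
    return flagged

# Sample shaped like a registry response; versions are hypothetical.
sample = {
    "time": {
        "created": "2016-01-01T00:00:00.000Z",
        "modified": "2026-04-20T12:05:00.000Z",
        "1.7.9": "2024-12-04T10:00:00.000Z",
        "1.7.10": "2026-04-20T12:00:00.000Z",
    }
}
cutoff = datetime(2026, 4, 20, tzinfo=timezone.utc)
print(versions_published_since(sample, cutoff))  # ['1.7.10']
```

Polling this against `https://registry.npmjs.org/<package>` would surface a fresh publish within one polling interval; the hard part, classifying whether the new code is malicious, is where the AI layer the article describes comes in.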

This underscores a new reality. Enterprises and the public sector are being overwhelmed with attacks that are increasing in both speed and complexity, driven in part by AI. Adversaries are probing every link in the supply chain, and they are doing it at a pace that human-speed defenses cannot match.

That detection is one example of using AI to tackle a security problem, but it also makes a broader case: AI-powered security can dramatically improve SOC efficiency, especially when organizations across the public sector and beyond are drowning in attacks.

The direct threat to the public sector

Government agencies increasingly rely on the same open-source JavaScript frameworks as the private sector, so a poisoned package can give an adversary access to sensitive systems before anyone realizes the supply chain has been compromised. This is a direct threat to national security and critical infrastructure, especially when the payloads are cross-platform, affecting macOS, Windows, and Linux.

What is most critical now is understanding and correctly preparing for the frequency and speed at which these attacks occur.

AI has fundamentally lowered the barrier to sophisticated cyber operations, granting relatively unsophisticated bad actors and small nation-states capabilities once reserved for elite criminal groups and countries. Adversaries now leverage AI to automate reconnaissance, craft convincing social engineering, and develop evasive malware. With a new vulnerability discovered every few minutes, the pace is accelerating.

For the public sector, the threat model has expanded. Defending against known nation-state playbooks is no longer sufficient—that’s just the baseline. Groups that couldn’t execute at nation-state levels five years ago now operate with comparable sophistication, while state-sponsored actors operate with unprecedented speed and automation. Staying ahead means moving beyond traditional defense to meet a threat landscape that is increasingly automated and ubiquitous.

AI is not optional

Adversarial AI is the defining threat of the current operating environment. Automated reconnaissance. AI-generated obfuscation. Machine-speed deployment across multiple vectors simultaneously. The adversary has implemented AI faster and more aggressively than most defensive teams.

It is rapidly becoming unquestionable in security: if you are not using AI to battle AI, you will lose.

That does not mean buying into the autonomous SOC fantasy. That approach treats AI in isolation, as if defenders are the only ones with access to the technology. Defensive AI is not a win button, but the minimum entry fee to stay level with the attacker. You still need business context, mission knowledge, and human judgment.

The agentic SOC transformation

The Axios compromise should serve as a clear signal. Nation-state actors are targeting the software supply chain with increasing frequency and sophistication. The government agencies and organizations that will defend successfully against these threats are the ones building security operations that can move just as fast as the threat actors they face.

AI-driven security operations that can match the speed of modern threats, such as agentic workflows that automatically triage, investigate, and contain suspicious activity, are operationally necessary. An agentic SOC mindset and approach will empower analysts: agents will operate on behalf of the analyst automatically and transparently.

The traditional SOC pyramid puts humans at the bottom doing the highest-volume work: a wide analyst tier triaging alerts feeds a narrower senior tier handling investigations. Adversarial AI has made that base layer untenable. The volume is too high, the speed too fast, the surface area too broad. The pyramid inverts into a diamond – AI takes the base while analysts rise to become threat engineers: managing, validating, and improving the agents working on their behalf.

AI agents handle the high-volume work of alert correlation, investigation enrichment, and initial containment while human analysts focus on strategic decisions and mission context. These agents amplify the expertise that government security professionals bring, delivering pre-investigated, correlated findings rather than a flood of disconnected alerts.

The rapid acceleration of sophisticated attacks calls for this essential change across the SOC. The public sector and industry are undergoing a significant transformation, shifting away from eyes-on-glass alert triage toward a high-impact era of threat engineering. In doing so, public sector teams will have the ability to greatly reduce mean time to detect/respond, in turn reducing SOC analyst fatigue and compressing investigation timelines.

Mike Nichols is the GM of Security at Elastic.

The post Why the Axios attack proves AI is mandatory for supply chain security appeared first on CyberScoop.

Ghost breaches: How AI-mediated narratives have become a new threat vector

By: Greg Otto
16 April 2026 at 06:00


A company wakes up to a news story claiming it has suffered a major data breach. The details are specific, technical and convincing. But the breach didn’t happen. No systems were compromised. No data was taken. A language model generated the entire story, filling in plausible details from scratch. And before the company can figure out what’s going on, a reporter at a reputable outlet picks up the story and requests comment. Within hours, the company is drafting statements and mobilizing its communications team to address a fictional event.

A second incident begins with something real. Years earlier, a company had suffered a genuine breach that received wide media coverage. The incident was investigated, resolved and closed. Then one of the outlets that originally reported on it redesigned its website. Old articles received new URLs and updated timestamps, and search engines re-indexed them as fresh content. AI-powered news aggregators picked up the signal and flagged it as a developing story. The company found itself fielding inquiries about an incident that had been resolved years before.

[Ed. note: The authors are withholding full specifics about the incidents because full disclosure could cause harm, yet CyberScoop confirmed with the authors that the incidents did in fact take place].

A third incident introduces yet another dimension. A cybersecurity publication ran a story about a business email compromise attack that cost a UK company close to a billion pounds. The article quoted a well-known security researcher, yet in reality, he had not spoken to the publication. AI generated the quotes, assigned them to him with full confidence, and the publication ran them as fact.

Together, these three cases expose a threat that most organizations have yet to prepare for. AI has developed the ability to fabricate convincing security incidents from nothing, complete with technical detail, named sources, and enough credibility to trigger full-scale crisis responses. Any organization that treats this as a distant or theoretical problem risks learning the hard way just how fast AI-generated fiction can become a real-world emergency.

The assumption that no longer holds

Cyber crisis response has always been built on a simple premise: something real happens, then you respond. That premise is breaking. AI systems now generate, amplify, and validate claims before security teams have confirmed anything. Once a narrative enters the ecosystem, it can be ingested into threat intelligence feeds, risk scoring platforms, and automated workflows. Fiction becomes signal.

For security teams, this creates a new class of false positive. Not a noisy alert from a misconfigured tool, but a fully formed external narrative that appears credible. A hallucinated breach can trigger internal investigations, executive escalation, and defensive actions. Time and resources get diverted toward disproving something that never happened.

Worse, it can influence real attacker behavior. Threat actors can weaponize fabricated breach narratives as pretext. Phishing emails referencing a “known incident” become more believable. Impersonation of IT or incident response teams becomes more effective. The narrative becomes part of the attack surface.

What this means for security teams

Security teams are used to monitoring for indicators of compromise. They now need to monitor for indicators of narrative. Open source intelligence pipelines are increasingly automated. If those pipelines ingest false information, downstream systems will act on it. That includes SIEM enrichment, third-party risk scoring, and even automated containment decisions in some environments.

The practical implication is that security teams need visibility into how their organization is being represented externally, not just what is happening internally. This is not traditional threat intelligence, but it behaves like it. Early detection changes outcomes.

There is also a need for tighter integration with communications. When a false narrative emerges, the technical reality and the external perception diverge. Both need to be managed in parallel.

What this means for communications teams

For communications teams, the timeline has collapsed. The first signal of a “breach” may not come from the SOC. It may come from a journalist, a customer, or an automated alert.

Silence is no longer neutral. If a narrative exists, AI systems will fill gaps with whatever information is available. That can reinforce inaccuracies with each iteration. Responses need to be designed for machine consumption as well as human audiences. Clear, declarative language. Verifiable facts. Structured statements that can be easily parsed and reused. The goal is to establish a competitive presence in the information supply chain.
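One way to make a statement "designed for machine consumption," as the paragraph above suggests, is to pair the prose response with a small structured record of clear, declarative facts that aggregators and models can parse unambiguously. The field names and values below are illustrative, not a standard schema.

```python
import json

# Hypothetical machine-parsable companion to a public statement
# denying a fabricated breach report.
statement = {
    "organization": "Example Corp",
    "claim": "No breach occurred",
    "incident_alleged": "data breach report circulating 2026-04-16",
    "status": "denied",
    "verified_by": "internal investigation",
    "last_updated": "2026-04-16",
}

print(json.dumps(statement, indent=2))
```

The design choice is the same one behind structured data markup on the web: declarative key-value facts degrade less through repeated machine summarization than nuanced prose does.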

Preparation becomes critical. Pre-approved language that can be deployed quickly. Established coordination with legal and security before something surfaces.

Shared implications

Both security and communications teams are now operating in the same environment, whether they recognize it or not. A hallucinated breach can trigger real operational disruption. Vendor relationships may be paused, connections to third-party systems may be severed, regulators may take interest, and markets may react. None of that requires an actual compromise. And this creates a feedback loop. External narratives drive internal actions. Internal actions, if visible, reinforce external narratives.

Breaking that loop requires speed, coordination, and clarity.

AI audits as a control mechanism

One of the most effective controls in this new environment is systematic AI auditing. Regularly testing how AI systems describe your organization, your security posture, and any alleged incidents. This provides visibility into what machines “believe” before that belief spreads. It allows organizations to identify and correct false narratives early, before they propagate into tooling, decision-making, and attacker behavior. It also highlights where accurate information needs to exist. Not just anywhere online, but in sources that AI systems prioritize.
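The audit loop described above can be sketched as: ask AI systems a fixed set of questions about your organization, then diff the answers against a register of verified facts. In the runnable sketch below, `query_model` is a hypothetical stub standing in for whatever LLM API an organization actually uses, and the question and answers are invented for illustration.

```python
# Register of ground truth maintained by the organization (hypothetical).
VERIFIED_FACTS = {
    "Has Example Corp suffered a data breach in 2026?": "no",
}

def query_model(question):
    """Stub for a real LLM call; here it returns a hallucinated claim
    so the audit flow has something to catch."""
    return "yes"

def audit(questions):
    """Return (question, model_answer, truth) for every divergence."""
    findings = []
    for q in questions:
        answer = query_model(q).strip().lower()
        truth = VERIFIED_FACTS.get(q)
        if truth is not None and answer != truth:
            findings.append((q, answer, truth))
    return findings

print(audit(list(VERIFIED_FACTS)))
```

Run on a schedule against the AI systems that matter for your threat model, a non-empty findings list is the early-warning signal: the false narrative exists in machine "belief" before it has propagated into tooling or press coverage.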

The mindset shift

This marks a shift from incident response to narrative response. Security teams need to treat every alert as potentially fabricated. Communications teams need to prepare for narratives that form independently of what actually happened. Both must operate with the understanding that perception alone can trigger real consequences. In this environment, the ability to detect and respond to false narratives matters as much as the ability to detect and respond to actual breaches.

Mary Catherine Sullivan, who holds a Ph.D. in political science from Vanderbilt University, is a senior director of Data Science for Digital & Insights, within FTI’s Strategic Communications segment. She is a communications and data science leader specializing in message testing, audience research, digital communications analytics, and reputational risk assessment. As part of FTI Consulting’s Data Science team, she develops state-of-the-art artificial intelligence, natural language processing, machine learning, and statistical models to analyze media ecosystems, stakeholder discourse, and audience response—supporting informed, defensible decision-making for clients navigating complex reputational environments.

Brett Callow is a senior advisor in the Cybersecurity and Data Privacy Communications practice at FTI Consulting. With more than two decades of experience in cybersecurity policy, legislation, and communications, Brett’s expertise is widely recognized within the industry, by policymakers, and by the media. He has been involved in some of the most high-profile ransomware incidents and has participated in panels and policy-related discussions, including at the Office of the Director of National Intelligence and the Aspen Institute, and has served on the Advisory Board of the Royal United Services Institute’s Ransomware Harms project.

The post Ghost breaches: How AI-mediated narratives have become a new threat vector appeared first on CyberScoop.

We’re only seeing the tip of the chip-smuggling iceberg

By: Greg Otto
15 April 2026 at 06:00

Last year, Nvidia CEO Jensen Huang repeatedly denied that China was obtaining America’s most advanced chips. “There’s no evidence of any AI chip diversion,” he said, dismissing such reports on another occasion as “tall tales.”

Federal prosecutors would beg to differ. They’ve charged six men over the past three weeks with smuggling billions of dollars’ worth of AI chips to China. The indictments, while a tactical victory, are a warning of how pervasive the problem has become, thanks both to loopholes in federal law and a failure to support existing laws with serious enforcement.

Both Washington and Beijing have tried to reshape AI chip supply chains to bolster their respective national security agendas ahead of an expected trade-focused summit in May. While the United States has imposed export controls on advanced chips to cut off China’s military modernization efforts, China has pushed its firms to adopt domestically produced components to secure its self-reliance.

But neither side can fully avoid the Willie Sutton rule. Why smuggle chips? Because that’s where the profit is — particularly without enough resources dedicated to enforcement. 

A closed Chinese market grasping for more powerful alternatives to their own products offers a prime incentive for American firms to provide components to Beijing. Smuggling has also transformed an emerging network of data center infrastructure across Southeast Asia into a source of illicit computing power for U.S. adversaries.

The recent cases highlight these features in detail. In March, prosecutors charged three people connected to Super Micro Computer, an American computing firm, with smuggling an estimated $2.5 billion in chips to Chinese customers by shipping servers to the company’s offices in Taiwan and elsewhere in the region. In the meantime, the trio designed warehouses full of fake products to fool U.S. authorities. A week later, prosecutors unveiled charges against another three individuals accused of conspiring to ship advanced chips to China via business contacts in Thailand.

This string of prosecutions suggests that despite some high-profile successes, smuggling remains a pervasive issue across the industry. While this is partially a problem of professed ignorance, it can also be solved with a combination of policy, personnel, and policing. 

The United States must strengthen controls over emerging technologies at the factory floor rather than the airport gate. While Washington has strong export control laws, these regulations are intended to prevent components from leaving the country. They do not, however, block Chinese firms from purchasing these technologies inside the country.

This divergence in intentions complicates prosecution, as smugglers are often indicted solely for evading customs enforcement rather than charged with illicitly obtaining the components while still on American soil. However, Congress can close this loophole with stronger due-diligence laws that require greater scrutiny of potential customers before the customs enforcement process begins.

Washington is also in an arms race with AI firms to properly fund enforcement mechanisms, a race it is currently losing. While one smuggling case alone involved $2.5 billion, federal spending on policing export controls amounted to $122 million in all of 2025.

Moreover, this surge of investment in computer hardware is increasingly global in scope, magnifying the current shortage of federal agents responsible for enforcing export controls at the exact moment both allies and adversaries are seeking to purchase ever larger batches of advanced chips.

Even with stronger policies and more personnel, prosecuting AI chip smuggling must also remain a policing priority for federal law enforcement. While these cases are often complex due to a range of technical and jurisdiction challenges, as well as an array of shifting export control regimes, the FBI and the Commerce Department should remain committed to tracking and disrupting these smuggling networks.

It will be key for the administration to separate enforcement actions from its ongoing diplomatic exchanges with Beijing — dropping domestic prosecutions should not be used as a bargaining chip to deliver trade concessions during President Donald Trump’s upcoming travels to Beijing.

We need stronger enforcement so that the next billion-dollar smuggling case marks real progress, rather than exposing just how much slipped through.

Jack Burnham is a senior research analyst at the Foundation for Defense of Democracies’ China Program, focusing on China’s military, emerging technologies, and science and technology policy. Follow Jack on X @JackBurnham802.

Secretary Mullin must help finish the job: Urge the Senate to confirm Plankey

By: Greg Otto
14 April 2026 at 06:00

On March 23, the Senate confirmed Senator Markwayne Mullin as the next homeland security secretary, marking an important step in strengthening leadership during a critical moment for our nation’s security.

But only half of the job is done.

The Cybersecurity and Infrastructure Security Agency (CISA), the federal government’s main civilian cyber defense agency, still lacks a Senate-confirmed director. As global cyber threats escalate, this prolonged leadership gap poses a growing national security risk.

As Executive Director of the National Technology Security Coalition (NTSC), I represent Chief Information Security Officers who are responsible for protecting the systems that sustain America’s economy and critical infrastructure. Across every sector, from energy and healthcare to financial services, manufacturing, and transportation, there is a common concern: the threat landscape is growing more aggressive, and our defenses must stay ahead.

Our enemies are not waiting.

Since the start of the conflict with Iran, cybersecurity experts have reported increased malicious cyber activity targeting U.S. and allied systems. Iran-linked actors have shown their ability to disrupt operations and exploit vulnerabilities. Meanwhile, China continues its long-term effort to infiltrate American networks and position itself for possible disruption of critical infrastructure. Russia and its affiliated groups remain persistent, probing Western systems for weaknesses and exerting constant pressure.

This is the reality of modern conflict. Cyber operations have emerged as a primary domain of competition. In some cases, they can rival the effects of traditional military action, disrupting economies, communications, and public safety through code alone. 

Leadership is important in this environment.

CISA plays a key role in coordinating federal cyber defense, sharing threat intelligence with the private sector, and supporting state and local governments. It serves as the link between government and industry in protecting the nation’s digital infrastructure. Without a Senate-confirmed director, the agency’s ability to set priorities, coordinate efforts, and respond quickly is limited.

That challenge is growing more urgent. The President’s fiscal year 2027 budget plan proposes significant cuts to CISA’s funding. At a time when the agency faces increasing operational pressure, fewer resources make strong, steady leadership even more crucial.

This is the moment when Secretary Mullin’s leadership is critical.

As a former member of the Senate, Secretary Mullin understands the institution, its dynamics, and how to build consensus. He is uniquely positioned to connect with past colleagues and help advance Sean Plankey’s nomination as Director of CISA.

Plankey is highly qualified and widely respected in the cybersecurity community. His experience in the U.S. Coast Guard, at the Department of Energy securing the nation’s energy infrastructure, and in the private sector provides him with a clear understanding of both the threat landscape and the importance of public-private collaboration. At a time when coordination between government and industry is vital, these qualities are essential.

The Senate has already signaled that it takes cyberthreats seriously. It recently confirmed Lt. Gen. Joshua Rudd to lead U.S. Cyber Command and serve as director of the National Security Agency, ensuring strong leadership of America’s military cyber defense team.

Now it needs to do the same on the civilian side.

Confirming Plankey matters because the country’s main civilian cyber defense agency needs established leadership to combat adversaries who are already inside our networks, probing our systems, and preparing for the next phase of conflict.

The leadership gap at CISA has gone on long enough.

Secretary Mullin must engage. The Senate needs to act. And Sean Plankey should be confirmed without further delay.

America’s cyber defenses depend on it.

Chris Sullivan is the executive director of the National Technology Security Coalition, a nonprofit, non-partisan organization that serves as an advocacy voice for chief information security officers across the nation.

Don’t just fight fraud, hunt it

By: Greg Otto
9 April 2026 at 08:00

Our nation has entered a new fraud arms race fueled by AI.

With billions of dollars in fraud losses mounting in both the private and public sectors, it’s clear the old ways of deterring fraud aren’t working. That’s why we need a new playbook that starts with understanding how fraudsters operate, evolving our defenses, and shifting to a proactive posture that doesn’t just fight fraud but actively hunts it down. 

In the AI era, treating fraud as just a front-door problem won’t work. This moment requires industry, government, and consumers to work together, reduce silos, and share real-time intelligence. The goal is to move beyond reactive detection by understanding the lifecycle of a threat—from its formation to its spread—so we can intervene before it establishes a foothold.

For decades, fraud has been treated like a series of isolated incidents. This false assumption has underpinned nearly every past effort to crack down on it. Those efforts, while well-intentioned, have missed the mark. 

Now, in light of the Trump Administration’s Cyber Strategy for America and accompanying executive order, it’s critical to understand the modern fraud landscape and the central role that digital identity exploitation plays within it.

New research from Socure reveals just how dramatically the landscape is evolving. 

Fraud has become industrialized, with organized crime syndicates running operations that are global, systemic, automated, and powered by AI. No organization, service, or program is safe. Fraudsters target government programs, banks, fintech platforms, telecom companies, and more, blurring the lines between public sector fraud, financial crime, and cybercrime.

It used to be that fraud could be detected through the reuse of identity elements across multiple applications: the same email, device, phone number, or IP address used over and over. 

But the data is clear: these links are declining fast. Today’s sophisticated fraudsters engineer their attacks to avoid traditional fraud detection patterns. Our research demonstrates that emails will be completely unique within fraud populations as early as 2027, at which point email will no longer be a reliable signal for identifying patterns.
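The declining overlap can be measured directly: for each identity element, compute the share of applications whose value also appears in at least one other application. A minimal sketch with invented sample data (field names and figures are illustrative, not Socure's):

```python
from collections import Counter

def reuse_rate(applications, field):
    """Share of applications whose value for `field` appears in more than
    one application. 0.0 means every value in the population is unique."""
    counts = Counter(app[field] for app in applications)
    reused = sum(1 for app in applications if counts[app[field]] > 1)
    return reused / len(applications)

# Illustrative fraud-application sample (all values invented):
apps = [
    {"email": "a@x.test", "device": "d1"},
    {"email": "a@x.test", "device": "d2"},  # reused email
    {"email": "b@y.test", "device": "d1"},  # reused device
    {"email": "c@z.test", "device": "d3"},  # fully unique
]
print(reuse_rate(apps, "email"))   # 0.5
print(reuse_rate(apps, "device"))  # 0.5
```

Tracking this rate per quarter, per element, is one way to see the trend the research describes: as the email reuse rate approaches zero, link analysis built on email alone stops working.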

Speed is another defining feature of modern identity fraud. Fraudsters use AI to create clean, durable synthetic and stolen identities at scale. In one observed campaign, 24,148 synthetic identities were built and launched in under a month, with many attacks occurring within 48 hours. What once took weeks or even months can now be completed in days.

The rapid rise of identity farms is another indicator of the industrialization of fraud. Operated by crime rings, identity farms systematically create and mature synthetic or stolen identities over time until they closely resemble legitimate ones. Matured identities are used to open bank, credit, and money-movement accounts, siphon government benefits, launder funds, and more. These farms focus on durable identities that can bypass traditional verification controls.

So what should we do? Simply put, we must go on offense. 

This means treating identity as critical infrastructure and implementing strategies that track how identities were created before the moment of application; expanding signals monitoring to include elements like residential proxies, ISP behavior, and domain registration activity; evaluating velocity and orchestration in real-time; and treating continuous measurement, rapid model iteration, and cross-industry intelligence as core capabilities.
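Evaluating velocity in real time typically reduces to counting correlated events per key inside a sliding time window. A minimal in-memory sketch (the class name, thresholds, and window size are illustrative; production systems would use a streaming store rather than process memory):

```python
from collections import defaultdict, deque

class VelocityMonitor:
    """Flag a key (device fingerprint, IP, phone, etc.) once it produces
    more than `max_events` applications within `window_seconds`."""

    def __init__(self, max_events=3, window_seconds=3600):
        self.max_events = max_events
        self.window = window_seconds
        self.events = defaultdict(deque)  # key -> recent event timestamps

    def record(self, key, ts):
        q = self.events[key]
        q.append(ts)
        while q and ts - q[0] > self.window:  # evict events outside window
            q.popleft()
        return len(q) > self.max_events       # True -> velocity alert

mon = VelocityMonitor(max_events=3, window_seconds=3600)
alerts = [mon.record("device-42", t) for t in (0, 60, 120, 180)]
print(alerts)  # [False, False, False, True]
```

Orchestration detection layers on top of this: the same counting logic applied across keys (many devices sharing one ISP range, many applications registering lookalike domains in the same hour) surfaces campaigns rather than single actors.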

Additionally, given the rapid scaling of fraud, we need more analysis of the complete ecosystem, including dynamic factors like device information, digital footprints, and behavioral biometrics so organizations can effectively distinguish genuine humans from machines. Ultimately, this layered and interconnected approach makes it significantly harder for malicious actors to recreate or steal identities at scale.

Fraud is no longer a series of isolated acts. It is a coordinated, global enterprise built on the exploitation of identity. Until our efforts reflect this new reality, we will continue to fight an imminent and ongoing threat with outdated tools and fall further behind. 

Now is the time to make this strategic shift and finally put fraudsters on their heels. 

Mike Cook serves as head of fraud insights at Socure, the identity and risk platform for the AI age.

Washington is right: Cybercrime is organized crime. Now we need to shut down the business model

By: Greg Otto
16 March 2026 at 06:00

The recently released executive order targeting cybercrime, fraud, and predatory schemes uses language the federal government has often avoided. Now, for the first time, the Trump administration is echoing what the cybersecurity industry has been shouting for years: cyber-enabled fraud is a product of transnational organized crime.

That distinction matters because organized crime requires an organized response.

Cybercrime is now the world’s fastest-growing criminal economy, built on stealing from everyday people. It is no longer a loose collection of hoodie-wearing hackers in basements or misfits trading malware in online forums. It is a mature global industry operating at scale. Not since the era of pillaging empires has there been a transfer of wealth of this magnitude. We have just gotten so used to it that it feels like background noise.

Modern cybercrime groups look less like street gangs and more like corporations. They run structured operations, complete with HR departments, training pipelines, performance metrics, and technology stacks that rival most enterprise companies. Their attackers don’t rely on sophisticated exploits — they think like expert investigators, systematically probing for weaknesses, exploiting psychological pressure, manipulating insiders, and using deception to move through gaps that defenders left open. They operate around the clock, in every time zone, and increasingly use AI to automate attacks at a scale that once required highly skilled operators.

Worse yet is that many of these operations rely on forced labor. Scam compounds in Southeast Asia run like factory floors, with rows of trafficked workers carrying out romance scams, cryptocurrency fraud, and impersonation schemes under threat of violence.

Their goal is to make fraud faster and more profitable. The result is a global criminal ecosystem that extends far beyond online scams. It fuels human trafficking, weapons smuggling, political corruption, compromised organ systems, and even nuclear programs.

If the federal government is ready to recognize what the industry has known — that cybercrime truly operates like an organized global industry — then responding to it solely through traditional law enforcement is not enough. The question goes beyond how governments apply sanctions, coordinate investigations, or pressure jurisdictions that harbor these operations. The greater question is whether the private sector is willing to help dismantle the infrastructure that allows this industry to thrive.

One word changes everything

I want to be specific about why this executive order is different, because the language is not accidental.

The order doesn’t just call these groups “hackers” or “organized crime.” It calls them transnational criminal organizations (TCOs). That word carries legal and operational weight that most coverage has glossed over. Transnational is the jurisdictional framing that authorizes an entirely different class of response. It is the same threshold that moves a case from local law enforcement to federal jurisdiction and beyond.

Pair that with what follows – “law enforcement, diplomacy, and potential offensive actions” – and you are reading something that goes well beyond a policy memo. Notice the sequence: diplomacy before offensive action is proportionality doctrine. But the administration did not rule out offensive action. The document also calls for deploying the “full suite of U.S. government defensive and offensive cyber operations” and uses the word “shape” as its first pillar of action. In military doctrine, shaping an adversary’s behavior does not mean gentle persuasion. It means force is part of the calculus.

This is not the language of a consumer protection policy. Whoever wrote this has studied the opposition.

An organized threat demands an organized response

The executive order draws a line in the sand: cybercrime has outgrown its origins as a consumer protection issue. It’s now a fundamental threat to economic stability and national security. But tackling an industry operating at this scale requires more than government action alone. The order’s answer is to mobilize the private sector – giving companies the green light to identify and disrupt adversary networks.

That framing matters.

The private sector sees the machinery of cybercrime every day. Security vendors, major platforms, and infrastructure providers spot the command-and-control servers, malicious domains, and payment pipelines that keep these operations moving. Too often, that intelligence is used only to defend commercial interests, when in reality, it should also be used to disrupt the networks behind the attacks. When criminal groups lose core infrastructure, they have to rebuild. That costs time. That costs money. That creates pressure.

At the same time, the order puts a question squarely before the private sector: How far is it willing to go, and under what terms? I spent my career believing “minimal force” matters. Precise, proportionate action prevents escalation and avoids creating cascading problems. As we move beyond a defense-only approach, those principles matter more than ever.

There is another question that sits underneath all of this: How far does “potential offensive actions” actually go? Does it stop at cyberspace? Financial sanctions? Asked bluntly, “Will leaders and shareholders know whether providing threat intelligence ends with a measured network take-down or an all-out drone strike on the fraudulent call center?”

Organizations need to fix the security weaknesses criminals are exploiting for profit. Most attacks in 2026 do not succeed because criminals are brilliant. They succeed because the basics are missing: no multifactor authentication, weak identity controls, unpatched vulnerabilities left open for months. Criminals don’t care about your industry or company size. They go where it’s easiest.

When organizations ignore basic security controls, they are doing more than accepting risk. They’re subsidizing the criminal infrastructure that exploits those gaps.

Governments must keep pressure on nations that harbor these operations. Large-scale cybercrime thrives where enforcement is weak or non-existent. The order specifically calls out “nations that tolerate predatory activity”—a signal that safe havens won’t be ignored. Stronger coordination across governments, law enforcement, and private industry can make it much harder for criminals to operate at scale.

The order also targets “foreign TCOs and associated networks,” with “associated networks” being a deliberately broad phrase. Defining who qualifies will be critical. Draw the lines too narrowly and the policy won’t work. Too broadly and you risk dangerous escalation.

Simply put, cybercriminal groups are disciplined because discipline pays. Disrupting them will require the same. It will demand pressure on countries that act as safe havens. It will take dismantling the infrastructure behind these schemes. It will require better basic security across every organization that criminals target.

The executive order is right: cybercrime is organized. It is industrial. It is ruthless. For the first time in a long time, the response looks like it might be, too. Whether the government, private sector, and public can align around what this actually demands, and what it risks, remains an open question.

After years of watching policy documents gather dust while victim numbers grow, I will take action over perfection every time.

Kyle Hanslovan is a former NSA cyberwarfare operator and CEO of Huntress Labs.

If consequences matter, they should apply to vendors, too

By: Greg Otto
11 March 2026 at 06:00

Washington has rediscovered consequences. Just not consistently.

The March 6 executive order rests on a simple, correct idea: cyber-enabled fraud persists because it is profitable, scalable, and too often tolerated. So the government’s answer is to raise the cost. More coordination. More disruption. More prosecutions. More diplomatic pressure on the states that shelter these operations.

Good.

But weeks ago, an OMB memo rescinded earlier federal software supply chain memos issued during the Biden administration. In practice, that pulled back from the prior attestation-centered model and made tools like the Secure Software Development Attestation Form and SBOM requests optional rather than durable expectations.

Put plainly, we are getting tougher on the people exploiting digital systems while getting softer on the conditions that make those systems so easy to exploit.

The executive order gets something important right. Cyber-enabled fraud is not a collection of random online annoyances. It is an industrialized form of predation: ransomware, phishing, impersonation, sextortion, and financial fraud run as repeatable business models, often transnational and sometimes protected by permissive states. The order responds with a more centralized federal posture built around disruption, coordination, intelligence sharing, prosecution, resilience, and international pressure.

That is directionally correct. Criminal ecosystems do not retreat because we publish better guidance. They retreat when the cost of doing business rises.

But then we arrive at software.

The critique of the old federal assurance regime is not entirely wrong. Compliance can become theater. Bureaucracies are very good at turning legitimate security goals into rituals of form collection and checkbox management. Some skepticism was warranted. OMB says as much explicitly, arguing the prior model became burdensome and prioritized compliance over genuine security investment.

Still, the failure of bad compliance is not proof that accountability itself was the problem.

That is where the logic breaks. The administration is clearly willing to believe that criminal actors respond to deterrence. It is willing to use prosecutions, sanctions, visa restrictions, and coordinated pressure downstream. But upstream, where insecure technology shapes the terrain those criminals exploit, the theory suddenly changes. There, we are told to trust discretion. Local judgment. Flexible, risk-based decisions.

Sometimes that is wisdom. Often it is just a more elegant way of saying no one wants a hard requirement.

This is also why my own position has not changed. In a post I wrote in 2024, I argued that the industry did not need softer expectations or another round of polite encouragement. It needed more concrete action and consequences strong enough to change incentives. The problem was never that we were demanding too much accountability. The problem was that insecure software remained too cheap to ship.

That is the deeper issue. Cybercrime at scale does not thrive only because criminals exist. It thrives because the environment rewards them. Weak identity systems, brittle software, sprawling dependency chains, poor visibility, and diffuse accountability all make predation cheaper. The people who ship avoidable risk rarely absorb the full cost of it. Everyone else does.

So these two policy moves, taken together, reveal something uncomfortable. The government seems to believe in consequences for cybercriminals, but not quite in consequences for insecure production. It wants deterrence for the scammer, but discretion for the supplier.

A coherent cyber strategy would do both. It would aggressively disrupt criminal networks and also create meaningful pressure for secure-by-design production and procurement. It would recognize that punishing attackers matters, but so does changing the terrain that keeps making attack profitable.

The administration is right about one thing: cybercrime will not shrink until the costs of predation rise.

The unanswered question is why that logic should stop at the edge of the scam center.

Brian Fox is the co-founder and CTO of Sonatype.

No, it’s not ‘unnecessarily burdensome’ to control your own data

By: Greg Otto
10 March 2026 at 06:00

According to a recent report, the State Department sent a cable urging U.S. diplomats to oppose international data sovereignty regulations like GDPR, characterizing these guardrails as “unnecessarily burdensome.” 

In the cable, the State Department claims that data sovereignty regulations “disrupt global data flows, increase costs and cybersecurity risks, limit Artificial Intelligence (AI) and cloud services, and expand government control in ways that can undermine civil liberties and enable censorship.”

Underpinning this argument is both a legitimate concern and a critical misconception.

The truth is that actual data sovereignty is technical, not territorial. 

Data localization is a blunt instrument trying to solve a sophisticated problem. Mandating that data stay within geographic boundaries doesn’t actually ensure that data owners retain control over how their information is accessed, used, or shared. People move; endpoints move; data must move.

European regulators have already defined what digital sovereignty actually requires. Specifically, in the aftermath of Schrems II, the European Data Protection Board made clear that sovereignty is preserved when data is strongly encrypted and the encryption keys remain solely under the control of the data owner in Europe. That clarity is often lost in broader geopolitical debates. 

True data sovereignty requires governments, enterprises, and citizens to retain cryptographic authority over who can access their information, regardless of where it is processed. Forcing data to sit inside national borders accomplishes little if foreign vendors still hold the keys. Sovereignty is fundamentally a technical challenge: it depends on controlling access through encryption and authentication, not simply controlling physical location.

There is a widespread belief that data sovereignty is disruptive to innovation, commerce, and national security. This is a misconception.

The memo presents a false choice: that we must either accept unfettered cross-border data flows with minimal protections in place for the data owner, or implement burdensome localization requirements that stifle innovation and collaboration.

This is simply not true, and the rise of data-centric security proves it. From the U.S. to Five Eyes nations to the Indo-Pacific, security leaders are embracing this model. Rather than focusing solely on building a strong perimeter, controls and policies must follow the data itself wherever it moves, providing more resilient and contextual protection. This is the central pillar of the DoW’s own Zero Trust strategy, and the model for agencies across the U.S. federal government and beyond.

Even the Department of State’s own ITAR (the U.S. International Traffic in Arms Regulations) treat sensitive munitions data with location-specific requirements. There are good reasons for some types of sensitive information to be shielded from external eyes.

Context matters. We should not dismantle well-established data sovereignty standards without clear technical alternatives in place. Instead, we need to evaluate how to more effectively protect and govern sensitive data, without impeding the free flow of information. 

Data-centric security fortifies data sovereignty and liberates secure data flows. 

By shifting the focus from walls — border-specific protections, localization, and perimeters — to the data itself, you can fundamentally transform global data flows. When data is actually governed, tagged, and understood, it can move safely, through trusted channels, to achieve mission success.

In a data-centric security environment, a government agency can leverage cloud services from any provider while maintaining sovereign control over sensitive information by managing and hosting its own encryption keys, which also provides resilience against third-party breaches at cloud service providers or other partners.
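The key-custody property can be sketched in a few lines. The code below is a toy illustration only (a SHA-256 counter keystream plus an HMAC tag, with all names invented here); real deployments use vetted authenticated encryption such as AES-GCM with owner-managed keys. The point it demonstrates is custody, not cryptography: whoever stores the sealed blob, in any cloud and any country, cannot read or alter it without the owner's key.

```python
import hashlib
import hmac

def _keystream(key, nonce, length):
    """Toy SHA-256 counter-mode keystream. Illustration only: real systems
    must use vetted ciphers (e.g. AES-GCM), never a hand-rolled stream."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(key, nonce, plaintext):
    """Encrypt and tag a record. The storage provider holds only the blob;
    reading or modifying it requires the owner's key."""
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def open_sealed(key, blob):
    """Verify the tag, then decrypt. Fails closed on a wrong key or tampering."""
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("wrong key or tampered data")
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))

owner_key = b"\x01" * 32          # held only by the data owner, never the cloud
blob = seal(owner_key, b"\x02" * 16, b"citizen record 123")
assert open_sealed(owner_key, blob) == b"citizen record 123"
```

In real architectures the same property is delivered by standards-based formats and key-management services rather than hand-rolled code, with governance policy bound to each data object alongside the ciphertext.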

This isn’t theoretical. Modern data-centric security architectures are in production today, with open standards like the Trusted Data Format enabling platform-agnostic, global data sharing among partners. It’s the antithesis of a data silo, allowing data to travel under very specific conditions and with governance attached to each data object itself. The U.K.’s Operation Highmast is a prime example of the success that comes from dynamic, intelligent data sharing among trusted partners. 

In an era defined by AI acceleration and geopolitical competition, sovereignty and interoperability must be engineered to reinforce one another — not framed as tradeoffs.

Angel Smith is the president of global public sector for Virtru.

We’ve seen ransomware cost American lives. Here’s what it will actually take to stop it.

By: Greg Otto
9 March 2026 at 06:00

Flights canceled. Emergency rooms shut down. Centuries-old companies shuttered.

Ransomware and other similar cyberattacks have become so routine that even those serious human and economic consequences are often overlooked or easily forgotten.

This lack of focus is dangerous.

As former leaders of FBI and CISA cyber units, we’ve seen cybercrime ripple through communities – disrupting critical services, destroying jobs, and sometimes costing lives. Today’s ransomware numbers tell a stark story. The Department of Homeland Security reported more than 5,600 publicly disclosed ransomware attacks worldwide in 2024, nearly half of them in the United States. The FBI found that ransomware incidents increased nearly nine percent year over year, with almost half targeting critical infrastructure. Attacks on these organizations pose the greatest threat to national security and public safety.

Despite this trend, we’re cautiously optimistic about the administration’s new National Cyber Strategy. It focuses on protecting critical infrastructure and stopping ransomware and cybercrime, which it correctly elevates to top-tier national security threats.

But success requires sustained action across government and industry. Adversaries are evolving faster than defenses: ransomware attacks now average $2.73 million per incident, driving annual losses into the billions. Attackers have compressed their operations from weeks to hours, disabling Endpoint Detection and Response (EDR) tools and leaving defenders almost no time to stop an attack.

Basic cyber hygiene still matters. But it’s no longer sufficient. Attackers steal valid credentials, exploit known vulnerabilities, disable tools, and move laterally at machine speed, now accelerated by AI. They need a stunningly low level of technical expertise to do so, and AI tools are increasing the speed and scale of their actions.

Our defenses must keep pace with evolving threats. Protecting national security requires immediate action. Automating cyber threat information sharing offers clear benefits, but government agencies need significant structural and technological upgrades before they can effectively share data. This requires sustained investment and oversight.

The government does not have to do this alone. Industry and academia possess tools that could mean the difference between progress and revisiting this same conversation four, eight, or twelve years from now. Forums like CISA’s Joint Cyber Defense Collaborative (JCDC), the National Cyber Investigative Joint Task Force (NCIJTF), and NSA’s Cyber Collaboration Center (CCC) have demonstrated that information fusion and joint operational planning can work. But overlapping missions and unclear playbooks leave companies guessing what to share, when to share it, and with whom. These forums and underlying collaboration mechanisms must be resourced, deconflicted, and made predictable.

Despite the noble efforts of government agencies to share information behind the scenes and interact with industry with one voice, the current structure remains fragile and dependent on personal relationships. We simply cannot afford this fragility or inefficiency, particularly in an era of constrained government cyber resources and escalating threats.

Effective protection of critical infrastructure requires focused collaboration. The administration’s strategy rightly emphasizes this, but narrowing the focus will not be easy. For years, the government has tried to cover sixteen sectors and hundreds of thousands of entities equally, an impossible task. Looking back, we wish we had prioritized more strategically during our time in government.

Prioritization is politically difficult, but operationally necessary. When everything is critical, nothing truly is. For the most important critical infrastructure, we must focus on resilience—ensuring systems can withstand attacks and recover quickly—rather than assuming we can prevent every breach.

The government can take concrete steps now to disrupt the ransomware ecosystem. Ransomware has cost American lives; designating certain ransomware actors and their enablers as Foreign Terrorist Organizations could unlock more powerful sanctions, diplomatic action, and intelligence operations. Sensible regulation holding cryptocurrency exchanges accountable for knowingly laundering ransomware proceeds could weaken criminal business models while strengthening legitimate digital asset markets in the U.S. and allied nations.

The technology and cybersecurity industry has responsibilities, as well. Industry must share actionable intelligence where legally permitted, pressure-test government programs with candid feedback, and support reauthorization of the Cybersecurity Information Sharing Act of 2015.

We all must do our part. Every day that passes without us confronting these critical questions is a gift to our adversaries. This will only be exacerbated by advancements in AI. We are hopeful that the release of this administration’s National Cyber Strategy will spark much-needed debate and decisions about the role of the government and industry in advancing our nation’s cybersecurity and resilience.

Cynthia Kaiser is senior vice president of Halcyon’s Ransomware Research Center. She was formerly Deputy Director of the FBI’s cyber division.

Matt Hartman serves as chief strategy officer at Merlin Group, where he is focused on identifying, accelerating, and scaling the delivery of transformative cyber technologies to the public sector and critical industries. Prior to this role, Matt spent the last five years serving as the senior career cybersecurity official at the Cybersecurity and Infrastructure Security Agency (CISA) within the Department of Homeland Security.

The post We’ve seen ransomware cost American lives. Here’s what it will actually take to stop it. appeared first on CyberScoop.

How ‘silent probing’ can make your security playbook a liability

By: Greg Otto
2 March 2026 at 06:00

For years, cyberattacks followed a familiar pattern: reconnaissance, exploitation, persistence, impact. Defenders built their strategies around that cycle, patching vulnerabilities, monitoring indicators, and working to reduce dwell time. But a quieter shift is underway.

Today’s most sophisticated adversaries are using AI to study how organizations defend themselves. They run what we call “silent probing campaigns”: long-term, subtle operations designed to map how a team detects threats, escalates issues, and responds under pressure. These campaigns focus on learning the defender’s habits, workflow and decision points so attackers can time and tailor follow-on actions to evade detection. This reframes cyber risk, turning it from a technical problem into a behavioral one.

From finding vulnerabilities to studying defenders

Historically, attackers focused solely on technical gaps, whether from an unpatched server, exposed credentials or a misconfigured cloud. The objective was to find the weakness and exploit it before someone else did. Silent probing adds a new “learning” phase to that playbook.

Attackers study how an organization responds as carefully as they study its systems. Using AI over weeks or months, they quietly measure detection and escalation speed, learn which alerts get ignored, and infer patterns like shift coverage, alert fatigue, and process bottlenecks.

Over time, these subtle probes generate data that feeds adaptive models. Those models help attackers learn what triggers a response, how quickly teams react, and where detection tends to falter. This means when a major attack finally unfolds, it has already been optimized against the organization’s real defensive patterns.

At the same time, organizations are embedding AI into their security operations, from automated triage to autonomous response orchestration. However, this shift introduces a new risk: the very systems designed to defend the enterprise can become part of the attack surface.

As organizations rely more heavily on AI to run their security operations, these systems need wide visibility and access to work properly. They often connect to cloud platforms, identity systems, and endpoint controls so they can detect threats and act quickly. But that level of access concentrates a substantial amount of power. If one of these AI-driven systems is compromised or manipulated, it doesn’t just expose a single tool; it can give an attacker broad reach across the environment. In that scenario, the technology designed to protect the organization can accelerate the damage.

Automation increases risk when AI systems can take action without human approval, such as isolating devices, resetting passwords, or changing configurations. Clear limits and guardrails are required, since manipulated inputs or faulty interpretations can trigger rapid, wide-reaching disruption. Risk depends on the system’s authority and the controls around it.

AI hallucination in security operations can cause systems to misidentify threats, isolate the wrong assets or overlook the real threat. Repeated errors can erode trust in the system, or worse, create a false sense of confidence in its automated decisions. This affects judgment, decision-making, and how risk is understood in real time.

The risk of predictable defenses

Silent probing reveals how predictable an organization’s defenses are. Attackers are now looking for patterns in defensive behavior: response consistency across shifts, routinely ignored alerts, predictable incident response steps, and whether noisy tools accidentally hide slow-moving threats.

When defensive behavior becomes visible and predictable, it can be studied and exploited. Organizations need to understand how their defenses appear from the outside and assess their behavioral exposure the same way red teams test technical controls. This includes understanding how easily an outsider can identify detection thresholds, how clearly response times can be measured, and how much operational routine can be learned through quiet, repeated probing. The key question is whether patterns of response are unintentionally teaching attackers how to succeed.

Readiness in the age of AI

As AI plays a bigger role in security operations, oversight has to evolve alongside it. Strong governance starts with clearly defining what AI systems are allowed to do. Organizations need to be explicit about which actions can happen automatically and which require human approval. Likewise, least-privilege principles should apply not only to people, but also to machines. AI-driven tools should be tested regularly and reviewed for drift, bias, and inaccurate conclusions. Wherever possible, detection and response authority should be separated to avoid concentrating too much power in a single system. Centralization without control may feel efficient, but in practice, it creates fragility.

Still, policies and guardrails alone are not enough. As attackers use AI to understand defenders, defenders must sharpen their own ability to think like their adversaries. Security professionals need to evaluate how their tools perform and how they might be observed, manipulated, or misled. This requires questioning automated decisions, stepping in when necessary, and investigating anomalies—especially when the system appears confident in its conclusions.

This is why hands-on simulations and AI-focused red teaming matter. Teams need experience in environments that simulate adaptive adversaries who adjust their tactics based on defensive responses, not just textbook attack scenarios. They need to understand AI’s detection capabilities and the risks introduced by poor configurations or blind trust. The gap organizations face has become more cognitive than technological, and closing that gap requires continuous, measurable skill development, including AI literacy, offensive AI awareness, and the ability to critically evaluate automated outputs.

In an AI-first era, resilience depends on whether an organization defends itself as if it is being watched. Silent probing allows attackers to learn detection thresholds, escalation speed, and response consistency over weeks or months. This quiet observation can serve as a precursor to a major attack on an enterprise.

Security leaders need to focus on what their organizations reveal through day-to-day defensive behavior. When attackers can observe, learn, and adapt over time, predictable responses become a liability because they are easy to study and exploit.

Dimitrios Bougioukas is senior vice president of training at Hack The Box, where he leads the development of advanced training initiatives and certifications that equip cybersecurity professionals worldwide with mission-ready skills.

The post How ‘silent probing’ can make your security playbook a liability appeared first on CyberScoop.

Why ‘secure-by-design’ systems are non-negotiable in the AI era

By: Greg Otto
17 February 2026 at 06:00

Moody’s recently reported that global investment in data centers will surpass $3 trillion over the next five years, driven by AI capacity growth and hyperscaler demand. As big tech companies, banks, and institutional investors pour capital into these projects, data center developers and their financial sponsors must prioritize cybersecurity.

Moody’s said that data center investments made by the six largest U.S. cloud computing providers — Microsoft, Amazon, Alphabet, Oracle, Meta, and CoreWeave — approached $400 billion last year. The firm anticipates that annual global investment will grow by $200 billion over the next two years.

Real estate firm Jones Lang LaSalle forecasted similar investment flows in a separate report published earlier this year, projecting that “nearly 100 GW of new data centers will be added between 2026 and 2030, doubling global capacity.” JLL said that this infrastructure investment “supercycle,” one of the largest in the modern era, will result in $1.2 trillion in real estate asset value creation and the need for roughly $870 billion of new debt financing.

In concert, these reports reflect a growing reality: Data centers are strategic, interconnected infrastructure supporting our manufacturing, national security, and communication systems. Cyber disruptions, whether through ransomware, supply-chain compromise, or operational technology (OT) attacks, can cascade beyond a single facility, threatening grid stability, cloud services, economic activity, and public safety.

Data centers are now critical hubs of energy demand and digital dependency. Their cybersecurity posture is directly tied to the resilience of the industrial and energy ecosystems that support them. For investors and stakeholders, cybersecurity should be fundamental to asset value and risk management. Strong cybersecurity directly affects uptime guarantees, regulatory exposure, insurance coverage, financing terms, and long-term valuation.

The most significant cybersecurity risks now center on three critical areas: data center-grid convergence, supply-chain vulnerabilities, and secure-by-design considerations. Data center operators and their financial backers must address these interconnected threats to protect both individual facilities and the broader system they support.  

Hardwired for risk

The cybersecurity challenge facing the data center supercycle stems from how these campuses are tightly coupled with both the public power grid and their own industrial control systems. As hyperscale and AI‑optimized facilities proliferate, their constant demand for high‑quality electricity shapes grid planning and reliability. These large campuses function less like traditional real estate and more like critical energy infrastructure nodes.

This shift comes as grid capacity tightens. The North American Electric Reliability Corporation (NERC) has warned that demand from new data centers will outpace energy supply growth in the coming years. A cyber incident that disrupts a major data center or degrades its industrial control systems can propagate into regional grid reliability issues, contract penalties, and broader economic disruption.

At the same time, the OT running these sites — building management systems, cooling controls, battery and generator management — creates dense cyber‑physical exposure. Global insurer Marsh notes that events in these systems, whether from human error or cyberattack, can cause physical damage and significant business interruption. The 2021 OVHcloud data center fire in Strasbourg, France, destroyed an entire facility and disrupted services for thousands of customers, showing how failures in fire protection and cooling systems rapidly escalate into catastrophic loss. Those safety functions now run through interconnected, remote-access-enabled OT systems.

Secure‑by‑design architectures for both grid‑side interfaces and on‑site OT are prerequisites for preventing this rapidly expanding energy–data infrastructure from becoming a single, converged point of failure.

Supply-chain integrity first

AI‑optimized campuses depend on massive volumes of GPUs, high‑density servers, network appliances, OT controllers, and edge devices. Many of these components are designed, manufactured, or assembled in jurisdictions at the center of great‑power competition, particularly China. Reports warn that state-aligned actors could introduce backdoors, malicious firmware, or weaponize delivery timelines to create strategic outages.

Secure‑by‑design must start at procurement. Security-conscious procurement requires stringent vendor due diligence, diversification away from single‑country dependencies, hardware and firmware validation before deployment, and alignment with export controls and national‑security guidance on high‑risk equipment. The bill of materials (BoM) for a modern data center must be treated like a living threat surface, with traceability from chip manufacture through installation, including approved vendor lists, tamper‑evident logistics, and mandatory firmware attestation.

Procurement teams need escalation paths for opaque supply chains, unexplained cost changes, or “gray‑market” alternatives, plus playbooks for rapidly substituting vendors when geopolitical shocks or sanctions make a product line unacceptable.

Governance around supply‑chain risk must reach the same level as power, cooling, and uptime guarantees in contracts with hyperscalers and large tenants. Secure‑by‑design campuses will embed requirements for hardware provenance, firmware update hygiene, and ongoing vulnerability disclosure into master service agreements and construction/operations contracts, with clear accountability when a supplier is implicated in espionage or sabotage.

Data center sponsors who cannot prove supply‑chain integrity will face growing pressure from regulators, insurers, and investors who see hardware trust as a prerequisite for AI and cloud infrastructure resilience.

Securing the infrastructure supply chain pipeline

Engineering secure-by-design campuses begins with assuming adversaries will target internet‑exposed and OT edge devices. Security architects must design environments that prevent any foothold at the edge from escalating into grid‑scale disruption or safety‑critical failure.

Geopolitically motivated campaigns against energy infrastructure are accelerating. Recent Russia-nexus attacks on the Polish power system and Romania’s national oil pipeline demonstrate that state‑linked and criminal groups see energy and digital infrastructure as leverage points. Last December, actors linked to Russia’s Sandworm APT compromised remote terminal units (RTUs), firewalls, and communications gateways at Polish substations and distributed energy facilities.

This precedent-setting cyberattack—the first to directly target distributed energy resources in a NATO member’s power system—is indicative of the current threat landscape. Sandworm’s campaign underscores how fragile edge devices are and how vital it is to harden the gateways at the OT boundary. The first pillar of secure-by-design campuses is disciplined network segmentation that treats OT as a distinct, high‑consequence domain.

OT networks should be carved into functional and geographic zones—separating building management from generator controls, from battery systems, from grid‑interconnection protection—with tightly controlled conduits between them, enforced by OT‑aware firewalls and protocol‑constrained paths.

Hardware‑enforced unidirectional gateways and data diodes offer uniquely strong protection at key boundaries. Data diodes allow telemetry and process data to flow outward from OT to IT and monitoring systems while physically blocking any return path, sharply reducing the chances that a web-based intrusion can reach OT systems.

Data diodes should be placed at key demarcation points—between the data center’s OT and corporate IT, between on‑site generation controls and the broader campus, and at interfaces with utility systems—so operators preserve visibility without exposing those domains to bidirectional network risk.

A second foundational element of secure‑by‑design campuses is a clear, continuously maintained OT asset inventory capturing every PLC, RTU, relay, drive, building controller, gateway, sensor, and engineering workstation, along with its network location, firmware version, vendor, and criticality. Effective segmentation depends on knowing what you have and how it communicates.

Operators cannot isolate critical power and cooling functions, or confidently place diodes and firewalls, without understanding which devices participate in those functions and which paths they rely on. This inventory must fully cover the same class of gateways and field devices abused in the Polish grid attack.

When asset inventories are linked to configuration and vulnerability management, operators can quickly identify exposed OT devices when they are approaching end of life or when new flaws are disclosed. A comprehensive OT asset inventory also enables security teams to quickly locate high‑risk remote access paths and prioritize segments for additional hardening.
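A minimal sketch of how such an inventory might be queried when a vendor advisory lands; the asset fields and advisory shape here are illustrative assumptions, not any particular product’s schema:

```python
from dataclasses import dataclass

@dataclass
class OtAsset:
    name: str
    asset_type: str       # e.g. "RTU", "PLC", "gateway"
    vendor: str
    firmware: str
    zone: str             # network segment the device lives in
    internet_exposed: bool
    end_of_life: bool

def at_risk(inventory, advisory):
    """Given a vendor advisory (vendor + affected firmware versions),
    return the assets to prioritize: affected devices that are also
    internet-exposed or past end of life."""
    return [a for a in inventory
            if a.vendor == advisory["vendor"]
            and a.firmware in advisory["affected_firmware"]
            and (a.internet_exposed or a.end_of_life)]

inventory = [
    OtAsset("rtu-1", "RTU", "Acme", "1.2", "substation", True, False),
    OtAsset("plc-7", "PLC", "Acme", "1.2", "cooling", False, False),
    OtAsset("gw-3", "gateway", "Other", "9.9", "bms", True, True),
]
advisory = {"vendor": "Acme", "affected_firmware": {"1.2"}}
```

Here `plc-7` is affected but neither exposed nor end of life, so it falls to a lower remediation tier; the exposed RTU surfaces first, which is exactly the triage the inventory exists to enable.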

Secure‑by‑design engineering mandates the mitigation of accelerating cyber risks posed by remote access gateways and the mass automation of industrial functions. Every orchestration platform, management API, and remote session is a potential high‑impact attack vector. This threat model requires consolidating OT access through hardened jump hosts with strong authentication and just‑in‑time privileges; sharply limiting what automation tools can change on OT networks; enforcing strict segregation between automation platforms and safety‑critical functions; continuously monitoring automated and remote actions; and hardening configuration‑management workflows.

Lastly, secure‑by‑design architecture demands OT‑aware visibility that can actually see and understand what is happening on control networks. This means instrumenting OT segments with monitoring tuned to industrial protocols and behaviors, correlating alerts with asset context, and wiring those insights into playbooks that can quickly isolate, triage, and physically replace compromised edge devices before an intrusion escalates.

Resilience is the only path to funding

The threat modeling, procurement, and design best practices detailed here directly constrain the blast radius of geopolitically charged campaigns that threaten data center reliability and safety. Data center developers, operators, and investors need this systems‑level blueprint for building AI‑era campuses that remain resilient as the energy and threat landscape becomes more contested.

Banks and institutional sponsors are deploying trillions of dollars in construction, fit‑out, and power capacity on the assumption that AI demand will translate into durable, high‑availability cash flows. Underinvesting in cybersecurity directly threatens covenants, refinancing options, insurance coverage, and asset valuation. Outages, safety incidents, or regulatory findings will capsize the investment thesis.

The campuses that will secure the best financing over the next decade will be those that can point to their secure‑by‑design architectures, campus-wide OT governance, and defensible supply‑chain practices. In this intertwining infrastructure supercycle and macro OT threat environment, power usage efficiency (PUE) metrics and fast build schedules will matter less than proven security safeguards.

The stakes are escalating rapidly. Developers and utilities are pairing energy‑hungry data centers with small modular reactors (SMRs) and other non‑traditional power generation. These campuses will converge with the security and risk profile of nuclear and high‑hazard industrial facilities, bringing heightened regulations and adversary interest.

SMR data centers fundamentally change the threat model. When nuclear systems sit alongside AI clusters, secure-by-design takes on a new dimension. Operators, investors, regulators, and security professionals must prepare for this convergence. The integration of compute and power generation creates a dynamic that demands the security rigor of both digital infrastructure and nuclear facilities. The window to build these protections into design is closing.

Jeffrey Knight is Director of Global Critical Infrastructure Services at InfraShield. Jeff brings more than 35 years of experience in nuclear engineering and cybersecurity across the Department of Defense (DoD), SWIFT, the NRC, and the Department of Energy (DOE) National Laboratory complex.

The post Why ‘secure-by-design’ systems are non-negotiable in the AI era appeared first on CyberScoop.

AI security’s ‘Great Wall’ problem

By: Greg Otto
9 February 2026 at 06:00

The Great Wall of China was built to slow northern raiders and prevent steppe armies from riding straight into the empire’s heart. Yet in 1644, its most impregnable fortress fell without a siege.

At Shanhai Pass, where the wall meets the Bohai Sea, General Wu Sangui commanded the eastern gate. Behind him: a rebel army had just taken Beijing, the emperor was dead, and the Ming Dynasty was buckling under internal crisis. Ahead: Manchu forces who had spent decades probing for weakness. Wu faced the oldest dilemma in fortress warfare: who is the greater threat?

He opened the gate. The Manchus poured through, defeated the rebels, and never left. They founded the Qing Dynasty and ruled China for the next 268 years, the last imperial dynasty before the republic.

The wall didn’t fail. The stone held. What broke was the human system it depended on.

Walls do not fail because the bricks are weak. They fail because the system around the wall is weak. Underpaid guards get bribed, gate procedures degrade, supply lines break. The attacker does not need to knock the wall down when they can walk through the gate.

That is why I disagree with the increasingly popular framing that AI security is fundamentally a cloud infrastructure problem. Cloud security matters. Identity, telemetry, and incident response are table stakes. But treating AI risk as something you can solve primarily by hardening the hosting layer is a comforting simplification, not a complete threat model.

Palo Alto Networks recently reported that 99% of organizations experienced at least one attack on an AI system in the past year. If nearly everyone is getting hit, the right conclusion is not “build a higher wall.” It is “we are defending the wrong boundaries.”

The fortress fallacy

A fortress mindset starts with an implicit assumption: secure the infrastructure and you secure the system. That mental model can work when the system boundary is clean and the workload is deterministic, but AI breaks both assumptions. Modern AI stacks are ecosystems that depend on components sitting outside the neat perimeter even when a model runs inside your cloud tenant: open-source libraries, data pipelines, evaluation tools, plug-ins, agent frameworks, and the humans who can change any of the above.

If your security plan begins and ends with cloud controls, you will build excellent defenses around a system that attackers are not planning to assault head-on. Attackers route around strength and target weakness. Why scale the wall when someone at the gate is underpaid, overworked, or facing an impossible choice?

Consider the trust problem. Most organizations are consuming models trained elsewhere, fine-tuning them, and shipping them into production with limited internal expertise. Even if you self-host, you did not write the model or curate the training data. Cloud hardening does not change that reliance; it just makes the box look more secure.

The most dangerous failures are not always intrusions but permissioned outcomes. AI agents turn mixed-trust inputs into execution, and when an agent can read internal content, open tickets, or trigger workflows, the attack surface moves to whatever the agent is allowed to do. If an attacker can influence what the agent consumes, they can influence what it executes, often without malware or exploit chains. Real-world AI incidents frequently involve the glue: telemetry, orchestration, plug-ins, and vendor services beside the model.

Humans remain the soft underbelly. AI security discussions obsess over architecture while threat actors obsess over access paths. Cloud controls cannot prevent coercion, social engineering, or insider risk. If a small set of people can approve changes to an AI system’s tools or policies, that is where the attacker will focus. 

History teaches this lesson clearly: the Great Wall’s garrison soldiers took payments to allow traders and smugglers to pass, to skip patrols, or to neglect watch duties. Irregular pay and delayed wages made them susceptible. Today, the “bribe” often isn’t cash. It might be a phishing link, a fake vendor request, or pressure to cut corners during a production crisis. The strongest wall doesn’t matter if the person guarding the gate is persuaded to open it.

Security from within 

Begin with a premise security leaders should already accept: everything can be breached. Your job is to reduce the odds, detect threats quickly, and limit the damage when prevention fails.

Threat model the entire AI system, not just where it’s hosted. Include data supply chains, tooling, evaluation pipelines, plug-ins, agents, and the people who can change them. If your threat model leaves out upstream and downstream dependencies, it’s incomplete.

Treat non-human identities as serious risks. Agents, service accounts, and tool credentials should be managed like privileged user accounts. Apply zero standing privileges (no always-on access), just-in-time access, contextual approvals for high-impact actions, and monitoring for unusual tool behavior. The moment you give an agent broad permissions “for productivity,” you’ve created a new control plane that attackers can hijack.
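One way to sketch zero standing privileges for agents, using hypothetical scope names and a simple in-memory broker (a real deployment would sit on a secrets manager or identity provider, not a dictionary):

```python
import time, secrets

# Illustrative scope names; these are assumptions, not a real product's API.
HIGH_IMPACT = {"isolate_device", "reset_password", "change_config"}

class CredentialBroker:
    """No standing credentials: each agent action gets a short-lived,
    single-scope token, and high-impact scopes require a human approval flag."""
    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self.tokens = {}  # token -> (scope, expiry timestamp)

    def issue(self, agent_id, scope, human_approved=False):
        if scope in HIGH_IMPACT and not human_approved:
            raise PermissionError(f"{scope} requires human approval for {agent_id}")
        token = secrets.token_hex(16)
        self.tokens[token] = (scope, time.time() + self.ttl)
        return token

    def authorize(self, token, scope):
        # A token is valid only for its one scope and only until expiry.
        entry = self.tokens.get(token)
        return entry is not None and entry[0] == scope and time.time() < entry[1]
```

The design choice worth noting: the broker refuses high-impact scopes by default, so “broad permissions for productivity” cannot happen by accident; someone has to approve each one, each time.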

Build audit-grade change control. You should be able to answer: who changed agent permissions, retrieval sources, tool bindings, or policy gates? If those controls can be modified quietly, you’re running a system that’s one rushed change away from becoming a security incident.

Invest in detection that assumes manipulation, not just compromise. The hard part of AI misuse is that malicious actions can look legitimate in traditional logs. You need traceability from input to outcome: what context was fed in, what tool calls happened, what policy was active, and what boundary was crossed.
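A toy illustration of that traceability: a hash-chained record tying each action to its input context, tool call, and active policy, so quietly altering an earlier entry breaks the chain. The field names are illustrative assumptions, not a standard schema:

```python
import hashlib, json, time

class AuditTrail:
    """Append-only log linking input context, tool call, policy, and outcome.
    Each entry commits to the previous one, making silent edits detectable."""
    def __init__(self):
        self.entries = []

    def record(self, context, tool_call, policy_id, outcome):
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"ts": time.time(), "context": context, "tool_call": tool_call,
                "policy_id": policy_id, "outcome": outcome, "prev": prev}
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)

    def verify(self):
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

This does not replace real log integrity tooling; it only shows the property the paragraph asks for: given any outcome, you can walk back to the context, tool call, and policy that produced it, and prove the record has not been quietly rewritten.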

Cloud security is necessary—but not sufficient. When vendors say AI security is mainly a cloud infrastructure problem, I hear the modern version of “the wall is tall, so we’re safe.” Wu Sangui’s wall was tall too, stretching to the sea at the empire’s edge. None of that mattered when the system around it collapsed. Empires fall from within.

AI security isn’t solved by building a better fortress. It’s solved by governing delegated authority, hardening the supply chain, and building systems that can prove what happened when—not if—something goes wrong.

David Schwed is the COO of SVRN, an AI infrastructure company focused on building decentralized, sovereign intelligence systems that empower users with data privacy and autonomous computational power.

The post AI security’s ‘Great Wall’ problem appeared first on CyberScoop.

We moved fast and broke things. It’s time for a change.

By: Greg Otto
2 February 2026 at 06:00

The phrase “Move fast and break things” has long been a guiding philosophy in the technology industry. Coined by Meta CEO and founder Mark Zuckerberg more than two decades ago, it was an operational directive for Facebook developers to prioritize speed and innovation even at the cost of stability. “Unless you are breaking stuff,” Zuckerberg told Business Insider in a 2009 interview, “you are not moving fast enough.” 

But Zuckerberg’s call was heard well beyond Facebook’s offices. The tech industry has embraced the philosophy for close to two decades, with benefits that are visible all around us: from TikTok influencers to contactless mobile payments, self-driving taxis, and AI-powered glasses. 

Practically, however, the culture of “move fast and break things” produced firms that prioritize fast release cycles and feature development over software security and resilience. They move fast and make broken things: vulnerable and poorly designed applications, services, and devices that are preyed on by cybercriminal groups and hostile nations. Consider the China-backed APT groups targeting both known and “zero-day” flaws in on-premises Microsoft SharePoint instances in 2025 and Ivanti VPN devices in 2023. Those campaigns led to the compromise of hundreds of organizations globally, including U.S. federal agencies and critical infrastructure operators. 

Then there was the campaign by the China-backed threat actor UNC6395, which targeted Salesforce customers using OAuth tokens stolen from the third-party application Salesloft Drift to exfiltrate large volumes of data from hundreds of Salesforce instances.

These incidents highlight two key features of today’s cyberthreat landscape. First, attackers exploit older applications with legacy code that contains high-severity security vulnerabilities. Second, they target large, complex cloud platforms like Salesforce by compromising vulnerable third-party integrations, software dependencies, and poorly managed APIs. 

This problem is compounded by a dangerous assumption: that software suppliers are trustworthy and secure. This mindset is outdated. In the past, supply chain attacks were rare, development cycles took months or years, and applying patches quickly was the gold standard. Today, in the “move fast” era, code can go from development to production in days, hours, or even seconds.

Consider the recent Trust Wallet breach. In December, the cryptocurrency application vendor disclosed that hackers stole approximately $8.5 million in crypto assets through a compromised Google Chrome extension. The root cause was a November outbreak of the Shai Hulud registry-native worm, which leaked Trust Wallet developers’ GitHub credentials. With these credentials, attackers accessed Trust Wallet’s browser extension source code and the Chrome Web Store (CWS) API key, the company said in a blog post. This allowed them to upload malicious extension builds directly to the store, bypassing Trust Wallet’s standard security reviews. Within days, Trust Wallet users awoke to find their wallets emptied.

By compromising “pre-blessed” channels like software updates from trusted suppliers or open source projects, criminal and nation-state attackers can extend their reach into sensitive IT environments.

The solution to problems like this starts with recognizing that the “move fast and break things” era must end. As software powers everything from database servers to dishwashers and tractors, vendors must prioritize security to meet market demands and regulatory requirements. This means proving their software is secure. Traditional application security testing tools—like software composition analysis (SCA), static application security testing (SAST), and dynamic application security testing (DAST)—are part of the solution.

However, today’s threat landscape requires software publishers to look beyond appsec’s “usual suspects.” They must test compiled binaries before release to detect tampering or malicious code that typically evades traditional application security tools. After all, that’s what we saw with incidents like the hacks of SolarWinds’ Orion or VoIP provider 3CX’s Desktop App.

Software publishers also need to prioritize code quality, security and transparency. They can do that by establishing ambitious “zero vulnerability” goals that incentivize them to address problems like “code rot” (reliance on old and vulnerable software modules). They must also embrace transparency by publishing bills of materials for their products—including SBOMs (software bills of materials), MLBOMs (machine learning bills of materials), and SaaSBOMs. Knowing what is in the software your organization consumes can be critical to heading off attacks that exploit vulnerable software dependencies or other supply chain weaknesses. 

Should tech firms still move fast and innovate? Absolutely. But in 2026, innovation and rapid releases must be balanced with an even greater priority: building secure, resilient technology that protects both vendors and customers from attacks. Instead of “move fast and break things,” we need a new rallying cry: “Make Smart and Safe Things.”  

Saša Zdjelar is the Chief Trust Officer (CTrO) at ReversingLabs and Operating Partner at Crosspoint Capital with nearly 20 years of Fortune 10 global executive leadership experience. His CTrO scope includes leadership, oversight and governance of the CISO/CSO function, including product security, as well as partnering with other leaders on corporate and product strategy, strategic partnerships and research, and customer and technology advisory boards, including sponsoring the ReversingLabs CISO Council.

The post We moved fast and broke things. It’s time for a change. appeared first on CyberScoop.

If you don’t control your keys, you don’t control your data

By: Greg Otto
28 January 2026 at 06:00

A recent Forbes investigation revealed that Microsoft has allegedly been handing over BitLocker encryption recovery keys to law enforcement when served with warrants. Microsoft says it receives about 20 such requests annually. Taken narrowly, this may appear to be a routine case of lawful compliance. On closer inspection, it raises a consequential question about how modern digital systems are designed and who ultimately controls the data they hold.

The debate centers on data sovereignty: whether individuals and organizations truly control their own data, or whether that control can be involuntarily transferred because of architectural choices made by technology providers.

BitLocker itself is strong encryption. Federal investigators have acknowledged they cannot defeat it cryptographically. Access depends on possession of the recovery key. The issue, then, is not the strength of the encryption, but where the keys reside.

Microsoft commonly recommends that users back up BitLocker recovery keys to a Microsoft account for convenience. That choice means Microsoft may retain the technical ability to unlock a customer’s device. When a third party holds both encrypted data and the keys required to decrypt it, control is no longer exclusive. Data sovereignty has already been diluted—long before any warrant is issued.

Before founding Virtru, I served as the lead technology policy adviser at the White House National Economic Council, where I participated in the early debates around the Patriot Act. Those discussions were framed as exceptional—extraordinary access in extraordinary circumstances. What experience and history have shown since is that access tends to expand, requests become more routine, and oversight struggles to keep pace with technologies that were never designed to limit access in the first place.

We have seen the consequences of this design pattern for more than two decades. From the Equifax breach, which exposed the financial identities of nearly half the U.S. population, to repeated leaks of sensitive communications and health data during the COVID era, the pattern is consistent: centralized systems that retain control over customer data become systemic points of failure. These incidents are not anomalies. They reflect a persistent architectural flaw.

That is why the BitLocker issue matters beyond a single investigation. When systems are built so that providers can be compelled to unlock customer data, lawful access becomes a standing feature of the architecture rather than an exceptional outcome governed by narrow circumstances.

Other large technology companies have demonstrated that a different approach is possible. Apple has designed systems that limit its own ability to access customer data, even when doing so would ease compliance with government demands. Google offers client-side encryption models that allow customers to retain exclusive control of encryption keys. These companies still comply with the law, but when they do not hold the keys, they cannot unlock the data. That is not obstruction. It is a design choice.

Microsoft could make similar choices. Retaining decryption capability is not a technical inevitability; it is a product and business decision. Defaults matter, and when convenience is the default, most users—individuals and enterprises alike—will unknowingly trade control for ease of use.

There is no such thing as a risk-free backdoor or a universally safe key escrow. Encryption does not distinguish between authorized and unauthorized access. Any system designed to be unlocked on demand will eventually be unlocked by unintended parties.

These risks are magnified in an era of persistent nation-state cyber activity. Every additional entity capable of decrypting data increases the attack surface. 

The Salt Typhoon debacle underscores the point. Even when attackers target networks or infrastructure instead of endpoints, the goal remains the same: to gain access to systems where large volumes of sensitive data can be accessed or decrypted at scale. Architectures that concentrate decryption authority magnify the consequences of inevitable breaches; architectures that enforce data-level key ownership sharply limit the blast radius. 

For global companies, the issue is not only U.S. legal process, but the possibility of conflicting demands across jurisdictions—some with far weaker protections for civil liberties and commercial confidentiality.

The lesson is straightforward: organizations cannot outsource responsibility for their most sensitive data and assume that third parties will always act in their best interest. Encryption only fulfills its purpose when the data owner is the sole party capable of unlocking it.

Microsoft has an opportunity to address this by making customer-controlled keys the default and by designing recovery mechanisms that do not place decryption authority in Microsoft’s hands. True data sovereignty—personal and organizational—requires systems that make compelled access technically impossible, not merely contractually discouraged.

In the meantime, this episode should serve as a warning. Executives and boards should ask a simple question of their technology stack: Where do our encryption keys live? 

The answer increasingly determines who truly owns the data—and who does not. Because if you do not control the keys, you do not control the data.

John Ackerly is the CEO and Co-Founder of Virtru.

The post If you don’t control your keys, you don’t control your data appeared first on CyberScoop.

Predator bots are exploiting APIs at scale. Here’s how defenders must respond.

By: Greg Otto
20 January 2026 at 06:00

The rise of malicious bots is changing how the internet operates, underscoring the need for stronger safeguards that keep humans firmly in control. Bots now account for more than half of global web traffic, and a new class of “predator bots” has emerged, unleashing self-learning programs that adapt in real time, mimic human behavior, and exploit APIs and business logic in order to steal data, scalp goods, and hijack transactions.

The economic fallout is staggering: bots and API attacks drain up to $186 billion annually, driven by credential theft, scalping, and fake account creation that fuel large-scale fraud and distort online markets. This represents one of the fastest-growing forms of cyber-enabled economic harm, and it’s happening mostly out of sight.

Security teams can’t afford to let hackers have the upper hand with automation. Addressing the growing bot crisis requires a deep knowledge of APIs and their vulnerabilities, as well as the ability to leverage automation in ways that match and counter attackers’ growing arsenals.

The new bot economy

Over the last few years, AI has accelerated malicious automation from simple scripts to adaptive systems that evolve in real time. Today’s predator bots blend seamlessly into normal traffic patterns, dramatically increasing the volume of legitimate-appearing bot traffic and making it harder for security teams to spot.

The influx of bots has led to credential theft, account takeover, scraping, scalping, and promotion fraud at an unprecedented scale. With malicious bots now accounting for roughly 37% of all web traffic, security teams are left feeling like they’re playing a giant game of bot whack-a-mole.

Predator bots are not only causing financial impact; they’re also slowly eroding customer confidence and overall societal trust in our digital infrastructure. These bots are targeting every sector, from financial services to citizen services and beyond, further chipping away at public trust in critical infrastructure capabilities. Even small disruptions can now be amplified through automation, turning minor weaknesses into large-scale outages or fraud events.

As predator bots continue to grow in influence and scale, defenders are left with a shrinking window of time to secure today’s digital infrastructure for tomorrow’s customers.

APIs are the front line

APIs are the fabric that connects the internet, powering functions like identity management, payments, checkout carts, inventory, and customer access. The very essence of how APIs connect the internet is also what makes them the most vulnerable targets. While APIs represent roughly 14% of attack surfaces, they attract 44% of advanced bot traffic, highlighting the imbalance of risk.

Predator bots differ from attacks focused on code vulnerabilities: they exploit business logic, turning legitimate workflows against organizations. This manifests in API-driven abuse, from manipulating checkout flows to large-scale data abuse. As AI enables both high-volume brute-force attacks and low-and-slow stealth attacks, security teams are quickly realizing traditional defenses are no longer up to par.

With hackers zeroing in on API abuse to drive predator bot attacks, visibility, classification, and behavior monitoring are now core to digital trust. Shadow APIs and forgotten endpoints only widen the attack surface, giving predators more places to hide. Shining a light on AI-powered bots requires layered defense strategies that combine human insight with advanced, adaptive technology.

Defending at machine speed

As automated attacks continue to mature and evolve, traditional defense tactics like static rules, CAPTCHAs, and IP blocking can no longer keep pace. To defend against bots at machine speed, security teams must pair modern defense tactics rooted in autonomy and agility with human expertise.

Bots don’t act in isolation, and neither should security teams. Autonomous controls can take over detection and response, automatically flagging suspicious bot behavior and enforcing protections like adaptive MFA. This allows human analysts to focus on high-value work like threat modeling and strategic risk reduction.

Security teams should first start with a complete API discovery, including endpoints, to ensure they know their digital environment inside and out. Next, teams must adopt proactive security measures like behavioral bot detection, MFA, machine-speed anomaly detection, and business logic monitoring. These measures ensure that bots are caught before damage can be inflicted.
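The "machine-speed anomaly detection" measure above can be illustrated with a deliberately crude sketch. Nothing here reflects any vendor's product; the class, thresholds, and client names are all hypothetical. It flags API clients whose request rate in a sliding window far exceeds the population median, one of many behavioral signals a real bot-defense stack would combine.

```python
from collections import defaultdict, deque
import time

# Toy sketch of machine-speed anomaly detection on an API endpoint:
# track per-client request timestamps in a sliding window and flag
# clients whose rate is far above the population median.

class RateAnomalyDetector:
    def __init__(self, window_seconds=60, threshold_ratio=10):
        self.window = window_seconds
        self.ratio = threshold_ratio
        self.hits = defaultdict(deque)  # client_id -> request timestamps

    def observe(self, client_id, now=None):
        now = time.time() if now is None else now
        q = self.hits[client_id]
        q.append(now)
        # Drop timestamps that have aged out of the window.
        while q and q[0] < now - self.window:
            q.popleft()

    def suspicious(self, client_id):
        counts = sorted(len(q) for q in self.hits.values())
        if not counts:
            return False
        median = counts[len(counts) // 2]
        # Flag clients whose in-window rate dwarfs typical behavior.
        return len(self.hits[client_id]) > self.ratio * max(median, 1)

det = RateAnomalyDetector()
for t in range(5):                      # normal clients: ~5 requests/min
    for client in ("alice", "bob"):
        det.observe(client, now=float(t))
for t in range(200):                    # bot: 200 requests/min
    det.observe("bot-1", now=t * 0.3)
print(det.suspicious("bot-1"), det.suspicious("alice"))  # True False
```

A pure rate threshold like this is exactly what low-and-slow bots evade, which is why the article's broader point stands: rate signals must be layered with behavioral and business-logic monitoring, not used alone.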

Today’s defense must operate, to some degree, like attacks: continuous, context-aware, and capable of adapting in real time. By augmenting human capabilities with autonomous tools, security teams shift from being overwhelmed and responding to threats reactively to operating proactively and intelligently. Security cannot afford to lag behind; it must evolve in lockstep with the threats teams face.

Automation is the new battleground

As AI accelerates attack automation, defenders need modern, AI-powered tools that match the speed of attackers and free security teams to concentrate on the complex, judgment-driven work that machines can’t replicate.

The future is about more than keeping bots out. Security’s next phase will be defined by behavior-driven insight, intent-based detection, and defense at machine speed.

Tim Chang is the global vice president of application security at Thales.

The post Predator bots are exploiting APIs at scale. Here’s how defenders must respond. appeared first on CyberScoop.

The quiet way AI normalizes foreign influence

By: Greg Otto
15 January 2026 at 09:30

Americans are being taught to trust propaganda. Often, it’s not intentional. A classic bit of advice for separating propaganda from real research is “Check the citations.” If the sources support the analysis, the material can be trusted. But AI is changing the rules of the game.

In December, the White House announced new guidance to ensure that AI tools procured for government use are “truthful” and “ideologically neutral,” including transparency around citation practices. But even with this new oversight, there is a structural issue the memo can’t fix: authoritarian states are optimizing their propaganda for AI consumption while America’s most credible news sources are actively blocking AI tools. This means that even ideologically neutral AI directs users toward state-aligned propaganda — simply because that is what is freely available.

Those who trust AI citations wind up trusting propaganda while believing they are doing responsible research.

Most large language models (LLMs) provide sources along with their analysis. But these models do not choose what sources to cite based on credibility. Rather, they choose based on availability. Many of the best sources, like top U.S. news outlets, are behind paywalls or are blocking the automated systems that AI uses to scan and collect information. These legacy media companies are slowly litigating and negotiating individual licensing deals with AI unicorns.

Authoritarian states, on the other hand, have optimized their content for accessibility. State-run media, like Qatar’s Al Jazeera, or Russian and Chinese outlets published in English, are free. That results in students, academics and federal analysts seeking to understand Gaza, Ukraine, or Taiwan being more likely to engage with state-backed propaganda than independent journalism.

Research from the Foundation for Defense of Democracies analyzing three major LLMs (ChatGPT, Claude, and Gemini) found that 57 percent of responses to questions about current international conflicts cited state-aligned propaganda sources.

When AI tools answer questions about contested conflicts — including Gaza, Ukraine, and Taiwan — they draw on enormous training data. While not perfect, the responses are often more nuanced than any one commentator or media outlet. But LLMs then funnel their hundreds of millions of users to a narrow subset of sources that they serve up as citations. FDD research found that 70 percent of neutral questions about the Israel-Gaza conflict yielded Al Jazeera citations.

This isn’t a minor technical flaw — citations are the attribution architecture shaping what Americans learn to trust.

While Western legacy media certainly carries its own biases, there is a crucial difference between editorial bias and state-controlled narratives. In 2024 alone, Russia-backed propaganda aggregator Pravda flooded the internet with more than 3.6 million articles from pro-Kremlin influencers and government spokespeople, in order to saturate the space with pro-Russian narratives.

AI sometimes fabricates information, or “hallucinates,” and that presents real risks. But urging people to “check the linked sources” can end up steering them straight to state-controlled media. Those links aren’t citations in the traditional sense — they are traffic directions. And the traffic they generate turns into revenue, which ultimately determines which news outlets survive. AI platforms are becoming the internet’s traffic arbiters, and right now they’re systematically directing traffic away from independent journalism and toward state-controlled propaganda.

AI companies must bring credible journalism into their systems. There is no question that quality journalism requires resources and revenue to survive. Unfortunately, the licensing deals that are being negotiated now between LLM companies and media outlets are moving slowly. Every delay allows citation patterns to harden while we are increasingly vulnerable to foreign influence.

There’s no silver bullet, but a patchwork of solutions can help. The White House has already taken a strong stance by requiring agency heads to restrict AI procurement to LLMs that are “ideologically neutral” and not “in favor of ideological dogmas.” Vendors selling to the U.S. government should present data on citation influence.

An LLM literacy campaign is needed so users understand citation bias. But awareness alone isn’t enough — AI companies should give lower priority to state-controlled media in their outputs and label them as such. And as LLMs evolve from being a consumer technology into a common infrastructure like the internet itself, citation patterns should be considered in AI safety frameworks — because a healthy democratic society needs a broad array of media sources, and that means independent journalism will always need support.

Leah Siskind is director of impact and an AI research fellow at the Foundation for Defense of Democracies.

The post The quiet way AI normalizes foreign influence appeared first on CyberScoop.

Is the US adopting the gray zone cyber playbook?

By: Greg Otto
12 January 2026 at 05:00

When President Trump referenced America’s ability to “darken” parts of Caracas during Operation Absolute Resolve, the comment stood out not because of what it confirmed, but because of what it implied. Delivered without technical detail, the remark hinted at capabilities that sit somewhere between diplomacy and force, and between cyber operations and traditional military action.

Whether or not the statement reflected a specific technical action in the raid on Venezuela is almost beside the point. What mattered was the signal: cyber-enabled disruption of civilian or economic systems is no longer treated as an abstract possibility, but as a plausible instrument of state power operating below the threshold of open conflict.

This framing aligns with events that preceded any visible kinetic or political resolution. Venezuela’s state-owned oil sector, the backbone of the country’s economy and a primary source of regime revenue, reportedly experienced cyber-related disruptions that affected operations and exports. Attribution remains contested, and no public confirmation has been offered. But the timing and the target were notable. Pressure seemed to be applied not during the confrontation itself, but earlier—targeting the systems that sustain national power.

These developments point toward a more deliberate “gray zone” approach, one that uses cyber interference against economic and civilian infrastructure as part of sustained pressure campaigns rather than isolated, surgical actions.

For a global power operating in an environment of constant competition, this shift may be less radical than it initially appears.

Why the gray zone matters

Gray zone conflict is often framed as a deviation from traditional deterrence. But in practice, it reflects how competition among major powers increasingly unfolds. Rarely does rivalry manifest as declared war. Instead, it plays out through incremental pressure applied across economic, informational, political, and technological domains.

Cyber capabilities are particularly well suited in this space. They allow nation-states to impose friction, degrade confidence, and shape behavior without crossing clear thresholds that would trigger conventional military escalation. Unlike kinetic force, cyber effects can be reversible, deniable, and calibrated over time.

From a technical perspective, this flexibility is not accidental. Modern cyber operations rely less on single exploits and more on persistent access, identity abuse, supply chain dependencies, and detailed mapping of complex systems. These attributes make cyber tools effective not just for disruption, but for long-term leverage.

For years, the United States invested heavily in advanced cyber capabilities while remaining cautious about integrating them openly into broader coercive strategies. This restraint, however, was not universally shared.

Lessons from the Russian model

For more than a decade, U.S. officials criticized Russia’s use of hybrid warfare, particularly its integration of cyber operations, economic pressure, information campaigns, and civilian infrastructure disruption. In Ukraine and elsewhere, civilian impact was not incidental; it was a key part of the strategy.

From a technical standpoint, Russia demonstrated that persistent interference against power grids, telecommunications networks, healthcare systems, election infrastructure, and government services could impose strategic costs without provoking decisive military retaliation. Even relatively limited actions, such as GPS jamming affecting civilian aviation in the Baltics and Eastern Europe, reinforced the same lesson: disruption does not need to be catastrophic to be effective.

These operations often relied on modest technical effects amplified through operational timing and uncertainty. Intermittent outages, degraded reliability, and ambiguous attribution created pressure on governments and populations without crossing clear red lines.

Regardless of how Moscow’s objectives are judged, the effectiveness of cyber and electronic interference as tools of statecraft did not go unnoticed. In recent years, other countries, particularly China and Iran, have steadily expanded these operations and capabilities.

How gray zone campaigns operate

From a cyber perspective, gray zone operations rarely resemble single attacks. They unfold as campaigns.

Access is often established years in advance through credential compromise, third-party vendors, or exposed management interfaces. Once inside, operators map dependencies, understand failover mechanisms, and identify points where limited disruption can produce outsized operational impact.

These effects, when applied, are typically restrained. Rather than causing prolonged blackouts or physical damage, campaigns may induce intermittent failures, data integrity concerns, or operational delays that erode confidence and consume resources. The goal is not destruction, but pressure: forcing leaders and operators to operate under uncertainty.

They are also designed to be reversible and deniable. The ability to stop, pause, or modulate disruption is as important as the ability to initiate it. This control allows cyber operations to be synchronized with diplomatic signals, economic sanctions, or other forms of statecraft.

Statecraft in an era of constant competition

The events in Venezuela underscore a broader reality: cyber-enabled pressure is now a standard component of how states pursue political outcomes. It shapes environments well before traditional markers of conflict appear.

The strategic question is no longer whether cyber-enabled economic interference will be used, but how seamlessly it is integrated with other tools. Sanctions, diplomacy, military posture, and cyber operations increasingly function as parts of a single continuum rather than separate domains.

This raises natural questions about where such pressure may be applied next. In the Western Hemisphere, U.S. attention has turned toward Cuba and Colombia. Beyond the region, Iran remains a focal point of coercive strategy, where cyber operations have already been used to strain industrial systems and public confidence without crossing into open conflict.

The point is not to predict specific operations, but to recognize that pressure via cyber operations has moved from the margins of policy into its core.

What this means going forward

For a global power, ignoring gray zone dynamics is increasingly unrealistic. However, embracing them does introduce new forms of risk. Cyber interference below the threshold of war offers flexibility and deniability, but it also creates ambiguity around control, proportionality, and long-term stability.

Escalation in this space rarely arrives as a single dramatic event. Instead, it accumulates through repeated disruptions that gradually blur the line between competition and conflict, often without clear signaling or agreed-upon thresholds.

Managing that risk requires more than technical capability. It demands disciplined judgment, an understanding of complex systems, and an appreciation for how seemingly modest cyber effects can cascade politically and economically.

The gray zone may be unavoidable, but how states operate within it will shape whether it becomes an effective tool of competition, or a source of sustained instability.

Aaron Estes, Vice President at Binary Defense, is a three-time Lockheed Martin Fellow with more than 25 years of experience in cybersecurity and software engineering. Estes has spent much of his career advancing mission resilience and adaptive defense for the Department of Defense, intelligence community, and leading defense contractors.

The post Is the US adopting the gray zone cyber playbook? appeared first on CyberScoop.
