
CISA wants critical infrastructure to operate ‘weeks to months’ in isolation during conflict

The Cybersecurity and Infrastructure Security Agency is urging critical infrastructure owners and operators to plan for delivering essential services under emergency conditions – potentially for months at a time.

The federal government’s top cybersecurity agency warned that state-sponsored hackers, particularly two Chinese groups known as Salt Typhoon and Volt Typhoon, continue to threaten critical sectors like electricity, water, and internet. 

The agency is now working with the private sector to protect operational technology – the systems that control the heavy machinery and equipment that powers most critical infrastructure – from attacks that enter through business IT systems or third-party vendor products.

The initiative, known as CI Fortify, will include CISA conducting targeted technical assessments of critical infrastructure entities and aims to create plans that “allow for safe operations for weeks to months while isolated” from IT networks and third-party tools, according to the agency’s website.

Nick Andersen, CISA’s acting director, told reporters that the goal is “service delivery [that] can still reach critical infrastructure after the asset owner has disconnected with IT and OT, disconnected from third party vendors and service provider connections and disconnected from third party telecommunications equipment.”

Over the past two years, wars in Ukraine, Gaza, Iran and elsewhere have seen water plants, power substations, data centers and other critical infrastructure targeted by kinetic or cyberattacks.

Andersen said the agency has already begun engaging with some companies to pilot the assessments and expects that work to ramp up considerably as CISA hires additional staff in the coming months.

He declined to name the entities involved in the pilot program, but said they will focus on organizations that support national security, defense, public health and safety and economic continuity. He added that CISA’s assessments will vary from sector to sector depending on their unique needs.

“Water isn’t necessarily designed to prioritize specific customer needs outside of recovery periods, while energy and transportation have more immediate tradeoffs for selecting one load or one set of cargo over another,” Andersen said as an example.

One pillar of CISA’s strategy is isolation: essentially turning off all third-party and business network connections to an OT network when facing an emergency or unknown vulnerability.

Organizations also need to develop an internal plan for what acceptable service levels look like under those conditions and reach understandings with their critical customers, like U.S. military installations and lifeline services.

The second pillar, recovery, involves best practices for organizations: backing up files, documenting systems and having manual backups for operations when normal computer systems are down.
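The recovery practices CISA describes can be partially automated. As an illustration only, the sketch below checks whether backup files have gone stale against a recovery-point objective; the 24-hour threshold, directory layout, and file names are hypothetical, not drawn from CISA guidance.

```python
import os
import tempfile
import time
from pathlib import Path

MAX_BACKUP_AGE_S = 24 * 3600  # hypothetical recovery-point objective: 24 hours


def stale_backups(backup_dir, max_age_s=MAX_BACKUP_AGE_S, now=None):
    """Return names of backup files whose last-modified time exceeds the allowed age."""
    now = time.time() if now is None else now
    return sorted(
        p.name
        for p in Path(backup_dir).iterdir()
        if p.is_file() and now - p.stat().st_mtime > max_age_s
    )


# Demo with a throwaway directory and made-up backup names.
d = tempfile.mkdtemp()
fresh = Path(d, "plc_config.bak")
fresh.write_bytes(b"...")
old = Path(d, "hmi_project.bak")
old.write_bytes(b"...")
os.utime(old, (time.time() - 3 * 86400,) * 2)  # backdate by 3 days
print(stale_backups(d))  # -> ['hmi_project.bak']
```

A real program would also verify that backups restore cleanly, not merely that they are recent; freshness checks are only the first step.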

Cybersecurity specialists who focus on critical infrastructure and operational technology widely assume that China is not the only nation to have broadly compromised American critical infrastructure, and that hacking groups tied to other nations have almost surely noticed and exploited the same basic vulnerabilities and hygiene issues found by the Typhoons.

Agencies like the FBI and Federal Communications Commission have touted efforts to purge Chinese hackers and work voluntarily with telecoms to harden their network security. But U.S. national security officials and cybersecurity defenders have consistently said both Salt Typhoon and Volt Typhoon remain active threats to U.S. critical infrastructure.

The post CISA wants critical infrastructure to operate ‘weeks to months’ in isolation during conflict appeared first on CyberScoop.

Here’s how cyber heavyweights in the US and UK are dealing with Claude Mythos

A joint report from the Cloud Security Alliance (CSA), the SANS Institute and the Open Worldwide Application Security Project (OWASP) concludes that in the near term, organizations are “likely to be overwhelmed” by threat actors using AI to find and exploit vulnerabilities faster than defenders can patch them.

While those organizations can use AI tools to speed up their own defenses, defenders “still face a heavier relative burden due to the inherent limitations of patching.” This in turn leads to “asymmetric benefits” for attackers, who can afford to adopt the technology without the same caution and bureaucracy as a multi-billion-dollar business.

“The cost and capability floor to exploit discovery is dropping, the time between disclosure and weaponization is compressing toward zero, and capabilities that previously required nation-state resources are now becoming broadly accessible,” wrote Robert Lee, SANS Institute’s Chief AI Officer; Gadi Evron, CEO of Knostic; and Rich Mogull, chief analyst at CSA, who served as the primary authors.

The report marks one of the first comprehensive responses to the capabilities of Claude Mythos from the U.S., boasting cybersecurity luminaries who have set policy at the highest levels as contributing authors, including Jen Easterly, former director of the Cybersecurity and Infrastructure Security Agency, Rob Joyce, a former top White House and NSA cybersecurity official, and Chris Inglis, former National Cyber Director.

It also includes private sector stalwarts like Heather Adkins, Google’s CISO, Katie Moussouris, CEO of Luta Security, and Sounil Yu, chief technology officer at Knostic. Another seventy CISOs, CTOs and other security executives are named as editors and reviewers.

Also this week, the UK’s AI Security Institute (AISI) detailed the results of tests it performed on a preview version of Claude Mythos, calling it a “step up” from past Anthropic models in the cybersecurity arena and able to “execute multi-stage attacks on vulnerable networks and discover and exploit vulnerabilities autonomously.”

Using a mix of Capture the Flag exercises and cyber range testing, AISI researchers found that Mythos not only raised the ceiling for technical non-experts and apprentice-level users but also narrowed the overall gap in hacking proficiency between the two. In other words, the distinction between amateur “script kiddies” and mid-level hackers with technical knowledge is shrinking.

Claude Mythos and other Large Language Models are increasing the capabilities of both lower and mid-level hackers when it comes to solving cybersecurity-specific tasks and challenges. (Source: AISI)

Before April 2025, no Large Language Model could complete a single expert-level CTF problem. Mythos successfully solved nearly three quarters (73%) of them.

In cyber range tests – which are meant to simulate more complex, multi-chain attacks – the results were uneven, but also represented meaningful progress over prior Claude models.

Mythos was subjected to a 32-step attack playbook modeled on corporate networks, spanning initial network access to full network takeover. In three of the 10 simulations, the model completed an average of 24 of the 32 steps. Older versions of Claude and other frontier models never averaged more than 16.

Claude Mythos improved on other models’ ability to complete a 32-step cyberattack targeting a simulated corporate network environment. (Source: AISI)

Mythos flunked its test against a simulated operational technology cooling tower, but researchers noted that this doesn’t mean AI is bad at exploiting OT: the model actually faltered during the IT section of the exercise.

UK researchers were more measured in their analysis of Mythos, noting that their testing indicates it is “at least capable” of autonomously taking down smaller, weakly defended enterprise networks.

But they also note that their cyber ranges lack security features – like active defenders and defensive tooling – that would be common in many real-world networks and would present additional obstacles; nor did the tests penalize the model for triggering security alerts.

“This means we cannot say for sure whether Mythos Preview would be able to attack well-defended systems,” the researchers concluded.

Technical debt coming due

Both the US and UK reports agree that large language models are broadly moving in a similar direction of lowering the technical barrier. The US authors call for organizations to more quickly adopt AI for cyber defense while overhauling their incident response playbooks and corporate policies to account for more automated defense postures.

For its part, Anthropic has said it is not selling Mythos commercially, and last week it announced the model would be made available to Project Glasswing, a consortium of major tech companies that will use it to root out and patch vulnerabilities in commonly used products and services.

But other experts have warned that businesses and governments are not well-positioned to either absorb the influx of expected vulnerability exploitation or deftly harness AI tools of their own to counter them.

Casey Ellis, CTO and founder of Bugcrowd, wrote that recent advances in AI cyber tools have succeeded largely by “living in the places we stopped looking a decade ago.”

While the cybersecurity community has spent years focusing on application security, vulnerability triage and other “top layer” security problems, AI tools and apex level hacking groups have been feasting on vulnerabilities in forgotten firmware, or routers whose manufacturers long went out of business.

This reality, that tools like Mythos can endlessly weaponize the massive technical debt of large organizations, has taken the traditional defender’s dilemma and “the knob that used to go to ten and turned it to seven hundred,” Ellis wrote.

Additionally, corporations and governments run on consensus-building, multiple layers of hierarchy, and legal compliance. While those are all necessary when handing your cybersecurity over to automated tooling, they can also slow the process and, in the short term, deepen the asymmetry against defenders.

“Integration into actual production becomes the battlezone,” wrote Ellis. “Lag is real. Bureaucracy is real. Supply chains are real.”

The post Here’s how cyber heavyweights in the US and UK are dealing with Claude Mythos appeared first on CyberScoop.

Iranian attacks on US critical infrastructure put 3,900 devices in crosshairs

The fallout and potential exposure from Iran’s state-backed targeting of U.S. critical infrastructure extends to more than 5,200 internet-connected devices, researchers at Censys said in a threat intelligence brief Wednesday. 

Of the programmable logic controllers manufactured by Rockwell Automation/Allen-Bradley that Censys identified as potentially exposed to Iranian government attackers, nearly 3,900, or about 3 out of every 4, are based in the United States.

The cybersecurity firm identified the devices based on details multiple federal agencies shared in a joint alert Tuesday, and published additional indicators of compromise, including operator IPs and other threat hunting queries.
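Rockwell/Allen-Bradley PLCs commonly expose EtherNet/IP (CIP) on TCP port 44818, which is the kind of service fingerprint internet-wide scanners key on. As a minimal sketch of the inventory filtering such a scan enables — the host records below are invented for illustration, not Censys data:

```python
from collections import Counter

ETHERNET_IP_PORT = 44818  # EtherNet/IP (CIP), the service Rockwell PLCs commonly expose

# Hypothetical records in roughly the shape a scan platform might return.
hosts = [
    {"ip": "203.0.113.10", "country": "US", "ports": [44818, 80]},
    {"ip": "198.51.100.7", "country": "US", "ports": [44818]},
    {"ip": "192.0.2.55", "country": "CA", "ports": [44818, 443]},
    {"ip": "203.0.113.99", "country": "US", "ports": [22]},  # no PLC service exposed
]

# Keep only hosts exposing the PLC service, then tally exposure by country.
exposed = [h for h in hosts if ETHERNET_IP_PORT in h["ports"]]
by_country = Counter(h["country"] for h in exposed)
print(by_country)  # -> Counter({'US': 2, 'CA': 1})
```

The same tally over real scan results is how a figure like "3 out of every 4 exposed devices are US-based" is derived.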

Federal authorities earlier this week warned that Iranian government attackers have exploited devices that control industrial automation processes and disrupted multiple sectors during the past month. Some victims also experienced financial losses as a result of the attacks, officials said. 

The operational technology devices are deployed across the energy sector, water and wastewater systems, and U.S. government services and facilities. 

Censys scans spotted 5,219 internet-exposed Rockwell Automation/Allen-Bradley PLC hosts shortly after the joint alert was issued by the FBI, National Security Agency, Cybersecurity and Infrastructure Security Agency, Environmental Protection Agency, Energy Department and U.S. Cyber Command. 

Researchers at Censys determined most of the exposed devices are connected via cellular systems, posing a significant risk to remote field deployments. Nearly half of the devices globally are connected to Verizon’s wireless network and 13% are connected to AT&T’s infrastructure.

“These devices are almost certainly field-deployed in physical infrastructure (pump stations, substations, municipal facilities) with cellular modems as their sole internet path,” Censys researchers wrote in the report. 

The potential attack surface is also amplified by additional services exposed on other ports of these devices, a discovery that Censys warned could give attackers direct paths to operations beyond PLC exploitation.

Researchers fingerprinted MicroLogix and CompactLogix models exposed to the latest threat campaign and published a list of the 15 most-exposed products. Many of the most prominent devices are running end-of-life software, a compounding risk that could allow attackers to prioritize unpatched devices upon scanning, according to Censys.

The attacks date back to at least March, following the U.S. and Israel’s war against Iran, and were underway as other Iranian government-backed attackers claimed other victims, including Stryker and local governments.

The post Iranian attacks on US critical infrastructure put 3,900 devices in crosshairs appeared first on CyberScoop.

Iranian hackers launching disruptive attacks at U.S. energy, water targets, feds warn

Iranian government hackers are launching disruptive cyberattacks on American energy and water infrastructure, U.S. government agencies “urgently” warned Tuesday.

The hackers are taking aim at devices and systems that control industrial processes, and have harmed victims in the last month following the onset of U.S.-Israel strikes against Iran, according to the joint alert from the FBI, National Security Agency, Cybersecurity and Infrastructure Security Agency, Environmental Protection Agency, Energy Department and Cyber Command.

“Iran-affiliated advanced persistent threat (APT) actors are conducting exploitation activity targeting internet-facing operational technology (OT) devices, including programmable logic controllers (PLCs) manufactured by Rockwell Automation/Allen-Bradley,” the alert states. “This activity has led to PLC disruptions across several U.S. critical infrastructure sectors through malicious interactions with the project file and manipulation of data on human machine interface (HMI) and supervisory control and data acquisition (SCADA) displays.”

U.S. government agencies have warned before about Iranian hackers going after similar targets with similar methods. The first such warning came after an Iranian government-linked group took credit for attacking a Pennsylvania water facility in late 2023.

Since March of this year, however, the agencies said they have seen new victims emerge from an advanced persistent threat group tied to Iran.

“The authoring agencies identified (through engagements with victim organizations) an Iranian-affiliated APT-group that disrupted the function of PLCs,” the alert reads. “These PLCs were deployed across multiple U.S. critical infrastructure sectors (including Government Services and Facilities, WWS, and Energy sectors) within a wide variety of industrial automation processes. Some of the victims experienced operational disruption and financial loss.”

The earlier campaign compromised at least 75 devices, the alert states.

The latest disruptions include “maliciously interacting with project files, and manipulating data displayed on HMI and SCADA displays,” according to the agencies’ warning.

After the U.S.-Israel conflict with Iran began, Tehran-connected hackers claimed victims including major medtech company Stryker, local governments and more.

The FBI warned last month that Iranian hackers were deploying malware over the Telegram app, although that campaign also predated the current Iran conflict.

The post Iranian hackers launching disruptive attacks at U.S. energy, water targets, feds warn appeared first on CyberScoop.

The Caracas operation suggests cyber was part of the plan – just not the whole operation

The dominant narrative has framed the Jan. 3 Caracas power outage during the mission to capture Venezuelan leader Nicolás Maduro as a “precision cyberattack.” But publicly available information points to a more complicated picture: videos, photographs, and accounts published from Caracas show significant physical damage to at least three Venezuelan substations. Experts who reviewed that material say the observed kinetic damage could, on its own, account for the outages—raising questions about how much of the outage can be confidently attributed to cyber activity alone.

These experts say Operation Absolute Resolve appears to have involved more than a stand-alone “cyber blackout,” despite the framing of many early accounts. In their view, cyber operations may have played some role, but the visible physical attacks alone could plausibly explain the outages—and that kinetic dimension is largely absent from the dominant narrative.

Retired Rear Adm. Mark Montgomery, a former director of operations at US Indo-Pacific Command and now a senior cybersecurity expert at the Foundation for the Defense of Democracies, described the outage to CyberScoop as part of “a campaign that likely took months to source cyber targets, days to work kinetic targets, and then integrated them into a single campaign plan that took a night.”

How the outage is framed matters because it can shape accountability, influence how governments and utilities prioritize grid security, and affect perceptions of offensive cyber capabilities. If the episode is widely presented as a “cyber-only” success without clear, corroborated evidence, it may encourage outsized conclusions about what cyber tools can accomplish on their own. Over time, that framing can steer policy and spending toward the wrong lessons—emphasizing digital defenses while giving less attention to physical vulnerabilities that may be just as consequential.

How ‘cyber blackout’ became the headline

Immediate coverage of the operation largely treated cyber as the decisive cause of the outage. Much of that framing traced back to a cryptic line from President Donald Trump at a post-operation press conference: “It was dark, the lights of Caracas were largely turned off due to a certain expertise [emphasis added] that we have, it was dark, and it was deadly.” (Later Trump suggested that the lights were turned out in Caracas by a “discombobulator.”)

The cyber narrative gained further momentum when Chairman of the Joint Chiefs of Staff Gen. Dan Caine said at the same press conference that US Cyber Command and Space Command provided “layering effects” for the operation. One widely cited report went further, citing anonymous “people briefed on the matter” to assert that a US cyberattack caused the blackout without offering forensic evidence, technical details, or independent corroboration.

Neither the Pentagon nor Cyber Command has publicly confirmed that a cyberattack caused the grid outage. US Cyber Command referred CyberScoop to the Department of War, which did not respond to our queries.

The grid damage is visible, not virtual

While cyber attribution largely rested on anonymous sourcing and inference, the evidence of physical damage was public, visual, and documented shortly after the attack.

Beginning on Jan. 5, publicly shared videos and photos appeared to show extensive physical damage at substations in Caracas owned by the government’s energy utility company, Corpoelec. The images included apparent bullet impacts, destroyed equipment, blown doors, and oil leaks at the Panamericana 69 kV and Escuela Militar 4.8 kV sites. In Venezuelan government statements, officials attributed the incidents to an attack and said the damage took multiple transmission lines out of service, including the OAM-Vega Caricuao-Panamericana 1 and 2 (69 kV) and Junquito-Panamericana 1 and 2 (69 kV). Electric grid security experts who reviewed the footage told CyberScoop it appeared credible and consistent with the kind of damage that could contribute to localized outages.

Local journalists noted physical attacks on these facilities, as well as a third substation at Fuerte Tiuna, a military installation in Caracas. Videos showing damage to the Fuerte Tiuna substation—some with fires still burning—were uploaded to YouTube on Jan. 12. AirWars, a not-for-profit group that describes itself as a civilian harm watchdog in conflict-affected nations, confirmed the geolocation of the affected substations and said “heavy weapons and explosive munitions” were used, though it reported no civilian harm.

The Venezuelan government did not respond to CyberScoop’s requests for comment, but it said in a press release that the damage was caused by “missiles.” Several experts with military or electric-sector cybersecurity backgrounds told CyberScoop that, based on what’s visible in the videos, the damage appears consistent with a kinetic attack—most likely carried out via helicopters and planes.

“There were obviously pretty large .50-caliber bullet holes in the walls,” Earl Shockley, president and CEO of INPOWERD, a military veteran and cybersecurity expert who worked for forty years as a power-grid operations engineer, told CyberScoop after viewing one of the videos.

“That’s a kinetic attack,” FDD’s Montgomery told CyberScoop after watching video of the Fuerte Tiuna substation incident.

Across interviews, grid operators, cybersecurity specialists, and military experts independently reached the same conclusion: the visible physical damage alone was enough to cause the outages observed.

An easy target, cyber or not

Experts note that cyber operations can sometimes produce kinetic effects—as they did in the highly complex US-Israeli operation known as Stuxnet—but they also say that taking down Caracas’s already fragile power grid would not necessarily have required that level of sophistication.

“All of us who are electric sector people, we’ve seen the videos,” Patrick Miller, president and CEO of Ampyx Cyber, told CyberScoop. “We’re all pretty much convinced that would definitely cause an outage. If you’re going to go in and shoot up the substations, why do you need cyber again?”

Miller said that temporarily disrupting the flow of power is a well-understood capability for any nation with the interest to do it, and that it often requires almost no precision or skill. “These are fragile systems,” he said.

“This was not a hard cyber target,” Montgomery said. “It’s an easy cyber target. These are older systems that we have worked on before in other countries. They’re not unique. We’re not talking about taking down Idaho National Labs here. We’re talking about taking down a poorly defended, underfunded, under-resourced network.”

Ron Brash, operational technology and industrial control system expert, told CyberScoop, “These energy management systems are probably relatively easy to infiltrate either because they haven’t updated the software or updated what they need to update, and you can exploit the vulnerabilities, or because you buy insider access.” Moreover, he said, “There’s probably so much analog stuff in there from the 1960s.”

Cyber to blind, kinetic to break

Experts generally agree that physical damage likely disabled at least parts of the power grid. But they also think cyber activity may still have played an important supporting role in Operation Absolute Resolve—one that could have enabled or amplified the operation, even if it wouldn’t fully account for where the outages occurred or how long they lasted without accompanying physical damage.

Some experts say that it’s possible the US used cyber capabilities to briefly disrupt power transmission in specific areas—potentially to reduce Venezuelan defenders’ situational awareness as they moved toward Maduro’s compound. “You want to reduce situational awareness, blind the enemy, break their coordination, and enable yourself to maneuver where you need to be. And all of those things just played out with that operation,” Shockley said.

“If we shut down the radars, if we shut down the power grid, they don’t see what’s going on,” he said. “Then we do some kinetic damage to prevent them from bringing the grid back up quickly. That way, we have plenty of time to do what we need to do.”

“A cyberattack is reversible, so it’s temporary,” Montgomery said. “It’s possible that cyber was attempted to take down power stations and equipment before the missiles came in to take down the power stations and equipment,” he added. “You have missiles coming in and taking down power, so nothing works. And before that, you do cyber so that more of your missiles get through. It is kind of a layer to the attack.”

Vice Adm. Heidi Berg, commander of 10th Fleet/Fleet Cyber Command, hinted at such layering at the WEST conference in San Diego earlier this week.

Cyber-based surveillance may also have been used for months in advance, giving the US military visibility into the grid’s weak points and helping inform where kinetic strikes have the greatest effect. “It takes months to identify what the system does, what the software does, do we have access to their older systems,” and so forth, Montgomery said.

“If you monitor that system, you learn where the power flows go, you learn where the single points of failure are, you learn that if this thing blows up, man, I’m in trouble because I can’t get power from this area to that area,” Shockley said.

Trump said at the press briefing that the lights went out in Caracas, and some coverage interpreted that as widespread darkness across large parts of the city. That framing sits uneasily with the idea of narrowly targeted, area-specific disruption. At the same time, social media posts and news accounts from the incident did not indicate that a large portion of Caracas was plunged into darkness.

Valentina Aguana, a Venezuelan digital rights advocate and systems engineer now working in Spain, told CyberScoop that a widespread blackout “was never a thing for my team working in Venezuela. There were very few areas in which the power went down and it came back on in a few minutes,” which you would expect with a pure cyberattack. “All the areas that were left without power were left without power for a couple of hours,” she added, which experts say is consistent with a kinetic attack.

“I haven’t seen any real proof or even correlating proof that the outage was widespread,” Miller said, adding that he has an extensive network of electric system security contacts throughout South America.

What gets lost in a cyber-only framing

Given how quickly and widely videos, press releases, and other confirmation of physical damage to the Venezuelan substations circulated, it remains unclear why so many outlets gave little attention to the kinetic dimension of the outage.

Whatever the source of the omissions, recent reporting on Pentagon computer warfare doctrine has underscored that cyber operations are increasingly designed to shape battlefield conditions rather than function as stand-alone weapons, an approach that aligns with the expert assessments of the role of kinetic attacks in the Caracas operation.

However, continued accounts of what happened in Caracas that treat the sabotage as primarily “cyber” could skew risk assessments and preparedness—potentially leaving substations, transmission lines, and transformers less protected than they should be against the kind of real-world attacks that visible damage suggests are possible.

“This was a very complex thing, and it wasn’t just one thing; it wasn’t just a cyberattack,” Shockley said. “In my industry, we have regulations around how we’re supposed to protect our critical infrastructure, our substations, our power plants, our control centers. Physical security is a big thing that we do. We do physical security inspections, and we make recommendations.”

The post The Caracas operation suggests cyber was part of the plan – just not the whole operation appeared first on CyberScoop.

Why ‘secure-by-design’ systems are non-negotiable in the AI era

Moody’s recently reported that global investment in data centers will surpass $3 trillion over the next five years, driven by AI capacity growth and hyperscaler demand. As big tech companies, banks, and institutional investors pour capital into these projects, data center developers and their financial sponsors must prioritize cybersecurity.

Moody’s said that data center investments made by the six largest U.S. cloud computing providers — Microsoft, Amazon, Alphabet, Oracle, Meta, and CoreWeave — approached $400 billion last year. The firm anticipates that annual global investment will grow by $200 billion over the next two years.

Real estate firm Jones Lang LaSalle forecasted similar investment flows in a separate report published earlier this year, projecting that “nearly 100 GW of new data centers will be added between 2026 and 2030, doubling global capacity.” JLL said that this infrastructure investment “supercycle,” one of the largest in the modern era, will result in $1.2 trillion in real estate asset value creation and the need for roughly $870 billion of new debt financing.

In concert, these reports reflect a growing reality: Data centers are strategic, interconnected infrastructure supporting our manufacturing, national security, and communication systems. Cyber disruptions, whether through ransomware, supply-chain compromise, or attacks on operational technology (OT), can cascade beyond a single facility, threatening grid stability, cloud services, economic activity, and public safety.

Data centers are now critical hubs of energy demand and digital dependency. Their cybersecurity posture is directly tied to the resilience of the industrial and energy ecosystems that support them. For investors and stakeholders, cybersecurity should be fundamental to asset value and risk management. Strong cybersecurity directly affects uptime guarantees, regulatory exposure, insurance coverage, financing terms, and long-term valuation.

The most significant cybersecurity risks now center on three critical areas: data center-grid convergence, supply-chain vulnerabilities, and secure-by-design considerations. Data center operators and their financial backers must address these interconnected threats to protect both individual facilities and the broader system they support.  

Hardwired for risk

The cybersecurity challenge facing the data center supercycle stems from how these campuses are tightly coupled with both the public power grid and their own industrial control systems. As hyperscale and AI‑optimized facilities proliferate, their constant demand for high‑quality electricity shapes grid planning and reliability. These large campuses function less like traditional real estate and more like critical energy infrastructure nodes.

This shift comes as grid capacity tightens. The North American Electric Reliability Corporation (NERC) has warned that demand from new data centers will outpace energy supply growth in the coming years. A cyber incident that disrupts a major data center or degrades its industrial control systems can propagate into regional grid reliability issues, contract penalties, and broader economic disruption.

At the same time, the OT running these sites — building management systems, cooling controls, battery and generator management — creates dense cyber-physical exposure. Global insurer Marsh notes that events in these systems, whether from human error or cyberattack, can cause physical damage and significant business interruption. The 2021 OVHcloud data center fire in Strasbourg, France, destroyed an entire facility and disrupted services for thousands of customers, showing how failures in fire protection and cooling systems rapidly escalate into catastrophic loss. Those safety functions now run through interconnected, remote-access-enabled OT systems.

Secure‑by‑design architectures for both grid‑side interfaces and on‑site OT are prerequisites for preventing this rapidly expanding energy–data infrastructure from becoming a single, converged point of failure.

Supply-chain integrity first

AI‑optimized campuses depend on massive volumes of GPUs, high‑density servers, network appliances, OT controllers, and edge devices. Many of these components are designed, manufactured, or assembled in jurisdictions at the center of great‑power competition, particularly China. Reports warn that state-aligned actors could introduce backdoors or malicious firmware, or weaponize delivery timelines to create strategic outages.

Secure‑by‑design must start at procurement. Security-conscious procurement requires stringent vendor due diligence, diversification away from single‑country dependencies, hardware and firmware validation before deployment, and alignment with export controls and national‑security guidance on high‑risk equipment. The bill of materials (BoM) for a modern data center must be treated like a living threat surface, with traceability from chip manufacture through installation, including approved vendor lists, tamper‑evident logistics, and mandatory firmware attestation.
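Firmware validation at receiving can be as simple as refusing any image whose digest does not match the vendor's published manifest. Below is a minimal sketch, assuming a hypothetical manifest dict with a `sha256` field; a real deployment would also verify the manifest's cryptographic signature before trusting it:

```python
import hashlib

def firmware_digest(image: bytes) -> str:
    """SHA-256 digest of a firmware image (read fully into memory for this sketch)."""
    return hashlib.sha256(image).hexdigest()

def validate_firmware(image: bytes, manifest: dict) -> bool:
    """Accept the image only if its digest matches the vendor-published manifest entry."""
    expected = manifest.get("sha256", "").lower()
    return bool(expected) and firmware_digest(image) == expected
```

A missing or empty manifest entry fails closed, which is the posture tamper-evident logistics call for: an image with no attestable provenance never reaches a rack.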

Procurement teams need escalation paths for opaque supply chains, unexplained cost changes, or “gray‑market” alternatives, plus playbooks for rapidly substituting vendors when geopolitical shocks or sanctions make a product line unacceptable.

Governance around supply‑chain risk must reach the same level as power, cooling, and uptime guarantees in contracts with hyperscalers and large tenants. Secure‑by‑design campuses will embed requirements for hardware provenance, firmware update hygiene, and ongoing vulnerability disclosure into master service agreements and construction/operations contracts, with clear accountability when a supplier is implicated in espionage or sabotage.

Data center sponsors who cannot prove supply‑chain integrity will face growing pressure from regulators, insurers, and investors who see hardware trust as a prerequisite for AI and cloud infrastructure resilience.

Engineering the secure-by-design campus

Engineering secure-by-design campuses begins with assuming adversaries will target internet‑exposed and OT edge devices. Security architects must design environments that prevent any foothold at the edge from escalating into grid‑scale disruption or safety‑critical failure.

Geopolitically motivated campaigns against energy infrastructure are accelerating. Recent Russia-nexus attacks on the Polish power system and Romania’s national oil pipeline demonstrate that state‑linked and criminal groups see energy and digital infrastructure as leverage points. Last December, actors linked to Russia’s Sandworm APT compromised remote terminal units (RTUs), firewalls, and communications gateways at Polish substations and distributed energy facilities.

This precedent-setting cyberattack—the first to directly target distributed energy resources in a NATO member’s power system—is indicative of the current threat landscape. Sandworm’s campaign underscores how fragile edge devices are and how vital it is to harden the gateways at the OT boundary. The first pillar of secure-by-design campuses is disciplined network segmentation that treats OT as a distinct, high‑consequence domain.

OT networks should be carved into functional and geographic zones—separating building management from generator controls, from battery systems, from grid‑interconnection protection—with tightly controlled conduits between them, enforced by OT‑aware firewalls and protocol‑constrained paths.
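The zone-and-conduit model described above amounts to a default-deny policy table. The sketch below illustrates the idea; the zone names and protocol labels are invented for this example, not drawn from any real deployment:

```python
# Hypothetical zone-and-conduit policy; names and protocols are illustrative.
ZONES = {"bms", "generator_controls", "battery_systems", "grid_protection", "it"}

# Conduits: (source zone, destination zone) -> protocols explicitly allowed.
# Anything not listed here is denied by default.
ALLOWED_CONDUITS = {
    ("bms", "it"): {"https-telemetry"},
    ("generator_controls", "bms"): {"modbus-read"},
    ("battery_systems", "bms"): {"modbus-read"},
}

def flow_permitted(src: str, dst: str, protocol: str) -> bool:
    """The default-deny check an OT-aware firewall policy would enforce."""
    if src not in ZONES or dst not in ZONES:
        return False
    return protocol in ALLOWED_CONDUITS.get((src, dst), set())
```

Note that no conduit points inbound toward a control zone: telemetry leaves the building management system, but nothing listed lets IT initiate traffic back in.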

Hardware‑enforced unidirectional gateways and data diodes offer uniquely strong protection at key boundaries. Data diodes allow telemetry and process data to flow outward from OT to IT and monitoring systems while physically blocking any return path, sharply reducing the chances that a web-based intrusion can reach OT systems.

Data diodes should be placed at key demarcation points—between the data center’s OT and corporate IT, between on‑site generation controls and the broader campus, and at interfaces with utility systems—so operators preserve visibility without exposing those domains to bidirectional network risk.

A second foundational element of secure‑by‑design campuses is a clear, continuously maintained OT asset inventory capturing every PLC, RTU, relay, drive, building controller, gateway, sensor, and engineering workstation, along with its network location, firmware version, vendor, and criticality. Effective segmentation depends on knowing what you have and how it communicates.

Operators cannot isolate critical power and cooling functions, or confidently place diodes and firewalls, without understanding which devices participate in those functions and which paths they rely on. This inventory must fully cover the same class of gateways and field devices abused in the Polish grid attack.

When asset inventories are linked to configuration and vulnerability management, operators can quickly identify exposed OT devices as they approach end of life or when new flaws are disclosed. A comprehensive OT asset inventory also enables security teams to locate high‑risk remote access paths and prioritize segments for additional hardening.
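To make the inventory-to-triage link concrete, here is a minimal sketch; the record fields and the `needs_attention` helper are hypothetical, standing in for what a real asset-management integration would provide:

```python
from dataclasses import dataclass

@dataclass
class OTAsset:
    """One inventory record; fields mirror those the inventory should capture."""
    asset_id: str
    kind: str            # e.g. "PLC", "RTU", "gateway"
    zone: str            # network location / segment
    firmware: str
    vendor: str
    criticality: int     # 1 (low) .. 5 (safety-critical)
    end_of_life: bool

def needs_attention(assets, vulnerable_firmware):
    """Flag end-of-life devices and any asset running firmware with disclosed flaws."""
    return [a for a in assets
            if a.end_of_life or a.firmware in vulnerable_firmware]
```

When a new advisory lands, the triage query becomes a one-liner over the inventory instead of a week of spreadsheet archaeology.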

Secure‑by‑design engineering mandates the mitigation of accelerating cyber risks posed by remote access gateways and the mass automation of industrial functions. Every orchestration platform, management API, and remote session is a potential high‑impact attack vector. This threat model requires consolidating OT access through hardened jump hosts with strong authentication and just‑in‑time privileges; sharply limiting what automation tools can change on OT networks; enforcing strict segregation between automation platforms and safety‑critical functions; continuously monitoring automated and remote actions; and hardening configuration‑management workflows.
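Just-in-time privileges reduce to grants that expire on their own. A toy sketch of the check a jump host might perform follows; the grant store and function names are invented for illustration:

```python
import time

# Hypothetical just-in-time grant store: (user, asset) -> expiry timestamp.
_grants: dict = {}

def grant_access(user: str, asset_id: str, ttl_seconds: int = 3600) -> None:
    """Issue a time-boxed privilege; nothing is standing or permanent."""
    _grants[(user, asset_id)] = time.time() + ttl_seconds

def access_allowed(user: str, asset_id: str) -> bool:
    """Check at the jump host whether a live grant exists; expired grants fail closed."""
    expiry = _grants.get((user, asset_id))
    return expiry is not None and time.time() < expiry
```

The useful property is the default: with no explicit, recent grant, a session to an OT asset simply cannot be opened, which shrinks the window an attacker with stolen credentials can exploit.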

Lastly, secure‑by‑design architecture demands OT‑aware visibility that can actually see and understand what is happening on control networks. This means instrumenting OT segments with monitoring tuned to industrial protocols and behaviors, correlating alerts with asset context, and wiring those insights into playbooks that can quickly isolate, triage, and physically replace compromised edge devices before an intrusion escalates.

Resilience is the only path to funding

The threat modeling, procurement, and design best practices detailed here directly constrain the blast radius of geopolitically charged campaigns that threaten data center reliability and safety. Data center developers, operators, and investors need this systems‑level blueprint for building AI‑era campuses that remain resilient as the energy and threat landscape becomes more contested.

Banks and institutional sponsors are deploying trillions of dollars in construction, fit‑out, and power capacity on the assumption that AI demand will translate into durable, high‑availability cash flows. Underinvesting in cybersecurity directly threatens covenants, refinancing options, insurance coverage, and asset valuation. Outages, safety incidents, or regulatory findings will capsize the investment thesis.

The campuses that will secure the best financing over the next decade will be those that can point to their secure‑by‑design architectures, campus-wide OT governance, and defensible supply‑chain practices. As the infrastructure supercycle intertwines with this macro OT threat environment, power usage effectiveness (PUE) metrics and fast build schedules will matter less than proven security safeguards.

The stakes are escalating rapidly. Developers and utilities are pairing energy‑hungry data centers with small modular reactors (SMRs) and other non‑traditional power generation. These campuses will converge with the security and risk profile of nuclear and high‑hazard industrial facilities, bringing heightened regulations and adversary interest.

SMR data centers fundamentally change the threat model. When nuclear systems sit alongside AI clusters, secure-by-design takes on a new dimension. Operators, investors, regulators, and security professionals must prepare for this convergence. The integration of compute and power generation creates a dynamic that demands the security rigor of both digital infrastructure and nuclear facilities. The window to build these protections into design is closing.

Jeffrey Knight is Director of Global Critical Infrastructure Services at InfraShield. Jeff brings more than 35 years of experience in nuclear engineering and cybersecurity across the Department of Defense (DoD), SWIFT, the NRC, and the Department of Energy (DOE) National Laboratory complex.

The post Why ‘secure-by-design’ systems are non-negotiable in the AI era appeared first on CyberScoop.

After major Poland energy grid cyberattack, CISA issues warning to U.S. audience

A recent attempt at a destructive cyberattack on Poland’s power grid has prompted the Cybersecurity and Infrastructure Security Agency to publish a warning for U.S. critical infrastructure owners and operators.

Tuesday’s alert follows a Jan. 30 report from Poland’s Computer Emergency Response Team that concluded the December attack overlapped significantly with infrastructure used by a Russian government-linked hacking group, and that it targeted 30 wind and photovoltaic farms, among other facilities.

CISA said its warning was meant to “amplify” that Polish report. In particular, CISA said the attack highlighted the threats to operational technology and industrial control systems, most commonly used in the energy and manufacturing sectors.

And CISA’s alert continues a recent agency focus on securing edge devices like routers or firewalls, after a binding operational directive last week to federal agencies to strip unsupported products from their systems.

“The malicious cyber activity highlights the need for critical infrastructure entities with vulnerable edge devices to act now to strengthen their cybersecurity posture against cyber threat activities targeting OT and ICS,” the alert reads.

“A malicious cyber actor(s) gained initial access in this incident through vulnerable internet-facing edge devices, subsequently deploying wiper malware and causing damage to remote terminal units (RTUs),” it states. “The malicious cyber activity caused loss of view and control between facilities and distribution system operators, destroyed data on human machine interfaces (HMIs), and corrupted system firmware on OT devices. While the affected renewable energy systems continued production, the system operator could not control or monitor them by their intended design.”

CISA urged owners and operators to review the Polish report, as well as security guidance from other U.S. agencies.

The attack directed at Poland — which its CERT compared to “deliberate arson” and said had a “purely destructive objective” at a time when the nation was struggling with cold temperatures and snowstorms — has had ripples in other parts of the world, too.

“Operators of UK critical national infrastructure (CNI) must not only take note but, as we have said before, act now,” Jonathon Ellison, director for national resilience at the United Kingdom’s National Cyber Security Centre, said in a LinkedIn post Monday.

Dragos, a cybersecurity firm that specializes in industrial control systems, said the attack represented a new frontier.

“This is the first major cyber attack targeting distributed energy resources (DERs), the smaller wind, solar, and CHP [combined heat and power] facilities being added to grids worldwide,” the company wrote in a report last month. “Unlike the centralized systems impacted in electric grid attacks in 2015 and 2016 in Ukraine, these distributed systems are more numerous, require extensive remote connectivity, and often receive less cybersecurity investment. This attack demonstrates they are now a valid target for sophisticated adversaries.”

Poland’s analysis concluded that the infrastructure used in the attack overlapped with that used by the group known alternately as Static Tundra, Berserk Bear, Ghost Blizzard and Dragonfly.


What’s next for DHS’s forthcoming replacement critical infrastructure protection panel, AI information sharing

A revised government-industry council devoted to critical infrastructure protection could be set up to have broader and more specific discussions on things like cybersecurity and threats to hardware and software that monitor and control industrial processes, known as operational technology (OT).

A top official at the Cybersecurity and Infrastructure Security Agency (CISA), Nick Andersen, said Tuesday he couldn’t share a timeline yet for the replacement of the Critical Infrastructure Partnership Advisory Council, which the Homeland Security Department disbanded to private sector dismay last year.

But he said the replacement, details of which CyberScoop was first to report, was trying to solve a number of problems with the original council (CIPAC).

“Old CIPAC never made any explicit focus on cybersecurity, that just wasn’t part of what was chartered back in the day when it was originally launched,” Andersen, executive assistant director for cybersecurity, told reporters at an event hosted by the Information Technology Industry Council (ITI).

“Additionally, it didn’t give us the opportunities for having focus groups to have conversations [about] like undersea cables, might be a good example. OT systems might be a good example,” he said. “OT had to nest itself under the IT Sector Coordinating Council in the past. There’s real opportunities for us to improve, opportunities for elements of the community that didn’t necessarily have opportunities to engage in a substantive way in the past, to give them a voice in the process.”

Further considerations, sources have told CyberScoop, include things like liability protections and how transparent the panel’s proceedings should be.

It was one of a number of topics discussed at the ITI event on the intersection of government, industry and cybersecurity.

Andersen told reporters he couldn’t provide a timeline for development of an artificial intelligence information sharing center (AI-ISAC), first proposed by the Trump administration as part of its AI Action Plan.

But he spoke at the event about pitfalls he hoped an AI-ISAC would avoid. Key, he said, would be to avoid having a government-established entity that ran parallel to, rather than in coordination with, industry efforts.

The administration wants to “take the opportunity to get that relationship right,” Andersen said.


ServiceNow agrees to buy cyber firm Armis for $7.75B

ServiceNow has agreed to buy cybersecurity firm Armis for $7.75 billion in cash, a deal that would push the enterprise software company deeper into a fast-growing corner of security focused on tracking and reducing “exposure” across sprawling networks of connected devices.

The companies said Tuesday that combining ServiceNow’s workflow and risk products with Armis’ asset discovery and cyber-physical security tools would create an end-to-end system intended to detect vulnerable devices, prioritize risks and route remediation through automated operational processes. That vision reflects a broader shift in cybersecurity: visibility and response are increasingly being treated as continuous, integrated business functions rather than standalone technical tools. 

“ServiceNow is building the security platform of tomorrow,” said Amit Zavery, president, chief operating officer, and chief product officer at ServiceNow. “In the agentic AI era, intelligent trust and governance that span any cloud, any asset, any AI system, and any device are non-negotiable if companies want to scale AI for the long-term. Together with Armis, we will deliver an industry-defining strategic cybersecurity shield for real-time, end-to-end proactive protection across all technology estates. Modern cyber risk doesn’t stay neatly confined to a single silo, and with security built into the ServiceNow AI Platform, neither will we.”

Armis specializes in mapping and classifying devices across information technology systems and operational technology, including industrial controls and medical devices. Those environments, often essential to manufacturing, hospitals and critical infrastructure, have become prominent concerns as more equipment is connected to networks but remains difficult to inventory with traditional security software. Armis says it performs “agentless” discovery, meaning it can identify devices without installing software on each endpoint, a key consideration for older or regulated systems.

“AI is transforming the threat landscape faster than most organizations can adapt. Every connected asset has become a potential point of vulnerability,” said Yevgeny Dibrov, co-founder and CEO of Armis. “We built Armis to protect the most critical environments and give both public and private sector organizations the real-time intelligence they need to stay ahead – so they can see their entire environment clearly, understand risk in context, and take action before an incident occurs. Together with ServiceNow, customers will have a powerful new way to reduce their exposure and strengthen security at scale.”

ServiceNow, best known for IT service management and enterprise workflow products, has been building a security and risk business that it said crossed $1 billion in annual contract value in the third quarter of 2025. The company described the Armis deal as a way to “more than triple” its market opportunity in security and risk. While such projections are inherently forward-looking, the figure underscores how cybersecurity has become a major battleground for large platform vendors seeking to consolidate multiple functions into a single suite.

The announcement also highlights the industry’s preoccupation with artificial intelligence, both as a tool for defenders and a driver of new risks. ServiceNow framed the acquisition around “AI-native” and “agentic” capabilities, language that has become common as vendors race to incorporate autonomous features into security operations. The premise is that, as networks expand and threats move faster, human analysts cannot manually triage every alert or vulnerability, making automation and prioritization central selling points.

In the second half of 2025 alone: 

  • Palo Alto Networks announced it will acquire Chronosphere, a cloud observability platform, for $3.35 billion in cash and equity.
  • Cloud security company Zscaler announced it has acquired SplxAI, an artificial intelligence security platform.
  • Veeam acquired Securiti AI for $1.7 billion.
  • Check Point acquired AI security firm Lakera.
  • Mitsubishi Electric acquired OT and IoT cybersecurity specialist Nozomi Networks for $1 billion.

The companies cited a forecast that worldwide end-user spending on information security will rise 12.5% in 2026 to $240 billion, attributing growth to evolving threats and the expanding use of AI and generative AI. Whether those drivers translate into better security outcomes remains debated, but the spending trajectory signals continued pressure on organizations to manage risk across more endpoints, more software and more interconnected supply chains.

If completed, the deal would also strengthen ServiceNow’s position in so-called cyber-physical security, an area that blurs the line between digital compromise and real-world disruption. The integration described by the companies links Armis’ real-time device intelligence to ServiceNow’s configuration management database, which ties technical assets to business services and responsible teams. That connection, they argue, would make remediation more actionable by directing fixes to the people who can implement them.

Armis, founded in 2015, reported more than $340 million in annual recurring revenue and said it employs about 950 people. The company counts Global 2000 customers, including more than 35% of the Fortune 100, and said it serves government agencies and public-sector organizations.


NIST, MITRE announce $20 million research effort on AI cybersecurity

The National Institute of Standards and Technology announced that it will partner with The MITRE Corporation on a $20 million project to stand up two new research centers focused on artificial intelligence, including how the technology may impact cybersecurity for U.S. critical infrastructure.

On Monday, the agency said one center will focus on advanced manufacturing while the second — the AI Economic Security Center to Secure U.S. Critical Infrastructure from Cyberthreats — will focus more directly on how industries that provide water, electricity, internet and other essential services can protect and maintain services in the face of AI-enabled threats. According to NIST, the centers will “drive the development and adoption” of AI-driven tools, including agentic AI solutions.

“The centers will develop the technology evaluations and advancements that are necessary to effectively protect U.S. dominance in AI innovation, address threats from adversaries’ use of AI, and reduce risks from reliance on insecure AI,” spokesperson Jennifer Huergo wrote in an agency release.

The two centers are part of a larger federal government investment to create federally funded AI research centers at NIST, some of which predated the Trump administration.

Earlier this year the White House overhauled the name and mission of the AI Safety Institute, rebranding it the Center for AI Standards and Innovation, a move that mirrored the administration’s broader shift away from AI safety issues in favor of American competition with China. Next year NIST plans to make another award for the creation of a new AI for Resilient Manufacturing Institute, a five-year, $70 million federal investment to combine expertise in AI, manufacturing and supply chain networks and promote resilience in the manufacturing sector.

AI boosters in the government, industry and Congress are betting that more federal muscle behind these applications will lead to innovation for U.S. AI projects. Huergo wrote that NIST “expects the AI centers to enable breakthroughs in applied science and advanced technology.”

Acting NIST Director Craig Burkhardt said the centers will collectively “focus on enhancing the ability of U.S. companies to make high-value products more efficiently, meet market demands domestically and internationally, and catalyze discovery and commercialization of new technologies and devices.”

CyberScoop reached out to NIST for additional details on the centers and their work.

In response to further questions, Brian Abe, managing director of the national cybersecurity division at MITRE, told CyberScoop that the nonprofit corporation is bringing “all of MITRE to bear” to carry out the mission of the centers. He said the goal is to make an exponential impact on U.S. manufacturing and critical infrastructure cybersecurity within three years.

“We will also leverage the full range of MITRE’s lab capabilities such as our Federal AI Sandbox,” said Abe. “More importantly, we will not be doing this alone. These centers will be a true collaboration between NIST and MITRE as well as our industry partners.”

Nearly every source contacted by CyberScoop for reaction said they supported broader collaboration from government and industry on AI security and critical infrastructure.

Many industrial sectors have been pummeled by ransomware, foreign hacking and other digital threats over the past decade. The speed and scale advantages provided by large language models could put more stress on IT and security teams, many of whom already deal with chronically underfunded budgets.

Randy Dougherty, CIO of Trellix, told CyberScoop that by focusing on cybersecurity for critical infrastructure, “NIST is tackling the ‘high-stakes’ end of the AI spectrum where accuracy and reliability are non-negotiable.”

Some sources said it was important for any effort to invite stakeholders from the industries they’re trying to protect and ensure their input is included.

Gary Barlet, public sector chief technology officer at cybersecurity company Illumio, flagged two sectors in particular – water and power – that are essential to most modern critical services, saying that securing their IT, OT and supply chains should be among the center’s first priorities.

But in order to help, Barlet said that NIST and the government must ensure those sectors have a meaningful seat at the table and can translate any research insights into workable solutions. Getting those parties on board will be crucial because, he said, those are the people “who will be answering to Congress if something goes wrong, not the AI developers.”

“Too often, these centers are built by technologists for technologists, while the people who actually run our power grids, water systems, and other critical infrastructure are left out of the conversation,” Barlet said.


New cybersecurity guidance paves the way for AI in critical infrastructure 

Global cybersecurity agencies have issued the first unified guidance on applying artificial intelligence (AI) within critical infrastructure, signaling a major shift from theoretical debate to practical guardrails for safety and reliability.

The release of joint guidance on Principles for the Secure Integration of Artificial Intelligence in Operational Technology marks a meaningful milestone for critical infrastructure security because major global cybersecurity agencies, including CISA, the FBI, the NSA, the Australian Signals Directorate’s Australian Cyber Security Centre, and other partners, have aligned on a shared direction. As AI adoption accelerates across operational environments, this document moves us from theory to practice. It acknowledges AI’s promise while making clear that it also “introduces significant risks—such as operational technology (OT) process models drifting over time or safety-process bypasses” that operators must actively manage to ensure reliability.

The guidance draws a firm distinction between safety and security, emphasizing that large language models should not be used to make safety decisions for OT environments. It urges operators to adopt push-based architectures with strong boundaries, maintain human-in-the-loop oversight, and demand transparency from vendors embedding AI into industrial systems. It frames AI as an adviser rather than a controller, reinforcing that resilience depends on skilled operators, clear validation procedures, and visibility into how AI models interact with the physical world.

A central contribution of this guidance is its clear distinction between safety and security in the AI era. Protecting the integrity and availability of systems is not the same as preventing physical harm, and AI complicates this relationship in ways many CISOs are now expected to navigate. The guidance recognizes that AI’s non-deterministic nature can lead to unpredictable behaviors or hallucinations. This is why it draws an explicit line: “AI such as LLMs almost certainly should not be used to make safety decisions for OT environments.” 

The message is not a rejection of innovation. It is a pragmatic call to preserve the safety foundations that operational technology depends on. For example, in a water treatment facility, a generative model might misinterpret sensor anomalies and make a recommendation that inadvertently adjusts chemical dosing. Even if security controls are intact, the safety implications can be immediate and physical.

The architecture recommendations extend that safety-first mindset. The guidance maps where AI belongs within the OT hierarchy with clarity. Predictive machine learning can strengthen operations at levels 0 through 3, such as forecasting pump failures based on vibration patterns or identifying anomalies in turbine exhaust temperatures. Meanwhile, large language models are better suited for business functions at levels 4 and 5, where they assist with documentation, work order generation, or regulatory reporting.

The guidance also cautions against introducing new attack vectors. To reduce inbound risk, agencies recommend “push-based or brokered architectures that move required features or summaries out of OT without granting persistent inbound access”. This pattern prevents scenarios where an adversary could exploit a cloud-hosted AI system to pivot directly into OT networks. In other words, AI should act as an advisor rather than a controller, supporting operations without becoming an unseen entry point for adversaries.
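The push-based pattern can be illustrated with an in-memory broker standing in for a DMZ relay: the OT side publishes derived summaries outward, and the IT side only ever reads from the broker, never opening a connection into OT. All names in this sketch are hypothetical:

```python
import queue

# Hypothetical broker sitting between OT and IT: OT pushes outbound,
# IT pulls from the broker, and no connection is initiated toward OT.
broker: "queue.Queue" = queue.Queue()

def ot_push_summary(process_values: dict) -> None:
    """Runs inside OT: publishes only derived summaries, never raw control access."""
    temps = process_values["temps_c"]
    summary = {
        "avg_temp_c": sum(temps) / len(temps),
        "pump_alarms": process_values["alarm_count"],
    }
    broker.put(summary)

def it_consume() -> dict:
    """Runs on the IT side: reads from the broker only."""
    return broker.get_nowait()
```

Because only summaries cross the boundary and the transport is outbound-only from OT's perspective, a compromised cloud AI service has nothing to pivot through.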

Importantly, the document looks beyond systems to the humans who operate them. It warns that “heavy reliance on AI may cause OT personnel to lose manual skills needed for managing systems during AI failures or system outages.” For critical infrastructure, this is not theoretical. Many power plant and water utility operators are already experiencing a loss of skilled workers as employees retire. The guidance encourages organizations to train operators not only on how to use AI, but also on how to challenge it. For example, personnel should be able to validate AI outputs using alternative sensors and observations to confirm that digital recommendations align with physical reality. A compressor temperature anomaly flagged by an ML model, for example, should still be correlated with on-floor readings by humans before operators take corrective action.

The guidance also recommends that critical infrastructure owners develop strong procurement strategies that take AI into account. Organizations are encouraged to “demand transparency and security considerations from OT vendors regarding how AI technologies are embedded into their products.” This includes requiring SBOMs (or AIBOMs) that specify where models are sourced and hosted, and ensuring that vendors disclose whether they are training those models on an operator’s sensitive data.

Many CISOs are finding that AI-enabled features are being added quietly into third-party software and SaaS without clear disclosure. This guidance supports a shift toward secure by demand, giving operators the clarity to make informed choices before AI features are embedded deep into their environments.
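As a sketch of that secure-by-demand posture, a procurement team could scan a vendor-supplied AI bill of materials for components whose model provenance or training-data policy is undisclosed. The field names below are illustrative only, not a standard AIBOM schema:

```python
def undisclosed_ai_components(aibom: dict) -> list:
    """Return component names whose model source or training-data policy is missing."""
    flagged = []
    for comp in aibom.get("components", []):
        model = comp.get("ai_model")
        if model is None:
            continue  # this component embeds no AI
        if not model.get("source") or model.get("trains_on_customer_data") is None:
            flagged.append(comp["name"])
    return flagged
```

Anything the scan flags becomes a contract question before deployment, rather than a surprise discovered after an AI feature is already wired into the environment.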

Finally, the document reaffirms that accountability sits with people. It reminds us that “ultimately, humans are responsible for functional safety.” The recommended “human in the loop” model ensures that AI informs decisions but does not replace human judgment. This approach mitigates challenges such as “model drift” and avoids the risk of blindly executing “black box” outputs in environments where the stakes include real human safety. For example, as refinery equipment ages, model drift can cause a machine learning model to predict failure thresholds that are too low, making it critical for operators to regularly validate the model over the asset’s lifetime. 
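A simple operator-side drift check of the kind described might compare recent model predictions against trusted sensor readings and escalate to a human when the mean relative error exceeds a tolerance. The threshold and helper below are illustrative:

```python
def drift_exceeded(predicted, observed, tolerance_fraction=0.1):
    """Flag for human review when model predictions have drifted past tolerance
    relative to trusted sensor readings (zero readings are skipped)."""
    errors = [abs(p - o) / abs(o) for p, o in zip(predicted, observed) if o != 0]
    if not errors:
        return False  # nothing comparable; do not alarm on empty data
    mean_error = sum(errors) / len(errors)
    return mean_error > tolerance_fraction
```

Run periodically over an asset's lifetime, a check like this turns "validate the model" from a vague mandate into a scheduled, auditable task.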

As we move forward, the path is both challenging and hopeful. This shared global guidance gives operators a clearer map, and it reinforces that resilience grows when humans and machines work in partnership. A practical next step is to review where AI already touches your OT landscape, then establish or refresh validation procedures that keep operators engaged and confident. You can also begin early conversations with vendors about transparency requirements, which helps set expectations before new capabilities are deployed. In a landscape shaped by rapid innovation, these proactive actions will help ensure that safety and trust remain at the center of progress.

Diana Kelley is the chief information security officer at Noma Security. She has also held senior leadership roles at major technology and cybersecurity companies, including Cybersecurity Field CTO at Microsoft, Global Executive Security Advisor at IBM Security, and GM at Symantec.

The post New cybersecurity guidance paves the way for AI in critical infrastructure appeared first on CyberScoop.

‘Stranger Things’ emerge when OT security is stuck in the past

The final season of “Stranger Things” is upon us, and 1980s nostalgia is at an all-time high. The clunky control panels at Hawkins Lab help set the stage for the show. The unfortunate reality is that similar legacy systems still exist in operational technology (OT) environments today. Just as Hawkins Lab unleashed a menagerie of monsters from the “Upside Down,” a variety of threats have burst forth from vulnerable devices.

Nation-state threats, such as Volt Typhoon, have established persistent access across critical infrastructure, including telecommunications providers. Most of these threats exploit common vulnerabilities and exposures (CVEs) in networking devices; no zero-day exploits are required.

Nostalgia for “the good old days” ignores how much progress has been made since then. From the Purdue Enterprise Reference Architecture (PERA) model of the 1990s to more timely guidance from the Cybersecurity and Infrastructure Security Agency (CISA), organizations have a script they can follow for critical infrastructure protection. Hopefully, this story has a happy ending.

All it takes is one open port

The Department of Defense (DoD) has increasingly been focused on bringing OT security up to par with IT security, noting the challenges legacy systems create with vulnerabilities, data integration and standards.

The challenge in securing critical infrastructure is multifaceted. Critical infrastructure environments tend to be complex and dispersed, including IT and OT networks across multiple physical locations. Digital transformation initiatives, such as industrial IoT and cloud computing, are often at odds with legacy systems, which were never intended to be connected to the internet or able to support modern cybersecurity controls.

One of the biggest reasons organizations struggle to secure legacy systems is that OT environments tend to prioritize productivity. Even when patches are available for industrial systems, the patch management process is meticulous and methodical to ensure production is not interrupted.

However, many industrial control systems (ICS), SCADA systems and programmable logic controllers (PLCs) have been in service for decades. These systems were expensive investments and cannot be easily replaced. Patches for many of them are no longer available. For example, even as IT environments focus on Windows 10 migration today, there are still OT environments running Windows XP, which has not received security patches in more than a decade.

Many legacy systems were never intended to be connected to the internet. However, digital transformation initiatives and IT/OT convergence have forced connectivity into these devices, leaving them exposed to attack. Consequently, legacy protocols like Modbus and DNP3, which lack encryption or authentication, become open avenues for lateral movement.
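To see why those protocols are such easy avenues for attack, consider what a Modbus TCP “Read Holding Registers” request actually contains. The minimal sketch below (the transaction, unit, and register values are hypothetical) builds the full frame; note what is missing: there is no credential, session token, or signature anywhere, so anyone who can reach TCP port 502 on a device can issue it:

```python
import struct

def modbus_read_request(transaction_id: int, unit_id: int,
                        start_addr: int, quantity: int) -> bytes:
    """Build a Modbus TCP 'Read Holding Registers' (function 0x03) request.

    Every field is plaintext protocol data: the frame carries no
    authentication or encryption whatsoever.
    """
    function_code = 0x03
    # PDU: function code, starting register, number of registers
    pdu = struct.pack(">BHH", function_code, start_addr, quantity)
    # MBAP header: transaction id, protocol id (always 0), length, unit id
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu

frame = modbus_read_request(transaction_id=1, unit_id=1,
                            start_addr=0, quantity=10)
print(frame.hex())
```

Twelve bytes, fully forgeable by any host on the network segment, which is why segmentation around these devices matters so much.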

The empire strikes back

There are more advanced persistent threats (APTs) than there are sequels to Hollywood blockbusters. Just like most sequels, many of these threats return bigger and badder than their predecessors. For example, two of the most notorious APTs of the past few years are Volt Typhoon and Salt Typhoon.

Both Volt Typhoon and Salt Typhoon exploit CVEs in networking appliances to gain initial access. Once these threats establish initial access, they leverage living off the land (LOTL) techniques, such as using RDP and VPN access, to evade detection and modify access control lists to establish persistence. 

In the case of Volt Typhoon, CISA advises organizations to prioritize patching critical vulnerabilities known to be exploited by the group and to plan for “end-of-life” technology, the epitome of legacy systems. In the case of Salt Typhoon, CISA advises organizations to continuously monitor for indicators of compromise (IOCs), such as suspicious configuration changes.
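Monitoring for suspicious configuration changes can start very simply. The sketch below is a hypothetical approach, not CISA-specified tooling: baseline a hash of each device's configuration, then flag any drift for human review. An unexplained change to an access control list is exactly the kind of IOC described above:

```python
import hashlib

def fingerprint(config_text: str) -> str:
    """Hash a device configuration so changes are cheap to detect."""
    return hashlib.sha256(config_text.encode()).hexdigest()

def detect_drift(baseline: dict[str, str], current: dict[str, str]) -> list[str]:
    """Return names of devices whose configs no longer match the baseline.

    A flagged device is not proof of compromise, but it is a lead worth
    investigating, since these actors persist by editing ACLs and routes.
    """
    return [name for name, cfg in current.items()
            if fingerprint(cfg) != baseline.get(name)]

baseline = {"edge-router-1": fingerprint("acl 10 permit 10.0.0.0/8")}
current  = {"edge-router-1": "acl 10 permit any"}  # ACL quietly loosened
flagged = detect_drift(baseline, current)
print(flagged)
```

Real deployments would pull configs over an authenticated channel on a schedule, but the detection logic is this simple at its core.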

These threats underscore the importance of having visibility into both the state of devices, such as their vulnerabilities, as well as network traffic, such as behavioral anomalies. Furthermore, organizations should be monitoring not just for IOCs, but for early warning signs, which are indicators of attack (IOAs).

Back to the future

Pop culture references to time travel tend to create a bit of a paradox, but organizations can review models and frameworks from the past and present to better understand how to secure legacy technology in OT environments.

In the 1990s, PERA, or the “Purdue Model,” was developed to explain how data flows across industrial systems. Just as threats evolve, so do these models. IEC 62443, a series of international standards for securing industrial automation and control systems, builds upon the Purdue Model, providing a variety of best practices for protecting IT and OT networks in critical infrastructure environments.

Two of the biggest takeaways from the Purdue Model and IEC 62443 are a rigorous patch management process that validates the reliability of updates to critical systems, and network segmentation and isolation for critical systems that cannot otherwise be patched or protected.

More recently, in 2025, CISA published “Foundations for OT Cybersecurity: Asset Inventory Guidance for Owners and Operators.” According to CISA, threat actors exploit vulnerabilities, misconfigured protocols, insecure remote access points, weak authentication mechanisms and insufficient network segmentation to compromise critical infrastructure.

CISA advises organizations to develop asset inventories and taxonomies for classifying them. In other words, organizations need visibility into, and context about, the state of these devices.
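An inventory and taxonomy can begin as a structured record per asset plus a simple triage rule that directs scarce remediation effort. The field names and categories below are illustrative assumptions, not drawn from CISA's guidance:

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    kind: str          # e.g. "PLC", "HMI", "historian"
    purdue_level: int  # where it sits in the Purdue Model (0-5)
    internet_exposed: bool
    patchable: bool

def triage(asset: Asset) -> str:
    """Classify an asset by what can actually be done about it."""
    if asset.internet_exposed and not asset.patchable:
        return "isolate"      # can't be fixed in place; segment it
    if asset.internet_exposed:
        return "patch-first"
    return "monitor"

plc = Asset("line-3-plc", "PLC", purdue_level=1,
            internet_exposed=True, patchable=False)
print(triage(plc))
```

Even this toy taxonomy makes the earlier point concrete: an exposed, unpatchable legacy device does not need replacement to be defensible; it needs isolation.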

Hindsight is 20/20

The problem with rose-tinted glasses is that you don’t notice red flags. Organizations should not let nostalgia for the past blind them to the reality they face today. 

It is unrealistic to expect organizations to replace monolithic legacy systems that are central to their operations, but they do need to understand them.

The post ‘Stranger Things’ emerge when OT security is stuck in the past appeared first on CyberScoop.
