
Why ‘secure-by-design’ systems are non-negotiable in the AI era

By: Greg Otto
17 February 2026 at 06:00

Moody’s recently reported that global investment in data centers will surpass $3 trillion over the next five years, driven by AI capacity growth and hyperscaler demand. As big tech companies, banks, and institutional investors pour capital into these projects, data center developers and their financial sponsors must prioritize cybersecurity.

Moody’s said that data center investments made by the six largest U.S. cloud computing providers — Microsoft, Amazon, Alphabet, Oracle, Meta, and CoreWeave — approached $400 billion last year. The firm anticipates that annual global investment will grow by $200 billion over the next two years.

Real estate firm Jones Lang LaSalle forecasted similar investment flows in a separate report published earlier this year, projecting that “nearly 100 GW of new data centers will be added between 2026 and 2030, doubling global capacity.” JLL said that this infrastructure investment “supercycle,” one of the largest in the modern era, will result in $1.2 trillion in real estate asset value creation and the need for roughly $870 billion of new debt financing.

In concert, these reports reflect a growing reality: Data centers are strategic, interconnected infrastructure supporting our manufacturing, national security, and communication systems. Cyber disruptions, whether through ransomware, supply-chain compromise, or operational technology (OT) compromises, can cascade beyond a single facility, threatening grid stability, cloud services, economic activity, and public safety.

Data centers are now critical hubs of energy demand and digital dependency. Their cybersecurity posture is directly tied to the resilience of the industrial and energy ecosystems that support them. For investors and stakeholders, cybersecurity should be fundamental to asset value and risk management. Strong cybersecurity directly affects uptime guarantees, regulatory exposure, insurance coverage, financing terms, and long-term valuation.

The most significant cybersecurity risks now center on three critical areas: data center-grid convergence, supply-chain vulnerabilities, and secure-by-design considerations. Data center operators and their financial backers must address these interconnected threats to protect both individual facilities and the broader system they support.  

Hardwired for risk

The cybersecurity challenge facing the data center supercycle stems from how these campuses are tightly coupled with both the public power grid and their own industrial control systems. As hyperscale and AI‑optimized facilities proliferate, their constant demand for high‑quality electricity shapes grid planning and reliability. These large campuses function less like traditional real estate and more like critical energy infrastructure nodes.

This shift comes as grid capacity tightens. The North American Electric Reliability Corporation (NERC) has warned that demand from new data centers will outpace energy supply growth in the coming years. A cyber incident that disrupts a major data center or degrades its industrial control systems can propagate into regional grid reliability issues, contract penalties, and broader economic disruption.

At the same time, the OT running these sites — building management systems, cooling controls, battery and generator management — creates dense cyber‑physical exposure. Global insurer Marsh notes that events in these systems, whether from human error or cyberattack, can cause physical damage and significant business interruption. The 2021 OVHcloud data center fire in Strasbourg, France, destroyed an entire facility and disrupted services for thousands of customers, showing how failures in fire protection and cooling systems can rapidly escalate into catastrophic loss. Those safety functions now run through interconnected, remote-access-enabled OT systems.

Secure‑by‑design architectures for both grid‑side interfaces and on‑site OT are prerequisites for preventing this rapidly expanding energy–data infrastructure from becoming a single, converged point of failure.

Supply-chain integrity first

AI‑optimized campuses depend on massive volumes of GPUs, high‑density servers, network appliances, OT controllers, and edge devices. Many of these components are designed, manufactured, or assembled in jurisdictions at the center of great‑power competition, particularly China. Reports warn that state-aligned actors could introduce backdoors, malicious firmware, or weaponize delivery timelines to create strategic outages.

Secure‑by‑design must start at procurement. Security-conscious procurement requires stringent vendor due diligence, diversification away from single‑country dependencies, hardware and firmware validation before deployment, and alignment with export controls and national‑security guidance on high‑risk equipment. The bill of materials (BoM) for a modern data center must be treated like a living threat surface, with traceability from chip manufacture through installation, including approved vendor lists, tamper‑evident logistics, and mandatory firmware attestation.
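The firmware attestation mentioned above reduces, in its simplest form, to comparing each image's cryptographic digest against a trusted manifest recorded at manufacture. A minimal sketch, with hypothetical file names and contents (real deployments would verify vendor signatures and secure-boot chains, not bare hashes):

```python
import hashlib

# Hypothetical manifest of approved firmware images: file name -> SHA-256 digest,
# captured at manufacture and shipped out-of-band from the images themselves.
APPROVED_FIRMWARE = {
    "bms-controller-v2.1.bin": hashlib.sha256(b"vendor-signed build 2.1").hexdigest(),
}

def attest_firmware(name: str, image: bytes) -> bool:
    """Accept an image only if its digest matches the approved manifest."""
    expected = APPROVED_FIRMWARE.get(name)
    return expected is not None and hashlib.sha256(image).hexdigest() == expected

# The recorded build passes; a tampered image or an unknown file does not.
assert attest_firmware("bms-controller-v2.1.bin", b"vendor-signed build 2.1")
assert not attest_firmware("bms-controller-v2.1.bin", b"tampered image")
assert not attest_firmware("unknown.bin", b"anything")
```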

Procurement teams need escalation paths for opaque supply chains, unexplained cost changes, or “gray‑market” alternatives, plus playbooks for rapidly substituting vendors when geopolitical shocks or sanctions make a product line unacceptable.

Governance around supply‑chain risk must reach the same level as power, cooling, and uptime guarantees in contracts with hyperscalers and large tenants. Secure‑by‑design campuses will embed requirements for hardware provenance, firmware update hygiene, and ongoing vulnerability disclosure into master service agreements and construction/operations contracts, with clear accountability when a supplier is implicated in espionage or sabotage.

Data center sponsors who cannot prove supply‑chain integrity will face growing pressure from regulators, insurers, and investors who see hardware trust as a prerequisite for AI and cloud infrastructure resilience.

Engineering the secure-by-design campus

Engineering secure-by-design campuses begins with assuming adversaries will target internet‑exposed and OT edge devices. Security architects must design environments that prevent any foothold at the edge from escalating into grid‑scale disruption or safety‑critical failure.

Geopolitically motivated campaigns against energy infrastructure are accelerating. Recent Russia-nexus attacks on the Polish power system and Romania’s national oil pipeline demonstrate that state‑linked and criminal groups see energy and digital infrastructure as leverage points. Last December, actors linked to Russia’s Sandworm APT compromised remote terminal units (RTUs), firewalls, and communications gateways at Polish substations and distributed energy facilities.

This precedent-setting cyberattack—the first to directly target distributed energy resources in a NATO member’s power system—is indicative of the current threat landscape. Sandworm’s campaign underscores how fragile edge devices are and how vital it is to harden the gateways at the OT boundary. The first pillar of secure-by-design campuses is disciplined network segmentation that treats OT as a distinct, high‑consequence domain.

OT networks should be carved into functional and geographic zones—separating building management from generator controls, from battery systems, from grid‑interconnection protection—with tightly controlled conduits between them, enforced by OT‑aware firewalls and protocol‑constrained paths.
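The zoning discipline described here can be sketched as a default-deny allow-list: every permitted conduit between zones is enumerated along with the protocols it may carry, and anything unlisted is blocked. Zone names, protocols, and conduits below are hypothetical examples, not a reference architecture:

```python
# Minimal sketch of a zone/conduit allow-list for a segmented OT network.
# Zone names, protocols, and conduits are hypothetical examples.

ALLOWED_CONDUITS = {
    # (source zone, destination zone): permitted protocols
    ("building_mgmt", "ot_monitoring"): {"bacnet"},
    ("generator_controls", "ot_monitoring"): {"modbus"},
    ("battery_systems", "ot_monitoring"): {"modbus"},
    ("ot_monitoring", "corporate_it"): {"syslog"},  # outbound telemetry only
}

def flow_permitted(src: str, dst: str, protocol: str) -> bool:
    """Default-deny: a flow passes only if its conduit and protocol are listed."""
    return protocol in ALLOWED_CONDUITS.get((src, dst), set())

# Telemetry may leave the OT monitoring zone for corporate IT...
assert flow_permitted("ot_monitoring", "corporate_it", "syslog")
# ...but nothing is permitted back in the other direction.
assert not flow_permitted("corporate_it", "ot_monitoring", "syslog")
# Zones with no declared conduit cannot talk at all.
assert not flow_permitted("building_mgmt", "generator_controls", "bacnet")
```

Note that the one-way telemetry conduit enforced here in policy is exactly what a hardware data diode enforces physically.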

Hardware‑enforced unidirectional gateways and data diodes offer uniquely strong protection at key boundaries. Data diodes allow telemetry and process data to flow outward from OT to IT and monitoring systems while physically blocking any return path, sharply reducing the chances that a web-based intrusion can reach OT systems.

Data diodes should be placed at key demarcation points—between the data center’s OT and corporate IT, between on‑site generation controls and the broader campus, and at interfaces with utility systems—so operators preserve visibility without exposing those domains to bidirectional network risk.

A second foundational element of secure‑by‑design campuses is a clear, continuously maintained OT asset inventory capturing every PLC, RTU, relay, drive, building controller, gateway, sensor, and engineering workstation, along with its network location, firmware version, vendor, and criticality. Effective segmentation depends on knowing what you have and how it communicates.

Operators cannot isolate critical power and cooling functions, or confidently place diodes and firewalls, without understanding which devices participate in those functions and which paths they rely on. This inventory must fully cover the same class of gateways and field devices abused in the Polish grid attack.

When asset inventories are linked to configuration and vulnerability management, operators can quickly identify exposed OT devices as they approach end of life or when new flaws are disclosed. A comprehensive OT asset inventory also enables security teams to quickly locate high‑risk remote access paths and prioritize segments for additional hardening.
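An inventory of this kind is, at bottom, a queryable record per device. A minimal sketch (all device data hypothetical) of how linking firmware versions to newly disclosed flaws lets operators surface the most exposed devices first:

```python
from dataclasses import dataclass

@dataclass
class OTAsset:
    name: str
    asset_type: str      # e.g. "PLC", "RTU", "gateway"
    zone: str            # network segment the device lives in
    firmware: str
    vendor: str
    criticality: str     # "high", "medium", "low"
    remote_access: bool  # reachable from outside its zone?

# Hypothetical inventory entries.
INVENTORY = [
    OTAsset("chiller-plc-01", "PLC", "cooling", "2.1.0", "AcmeCtl", "high", False),
    OTAsset("sub-rtu-07", "RTU", "grid_interconnect", "4.4.2", "GridWorks", "high", True),
    OTAsset("bms-gw-02", "gateway", "building_mgmt", "1.0.9", "AcmeCtl", "medium", True),
]

def exposed(inventory, vulnerable_firmware):
    """List devices running flawed firmware, high-criticality and remote-reachable first."""
    hits = [a for a in inventory if (a.vendor, a.firmware) in vulnerable_firmware]
    return sorted(hits, key=lambda a: (a.criticality != "high", not a.remote_access))

# A newly disclosed flaw in GridWorks 4.4.2 immediately surfaces the substation RTU.
flaws = {("GridWorks", "4.4.2")}
assert [a.name for a in exposed(INVENTORY, flaws)] == ["sub-rtu-07"]
```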

Secure‑by‑design engineering mandates mitigation of the accelerating cyber risks posed by remote access gateways and the mass automation of industrial functions. Every orchestration platform, management API, and remote session is a potential high‑impact attack vector. This threat model requires consolidating OT access through hardened jump hosts with strong authentication and just‑in‑time privileges; sharply limiting what automation tools can change on OT networks; enforcing strict segregation between automation platforms and safety‑critical functions; continuously monitoring automated and remote actions; and hardening configuration‑management workflows.
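The just‑in‑time privilege model can be sketched as access grants that carry an explicit expiry, so no standing path into OT survives past its approved window. Names and durations here are illustrative:

```python
import time

class JITAccessBroker:
    """Sketch of just-in-time OT access: grants expire; nothing is standing."""

    def __init__(self):
        self._grants = {}  # (user, target device) -> expiry timestamp

    def grant(self, user: str, target: str, ttl_seconds: int) -> None:
        """Open a time-boxed window for one user/device pair."""
        self._grants[(user, target)] = time.monotonic() + ttl_seconds

    def is_allowed(self, user: str, target: str) -> bool:
        """A session is permitted only inside an unexpired grant."""
        expiry = self._grants.get((user, target))
        return expiry is not None and time.monotonic() < expiry

broker = JITAccessBroker()
# No standing access by default.
assert not broker.is_allowed("engineer1", "chiller-plc-01")
# An approved session opens a window...
broker.grant("engineer1", "chiller-plc-01", ttl_seconds=3600)
assert broker.is_allowed("engineer1", "chiller-plc-01")
# ...scoped to that one user/device pair only.
assert not broker.is_allowed("engineer1", "sub-rtu-07")
```

In practice the broker would sit on a hardened jump host and log every grant and session for the monitoring described below, but the expiring-grant core is the essential design choice.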

Lastly, secure‑by‑design architecture demands OT‑aware visibility that can actually see and understand what is happening on control networks. This means instrumenting OT segments with monitoring tuned to industrial protocols and behaviors, correlating alerts with asset context, and wiring those insights into playbooks that can quickly isolate, triage, and physically replace compromised edge devices before an intrusion escalates.

Resilience is the only path to funding

The threat modeling, procurement, and design best practices detailed here directly constrain the blast radius of geopolitically charged campaigns that threaten data center reliability and safety. Data center developers, operators, and investors need this systems‑level blueprint for building AI‑era campuses that remain resilient as the energy and threat landscape becomes more contested.

Banks and institutional sponsors are deploying trillions of dollars in construction, fit‑out, and power capacity on the assumption that AI demand will translate into durable, high‑availability cash flows. Underinvesting in cybersecurity directly threatens covenants, refinancing options, insurance coverage, and asset valuation. Outages, safety incidents, or regulatory findings will capsize the investment thesis.

The campuses that will secure the best financing over the next decade will be those that can point to their secure‑by‑design architectures, campus-wide OT governance, and defensible supply‑chain practices. As the infrastructure supercycle collides with this OT threat environment, power usage effectiveness (PUE) metrics and fast build schedules will matter less than proven security safeguards.

The stakes are escalating rapidly. Developers and utilities are pairing energy‑hungry data centers with small modular reactors (SMRs) and other non‑traditional power generation. These campuses will converge with the security and risk profile of nuclear and high‑hazard industrial facilities, bringing heightened regulation and adversary interest.

SMR data centers fundamentally change the threat model. When nuclear systems sit alongside AI clusters, secure-by-design takes on a new dimension. Operators, investors, regulators, and security professionals must prepare for this convergence. The integration of compute and power generation creates a dynamic that demands the security rigor of both digital infrastructure and nuclear facilities. The window to build these protections into design is closing.

Jeffrey Knight is Director of Global Critical Infrastructure Services at InfraShield. Jeff brings more than 35 years of experience in nuclear engineering and cybersecurity across the Department of Defense (DoD), SWIFT, the NRC, and the Department of Energy (DOE) National Laboratory complex.

The post Why ‘secure-by-design’ systems are non-negotiable in the AI era appeared first on CyberScoop.

The slow rise of SBOMs meets the rapid advance of AI

By: Greg Otto
24 November 2025 at 06:00

Open-source components power nearly all modern software, but they’re often buried deep in massive codebases—hiding severe vulnerabilities. For years, software bills of materials (SBOMs) have been the security community’s key tool to shine a light on these hidden risks. Yet, despite government advancements in the US and Europe, SBOM adoption in the private sector remains sluggish. Now, some experts warn that the rapid rise of AI-assisted coding could soon eclipse the push to make software supply chains more transparent.

“I’m a strong, strong supporter of SBOM, and yet we have this emerging thing that’s happening that fundamentally undermines everything that we’ve been working towards,” Sounil Yu, chief AI officer of Knostic, told CyberScoop. “It is not a far-away future where we should expect to see a near infinite number of varieties of [CVE-free software packages] that AI coding systems are going to generate.”

Yu’s optimistic vision, while shared by some, is roundly rejected by many veteran SBOM and software security experts, who say there will likely never be a day when AI can produce vulnerability-free software. 

“People are imagining a future where there are no open-source dependencies or there are no reused dependencies, and therefore there’s nothing to put in an SBOM because every piece of the code is bespoke,” Brian Fox, the co-founder and CTO of Sonatype, told CyberScoop. “I think that’s kind of insane.”

Where SBOM policy stands

Acting under an executive order issued by President Joe Biden, the National Telecommunications and Information Administration (NTIA) released the U.S. government’s first official SBOM document, The Minimum Elements For a Software Bill of Materials (SBOM), in July 2021. That foundational effort was subsequently transferred to the Cybersecurity and Infrastructure Security Agency (CISA).
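The minimum elements in that document boil down to a few fields per component: supplier, name, version, unique identifiers, and dependency relationships, plus the SBOM’s own author and timestamp. A sketch of one such record (component names, versions, and identifiers are hypothetical):

```python
import json
from datetime import datetime, timezone

# Sketch of an SBOM built from the NTIA minimum-element fields.
# Component names, versions, and identifiers are hypothetical.
sbom = {
    "author": "example-build-pipeline",                # who produced the SBOM data
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "components": [
        {
            "supplier": "Example OSS Project",
            "name": "example-http-lib",
            "version": "2.4.1",
            "identifiers": {"purl": "pkg:pypi/example-http-lib@2.4.1"},
            "depends_on": ["example-tls-lib"],         # dependency relationship
        },
    ],
}

document = json.dumps(sbom, indent=2)
assert "example-http-lib" in document
```

Real SBOMs are exchanged in standard formats such as SPDX or CycloneDX, which encode these same fields in richer, tool-readable schemas.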

According to Allan Friedman, who is widely considered the “father” of SBOM and spearheaded that document’s creation, Biden’s order also clearly intended SBOMs to be mandated for federal government suppliers under the Federal Acquisition Regulation (FAR), which could have created a transparency floor for all software providers looking to sell into the federal government.

However, neither the National Institute of Standards and Technology (NIST) nor the Office of Management and Budget (OMB) fully spelled out what that requirement would look like, and the hoped-for FAR requirement ended up merely as part of a required software attestation form, according to Friedman, who is now a senior technical adviser at the Institute for Security and Technology (IST).

Two recent developments at CISA have fostered hopes for more widespread and robust SBOMs. On Aug. 22, the agency opened a public comment period for an SBOM guide that aims to update the NTIA document to reflect evolving SBOM practices.

On Sept. 3, CISA, in collaboration with NSA and 19 international partners, released joint guidance outlining the “growing international consensus” for what an SBOM should look like. Participants called the guidance “a significant step forward in strengthening software supply chain transparency and security worldwide.”

As promising as some may find these developments, some experts believe they represent the last vestiges of the Biden administration’s work. Former CISA employee Josh Corman, now an executive in residence for public safety and resilience at IST, told CyberScoop that the minimum elements update and the international framework were actions akin to “the body continuing to move without its head just because of prior commitments to the [Biden] White House.” 

While SBOM work has stalled under the Trump administration, other experts believe there is more to come from CISA. “[CISA official] Nick Andersen and [CISA director nominee] Sean Plankey are both supporters of these initiatives,” NetRise co-founder and CEO Tom Pace told CyberScoop. He added, “I know that directly. I also know that we have multiple contracts with the federal civilian agencies, including CISA, that are moving forward for SBOM.”

CISA insists that it has not slowed its work on SBOM; rather, its efforts have increased.

“We are actively involved in several SBOM-related initiatives, including the G7 Cybersecurity Working Group’s Software Bill of Materials for Artificial Intelligence and the review of nearly 100 public comments on our draft SBOM Minimum Elements,” CISA Director of Public Affairs Marci McCarthy told CyberScoop in a statement. “The recently released Shared Vision of SBOM highlights and reinforces our operational collaboration in action with both international and domestic partners to advance the use of SBOMs.”

Aside from CISA’s actions, other developments at the federal level promise to further advance SBOM. The Consolidated Appropriations Act of 2023 amended the Food, Drug, and Cosmetic Act to mandate SBOMs as part of premarket submissions for healthcare devices at the FDA. In 2023, the Pentagon issued guidance that contains recommendations for SBOM management as part of the military’s supply chain risk management strategy.

On the international level, the European Parliament adopted the Cyber Resilience Act (CRA) in March 2024, which will require all manufacturers and distributors of digital products to share a top-level SBOM with market surveillance authorities as part of the technical documentation provided. The legislation calls for these requirements to take effect in December 2027.

Private sector barriers to SBOM adoption

Even with these advancements, most software providers still don’t provide SBOMs, and most organizations don’t demand them from their suppliers. Black Duck’s latest annual analysis found that 86% of commercial codebases contain open-source vulnerabilities, with 81% carrying high- or critical-risk flaws. Meanwhile, 95% of websites continue running outdated software with known issues.

“Surveys are showing that only 30% of people are doing anything about this,” Sonatype’s Fox said. “And that’s largely because it’s optional.”

Corman thinks most organizations find transparency “existentially terrifying.” 

“They have license risks where they’re violating terms and conditions of open-source licenses that can be exposed in lawsuits, and they’re not prone to out themselves voluntarily,” he said. 

Along the same lines, Steve Springett, chair of the CycloneDX Core Working Group and board vice chair of the OWASP Foundation, told CyberScoop that many organizations fear the legal ramifications of disclosing flaws in their software. “The legal departments in a lot of organizations really don’t want them to unnecessarily disclose more information than what is required for normal business activities.”

Nilesh Jain, co-founder and CEO of cybersecurity startup CleanStart, told CyberScoop, “Most companies that we interact with are still trying to figure out the best way to start generating SBOMs. Some of the largest enterprises and banks and financing institutions still don’t use it.”

Cyber vulnerability expert Art Manion points to the so-called “naming problem”: the same software exists in so many versions, spanning years and tracked under numerous naming syntaxes, that accounting for this multiplicity in an SBOM framework becomes overwhelming.

“Fundamentally, we really are still blocked by not uniformly calling software the same things,” Manion told CyberScoop. “No single source can spend enough time or money or be fast enough to collect and name all the software and keep track of it.”

Friedman, however, thinks this naming problem can be solved “with a little bit of intelligence on the pattern-matching side of things. Instead of trying to build a tool that matches exact string to exact string, we can do some fuzzy matching with a little bit of data science,” he said.
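Friedman’s fuzzy-matching point can be illustrated with nothing more than the standard library: normalize away cosmetic syntax differences, then score the remaining similarity instead of demanding an exact string match. Thresholds and package names here are illustrative:

```python
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    """Collapse the cosmetic differences that make one package look like many."""
    return name.lower().replace("_", "-").replace(" ", "-")

def same_software(a: str, b: str, threshold: float = 0.8) -> bool:
    """Fuzzy match: do two identifiers likely refer to the same component?"""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio() >= threshold

# The same libraries, named different ways across ecosystems and years.
assert same_software("OpenSSL", "openssl")
assert same_software("Apache Log4j", "apache-log4j")
# Genuinely different components still stay apart.
assert not same_software("OpenSSL", "LibreSSL")
```

Production matchers add ecosystem-aware identifiers such as package URLs (purls) on top of this kind of scoring, but the core idea is the same: tolerance for syntax drift rather than exact-string lookup.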

Will generative AI eliminate the need for SBOMs?

While progress on SBOM is slow, there is a simultaneous surge in the adoption and hype cycle of AI-based coding assistants. Some experts believe these tools will reduce or even eliminate software vulnerabilities.

“I’ve created code myself where I’ve instructed my AI coding assistant to go build me some software and not use any software dependencies whatsoever,” Knostic’s Yu told CyberScoop, suggesting that avoiding dependencies can also help prevent vulnerabilities found in those libraries from being included in new software. “You can reference the entirety of open source as a template for what to build, but do not actually use any open-source libraries.”

CycloneDX’s Springett agrees with Yu. “It can be done,” he told CyberScoop. “It’s just not being done today, but it can be done. I’ve seen it being done. In the short term, AI is going to propel the number of first-party vulnerabilities that we create. But in the longer term, AI will be a good peer code reviewer and code author, and will always be on the lookout for insecure code and suggest safer alternatives to developers.”

Opinions on whether AI can create vulnerability-free systems are sharply divided. “It’s absolutely not possible,” Manion said. “I have seen no evidence that AI is going to write secure software.”

“That’s basically saying everything we’ve learned in software engineering over the last 60-plus years is just tossed out the window, and none of those things matter,” Sonatype’s Fox said. “If you want to recreate the wheel and make all the same mistakes, good luck, man.”

“I don’t think it’s possible,” Biswajit De, co-founder and CTO of CleanStart, told CyberScoop. “It is physically impossible to give everything in your prompts to create vulnerability-free code.”

Friedman is skeptical as well. 

“I have a hard time imagining any tool that is trained in the JavaScript or the node package management system, which is heavily reliant on thousands of dependencies, just then turning around and saying, ‘Well, we can write code without dependencies,’ or if they are writing code, it will use those dependencies in practice,” he told CyberScoop. 

He added, “AI-generated code will get better. Anyone who looks at what is being produced today will say, ‘Oh, that’s impressive.’ But large code bases tend to get unwieldy very quickly. You can use AI to try to find and detect vulnerabilities as you write them, but people do that today. There’s nothing magic about AI compared to today’s tools or the future tools.”

The post The slow rise of SBOMs meets the rapid advance of AI appeared first on CyberScoop.
