
Why the web-hosting industry needs a trust seal

By: Greg Otto
16 October 2025 at 06:00

Every day, billions of people place their trust in websites they know little about. Behind each one is a hosting provider, but not all of them play by the same rules. 

Traditionally, privacy policies let web visitors understand how their data would be handled, and SSL (Secure Sockets Layer) certificates ensured their connection was encrypted. Those safeguards were once sufficient. Today, they are not.

The online threat landscape is evolving at the speed and scale of AI development, and many on the front lines are unprepared. A recent survey of 600 enterprise IT leaders found that just 10% of respondents were very confident in their ability to address AI-enabled attacks targeting their organizations. 

Before AI, cyberattacks were primarily rule-based, scripted, and manually executed. Today's attacks deploy everything from deepfake phishing calls to automated vulnerability scanning. AI has enhanced their scale, personalization, and automation, making them easier to adapt and harder to detect. That should alarm us all.

This isn’t only about evolving to meet technological advancements — it’s also about trust. Consumers and businesses alike must be able to identify which providers meet high standards for transparency, reliability, and accountability. Without that clarity, they are left in the dark, unable to make informed choices about who they rely on to keep their digital lives safe. In an era of relentless cyberattacks, the internet needs a higher standard to safeguard not just websites, but the very trust that keeps the entire system running. 

That’s why the Secure Hosting Alliance (SHA) is introducing the SHA Trust Seal. The seal sets a clear bar for providers by demanding transparency, accountability, and resilience. Certified hosts commit to offering fair and understandable terms of service, with no hidden surprises. They act quickly and responsibly when their infrastructure is misused, maintain reliable and resilient services through proactive monitoring and recovery planning, and handle government requests with documented, lawful processes that respect privacy and due process. Most importantly, they commit to ongoing accountability. 

In recent years, transparency has become a cornerstone of the larger cybersecurity community, with companies expected to back up their claims through independent audits, public disclosures, and measurable outcomes. Trust seals are already standard in industries like e-commerce, finance, and health care, where sensitive information is exchanged and verified authentication is essential. Given that the web-hosting industry is part of the internet’s critical infrastructure, it too deserves a clear symbol of trust. The SHA Trust Seal delivers exactly that, translating providers’ promises from words on a website into commitments that can be verified against clear, rigorous standards.

The Trust Seal also reflects a larger shift in how the industry tackles problems. Instead of every company responding in isolation, SHA works with partners such as the Malware and Mobile Anti-Abuse Working Group (M3AAWG) and the Anti-Phishing Working Group (APWG) to build common approaches for preventing cybercrime, improving incident response, and reducing misuse of hosting resources. By creating consistent expectations across providers, the seal helps establish a baseline for what responsible stewardship of the internet should look like.

The stakes are high. From ransomware to supply chain breaches, hackers increasingly target the companies behind the websites we use every day. Earlier this year, Cloudflare blocked a record-breaking distributed denial-of-service (DDoS) attack of 7.3 terabits per second — the largest in history. Attacks like this strike at the very infrastructure of the internet, yet most consumers remain unaware of how fragile that foundation can be. 

This lack of visibility is exactly why a trust seal is needed. The SHA Trust Seal is more than just a badge — it’s a promise. It gives responsible providers a way to make their commitments visible, reassuring customers, elevating industry standards, and strengthening the foundation of a safer internet. By embracing a trust seal, the web hosting industry can transform security from a hidden feature into a visible standard.

Christian Dawson is the co-founder of the Internet Infrastructure Coalition (i2Coalition) and the Coalition on Digital Impact (CODI).

The post Why the web-hosting industry needs a trust seal appeared first on CyberScoop.

Red, blue, and now AI: Rethinking cybersecurity training for the 2026 threat landscape

By: Greg Otto
14 October 2025 at 05:00

Cybersecurity today is defined by complexity. Threats evolve in real time, driven by AI-generated malware, autonomous reconnaissance, and adversaries capable of pivoting faster than ever. 

In a recent survey by Darktrace of more than 1,500 cybersecurity professionals worldwide, nearly 74% said AI-powered threats are a major challenge for their organization, and 90% expect these threats to have a significant impact over the next one to two years.

Meanwhile, many organizations are still operating with defensive models that were built for a more static world. These outdated training environments are ad hoc, compliance-driven, and poorly suited for the ever-changing nature of today’s security risks.

What’s needed now within organizations and cybersecurity teams is a transformation from occasional simulations to a daily threat-informed practice. This means changing from fragmented roles to cross-functional synergy and from a reactive defense to operational resilience. 

At the heart of that transformation lies Continuous Threat Exposure Management (CTEM), a discipline — not a tool or a project — that enables organizations to evolve in step with the threats they face.

Why traditional models no longer work

Legacy training models that include annual penetration tests, semi-annual tabletop exercises, and isolated red vs. blue events are no longer sufficient. They offer limited visibility, simulate too narrow a scope of attack behavior, and often check a compliance box without building lasting and strategic capabilities.

Even worse, they assume adversaries are predictable and unchanging. But as we know, AI-generated malware and autonomous reconnaissance have raised the bar. Threat actors are now faster, more creative, and harder to detect. 

Today’s attackers are capable of developing evasive malware and launching attacks that shift in real time. To meet this evolving threat environment, organizations must shift their mindset before they can shift their tactics. 

Embedding CTEM into daily practice

CTEM offers a fundamentally different approach. It calls for operationalized resilience, where teams systematically test, refine, and continually evolve their defensive posture daily. 

This is not done through broad-stroke simulations, but through atomic, context-aware exercises that target individual techniques relevant to the organization’s specific threat landscape, one sub-technique at a time. Teams look at one scenario, then iterate, refine, and move on to the next.

This level of precision ensures organizations are training for the threats that actually matter — attacks that target their sector, their infrastructure, and their business logic. It also creates a steady rhythm of learning that helps build enduring security reflexes.

Real-time breach simulations: training under pressure

What separates CTEM from traditional testing is not just frequency, but authenticity. Real-time breach simulations aren’t hypothetical. These simulations are designed to replicate real adversarial behavior, intensity, and tactics. If they are done right, they mirror the sneakiness and ferocity of live attacks.

We should keep in mind that authenticity doesn’t just come from tools but also from the people designing the simulations. You can only replicate real-world threats if your SOC teams are keeping current with today’s threat landscape. Without that, simulations risk becoming just another theoretical exercise. 

These complex scenarios don’t just test defenses; they reveal how teams collaborate under pressure, how fast they detect threats, and whether their response protocols are aligned with actual threat behavior.

Analytics as a feedback loop

What happens after a simulation is just as important as the exercise itself. The post-simulation analytics loop offers critical insights into what worked, what didn’t, and where systemic weaknesses lie. 

Granular reporting is essential, as it allows organizations to identify issues with skills, processes, or coordination. By learning the specifics and gaining meaningful metrics — including latency in detection, success of containment, and coverage gaps — they can turn simulations into actionable intelligence. 

Over time, recurring exercises using similar tradecraft help measure progress with precision and determine if improvements are taking hold or if additional refinements are needed.
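The kind of post-simulation scoring described above can be sketched as a small function. The event fields and metric names here are illustrative assumptions, not any particular CTEM platform's schema:

```typescript
// Hypothetical event records from a breach simulation exercise.
interface SimulationEvent {
  technique: string;             // e.g. an ATT&CK technique ID
  injectedAt: number;            // epoch ms when the red team executed the step
  detectedAt: number | null;     // epoch ms when the blue team flagged it, if ever
  contained: boolean;            // whether the response playbook stopped the step
}

interface ExerciseMetrics {
  meanDetectionLatencyMs: number | null; // average time from injection to detection
  detectionCoverage: number;             // fraction of steps detected at all
  containmentRate: number;               // fraction of steps contained
}

function scoreExercise(events: SimulationEvent[]): ExerciseMetrics {
  const detected = events.filter(e => e.detectedAt !== null);
  const latencies = detected.map(e => (e.detectedAt as number) - e.injectedAt);
  return {
    meanDetectionLatencyMs: latencies.length
      ? latencies.reduce((a, b) => a + b, 0) / latencies.length
      : null,
    detectionCoverage: events.length ? detected.length / events.length : 0,
    containmentRate: events.length
      ? events.filter(e => e.contained).length / events.length
      : 0,
  };
}
```

Running the same scoring over recurring exercises is what makes progress measurable: the numbers, not impressions, show whether detection latency is actually shrinking.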

A blueprint for CISOs: building resilient, cross-functional teams

For CISOs and security leaders, adopting CTEM is not just about adding more tools — it’s about implementing culture, structure, and strategy. 

This is a blueprint for embedding CTEM into an organization’s security protocols:

  • Integrate tactical threat intelligence. Training must be based on real-world intelligence. Scenarios disconnected from the current threat landscape are at best inefficient, at worst misleading.
  • Align red and blue teams through continuous collaboration. Security is a team sport. Silos between offensive and defensive teams must be broken down. Shared learnings and iterative refinement cycles are essential.
  • Engage in simulation, not just instruction. Structured training is the foundation, but true readiness comes from cyber incident simulation. Teams need to move from knowing a technique to executing it under stress, in an operational context.
  • Establish CTEM as a daily discipline. CTEM must be part of the organization’s DNA and a continuous process. This requires organizational maturity, dedicated feedback loops, and strong process ownership.
  • Use metrics to drive learning. Evidence-based repetition depends on reliable data. Analytics from breach simulations should be mapped directly to skills development and tooling performance.

The role of AI in cybersecurity training

While attackers are already using AI to their advantage, defenders can use it too, but with care. 

AI isn’t a replacement for real-world training scenarios. Relying on it alone to create best-practice content is a mistake. What AI can do well is speed up content delivery, adapt to different learners, and personalize the experience. 

It can also identify each person’s weaknesses and guide them through custom learning paths that fill real skill gaps. In 2026, expect AI-driven personalization to become standard in professional development, aligning learner needs with the most relevant simulations and modules.

Beyond tools: making CTEM a culture

Ultimately, CTEM succeeds when it’s embraced not as a feature or a product but as a discipline woven into the daily practices of the organization. 

It also requires careful development. Red and blue teams must be open, transparent, and aligned. It’s not enough to simulate the threat. Security teams must also simulate to match an adversary’s intensity in order to build reflexes strong enough to withstand the real thing. 

The organizations that take this path won’t just respond faster to incidents — they’ll be able to anticipate and adapt and cultivate resilience that evolves as quickly as the threats do.

Dimitrios Bougioukas is vice president of training at Hack The Box, where he leads the development of advanced training initiatives and certifications that equip cybersecurity professionals worldwide with mission-ready skills.

The post Red, blue, and now AI: Rethinking cybersecurity training for the 2026 threat landscape appeared first on CyberScoop.

Expired protections, exposed networks: The stakes of CISA’s sunset

By: mbracken
29 September 2025 at 06:00

A critical, longstanding piece of America’s cybersecurity infrastructure is perilously close to vanishing overnight. 

On Tuesday, the Cybersecurity Information Sharing Act (CISA) expires — and with it, the legal protections that enable countless organizations to share threat intelligence with the federal government. Without swift congressional action, we risk dismantling years of progress in collaborative cyber defense at the precise moment we need it most.

As we approach CISA’s 10-year anniversary, we’re confronted with the reality that today’s threat landscape is virtually unrecognizable from a decade ago. In 2015, we worried about data breaches and website defacements. 

Today, we face AI-powered attacks, the proliferation of cybercrime-as-a-service, supply chain compromises that ripple across entire sectors, undetected cyberattacks that pre-position adversaries, and sophisticated ransomware ecosystems where criminals and nation-states share resources to scale their cyber operations.

The recent Salt Typhoon intrusions into U.S. telecommunications infrastructure underscore a harsh reality: our adversaries have evolved faster than our defenses.

The damaging cost of inaction

CISA’s expiration wouldn’t just be a bureaucratic hiccup — it would trigger a cascade of consequences across our digital infrastructure. The act’s safe harbor provisions and liability protections form the legal backbone that allows private companies to share cyber threat indicators with government agencies, without fear of lawsuits. Remove these protections, and organizations will retreat into information silos, leaving us blind to emerging threats.

Consider what could happen if these protections disappear: a financial institution that detects suspicious activity linked to a nation-state campaign could face legal exposure for sharing that intelligence. A single hospital’s medical records compromised during a cyberattack could put an entire health care system at risk. The telecommunications companies that need to coordinate during incidents like Salt Typhoon could lose their legal framework for collaboration. 

This isn’t speculation — it’s the pre-2015 reality we’d return to.

Beyond band-aids: modernizing for tomorrow’s threats

While the proposed WIMWIG Act aims to extend CISA through 2035, simply reauthorizing outdated frameworks won’t thoroughly address modern security challenges. We’re still operating in a reactive cybersecurity paradigm that tells organizations what already happened, rather than helping them understand what’s currently happening based on signals and criminal behaviors. 

Current information sharing focuses heavily on Indicators of Compromise (IoCs) — specific IP addresses, domains, and file hashes that attackers use. But in an era of AI and automation, threat actors constantly pivot their infrastructure, making these IoCs stale within days, hours, or even minutes.

The truth is, while threat intelligence serves larger organizations with mature security operations, most organizations struggle to leverage it effectively. We need intelligence that doesn’t just catalog past attacks but that provides predictive insights. 

This is why the real opportunity lies in shifting from reactive IoC sharing to proactive behavioral analytics and telemetry. Instead of sharing that an attacker used a specific IP address — infrastructure they will constantly replace — we need to share how they moved through networks, what techniques they employed, and what behaviors preceded the attack. Three failed login attempts might mean nothing in isolation, but when combined with lateral movement patterns and privilege escalation behaviors, they reveal an active intrusion.
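A minimal sketch of that correlation logic, assuming hypothetical signal names and a threshold of three failed logins (both are illustrative, not a real detection standard):

```typescript
// Illustrative behavioral-correlation rule: none of these signals alone
// is conclusive, but together they indicate an active intrusion.
type Signal =
  | { kind: "failed_login"; host: string }
  | { kind: "lateral_movement"; host: string }
  | { kind: "privilege_escalation"; host: string };

function flagsActiveIntrusion(signals: Signal[], host: string): boolean {
  const onHost = signals.filter(s => s.host === host);
  const failedLogins = onHost.filter(s => s.kind === "failed_login").length;
  const lateral = onHost.some(s => s.kind === "lateral_movement");
  const privesc = onHost.some(s => s.kind === "privilege_escalation");
  // Three failed logins alone are noise; combined with lateral movement
  // and privilege escalation on the same host, they read as a campaign.
  return failedLogins >= 3 && lateral && privesc;
}
```

The point of the sketch is that behavioral rules like this survive infrastructure rotation: the attacker can change IP addresses at will, but the sequence of behaviors remains detectable.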

This shift becomes even more critical as we enter the age of non-human identities. Cloud services, operational technology, and AI systems are creating environments where machine identities outnumber human ones 10:1.

Understanding the complex relationships and interactions across these hybrid environments requires contextual intelligence that transforms raw telemetry into actionable insights about ongoing threats and the identities most likely to be targeted.

A path forward

Congress faces a choice: settle for short-term extensions that kick the can down the road or seize this moment to modernize our cyber defense systems. Some may view CISA’s potential expiration as a retreat from collective cyber defense, but it could instead represent an opportunity to build something stronger — a modern framework that demonstrates America’s commitment to defending against cyber threats at every level. 

Meaningful reauthorization must include: 

  • Enhanced liability protections that cover behavioral anomalies, not just traditional IoCs. Organizations need legal clarity in order to share the rich, contextual intelligence that actually prevents attacks.
  • Mandated reciprocity in intelligence flows. Too often, private sector sharing has been a one-way street. Federal agencies must provide consistent, enriched, and actionable intelligence back to industry partners, fostering true collaboration rather than mere collection.
  • Incorporation of AI and automation capabilities that can process behavioral patterns at scale, enabling real-time threat detection across our increasingly complex digital ecosystem.
  • Improved oversight mechanisms that ensure the program evolves with the threat landscape rather than remaining frozen in 2015-era security methodologies.

The urgency is real

With bipartisan reauthorization efforts facing tight timelines, the window to get this right is closing fast. If CISA 2015 lapses, it shouldn’t be due to political gridlock but because we’ve chosen to seize this opportunity to build a cyber defense framework worthy of the challenges ahead.

Every day of delay gives our adversaries a greater advantage. Every moment of uncertainty weakens our collective cyber defense. Congress must act decisively, not just to preserve what we have, but to build the proactive, behavior-based intelligence-sharing ecosystem our national security demands.

In just a day, we’ll either have a modernized framework for collaborative cyber defense, or we’ll watch a decade of progress crumble. The choice before Congress isn’t just about renewal — it’s about transformation. Let’s ensure any outcome strengthens, not weakens, our nation’s cyber resilience. 

The time for action is now — we must defend and protect forward.

Kevin E. Greene is the chief cybersecurity technologist for public sector at BeyondTrust. He previously held tech roles at OpenText, the MITRE Corporation and in the cybersecurity division of the Department of Homeland Security’s Science and Technology Directorate.

The post Expired protections, exposed networks: The stakes of CISA’s sunset appeared first on CyberScoop.

Contain or be contained: The security imperative of controlling autonomous AI

By: mbracken
25 September 2025 at 09:30

Artificial intelligence is no longer a future concept; it is being integrated into critical infrastructure, enterprise operations and security missions around the world. As we embrace AI’s potential and accelerate its innovation, we must also confront a new reality: the speed of cybersecurity conflict now exceeds human capacity. The timescale for effective threat response has compressed from months or days to mere seconds. 

This acceleration requires removing humans from the tactical security loop. To manage this profound shift responsibly, we must evolve our thinking from abstract debates on “AI safety” to the practical, architectural challenge of “AI security.” The only way to harness the power of probabilistic AI is to ground it with deterministic controls.

In a machine-speed conflict, the need to have a person develop, test and approve a countermeasure becomes a critical liability. Consider an industrial control system (ICS) managing a municipal water supply. An AI-driven attack could manipulate valves and pumps in milliseconds to create a catastrophic failure. A human-led security operations center might not even recognize the coordinated anomaly for hours. 

An AI-driven defense, however, could identify the attack pattern, correlate it with threat intelligence, and deploy a countermeasure to isolate the affected network segments in seconds, preserving operational integrity. In this new paradigm, the most secure and resilient systems will be those with the least direct human interaction. Human oversight will — and must — shift from the tactical to the strategic.

The fallacy of AI safety

Much of the current discourse on “AI safety” centers on the complex goal of aligning AI with human values. As AI pioneer Stuart Russell notes in his book “Human Compatible,” a key challenge is that “it is very difficult to put into precise algorithmic terms what it is you’re looking for.” Getting human preferences wrong is “potentially catastrophic.”

This highlights the core problem: trying to program a perfect, universal morality is a fool’s errand. There is no global consensus on what “human values” are. Even if we could agree, would we want an apex predator’s values encoded into a superior intelligence? 

The reality is that AI systems — built on neural networks modeled after the human brain and trained on exclusively human-created content — already reflect our values, for better and for worse. The priority, therefore, should not be a futile attempt to make AI “moral,” but a practical effort to make it secure.

As author James Barrat warns in “Our Final Invention,” we may be forced to “compete with a rival more cunning, more powerful, and more alien than we can imagine.” The focus must be on ensuring human safety by architecting an environment where AI operations are constrained and verifiable.

Reconciling probabilistic AI with deterministic control

AI’s power comes from its probabilistic nature. It analyzes countless variables and scenarios to identify strategies and solutions — like the AlphaGo move that was initially laughed at but secured victory — that are beyond human comprehension. This capability is a feature, not a bug.

However, our entire legal and policy infrastructure is built on a deterministic foundation. Safety and security certifications rely on testable systems with predictable outcomes to establish clear lines of accountability.

This creates a fundamental conflict. Who is liable when a probabilistic AI, tasked with managing a national power grid, makes an unconventional decision that saves thousands of lives but results in immediate, localized deaths? 

No human will want, or be allowed, to accept the liability for overriding an AI’s statistically superior strategic decision. The solution is not to cripple the AI by forcing it into a deterministic box, but to build a deterministic fortress around it. 

This aligns with established cybersecurity principles — such as those within NIST SP 800-53 — that mandate strict boundary protection and policy-enforced information flow control. We don’t need to control how the AI thinks; we need to rigorously control how it interacts with the world.

The path forward: AI containment

Three trends are converging: the hyper-acceleration of security operations, the necessary removal of humans from the tactical loop, and the clash between probabilistic AI and our deterministic legal frameworks. The path forward is not to halt progress, but to embrace a new security model: AI containment.

This strategy would allow the AI to operate and innovate freely within human-defined boundaries. It requires us to architect digital “moats” and strictly moderate the “drawbridges” that connect the AI to other systems. 

By architecting systems with rigorously enforced and inspected interfaces, we can monitor the AI, prevent it from being poisoned by external data and ensure its actions remain within a contained, predictable sphere. This is how we can leverage the immense benefits of AI’s strategic intelligence while preserving the deterministic control and accountability essential for our nation’s most critical missions.

Scott Orton is CEO of Owl Cyber Defense.

The post Contain or be contained: The security imperative of controlling autonomous AI appeared first on CyberScoop.

Why federal IT leaders must act now to deliver NIST’s post-quantum cryptography transition

By: Greg Otto
22 September 2025 at 05:30

In August 2024, the National Institute of Standards and Technology published its first set of post-quantum cryptography (PQC) standards, the culmination of over seven years of cryptographic scrutiny, review and competition. 

As the standards were announced, the implications for cybersecurity leaders were clear: The U.S. government must re-secure its entire digital infrastructure — from battlefield systems to tax records — against adversaries preparing to use quantum computers to break our encryption.

This isn’t a theoretical risk; it’s an operational vulnerability. The cryptography that secures federal data today will be obsolete — NIST has already set a deadline to ban some algorithms by 2035 — and our adversaries know it.

A foundational national security threat

Quantum computers are no longer science fiction — they’re a strategic priority for governments across the United States, Europe, China, and beyond, investing billions in their development. While the technology holds promise for scientific and economic breakthroughs, it also carries significant risks for national security.

If just one adversarial state succeeds in building a large enough quantum computer, it would render RSA, ECC, and other foundational cryptographic systems — the algorithms underpinning federal communications, authentication, and data protection — completely obsolete. This would occur not in the years or decades it would take a classical computer today, but in days.

Even before such computers exist, the risk is clear. Intelligence agencies like the National Security Agency have long warned of “harvest now, decrypt later” attacks. That means sensitive U.S. government data — captured today over insecure links or stolen in data breaches — may be stored in data centers with the intention of being decrypted years from now when quantum capabilities mature. This includes classified material, personally identifiable information, defense logistics data, and more.

We are not talking about theoretical vulnerabilities or bugs. We are talking about a complete systemic failure of classical cryptography in the face of a new computing paradigm, and a long-known one at that.

You’ve been warned and instructed

If you work in federal IT or security and haven’t started quantum-proofing your systems, you are already behind. The U.S. government has made its intentions crystal clear over the past three years. 

National Security Memorandum 10 (NSM-10), under the Biden administration, was signed in 2022 and mandates that all National Security Systems transition to quantum-resistant cryptography by 2030. This was followed by Office of Management and Budget memo M-23-02 in November 2022, which requires all federal civilian agencies to inventory their cryptographic assets, assess quantum vulnerability, and develop transition plans.

These early instructions were cemented in the NSA’s CNSA 2.0 guidelines, stating that systems protecting classified and national security data must move to quantum-safe algorithms before the 2035 deadline, with many systems already transitioned by 2030, using NIST’s approved cryptographic standards.

This is not a proposal; it is federal policy. The deadlines are set. The threat is recognized and the technology is ready.

The scale is unprecedented but not insurmountable

There hasn’t been a cryptographic overhaul of this magnitude since the transition to public-key cryptography in the 1980s and arguably not since Y2K. But unlike Y2K, there is no fixed date when things will fail. There won’t be a headline or official press release when quantum computing arrives. If you’re waiting for a clear signal, you won’t get one — it will simply be here, and those who haven’t prepared will already be behind.

Just as the Allies never announced that they had broken the Enigma machine, the first nation to build a cryptographically relevant quantum computer is unlikely to announce it to the world and its adversaries.

Quantum-safe transition isn’t as simple as swapping out a cryptographic library. Legacy systems across agencies rely on hardcoded cryptographic protocols. Hardware modules may require firmware upgrades or full replacement. Key management systems will need to be redesigned. Certification and compliance processes must be updated. 

This encryption is found everywhere across the technology supply chain and in everyday life. With so many critical government functions, services, systems and departments now run online, just one weak link in the supply chain could bring the whole network down. 

Under the NSA’s CNSA 2.0 guidelines, any company that wants to do business with the U.S. government must implement PQC, especially for any new technology procurement beyond 2030. Furthermore, any products using the designated vulnerable encryption will be discontinued by 2035.

Most agencies aren’t prepared, and the private-sector vendors they depend on are working hard to provide the tools needed to deliver the transition. We must also be wary of suppliers marketing “quantum-safe” solutions that do not meet NIST standards and may introduce new vulnerabilities down the line.

What federal IT leaders must do today 

The countdown to 2030 and 2035 has already begun. Federal CIOs, CISOs, and program managers should take the following steps this fiscal year:

  1. Enforce cryptographic discovery mandates. OMB memo M-23-02 requires all agencies to submit an annual inventory of cryptographic systems. If your agency hasn’t complied or gone beyond minimal discovery, it’s time to escalate.
  2. Demand vendor transparency. Your suppliers must tell you when and how they plan to support NIST’s PQC algorithms, not “proprietary” solutions. If they can’t, find new ones.
  3. Fund pilot deployments now. Testing post-quantum algorithms in isolated systems today will reveal architectural bottlenecks and allow for smoother rollout in future years.
  4. Educate procurement teams. Use the NSA’s quantum-safe procurement guidance to ensure RFPs, contracts, and tech refreshes explicitly require PQC readiness.
  5. Treat PQC as a cybersecurity budget line item, not a future capital project. Quantum risk is not hypothetical; it is live and demands action today.
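As a rough illustration of step 1, a cryptographic inventory can be mechanically screened for quantum-vulnerable public-key algorithms. The asset shape and algorithm labels below are assumptions for the sketch, not the M-23-02 reporting format:

```typescript
// Toy cryptographic-inventory check in the spirit of the M-23-02
// discovery mandate: flag assets still using public-key algorithms
// that a cryptographically relevant quantum computer would break.
interface CryptoAsset {
  system: string;
  algorithm: string; // e.g. "RSA-2048", "ECDSA-P256", "ML-KEM-768"
}

// Classical public-key schemes vulnerable to Shor's algorithm.
const QUANTUM_VULNERABLE = /^(RSA|ECDSA|ECDH|DSA|DH)\b/i;

function flagVulnerableAssets(inventory: CryptoAsset[]): CryptoAsset[] {
  return inventory.filter(a => QUANTUM_VULNERABLE.test(a.algorithm));
}
```

In practice, the hard part is building the inventory itself; once assets are enumerated, triaging them against NIST's approved algorithms is the straightforward step.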

The bottom line: This is a national defense imperative

You don’t have to believe the quantum hype — you just have to follow your own government’s threat assessments.

Federal legislation, including the Quantum Computing Cybersecurity Preparedness Act, signed into law in December 2022, requires agencies to prepare for the migration.

If your systems still rely on RSA, ECC, or other legacy algorithms without a transition roadmap, you are not defending them — you are leaving them open to attack.

The NIST standards show that with one year of progress behind us, there are five years of opportunity ahead.

Ali El Kaafarani is the founder and CEO of PQShield, a global leader in post-quantum cryptography.

The post Why federal IT leaders must act now to deliver NIST’s post-quantum cryptography transition appeared first on CyberScoop.

Ten things you should know about email

22 September 2025 at 03:43
COMMENTARY

By: Peter Deegan

Do you know enough about email to communicate securely, efficiently, and with your messages easily read by all? Like most things in the tech world, important — or at least useful — features are available that many people don’t know about or think aren’t worth the bother. Read the full story […]

When ‘minimal impact’ isn’t reassuring: lessons from the largest npm supply chain compromise

By: Greg Otto
15 September 2025 at 09:21

Earlier this week, Aikido Security disclosed what is being described as the largest npm supply chain compromise to date. Attackers successfully injected malicious code into 18 popular npm packages, collectively accounting for more than 2.6 billion weekly downloads. The entire campaign began not with a technical exploit, but with a single, well-trained maintainer clicking on a convincingly crafted phishing email.

The scale of this incident should serve as a wake-up call for the industry. Even though the financial fallout has been labeled “minimal,” attackers were able to compromise packages at the very core of the JavaScript ecosystem. That reality should concern every developer, security leader, and policymaker.

We can’t afford to normalize these events as routine, low-stakes occurrences. Each successful package takeover exposes the fragility of our collective software infrastructure. The fact that defenders managed to contain this “leaking roof” in time should not reassure us — it should motivate us to act before the next one.

Anatomy of the compromise

The attack began with a familiar but effective tactic: account takeover. According to Aikido, attackers tricked the maintainer of the affected libraries using a phishing email impersonating npm support, requesting a two-factor authentication update. With those stolen credentials in hand, the attackers published malicious versions of popular packages — including chalk and debug — by modifying their index.js files.

The injected payload was designed to hijack cryptocurrency transactions. By monitoring browser APIs like fetch, XMLHttpRequest, and wallet interfaces such as window.ethereum, the malware could redirect funds to attacker-controlled addresses.

Fortunately, the malicious versions were identified within minutes and publicly disclosed within the hour. This rapid response helped prevent widespread damage. Still, millions of developers pulled compromised versions during that brief window — a reminder of how much trust we place in open source infrastructure and how quickly that trust can be exploited.

Adding to the picture, further research has revealed that additional npm packages were hijacked as part of this campaign, including duckdb, which alone sees nearly 150,000 downloads per week. These findings reinforce the breadth of the operation and highlight how difficult it is to measure the full scope of supply chain compromises in real time.

A playbook that’s here to stay

This compromise was not an isolated incident. Package takeovers have become a standard tactic for threat actors because they provide unmatched reach: compromise one popular project, and you instantly gain access to millions of downstream systems. 

We have seen this strategy become a key tool for advanced persistent threats (APTs), including, most recently, groups like Lazarus. Package takeovers allow them to infiltrate massive portions of the world’s developer population by targeting a single under-resourced project.

The npm ecosystem is not unique in this regard. Whether it’s PyPI, RubyGems, or Maven Central, package registries are critical distribution points in the modern software supply chain. They represent single points of failure that adversaries will continue to exploit.

The “it wasn’t that bad” narrative

Since disclosure, some industry commentary has downplayed the incident. Reports note that the attackers appear to have stolen just a handful of crypto assets: roughly 5 cents of ETH and $20 worth of a small memecoin.

But this framing is short-sighted. The true cost is not the stolen cryptocurrency; it’s the thousands of hours of engineering and security work required worldwide to clean up compromised environments, not to mention the contracts, compliance requirements, and audits that inevitably follow. 

What’s also striking is how quickly attackers are now able to act. In this case, malicious versions of npm packages were downloaded potentially millions of times within minutes. The same pattern has played out for years in vulnerability exploitation — from Heartbleed to Equifax — where the time between disclosure and exploitation has shrunk to nearly zero.

The “minimal impact” narrative risks lulling organizations into complacency. It encourages a mindset where each incident is dismissed as “low risk” until one day, it isn’t.

What needs to change

Focusing on what didn’t happen ignores the reality that attackers had the opportunity to hit far harder. This incident underscores several urgent priorities, including:

  • Strengthen maintainer security: Package maintainers are the new frontline of cyberattacks. Protecting their accounts with phishing-resistant authentication, hardware keys, and stronger identity protections must become the norm, not the exception.
  • Improve ecosystem-level safeguards: Registries must continue to invest in stronger safeguards, such as mandatory MFA, anomaly detection for unusual publishing activity, and proactive monitoring for malicious code patterns.
  • Shift industry mindset: Organizations need to treat every compromise of a widely used package as a major security incident — even if the immediate payload looks trivial. A malicious package should trigger the same urgency as a zero-day exploit, because the potential blast radius is just as large.
  • Invest in supply chain visibility: Software bills of materials (SBOMs) and automated dependency tracking are essential. Enterprises must be able to quickly identify whether they’re pulling compromised versions and take immediate action.
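The last bullet translates directly into tooling: automated dependency checking amounts to diffing a lockfile against an advisory feed of known-bad releases. A minimal sketch, assuming the npm lockfileVersion 3 layout and using invented package versions (not the actual compromised releases), might look like:

```python
import json

# Hypothetical advisory feed: package -> set of known-compromised versions.
# These version numbers are invented for illustration.
COMPROMISED = {
    "chalk": {"9.9.9"},
    "debug": {"9.9.9"},
}

def find_compromised(lockfile_text: str) -> list[tuple[str, str]]:
    """Return (package, version) pairs in the lockfile that match the feed.

    Assumes npm lockfileVersion 3 layout, where installed dependencies
    live under "packages" keyed by "node_modules/<name>".
    """
    lock = json.loads(lockfile_text)
    hits = []
    for path, meta in lock.get("packages", {}).items():
        name = path.removeprefix("node_modules/")
        version = meta.get("version", "")
        if version in COMPROMISED.get(name, set()):
            hits.append((name, version))
    return hits

sample_lock = json.dumps({
    "lockfileVersion": 3,
    "packages": {
        "node_modules/chalk": {"version": "9.9.9"},
        "node_modules/ms": {"version": "2.1.3"},
    },
})
print(find_compromised(sample_lock))  # [('chalk', '9.9.9')]
```

The hard part is not the diff — it is having the lockfiles and SBOMs centrally inventoried so a check like this can run across every repository within minutes of an advisory, not days.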

This npm compromise may go down as the “largest to-date,” but its significance has little to do with its size or the negligible cryptocurrency stolen. Its importance lies in what it reveals about the state of modern software security: our trust in open-source infrastructure is more fragile than we like to admit, and attackers know it.

If we keep measuring the significance of these breaches only by their immediate dollar impact, we’ve missed the point. This was like catching a leaking roof before the storm — the damage was limited only because it was discovered quickly. Next time, we may not be so fortunate.

Brian Fox is co-founder and CTO at Sonatype. 

The post When ‘minimal impact’ isn’t reassuring: lessons from the largest npm supply chain compromise appeared first on CyberScoop.

I wrote a book about ethics

1 September 2025 at 03:42
COMMENTARY By Michael A. Covington What did I think I was doing? Is computer and AI ethics even a subject? And if so, to get into it, do I have to figure out the ultimate source of values and the meaning of life? Thank goodness, no. I don’t even have to get my readers to […]

The U.S. should bolster investment reviews to combat China

By: Greg Otto
20 August 2025 at 06:00

The Committee on Foreign Investment in the United States just published its 2024 report, revealing once again that shielding U.S. tech from risky foreign investments was a critical focus for the interagency group that reviews investments in the United States for national security risks. But as U.S.-China tensions further intensify, bolstering these reviews is even more important for national security — and getting it wrong all the more damaging.

When President Trump took office again in January, he signed an executive order “fast-tracking” investments from (unspecified) allied and partner countries — in other words, expediting their CFIUS reviews — as a way to accelerate the funding of U.S. advanced tech and other businesses. It’s an idea with some merit.

Yet, CFIUS remains plagued by procedural problems, far beyond the screening of allied investments, that impact the rigor, transparency, and ultimate efficacy of its national security reviews. These issues make a CFIUS shakeup an opportune moment to evaluate the U.S. government’s broader strategy for screening investments into U.S. technologies. Policymakers should ensure CFIUS has a more rigorous analysis of risks, a more nuanced focus on China, and greater transparency — all of which will help U.S. tech security and with competition against Beijing in the coming years.

President Ford created what is now CFIUS in 1975 through executive order, making it 50 years old this year. In subsequent administrations, president after president kept it around as a matter of executive policy, and Congress statutorily authorized the Committee in 2007. The idea was that certain non-U.S. investments in U.S. companies could potentially enable foreign adversaries — such as, at the time, the USSR — to infiltrate supply chains, steal trade secrets, or even sabotage operations. This could target anything from U.S. energy infrastructure to steel plants for tanks.

As described in my upcoming book on U.S. national security governance of technology, CFIUS had a tech focus from its earliest days, such as handling concerns in the 1980s about Japanese investments in semiconductors. But as time went on, its tech focus grew substantially. CFIUS received authorities in 2018 to evaluate how foreign investments impact sensitive U.S. data and technologies. It forced a Chinese buyer to sell the gay dating app Grindr back to U.S. owners. And in 2019, well before the ban debate, it even opened an investigation into TikTok. The current Committee structure puts the Treasury Department at the helm, working with departments from State to Defense, to parse these risks and recommend whether to block, approve, undo, or put security conditions on transactions.

Today, as its newest report says, CFIUS spends a substantial amount of time looking at risks to U.S. technology. Outside of real estate transactions, which CFIUS also reviews, 53% of companies that sent a “covered notice” to CFIUS in 2024 — alerting the group in detail of a potentially relevant investment — came from the “Finances, Information, and Services” sector, up from 50% in 2023. This category includes companies in telecommunications, computing infrastructure, data processing, and professional, scientific, and technical services. 

But the Committee is even more tech-focused than the numbers suggest: companies can also submit shorter filings to CFIUS — simpler “declarations” typically intended for less risky investments — not counted in these numbers. And companies not in tech, per se, can receive CFIUS scrutiny for a tech-related issue, such as a health insurer with sensitive data taking a non-U.S. investment.

The latest report also clarifies that CFIUS is highly focused on China. Investments from China motivated more covered notices in 2024 than investments from any other country — including from other adversaries such as Iran and Russia, which accounted for none. Shorter declarations, meanwhile, were led by investments from Japan, Canada, France, and the United Kingdom. (China’s domination of covered notices but not shorter declarations may suggest Chinese investors prefer providing more information to CFIUS up front to — in their minds — make the U.S. security review timeline more predictable.)

Combined, these new data points illuminate the challenges at hand in the coming years.

CFIUS has powers to look at a broad sweep of investment activities. These range from acquisitions of big American firms to influential minority stakes in Bay Area startups to transactions involving national security-critical technologies — like AI models, space communications systems, and biotech applications. 

CFIUS has a substantial focus on Chinese investments, which the intelligence community has repeatedly said create opportunities for Beijing to steal U.S. technologies. And it must screen U.S. allied and partner investments that could create risks, too (including due to, say, Chinese front companies in Japan or Russian ones in the U.K.).

Despite this broad, consequential activity, CFIUS is often described as a “black box.” Companies complain it’s difficult to understand and therefore navigate; congressional overseers have told me repeatedly in recent years that they want better insights into CFIUS’s activity on AI, chips, China, and more, including to inform decisions about whether it needs more funding. 

Unlike other tech and national security regulatory programs, CFIUS additionally appears to lack an adequately standardized framework to identify and mitigate national security risks. Methodology sounds boring. But a rigorous, standardized risk process is the difference between identifying the right risks and working to address them — and acting in good faith but getting distracted, going down rabbit holes, inflating unlikely scenarios, and pulling focus from the highest priority risks.

The new administration — or a future one — and Congress should push CFIUS toward a more standardized, rigorous risk management process. This could include a White House-led effort to better synchronize risk mitigations across CFIUS-involved agencies or creating robust frameworks for issues like investors’ access to company-held data, software source code, or technical schema.

Related, CFIUS should work to resist the ever-growing D.C. temptation to label all China-related activity “a risk,” taking a reductive view of the threat landscape. It should instead apply more nuance to areas that present minimal, mitigatable risk versus areas that present outsized risk to U.S. technologies or data (such as with the later-undone Grindr acquisition).

Lastly, more transparency into U.S. investment security reviews would help companies, the public, overseers, and national security at once. No, CFIUS should not alert the press every time a company considers a merger or funding round — that’s proprietary and should be kept that way. And it relies on classified insights within the government to assess risks, too.

But Congress can and should compel the Committee to provide greater insights into its activities than only the statistics in its annual reports. Making its generalized risk criteria a bit clearer to companies — for instance, what areas concern it most and how it thinks about mitigations for risky investments — could help lower compliance costs without tipping off U.S. adversaries with too much detail. It could help congressional overseers better ensure the interagency team is focused on the right issues, including with tech and China, and can get briefings that protect company trade secrets but provide more details about security issues and reviews.

Increasing CFIUS’s transparency is also a win for the public. As CFIUS launches investigations that impact widely used communications and other technologies — TikTok being the chief example — transparency is both vital in a democracy and helpful to inform public debate. And as competition with China intensifies, investment security reviews will prove a critical vector for protecting business innovation, securing U.S. supply chains, and bolstering long-term security.

Justin Sherman is the founder and CEO of Global Cyber Strategies, a D.C.-based research and advisory firm, and the author of “Navigating Technology and National Security.”

The post The U.S. should bolster investment reviews to combat China appeared first on CyberScoop.

By gutting its cyber staff, State Department ignores congressional directives

By: Greg Otto
18 August 2025 at 06:00

The State Department has demonstrated it does not understand that cyber power is critical to geopolitical power. In the course of reorganizing offices and reducing staff over the past three weeks, the department’s political appointees have gutted President Trump’s ability to work with partners and allies on cybersecurity and technology resilience. Congress will need to intervene to defend its bipartisan effort to bolster cyber diplomacy. 

For years, Washington’s efforts to hold China, Russia, and Iran accountable for malicious cyber activity were hamstrung by an inability to effectively work with allies to quickly identify and punish perpetrators. America’s allies were failing to prevent cyberattacks on critical systems that the U.S. military needed to operate securely overseas. Instead, these attacks cascaded across continents and hit the U.S. homeland. And U.S. adversaries were running circles around the West’s principled stance on privacy and security in cyberspace, instead reshaping telecommunications infrastructure and the internet in their image. 

After watching successive administrations dither, Congress took a stand, passing the Cyber Diplomacy Act in 2022. The law tasked a new State Department Bureau of Cyberspace and Digital Policy (CDP) with promoting reliable and secure internet infrastructure, building the cyber capacity of U.S. partners, and advancing technology and cybersecurity policies globally that bolster U.S. economic and national security interests. 

To accomplish this mission, CDP pulled together existing, disparate economic and international security functions related to cyber and technology into a single, more efficient operation. By all accounts, this consolidation made CDP successful.

When Congress tasked the bureau with managing a unique cyber assistance fund to rapidly respond to incidents overseas, CDP created a mechanism to airdrop expertise into partner countries in as little as two days.

Likewise, when Congress tasked the bureau with securing communications technology, semiconductor supply chains, and other emerging technology, the bureau paired U.S. seed funding with investments from allies and technology companies to box out Chinese firms attempting to dominate telecommunications in the Indo-Pacific. 

On July 1, however, the State Department stepped backwards. Despite its stated goal of creating a “more agile Department” and reducing duplicative offices, Foggy Bottom pulled CDP apart into multiple offices, each of which now holds a piece of the cyber mission. CDP lost its division responsible for responding to cyberattacks to a new bureau on emerging threats. Its strategy team moved to the personal staff of the undersecretary of economic growth. And its internet freedom team went to the undersecretary for public diplomacy. 

CDP will now consist of two slimmed down teams. One will focus on internet governance and technical standards, the other on using U.S. foreign aid to bolster allied cybersecurity. However, after the trifecta of the dissolution of the U.S. Agency for International Development, the foreign aid freezes earlier this year, and Congress’ acquiescence to billions of dollars in cuts to previously allocated foreign aid, it is not clear what funds CDP will have to help U.S. allies. 

Unfortunately, the crippling of State’s cyber diplomacy capabilities is not just the result of the restructuring, but also a significant loss of subject matter expertise. In the course of reducing its overall workforce in mid-July, State fired at least a half dozen people from CDP. The bureau lost two strategists and five of only eight experts working on bilateral and regional affairs. 

CDP had expected to bring in staff from other technology-focused offices as they were dissolved. Instead, quantum, artificial intelligence, and other technology experts were fired. Over the past few months, other CDP staff have accepted the department’s offers of deferred resignation and early retirement. And State reassigned CDP’s acting head, leaving the bureau without a leader. 

At an April hearing about CDP, the House Foreign Affairs Committee’s Europe Subcommittee Chairman Keith Self, R-Texas, affirmed the importance of State’s cyber capabilities. “The U.S. is not facing these real and growing threats alone,” he noted. “Through cooperation with our allies and partners, the U.S. will continue to work to combat malign cyber activities from the PRC, Iran, North Korea and Russia.” 

After a bipartisan show of support for the bureau, the subcommittee staff are drafting components of a State reauthorization bill from Foreign Affairs Committee Chairman Brian Mast, R-Fla., that would bolster CDP’s mandate. If Foggy Bottom keeps undercutting CDP, however, there may be little left to reauthorize. 

Chairman Mast indicated he plans to bring the reauthorization bill to the floor at the end of September. Lawmakers need to weigh in with State Department leadership sooner rather than later, however, to remind Secretary of State Marco Rubio that he himself voted for the Cyber Diplomacy Act when he served in the Senate. He knew then what members know now: Without strong cyber capabilities within the State Department, America’s partners will turn to unreliable associates in China for infrastructure investment and succumb to cyberattacks that place U.S. forces overseas at risk.

It will take years to rebuild State’s capabilities. While Congress should move quickly to re-integrate CDP’s component pieces, reauthorize cyber foreign assistance, and restart secure technology projects, the loss of subject matter experts will take longer to fix. The cyber experts with sought-after skills that State let go are not waiting by the phone to get their old jobs back. They will move on to higher-paying private sector jobs. Only after the department re-commits to its cyber mission and places a Senate-confirmed ambassador at the helm of the bureau will the department have a hope of reconstituting all that it lost over a few weeks in July.

The post By gutting its cyber staff, State Department ignores congressional directives appeared first on CyberScoop.

Patch the vulnerability: Confirm Sean Plankey as CISA director

By: Greg Otto
13 August 2025 at 09:24

Every chief information security officer understands that unresolved vulnerabilities can eventually become entry points for threats. In the private sector, we don’t ignore gaps in leadership when they pose security risks. However, that’s precisely the risk our nation faces with the ongoing vacancy at the head of the Cybersecurity and Infrastructure Security Agency.

As the executive director of the National Technology Security Coalition (NTSC), a nonpartisan organization representing chief information security officers and senior security technology leaders from across the country, I can confidently say that this vacancy presents a national cybersecurity risk and must be addressed immediately. The appropriate corrective action is for the Senate to confirm Sean Plankey as the next director of CISA.

Our members live and breathe cybersecurity every day. They are responsible for protecting America’s leading enterprises from cyber threats, building resilient systems, and responding to incidents that could disrupt operations, damage reputations, or compromise the personal data of millions of Americans. These challenges are not just theoretical; they are immediate, complex, and constantly evolving. That’s why public-private collaboration is essential, and why a strong, capable leader must be at the helm of CISA.

Sean Plankey is precisely that kind of leader.

Plankey combines strategic vision, operational experience, and a strong commitment to public service — qualities essential for this role. He served as principal deputy assistant secretary at the Department of Energy’s Office of Cybersecurity, Energy Security, and Emergency Response, where he played a key role in safeguarding the nation’s critical energy infrastructure from cyber threats. His work there gave him direct experience managing risk at the intersection of digital and physical security.

At the White House, Plankey served as director for maritime and pacific cybersecurity policy at the National Security Council. In that role, he co-authored the National Maritime Cybersecurity Plan and contributed to presidential directives on offensive cyberspace operations, efforts that strengthened national strategy and improved interagency coordination. His leadership helped protect America’s ports and shipping lanes from cyber threats, which are vital to both our economic security and military readiness.

Plankey’s qualifications are extensive. As a commissioned officer in the U.S. Coast Guard, he was deployed to Afghanistan, where he took part in offensive cyber operations. This gave him direct experience with the cyber side of modern warfare. He understands not only the policy impacts of cyber threats but also the tactical realities — insights that few others possess.

In addition to his technical and strategic credentials, Plankey has demonstrated a clear understanding of how to navigate government agencies and work with the private sector. His ability to operate across organizations and industries is exactly what is needed now, as cybersecurity is no longer just a technical matter but a vital national security issue.

For CISOs and national security professionals alike, leadership at CISA is not a luxury; it’s a necessity. With increasing geopolitical instability, the expanding use of artificial intelligence by both defenders and attackers, and the rapid growth of digital infrastructure, we face a threat landscape that demands clarity, coordination, and expertise at the highest level. Leaving CISA without someone in charge during this period of heightened risk is like leaving a ship adrift in stormy seas.

Our country cannot afford any further delays. The cybersecurity community needs a leader at CISA who can work with industry, state, and local partners, as well as international allies, to strengthen defenses and respond quickly to emerging threats. Plankey has earned the trust and respect of both the public and private sectors. He is prepared to lead from day one.

The Senate should act quickly to confirm Plankey as the new director of CISA. This would not only fill a critical leadership void but also strengthen America’s digital defenses.

Patrick D. Gaul is the executive director of the National Technology Security Coalition, a nonprofit, non-partisan organization that serves as an advocacy voice for chief information security officers across the nation.

The post Patch the vulnerability: Confirm Sean Plankey as CISA director appeared first on CyberScoop.

CISA is facing a tight CIRCIA deadline. Here’s how Sean Plankey can attempt to meet it

By: Greg Otto
30 July 2025 at 07:00

During a Senate Homeland Security and Governmental Affairs Committee hearing earlier this month in which lawmakers considered whether Sean Plankey is fit to become director of the Cybersecurity and Infrastructure Security Agency, ranking member Gary Peters asked the CISA nominee how he would ensure the agency meets all of its statutory requirements, including those in the Cyber Incident Reporting for Critical Infrastructure Act of 2022. 

The problem is, it can’t. To meet the statutory deadline established by Congress, CISA will need to publish a final rule by October. That means CISA has two months left. 

Ever since CIRCIA was signed into law in March 2022, CISA has had every intention of meeting this deadline. I know that because I ran the program while at CISA, from the day it was signed into law through when I left government in January. 

You don’t have to take my word for it. CISA was shouting its commitment to this timeline from the rooftops. You can check the Unified Agenda — the government’s official record of planned regulatory action — from fall 2024 and spring 2024, both of which state that CISA was targeting an Oct. 4 final rule due date. These commitments are additionally reinforced by the updates provided in the National Cybersecurity Strategy Implementation Plan published by the Office of the National Cyber Director. The formal publications mirror the consistent public statements made by senior officials from CISA and the Department of Homeland Security over multiple years. 

However, since January there has been silence from the agency regarding CIRCIA. Despite receiving hundreds of public comments on the CIRCIA Notice of Proposed Rulemaking, which necessitates an internal policy process to decide how to respond to those comments and adjust the rule, the agency has made no public statements about its progress.  

There is no way for CISA to address hundreds of policy decisions, revise a 450-page piece of regulation, coordinate those revisions with all relevant agencies, and gain the necessary White House approval in two months. This work could have been accomplished had it been prioritized by the current administration on Day One. However, without a CISA director, that work does not appear to have occurred.

In response to Sen. Peters’ question, Plankey responded that he is “going to empower those operators to operate.” I know the operators who worked nights and weekends analyzing the public comments, modernizing existing technology systems, building new tools using CIRCIA funds appropriated by Congress, and expanding the agency’s capacity to support victims ahead of CIRCIA’s launch. I know those people are prepared to present critical policy matters to the next CISA director and to move quickly to draft a final rule. 

Peters also asked Plankey how he would achieve those goals amid budget cuts and the hundreds of personnel leaving the agency. While the CIRCIA program has faced personnel changes, its core staff remain committed to the cause. 

Congress has provided substantial funding for CIRCIA, but without a centralized division or subdivision dedicated to this work within the agency, it’s hard for the program to protect and target these funds exclusively for CIRCIA’s new requirements. Although not fully funded, the program has strong support, and the new director should ensure all resources and people appropriated by Congress for CIRCIA implementation are focused on preparing CISA to serve as the nation’s central cyber incident repository. 

Now that Plankey is poised to become the CISA director, I hope he will prioritize these statutory requirements from Congress and act immediately to advance the CIRCIA final rule for our national security. Plankey said that if confirmed he would like to “get in, provide them the direction, tell them the hill we are going to take, and protect the American public from cybersecurity attacks on critical infrastructure.” 

I hope that in partnership with the CIRCIA team, he does just that.

Lauren Boas Hayes is a cybersecurity and tech trust & safety expert with experience working at CISA, Meta, and Deloitte. She is a founding fellow of the Integrity Institute and an adjunct professor at Georgetown University and Johns Hopkins SAIS.

The post CISA is facing a tight CIRCIA deadline. Here’s how Sean Plankey can attempt to meet it appeared first on CyberScoop.

Microsoft’s software licensing playbook is a national security risk

By: mbracken
28 July 2025 at 06:00

News of two major Microsoft security events in as many weeks should concern every federal agency, not just because of the breaches themselves, but because of what they reveal about how the company does business.

First, ProPublica uncovered that Microsoft allowed Chinese engineers to work on sensitive U.S. military cloud projects under the supervision of underqualified subcontractors. Then came a global cyberattack exploiting a critical flaw in Microsoft SharePoint, breaching U.S. agencies, universities, and energy firms. 

These aren’t isolated incidents. They’re symptoms of a business model built around restrictive and anticompetitive software licensing practices.

Time and again, Microsoft’s security failures turn into federal growth opportunities. After cyberattacks in 2021, Microsoft promised the Biden administration $150 million in free cybersecurity upgrades. What wasn’t said upfront? These freebies locked agencies into Microsoft tools, making it costly and complex to switch. Once agencies were locked in, Microsoft raised prices. This wasn’t charity or goodwill on Microsoft’s behalf: It was a calculated move to crowd out competitors, win long-term contracts, and deepen federal dependence on Microsoft’s ecosystem.

Then, in 2023, Chinese hackers known as Storm-0558 exploited a vulnerability in Microsoft’s cloud email service. They breached more than 500 individuals and 22 organizations worldwide, including senior U.S. government officials. A 34-page report by the Cyber Safety Review Board (CSRB) later described Microsoft’s security culture as “inadequate,” warning it “requires an overhaul” given the company’s central role in the tech ecosystem. It said Microsoft’s CEO and board should institute “rapid cultural change,” including publicly sharing “a plan with specific timelines to make fundamental, security-focused reforms across the company and its full suite of products.”

The CSRB also criticized Microsoft’s delayed and opaque communications. The company waited until March 2024 to correct a misleading September 2023 blog post about the cause of the breach, after months of questioning from investigators.

Meanwhile, in early 2024, Russian hackers known as Midnight Blizzard infiltrated Microsoft’s corporate systems. Initially described as a limited incident, Microsoft later admitted that the breach was far more extensive: The hackers accessed sensitive internal emails, and even Microsoft’s source code. According to the company, Midnight Blizzard may now be using information found in customer emails to pursue further attacks.

At a June 2024 House Committee on Homeland Security hearing to address the series of cybersecurity incidents, Brad Smith, Microsoft’s vice chair and president, testified that the “bad news for the folks who want to sell plan B” is that public sector clients “don’t want to switch. They want us to get it right and we have to get it right to deserve their business.”

Smith is half right: customers don’t see a plan B, but that’s because their option to switch providers has been effectively cut off. At the core of all of this is Microsoft’s software licensing strategy. The company routinely ties its core productivity software to an ever-growing bundle (which at the upper tier includes over 30 products), limits integrations with third-party providers — making it difficult for customers to diversify their systems — and restricts how customers can use their previously purchased software on other cloud providers. These practices are not just business tactics that lock in customers; they are very real security concerns. Every customer who received an alert from Microsoft over the weekend regarding the SharePoint hack has had to learn that the hard way. 

In addition to exposing companies to cybersecurity vulnerabilities, these practices also raise significant antitrust concerns — and are under scrutiny from regulators around the world, reportedly including the Federal Trade Commission.

Microsoft’s largest customer — the U.S. government — needs to wake up to this threat. When customers license Microsoft software, they aren’t just buying tools — they’re buying into a system where exit is difficult, choice is limited, and security is too often an exposure.

The question isn’t whether Microsoft will respond to its latest failures. The company’s decades-long playbook — blaming the government for not doing more, then offering free upgrades post-breach only to raise prices and deepen lock-in — suggests it will deflect with a “nothing to see here” approach while capitalizing on vulnerabilities. 

The real question is whether the government will continue to accept a model that turns licensing restrictions into national dependence and vulnerabilities into profit, and repeatedly exposes our nation’s most critical information to those who wish to harm us.

Ryan Triplette is executive director of the Coalition for Fair Software Licensing.

The post Microsoft’s software licensing playbook is a national security risk appeared first on CyberScoop.

Why it’s time for the US to go on offense in cyberspace

By: Greg Otto
21 July 2025 at 09:00

The U.S. is stepping into a new cyber era, and it comes not a moment too soon.

With the Trump administration’s sweeping $1 billion cyber initiative in the “Big Beautiful Bill” and growing congressional momentum under the 2026 National Defense Authorization Act (NDAA) to strengthen cyber deterrence, we’re seeing a shift in posture that many in the security community have long anticipated, although often debated: a decisive pivot toward more robust offensive cyber operations.

While many may disagree with the decision to “go on offense,” we need to recognize the changing threat landscape and the failure of our previous restrained approach. The U.S. has the most advanced cyber capabilities in the world. Yet for the past two decades, our posture has been dominated by defense, deterrence-by-denial, and diplomatic restraint. This strategy has not yielded peace or dissuaded our adversaries. On the contrary, it has only served to embolden them.

With geopolitical tensions now at a boiling point and adversaries escalating both the scale and ambition of their cyber campaigns, it is time to remove the handcuffs. This doesn’t mean acting recklessly, but it does mean meeting our adversaries on the same battlefield so that we can use our unmatched capabilities to hold them at risk.

The strategic landscape has changed

The cyber threat environment in 2025 is fundamentally different from what it was even five years ago. Operations like China’s Volt Typhoon and Russia’s relentless campaigns against Ukraine’s infrastructure illustrate a broader shift: our adversaries are no longer limiting themselves to espionage or IP theft. They are actively preparing for conflict.

Volt Typhoon, in particular, marks a strategic evolution as Chinese state actors are actively prepositioning in U.S. critical infrastructure not for surveillance, but for disruption. Salt Typhoon’s operations, targeting civilian infrastructure with apparent tolerance for detection, suggest a loosening of China’s risk calculus. Meanwhile, Russia’s destructive malware targeting industrial control system (ICS) environments, and Iran’s growing reliance on cyber proxies, show how aggressive and emboldened our rivals have become.

Offensive capabilities are a military imperative

The proposed $1 billion investment isn’t about launching retaliatory attacks. It’s about building the infrastructure, tools, and talent needed to make cyber a fully integrated and reliable component of U.S. military and intelligence operations.

While the U.S. possesses world-class cyber capabilities, current policies have kept these tools locked behind layers of classification, bureaucracy, and operational disconnect. As a result, offensive cyber operations have been limited to highly targeted missions. While they’re often executed with surgical precision, they usually lack the speed, adaptability, or scale demonstrated by our adversaries.

When a U.S. technique is exposed, it can take months to retool and mount another operation. In contrast, our adversaries rely on publicly known vulnerabilities, social engineering, and agile teams that can quickly weaponize newly disclosed exploits.

Zero-days are among our most valuable (and expensive) cyber assets. But having the exploit isn’t enough. Effective use requires real-time intelligence, targeting infrastructure, trained operators, and a legal framework that enables rapid deployment.

This new investment represents a serious effort to evolve our approach. It will enable the Department of Defense, U.S. Cyber Command, and the intelligence community to proactively shape the digital battlefield, both independently and in coordination with conventional military operations.

Adversaries respond to force, not diplomacy

Over the past 15 years, we’ve watched our top adversaries, China and Russia, test, prod, and exploit our most sensitive networks, from government systems to critical infrastructure companies, often with minimal consequence. We’ve also sustained numerous damaging attacks, from the massive OPM and Equifax breaches to SolarWinds, NotPetya, and Colonial Pipeline. The list goes on and on.

In all of these cases, we’ve responded, at best, with indictments, sanctions, or strongly worded statements. In the meantime, our adversaries have only grown bolder and more sophisticated. Their actions suggest one conclusion: they don’t believe we’ll strike back.

This lack of proportional response is viewed as weakness, not restraint. Deterrence only works when the adversary believes you will act. That belief is fading. But a more muscular cyber posture, backed by operational capacity and political will, can restore it.

Ransomware is now a national security threat

The line between criminal and nation-state activity is becoming blurred amid rising geopolitical tensions. Ransomware, once seen as a law enforcement issue, now poses one of the most serious threats to national infrastructure.

We’ve already seen its disruptive power in attacks on Colonial Pipeline, JBS Foods, Mondelez International, and United Natural Foods Inc. However, as damaging as those were, they pale in comparison to what a determined adversary — especially one that is backed by a state — could accomplish.

Essential services like electricity, water, health care, and transportation are increasingly vulnerable. Many ransomware groups operate in jurisdictions that ignore or even support their activities. U.S. adversaries are now integrating these actors into broader state-aligned campaigns, using them as asymmetric tools of disruption.

The weaponization of ransomware and other destructive malware like “wipers” is a clear and present danger. Countering it requires more than law enforcement.

While the Department of Homeland Security and the FBI play vital roles in tracking threats, they lack the global reach and strategic authority of the military. Offensive cyber capabilities are needed to disrupt operations, dismantle infrastructure, and impose real costs.

There are risks with doing nothing, too

Critics of these operations rightly point out there are plenty of risks: escalation, unintended consequences, and blowback. Yes, these risks are real. Any use of cyber capabilities, especially against state-linked infrastructure, must be carefully weighed, governed by rules of engagement, and aligned with broader geopolitical strategy. 

Historically, cyber has not had clear rules for what constitutes “crossing the line,” though the general assumption has been that loss of life or large-scale disruptions to critical infrastructure would qualify. 

But inaction has its own risks. If we continue playing defense while our adversaries go on offense, we are signaling that they can operate with impunity. This is not de-escalation; it’s appeasement. And it will only invite more aggression. 

On the other hand, offensive action may at times be the most effective path to de-escalation, by showing that the U.S. is both willing and able to impose real costs.

It’s time for real deterrence

Cyber deterrence has long been an elusive concept. Unlike nuclear deterrence, which relies on mutually assured destruction, cyber deterrence is far more ambiguous. The lack of clear red lines, uncertain attribution, and the diverse range of actors all complicate strategy.

But these are not reasons to avoid building deterrence. This is why it’s even more important to build smarter, more flexible capabilities that combine intelligence, cyber offense, and traditional diplomacy to manage escalation while signaling resolve.

The shift we’re seeing now, both from Congress and the administration, is a necessary first step. However, in order to be effective, it must be followed by clear doctrine, strong oversight, and close coordination between military, intelligence, and homeland security stakeholders. 

Offensive cyber operations are not a silver bullet, but they are an essential tool of statecraft in the modern world. 

Dave Kennedy is the founder of TrustedSec and Binary Defense.

The post Why it’s time for the US to go on offense in cyberspace appeared first on CyberScoop.

Have you checked your FICO score?

26 June 2025 at 04:00
If you do online banking, you may have been prompted to check your FICO score. This score is set by the Fair Isaac Corporation. FICO will now include Buy Now, Pay Later short-term loans — which you may get for goods and services — in your FICO score. This is the first time a change in the […]