
The missing cybersecurity leader in small business

By: Greg Otto
11 May 2026 at 06:00

The average cyberattack cost for a small- or medium-size business is more than $250,000. The salary for a chief information security officer (CISO) is about the same, between $250,000 and $400,000, according to the annual 2026 CISO Report from Sophos and Cybersecurity Ventures. Small- and medium-size businesses (SMBs) know they cannot afford the salary, so they roll the dice, hoping they will not be attacked. This is a dangerous gamble that these businesses, which make up the backbone of the American economy, should not have to take. A virtual CISO (vCISO) or fractional CISO (fCISO) can provide a practical solution.

As the American economy goes digital, SMBs now rely on the same building blocks as big enterprises — cloud services, payment systems, remote access, customer data, and other third-party vendors. But without senior cyber leadership, cybersecurity often becomes a patchwork of tools, checklists, insurance paperwork, and whatever guidance a vendor offers. That may get these companies through a questionnaire; it will not build real resilience. Nearly half of all reported cyber incidents involve smaller firms, and cybercrime is projected to cost the global economy $12.2 trillion annually by 2031.

The threat is growing in both size and sophistication. Adversaries are deploying AI to automate reconnaissance, develop malware, and run phishing campaigns at scale.  This reduces the cost and skill needed to target smaller firms at volume. Adversaries are also collecting encrypted data with the intent to decrypt it later when they have access to large enough quantum computers. SMBs in defense, healthcare, and financial supply chains often hold sensitive credentials that provide access into larger enterprise environments, but most are not prepared to adopt quantum-resistant encryption.

SMBs generally understand they face cyber risk. The real gap is leadership: someone who can turn technical vulnerabilities into business decisions, set priorities, brief executives, prepare for audits, and hold vendors accountable. For most SMBs, hiring a full-time CISO is financially unrealistic.

A virtual CISO provides remote, on-demand cybersecurity leadership and advice, typically supporting several organizations at the same time. A fractional CISO is a dedicated, part-time executive who is more deeply integrated into one organization’s governance, security planning, and day-to-day operations. Both models give smaller organizations access to senior-level cybersecurity expertise in a flexible, more affordable way than hiring a full-time CISO.

Washington should make it easier for SMBs to hire fractional cybersecurity leaders, because the private market is not closing this gap on its own. The Cybersecurity and Infrastructure Security Agency (CISA) and the Small Business Administration (SBA) could help by publishing buyer guidance: vetted criteria for evaluating providers, example scopes of work and deliverables, and real-world case studies that show SMB owners what a high-quality vCISO or fCISO engagement should look like.

Clear guidance matters because many smaller firms cannot easily tell the difference between true cybersecurity leadership and a tool reseller, compliance-only consultant, or a generic managed services contract. Any vetted provider criteria should emphasize proven experience building and running security programs, independence from vendor incentives and product quotas, and the ability to tie security investment to real business risk, not just a list of certifications. Model scopes of work should also spell out the basics every engagement should deliver: an initial risk assessment, a prioritized remediation roadmap, and simple metrics that show whether security is improving over time. Without clear buyer criteria, federal efforts could end up funding low-quality services that add cost and paperwork without making companies safer.

The National Institute of Standards and Technology (NIST) should recognize these CISO models in its SMB-focused Cybersecurity Framework guidance. That would help smaller firms turn the framework’s Govern, Identify, Protect, Detect, Respond, and Recover functions into a clear, accountable leadership structure. This would make these roles less abstract: the point is not merely providing advice, but taking executive-level ownership of risk priorities, vendor oversight, incident readiness, and communication with the owner or board.

Congress and the Treasury Department should consider targeted tax incentives or credits for qualified cybersecurity leadership services, tied to measurable risk-reduction outcomes. Eligible activities could include completing a risk assessment, building an incident response plan, conducting vendor security reviews, running employee training, and producing a remediation roadmap. SMBs often defer cybersecurity because every dollar competes with payroll, inventory, and growth. A targeted incentive would make security leadership easier to justify as a business investment rather than an optional add-on.

Federal acquisition officials should require contractors that handle sensitive government data to show they have executive-level cybersecurity oversight, whether full-time, virtual, or fractional, and should extend that expectation down to relevant subcontractors and suppliers. This is necessary because SMBs serve as entry points into defense, healthcare, financial, and critical infrastructure supply chains.

Finally, CISA and the SBA should support vCISO- and fractional-CISO-led workforce training. Employees improve security when training comes with leadership, regular reinforcement, and clear accountability, not just annual awareness training. The aim is not to turn every SMB into a Fortune 500 security shop. It should be to give smaller firms access to the leadership they need before the next incident forces the issue.

Georgianna Shea, who holds a doctorate in computer science, is chief technologist at the Foundation for Defense of Democracies’ Center on Cyber and Technology Innovation and its Transformative Cyber Innovation Lab, where Cason Smith served as a summer 2025 intern. Smith is studying integrated information technology at the University of South Carolina.

The post The missing cybersecurity leader in small business appeared first on CyberScoop.


American duo sentenced for hosting laptop farms for North Korean IT workers

By: Greg Otto
7 May 2026 at 09:56


Two U.S. nationals were sentenced to 18 months in prison for running laptop farms that facilitated North Korea’s expansive remote IT worker scheme, the Justice Department said Wednesday.

Matthew Isaac Knoot and Erick Ntekereze Prince both received and hosted laptops at their residences to dupe U.S. companies into thinking remote IT workers they hired were located in the country. The pair’s separate schemes impacted almost 70 U.S. companies and generated a combined $1.2 million in revenue for the North Korean regime.

“The FBI and our partners will continue to disrupt North Korea’s ability to circumvent sanctions and fund its totalitarian regime,” Brett Leatherman, assistant director of the FBI’s Cyber Division, said in a statement. “These cases should leave no doubt that Americans who choose to facilitate these schemes will be identified and held accountable. Hosting laptops for DPRK IT workers is a federal crime which directly impacts our national security, and these sentences should serve as a warning to anyone considering it.”

Knoot, of Nashville, Tennessee, and Prince, of New York, received the laptops from unsuspecting U.S. companies and installed remote desktop applications on the machines to enable co-conspirators to work from anywhere while appearing to be based at their respective residences.

Prince’s company Taggcar was contracted to supply IT workers to victim U.S. companies from June 2020 through August 2024. He pleaded guilty in November 2025 to wire fraud conspiracy for his yearslong involvement in the North Korean IT worker scheme. 

Prince was indicted and charged in January 2025 along with his alleged co-conspirators, who collectively obtained work for North Korean IT workers at 64 U.S. companies, earning nearly $950,000 in salary payments. 

A federal judge sentenced Prince Wednesday and ordered him to forfeit $89,000, which is the amount he netted personally. 

Knoot was arrested in August 2024, a year after the FBI searched his home. Officials said he made multiple false and misleading statements and destroyed evidence to obstruct the investigation at that time. 

Victim companies paid North Korean workers linked to Knoot’s laptop farm more than $250,000 from July 2022 to August 2023. The remote IT workers transferred those funds to Knoot and accounts associated with North Korean and Chinese nationals, officials said. 

Knoot was sentenced May 1 and ordered to pay $15,100 in restitution to the victim companies and forfeit an additional $15,100, which is equivalent to the amount of his direct take from the scheme.

The pair join a growing list of people who have been charged and jailed for supporting the regime’s scheme, which generates hundreds of millions of dollars annually for the country’s military and organizations involved in its weapons programs.

Authorities have been cracking down on the malicious insider activity by seizing cryptocurrency linked to the theft, and targeting U.S.-based facilitators who provided forged or stolen identities and hosted laptop farms for North Korean operatives. 

The countermeasures are stacking up, but the scheme is widespread and has infiltrated an undetermined number of businesses, including hundreds of Fortune 500 companies.

Federal judges previously sentenced other people to prison for their involvement in the scheme, including Kejia Wang and Zhenxing Wang; Audricus Phagnasay, Jason Salazar and Alexander Paul Travis; and Oleksandr Didenko and Christina Chapman.

“These sentences hold accountable U.S. nationals who enabled North Korea’s illicit efforts to infiltrate U.S. networks and profit on the back of U.S. companies,” John A. Eisenberg, assistant attorney general for national security, said in a statement. 

“These defendants helped North Korean ‘IT workers’ masquerade as legitimate employees, compromising U.S. corporate networks and helping generate revenue for a heavily sanctioned and rogue regime,” he added. “The National Security Division will continue to pursue those who, through deception and cyber-enabled fraud, threaten our national security.”


A DOD contractor’s API flaw exposed military course data and service member records

By: Greg Otto
6 May 2026 at 17:15

A defense technology company with Department of Defense contracts exposed user records and military training materials through API endpoints that lacked meaningful authorization checks, according to an account published by Strix, an open-source autonomous security testing project.

The issue affected Schemata, an AI-powered virtual training platform used in military and defense settings. According to Strix, an ordinary low-privilege account was able to access data across multiple tenants, including user listings, organization records, course information, training metadata and direct links to documents hosted on Schemata’s Amazon Web Services instances.

Strix said the exposed materials included a 3D virtual training course for naval maintenance personnel with documentation marked confidential and proprietary, a course containing Army field manuals on explosive ordnance handling and tactical deployment, and hundreds of user records linked to bases and training enrollments. Additionally, the exposed information included names, email addresses, enrollment details and the military bases where U.S. service members were stationed. 

Schemata acknowledged the affected endpoints were exposed May 1, after what Strix described as a 150-day disclosure process. Strix said it verified remediation before publication and published its account earlier this week, 152 days after its initial disclosure attempt.

The reported vulnerability did not require a complex exploit. Strix said it used a low-privilege account to watch normal browser traffic, identify API endpoints exposed through the application, and request high-value data using the same session. According to Strix, those requests returned records from outside the account’s own organization, suggesting the API was not properly enforcing tenant boundaries or user permissions.

In multi-tenant software, authorization controls are intended to ensure users can access only the data and functions assigned to their account or organization. The failure described by Strix would represent a basic breakdown in that model. The project said some routes also appeared “write-enabled,” meaning a malicious actor could potentially modify or delete courses through update or delete requests, though the account does not say Strix performed destructive testing.
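The missing control Strix describes can be illustrated with a minimal sketch of the tenant check that multi-tenant APIs are supposed to perform on every request. The names here (`Record`, `fetch_record`, `AuthorizationError`) are hypothetical, for illustration only, and do not reflect Schemata’s actual code:

```python
class AuthorizationError(Exception):
    """Raised when a request crosses a tenant or permission boundary."""


class Record:
    """A minimal stand-in for any tenant-scoped resource (user, course, document)."""

    def __init__(self, record_id, tenant_id, data):
        self.record_id = record_id
        self.tenant_id = tenant_id  # which organization owns this record
        self.data = data


def fetch_record(db, caller_tenant_id, record_id):
    """Return a record only if it belongs to the caller's tenant.

    The breakdown described in the article is equivalent to omitting the
    tenant comparison below: any authenticated session could then read
    records belonging to other organizations.
    """
    record = db.get(record_id)
    if record is None:
        raise KeyError(record_id)
    if record.tenant_id != caller_tenant_id:
        raise AuthorizationError("cross-tenant access denied")
    return record
```

The point of the sketch is that the check is one comparison per request; the reported flaw was not an exotic exploit but the absence of exactly this kind of per-request enforcement.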

Strix did not respond to CyberScoop’s request for comment. 

Schemata’s platform serves military and defense training environments, where user identities, assignments and course enrollments can reveal sensitive operational context. Even when information is not classified, records showing where service members are based, what training they are enrolled in and which materials they can access may create risks if exposed outside intended channels.

In a statement posted on the company’s website, Schemata said it did not have “evidence that any third party exploited the vulnerability to access customer data.”

The disclosure timeline also raises questions about how companies handling sensitive government-related data receive and respond to vulnerability reports. Strix said it first contacted Schemata on Dec. 2, 2025. According to the account, Schemata’s CEO initially responded, “I would love to hear what the vulnerability is, but I assume you want to get paid for it. Is that the play?”

Strix said it clarified the same day that compensation was not required and that its priority was user safety. It said it sent multiple follow-ups from Dec. 8-29, warning that the vulnerability was critical and asking where to send details. Five months later, after telling Schemata that researchers were publishing the information publicly, Schemata responded, acknowledged the exposed endpoints and said it would patch the issue immediately.

“After we received actionable details about the vulnerability and confirmed the security researcher appeared to be legitimate, our team remediated the vulnerability the same day, and the researcher independently verified the fix before publishing their findings,” Schemata’s statement reads. “We appreciate the security researcher bringing this to our attention and their contribution to the security of our platform.”

Schemata said it’s working with cybersecurity consultants to assist with its response and improve its security posture. The company also said it is in contact with government authorities about the vulnerability.

Defense contractors that handle Controlled Unclassified Information, or CUI, must report cyber incidents to the Department of Defense Cyber Crime Center (DC3). The center did not respond to CyberScoop’s request for comment. 

According to contracting data, the company holds $3.4 million in contracts with the Department of Defense. In May 2025, Schemata announced $5 million in venture funding from several firms, including Andreessen Horowitz. 


Why data centers now belong on the critical infrastructure list

By: Greg Otto
4 May 2026 at 06:00

Missile and drone attacks that took out cloud data centers in the Middle East underscored a critical vulnerability in the modern economy: reliance on digital infrastructure that sustains competitive advantage and operational continuity for corporations, nations, and militaries. 

The outages and downstream disruption were a preview of a new form of strategic and operational risk. Data centers have long been the backbone of the digital economy. What is changing is the scale of dependence as AI workloads dramatically increase the compute power required to run businesses, supply chains, and national security systems. 

Artificial intelligence has moved beyond business applications and into the core of warfare and national security. Last month, The New York Times reported that AI is “totally integrated” into the collection of intelligence and its use in strategic decision-making and military operations. Even if AI models are not directly firing weapons, AI-enabled analysis now plays a central role in how modern militaries gain visibility, find insights, and drive action.

That matters because it changes what should be considered critical infrastructure. If AI is a competitive advantage for companies and a battlefield advantage for warfighters, then the infrastructure that trains, hosts and runs AI becomes a high-value target. Attacks on the digital infrastructure organizations rely on can do more than inflict financial damage. They can slow decision-making, degrade logistics and reduce military effectiveness without ever engaging a conventional force.

Historically, nation-state campaigns targeting data centers and service providers focused on cyber intrusions for espionage or pre-positioning. What is different now is the emergence of physical attacks on digital infrastructure during active conflict. Russian military intelligence has been linked to campaigns aimed at digital infrastructure and managed services, often as part of a supply chain attack to compromise organizations at scale. Iran-aligned groups have repeatedly demonstrated willingness to target private sector entities to advance geopolitical goals. In many cases, the objective was access: steal data, implant persistence, map networks, and maintain a foothold that could be used later for espionage or disruption. 

What’s clearer now than ever before is that data centers and the AI workloads they support have become so vital to modern society that adversaries will seek to degrade or destroy them as a tactic of both kinetic and cyber warfare.

We have already seen how quickly a digital incident can become real-world disruption. On March 11, reports surfaced of thousands of servers and endpoints wiped inside Stryker, a U.S.-based medical device manufacturer. A hacktivist group sympathetic to Iran, known as Handala, claimed responsibility. The incident reportedly halted Stryker’s global production after attackers accessed its Microsoft environment and issued a wipe command via Intune. Even without a single missile, the outcome looked like a strategic disruption: operations stopped and downstream customers felt it.

For business leaders, the imperative is clear: treat operational resilience as a board-level priority in the AI era.

In the world of corporate IT, cybersecurity prioritizes confidentiality: preventing theft of sensitive information. Resilience is a different discipline. It is the ability to sustain operations when systems are degraded, disrupted or actively under attack. For data centers and the businesses that depend on them, resilience comes down to preventing cascading failures and reducing the consequence when something inevitably goes wrong.

These developments carry an important implication for the private sector. Digital infrastructure is increasingly a strategic target, making resilience a core business priority rather than a narrow IT issue. For business leaders, the impact of data center disruption extends into multiple, often overlooked areas of cybersecurity risk.

For example, AI’s growth is colliding with a power wall in many regions where grid capacity cannot scale fast enough. That is driving facilities toward new power dependencies, including on-site generation through distributed energy and renewables, yielding more complex power management environments. This power infrastructure becomes a pressure point, as interruptions to power supply or management systems can quickly force a data center offline. Russia demonstrated the ability to target and disrupt power generation and distribution in Ukraine in both 2015 and 2016.

Building management and automation systems, including HVAC and physical access controls, are another. These systems are essential to safe, functioning operational environments, but they typically have long capital depreciation cycles and inconsistent security safeguards. Frequently exposed to the internet, and commonly misconfigured or improperly secured, they can give an attacker a pathway to cause outages.

With an increasing density of computing infrastructure, thermal management has become a core environment control in data centers. As the industry adopts liquid cooling for dense AI loads, interference with cooling is no longer a niche technical issue. It is a risk vector that can cause downtime and potential equipment damage if breached by attackers.

Remote access creates another major exposure. Data centers rely on vendors, contractors, and systems integrators for maintenance, monitoring, and support, and each remote connection can become an entry point if it isn’t tightly controlled, centrally managed, and well secured. Adversaries often target these trusted access routes because they can be easier to compromise than a well-defended perimeter, allowing attackers to bypass standard controls and safeguards.

All of this has broader economic implications because data center disruption does not stay inside the technology sector. It cascades into the industries that keep society functioning and supply chains moving: hospitals, electric utilities, chemical production, food and beverage, oil and gas, and transportation. An extended outage becomes missed shipments, halted production, delayed care, safety concerns and lost trust.

What should leaders do now?

Start by defining resilience targets that match business reality: what must stay running, what can degrade, what cannot fail. Then invest in the controls that limit the impact of an incident. Segmentation between IT and OT assets should be non-negotiable. Remote access should be treated as a critical risk pathway with least privilege, strong authentication and continuous monitoring.

Manage facilities systems such as building management systems, power, and cooling controls as critical operational technology, with asset inventories, vulnerability management, logging, and incident response plans that anticipate disruption.

Finally, train to operate under degraded conditions. Tabletop exercises should include scenarios like loss of a cloud region, partial failure of a facility, or compromise of a management plane. Use these exercises to validate that the organization can maintain essential operations and recover quickly when disruptions occur. 

Policy is moving in this direction as well. Governments are increasingly treating data centers as critical infrastructure. Policies and frameworks such as the National Cybersecurity Strategy, CISA’s Secure by Design principles, and international standards like IEC 62443 all reflect a growing recognition that digital infrastructure is a national security issue. Companies that get ahead of this shift will not only reduce risk, they will build competitive advantage in a world where downtime can become a strategic weapon.

In the AI era, data centers are essential infrastructure for modern economies and national security. Their rising importance also makes them attractive targets in cyber and physical conflict. Protecting them is no longer just about safeguarding company operations; it is about protecting the systems society depends on every day. 

Grant Geyer is the chief strategy officer at Claroty.


US government, allies publish guidance on how to safely deploy AI agents

By: Greg Otto
1 May 2026 at 12:49

Cybersecurity agencies from the United States, Australia, Canada, New Zealand and the United Kingdom jointly published guidance Friday urging organizations to treat autonomous artificial intelligence systems as a core cybersecurity concern, warning that the technology is already being deployed in critical infrastructure and defense sectors with insufficient safeguards.

The guidance focuses on agentic AI — software built on large language models that can plan, make decisions and take actions autonomously. To function, this software needs to connect to external tools, databases, memory stores and automated workflows, allowing it to execute multi-step tasks without human review at each stage.

The guidance was co-authored by the U.S. Cybersecurity and Infrastructure Security Agency, the National Security Agency, the Australian Signals Directorate’s Australian Cyber Security Centre, the Canadian Centre for Cyber Security, New Zealand’s National Cyber Security Centre and the United Kingdom’s National Cyber Security Centre.

The agencies’ central message is that agentic AI does not require an entirely new security discipline. Organizations should fold these systems into the cybersecurity frameworks and governance structures they already maintain, applying established principles such as zero trust, defense-in-depth and least-privilege access.

The document identifies five broad categories of risk. The first is privilege: When agents are granted too much access, a single compromise can cause far more damage than a typical software vulnerability. The second covers design and configuration flaws, where poor setup creates security gaps before a system even goes live.

The third category covers behavioral risks, or cases where an agent pursues a goal in ways its designers never intended or predicted. The fourth is structural risk, where interconnected networks of agents can trigger failures that spread across an organization’s systems.

The fifth category is accountability. Agentic systems make decisions through processes that are difficult to inspect and generate logs that are hard to parse, making it difficult to trace what went wrong and why. The agencies also note that when these systems fail, the consequences can be concrete: altered files, changed access controls and deleted audit trails.

The guidance also flags prompt injection, where instructions embedded inside data can hijack an agent’s behavior to perform malicious tasks. Prompt injection has been a lingering problem with large language models, with some companies admitting that the problem may never be solved.

Identity management gets significant attention throughout the document. The agencies recommend that each agent carry a verified, cryptographically secured identity, use short-lived credentials and encrypt all communications with other agents and services. For high-impact actions, a human should have to sign off, and the guidance is explicit that deciding which actions require that approval is a job for system designers, not the agent.
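As a rough illustration of those recommendations (verified identities plus short-lived credentials), here is a minimal HMAC-signed token sketch. It is an assumption-laden toy, not the mechanism the guidance prescribes; a real deployment would use an established standard such as signed JWTs or mTLS certificates:

```python
import base64
import hashlib
import hmac
import json
import time


def issue_agent_token(secret, agent_id, ttl_seconds=300):
    """Mint a short-lived, HMAC-signed credential for an agent.

    Illustrative only: shows how a verified identity (the signature) and a
    short lifetime (the "exp" claim) work together, per the guidance's themes.
    """
    claims = {"sub": agent_id, "exp": time.time() + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig


def verify_agent_token(secret, token):
    """Return the agent ID if the token is authentic and unexpired, else None."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(secret, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered with or signed by the wrong key
    claims = json.loads(base64.urlsafe_b64decode(body))
    if claims["exp"] < time.time():
        return None  # expired: short lifetimes limit the blast radius of a leak
    return claims["sub"]
```

The short expiry is the part doing the risk-containment work: a stolen credential is only useful for minutes, which is the property the agencies are urging for machine identities.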

The agencies admit the security field has not fully caught up with agentic AI. Some risks unique to these systems are not yet covered by existing frameworks, and the guidance calls for more research and collaboration as the technology takes on a growing number of operational roles.

“Until security practices, evaluation methods and standards mature, organisations should assume that agentic AI systems may behave unexpectedly and plan deployments accordingly, prioritising resilience, reversibility and risk containment over efficiency gains,” the guidance reads. 



cPanel’s authentication bypass bug is being exploited in the wild, CISA warns

By: Greg Otto
30 April 2026 at 16:49

A severe authentication bypass vulnerability in cPanel, one of the most widely deployed web hosting control panel platforms on the internet, is being actively exploited in the wild, according to security researchers and hosting providers.

The vulnerability, tracked as CVE-2026-41940, affects all supported versions of cPanel and WebHost Manager (WHM) released after version 11.40, as well as WP Squared, a WordPress hosting management panel built on the cPanel platform. Internet scans conducted by security firm Rapid7 using the Shodan search engine identified approximately 1.5 million cPanel instances exposed online, though the precise number of vulnerable systems remains unknown.

cPanel released a patch Tuesday, but by that point exploitation was already underway. KnownHost, a hosting provider that relies on cPanel, said earlier this week that successful exploits had been observed in the wild before any fix was made available. 

The Cybersecurity and Infrastructure Security Agency added the CVE to its Known Exploited Vulnerabilities (KEV) list Thursday. 

Cybersecurity firm watchTowr provided technical details in a blog posted Wednesday: The flaw stems from improper handling of user input during the login process. When a user attempts to log in, cPanel writes data from the request into a server-side session file before verifying the user’s identity. An attacker can exploit this by embedding hidden line breaks into the password field of a login request — characters cPanel fails to strip out — allowing arbitrary data to be injected directly into that file.

Through a secondary step, also involving a deliberately malformed request, the injected data gets promoted into the session’s active cache, where cPanel reads it as legitimate. Once that happens, the system sees the session as already authenticated and skips password verification entirely, granting access without ever checking the user’s actual credentials.
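Based on that description, the underlying bug class (newline injection into a line-oriented data store) can be sketched as follows. The key=value session format and function name below are illustrative assumptions, not cPanel’s actual internals:

```python
def write_session_field(session, key, value):
    """Store a login field in a newline-delimited key=value session record.

    The flaw as described hinged on unstripped line breaks in the password
    field: an embedded "\n" let an attacker smuggle extra key=value lines
    (such as an authenticated flag) into the session file. Stripping control
    characters before serialization closes that injection path. This file
    format is a simplified stand-in, not cPanel's real session layout.
    """
    cleaned = value.replace("\r", "").replace("\n", "")
    session[key] = cleaned
    # Serialize one field per line, the structure the injection abused.
    return "\n".join(f"{k}={v}" for k, v in session.items())
```

A vulnerable version of this function would write `value` unmodified, so a password of `hunter2\nauthenticated=1` would land in the file as two lines, with the second one later read back as a legitimate session attribute.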

cPanel has published a detection script designed to scan session files for indicators of compromise, including sessions that contain injected authentication timestamps, pre-authentication sessions with authenticated attributes, and password fields containing embedded newlines. WatchTowr separately released a “Detection Artifact Generator” that administrators can use to verify whether their instances remain vulnerable.

Namecheap, a major domain registrar and hosting provider, took the step of temporarily blocking connections to cPanel and WHM ports 2083 and 2087 ahead of patch availability, citing the need to protect customers while an official fix was pending. The company began applying the patch after cPanel’s release earlier this week.

cPanel’s patched releases address the issue across seven version branches, from 11.110.0 through 11.136.0, as well as WP Squared version 11.136.1. The company’s advisory notes that the fix ensures potentially dangerous input is scrubbed automatically within the core session-saving process, rather than depending on each individual part of the codebase to do so separately. The patch also adds handling for cases where a per-session encryption key is missing, a condition the original code failed to account for and that attackers were able to exploit to bypass password encoding entirely.

The vulnerability has been assigned a CVSS score of 9.8. 


Everyone’s building AI agents. Almost nobody’s ready for what they do to identity.

By: Greg Otto
30 April 2026 at 06:00

Anthropic recently announced that it would not release Mythos, its most powerful AI model, to the public. The model discovered thousands of previously unknown software vulnerabilities — flaws that had sat undetected in major operating systems and web browsers, some for nearly three decades. Anthropic said the model was too dangerous to deploy broadly because the same capabilities that let it find and fix security flaws could let attackers exploit them. A single AI agent, the company warned, could scan for weaknesses faster and more persistently than hundreds of human hackers. 

That decision tells you something important about where we are. The same AI systems that companies are racing to deploy as autonomous assistants — scheduling your appointments, writing your code, managing your workflows — are also capable of probing digital defenses at a speed and scale no human team can match. And most of the systems they’d be probing still rely on a security model designed for an era when a person sat behind every keyboard. 

Think of it like a building where every door has a lock, but the locks were all designed to recognize human hands. Now the building is full of robots — some of them authorized couriers, some of them intruders — and the locks can’t tell the difference. 

Not long ago, you could sit at your desk, glance at the sticky note on your monitor for your username and password, type them in, and grab a cup of coffee while your browser opened a doorway to the rest of the world. Every layer of security that followed — passwords, security questions, biometric scans, two-factor authentication — grew out of a single bedrock assumption: a person was on the other end. 

AI agents break that assumption from two directions at the same time. Legitimate agents need credentials to act like a human. OpenAI’s Operator navigates websites on your behalf. Google’s Gemini can plan your next family vacation while you sleep. Visa recently unveiled Intelligence Commerce Connect, a platform that lets AI agents do the shopping for consumers. These aren’t demos or hot takes from a tech conference floor. They’re shipping products that act on behalf of real people—and to do that, they need your identity. 

At the same time, adversaries can fake humanity at scale. The same AI that makes a convincingly helpful assistant can also power a convincing malicious impersonator. Attackers don’t break in, they log in—through shared credentials, hiring pipelines, vendor onboarding portals, and collaboration tools. Most organizations still treat identity as a login problem—something IT handles with stronger passwords or additional authentication steps layered on top of existing systems. The harder challenge now is knowing who, or what, you’ve already let in. 

That distinction is collapsing just as digital systems become more autonomous. 

When that distinction blurs, the damage is concrete. If a procurement workflow cannot distinguish between a human manager and an AI impersonator, purchase orders go out under false authority. When compliance logs cannot determine how a decision was authorized — by a person or a bot — the accountability chain falls apart. Regulators and customers will not accept “we’re not sure” as an explanation. 

The economics have tilted sharply toward the attacker. Sophisticated fraud once required coordination, with people researching targets, crafting messages, and adjusting tactics in real time. AI agents eliminate those constraints. One person can now supervise an army of autonomous systems, each running a valid persona across multiple interactions simultaneously. A single operator can field a hundred synthetic employees for the cost of one real salary. The barrier to large-scale impersonation is no longer skill or manpower. It is access to a capable model and a set of stolen credentials. 

Stronger identity controls do carry a cost. Every additional verification step is a moment when a customer might abandon a transaction, or an employee might lose patience with a security protocol. The goal is not to shut down automation. It is to make sure the systems acting in your name are authorized to do so. 

Some organizations are adapting. They are treating AI agents less like software and more like new employees, cataloging every agent in their environment, limiting permissions, requiring human approval for sensitive actions. They are moving beyond passwords to phishing-resistant authentication that binds access to a known device and a verified user. They are building behavioral baselines so that when a customer service bot suddenly queries a financial database, or a new hire accesses source code on day one, alarms go off. 

Nobody keeps their password on a sticky note anymore (I hope). But the assumption behind the sticky note, that a human hand would type it in, still underpins most of the systems we depend on. These systems hold your medical records, process your mortgage, and let an AI assistant rebook your flight. In a world where AI agents act faster, more persistently, and more convincingly than any person, that assumption is the vulnerability. 

The organizations that can verify identity continuously — not just at the door, but at every action, for every actor, human or machine — will have a durable advantage. The ones that cannot will find out what ambiguity costs. 

Devin Lynch is Senior Director of the Paladin Global Institute and a former Director for Policy and Strategy Implementation at the Office of the National Cyber Director. 

The post Everyone’s building AI agents. Almost nobody’s ready for what they do to identity. appeared first on CyberScoop.

Federal CIO cautious on Anthropic’s Mythos despite planned rollout

By: Greg Otto
28 April 2026 at 16:14

Federal Chief Information Officer Greg Barbaccia said Tuesday the government is approaching Anthropic’s Mythos model with measured expectations, acknowledging both its potential to strengthen federal cyber defenses and the significant uncertainties that remain about how it would perform in real-world conditions.

Barbaccia said his direct exposure to Mythos has been limited to evaluations and benchmarking tests, and that no federal agencies have deployed it yet. While he said the Office of the National Cyber Director is coordinating the government’s approach, his broader assessment of where AI-assisted cybersecurity is heading was direct.

“We’re going to get to a world soon where AI defense will be able to catch up,” Barbaccia told CyberScoop on Tuesday at the Workday Federal Forum, produced by Scoop News Group. “We must get to a point where the bots are finding the bots.”

Earlier this month, Barbaccia sent an email to cabinet agencies to inform them that the Office of Management and Budget has started laying the groundwork for a controlled rollout of the model to federal agencies.

His framing reflects a view that the same capabilities making Mythos a potential offensive threat are precisely what make it valuable as a defensive tool. Anthropic has said the model identified thousands of previously unknown, high-severity vulnerabilities across major operating systems and web browsers during testing, many of them decades old. The question for federal security teams is not whether those capabilities are real, but whether they translate from controlled laboratory settings to the complex, defended networks that government agencies actually run.

Barbaccia was candid about that gap. 

“I think it’ll uplevel people and make a novice cybersecurity offensive operator more efficient,” he told CyberScoop. “But the jury is still out on how effective it’ll be against real-world conditions, meaning a network that’s guarded by human defenders that has alerting and things like that. The evaluations I’ve seen have been laboratory learnings.”

That distinction matters for federal security teams weighing how to think about the model. Finding a vulnerability and successfully exploiting it in a defended environment are different problems. Barbaccia pointed to the CVE catalog, the government’s running list of known software flaws, as one area where the model’s speed could have practical value. A human analyst working through that catalog would take considerable time. A model like Mythos could move through it far faster. But speed alone does not determine whether a vulnerability poses an actual threat.

“There’s a difference between something that is exploitable in a 4-nanosecond window during a BIOS boot versus what’s the reality of that being exploited in the real world,” he said. “We have to understand, just like you could secure your entire threat surface, where are the crown jewels? And how do you protect something, and make sure the protection you’re deploying is worthwhile what you’re protecting.”

That kind of thinking is familiar to federal network defenders, who operate under resource constraints and must triage which vulnerabilities to address first. What Mythos potentially changes is the speed at which that triage can happen, and the depth at which vulnerabilities can be identified before an adversary finds them.

Barbaccia said the CIO Council, which coordinates technology policy across civilian agencies, is still in the early stages of understanding what the model could mean for enterprise security environments. “Everybody’s just curious to learn a lot more,” he said.

Agencies have tried on their own to obtain access to Anthropic’s model. The Department of the Treasury has asked for access, according to reports. CISA, the agency responsible for securing, monitoring, and defending civilian agency networks, has not been granted access.

The post Federal CIO cautious on Anthropic’s Mythos despite planned rollout appeared first on CyberScoop.

US, UK agencies warn hackers were hiding on Cisco firewalls long after patches were applied

By: Greg Otto
23 April 2026 at 16:25

A state-sponsored hacking group has implanted a custom backdoor on Cisco network security devices that can survive firmware updates and standard reboots, U.S. and British cybersecurity authorities disclosed Thursday, marking a significant escalation in a campaign that has targeted government and critical infrastructure networks since at least late 2025.

The Cybersecurity and Infrastructure Security Agency and the United Kingdom’s National Cyber Security Centre jointly published a malware analysis report identifying the backdoor, code-named Firestarter. Cisco’s threat intelligence division, Talos, attributed the malware to a threat actor it tracks as UAT-4356, the same group it linked to ArcaneDoor, a 2024 espionage campaign focused on compromising network perimeter devices.

CISA confirmed it discovered Firestarter on a U.S. federal civilian agency’s Cisco Firepower device after identifying suspicious connections through continuous network monitoring. The finding prompted an updated emergency directive issued Thursday, requiring all federal civilian agencies to audit their Cisco firewall infrastructure and submit device memory snapshots for analysis by Friday.

A backdoor that outlasts patches

The central concern driving the updated directive is the attack group’s ability to persist on compromised devices, even after enterprises applied security patches Cisco released in September 2025. Those patches addressed two vulnerabilities — CVE-2025-20333, a remote code execution flaw in the VPN web server component, and CVE-2025-20362, an unauthorized access vulnerability — that UAT-4356 exploited to gain initial entry. According to CISA, devices compromised before patching may still harbor the implant.

Firestarter allows attackers to achieve persistence by manipulating the Cisco Service Platform mount list, a configuration file that governs which programs execute during the device’s boot sequence. When the device receives a termination signal or enters a reboot, the malware copies itself to a secondary location and rewrites the mount list to restore and relaunch itself after the system comes back online. 

Critically, a standard software reboot does not remove the implant. Only a hard reboot — physically disconnecting the device from its power supply — is sufficient to clear the persistence mechanism from memory, according to both CISA and Cisco.

From there, the malware injects malicious shellcode into LINA, the core networking and firewalling code of Cisco’s Adaptive Security Appliance and Firepower Threat Defense software. Once embedded, the malware intercepts a specific type of network request normally used for VPN authentication. When a request arrives containing a hidden trigger sequence, it executes code supplied by the attackers, giving them a backdoor into the device.

Ties to ongoing campaign

Cisco Talos noted that Firestarter shares significant technical similarities with a previously documented implant called RayInitiator, suggesting the tools share a common origin or development history within UAT-4356’s arsenal.

In the federal agency incident analyzed by CISA, the attackers first deployed a separate implant, called Line Viper, to gain access to device configurations, credentials, and encryption keys. Firestarter was installed shortly after, prior to Cisco’s September 2025 patches being applied to those specific devices. When the agency patched its systems, Firestarter stayed on the devices, and the actors used it to then redeploy Line Viper in March, nearly six months after the initial breach.

Cisco and CISA did not attribute the espionage attacks to a specific nation-state, but Censys researchers previously said they found compelling evidence indicating a threat group based in China was behind the ArcaneDoor campaign. Censys noted it found evidence of multiple major Chinese networks and Chinese-developed anti-censorship software during its investigation into the early 2024 attacks.

The persistence vulnerability affects a broad range of Cisco hardware, including the Firepower 1000, 2100, 4100, and 9300 series, as well as the Secure Firewall 1200, 3100, and 4200 series.

Cisco has released updated software to address the persistence mechanism, though the company strongly recommends reimaging affected devices rather than relying solely on software updates where compromise is suspected.

The incident reflects a pattern increasingly seen among state-linked hackers: targeting the network edge devices that organizations rely on to enforce security boundaries. Because these appliances sit at the perimeter of enterprise and government networks, compromising them can expose internal traffic and give attackers a position to intercept credentials and communications.

CISA acknowledged active exploitation of the underlying vulnerabilities was ongoing at the time of publication.

A Cisco spokesperson told CyberScoop that customers needing assistance should contact Cisco Technical Assistance for support. CISA did not respond to a request for comment. 

The post US, UK agencies warn hackers were hiding on Cisco firewalls long after patches were applied appeared first on CyberScoop.

A dozen allied agencies say China is building covert hacker networks out of everyday routers

By: Greg Otto
23 April 2026 at 12:13

U.S. and international government agencies warned Thursday about a “widespread shift” in Chinese hacker methods toward the use of large-scale covert networks that compromise common devices to carry out a variety of attacks.

The advisory details how those networks work, and defensive steps organizations should take.

“Over the past few years there has been a major shift in the tactics, techniques and procedures (TTPs) used by China-nexus cyber actors, moving away from the use of individually procured infrastructure, and towards the use of externally provisioned, large-scale networks of compromised devices,” the warning reads.

The U.K. National Cyber Security Centre, Cybersecurity and Infrastructure Security Agency, National Security Agency, FBI and agencies from Australia, Canada, Germany, Netherlands, New Zealand, Japan, Spain and Sweden joined forces on the advisory.

It says that “multiple covert networks have been created and are being constantly updated, and that a single covert network could be being used by multiple actors. These networks are mainly made up of compromised Small Office Home Office (SOHO) routers, as well as Internet of Things (IoT) and smart devices.”

It continues: “Covert networks are used to connect across the internet in a low-cost, low-risk, deniable way, disguising the origin and attribution of malicious activity.”

Evidence suggests that Chinese information security companies create and support the networks, according to the agencies. Hackers use the networks for reconnaissance, malware delivery and stealing information, they said.

Examples of the use of covert networks include activities from groups known as Volt Typhoon to pre-position on U.S. critical infrastructure, and Flax Typhoon to conduct cyber espionage.

An example of a covert network is the botnet Raptor Train, which infected 200,000 devices worldwide. The networks are large and constantly evolving, with new ones under development all the time.

In a speech this week, NCSC CEO Richard Horne said “we know that China’s intelligence and military agencies now display an eye-watering level of sophistication in their cyber operations.”

Defenses against covert networks aren’t “straightforward,” according to the advisory, but include an assortment of common good cybersecurity practices. The largest and most at-risk organizations should engage in active hunting, tracking and mapping covert networks, using threat reporting to create blocklists and more.

“Working closely with U.S. and international partners, CISA continues to identify and warn organizations of Chinese state-sponsored cyber actors threatening critical infrastructure,” CISA Acting Director Nick Andersen said Thursday. “This advisory informs organizations of how these actors are strategically using numerous, evolving covert networks at scale for malicious cyber activity.”

The post A dozen allied agencies say China is building covert hacker networks out of everyday routers appeared first on CyberScoop.

The AI era demands a different kind of CISO

By: Greg Otto
22 April 2026 at 06:00

Many security leaders are still operating with frameworks built for a different era. For years, success was measured by fixed checkpoints, such as passing audits, closing vulnerabilities, and maintaining compliance. Those markers still have value, but they were designed for a threat landscape that moved in predictable, linear ways.

Today, that landscape is shifting in real time. AI is accelerating how attackers can identify and exploit weaknesses, while cloud environments and autonomous systems are constantly changing the terrain. The result is a gap between how risk is measured and how it actually unfolds, where static signals can’t keep up with dynamic threats.

CISOs are under pressure from two directions: risk is growing, and the tools meant to measure it are struggling to keep up. Traditional indicators often reflect yesterday’s threat landscape, leaving security leaders with an incomplete picture of where they actually stand.

The Mythos signal

Recent reports about Anthropic’s Claude Mythos Preview, described as so effective at vulnerability discovery that access has been restricted, offer a clear signal of where cybersecurity is headed. AI models like this one demonstrate that the speed and scale of exploitation have fundamentally changed. What once took skilled attackers days or weeks can now happen in minutes, and increasingly without human intervention.

That shift matters because attacker capabilities are accelerating faster than most organizations can measure them. The gap between how risk unfolds and how security teams track it is widening. A “passed” audit tells you where you’ve been, not where you are. A posture dashboard reflects a moment in time, not a continuously changing environment. And a pen test is a snapshot, in a world where conditions evolve constantly.

Sharpening the conversation this quarter

If your conversations haven’t evolved to match this new reality, your organization has a significant blind spot. Here are five questions CISOs should be using to turn the current shift into action:

What can we see at runtime without waiting for a report?
Configuration tools tell you what should be true. Runtime visibility tells you what is true right now. (Follow up: If an attacker starts moving laterally in our cloud environment today, how fast do we know, in minutes or days?)

Do we have a complete inventory of identities, including non-human?
Business environments are full of identities beyond employees. Vendors, contractors, service accounts, API keys, automations, machine identities, and cloud principals sprawl across systems. Attackers love that sprawl because stealing credentials is often easier than writing malware.
(Follow up: How many human and non-human identities do we have, and which ones can access sensitive data or modify critical infrastructure?)

Where are we over-permissioned, and how quickly can we reduce it?
Over-permissioned accounts act like master keys: convenient until they’re compromised. Least privilege must be measurable, not aspirational. (Follow up: Can you show me the highest-risk access paths and what we can remove or tighten in 30 days?)

Are we using AI to reduce noise and speed decisions or just adding another screen?
Many teams are drowning in alerts. AI can help by adding context (connecting a risky identity + vulnerable workload + exposed secret) so responders can act quickly, instead of chasing disconnected warnings. (Follow up: What’s our alert volume, what percentage is actionable, and what’s improved response time?)
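The follow-up question above becomes concrete once it is expressed as numbers a board can track. The snippet below is a hypothetical sketch: the alert volumes and response times are made-up inputs, shown only to illustrate the metrics, not benchmarks from any real SOC.

```python
# Hypothetical SOC metrics sketch. All input figures are invented
# placeholders used to make the follow-up question concrete.
alerts_per_day = 4_000        # total alert volume (assumed)
actionable = 120              # alerts that led to real action (assumed)
mttr_before_min = 95          # mean time to respond before AI triage (assumed)
mttr_after_min = 40           # mean time to respond after (assumed)

actionable_pct = 100 * actionable / alerts_per_day
mttr_improvement_pct = 100 * (mttr_before_min - mttr_after_min) / mttr_before_min
print(f"{actionable_pct:.1f}% actionable, MTTR improved {mttr_improvement_pct:.0f}%")
```

If a team cannot fill in those three inputs from its own tooling, that gap is itself an answer to the question.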

Can you walk me through a realistic incident end to end, with decision points?
Prevention matters, but resilience is what separates organizations when something gets through. Incidents are inevitable. What matters is detection speed, containment, recovery, and communications. (Follow up: Pick a scenario — credential theft, ransomware, vendor compromise. What happens, who decides what, and when does executive leadership need to know? What do customers need to know?)

What to do with the answers

If these questions surface gaps, the path forward is usually practical. Start by prioritizing runtime visibility on systems that support critical services and sensitive resident data. Treat identity like infrastructure — inventory it, right-size permissions, and monitor continuously. Shift measurement toward outcomes like time to detect, contain, and restore, rather than activity metrics like tickets closed or controls checked. And rehearse the hard day with both technical teams and leadership, including communications.

In an era where threats move at AI speed, the advantage belongs to teams that can see clearly and act immediately. The defining question now is how quickly you can identify a risk, understand its impact, and respond before it escalates.

Rinki Sethi is the chief security & strategy officer at Upwind Security, holding over two decades of cybersecurity leadership experience from roles at Twitter, Rubrik, BILL, Palo Alto Networks, IBM, and eBay. She is a founding partner at Lockstep Ventures, serves on the boards of ForgeRock and Vaultree, and is widely recognized for her contributions to the cybersecurity community, including developing the first national cybersecurity curriculum for the Girl Scouts of USA.

The post The AI era demands a different kind of CISO appeared first on CyberScoop.

Mythos can find the vulnerability. It can’t tell you what to do about it.

By: Greg Otto
21 April 2026 at 06:00

Mythos matters. It is a significant step forward in AI-assisted vulnerability discovery. But it does not mean cybersecurity changed overnight, nor does it mean enterprises are suddenly facing fully automated exploitation at internet scale tomorrow.

It does mean the offensive side of AI is continuing to improve. The defensive side needs to catch up now.

Mythos is the latest step in a longer trend. Over the next several years, expect the same pattern to repeat: incremental progress, then a jump; incremental progress, then a jump. Models will get more capable and cheaper with each cycle, and each jump will put more pressure on security teams still operating at human speed.

Mythos demonstrated that AI can find software vulnerabilities with unprecedented depth. That is real progress and should be taken seriously. However, this was not a case where AI suddenly made enterprise compromise cheap, easy, or automatic. Even in Anthropic’s own examples, the cost of discovering a critical vulnerability was significant. One example cited roughly $20,000 in token costs to identify a significant OpenBSD issue. 

Mythos made vulnerability discovery cheaper to scale by replacing bodies with dollars. But finding a vulnerability is only one part of the operational reality.

An attacker still has to determine whether that vulnerability is exploitable in a specific enterprise, identify a viable attack path, gain the necessary access, and successfully operationalize the exploit in a real environment. None of that became easy just because a model found a software bug.

And on the defensive side, Mythos does not yet solve the much harder enterprise problem: How do I know whether this vulnerability is actually exploitable in my environment, and what is the most efficient way to remediate it without breaking the business?

The real enterprise problem is not discovery. It is prioritization and action. Security leaders do not struggle only because vulnerabilities exist. They struggle because the operational cost of deciding what matters, what is exploitable, what can wait, and what can be fixed safely is enormous.

If a large enterprise learns that a critical vulnerability has been found in widely used software, the next step is not magic. It is a painful chain of operational questions focused on where they run the software, what version it is, whether there is a realistic attack path, and many more.

Mythos leaves the defensive cost of answering those questions inside a real enterprise largely unchanged. The right lesson is preparation.

One of the mistakes the market often makes with AI is assuming every new capability is the moment everything changes. The right move is to start now with defensive AI systems that are useful today and positioned to improve over time. For most enterprises, that means looking for AI products that help improve alert investigation, threat hunting, and vulnerability management, offer full audit capabilities, connect to enterprise data and reason to provide organizational context, and evolve as the model landscape matures.

The goal is to build the operational foundation now for a future in which more of the work can be automated safely.

Today, defenders need systems that let humans remain involved while the machine helps them scale. Over time, that involvement will change. Analysts will spend less time doing repetitive work themselves and more time orchestrating, reviewing, and improving how automated work gets done.

Eventually, some workflows will need to be reviewed in bulk rather than one action at a time. When response moves at machine speed, a human may not approve every individual remediation action. Instead, they will need a control center view into patterns: what the system did today, what worked, what did not, and what should be adjusted tomorrow.

That is a very different future from the simplistic idea of “replace the analyst.”

The real future is one where humans move from doing every task manually to supervising systems, shaping policy, reviewing patterns, and controlling how increasingly capable agents operate.

Mythos is a warning. Not because it means the sky is falling. Because it shows where the offensive side is heading. Defenders should move accordingly and with urgency.

Alex Thaman is the chief technology officer at Andesite. Over a 20+ year career, Alex has been an engineering leader at Microsoft, Unity Software, and Scale AI.

The post Mythos can find the vulnerability. It can’t tell you what to do about it. appeared first on CyberScoop.

Why the Axios attack proves AI is mandatory for supply chain security

By: Greg Otto
20 April 2026 at 09:17

Two weeks ago, a suspected North Korean threat actor slipped malicious code into a package within Axios, a widely used JavaScript library. The immediate concern was the blast radius: roughly 100 million weekly downloads spanning enterprises, startups, and government systems. But beyond the sheer scale, the attack’s speed was just as worrisome – a stark reminder of the tempo modern adversaries now operate at.

The Axios compromise was identified within minutes of publication by an Elastic researcher using an AI-powered monitoring tool that analyzed package registry changes in real time. The approach was right: AI classifying code changes at machine speed, at the moment of publication, before the damage compounds. By any standard, it was a fast response. The compromised package was removed in about three hours. But even in those three hours, the widely used package may have been downloaded over half a million times.
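If anything, "over half a million" is conservative. At a uniform download rate, which real registry traffic is not, 100 million weekly downloads works out to roughly 1.8 million downloads over a three-hour window. A quick back-of-the-envelope check:

```python
# Sanity check of the article's figures, assuming a uniform download rate.
weekly_downloads = 100_000_000
hours_exposed = 3

per_hour = weekly_downloads / (7 * 24)   # downloads per hour at a uniform rate
exposed = per_hour * hours_exposed       # estimated downloads during exposure
print(round(per_hour), round(exposed))
```

The actual count depends on time of day and CI traffic patterns, but the order of magnitude is what matters: a three-hour exposure window for a package this popular is measured in millions of downloads, not thousands.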

This underscores a new reality. Enterprises and the public sector are being overwhelmed with attacks that are increasing in both speed and complexity, driven in part by AI. Adversaries are probing every link in the supply chain, and they are doing it at a pace that human-speed defenses cannot match.

That detection effort is one example of using AI to tackle a security problem, but it also makes a broader case: AI-powered security can dramatically improve SOC efficiency, especially when organizations across the public sector and beyond are drowning in attacks.

The direct threat to the public sector

Government agencies increasingly rely on the same open-source JavaScript frameworks as the private sector, so a poisoned package can give an adversary access to sensitive systems before anyone realizes the supply chain has been poisoned. This is a direct threat to national security and critical infrastructure, especially when the payloads are cross-platform, affecting macOS, Windows, and Linux.

What is most critical now is understanding and correctly preparing for the frequency and speed at which these attacks occur.

AI has fundamentally lowered the barrier to sophisticated cyber operations, granting relatively unsophisticated bad actors and small nation-states capabilities once reserved for elite criminal groups and countries. Adversaries now leverage AI to automate reconnaissance, craft convincing social engineering, and develop evasive malware. With a new vulnerability discovered every few minutes, the pace is accelerating.

For the public sector, the threat model has expanded. Defending against known nation-state playbooks is no longer sufficient—that’s just the baseline. Groups that couldn’t execute at nation-state levels five years ago now operate with comparable sophistication, while state-sponsored actors operate with unprecedented speed and automation. Staying ahead means moving beyond traditional defense to meet a threat landscape that is increasingly automated and ubiquitous.

AI is not optional

Adversarial AI is the defining threat of the current operating environment. Automated reconnaissance. AI-generated obfuscation. Machine-speed deployment across multiple vectors simultaneously. The adversary has implemented AI faster and more aggressively than most defensive teams.

It is rapidly becoming unquestionable in security: if you are not using AI to battle AI, you will lose.

That does not mean buying into the autonomous SOC fantasy. That approach treats AI in isolation, as if defenders are the only ones with access to the technology. Defensive AI is not a win button, but the minimum entry fee to stay level with the attacker. You still need business context, mission knowledge, and human judgment.

The agentic SOC transformation

The Axios compromise should serve as a clear signal. Nation-state actors are targeting the software supply chain with increasing frequency and sophistication. The government agencies and organizations that will defend successfully against these threats are the ones building security operations that can move just as fast as the threat actors they face.

AI-driven security operations that can match the speed of modern threats, such as agentic workflows that automatically triage, investigate, and contain suspicious activity, are operationally necessary. An agentic SOC mindset and approach will empower analysts: agents will operate on their behalf automatically and transparently.

The traditional SOC pyramid puts humans at the bottom doing the highest-volume work. A wide analyst tier triaging alerts, feeding a narrower senior tier handling investigations. Adversarial AI has made that base layer untenable. The volume is too high, the speed too fast, the surface area too broad. The pyramid inverts into a diamond – AI takes the base while analysts rise to become threat engineers: managing, validating, and improving the agents working on their behalf.

AI agents handle the high-volume work of alert correlation, investigation enrichment, and initial containment while human analysts focus on strategic decisions and mission context. These agents amplify the expertise that government security professionals bring, delivering pre-investigated, correlated findings rather than a flood of disconnected alerts.
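This division of labor can be made concrete with a toy sketch. The minimal Python example below (all alert data, field names, and thresholds are invented for illustration) correlates alerts by shared indicator, lets the agent auto-contain high-confidence clusters, and escalates only ambiguous ones to a human analyst:

```python
from collections import defaultdict

# Toy alert stream; in practice these would come from EDR/SIEM feeds.
ALERTS = [
    {"id": 1, "host": "web-01", "indicator": "198.51.100.7", "score": 0.95},
    {"id": 2, "host": "web-02", "indicator": "198.51.100.7", "score": 0.90},
    {"id": 3, "host": "db-01", "indicator": "203.0.113.9", "score": 0.40},
]

def agent_triage(alerts, auto_contain_threshold=0.85):
    """Correlate alerts by shared indicator, then decide per cluster:
    auto-contain high-confidence clusters, escalate the rest to a human."""
    clusters = defaultdict(list)
    for alert in alerts:
        clusters[alert["indicator"]].append(alert)

    contained, escalated = [], []
    for indicator, group in clusters.items():
        avg_score = sum(a["score"] for a in group) / len(group)
        if avg_score >= auto_contain_threshold:
            contained.append(indicator)   # agent acts automatically
        else:
            escalated.append(indicator)   # threat engineer reviews
    return contained, escalated

contained, escalated = agent_triage(ALERTS)
```

In a real SOC the correlation and scoring would be driven by models and enrichment services rather than a static list; the point is the inverted division of labor, where the agent absorbs the high-volume base of the pyramid and the analyst reviews only the escalations.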

The rapid acceleration of sophisticated attacks calls for this essential change across the SOC. The public sector and industry are undergoing a significant transformation, shifting away from eyes-on-glass alert triage toward a high-impact era of threat engineering. In doing so, public sector teams will have the ability to greatly reduce mean time to detect/respond, in turn reducing SOC analyst fatigue and compressing investigation timelines.

Mike Nichols is the GM of Security at Elastic.

The post Why the Axios attack proves AI is mandatory for supply chain security appeared first on CyberScoop.

Ghost breaches: How AI-mediated narratives have become a new threat vector

By: Greg Otto
16 April 2026 at 06:00


A company wakes up to a news story claiming it has suffered a major data breach. The details are specific, technical and convincing. But the breach didn’t happen. No systems were compromised. No data was taken. A language model generated the entire story, filling in plausible details from scratch. And before the company can figure out what’s going on, a reporter at a reputable outlet picks up the story and requests comment. Within hours, the company is drafting statements and mobilizing its communications team to address a fictional event.

A second incident begins with something real. Years earlier, a company had suffered a genuine breach that received wide media coverage. The incident was investigated, resolved and closed. Then one of the outlets that originally reported on it redesigned its website. Old articles received new URLs and updated timestamps, and search engines re-indexed them as fresh content. AI-powered news aggregators picked up the signal and flagged it as a developing story. The company found itself fielding inquiries about an incident that had been resolved years before.

[Ed. note: The authors are withholding full specifics about the incidents because full disclosure could cause harm, yet CyberScoop confirmed with the authors that the incidents did in fact take place].

A third incident introduces yet another dimension. A cybersecurity publication ran a story about a business email compromise attack that cost a UK company close to a billion pounds. The article quoted a well-known security researcher, yet in reality, he had not spoken to the publication. AI generated the quotes, assigned them to him with full confidence, and the publication ran them as fact.

Together, these three cases expose a threat that most organizations have yet to prepare for. AI has developed the ability to fabricate convincing security incidents from nothing, complete with technical detail, named sources, and enough credibility to trigger full-scale crisis responses. Any organization that treats this as a distant or theoretical problem risks learning the hard way just how fast AI-generated fiction can become a real-world emergency.

The assumption that no longer holds

Cyber crisis response has always been built on a simple premise: something real happens, then you respond. That premise is breaking. AI systems now generate, amplify, and validate claims before security teams have confirmed anything. Once a narrative enters the ecosystem, it can be ingested into threat intelligence feeds, risk scoring platforms, and automated workflows. Fiction becomes signal.

For security teams, this creates a new class of false positive. Not a noisy alert from a misconfigured tool, but a fully formed external narrative that appears credible. A hallucinated breach can trigger internal investigations, executive escalation, and defensive actions. Time and resources get diverted toward disproving something that never happened.

Worse, it can influence real attacker behavior. Threat actors can weaponize fabricated breach narratives as pretext. Phishing emails referencing a “known incident” become more believable. Impersonation of IT or incident response teams becomes more effective. The narrative becomes part of the attack surface.

What this means for security teams

Security teams are used to monitoring for indicators of compromise. They now need to monitor for indicators of narrative. Open source intelligence pipelines are increasingly automated. If those pipelines ingest false information, downstream systems will act on it. That includes SIEM enrichment, third-party risk scoring, and even automated containment decisions in some environments.

The practical implication is that security teams need visibility into how their organization is being represented externally, not just what is happening internally. This is not traditional threat intelligence, but it behaves like it. Early detection changes outcomes.

There is also a need for tighter integration with communications. When a false narrative emerges, the technical reality and the external perception diverge. Both need to be managed in parallel.

What this means for communications teams

For communications teams, the timeline has collapsed. The first signal of a “breach” may not come from the SOC. It may come from a journalist, a customer, or an automated alert.

Silence is no longer neutral. If a narrative exists, AI systems will fill gaps with whatever information is available. That can reinforce inaccuracies with each iteration. Responses need to be designed for machine consumption as well as human audiences. Clear, declarative language. Verifiable facts. Structured statements that can be easily parsed and reused. The goal is to establish a competitive presence in the information supply chain.

Preparation becomes critical. Pre-approved language that can be deployed quickly. Established coordination with legal and security before something surfaces.

Shared implications

Both security and communications teams are now operating in the same environment, whether they recognize it or not. A hallucinated breach can trigger real operational disruption. Vendor relationships may be paused, connections to third-party systems may be severed, regulators may take interest, and markets may react. None of that requires an actual compromise. And this creates a feedback loop. External narratives drive internal actions. Internal actions, if visible, reinforce external narratives.

Breaking that loop requires speed, coordination, and clarity.

AI audits as a control mechanism

One of the most effective controls in this new environment is systematic AI auditing. Regularly testing how AI systems describe your organization, your security posture, and any alleged incidents. This provides visibility into what machines “believe” before that belief spreads. It allows organizations to identify and correct false narratives early, before they propagate into tooling, decision-making, and attacker behavior. It also highlights where accurate information needs to exist. Not just anywhere online, but in sources that AI systems prioritize.
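A minimal version of such an audit can be scripted. The sketch below (the company, incident identifiers, and model responses are all invented) compares claims extracted from AI-generated descriptions of an organization against its verified incident record and flags anything unverified:

```python
import re

# Verified incident record maintained by the organization (invented data).
CONFIRMED_INCIDENTS = {"2019-vendor-phishing"}

# Responses collected from AI systems describing the company (invented data).
MODEL_RESPONSES = {
    "model-a": "Acme resolved a vendor phishing breach in 2019 (2019-vendor-phishing).",
    "model-b": "Acme suffered a ransomware breach in 2025 (2025-ransomware).",
}

def audit(responses, confirmed):
    """Flag any incident identifier a model asserts that is absent from the
    organization's verified record -- a candidate hallucinated narrative."""
    flagged = {}
    for model, text in responses.items():
        claimed = set(re.findall(r"\b\d{4}-[a-z-]+\b", text))
        unverified = claimed - confirmed
        if unverified:
            flagged[model] = sorted(unverified)
    return flagged

flags = audit(MODEL_RESPONSES, CONFIRMED_INCIDENTS)
```

In practice the responses would be gathered by regularly querying the AI systems your stakeholders actually use, and claim extraction would be more robust than a regex; the control itself is the comparison against a maintained ground-truth record.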

The mindset shift

This marks a shift from incident response to narrative response. Security teams need to treat every alert as potentially fabricated. Communications teams need to prepare for narratives that form independently of what actually happened. Both must operate with the understanding that perception alone can trigger real consequences. In this environment, the ability to detect and respond to false narratives matters as much as the ability to detect and respond to actual breaches.

Mary Catherine Sullivan, who holds a Ph.D. in political science from Vanderbilt University, is a senior director of Data Science for Digital & Insights, within FTI’s Strategic Communications segment. She is a communications and data science leader specializing in message testing, audience research, digital communications analytics, and reputational risk assessment. As part of FTI Consulting’s Data Science team, she develops state-of-the-art artificial intelligence, natural language processing, machine learning, and statistical models to analyze media ecosystems, stakeholder discourse, and audience response—supporting informed, defensible decision-making for clients navigating complex reputational environments.

Brett Callow is a senior advisor in the Cybersecurity and Data Privacy Communications practice at FTI Consulting. With more than two decades of cybersecurity policy, legislation, and communications experience, Brett’s expertise is widely recognized within the industry, by policymakers and the media. He has been involved in some of the most high-profile ransomware incidents and has participated in panels and policy-related discussions, including at the Office of the Director of National Intelligence and the Aspen Institute, and has served on the Advisory Board of the Royal United Services Institute’s Ransomware Harms project.

The post Ghost breaches: How AI-mediated narratives have become a new threat vector appeared first on CyberScoop.

We’re only seeing the tip of the chip-smuggling iceberg

By: Greg Otto
15 April 2026 at 06:00

Last year, Nvidia CEO Jensen Huang repeatedly denied that China was obtaining America’s most advanced chips. “There’s no evidence of any AI chip diversion,” he said, dismissing such reports on another occasion as “tall tales.”

Federal prosecutors would beg to differ. They’ve charged six men over the past three weeks with smuggling billions of dollars’ worth of AI chips to China. The indictments, while a tactical victory, are a warning of how pervasive the problem has become, thanks both to loopholes in federal law and a failure to support existing laws with serious enforcement.

Both Washington and Beijing have tried to reshape AI chip supply chains to bolster their respective national security agendas ahead of an expected trade-focused summit in May. While the United States has imposed export controls on advanced chips to cut off China’s military modernization efforts, China has pushed its firms to adopt domestically produced components to secure its self-reliance.

But neither side can fully avoid the Willie Sutton rule. Why smuggle chips? Because that’s where the profit is — particularly without enough resources dedicated to enforcement. 

A closed Chinese market grasping for more powerful alternatives to their own products offers a prime incentive for American firms to provide components to Beijing. Smuggling has also transformed an emerging network of data center infrastructure across Southeast Asia into a source of illicit computing power for U.S. adversaries.

The recent cases highlight these features in detail. In March, prosecutors charged three people connected to Super Micro Computer, an American computing firm, with smuggling an estimated $2.5 billion in chips to Chinese customers by shipping servers to the company’s offices in Taiwan and elsewhere in the region. In the meantime, the trio designed warehouses full of fake products to fool U.S. authorities. A week later, prosecutors unveiled charges against another three individuals accused of conspiring to ship advanced chips to China via business contacts in Thailand.

This string of prosecutions suggests that despite some high-profile successes, smuggling remains a pervasive issue across the industry. While the problem is partly one of professed ignorance, it can also be addressed with a combination of policy, personnel, and policing.

The United States must strengthen controls over emerging technologies at the factory floor rather than the airport gate. While Washington has strong export control laws, these regulations are intended to prevent components from leaving the country. They do not, however, block Chinese firms from purchasing these technologies inside the country.

This divergence in intentions produces difficulties for prosecution, as smugglers are often solely indicted for evading customs enforcement rather than charged with illicitly obtaining the components while still on American soil. However, Congress can close this loophole via stronger due diligence laws that require greater scrutiny of potential customers ahead of the customs enforcement process.

Washington is also in an arms race with AI firms to properly fund enforcement mechanisms, a race it is currently losing. While one smuggling case alone involved $2.5 billion, federal spending on policing export controls amounted to $122 million in all of 2025.

Moreover, this surge of investment in computer hardware is increasingly global in scope, magnifying the current shortage of federal agents responsible for enforcing export controls at the exact moment both allies and adversaries are seeking to purchase ever larger batches of advanced chips.

Even with stronger policies and more personnel, prosecuting AI chip smuggling must also remain a policing priority for federal law enforcement. While these cases are often complex due to a range of technical and jurisdiction challenges, as well as an array of shifting export control regimes, the FBI and the Commerce Department should remain committed to tracking and disrupting these smuggling networks.

It will be key for the administration to separate enforcement actions from its ongoing diplomatic exchanges with Beijing. Dropping domestic prosecutions should not be used as a bargaining chip to deliver trade concessions during President Donald Trump’s upcoming travels to Beijing.

We need stronger enforcement so that the next billion-dollar smuggling case marks real progress, rather than exposing just how much slipped through.

Jack Burnham is a senior research analyst at the Foundation for Defense of Democracies’ China Program, focusing on China’s military, emerging technologies, and science and technology policy. Follow Jack on X @JackBurnham802.

The post We’re only seeing the tip of the chip-smuggling iceberg appeared first on CyberScoop.

CISA cancels summer internships for cyber scholarship students amid DHS funding lapse

By: Greg Otto
14 April 2026 at 19:17

The Cybersecurity and Infrastructure Security Agency has informed participants of the federal government’s Scholarship for Service program that it has canceled this year’s summer internship programs due to the current funding issues at the Department of Homeland Security. 

Emails from CISA obtained by CyberScoop recently informed applicants that the agency will not bring any CyberCorps: Scholarship for Service interns onboard this summer due to the impacts of the federal funding lapse and the current administrative situation at DHS. For some applicants, agency representatives acknowledged that the cancellations represent a second consecutive year of disrupted placement efforts.

The National Science Foundation (NSF) leads and manages the program, in coordination with the Office of Personnel Management (OPM) and DHS. The program covers tuition and provides stipends for students specializing in cybersecurity and artificial intelligence. In exchange, graduates must complete an internship and subsequently work in federal service for a period equal to the duration of their scholarship. 

An OPM official told CyberScoop the agency is “actively in contact with all Federal cabinet agencies on this topic, and are confident that we will place nearly all eligible Scholarship for Service participants within the next couple months.”

An NSF spokesperson declined to comment. CISA did not respond to CyberScoop’s request for comment.

The sudden closure of agency pipelines highlights how federal job seekers are navigating a paralyzed hiring environment, exacerbated by budget turmoil at DHS and proposed workforce reductions under the Trump administration. The White House’s fiscal 2027 budget would slash CISA’s funding by $707 million, according to a summary released earlier this month, further hollowing out an agency that already took a big hit in President Donald Trump’s first year.

Sources told CyberScoop Tuesday that CISA has been reaching out to internship applicants who had participated in a virtual job fair held in February, where they were told that the agency would have 100 internship roles available. However, applicants were warned that the agency would not be able to hire anyone until the agency was funded. 

Program participants expressed regret to CyberScoop last November over taking part in an initiative that binds them to an employer currently unable to hire them. Program administrators have reportedly advised students to get creative in their job searches, a directive that caused frustration among participants who rely on standard federal placement pipelines.

In response to the growing backlog of unplaced graduates, OPM announced plans to collaborate with the National Science Foundation on a mass deferment. OPM Director Scott Kupor stated that the deferment will be implemented after the government shutdown resolves, providing graduates additional time to secure qualifying positions.

The structural breakdown of the CyberCorps pipeline presents long-term challenges for the federal government’s ability to recruit technical talent. The United States currently faces an estimated 500,000 open cybersecurity positions. The scholarship program was historically viewed as a reliable mechanism to bypass private-sector wage competition and secure early-career talent for the federal government.

Lawmakers are currently battling over bills that would end the DHS shutdown. 

Tim Starks contributed to this story. 

The post CISA cancels summer internships for cyber scholarship students amid DHS funding lapse appeared first on CyberScoop.

Secretary Mullin must help finish the job: Urge the Senate to confirm Plankey

By: Greg Otto
14 April 2026 at 06:00

On March 23, the Senate confirmed Senator Markwayne Mullin as the next homeland security secretary, marking an important step in strengthening leadership during a critical moment for our nation’s security.

But only half of the job is done.

The Cybersecurity and Infrastructure Security Agency (CISA), the federal government’s main civilian cyber defense agency, still lacks a Senate-confirmed director. As global cyber threats escalate, this prolonged leadership gap poses a growing national security risk.

As Executive Director of the National Technology Security Coalition (NTSC), I represent chief information security officers who are responsible for protecting the systems that sustain America’s economy and critical infrastructure. In every sector (energy, healthcare, financial services, manufacturing, and transportation), there is a common concern: the threat landscape is growing more aggressive, and our defenses must stay ahead.

Our enemies are not waiting.

Since the start of the conflict with Iran, cybersecurity experts have reported increased malicious cyber activity targeting U.S. and allied systems. Iran-linked actors have shown their ability to disrupt operations and exploit vulnerabilities. Meanwhile, China continues its long-term effort to infiltrate American networks and position itself for possible disruption of critical infrastructure. Russia and its affiliated groups remain persistent, probing Western systems for weaknesses and exerting constant pressure.

This is the reality of modern conflict. Cyber operations have emerged as a primary domain of competition. In some cases, they can rival the effects of traditional military action, disrupting economies, communications, and public safety through code alone. 

Leadership is important in this environment.

CISA plays a key role in coordinating federal cyber defense, sharing threat intelligence with the private sector, and supporting state and local governments. It serves as the link between government and industry in protecting the nation’s digital infrastructure. Without a Senate-confirmed director, the agency’s ability to set priorities, coordinate efforts, and respond quickly is limited.

That challenge is growing more urgent. The President’s fiscal year 2027 budget plan proposes significant cuts to CISA’s funding. At a time when the agency faces increasing operational pressure, fewer resources make strong, steady leadership even more crucial.

This is the moment when Secretary Mullin’s leadership is critical.

As a former member of the Senate, Secretary Mullin understands the institution, its dynamics, and how to build consensus. He is uniquely positioned to connect with former colleagues and help advance Sean Plankey’s nomination as Director of CISA.

Plankey is highly qualified and widely respected in the cybersecurity community. His experience in the U.S. Coast Guard, at the Department of Energy securing the nation’s energy infrastructure, and in the private sector provides him with a clear understanding of both the threat landscape and the importance of public-private collaboration. At a time when coordination between government and industry is vital, these qualities are essential.

The Senate has already signaled that it takes cyberthreats seriously. It recently confirmed Lt. Gen. Joshua Rudd to lead U.S. Cyber Command and serve as director of the National Security Agency, ensuring strong leadership of America’s military cyber defense team.

Now it needs to do the same on the civilian side.

Confirming Plankey matters because the country’s main civilian cyber defense agency needs established leadership to combat adversaries who are already inside our networks, probing our systems, and preparing for the next phase of conflict.

The leadership gap at CISA has gone on long enough.

Secretary Mullin must engage. The Senate needs to act. And Sean Plankey should be confirmed without further delay.

America’s cyber defenses depend on it.

Chris Sullivan is the executive director of the National Technology Security Coalition, a nonprofit, non-partisan organization that serves as an advocacy voice for chief information security officers across the nation.

The post Secretary Mullin must help finish the job: Urge the Senate to confirm Plankey appeared first on CyberScoop.

Don’t just fight fraud, hunt it

By: Greg Otto
9 April 2026 at 08:00

Our nation has entered a new fraud arms race fueled by AI.

With billions of dollars in fraud losses mounting in both the private and public sectors, it’s clear the old ways of deterring fraud aren’t working. That’s why we need a new playbook that starts with understanding how fraudsters operate, evolving our defenses, and shifting to a proactive posture that doesn’t just fight fraud but actively hunts it down. 

In the AI era, treating fraud as just a front-door problem won’t work. This moment requires industry, government, and consumers to work together, reduce silos, and share real-time intelligence. The goal is to move beyond reactive detection by understanding the lifecycle of a threat—from its formation to its spread—so we can intervene before it establishes a foothold.

For decades, fraud has been treated like a series of isolated incidents. This false assumption has underpinned nearly every past effort to crack down on it. Those efforts, while well-intentioned, have missed the mark. 

Now, in light of the Trump Administration’s Cyber Strategy for America and accompanying executive order, it’s critical to understand the modern fraud landscape and the central role that digital identity exploitation plays within it.

New research from Socure reveals just how dramatically the landscape is evolving. 

Fraud has become industrialized, with organized crime syndicates running operations that are global, systemic, automated, and powered by AI. No organization, service, or program is safe. Fraudsters target government programs, banks, fintech platforms, telecom companies, and more, blurring the lines between public sector fraud, financial crime, and cybercrime.

It used to be that fraud could be detected through the reuse of identity elements across multiple applications: the same email, device, phone number, or IP address used over and over. 

But the data is clear: these links are declining fast. Today’s sophisticated fraudsters are engineering their attacks to avoid traditional fraud detection patterns. Our research projects that emails will be completely unique within fraud populations as soon as 2027, at which point email reuse will no longer reveal fraud patterns.

Speed is another defining feature of modern identity fraud. Fraudsters use AI to create clean, durable, synthetic and stolen identities at scale. In one observed campaign, 24,148 synthetic identities were built and launched in under a month, with many attacks occurring within 48 hours. What once took weeks or even months can now be completed in days. 

The rapid rise of identity farms is another indicator of the industrialization of fraud. Identity farms are operated by crime rings to systematically create synthetic or stolen identities over time in order to closely resemble legitimate identities. Matured identities are used to open bank, credit, and money-movement accounts, siphon government benefits, launder funds, and more. These identity farms focus on durable identities that can bypass traditional verification controls.

So what should we do? Simply put, we must go on offense. 

This means treating identity as critical infrastructure and implementing strategies that track how identities were created before the moment of application; expanding signals monitoring to include elements like residential proxies, ISP behavior, and domain registration activity; evaluating velocity and orchestration in real-time; and treating continuous measurement, rapid model iteration, and cross-industry intelligence as core capabilities.

Additionally, given the rapid scaling of fraud, we need more analysis of the complete ecosystem, including dynamic factors like device information, digital footprints, and behavioral biometrics so organizations can effectively distinguish genuine humans from machines. Ultimately, this layered and interconnected approach makes it significantly harder for malicious actors to recreate or steal identities at scale.
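One of the signals described above, velocity, can be illustrated with a toy check (the application log, device IDs, window, and threshold are all invented): flag any identity element that appears in more applications than expected inside a sliding time window.

```python
from collections import defaultdict

# Toy application log: (timestamp_in_minutes, device_id) pairs (invented data).
APPLICATIONS = [
    (0, "dev-1"), (5, "dev-1"), (8, "dev-1"),   # burst from one device
    (0, "dev-2"), (600, "dev-2"),               # normal spacing
]

def velocity_flags(apps, window=60, max_per_window=2):
    """Flag identity elements seen more than max_per_window times within
    any `window`-minute span -- a crude orchestration signal."""
    by_element = defaultdict(list)
    for ts, element in apps:
        by_element[element].append(ts)

    flagged = []
    for element, times in by_element.items():
        times.sort()
        for start in times:
            # count applications falling in [start, start + window]
            in_window = sum(1 for t in times if start <= t <= start + window)
            if in_window > max_per_window:
                flagged.append(element)
                break
    return flagged

flags = velocity_flags(APPLICATIONS)
```

Production systems would combine this with the other signals named above (proxy use, ISP behavior, domain registration activity) and score them together in real time; the sketch only shows why raw speed itself is a detectable tell.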

Fraud is no longer a series of isolated acts. It is a coordinated, global enterprise built on the exploitation of identity. Until our efforts reflect this new reality, we will continue to fight an imminent and ongoing threat with outdated tools and fall further behind. 

Now is the time to make this strategic shift and finally put fraudsters on their heels. 

Mike Cook serves as head of fraud insights at Socure, the identity and risk platform for the AI age.

The post Don’t just fight fraud, hunt it appeared first on CyberScoop.

Tech giants launch AI-powered ‘Project Glasswing’ to identify critical software vulnerabilities

By: Greg Otto
7 April 2026 at 14:00

Major technology companies have joined forces in an effort to use advanced artificial intelligence to identify and address security flaws in the world’s most critical software systems, marking a significant shift in how the industry approaches cybersecurity threats.

Anthropic announced Project Glasswing on Tuesday, bringing together Amazon, Apple, Broadcom, Cisco, CrowdStrike, the Linux Foundation, Microsoft, and Palo Alto Networks. The initiative centers on Claude Mythos Preview, an unreleased AI model that Anthropic will make available exclusively to project partners and approximately 40 additional organizations responsible for critical software infrastructure.

The model has already identified thousands of previously unknown vulnerabilities in its initial testing phase, including security flaws that have existed in widely used systems for decades, according to Anthropic. Among the discoveries is a 27-year-old bug in OpenBSD, an operating system known primarily for its security focus, and a 16-year-old vulnerability in FFmpeg, a widely used video software program that automated testing tools had failed to detect despite running the affected code line five million times. The company has been in contact with the maintainers of the relevant software, and all found vulnerabilities have been patched. 

Anthropic will commit up to $100 million in usage credits for the project, along with $4 million in direct donations to open-source security organizations. The company has stated it does not plan to make Mythos Preview available to the general public, citing concerns about the model’s potential misuse.

The initiative reflects growing concerns within the technology sector about the dual-use nature of advanced AI systems. While Mythos Preview was not trained specifically for cybersecurity purposes, its coding and reasoning capabilities have proven effective at identifying subtle security flaws that have eluded human analysts and conventional automated tools.

“Although the risks from AI-augmented cyberattacks are serious, there is reason for optimism: the same capabilities that make AI models dangerous in the wrong hands make them invaluable for finding and fixing flaws in important software—and for producing new software with far fewer security bugs,” the company said in a blog post. “Project Glasswing is an important step toward giving defenders a durable advantage in the coming AI-driven era of cybersecurity.”

The project comes as the industry has predicted that similar AI capabilities will soon become more widespread. Anthropic executives have indicated that without coordinated action, such tools could eventually reach actors who might deploy them for malicious purposes rather than defensive security work.

Participating organizations will be required to share their findings with the broader industry. The project places particular emphasis on open-source software, which forms the foundation of most modern systems, including critical infrastructure, yet whose maintainers have historically lacked access to sophisticated security resources.

“Open source software constitutes the vast majority of code in modern systems, including the very systems AI agents use to write new software. By giving the maintainers of these critical open source codebases access to a new generation of AI models that can proactively identify and fix vulnerabilities at scale, Project Glasswing offers a credible path to changing that equation,” said Jim Zemlin, CEO of the Linux Foundation. “This is how AI-augmented security can become a trusted sidekick for every maintainer, not just those who can afford expensive security teams.” 

Additionally, Anthropic says it has engaged in ongoing discussions with U.S. government officials regarding Mythos Preview’s capabilities. The company has framed the project in national security terms, arguing that maintaining leadership in AI technology represents a strategic priority for the United States and its allies. Anthropic has been locked in a high-stakes dispute with the Department of Defense about the U.S. military’s use of the startup’s Claude AI model in real-world operations. 

The project’s success will depend partly on whether the collaborative approach can keep pace with rapid advances in AI capabilities. Anthropic has indicated that frontier AI systems are likely to advance substantially within months, potentially creating a dynamic environment where defensive and offensive capabilities evolve in parallel.

“Project Glasswing is a starting point,” Anthropic wrote in a blog post. “No one organization can solve these cybersecurity problems alone: frontier AI developers, other software companies, security researchers, open-source maintainers, and governments across the world all have essential roles to play. The work of defending the world’s cyber infrastructure might take years; frontier AI capabilities are likely to advance substantially over just the next few months. For cyber defenders to come out ahead, we need to act now.”

The post Tech giants launch AI-powered ‘Project Glasswing’ to identify critical software vulnerabilities appeared first on CyberScoop.
