❌

Reading view

There are new articles available, click to refresh the page.

If consequences matter, they should apply to vendors, too

Washington has rediscovered consequences. Just not consistently.

The March 6 executive order rests on a simple, correct idea: cyber-enabled fraud persists because it is profitable, scalable, and too often tolerated. So the government’s answer is to raise the cost. More coordination. More disruption. More prosecutions. More diplomatic pressure on the states that shelter these operations.

Good.

But weeks ago, an OMB Memo rescinded earlier federal software supply chain memos issued during the Biden administration. In practice, that pulled back from the prior attestation-centered model and made tools like the Secure Software Development Attestation Form and SBOM requests optional rather than durable expectations.

Put plainly, we are getting tougher on the people exploiting digital systems while getting softer on the conditions that make those systems so easy to exploit.

The executive order gets something important right. Cyber-enabled fraud is not a collection of random online annoyances. It is an industrialized form of predation: ransomware, phishing, impersonation, sextortion, and financial fraud that’s run as repeatable business models, often transnational and sometimes protected by permissive states. The order responds with a more centralized federal posture built around disruption, coordination, intelligence sharing, prosecution, resilience, and international pressure.

That is directionally correct. Criminal ecosystems do not retreat because we publish better guidance. They retreat when the cost of doing business rises.

But then we arrive at software.

The critique of the old federal assurance regime is not entirely wrong. Compliance can become theater. Bureaucracies are very good at turning legitimate security goals into rituals of form collection and checkbox management. Some skepticism was warranted. OMB says as much explicitly, arguing the prior model became burdensome and prioritized compliance over genuine security investment.

Still, the failure of bad compliance is not proof that accountability itself was the problem.

That is where the logic breaks. The administration is clearly willing to believe that criminal actors respond to deterrence. It is willing to use prosecutions, sanctions, visa restrictions, and coordinated pressure downstream. But upstream, where insecure technology shapes the terrain those criminals exploit, the theory suddenly changes. There, we are told to trust discretion. Local judgment. Flexible, risk-based decisions.

Sometimes that is wisdom. Often it is just a more elegant way of saying no one wants a hard requirement.

This is also why my own position has not changed. In a post I wrote in 2024, I argued that the industry did not need softer expectations or another round of polite encouragement. It needed more concrete action and consequences strong enough to change incentives. The problem was never that we were demanding too much accountability. The problem was that insecure software remained too cheap to ship.

That is the deeper issue. Cybercrime at scale does not thrive only because criminals exist. It thrives because the environment rewards them. Weak identity systems, brittle software, sprawling dependency chains, poor visibility, and diffuse accountability all make predation cheaper. The people who ship avoidable risk rarely absorb the full cost of it. Everyone else does.

So these two policy moves, taken together, reveal something uncomfortable. The government seems to believe in consequences for cybercriminals, but not quite in consequences for insecure production. It wants deterrence for the scammer, but discretion for the supplier.

A coherent cyber strategy would do both. It would aggressively disrupt criminal networks and also create meaningful pressure for secure-by-design production and procurement. It would recognize that punishing attackers matters, but so does changing the terrain that keeps making attack profitable.

The administration is right about one thing: cybercrime will not shrink until the costs of predation rise.

The unanswered question is why that logic should stop at the edge of the scam center.

Brian Fox is the co-founder and CTO of Sonatype.

The post If consequences matter, they should apply to vendors, too appeared first on CyberScoop.

More evidence your AI agents can be turned against you

Agentic AI tools are being pushed into software development pipelines, IT networks and other business workflows. But using these tools can quickly turn into a supply chain nightmare for organizations, introducing untrusted or malicious content into their workstream that are then regularly treated as instructions by the underlying large language models powering the tools.

Researchers at Aikido said this week that they have discovered a new vulnerability that affects most major commercial AI coding apps, including Google Gemini, Claude Code, OpenAI’s Codex, as well as GitHub’s AI Inference tool.

The flaw, which happens when AI tools are integrated into software development automation workflows like GitHub Actions and GitLab, allows maintainers (and in some cases external parties) to send prompts to an LLM that also contain commit messages, pull requests and other software development related commands. And because these messages were delivered as prompts, the underlying LLM will regularly remember them later and interpret them as straightforward instructions.

Although previous research has shown that agentic AI tools can use external data from the internet and other sources as prompting instructions, Aikido bug bounty hunter Rein Daelman claims this is the first evidence that the problem can affect real software development projects on platforms like GitHub.

β€œThis is one of the first verified instances that shows…AI prompt injection can directly compromise GitHub Actions workflows,” wrote Daelman. It also β€œconfirms the risk beyond theoretical discussion: This attack chain is practical, exploitable, and already present in real workflows.”

Because many of these models had high-level privileges within their GitHub repositories, they also had broad authority to act on those malicious instructions, including executing shell commands, editing issues or pull requests and publishing content on GitHub. While some projects only allowed trusted human maintainers to execute major tasks, others could be triggered by external users filing an issue.

Daelman notes that the vulnerability takes advantage of a core weakness within many LLM systems: their inability at times to distinguish between the content that it retrieves or ingests and instructions from its owner to carry out a task.

β€œThe goal is to confuse the model into thinking that the data its meant to be analyzing is actually a prompt,” Daelman wrote. β€œThis is, in essence, the same pathway as being able to prompt inject into a GitHub action.”

An illustration of how malicious parties can send commands to LLM in the form of content. (Source: Aikido)

Daelman said Aikido reported the flaw to Google along with a proof of concept for how it could be exploited. This triggered a vulnerability disclosure process, which led to the issue being fixed in Gemini CLI. However, he emphasized that the flaw is rooted in the core architecture of most AI models, and that the issues in Gemini are β€œnot an isolated case.”

While both Claude Code and OpenAI’s Codex require write permissions, Aikido published simple commands that they claim can override those default settings.

β€œThis should be considered extremely dangerous. In our testing, if an attacker is able to trigger a workflow that uses this setting, it is almost always possible to leak a privileged [GitHub token], Daelman wrote about Claude.Β  β€œEven if user input is not directly embedded into the prompt, but gathered by Claude itself using its available tools.”

The blog noted that Aikido is withholding some of its evidence as it continues to work with β€œmany other Fortune 500 companies” to address the underlying vulnerability, Daelman said the company has observed similar issues in β€œmany high-profile repositories.”

CyberScoop has contacted OpenAI, Anthropic and GitHub to request additional information and comments on Aikido’s research and findings.

The post More evidence your AI agents can be turned against you appeared first on CyberScoop.

❌