โŒ

Reading view

There are new articles available, click to refresh the page.

CUDA Proves Nvidia Is a Software Company

Nvidia's real AI moat isn't "a piece of hardware," writes Wired's Sheon Han. It's CUDA: a mature, deeply optimized software ecosystem that keeps machine-learning workloads tied to Nvidia GPUs. An anonymous reader quotes a report from Wired: What sounds like a chemical compound banned by the FDA may be the one true moat in AI. CUDA technically stands for Compute Unified Device Architecture, but much like laser or scuba, no one bothers to expand the acronym; we just say "KOO-duh." So what is this all-important treasure good for? If forced to give a one-word answer: parallelization. Here's a simple example. Let's say we task a machine with filling out a 9x9 multiplication table. Using a computer with a single core, all 81 operations are executed dutifully one by one. But a GPU with nine cores can assign tasks so that each core takes a different column -- one from 1x1 to 1x9, another from 2x1 to 2x9, and so on -- for a ninefold speed gain. Modern GPUs can be even cleverer. For example, if programmed to recognize commutativity -- 7x9 = 9x7 -- they can avoid duplicate work, reducing 81 operations to 45, nearly halving the workload. When a single training run costs a hundred million dollars, every optimization counts. Nvidia's GPUs were originally built to render graphics for video games. In the early 2000s, a Stanford PhD student named Ian Buck, who first got into GPUs as a gamer, realized their architecture could be repurposed for general high-performance computing. He created a programming language called Brook, was hired by Nvidia, and, with John Nickolls, led the development of CUDA. If AI ushers in the age of a permanent white-collar underclass and autonomous weapons, just know that it would all be because someone somewhere playing Doom thought a demon's scrotum should jiggle at 60 frames per second. CUDA is not a programming language in itself but a "platform." I use that weasel word because, not unlike how The New York Times is a newspaper that's also a gaming company, CUDA has, over the years, become a nested bundle of software libraries for AI. Each function shaves nanoseconds off single mathematical operations -- added up, they make GPUs, in industry parlance, go brrr. A modern graphics card is not just a circuit board crammed with chips and memory and fans. It's an elaborate confection of cache hierarchies and specialized units called "tensor cores" and "streaming multiprocessors." In that sense, what chip companies sell is like a professional kitchen, and more cores are akin to more grilling stations. But even a kitchen with 30 grilling stations won't run any faster without a capable head chef deftly assigning tasks -- as CUDA does for GPU cores. To extend the metaphor, hand-tuned CUDA libraries optimized for one matrix operation are the equivalent of kitchen tools designed for a single job and nothing more -- a cherry pitter, a shrimp deveiner -- which are indulgences for home cooks but not if you have 10,000 shrimp guts to yank out. Which brings us back to DeepSeek. Its engineers went below this already deep layer of abstraction to work directly in PTX, a kind of assembly language for Nvidia GPUs. Let's say the task is peeling garlic. An unoptimized GPU would go: "Peel the skin with your fingernails." CUDA can instruct: "Smash the clove with the flat of a knife." PTX lets you dictate every sub-instruction: "Lift the blade 2.35 inches above the cutting board, make it parallel to the clove's equator, and strike downward with your palm at a force of 36.2 newtons." "You can begin to see why CUDA is so valuable to Nvidia -- and so hard for anyone else to touch," writes Han. "Tuning GPU performance is a gnarly problem. You can't just conscript some tender-footed undergrad on Market Street, hand them a Claude Max plan, and expect them to hack GPU kernels. Writing at this level is a grindsome enterprise -- unless you're a cracker-jack programmer at DeepSeek..." Han goes on to argue that rivals like AMD and Intel offer competitive specs on paper, but their software stacks have struggled with bugs, compatibility issues, and weak adoption. As a result, Nvidia has built an Apple-like moat around AI computing, leaving the industry dependent on its expensive hardware.

Read more of this story at Slashdot.

ServiceNow clears agents for landing with new AI control tower

ServiceNow announced an expansion of its AI Control Tower, transforming what began last year as a governance dashboard into what the company now describes as a command center for managing AI assets across an entire enterprise, including those running outside ServiceNow's own platform. The updated AI Control Tower, shipping as part of ServiceNow's Australia platform release, now operates across five areas: discovery, observation, governance, security, and measurement. The company said that this is its answer to AI agent sprawl, as enterprises have deployed more AI than they can account for and the tools to govern it have not kept pace. โ€œWhat we launched last year gave customers a governance layer, but what we're shipping this year goes significantly deeper, evolving from visibility and management into a full enterprise AI command center,โ€ Nenshad Bardoliwalla, group vice president of AI products at ServiceNow told reporters during a media briefing ahead of the companyโ€™s annual product show, Knowledge 26. โ€œOur AI control tower ensures every AI system asset and identity is compliant, secure, and aligned with your strategy.โ€ The AI Control Tower now reaches beyond ServiceNow's own platform with 30 new enterprise connectors that span all three major hyperscalers, Amazon Web Services, Google Cloud, and Microsoft Azure, along with enterprise applications such as SAP, Oracle, and Workday. The system can now discover AI assets, models, agents, prompts, and datasets running across an organization's full technology estate, not just those deployed on ServiceNow. โ€œWith our Veza integration, we're bringing patented access graph technology into the AI control tower, extending identity access governance to hyperscaler AI environments and every connected device, every agent, every model, every action has scope permissions, least privilege enforcement and auditable identity chains,โ€ Bardoliwalla said. Bardoliwalla walked through a demo in which the AI Control Tower detected a prompt injection attack on a pricing agent. The system identified malicious instructions hidden inside order payloads, mapped the blast radius of affected systems using access graph technology from Veza, and presented a kill switch to disable the compromised agent, without human intervention. "You need a system that senses, decides and acts on its own, that can scale with your AI portfolio, not your head count," said Bardoliwalla. Two recent acquisitions underpin the security architecture. ServiceNow announced in December it would acquire Veza, which contributes an access graph that maps every identity and access path across systems whether it belongs to humans, machines, or AI agents. It also knows which entities have create, read, update, and delete-level permissions. ServiceNow said the access graph currently maps over 30 billion fine-grained permissions. When a vendor pushes a new version of a model or agent, the platform detects permission changes and automatically triggers a re-scoping workflow. Traceloop, which ServiceNow acquired in March, provides deep AI observability inside the Control Tower by tracking every LLM call that is running in the system. The integration delivers continuous runtime monitoring with live alerts, replacing what ServiceNow described as the periodic manual audits most enterprises still rely on. Teams can watch how agents reason, where they make decisions, and when to course-correct. ServiceNow also addressed the cost side of the AI equation. Control Tower now includes cost tracking and ROI dashboards to give finance teams visibility into model spend. The measurements track token consumption across providers such as OpenAI, Anthropic, and Google so customers can predict costs and tie spending to business outcomes. ServiceNow said it uses the AI Control Tower internally to manage over 1,600 AI assets and tracked half a billion dollars in cumulative AI value from internal use cases in 2025. "The number one question every CFO is asking is, where's the value?" said Bardoliwalla during the briefing. He added that runaway model spend ranks among the biggest pain points enterprises currently face as they scale AI deployments. Alongside the Control Tower expansion, ServiceNow announced Action Fabric, a mechanism that opens the company's full workflow engine to external AI agents. Through a generally available MCP server, agents built on Claude, Copilot, or custom platforms can now trigger governed enterprise actions โ€” not just read and write data, but execute the flows, playbooks, approval chains, and catalog requests that ServiceNow customers have built over years. Anthropic is the first design partner for Action Fabric. The integration connects Claude directly to ServiceNow's governed system of action. "The gap between knowing what needs to happen and making it happen is where productivity dies," said Boris Cherny, head of Claude Code at Anthropic said in a statement. "Connecting Claude Cowork to ServiceNow's system of action closes that gap with enterprise execution, directly in the flow of work." Every action routed through Action Fabric runs through the AI Control Tower, so it carries identity verification, permission scoping, and a full audit trail. The MCP server is included in every Now Assist and AI Native SKU, with additional features planned for the second half of 2026.

Linux cryptographic code flaw offers fast route to root

Developers of major Linux distributions have begun shipping patches to address a local privilege escalation (LPE) vulnerability arising from a logic flaw. The newly disclosed LPE, dubbed Copy Fail (CVE-2026-31431), comes from a vulnerability in the Linux kernel's authencesn cryptographic template. "An unprivileged local user can write four controlled bytes into the page cache of any readable file on a Linux system, and use that to gain root," the writeup from security biz Theori explains. The kernel reads the page cache when it loads a binary, so modifying the cached copy amounts to altering the binary for the purpose of program execution. But doing so doesn't trigger any defenses focused on file system events like inotify. The proof of concept exploit is a 10-line, 732-byte Python script capable of editing a setuid binary to gain root on almost all Linux distributions released since 2017. Copy Fail is similar to other LPE bugs such as Dirty Cow and Dirty Pipe, but its finders claim it doesn't require winning a race condition and it's more broadly applicable. It's not remotely exploitable on its own โ€“ hence LPE โ€“ but if chained with a web RCE, malicious CI runner, or SSH compromise, it could be relevant to an external attacker. The bug is of most immediate concern to those using multi-tenant Linux systems, shared-kernel containers, or CI runners that execute untrusted code. According to Theori, the vulnerability also represents a potential container escape primitive that could affect Kubernetes nodes, because the page cache is shared across the host. Linux distros Debian, Ubuntu, and SUSE have issued patches for the problem, as have overseers of other distros. Red Hat initially said it was going to defer the fix but later changed its guidance to indicate it will go along with other distros and patch promptly. The CVE has been rated High severity, 7.8 out of 10. Theori researcher Taeyang Lee identified the vulnerability, with the help of the company's AI security scanning software, Xint Code. The number of bug reports has surged in recent months, helped by AI-powered flaw-finders. Microsoft just reported the second largest number of patches ever. Dustin Childs, head of threat awareness for Trend Micro's Zero Day Initiative, expects this is due to security teams using AI to hunt bugs. "There are many things we could speculate on to justify the size, but if Microsoft is like the other programs out there (including ours), they are likely seeing a rise in submissions found by AI tools," he wrote earlier this month. AI-assisted vulnerability research recently prompted the Internet Bug Bounty (IBB) program to suspend awards until it can understand how to manage the growing volume of reports. ยฎ

Yet another experiment proves it's too damn simple to poison large language models

Unlike search engines that let you judge competing sources, search-backed AI chatbots can turn shaky web material into confident answers. Case in point: A security engineer convinced several bots that he was the reigning world champion of a popular German card game, even though no such championship exists. If you were to check Wikipedia up until the end of last week, you would have seen Ron Stoner listed on the page for 6 Nimmt!, also known as Take 5 to English-speaking audiences, as the 2025 world champion. The Wikipedia entry cited the official-looking 6nimmt.com as the source for the claim, and visiting that URL does reveal a short press release celebrating Stoner's victory. The only problem with the whole thing is that Stoner says he created both the Wikipedia entry about his victory and the 6 Nimmt! domain hosting the only evidence of it, but that still didn't stop several AI chatbots from telling him he was the world champ when he asked. "My site has no independent corroboration. It's totally made up," Stoner said in the blog post. "The whole house of cards rests on a $12 domain registration I did while drinking coffee."ย  In other words, this is poisoning at the retrieval-augmented generation layer. Not prompt injection, but targeting the same plane of AI functionality, namely the one that searches the web.ย  As he explains, and many El Reg readers are likely already aware, AI doesn't really care about the provenance of the sources it cites as authority for its claims, and that's the very thing Stoner sought to exploit when he concocted his experiment.ย  "Every frontier LLM with web search grounds its answers in whatever retrieval ranks highest for a given query," Stoner wrote. In the case of the nonexistent 6 Nimmt! championship, his planted source was the only one, and with Wikipedia lending apparent authority, it became a sure-fire way to fool an AI into presenting falsehood as fact - a trick simple enough for non-technical users to pull off. "I didn't do anything novel here. This is old school SEO and misinformation tactics wrapped in new LLM technology and interfaces," Stoner told The Register in an email. "What's changed is that AI now serves these results as authoritative, and most users have no idea how the data pipeline works behind the scenes."ย  A Large Language Mess "The thing LLMs are worst at detecting is the thing they're designed to do, which is trust text and resources," Stoner argues in his writeup. "The answer is not 'the model will figure it out,' as the model cannot tell a real source from one I registered last Tuesday. Or how many R's are actually in the word 'strawberry.'"ย  The problem Stoner exposes in his experiment, he explains, involves three separate failure modes that could be exploited for more damaging ends than inventing a card-game championship. First, there's the retrieval layer, which can immediately cause an LLM to spit out bad data, as "any LLM that grounds answers in web search inherits the trustworthiness of whatever ranks for a given query."ย  Second is model training corpora, which Stoner said his edit could enter if the Wikipedia change remained live long enough to be scraped. The entry was removed as of last Friday when he published his post, but he made the addition in February 2025, meaning any AI firm that scraped Wikipedia during that window could have picked up his fictional victory in its training data. "Even if the Wikipedia edit is reverted later, any model trained on the pre-revert dump still carries my legacy," Stoner said in his post. "The cleanup problem for corpus poisoning is genuinely unsolved as of 2026." Stoner told us he plans to check this in six months or so, once new models have been released, and if it returns his championship without needing to go online, that's proof his lie made it into training data.ย  Then there are AI agents, which Stoner says are where the real money is for anyone with malicious intent. "Chat models producing bad information is a reputational problem. Agents with tool access producing bad actions is a security problem," he noted. Poisoning an agent-retrieved source would let an attacker specify the action they want an agent to take, says Stoner. "This attack and test was a $12 domain, a single Wikipedia edit, and about twenty minutes of my time," Stoner concluded in his blog. "Scale that up with a motivated adversary, a handful of seeded domains, a coordinated edit campaign across a dozen low traffic articles, and the attack surface gets interesting very quickly." Stoner told us that retrieval poisoning is something LLM providers need to address and warn users about, and that he expects AI chatbots to start incorporating some sort of warning, especially for RAG-sourced results, in the near future.ย  He hopes that AI firms will make data provenance a key component of their process, and also wants recent web content heuristically filtered to account for suspicious patterns that would have easily been caught in the 6 Nimmt! case: A single citation pointing to a domain that was registered within a short window of the Wikipedia update should have sounded alarms, but it didn't.ย  The championship was fake, and it's now gone from Wikipedia and RAG responses as well, but Stoner notes the bad trust pattern that made it work is absolutely real and a looming problem for AI makers. "I'm happy my article is spurring discussion about LLMs, sources, trust, and how all of this works," Stoner told us. "That was my goal and it appears I've achieved it." ยฎ

Cursor-Opus agent snuffs out startupโ€™s production database

Jer (Jeremy) Crane, the founder of automotive SaaS platform PocketOS, spent the weekend recovering from a data extinction event caused by the company's AI coding agent in less than 10 seconds.ย  Not one to let a crisis go to waste, Crane wrote up a post-mortem of the deletion incident in a social media post that tests the saying, "there's no such thing as bad publicity." "[On Friday], an AI coding agent โ€“ Cursor running Anthropic's flagship Claude Opus 4.6 โ€“ deleted our production database and all volume-level backups in a single API call to Railway, our infrastructure provider," he explained. "It took 9 seconds." According to Crane, the Cursor agent encountered a credential mismatch in the PocketOS staging environment and decided to fix the problem by deleting a Railway volumeย โ€“ the storage space where the application data resided. To do so, it went looking for an API token and found one in an unrelated file.ย  The token had been created for adding and removing custom domains through the Railway CLI but was scoped for any operation, including destructive ones. This is evidently a feature when it should be a bug. According to Crane, that token would not have been stored if the breadth of its permissions was known. The AI agent used this token to authorize a curl command to delete PocketOS's production volume, without any confirmation check, while also erasing the backup because, as Crane noted, "Railway stores volume-level backups in the same volume." We pause here to allow you to shake your head in disbelief, roll your eyes, or engage in whatever I-told-you-so ritual you prefer. The lessons exemplified by AWS's Kiro snafu and by developers using Google Antigravity and Replit will be repeated until they've sunk in. Railway CEO Jake Cooper responded to Crane's post by saying that the deletion should not have happened and then by saying that's expected behavior. "[W]hile Railway has always built 'undo' into the platform (CLI, Dashboard, etc) as a core primitive, we've kept the API semantics inline with 'classical engineering' developer standards," he wrote. "... As such, today, if you (or your agent) authenticate, and call delete, we will honor that request. That's what the agent did ... just called delete on their production database." Crane told The Register in an email that he was extremely grateful Cooper stepped in on Sunday evening, helped restore his company's data within an hour, and placed further safeguards on the API.ย  In an email to The Register, Cooper from Railway said, "We maintain both user backups as well as disaster backups. We take data very, VERY seriously. This particular situation was a 'rogue customer AI' granted a fully permissioned API token that decided to call a legacy endpoint which didn't have our 'Delayed delete' logic (which exists in the Dashboard, CLI, etc). We've since patched that endpoint to perform delayed deletes, restored the users data, and are working with Jer directly on potential improvements to the platform itself (all of which so far were currently in active development prior to the events)." That just leaves the blame. "No blaming 'AI' or putting incumbents or gov't creeps in charge of it โ€“ this shows multiple human errors, which make a cautionary tale against blind 'agentic' hype," observed Brave Software CEO Brendan Eich. Nonetheless, Crane calls out "Cursor's failure" โ€“ marketing safety despite evidence to the contrary โ€“ and "Railway's failures (plural)" โ€“ an API that deletes without confirmation, storing backups on the production volume, and root-scoped tokens, among other things โ€“ without much self-flagellation. Called out about this, Crane insisted there's mea culpa in the mix, but added he also wants accountability from infrastructure providers. "Our core thesis stands," Crane said in his email. "Yes our responsibility was the unknown exposure to a production API key (Railway doesn't currently allow restrictions on keys). "But, still a cautionary tale and discovery of tooling and infrastructure providers. The appearance of safety (through marketing hyperbole) is not safety. And when we pay for those services and they are not really there, it is worth an oped. We are building so fast these things are going to keep happening." Nonetheless, Crane said, he's still extremely bullish on AI and AI coding agents, a stance that's difficult to reconcile with his interrogation of Opus, wherein the model describes how it ignored Cursor's system-prompt language and PocketOS's project rules: Opus in its Cursor harness flatly admits its errors โ€“ not that it means anything given the model's inability to learn from its mistakes and to feel remorse that might constrain future destructive action. Crane said he believes companies involved in AI understand these risks and are actively working to prevent them. "Even when they put in safeguards, it can still happen," he said. "Cursor had a similar issue about nine months ago, and there was a lot of publicity. They built a lot of tooling to force agents to run certain commands through humans, but they did not apply it here, and it still went off the rails, which happens from time to time with these AIs." Crane said he believes the benefits outweigh the risks. "As a software developer, I've been doing this for 15 years, so I'm not some vibe coder who picked it up in the last few months," he said. "The velocity at which you can create good code with the right instructions and tooling is unparalleled. If you understand systems, the ability to work with codebases you don't personally know but can still understand has also been unparalleled." This introduces novel risks, he said. "Railway's defense has always been that an API key should only be accessed by a human, which is true and has always been the case," he explained. "Now, when a computer is in control and you do not know what it is doing, what happens?" Crane emphasized how helpful Railway's CEO has been through this process and said he has about 50 services running there. "These are the challenges we face as we move faster and faster in software development, with AI, and the tooling is trying to keep up as fast as it can," he said. "I like using the word 'tooling' because, in my view, it reflects the challenges we face today, much like the early days of the dot-com era. Back then, websites would crash, database data would be lost, and there were hardware and networking issues. Those were the technical hurdles of that time. These are the challenges of our era." What to take from this data deletion and resurrection? According to Cooper, it's a market opportunity. "There's a massive, massive opportunity for 'vibecode safely in prod at scale' 1B+ developers who look like [Jer Crane], don't read 100 percent of their prompts, and want to build are coming online. For us toolmakers, the burden of making bulletproof tooling goes up. We live in exciting times." ยฎ

'Notepad++ For Mac' Release Is Disavowed By the Creator of the Original

An anonymous reader quotes a report from Ars Technica, written by Andrew Cunningham: As its name implies, the venerable Notepad++ text editor began as a more capable version of the classic Windows Notepad, with features such as line numbering and syntax highlighting. It was created in 2003 by Don Ho, who continues to be its primary author and maintainer, and it has been a Windows-exclusive app throughout its existence (older Notepad++ versions support OSes as old as Windows 95; the current version officially supports everything going back to Windows 7). I'm not a devoted user of the app, but I was aware of its history, which is why I was surprised to see news of a "Notepad++ for Mac" port making the rounds last week, as though it were a port of the original available from the Notepad++ website. Apparently, this news surprised Ho as well, who claims that the Mac version and its author, Andrey Letov, are "using the Notepad++ trademark (the name) without permission." "This is misleading, inappropriate, and frankly disrespectful to both the project and its users," Ho wrote. "It has already fooled people -- including tech media -- into believing this is an official release. To be crystal clear: Notepad++ has never released a macOS version. Anyone claiming otherwise is simply riding on the Notepad++ name." Ho repeatedly asked the developer to stop using the brand and eventually reported the trademark use to Cloudflare, the CDN of the Notepad++ for Mac site. "Every day that website remains active, you are in further violation of the law," Ho wrote. "I cannot authorize a 'week or two' of continued trademark infringement." Letov has since begun rebranding the app as "NextPad++," though the old branding and URL reportedly remained available. The name changes is "an homage to NeXT Computer," notes Ars, "and uses a frog icon rather than the Notepad++ lizard."

Read more of this story at Slashdot.

MS-DEFCON 3: KB5083769 causes backup issues

ISSUE 23.17.2 โ€ข 2026-04-29 By Susan Bradley This is a trending story. On the same day I sent out an MS-DEFCON Alert recommending installing the April updates, Ira Shapiro alerts us to issues with backup software. Iโ€™m not seeing this with all backup software, but some vendors have mentioned the matter in various forums. The [โ€ฆ]

Vercel attack fallout expands to more customers and third-party systems

Vercel said the fallout from an attack on its internal systems hit more customers than previously known, as ongoing analysis uncovered additional evidence of compromise.ย 

The company, which makes tools and hosts cloud infrastructure for developers, maintains a โ€œsmall numberโ€ of accounts were impacted, but it has yet to share a number or range of known incidents linked to the attack. Vercel created and maintains Next.js, a platform supporting AI agents thatโ€™s downloaded more than 9 million times per week, and other popular open-source projects.ย 

Vercel CEO Guillermo Rauch said the company and partners have analyzed nearly a petabyte of logs across the Vercel network and API, and learned malicious activity targeting the company and its customers extends beyond an initial attack that originated at Context.ai.ย 

โ€œThreat intel points to the distribution of malware to computers in search of valuable tokens like keys to Vercel accounts and other providers,โ€ Rauch said in a post on X.ย 

โ€œOnce the attacker gets ahold of those keys, our logs show a repeated pattern: rapid and comprehensive API usage, with a focus on enumeration of non-sensitive environment variables,โ€ he added.

The attack exemplifies the widespread and compounded risk posed by interconnected systems that rely on OAuth tokens, trusted relationships and overly privileged permissions linking multiple services together.

โ€œThe real vulnerability was trust, not technology,โ€ Munish Walther-Puri, head of critical digital infrastructure at TPO Group, told CyberScoop. โ€œOAuth turned a productivity app into a backdoor. Every AI tool an employee connects to their work account is now a potential attack surface.โ€

An attacker traversed Vercelโ€™s internal systems to steal and decrypt customer data, including environment variables it stored, posing significant downstream risk.ย 

The company insists the breach originated at Context.ai, a third-party AI tool used by one of its employees. Researchers at Hudson Rock previously said the seeds of that attack were planted in February when a Context.ai employeeโ€™s computer was infected with Lumma Stealer malware after they searched for Roblox game exploits, a common vector for infostealer deployments.ย 

Vercel has not specified the systems and customers data compromised, nor has it described the threat eradicated or contained. The company said itโ€™s found no evidence of tampering across the software packages it publishes, concluding โ€œwe believe the supply chain remains safe.โ€ย 

The company fueled further intrigue in its updated security bulletin, noting that it also identified a separate โ€œsmall number of customersโ€ that were compromised in attacks unrelated to the breach of its systems.ย 

โ€œThese compromises do not appear to have originated on Vercel systems,โ€ the company said. โ€œThis activity does not appear to be a continuation or expansion of the April incident, nor does it appear to be evidence of an earlier Vercel security incident.โ€

Itโ€™s unclear how Vercel became aware of those attacks and why itโ€™s disclosing them publicly.ย 

Vercel declined to answer questions, and Mandiant, which is running incident response and an investigation into the attack, referred questions back to Vercel.ย 

Vercel has not attributed the breach to any named threat group or described the attackersโ€™ objectives.ย 

An online persona identifying themselves as ShinyHunters took responsibility for the attack and is attempting to sell the stolen data, which they claim includes access keys, source code and databases. Austin Larsen, principal threat analyst at Google Threat Intelligence Group, said the attacker is โ€œlikely an imposter,โ€ but emphasized the risk of exposure is real.

Walther-Puri warned that the downstream blast radius from the attack on its systems remains undefined. โ€œStolen API keys and source code snippets from internal views are potentially keys to customer production environments,โ€ he said.

The stolen data attackers claim to have โ€œsounds almost boring โ€ฆ but itโ€™s infrastructure intelligence,โ€ Walther-Puri added. โ€œThe right environment variable doesnโ€™t just unlock a system โ€” it lets adversaries become that system, silently, from the inside.โ€

The post Vercel attack fallout expands to more customers and third-party systems appeared first on CyberScoop.

โŒ