Reading view

There are new articles available, click to refresh the page.

Google spotted an AI-developed zero-day before attackers could use it

Google researchers found a zero-day exploit developed by artificial intelligence and alerted the susceptible vendor to the imminent threat before a well-known cybercrime group initiated a mass-exploitation campaign, the company said in a report released Monday.

The averted disaster probably isn’t the first time attackers used AI to build a zero-day, but it is the first time Google Threat Intelligence Group found compelling evidence that this long-predicted and worrying escalation in vulnerability-exploit development is underway.

“We finally uncovered some evidence this is happening,” John Hultquist, chief analyst at GTIG, told CyberScoop. “This is probably the tip of the iceberg and it’s certainly not going to be the last.”

Google declined to identify the specific vulnerability, which has been patched, or name the “popular open-source, web-based administration tool” it affected. It did, however, note that the defect impacted a Python script that allows attackers to bypass two-factor authentication for the service.

Researchers also withheld details about how they discovered the zero-day exploit or the cybercrime group that was preparing to use it for a large-scale attack spree.

The threat group has a “strong record of high-profile incidents and mass exploitation,” Hultquist said, suggesting the attackers are prominent and well-known among cybersecurity practitioners. 

GTIG is fairly confident the threat group was using AI in a meaningful way throughout the entire process, but it has yet to determine if the technology also discovered the vulnerability it ultimately developed into an exploit.

Whichever AI model the attackers used — Google is confident it wasn’t Gemini or Anthropic’s Mythos — left artifacts throughout the exploit code that are inconsistent with human developers. This evidence, which included documentation strings in Python, highly annotated code and a hallucinated but non-existent CVSS score, tipped Google off to the fact AI was heavily involved, Hultquist said. 

GTIG has been warning about and expecting AI-developed exploits to hit systems in the wild, especially after its Big Sleep AI agent found a zero-day vulnerability in late 2024.

“I think the watershed moment was two years ago when we proved this was possible,” Hultquist said, adding that there are probably several other AI developed zero-days in play now. 

Yet, to him, the discovery of a zero-day exploit developed by AI is less concerning than what this single instance forebodes even further.

“The game’s already begun and we expect the capability trajectory is pretty sharp,” Hultquist said. “We do expect that this will be a much bigger problem, that there will be more devastating zero-day attacks done over this, especially as capabilities grow.”

The post Google spotted an AI-developed zero-day before attackers could use it appeared first on CyberScoop.

Students Boo Commencement Speaker After She Calls AI the 'Next Industrial Revolution'

An anonymous reader quotes a report from 404 Media: Speaking to graduates of University of Central Florida's College of Arts and Humanities and Nicholson School of Communication and Media on May 8, commencement speaker Gloria Caulfield, vice president of strategic alliances at Tavistock Group, told graduating humanities students that AI is the "next industrial revolution," and was met with thousands of booing graduates. "And let's face it, change can be daunting. The rise of artificial intelligence is the next industrial revolution," Caulfield said. At that point, murmurs rippled through the crowd. Caulfield paused, and the crowd erupted into boos. "Oh, what happened?" Caulfield said, turning around with her hands out. "Okay, I struck a chord. May I finish?" Someone in the crowd yelled, "AI SUCKS!" Her speech begins around the hour and 15 minute mark in the UCF livestream. [...] Before the industrial revolution comment, Caulfield praised Jeff Bezos for his passion and use of Amazon as a "stepping stone" to his real dream: spaceflight. Rattled after the crowd's reaction, she continued her speech: "Only a few years ago, AI was not a factor in our lives." The crowd cheered. "Okay. We've got a bipolar topic here I see," Caulfield said. "And now AI capabilities are in the palm of our hands." The crowd booed again. "I love it, passion, let's go," she said. "AI is beginning to challenge all major sectors to find their highest and best use," she continued. "Okay, I don't want any giggles when I say this. We have been through this before, these industrial revolutions. In my graduation era, we were faced with the launch of the internet." She goes on to talk about how cellphones used to be the size of briefcases. "At that time we had no idea how any of these technologies would impact the world and our lives. [...] These were some of the same trepidations and concerns we are now facing. But ultimately it was a game changer for global economic development and the proliferation of new businesses that never existed like Apple and Google and Meta and so many others, and not to mention countless job opportunities. So being an optimist here, AI alongside human intelligence has the potential to help us solve some of humanity's greatest problems. Many of you in this graduating class will play a role in making this happen."

Read more of this story at Slashdot.

Google Says Hackers Used AI To Create Zero Day Security Flaw For the First Time

Google says it has seen the first evidence of cybercriminals using AI to create a zero-day vulnerability. "Google reported its findings to the unnamed firm affected by the vulnerability before releasing its report," reports Politico. "The company then issued a patch to fix the issue." From the report: Google Threat Intelligence Group researchers detailed the development in a report released Monday. Zero-day exploits are considered the most serious type of security flaw because they are not detected by security companies and have no known fixes. The report noted that this was the first time Google had seen evidence of AI being used to develop these vulnerabilities -- marking a major change in the cybersecurity landscape, as it suggests newer AI models could be used to create major exploits, not just find them. Google concluded that Anthropic's Claude Mythos model -- which has already found thousands of vulnerabilities across every major operating system and web browser -- was most likely not used to create the zero-day exploit. [...] The Google Threat Intelligence Group report also details efforts by Russia-linked hacking groups to use AI models to target Ukrainian networks with malware, while North Korean government hacking group APT45 used AI technologies to refine and scale up its cyber methods. John Hultquist, chief analyst at Google Threat Intelligence Group, said the findings made clear that the race to use AI to find network vulnerabilities has "already begun." "For every zero-day we can trace back to AI, there are probably many more out there," Hultquist said. "Threat actors are using AI to boost the speed, scale, and sophistication of their attacks."

Read more of this story at Slashdot.

Anthropic Says 'Evil' Portrayals of AI Were Responsible For Claude's Blackmail Attempts

An anonymous reader quotes a report from TechCrunch: Fictional portrayals of artificial intelligence can have a real effect on AI models, according to Anthropic. Last year, the company said that during pre-release tests involving a fictional company, Claude Opus 4 would often try to blackmail engineers to avoid being replaced by another system. Anthropic later published research suggesting that models from other companies had similar issues with "agentic misalignment." Apparently Anthropic has done more work around that behavior, claiming in a post on X, "We believe the original source of the behavior was internet text that portrays AI as evil and interested in self-preservation." The company went into more detail in a blog post stating that since Claude Haiku 4.5, Anthropic's models "never engage in blackmail [during testing], where previous models would sometimes do so up to 96% of the time." What accounts for the difference? The company said it found that training on "documents about Claude's constitution and fictional stories about AIs behaving admirably improve alignment." Related, Anthropic said that it found training to be more effective when it includes "the principles underlying aligned behavior" and not just "demonstrations of aligned behavior alone." "Doing both together appears to be the most effective strategy," the company said.

Read more of this story at Slashdot.

Mistaiks happen

LEGAL BRIEF By Max Stul Oppenheimer, Esq. To err is no longer the exclusive province of humans. Apologies to Alexander Pope. Artificial intelligence has progressed from hallucinating to enticing humans to join in the hallucination. We need a new term for hybrid human-AI errors caused by reliance on AI hallucinations. I propose “mistaiks.” Read the […]

PlayStation3 Emulator Devs Politely Ask Contributors to Stop Submitting 'AI Slop' Pull Requests

Open-source PS3 emulator RPCS3 "has been around since 2011," Kotaku notes, and has made 70% of the PlayStation 3's library fully playable, "bolstered in part by the many users who contribute to its GitHub page." But their dev team "took to X today to very kindly and civilly request that users 'stop submitting AI slop code pull requests' to its GitHub page." Then they immediately proceeded to tell the AI-brain-rotted tech bros attempting to justify their vibe-coding nonsense to kick rocks in the replies, which is somewhat less civil but far more entertaining to read... My favorite one was when someone asked how the team was certain they weren't rejecting human-written code, to which RPCS3 replied: "You can't possibly handwrite the type of shit AI slop we have been seeing."

Read more of this story at Slashdot.

Amazon Relents, Lets its Programmers Use OpenAI's Codex and Anthropic's Claude

An anonymous reader shared this report from Futurism: In November, Amazon leaders sent an internal memo to employees, pushing them to use its in-house code generating tool, Kiro, over third-party alternatives from competitors. "While we continue to support existing tools in use today, we do not plan to support additional third party, AI development tools," the memo read, as quoted by Reuters at the time. "As part of our builder community, you all play a critical role shaping these products and we use your feedback to aggressively improve them." It was an unusual development, considering the tens of billions of dollars the e-commerce giant has invested in its competitors in the space, including Anthropic and OpenAI... Half a year later, Amazon is singing a dramatically different tune. As Business Insider reports, Amazon is officially throwing in the towel, succumbing to growing calls among employees for access to OpenAI's Codex and Anthropic's Claude... Given the unfortunate optics of opening the floodgates for Codex and Claude Code, an Amazon spokesperson told the publication in a statement that teams are still "primarily using" Kiro, claiming that 83 percent of engineers at the company are leaning on it.

Read more of this story at Slashdot.

Unemployment Ticked Up in America's IT Sector

IT sector unemployment "increased to 3.8% in April from 3.6% in March," reports the Wall Street Journal. But they add that the increase reflects "an ongoing uncertainty in tech as AI continues to play havoc with hiring. That's according to analysis from consulting firm Janco Associates, which bases its findings on data from the U.S. Labor Department." On Friday, the department said the economy added 115,000 jobs, buoyed by gains in industries including retail, transportation and warehousing and healthcare. The unemployment rate was unchanged at 4.3%. But the information sector lost 13,000 jobs in April. While it's still too early to say exactly how AI is affecting employment overall, some businesses, especially in the tech industry, have said it's part of the reason they're cutting staff. In April, Meta Platforms said it would lay off 10% of its staff, or roughly 8,000 people, as it seeks to streamline operations and pay for its own massive investments in AI. Nike will reduce its workforce by roughly 1,400 workers, or about 2%, mostly in its tech department, as it simplifies global operations. And Snap is planning to eliminate 16% of its workforce, or about 1,000 positions, as it aims to boost efficiency. In other areas of IT, which includes telecommunications and data-processing, employment is now down 11%, or 342,000 jobs, from its most recent peak in November 2022. But there's not just AI to blame. Inflation and economic uncertainty linked to the Iran conflict is giving some chief executives and tech leaders reason to pull back or pause their IT hiring, said Janco Chief Executive Victor Janulaitis. The article even notes that postings for software developer jobs "are up 15% year-over-year on job-search platform Indeed, according to Hannah Calhoon, its vice president of AI". But employers do seem to be looking for experienced developers, which could pose a problem for recent college graduates.

Read more of this story at Slashdot.

Cisco Releases Open-Source 'DNA Test for AI Models'

Cisco has released an open-source tool "to trace the origins of AI models," reports SC World, "and compare model similarities for great visibility into the AI supply chain." [Cisco's Model Provenance Kit] is a Python toolkit and command-line interface (CLI) that looks at signals such as metadata and weights to create a "fingerprint" for AI models that can then be compared to other model fingerprints to determine potential shared origins. "Think of Model Provenance Kit as a DNA test for AI models," Cisco researchers wrote. "[...] Much like a DNA test reveals biological origins, the Model Provenance Kit examines both metadata and the actual learned parameters of a model (like a unique genome that comprises a model), to assess whether models share a common origin and identify signs of modification." The tool aims to address gaps in visibility into the AI model supply chain. For example, many organizations utilize open-source models from repositories like HuggingFace, where models could potentially be uploaded with incomplete or deceptive documentation. The Model Provenance Kit provides a way for organizations to verify claims about a model's origins, such as claims that a model is trained from scratch, when in reality it may be copied from another model, Cisco said. This may put organizations at risk of using models with unknown biases, vulnerabilities or manipulations and make it more difficult to resolve any incidents that arise from these risks. Thanks to Slashdot reader spatwei for sharing the news.

Read more of this story at Slashdot.

Newspaper Chain's Reporters Withhold Their Bylines to Protest 'AI-Assisted' Articles

A chain of 30 U.S. newspapers including the Sacramento Bee, the Miami Herald and the Idaho Statesman "has started to use a new AI tool that can summarize traditional articles and spit out different versions for different audiences," reports the New York Times. And the chain's reporters "are not happy about it." Journalists in many of the company's newsrooms are now withholding their bylines from articles created by the new tool, meaning that those articles will run with a generic credit rather than a reporter's name, as is customary. They are also labeled AI-assisted. "We don't want to put our bylines on stories we did not actually write even if they're based on our work," said Ariane Lange, an investigative reporter at the Sacramento Bee and the vice chair of the Sacramento Bee News Guild. "That in itself feels like a lie." The reporters' byline strike is one of the sharpest conflicts yet between journalists and their companies over the use of AI. Related debates are playing out in newsrooms across the country, as publishers experiment with new AI tools to streamline work that used to take hours, and some even use it to write full articles... [E]xecutives have promoted the tool internally as a way to increase the number of articles published and ultimately gain new subscribers... [Eric Nelson, the vice president of local news] said using reporters' bylines on the AI-generated articles was a way to show "authority" on Google so the search engine would rank the articles higher in the results. He also said the company was experimenting with feeding in reporters' notes to create articles. "Journalists who embrace and experiment with this tool are going to win," Nelson said in the meeting. "Journalists who are defiant will fall behind".... McClatchy's public AI policy states that the company uses AI tools to summarize articles to "help readers quickly understand the main points of a single story or catch up on multiple stories about a larger topic," and that editors review the output before publication.

Read more of this story at Slashdot.

Sen. Schumer seeks DHS plan on AI cyber coordination with state, local governments

The Senate’s top Democrat called on the Department of Homeland Security Friday to work closely with state and local governments to defend against artificial intelligence-strengthened hacks. 

Senate Minority Leader Chuck Schumer, D-N.Y., wrote to DHS Secretary Markwayne Mullin to make sure state, local, tribal and territorial (SLTT) governments aren’t left behind as AI models advance, posing new hacking threats.

“There is a race between cybersecurity defenders and AI-enabled hacking — and there’s no time to waste,” Schumer wrote.

“While the White House has reportedly begun hosting meetings about its internal security priorities following these frontier AI cyber breakthroughs, it is glaringly obvious that the Department of Homeland Security needs an updated plan for coordinating these efforts with [state, local, tribal and territorial] governments and implementing procedures to reduce the risk of disruptive cyberattacks enabled by frontier AI,” he stated.

Schumer said he was worried about the capabilities of DHS and its Cybersecurity and Infrastructure Security Agency to carry out that coordination, given federal funding cuts to the Multistate Information Sharing and Analysis Center, and the lack of a Senate-confirmed CISA director for the duration of the second Trump administration.

Schumer wants a plan from DHS by July 1 on coordinating with state and local governments on a range of questions, such as how to identify top AI talent, carry out rapid patching and conduct risk assessments.

“AI is changing the cyber battlefield fast — and we cannot let hackers get there first,” Schumer said in comments accompanying the letter. “Hospitals, power grids, water systems, schools, elections, and emergency services cannot be left exposed while criminal gangs and state-backed hackers race to exploit new AI tools. DHS must immediately help states and localities find and fix vulnerabilities before Americans are hit with outages, disruptions, and attacks that could put lives and livelihoods at risk.”

CISA is using AI to help on the defensive side internally, agency officials recently said.

The post Sen. Schumer seeks DHS plan on AI cyber coordination with state, local governments appeared first on CyberScoop.

Flaw in Claude’s Chrome extension allowed ‘any’ other plugin to hijack victims’ AI

As businesses and governments turn to AI agents to access the internet and perform higher-level tasks, researchers continue to find serious flaws in large language models that can be exploited by bad actors.

The latest discovery comes from browser security firm LayerX, involving a bug in the Chrome extension for Anthropic’s Claude AI model that allows any other plugin – even ones without special permissions – to embed hidden instructions that can take over the agent

“The flaw stems from an instruction in the extension’s code that allows any script running in the origin browser to communicate with Claude’s LLM, but does not verify who is running the script,” wrote LayerX senior researcher Aviad Gispan. “As a result, any extension can invoke a content script (which does not require any special permissions) and issue commands to the Claude extension.”

Gispan said he was able to execute any prompt he wanted, blow through Claude’s safety guardrails, evade user confirmation and perform cross-site actions across multiple Google tools. As a proof of concept, LayerX was able to exploit the flaw to extract files from Google Drive folders and share them with unauthorized parties, surveil recent email activity and send emails on behalf of a user, and pilfer private source code from a connected GitHub repository.

The vulnerability “effectively breaks Chrome’s extension security” by creating “a privilege escalation primitive across extensions, something Chrome’s security model is explicitly designed to prevent,” Gispan wrote.

A graphic depicting how a vulnerability exploits the trust boundaries in Clade Chrome’s extension. (Source: LayerX)


Claude relies on text, user interface semantics, and interpretation of screenshots to make decisions, all things that an attacker can control on the input side. The researchers modified Claude’s user interface to remove labels and indicators around sensitive information, like passwords and sharing feedback, then prompted Claude to share the files with an outside server.

That means cybersecurity defenders often have nothing obviously malicious to detect. Where there is visible activity, the model can be prompted to cover its tracks by deleting emails and other evidence of its actions.

Ax Sharma, Head of Research at Manifold Security, called the vulnerability “a useful demonstration of why monitoring AI agents at the prompt layer is fundamentally insufficient.”

“The most sophisticated part of this attack isn’t the injection, but that the agent’s perceived environment was manipulated to produce actions that looked legitimate from the inside,” said Sharma. “That’s the class of threat the industry needs to be building defenses for.”

Gispan said LayerX reported the flaw to Anthropic on April 27, but claimed the company only issued a “partial” fix to the problem. According to LayerX, Anthropic responded a day later to say that the bug was a duplicate of another vulnerability already being addressed in a future update.   

While that fix, issued May 6, introduced new approval flows for privileged actions that made it harder to exploit the same flaw, Gispan said he was still able to take over Claude’s agent in some scenarios.

“Switching to ‘privileged’ mode, even without the user’s notification or consent, enabled circumventing these security checks and injecting prompts into the Claude extension, as before,” Gispan wrote.

Anthropic did not respond to a request for comment from CyberScoop on the research and mitigation efforts.

The post Flaw in Claude’s Chrome extension allowed ‘any’ other plugin to hijack victims’ AI appeared first on CyberScoop.

Thousands of Vibe-Coded Apps Expose Corporate and Personal Data On the Open Web

An anonymous reader quotes a report from Wired: Security researcher Dor Zvi and his team at the cybersecurity firm he cofounded, RedAccess, analyzed thousands of vibe-coded web applications created using the AI software development tools Lovable, Replit, Base44, and Netlify and found more than 5,000 of them that had virtually no security or authentication of any kind. Many of these web apps allowed anyone who merely finds their web URL to access the apps and their data. Others had only trivial barriers to that access, such as requiring that a visitor sign in with any email address. Around 40 percent of the apps exposed sensitive data, Zvi says, including medical information, financial data, corporate presentations, and strategy documents, as well as detailed logs of customer conversations with chatbots. "The end result is that organizations are actually leaking private data through vibe-coding applications," says Zvi. "This is one of the biggest events ever where people are exposing corporate or other sensitive information to anyone in the world." Zvi says RedAccess' scouring for vulnerable web apps was surprisingly easy. Lovable, Replit, Base44, and Netlify all allow users to host their web apps on those AI companies' own domains, rather than the users'. So the researchers used straightforward Google and Bing searches for those AI companies' domains combined with other search terms to identify thousands of apps that had been vibe coded with the companies' tools. Of the 5,000 AI-coded apps that Zvi says were left publicly accessible to anyone who simply typed their URLs into a browser, he found close to 2,000 that, upon closer inspection, seemed to reveal private data: Screenshots of web apps he shared with WIRED -- several of which WIRED verified were still online and exposed -- showed what appeared to be a hospital's work assignments with the personally identifiable information of doctors, a company's detailed ad purchasing information, what appeared to be another firm's go-to-market strategy presentation, a retailer's full logs of its chatbot's conversations with customers, including the customers' full names and contact information, a shipping firm's cargo records, and assorted sales and financial records from a variety of other companies. In some cases, Zvi says, he found that the exposed apps would have allowed him to gain administrative privileges over systems and even remove other administrators. In the case of Lovable, Zvi says he also found numerous examples of phishing sites that impersonated major corporations, including Bank of America, Costco, FedEx, Trader Joe's, and McDonald's, that appeared to have been created with the AI coding tool and hosted on Lovable's domain. "Anyone from your company at any moment can generate an app, and this is not going through any development cycle or any security check," Zvi says. "People can just start using it in production without asking anyone. And they do."

Read more of this story at Slashdot.

❌