Reading view

There are new articles available, click to refresh the page.

Google spotted an AI-developed zero-day before attackers could use it

Google researchers found a zero-day exploit developed by artificial intelligence and alerted the susceptible vendor to the imminent threat before a well-known cybercrime group initiated a mass-exploitation campaign, the company said in a report released Monday.

The averted disaster probably isn’t the first time attackers used AI to build a zero-day, but it is the first time Google Threat Intelligence Group found compelling evidence that this long-predicted and worrying escalation in vulnerability-exploit development is underway.

“We finally uncovered some evidence this is happening,” John Hultquist, chief analyst at GTIG, told CyberScoop. “This is probably the tip of the iceberg and it’s certainly not going to be the last.”

Google declined to identify the specific vulnerability, which has been patched, or name the “popular open-source, web-based administration tool” it affected. It did, however, note that the defect impacted a Python script that allows attackers to bypass two-factor authentication for the service.

Researchers also withheld details about how they discovered the zero-day exploit or the cybercrime group that was preparing to use it for a large-scale attack spree.

The threat group has a “strong record of high-profile incidents and mass exploitation,” Hultquist said, suggesting the attackers are prominent and well-known among cybersecurity practitioners. 

GTIG is fairly confident the threat group was using AI in a meaningful way throughout the entire process, but it has yet to determine if the technology also discovered the vulnerability it ultimately developed into an exploit.

Whichever AI model the attackers used — Google is confident it wasn’t Gemini or Anthropic’s Mythos — left artifacts throughout the exploit code that are inconsistent with human developers. This evidence, which included documentation strings in Python, highly annotated code and a hallucinated but non-existent CVSS score, tipped Google off to the fact AI was heavily involved, Hultquist said. 

GTIG has been warning about and expecting AI-developed exploits to hit systems in the wild, especially after its Big Sleep AI agent found a zero-day vulnerability in late 2024.

“I think the watershed moment was two years ago when we proved this was possible,” Hultquist said, adding that there are probably several other AI developed zero-days in play now. 

Yet, to him, the discovery of a zero-day exploit developed by AI is less concerning than what this single instance forebodes even further.

“The game’s already begun and we expect the capability trajectory is pretty sharp,” Hultquist said. “We do expect that this will be a much bigger problem, that there will be more devastating zero-day attacks done over this, especially as capabilities grow.”

The post Google spotted an AI-developed zero-day before attackers could use it appeared first on CyberScoop.

Google Unveils Screenless Fitbit Air, Google Health App To Replace Fitbit

An anonymous reader quotes a report from Ars Technica: Wearables have really come full circle. The early Fitbits didn't have screens, but the move to smartwatches put a screen on everyone's wrist. Now, devices like Whoop and Hume are designed as data trackers first and foremost without so much as a clock. Google's newest wearable jumps on that trend: The Fitbit Air doesn't have a screen, but it does have a suite of health sensors that pipe data into the new Google Health app. And if you want, Google has a new AI-powered health coach in the app ready to tell you what that data means (maybe). The Fitbit Air itself is a small plastic puck about 1.4 inches long and 0.7 inches wide. It slots into various bands that hold the bottom-mounted sensors against your wrist. There's no display pointing upward, so the entire device is covered by the fabric or plastic of the band. It's a streamlined and potentially stylish look -- in uncharacteristic fashion, Google has plenty of colors and style options available, including a special-edition Steph Curry version. You may have heard chatter about Curry being seen teasing a new screenless Fitbit, and this is it. [...] The Fitbit app is getting a major makeover and a new name. An update in the coming weeks will transform that app into Google Health, featuring a new interface with a more extensive Material Expressive aesthetic and redesigned menus and tabs. You also won't see Fitbit branding in as many places -- the Fitbit Premium subscription will become Google Health Premium. Without a subscription, the app still does all the basic things, like tracking your health stats, automatically logging workouts, and showing it all in a pretty dashboard. With the Premium subscription, you get all the features from Fitbit Premium plus the new AI Health Coach. It's a chatbot, so you can ask it about any health or wellness topics, and the answers are grounded in your health data. The Fitbit Air launches May 26 for $99.99, includes a Performance Loop band, and comes with three months of the new Google Health Premium that replaces Fitbit Premium and adds Google's AI Health Coach. Meanwhile, Google Health Premium will cost $10 per month or $100 per year, though it's included with AI Pro or AI Ultra. Non-subscribers can still use basic tracking features. Ars also notes that when Google Fit shuts down later this year, users will need to migrate their data to Google Health.

Read more of this story at Slashdot.

Supreme Court justices skeptically question both sides in geofence surveillance case

Supreme Court justices lobbed sharp questions at both sides about the constitutionality of geofence warrants during oral arguments Monday in a case that could have broader implications for law enforcement collection of Americans’ data.

Chatrie v. The United States stems from the 2019 conviction of Okello Chatrie in a bank robbery, where authorities obtained location data from Google about people within a specific area at a specific time.

In questioning an attorney for the petitioner, Adam Unikowsky, a number of conservative justices — including Chief Justice John Roberts — asked why the government shouldn’t be allowed to access location data taken from a third party given that Chatrie had “opted-in” to share that data.

“I just don’t agree that one should have to flip off one’s location history as well as other cloud services to avoid government surveillance,” Unikowsky answered, raising whether the government was entitled to getting emails or calendar data that are also stored in the cloud. (Google has since moved location data to users’ individual devices.)

Some liberal justices, too, had skeptical questions for Unikowsky. “This identifies a place, a crime — a limited time frame, but a time frame,” Sonia Sotomayor said, referring to protections from open-ended searches under the Fourth Amendment. “So it’s not a general warrant in this historical sense.” But she also said that because location data follows users everywhere: “When the police are searching or asking for a search result, there’s no way to predict whether they’re going to invade your privacy.”

The line of questioning about how far a government request for bulk data can go continued from both conservative and liberal justices when it was the government’s turn to argue its position. Justices probed skeptically about what made emails or calendar data different, and whether the government could do a physical search of all of the lockers in a storage facility to find one gun they believed might be there.

It was an unusually long session for the Supreme Court, going two hours. A ruling could come in June or July. Predicting how a court will decide based on justices’ questions is famously fraught. Only one justice, Samuel Alito, hinted strongly at how he was likely to decide.

“I’m struggling to understand why we are here in this case, other than the fact that at least four of us voted to take it,” he said. He said he didn’t believe anything new of note could come out of the court based on lower court rulings during questioning of Unikowsky. “We are all free to write law review articles on this fascinating subject, but that seems like that’s what you’re asking for.”

Orin Kerr, a Stanford University law professor who filed a friend of the court brief on the government’s side, said he believed based on the oral arguments that the court will say geofence warrants can be drafted lawfully.

“The Justices seem likely to reject the broader argument Chatrie made about the lawfulness of the warrant,” he wrote on social media. “They’ll probably say the geofence warrants have to be limited in time and space.”

Casey Waughn, a privacy lawyer and senior associate at Armstrong Teasdale, was struck by the absence of a major focus on “third-party doctrine,” under which there’s no reasonable expectation of privacy when citizens give their information to an outside party like a bank. 

She also honed in on arguments Unikowsky made.

“His argument really gave two lines to go down for the judges, and one was that you have a property interest in your data on the cloud, and the other was that you have a reasonable expectation of privacy for your data on the cloud,” she told CyberScoop. “And historically, both of those avenues have been grounds on which the Court has found that …issue is protected under the Fourth Amendment, and therefore that the actions constituted a search. So I thought it was interesting that he went and kind of argued both of those lanes.”

Alan Butler, executive director of the Electronic Privacy Information Center that filed a friend of the court brief on the side of the petitioner, said the stakes in the case are high.

“Today’s arguments underscored that the Supreme Court is weighing one of the most consequential privacy questions of the digital age: whether the government can use sweeping location data searches to identify a suspect,” he said in a statement after the arguments. “The Court should hold that the Constitution protects our digital data even when it is stored by an app or cloud provider. The Court should ensure that the highly sensitive records generated by our phones cannot be obtained without particularized suspicion and close judicial oversight.” 

The post Supreme Court justices skeptically question both sides in geofence surveillance case appeared first on CyberScoop.

Google To Invest Up To $40 Billion In Anthropic

Google plans to invest up to $40 billion more in Anthropic, starting with $10 billion now and another $30 billion tied to performance milestones. CNBC reports: Anthropic said the agreement expands on a longstanding partnership between the two companies. Earlier this month, Anthropic secured 5 gigawatts worth of computing capacity as part of an announcement with Google and Broadcom that will start to come online next year. Anthropic could decide to add additional gigawatts of compute in the future. [...] The relationship between the two companies (Google and Anthropic) dates back to 2023, when Google invested $300 million in the AI lab for a stake of about 10%. Months later, Google poured in another $2 billion. Ahead of Friday's announcement, Google's investment in Anthropic exceeded $3 billion, and it reportedly owned a 14% stake in the company. Now, the leading tech companies are investing tens of billions of dollars in the frontier AI labs -- OpenAI and Anthropic -- in funding rounds that far exceed any prior investments in startups. Much of that investment will return in the form of revenue.

Read more of this story at Slashdot.

The Supreme Court is about to decide how far geofence warrants can go

The Supreme Court will hear oral arguments Monday in a case that could limit the government’s ability to obtain bulk digital data of device users with a single warrant, in a rare instance of the country’s top justices taking on digital rights.

Chatrie v. The United States is the first major Fourth Amendment case the court has taken up since 2018, despite the proliferation of technology that impacts privacy since then. At the center of what the justices will address are so-called geofence warrants, which compel companies to disclose user data from a certain time and location.

“It’s a really interesting question about a law enforcement tool that would have been unimaginable a few decades ago, where you can basically look at potentially every phone, for example, that passed through a particular area in a particular window,” said John Villasenor, a law professor at UCLA and nonresident senior fellow at the Brookings Institution.

Both conservative and liberal civil liberties advocates have lined up in favor of the petitioner, leaving the United States government with fewer friend-of-the-court briefs on its side. Okello Chatrie was convicted for a 2019 bank robbery after police used a geofence warrant to obtain information from Google about users during a one-hour period and 17.5-acre area, then refined the search.

In Congress, Democrats have raised concerns about geofence warrants as they might pertain to abortion rights, while Republicans have raised concerns about their use in tracking suspects linked to the Jan. 6, 2021 insurrection at the Capitol.

Courts have been divided on the legality of the geofence warrant in Chatrie’s case. Google has since stopped storing location data in the cloud and moved records directly to user devices, but those siding with Chatrie say it could have broader implications for financial records, search history records, chat bot records and more.

“We think it’s important that courts get it right and that, among other things, courts recognize that we have a property interest in many of our digital records,” said Brent Skorup, a legal fellow at the Cato Institute, which has filed an amicus brief on behalf of the petitioner. “If the government can get those digital records without a warrant, that renders the Fourth Amendment pretty empty and we’re not secure in our privacy and traditional rights to having control of our private papers and effects.”

The United States noted that Chatrie opted into Google’s storage of his location history, and that the information’s collection is not substantially different from identification of other markers of someone’s presence, like tire tracks or boot prints.

“Individuals generally have no reasonable expectation of privacy in information disclosed to a third party and then conveyed by the third party to the government,” it wrote. A collection of 32 attorneys general have sided with the U.S. government, as well as some law professors.

In the 2018 case, Carpenter v. The United States, the Supreme Court limited the applicability of that “third-party doctrine” — echoed by the U.S. government’s argument in the Chatrie case — to search and seizure of 127 days’ worth of someone’s cell site location information, ruling that it constituted a search under the Fourth Amendment and therefore required a warrant.

The type of warrant is at issue in Chatrie v. The United States. A Virginia court ultimately found that geofence warrant unconstitutional because it was not sufficiently specific and was not supported by probable cause for every user whose data was collected. However, the court ruled the evidence was admissible in court, because law enforcement acted in “good faith” in the belief that it was constitutional.

Villasenor said the court could clear a lot up by addressing the good faith exception, something lower courts have used to sidestep substantial constitutional rulings, according to one study. But both Villasenor and Skorup say it’s possible that the Supreme Court also could fail to arrive at a conclusive ruling on the issues at stake in Chatrie.

While some civil liberties advocates are optimistic about the outcome due to the court’s ruling in Carpenter, three justices in that case have since been replaced by others.

The rarity of such digital privacy cases rising to the level of the Supreme Court might be simply a function of a crowded court agenda, but it’s not the only possibility.

“Part of it might be because the court has not developed a consensus view about how to approach these yet,” Skorup said. “It’s speculation on my part, but they probably have some ambivalence about taking up cases where they know that they’re not going to speak with one voice, or they know they might speak with fractured voices.”

Google itself filed a brief in the case, but sided with neither party, saying it took no position on the warrant in Chatrie’s specific case.

“But it urges the Court to hold that Google Location History and other similar digital documents stored remotely deserve the Fourth Amendment’s protection,” it wrote. “A contrary rule would leave the intimate details of millions of Americans’ daily lives — data that will exist in many forms as technology rapidly develops — exposed to warrantless surveillance.”

The post The Supreme Court is about to decide how far geofence warrants can go appeared first on CyberScoop.

Google Unveils Two New AI Chips For the 'Agentic Era'

Google announced two new tensor processing units (TPUs) for the "agentic era," with separate processors dedicated to training and inference. "With the rise of AI agents, we determined the community would benefit from chips individually specialized to the needs of training and serving," Amin Vahdat, a Google senior vice president and chief technologist for AI and infrastructure, said in a blog post. Both chips will become available later this year. CNBC reports: After years of producing chips that can both train artificial intelligence models and handle inference work, Google is separating those tasks into distinct processors, its latest effort to take on Nvidia in AI hardware. [...] None of the tech giants are displacing Nvidia, and Google isn't even comparing the performance of its new chips with those from the AI chip leader. Google did say the training chip enables 2.8 times the performance of the seventh-generation Ironwood TPU, announced in November, for the same price, while performance is 80% better for the inference processor. Nvidia said its upcoming Groq 3 LPU hardware will draw on large quantities of static random-access memory, or SRAM, which is used by Cerebras, an AI chipmaker that filed to go public earlier this month. Google's new inference chip, dubbed TPU 8i, also relies on SRAM. Each chip contains 384 megabytes of SRAM, triple the amount in Ironwood. The architecture is designed "to deliver the massive throughput and low latency needed to concurrently run millions of agents cost-effectively," Sundar Pichai, CEO of Google parent Alphabet, wrote in a blog post.

Read more of this story at Slashdot.

Google's Internal Politics Leave It Playing Catch-Up On AI Coding

An anonymous reader quotes a report from Bloomberg: At Google, leaders are anxious about falling behind in the race to offer AI coding tools, especially as rivals like Anthropic PBC offer more effective and popular tools to businesses, according to people familiar with the matter. The search giant is now working to unite some of its coding initiatives under one banner to speed progress and take advantage of a surge in customer interest. In some corners of Alphabet's Google, particularly AI lab DeepMind, concerns about the company's position are mounting, according to current and former employees and executives, who declined to be named because they weren't authorized to speak publicly. Businesses are just starting to realize that AI coding tools can enable anyone to build products by prompting a chatbot. But Google doesn't have a clear solution for them. Its Gemini model's capabilities are sprinkled across half a dozen different coding products with different branding, indicating how the company's lack of focus and competing internal efforts have hampered success, the people said. Even internally, some Google engineers prefer to use Anthropic's Claude Code, they said. More concerning, the people said, are the engineers who are struggling to adopt AI coding at all. [...] Google's emphasis on its own technology has also complicated the push to catch up. Most employees are banned from using competing tools such as Claude Code or Codex due to security concerns, but Googlers can request exceptions if they can demonstrate they have a business case, one former employee said. Some teams at DeepMind, including those working on the Gemini model, internal applications, and open source models, use Claude Code, according to three former employees. "You want the best people to use the best tool, even inside Google," one of the former employees said. [...] In recent years, DeepMind has tried to tighten control over how its AI breakthroughs are woven into Google products. Last year, Google appointed Kavukcuoglu to a new position as chief AI architect, a role in which he is charged with folding generative AI into Google products. Yet confusion about who is leading the charge on AI coding persists. Along with DeepMind, Google Cloud, Google Core, Google Labs and Android are all pushing AI coding in different ways, one of the people said. [...] Within the Googleplex, there is a philosophical clash between AI researchers who want to move as quickly as possible and more traditional senior engineers who have exacting standards for code quality, former employees say. AI usage is factored into performance reviews, according to a former employee. But engineers who try to use internal AI coding tools often hit capacity constraints due to competition for computing power, the former employee said.

Read more of this story at Slashdot.

Vuln in Google’s Antigravity AI agent manager could escape sandbox, give attackers remote code execution

As organizations consider agentic AI for their business and IT stacks, researchers continue to find bugs and vulnerabilities in major, commercial models  that can significantly expand their attack surface.

This week, researchers at Pillar Security disclosed a vulnerability in Antigravity, an AI-powered developer tool for filesystem operations made by Google.

The bug, since patched, combined prompt injection with Antigravity’s permitted file-creation capability to grant attackers remote code execution privileges.

The research details how the exploit was able to circumvent Antigravity’s secure mode, Google’s highest security setting for its agents that runs all command operations through a virtual sandbox environment, throttles network access and prohibits the agent from writing code outside of the working directory.

Secure mode is supposed to limit the AI agent access to sensitive systems – and its ability to execute malicious or dangerous acts through shell commands. But one of the file-searching tools used by Antigravity, called “find_by_name,” is classified as a ‘native’ system tool. This means the agent can execute it directly and before protections like Secure Mode can even evaluate command level operations.

“The security boundary that Secure Mode enforces simply never sees this call,” wrote Dan Lisichkin, an AI security researcher with Pillar Security. “This means an attacker achieves arbitrary code execution under the exact configuration a security-conscious user would rely on to prevent it.”

The prompt injection attacks can be delivered through compromised identity accounts connected to the agent, or indirectly by hiding clandestine prompt instructions inside open-source files or web content the agent ingests. Antigravity  has trouble distinguishing between written data it ingests for context and literal prompt instructions, so compromise can be achieved without any elevated access by getting it to read a malicious document or file.

According to a disclosure timeline provided by Pillar Security, the bug was reported to Google on Jan. 6 and patched on Feb. 28, with Google awarding a bug bounty for the discovery.

Lisichkin said this same pattern of prompt injection through unvalidated input has been found in other coding AI agents like Cursor. In the age of AI, any unvalidated input can become a malicious prompt capable of hijacking internal systems.

“The trust model underpinning security assumptions, that a human will catch something suspicious, does not hold when autonomous agents follow instructions from external content,” he wrote.

The fact that the vulnerability was able to completely bypass Google’s secure mode underscores how the cybersecurity industry must start adapting and “move beyond sanitization-based controls.” 

“Every native tool parameter that reaches a shell command is a potential injection point. Auditing for this class of vulnerability is no longer optional, and it is a prerequisite for shipping agentic features safely,” Lisichkin wrote.

The post Vuln in Google’s Antigravity AI agent manager could escape sandbox, give attackers remote code execution appeared first on CyberScoop.

Vercel’s security breach started with malware disguised as Roblox cheats

Vercel customers are at risk of compromise after an attacker hopped through multiple internal systems to steal credentials and other sensitive data, the company said in a security bulletin Sunday. 

The attack, which didn’t originate at Vercel, showcases the pitfalls of interconnected cloud applications and SaaS integrations with overly privileged permissions. 

An attacker traversed third-party systems and connections left exposed by employees before it hit the San Francisco-based company that created and maintains Next.js and other popular open-source libraries. 

Researchers at Hudson Rock said the seeds of the attack were planted in February when a Context.ai employee’s computer was infected with Lumma Stealer malware after they searched for Roblox game exploits, a common vector for infostealer deployments.

Each of the companies are pinning at least some blame for the attack on the other vendor.

Context.ai on Sunday said that breach allowed the attacker to access its AWS environment and OAuth tokens for some users, including a token for a Vercel employee’s Google Workspace account. Vercel is not a Context customer, but the Vercel employee was using Context AI Office Suite and granted it full access, the artificial intelligence agent company said. 

“The attacker used that access to take over the employee’s Vercel Google Workspace account, which enabled them to gain access to some Vercel environments and environment variables that were not marked as sensitive,” Vercel said in its bulletin. 

The company said a limited number of its customers are impacted and were immediately advised to rotate credentials. Vercel, which declined to answer questions, did not specify which internal systems were accessed or fully explain how the attacker gained access to Vercel customers’ credentials. 

Vercel CEO Guillermo Rauch said customer data stored by the company is fully encrypted, yet the attacker got further access through enumeration, or by counting and inventorying specific variables. 

“We believe the attacking group to be highly sophisticated and, I strongly suspect, significantly accelerated by AI,” he said in a post on X. “They moved with surprising velocity and in-depth understanding of Vercel.”

A threat group identifying itself as ShinyHunters took responsibility for the attack in a post on Telegram and is attempting to sell the stolen data, which they claim includes access keys, source code and databases.

The attacker “is likely an imposter attempting to use an established name to inflate their notoriety,” Austin Larsen, principal threat analyst at Google Threat Intelligence, wrote in a LinkedIn post. “Regardless of the threat actor involved, the exposure risk is real.”

Vercel also warned that the attack on Context’s Google Workspace OAuth app “was the subject of a broader compromise, potentially affecting its hundreds of users across many organizations.” It published indicators of compromise and encouraged customers to review activity logs, review and rotate variables containing secrets.

Context and Vercel said their separate and coordinated investigations into the attack aided by CrowdStrike and Mandiant remain underway.

The post Vercel’s security breach started with malware disguised as Roblox cheats appeared first on CyberScoop.

Gmail Brings End-to-End Encryption to Android and iOS for Enterprise Users

The feature allows enterprise users to compose and read end-to-end encrypted messages natively on their mobile devices.

The post Gmail Brings End-to-End Encryption to Android and iOS for Enterprise Users appeared first on SecurityWeek.

Google News Now Prominently Featuring Polymarket Bets

Futurism found that Google News is surfacing Polymarket betting pages alongside traditional news sources. "The bets often appear in the 'For you' section of Google News, which is tailored to a user's personal interests," the publication reports. "In one instance, it was even the very top result, as with this bet on the price of Bitcoin." From the report: In our testing, Polymarket bets are also showing up on the Google News home page. But links from the prediction market can pop up all over Google News, including in searches. In further tests, looking up "will ships transit the strait," referring to the Strait of Hormuz, returned numerous credible sources like Financial Times, The Guardian, and Reuters. Just below them, however, was a Polymarket bet on the number of ships that would be allowed to pass through the critical oil passageway. This doesn't appear to be an accident. When searching "Polymarket" in its search bar, Google News now allows users to choose it as a "source," directing them to a page that aggregates other Polymarket hits. It's not the only non-news site that's selectable as a source -- looking up "Reddit" and "X" offers the option, too -- but searching for "Kalshi," another prediction market and Polymarket's main competitor, doesn't give the option to use it as a source. [...] In light of all this, Polymarket appearing in Google News is a major victory for the prediction platform -- rubber-stamping its image as an authority on developing real-world events right alongside genuine real publishers of journalism.

Read more of this story at Slashdot.

❌