A new report from Google found evidence that state-sponsored hacking groups have leveraged AI tool Gemini at nearly every stage of the cyber attack cycle.
The research underscores how AI tools have matured in their cyber offensive capabilities, even as it doesn't reveal novel or paradigm-shifting uses of the technology.
John Hultquist, chief analyst at Google's Threat Intelligence Group, told CyberScoop that many countries still appear to be experimenting with AI tools, determining where they best fit into the attack chain and provide more benefit than friction.
"Nobody's got everything completely worked out," Hultquist said. "They're all trying to figure this out and that goes for attacks on AI, too."
But the report also reveals that frontier AI models can build speed, scale and sophistication into a myriad of hacking tasks, and state-sponsored hacking groups are taking advantage.
Gemini was a useful, dynamic and convenient tool for many tasks, helping threat actors in a variety of ways. In nearly all cases, Google's reporting suggests that state-sponsored actors relied on Gemini as one tool among many, using it for specific purposes such as automating routine processes, conducting research or reconnaissance, and experimenting with malware.
One North Korean group used it to synthesize open-source intelligence about job roles and salary information at cybersecurity and defense companies. Another North Korean group consulted it "multiple days a week" for technical support, using it to troubleshoot problems and generate new malware code when they got stuck during an operation. One Iranian APT used Gemini to "significantly augment reconnaissance" techniques against targeted victims. China, Russia, Iran and North Korea all used Gemini to create fake articles, personas and other assets for information operations.
"What's so interesting about this capability is it's going to have an effect across the entire intrusion cycle," Hultquist said.
The report documents no instances of state groups using Gemini to automate large portions of a cyberattack, like the Chinese government-backed campaign identified by Anthropic last year. This suggests threat actors may still be struggling to implement fully or mostly automated hacks using AI.
Hultquist said that some state groups, particularly those focused on espionage, may not find the speed and scale advantages of agentic AI useful if it results in louder, more detectable operations. In fact, while state actors continue to experiment with AI models, he believes on average these developments will help smaller cybercriminal outfits more than state-sponsored hackers.
But that could change in the future. Frontier AI companies like Anthropic and cybersecurity startups like XBOW have already developed models with powerful defensive cybersecurity capabilities in vulnerability scanning, reconnaissance and automation. Foreign governments with similar technology could use those same features for offensive hacking, as Chinese actors did with Claude before being discovered.
In December, the UK AI Security Institute's inaugural report on frontier AI trends found that AI capabilities are improving rapidly across all tested domains, and particularly in cybersecurity.
And the gap between frontier and free, open-source models is shrinking. According to the institute, open-source AI models can now match the capabilities of a frontier model within four to eight months of its release.
"The duration of cyber tasks that AI systems can complete without human direction is also rising steeply, from less than 10 minutes in early 2023 to over an hour by mid-2025," the institute said in its Frontier AI Trends Report in December.
The post Google finds state-sponsored hackers use AI at "all stages" of attack cycle appeared first on CyberScoop.