
Dem report concludes Department of Government Efficiency violates cybersecurity, privacy rules

25 September 2025 at 12:37

Department of Government Efficiency practices at three federal agencies “violate statutory requirements, creating unprecedented privacy and cybersecurity risks,” according to a report that Senate Homeland Security and Governmental Affairs Committee Democrats published Thursday.

The report — drawn from a mix of media reports, legal filings, whistleblower disclosures to the committee and staff visits to the agencies — concludes that the Elon Musk-created DOGE is “operating outside federal law, with unchecked access to Americans’ personal data.” It focuses on DOGE activity at the General Services Administration (GSA), Office of Personnel Management (OPM) and Social Security Administration (SSA).

One previously unreported whistleblower claim is that at the SSA, a June internal risk assessment found that the chance of a data breach with “catastrophic adverse effect” stood between 35% and 65% after DOGE uploaded a computer database file known as Numident, containing sensitive personal information, without additional protections against unauthorized access. The potential implications included “widespread PII [personally identifiable information] disclosure or loss of data” and “catastrophic damage to or loss of agency facilities and infrastructure with fatalities to individuals,” according to the assessment.

“DOGE isn’t making government more efficient — it’s putting Americans’ sensitive information in the hands of completely unqualified and untrustworthy individuals,” Michigan Sen. Gary Peters, the top Democrat on the committee, said in a news release. “They are bypassing cybersecurity protections, evading oversight, and putting Americans’ personal data at risk. We cannot allow this shadow operation to continue operating unchecked while millions of people face the threat of identity theft, economic disruption, and permanent harm. The Trump Administration and agency leadership must immediately put a stop to these reckless actions that risk causing unprecedented chaos in Americans’ daily lives.”

The report recommends stripping all DOGE access to sensitive personal information until agencies certify that the initiative is in compliance with federal security and privacy laws such as the Federal Information Security Management Act, and recommends that DOGE employees complete the same kind of cybersecurity training as other federal employees.

It describes staff at the three agencies blocking access to specific offices or otherwise obstructing the committee’s review. For example, it says that DOGE installed a Starlink network at GSA, but wouldn’t let staff view it. Starlink is the Musk-owned satellite internet service, and the report concludes that Starlink might have allowed DOGE staffers to circumvent agency IT oversight. Data sent over the network “could be an easy target for foreign adversaries,” the report states.

The report also expands upon an alleged attempt at SSA to create a “master database” that would pool data from multiple federal agencies. According to whistleblower disclosures, former SSA DOGE employee John Koval inquired about uploading agency data into a cloud environment to share with the Department of Homeland Security. He was “rebuffed,” the report states, but later worked at DHS and the Justice Department, where SSA data surfaced in some projects, raising further privacy concerns. 

It revisits concerns about DOGE staffer Edward “Big Balls” Coristine having access to sensitive agency data despite reports that he had been fired from an internship at a cybersecurity company for leaking company information to a competitor, and arrives at further conclusions about the risk posed by the ability of Coristine and others “to move highly sensitive SSA data into an unmonitored cloud environment.”

“It is highly likely that foreign adversaries, such as Russia, China, and Iran, who regularly attempt cyber attacks on the U.S. government and critical infrastructure, are already aware of this new DOGE cloud environment,” the report states.

Two of the agencies that were the subject of the report took issue with its conclusions.

“OPM takes its responsibility to safeguard federal personnel records seriously,” said a spokeswoman for the office, McLaurine Pinover. “This report recycles unfounded claims about so-called ‘DOGE teams’ that simply have never existed at OPM. Federal employees at OPM conduct their work in line with longstanding law, security, and compliance requirements.

“Instead of rehashing baseless allegations, Senate Democrats should focus their efforts on the real challenges facing the federal workforce,” she continued. “OPM remains committed to transparency, accountability, and delivering for the American people.”

The SSA pointed to Commissioner Frank Bisignano’s letter to Congress responding to questions about Numident security concerns. 

“Based on the agency’s thorough review, the Numident data and database — stored in a longstanding secure environment used by SSA — have not been accessed, leaked, hacked, or shared in any unauthorized fashion,” an SSA spokesperson wrote, adding, “The location referred to in the whistleblower allegation is actually a secured server in the agency’s cloud infrastructure which historically has housed this data and is continuously monitored and overseen — SSA’s standard practice.”

The SSA spokesperson emphasized there are no DOGE employees at SSA, only agency employees. 

The GSA did not immediately respond to Scoop News Group requests for comment on the Democratic report.

Miranda Nazzaro contributed reporting to this story.

The post Dem report concludes Department of Government Efficiency violates cybersecurity, privacy rules appeared first on CyberScoop.

xAI Dev Leaks API Key for Private SpaceX, Tesla LLMs

1 May 2025 at 20:52

An employee at Elon Musk’s artificial intelligence company xAI leaked a private key on GitHub that for the past two months could have allowed anyone to query private xAI large language models (LLMs) that appear to have been custom-made for working with internal data from Musk’s companies, including SpaceX, Tesla and Twitter/X, KrebsOnSecurity has learned.


Philippe Caturegli, “chief hacking officer” at the security consultancy Seralys, was the first to publicize the leak of credentials for an x.ai application programming interface (API) exposed in the GitHub code repository of a technical staff member at xAI.

Caturegli’s post on LinkedIn caught the attention of researchers at GitGuardian, a company that specializes in detecting and remediating exposed secrets in public and proprietary environments. GitGuardian’s systems constantly scan GitHub and other code repositories for exposed API keys, and fire off automated alerts to affected users.
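Scanners of this kind typically work by matching repository contents against known credential formats. The sketch below illustrates the core idea; the rule names and regular expressions are hypothetical examples, not GitGuardian’s actual detection rules, which combine hundreds of provider-specific patterns with entropy analysis and validity checks.

```python
import re

# Hypothetical patterns for a few well-known credential formats.
# Real secret scanners maintain hundreds of provider-specific rules
# and supplement them with entropy checks and live validity probing.
SECRET_PATTERNS = {
    "github_pat": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"]([A-Za-z0-9_\-]{20,})['\"]"
    ),
}

def scan_text(text):
    """Return (rule_name, matched_string) pairs for each secret-like hit."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings
```

A production scanner would run this kind of matching continuously over every pushed commit, then notify the committer and, ideally, the issuing provider so the credential can be revoked.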

GitGuardian’s Eric Fourrier told KrebsOnSecurity the exposed API key had access to several unreleased models of Grok, the AI chatbot developed by xAI. In total, GitGuardian found the key had access to at least 60 fine-tuned and private LLMs.

“The credentials can be used to access the X.ai API with the identity of the user,” GitGuardian wrote in an email explaining their findings to xAI. “The associated account not only has access to public Grok models (grok-2-1212, etc) but also to what appears to be unreleased (grok-2.5V), development (research-grok-2p5v-1018), and private models (tweet-rejector, grok-spacex-2024-11-04).”

Fourrier found GitGuardian had alerted the xAI employee about the exposed API key nearly two months ago — on March 2. But as of April 30, when GitGuardian directly alerted xAI’s security team to the exposure, the key was still valid and usable. xAI told GitGuardian to report the matter through its bug bounty program at HackerOne, but just a few hours later the repository containing the API key was removed from GitHub.

“It looks like some of these internal LLMs were fine-tuned on SpaceX data, and some were fine-tuned with Tesla data,” Fourrier said. “I definitely don’t think a Grok model that’s fine-tuned on SpaceX data is intended to be exposed publicly.”

xAI did not respond to a request for comment. Nor did the 28-year-old xAI technical staff member whose key was exposed.

Carole Winqwist, chief marketing officer at GitGuardian, said giving potentially hostile users free access to private LLMs is a recipe for disaster.

“If you’re an attacker and you have direct access to the model and the back end interface for things like Grok, it’s definitely something you can use for further attacking,” she said. “An attacker could use it for prompt injection, to tweak the (LLM) model to serve their purposes, or try to implant code into the supply chain.”

The inadvertent exposure of internal LLMs for xAI comes as Musk’s so-called Department of Government Efficiency (DOGE) has been feeding sensitive government records into artificial intelligence tools. In February, The Washington Post reported DOGE officials were feeding data from across the Education Department into AI tools to probe the agency’s programs and spending.

The Post said DOGE plans to replicate this process across many departments and agencies, accessing the back-end software at different parts of the government and then using AI technology to extract and sift through information about spending on employees and programs.

“Feeding sensitive data into AI software puts it into the possession of a system’s operator, increasing the chances it will be leaked or swept up in cyberattacks,” Post reporters wrote.

Wired reported in March that DOGE has deployed a proprietary chatbot called GSAi to 1,500 federal workers at the General Services Administration, part of an effort to automate tasks previously done by humans as DOGE continues its purge of the federal workforce.

A Reuters report last month said Trump administration officials told some U.S. government employees that DOGE is using AI to surveil at least one federal agency’s communications for hostility to President Trump and his agenda. Reuters wrote that the DOGE team has heavily deployed Musk’s Grok AI chatbot as part of their work slashing the federal government, although Reuters said it could not establish exactly how Grok was being used.

Caturegli said while there is no indication that federal government or user data could be accessed through the exposed x.ai API key, these private models are likely trained on proprietary data and may unintentionally expose details related to internal development efforts at xAI, Twitter, or SpaceX.

“The fact that this key was publicly exposed for two months and granted access to internal models is concerning,” Caturegli said. “This kind of long-lived credential exposure highlights weak key management and insufficient internal monitoring, raising questions about safeguards around developer access and broader operational security.”
