Self-Replicating Worm Hits 180+ Software Packages

At least 187 code packages made available through the JavaScript repository NPM have been infected with a self-replicating worm that steals credentials from developers and publishes those secrets on GitHub, experts warn. The malware, which briefly infected multiple code packages from the security vendor CrowdStrike, steals and publishes even more credentials every time an infected package is installed.

Image: https://en.wikipedia.org/wiki/Sandworm_(Dune)

The novel malware strain is being dubbed Shai-Hulud — after the name for the giant sandworms in Frank Herbert’s Dune novel series — because it publishes any stolen credentials in a new public GitHub repository that includes the name “Shai-Hulud.”

“When a developer installs a compromised package, the malware will look for a npm token in the environment,” said Charlie Eriksen, a researcher for the Belgian security firm Aikido. “If it finds it, it will modify the 20 most popular packages that the npm token has access to, copying itself into the package, and publishing a new version.”

At the center of this developing maelstrom are code libraries available on NPM (short for “Node Package Manager”), which acts as a central hub for JavaScript development and provides the latest updates to widely-used JavaScript components.

The Shai-Hulud worm emerged just days after unknown attackers launched a broad phishing campaign that spoofed NPM and asked developers to “update” their multi-factor authentication login options. That attack led to malware being inserted into at least two-dozen NPM code packages, but the outbreak was quickly contained and was narrowly focused on siphoning cryptocurrency payments.

Image: aikido.dev

In late August, another compromise of an NPM developer resulted in malware being added to “nx,” an open-source code development toolkit with as many as six million weekly downloads. In the nx compromise, the attackers introduced code that scoured the user’s device for authentication tokens from programmer destinations like GitHub and NPM, as well as SSH and API keys. But instead of sending those stolen credentials to a central server controlled by the attackers, the malicious nx code created a new public repository in the victim’s GitHub account, and published the stolen data there for all the world to see and download.

Last month’s attack on nx did not self-propagate like a worm, but this Shai-Hulud malware does and bundles reconnaissance tools to assist in its spread. Namely, it uses the open-source tool TruffleHog to search for exposed credentials and access tokens on the developer’s machine. It then attempts to create new GitHub actions and publish any stolen secrets.
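The worm’s credential hunt only pays off if a token is actually reachable from the build environment. A lightweight self-check (a minimal sketch, not a substitute for a full TruffleHog scan) is to look in the places npm conventionally stores publish credentials; the environment variable names and the ~/.npmrc location below are standard conventions, not details from the Shai-Hulud analysis.

```python
# Minimal defensive audit: check the conventional places a compromised
# postinstall script could find publish-capable npm credentials.
import os
import re
from pathlib import Path

def find_npm_tokens():
    hits = []
    # CI systems commonly expose publish tokens via variables like these.
    for name in os.environ:
        upper = name.upper()
        if "NPM_TOKEN" in upper or upper == "NODE_AUTH_TOKEN":
            hits.append(f"env:{name}")
    # A per-user .npmrc stores registry auth as //registry.npmjs.org/:_authToken=...
    npmrc = Path.home() / ".npmrc"
    if npmrc.exists() and re.search(r":_authToken=", npmrc.read_text(errors="ignore")):
        hits.append(f"file:{npmrc}")
    return hits

if __name__ == "__main__":
    found = find_npm_tokens()
    if found:
        print("Publish-capable npm credentials reachable from this environment:")
        for hit in found:
            print("  -", hit)
        print("Consider rotating them and switching to granular, short-lived tokens.")
    else:
        print("No npm tokens found in the usual locations.")
```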

“Once the first person got compromised, there was no stopping it,” Aikido’s Eriksen told KrebsOnSecurity. He said the first NPM package compromised by this worm appears to have been altered on Sept. 14, around 17:58 UTC.

The security-focused code development platform Socket.dev reports the Shai-Hulud attack briefly compromised at least 25 NPM code packages managed by CrowdStrike. Socket.dev said the affected packages were quickly removed by the NPM registry.

In a written statement shared with KrebsOnSecurity, CrowdStrike said that after detecting several malicious packages in the public NPM registry, the company swiftly removed them and rotated its keys in public registries.

“These packages are not used in the Falcon sensor, the platform is not impacted and customers remain protected,” the statement reads, referring to the company’s widely-used endpoint threat detection service. “We are working with NPM and conducting a thorough investigation.”

A writeup on the attack from StepSecurity found that for cloud-specific operations, the malware enumerates AWS, Azure and Google Cloud Platform secrets. It also found the entire attack design assumes the victim is working in a Linux or macOS environment, and that it deliberately skips Windows systems.

StepSecurity said Shai-Hulud spreads by using stolen NPM authentication tokens, adding its code to the top 20 packages in the victim’s account.

“This creates a cascading effect where an infected package leads to compromised maintainer credentials, which in turn infects all other packages maintained by that user,” StepSecurity’s Ashish Kurmi wrote.
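A maintainer worried about blast radius can approximate that top-20 selection themselves by asking the public registry which of their packages are most downloaded. The sketch below uses the registry’s public search and downloads endpoints as commonly documented; the endpoint paths and response fields are assumptions worth verifying, and “some-maintainer” is a hypothetical placeholder.

```python
# Rough blast-radius check: which of a maintainer's packages would a
# stolen publish token most likely reach first? Endpoints and response
# fields are the commonly documented public npm registry APIs; verify
# them before relying on this.
import requests

SEARCH_URL = "https://registry.npmjs.org/-/v1/search"
DOWNLOADS_URL = "https://api.npmjs.org/downloads/point/last-week/{name}"

def top_packages(maintainer: str, limit: int = 20):
    resp = requests.get(
        SEARCH_URL,
        params={"text": f"maintainer:{maintainer}", "size": 250},
        timeout=30,
    )
    resp.raise_for_status()
    names = [obj["package"]["name"] for obj in resp.json().get("objects", [])]

    ranked = []
    for name in names:
        dl = requests.get(DOWNLOADS_URL.format(name=name), timeout=30)
        weekly = dl.json().get("downloads", 0) if dl.ok else 0
        ranked.append((weekly, name))
    ranked.sort(reverse=True)
    return ranked[:limit]

if __name__ == "__main__":
    # "some-maintainer" is a hypothetical placeholder username.
    for downloads, name in top_packages("some-maintainer"):
        print(f"{downloads:>10}  {name}")
```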

Eriksen said Shai-Hulud is still propagating, although its spread seems to have waned in recent hours.

“I still see package versions popping up once in a while, but no new packages have been compromised in the last ~6 hours,” Eriksen said. “But that could change now as the east coast starts working. I would think of this attack as a ‘living’ thing almost, like a virus. Because it can lay dormant for a while, and if just one person is suddenly infected by accident, they could restart the spread. Especially if there’s a super-spreader attack.”

For now, it appears that the web address the attackers were using to exfiltrate collected data was disabled due to rate limits, Eriksen said.

Nicholas Weaver is a researcher with the International Computer Science Institute, a nonprofit in Berkeley, Calif. Weaver called the Shai-Hulud worm “a supply chain attack that conducts a supply chain attack.” Weaver said NPM (and all other similar package repositories) need to immediately switch to a publication model that requires explicit human consent for every publication request using a phish-proof 2FA method.

“Anything less means attacks like this are going to continue and become far more common, but switching to a 2FA method would effectively throttle these attacks before they can spread,” Weaver said. “Allowing purely automated processes to update the published packages is now a proven recipe for disaster.”

Salesloft Drift security incident started with undetected GitHub access

Salesloft pinned the root cause of the Drift supply-chain attacks to a threat group gaining access to its GitHub account as far back as March, the company said in an update Saturday. 

During a 10-day period in mid-August, the threat group compromised and stole data from hundreds of organizations.

The threat group, which Google tracks as UNC6395, spent time lurking in the Salesloft application environment, downloaded content from multiple repositories, added a guest user and set up workflows over a monthslong period through June, according to Salesloft. 

“The threat actor then accessed Drift’s Amazon Web Services environment and obtained OAuth tokens for Drift customers’ technology integrations,” the company said. “The threat actor used the stolen OAuth tokens to access data via Drift integrations.”

The update marks the most significant details Salesloft has shared since Google security researchers first warned about the “widespread data theft campaign” last month. The company is still withholding key details as its incident response firm, Mandiant, transitions to confirming the quality of its forensic investigation.

Salesloft has not explained how its GitHub account was accessed, what the attackers did in its environment, or how the threat group accessed Drift’s AWS environment and obtained OAuth tokens. The company also hasn’t explained why OAuth tokens were stored in the cloud environment, or whether the stolen tokens were for internal integrations with third-party platforms or for customers’ individual integrations.

The company has not responded to multiple requests for comment dating back to Aug. 26, when news of the attacks first surfaced.

Analysts and researchers acknowledge that Salesloft may still be seeking definitive answers about what went wrong, yet the company already misfired when it erroneously claimed exposure was limited to Drift customer instances integrated with Salesforce. Days later, Google Cloud’s incident response firm Mandiant said Salesloft Drift customers were compromised en masse, potentially ensnaring any user that integrated the AI chat agent platform with another third-party service.

“I don’t think they’re being fully transparent. They’re still holding some stuff back,” said Paddy Harrington, senior analyst at Forrester.

Salesloft’s post-incident investigation thus far underscores multiple areas where the company’s security practices and controls were apparently less than adequate, according to Harrington. 

Nathaniel Jones, VP of security and AI strategy at Darktrace, said he hopes more information will be shared once the investigation is complete. “They’ve confirmed the breach and downstream impacts but stopped short of saying how the attacker got in,” he added.

“They’ve boxed in the Drift environment, taken it offline, rotated credentials, and emphasized containment. That’s all good practice,” Jones said.

Salesloft took Drift offline Friday and said the move was temporary “to fortify the security of the application and its associated infrastructure.” Salesloft rotated all centrally managed keys for OAuth users, but customers who manage Drift connections to third-party applications via API keys need to revoke existing keys directly with the third-party provider’s application, the company said. 

The Salesloft platform, which is technically segmented from Drift and which Mandiant has confirmed was not compromised, restored connections with Salesforce on Sunday, the company said.

Salesloft doesn’t know when Drift will be restored and brought back online. Yet the company may need to make significant changes to regain trust, as the lingering and still-unknown damage from the breach stains Drift’s reputation.

“They’re probably going to have to rename that thing. The name alone is now totally tainted,” Harrington said. “They could reintroduce the product, but they’re going to have to totally talk about a rearchitecture change.”

Key details are still missing about how the attack occurred, and customers need to understand the true scope of the supply-chain attack and the extent of data stolen, he added.

“We’re in a time where attackers are going to find the least-protected asset and they’re going to go for it, and they struck gold here. Holy crap, did they strike gold,” Harrington said. “This thing just keeps getting worse and worse and worse.”

The post Salesloft Drift security incident started with undetected GitHub access appeared first on CyberScoop.

DOGE Denizen Marko Elez Leaked API Key for xAI

Marko Elez, a 25-year-old employee at Elon Musk’s Department of Government Efficiency (DOGE), has been granted access to sensitive databases at the U.S. Social Security Administration, the Treasury and Justice departments, and the Department of Homeland Security. So it should fill all Americans with a deep sense of confidence to learn that Mr. Elez over the weekend inadvertently published a private key that allowed anyone to interact directly with more than four dozen large language models (LLMs) developed by Musk’s artificial intelligence company xAI.

Image: Shutterstock, @sdx15.

On July 13, Mr. Elez committed a code script to GitHub called “agent.py” that included a private application programming interface (API) key for xAI. The inclusion of the private key was first flagged by GitGuardian, a company that specializes in detecting and remediating exposed secrets in public and proprietary environments. GitGuardian’s systems constantly scan GitHub and other code repositories for exposed API keys, and fire off automated alerts to affected users.
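GitGuardian has not published the exact signature that flagged this key, but secret scanners generally pair provider-specific patterns with an entropy check to separate real credentials from ordinary strings. The sketch below is a generic illustration of that approach, not GitGuardian’s detector, and the regular expression is an illustrative assumption.

```python
# Generic illustration of pattern-plus-entropy secret scanning; the
# pattern is illustrative, not any vendor's signature.
import math
import re
import sys
from collections import Counter

CANDIDATE = re.compile(
    r"""
    (?: api[_-]?key | token | secret ) \s* [:=] \s*   # a secret-looking assignment
    ["']? ( [A-Za-z0-9_\-]{20,} ) ["']?               # followed by a long opaque value
    """,
    re.VERBOSE | re.IGNORECASE,
)

def shannon_entropy(value: str) -> float:
    """Bits per character; random API keys score higher than ordinary words."""
    counts = Counter(value)
    return -sum((n / len(value)) * math.log2(n / len(value)) for n in counts.values())

def scan(path: str, threshold: float = 3.5) -> None:
    with open(path, encoding="utf-8", errors="ignore") as fh:
        for lineno, line in enumerate(fh, 1):
            for match in CANDIDATE.finditer(line):
                entropy = shannon_entropy(match.group(1))
                if entropy >= threshold:
                    print(f"{path}:{lineno}: possible hardcoded secret "
                          f"(entropy {entropy:.2f})")

if __name__ == "__main__":
    for filename in sys.argv[1:]:
        scan(filename)
```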

Philippe Caturegli, “chief hacking officer” at the security consultancy Seralys, said the exposed API key allowed access to at least 52 different LLMs used by xAI. The most recent LLM in the list was called “grok-4-0709” and was created on July 9, 2025.

Grok, the generative AI chatbot developed by xAI and integrated into Twitter/X, relies on these and other LLMs (a query to Grok before publication shows Grok currently uses Grok-3, which was launched in February 2025). Earlier today, xAI announced that the Department of Defense will begin using Grok as part of a contract worth up to $200 million. The contract award came less than a week after Grok began spewing antisemitic rants and invoking Adolf Hitler.

Mr. Elez did not respond to a request for comment. The code repository containing the private xAI key was removed shortly after Caturegli notified Elez via email. However, Caturegli said the exposed API key still works and has not yet been revoked.

“If a developer can’t keep an API key private, it raises questions about how they’re handling far more sensitive government information behind closed doors,” Caturegli told KrebsOnSecurity.

Prior to joining DOGE, Marko Elez worked for a number of Musk’s companies. His DOGE career began at the Department of the Treasury, and a legal battle over DOGE’s access to Treasury databases showed Elez was sending unencrypted personal information in violation of the agency’s policies.

While still at Treasury, Elez resigned after The Wall Street Journal linked him to social media posts that advocated racism and eugenics. When Vice President J.D. Vance lobbied for Elez to be rehired, President Trump agreed and Musk reinstated him.

Since his re-hiring as a DOGE employee, Elez has been granted access to databases at one federal agency after another. TechCrunch reported in February 2025 that he was working at the Social Security Administration. In March, Business Insider found Elez was part of a DOGE detachment assigned to the Department of Labor.

Marko Elez, in a photo from a social media profile.

In April, The New York Times reported that Elez held positions at the U.S. Customs and Border Protection and the Immigration and Customs Enforcement (ICE) bureaus, as well as the Department of Homeland Security. The Washington Post later reported that Elez, while serving as a DOGE advisor at the Department of Justice, had gained access to the Executive Office for Immigration Review’s Courts and Appeals System (EACS).

Elez is not the first DOGE worker to publish internal API keys for xAI: In May, KrebsOnSecurity detailed how another DOGE employee leaked a private xAI key on GitHub for two months, exposing LLMs that were custom made for working with internal data from Musk’s companies, including SpaceX, Tesla and Twitter/X.

Caturegli said it’s difficult to trust someone with access to confidential government systems when they can’t even manage the basics of operational security.

“One leak is a mistake,” he said. “But when the same type of sensitive key gets exposed again and again, it’s not just bad luck, it’s a sign of deeper negligence and a broken security culture.”

Big Tech’s Mixed Response to U.S. Treasury Sanctions

In May 2025, the U.S. government sanctioned a Chinese national for operating a cloud provider linked to the majority of virtual currency investment scam websites reported to the FBI. But a new report finds the accused continues to operate a slew of established accounts at American tech companies — including Facebook, GitHub, PayPal and Twitter/X.

On May 29, the U.S. Department of the Treasury announced economic sanctions against Funnull Technology Inc., a Philippines-based company alleged to provide infrastructure for hundreds of thousands of websites involved in virtual currency investment scams known as “pig butchering.” In January 2025, KrebsOnSecurity detailed how Funnull was designed as a content delivery network that catered to foreign cybercriminals seeking to route their traffic through U.S.-based cloud providers.

The Treasury also sanctioned Funnull’s alleged operator, a 40-year-old Chinese national named Liu “Steve” Lizhi. The government says Funnull directly facilitated financial schemes resulting in more than $200 million in financial losses by Americans, and that the company’s operations were linked to the majority of pig butchering scams reported to the FBI.

It is generally illegal for U.S. companies or individuals to transact with people sanctioned by the Treasury. However, as Mr. Lizhi’s case makes clear, just because someone is sanctioned doesn’t necessarily mean big tech companies are going to suspend their online accounts.

The government says Lizhi was born November 13, 1984, and used the nicknames “XXL4” and “Nice Lizhi.” Nevertheless, Steve Liu’s 17-year-old account on LinkedIn (in the name “Liulizhi”) had hundreds of followers (Lizhi’s LinkedIn profile helpfully confirms his birthday) until quite recently: The account was deleted this morning, just hours after KrebsOnSecurity sought comment from LinkedIn.

Mr. Lizhi’s LinkedIn account was suspended sometime in the last 24 hours, after KrebsOnSecurity sought comment from LinkedIn.

In an emailed response, a LinkedIn spokesperson said the company’s “Prohibited countries policy” states that LinkedIn “does not sell, license, support or otherwise make available its Premium accounts or other paid products and services to individuals and companies sanctioned by the U.S. government.” LinkedIn declined to say whether the profile in question was a premium or free account.

Mr. Lizhi also maintains a working PayPal account under the name Liu Lizhi and username “@nicelizhi,” another nickname listed in the Treasury sanctions. A 15-year-old Twitter/X account named “Lizhi” that links to Mr. Lizhi’s personal domain remains active, although it has few followers and hasn’t posted in years.

These accounts and many others were flagged by the security firm Silent Push, which has been tracking Funnull’s operations for the past year and calling out U.S. cloud providers like Amazon and Microsoft for failing to more quickly sever ties with the company.

Liu Lizhi’s PayPal account.

In a report released today, Silent Push found Lizhi still operates numerous Facebook accounts and groups, including a private Facebook account under the name Liu Lizhi. Another Facebook account clearly connected to Lizhi is a tourism page for Ganzhou, China called “EnjoyGanzhou” that was named in the Treasury Department sanctions.

“This guy is the technical administrator for the infrastructure that is hosting a majority of scams targeting people in the United States, and hundreds of millions have been lost based on the websites he’s been hosting,” said Zach Edwards, senior threat researcher at Silent Push. “It’s crazy that the vast majority of big tech companies haven’t done anything to cut ties with this guy.”

The FBI says it received nearly 150,000 complaints last year involving digital assets and $9.3 billion in losses — a 66 percent increase from the previous year. Investment scams were the top crypto-related crimes reported, with $5.8 billion in losses.

In a statement, a Meta spokesperson said the company continuously takes steps to meet its legal obligations, but that sanctions laws are complex and varied. They explained that sanctions are often targeted in nature and don’t always prohibit people from having a presence on its platform. Nevertheless, Meta confirmed it had removed the account, unpublished Pages, and removed Groups and events associated with the user for violating its policies.

Attempts to reach Mr. Lizhi via his primary email addresses at Hotmail and Gmail bounced as undeliverable. Likewise, his 14-year-old YouTube channel appears to have been taken down recently.

However, anyone interested in viewing or using Mr. Lizhi’s 146 computer code repositories will have no problem finding GitHub accounts for him, including one registered under the NiceLizhi and XXL4 nicknames mentioned in the Treasury sanctions.

One of multiple GitHub profiles used by Liu “Steve” Lizhi, who uses the nickname XXL4 (a moniker listed in the Treasury sanctions for Mr. Lizhi).

Mr. Lizhi also operates a GitHub page for an open source e-commerce platform called NexaMerchant, which advertises itself as a payment gateway working with numerous American financial institutions. Interestingly, this profile’s “followers” page shows several other accounts that appear to be Mr. Lizhi’s. All of the account’s followers are tagged as “suspended,” even though that suspended message does not display when one visits those individual profiles.

In response to questions, GitHub said it has a process in place to identify when users and customers are Specially Designated Nationals or other denied or blocked parties, but that it locks those accounts instead of removing them. According to its policy, GitHub takes care that users and customers aren’t impacted beyond what is required by law.

All of the follower accounts for the XXL4 GitHub account appear to be Mr. Lizhi’s, and have been suspended by GitHub, but their code is still accessible.

“This includes keeping public repositories, including those for open source projects, available and accessible to support personal communications involving developers in sanctioned regions,” the policy states. “This also means GitHub will advocate for developers in sanctioned regions to enjoy greater access to the platform and full access to the global open source community.”

Edwards said it’s great that GitHub has a process for handling sanctioned accounts, but that the process doesn’t seem to communicate risk in a transparent way, noting that the only indicator on the locked accounts is the message, “This repository has been archived by the owner. It is now read-only.”

“It’s an odd message that doesn’t communicate, ‘This is a sanctioned entity, don’t fork this code or use it in a production environment’,” Edwards said.

Mark Rasch is a former federal cybercrime prosecutor who now serves as counsel for the New York City based security consulting firm Unit 221B. Rasch said when Treasury’s Office of Foreign Assets Control (OFAC) sanctions a person or entity, it then becomes illegal for businesses or organizations to transact with the sanctioned party.

Rasch said financial institutions have very mature systems for severing accounts tied to people who become subject to OFAC sanctions, but that tech companies may be far less proactive — particularly with free accounts.

“Banks have established ways of checking [U.S. government sanctions lists] for sanctioned entities, but tech companies don’t necessarily do a good job with that, especially for services that you can just click and sign up for,” Rasch said. “It’s potentially a risk and liability for the tech companies involved, but only to the extent OFAC is willing to enforce it.”

Liu Lizhi operates numerous Facebook accounts and groups, including this one for an entity specified in the OFAC sanctions: The “Enjoy Ganzhou” tourism page for Ganzhou, China. Image: Silent Push.

In July 2024, Funnull purchased the domain polyfill[.]io, the longtime home of a legitimate open source project that allowed websites to ensure that devices using legacy browsers could still render content in newer formats. After the Polyfill domain changed hands, at least 384,000 websites were caught in a supply-chain attack that redirected visitors to malicious sites. According to the Treasury, Funnull used the code to redirect people to scam websites and online gambling sites, some of which were linked to Chinese criminal money laundering operations.

The U.S. government says Funnull provides domain names for websites on its purchased IP addresses, using domain generation algorithms (DGAs) — programs that generate large numbers of similar but unique names for websites — and that it sells web design templates to cybercriminals.

“These services not only make it easier for cybercriminals to impersonate trusted brands when creating scam websites, but also allow them to quickly change to different domain names and IP addresses when legitimate providers attempt to take the websites down,” reads a Treasury statement.
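For readers unfamiliar with the term, a DGA is simply a deterministic function that expands a shared seed (often the current date) into an endless stream of plausible-looking domain names, which is what makes blocklist-style takedowns so hard to keep up with. The toy sketch below illustrates the concept only; it is not Funnull’s algorithm.

```python
# Toy domain generation algorithm (DGA): a seeded, deterministic
# expansion into many throwaway domains. Illustrative only.
import hashlib

def generate_domains(seed: str, count: int = 10, tld: str = "top") -> list:
    domains = []
    for i in range(count):
        digest = hashlib.sha256(f"{seed}-{i}".encode()).hexdigest()
        # Real DGAs vary label length, alphabet, and TLD to look less uniform.
        domains.append(f"{digest[:12]}.{tld}")
    return domains

if __name__ == "__main__":
    # Any seed both sides agree on (a date, a campaign name) yields the same list.
    for domain in generate_domains("example-seed"):
        print(domain)
```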

Meanwhile, Funnull appears to be morphing nearly all aspects of its business in the wake of the sanctions, Edwards said.

“Whereas before they might have used 60 DGA domains to hide and bounce their traffic, we’re seeing far more now,” he said. “They’re trying to make their infrastructure harder to track and more complicated, so for now they’re not going away but more just changing what they’re doing. And a lot more organizations should be holding their feet to the fire.”

Update, 2:48 PM ET: Added response from Meta, which confirmed it has closed the accounts and groups connected to Mr. Lizhi.

Update, July 7, 6:56 p.m. ET: In a written statement, PayPal said it continually works to combat and prevent the illicit use of its services.

“We devote significant resources globally to financial crime compliance, and we proactively refer cases to and assist law enforcement officials around the world in their efforts to identify, investigate and stop illegal activity,” the statement reads.

xAI Dev Leaks API Key for Private SpaceX, Tesla LLMs

An employee at Elon Musk’s artificial intelligence company xAI leaked a private key on GitHub that for the past two months could have allowed anyone to query private xAI large language models (LLMs) which appear to have been custom made for working with internal data from Musk’s companies, including SpaceX, Tesla and Twitter/X, KrebsOnSecurity has learned.

Image: Shutterstock, @sdx15.

Philippe Caturegli, “chief hacking officer” at the security consultancy Seralys, was the first to publicize the leak of credentials for an x.ai application programming interface (API) exposed in the GitHub code repository of a technical staff member at xAI.

Caturegli’s post on LinkedIn caught the attention of researchers at GitGuardian, a company that specializes in detecting and remediating exposed secrets in public and proprietary environments. GitGuardian’s systems constantly scan GitHub and other code repositories for exposed API keys, and fire off automated alerts to affected users.

GitGuardian’s Eric Fourrier told KrebsOnSecurity the exposed API key had access to several unreleased models of Grok, the AI chatbot developed by xAI. In total, GitGuardian found the key had access to at least 60 fine-tuned and private LLMs.

“The credentials can be used to access the X.ai API with the identity of the user,” GitGuardian wrote in an email explaining their findings to xAI. “The associated account not only has access to public Grok models (grok-2-1212, etc) but also to what appears to be unreleased (grok-2.5V), development (research-grok-2p5v-1018), and private models (tweet-rejector, grok-spacex-2024-11-04).”
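Model inventories like the one quoted above typically come from a single authenticated listing call. xAI’s public API is described as OpenAI-compatible, so a key holder could enumerate reachable models roughly as sketched below; the endpoint path and response fields are assumptions based on that compatibility, not details from GitGuardian’s report.

```python
# Sketch of how an exposed key reveals which models it can reach.
# Assumes an OpenAI-compatible surface at api.x.ai; the path and
# response fields are assumptions, not confirmed details.
import os
import requests

API_BASE = "https://api.x.ai/v1"

def list_models(api_key: str) -> list:
    resp = requests.get(
        f"{API_BASE}/models",
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=30,
    )
    resp.raise_for_status()
    # OpenAI-compatible APIs typically return {"data": [{"id": "..."}, ...]}.
    return [model.get("id", "") for model in resp.json().get("data", [])]

if __name__ == "__main__":
    key = os.environ["XAI_API_KEY"]  # never hardcode keys in source files
    for model_id in list_models(key):
        print(model_id)
```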

Fourrier found GitGuardian had alerted the xAI employee about the exposed API key nearly two months ago — on March 2. But as of April 30, when GitGuardian directly alerted xAI’s security team to the exposure, the key was still valid and usable. xAI told GitGuardian to report the matter through its bug bounty program at HackerOne, but just a few hours later the repository containing the API key was removed from GitHub.

“It looks like some of these internal LLMs were fine-tuned on SpaceX data, and some were fine-tuned with Tesla data,” Fourrier said. “I definitely don’t think a Grok model that’s fine-tuned on SpaceX data is intended to be exposed publicly.”

xAI did not respond to a request for comment. Nor did the 28-year-old xAI technical staff member whose key was exposed.

Carole Winqwist, chief marketing officer at GitGuardian, said giving potentially hostile users free access to private LLMs is a recipe for disaster.

“If you’re an attacker and you have direct access to the model and the back end interface for things like Grok, it’s definitely something you can use for further attacking,” she said. “An attacker could use it for prompt injection, to tweak the (LLM) model to serve their purposes, or try to implant code into the supply chain.”

The inadvertent exposure of internal LLMs for xAI comes as Musk’s so-called Department of Government Efficiency (DOGE) has been feeding sensitive government records into artificial intelligence tools. In February, The Washington Post reported DOGE officials were feeding data from across the Education Department into AI tools to probe the agency’s programs and spending.

The Post said DOGE plans to replicate this process across many departments and agencies, accessing the back-end software at different parts of the government and then using AI technology to extract and sift through information about spending on employees and programs.

“Feeding sensitive data into AI software puts it into the possession of a system’s operator, increasing the chances it will be leaked or swept up in cyberattacks,” Post reporters wrote.

Wired reported in March that DOGE has deployed a proprietary chatbot called GSAi to 1,500 federal workers at the General Services Administration, part of an effort to automate tasks previously done by humans as DOGE continues its purge of the federal workforce.

A Reuters report last month said Trump administration officials told some U.S. government employees that DOGE is using AI to surveil at least one federal agency’s communications for hostility to President Trump and his agenda. Reuters wrote that the DOGE team has heavily deployed Musk’s Grok AI chatbot as part of their work slashing the federal government, although Reuters said it could not establish exactly how Grok was being used.

Caturegli said while there is no indication that federal government or user data could be accessed through the exposed x.ai API key, these private models are likely trained on proprietary data and may unintentionally expose details related to internal development efforts at xAI, Twitter, or SpaceX.

“The fact that this key was publicly exposed for two months and granted access to internal models is concerning,” Caturegli said. “This kind of long-lived credential exposure highlights weak key management and insufficient internal monitoring, raising questions about safeguards around developer access and broader operational security.”

DOGE Worker’s Code Supports NLRB Whistleblower

A whistleblower at the National Labor Relations Board (NLRB) alleged last week that denizens of Elon Musk’s Department of Government Efficiency (DOGE) siphoned gigabytes of data from the agency’s sensitive case files in early March. The whistleblower said accounts created for DOGE at the NLRB downloaded three code repositories from GitHub. Further investigation into one of those code bundles shows it is remarkably similar to a program published in January 2025 by Marko Elez, a 25-year-old DOGE employee who has worked at a number of Musk’s companies.

A screenshot shared by NLRB whistleblower Daniel Berulis shows three downloads from GitHub.

According to a whistleblower complaint filed last week by Daniel J. Berulis, a 38-year-old security architect at the NLRB, officials from DOGE met with NLRB leaders on March 3 and demanded the creation of several all-powerful “tenant admin” accounts that were to be exempted from network logging activity that would otherwise keep a detailed record of all actions taken by those accounts.

Berulis said the new DOGE accounts had unrestricted permission to read, copy, and alter information contained in NLRB databases. The new accounts also could restrict log visibility, delay retention, route logs elsewhere, or even remove them entirely — top-tier user privileges that neither Berulis nor his boss possessed.

Berulis said he discovered one of the DOGE accounts had downloaded three external code libraries from GitHub that neither NLRB nor its contractors ever used. A “readme” file in one of the code bundles explained it was created to rotate connections through a large pool of cloud Internet addresses that serve “as a proxy to generate pseudo-infinite IPs for web scraping and brute forcing.” Brute force attacks involve automated login attempts that try many credential combinations in rapid sequence.

A search on that description in Google brings up a code repository at GitHub for a user with the account name “Ge0rg3” who published a program roughly four years ago called “requests-ip-rotator,” described as a library that will allow the user “to bypass IP-based rate-limits for sites and services.”

The README file from the GitHub user Ge0rg3’s page for requests-ip-rotator includes the exact wording of a program the whistleblower said was downloaded by one of the DOGE users. Marko Elez created an offshoot of this program in January 2025.

“A Python library to utilize AWS API Gateway’s large IP pool as a proxy to generate pseudo-infinite IPs for web scraping and brute forcing,” the description reads.

Ge0rg3’s code is “open source,” in that anyone can copy it and reuse it. As it happens, there is a newer version of this project that was derived or “forked” from Ge0rg3’s code — called “async-ip-rotator” — and it was committed to GitHub in January 2025 by DOGE captain Marko Elez.

The whistleblower stated that one of the GitHub files downloaded by the DOGE employees who transferred sensitive files from an NLRB case database was an archive whose README file read: “Python library to utilize AWS API Gateway’s large IP pool as a proxy to generate pseudo-infinite IPs for web scraping and brute forcing.” Elez’s code pictured here was forked in January 2025 from a code library that shares the same description.
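For context on what that library actually does: requests-ip-rotator routes each HTTP request through AWS API Gateway endpoints it creates on demand, so successive requests exit from different Amazon IP addresses and sidestep naive per-IP rate limits. The snippet below follows the usage pattern shown in the project’s README, reproduced from memory, so treat the exact class and method names as assumptions.

```python
# Usage pattern from the requests-ip-rotator README (reproduced from
# memory; class and method names are assumptions). Valid AWS
# credentials are required, since the library creates API Gateway
# endpoints on your account.
import requests
from requests_ip_rotator import ApiGateway  # pip install requests-ip-rotator

TARGET = "https://example.com"

gateway = ApiGateway(TARGET)   # provision API Gateway proxies in AWS regions
gateway.start()

session = requests.Session()
session.mount(TARGET, gateway) # route matching requests through the gateways

print(session.get(f"{TARGET}/").status_code)

gateway.shutdown()             # tear the gateways down to stop AWS charges
```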

A key DOGE staff member who gained access to the Treasury Department’s central payments system, Elez has worked for a number of Musk companies, including X, SpaceX, and xAI. Elez was among the first DOGE employees to face public scrutiny, after The Wall Street Journal linked him to social media posts that advocated racism and eugenics.

Elez resigned after that brief scandal, but was rehired after President Donald Trump and Vice President JD Vance expressed support for him. Politico reports Elez is now a Labor Department aide detailed to multiple agencies, including the Department of Health and Human Services.

“During Elez’s initial stint at Treasury, he violated the agency’s information security policies by sending a spreadsheet containing names and payments information to officials at the General Services Administration,” Politico wrote, citing court filings.

KrebsOnSecurity sought comment from both the NLRB and DOGE, and will update this story if either responds.

The NLRB has been effectively hobbled since President Trump fired three board members, leaving the agency without the quorum it needs to function. Both Amazon and Musk’s SpaceX have been suing the NLRB over complaints the agency filed in disputes about workers’ rights and union organizing, arguing that the NLRB’s very existence is unconstitutional. On March 5, a U.S. appeals court unanimously rejected Musk’s claim that the NLRB’s structure somehow violates the Constitution.

Berulis’s complaint alleges the DOGE accounts at NLRB downloaded more than 10 gigabytes of data from the agency’s case files, a database that includes reams of sensitive records, such as information about employees who want to form unions and proprietary business documents. Berulis said he went public after higher-ups at the agency told him not to report the matter to US-CERT, as they’d previously agreed.

Berulis told KrebsOnSecurity he worried the unauthorized data transfer by DOGE could unfairly advantage defendants in a number of ongoing labor disputes before the agency.

“If any company got the case data that would be an unfair advantage,” Berulis said. “They could identify and fire employees and union organizers without saying why.”

Marko Elez, in a photo from a social media profile.

Berulis said the other two GitHub archives that DOGE employees downloaded to NLRB systems included Integuru, a software framework designed to reverse engineer application programming interfaces (APIs) that websites use to fetch data; and a “headless” browser called Browserless, which is made for automating web-based tasks that require a pool of browsers, such as web scraping and automated testing.

On February 6, someone posted a lengthy and detailed critique of Elez’s code on the GitHub “issues” page for async-ip-rotator, calling it “insecure, unscalable and a fundamental engineering failure.”

“If this were a side project, it would just be bad code,” the reviewer wrote. “But if this is representative of how you build production systems, then there are much larger concerns. This implementation is fundamentally broken, and if anything similar to this is deployed in an environment handling sensitive data, it should be audited immediately.”

Further reading: Berulis’s complaint (PDF).

Update 7:06 p.m. ET: Elez’s code repo was deleted after this story was published. An archived version of it is here.

Whistleblower: DOGE Siphoned NLRB Case Data

A security architect with the National Labor Relations Board (NLRB) alleges that employees from Elon Musk‘s Department of Government Efficiency (DOGE) transferred gigabytes of sensitive data from agency case files in early March, using short-lived accounts configured to leave few traces of network activity. The NLRB whistleblower said the unusually large data outflows coincided with multiple blocked login attempts from an Internet address in Russia that tried to use valid credentials for a newly-created DOGE user account.

The cover letter from Berulis’s whistleblower statement, sent to the leaders of the Senate Select Committee on Intelligence.

The allegations came in an April 14 letter to the Senate Select Committee on Intelligence, signed by Daniel J. Berulis, a 38-year-old security architect at the NLRB.

NPR, which was the first to report on Berulis’s whistleblower complaint, says NLRB is a small, independent federal agency that investigates and adjudicates complaints about unfair labor practices, and stores “reams of potentially sensitive data, from confidential information about employees who want to form unions to proprietary business information.”

The complaint documents a one-month period beginning March 3, during which DOGE officials reportedly demanded the creation of all-powerful “tenant admin” accounts in NLRB systems that were to be exempted from network logging activity that would otherwise keep a detailed record of all actions taken by those accounts.

Berulis said the new DOGE accounts had unrestricted permission to read, copy, and alter information contained in NLRB databases. The new accounts also could restrict log visibility, delay retention, route logs elsewhere, or even remove them entirely — top-tier user privileges that neither Berulis nor his boss possessed.

Berulis writes that on March 3, a black SUV accompanied by a police escort arrived at his building — the NLRB headquarters in Southeast Washington, D.C. The DOGE staffers did not speak with Berulis or anyone else in NLRB’s IT staff, but instead met with the agency leadership.

“Our acting chief information officer told us not to adhere to standard operating procedure with the DOGE account creation, and there was to be no logs or records made of the accounts created for DOGE employees, who required the highest level of access,” Berulis wrote of their instructions after that meeting.

“We have built in roles that auditors can use and have used extensively in the past but would not give the ability to make changes or access subsystems without approval,” he continued. “The suggestion that they use these accounts was not open to discussion.”

Berulis found that on March 3 one of the DOGE accounts created an opaque, virtual environment known as a “container,” which can be used to build and run programs or scripts without revealing its activities to the rest of the world. Berulis said the container caught his attention because he polled his colleagues and found none of them had ever used containers within the NLRB network.

Berulis said he also noticed that early the next morning — between approximately 3 a.m. and 4 a.m. EST on Tuesday, March 4  — there was a large increase in outgoing traffic from the agency. He said it took several days of investigating with his colleagues to determine that one of the new accounts had transferred approximately 10 gigabytes worth of data from the NLRB’s NxGen case management system.

Berulis said neither he nor his co-workers had the necessary network access rights to review which files were touched or transferred — or even where they went. But his complaint notes the NxGen database contains sensitive information on unions, ongoing legal cases, and corporate secrets.

“I also don’t know if the data was only 10gb in total or whether or not they were consolidated and compressed prior,” Berulis told the senators. “This opens up the possibility that even more data was exfiltrated. Regardless, that kind of spike is extremely unusual because data almost never directly leaves NLRB’s databases.”

Berulis said he and his colleagues grew even more alarmed when they noticed nearly two dozen login attempts from a Russian Internet address (83.149.30.186) that presented valid login credentials for a DOGE employee account — one that had been created just minutes earlier. Berulis said those attempts were all blocked thanks to rules in place that prohibit logins from non-U.S. locations.

“Whoever was attempting to log in was using one of the newly created accounts that were used in the other DOGE related activities and it appeared they had the correct username and password due to the authentication flow only stopping them due to our no-out-of-country logins policy activating,” Berulis wrote. “There were more than 20 such attempts, and what is particularly concerning is that many of these login attempts occurred within 15 minutes of the accounts being created by DOGE engineers.”

According to Berulis, the naming structure of one Microsoft user account connected to the suspicious activity suggested it had been created and later deleted for DOGE use in the NLRB’s cloud systems: “DogeSA_2d5c3e0446f9@nlrb.microsoft.com.” He also found other new Microsoft cloud administrator accounts with nonstandard usernames, including “Whitesox, Chicago M.” and “Dancehall, Jamaica R.”

A screenshot shared by Berulis showing the suspicious user accounts.

On March 5, Berulis documented that a large section of logs for recently created network resources were missing, and a network watcher in Microsoft Azure was set to the “off” state, meaning it was no longer collecting and recording data like it should have.

Berulis said he discovered someone had downloaded three external code libraries from GitHub that neither NLRB nor its contractors ever use. A “readme” file in one of the code bundles explained it was created to rotate connections through a large pool of cloud Internet addresses that serve “as a proxy to generate pseudo-infinite IPs for web scraping and brute forcing.” Brute force attacks involve automated login attempts that try many credential combinations in rapid sequence.

The complaint alleges that by March 17 it became clear the NLRB no longer had the resources or network access needed to fully investigate the odd activity from the DOGE accounts, and that on March 24, the agency’s associate chief information officer had agreed the matter should be reported to US-CERT. Operated by the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA), US-CERT provides on-site cyber incident response capabilities to federal and state agencies.

But Berulis said that between April 3 and 4, he and the associate CIO were informed that “instructions had come down to drop the US-CERT reporting and investigation and we were directed not to move forward or create an official report.” Berulis said it was at this point he decided to go public with his findings.

An email from Daniel Berulis to his colleagues dated March 28, referencing the unexplained traffic spike earlier in the month and the unauthorized changing of security controls for user accounts.

Tim Bearese, the NLRB’s acting press secretary, told NPR that DOGE neither requested nor received access to its systems, and that “the agency conducted an investigation after Berulis raised his concerns but ‘determined that no breach of agency systems occurred.'” The NLRB did not respond to questions from KrebsOnSecurity.

Nevertheless, Berulis has shared a number of supporting screenshots showing agency email discussions about the unexplained account activity attributed to the DOGE accounts, as well as NLRB security alerts from Microsoft about network anomalies observed during the timeframes described.

As CNN reported last month, the NLRB has been effectively hobbled since President Trump fired three board members, leaving the agency without the quorum it needs to function.

“Despite its limitations, the agency had become a thorn in the side of some of the richest and most powerful people in the nation — notably Elon Musk, Trump’s key supporter both financially and arguably politically,” CNN wrote.

Both Amazon and Musk’s SpaceX have been suing the NLRB over complaints the agency filed in disputes about workers’ rights and union organizing, arguing that the NLRB’s very existence is unconstitutional. On March 5, a U.S. appeals court unanimously rejected Musk’s claim that the NLRB’s structure somehow violates the Constitution.

Berulis shared screenshots with KrebsOnSecurity showing that on the day NPR published its story about his claims (April 14), the deputy CIO at NLRB sent an email stating that administrative control had been removed from all employee accounts. That meant suddenly none of the IT employees at the agency could do their jobs properly anymore, Berulis said.

An email from the NLRB’s associate chief information officer Eric Marks, notifying employees they will lose security administrator privileges.

Berulis shared a screenshot of an agency-wide email dated April 16 from NLRB director Lasharn Hamilton saying DOGE officials had requested a meeting, and reiterating claims that the agency had no prior “official” contact with any DOGE personnel. The message informed NLRB employees that two DOGE representatives would be detailed to the agency part-time for several months.

An email from the NLRB Director Lasharn Hamilton on April 16, stating that the agency previously had no contact with DOGE personnel.

Berulis told KrebsOnSecurity he was in the process of filing a support ticket with Microsoft to request more information about the DOGE accounts when his network administrator access was restricted. Now, he’s hoping lawmakers will ask Microsoft to provide more information about what really happened with the accounts.

“That would give us way more insight,” he said. “Microsoft has to be able to see the picture better than we can. That’s my goal, anyway.”

Berulis’s attorney told lawmakers that on April 7, while his client and legal team were preparing the whistleblower complaint, someone physically taped a threatening note to Mr. Berulis’s home door with photographs — taken via drone — of him walking in his neighborhood.

“The threatening note made clear reference to this very disclosure he was preparing for you, as the proper oversight authority,” reads a preface by Berulis’s attorney Andrew P. Bakaj. “While we do not know specifically who did this, we can only speculate that it involved someone with the ability to access NLRB systems.”

Berulis said the response from friends, colleagues and even the public has been largely supportive, and that he doesn’t regret his decision to come forward.

“I didn’t expect the letter on my door or the pushback from [agency] leaders,” he said. “If I had to do it over, would I do it again? Yes, because it wasn’t really even a choice the first time.”

For now, Mr. Berulis is taking some paid family leave from the NLRB. Which is just as well, he said, considering he was stripped of the tools needed to do his job at the agency.

“They came in and took full administrative control and locked everyone out, and said limited permission will be assigned on a need basis going forward,” Berulis said of the DOGE employees. “We can’t really do anything, so we’re literally getting paid to count ceiling tiles.”

Further reading: Berulis’s complaint (PDF).

PyFunnels: Data Normalization for InfoSec Workflows

TJ Nicholls // BHIS Guest Contributor. TL;DR: How many times have you had to parse the same output from a tool? Wouldn’t you like to get that time back? There […]

The post PyFunnels: Data Normalization for InfoSec Workflows appeared first on Black Hills Information Security, Inc.
