
Undressed victims file class action lawsuit against xAI for Grok deepfakes

By: djohnson
28 January 2026 at 16:27

A class of individuals who say they were victimized by nude or undressed deepfakes generated by Grok has filed a lawsuit against parent company xAI, calling the tool “a generative artificial intelligence chatbot that humiliates and sexually exploits women and girls by undressing them and posing them in sexual positions in deepfake images publicly posted on X.”

The lawsuit, filed Jan. 23 in the U.S. District Court for the Northern District of California, alleges that xAI executives knew Grok could generate explicit, nonconsensual images from real photos of victims, failed to implement industry-standard safeguards, and instead moved to “capitalize on the internet’s seemingly insatiable appetite for humiliating non-consensual sexual images.”

“xAI’s conduct is despicable and has harmed thousands of women who were digitally stripped and forced into sexual situations that they never consented to and who now face the very real risk that those public images will surface in their lives where viewers may not be able to distinguish whether they are real or fake,” the lawsuit stated.

There are at least 100 individuals involved in the lawsuit. The plaintiffs, who are suing under the anonymous name “Jane Doe, on behalf of herself and all others similarly situated,” cited data compiled by the New York Times showing that over a nine-day period between the end of December and the beginning of January, Grok generated 4.4 million images, of which at least 1.8 million were estimated to be sexualized deepfakes of women. Another analysis from the Center for Countering Digital Hate estimated that as many as three million of the images contained sexualized depictions of women, men and children.

“X users flooded Grok with these requests, and Grok obliged,” the lawsuit stated.

The suit claims that xAI took a number of actions to encourage users to create “nudified” content, including offering a feature that allowed other users to prompt Grok to manipulate photos on X simply by tagging a person’s handle; providing a “spicy” option that let a user click on a photo and generate controversial content, including sexualized deepfakes; and failing to implement any prompt filtering that would have blocked sexualized deepfake requests.

xAI owner Elon Musk fueled the controversy by asking Grok on X to generate a photo of himself in a bikini. As backlash grew, Musk announced the feature would be limited to paying subscribers, sparking more criticism that the company was profiting off the tool’s abusive capability.

Musk has since put forth several different defenses, at one point denying that Grok was even generating illegal sexualized content. On Jan. 14, he posted on X that he was “not aware of any naked underage images generated by Grok. Literally zero.”

As CyberScoop has reported, legal experts believe Grok’s undressing capability – which researchers say goes beyond generating bikini or lingerie images and included images of fully nude women, men and children, or victims covered in bodily fluids – may expose xAI and Musk to a broad range of U.S. and international laws against sexualized deepfakes, digital fraud, and the distribution of child sexual abuse material.

In addition to X’s embedded Grok tool, researchers have said that they were also able to easily generate even more graphic nonconsensual pornographic content through Grok’s main website.

The class action suit is the latest legal development to hit xAI and Musk over the episode. The European Union, the UK, South Korea, Canada, Brazil and others have opened formal investigations into whether xAI violated domestic laws. Leaders in the UK, India, Malaysia and Indonesia have all threatened to restrict or ban X unless more is done.

Meanwhile, the U.S. federal government, including the Federal Trade Commission and the Department of Justice, has remained silent.

But even in the United States, Musk is likely to face increasing pressure from states. On the same day the suit was filed, 35 state attorneys general, following a meeting with xAI officials, wrote to Musk expressing “deep concern” over the company’s actions.

The state officials said they were “committed” to investigations and prosecutions in this area and pressed xAI to do more to curb the Grok-enabled abuse.

“As several of us conveyed to you in our recent discussion, halting this kind of abusive and illegal behavior is an utmost priority for the undersigned Attorneys General,” they wrote. “The creation and dissemination of child sexual abuse material is a crime. In many states, this is true even where the material has been manipulated or is synthetic. Various state and federal civil and criminal laws also forbid the creation of nonconsensual intimate images and provide remedies to victims.”

While numerous AI nudifying tools exist, the attorneys general wrote that “Grok merits special attention given evidence that it both promoted and facilitated the production and public dissemination of such images, and made it all as easy as the click of a button.”

The post Undressed victims file class action lawsuit against xAI for Grok deepfakes appeared first on CyberScoop.

California AG launches investigation into X’s sexualized deepfakes

By: Greg Otto
14 January 2026 at 14:36

California Attorney General Rob Bonta announced an investigation Wednesday into xAI over allegations that its artificial intelligence model Grok is being used to create nonconsensual sexually explicit images of women and children on a large scale, marking the latest escalation in regulatory efforts to address AI-generated deepfakes.

The California investigation focuses on Grok’s “spicy mode,” a feature designed to generate explicit content that xAI has promoted as a distinguishing characteristic of its platform. According to Bonta’s office, news reports in recent weeks have documented widespread instances of users manipulating ordinary photos of women and children found online to create sexualized images without the subjects’ knowledge or consent.

“The avalanche of reports detailing the non-consensual, sexually explicit material that xAI has produced and posted online in recent weeks is shocking. This material, which depicts women and children in nude and sexually explicit situations, has been used to harass people across the internet. I urge xAI to take immediate action to ensure this goes no further. We have zero tolerance for the AI-based creation and dissemination of nonconsensual intimate images or of child sexual abuse material,” Bonta said in a release. 

The investigation will examine whether xAI violated California law in developing and maintaining features that facilitate the creation of such content. Bonta stated his office would “use all the tools at my disposal to keep California’s residents safe,” though he did not specify which statutes may have been violated.

xAI, founded by Elon Musk, also owns the social media platform X, where Grok-generated images have circulated. 

The company has not publicly responded to the investigation announcement. Musk posted Wednesday that he was “not aware of any naked underage images generated by Grok. Literally zero.”

CyberScoop has reached out to X for comment. 

The announcement comes a day after the Senate unanimously passed the DEFIANCE Act, which would grant victims of nonconsensual sexually explicit deepfakes the right to pursue civil action against those who produce or distribute such content. The bill now moves to the House, where similar legislation stalled in 2024 despite Senate approval.

The Senate’s passage of the DEFIANCE Act represents a rare moment of bipartisan consensus on technology regulation. The legislation, introduced by Sens. Dick Durbin, D-Ill., and Lindsey Graham, R-S.C., received no objections during a unanimous consent request Tuesday on the Senate floor.

The bill would establish federal civil liability for individuals who knowingly produce, distribute, or possess with intent to distribute nonconsensual sexually explicit digital forgeries. Rep. Alexandria Ocasio-Cortez, D-N.Y., who has acknowledged being a victim of explicit deepfakes, introduced companion legislation in the House with support from seven Republicans and six Democrats.

The technology to create such content has become increasingly accessible to the general public, lowering barriers that once limited deepfake production to those with specialized technical knowledge.

California has emerged as a focal point for AI regulation, with state lawmakers passing several bills aimed at addressing AI safety concerns. Bonta has been particularly active on issues involving AI and children, meeting with OpenAI executives in September alongside Delaware’s attorney general to discuss concerns about how AI products interact with young people. In August, he sent letters to 12 major AI companies following reports of sexually inappropriate interactions between AI chatbots and children.

California’s investigation comes as the United Kingdom announced earlier this week that it is conducting its own investigation into the proliferation of deepfakes on X.

