
Rep. Delia Ramirez takes over as top House cybersecurity Dem

28 April 2026 at 11:45

Illinois Rep. Delia Ramirez is taking over as the top Democrat on the House Homeland Security panel’s cybersecurity subcommittee, replacing former Rep. Eric Swalwell after his resignation.

Committee Democrats approved the change Tuesday at a meeting prior to a “shadow hearing” without the GOP majority, focused on protecting elections from Trump administration interference.

Ramirez first won election to Congress in 2022 and was reelected in 2024. She has served as the vice ranking member of the committee since 2023. She is now the ranking member of the Subcommittee on Cybersecurity and Infrastructure Protection.

She has leveled criticisms during committee hearings about the Trump administration’s personnel cutbacks at the Cybersecurity and Infrastructure Security Agency, and was critical of how data was secured under the administration’s Department of Government Efficiency initiative led by Elon Musk.

“Under a Musk and Trump presidency, it’s clear that the security of Americans’ information is not a priority. I mean, a private civilian with no security clearance bullied his way into the Treasury, set up private servers, and stole sensitive information from an agency. If that isn’t a national security crisis, a cybersecurity crisis, then I don’t know what is,” Ramirez said at an early 2025 hearing. “The true threat to our homeland security is ‘fElon’ Musk, Trump, and their blatant misuse of power to steal information and coerce employees to leave agencies.”

She cosponsored legislation last year meant to strengthen the cybersecurity workforce by promoting measures to help workers from underrepresented and disadvantaged communities join the field.

But she also had criticisms of U.S. cybersecurity under the Biden administration, including of Microsoft’s role in the SolarWinds breach.

In a statement about her appointment Tuesday, Ramirez took aim at Trump, Vice President JD Vance, Department of Homeland Security Secretary Markwayne Mullin and White House homeland security adviser Stephen Miller.

“It’s clear that the security of our communities’ information, federal networks, and critical infrastructure have not been priorities” under them, she said. “Between the security failures of DOGE, the abuses of immigrant families’ data, and the decimation of CISA’s workforce and resources, Republicans have demonstrated a lack of interest in safeguarding our nation’s cybersecurity and our residents’ civil rights and privacy. In neglecting necessary oversight, Republicans have deregulated emerging technologies, allowed bad actors to profit from violations of our civil rights, and consented to the weaponization of government systems. It is more critical than ever that we assert our Congressional authority and disrupt the blatant corruption making us all less safe.”

Swalwell left the position following his resignation from Congress as a representative from California amid allegations of sexual misconduct.

Her ascension completes a full leadership turnover for the subcommittee. Rep. Andy Ogles, R-Tenn., took over the gavel late last year after former chairman Andrew Garbarino, R-N.Y., took over as chairman of the full committee.

The subcommittee is set to hold a hearing Wednesday on CISA and its role as the sector risk management agency for a number of critical infrastructure sectors.

Updated 4/28/26: This story was updated to include comment from Ramirez.

The post Rep. Delia Ramirez takes over as top House cybersecurity Dem appeared first on CyberScoop.

Undressed victims file class action lawsuit against xAI for Grok deepfakes

By: djohnson
28 January 2026 at 16:27

A class of individuals who say they were victimized by nude or undressed deepfakes generated by Grok have filed a lawsuit against parent company xAI, calling the tool “a generative artificial intelligence chatbot that humiliates and sexually exploits women and girls by undressing them and posing them in sexual positions in deepfake images publicly posted on X.”

The lawsuit, filed Jan. 23 in the U.S. District Court for the Northern District of California, alleges that xAI executives knew Grok could generate explicit, nonconsensual images from real photos of victims, failed to implement industry-standard safeguards, and instead moved to “capitalize on the internet’s seemingly insatiable appetite for humiliating non-consensual sexual images.”

“xAI’s conduct is despicable and has harmed thousands of women who were digitally stripped and forced into sexual situations that they never consented to and who now face the very real risk that those public images will surface in their lives where viewers may not be able to distinguish whether they are real or fake,” the lawsuit stated.

There are at least 100 individuals involved in the lawsuit. The plaintiffs, who are suing under the anonymous name “Jane Doe, on behalf of herself and all others similarly situated,” cited data compiled by the New York Times showing that over a nine-day period between the end of December and the beginning of January, Grok generated 4.4 million images, of which at least 1.8 million were estimated to be sexualized deepfakes of women. Another analysis from the Center for Countering Digital Hate estimated that as many as three million of the images contained sexualized depictions of women, men and children.

“X users flooded Grok with these requests, and Grok obliged,” the lawsuit stated.

The suit claims that xAI took a number of actions to encourage users to create “nudified” content: a feature that let users prompt Grok to manipulate photos on X simply by tagging a person’s handle; a “spicy” option that let a user click on a photo and generate controversial content, including sexualized deepfakes; and a failure to implement any prompt filtering that would have blocked sexualized deepfake requests.

xAI owner Elon Musk fueled the controversy by asking Grok on X to generate a photo of himself in a bikini. As backlash grew, Musk announced the feature would be limited to paying subscribers, sparking more criticism that the company was profiting off the tool’s abusive capability.

Musk has since put forth several different defenses, at one point denying that Grok was even generating illegal sexualized content. On Jan. 14, he posted on X that he was “not aware of any naked underage images generated by Grok. Literally zero.”

As CyberScoop has reported, legal experts believe Grok’s undressing capability – which researchers say goes beyond generating bikini or lingerie images and included images of fully nude women, men and children, or victims covered in bodily fluids – may expose xAI and Musk to a broad range of U.S. and international laws against sexualized deepfakes, digital fraud, and the distribution of child sexual abuse material.

In addition to X’s embedded Grok tool, researchers have said that they were also able to easily generate even more graphic nonconsensual pornographic content through Grok’s main website.

The class action suit is the latest legal development to hit xAI and Musk over the episode. The European Union, the UK, South Korea, Canada, Brazil and others have opened formal investigations into whether xAI violated domestic laws. Leaders in the UK, India, Malaysia and Indonesia have all threatened to restrict or ban X unless more is done.

Meanwhile, the U.S. federal government, including the Federal Trade Commission and the Department of Justice, has remained silent.

But even in the United States, Musk is likely to face increasing pressure from states. On the same day the suit was filed, 35 state attorneys general wrote to Musk following a meeting with xAI officials, expressing “deep concern” over the company’s actions.

The state officials said they were “committed” to investigations and prosecutions in this area and pressed xAI to do more to curb the Grok-enabled abuse.

“As several of us conveyed to you in our recent discussion, halting this kind of abusive and illegal behavior is an utmost priority for the undersigned Attorneys General,” they wrote. “The creation and dissemination of child sexual abuse material is a crime. In many states, this is true even where the material has been manipulated or is synthetic. Various state and federal civil and criminal laws also forbid the creation of nonconsensual intimate images and provide remedies to victims.”

While there are numerous AI nudifying tools, they wrote that “Grok merits special attention given evidence that it both promoted and facilitated the production and public dissemination of such images, and made it all as easy as the click of a button.”

The post Undressed victims file class action lawsuit against xAI for Grok deepfakes appeared first on CyberScoop.

British regulator Ofcom opens investigation into X

By: djohnson
12 January 2026 at 13:57

The UK’s top internet regulator opened a formal investigation into social media network X after users, with the help of its AI chatbot Grok, flooded the site with nonconsensual, AI-manipulated nude and undressed photos of real people.

On Monday, the Office of Communications (Ofcom), which regulates internet and telecommunications companies, said the investigation will determine whether the content violates the UK Online Safety Act.

Ofcom said the investigation will focus on whether X has complied with portions of the law requiring it to assess the risk of such content reaching UK audiences online, take steps to prevent the distribution of nonconsensual images and child sexual abuse material (CSAM), take down illegal content and protect user privacy.

It will also focus on whether X took steps to evaluate the risk that Grok’s deepfake capabilities would pose to UK children, or used age verification features to block children from accessing or seeing the content. The regulator said it continues to engage with officials at X, who will have “an opportunity to respond to our findings in full, as required by the Act, before we make our final decision.”

“Reports of Grok being used to create and share illegal non-consensual intimate images and child sexual abuse material on X have been deeply concerning,” an Ofcom spokesperson said in a statement. “Platforms must protect people in the UK from content that’s illegal in the UK, and we won’t hesitate to investigate where we suspect companies are failing in their duties, especially where there’s a risk of harm to children.”

Ofcom also specified that it is a regulatory body, not a government censor, and the purpose of the inquiry is to determine whether X is breaking the law by facilitating the spread of nonconsensual deepfake pornography, including that of children.

Last week, Prime Minister Keir Starmer of the ruling Labour Party called the deepfake scandal “disgusting” and said all options, including banning X from Britain, were on the table.

Following the investigation, the regulator will determine if X has failed to comply with the Online Safety Act and issue a provisional sanction. Beyond possible legal orders compelling X to change Grok and its business practices, the sanctions could include fines of up to £18 million or 10% of the company’s worldwide revenue.

Ofcom said it has used its newfound powers under the Online Safety Act – first implemented last year – to launch investigations into more than 90 platforms, issue fines to six companies for failing to have “robust” age verification technology, and levy its first £1 million fine.

But the investigation and potential sanctions of X, based in the U.S. and owned by the richest person in the world, will mark the most significant test yet of the regulatory agency’s authority under the new law. Thus far, the U.S. Department of Justice and the Federal Trade Commission have been silent as outrage from users and international governments continues to grow.

The post British regulator Ofcom opens investigation into X appeared first on CyberScoop.

Dems pressure Google, Apple to drop X app as international regulators turn up heat

By: djohnson
9 January 2026 at 14:06

A trio of Senate Democrats are calling on Apple and Google to drop Elon Musk’s X from their app stores, as international regulators in Europe and Britain took steps toward investigations of the site’s mass undressing of users via its Grok AI tool.

On Friday, Sens. Ron Wyden, D-Ore., Ben Ray Luján, D-N.M., and Ed Markey, D-Mass., wrote to Apple’s and Google’s chief executives, asking them to “enforce your app stores’ terms of service against X.”

“X’s generation of these harmful and likely illegal depictions of women and children has shown complete disregard for your stores’ distribution terms,” they wrote.

The senators quote the Google Play Store’s terms of service, which state that apps must “prohibit users from creating, uploading, or distributing content that facilitates the exploitation or abuse of children” and are subject to immediate removal for violations. Apple’s terms allow wide flexibility to take action against apps or content that are “offensive” or “just plain creepy,” something the senators argued should clearly cover what is happening on X.

“There can be no mistake about X’s knowledge, and, at best, negligent response to these trends,” the lawmakers wrote. 

The lawmakers explicitly compared the lack of action or comment from both companies thus far to the way the stores treated apps meant to track Immigration and Customs Enforcement operations around the country, such as ICEBlock and Red Dot.

“Unlike Grok’s sickening content generation, these apps were not creating or hosting harmful or illegal content, and yet, based entirely on the Administration’s claims that they posed a risk to immigration enforcers, you removed them from your stores,” the senators noted.

The call comes as international regulators have turned up the heat on X over the scandal, while conflicting reports swirl about the extent to which X has limited Grok’s deepfake functionality after weeks of criticism.

The UK’s Office of Communications, the nation’s top communications regulatory agency, said it had made “urgent” contact with X over the images being generated by users through Grok, and that based on the company’s response, “we will undertake a swift assessment to determine whether there are potential compliance issues” under the UK Online Safety Act. On Friday, Prime Minister Keir Starmer called the images “unlawful” and “disgusting” and promised that all options, including a potential ban of X, were being considered.

Meanwhile, the European Union has ordered X to preserve all documents related to Grok through 2026, an indication that it could be subject to regulatory or law enforcement investigations, according to Reuters.

As CyberScoop and others have reported, legal experts have said that Musk may be exposing X to broad legal and regulatory risks from states, federal regulators and law enforcement.

There have been conflicting reports that X, which has not responded to inquiries from journalists under Musk’s ownership, may be taking steps to limit Grok’s deepfake functionality for some of its users.

On Friday, Musk posted on X that he was limiting the feature to paid users, which has resulted in a fresh round of outrage from observers who pointed out that monetizing illegal sexual deepfakes was not a solution to the problem. Prior to that statement, the only public response from Musk addressing the scandal was a post he made with “cry-laughing” emojis in response to a Grok-generated deepfake of himself wearing a bikini.

Musk doesn’t release paid-subscriber numbers, but a TechCrunch analysis based on reported in-app purchase revenue estimates the figure could be as high as 1.3 million to 3.7 million users.

But even the claim that non-paying users are shut out from making further sexualized deepfakes through Grok may be inaccurate, as users on social media reported that even after the change, they were able to access Grok’s deepfake feature as a free user through X or Grok’s website.

The post Dems pressure Google, Apple to drop X app as international regulators turn up heat appeared first on CyberScoop.

‘Elon Musk is playing with fire:’ All the legal risks that apply to Grok’s deepfake disaster

By: djohnson
8 January 2026 at 09:08

As collective disgust has continued to build over the widespread generation and sharing of nonconsensual, sexualized deepfakes created by X’s Grok AI tool, angry onlookers have expressed shock that the activity continues unabated and that company owner Elon Musk isn’t being compelled – by either U.S. regulators or law enforcement – to put a halt to the practice.

Legal experts say at the federal level, there are several laws and regulations already on the books that could expose Musk and X to significant fines, civil lawsuits and criminal prosecution.

Those tools include new laws like the Take It Down Act, legislation sponsored last year by Sens. Amy Klobuchar, D-Minn., and Ted Cruz, R-Texas, that provides for criminal prosecution of individuals who share sexualized AI-generated images and requires platforms to remove such images within 48 hours of being notified by a victim.

Klobuchar, posting on X, called the AI generated material “outrageous” and said the law would be enforced.

“No one should find AI-created sexual images of themselves online—especially children,” wrote Klobuchar. “X must change this. If they don’t, my bipartisan TAKE IT DOWN Act will soon require them to.”

Because AI is still an emerging technology, it remains unclear how existing criminal statutes apply to it and how enforcement decisions will be made, leaving federal regulators, law enforcement and courts with limited guidance. It’s not immediately clear, for instance, how many of the images and victims could be subject to legal or regulatory action under the Take It Down Act.

“The definitions are not favorable to what we’re dealing with right now,” said Amy Mushahwar, a partner at national law firm Lowenstein Sandler who specializes in data privacy and security issues.

Take It Down … Later

The Take It Down Act can be enforced in two ways: through criminal prosecution of those who generate and share such images online, and through takedown notices submitted by victims to platforms, which must remove the images within two days. Neither is a perfect fit for what is happening on X.

The law’s takedown provision, which will be enforced through the Federal Trade Commission, does not take effect until May. 

While the criminal penalties are currently active, they authorize the DOJ only to investigate and charge individuals prompting Grok to generate the manipulated photos, not the company or Musk himself.

Further complicating matters, the law’s reliance on specific legal definitions can make it difficult to prosecute cases involving some of the images generated on Grok. A victim’s age, or being depicted with even a small amount of clothing, can mean the difference between an image violating the law or not.

In conversations with lawyers and Hill staffers, many said the Take It Down Act would clearly cover the most egregious violations on Grok, like nudes and sexualized depictions of minors, but would be harder to apply to other instances. That’s because the Act criminalizes the sharing of “intimate visual depictions” using deepfakes, which under U.S. law is defined as an image showing an individual’s uncovered genitals, or displaying them covered in bodily fluids.

“That has a specific meaning under the law so that a depiction of a nude person may be an intimate visual depiction, but someone in a bikini may not be,” said Samir Jain, vice president of policy at the Center for Democracy and Technology.

Victims who have been undressed and placed in bikinis, lingerie or other suggestive clothing by Grok could, alternatively, seek legal relief under another section of the law that bans digital forgeries for adults and minors.

The U.S. Sentencing Commission is currently grappling with how to set minimum and maximum fines and jail sentences under the law and determine how it would apply to different crimes and sections of U.S. criminal code.

Communications Indecency

Even with restrictive language and delayed enforcement timelines, Grok’s mass undressing of users likely runs afoul of other federal and state laws, legal experts tell CyberScoop.

Others questioned whether X’s conduct would truly be protected under Section 230 of the Communications Decency Act, which typically shields social media platforms from civil lawsuits.

While Section 230 has traditionally been a legal bulwark for social media companies, shielding them from lawsuits over user content, X may bear direct liability here because Grok is a company feature.

Jain said that legal protections under Section 230 are predicated on the idea that the platforms shouldn’t be held liable for third-party created content posted by users. But in this case, X’s own embedded AI tool is generating the images.

“There’s a good argument that [Grok] at least played a role in creating or developing the image, since Grok seems to have created it at the behest of the user, so it may not be user content insulated by section 230,” he said.

However, he also posited that Musk’s status with the Republican Party and President Donald Trump could also deter federal agencies from taking a hard line. At the FTC, for example, Trump has fired the two commissioners who were nominated by the Democratic Party, leaving it a more partisan and White House-controlled entity than in previous administrations.

Laws “require enforcement by the federal government, the Justice Department in the case of criminal [law], but the FTC in the case of the takedown piece,” he said. “And so there might be questions also about the degree to which the administration would be committed to enforcing those laws against X and Musk.”

A lane for state AGs

As Riana Pfefferkorn, a non-resident fellow at Stanford University’s Center for Internet Security, pointed out, Congress has signaled its broader stance on criminalizing AI-generated sexual deepfakes through legislation like the Take It Down Act. In addition, dozens of states have anti-CSAM laws on the books, including many that specifically target AI-generated child pornography.

Mushahwar agreed, predicting that even if Musk avoids federal scrutiny, state attorneys general will likely move aggressively to enforce existing CSAM and digital forgery laws. She said they will also look for places where “logical extensions” of those laws can cover the AI images being generated and posted on X.

Given the widespread revulsion the scandal has provoked, many AGs will likely feel serious pressure from their constituents to use whatever legal tools are at hand to go after offenders.

“I do think Elon Musk is playing with fire, not just on a legal basis, but on a child safety basis,” Mushahwar said. “Like, if your platform is growing because you’re creating interest from pedophiles, that is creating a cesspool that might end up creating a trafficking haven.”

The post ‘Elon Musk is playing with fire:’ All the legal risks that apply to Grok’s deepfake disaster appeared first on CyberScoop.
