
The FTC’s AI portfolio is about to get bigger

By: djohnson
20 April 2026 at 17:00

The Federal Trade Commission is poised to deepen its involvement in curbing the use of AI for malicious purposes, including the spread of nonconsensual sexualized deepfakes and voice cloning scams.

Last year, Congress passed the Take It Down Act, a law that allows for the criminal prosecution of individuals who share or distribute nonconsensual intimate images and digital forgeries, including those that are AI-generated.

At a Senate oversight hearing last week, FTC Chair Andrew Ferguson called the new law one of the “greatest legislative achievements” of the current Congress and President Donald Trump’s administration, and said the FTC was preparing for “robust enforcement.”

Earlier this month, the Department of Justice scored its first successful conviction under the new law, when 37-year-old Columbus, Ohio resident James Strahler pleaded guilty to using AI-generated deepfake nudes as part of a harassment campaign targeting at least six women.

Another section of the law, set to take effect in May, will permit individuals to file “take down” notices with websites that publish or host sexual deepfakes. Companies will have 48 hours to remove the content or face FTC investigation and enforcement.

Commissioner Mark Meador said at a March 30 conference in Washington, D.C., that while he hopes they “never have to enforce it,” the FTC is treating Take It Down enforcement as a top priority and “actively spinning everything up that we need” to enforce the takedown provision.

That could quickly set up one of the first major confrontations with the tech sector, especially companies like xAI, whose Grok tool continues to be used to create and host nonconsensual deepfake images of real people, even after the scandal it faced earlier this year.

Following his speech, CyberScoop asked Meador how the takedown provisions might apply to Grok’s mass nudification of X users. He said the law specifies that the commission can’t take action against a company until it receives formal complaints, which can be filed starting in May.

“This is coming into place, and then if they don’t [remove the content] we would get the complaints and then we would go after them at that point,” Meador said. “So, we kind of have to wait and see how…companies respond to complaints and requests being made, and my hope would be that every company that gets a request to take something down would immediately take it down.”

xAI’s press office did not respond to CyberScoop’s request for comment on its preparations to comply with the Take It Down Act.

Strahler, who has yet to be sentenced, also admitted to using photos of children in his neighborhood to create deepfake pornography. An FTC strategic plan published earlier this month flagged protecting children online as a “key concern” for the commission, one that merits more consumer tools and resources.

The commission is “dedicated to exploring other ways the FTC can protect children and support families, including through its new authority under the Take It Down Act,” the plan states.

Casey Waughn, a privacy lawyer and senior associate at Armstrong Teasdale, told CyberScoop that the current commission’s focus on child online safety leaves ample room for the law to be brought to bear in creative ways.

“We’ve seen enforcing technology and privacy violations related to [young] children is a priority, so I think it’s relatively easy to parlay that into some Take It Down Act enforcement,” she said.

Waughn said the one-year delay in the provision’s enforcement was meant to give platforms time to prepare, but added that the FTC could do more to publicly signal to companies what lawful compliance looks like, similar to the resources it provides around major privacy laws.

“I think what would be helpful for all organizations…would be guidance explaining what constitutes a good faith effort, for example, to attempt to address a take down request,” said Waughn.

Living in a scammer’s paradise

The FTC is also grappling with the impact of AI on criminal scams targeting Americans online.

Ferguson told lawmakers that AI is “increasing both the sophistication of the actual mechanisms by which the scams are accomplished, but it’s also making it easier for scammers to choose their targets.”

But the FTC’s powers are limited, as the Federal Communications Commission regulates the telephone and internet providers that transmit most scams. Ferguson also noted that many call center scams are located overseas “where they don’t bat an eye at the risk of civil enforcement from the FTC.” He said the commission was open to additional legislative authorities to tackle the problem.

At the March conference, Meador said AI-fueled deception was something the commission thinks about “daily” and that the technology is lowering the barrier to entry for many criminal schemes.

“The biggest place that we’ve seen [in] the way that some of these AI tools are being used to triple charge scams, to be honest,” he said.

Last year, the FBI reported that voice cloning scams impersonating distressed family members had bilked Americans out of nearly $900 million, and the technology has been used to impersonate high-level Trump administration officials in conversations with businesses and political leaders.

Sen. Maggie Hassan, D-N.H., wrote to four AI voice cloning companies – ElevenLabs, LOVO, Speechify and VEED – asking what policies and programs they had in place to prevent or deter fraud enabled by their tools.

But Meador said that when it comes to deceptive claims, it’s particularly difficult to define credulity around the use of AI. Many deepfakes, he said, are seen and consumed by many people online with the same sort of “willing suspension of disbelief” that they bring to computer-generated effects in movies.

As such, the FTC will likely have to adjudicate on a case-by-case basis rather than through “broad brush strokes.”

“I think we’ll see a lot of that in the AI context, where if you know something wasn’t meant to be real or authentic, that’s not a concern,” he said. “The question is then, what are those situations where there is an expectation that you’re being shown something authentic and quote, unquote ‘real’ as opposed to being AI-generated, and was there misrepresentation or material omission” in failing to disclose that?


From fake nudes to fake quotes: AI deepfakes plagued Olympic athletes

By: djohnson
2 March 2026 at 06:00

While competing for medals and glory in Milan, Italy, U.S. Olympic athletes experienced something that is fast becoming a regular feature of modern public life: the widespread use of AI tools by politicians, trolls and sexual harassers to manipulate their images and voices.

Users on 4chan and other sites quickly generated and shared “nudified” or sexualized imagery of multiple female U.S. athletes, including figure skaters Alysa Liu, Amber Glenn and Isabeau Levito, as well as skiers Mikaela Shiffrin and Eileen Gu (who competed for China).

Multiple research firms, including Graphika and Open Measures, tracked the posts and images on 4chan, a platform that automatically deletes posts and topic-specific boards after a set period.

Cristina López G., a senior analyst at Graphika and author of a report released Monday, told CyberScoop that online communities dedicated to generating and sharing fake, nonconsensual nude images of celebrities, public figures and women they know existed before the generative AI era. But these groups have taken advantage of AI image models, particularly local, open-source versions that can be downloaded and fine-tuned, to improve image quality and make the technology accessible to less technical members.

“These communities have co-opted and adapted these technologies to optimize them for their end use case, which continues to be the production of [nonconsensual sexual imagery],” López G. said.

Users on these 4chan message boards follow a gamified pattern: one person posts a nonconsensual or sexualized image, then asks others to post their own in return. The availability of downloadable, open-source AI models, which lack safety guardrails and can be customized for “nudification,” has accelerated this activity.

These customized weights and settings, known as Low-Rank Adaptation (LoRA) files, can be shared online and plugged into other users’ local models, much the way gamers create and share mods.
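How that plug-in step works in practice is straightforward. The sketch below is offered as a general illustration, not a depiction of any specific tool these communities use: it loads a shared LoRA adapter onto a locally run open-source image model with the Hugging Face diffusers library. The base model named is a public example, and the adapter filename is hypothetical.

    # Illustrative sketch: loading a community-shared LoRA adapter into a
    # locally run open-source image model (Hugging Face `diffusers` library).
    # The adapter filename below is hypothetical.
    import torch
    from diffusers import AutoPipelineForText2Image

    # Download and run an open-source base model on local hardware.
    pipe = AutoPipelineForText2Image.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",  # example public base model
        torch_dtype=torch.float16,
    ).to("cuda")

    # Plug in someone else's fine-tuned adapter: a small file of low-rank
    # weight matrices layered onto the base model, much like a game mod.
    pipe.load_lora_weights("shared_watercolor_style.safetensors")

    image = pipe("a watercolor painting of a lighthouse").images[0]
    image.save("out.png")

Because an adapter is just a small file of weights rather than a full model, it can be posted, re-downloaded and recombined anywhere, which is part of what makes this distribution pattern so difficult to police.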

Deepfakes have been around – and steadily improving – for years, but in the past 18 months generative AI technology has made drastic gains in producing realistic photos and videos.

Additionally, open-source models have spread throughout the internet, giving users the ability to customize, fine-tune and share ones that are optimized for nudification and non-consensual image generation.

Even though 4chan’s posts auto-delete, they can still spread to the broader internet. In 2024, for example, deepfake nudes of Taylor Swift originated on the site but went viral on mainstream social media. López G. said platforms like Telegram and, increasingly, X become conduits for spreading the images further.

“The way in which this alters the game, I would say, is that you’re not only trading outputs anymore, you are trading the ability to generate infinite outputs,” she said. “So the harm compounds, because you are just enabling a lot of other people to be able to produce and uniquely and specifically target these women.”

AI, culture war politics and the public eye

The use of AI to mimic or harass U.S. Olympians during the games wasn’t limited to nonconsensual nudes on 4chan.

Brady Tkachuk of the U.S. men’s hockey team spoke out after the White House posted an AI-generated video that falsely depicted him mocking Canadians after Team USA’s gold medal win over Canada.

The video, shared through the White House’s TikTok account, depicted Tkachuk saying of Canada, “They booed our national anthem, so I had to come out and teach those maple-syrup-eating f—s a lesson.” Despite including an AI-generated disclaimer, the video has been viewed tens of millions of times.

Nevertheless, Tkachuk – an American citizen who plays professionally for the Ottawa Senators – took issue, telling the media “I don’t like that video” because “it’s not my voice, not my lips moving.”

It’s the latest example of the Trump White House using AI to alter or manipulate public imagery. The administration now regularly creates or shares AI-generated images as part of its political messaging, sometimes without disclosing it to the public. Earlier this year, the White House posted an AI-manipulated photo on X showing Minnesota protester Nekima Levy Armstrong crying as she was arrested and led away in handcuffs, an emotion not present in the original image. Other federal agencies’ social media accounts have also shared AI-manipulated images and videos.

White House officials have consistently defended their actions, describing them as little more than jokes. López G. said whether it’s nonconsensual nudes or political deepfakes, the problem “goes deeper than technological harm,” and reflects how pockets of online culture are essentially in denial about this content’s real-world impact.

“One thing that really jumps out is that many of the people producing [deepfakes] do not connect the harms that they are doing to the actual person,” she said. “In their minds it is ‘this is not real’ and so these people are not getting hurt. There is a disconnect there that has nothing to do with the technology, that has more to do with us as a culture.”


Undressed victims file class action lawsuit against xAI for Grok deepfakes

By: djohnson
28 January 2026 at 16:27

A class of individuals who say they were victimized by nude or undressed deepfakes generated by Grok have filed a lawsuit against parent company xAI, calling the tool “a generative artificial intelligence chatbot that humiliates and sexually exploits women and girls by undressing them and posing them in sexual positions in deepfake images publicly posted on X.”

The lawsuit, filed Jan. 23 in the U.S. District Court for the Northern District of California, alleges that xAI executives knew Grok could generate explicit, nonconsensual images from real photos of victims, failed to implement industry-standard safeguards, and instead moved to “capitalize on the internet’s seemingly insatiable appetite for humiliating non-consensual sexual images.”

“xAI’s conduct is despicable and has harmed thousands of women who were digitally stripped and forced into sexual situations that they never consented to and who now face the very real risk that those public images will surface in their lives where viewers may not be able to distinguish whether they are real or fake,” the lawsuit stated.

There are at least 100 individuals involved in the lawsuit. The plaintiffs, who are suing under the anonymous name “Jane Doe, on behalf of herself and all others similarly situated,” cited data compiled by the New York Times showing that over a nine-day period between the end of December and the beginning of January, Grok generated 4.4 million images, of which at least 1.8 million were estimated to be sexualized deepfakes of women. Another analysis from the Center for Countering Digital Hate estimated that as many as three million of the images contained sexualized depictions of women, men and children.

“X users flooded Grok with these requests, and Grok obliged,” the lawsuit stated.

The suit claims that xAI took a number of actions that encouraged users to create “nudified” content: it let users prompt Grok to manipulate photos on X simply by tagging a person’s handle; it offered a “spicy” option, through which a user could click on a photo and generate controversial content, including sexualized deepfakes; and it failed to implement any prompt filtering that would have blocked sexualized deepfake requests.

xAI owner Elon Musk fueled the controversy by asking Grok on X to generate a photo of himself in a bikini. As backlash grew, Musk announced the feature would be limited to paying subscribers, sparking more criticism that the company was profiting off the tool’s abusive capability.

Musk has since put forth several different defenses, at one point denying that Grok was even generating illegal sexualized content. On Jan. 14, he posted on X that he was “not aware of any naked underage images generated by Grok. Literally zero.”

As CyberScoop has reported, legal experts believe Grok’s undressing capability – which researchers say goes beyond generating bikini or lingerie images and has included images of fully nude women, men and children, or victims covered in bodily fluids – may expose xAI and Musk to liability under a broad range of U.S. and international laws against sexualized deepfakes, digital fraud, and the distribution of child sexual abuse material.

In addition to X’s embedded Grok tool, researchers have said that they were also able to easily generate even more graphic nonconsensual pornographic content through Grok’s main website.

The class action suit is the latest legal development to hit xAI and Musk over the episode. The European Union, the UK, South Korea, Canada, Brazil and others have opened formal investigations into whether xAI violated domestic laws. Leaders in the UK, India, Malaysia and Indonesia have all threatened to restrict or ban X unless more is done.

Meanwhile, the U.S. federal government, including the Federal Trade Commission and the Department of Justice, has remained silent.

But even in the United States, Musk is likely to face increasing pressure from states. On the same day the suit was filed, 35 state attorneys general, following a meeting with xAI officials, wrote to Musk expressing “deep concern” over the company’s actions.

The state officials said they were “committed” to investigations and prosecutions in this area and pressed xAI to do more to curb the Grok-enabled abuse.

“As several of us conveyed to you in our recent discussion, halting this kind of abusive and illegal behavior is an utmost priority for the undersigned Attorneys General,” they wrote. “The creation and dissemination of child sexual abuse material is a crime. In many states, this is true even where the material has been manipulated or is synthetic. Various state and federal civil and criminal laws also forbid the creation of nonconsensual intimate images and provide remedies to victims.”

While there are numerous AI nudifying tools, they wrote that “Grok merits special attention given evidence that it both promoted and facilitated the production and public dissemination of such images, and made it all as easy as the click of a button.”


California AG launches investigation into X’s sexualized deepfakes

By: Greg Otto
14 January 2026 at 14:36

California Attorney General Rob Bonta announced an investigation Wednesday into xAI over allegations that its artificial intelligence model Grok is being used to create nonconsensual sexually explicit images of women and children on a large scale, marking the latest escalation in regulatory efforts to address AI-generated deepfakes.

The California investigation focuses on Grok’s “spicy mode,” a feature designed to generate explicit content that xAI has promoted as a distinguishing characteristic of its platform. According to Bonta’s office, news reports in recent weeks have documented widespread instances of users manipulating ordinary photos of women and children found online to create sexualized images without the subjects’ knowledge or consent.

“The avalanche of reports detailing the non-consensual, sexually explicit material that xAI has produced and posted online in recent weeks is shocking. This material, which depicts women and children in nude and sexually explicit situations, has been used to harass people across the internet. I urge xAI to take immediate action to ensure this goes no further. We have zero tolerance for the AI-based creation and dissemination of nonconsensual intimate images or of child sexual abuse material,” Bonta said in a release. 

The investigation will examine whether xAI violated California law in developing and maintaining features that facilitate the creation of such content. Bonta stated his office would “use all the tools at my disposal to keep California’s residents safe,” though he did not specify which statutes may have been violated.

xAI, founded by Elon Musk, also owns the social media platform X, where Grok-generated images have circulated. 

The company has not publicly responded to the investigation announcement. Musk posted Wednesday that he was “not aware of any naked underage images generated by Grok. Literally zero.”

CyberScoop has reached out to X for comment. 

The announcement comes a day after the Senate unanimously passed the DEFIANCE Act, which would grant victims of nonconsensual sexually explicit deepfakes the right to pursue civil action against those who produce or distribute such content. The bill now moves to the House, where similar legislation stalled in 2024 despite Senate approval.

The Senate’s passage of the DEFIANCE Act represents a rare moment of bipartisan consensus on technology regulation. The legislation, introduced by Sens. Dick Durbin, D-Ill., and Lindsey Graham, R-S.C., received no objections during a unanimous consent request Tuesday on the Senate floor.

The bill would establish federal civil liability for individuals who knowingly produce, distribute, or possess with intent to distribute nonconsensual sexually explicit digital forgeries. Rep. Alexandria Ocasio-Cortez, D-N.Y., who has acknowledged being a victim of explicit deepfakes, introduced companion legislation in the House with support from seven Republicans and six Democrats.

The technology to create such content has become increasingly accessible to the general public, lowering barriers that once limited deepfake production to those with specialized technical knowledge.

California has emerged as a focal point for AI regulation, with state lawmakers passing several bills aimed at addressing AI safety concerns. Bonta has been particularly active on issues involving AI and children, meeting with OpenAI executives in September alongside Delaware’s attorney general to discuss concerns about how AI products interact with young people. In August, he sent letters to 12 major AI companies following reports of sexually inappropriate interactions between AI chatbots and children.

California’s investigation comes as the United Kingdom announced earlier this week that it was also conducting its own investigation into the proliferation of deepfakes on X. 


British regulator Ofcom opens investigation into X

By: djohnson
12 January 2026 at 13:57

The UK’s top internet regulator opened a formal investigation into social media network X after users, with the help of its AI chatbot Grok, flooded the site with nonconsensual, AI-manipulated nude and undressed photos of real people.

On Monday, the Office of Communications (Ofcom), which regulates internet and telecommunications companies, said the investigation will determine whether the content violates the UK Online Safety Act.

Ofcom said the investigation will focus on whether X has complied with portions of the law requiring it to assess the risk of such content reaching UK audiences online, take steps to prevent the distribution of nonconsensual images and child sexual abuse material (CSAM), take down illegal content and protect user privacy.

It will also focus on whether X took steps to evaluate the risk that Grok’s deepfake capabilities pose to UK children, and whether it used age verification features to keep children from accessing or seeing the content. The regulator said it continues to engage with officials at X, who will have “an opportunity to respond to our findings in full, as required by the Act, before we make our final decision.”

“Reports of Grok being used to create and share illegal non-consensual intimate images and child sexual abuse material on X have been deeply concerning,” an Ofcom spokesperson said in a statement. “Platforms must protect people in the UK from content that’s illegal in the UK, and we won’t hesitate to investigate where we suspect companies are failing in their duties, especially where there’s a risk of harm to children.”

Ofcom also specified that it is a regulatory body, not a government censor, and the purpose of the inquiry is to determine whether X is breaking the law by facilitating the spread of nonconsensual deepfake pornography, including that of children.

Last week, Prime Minister Keir Starmer of the ruling Labour Party called the deepfake scandal “disgusting” and said all options, including banning X from Britain, were on the table.

Following the investigation, the regulator will determine whether X has failed to comply with the Online Safety Act and, if so, issue a provisional sanction. Beyond possible legal orders compelling X to change Grok and its business practices, the sanctions could include fines of up to £18 million or 10% of the company’s worldwide revenue, whichever is greater.

Ofcom said it has used its newfound powers under the Online Safety Act – first implemented last year – to launch investigations into more than 90 platforms, fine six companies for failing to deploy “robust” age verification technology, and issue its first £1 million penalty.
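As a rough illustration of how that penalty ceiling scales, the sketch below applies the commonly described greater-of formula; the revenue figures are hypothetical and this is not an official Ofcom calculation.

    # Illustrative sketch of the Online Safety Act fine ceiling, commonly
    # described as the greater of GBP 18 million or 10% of qualifying
    # worldwide revenue. Not an official Ofcom calculation.
    def osa_fine_cap(worldwide_revenue_gbp: float) -> float:
        return max(18_000_000.0, 0.10 * worldwide_revenue_gbp)

    # Hypothetical revenues, for scale only.
    print(osa_fine_cap(50_000_000))     # smaller platform -> 18,000,000.0 (floor applies)
    print(osa_fine_cap(2_500_000_000))  # large platform   -> 250,000,000.0

For a company of X’s size, in other words, the 10% prong would dwarf the £18 million floor.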

But the investigation and potential sanctions of X, which is based in the U.S. and owned by the richest person in the world, will mark the most significant test yet of the regulator’s authority under the new law. Thus far, the U.S. Department of Justice and the Federal Trade Commission have been silent as outrage from users and international governments continues to grow.


Dems pressure Google, Apple to drop X app as international regulators turn up heat

By: djohnson
9 January 2026 at 14:06

A trio of Senate Democrats is calling on Apple and Google to drop Elon Musk’s X from their app stores, as international regulators in Europe and Britain took steps toward investigating the site’s mass undressing of users via its Grok AI tool.

On Friday, Sens. Ron Wyden, D-Ore., Ben Ray Luján, D-N.M., and Ed Markey, D-Mass., wrote to Apple’s and Google’s chief executives, asking them to “enforce your apps stores’ terms of service against X.”

“X’s generation of these harmful and likely illegal depictions of women and children has shown complete disregard for your stores’ distribution terms,” they wrote.

The senators quote the Google Play Store’s terms of service, which state that apps must “prohibit users from creating, uploading, or distributing content that facilitates the exploitation or abuse of children” and subject violating apps to immediate removal. Apple’s terms allow wide flexibility to act against apps or content that are “offensive” or “just plain creepy,” something the senators argued should clearly cover what is happening on X.

“There can be no mistake about X’s knowledge, and, at best, negligent response to these trends,” the lawmakers wrote. 

The lawmakers explicitly compared both companies’ lack of action or comment thus far to the way the stores treated apps meant to track Immigration and Customs Enforcement operations around the country, such as ICEBlock and Red Dot.

“Unlike Grok’s sickening content generation, these apps were not creating or hosting harmful or illegal content, and yet, based entirely on the Administration’s claims that they posed a risk to immigration enforcers, you removed them from your stores,” the senators noted.

The call comes as international regulators have turned up the heat on X over the scandal, while conflicting reports swirl about the extent to which X has limited Grok’s deepfake functionality after weeks of criticism.

The UK’s Office of Communications, the nation’s top communications regulator, said it had made “urgent” contact with X over the images users were generating through Grok, and that based on the company’s response, “we will undertake a swift assessment to determine whether there are potential compliance issues” under the UK Online Safety Act. On Friday, Prime Minister Keir Starmer called the images “unlawful” and “disgusting” and promised that all options, including a potential ban of X, were being considered.

Meanwhile, the European Union has ordered X to preserve all documents related to Grok through 2026, an indication that it could be subject to regulatory or law enforcement investigations, according to Reuters.

As CyberScoop and others have reported, legal experts have said that Musk may be exposing X to broad legal and regulatory risks from states, federal regulators and law enforcement.

There have been conflicting reports that X, which has not responded to inquiries from journalists under Musk’s ownership, may be taking steps to limit Grok’s deepfake functionality for some of its users.

On Friday, Musk posted on X that he was limiting the feature to paid users, a move that set off a fresh round of outrage from observers, who pointed out that monetizing illegal sexual deepfakes was not a solution to the problem. Prior to that statement, the only public response from Musk addressing the scandal was a post of “cry-laughing” emojis in reply to a Grok-generated deepfake of himself wearing a bikini.

Musk doesn’t release figures on paid subscribers, but a TechCrunch analysis of revenue reported from in-app purchases suggests the number could be anywhere from 1.3 million to 3.7 million users.

But even the claim that non-paying users are shut out from making further sexualized deepfakes through Grok may be inaccurate: users on social media reported that even after the change, they were able to access Grok’s deepfake feature as free users through X or Grok’s website.


‘Elon Musk is playing with fire:’ All the legal risks that apply to Grok’s deepfake disaster

By: djohnson
8 January 2026 at 09:08

As collective disgust has continued to build over the widespread generation and sharing of nonconsensual, sexualized deepfakes made with X’s Grok AI tool, angry onlookers have expressed shock that the activity continues unabated and that company owner Elon Musk isn’t being compelled – by either U.S. regulators or law enforcement – to put a halt to the practice.

Legal experts say at the federal level, there are several laws and regulations already on the books that could expose Musk and X to significant fines, civil lawsuits and criminal prosecution.

Those tools include new laws like the Take It Down Act, legislation sponsored last year by Sens. Amy Klobuchar, D-Minn., and Ted Cruz, R-Texas, that allows for the criminal prosecution of individuals who share sexualized AI-generated images and requires platforms to remove such images within 48 hours of being notified by a victim.

Klobuchar, posting on X, called the AI generated material “outrageous” and said the law would be enforced.

“No one should find AI-created sexual images of themselves online—especially children,” wrote Klobuchar. “X must change this. If they don’t, my bipartisan TAKE IT DOWN Act will soon require them to.”

Because AI is still an emerging technology, it remains unclear how it fits existing criminal statutes, and open questions surround enforcement decisions, leaving federal regulators, law enforcement and courts with limited guidance. It’s not immediately clear, for instance, how many of the images and victims could be subject to legal or regulatory action under the Take It Down Act.

“The definitions are not favorable to what we’re dealing with right now,” said Amy Mushahwar, a partner at national law firm Lowenstein Sandler who specializes in data privacy and security issues.

Take It Down…Later

The Take It Down Act can be enforced in two ways: through criminal prosecution of those who generate and share such images online, and through takedown notices submitted by victims to platforms, which must remove the images within two days. Neither is a perfect fit for what is happening on X.

The law’s takedown provision, which will be enforced through the Federal Trade Commission, does not take effect until May. 

While the criminal penalties are currently active, they only authorize the DOJ to investigate and charge individuals prompting Grok to generate the manipulated photos, not the company or Musk himself.

Further complicating matters, the law’s reliance on specific legal definitions can make it difficult to prosecute cases involving some of the images generated on Grok. A victim’s age, or being depicted with even a small amount of clothing, can mean the difference between an image violating the law or not.

In conversations with lawyers and Hill staffers, many said the Take It Down Act would clearly cover the most egregious violations on Grok, like nudes and sexualized depictions of minors, but would be harder to apply to other instances. That’s because the Act criminalizes the sharing of “intimate visual depictions” using deepfakes, which under U.S. law is defined as an image showing an individual’s uncovered genitals, or displaying them covered in bodily fluids.

“That has a specific meaning under the law so that a depiction of a nude person may be an intimate visual depiction, but someone in a bikini may not be,” said Samir Jain, vice president of policy at the Center for Democracy and Technology.

Victims who have been undressed and placed in bikinis, lingerie or other suggestive clothing by Grok could, alternatively, seek legal relief under another section of the law that bans digital forgeries for adults and minors.

The U.S. Sentencing Commission is currently grappling with how to set minimum and maximum fines and jail sentences under the law and determine how it would apply to different crimes and sections of U.S. criminal code.

Communications Indecency

Even with restrictive language and delayed enforcement timelines, Grok’s mass undressing of users likely runs afoul of other federal and state laws, legal experts tell CyberScoop.

Others questioned whether X’s conduct would truly be protected under Section 230 of the Communications Decency Act, which typically shields social media platforms from civil lawsuits.

While Section 230 has traditionally been a legal bulwark for social media companies, shielding them from lawsuits over user-generated content, X may bear direct culpability here because Grok is a feature of the company’s own platform.

Jain said that legal protections under Section 230 are predicated on the idea that the platforms shouldn’t be held liable for third-party created content posted by users. But in this case, X’s own embedded AI tool is generating the images.

“There’s a good argument that [Grok] at least played a role in creating or developing the image, since Grok seems to have created it at the behest of the user, so it may not be user content insulated by section 230,” he said.

However, he also posited that Musk’s status with the Republican Party and President Donald Trump could also deter federal agencies from taking a hard line. At the FTC, for example, Trump has fired the two commissioners who were nominated by the Democratic Party, leaving it a more partisan and White House-controlled entity than in previous administrations.

Laws “require enforcement by the federal government, the Justice Department in the case of criminal [law], but the FTC in the case of the takedown piece,” he said. “And so there might be questions also about the degree to which the administration would be committed to enforcing those laws against X and Musk.”

A lane for state AGs

As Riana Pfefferkorn, a non-resident fellow at Stanford University’s Center for Internet and Society, pointed out, Congress has signaled its broader stance on criminalizing AI-generated sexual deepfakes through legislation like the Take It Down Act. In addition, dozens of states have anti-CSAM laws on the books, including many that specifically target AI-generated child pornography.

Mushahwar agreed, predicting that even if Musk avoids federal scrutiny, state attorneys general will likely move aggressively to enforce existing CSAM and digital forgery laws. She said they will also look for places where “logical extensions” of those laws can cover the AI images being generated and posted on X.

Given the widespread revulsion the scandal has been met with, many AGs will likely feel serious pressure from their constituents to use whatever legal tools are at hand to go after offenders.

“I do think Elon Musk is playing with fire, not just on a legal basis, but on a child safety basis,” Mushahwar said. “Like, if your platform is growing because you’re creating interest from pedophiles, that is creating a cesspool that might end up creating a trafficking haven.”


FCC finalizes new penalties for robocall violators

By: djohnson
6 January 2026 at 17:47

The Federal Communications Commission finalized new financial penalties for telecoms that submit false, inaccurate or late reporting to a federal robocalling system.

The new regulations, which go into effect Feb. 5, will require providers to recertify every year that their information in the Robocall Mitigation Database (RMD) is accurate. They will also impose fines on offenders, including $10,000 for submitting false or inaccurate information and $1,000 for each entry not updated within 10 business days of receiving new information.
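As a back-of-the-envelope illustration of how those figures combine, the sketch below totals a hypothetical provider’s exposure; the per-violation amounts come from the rule as described above, but the provider and counts are invented, and this is not an official FCC formula.

    # Illustrative sketch: a hypothetical provider's penalty exposure under
    # the new Robocall Mitigation Database rules. Not an official FCC formula.
    FALSE_FILING_FINE = 10_000  # per false or inaccurate submission
    STALE_ENTRY_FINE = 1_000    # per entry not updated within 10 business days

    def rmd_exposure(false_filings: int, stale_entries: int) -> int:
        """Return total hypothetical exposure, in dollars."""
        return false_filings * FALSE_FILING_FINE + stale_entries * STALE_ENTRY_FINE

    # Example: one false certification plus a dozen stale entries -> $22,000.
    print(rmd_exposure(false_filings=1, stale_entries=12))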

The commission also added two-factor authentication cybersecurity protections to access the database and directed its Wireline Competition Bureau to establish a new channel for reporting on deficient filings.

Those deficiencies “range from failures to provide accurate contact information to submission of robocall mitigation plans that do not in any way describe reasonable robocall mitigation practices,” the FCC wrote in a final rule posted this week in the Federal Register.

The FCC already requires voice service providers to verify and certify the identities of their callers through the RMD. The database is designed to help regulators and law enforcement track and prevent call spoofing, a frequent tactic of illegal robocallers, and hold providers accountable for the identities of callers and phone numbers that use their networks.

But America’s telecommunications networks are vast and decentralized, composed of massive companies like Verizon and AT&T as well as smaller telecoms and voice-over-internet-protocol (VoIP) providers. Calls often hop from one provider network to another, and verification can get lost or overlooked in the chain of custody.

Historically, federal regulators neither verified nor enforced the accuracy of those filings. Their effectiveness was called into question two years ago, when a political consultant used a voice-cloning tool to impersonate then-President Joe Biden in fake voicemails to New Hampshire voters, spoofing the number of a prominent state Democratic ally. The carrier that transmitted those calls, Lingo Telecom, had nonetheless verified the caller’s identity at its highest level of confidence.

The FCC asked for public feedback on whether to treat violations as minor paperwork errors, which typically carry smaller fines, or as evidence of more serious misrepresentation or lack of candor on the part of the provider. Telecom trade associations opposed fines for false or inaccurate filings unless filers were first granted an opportunity to correct the error or the FCC found the information “willfully” inaccurate. State attorneys general and robocall surveillance platform ZipDX urged the FCC to take a stricter approach, arguing that a false filing “significantly undermines the Commission’s efforts to curb illegal robocalls.”

“The State AGs and ZipDX each express strong support for treating the filing of false or inaccurate information in the Robocall Mitigation Database akin to misrepresentation/lack of candor, arguing that such actions should elicit the statutory maximum penalty,” the commission wrote.

The FCC ultimately searched for a middle ground, concluding that a false filing in this case “warrants a significantly higher penalty than the existing $3,000 base forfeiture for failure to file required forms or information” but lower than the statutory maximum.


FBI says ‘ongoing’ deepfake impersonation of U.S. gov officials dates back to 2023

By: djohnson
19 December 2025 at 15:46

The FBI said that unknown actors have continued to deploy AI voice cloning tools in an ongoing effort to impersonate U.S. government officials, extract sensitive or classified information, and conduct scams.

The bureau initially warned in May that the campaign had been ongoing since at least April 2025. In an update Friday, it revised that timeline, saying there was evidence of such activity dating back to 2023.

“Activity dating back to 2023 reveals malicious actors have impersonated senior U.S. state government, White House, and Cabinet level officials, as well as members of Congress to target individuals, including officials’ family members and personal acquaintances,” the FBI said in a public service announcement.

These communications include the use of encrypted apps like Signal and AI-powered voice cloning tools to trick victims into believing they’re speaking with high-level government officials, who have regularly used Signal to discuss government business under the Trump administration.

The FBI’s updated timeline would mean that such impersonation efforts may have stretched back to the Biden administration, though the bureau does not specify how many individuals, groups or actors may have been involved over the years.

The update also includes new details around the specific tactics and talking points the impersonators use to ensnare victims.

The impersonators typically start off by engaging the victim through SMS texts, introducing themselves and suggesting that, due to the sensitive nature of the discussions, the conversation move to encrypted messaging apps like Signal or WhatsApp, or to other messaging apps like Telegram.

Once there, the fake government official will engage the victim on a topic they are known to be well-versed in, then propose scheduling a meeting between them and President Trump or another high-ranking government official, or float the possibility of nomination to a company’s board of directors.

That sets up the victim for requests for more sensitive personal data under the guise of vetting, such as passport photos, as well as requests to sync the victim’s phone contact list with the actor’s device, to broker introductions between associates, or to wire funds overseas.

The bureau notes in a footnote that access to the targeted individual’s contact list is used “to enable further impersonation efforts or targeting.”

“Once actors have access to the victim’s contact list, they send out another round of smishing or vishing messages, this time impersonating the last victim or another notable figure the new targeted individual would logically come in contact with,” the announcement stated.

In July, the State Department sent a cable to diplomats warning that someone was using AI audio tools and text messages to impersonate Secretary of State Marco Rubio. Under the Biden administration in 2024, a deepfake video of former State Department spokesperson Matthew Miller popped up online appearing to suggest that Russian cities were legitimate targets for Ukraine’s military.


U.S. Sentencing Commission seeks input on criminal penalties for deepfakes

By: djohnson
18 December 2025 at 12:27

The U.S. Sentencing Commission is issuing preliminary sentencing guidelines for criminal offenses under the Take It Down Act, a law passed earlier this year to curb the spread of nonconsensual deepfake pornography.

The Take It Down Act marks one of the first major pieces of legislation passed by Congress to address AI-generated deepfakes, attracting broad bipartisan support. The legislation sailed through Congress, passing 402-2 in the House and comfortably in the Senate, despite opposition from some digital rights groups, and had the vocal support of First Lady Melania Trump.

The law’s language makes it a federal crime to publicize nonconsensual intimate or pornographic imagery of others, both real and AI-generated, and requires companies to remove any images hosted or shared on their platforms within 48 hours of receiving notice. It also empowers the Federal Trade Commission to investigate and enforce compliance. 

The legislation provides broad guidance on prison sentences and financial penalties for offenses, with digital forgers subject to fines and up to two years of imprisonment for deepfaking an adult and up to three years for a minor.

The commission proposes more specific penalties for different types of offenses, while also seeking public input on the most appropriate way to define the offense in U.S. law.

For example, the law included specific language adding new criminal offenses for deepfakes to sections of U.S. law prohibiting obscene or harassing phone calls, a nod to how much nonconsensual pornography is shared through smartphones.

That section has been updated to further define the offense as anyone using “an interactive computer service” to knowingly publish an “intimate visual depiction” of a minor and (in certain cases) adults with the intent to “abuse, humiliate, harass, or degrade” or “arouse or gratify the sexual desire of any person.”

Individuals found guilty of threatening to publish nonconsensual deepfakes of an adult would be subject to a maximum of years in prison if the threat involves “an intimate visual depiction” of them and 18 months if the deepfake is used for digital forgery. Deepfaking a minor for the purpose of digital forgery carries a maximum sentence of 30 months.

While experts have warned about the damaging potential of deepfakes for years, generative AI models have gotten steadily better at producing lifelike media. As more AI deepfake tools come online, public interest groups have called for companies like OpenAI to take tools like Sora 2 offline after they were used to create scores of fake cellphone-style videos depicting food stamp recipients, some of which were later picked up by real news outlets like Fox News.

This month, the American Bar Association released a report on the use of AI in the legal sector that found courts are generally unprepared for deepfake media and the many ways it could affect the integrity of evidence presented in court.

The deepfake changes are part of a broader package of regulatory amendments the U.S. Sentencing Commission is proposing, with public comments accepted until Feb. 16, 2026.


AI is causing all kinds of problems in the legal sector 

By: djohnson
15 December 2025 at 13:55

The American Bar Association believes the use of artificial intelligence in the legal sector is eroding key procedures, documentary records and evidence relied on to establish ground-level truth in the court system.

In a report released this month, the ABA, which sets ethical standards for the legal profession and oversees the accreditation of roughly 400,000 attorneys in the United States, details how AI has permeated the legal system. The report says lawyers increasingly use it to save time, conduct research, and summarize and write key court filings, while judges use it for many of the same functions.

But as artificial intelligence – particularly generative AI tools – has been integrated throughout the legal system, it’s raising major questions for a profession that depends on accuracy and truthful representation in court. 

“Faced with deepfakes offered as evidence in court or claims that legitimate evidence is a deepfake, judges are grappling with questions surrounding the authenticity, validity, and reliability of AI-generated evidence,” the ABA stated.

One of the most pressing challenges facing the court system is figuring out how to handle the emergence of lifelike, deepfake media. Fake imagery, audio and video can convincingly imitate the kinds of evidence courts have relied on for decades to determine what actually happened in a case.

With voice cloning and deepfake tools, bad actors can also create convincing media depicting judges, lawyers, witnesses or others involved in court cases in a false light, saying or doing things they never did. The ABA report cites reporting over the past year from agencies like the FBI, the Cybersecurity and Infrastructure Security Agency and organizations like the World Economic Forum warning that deepfakes pose a significant, long-term national security threat.

“The ease with which content can now be created and shared, as well as the use of algorithms that are optimized for engagement, means misinformation can spread widely and quickly,” the ABA report stated.

The findings are part of a broader report that outlines both the risks and benefits of incorporating AI technologies into the legal profession. And it comes as courts across the world have reported problems with the technology, including AI-generated legal briefs that cite hallucinated case law and other errors and questions around the ethics of presenting deepfaked testimony from dead victims in criminal proceedings.

But the ABA report also includes numerous positive sentiments from members and lawyers around the technology, citing members who have “consistently emphasized AI’s role in automating core legal functions” such as drafting documents, doing legal research and reviewing high volumes of materials, documentation or evidence.

“Many highlighted generative AI—large language models in particular—as a game-changer for accelerating routine tasks like contract analysis and litigation preparation, as well as helping firms produce first drafts, summarize large datasets, and customize communications at scale,” the report stated.

The increasing use of AI in the legal profession comes as some members of the community have reported higher workloads that have led to increased stress, burnout and attrition. A report last week from the Association of Corporate Counsel called work-related stress and long hours “a pervasive crisis” for in-house legal professionals, with legal leadership and those operating in high-demand sectors facing the highest burdens.

Officials at the highest levels of the judiciary have sounded the alarm that the integrity of the courts is under constant threat. In his year-end report last year, Supreme Court Chief Justice John Roberts warned that bad actors, including foreign governments, are seeking to undermine trust in the legal system in the digital space, including through the kind of hacking and bot-driven disinformation campaigns that experts say have been significantly augmented by the scale, speed and automation of large language models.

The goal for many of these parties is to “compromise the public’s confidence in our processes and outcomes” or otherwise damage the public’s perception of the judiciary. Roberts mused that the judicial branch “is particularly ill-suited” to fight against these campaigns because judges mostly speak through their legal opinions and don’t generally call press conferences or issue rebuttals the way other public officials do.

Meanwhile, an AI task force at the ABA composed of “tech-savvy judges” is currently working to develop public guidance for how their profession should use generative AI and how to address “the intractable problem of deepfakes as evidence in court.” The body is also looking at how AI impacts questions around legal risk and liability.


Organizations can now buy cyber insurance that covers deepfakes

By: djohnson
9 December 2025 at 16:36

Synthetic media, including AI-generated deepfake audio and video, has been increasingly leveraged by criminals, scammers and spies to deceive individuals and businesses.

Sometimes they do so by imitating a company’s CEO and urging an employee to transfer large sums of money or hand over access to work accounts. Other times the fake media is created by a competitor or other bad actor to ruin the reputation of executives or their companies.

Now cybersecurity insurance provider Coalition is offering coverage to organizations for deepfake-related incidents. On Tuesday, the company announced its cybersecurity insurance policies will now cover certain deepfake incidents, including ones that lead to reputational harm. The coverage will also include response services such as forensic analysis, legal support for takedown and removal for deepfakes online and crisis communications assistance.

In response to questions about deepfake coverage, Michael Phillips, head of Coalition’s cyber portfolio underwriting, said Coalition has covered deepfake-enabled fraud leading to fraudulent transfers since last year. Now, coverage is being expanded to “any video, image, or audio content that is created or manipulated through the use of AI by a third party, and that falsely purports to be authentic content depicting any past or present executive or employee, or falsely frames the organization’s products or services.”

“Today’s threat actors use AI and deepfakes for more than quick rip-and-run wire transfer theft, so we expanded our coverage to include the additional expenses a business could incur,” Phillips wrote. “We have seen many examples of this type of threat in recent headlines. For example, the deepfake of Warren Buffett promoting fake investment and crypto schemes forced Berkshire Hathaway to issue public warnings not only to protect its reputation, but also to prevent the spread of misinformation, market manipulation, and investor fraud.”

In an interview, Shelley Ma, incident response lead at Coalition, told CyberScoop that deepfakes still represent a small fraction of the claims the company processes, and that 98% of their claims don’t involve any advanced use of AI.

This is largely because “the low hanging fruits still very much work” for malicious hackers, with exploited VPNs, unpatched software and phishing still largely effective for those attempting to gain access to targeted organizations. Even in impersonation scams, attackers tend to rely on lower-tech tactics like spoofing phone numbers.

Ma said the deepfake-enabled breaches the company has seen tend to come from sophisticated threat actors who can bring the technical expertise needed to deploy them in credible and believable ways.

“In the handful of cases where we have spotted deepfakes, we’ve seen attackers mostly use AI-generated voice or text to impersonate trusted contacts,” said Ma. “So typically, it would be a CEO or finance executive to authorize fraudulent payments or share credentials, and these are highly targeted and designed to blend into an existing workflow, which makes them quite dangerous even when they’re not yet that common.”

While traditional phishing relies on persuading victims through convincing text, deepfake video and audio add “a whole new dimension of [sensory] authenticity” that makes this type of attack more effective. Malicious parties can also generate dozens of tailored voice or text impersonations “in minutes,” something she said used to take days of reconnaissance and manual effort to pull off before LLM automation.

“These attacks, they shortcut skepticism, and they can bypass even very well-trained employees,” Ma said.  

These successful campaigns still require a lot of work, and for now, small and medium-sized businesses may not be attractive enough targets to justify using AI-enabled attacks. However, Ma estimated that as AI technology becomes more advanced, affordable and accessible, these organizations are likely just 12 to 24 months away from seeing AI regularly used in fraud and business email compromise scams.

Update 12/11/25: This article has been edited to remove a reference to a Digital Citizens Alliance report.


New legislation targets scammers that use AI to deceive

By: djohnson
26 November 2025 at 13:29

A new bipartisan bill introduced in the House would increase the criminal penalties for committing fraud and impersonation with the assistance of AI tools.

The AI Fraud Deterrence Act, introduced by Reps. Ted Lieu, D-Calif., and Neal Dunn, R-Fla., would raise the overall ceiling for criminal fines and prison time for fraudsters who use AI tools to create convincing fake audio, video or text to carry out their schemes.

For instance, the maximum fines for mail fraud, wire fraud, bank fraud and money laundering would all be increased to between $1 million and $2 million, with new language specifying that using AI-assisted tools in those schemes carries a maximum prison sentence of 20 to 30 years.

Meanwhile, scammers who use AI to impersonate government officials could be fined up to $1 million and spend up to three years in prison.

“Both everyday Americans and government officials have been victims of fraud and scams using AI, and that can be ruinous for people who fall prey to financial scams, and can be disastrous for our national security if government officials are impersonated by bad actors,” Lieu said in a statement.

The bill comes after a rash of high-profile incidents over the past year where unidentified parties have been able to communicate with or impersonate top U.S. officials in the government, seemingly with the assistance of AI voice and video tools.

In May, The Wall Street Journal reported that federal authorities were investigating fraudulent calls and texts sent to senators, governors, business leaders and other VIPs from someone impersonating White House Chief of Staff Susie Wiles’ voice and number. Wiles reportedly said her phone had been hacked, which President Donald Trump later confirmed publicly, telling the press “they breached the phone; they tried to impersonate her.” Some of the recipients said the voice sounded AI-generated.

Less than two months later, the State Department warned diplomats that someone was impersonating Secretary of State Marco Rubio in voicemails, texts and Signal messages. The messages were sent to at least three foreign ministers, a U.S. senator and a governor in what appeared to be a scam. Rubio was also targeted in a deepfake earlier this year that made it appear he was on CNN vowing to persuade Elon Musk to cut off Starlink access to Ukraine.

Other high-profile figures like singer Taylor Swift have seen their likenesses used in scams, pornography and political attacks, while former President Joe Biden had his voice cloned by AI in a scheme hatched by a Democratic consultant working for rival Dean Phillips ahead of the 2024 New Hampshire presidential primary.


Advocacy group calls on OpenAI to address Sora 2’s deepfake risks

By: djohnson
12 November 2025 at 13:21

Throughout 2024, OpenAI teased the public release of Sora, its video generation model, capable of creating lifelike visuals from user prompts.

But due to concerns about the tool being used to create realistic disinformation during a critical U.S. election year, the company delayed its release until after the elections.

Now, a year later, critics warn that their fears about Sora’s reality-distorting powers have come to pass, as the tool floods the internet with false, fabricated or manipulated content, often with minimal or no labeling to indicate the media is synthetic.

“The rushed release of Sora 2 exemplifies a consistent and dangerous pattern of OpenAI rushing to market with a product that is either inherently unsafe or lacking in needed guardrails,” wrote J.B. Branch, who leads AI accountability work at nonprofit Public Citizen, in a Nov. 11 letter addressed to OpenAI CEO Sam Altman.

Branch added that releasing Sora 2 shows “reckless disregard” for product safety, the rights of public figures whose names or images could be deepfaked, and consumer protections against other abuses.

Public Citizen is pressing OpenAI to temporarily take the tool offline and work with outside experts to build better guardrails.

“We urge you to pause this deployment and engage collaboratively with legal experts, civil rights organizations, and democracy advocates to establish real, hard technological and ethical redlines” around Sora, the group wrote.

Generative AI models have been able to create deepfakes for years, but the technology was often plagued by telltale visual cues, such as people with more than five fingers or videos that appear overly polished or defy the laws of physics.

Over the past year, new tools like Sora have overcome many of those technical obstacles and can now deliver lifelike videos. Often the only indicator that a video may be fake is a small Sora watermark from OpenAI in the lower right corner, and cybersecurity experts say it is trivial in many cases for bad actors to remove or crop out that labeling before sharing videos on social media as if they were real.

Compounding matters, while OpenAI and other AI image and video generators have historically made efforts to prevent their tools from impersonating politicians, celebrities or copyrighted characters, Sora 2 initially launched with none of those guardrails in place. The first weeks of the release were filled with users sharing videos of Altman grilling Pikachu, a popular character from the Pokémon anime franchise, and other fictional figures protected by copyright law.

In response to CyberScoop’s request for comment, an OpenAI spokesperson said the company adds visible watermarks to Sora videos and tracks their origins using metadata standards like the Coalition for Content Provenance and Authenticity. OpenAI also uses reverse-image, audio, and video tools to identify Sora-generated videos online.

“We have multiple guardrails intended to ensure that a living person’s likeness can’t be generated in Sora unless they’ve intentionally uploaded a cameo and given consent for it to be used,” the spokesperson said. “The feature is fully opt-in, backed by video-and-audio verification, and users control who can use their cameo. They can revoke access or remove any video that includes it at any time.”

The spokesperson referenced Sora’s system card, which describes technical specs and model limitations. It says, “where real people are featured in videos, additional model safeguards will apply” to prevent “non-consensual nudity or racy output, graphic violence, or output that could be used for certain fraudulent purposes.”

The document also acknowledges limits, noting, “some deceptive content is highly contextual and not easily detectable by classifiers” and that “there is not a single solution to provenance.” OpenAI said it plans to keep improving Sora’s safeguards.
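To make the provenance claims above concrete, here is a minimal sketch of what checking a downloaded clip’s C2PA metadata can look like. It is illustrative only, not OpenAI’s own pipeline: it assumes the Content Authenticity Initiative’s open-source c2patool command-line utility is installed and that, as its documentation describes, running it against a media file prints the embedded manifest store as JSON and exits with an error when no manifest is present.

    import json
    import subprocess
    import sys

    def read_c2pa_manifest(path: str):
        """Ask c2patool (assumed installed) for the C2PA manifest store
        embedded in a media file. Returns parsed JSON, or None if the
        file carries no provenance metadata."""
        result = subprocess.run(["c2patool", path], capture_output=True, text=True)
        if result.returncode != 0:
            return None  # assumption: c2patool errors out when no manifest exists
        try:
            return json.loads(result.stdout)
        except json.JSONDecodeError:
            return None

    if __name__ == "__main__":
        manifest = read_c2pa_manifest(sys.argv[1])
        if manifest is None:
            print("No C2PA provenance metadata found.")
        else:
            # Fields such as the claim generator typically identify the
            # tool that produced the asset.
            print(json.dumps(manifest, indent=2))

Note the asymmetry this illustrates: a surviving manifest is evidence of origin, but embedded metadata, like visible watermarks, can be stripped in transit, so an empty result says nothing about whether a clip is synthetic.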

Bala Kumar, chief product and technology officer at Jumio, said Sora 2 “lowers the barrier to deepfakes for everyone in the general public.”

“But what makes it accessible to everyday people makes it vulnerable to bad actors for misuse,” Kumar added. “While there’s a small watermark on these videos, fraudsters can easily remove it.”

In October, following objections from actor Bryan Cranston and the Screen Actors Guild-American Federation of Television Artists (SAG-AFTRA), OpenAI changed its policy to prevent Sora from generating videos of living celebrities or copyrighted figures.

However, that still allows people to create realistic and disruptive deepfakes without breaking OpenAI’s rules. For instance, the prohibition on public figures only extends to living people, meaning users can still generate videos of dead public figures.

This has led to videos that seem like harmless fun, such as rappers Tupac Shakur and The Notorious B.I.G. participating in a pro-wrestling-styled feud or singer Michael Jackson dancing at fast food restaurants and stealing chicken from customers.

But as the Washington Post has reported, Sora 2 has also been used to create racist videos of deceased public figures, like Martin Luther King Jr. stuttering and drooling, or John F. Kennedy joking about the assassination of right-wing personality Charlie Kirk. OpenAI called the King videos “disrespectful” and pulled them offline after his relatives complained.

Beyond historical figures, Sora and other tools can easily be used to generate fake videos that tap into current political issues of the moment for virality. One recent example was a series of videos depicting Americans angrily reacting to food prices at grocery stores, in their cars and other locations.

The videos came while Congress and the White House were in a standoff over government funding, including the money needed for the Supplemental Nutrition Assistance Program (SNAP).  The videos showed AI-generated people saying things like “I ain’t paying for none of this s–t” and “It is the taxpayer’s responsibility to take care of my kids!”

It’s not clear which model was used to generate the videos, though some briefly flash a recognizable Sora watermark. Still, media outlets like Fox News initially published stories treating the clips as genuine, with headlines like “SNAP beneficiaries threaten to ransack stores over government shutdown.” Fox News later updated its story and headline to note the clips were AI-generated, and the story appears to have since been removed from the outlet’s website.

Outside of politics, these tools have plenty of potential to upend the lives of ordinary Americans who don’t hold power or appear on television. The most popular use of deepfakes by far in the generative AI era has been for nonconsensual pornography targeting women.

Although Public Citizen’s letter doesn’t accuse people of using Sora 2 to generate pornography, it criticizes OpenAI for allowing “non-nude fetish content” to proliferate on Sora’s social media platform.

“There is a dangerous lack of moderation pertaining to underage individuals depicted in sexual contexts, making Sora 2 unsuitable for public use,” Public Citizen wrote.

