
A college student is suing a dating app that allegedly used her TikTok videos to target men in her dormitory

By: djohnson
4 May 2026 at 12:02

A 19-year-old woman is suing the makers of a dating app, alleging they took a video she posted online, repurposed it without her consent into an advertisement for the app, then used geofencing to target that ad to people in her area. 

According to the lawsuit filed Apr. 28 in Tennessee and an interview with her lawyer, the company allegedly used geotargeting to serve the ads on platforms like Snapchat to users near her, including men in her own dormitory. 

The allegations, if proven, offer another example of how modern technology has made it easier than ever for bad actors to imitate, objectify, profit off of and harass individuals, often women. Recent laws like the Take It Down Act have focused particularly on the use of AI to create sexualized imagery of victims. In this case, the lawsuit alleges that Meete, the dating app, used not AI but simple video editing, a voiceover and geofencing to create the same kind of deception.

 On the day of her high school graduation, Kaelyn Lunglhofer posted a brief video to TikTok, wearing an orange outfit and saying a few words to her followers over background music. She went on to attend the University of Tennessee in the fall, where she began building a following as a TikTok influencer.

The complaint alleges that the makers of the dating app Meete took that video without Lunglhofer’s consent, overlaid it with graphics advertising the app, and added a voiceover to make it appear she was saying: “Are you looking for a friend with benefits? This app shows you women around you who are looking for some fun. You can video chat with them.”

Abe Pafford, Lunglhofer’s attorney, told CyberScoop that his client had no idea Meete was using her likeness until a male student in her dormitory told her he had repeatedly seen her in ads for the app on his Snapchat shortly after the two had met. 

Pafford called it “implausible” that this was a coincidence, pointing to Meete’s premise of connecting users with nearby women and the precision of geofencing technology. Before filing the case, Pafford’s law firm hired an investigative firm to gather additional evidence.

“I think the idea is they want[ed] viewers of these advertisements – and candidly this is pretty clearly targeted at male viewers – to have their eye caught by someone they may know or recognize or think they may have seen around, and that’s part of what makes it so disturbing,” he said.

Pafford said he believes Lunglhofer is far from the only person whose image Meete has misappropriated, and that most victims likely have no idea it’s happening. Lunglhofer herself only had evidence because the student who told her had saved recordings and screenshots of the ads featuring her video.

“The bottom line is we think there are likely others that have been victimized in a similar way, but finding out who they are and landing on tangible proof of that can be challenging,” he said.

After this story was published, Snap told CyberScoop it is investigating.

“Snap’s advertising policies require that advertisers have all necessary rights to the content in their ads, including the rights to any individuals featured,” Snap spokesperson Ahrim Nam said in an email. “Using someone’s likeness without their consent is a violation of our policies. Upon learning of these allegations, we are actively reviewing the matter and will take appropriate action.”

The lawsuit cites alleged violations of multiple federal and state laws, including the Lanham Act, the primary U.S. law governing trademark rights. The suit also alleges violations of Tennessee’s ELVIS Act, which bars the unauthorized use of an artist’s or musician’s image or likeness, as well as Tennessee common law claims of defamation and right of publicity.

Lunglhofer is seeking $750,000 in punitive damages, as well as any revenue tied to the ads featuring her likeness. Pafford said the advertisements damaged her online brand and reputation while putting her at risk of harassment and falsely implying she was endorsing a local dating service and was open to casual hookups.

“It’s really kind of grotesque and it’s also kind of dangerous,” he said. “Someone may not be aware that this is happening and they’re targeted in this way, but you can put people at risk in ways that are really troubling if you stop to think about it.”

The suit names Quantum Communications Development Unlimited, based in the Virgin Islands, as well as Chinese companies Starpool Data Limited and Guangzhou Yuedong Interconnection Technology, as defendants. A judge has ordered representatives from all three to appear for depositions in the United States.

Quantum Communications Development Unlimited has a sparse internet footprint: its website consists of a single page with a message written in broken English and an email address that no longer appears to work. Efforts by CyberScoop to reach the company and the other defendants for comment were not successful. The company is listed as Meete’s publisher on Apple’s App Store, where it describes the app as “a space where you can be yourself and meet people” and promises “safety and respect first,” adding that “Meete provides a secure environment where your privacy and safety are our top concerns.”

The description also claims the app adheres to Apple’s safety standards, citing a “Zero-Tolerance Policy regarding objectionable content and abusive behavior.” Listed safeguards include “24/7” manual reviews by moderation teams, instant reporting and blocking of other users, and AI filtering “to detect and prevent harassment before it happens.”

On Meete’s Google Play Store page, user reviews accuse the app of failing to match them to nearby users and being largely populated by bots posing as women to sell in-app currency.

Pafford acknowledged that the defendants being based overseas complicates efforts to hold them accountable under U.S. law, but argued that Meete is clearly designed to operate in the United States. The companies behind the app have filed for U.S. patents and trademarks, distribute the app through the Apple App Store and Google Play, and advertise on major U.S. social media platforms like Snapchat.

Apple and Google did not respond to a request for comment.

You can read the full lawsuit below.


5/05/26: This story was updated to include comment from Snap received after publication.


The FTC’s AI portfolio is about to get bigger

By: djohnson
20 April 2026 at 17:00

The Federal Trade Commission is poised to deepen its involvement in curbing the use of AI for malicious purposes, including the spread of nonconsensual sexualized deepfakes and voice cloning scams.

Last year, Congress passed the Take It Down Act, a law that allows for criminal prosecution of individuals who share or distribute nonconsensual intimate images and digital forgeries, including those that are AI-generated.

At a Senate oversight hearing last week, FTC Chair Andrew Ferguson called the new law one of the “greatest legislative achievements” of the current Congress and President Donald Trump’s administration, and said the FTC was preparing for “robust enforcement.”

Earlier this month, the Department of Justice scored its first successful conviction under the new law, when 37-year-old Columbus, Ohio resident James Strahler pleaded guilty to using AI-generated deepfake nudes as part of a harassment campaign targeting at least six women.

Another section of the law, set to take effect in May, will permit individuals to file “take down” notices with websites that publish or host sexual deepfakes. Companies will have 48 hours to remove the content or be subject to FTC investigation and enforcement.

Commissioner Mark Meador said at a March 30 conference in Washington, D.C., that while he hopes the commission will “never have to enforce it,” the FTC is treating Take It Down enforcement as a top priority and is “actively spinning everything up that we need” to enforce the takedown provision.

That could quickly set up one of the first major confrontations with the tech sector, especially companies like xAI, whose Grok tool continues to be used to create and host nonconsensual deepfake images of real people, even after the scandal it faced earlier this year.

Following his speech, CyberScoop asked Meador how the takedown provisions might apply to Grok’s mass nudification of users. He said the law specifies that the commission can’t take action against a company until it receives formal complaints, starting in May.

“This is coming into place, and then if they don’t [remove the content] we would get the complaints and then we would go after them at that point,” Meador said. “So, we kind of have to wait and see how…companies respond to complaints and requests being made, and my hope would be that every company that gets a request to take something down would immediately take it down.”

xAI’s press office did not respond to CyberScoop’s request for comment on its preparations to comply with the Take It Down Act.

Strahler, who has yet to be sentenced, also admitted to using photos of children in his neighborhood to create deepfake pornography. An FTC strategic plan published earlier this month flagged protecting children online as a “key concern” for the commission that merits more consumer tools and resources.

The commission is “dedicated to exploring other ways the FTC can protect children and support families, including through its new authority under the Take It Down Act,” the plan states.

Casey Waughn, a privacy lawyer and senior associate at Armstrong Teasdale, told CyberScoop that the current commission’s focus on child online safety leaves ample room for the law to be brought to bear in creative ways.

“We’ve seen enforcing technology and privacy violations related to youth children is a priority, so I think it’s relatively easy to parlay that into some Take it Down Act enforcement,” she said.

Waughn said the one-year delay in the provision’s enforcement was intended to give platforms time to prepare, but added that the FTC could do more to publicly signal to companies what lawful compliance looks like, similar to the resources it provides around major privacy laws.

“I think what would be helpful for all organizations…would be guidance explaining what constitutes a good faith effort, for example, to attempt to address a take down request,” said Waughn.

Living in a scammer’s paradise

The FTC is also grappling with the impact of AI on criminal scams targeting Americans online.

Ferguson told lawmakers that AI is “increasing both the sophistication of the actual mechanisms by which the scams are accomplished, but it’s also making it easier for scammers to choose their targets.”

But the FTC’s powers are limited, as the Federal Communications Commission regulates the telephone and internet providers that transmit most scams. Ferguson also noted that many call center scams are located overseas “where they don’t bat an eye at the risk of civil enforcement from the FTC.” He said the commission was open to additional legislative authorities to tackle the problem.

At the March conference, Meador said AI-fueled deception was something the commission thinks about “daily” and is lowering the barrier to entry for many criminal schemes.

“The biggest place that we’ve seen [in] the way that some of these AI tools are being used to triple charge scams, to be honest,” he said.

Last year, the FBI reported that voice cloning scams impersonating distressed family members had bilked Americans out of nearly $900 million, and the technology has been used to impersonate high-level Trump administration officials in conversations with businesses and political leaders.

Senator Maggie Hassan wrote to four AI voice cloning companies – ElevenLabs, LOVO, Speechify and VEED – asking what policies and programs they had in place to prevent or deter fraud enabled by their tools.

But Meador said that when it comes to deceptive claims, it’s particularly difficult to define credulity around the use of AI. Many deepfakes, he said, are seen and consumed by many people online with the same sort of “willing suspension of disbelief” that they bring to computer-generated effects in movies.

As such, the FTC will likely have to adjudicate on a case-by-case basis rather than through “broad brush strokes.”

“I think we’ll see a lot of that in the AI context, where if you know something wasn’t meant to be real or authentic, that’s not a concern,” he said. “The question is then, what are those situations where there is an expectation that you’re being shown something authentic and quote, unquote ‘real’ as opposed to being AI generated, and was there misrepresentation or material omission to disclose that?”


Executive orders likely ahead in next steps for national cyber strategy

15 April 2026 at 14:51

National Cyber Director Sean Cairncross said Wednesday that he expects more executive orders from the White House as part of implementing the national cybersecurity strategy.

Staffers on Capitol Hill and others in the cyber world have been awaiting the implementation guidance the Trump administration said would accompany the strategy published last month.

Asked at a Semafor event about whether that would include executive orders, Cairncross answered, “I think that that’s the case.”

The administration released an executive order on fraud the same day it released its cyber strategy on March 6. Some of that order touched on cybercrime.

“This is rolling forward actively, and you should expect that there will be more execution and action in line with our strategic goals,” he said.

Cairncross cited other administration activity that fits into the strategy, such as the first conviction last week under the Take It Down Act, a law First Lady Melania Trump advocated for that seeks to combat nonconsensual AI-generated sexually explicit images, violent threats and cyberstalking.

He declined to preview any future implementation plans, but said he expected them to come “relatively soon.”

A centerpiece of the administration strategy is confronting adversaries to make sure they suffer consequences for their hacking of United States targets.

Cairncross wouldn’t say explicitly if Trump, in his visit to Beijing next month, would address Chinese hacking.

“When we start to see things like prepositioning on critical infrastructure, that is something that needs to be addressed,” he said. Pressed on whether that meant cyber would be on the agenda during the visit, Cairncross said, “I would expect that the safety and security of the American people will be first and foremost, as it always is for the president.”

Cairncross touted American ingenuity for producing an artificial intelligence model like Anthropic’s Claude Mythos, rather than such a model being developed by U.S. cyber rivals like China or Russia. He acknowledged reports about the administration holding meetings about the cyber risks and benefits of something like Mythos — “the model right now that everyone’s talking about” — adding that the administration is looking to balance the dangers and positive capabilities of AI in cyberspace.

“I would say from the White House perspective, we are working very closely with industry,” Cairncross said. “We’ve been in close collaboration with the model companies across the interagency to make sure that we are evaluating and doing this.”


Ohio man becomes first in country to be convicted under federal revenge porn law

By: Dissent
11 April 2026 at 08:17

Henry Aleksandrov reports: An Ohio man who became the first person in the country to be convicted under the federal revenge porn law would be able to eventually reintegrate into society after Ohio lawmakers introduced several bills, some of which were already passed by the legislators. Among the ways the bills would help the man...

‘Elon Musk is playing with fire:’ All the legal risks that apply to Grok’s deepfake disaster

By: djohnson
8 January 2026 at 09:08

As collective disgust has continued to build over the widespread generation and sharing of nonconsensual, sexualized deepfakes generated by X’s GrokAI tool, angry onlookers have expressed shock that the activity continues unabated and company owner Elon Musk isn’t being compelled – by either U.S. regulators or law enforcement – to put a halt to the practice.

Legal experts say at the federal level, there are several laws and regulations already on the books that could expose Musk and X to significant fines, civil lawsuits and criminal prosecution.

Those tools include new laws like the Take It Down Act, legislation sponsored last year by Sens. Amy Klobuchar, D-Minn., and Ted Cruz, R-Texas, that allows for criminal prosecution of individuals who share sexualized AI-generated images and requires platforms to remove such images within 48 hours of being notified by a victim.

Klobuchar, posting on X, called the AI-generated material “outrageous” and said the law would be enforced.

“No one should find AI-created sexual images of themselves online—especially children,” wrote Klobuchar. “X must change this. If they don’t, my bipartisan TAKE IT DOWN Act will soon require them to.”

Because AI is still an emerging technology, it remains unclear how existing criminal statutes apply to it and how enforcement decisions will be made, leaving federal regulators, law enforcement and courts with limited guidance. It’s not immediately clear, for instance, how many of the images and victims could be subject to legal or regulatory action under the Take It Down Act.

“The definitions are not favorable to what we’re dealing with right now,” said Amy Mushahwar, a partner at national law firm Lowenstein Sandler who specializes in data privacy and security issues.

Take It Down… Later

The Take It Down Act can be enforced in two ways: through criminal prosecution of those who generate and share such images online, and through takedown notices submitted by victims to platforms, which must remove the image within two days. Neither is a perfect fit for what is happening on X.

The law’s takedown provision, which will be enforced through the Federal Trade Commission, does not take effect until May. 

 While the criminal penalties are currently active, they would only authorize the DOJ to investigate and charge individuals prompting Grok to generate the manipulated photos, not the company or Musk himself.

Further complicating matters, the law’s reliance on specific legal definitions can make it difficult  to prosecute some of the images generated on Grok. A victim’s age, or being depicted with even a small amount of clothing, can mean the difference between an image violating the law or not.

In conversations with lawyers and Hill staffers, many said the Take It Down Act would clearly cover the most egregious violations on Grok, like nudes and sexualized depictions of minors, but would be harder to apply to other instances. That’s because the Act criminalizes the sharing of “intimate visual depictions” using deepfakes, which under U.S. law is defined as an image showing an individual’s uncovered genitals, or displaying them covered in bodily fluids.

“That has a specific meaning under the law so that a depiction of a nude person may be an intimate visual depiction, but someone in a bikini may not be,” said Samir Jain, vice president of policy at the Center for Democracy and Technology.

Victims who have been undressed and placed in bikinis, lingerie or other suggestive clothing by Grok could, alternatively, seek legal relief under another section of the law that bans digital forgeries for adults and minors.

The U.S. Sentencing Commission is currently grappling with how to set minimum and maximum fines and jail sentences under the law and determine how it would apply to different crimes and sections of U.S. criminal code.

Communications Indecency

Even with the law’s restrictive language and delayed enforcement timeline, Grok’s mass undressing of users likely runs afoul of other federal and state laws, legal experts tell CyberScoop.

Others questioned whether X’s conduct would truly be protected under Section 230 of the Communications Decency Act, which typically shields social media platforms from civil lawsuits.

While Section 230 has traditionally been a legal bulwark for social media companies, shielding them from lawsuits over user content, X may bear direct culpability under the law because Grok is a feature of the company’s own platform.

Jain said that legal protections under Section 230 are predicated on the idea that the platforms shouldn’t be held liable for third-party created content posted by users. But in this case, X’s own embedded AI tool is generating the images.

“There’s a good argument that [Grok] at least played a role in creating or developing the image, since Grok seems to have created it at the behest of the user, so it may not be user content insulated by section 230,” he said.

However, he also posited that Musk’s status with the Republican Party and President Donald Trump could also deter federal agencies from taking a hard line. At the FTC, for example, Trump has fired the two commissioners who were nominated by the Democratic Party, leaving it a more partisan and White House-controlled entity than in previous administrations.

Laws “require enforcement by the federal government, the Justice Department in the case of criminal [law], but the FTC in the case of the takedown piece,” he said. “And so there might be questions also about the degree to which the administration would be committed to enforcing those laws against X and Musk.”

A lane for state AGs

As Riana Pfefferkorn, a non-resident fellow at Stanford University’s Center for Internet Security, pointed out, Congress has signaled its broader stance on criminalizing AI-generated sexual deepfakes through legislation like the Take It Down Act. In addition, dozens of states have anti-CSAM laws on the books, including many that specifically target AI-generated child pornography.

Mushahwar agreed, predicting that even if Musk avoids federal scrutiny, state attorneys general will likely move aggressively to enforce existing CSAM and digital forgery laws. She said they will also look for places where “logical extensions” of those laws can cover the AI images being generated and posted on X.

Given the widespread revulsion the scandal has been met with, many AGs will likely feel serious pressure from their constituents to use whatever legal tools they have at hand to go after offenders.

“I do think Elon Musk is playing with fire, not just on a legal basis, but on a child safety basis,” Mushahwar said. “Like, if your platform is growing because you’re creating interest from pedophiles, that is creating a cesspool that might end up creating a trafficking haven.”


U.S. Sentencing Commission seeks input on criminal penalties for deepfakes

By: djohnson
18 December 2025 at 12:27

The U.S. Sentencing Commission is issuing preliminary sentencing guidelines for criminal offenses under the Take It Down Act, a law passed earlier this year to curb the spread of nonconsensual deepfake pornography.

The Take It Down Act marks one of the first major pieces of legislation passed by Congress to address AI-generated deepfakes, attracting broad bipartisan support. The legislation sailed through Congress, passing 402-2 in the House and comfortably in the Senate, despite opposition from some digital rights groups, and had the vocal support of First Lady Melania Trump.

The law’s language makes it a federal crime to publish nonconsensual intimate or pornographic imagery of others, both real and AI-generated, and requires companies to remove any such images hosted or shared on their platforms within 48 hours of receiving notice. It also empowers the Federal Trade Commission to investigate and enforce compliance.

The legislation provides broad guidance on prison sentences and financial penalties for offenses, with digital forgers subject to fines and up to two years of imprisonment for deepfaking an adult and up to three years for a minor.

The commission proposes more specific penalties for different types of offenses, while also seeking public input on the most appropriate way to define the offense in U.S. law.

For example, the law included specific language adding new criminal offenses for deepfakes to sections of U.S. law prohibiting obscene or harassing phone calls, a nod to how much nonconsensual pornography is shared through smartphones.

That section has been updated to further define the offense as anyone using “an interactive computer service”  to knowingly publish an “intimate visual depiction” of a minor and (in certain cases) adults with the intent to “abuse, humiliate, harass, or degrade” or “arouse or gratify the sexual desire of any person.”

Individuals found guilty of threatening to publish nonconsensual deepfakes of an adult would be subject to a maximum of years in prison if the threat involves “an intimate visual depiction” of them and 18 months if the deepfake is used for digital forgery. Deepfaking a minor for the purpose of digital forgery carries a maximum sentence of 30 months.

While experts have warned about the damaging potential of deepfakes for years, generative AI models have grown increasingly adept at producing lifelike media. As more AI deepfake tools come online, public interest groups have called for companies like OpenAI to take tools like Sora 2 offline after they were used to create scores of false cell-phone-style videos depicting food stamp recipients that were later picked up by real news outlets like Fox News.

This month, the American Bar Association released a report on the use of AI in the legal sector that found courts were generally unprepared for deepfake media and the many ways it could affect the integrity of evidence presented in court.

The deepfake provisions are part of a broader package of regulatory changes the U.S. Sentencing Commission is proposing, with public comments accepted until Feb. 16, 2026.

