A college student is suing a dating app that allegedly used her TikTok videos to target men in her dormitory

A 19-year-old woman is suing the makers of a dating app, alleging they took a video she posted online, repurposed it without her consent into an advertisement for the app, then used geofencing to target that ad to people in her area. 

According to the lawsuit filed Apr. 28 in Tennessee and an interview with her lawyer, the company allegedly used geotargeting to serve the ads on platforms like Snapchat to users near her, including men in her own dormitory. 

The allegations, if proven, offer another example of how modern technology has made it easier than ever for bad actors to imitate, objectify, profit off, and harass individuals, often women. Recent laws like the Take It Down Act have focused particularly on the use of AI to create sexualized imagery of victims. In this case, the lawsuit alleges that Meete used not AI, but simple video editing, a voiceover and geofencing to create the same kind of deception. 

On the day of her high school graduation, Kaelyn Lunglhofer posted a brief video to TikTok, wearing an orange outfit and saying a few words to her followers over background music. She went on to attend the University of Tennessee in the fall, where she began building a following as a TikTok influencer.

The complaint alleges that the makers behind the dating app Meete took that video without Lunglhofer’s consent, overlaid it with graphics advertising the app, and added a voiceover to make it appear she was saying: “Are you looking for a friend with benefits? This app shows you women around you who are looking for some fun. You can video chat with them.”

Abe Pafford, Lunglhofer’s attorney, told CyberScoop that his client had no idea Meete was using her likeness until a male student in her dormitory told her he had repeatedly seen her in ads for the app on his Snapchat shortly after the two had met. 

Pafford called it “implausible” that this was a coincidence, pointing to Meete’s premise of connecting users with nearby women and the precision of geofencing technology. Before filing the case, Pafford’s law firm hired an investigative firm to gather additional evidence.

“I think the idea is they want[ed] viewers of these advertisements – and candidly this is pretty clearly targeted at male viewers – to have their eye caught by someone they may know or recognize or think they may have seen around, and that’s part of what makes it so disturbing,” he said.
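Geofenced ad delivery of the kind the suit describes is conceptually simple: an advertiser defines a circular fence around a point of interest, and the platform matches device-reported coordinates against it. The sketch below is purely illustrative; the coordinates, radius and function names are hypothetical, not details from the filing or from any ad platform’s actual API.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6_371_000  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def in_geofence(user_loc, fence_center, radius_m):
    """True if a device-reported location falls inside a circular ad geofence."""
    return haversine_m(*user_loc, *fence_center) <= radius_m

# A fence centered near a dormitory with a 500 m radius would match any
# device reporting a location inside that circle (coordinates hypothetical).
dorm = (35.9544, -83.9295)
nearby_user = (35.9550, -83.9300)   # a few hundred meters away
print(in_geofence(nearby_user, dorm, 500))
```

A real ad platform exposes this as a targeting option rather than raw coordinates, but the underlying point-in-radius test is the same idea, which is what makes serving an ad to one building plausible.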

Pafford said he believes Lunglhofer is far from the only person whose image Meete has misappropriated, and that most victims likely have no idea it’s happening. Lunglhofer herself only had evidence because the student who told her had saved recordings and screenshots of the ads featuring her video.

“The bottom line is we think there are likely others that have been victimized in a similar way, but finding out who they are and landing on tangible proof of that can be challenging,” he said.

After this story was published, Snap told CyberScoop it is investigating.

“Snap’s advertising policies require that advertisers have all necessary rights to the content in their ads, including the rights to any individuals featured,” Snap spokesperson Ahrim Nam said in an email. “Using someone’s likeness without their consent is a violation of our policies. Upon learning of these allegations, we are actively reviewing the matter and will take appropriate action.”

The lawsuit cites alleged violations of multiple federal and state laws, including the Lanham Act, the primary U.S. law governing trademark rights. The suit also alleges violations of Tennessee’s ELVIS Act, which bars the unauthorized use of an artist’s or musician’s image or likeness, as well as Tennessee common-law claims of defamation and right of publicity.

Lunglhofer is seeking $750,000 in punitive damages, as well as any revenue tied to the ads featuring her likeness. Pafford said that the advertisements damaged her online brand and reputation while also putting her at risk of harassment or falsely implying she was endorsing a local dating service and was open to casual hookups.

“It’s really kind of grotesque and it’s also kind of dangerous,” he said. “Someone may not be aware that this is happening and they’re targeted in this way, but you can put people at risk in ways that are really troubling if you stop to think about it.”

The suit names Quantum Communications Development Unlimited, based in the Virgin Islands, as well as Chinese companies Starpool Data Limited and Guangzhou Yuedong Interconnection Technology, as defendants. A judge has ordered representatives from all three to appear for depositions in the United States.

Quantum Communications Development Unlimited has a sparse internet footprint: its website consists of a single page with a message written in broken English and an email address that no longer appears to work. Efforts by CyberScoop to reach the company and other defendants for comment were not successful. The company is listed as Meete’s publisher on Apple’s App Store, where it describes the app as “a space where you can be yourself and meet people” and promises “safety and respect first” — adding that “Meete provides a secure environment where your privacy and safety are our top concerns.”

The description also claims the app adheres to Apple’s safety standards, citing a “Zero-Tolerance Policy regarding objectionable content and abusive behavior.” Listed safeguards include “24/7” manual reviews by moderation teams, instant reporting and blocking of other users, and AI filtering “to detect and prevent harassment before it happens.”

On Meete’s Google Play Store page, user reviews accuse the app of failing to match them to nearby users and being largely populated by bots posing as women to sell in-app currency.

Pafford acknowledged that the defendants being based overseas complicates efforts to hold them accountable under U.S. law, but argued that Meete is clearly designed to operate in the United States. The companies behind the app have filed U.S. patent and trademark applications for their business, and they distribute the app through the Apple App Store and Google Play Store while advertising on major U.S. social media platforms like Snapchat.

Apple and Google did not respond to a request for comment.



5/05/26: This story was updated to include comment from Snap received after publication.

The post A college student is suing a dating app that allegedly used her TikTok videos to target men in her dormitory appeared first on CyberScoop.

Federal judge blocks Perplexity’s AI browser from making Amazon purchases

A federal judge has blocked Perplexity, makers of the Comet AI browser, from accessing user Amazon accounts and making purchases on their behalf.

In a March 9 order, Judge Maxine Chesney of the U.S. District Court for the Northern District of California said the temporary injunction reflects the likelihood that Amazon “will succeed on the merits” of its claim that Perplexity’s AI agents violate the federal Computer Fraud and Abuse Act and California’s Comprehensive Computer Data Access and Fraud Act.

The court held that Amazon “has provided strong evidence that Perplexity, through its Comet browser, accesses with the Amazon user’s permission but without authorization by Amazon, the user’s password-protected account.”

Per the ruling, Perplexity must prohibit Comet from accessing, attempting to access, assisting, instructing or providing the means for others to access Amazon user accounts. Perplexity must also delete all Amazon account and customer data it collected along the way.

Perplexity told the court that the purchases were legitimate and legal because its users had authorized its AI agent to make them on their behalf. But Amazon has explicitly denied the agents such permission, saying they make mistakes, interfere with Amazon’s own algorithms, and place users at an elevated cybersecurity risk.

Additionally, Chesney wrote that Amazon has incurred “significantly more” than the $5,000 in losses needed to qualify as computer fraud, including the cost of time Amazon employees spent developing new web tools to block Comet’s access to private customer accounts and to detect future unauthorized access by the browser.

Amazon says it has asked Perplexity officials on five separate occasions to stop covertly accessing Amazon’s store with its agents. In a cease-and-desist letter sent to Perplexity on Oct. 31, 2025, attorney Moez Kaba of law firm Hueston Hennigan alleged that the automated purchases degrade the online shopping experience for Amazon customers.

Amazon requires AI agents to digitally identify themselves when using the e-commerce platform. But the company alleged Perplexity executives “refused to operate transparently and have instead taken affirmative steps to conceal its agentic activities in the Amazon Store,” including configuring their software to covertly pose as human traffic.

“Such transparency is critical because it protects a service provider’s right to monitor AI agents and restrict conduct that degrades the customer shopping experience, erodes customer trust, and creates security risks for our customers’ private data,” wrote Kaba.
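The transparency dispute comes down to how an automated client announces itself. On the web this is conventionally done through the User-Agent request header, which means an agent that instead sends a stock browser string is indistinguishable from human traffic. The sketch below illustrates that distinction; the token list and function are hypothetical, not Amazon’s actual detection rules.

```python
# Illustrative only: one conventional way a site separates declared
# automated clients from browsers is the User-Agent request header.
# This token list is a hypothetical policy, not Amazon's rules.
DECLARED_AGENT_TOKENS = ("agent", "bot", "assistant")

def classify_traffic(user_agent: str) -> str:
    """Label a request as a self-declared agent or presumed human."""
    ua = user_agent.lower()
    if any(token in ua for token in DECLARED_AGENT_TOKENS):
        return "declared-agent"
    return "presumed-human"

# A self-identifying agent is trivially detectable...
print(classify_traffic("ExampleShoppingAgent/1.0 (autonomous-agent)"))
# ...while one configured with a stock browser string is not.
print(classify_traffic("Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/120.0"))
```

This asymmetry is why the complaint focuses on configuration: once an agent chooses a browser-identical header, the platform must fall back on behavioral detection, the “new web tools” the court credited toward Amazon’s claimed losses.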

Such agents could also expose Amazon to further risk through cybersecurity vulnerabilities that criminals exploit to hijack AI browsers like Comet.

The lack of response from Perplexity executives to earlier entreaties from Amazon may have played a role in the court’s injunction, with Chesney noting that Amazon was likely to suffer irreparable harm without court intervention because “Perplexity has made clear that, in the absence of the relief requested, it will continue to engage in the above-referenced challenged conduct.”

The case could have broader implications for the way commercial AI agent tools are designed and how far they can legally act on a person’s behalf. Notably, while Amazon opposes Comet’s AI-directed purchases, Perplexity claims that its users have given it permission to make purchases on their behalf.

Perplexity argued that a court order halting its AI’s activities would go against the public interest by depriving consumers of choice and innovation. Chesney concluded the opposite, endorsing Amazon’s argument that the public has a greater interest in protecting computers from unauthorized access.

Perplexity did not respond to a request for comment on the ruling at press time.



Fulton County lawsuit claims feds used ‘gross mischaracterizations’ to justify raid

A former federal official who tested and certified voting machines used in Fulton County, Georgia, for the 2020 presidential election told a court that the federal government misrepresented key facts and omitted exculpatory public evidence while seeking a warrant for last month’s law enforcement raid.

The raid, carried out by the FBI and overseen by Director of National Intelligence Tulsi Gabbard, saw agents seize ballots and other documentation from the Fulton County election offices. A public affidavit cited five core allegations related to the county’s recordkeeping, electronic ballot image storage, and election night reporting. Authorities allege these issues point to a potential conspiracy to intentionally manipulate the vote count in favor of Democrat Joe Biden.

Fulton County officials sued the federal government in response, arguing that the affidavit used to obtain a warrant for the raid “does not identify facts that establish probable cause that anyone committed a crime.”

Another filing includes sworn testimony from Ryan Macias, an elections expert who tested and certified the county’s voting machines while at the Election Assistance Commission. In his testimony, Macias told the court that the government’s key claims have already been investigated and have been found to be baseless.  

He said the FBI’s “many individual omissions and misstatements” in its affidavit reflect “gross mischaracterizations” of how elections work and directly contradict the conclusions of multiple prior investigations into the November 2020 election in Fulton County.

“Once the statements and omissions in the Affidavit are corrected and based on my experience administering elections, the Affidavit does not have a substantial basis in reality,” Macias stated.

For instance, the FBI’s affidavit cites the absence of scanned images of all 527,925 ballots from the original count and recount. But Macias, who served as an adviser to Fulton County and witnessed pre- and post-election operations in 2020, said this was standard practice. Jurisdictions typically send only the vote count records from their machines on election night, because ballot images and audit logs are much larger files that can slow down the reporting process.
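The size gap behind that practice is easy to ballpark. Assuming roughly 200 KB per scanned ballot image (an assumption for illustration, not a figure from the filing or testimony), images for all 527,925 ballots would run to roughly 100 GB, versus kilobytes for the tabulated counts:

```python
# Back-of-envelope arithmetic; per-file sizes are assumptions,
# not figures from the affidavit or Macias's testimony.
ballots = 527_925            # total ballots cited in the affidavit
image_kb_each = 200          # assumed size of one scanned ballot image
count_record_kb = 500        # assumed size of a full results export

images_gb = ballots * image_kb_each / 1_000_000
print(f"ballot images: ~{images_gb:.0f} GB vs. count records: ~{count_record_kb} KB")
```

Whatever the exact per-image size, the totals differ by several orders of magnitude, which is consistent with Macias’s point that sending images on election night would slow reporting.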

Macias also notes that the FBI affidavit omits that this issue was already investigated by Republican Secretary of State Brad Raffensperger, who found Georgia election workers weren’t required by law to preserve such images until a state law passed in 2021.

An investigator from Raffensperger’s office later told the Board of Elections that it was “important to note that ballots can be scanned and tabulated without capturing ballot images,” while general counsel Charlene McGowan testified that ballot images play no role in the vote tabulation process and that Fulton County’s paper ballots – counted three times – were the “most important” documents to verify the count.

“These explanations about the storing of ballot images have been publicly available for some time,” Macias noted.

Similarly, the FBI cites instances where some Fulton County ballots were scanned multiple times, claiming it shows evidence of “an intentional tabulation of ballots in a false matter” to make the recount and original vote counts match. The bureau also pointed to small, non-determinative differences between the county’s machine recount and totals from a hand-counted risk-limiting audit.

But the federal government again failed to mention in its petition for a warrant that these claims were “exhaustively” investigated by the Secretary of State’s office, which found the errors were benign, the duplicate scans were not counted, and the final vote totals in the 2020 presidential contest were unaffected.

According to Macias, the government’s affidavit also contains errors about basic facts of Fulton County’s reporting process, including misstating the official vote count and the date and time it was transmitted to state officials for tabulation.


Undressed victims file class action lawsuit against xAI for Grok deepfakes

A class of individuals who say they were victimized by nude or undressed deepfakes generated by Grok have filed a lawsuit against parent company xAI, calling the tool “a generative artificial intelligence chatbot that humiliates and sexually exploits women and girls by undressing them and posing them in sexual positions in deepfake images publicly posted on X.”

The lawsuit, filed Jan. 23 in the U.S. District Court for the Northern District of California, alleges that xAI executives knew Grok could generate explicit, nonconsensual images from real photos of victims, failed to implement industry-standard safeguards, and instead moved to “capitalize on the internet’s seemingly insatiable appetite for humiliating non-consensual sexual images.”

“xAI’s conduct is despicable and has harmed thousands of women who were digitally stripped and forced into sexual situations that they never consented to and who now face the very real risk that those public images will surface in their lives where viewers may not be able to distinguish whether they are real or fake,” the lawsuit stated.

There are at least 100 individuals involved in the lawsuit. The plaintiffs, who are suing under the anonymous name “Jane Doe, on behalf of herself and all others similarly situated,” cited data compiled by the New York Times showing that over a nine-day period between the end of December and the beginning of January, Grok generated 4.4 million images, of which at least 1.8 million were estimated to be sexualized deepfakes of women. Another analysis from the Center for Countering Digital Hate estimated that as many as three million of the images contained sexualized depictions of women, men and children.

“X users flooded Grok with these requests, and Grok obliged,” the lawsuit stated.

The suit claims that xAI took a number of actions to encourage users to create “nudified” content: it allowed users to prompt Grok to manipulate photos on X simply by tagging a person’s handle; it gave Grok a “spicy” option, letting a user click on a photo and generate controversial content, including sexualized deepfakes; and it failed to implement any prompt filtering that would have blocked sexualized deepfake requests.

xAI owner Elon Musk fueled the controversy by asking Grok on X to generate a photo of himself in a bikini. As backlash grew, Musk announced the feature would be limited to paying subscribers, sparking more criticism that the company was profiting off the tool’s abusive capability.

Musk has since put forth several different defenses, at one point denying that Grok was even generating illegal sexualized content. On Jan. 14, he posted on X that he was “not aware of any naked underage images generated by Grok. Literally zero.”

As CyberScoop has reported, legal experts believe Grok’s undressing capability – which researchers say goes beyond generating bikini or lingerie images and includes fully nude images of women, men and children, or victims covered in bodily fluids – may expose xAI and Musk to a broad range of U.S. and international laws against sexualized deepfakes, digital fraud, and the distribution of child sexual abuse material.

In addition to X’s embedded Grok tool, researchers have said that they were also able to easily generate even more graphic nonconsensual pornographic content through Grok’s main website.

The class action suit is the latest legal development to hit xAI and Musk over the episode. The European Union, the UK, South Korea, Canada, Brazil and others have opened formal investigations into whether xAI violated domestic laws. Leaders in the UK, India, Malaysia and Indonesia have all threatened to restrict or ban X unless more is done.

Meanwhile, the U.S. federal government, including the Federal Trade Commission and the Department of Justice, has remained silent.

But even in the United States, Musk is likely to face increasing pressure from states. On the same day the suit was filed, 35 state attorneys general, following a meeting with xAI officials, wrote to Musk expressing “deep concern” over the company’s actions.

The state officials said they were “committed” to investigations and prosecutions in this area and pressed xAI to do more to curb the Grok-enabled abuse.

“As several of us conveyed to you in our recent discussion, halting this kind of abusive and illegal behavior is an utmost priority for the undersigned Attorneys General,” they wrote. “The creation and dissemination of child sexual abuse material is a crime. In many states, this is true even where the material has been manipulated or is synthetic. Various state and federal civil and criminal laws also forbid the creation of nonconsensual intimate images and provide remedies to victims.”

While there are numerous AI nudifying tools, they wrote that “Grok merits special attention given evidence that it both promoted and facilitated the production and public dissemination of such images, and made it all as easy as the click of a button.”


Watchdog group sues for TSA data sharing agreement with ICE

A nonprofit is suing the federal government for records surrounding a data sharing agreement between the Transportation Security Administration and Immigration and Customs Enforcement that saw domestic travel data used for immigration enforcement.

Government watchdog group American Oversight filed suit against the agencies Thursday in the U.S. District Court for the District of Columbia, a day after acting TSA Administrator Ha Nguyen McNeill told Congress that it was “absolutely within our authorities” to hand over passenger data to other agencies for immigration enforcement operations.

A New York Times report in December revealed that the data sharing partnership included the names and birth dates of passengers. According to the report, TSA sends ICE a list several times a week containing passenger data for upcoming flights, which ICE then checks against its own immigration records.

Under the Trump administration, the Department of Homeland Security and ICE have dramatically expanded immigration enforcement efforts to areas – like airports and schools – that have not been traditionally targeted by past administrations. The data sharing program between TSA and ICE was reportedly used in the high-profile detention and deportation of 19-year-old college student Any Lucía López Belloza from Boston’s Logan Airport over Thanksgiving 2025. A court later found that Belloza was illegally deported to Honduras.

American Oversight filed Freedom of Information Act requests seeking to learn what other information was passed along as part of the agreement, claiming “the full scope of the collaboration—including what other pieces of data are being shared, and whether U.S. citizens have been swept up in any enforcement actions—has not been disclosed.”

The group said that after its initial requests were denied, TSA and ICE stopped responding once the nonprofit filed an appeal under FOIA law.

“As of the date of this Complaint, Defendant TSA has failed to notify…regarding American Oversight’s FOIA request, including the scope of responsive records Defendants intend to produce or withhold and the reasons for any withholdings,” the lawsuit states.

On Wednesday, McNeill defended the data sharing agreement to Congress as both legal and appropriate under DHS’s national security mandate.

While the Privacy Act generally constrains agencies from sharing information across different departments, that law doesn’t apply to what TSA and ICE are doing: both are part of the Department of Homeland Security and in many instances can legally share data with other component agencies, according to the National Immigration Law Center.

McNeill made a similar argument when pressed by Rep. LaMonica McIver, D-NJ, to explain what legal authorities TSA was relying on to share the data. She later promised to produce “the exact statute” that DHS was citing.

“We are acting within our absolute authorities,” said McNeill. “We are part of the DHS, it was a department set up by Congress to ensure these agencies aren’t operating in silos, and that’s what we’re doing today to advance the national security mission of the department.”

McIver disputed that characterization, noting “there is no law that forbids undocumented [people] from flying domestically within the U.S.”

“TSA’s mission is to secure transportation, not to assist ICE with immigration enforcement,” she said.

