Texas sues Netflix over alleged data practices that create ‘surveillance machinery’ without user consent


Read more of this story at Slashdot.

A House Democrat who’s been at the forefront of congressional efforts to scrutinize the federal government’s use of commercial spyware wants the Commerce Department to brief Capitol Hill amid apprehension that the Trump administration might further embrace the technology.
Rep. Summer Lee, D-Pa., sent a letter to the department Thursday seeking a briefing on several developments stemming from Immigration and Customs Enforcement acknowledging its use of Paragon’s Graphite spyware, as well as an American company purchasing a controlling stake in Israel’s NSO Group. The Commerce Department sanctioned NSO Group under former President Joe Biden after widespread abuse allegations, including eavesdropping on government officials, activists and journalists.
“The Trump Administration appears to be broadly receptive to using commercial spyware to infiltrate cell phones and allowing U.S. investment in sanctioned spyware companies like NSO Group,” Lee wrote in her letter to Commerce Secretary Howard Lutnick, which CyberScoop is first reporting.
NSO Group’s new executive chairman, David Friedman, is a former Trump ambassador to Israel and was his bankruptcy attorney. He said in November that he expects the administration will be “receptive” to using NSO Group tech.
“Given those close ties between NSO Group and the Trump Administration, and the serious concerns about how NSO’s technology could be used to spy on Americans, we write to request information regarding the purchase of NSO Group by an American company and the potential usage of NSO Group spyware by federal law enforcement,” wrote Lee, who sits on the Oversight and Government Reform panel and is the top Democrat on its Federal Law Enforcement Subcommittee.
Lee was one of the authors of a recent Democratic letter seeking confirmation of ICE’s use of Paragon’s Graphite, which ICE acknowledged. But the lawmakers criticized the administration for not answering all of their questions and expressed outrage over the disclosure.
In her latest letter, Lee asked the Commerce Department to brief Oversight and Government Reform Committee staff about internal department deliberations, Commerce communication with the White House and any outside conversations — including with Friedman — about government use of NSO Group technology or any other commercial spyware, and American investment in NSO.
NSO Group “appears to view the Trump administration as friendly to its interests in the United States, pitching itself as a vital tool for the U.S. government to safeguard national security,” Lee wrote, citing company court filings that it “is reasonably foreseeable that a law enforcement or intelligence agency of the United States will use Pegasus.”
The Biden administration sanctions, and court losses in a case against Meta, represented setbacks for NSO Group’s ambitions. And prior to the U.S. investment firm’s purchase of a controlling stake last fall, the Commerce Department under Trump had rebuffed efforts to remove NSO Group from its sanctions list.
But the tens of millions of dollars’ worth of investment, which followed news that Israel had used Pegasus to track people kidnapped or murdered by Hamas, was a boon.
NSO Group maintains that its products are designed only to help law enforcement and intelligence fight terrorism and crime, and that it vets its customers in advance as well as investigates misuse. News accounts and other investigations have turned up a multitude of abuses.
There have been scattered reports of U.S. flirtation with using NSO Group technology. The FBI acknowledged it had bought a Pegasus license, but stopped short of deploying it. The Times of London reported that “it is believed” the Central Intelligence Agency used Pegasus spyware as part of a rescue mission last month for a U.S. airman downed in Iran.
You can read the full letter below.
The post One House Democrat is pressing Commerce on the government’s spyware use appeared first on CyberScoop.




A 19-year-old woman is suing the makers of a dating app, alleging they took a video she posted online, repurposed it without her consent into an advertisement for the app, then used geofencing to target that ad to people in her area.
According to the lawsuit filed Apr. 28 in Tennessee and an interview with her lawyer, the company allegedly used geotargeting to serve the ads on platforms like Snapchat to users near her, including men in her own dormitory.
The allegations, if proven, offer another example of how modern technology has made it easier than ever for bad actors to imitate, objectify, profit from and harass individuals, often women. Recent laws like the Take It Down Act have focused particularly on the use of AI to create sexualized imagery of victims. In this case, the lawsuit alleges that Meete used not AI but simple video editing, a voiceover and geofencing to create the same kind of deception.
On the day of her high school graduation, Kaelyn Lunglhofer posted a brief video to TikTok, wearing an orange outfit and saying a few words to her followers over background music. She went on to attend the University of Tennessee in the fall, where she began building a following as a TikTok influencer.
The complaint alleges that the makers behind the dating app Meete took that video without Lunglhofer’s consent, overlaid it with graphics advertising the app, and added a voiceover to make it appear she was saying “Are you looking for a friend with benefits? This app shows you women around you who are looking for some fun. You can video chat with them.”
Abe Pafford, Lunglhofer’s attorney, told CyberScoop that his client had no idea Meete was using her likeness until a male student in her dormitory told her he had repeatedly seen her in ads for the app on his Snapchat shortly after the two had met.
Pafford called it “implausible” that this was a coincidence, pointing to Meete’s premise of connecting users with nearby women and the precision of geofencing technology. Before filing the case, Pafford’s law firm hired an investigative firm to gather additional evidence.
“I think the idea is they want[ed] viewers of these advertisements – and candidly this is pretty clearly targeted at male viewers – to have their eye caught by someone they may know or recognize or think they may have seen around, and that’s part of what makes it so disturbing,” he said.
Pafford said he believes Lunglhofer is far from the only person whose image Meete has misappropriated, and that most victims likely have no idea it’s happening. Lunglhofer herself only had evidence because the student who told her had saved recordings and screenshots of the ads featuring her video.
“The bottom line is we think there are likely others that have been victimized in a similar way, but finding out who they are and landing on tangible proof of that can be challenging,” he said.
After this story was published, Snap told CyberScoop it is investigating.
“Snap’s advertising policies require that advertisers have all necessary rights to the content in their ads, including the rights to any individuals featured,” Snap spokesperson Ahrim Nam said in an email. “Using someone’s likeness without their consent is a violation of our policies. Upon learning of these allegations, we are actively reviewing the matter and will take appropriate action.”
The lawsuit cites alleged violations of multiple federal and state laws, including the Lanham Act, the primary U.S. law governing trademark rights. The suit also alleges violations of the ELVIS Act, a Tennessee state law that prevents the unauthorized use of artists’ and musicians’ images or likenesses, as well as Tennessee common-law claims for defamation and violation of the right of publicity.
Lunglhofer is seeking $750,000 in punitive damages, as well as any revenue tied to the ads featuring her likeness. Pafford said that the advertisements damaged her online brand and reputation while also putting her at risk of harassment or falsely implying she was endorsing a local dating service and was open to casual hookups.
“It’s really kind of grotesque and it’s also kind of dangerous,” he said. “Someone may not be aware that this is happening and they’re targeted in this way, but you can put people at risk in ways that are really troubling if you stop to think about it.”
The suit names Quantum Communications Development Unlimited, based in the Virgin Islands, as well as Chinese companies Starpool Data Limited and Guangzhou Yuedong Interconnection Technology, as defendants. A judge has ordered representatives from all three to appear for depositions in the United States.
Quantum Communications Development Unlimited has a sparse internet footprint: its website consists of a single page with a message written in broken English and an email address that no longer appears to work. Efforts by CyberScoop to reach the company and other defendants for comment were not successful. The company is listed as Meete’s publisher on Apple’s App Store, where it describes the app as “a space where you can be yourself and meet people” and promises “safety and respect first” — adding that “Meete provides a secure environment where your privacy and safety are our top concerns.”
The description also claims the app adheres to Apple’s safety standards, citing a “Zero-Tolerance Policy regarding objectionable content and abusive behavior.” Listed safeguards include “24/7” manual reviews by moderation teams, instant reporting and blocking of other users, and AI filtering “to detect and prevent harassment before it happens.”
On Meete’s Google Play Store page, user reviews accuse the app of failing to match them to nearby users and being largely populated by bots posing as women to sell in-app currency.
Pafford acknowledged that the defendants being based overseas complicates efforts to hold them accountable under U.S. law, but argued that Meete is clearly designed to operate in the United States. The companies behind the app have filed U.S. patents and trademarks for their business, and distribute their app through the Apple and Google Play Stores while advertising on major U.S. social media platforms like Snapchat.
Apple and Google did not respond to a request for comment.
You can read the full lawsuit below.
5/05/26: This story was updated to include comment from Snap received after publication.
The post A college student is suing a dating app that allegedly used her TikTok videos to target men in her dormitory appeared first on CyberScoop.


Congress extended a controversial surveillance law for 45 days on Thursday, hours before it was set to expire following an earlier extension.
The Senate passed — then the House cleared — a 45-day extension of Section 702 of the Foreign Intelligence Surveillance Act, which authorizes warrantless surveillance of foreign targets. But those targets are sometimes communicating electronically with Americans, and intelligence officials can search the database using their identifying information, which has long given privacy groups and privacy-minded lawmakers heartburn.
The 45-day reprieve gives lawmakers more time to hammer out a lasting deal, and comes after the leaders of the Senate Intelligence Committee agreed to send a letter to the Director of National Intelligence and attorney general, seeking swift declassification of a letter on a classified ruling from the Foreign Intelligence Surveillance Court.
Sen. Ron Wyden, D-Ore., had sought release of that opinion, and had resisted giving unanimous consent for the latest short-term extension to move forward until Senate Intelligence Chairman Tom Cotton, R-Ark., and top panel Democrat Mark Warner of Virginia agreed to send the letter.
A declassification review was already underway, but the Cotton-Warner letter states that “We expect that this declassification review will be completed and the FISC opinion released publicly within 15 days,” according to Wyden, speaking on the Senate floor.
The March 17 opinion reportedly came with the annual recertification of the warrantless surveillance program. The Justice Department is appealing the ruling because it blocked the government from using certain tools to analyze communications.
“A few weeks ago, the Foreign Intelligence Surveillance Court found major compliance problems related to the surveillance law known as section 702,” Wyden said earlier this month. “These compliance problems are directly related to Americans’ Constitutional rights.”
Senate Majority Leader John Thune, R-S.D., said the extension will give lawmakers additional room to hold “discussion on reforms.”
The House this week had passed a 3-year reauthorization with some changes to the surveillance program, but key to doing so was leadership’s agreement to attach legislative language on a separate matter that would ban a central bank digital currency. Thune had said that language was going nowhere in the Senate.
On Thursday, the House voted 261-111 to extend the law for 45 days. President Donald Trump has sought a “clean” 18-month reauthorization of the surveillance powers.
The extension continues a perennial ritual for the Hill when it comes to Section 702: a deadline looms, and Congress kicks the can down the road.
The post Congress kicks the can down the road on surveillance law (again) appeared first on CyberScoop.

On April 15, 2026, the European Data Protection Board (EDPB) adopted guidelines on the processing of personal data for scientific research purposes.[1] The guidelines aim to clarify GDPR compliance requirements for scientific research involving personal data.
The concepts addressed by the EDPB are of particular relevance to companies active in life sciences, artificial intelligence (AI), and advanced technology R&D.
The guidelines are open for public consultation until June 25, 2026.
The most significant aspect of the guidelines is the EDPB’s clarification of what constitutes “genuine” scientific research. The guidelines set out six key indicative factors to be considered alongside the nature, scope, context, and purposes of the processing. These factors appear to restrict the scope of processing that can be classified as scientific research, meaning that researchers may need to re-evaluate whether their activities genuinely qualify for the GDPR’s more flexible treatment of scientific research.
The six key indicative factors are as follows:[2]
If all six factors are met, the activities can be presumed to constitute scientific research. If not, the controller must justify and demonstrate why the activities should nonetheless qualify.
The remainder of the guidelines address GDPR compliance more generally in the context of scientific research, including with respect to: data protection principles, lawfulness of processing, transparency, data subjects’ rights, attribution of responsibility, and appropriate safeguards.
While these sections largely restate existing principles (albeit with useful clarifications on “broad” and “dynamic” consent, including through specific examples on how organizations can navigate the tension with the principles of specificity and purpose limitation as part of their overall data protection governance structure), the EDPB’s views on data minimization merit highlighting.[3] The EDPB takes the view that, because personal data must be “adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed”[4], anonymization should be the default approach for scientific research. Once data is truly anonymized, it falls outside the scope of the GDPR entirely, although the anonymization process itself must still comply with GDPR requirements.[5] Where research aims cannot be achieved using anonymized data, personal data should be pseudonymized.[6] Processing data that can directly identify individuals should only occur where “strictly” necessary and proportionate to the research purpose.[7]
Controllers will welcome the clarity provided by the guidelines, though ongoing compliance may require updates to internal processes. The full practical implications will become clearer once the dedicated guidance on anonymization and pseudonymization is published later this year.
Data subjects must be transparently informed about whether their data is processed in identifiable or pseudonymized form, and must not be misled into believing that their data is anonymized when it is not.[8]
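To make the anonymization-versus-pseudonymization distinction concrete, the sketch below shows one common pseudonymization approach: replacing a direct identifier with a keyed hash whose key is held separately from the research dataset. This is purely illustrative and is not a technique prescribed by the EDPB guidelines; the field names and key are hypothetical, and whether any given scheme qualifies as pseudonymization under GDPR Article 4(5) is ultimately a legal assessment.

```python
# Illustrative sketch only: keyed-hash pseudonymization of a direct identifier.
# Not an EDPB-endorsed method; all names and values here are hypothetical.
import hmac
import hashlib

# Under Art. 4(5) GDPR, the "additional information" enabling re-identification
# (here, the key) must be kept separately under technical/organizational measures.
SECRET_KEY = b"store-separately-under-strict-access-controls"  # hypothetical key

def pseudonymize(identifier: str, key: bytes = SECRET_KEY) -> str:
    """Replace a direct identifier with an HMAC-SHA256 digest.

    Without access to the key, the digest cannot be linked back to the
    individual; with the key, the controller can recompute and match it.
    """
    return hmac.new(key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# A hypothetical research record: the direct identifier is replaced before the
# dataset is shared with the research team; other fields are retained as-is.
record = {"patient_id": "NL-2026-00123", "age_band": "40-49", "outcome": "remission"}
research_record = {**record, "patient_id": pseudonymize(record["patient_id"])}
```

Note that data processed this way remains personal data under the GDPR (the key allows re-identification); only truly anonymized data falls outside the regulation's scope.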
In addition to adopting these guidelines, the EDPB established a dedicated “sprint team” to finalize its upcoming and much-anticipated guidelines on anonymization by summer 2026.[9] The questions of when personal data qualifies as “anonymous” under the GDPR and under what circumstances personal data (including sensitive personal data) can be used to train AI models are currently also the subject of ongoing negotiations at the EU level on the Digital Omnibus Package.[10]
Finally, the EDPB adopted two opinions approving two sets of Europrivacy certification criteria as a European Privacy Label, simplifying the data transfer process and enhancing accountability in high-risk sectors. The first approves an updated set of criteria whose scope now includes controllers and processors established outside Europe that are subject to Article 3(2) GDPR.[11] The second recognizes Europrivacy certification criteria as a European Data Protection Seal that can be used as a transfer mechanism under Articles 42 and 46 GDPR.[12] This will allow data importers outside Europe that are not subject to the GDPR to seek Europrivacy certification for transferred data they receive.
[1] EDPB Press Release, April 16, 2026, available here.
[2] EDPB Guidelines, section 2.1.
[3] EDPB Guidelines, section 8.3.
[4] GDPR Article 5(1)(c).
[5] EDPB Guidelines, para. 156.
[6] EDPB Guidelines, paras. 157-158.
[7] EDPB Guidelines, para. 159.
[8] EDPB Guidelines, para. 164.
[9] EDPB Press Release, April 16, 2026, available here.
[10] Cleary AI and Technology Insights, “Reset or rollback: Unpacking the EU’s Digital Omnibus Package”, November 21, 2025, available here.
[11] Opinion 14/2026 on the Europrivacy certification criteria regarding their approval by the Board as European Data Protection Seal pursuant to Article 42.5 GDPR, adopted April 15, 2026, available here.
[12] Opinion 15/2026 on the Europrivacy certification criteria regarding their approval by the Board as European Data Protection Seal to be used as tool for transfers pursuant to Articles 42 and 46 GDPR, adopted April 15, 2026, available here.

Illinois Rep. Delia Ramirez is taking over as the top Democrat on the House Homeland Security panel’s cybersecurity subcommittee, replacing former Rep. Eric Swalwell after his resignation.
Committee Democrats approved the change Tuesday at a meeting prior to a “shadow hearing” without the GOP majority, focused on protecting elections from Trump administration interference.
Ramirez first won election to Congress in 2022 and was reelected in 2024. She has served as the vice ranking member of the committee since 2023. She is now the ranking member of the Subcommittee on Cybersecurity and Infrastructure Protection.
She has leveled criticisms during committee hearings about the Trump administration’s personnel cutbacks at the Cybersecurity and Infrastructure Security Agency, and was critical of how data was secured under the administration’s Department of Government Efficiency initiative led by Elon Musk.
“Under a Musk and Trump presidency, it’s clear that the security of Americans’ information is not a priority. I mean, a private civilian with no security clearance bullied his way into the Treasury, set up private servers, and stole sensitive information from an agency. If that isn’t a national security crisis, a cybersecurity crisis – then I don’t know what is,” Ramirez said at an early 2025 hearing. “The true threat to our homeland security is ‘fElon’ Musk, Trump, and their blatant misuse of power to steal information and coerce employees to leave agencies.”
She cosponsored legislation last year meant to strengthen the cybersecurity workforce by promoting measures to help workers from underrepresented and disadvantaged communities to join the field.
But she also had criticisms of U.S. cybersecurity under the Biden administration, including of Microsoft’s role in the SolarWinds breach.
In a statement about her appointment Tuesday, Ramirez took aim at Trump, Vice President JD Vance, Department of Homeland Security Secretary Markwayne Mullin and White House homeland security adviser Stephen Miller.
“It’s clear that the security of our communities’ information, federal networks, and critical infrastructure have not been priorities” under them, she said. “Between the security failures of DOGE, the abuses of immigrant families’ data, and the decimation of CISA’s workforce and resources, Republicans have demonstrated a lack of interest in safeguarding our nation’s cybersecurity and our residents’ civil rights and privacy. In neglecting necessary oversight, Republicans have deregulated emerging technologies, allowed bad actors to profit from violations of our civil rights, and consented to the weaponization of government systems. It is more critical than ever that we assert our Congressional authority and disrupt the blatant corruption making us all less safe.”
Swalwell left the position following his resignation from Congress as a representative from California amid allegations of sexual misconduct.
Her ascension completes a full leadership turnover for the subcommittee. Rep. Andy Ogles, R-Tenn., took over the gavel late last year after former chairman Andrew Garbarino, R-N.Y., took over as chairman of the full committee.
The subcommittee is set to hold a hearing Wednesday on CISA and its role as the sector risk management agency for a number of critical infrastructure sectors.
Updated 4/28/26 to include comment from Ramirez.
The post Rep. Delia Ramirez takes over as top House cybersecurity Dem appeared first on CyberScoop.
U.S. states issued $3.45 billion in privacy-related fines to companies in 2025, a total larger than the last five years combined, according to research and advisory firm Gartner.
The increase is driven in part by stronger, more established privacy laws in states like California, new interstate partnerships built around enforcing laws across state lines, and a renewed focus on how AI and automation affect privacy.
The data indicates that “regulators are shifting their efforts away from awareness to full scale enforcement,” marking a significant shift from even the last few years in how aggressively states are investigating and penalizing companies for privacy law violations.
“This is increasingly becoming the standard in 2026 and for the coming two years,” Gartner’s analysis concludes.

The California Consumer Privacy Act had consumer privacy provisions go live in 2023, but for years enforcement was largely dormant. According to Nader Heinen, a data protection and AI analyst at Gartner and co-author of the research, that enforcement lag mirrors the way other major privacy laws, like Europe’s General Data Protection Regulation, have been carried out in order to “lead with a bit of guidance” for companies while using enforcement sparingly.
But that era appears to be over. In 2025, the California Privacy Protection Agency used the law to pursue violators across a wide range of industries, targeting not just large conglomerates but also smaller and mid-sized companies in tech, the auto industry, and consumer products, including off-the-shelf goods and apparel.
Heinen said some businesses “weren’t paying attention” and may have been lulled into complacency as regulators spun up their enforcement teams, leading to a harsh 2025.
“Unfortunately what happens when so much time passes between the legislation and starting enforcement regularly, is a lot of organizations let their privacy program atrophy,” he said.
States have also sought to combine their resources to target and penalize privacy violators across state lines. Last year, ten states came together to form the Consortium of Privacy Regulators, pledging to coordinate investigations and enforcement of common privacy laws around accessing, deleting and preventing the sale of personal information.
Beyond laws like the CCPA, states have been updating existing privacy and data-protection laws to more directly address harms from automated decision-making technologies, including AI. State privacy regulators are especially focused on how personal or private data is used to train AI systems and help them make inferences.
Gartner expects privacy fines to increase further in the coming years, and Heinen said states will likely again lead the way in building the legal infrastructure to enforce data privacy in the AI age as they become the main conduit for lingering anxiety about the technology’s potential negative impacts.
“You have to put yourself in the position of these state legislatures,” Heinen said. “Their constituencies – the voting public – is telling them we’re worried about AI. AI anxiety is a thing. Everybody’s worried about whether AI is going to take their job or impact their capacity to find a job, so they want to see legislation in place to protect them.”
This past month, House Republicans unveiled their latest attempt to pass comprehensive federal privacy legislation with a bill that would preempt tougher state laws like those in California. In particular, the CCPA gives residents a private right of action – the legal right to sue companies directly – for violation of privacy laws.
On Monday, Tom Kemp, executive director of the California Privacy Protection Agency, wrote to House Energy and Commerce Chair Brett Guthrie, R-Ky., to oppose the bill, arguing it would provide “a ceiling” for Americans’ data privacy protections rather than a “floor” to build on.
“Preemption would strip away important existing state privacy provisions that protect tens of millions of Americans now,” Kemp wrote. “That would be a significant step backward in privacy protection at a time when individuals are increasingly concerned about their privacy and security online, and when challenges from data-intensive new technologies such as AI are developing quickly.”
The post U.S. companies hit with record fines for privacy in 2025 appeared first on CyberScoop.