
GM Secretly Sold California Drivers' Data, Agrees to Pay $12.75M In Privacy Settlement

"General Motors sold the data of California drivers without their knowledge or consent," says California's attorney general, "and despite numerous statements reassuring drivers that it would not do so." In 2024, The New York Times "reported that automakers including GM were sharing information about their customers' driving behavior with insurance companies," remembers TechCrunch, "and that some customers were concerned that their insurance rates had gone up as a result." Now General Motors "has reached a privacy-related settlement with a group of law enforcement agencies led by California Attorney General Rob Bonta..."

The settlement announcement from Bonta's office similarly alleges that GM sold "the names, contact information, geolocation data, and driving behavior data of hundreds of thousands of Californians" to Verisk Analytics and LexisNexis Risk Solutions, which are both data brokers. Bonta's office further alleges that this data was collected through GM's OnStar program, and that the company made roughly $20 million from data sales. However, Bonta's office also said the data did not lead to increased insurance prices in California, "likely because under California's insurance laws, insurers are prohibited from using driving data to set insurance rates."

As part of the settlement, GM has agreed to pay $12.75 million in civil penalties and to stop selling driving data to any consumer reporting agencies for five years, Bonta's office said. GM has also agreed to delete any driver data that it still retains within 180 days (unless it obtains consent from customers), and to request that Lexis and Verisk delete that data. "This trove of information included precise and personal location data that could identify the everyday habits and movements of Californians," according to the attorney general's announcement.
The settlement "requires General Motors to abandon these illegal practices, and underscores the importance of data minimization in California's privacy law — companies can't just hold on to data and use it later for another purpose." "Modern cars are rolling data collection machines," said San Francisco District Attorney Brooke Jenkins. "Californians must have confidence that they know what data is being collected, how it is being used, and what their opt-out rights are... This case sends a strong message that law enforcement will take action when California privacy laws are not scrupulously followed."

Read more of this story at Slashdot.

Fiber Optic Cables Can Eavesdrop On Nearby Conversations

sciencehabit shares a report from Science Magazine: Cold War spies planted bugs in walls, lamps, and telephones. Now, scientists warn, the cables themselves could listen in. A fiber optic technique used to detect earthquakes can also pick up the faint vibrations of nearby speech, researchers reported this week here at the general assembly of the European Geosciences Union. Freely available artificial intelligence (AI) software turned the fiber optic data into intelligible, real-time transcripts. "Not many people realize that [fiber optic cables] can detect acoustic waves," says Jack Lee Smith, a geophysicist at the University of Edinburgh who presented the result. "We show that in almost every case where you use these fibers, this could be a privacy concern."

Fiber optics can pick up sound through a technique called distributed acoustic sensing (DAS). Using a machine called an interrogator, researchers fire laser pulses down a cable and record the pattern of reflections coming back from tiny glass defects along the length of the fiber. When an earthquake's seismic wave crosses a section of the fiber, it stretches and squeezes the defects, leading to shifts in the reflected light that researchers can use to build a picture of the earthquake. DAS essentially turns a fiber cable into a long chain of seismometers that can detect not only earthquakes, but also the rumblings of volcanoes, cars, and college marching bands. And although scientists set up dedicated fiber lines specifically for research, DAS can also be performed on "dark fiber" -- unused strands in the web of fiber optics that runs through cities and across oceans, carrying the world's internet traffic.

DAS can also be used to eavesdrop, the work of Smith and his colleagues shows. They conducted a field test using an existing DAS setup used to study coastal erosion. They set a speaker next to the cable and played pure tones, music, and speech.
Human speech contains frequencies ranging from a few hundred to several thousand hertz. The low end of the range could be pulled out of the data "even without any preprocessing," Smith says. "You can easily see acoustic waves." Getting higher frequency speech took a bit of postprocessing, but it was possible. Dumping the data directly into Whisper, a free AI transcription tool, provided accurate real-time transcription. However, this technique worked only for coiled cables, exposed at the surface, at distances of up to 5 meters from the speaker. Burying the cable under just 20 centimeters of dirt was enough to muddy the speech. And straight cables -- even exposed ones right next to the speaker -- did not record speech well.
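The preprocessing step the researchers describe, pulling the speech-frequency band out of raw vibration data before handing it to a transcription model, can be sketched with a crude FFT band-pass filter. This is an illustrative reconstruction, not the team's actual pipeline; the sample rate, signal, and frequencies are all invented:

```python
import numpy as np

def bandpass(signal, fs, f_lo, f_hi):
    """Zero out FFT bins outside [f_lo, f_hi] and invert -- a crude band-pass."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[(freqs < f_lo) | (freqs > f_hi)] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

fs = 8000                  # samples per second (hypothetical)
t = np.arange(fs) / fs     # one second of data
# Simulated DAS trace: strong 20 Hz ground rumble plus a faint 500 Hz "speech" tone
trace = 5.0 * np.sin(2 * np.pi * 20 * t) + 0.5 * np.sin(2 * np.pi * 500 * t)

# Keep only the 300-3000 Hz band, discarding the seismic-range rumble
speech_band = bandpass(trace, fs, 300, 3000)
peak = np.argmax(np.abs(np.fft.rfft(speech_band)))
print(np.fft.rfftfreq(len(speech_band), d=1.0 / fs)[peak])  # → 500.0
```

A real pipeline would then resample this band and feed it to a speech-to-text model, as the researchers did with Whisper.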


One House Democrat is pressing Commerce on the government’s spyware use

A House Democrat who’s been at the forefront of congressional efforts to scrutinize the federal government’s use of commercial spyware wants the Commerce Department to brief Capitol Hill amid apprehension that the Trump administration might further embrace the technology.

Rep. Summer Lee, D-Pa., sent a letter to the department Thursday seeking a briefing on several developments stemming from Immigration and Customs Enforcement acknowledging its use of Paragon’s Graphite spyware, as well as an American company purchasing a controlling stake in Israel’s NSO Group. The Commerce Department sanctioned NSO Group under former President Joe Biden after widespread abuse allegations, including eavesdropping on government officials, activists and journalists.

“The Trump Administration appears to be broadly receptive to using commercial spyware to infiltrate cell phones and allowing U.S. investment in sanctioned spyware companies like NSO Group,” Lee wrote in her letter to Commerce Secretary Howard Lutnick, which CyberScoop is first reporting.

NSO Group’s new executive chairman, David Friedman, is a former Trump ambassador to Israel and was his bankruptcy attorney. He said in November that he expects the administration will be “receptive” to using NSO Group tech.

“Given those close ties between NSO Group and the Trump Administration, and the serious concerns about how NSO’s technology could be used to spy on Americans, we write to request information regarding the purchase of NSO Group by an American company and the potential usage of NSO Group spyware by federal law enforcement,” wrote Lee, who sits on the Oversight and Government Reform panel and is the top Democrat on its Federal Law Enforcement Subcommittee.

Lee was one of the authors of a recent Democratic letter seeking confirmation of ICE’s use of Paragon’s Graphite, which ICE acknowledged. But the lawmakers criticized the administration for not answering all of their questions, and expressed outrage over the spyware’s use.

In her latest letter, Lee asked the Commerce Department to brief Oversight and Government Reform Committee staff about internal department deliberations, Commerce communication with the White House and any outside conversations — including with Friedman — about government use of NSO Group technology or any other commercial spyware, and American investment in NSO.

NSO Group “appears to view the Trump administration as friendly to its interests in the United States, pitching itself as a vital tool for the U.S. government to safeguard national security,” Lee wrote, citing company court filings that it “is reasonably foreseeable that a law enforcement or intelligence agency of the United States will use Pegasus.”

The Biden administration sanctions, and court losses in a case against Meta, represented setbacks for NSO Group’s ambitions. And prior to the U.S. investment firm’s purchase of a controlling stake last fall, the Commerce Department under Trump rebuffed efforts to remove NSO Group from its sanctions list.

But the tens of millions of dollars worth of investment, following news that Israel had used Pegasus to track people kidnapped or murdered by Hamas, was a boon.

NSO Group maintains that its products are designed only to help law enforcement and intelligence fight terrorism and crime, and that it vets its customers in advance as well as investigates misuse. News accounts and other investigations have turned up a multitude of abuses.

There have been scattered reports of U.S. flirtation with using NSO Group technology. The FBI acknowledged it had bought a Pegasus license, but stopped short of deploying it. The Times of London reported that “it is believed” the Central Intelligence Agency used Pegasus spyware as part of a rescue mission last month for a U.S. airman downed in Iran.


The post One House Democrat is pressing Commerce on the government’s spyware use appeared first on CyberScoop.

60% of MD5 Password Hashes Are Crackable In Under an Hour

In honor of World Password Day, Kaspersky researchers revisited their study on the crackability of real-world passwords and found that 60% of MD5-hashed passwords could be cracked in under an hour with a single Nvidia RTX 5090, and 48% could be cracked in under a minute. "The bottom line is that passwords protected only by fast hashing algorithms such as MD5 are no longer safe if attackers obtain them in a data breach," reports The Register. From the report: Much of the reason password hashes have become so easy to crack is password predictability. Per Kaspersky, its analysis of more than 200 million exposed passwords revealed common patterns that attackers can use to optimize cracking algorithms, significantly reducing the time needed to guess the character combinations that grant access to target accounts.

In case you're wondering whether there's a trend to compare this to, Kaspersky ran a prior iteration of this study in 2024, and bad news: Passwords are actually a bit easier to crack in 2026 than they were a couple of years ago. Not by much, mind you -- only a few percent -- but it's still a move in the wrong direction. "Attackers owe this boost in speed to graphics processors, which grow more powerful every year," Kaspersky explained. "Unfortunately, passwords remain as weak as ever."

"This World Password Day, the main message ought not to be to the users, who often have no choice but to use passwords anyway, but to the sites and providers that are requiring them to do so," said senior IEEE member and University of Nottingham cybersecurity professor Steven Furnell. His advice is that providers need to modernize their login systems and enforce stronger protections, because users are often stuck with whatever security options they're given.
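The mechanics behind those numbers are simple: MD5 is unsalted and extremely cheap to compute, so an attacker who obtains a leaked digest can test candidate passwords at enormous rates (a GPU manages billions of hashes per second; the pure-Python sketch below is vastly slower but shows the same loop). The password and wordlist here are invented for illustration:

```python
import hashlib

def crack_md5(target_hash, candidates):
    """Brute-force an unsalted MD5 digest against a candidate wordlist."""
    for pw in candidates:
        if hashlib.md5(pw.encode()).hexdigest() == target_hash:
            return pw
    return None  # not in the wordlist

# Simulate a digest recovered from a breach (hypothetical password)
leaked = hashlib.md5(b"sunshine1").hexdigest()
wordlist = ["password", "123456", "qwerty", "sunshine1"]
print(crack_md5(leaked, wordlist))  # → sunshine1
```

Slow, salted password-hashing schemes (bcrypt, scrypt, Argon2) defeat exactly this attack by making each guess costly and each salt unique.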


Microsoft Edge Stores Passwords In Plaintext In RAM

Longtime Slashdot reader UnknowingFool writes: Security researcher Tom Joran Sonstebyseter Ronning has found that Microsoft Edge stores passwords in plaintext in RAM. After creating a password and storing it with Edge's password manager, Ronning found that he could dump the RAM and recover the password in plaintext. Part of the issue is that Edge loads the passwords for all sites after a single verification check, even if the user is not visiting a specific site. This is very different from Chrome, which only loads passwords for a specific website when challenged for that site's password. Chrome also deletes a password from memory once it has been filled in; Edge does not delete passwords from memory once they are used.

Microsoft downplayed the risk, noting that access would require control over a user's PC, such as through a malware infection: "Access to browser data as described in the reported scenario would require the device to already be compromised," Microsoft said. Ronning countered that an attacker with administrative privileges under one account could dump the passwords of other logged-on users. "Design choices in this area involve balancing performance, usability, and security, and we continue to review it against evolving threats," Microsoft said. "Browsers access password data in memory to help users sign in quickly and securely -- this is an expected feature of the application. We recommend users install the latest security updates and antivirus software to help protect against security threats."
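The Chrome behavior described above, wiping a credential from memory as soon as it has been consumed, is a standard secret-hygiene pattern. A minimal sketch (hypothetical, not Edge's or Chrome's actual code) keeps the secret in a mutable buffer so it can be overwritten in place:

```python
def scrub(buf: bytearray) -> None:
    """Overwrite a secret in place so the bytes no longer linger in this buffer."""
    for i in range(len(buf)):
        buf[i] = 0

# Keep secrets in a mutable bytearray rather than an immutable str,
# which cannot be overwritten and may be copied or interned.
password = bytearray(b"hunter2")
# ... use the password, e.g. to fill a login form ...
scrub(password)  # wipe it as soon as it has been consumed
print(password)  # → bytearray(b'\x00\x00\x00\x00\x00\x00\x00')
```

In CPython this only scrubs the one buffer; intermediate copies can still exist elsewhere, which is why browsers implement this discipline in native code rather than a garbage-collected language.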


A college student is suing a dating app that allegedly used her TikTok videos to target men in her dormitory

A 19-year-old woman is suing the makers of a dating app, alleging they took a video she posted online, repurposed it without her consent into an advertisement for the app, then used geofencing to target that ad to people in her area. 

According to the lawsuit filed Apr. 28 in Tennessee and an interview with her lawyer, the company allegedly used geotargeting to serve the ads on platforms like Snapchat to users near her, including men in her own dormitory. 

The allegations, if proven, offer another example of how modern technology has made it easier than ever for bad actors to imitate, objectify, profit off, and harass individuals, often women. Recent laws like the Take It Down Act have focused particularly on the use of AI to create sexualized imagery of victims. In this case, the lawsuit alleges that Meete used not AI, but simple video editing, a voiceover, and geofencing to create the same kind of deception.

On the day of her high school graduation, Kaelyn Lunglhofer posted a brief video to TikTok, wearing an orange outfit and saying a few words to her followers over background music. She went on to attend the University of Tennessee in the fall, where she began building a following as a TikTok influencer.

The complaint alleges that the makers behind the dating app Meete took that video without Lunglhofer’s consent, overlaid it with graphics advertising the app, and added a voiceover to make it appear she was saying “Are you looking for a friend with benefits? This app shows you women around you who are looking for some fun. You can video chat with them.”

Abe Pafford, Lunglhofer’s attorney, told CyberScoop that his client had no idea Meete was using her likeness until a male student in her dormitory told her he had repeatedly seen her in ads for the app on his Snapchat shortly after the two had met. 

Pafford called it “implausible” that this was a coincidence, pointing to Meete’s premise of connecting users with nearby women and the precision of geofencing technology. Before filing the case, Pafford’s law firm hired an investigative firm to gather additional evidence.
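Geofenced ad delivery of the kind alleged here typically reduces to a radius test on the viewer's reported location. A toy great-circle (haversine) check, with entirely hypothetical coordinates and radius, shows how cheaply "users near a given building" can be computed:

```python
import math

def within_geofence(lat1, lon1, lat2, lon2, radius_m):
    """True if the two points are within radius_m meters (haversine distance)."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a)) <= radius_m

# Hypothetical fence center and viewer location, 1 km radius
print(within_geofence(35.9544, -83.9295, 35.9550, -83.9290, 1000))  # → True
```

Ad platforms expose this kind of targeting as a built-in feature; advertisers supply only a center point and radius.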

“I think the idea is they want[ed] viewers of these advertisements – and candidly this is pretty clearly targeted at male viewers – to have their eye caught by someone they may know or recognize or think they may have seen around, and that’s part of what makes it so disturbing,” he said.

Pafford said he believes Lunglhofer is far from the only person whose image Meete has misappropriated, and that most victims likely have no idea it’s happening. Lunglhofer herself only had evidence because the student who told her had saved recordings and screenshots of the ads featuring her video.

“The bottom line is we think there are likely others that have been victimized in a similar way, but finding out who they are and landing on tangible proof of that can be challenging,” he said.

After this story was published, Snap told CyberScoop it is investigating.

“Snap’s advertising policies require that advertisers have all necessary rights to the content in their ads, including the rights to any individuals featured,” Snap spokesperson Ahrim Nam said in an email. “Using someone’s likeness without their consent is a violation of our policies. Upon learning of these allegations, we are actively reviewing the matter and will take appropriate action.”

The lawsuit alleges violations of multiple federal and state laws, including the Lanham Act, the primary U.S. law governing trademark rights. The suit also alleges violations of Tennessee’s ELVIS Act, which prevents the unauthorized use of an artist’s or musician’s image or likeness, and of Tennessee common law on defamation and the right of publicity.

Lunglhofer is seeking $750,000 in punitive damages, as well as any revenue tied to the ads featuring her likeness. Pafford said that the advertisements damaged her online brand and reputation while also putting her at risk of harassment or falsely implying she was endorsing a local dating service and was open to casual hookups.

“It’s really kind of grotesque and it’s also kind of dangerous,” he said. “Someone may not be aware that this is happening and they’re targeted in this way, but you can put people at risk in ways that are really troubling if you stop to think about it.”

The suit names Quantum Communications Development Unlimited, based in the Virgin Islands, as well as Chinese companies Starpool Data Limited and Guangzhou Yuedong Interconnection Technology, as defendants. A judge has ordered representatives from all three to appear for depositions in the United States.

Quantum Communications Development Unlimited has a sparse internet footprint: their website consists of a single page with a message written in broken English and an email address that no longer appears to work. Efforts by CyberScoop to reach the company and other defendants for comment were not successful. The company is listed as Meete’s publisher on Apple’s App Store, where it describes the app as “a space where you can be yourself and meet people” and promises “safety and respect first” — adding that “Meete provides a secure environment where your privacy and safety are our top concerns.”

The description also claims the app adheres to Apple’s safety standards, citing a “Zero-Tolerance Policy regarding objectionable content and abusive behavior.” Listed safeguards include “24/7” manual reviews by moderation teams, instant reporting and blocking of other users, and AI filtering “to detect and prevent harassment before it happens.”

On Meete’s Google Play Store page, user reviews accuse the app of failing to match them to nearby users and being largely populated by bots posing as women to sell in-app currency.

Pafford acknowledged that the defendants being based overseas complicates efforts to hold them accountable under U.S. law, but argued that Meete is clearly designed to operate in the United States. The companies behind the app have filed U.S. patents and trademarks for their business, and distribute their app through the Apple and Google Play Stores while advertising on major U.S. social media platforms like Snapchat.

Apple and Google did not respond to a request for comment.



5/05/26: This story was updated to include comment from Snap received after publication.


Congress kicks the can down the road on surveillance law (again)

Congress extended a controversial surveillance law for 45 days on Thursday, hours before its latest expiration following an earlier extension.

The Senate passed — then the House cleared — a 45-day extension of Section 702 of the Foreign Intelligence Surveillance Act, which authorizes warrantless surveillance of foreign targets. But those targets are sometimes communicating electronically with Americans, and intelligence officials can search the database using their identifying information, which has long given privacy groups and privacy-minded lawmakers heartburn.

The 45-day reprieve gives lawmakers more time to hammer out a lasting deal, and comes after the leaders of the Senate Intelligence Committee agreed to send a letter to the Director of National Intelligence and the attorney general seeking swift declassification of a classified ruling from the Foreign Intelligence Surveillance Court.

Sen. Ron Wyden, D-Ore., had sought release of that opinion, and had resisted giving unanimous consent for the latest short-term extension to move forward until Senate Intelligence Chairman Tom Cotton, R-Ark., and top panel Democrat Mark Warner of Virginia agreed to send the letter.

A declassification review was already underway, but the Cotton-Warner letter states that “We expect that this declassification review will be completed and the FISC opinion released publicly within 15 days,” according to Wyden, speaking on the Senate floor.

The March 17 opinion reportedly came with the annual recertification of the warrantless surveillance program. The Justice Department is appealing the ruling because it blocked the government from using certain tools to analyze communications.

“A few weeks ago, the Foreign Intelligence Surveillance Court found major compliance problems related to the surveillance law known as section 702,” Wyden said earlier this month. “These compliance problems are directly related to Americans’ Constitutional rights.”

Senate Majority Leader John Thune, R-S.D., said the extension will give lawmakers additional room to hold “discussion on reforms.”

The House this week had passed a 3-year reauthorization with some changes to the surveillance program, but key to doing so was leadership’s agreement to attach legislative language on a separate matter that would ban a central bank digital currency. Thune had said that language was going nowhere in the Senate.

On Thursday, the House voted 261-111 to extend the law for 45 days. President Donald Trump has sought a “clean” 18-month reauthorization of the surveillance powers.

The extension continues a perennial ritual for the Hill when it comes to Section 702: A deadline looms, and Congress kicks the can down the road repeatedly.


Key Takeaways From the EDPB’s Draft Guidelines on Scientific Research

On April 15, 2026, the European Data Protection Board (EDPB) adopted guidelines on the processing of personal data for scientific research purposes.[1] The guidelines aim to clarify GDPR compliance requirements for scientific research involving personal data.

The concepts addressed by the EDPB are of particular relevance to companies active in life sciences, artificial intelligence (AI), and advanced technology R&D.

The guidelines are open for public consultation until June 25, 2026.

The most significant aspect of the guidelines is the EDPB’s clarification of what constitutes “genuine” scientific research. The guidelines set out six key indicative factors to be considered alongside the nature, scope, context, and purposes of the processing. These factors appear to restrict the scope of processing that can be classified as scientific research, meaning that researchers may need to re-evaluate whether their activities genuinely qualify for the GDPR’s more flexible treatment of scientific research.

Six Factor Test to Define “Scientific Research” Under GDPR

The six key indicative factors are as follows:[2]

  1. Methodical and systematic approach: The research activities, including formulation and testing of a hypothesis, follow a methodical and systematic approach in the relevant field, for example in accordance with a comprehensive research plan.
  2. Adherence to ethical standards: The research activities adhere to ethical standards in the relevant field, including respect for human autonomy and consent, transparency, accountability, and (human) oversight.
  3. Verifiability and transparency: The research activities aim to achieve verifiable results, with hypotheses, methods, data and conclusions open to criticism (normally through peer review), and results shared with other parties, for example by publication.
  4. Autonomy and independence: The research activities are conducted autonomously and independently, with the research team having the freedom to define research questions, identify methods, choose scientific theories, and disseminate results. The researchers have academic or scientific qualifications in the relevant field.
  5. Objectives of the research: The research activities aim to contribute to the growth of society’s general knowledge and wellbeing. This does not exclude research that may also further commercial interests, but the EDPB does suggest in one of the examples included in the guidelines that research “solely concerned with furthering […] commercial interests” would not qualify.
  6. Potential to contribute to existing scientific knowledge or apply existing knowledge in novel ways: The research activities have the potential to contribute to existing scientific knowledge or apply existing knowledge in novel ways, and their scientific merits can be subject to assessment, review or approval by independent experts or committees.

If all six factors are met, the activities can be presumed to constitute scientific research. If not, the controller must justify and demonstrate why the activities should nonetheless qualify.

Anonymization and Pseudonymization in the Context of Scientific Research

The remainder of the guidelines address GDPR compliance more generally in the context of scientific research, including with respect to: data protection principles, lawfulness of processing, transparency, data subjects’ rights, attribution of responsibility, and appropriate safeguards.

While these sections largely restate existing principles (albeit with useful clarifications on “broad” and “dynamic” consent, including through specific examples on how organizations can navigate the tension with the principles of specificity and purpose limitation as part of their overall data protection governance structure), the EDPB’s views on data minimization merit highlighting.[3]

The EDPB takes the view that, because personal data must be “adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed,”[4] anonymization should be the default approach for scientific research. Once data is truly anonymized, it falls outside the scope of the GDPR entirely, although the anonymization process itself must still comply with GDPR requirements.[5] Where research aims cannot be achieved using anonymized data, personal data should be pseudonymized.[6] Processing data that can directly identify individuals should only occur where “strictly” necessary and proportionate to the research purpose.[7] Controllers will welcome the clarity provided by the guidelines, though ongoing compliance may require updates to internal processes. The full practical implications will become clearer once the dedicated guidance on anonymization and pseudonymization is published later this year.
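One common pseudonymization technique consistent with this approach (illustrative only, not mandated by the guidelines) is keyed hashing of direct identifiers, where the key plays the role of the separately held “additional information” contemplated by GDPR Article 4(5). All names and values below are hypothetical:

```python
import hashlib
import hmac

def pseudonymize(identifier: str, key: bytes) -> str:
    """Replace a direct identifier with an HMAC-SHA256 pseudonym.
    The key must be stored separately under strict access controls;
    without it, the mapping cannot be reproduced or reversed."""
    return hmac.new(key, identifier.encode(), hashlib.sha256).hexdigest()

key = b"held-by-a-separate-controller"  # hypothetical key material
record = {"subject": pseudonymize("patient-0042", key), "age_band": "40-49"}

# Deterministic: the same identifier maps to the same pseudonym,
# so records about one person remain linkable for the research.
print(record["subject"] == pseudonymize("patient-0042", key))  # → True
```

Note that pseudonymized data of this kind remains personal data under the GDPR, since whoever holds the key can re-identify the subjects; only genuine anonymization takes the data out of scope.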

Data subjects must be transparently informed about whether their data is processed in identifiable or pseudonymized form, and must not be misled into believing that their data is anonymized when it is not.[8]

Other Recent EDPB Updates

In addition to adopting these guidelines, the EDPB established a dedicated “sprint team” to finalize its upcoming and much anticipated guidelines on anonymization by summer 2026.[9] The questions of when personal data qualifies as “anonymous” under the GDPR and under what circumstances personal data (including sensitive personal data) can be used to train AI models, is currently also the subject of ongoing negotiations at EU level on the Digital Omnibus Package.[10]

Finally, the EDPB adopted two opinions approving two sets of Europrivacy certification criteria as a European Privacy Label, simplifying the data transfer process and enhancing accountability in high-risk sectors. The first approves an updated set of criteria whose scope now includes controllers and processors established outside Europe that are subject to Article 3(2) GDPR.[11] The second recognizes Europrivacy certification criteria as a European Data Protection Seal that can be used as a transfer mechanism under Articles 42 and 46 GDPR.[12] This will allow data importers outside Europe that are not subject to the GDPR to seek Europrivacy certification for transferred data they receive.


[1] EDPB Press Release, April 16, 2026, available here.

[2] EDPB Guidelines, section 2.1.

[3] EDPB Guidelines, section 8.3.

[4] GDPR Article 5(1)(c).

[5] EDPB Guidelines, para. 156.

[6] EDPB Guidelines, paras. 157-158.

[7] EDPB Guidelines, para. 159.

[8] EDPB Guidelines, para. 164.

[9] EDPB Press Release, April 16, 2026, available here.

[10] Cleary AI and Technology Insights, “Reset or rollback: Unpacking the EU’s Digital Omnibus Package”, November 21, 2025, available here.

[11] Opinion 14/2026 on the Europrivacy certification criteria regarding their approval by the Board as European Data Protection Seal pursuant to Article 42.5 GDPR, adopted April 15, 2026, available here.

[12] Opinion 15/2026 on the Europrivacy certification criteria regarding their approval by the Board as European Data Protection Seal to be used as tool for transfers pursuant to Articles 42 and 46 GDPR, adopted April 15, 2026, available here.

Rep. Delia Ramirez takes over as top House cybersecurity Dem

Illinois Rep. Delia Ramirez is taking over as the top Democrat on the House Homeland Security panel’s cybersecurity subcommittee, replacing former Rep. Eric Swalwell after his resignation.

Committee Democrats approved the change Tuesday at a meeting prior to a “shadow hearing” without the GOP majority, focused on protecting elections from Trump administration interference.

Ramirez first won election to Congress in 2022 and was reelected in 2024. She has served as the vice ranking member of the committee since 2023. She is now the ranking member of the Subcommittee on Cybersecurity and Infrastructure Protection.

She has leveled criticisms during committee hearings about the Trump administration’s personnel cutbacks at the Cybersecurity and Infrastructure Security Agency, and was critical of how data was secured under the administration’s Department of Government Efficiency initiative led by Elon Musk.

“Under a Musk and Trump presidency, it’s clear that the security of Americans’ information is not a priority. I mean, a private civilian with no security clearance bullied his way into the Treasury, set up private servers, and stole sensitive information from an agency. If that isn’t a national security crisis, a cybersecurity crisis, then I don’t know what is,” Ramirez said at an early 2025 hearing. “The true threat to our homeland security is ‘fElon’ Musk, Trump, and their blatant misuse of power to steal information and coerce employees to leave agencies.”

She cosponsored legislation last year meant to strengthen the cybersecurity workforce by promoting measures to help workers from underrepresented and disadvantaged communities to join the field.

But she also had criticisms of U.S. cybersecurity under the Biden administration, including of Microsoft’s role in the SolarWinds breach.

In a statement about her appointment Tuesday, Ramirez took aim at Trump, Vice President JD Vance, Department of Homeland Security Secretary Markwayne Mullin and White House homeland security adviser Stephen Miller.

“It’s clear that the security of our communities’ information, federal networks, and critical infrastructure have not been priorities” under them, she said. “Between the security failures of DOGE, the abuses of immigrant families’ data, and the decimation of CISA’s workforce and resources, Republicans have demonstrated a lack of interest in safeguarding our nation’s cybersecurity and our residents’ civil rights and privacy. In neglecting necessary oversight, Republicans have deregulated emerging technologies, allowed bad actors to profit from violations of our civil rights, and consented to the weaponization of government systems. It is more critical than ever that we assert our Congressional authority and disrupt the blatant corruption making us all less safe.”

Swalwell left the position following his resignation from Congress as a representative from California amid allegations of sexual misconduct.

Her ascension completes a full leadership turnover for the subcommittee. Rep. Andy Ogles, R-Tenn., took over the gavel late last year after former chairman Andrew Garbarino, R-N.Y., moved up to lead the full committee.

The subcommittee is set to hold a hearing Wednesday on CISA and its role as the sector risk management agency for a number of critical infrastructure sectors.

Updated 4/28/26 to include comment from Ramirez.

The post Rep. Delia Ramirez takes over as top House cybersecurity Dem appeared first on CyberScoop.

U.S. companies hit with record fines for privacy in 2025

U.S. states issued $3.45 billion in privacy-related fines to companies in 2025, a total larger than the last five years combined, according to research and advisory firm Gartner.

The increase is driven in part by stronger, more established privacy laws in states like California, new interstate partnerships built around enforcing laws across state lines, and a renewed focus on how AI and automation affect privacy.

The data indicates that “regulators are shifting their efforts away from awareness to full scale enforcement,” marking a significant shift from even the last few years in how aggressively states are investigating and penalizing companies for privacy law violations.

“This is increasingly becoming the standard in 2026 and for the coming two years,” Gartner’s analysis concludes.

Privacy-related fines have gone up significantly in recent years. (Source: Gartner)

The California Consumer Privacy Act had consumer privacy provisions go live in 2023, but for years enforcement was largely dormant. According to Nader Heinen, a data protection and AI analyst at Gartner and co-author of the research, that enforcement lag mirrors the way other major privacy laws, like Europe’s General Data Protection Regulation, have been rolled out: regulators “lead with a bit of guidance” for companies while using enforcement sparingly.

But that era appears to be over. In 2025, the California Privacy Protection Agency has used the law to pursue violators across a wide range of industries – not just large conglomerates, but smaller and mid-sized companies in tech, the auto industry, and consumer products, including off-the-shelf goods and apparel.

Heinen said some businesses “weren’t paying attention” and may have been lulled into complacency while regulators spun up their enforcement teams, leading to a harsh 2025.

“Unfortunately what happens when so much time passes between the legislation and starting enforcement regularly, is a lot of organizations let their privacy program atrophy,” he said.

States have also sought to combine their resources to target and penalize privacy violators across state lines. Last year, ten states came together to form the Consortium of Privacy Regulators, pledging to coordinate investigations and enforcement of common privacy laws around accessing, deleting and preventing the sale of personal information.

Beyond laws like the CCPA, states have been updating existing privacy and data-protection laws to more directly address harms from automated decision-making technologies, including AI. State privacy regulators are especially focused on how personal or private data is used to train AI systems and help them make inferences.

Gartner expects privacy fines to increase further in the coming years, and Heinen said states will likely again lead the way in building the legal infrastructure to enforce data privacy in the AI age, as they become the main conduit for lingering anxiety about the technology’s potential negative impacts.

“You have to put yourself in the position of these state legislatures,” Heinen said. “Their constituencies – the voting public – is telling them we’re worried about AI. AI anxiety is a thing. Everybody’s worried about whether AI is going to take their job or impact their capacity to find a job, so they want to see legislation in place to protect them.”

This past month, House Republicans unveiled their latest attempt to pass comprehensive federal privacy legislation with a bill that would preempt tougher state laws like those in California. In particular, the CCPA gives residents a private right of action – the legal right to sue companies directly – for violations of privacy laws.

On Monday, Tom Kemp, executive director of the California Privacy Protection Agency, wrote to House Energy and Commerce Chair Brett Guthrie, R-Ky., to oppose the bill, arguing it would provide “a ceiling” for Americans’ data privacy protections rather than a “floor” to build on.

“Preemption would strip away important existing state privacy provisions that protect tens of millions of Americans now,” Kemp wrote. “That would be a significant step backward in privacy protection at a time when individuals are increasingly concerned about their privacy and security online, and when challenges from data-intensive new technologies such as AI are developing quickly.”

The post U.S. companies hit with record fines for privacy in 2025 appeared first on CyberScoop.
