
Potential EU law sparks global concerns over end-to-end encryption for messaging apps 

Tech experts and companies offering encrypted messaging services are warning that a pending European regulation, which would grant governments broad authority to scan messages and content on personal devices for criminal activity, could spell “the end” of privacy in Europe.

The European Union will vote Oct. 14 on a legislative proposal from the Danish Presidency known as Chat Control — a law that would require mass scanning of user devices for abusive or illegal material. Over the weekend, Signal warned that Germany — a longtime opponent and bulwark against the proposal — may now move to vote in favor, giving the measure the support needed to pass into law.

On Monday, Signal CEO Meredith Whittaker warned that her company, which provides end-to-end encrypted communications services, could exit the European market entirely if the proposal is adopted.

“This could end private comms-[and] Signal-in the EU,” Whittaker wrote on Bluesky. “Time’s short and they’re counting on obscurity: please let German politicians know how horrifying their reversal would be.”

According to data privacy experts, Chat Control would require access to the contents of apps like Signal, Telegram, WhatsApp, Threema and others before messages are encrypted. While ostensibly aimed at criminal activity, experts say such features would also undermine and jeopardize the integrity of all other users’ encrypted communications, including journalists, human rights activists, political dissidents, domestic abuse survivors and other victims who rely on the technology for legitimate means.
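The mechanism experts are describing is client-side scanning: content is inspected on the device in plaintext, before the end-to-end encryption layer ever runs. The sketch below illustrates that ordering and nothing more — it is a hypothetical toy, not any vendor's actual implementation, and real proposals involve perceptual hashes of images rather than exact digests like these:

```python
import hashlib

# Hypothetical blocklist of hashes of known illegal material, assumed to be
# distributed to every device by a scanning authority. (Illustrative only;
# deployed systems would use perceptual image hashes, not exact digests.)
BLOCKLIST = {hashlib.sha256(b"known-bad-content").hexdigest()}

def scan_then_encrypt(plaintext: bytes, encrypt):
    """Inspect the message in plaintext, then hand it to the E2EE layer.

    The order of operations is the core of the objection: the scanner runs
    *before* encryption, so whatever it matches, its operator learns about —
    no matter how strong the encryption applied afterward is.
    """
    digest = hashlib.sha256(plaintext).hexdigest()
    flagged = digest in BLOCKLIST  # a real system would file a report here
    return encrypt(plaintext), flagged
```

Because the flagging decision happens on plaintext, the confidentiality of the channel no longer depends only on the encryption — it also depends on who controls the blocklist and the reporting path.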

The pending EU vote is the latest chapter in a decades-long battle between governments and digital privacy proponents about whether, and how, law enforcement should be granted access to encrypted communications in criminal or national security cases. 

Supporters point to increasing use of encrypted communications by criminal organizations, child traffickers, and terrorist organizations, arguing that unrestricted encryption impedes law enforcement investigations, and that some means of “lawful access” to that information is technically feasible without imperiling privacy writ large.

Privacy experts have long argued that there are no technically feasible ways to provide such services without creating a backdoor that could be abused by other bad actors, including foreign governments.

Whittaker reportedly told the German Press Agency that “given a choice between building a surveillance machine into Signal or leaving the market, we would leave the market,” while calling repeated claims from governments that such features could be implemented without weakening encryption “magical thinking that assumes you can create a backdoor that only the good guys can access.”

The Chaos Computer Club, an association of more than 7,000 European hackers, has also opposed the measure, saying its efforts to reach out to Germany’s Interior Ministry, Justice Ministry and Digital Minister Karsten Wildberger for clarity on the country’s position ahead of the Chat Control vote have been met with “silence” and “stonewalling.”

The association and U.S.-based privacy groups like the Electronic Frontier Foundation have argued that the client-side scanning technology that the EU would implement is error-prone and “invasive.”

“If the government has access to one of the ‘ends’ of an end-to-end encrypted communication, that communication is no longer safe and secure,” wrote EFF’s Thorin Klosowski.

Beyond the damage Chat Control could cause to privacy, the Chaos Computer Club worried that its adoption by the EU might embolden other countries to pursue similar rules, threatening encryption worldwide.

“If such a law on chat control is introduced, we will not only pay with the loss of our privacy,” Elina Eickstädt, spokesperson for the Chaos Computer Club, said in a statement. “We will also open the floodgates to attacks on secure communications infrastructure.”

The Danish proposal leaves open the potential to use AI technologies to scan user content, calling for such technologies “to be vetted with regard to their effectiveness, their impact on fundamental rights and risks to cybersecurity.”

Because Chat Control is publicly focused on curtailing child sexual abuse material (CSAM), the initial scanning will target both known and newly identified CSAM, focusing on images and internet links. For now, text and audio content, as well as scanning for evidence of grooming — a more difficult crime to define — are excluded.

Still, the Danish proposal specifies that scanning for grooming is “subject to … possible inclusion in the future through a review clause,” which would likely require even more intrusive monitoring of text, audio and video conversations. 

It also calls for “specific safeguards applying to technologies for detection in services using end-to-end encryption” but does not specify what those safeguards would be or how they would surmount the technical challenges laid out by digital privacy experts.

The post Potential EU law sparks global concerns over end-to-end encryption for messaging apps appeared first on CyberScoop.

Three states team up in investigative sweep of companies flouting data opt-out laws

A joint investigative sweep across three states kicked off this week, aimed at identifying companies that aren’t following consumer data opt-out laws.

The effort, led by the state attorneys general, the California Privacy Protection Agency and other state regulators, will involve contacting businesses across all three states that may not be processing opt-out requests or honoring Global Privacy Control (GPC) signals, and ensuring they come into compliance with the required regulations.

“Californians have the important right to opt-out and take back control of their personal data — and businesses have an obligation to honor this request,” Attorney General Rob Bonta said in a statement. “Today, along with our law enforcement partners throughout the country, we have identified businesses refusing to honor consumers’ requests to stop selling their personal data and have asked them to immediately come into compliance with the law.”

California, Connecticut and Colorado all have laws requiring companies to honor GPC, a browser-based signal that allows consumers to automatically and universally opt out of invasive data collection. The use of GPC is also required in other states, such as Texas, that aren’t part of this week’s enforcement actions.

According to the Privacy Tech Lab at Wesleyan University in Connecticut, GPC will “automatically send a signal or raise a privacy flag from your browser every time you visit a website.”

“This signal tells the website that you want to opt out of having your personal data sold or used for targeted advertising,” the lab noted.

Some browsers, like Mozilla’s Firefox, have this feature built into their product, while others, like Google’s Chrome, require a third-party extension to use it. But in most cases, it only takes a few minutes to set the protections up on your device or browser.
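Under the hood, GPC is transmitted as a simple HTTP request header, `Sec-GPC: 1`, and is also exposed to page scripts as `navigator.globalPrivacyControl`. A business honoring the signal only needs to check for that header on incoming requests — a minimal sketch (function name and handling are illustrative, not any regulator's prescribed implementation):

```python
def honors_gpc(request_headers: dict) -> bool:
    """Return True if the browser sent a Global Privacy Control opt-out.

    Per the GPC proposal, the request header is `Sec-GPC` and the only
    defined "true" value is the string "1"; anything else is not a signal.
    """
    # HTTP header names are case-insensitive, so normalize keys first.
    headers = {k.lower(): v for k, v in request_headers.items()}
    return headers.get("sec-gpc") == "1"

# A site subject to the California/Colorado/Connecticut rules would treat
# a True result as a do-not-sell/do-not-share opt-out for that visitor.
```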

Connecticut Attorney General William Tong said in a statement that while “many businesses have been diligent in understanding these new protections and complying with the law,” the sweep was about “putting violators on notice today that respecting consumer privacy is non-negotiable.”

In response to questions about the scope of the joint investigation, when it began, and whether noncompliant firms would face fines or other sanctions, a spokesperson for the California Department of Justice told CyberScoop that the state has previously used the California Consumer Privacy Act to obtain court orders and fine privacy offenders, including companies that failed to follow opt-out laws, citing a $1.2 million state fine paid by Sephora in 2022. The spokesperson described the current investigative sweep as “ongoing.”

“We’ve enforced the CCPA against companies, including for failing to honor opt-out requests via the GPC, and obtained both injunctive relief and civil penalties,” the spokesperson said. “Beyond this, to protect their integrity, we’re unable to comment on, even to confirm or deny, any potential or ongoing investigations.”

The sweep represents one of the larger nationwide efforts by states to enforce data privacy opt-out laws — one of the few legal protections U.S. consumers have to prevent wanton data collection and targeted advertising by companies.

Many states have privacy laws that require businesses to give consumers the option to opt out of having their data collected or sold to third parties. However, some businesses that profit from buying and selling data simply don’t comply with those laws, or make the opt-out process so complicated that it frustrates and discourages consumers from exercising their rights.

Last year, the CPPA conducted its own sweep of data brokers out of compliance with state law, amid evidence that at least 40% of the companies on the state’s data broker registry were not complying with — or flat-out ignoring — requests from consumers to delete their data or opt out of collection.

In April, regulators from California, Colorado and Connecticut — along with four other states — formed a bipartisan consortium to work together on implementing and enforcing common privacy laws across state borders. The other states in the coalition are Delaware, Indiana, New Jersey and Oregon.

This story was updated Sept. 11, 2025, with comments from the California Department of Justice.


Former WhatsApp security manager sues company for privacy violations, professional retaliation

Meta is being sued by a former security manager, who claims the company ignored repeated warnings that its messaging platform WhatsApp was riddled with security vulnerabilities and privacy violations, and retaliated against him for raising these concerns, ultimately firing him.

Attaullah Baig worked at Meta and WhatsApp from 2021 until this past April. Baig, who has held cybersecurity positions at PayPal, Capital One and Whole Foods Market, claims that he was issued a verbal warning Nov. 22, 2024, and was fired by Meta on April 11, 2025, with the company citing poor performance as the reason.

But in the lawsuit, he alleges the real reason he was fired was that soon after joining Meta in September 2021, he “discovered systemic cybersecurity failures that posed serious risks to user data and violated Meta’s legal obligations” to the federal government under a 2020 Federal Trade Commission privacy order and federal securities laws.

“Through a ‘Red Team Exercise’ conducted with Meta’s Central Security team, Mr. Baig discovered that approximately 1,500 WhatsApp engineers had unrestricted access to user data, including sensitive personal information covered by the FTC Privacy Order, and could move or steal such data without detection or audit trail,” the complaint stated.

The lawsuit was filed Monday in the U.S. District Court for the Northern District of California and names Meta, CEO Mark Zuckerberg and four other company executives as defendants.

According to Baig, he attempted to notify Meta executives on five separate occasions over the next year, raising concerns with his supervisors and highlighting information gaps — like what user data the company was collecting, where and how it was stored, and who had access — that made it impossible to comply with the consent order and federal privacy regulations.

He also created a “comprehensive product requirements document” for Meta’s privacy team that would have included a data classification and handling system to better comply with the 2020 order.

Instead, he claimed his supervisor “consistently ignored these concerns and directed Mr. Baig to focus on less critical application security tasks.”

“Mr. Baig understood that Meta’s culture is like that of a cult where one cannot question any of the past work especially when it was approved by someone at a higher level than the individual who is raising the concern,” the complaint alleged.

In August and September 2022, Baig again convened a group of Meta and WhatsApp executives to lay out his concerns, including the lack of security resources and the potential for Meta and WhatsApp to face legal consequences. He noted that WhatsApp had just 10 engineers focused on security, while comparably sized companies usually had teams approaching or exceeding 200 people.

He also outlined — at his supervisor’s request — a number of core digital vulnerabilities the company was facing.

Among the allegations: WhatsApp did not have an inventory of what user data it collected, potentially violating California state law, the European Union’s General Data Protection Regulation (GDPR) and the 2020 privacy order with the federal government. The company could not conclusively determine where it was storing user data and gave thousands of Meta engineers “unfettered access” without any business justifications.

The company also had no security operations center and apparently didn’t have any method of logging or tracking when those engineers sought to access user data, the lawsuit alleged.

Baig also claimed that approximately 100,000 WhatsApp users were suffering account takeovers daily, and the company had no process to prevent or deter such compromises.

During this period, Baig claims he was subject to “ongoing retaliation” from his supervisors for blowing the whistle.

Three days after initially disclosing his concerns, Baig’s direct supervisor told him he was “not performing well” and his work had quality issues. It was the first time he had received negative feedback; that same supervisor had, just three months earlier, praised Baig for his “extreme focus and clarity on project scope, timeline, etc.” In September 2022, the supervisor changed Baig’s employment performance rating to “Needs Support.” Subsequent performance ratings specifically cited Baig’s cybersecurity complaints as a basis for downgrading his score.

Additionally, after reviewing the security report that executives had explicitly requested from Baig, his supervisor Suren Verma allegedly told him on a video call that the report was “the worst doc I have seen in my life” and issued a warning that Meta executives “would fire him for writing a document like this.” Verma also reportedly threatened to withhold Baig’s executive compensation package and discretionary equity.

WhatsApp denies retaliation

Meta and WhatsApp have denied Baig’s allegations that he was fired for bringing up security and privacy deficiencies.

“Sadly this is a familiar playbook in which a former employee is dismissed for poor performance and then goes public with distorted claims that misrepresent the ongoing hard work of our team,” said Carl Woog, vice president of policy at WhatsApp. “Security is an adversarial space and we pride ourselves in building on our strong record of protecting people’s privacy.” 

Zade Alsawah, a policy communications manager at WhatsApp, told CyberScoop that Baig was never “head of security” at WhatsApp, and that his formal title was software engineering manager.

“I know he’s been calling himself and framing himself as head of security, but there were seasoned security professionals layered ahead of him,” Alsawah said. “I think he’s been creating himself as this central figure when there are multiple engineers structured ahead of him.”

Further, he said that a Department of Labor and OSHA investigation ultimately cleared WhatsApp of any wrongdoing in Baig’s firing. The company shared copies of two letters from the agencies. One dated April 14, 2025, had the subject line “RE: Meta et al/Baig – notification of dismissal with appeal rights” and stated that Baig’s complaint had been dismissed.

A second letter from OSHA, dated Feb. 13, 2025, provides further reasoning for the dismissal.

“As a result of the investigation, the burden of establishing that Complainant was retaliated against in violation of [federal law] cannot be sustained,” the letter states. “Complainant’s allegations did not make a prima facie showing. Complainant’s asserted protected activity likely does not qualify as objectively reasonable under” federal law.

Even if the activity was reasonable, the agency said, “there is no reasonable expectation of a nexus between the asserted protected activity and the adverse actions. This is largely due to intervening events related to Respondent raising repeated concerns about Complainant’s performance and/or behavior, according to documents provided by Complainant.”

Baig’s allegations closely mirror those of another security whistleblower at a major social media company. Around the same time that Baig was at Meta, the top security executive at Twitter — now X — was documenting similar problems.

Peiter Zatko, a legendary hacker turned cybersecurity specialist brought in to improve Twitter’s security, quickly determined that the company’s data infrastructure was so decentralized that executives could not reliably answer questions about the data they collected or where it was stored.

“First, they don’t know what data they have, where it lives, or where it came from and so unsurprisingly, they can’t protect it,” Zatko told the Senate Judiciary Committee in 2022. “That leads to the second problem: employees need to have too much access to too much data on too many systems.”

Like the allegations against WhatsApp, Zatko told Congress that when he first arrived at Twitter in 2020 he quickly realized the company was “more than a decade behind industry security standard.”

According to Baig’s lawsuit, in one meeting WhatsApp’s global head of public policy, Jonathan Lee, remarked that the vulnerabilities highlighted by Baig were serious enough that it might lead to WhatsApp facing similar consequences as “Mudge to Twitter” — referring to Zatko.

Baig continued his warnings through March 2023, telling executive leadership that he believed the company’s lackluster efforts around cybersecurity directly violated the 2020 FTC consent order.

After dealing with what he called “escalating retaliation” from his supervisors, Baig wrote to Zuckerberg and Meta general counsel Jennifer Newstead on Jan. 2, 2024, warning that the company’s central security team had falsified security reports to “cover up” their lack of security. Later that month, Baig told his supervisor he was documenting Meta’s “false commitment” to complying with Ireland’s data protection laws, citing specific examples where user data was readily accessible to tens of thousands of employees.

Such warnings continued throughout 2024, with Baig reiterating past concerns and bringing up new ones about the company’s compliance with privacy laws.

In November 2024, Baig filed a TCR (Tip, Complaint or Referral) form with the Securities and Exchange Commission outlining his concerns and lack of remediation by Meta, and filed a complaint with the Occupational Safety and Health Administration for “systematic retaliation” by the company.

Baig was told by Meta in February 2025 that he would be included in upcoming performance-based layoffs, with the company citing “poor performance” and inability to collaborate as the primary reasons.

Update, Sept. 9, 2025: This story was updated with Meta/WhatsApp’s response.


FTC warns tech companies not to weaken encryption, free speech practices for foreign governments

Federal Trade Commission Chair Andrew Ferguson warned U.S. tech companies not to accede to laws in foreign countries that weaken Americans’ free speech or data privacy rights.

Specifically, Ferguson cited laws like the European Union’s Digital Services Act and the U.K.’s Online Safety Act as statutes that incentivize U.S. tech companies “to censor speech, including speech outside of Europe.” He said that could lead to heightened surveillance of Americans by foreign governments and increase their risk around identity theft and fraud.

“Companies might be censoring Americans in response to the laws, demands, or expected demands of foreign powers,” Ferguson wrote in letters to 13 different tech companies Thursday. “And the anti-encryption policies of foreign governments might be causing companies to weaken data security measures and other technological means for Americans to vindicate their right to anonymous and private speech.”

Additionally, as companies continue to face fragmented and balkanized internet laws across different countries, Ferguson worried that some companies may opt for maximally invasive or restrictive policies toward their users to stay in compliance with the strictest laws.

“I am also concerned that companies such as your own might attempt to simplify compliance with the laws, demands, or expected demands of foreign governments by censoring Americans or subjecting them to increased foreign surveillance even when the foreign government’s requests do not technically require that,” he wrote.

Ferguson sent the letters to executives at Akamai, Alphabet, Amazon, Apple, Cloudflare, Discord, GoDaddy, Meta, Microsoft, Signal, Snap, Slack and X.

He criticized the Biden administration for “actively” working to censor American speech online. The Supreme Court has largely upheld the constitutionality of the federal government’s conversations with tech companies under the Biden administration.

President Donald Trump has publicly attacked and pressured many of the same companies Ferguson is targeting, in some cases threatening to use the power of the federal government to force them to adopt his preferred policies — not only on content moderation and disinformation, but also tariffs, diversity, equity and inclusion programs, unflattering search engine results and numerous other demands. Nevertheless, Ferguson praised Trump for allegedly putting “a swift end” to the weaponization of the federal government against Americans for their speech.

The FTC chair said in his letter that the agency is focused on the importance of offering strong end-to-end encryption to users, regardless of what laws or regulations in other countries may require.

“If a company promises consumers that it encrypts or otherwise keeps secure online communications but adopts weaker security due to the actions of a foreign government, such conduct may deceive consumers who rightfully expect effective security, not the increased susceptibility to breach or intercept desired by a foreign power,” Ferguson wrote.

The FTC’s letters were sent the same week that Director of National Intelligence Tulsi Gabbard announced the U.S. government had successfully engaged with U.K. leaders to drop their demand that Apple provide law enforcement with a means to access encrypted user cloud data for investigations, even for users outside the U.K.

The demand resulted in Apple withdrawing its Advanced Protection Program feature from U.K. iPhones and Apple computers, as privacy advocates continued to argue that any access given to law enforcement would fundamentally weaken the encryption that all its users rely on.


Hundreds of registered data brokers ignore user requests around personal data

There are few laws at the state or federal level to constrain data brokerage, the process by which companies collect and sell bulk data on people they’ve never met or done business with.

States at the forefront of regulating the industry, like California, currently require hundreds of companies to register with the government and provide consumers with the means to opt out of collection or request deletion of their data.

Now, a study from the University of California, Irvine shows that many registered brokers may be ignoring these requirements, and experts tell CyberScoop that state regulators should strengthen their enforcement of current privacy laws.

In the study, researchers exercised their rights under the California Consumer Privacy Act by contacting all registered data brokers and requesting details about the data the companies had collected on them. Of the 543 companies contacted, 40% failed to respond in any way, showing “rampant non-compliance” among the registered brokers.

“Our findings reveal rampant non-compliance and lack of standardization of the data access request process,” wrote authors Elina van Kempen, Isita Bagayatkar, Pavel Frolikov, Chloe Georgiou and Gene Tsudik. “These issues highlight an urgent need for stronger enforcement, clearer guidelines, and standardized, periodic compliance checks to enhance consumers’ privacy protections and improve data broker accountability.”

In addition to brokers that didn’t respond, those that did often created numerous hurdles for people trying to access their data. There was no standard process for submitting such requests: some companies required a phone call, others an email, and others asked users to fill out an online form.

The study measured six types of friction in these requests: individual burden, identity verification challenges, response time, response quality, the data collected, and the privacy issues related to the requests.

One key finding was that inconsistent identity procedures across brokers are confusing and “taxing” to the average consumer, forcing them to navigate a patchwork of different requirements.

Caption: Even when data brokers (DBRs) do respond to consumers, many offer a confusing and unreliable process to contact them and request data or opt out. (Source: UC Irvine)

Many brokers that collect and sell personal data require strict identity verification for consumer data requests, which helps prevent unauthorized access.

On the other hand, the study’s authors say this creates an “unintended privacy paradox” for consumers looking to limit the exposure of their personal data by engaging with brokers directly, as they must often provide additional forms of personal and personally identifiable information along the way.

“Paradoxically, this means that exercising one’s privacy rights under CCPA introduces new privacy risks,” the authors wrote.

The study, which focused solely on companies registered as data brokers in California, may actually understate the problem, as other research has shown that many data brokers don’t carry their disclosures across state lines. 

Justin Sherman, a privacy expert and scholar-in-residence at the Electronic Privacy Information Center, told CyberScoop that many brokers seem to hold an odd commitment to privacy principles in one particular instance: verifying the identity of people who object to having a third-party company collect and use their personal information.

“It is beyond irony that there are data brokers who will sell to basically anybody and … yet when someone is saying, ‘I don’t consent to you having collected my data behind my back,’ everything is all of a sudden, ‘how are we going to verify?’ and ‘how are we going to do Know Your Customer rules,’” Sherman said. “It’s talking out of both sides of your mouth. They know that if you create some friction, then people are less likely to cancel.”

Additionally, Sherman noted that for opt-out rights to be effective, “the consumer has to be able to easily exercise them.” A process that forces them to personally contact hundreds of different companies without a standardized process for doing so, he argued, is a recipe for frustration and dark patterns.

He added that “there’s no gray area” about how registered brokers are obligated to handle such requests.

“I think the law is very clear. The law says: accept the requests and respond, or reject the requests and respond with the exception you’re setting,” Sherman said, something hundreds of registered brokers failed to do, according to the study.

The California Privacy Protection Agency did not respond to questions from CyberScoop about the UC Irvine study or its own research on data broker noncompliance under the CCPA.

