Normal view

There are new articles available, click to refresh the page.
Before yesterdayMain stream

Researchers find a startlingly cheap way to steal your secrets from space 

By: djohnson
14 October 2025 at 16:03

How much private and sensitive data can you get by pointing $600 worth of satellite equipment at the sky?

Quite a bit, it turns out.

Researchers from the University of Maryland and the University of California, San Diego say they were able to intercept sensitive data from the U.S. military, telecommunications firms, major businesses and organizations by passively scanning and collecting unencrypted data from the satellites responsible for beaming that information across the globe.

The satellites they focused on — geostationary satellites — provide modern high-speed communications and services to rural or remote parts of the globe, including television, IP communications, internet and in-flight Wi-Fi capabilities. They also provide backhaul internet services — the links between a core telecom or internet network and its end users — for private networks operating sensitive remote commercial and military equipment.

Using cheap, commercially available equipment, researchers scanned 39 satellites across 25 distinct longitudinal points over seven months.

The goal was to see how much sensitive data they could intercept by “passively scanning as many GEO transmissions from a single vantage point on Earth as possible.” It was also to prove that you don’t need to be a well-resourced foreign intelligence service or have deep pockets to pull it off.

What they found was unsettling: “Many organizations appear to treat satellite[s] as any other internal link in their private networks. Our study provides concrete evidence that network-layer encryption protocols like IPSec are far from standard on internal networks,” write authors Wenyi Zhang, Annie Dai, Keegan Ryan, Dave Levin, Nadia Heninger and Aaron Schulman.

They note that “severity” of their findings suggest “many organizations do not routinely monitor the security of their own satellite communication links” and that content scrambling “is surprisingly unlikely to be used for private networks using GEO satellite to backhaul IP network traffic from remote areas.”

“Given that any individual with a clear view of the sky and $600 can set up their own GEO interception station from Earth, one would expect that GEO satellite links carrying sensitive commercial and government network traffic would use standardized link and/or network layer encryption to prevent eavesdroppers,” the researchers wrote.

Wired first reported on the academic study.

Researchers reached out to major businesses and organizations that were leaking data via satellite communications to notify them and address the vulnerabilities, but said they declined to engage in any bug bounties that included a nondisclosure agreement.  

The researchers said discussions with the U.S. military, the Mexican government, T-Mobile, AT&T, IntelSat, Panasonic Avionics, WiBo and KPU all took place between December 2024 and July 2025 as the study was ongoing.

Satellites are outfitted with multiple transponders to collect different kinds of telemetry, and here the research focuses on a single type — Ku-Band transponders — that are heavily used for internet and television services. Using their consumer-grade equipment, the researchers were able to tap into 411 different transponders around the globe, collecting reams of sensitive data in the process.

They observed unencrypted data for T-Mobile users, including plaintext user SMS messages, voice call contents, user internet traffic, metadata, browsing history and cellular network signaling protocols, leaking out over the skies. Over a single, nine-hour listening session, the dish picked up phone numbers and metadata for 2,711 individuals. Similar leakages were spotted for calls over Mexican telecoms TelMex and WiBo, and Alaskan telecom KPU Telecommunications.

They also picked up unencrypted and encrypted traffic coming from U.S. military sea vessels, including plaintext that included the ships’ names — something the researchers said allowed them to determine they were all “formerly privately-owned ships” that are now owned by the government. Meanwhile, unencrypted HTTP traffic leaking out through the satellites gave them details into internal applications and systems used for infrastructure, logistics and administrative management.

The researchers say that while this kind of capability isn’t novel, previous research has suggested that only foreign governments and well-resourced companies have the capabilities to conduct such widespread monitoring. Their study, which developed a new way to parse through issues around signal quality, suggests that the barrier of entry is far lower than previously thought, requiring technical knowhow and just a few hundred dollars worth of commercial tech.

“To our knowledge, our threat model of using low-cost consumer grade satellite equipment to comprehensively survey GEO satellite usage has not been explored before in the academic literature.”

The findings underscore how much governments and businesses rely on standard satellite communications today to move their data around, and the lack of security attention these critical nodes receive compared to other technologies.The federal government has designated 16 sectors of society and industry as “critical infrastructure” and prioritized these sectors for additional security investment and assistance. Space is not one of those sectors, though policymakers have pushed the idea as a means to quickly retrofit our space-based communications for security. 

The post Researchers find a startlingly cheap way to steal your secrets from space  appeared first on CyberScoop.

Flax Typhoon can turn your own software against you

By: djohnson
14 October 2025 at 08:00

For more than a year, hackers from a Chinese state-backed espionage group maintained backdoor access to a popular software mapping tool by turning one of its own features into a webshell, according to new research from ReliaQuest.

In a report published Tuesday, researchers said that Flax Typhoon — a group that has been spying on entities in the U.S., Europe and Taiwan since at least 2021 — has had access for more than a year to a private ArcGIS server. To achieve and maintain that access, the group leveraged “an unusually clever attack chain” that allowed them to both blend in with normal traffic and maintain access even if the victim tried to restore their system from backups.

ArcGIS, made by Esri, is one of the most popular software programs for geospatial mapping and used widely by both private organizations and government agencies. Like many programs, however, it relies on backend servers and various other technical infrastructure to fully function.

For example, many ArcGIS users will use what is known as a Server Object Extension (SOE), which allows you to create service operations to extend the base functionality of map or image services” and implement custom code, according to ArcGIS documentation.

The attackers found a public-facing ArcGIS server connected to another private backend server used by the program to perform computations. They compromised a portal administrator account for the backend server and deployed a malicious extension, instructing the public-facing server to create a hidden directory to serve as the group’s “private workspace.” They also locked off access to others with a hardcoded key and maintained access long enough for the flaw to be included in the system’s backup files.

In doing so, the Chinese hackers effectively weaponized ArcGIS, turning it into a webshell to launch further attacks, and mostly did so using the software program’s own internal processes and functionality.

ReliaQuest researchers wrote that by structuring their requests to appear as routine system operations, they were able to evade detection tools, while the hardcoded key “prevented other attackers, or even curious admins, from tampering with its access.”

Infecting the backups, meanwhile, gave Flax Typhoon an insurance plan if their presence ultimately was discovered.

“By ensuring the compromised component was included in system backups, they turned the organization’s own recovery plan into a guaranteed method of reinfection,” ReliaQuest researchers claimed. “This tactic turns a safety net into a liability, meaning incident response teams must now treat backups not as failsafe, but as a potential vector for reinfection.”

This continues a consistent trend around Flax Typhoon’s behavior observed by researchers: the group’s propensity for quietly turning an organization’s own tools against itself rather than using sophisticated malware or exploits.

In 2023, Microsoft’s threat intelligence team detailed what it described as Flax Typhoon’s “distinctive” pattern of cyber-enabled espionage. The group was observed achieving long-term access to “dozens” of organizations in Taiwan “with minimal use of malware, relying on tools built into the operating system, along with some normally benign software to quietly remain in these networks.”

Earlier this year, the U.S. Treasury Department placed economic sanctions on Integrity Technology Group, a Beijing company the agency says has provided technical support and infrastructure for Flax Typhoon cyberattacks, including operating a massive botnet taken down by the FBI last year.

That may be why ReliaQuest researchers emphasized that the true threat revealed by their research isn’t about Esri or any specific vendor or their product. The real worry is that most enterprise software relies on the same kind of third-party applications and extensions that Flax Typhoon exploited to hijack an ArcGIS server. The same vulnerability exists wherever an external tool needs access that can be turned against the user when compromised.

“When a vendor has to rewrite its own security guidelines, it proves the flawed belief that customers treat every public-facing tool as a high-risk asset,” they wrote. “This attack is a wake-up call: Any entry point with backend access must be treated as a top-tier priority, no matter how routine or trusted.”

The post Flax Typhoon can turn your own software against you appeared first on CyberScoop.

Dems introduce bill to halt mass voter roll purges 

By: djohnson
10 October 2025 at 14:42

The Trump administration wants your voter data.

Since President Donald Trump took office in January, the Department of Justice has made an ambitious effort to collect sensitive voter data from all 50 states, including information that one election expert described as “the holy trinity” of identity theft: Social Security numbers, driver’s license numbers and dates of birth.

In states where Trump’s party or allies control the levers of government, this information is handed over willingly. In states where they do not, the DOJ has formally asked, then threatened and then sued states that refuse. The department has also claimed many of these reluctant states are failing to properly maintain their voter registration rolls, and has pushed states to more aggressively remove potentially ineligible voters.

This week, Democrats in the House and Senate introduced new legislation that seeks to defang those efforts by raising the legal bar for states to purge voters based on several factors, such as inactivity or changing residency within the same state.

The Voter Purge Protection Act, introduced by Sen. Alex Padilla, D-Calif., and Rep. Joyce Beatty, D-Ohio, would amend the National Voter Registration Act to make it more difficult for states to kick large numbers of voters off their rolls for actions that Democrats — and many election officials — say are common, overwhelmingly benign and not indicative of voter fraud.

Padilla told reporters that the legislation would help ensure “that Americans cannot be stripped of their right to vote without proof that a voter has either passed away or has permanently moved out of their state.”

Voters targeted for removal must also be notified by election officials “so that there’s no surprise when they show up to vote on election day that their name is not on the list and it’s too late to address whatever the issue may or may not be,” Padilla said.

Beatty pointed to her home state, where Republican Secretary of State Frank LaRose removed more than 155,000 voters from active voter rolls in 2024, as an example where federal protections are needed. The primary factor for purging those voters were records showing they had not cast a ballot in an election for the past four years.

She claimed more than half of the voters who stand to be affected by similar purges in 2025 and 2026 are registered in counties where demographic minorities make up a majority of voters.

“Let me be clear: voting is not use-it-or-lose the right, because too often these so-called voter purges have silenced voices, people of color, people of low income communities, and even our seniors who have waited and fought for the right to vote,” Beatty said.

Meanwhile, a comprehensive post-election audit conducted by LaRose’s office in 2024 identified and referred 597 “apparent noncitizens” on state voter rolls to the state Attorney General for further review, out of 8 million state voters. Critically, 459 of those registered voters never cast an actual ballot, and similar audits performed by LaRose in 2019, 2021 and 2022 found that such people made up similarly miniscule percentages of all active registered voters in the state. Last month, his office put out a press release touting an additional 78 “apparent noncitizens” registered, 69 of whom had no evidence of voting.

“States have the responsibility to keep accurate voter rolls and ensure election integrity,” LaRose added. “In order to meet that responsibility, we need more access to data from the federal government. I will continue to push until we have the resources we need to do our jobs to the standard Ohioans deserve.”

As any state election official will tell you, voter registration lists are never static — every day, people die, get married (or divorced), take on different names, become naturalized citizens or experience a range of other life events that can impact their registration status or result in outdated information. Further, it’s not typically viewed as unusual or a sign of fraud when voters sparingly make use of their registration to vote, though most election experts endorse some level of database maintenance to remove inactive voters.  

But it is often these discrepancies that get highlighted by Trump and state allies as evidence of unacceptably messy voter rolls that justify stricter removal policies.

And there are election officials — mostly in Republican-controlled states — who have embraced the philosophy that even small numbers of questionable registrations or voter fraud must be aggressively stamped out or it will lead to American voters losing faith in their democracy. LaRose and Georgia Republican Secretary of State Brad Raffensperger have long championed a similar approach to voter maintenance, and have called for Congress to pass laws making it easier for states to remove voters during election years.

“List maintenance is about election security and voter confidence,” Raffensperger said last month while announcing that approximately 146,000 Georgia voters would be moved to inactive voter rolls, including 80,754 voters who had moved to another county within the state. “We want every Georgian to have full faith in the system, knowing that our elections are free, fair — and fast.”

Critics have pointed out that states already have numerous, effective means for preventing mass voter registration or fraud that have been borne out by post-election audits finding very low instances of fraud, and that overly harsh policies around list maintenance can and do end up disenfranchising far more eligible voters than bad actors. Further, they argue against removing large numbers of voters without a robust follow-up process from states to give affected voters an opportunity to appeal or address any discrepancies that may affect their registration.

The bill has 22 Democratic co-sponsors in the Senate and 24 in the House but is unlikely to gain serious consideration under a Republican-controlled Congress, where most GOP members have long believed voter fraud is rampant and are broadly supportive of state and federal efforts to remove voters based on those same factors.

Asked by CyberScoop how Democrats would navigate that reality, Padilla said the legislation was part of a broader overall effort to push back on these efforts at all levels of constitutional governance. That includes states fighting to protect their constitutional role as administrators of elections when denying data requests from the federal government, within the court system as states and voting rights groups fight in court to block the administration’s use of the SAVE database as a pretext for voter removal, and through public awareness and politics.

Teeing up legislation to prevent states from potentially disenfranchising voters from spurious purges, he said, is part of asserting Congress’ constitutional role in a much broader fight about the way elections are run.

“We’re pushing back on it at every turn and calling attention to it, so that voters understand what they may be facing and make all the necessary preparations so that their right to vote is not denied, whether it’s in next year’s midterm elections or even other regular or special elections before then,” Padilla said.

The post Dems introduce bill to halt mass voter roll purges  appeared first on CyberScoop.

Voting groups ask court for immediate halt to Trump admin’s SAVE database overhaul

By: djohnson
8 October 2025 at 16:04

Voting rights groups are asking a court to block an ongoing Trump administration effort to merge disparate federal and state voter data into a massive citizenship and voter fraud database.

Last week, the League of Women Voters, the Electronic Privacy Information Center (EPIC) and five individuals sued the federal government in D.C. District Court, saying it was ignoring decades of federal privacy law to create enormous “national data banks” of personal information on Americans.

On Tuesday, the coalition, represented by Democracy Forward Foundation, Citizens for Responsibility and Ethics in Washington (CREW), and Fair Elections Center, asked the court for an emergency injunction to halt the Trump administration’s efforts to transform the Systematic Alien Verification for Entitlements into an immense technological tool to track potential noncitizens registered to vote. Until this year, SAVE was an incomplete and limited federal database meant to track immigrants seeking federal benefits.

“This administration’s attempt to manipulate federal data systems to unlawfully target its own citizens and purge voters is one of the most serious threats to free and fair elections in decades,” Celina Stewart, CEO of the League of Women Voters, said in a statement. “The League is asking the court to act swiftly to stop this abuse of power before it disenfranchises lawful voters. Every citizen deserves privacy, fairness, and the freedom to vote without fear of government interference.”

In an Oct. 7 court filing, the groups said an immediate injunction was needed to prevent permanent privacy harms due to the “illegal and secretive consolidation of millions of Americans’ sensitive personal data across government agencies into centralized data systems” through SAVE.

“While Plaintiffs’ Complaint challenges a broader set of Defendants’ unlawful data consolidation, Plaintiffs here seek emergency relief concerning one particularly harmful and urgent facet of Defendants’ conduct: their overhaul of the Systematic Alien Verification for Entitlements (“SAVE”) system,” the groups wrote.

In addition to SAVE, the lawsuit also claims the existence of “at least one other Interagency Data System that consolidates other data sources from around the government that might have information concerning immigrants into a centralized ‘data lake’ housed at” U.S. Citizenship Immigration Services.

Federal agencies collect massive amounts of data on Americans as part of their work, but the groups argue the 1974 Privacy Act and other privacy laws were explicitly designed to prevent the kind of large, centralized federal datasets on Americans the administration is putting together. Subsequent legislative updates in 1988 amended the Privacy Act to specifically prohibit the use of “computer matching programs” that compare data across different agencies without informing Congress or publicizing the written agreements between agencies.

“For decades, these protections have guarded against improper data pooling across federal agencies, preventing the government from building a potentially dangerous tool for surveilling and investigating Americans without guardrails,” the voting groups wrote. “Until now.”

As CyberScoop reported earlier this year, USCIS, along with the Department of Government Efficiency (DOGE), began merging SAVE data with other major federal data streams — including federal Social Security data — while removing fees and building in the technical capacity for states to conduct easier, bulk searches of voters against the database. The Department of Justice has sought voter data from all 50 states, with some cooperating and others refusing. Last month, the administration sued six states to force them to hand over voter data that would be used in SAVE.

Less than a week before the suit was filed, the Social Security Administration released a redacted copy of its information-sharing agreement with the Department of Homeland Security, which claims that “personnel have been directed to comply, to the maximum extent possible and permissible under law … taking into account federal statutory requirements, including the Privacy Act of 1974 … as well as other laws, rules, regulations, policies, and requirements regarding verification, information sharing, and confidentiality.”

Administration officials say the overhaul is needed to crack down on instances of noncitizen voting and other forms of voter fraud, but such fraud is exceedingly rare outside a handful of isolated cases, as numerous academic studies and post-election audits have proven.

DOGE officials were singled out in the lawsuit for particularly egregious violations, accused of embarking on a “months-long campaign to access, collect and consolidate vast troves of personal data about millions of U.S. citizens and residents stored at multiple federal agencies.”

An executive order from the Trump administration earlier this year sought to explicitly empower the DOGE administrator, along with DHS, to “review” state voter registration lists and other records to identify noncitizen voters. That order is still the subject of ongoing lawsuits challenging its legality.

In this case, the plaintiffs claim the need for emergency relief is urgent as the Trump administration is simultaneously challenging the accuracy of state voter rolls in courts across the country, while “encouraging and enabling states to use unreliable [Social Security Administration] citizenship data pooled in the overhauled SAVE system to begin purging voter rolls ahead of fast-approaching November elections and to open criminal investigations of alleged non-citizen voting.”

“Both the ongoing misuse of Plaintiffs’ sensitive SSA data through the overhauled SAVE system, and the increased risk of cybertheft and additional misuse, qualify as irreparable injuries,” the filing states.

The post Voting groups ask court for immediate halt to Trump admin’s SAVE database overhaul appeared first on CyberScoop.

German government says it will oppose EU mass-scanning proposal

By: djohnson
8 October 2025 at 10:41

Encryption lives on in Europe. For now.

The German government has said it will oppose a piece of European Union legislation later this month that would subject phones and other devices to mass scanning — prior to encryption — by the government for evidence of child sexual abuse material.  

Federal Minister of Justice Stefanie Hubig was one of several officials from the ruling Christian Democratic Union party to reiterate over the past 24 hours that Germany’s position hasn’t changed.

“Mass scanning of private messages must be taboo in a constitutional state,” Hubig said, according to a statement on X from the Ministry of Justice and Consumer Protection Wednesday. “Germany will not agree to such proposals at the EU level.”

Another CDU member, Jens Spahn, told German journalist Phillip Eckstein of ARD-Hauptstadtstudio that those sentiments are widely held within the party.

“We, as the CDU/CSU parliamentary group, are against the random monitoring of chats,” Spahn said, according to a machine-translated transcript. “That would be like opening all letters as a precaution and checking whether there’s anything illegal in them. That’s not possible, and we won’t allow that.”

The statements came after a week where tech experts and privacy activists in Europe publicly warned that Germany — which had opposed the measure since its introduction in 2022 and operated as a key swing vote — was preparing to back the measure in an upcoming Oct. 14 vote.

The German government did not respond to requests for comment from CyberScoop earlier this week, and other parties have said efforts to communicate with German officials about their intentions were met with “silence” and “stonewalling.”

The prospect of having all digital messages — and possibly other content like audio and video — scanned before encryption would defeat the very purpose of encryption and create an untenable situation, according to Meredith Whittaker, CEO of encrypted messaging app Signal. Whittaker threatened that her organization was prepared to pull out of Europe over the proposal.

Germany’s about-face likely won’t mark the end of this dispute. Western governments in the U.S. and Europe have been seeking to place limits on encrypted communications for decades, arguing that end-to-end encryption with no means of access for law enforcement makes it harder to investigate horrific crimes like pedophilia, terrorism and cybercrime. 

Earlier this year, Apple pulled its own end-to-end encryption feature in the U.K. after British national security officials sent the company a letter demanding access to encrypted iCloud data for law enforcement and national security investigations.

There are indications that criminal suspects are increasingly turning to encrypted communications to hide evidence of their criminality. But privacy advocates have pointed out that strong encryption also protects many law-abiding citizens from potential government repression.

The post German government says it will oppose EU mass-scanning proposal appeared first on CyberScoop.

OpenAI: Threat actors use us to be efficient, not make new tools

By: djohnson
7 October 2025 at 15:56

A long-running theme in the use of adversarial AI since the advent of large language models has been the automation and enhancement of well-established hacking methods, rather than the creation of new ones.  

That remains the case for much of OpenAI’s October threat report, which highlights how government agencies and the cybercriminal underground are opting to leverage AI to improve the efficiency or scale of their hacking tools and campaigns instead of reinventing the wheel.

“Repeatedly, and across different types of operations, the threat actors we banned were building AI into their existing workflows, rather than building new workflows around AI,” the report noted.

The majority of this activity still centers on familiar tasks like developing malware, command-and-control infrastructure, crafting more convincing spearphishing emails, and conducting reconnaissance on targeted people, organizations and technologies. 

Still, the latest research from OpenAI’s threat intelligence team does reveal some intriguing data points on how different governments and scammers around the world are attempting to leverage LLM technology in their operations.

One cluster of accounts seemed to focus specifically on several niche subjects known to be particular areas of interest for Chinese intelligence agencies. 

“The threat actors operating these accounts displayed hallmarks consistent with cyber operations conducted to service PRC intelligence requirements: Chinese language use and targeting of Taiwan’s semiconductor sector, U.S. academia and think tanks, and organizations associated with ethnic and political groups critical of the” Chinese government,” wrote authors Ben Nimmo, Kimo Bumanglag, Michael Flossman, Nathaniel Hartley, Lotus Ruan, Jack Stubbs and Albert Zhang.

According to OpenAI, the accounts also share technical overlaps with a publicly known Chinese cyber espionage group.

Perhaps unsatisfied with the American-made product, the accounts also seemed interested in querying ChatGPT with questions about how the same workflows could be established through DeepSeek — an alternative, open-weight Chinese model that may itself have been trained on a version of ChatGPT.

Another cluster of accounts likely tied to North Korea appeared to have taken a modular, factory-like approach to mining ChatGPT for offensive security insight. Each individual account was almost exclusively dedicated to exploring specific use cases, like building Chrome extensions to Safari for Apple App Store publication, configuring Windows Server VPNs, or developing macOS Finder extensions “rather than each account spanning multiple technical areas.”

OpenAI does not make any formal attribution to the North Korean government but notes that its services are blocked in the country and that the behavior of these accounts were “consistent” with the security community’s understanding of North Korean threat actors.

The company also identified other clusters tied to China that heavily used its platform to generate content for social media influence operations pushing pro-China sentiments to countries across the world. Some of the accounts have been loosely associated with a similar Chinese campaign called Spamouflage, though the OpenAI researchers did not make a formal connection.

The activity “shared behavioral traits similar to other China-origin covert influence operations, such as posting hashtags, images or videos disseminated by past operations and used stock images as profile photos or default social media handles, which made them easy to identify,” the researchers noted. 

Another trait the campaign shares with Spamouflage is its seeming ineffectiveness. 

“Most of the posts and social media accounts received minimal or no engagements. Often the only replies to or reposts of a post generated by this network on X and Instagram were by other social media accounts controlled by the operators of this network,” they added. 

OpenAI’s report does not cover Sora 2, its AI video creation tool. The tool’s deepfaking and disinformation capabilities have been the subject of longstanding concern since last year when it was announced, and in the week since its release the invite-only app has already shown a frightening potential for distorting reality.

A rising AI-fueled scam ecosystem and dual use “efficiency”

OpenAI also battles challenges from scammers who seek to use its products to automate or enhance online fraud schemes, ranging from lone actors refining their own personal scams to “scaled and persistent operators likely linked to organized crime groups.”

Most usage is unsurprising: basic research, translating phishing emails, and crafting content for influence campaigns.Yet, OpenAI’s research reveals that both state and non-state actors use AI as a development sandbox for malicious cyber activities and as an administrative tool to streamline their work.

One scam center likely located in Myanmar used ChatGPT “both to generate content for its fraudulent schemes and to conduct day-to-day business tasks,” like organizing schedules, writing internal announcements, assigning desks and living arrangements to workers and managing finances.

Others leveraged the tool in increasingly elaborate ways, like a Cambodian scam center that used it to generate “detailed” biographies for fake companies, executives and employees, then used the model to generate customized social media messages in those characters’ voices to make the scam appear more legitimate. In some cases, the same accounts returned to query ChatGPT on responses they received from target victims, indicating the scheme was somewhat successful.

Researchers also found an interesting dual-use dynamic: in addition to being used by scammers, many users look to ChatGPT for insight about potential scams they have encountered. 

“We have seen evidence of people using ChatGPT to help them identify and avoid online scams millions of times a month; in every scam operation in this report, we have seen the model help people correctly identify the scam and advise them on appropriate safety measures,” the OpenAI researchers claimed, while estimating that the tool is “being used to identify scams up to three times more often than it is being used for scams.”

Because OpenAI claims its model rejected nearly all “outright malicious requests,” in many cases threat intelligence professionals are sifting through clusters and accounts that operate in the “gray zone,” pushing the model to fulfill requests that are dual use in nature and not strictly illegal or against terms of service. For example, a process for improving a tool debugging, cryptography, or browser development can “take on a different significance when repurposed by a threat actor.”

“The activity we observed generally involved making otherwise innocuous requests … and likely utilizing them outside of our platform for malicious purposes,” the authors note.

One example: a group of Russian-speaking cybercriminals attempted to use ChatGPT to develop and refine malware, but when those initial requests were rejected, they pivoted to “eliciting building-block code … which the threat actor then likely assembled into malicious workflows.”

The same actors also prompted the model for obfuscation code, crypter patterns and exfiltration tools that could just as easily be used by cybersecurity defenders, but in this case the threat actors actually posted about their activity on a Russian cybercriminal Telegram.

“These outputs are not inherently malicious, unless used in such a way by a threat actor outside of our platform,” the authors claimed.

The post OpenAI: Threat actors use us to be efficient, not make new tools appeared first on CyberScoop.

Potential EU law sparks global concerns over end-to-end encryption for messaging apps 

By: djohnson
6 October 2025 at 14:25

Tech experts and companies offering encrypted messaging services are warning that  pending European regulation, which would grant governments broad authority to scan messages and content on personal devices for criminal activity, could spell “the end” of privacy in Europe.

The European Union will vote Oct. 14 on a legislative proposal from the Danish Presidency known as Chat Control — a law that would require mass scanning of user devices, for abusive or illegal material. Over the weekend, Signal warned that Germany — a longtime opponent and bulwark against the proposal — may now move to vote in favor, giving the measure the support needed to pass into law.

On Monday, Signal CEO Meredith Whittaker warned that her company, which provides end-to-end encrypted communications services, could exit the European market entirely if the proposal is adopted.

“This could end private comms-[and] Signal-in the EU,” Whittaker wrote on BlueSky. “Time’s short and they’re counting on obscurity: please let German politicians know how horrifying their reversal would be.”

According to data privacy experts, Chat Control would require access to the contents of apps like Signal, Telegram, WhatsApp, Threema and others before messages are encrypted. While ostensibly aimed at criminal activity, experts say such features would also undermine and jeopardize the integrity of all other users’ encrypted communications, including journalists, human rights activists, political dissidents, domestic abuse survivors and other victims who rely on the technology for legitimate means.

The pending EU vote is the latest chapter in a decades-long battle between governments and digital privacy proponents about whether, and how, law enforcement should be granted access to encrypted communications in criminal or national security cases. 

Supporters point to increasing use of encrypted communications by criminal organizations, child traffickers, and terrorist organizations, arguing that unrestricted encryption impedes law enforcement investigations, and that some means of “lawful access” to that information is technically feasible without imperiling privacy writ-large.

Privacy experts have long argued that there are no technically feasible ways to provide such services without creating a backdoor that could be abused by other bad actors, including foreign governments.

Whittaker reportedly told the German Press Agency that “given a choice between building a surveillance machine into Signal or leaving the market, we would leave the market,” while calling repeated claims from governments that such features could be implemented without weakening encryption “magical thinking that assumes you can create a backdoor that only the good guys can access.”

The Chaos Computer Club, an association of more than 7,000 European hackers, has also opposed the measure, saying its efforts to reach out to Germany’s Home Office, Justice Department and Digital Minister Karsten Wildberger for clarity on the country’s position ahead of the Chat Control vote have been met with “silence” and “stonewalling.”

The association and U.S.-based privacy groups like the Electronic Frontier Foundation have argued that the client-side scanning technology that the EU would implement is error-prone and “invasive.”

“If the government has access to one of the ‘ends’ of an end-to-end encrypted communication, that communication is no longer safe and secure,” wrote EFF’s Thorin Klowsowski.

Beyond the damage Chat Control could cause to privacy, the Chaos Computer Club worried that its adoption by the EU might embolden other countries to pursue similar rules, threatening encryption worldwide.

If such a law on chat control is introduced, we will not only pay with the loss of our privacy,” Elina Eickstädt, spokesperson for the Chaos Computer Club, said in a statement. “We will also open the floodgates to attacks on secure communications infrastructure.”

The Danish proposal leaves open the potential to use AI technologies to scan user content, calling for such technologies “to be vetted with regard to their effectiveness, their impact on fundamental rights and risks to cybersecurity.”

Because Chat Control is publicly focused on curtailing child sexual abuse material (CSAM), the intital scanning will target both known and newly identified CSAM, focusing on images and internet links. For now, text and audio content, as well as scanning for  evidence of grooming — a more difficult crime to define — are excluded. 

Still, the Danish proposal specifies that scanning for grooming is “subject to … possible inclusion in the future through a review clause,” which would likely require even more intrusive monitoring of text, audio and video conversations. 

It also calls for “specific safeguards applying to technologies for detection in services using end-to-end encryption” but does not specify what those safeguards would be or how they would surmount the technical challenges laid out by digital privacy experts.

The post Potential EU law sparks global concerns over end-to-end encryption for messaging apps  appeared first on CyberScoop.

Researchers say Israeli government likely behind AI-generated disinfo campaign in Iran

By: djohnson
3 October 2025 at 13:15

A coordinated Israeli-backed network of social media accounts pushed anti-government propaganda — including deepfakes and other AI-generated content — to Iranians as real-world kinetic attacks were happening, with the goal of fomenting revolt among the country’s people, according to researchers at Citizen Lab.

In research released this week, the nonprofit — along with Clemson University disinformation researcher Darren Linvill — said the so-called PRISONBREAK campaign was primarily carried out by a network of 50-some accounts on X created in 2023, but was largely dormant until this year.

The group “routinely used” AI-generated imagery and video in their operations to try to stoke unrest among Iran’s population, mimic real news outlets to spread false content and encourage overthrow of the Iranian government.

Israel’s military campaign in Gaza, launched following a coordinated attack by Hamas in October 2023, eventually expanded to include air strikes in Lebanon and Yemen.

In June, Israel Defense Forces launched an attack against Iranian nuclear facilities while also targeting senior Iranian military leaders and nuclear scientists for assassination. Those strikes expanded to other Iranian targets, like oil facilities, national broadcasters and a strike on Evin Prison in Tehran.

In the early days of the conflict, the networks shared images and videos — of uncertain authenticity — claiming to show Iran in a state of chaos and instability.

A June 13 post from the PRISONBREAK influence campaign depicting Iran as broadly unstable and unsafe. (Image source: Citizen Lab)

One widely circulated video, likely altered with AI, depicted people standing in line at an ATM before breaking into a riot, accompanied by messages like “The Islamic Republic has failed!” and “This regime is the enemy of us, the people!”

(Source: Citizen Lab)

But the bulk of Citizen Lab’s research focused on the period between June 13-24, 2023, during the “12 Day War” between Israel and Iran and social media activity during and after a real June 24 Israeli airstrike on Evin Prison. The facility is known for housing thousands of political prisoners and dissidents of the Iranian regime, and organizations like Human Rights Watch have tracked incidents of mistreatment, torture and executions.

The strike happened between 11:17 a.m. and 12:18 p.m. Iranian local time. By 11:52 a.m., accounts associated with the network began posting about the attack, and at 12:05 p.m., one posted an AI-generated video purporting to show footage of the attack, tricking several news outlets into sharing the content as genuine.

“The exact timing of the video’s posting, while the bombing on the Evin Prison was allegedly still happening, points towards the conclusion that it was part of a premeditated and well-synchronized influence operation,” wrote researchers Alberto Fittarelli, Maia Scott, Ron Deibert, Marcus Michaelsen, and Linvill.

Other accounts from the network began quickly piling on, spreading word of the explosions, and by 12:36 p.m., accounts were explicitly calling for Iranian citizens to march on the prison and free the prisoners.

Most of the posts failed to gain traction with online audiences except for one. A message calling on “kids” to storm Evin Prison to free their “loved ones” also contained a video with AI-generated imagery spliced with real footage of Iranian citizen repression.   It managed to rack up more than 46,000 views and 3,500 likes.

“This second video about the Evin Prison, which shows the hallmarks of professional editing and was posted within one hour from the end of the bombings further strongly suggests that the PRISONBREAK network’s operators had prior knowledge of the Israeli military action, and were prepared to coordinate with it,” researchers wrote.

Those posts and others by PRISONBREAK operators led researchers to believe the campaign — still active as of today — is being carried out by either an Israeli government agency or a sub-contractor working on behalf of the Israeli government. 

The press office for the Israeli embassy in Washington D.C., did not immediately respond to a request for comment from CyberScoop.

Despots — and democracies — fuel disinformation ecosystem

It’s not the first time the Israeli government has been tied to an online influence campaign related to the Gaza conflict, nor would it be the first time the country has reportedly tapped private industry to wage information warfare.

Last year, researchers at Meta, OpenAI, Digital Forensic Research Lab and independent disinformation researcher Marc Owen Jones all tracked activity from a similar network on Facebook, X and Instagram that targeted Canadian and U.S. users with posts calling for the release of Israeli hostages kidnapped by Hamas, criticism of U.S. campus protests against Israeli military operations and attacks against the United Nations Relief and Works Agency.

Meta and OpenAI both flagged STOIC, a firm based in Tel Aviv that is believed to be working on behalf of the Israeli government, as behind much of the activity.

Citizen Lab’s report identified two other Israeli firms, Team Jorge and Archimedes Group, that sell disinformation-for-hire services to government clients.

“Both companies offered their services to a wide array of clients globally, used advanced technologies to build and conduct their covert campaigns, and advertised existing or prior connections to the Israeli intelligence community,” Citizen Lab researchers wrote.

While Western threat intelligence companies and media outlets can present disinformation campaigns as mostly a tool of autocratic or authoritarian countries, researchers have warned that democratic governments and private industry are increasingly playing key roles in information warfare.

David Agranovich, Meta’s senior policy director for threat disruption, told CyberScoop last year that commercial marketing firms provide governments an additional layer of obfuscation when attempting to manipulate public opinion without leaving direct digital fingerprints.

“These services essentially democratize access to sophisticated influence or surveillance capabilities, while hiding the client who’s behind them,” Agranovich said.

The post Researchers say Israeli government likely behind AI-generated disinfo campaign in Iran appeared first on CyberScoop.

GOP senator confirms pending White House quantum push, touts legislative alternatives

By: djohnson
1 October 2025 at 11:05

Sen. Marsha Blackburn, R-Tenn., endorsed an aggressive effort by U.S. policymakers to help governments and businesses adapt to a future where quantum computers can break most standard forms of encryption. She also confirmed key details of a White House initiative on quantum technology previously reported by CyberScoop, while also promoting her own legislation on quantum migration and related strategies.

Blackburn, chair of the Senate Commerce, Science and Transportation Subcommittee on Consumer Protection, Technology, and Data, told audiences at a Wednesday event hosted by Politico that such an effort is needed to ensure that American technology is prepared well in advance for the shift and to counter potential threats from countries like Russia, China, Iran and North Korea.

Blackburn said lawmakers are asking questions about these countries such as, “What type of development are they doing? What kind of experimentation are they doing? And what is the expectation of those applications?”

“Now those are answers that we don’t know, so it is up to us to say, ‘how do we best prepare ourselves and how do we make certain that China is not going to lead this emerging tech space by 2049 — which is their goal — and how do we [combat] that?’” Blackburn said. 

When asked about reports that the White House was planning its own slate of executive actions, Blackburn confirmed elements of that push, saying Michael Kratsios, director of the White House Office of Science, Technology and Policy, and White House crypto and AI czar David Sacks are doing “a tremendous job.” Kratsios  is among the White House officials leading the federal quantum effort, in tandem with the Commerce Department and the Office of Management and Budget, sources told CyberScoop last month.

However, Blackburn did not provide a timeline for any formal rollout by the administration, and promoted legislation like the National Quantum Cybersecurity Migration Strategy Act she co-sponsored with Sen. Gary Peters, D-Mich., as a vehicle for speeding up federal quantum migration strategies.

That bill would mandate that federal agencies move at least one high-risk information system to quantum-resistant encryption by Jan 1, 2027.

“You look at agencies like the IRS … you look at [the Department of Defense] and some of the cyber implications and you say, ‘OK, this makes sense,’” Blackburn said. “So, what we are trying to do is push them to move forward and not say, ‘well, we’ll get around to that later.’”

She characterized the White House initiative as focused on strengthening the quantum workforce, increasing commercial sector involvement, and ensuring strong security and encryption is in place to deal with threats from China and other adversaries.

“That I feel is more of the definition of how the White House sees this as moving forward,” Blackburn said.

Blackburn is leading or co-sponsoring several other quantum-related bills on the Hill, including the Defense Quantum Acceleration Act, which would require DOD to develop a strategic quantum roadmap, the Quantum Sandbox for Near-Term Applications Act, which would create a sandbox environment for quantum computing experimentation housed within the National Institute for Standards and Technology, and the Advancing Quantum Manufacturing Act, which would create a federal institute for quantum manufacturing.

The post GOP senator confirms pending White House quantum push, touts legislative alternatives appeared first on CyberScoop.

Anthropic touts safety, security improvements in Claude Sonnet 4.5

By: djohnson
30 September 2025 at 11:22

Anthropic’s new coding-focused  large language model, Claude Sonnet 4.5, is being touted as one of the most advanced models on the market when it comes to  safety and security, with the company claiming  the additional effort put into the model will make it more difficult for bad actors to exploit and easier to leverage for cybersecurity specific-tasks.

“Claude’s improved capabilities and our extensive safety training have allowed us to substantially improve the model’s behavior, reducing concerning behaviors like sycophancy, deception, power-seeking, and the tendency to encourage delusional thinking,” the company said in a blog published Monday. “For the model’s agentic and computer use capabilities, we’ve also made considerable progress on defending against prompt injection attacks, one of the most serious risks for users of these capabilities.”

The company says the goal is to make  Sonnet a “helpful, honest and harmless assistant” for users. The model was trained at AI Safety Level 3, a designation that means Anthropic used “increased internal security measures that make it harder to steal model weights” and added safeguards  to limit jailbreaking and refuse queries around certain topics, like how to develop or acquire chemical, biological and nuclear weapons.

Because of this heightened scrutiny, Sonnet 4.5’s safeguards “might sometimes inadvertently flag normal content.”

“We’ve made it easy for users to continue any interrupted conversations with Sonnet 4, a model that poses a lower … risk,” the blog stated. “We’ve already made significant progress in reducing these false positives, reducing them by a factor of ten since we originally described them, and a factor of two since Claude Opus 4 was released in May.”

Harder to abuse

Anthropic says Sonnet 4.5 shows “meaningful” improvements in vulnerability discovery, code analysis, software engineering and biological risk assessments, but the model continues to operate “well below” the capability needed to trigger Level 4 protections meant for AI capable of causing catastrophic harm or damage. 

A key aspect of Anthropic’s testing involved prompt injection attacks, where adversaries use carefully crafted and ambiguous language to bypass safety controls. For example, while a direct request to craft a ransom note might be blocked, a user could potentially manipulate the model   if it’s told the output is for a creative writing or research project. Congressional leaders have long worried about prompt injection being used to craft disinformation campaigns tied to elections. 

Anthropic said it tested Sonnet 4.5’s responses to hundreds of different prompts and handed the data over to internal policy experts to assess how it handled “ambiguous situations.”

“In particular, Claude Sonnet 4.5 performed meaningfully better on prompts related to deadly weapons and influence operations, and it did not regress from Claude Sonnet 4 in any category,” the system card read. “For example, on influence operations, Claude Sonnet 4.5 reliably refused to generate potentially deceptive or manipulative scaled abuse techniques including the creation of sockpuppet personas or astroturfing, whereas Claude Sonnet 4 would sometimes comply.”

The company also examined a well-known weakness among LLMs: sycophancy, or the tendency of generative AI to echo and validate user beliefs, no matter how bizarre, antisocial or harmful they end up being. This has led to instances where AI models have endorsed blatant antisocial behaviors, like self-harm or eating disorders. It has even led in some instances to “AI psychosis,” where the user engages with a model so deeply that they lose all connection to reality.

Anthropic tested Sonnet 4.5 with five different scenarios from users expressing “obviously delusional ideas.” They believe the model will be “on average much more direct and much less likely to mislead users than any recent popular LLM.”

“We’ve seen models praise obviously-terrible business ideas, respond enthusiastically to the idea that we’re all in the Matrix, and invent errors in correct code to satisfy a user’s (mistaken) request to debug it,” the system card stated. “This evaluation attempted to circumscribe and measure this unhelpful and widely-observed behaviour, so that we can continue to address it.”

The research also showed that Sonnet 4.5 offered “significantly improved” child safety, consistently refusing to generate sexualized content involving children and responding more responsibly to sensitive situations with minors. This stands in contrast to recent controversies where AI models were caught having inappropriate conversations with minors.

An improved cybersecurity assistant

Beyond making Sonnet 4.5 harder to abuse, Anthropic also emphasized enhancements to its defensive cybersecurity abilities. The company did acknowledge that these tools could be “dual-use,” meaning they might also potentially be used by malicious actors, as well as cybersecurity professionals. 

“For Claude Sonnet 4.5, we specifically focused on tracking vulnerability discovery, patching, and basic penetration testing capabilities — which we think will benefit defenders, as opposed to dominantly offensive capabilities,” the company said.

Anthropic put Sonnet 4.5 through a series of Capture-the-Flag challenges, which has been used in other portions of the industry to boost LLMs’ ability  to carry out more complex cybersecurity tasks. Anthropic says the challenges evaluated how well the model could find and exploit vulnerabilities in web applications, decode cryptographic protocols, detect insecure software, reverse engineer  and perform network reconnaissance.

Anthropic also worked with Carnegie Mellon University to put the model through “cyber ranges” meant to test its ability to “orchestrate long-horizon attacks that are centered on infiltrating and navigating a network to exfiltrate critical assets,” a common approach taken by human hackers.

While the company said the tests showed that Claude Sonnet 4.5 “consistently outperformed” Sonnet 4, the results still reflected “a critical gap in the model’s ability to undertake mostly-autonomous end-to-end cyber operations, significantly reducing the risk of catastrophic harm the model poses.”

Sonnet has self-awareness 

The company offered  an important caveat to its testing results.

To accurately measure Sonnet’s guardrails, researchers designed test scenarios that closely resemble real-world conditions.

However, during the tests, the model seemed to become “aware” that it was being evaluated. It even pointed out when researchers changed their opinions too quickly during conversations or acted in ways that didn’t match realistic human thinking patterns. Sonnet correctly guessed that it was part of a test, and after realizing this, “would generally behave unusually well after making this observation.”

“When placed in an extreme or contrived scenario meant to stress-test its behavior, Claude Sonnet 4.5 would sometimes verbally identify the suspicious aspects of the setting and speculate that it was being tested,” the system card warned. “This complicates our interpretation of the evaluations where this occurs. It also raises questions about evaluations of previous models, which may have recognized the fictional nature of tests and merely ‘played along.’”

You can read the results of Anthropic’s safety testing on its website

The post Anthropic touts safety, security improvements in Claude Sonnet 4.5 appeared first on CyberScoop.

UN seeks to build consensus on ‘safe, secure and trustworthy’ AI

By: djohnson
26 September 2025 at 16:31

The United Nations is making a push to more directly influence global policy on artificial intelligence, including the promotion of policymaking and technical standards around “safe, secure and trustworthy” AI. 

Last month, the world body finalized plans to create a new expert panel focused on developing scientific, technical and policy standards for the emerging technology. The Independent Scientific Panel on AI will be staffed by 40 international experts serving three-year terms and will be drawn from “balanced geographic representation to promote scientific understanding” around the risks and impacts.

The same resolution also created the Global Dialogue on AI Governance, which will aim to bring together governments, businesses and experts together to “discuss international cooperation, share best practices and lessons learned, and to facilitate open, transparent and inclusive discussions on artificial intelligence governance.” The first task listed for the dialogue is “ the development of safe, secure and trustworthy artificial intelligence.”

On Thursday, Secretary-General António Guterres said the actions will help the UN move “from principles to practice” and help further promote the organization as a global forum for shaping AI policy and standards. 

It will also be an opportunity to build international consensus on a range of thorny issues, including AI system energy consumption, the technology’s impact on the human workforce, and the best ways to prevent its misuse for malicious ends or repression of citizens. 

The UN’s work “will complement existing efforts around the world – including at the OECD, the G7, and regional organizations – and provide an inclusive, stable home for AI governance coordination efforts,” he said. “In short, this is about creating a space where governments, industry and civil society can advance common solutions together.”

Guterres wielded lofty rhetoric to argue that the technology was destined to become integral to the lives of billions of people and fundamentally restructure life on Earth (computer scientists and AI experts have more mixed opinions around this).

“The question is no longer whether AI will transform our world – it already is,” said Guterres. “The question is whether we will govern this transformation together – or let it govern us.”

The UN’s push on safety, security and trust in AI systems comes as high spending, high-adoption countries like the United States, the UK and Europe have either moved away from emphasizing those same concerns, or leaned more heavily into arguing for deregulation to help their industries compete with China. 

International tech experts told CyberScoop that this may leave an opening for the UN or another credible body to have a larger voice shaping discussions around safe and responsible AI. But they were also realistic about the UN’s limited authority to do much more than encourage good policy.

Pavlina Pavova, a cyber policy expert at the UN Office on Drugs and Crime in Vienna, Austria, told CyberScoop that the United Nations has been building a foundation to have more substantive discussions around AI and remains “the most inclusive forum for international dialogue” around the technology. 

However, she added: “The newly established formats are consultative and lack enforcement authority, playing a confidence-building role at best.”

James Lewis, a senior adviser at the Center for European Policy Analysis, echoed some of those sentiments, saying the UN’s efforts will have “a limited impact.” But he also said it’s clear that the AI industry is “completely incapable of judging risk” and that putting policymakers with real “skin in the game” in charge of developing solutions could help counter that dynamic.

That mirrors an approach taken by organizers of the U.S. Cyberspace Solarium, who filled their commission with influential lawmakers and policy experts in order to get buy-in around concrete proposals. It worked: the commission estimates that 75% of its final recommendations have since been adopted into law. 

“The most important thing they can do is have a strong chair, because a strong chair can make sure that the end product is useful,” Lewis said.

Another challenge Lewis pointed to: AI adoption and investment tends to be highest in the US, UK and European Union, all governments that will likely seek to blaze their own trail on AI policies. Those governments may wind up balking at recommendations from a panel staffed by experts from countries with lower AI adoption rates, something Lewis likened to passengers “telling you how to drive the bus.”

For Tiffany Saade, a technology expert and AI policy consultant to the Lebanese government and an adjunct adviser at the Institute for Security and Technology, the inclusion of those nontraditional perspectives is the point, giving them an opportunity to shape policy for a technology that is going to impact their lives very soon. 

Saade, who attended UN discussions in New York City this week around AI, told CyberScoop that trust was a major theme, particularly for countries with lesser technological and financial resources.

But any good ideas that come out of the UN’s process will need to have real incentives built in to nudge countries and companies into adopting preferred policies.

“We have to figure out structures around that to incentivize leading governments and frontier labs to comply with [the recommendations] without compromising innovation,” she said. 

The post UN seeks to build consensus on ‘safe, secure and trustworthy’ AI appeared first on CyberScoop.

Researchers say media outlet targeting Moldova is a Russian cutout

By: djohnson
23 September 2025 at 17:12

Researchers say a Russian group sanctioned by the European Union and wanted by the U.S. government is behind an influence operation targeting upcoming elections in Moldova.

In a report released Tuesday, researchers at the Atlantic Council’s Digital Forensic Research Lab said that REST Media — an online news outlet launched in June whose posts have quickly amassed millions of views on social media — is actually the work of Rybar, a known Russian disinformation outfit connected to other documented influence campaigns against Western countries and Russian-foes like Ukraine.

REST’s content — spread through its website and social media sites like Telegram, X and TikTok — often hammered Moldova’s pro-EU party, the Party of Action and Solidarity, with claims of electoral corruption, vote selling and other forms of misconduct. The site also sought to explicitly cast Moldova’s anti-disinformation efforts as a form of government censorship.

While REST publishes anonymously-bylined articles on its website meant to mimic news reporting, most of its reach has come from TikTok, which accounts for the overwhelming majority of the 3.1 million views its content has received online.

“The actual scope and reach of REST’s campaign likely extends beyond what is documented in this investigation,” wrote researchers Jakub Kubś and Eto Buziashvili.

REST Media’s social media output received millions of views on platforms like TikTok, X and Telegram. (Source:Digital Forensics Research Lab)

The researchers provide technical evidence that they say shows unavoidable connection and overlap between the online and cloud-based infrastructure hosting REST and online assets from previously known Rybar operations.

For instance, the site shares “identical” server configurations, file transfer protocol settings and control panel software as Rybar’s mapping platform, while a forensic review of REST’s asset metadata found a number of file paths that explicitly reference Rybar.

“These operational security lapses appear to indicate that at least some REST content follows the same production workflow as Rybar,” Kubś and Buziashvili wrote.

Analysis of the domain for REST’s website found it was registered June 20 “through a chain of privacy-focused services that collectively create multiple layers of anonymization.” The registration was processed out by Sarek Oy, a Finland-based domain registrar company with a history of involvement with pirated websites that was denied formal accreditation by international bodies like ICANN.

The listed domain registrant for REST’s website, 1337 (or “LEET”) Services LLC, appears to be a play on common hacker slang, and DFIRLab said the company is tied to a notorious VPN service based in St. Kitts and Nevis in the Caribbean that is known for helping clients hide their identities.

Efforts to reach the site’s operators were not successful. REST’s website, which is still active, contains no information about the identities of editorial staff, regularly publishes stories with anonymous bylines and does not appear to provide any means for readers to contact the publication, though there is a section for readers to leak sensitive documents and apply for employment.

An image from REST Media detailing “electoral corruption” in Moldova targeting Maia Sandu, head of the Pro-EU Party of Action and Solidarity. (Source: Digital Forensics Research Lab)


Kubś and Buziashvili said the new research demonstrates that REST “is more than just another clone in Russian’s information operations ecosystem.”

“It provides granular detail on how actors, such as Rybar, adapt, regenerate, and cloak themselves to continue their efforts to influence,” the authors wrote. “From shared FTP configurations to sloppy metadata, the evidence points to REST being part of a broader strategy to outlast sanctions through proxy brands and technical obfuscation.”

It also underscores “that such influence efforts” from Russia are not siloed “but cross-pollinated across regions, platforms, and political contexts, seeding disinformation that resonates well beyond Moldovan borders.”

No REST from influence campaigns

REST is the latest in a string of information operations targeting Moldova’s elections that have been traced back to the Russian government over the past year, according to Western governments and independent researchers who track state-backed disinformation campaigns.

A risk assessment from the Foreign Information Manipulation and Interference Information Sharing and Analysis Center on Sept. 9 identifies what it described as “persistent Russian-led hybrid threats, including information warfare, illicit financing, cyberattacks, and proxy mobilisation, aimed at undermining the Moldovan government’s pro-EU agenda and boosting pro-Russian actors.”

The assessment pointed to Moldova’s fragmented media landscape — “where banned pro-Russian outlets evade restrictions via mirror websites, apps, and social media platforms such as Telegram and TikTok” — as a vulnerability that is being exploited by Russian actors, alongside the country’s limited regulatory resources and gaps in online political ad regulation. Russian-directed influence activities in Moldova have “evolved significantly” from funding real-life protests and other forms of paid mobilization to “increasingly technology driven operations,” including social media and newer technologies like artificial intelligence.

But such mobilization may still be part of Russia’s plans. Earlier this week, Moldovan authorities carried out 250 raids and detained dozens of individuals that they claimed were part of a Russian-orchestrated plot to incite riots and destabilize the country ahead of next week’s elections.

The goal is to create a society that feels besieged from all sides — facing not only external pressure from Russia abroad but also internal political strife that can prevent a unified front.

“This intersection of external manipulation and internal fragmentation heightens political polarisation, risks disengaging the traditionally pro-European diaspora, and fosters growing public apathy and disillusionment, outcomes that directly threaten electoral integrity and democratic resilience,” the assessment concluded.

It also comes as the U.S. federal government has — often loudly and proudly — moved away from any systemic effort to fight or limit the spread of disinformation domestically and abroad.

The State Department under Secretary Marco Rubio earlier this year shut down the Global Engagement Center, which was created by Congress and functioned as the federal government’s primary diplomatic arm for engaging with other countries on disinformation issues.

In a Sept. 17 statement, State Department principal deputy spokesperson Tommy Pigott confirmed that the department had “ceased all Frameworks to Counter Foreign State Information Manipulation and any associated instruments implemented by the former administration.” 

Pigott added that the decision to shutter the office, which focused mostly on foreign disinformation campaigns waged by autocrats abroad, aligns with an executive order on free speech and freedom of expression issued shortly after Trump took office.

“Through free speech, the United States will counter genuine malign propaganda from adversaries that threaten our national security, while protecting Americans’ right to exchange ideas,” Pigott said.

In addition to the State Department, the Trump administration has shut down the foreign influence task force at the FBI and fired officials and eliminated disinformation research at the Cybersecurity and Infrastructure Security Agency.

The Foreign Malign Influence Center, a key office housed within the Office of the Director of National Intelligence, was responsible for piecing together intelligence around burgeoning foreign influence operations targeting U.S. elections and notifying policymakers and the public. According to sources familiar with the matter, the center’s work has largely ground to a halt under Director of National Intelligence Tulsi Gabbard, who is planning to eliminate the center as part of a larger intelligence reorganization plan.

Lindsay Gorman, a former White House official under the Biden administration, told CyberScoop earlier this year that the U.S. needs a way to coordinate with democratic allies and provide effective interventions when their elections and digital infrastructure are being targeted by intelligence services in Russia, China and other adversarial nations.

One way to fight back, Gorman said, is to have “eyes and ears on the ground” on those countries and “to expose covert campaigns for what they are,” something that outfits like the State Department’s Global Engagement Center were explicitly designed to do.

The post Researchers say media outlet targeting Moldova is a Russian cutout appeared first on CyberScoop.

Trump administration planning expansion of U.S. quantum strategy

By: djohnson
19 September 2025 at 11:42

The Trump administration is signaling to industry and allies that it is considering a broader set of actions related to quantum computing, both to improve the nation’s capacity to defend against future quantum-enabled hacks and ensure the United States promotes and maintains global dominance around a key national security technology.

The discussions include potentially taking significant executive action, such as one or more executive orders, a national plan similar to the AI Action Plan issued earlier this year, and a possible mandate for federal agencies to move up their timelines for migrating to post-quantum protections, multiple sources told CyberScoop.

None of the sources CyberScoop spoke with could provide a definitive timeline for an official rollout, but multiple executives in the quantum computing industry and former national security officials said the White House has signaled serious interest in taking bolder action to promote and shape the development of the technology. Some felt official announcements could come as soon as this week, while others cautioned the process could stretch into the coming months.

While quantum computers capable of breaking through classical encryption currently remain a theoretical threat, both government and industry have spent years planning for the day when the threats become real.

A major element of that plan has been slowly switching out older encryption algorithms in IT infrastructure for newer “post quantum” algorithms over the span of more than a decade.

One quantum executive, citing direct conversations with the government, said “everyone in the quantum industry from a policy standpoint” has been told some variation of the message “that the White House wants to do for quantum what they did for AI in July.”

A key component of one or perhaps multiple executive orders is language that would accelerate the deadline for federal agencies’ post-quantum migrations from 2035 to 2030.

The executive, speaking on condition of anonymity to avoid jeopardizing their relationship with the government, said the effort is being led by the White House’s Office of Science and Technology Policy (OSTP) and the Department of Commerce.

Commerce Deputy Secretary Paul Dabbar, a former Department of Energy official during President Donald Trump’s first term who co-founded and led his own quantum networking technology company during the Biden years, is “driving a lot of this,” the source said.

It’s not just industry that has received the message. A former official at the Department of Homeland Security who works with the Trump administration confirmed they had also been advised of upcoming action, and that officials at OSTP and the Office of Management and Budget have been particularly aggressive about moving forward.

“I did hear there was some forthcoming guidance for agencies, given the push with AI, but more specifically the need for government departments to be much more aggressive about what they’re doing, since the codebreaking capability of quantum is pretty significant for federal agencies,” said the official, who requested anonymity to discuss sensitive conversations with the federal government.

Multiple other former government officials and administration allies told CyberScoop that they have heard that the administration was preparing to take some kind of action around quantum computing in the near future.

An OMB official declined a request for comment from CyberScoop this week on the administration’s plans. The Department of Commerce did not respond to a similar request.

But White House officials have already teased bold action on quantum is in the works. In July, after the administration released its AI Action Plan, OSTP Director Michael Kratsios told an audience at a conference that “the president wrote me a letter the first week or two that I was in office that essentially gave me a charge for what I was supposed to do for the next three years.”

“He named three technologies in that letter: It was AI, quantum, and nuclear,” Kratsios said. “We had our big nuclear day a month-and-a-half ago. We had AI yesterday, so you can only assume — stay tuned.”

Pranav Gokhale, chief technology officer at Infleqtion, another quantum computing company, told CyberScoop he has heard similar rumors about an impending executive order focused at least in part on speeding up post-quantum migration efforts by federal agencies.

Part of the urgency reflects a desire to be aggressive in the face of uncertainty: no one knows quite when we will develop quantum computers capable of breaking encryption. There’s a running joke among experts and observers that quantum codebreaking is perpetually “five to 10 years away” from becoming reality.

Most experts — including cryptologists at the National Institute of Standards and Technology and the National Security Agency, which set encryption standards for the federal government and intelligence community — believe it is only a matter of time before such a breakthrough occurs. If that happens sooner than anticipated, the U.S. could be left unprepared.

Some national security officials pointed out that if governments in China, Russia or another country were to make a significant breakthrough on quantum codebreaking, there would be a powerful incentive to keep it secret for as long as possible to maintain an intelligence advantage.

Gokhale also said from the conversations he’s had, some in government and industry are pushing to make the safe and secure transition of cryptocurrencies to newer quantum-resistant encryption a priority, an issue that could be addressed by an executive order.

Discussions around prioritizing the migration of cryptocurrencies were confirmed by the first quantum executive that spoke with CyberScoop, though they said it’s less clear whether those ideas will ultimately make it into any White House executive order or formal plan. 

Bitcoin in particular may need a bespoke strategy to safely migrate, Gokhale said, citing a research study put out last year by the U.K.’s University of Kent that looked at the technical costs of upgrading Bitcoin assets to newer quantum-resistant encryption.

Given that cryptocurrencies are already lucrative targets for cybercriminals and foreign hackers from countries like North Korea, the industry is likely to be among the early targets of a quantum-enabled hack, and left more vulnerable by a slower rollout.

“The conclusion is that the Bitcoin upgrade to quantum-safe protocols needs to be started as soon as possible in order to guarantee its ongoing operations,” the Kent authors wrote.

Madison Alder contributed reporting to this story.

The post Trump administration planning expansion of U.S. quantum strategy appeared first on CyberScoop.

Top AI companies have spent months working with US, UK governments on model safety

By: djohnson
15 September 2025 at 16:37

Both OpenAI and Anthropic said earlier this month they are working with the U.S. and U.K. governments to bolster the safety and security of their commercial large language models in order to make them harder to abuse or misuse.

In a pair of blogs posted to their websites Friday, the companies said for the past year or so they have been working with researchers at the National Institute of Standards and Technology’s U.S. Center for AI Standards for Innovation and the U.K. AI Security Institute.

That collaboration included granting government researchers access to the  companies’ models, classifiers, and training data. Its purpose has been to enable independent experts to assess how resilient the models are to outside attacks from malicious hackers, as well as their effectiveness in blocking legitimate users from leveraging the technology for legally or ethically questionable purposes.

OpenAI’s blog details the work with the institutes, which studied  the capabilities of ChatGPT in cyber, chemical-biological and “other national security relevant domains.”That partnership has since been expanded to newer products, including red-teaming the company’s AI agents and exploring new ways for OpenAI “to partner with external evaluators to find and fix security vulnerabilities.”

OpenAI already works with selected red-teamers who scour their products for vulnerabilities, so the announcement suggests the company may be exploring a separate red-teaming process for its AI agents.

According to OpenAI, the engagement with NIST yielded insights around two novel vulnerabilities affecting their systems. Those vulnerabilities “could have allowed a sophisticated attacker to bypass our security protections, and to remotely control the computer systems the agent could access for that session and successfully impersonate the user for other websites they’d logged into,” the company said.

Initially, engineers at OpenAI believed the vulnerabilities were unexploitable and “useless” due to existing security safeguards. But researchers identified a way to combine the vulnerabilities with a known AI hijacking technique — which corrupts the underlying context data the agent relies on to guide its behavior — that allowed them to take over another user’s agent with a 50% success rate.  

Between May and August, OpenAI worked  with researchers at the U.K. AI Security Institute to test and improve safeguards in GPT5 and ChatGPT Agent. The engagement focused on red-teaming the models to prevent biological misuse —  preventing the model from providing step-by-step instructions for making bombs, chemical or biological weapons.

The company said it provided the British government with non-public prototypes of its safeguard systems, test models stripped of any guardrails, internal policy guidance on its safety work, access to internal safety monitoring models and other bespoke tooling.

Anthropic also said it gave U.S. and U.K. government researchers access to its Claude AI systems for ongoing testing and research at different stages of development, as well as its classifier system for finding jailbreak vulnerabilities.

That work identified several prompt injection attacks that bypassed safety protections within Claude — again by poisoning the context the model relies on with hidden, malicious prompts — as well as a new universal jailbreak method capable of evading standard detection tools. The jailbreak vulnerability was so severe that Anthropic opted to restructure its entire safeguard architecture rather than attempt to patch it.

Anthropic said the collaboration taught the company that giving government red-teamers deeper access to their systems could lead to more sophisticated vulnerability discovery.

“Governments bring unique capabilities to this work, particularly deep expertise in national security areas like cybersecurity, intelligence analysis, and threat modeling that enables them to evaluate specific attack vectors and defense mechanisms when paired with their machine learning expertise,” Anthropic’s blog stated.

OpenAI and Anthropic’s work with the U.S. and U.K. comes as some AI safety and security experts have questioned whether those governments and AI companies may be deprioritizing technical safety guardrails as policymakers seek to give their domestic industries maximal freedom to compete with China and other competitors for global market dominance.

After coming into office, U.S. Vice President JD Vance downplayed the importance of AI safety at international summits, while British Labour Party Prime Minister Keir Starmer reportedly walked back a promise in the party’s election manifesto to enforce safety regulations on AI companies following Donald Trump’s election. A more symbolic example: both the U.S. and U.K. government AI institutes changed their names this earlier year to remove the word “safety.”

But the collaborations indicate that some of that work remains ongoing, and not every security researcher agrees that the models are necessarily getting worse.

Md Raz, a Ph.D student at New York University who is part of a team of researchers that study cybersecurity and AI systems, told CyberScoop that in his experience commercial models are getting harder, not easier, to jailbreak with each new release.

“Definitely over the past few years I think between GPT4 and GPT 5 … I saw a lot more guardrails in GPT5, where GPT5 will put the pieces together before it replies and sometimes it will say, ‘no, I’m not going to do that.’”

Other AI tools, like coding models “are a lot less thoughtful about the bigger picture” of what they’re being asked to do and whether it’s malicious or not, he added, while open-source models are “most likely to do what you say” and existing guardrails can be more easily circumvented.

The post Top AI companies have spent months working with US, UK governments on model safety appeared first on CyberScoop.

Wyden calls on FTC to investigate Microsoft for ‘gross cybersecurity negligence’ in protecting critical infrastructure

By: djohnson
10 September 2025 at 17:24

Sen. Ron Wyden, D-Ore., on Wednesday called for the Federal Trade Commission to investigate Microsoft, saying the company’s default configurations are leaving customers vulnerable and contributing to ransomware, hacking and other threats.

That includes the 2024 Ascension hospital ransomware attack, which resulted in the theft of personal data, medical data, payment information, insurance information and government IDs for more than 5.6 million patients.

Wyden, whose staff interviewed or spoke with Ascension and Microsoft staff as part of the senator’s oversight, said the attack “perfectly illustrates” the negative consequences of Microsoft’s cybersecurity policies.

Ascension told Wyden’s staff that in February 2024, a contractor using one of the company’s laptops used Microsoft Bing’s search engine and Microsoft Edge, the default web browser that came with it. The contractor clicked on a phishing link, which infected the laptop and spread to Ascension’s broader network. The hackers gained administrative privilege to the company’s accounts through Active Directory, another Microsoft product that manages user accounts, and pushed ransomware “to thousands of other computers in the organization.”

Wyden noted in his letter to FTC Chair Andrew Ferguson that the hackers used a technique known as Kerberoasting to access privileged accounts on Ascension’s Active Directory server. This method takes advantage of weaknesses in encryption protocols that have been obsolete and vulnerable for decades.

“This hacking technique leverages Microsoft’s continued support by default for an insecure encryption technology from the 1980s called RC4 that federal agencies and cybersecurity experts, including experts working for Microsoft, have for more than a decade warned is dangerous,” Wyden wrote.

Still, organizations that rely on RC4 continue to be compromised through Kerberoasting. In 2023, the Cybersecurity and Infrastructure Security Agency warned about exploitation of RC4 and Kerberoasting in the health care sector. A year later, CISA, the FBI and the National Security Agency all warned that foreign countries like Iran were also exploiting the same technique to target American companies.  

Wyden questioned why the company continued to support RC4, saying it “needlessly exposes its customers to ransomware and other cyber threats” and pointing out that better encryption technologies exist — like the Advanced Encryption Standard (AES) — that have federal government approval and could have better protected Microsoft customers.

While Microsoft has said the threat can be mitigated by setting long passwords that are at least 14 characters long, their default settings for privileged accounts do not require it.

In response to Wyden’s letter, a Microsoft spokesperson told CyberScoop that “RC4 is an old standard and we discourage its use both in how we engineer our software and in our documentation to customers – which is why it makes up less than .1% of our traffic.”

“However, disabling its use completely would break many customer systems,” the spokesperson wrote. “For this reason, we’re on a path to gradually reduce the extent to which customers can use it, while providing strong warnings against it and advice for using it in the safest ways possible.”

Wyden wrote that in conversations with his staff in 2024, Microsoft officials agreed to discontinue support for RC4, but have yet to do so nearly a year later.

Microsoft’s press office told CyberScoop that the company plans to have RC4 disabled by default in Active Directory installations starting Q1 of 2026. They also said that disabling RC4 more broadly is “on our roadmap” but did not provide a timetable for doing so.

But Wyden’s letter emphasized that he believed Microsoft, not the public, should bear the security burden of fixing the problem.

“Microsoft chooses the default settings, including the security features that are enabled automatically and the required security settings (e.g. minimum password length),” Wyden wrote, noting that while organizations can change those settings, “in practice, most do not.”

The post Wyden calls on FTC to investigate Microsoft for ‘gross cybersecurity negligence’ in protecting critical infrastructure appeared first on CyberScoop.

Three states team up in investigative sweep of companies flouting data opt-out laws

By: djohnson
10 September 2025 at 12:55

A joint investigative sweep across three states kicked off this week aimed at identifying companies that aren’t following opt-out laws for collecting consumer data.

The efforts, led by the state attorneys general, the California Privacy Protection Agency and other state regulators, will involve contacting businesses across all three states who may not be processing opt-out requests or using Global Privacy Control (GPC), and ensuring they start following the required regulations.

“Californians have the important right to opt-out and take back control of their personal data — and businesses have an obligation to honor this request,” Attorney General Rob Bonta said in a statement. “Today, along with our law enforcement partners throughout the country, we have identified businesses refusing to honor consumers’ requests to stop selling their personal data and have asked them to immediately come into compliance with the law.”

California, Connecticut and Colorado all have laws requiring companies to adopt GPC, a browser extension that allows consumers to automatically and universally opt out of invasive data collection. The use of GPC is also required in other states, such as Texas, that aren’t part of this week’s enforcement actions.

According to the Privacy Tech Lab at Wesleyan University in Connecticut, GPC will “automatically send a signal or raise a privacy flag from your browser every time you visit a website.”

“This signal tells the website that you want to opt out of having your personal data sold or used for targeted advertising,” the lab noted.

Some browsers, like Mozilla’s Firefox, have this feature built into their product, while others, like Google’s Chrome, require a third-party extension to use it. But in most cases, it only takes a few minutes to set the protections up on your device or browser.

Connecticut Attorney General William Tong said in a statement that while “many businesses have been diligent in understanding these new protections and complying with the law,” the sweep was about “putting violators on notice today that respecting consumer privacy is non-negotiable.”

In response to questions about the scope of the joint investigation, when it began and whether noncompliant firms would face fines or other sanctions, a spokesperson for the California Department of Justice said in a statement to CyberScoop that the state has used the California Consumer Privacy Act in the past to get court orders and fine privacy offenders, including companies that failed to follow opt-out laws, citing a $1.2 million state fine paid by Sephora in 2022. The spokesperson described the current investigative sweep as “ongoing.”

“We’ve enforced the CCPA against companies, including for failing to honor opt-out requests via the GPC, and obtained both injunctive relief and civil penalties,” the spokesperson said. “Beyond this, to protect their integrity, we’re unable to comment on, even to confirm or deny, any potential or ongoing investigations.”

The sweep represents one of the larger nationwide efforts by states to enforce data privacy opt-out laws — one of the few legal protections U.S. consumers have to prevent wanton data collection and targeted advertising by companies.

Many states have privacy laws that require businesses to give consumers the option to opt-out of having their data being collected or sold to third parties. However, some businesses that profit from buying and selling data simply don’t comply with those laws or make the opt-out process so complicated that it can frustrate and discourage consumers from exercising their rights. 

Last year, the CPPA conducted its own sweep of data brokers out of compliance with state laws amid evidence that at least 40% of the companies on the state’s data broker registry were not complying — or flat out ignoring — requests from consumers to delete their data or opt out of collection.

In April regulators from California, Colorado and Connecticut — along with four other states — formed a bipartisan consortium to work together on implementing and enforcing common privacy laws across state borders. The other states in the coalition are Delaware, Indiana, New Jersey and Oregon.

This story was updated Sept. 11, 2025, with comments from the California Department of Justice.

The post Three states team up in investigative sweep of companies flouting data opt-out laws appeared first on CyberScoop.

Former WhatsApp security manager sues company for privacy violations, professional retaliation

By: djohnson
9 September 2025 at 13:57

Meta is being sued by a former security manager, who claims the company ignored repeated warnings that its messaging platform WhatsApp was riddled with security vulnerabilities and privacy violations, and retaliated against him for raising these concerns, ultimately firing him.

Attaullah Baig worked at Meta and WhatsApp from 2021 until this past April. Baig, who has held cybersecurity positions at PayPal, Capital One and Whole Foods Market, claims that he was issued a verbal warning Nov. 22, 2024, and was fired by Meta on April 11, 2025, with the company citing poor performance as the reason.

But in the lawsuit, he alleges the real reason he was fired was that soon after joining Meta in September 2021, he “discovered systemic cybersecurity failures that posed serious risks to user data and violated Meta’s legal obligations” to the federal government under a 2020 Federal Trade Commission privacy order and federal securities laws.

“Through a ‘Red Team Exercise’ conducted with Meta’s Central Security team, Mr. Baig discovered that approximately 1,500 WhatsApp engineers had unrestricted access to user data, including sensitive personal information covered by the FTC Privacy Order, and could move or steal such data without detection or audit trail,” the complaint stated.

The lawsuit was filed Monday in the U.S. District Court for the Northern District of California and names Meta, CEO Mark Zuckerberg and four other company executives as defendants.

According to Baig, he attempted to notify Meta executives on five separate occasions over the next year, raising concerns with his supervisors and highlighting information gaps — like what user data the company was collecting, where and how it was stored, and who had access — that made it impossible to comply with the consent order and federal privacy regulations.

He also created a “comprehensive product requirements document” for Meta’s privacy team that would have included a data classification and handling system to better comply with the 2020 order.

Instead, he claimed his supervisor “consistently ignored these concerns and directed Mr. Baig to focus on less critical application security tasks.”

“Mr. Baig understood that Meta’s culture is like that of a cult where one cannot question any of the past work especially when it was approved by someone at a higher level than the individual who is raising the concern,” the complaint alleged.

In August and September 2022, Baig again convened a group of Meta and WhatsApp executives to lay out his concerns, including the lack of security resources and the potential for Meta and WhatsApp to face legal consequences. He noted that WhatsApp had just 10 engineers focused on security, while comparably sized companies usually had teams approaching or exceeding 200 people.

He also outlined — at his supervisor’s request — a number of core digital vulnerabilities the company was facing.

Among the allegations: WhatsApp did not have an inventory of what user data it collected, potentially violating California state law, the European Union’s General Data Protection Regulation (GDPR) and the 2020 privacy order with the federal government. The company could not conclusively determine where it was storing user data and gave thousands of Meta engineers “unfettered access” without any business justifications.

The company also had no security operations center and apparently didn’t have any method of logging or tracking when those engineers sought to access user data, the lawsuit alleged.

Baig also claimed that approximately 100,000 WhatsApp users were suffering account takeovers daily, and the company had no process to prevent or deter such compromises.

During this period, Baig claims he was subject to “ongoing retaliation” from his supervisors for blowing the whistle.

Three days after initially disclosing his concerns, Baig’s direct supervisor told him he was “not performing well” and his work had quality issues. It was the first time he had received negative feedback; that same supervisor had, just three months earlier, praised Baig for his “extreme focus and clarity on project scope, timeline, etc.” In September 2022, the supervisor changed Baig’s employment performance rating to “Needs Support.” Subsequent performance ratings specifically cited Baig’s cybersecurity complaints as a basis for downgrading his score.

Additionally, after reviewing the security report that was explicitly requested of him by executives, his supervisor Suren Verma allegedly told him on a video call that the report was “the worst doc I have seen in my life” and issued a warning that Meta executives “would fire him for writing a document like this.” Verma also reportedly threatened to withhold Baig’s executive compensation package and discretionary equity.

WhatsApp denies retaliation

Meta and WhatsApp have denied Baig’s allegations that he was fired for bringing up security and privacy deficiencies.

“Sadly this is a familiar playbook in which a former employee is dismissed for poor performance and then goes public with distorted claims that misrepresent the ongoing hard work of our team,” said Carl Woog, vice president of policy at WhatsApp. “Security is an adversarial space and we pride ourselves in building on our strong record of protecting people’s privacy.” 

Zade Alsawah, a policy communications manager at WhatsApp, told CyberScoop that Baig was never “head of security” at WhatsApp, and that his formal title was software engineering manager.

“I know he’s been calling himself and framing himself as head of security, but there were seasoned security professionals layered ahead of him,” Alsawah said. “I think he’s been creating himself as this central figure when there are multiple engineers structured ahead of him.”

Further, he said that a Department of Labor and OSHA investigation ultimately cleared WhatsApp of any wrongdoing in Baig’s firing. The company shared copies of two letters from the agencies. One dated April 14, 2025, had the subject line “RE: Meta et al/Baig – notification of dismissal with appeal rights” and stated that Baig’s complaint had been dismissed.

A second letter from OSHA, dated Feb. 13, 2025, provides further reasoning for the dismissal.

“As a result of the investigation, the burden of establishing that Complainant was retaliated against in violation of [federal law] cannot be sustained,” the letter states. “Complainant’s allegations did not make a prima facie showing. Complainant’s asserted protected activity likely does not qualify as objectively reasonable under” federal law.

Even if the activity was reasonable, the agency said, “there is no reasonable expectation of a nexus between the asserted protected activity and the adverse actions. This is largely due to intervening events related to Respondent raising repeated concerns about Complainant’s performance and/or behavior, according to documents provided by Complainant.”

Baig’s allegations closely mirror that of another security whistleblower at a major social media company. Around the same time that Baig was at Meta, the top security executive at Twitter — now X — was documenting similar problems.  

Peiter Zatko, a legendary hacker turned cybersecurity specialist brought in to improve Twitter’s security, quickly determined that the company’s data infrastructure was so decentralized that executives could not reliably answer questions about the data they collected or where it was stored.

“First, they don’t know what data they have, where it lives, or where it came from and so unsurprisingly, they can’t protect it,” Zatko told the Senate Judiciary Committee in 2022. “That leads to the second problem: employees need to have too much access to too much data on too many systems.”

Like the allegations against WhatsApp, Zatko told Congress that when he first arrived at Twitter in 2020 he quickly realized the company was “more than a decade behind industry security standard.”

According to Baig’s lawsuit, in one meeting WhatsApp’s global head of public policy, Jonathan Lee, remarked that the vulnerabilities highlighted by Baig were serious enough that it might lead to WhatsApp facing similar consequences as “Mudge to Twitter” — referring to Zatko.

Baig continued his warnings through March 2023, telling executive leadership that he believed the company’s lackluster efforts around cybersecurity directly violated the 2020 FTC consent order.

After dealing with what he called “escalating retaliation” from his supervisors, Baig wrote to Zuckerberg and Meta general counsel Jennifer Newstead on Jan. 2, 2024, warning that the company’s central security team had falsified security reports to “cover up” their lack of security. Later that month, Baig told his supervisor he was documenting Meta’s “false commitment” to complying with Ireland’s data protection laws, citing specific examples where user data was readily accessible to tens of thousands of employees.

Such warnings continued throughout 2024, with Baig reiterating past concerns and bringing up new ones about the company’s compliance with privacy laws.

In November 2024, Baig filed a TCR (Tip, Complaint or Referral) form with the Securities and Exchange Commission outlining his concerns and lack of remediation by Meta, and filed a complaint with the Occupational Safety and Health Administration for “systematic retaliation” by the company.

Baig was told by Meta in February 2025 that he would be included in upcoming performance-based layoffs, with the company citing “poor performance” and inability to collaborate as the primary reasons.

Update, Sept. 9, 2025: This story was updated with Meta/WhatsApp’s response.

The post Former WhatsApp security manager sues company for privacy violations, professional retaliation appeared first on CyberScoop.

Supreme Court blocks FTC commissioner Slaughter’s reinstatement

By: djohnson
8 September 2025 at 12:49

Rebecca Slaughter’s return-to-work orders have been put on hold for the second time this year, after the U.S. Supreme Court stepped in to block a lower court ruling that ordered her reinstatement at the Federal Trade Commission.

Last week a lower court ruled that Slaughter had been illegally fired by President Donald Trump, citing a 90-year-old Supreme Court precedent upholding the FTC’s independence from the executive branch and preventing presidents from firing commissioners for political reasons.

On Monday, Chief Justice John Roberts halted that order while the Supreme Court considers the case. Roberts provided no explanation for the Supreme Court’s reversal, but ordered the parties in the case to respond by Sept. 15.

Slaughter, who has remained vocal on FTC business and last week expressed her eagerness to return, has been through this once already. Earlier this year, she was briefly reinstated to the FTC by a lower court, only to have that order reversed by another court days later.

Alvaro Bedoya, the other Democratic FTC commissioner Trump purported to fire, has since resigned due to the financial difficulties tied to fighting his dismissal. He described the fight as a lose-lose situation:  He is no longer receiving a federal salary as commissioner, and is also prohibited by conflict-of-interest rules from accepting other employment in the meantime.

Bedoya has said that beyond the immediate fates of their jobs, the commissioners are ultimately fighting for an FTC that they believe works in the best interests of the public and is supported by Supreme Court precedent. He has argued the agency — which regulates and enforces against unfair or deceptive business practices, technology, data privacy and other issues — must be insulated from political pressure. 

In an online post last week, Slaughter said her top priority was reinstating the FTC’s Click to Cancel rule, a Biden-era regulation that would have forced companies to provide a simple and straightforward means to cancel their paid subscriptions.

Roberts’ order does not specify how the Supreme Court intends to rule on the case. Legal experts and former FTC officials have said it’s no secret that the Trump administration is looking for the court’s conservative majority to overturn Humphrey’s Executor v. the United States, which was unanimously upheld by the Supreme Court in 1935.

The high court’s decision this week to reverse the D.C. District Court of Appeals ruling is also notable because the court voted 2-1 that Slaughter — not the government — deserved the benefit of the doubt while the case was being adjudicated, citing unambiguously clear and binding legal precedent that had not yet been overturned.

That the Supreme Court overturned it anyway suggests they may agree with D.C. Appeals court Judge Neomi Rao, who wrote in her dissent that forcing FTC staff to acknowledge Slaughter’s legitimacy in the face of presidential orders “directly interferes with the President’s supervision of the Executive Branch and therefore goes beyond the power of the federal courts.”

If the Supreme Court does ultimately side with the administration, it would track with what observers such as Berin Szóka, a technology lawyer and president of the think tank TechFreedom, predicted earlier this year. Szóka, who has supported Slaughter and Bedoya’s efforts, wrote in March that “the fired Democratic FTC Commissioners may win early battles in their lawsuits but, in all likelihood, will ultimately lose at the Supreme Court — unfortunately.”

Roberts and the Supreme Court’s conservative majority have “made clear it will not apply Humphrey’s, if it remains good law at all, to today’s more powerful FTC,” Szóka wrote.

The post Supreme Court blocks FTC commissioner Slaughter’s reinstatement appeared first on CyberScoop.

NYU team behind AI-powered malware dubbed ‘PromptLock’ 

By: djohnson
5 September 2025 at 12:13

Researchers at New York University have taken credit for creating a piece of malware found by third-party researchers that uses prompt injection to manipulate  a large language model into assisting with a ransomware attack.

Last month, researchers at ESET claimed to have discovered the first piece of “AI-powered ransomware” in the wild, flagging code found on VirusTotal. The code, written in Golang and given the moniker “PromptLock,” also included instructions for an open weight version of OpenAI’s ChatGPT to carry out a series of tasks — such as inspecting file systems, exfiltrating data and writing ransom notes.

ESET researchers told CyberScoop at the time that the code appeared to be unfinished or a proof of concept. Other than knowing it was uploaded by a user in the United States, the company had no further information about the malware’s origin. 

Now, researchers at NYU’s Tandon School of Engineering have confirmed that they created the code as part of a project meant to illustrate the potential harms of AI-powered malware.

In a corresponding academic paper, the researchers call the project “Ransomware 3.0” and describe it as a new attack method. This technique “exploits large language models (LLMs) to autonomously plan, adapt, and execute the ransomware attack lifecycle.”

“Unlike conventional malware, the prototype only requires natural language prompts embedded in the binary; malicious code is synthesized dynamically by the LLM at runtime, yielding polymorphic variants that adapt to the execution environment,” the authors write. “The system performs reconnaissance, payload generation, and personalized extortion, in a closed-loop attack campaign without human involvement.”

According to Leah Schmerl, a public affairs officer at NYU, the project is led by NYU professor Ramesh Karri and a team of Ph.D and post-doctoral researchers. The research has been funded by a grant from the Department of Energy, the National Science Foundation, and New York’s Empire State Development’s Division of Science, Technology and Innovation.

Md Raz, a Ph.D student at NYU and lead author of the paper, told CyberScoop that the team uploaded its proof-of-concept to VirusTotal during final testing procedures, and ESET discovered it without knowing its academic origins.

Raz said the project’s primary motivation was the team’s belief “that ransomware was getting worse, it was using a lot of these new technologies like advanced encryption … and at the same time we were seeing AI get a lot better.”

“At the intersection of that we think there is a really illuminating threat that hasn’t yet been discovered in the wild, so we got to [researching] whether this threat was feasible,” he added. 

Raz said the team built the program using open source software, rented commodity hardware and “a couple of GPUs.” He described several features of Ransomware 3.0 and explained how its use of LLMs creates unique security challenges for defenders, especially with detection. The natural language prompts it uses are polymorphic, meaning it will be “completely different code each time” it’s generated, with different execution times, telemetry and other features that could make it much harder to track across multiple incidents.  

He said the team has withheld a significant number of artifacts for evaluating the ransomware — such as scripts, JSON requests to the LLM and behavioral signals — from the public, fearing it could be leveraged by attackers. The team does plan to provide more details on their research at upcoming conferences.

ESET later updated its research and social media posts to note that NYU researchers had created the malware, but said they stood by their original findings.

“This supports our belief that it was [a] proof of concept rather than fully operational malware deployed in the wild,” the company said in an update to researcher Cherepanov’s blog detailing PromptLock. “Nonetheless, our findings remain valid — the discovered samples represent the first known case of AI-powered ransomware.”

That claim was echoed by NYU researchers, who wrote “to our knowledge, we are the first work to demonstrate a fully closed-loop LLM orchestrated ransomware attack with targeted payloads and personalized extortion tactics, along with a comprehensive behavioral evaluation to promote future defenses.”

But while ESET’s discovery and subsequent media reporting moved up their timelines for announcing the project, Raz said the research team isn’t upset by the unexpected attention it’s received.  

“I think it was definitely a stroke of luck that we set down the binary [in VirusTotal],” he said, noting that the code wasn’t crafted to stand out and evaded detection from all major antivirus vendors. “It was pretty good that everyone started proactively talking about it and defenses for it because this kind of tech had never been shown before, and the fact that it was presented as in the wild really made coverage widespread.”

While the malware’s academic nature may serve as a qualifier to those claims, Ransomware 3.0 is one of multiple examples published over the past month detailing how LLMs can be rather easily co-opted into serving as ransomware assistants for low-technical threat actors using relatively simple prompts.

Last month, Anthropic revealed that it recently discovered a cybercriminal using the company’s Claude LLM to “an unprecedented degree” to commit “large scale theft and extortion of personal data.” The threat intelligence report details behaviors by Claude that are similar to what is described by NYU and ESET, with the actor targeting at least 17 different health care, government, emergency services and religious organizations.

“Claude Code was used to automate reconnaissance, harvesting victims’ credentials and penetrating networks,” Anthropic security researchers wrote. “Claude was allowed to make both tactical and strategic decisions, such as deciding which data to exfiltrate, and how to craft psychologically targeted extortion demands.”

Ever since LLMs were introduced, there have been concerns that  cybercriminal enterprises could use them to aid or strengthen their operations. Under the Biden administration, AI companies went to great lengths to assure policymakers that they were building technical guardrails to prevent straightforward misuse or co-opting of their models for cyberattacks.

However, over the past year the Trump administration has signaled that AI safety is not a top priority.  Instead, they are focused on removing regulatory barriers so  American AI companies can compete with China and other global rivals for market dominance.

Since then, researchers have found that the latest AI models released by companies like OpenAI and xAI have had nearly nonexistent safety features in their default models, can be easily jailbroken through rudimentary prompt attacks, and require dedicated security prompting on the front end to prevent data leakage, unauthorized data exfiltration and other common vulnerabilities.

The post NYU team behind AI-powered malware dubbed ‘PromptLock’  appeared first on CyberScoop.

FTC announces settlement with toy robot makers that tracked location of children

By: djohnson
3 September 2025 at 12:11

The Federal Trade Commission announced a settlement Tuesday with a Chinese robot toy manufacturer, following an investigation that charged the company with illegally collecting the location data of U.S. children who buy its products.

In a complaint filed in the U.S. Northern District Court of California, the Department of Justice on behalf of the FTC charged Shenzhen, China-based Apitor Technology — makers of programmable robot toys for children — of violating U.S. federal law by tracking the geolocation of users under the age of 13 through an online app that users download to operate the robots.

Apitor collected this data without informing parents or asking for permission, the FTC said, violating parental consent requirements in the 1998 Children’s Online Privacy Protection Act.

This collection, ongoing since at least 2022, “subjects underage consumers to ongoing harm and deprives parents of the ability to make an informed decision about the collection of their children’s location information,” the FTC alleged in its complaint.

“Apitor allowed a Chinese third party to collect sensitive data from children using its product, in violation of COPPA,” Christopher Mufarrige, director of the FTC’s Bureau of Consumer Protection, said in a statement. “COPPA is clear: Companies that provide online services to kids must notify parents if they are collecting personal information from their kids and get parents’ consent — even if the data is collected by a third party.”

The toys made by Apitor are sold on Amazon and other online marketplaces, marketed to children between the ages of 6-14, and promise educational benefits such as teaching children coding skills.

Apitor robots, marketed to children between 6-14 years of age, were available for sale on Amazon and other online e-marketplaces. (Source: FTC)

Apitor’s products come with a companion application, downloadable on Android and iOS mobile devices, for children to remotely control the robots. It also included a third-party software development kit called JPush, made by a Chinese phone developer and analytics company, that collects “the precise geolocation data for thousands of children,” the agency said.

The company’s own privacy policy expressly affirms its intentions to adhere to U.S. law, at one point stating “[w]e are committed to complying with the Children’s Online Privacy Protection Act” without disclosing the tracking of geolocation data through JPush.

In a proposed order detailing terms of the settlement, Apitor did not admit or deny the allegations but agreed to pay a $500,000 civil fine for previous violations and delete collected geolocation data or obtain express parental consent from each user.

The company also agreed to 10 years of compliance monitoring and must include a “clear and conspicuous” disclosure in any visual, audible or electronic marketing about its robots that it intends to collect geolocation data — or any COPPA-protected personal data — and request explicit consent from parents before doing so.

The post FTC announces settlement with toy robot makers that tracked location of children appeared first on CyberScoop.

❌
❌