
American duo sentenced for hosting laptop farms for North Korean IT workers

By: Greg Otto
7 May 2026 at 09:56


Two U.S. nationals were sentenced to 18 months in prison for running laptop farms that facilitated North Korea’s expansive remote IT workers scheme, the Justice Department said Wednesday.

Matthew Isaac Knoot and Erick Ntekereze Prince both received and hosted laptops at their residences to dupe U.S. companies into thinking the remote IT workers they hired were located in the country. The pair’s separate schemes impacted almost 70 U.S. companies and generated a combined $1.2 million in revenue for the North Korean regime.

“The FBI and our partners will continue to disrupt North Korea’s ability to circumvent sanctions and fund its totalitarian regime,” Brett Leatherman, head of the FBI’s Cyber Division, said in a statement. “These cases should leave no doubt that Americans who choose to facilitate these schemes will be identified and held accountable. Hosting laptops for DPRK IT workers is a federal crime which directly impacts our national security, and these sentences should serve as a warning to anyone considering it.”

Knoot, of Nashville, Tennessee, and Prince, of New York, received the laptops from unsuspecting U.S. companies and installed remote desktop applications on the machines to enable co-conspirators to work from anywhere while appearing to be based at their respective residences.

Prince’s company, Taggcar, was contracted to supply IT workers to victim U.S. companies from June 2020 through August 2024. He pleaded guilty in November 2025 to wire fraud conspiracy for his yearslong involvement in the North Korean IT worker scheme.

Prince was indicted and charged in January 2025 along with his alleged co-conspirators, who collectively obtained work for North Korean IT workers at 64 U.S. companies, earning nearly $950,000 in salary payments. 

A federal judge sentenced Prince Wednesday and ordered him to forfeit $89,000, which is the amount he netted personally. 

Knoot was arrested in August 2024, a year after the FBI searched his home. Officials said he made multiple false and misleading statements and destroyed evidence to obstruct the investigation at that time. 

Victim companies paid North Korean workers linked to Knoot’s laptop farm more than $250,000 from July 2022 to August 2023. The remote IT workers transferred those funds to Knoot and accounts associated with North Korean and Chinese nationals, officials said. 

Knoot was sentenced May 1 and ordered to pay $15,100 in restitution to the victim companies and forfeit an additional $15,100, which is equivalent to the amount of his direct take from the scheme.

The pair join a growing list of people who have been charged and jailed for supporting the regime’s scheme, which generates hundreds of millions of dollars annually for the country’s military and organizations involved in its weapons programs.

Authorities have been cracking down on the malicious insider activity by seizing cryptocurrency linked to the scheme and targeting U.S.-based facilitators who provided forged or stolen identities and hosted laptop farms for North Korean operatives.

The countermeasures are stacking up, but the scheme is widespread and has infiltrated an undetermined number of businesses, including hundreds of Fortune 500 companies.

Federal judges previously sentenced other people to prison for their involvement in the scheme, including Kejia Wang and Zhenxing Wang; Audricus Phagnasay, Jason Salazar and Alexander Paul Travis; and Oleksandr Didenko and Christina Chapman.

“These sentences hold accountable U.S. nationals who enabled North Korea’s illicit efforts to infiltrate U.S. networks and profit on the back of U.S. companies,” John A. Eisenberg, assistant attorney general for national security, said in a statement.

“These defendants helped North Korean ‘IT workers’ masquerade as legitimate employees, compromising U.S. corporate networks and helping generate revenue for a heavily sanctioned and rogue regime,” he added. “The National Security Division will continue to pursue those who, through deception and cyber-enabled fraud, threaten our national security.”

The post American duo sentenced for hosting laptop farms for North Korean IT workers appeared first on CyberScoop.

Space Force official touts AI’s impact on cyber compliance

By: djohnson
14 April 2026 at 16:00

Seth Whitworth, who is both the acting associate deputy chief of space operations for cyber and data and the acting chief information security officer, said he believes AI tools are shifting the way defenders review cyber risk, both for individual systems and more holistically throughout an enterprise.

In particular, large language models can be used to systematically fix the smaller but critical weaknesses that have allowed state-sponsored hackers and cybercriminals to get inside victim networks and live off the land.

“Our adversaries are not looking for the massive cybersecurity vulnerabilities – we’re actually pretty good at [defending] that,” said Whitworth Tuesday at AI Talks, presented by Scoop News Group. “They’re looking for a misconfiguration, a failed update, a tiny little thing that allows them an entry point into a very connected network.”

Many of these basic cyber hygiene problems tend to fall under existing compliance programs, but it can take more than legal mandates to fix them. Many enterprise IT networks – particularly older ones – build up technical debt, leading to forgotten systems, hidden routers and other forms of shadow IT that grow more insecure over time.
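The hygiene gaps described here lend themselves to automated checks. As an illustrative sketch only – the inventory fields, hostnames and threshold below are invented, not any Space Force tooling – a basic compliance scan reduces to a loop over an asset inventory:

```python
from datetime import date, timedelta

# Hypothetical asset inventory; field names are illustrative only.
assets = [
    {"host": "app-01", "mfa_enabled": True, "last_patched": date(2026, 4, 1)},
    {"host": "rtr-legacy", "mfa_enabled": False, "last_patched": date(2024, 11, 2)},
]

# Invented policy threshold: patches older than 30 days are overdue.
MAX_PATCH_AGE = timedelta(days=30)

def audit(assets, today):
    """Flag the small hygiene gaps - a missing control, a failed update."""
    findings = []
    for a in assets:
        if not a["mfa_enabled"]:
            findings.append((a["host"], "MFA disabled"))
        if today - a["last_patched"] > MAX_PATCH_AGE:
            findings.append((a["host"], "patch overdue"))
    return findings

print(audit(assets, date(2026, 4, 14)))
```

Only the forgotten legacy router trips findings here; in practice the value of LLM-driven tooling would be generating and maintaining such checks across thousands of heterogeneous, partially documented systems.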

Cybersecurity experts say agents and the large language models that power them – which can operate around the clock – are particularly well-suited to finding these smaller flaws and quickly exploiting them.

But Whitworth argued that the same technology can be used to reshape how organizations measure and track cyber compliance, turning it from a sluggish box-checking exercise into something more nimble and substantive. He claimed that the Space Force’s internal process for obtaining Authorities to Operate and other formal security certifications used to take three to 18 months but “can now be done in weeks and days.”

That in turn can empower program managers to “pull in all of that massive amount of data, allow the AI – who doesn’t get tired, who doesn’t miss patterns, who doesn’t miss these components – to churn on those items and [then] deliver something” that can inform real-time changes to cybersecurity, he said.

Whitworth also acknowledged the “fear” that many organizations still have around the use of AI, as well as lingering concerns about some of the technology’s enduring limitations like hallucinations and data poisoning. He said he still gives AI-generated outputs “extra scrutiny, because I haven’t seen the trusted validation” yet.

But he also said he gets more valuable insight on the Space Force’s holistic cyber risk from using large language models than he does from other security control assessments, which tend to focus narrowly on the risk of single systems or assets in isolation.

“We are operating in a highly connected, highly orchestrated world, and so moderate risk that’s accepted in one program immediately becomes moderate risk that is accepted in another program,” said Whitworth. “AI can take that whole picture and understand that when this system change impacts this system, it also impacts this [other] system.”


Commerce setting up new AI export regime to push adoption of ‘American AI’ abroad

By: djohnson
10 April 2026 at 15:40

The Department of Commerce is putting together a catalog of AI tools that will be given special export status by the federal government to be sold abroad.

The department issued a call for proposals to participating companies in the Federal Register, looking to create a “menu of priority AI export packages that the U.S. Government will promote to allies and partners around the world.”

The companies and technologies included “will be presented by U.S. Government representatives as a standing, full-stack American AI export package and may receive priority government advocacy, export licensing review and processing, interagency coordination, and financing referrals, subject to applicable law,” the department said in a Federal Register notice Friday.

The export package was mandated through President Donald Trump’s AI executive order last year, which described the export packages as part of a larger effort to “ensure that American AI technologies, standards, and governance models are adopted worldwide” and “secure our continued technological dominance.”

“The American AI Exports Program delivers on President Trump’s directive to ensure that American AI systems – built on trusted hardware, secure data, and world-leading innovation – are deployed at scale around the world,” Secretary of Commerce Howard Lutnick said in a statement earlier this month. “By promoting full-stack American solutions, we are strengthening our economic and national security, deepening ties with allies and partners, and ensuring that the future of AI is led by the United States.”

The executive order called for certain technologies to be included in the package, including AI models and systems but also computer chips, data center storage, cloud services and networking services, along with unspecified “measures” to ensure security and cybersecurity of AI systems.

The Commerce notice envisions offering multiple packages of AI technology from “standing teams of AI companies organized to offer a complete American AI technology stack to foreign markets on an ongoing basis.” There is no limit on the number of companies that participate in a consortium, and Commerce said there isn’t “any particular legal structure” required.

While the proposal at several points refers to these packages as “American AI,” the notice does specify that foreign companies can participate.

In fact, for certain categories like hardware, the total level of U.S.-made content only needs to be 51% or greater. Member companies providing data, software, cybersecurity or application layer services can’t be incorporated or primarily based in countries like China or Russia, where national security laws may compel them to work with foreign governments or hand over sensitive data.

The potential business would be broad, covering foreign public and private sector buyers in global, regional, and country-specific markets. It also includes the potential formation of separate, “on demand” packages of companies and products meant for “specific foreign opportunities.”

But the notice also states that final decisions will be made on the basis of “national interest” by principals at the Departments of Commerce, State, Defense and Energy, as well as the White House Office of Science and Technology Policy.

Commerce does not intend to formally rank proposals or use fixed scoring formulas to approve packages of technology for the export program, and the language in the notice appears to give wide latitude to federal decisionmakers to determine whether a particular proposal meets the “national interest” threshold.

“A proposal that undertakes reasonable efforts to satisfy the 51 percent hardware U.S.-content presumption is not automatically entitled to designation, and a proposal that does not satisfy that presumption is not automatically disqualified,” the notice said. 


Don’t just fight fraud, hunt it

By: Greg Otto
9 April 2026 at 08:00

Our nation has entered a new fraud arms race fueled by AI.

With billions of dollars in fraud losses mounting in both the private and public sectors, it’s clear the old ways of deterring fraud aren’t working. That’s why we need a new playbook that starts with understanding how fraudsters operate, evolving our defenses, and shifting to a proactive posture that doesn’t just fight fraud but actively hunts it down. 

In the AI era, treating fraud as just a front-door problem won’t work. This moment requires industry, government, and consumers to work together, reduce silos, and share real-time intelligence. The goal is to move beyond reactive detection by understanding the lifecycle of a threat—from its formation to its spread—so we can intervene before it establishes a foothold.

For decades, fraud has been treated like a series of isolated incidents. This false assumption has underpinned nearly every past effort to crack down on it. Those efforts, while well-intentioned, have missed the mark. 

Now, in light of the Trump Administration’s Cyber Strategy for America and accompanying executive order, it’s critical to understand the modern fraud landscape and the central role that digital identity exploitation plays within it.

New research from Socure reveals just how dramatically the landscape is evolving. 

Fraud has become industrialized, with organized crime syndicates running operations that are global, systemic, automated, and powered by AI. No organization, service, or program is safe. Fraudsters target government programs, banks, fintech platforms, telecom companies, and more, blurring the lines between public sector fraud, financial crime, and cybercrime.

It used to be that fraud could be detected through the reuse of identity elements across multiple applications: the same email, device, phone number, or IP address used over and over. 

But the data is clear: these links are declining fast. Sophisticated fraudsters now engineer their attacks to avoid traditional fraud detection patterns. Our research demonstrates that emails will be completely unique within fraud populations by as soon as 2027, at which point we won’t be able to rely on email to identify patterns.
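The traditional reuse signal described above – the same email, device or IP appearing across multiple applications – amounts to a simple link analysis. This is a hypothetical sketch with made-up records, not Socure’s detection model:

```python
from collections import Counter

# Hypothetical application records; real systems join many more signals.
applications = [
    {"id": 1, "email": "a@x.com", "device": "d1", "ip": "10.0.0.1"},
    {"id": 2, "email": "a@x.com", "device": "d2", "ip": "10.0.0.1"},
    {"id": 3, "email": "b@y.com", "device": "d3", "ip": "10.0.0.9"},
]

def reused_elements(apps, fields=("email", "device", "ip")):
    """Return identity elements that appear on more than one application."""
    counts = Counter((f, app[f]) for app in apps for f in fields)
    return {key for key, n in counts.items() if n > 1}

# Here the shared email and IP link applications 1 and 2.
print(reused_elements(applications))
```

As fraudsters make every email and device unique per application, this set shrinks toward empty, which is exactly why reuse-based detection is losing power.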

Speed is another defining feature of modern identity fraud. Fraudsters use AI to create clean, durable, synthetic and stolen identities at scale. In one observed campaign, 24,148 synthetic identities were built and launched in under a month, with many attacks occurring within 48 hours. What once took weeks or even months can now be completed in days. 

The rapid rise of identity farms is another indicator of the industrialization of fraud. Identity farms are operated by crime rings to systematically create and mature synthetic or stolen identities over time until they closely resemble legitimate ones. Matured identities are used to open bank, credit, and money-movement accounts, siphon government benefits, launder funds, and more. These identity farms focus on durable identities that can bypass traditional verification controls.

So what should we do? Simply put, we must go on offense. 

This means treating identity as critical infrastructure and implementing strategies that track how identities were created before the moment of application; expanding signals monitoring to include elements like residential proxies, ISP behavior, and domain registration activity; evaluating velocity and orchestration in real-time; and treating continuous measurement, rapid model iteration, and cross-industry intelligence as core capabilities.
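One of those strategies, evaluating velocity in real time, comes down to counting how often a signal appears inside a sliding time window. A minimal sketch, with invented window and threshold values rather than production settings:

```python
from collections import defaultdict, deque

class VelocityMonitor:
    """Flag a signal (device, IP, etc.) seen too often in a time window.
    Window and threshold are illustrative, not production values."""

    def __init__(self, window_seconds=3600, threshold=3):
        self.window = window_seconds
        self.threshold = threshold
        self.events = defaultdict(deque)  # signal -> recent timestamps

    def observe(self, signal, ts):
        q = self.events[signal]
        q.append(ts)
        while q and ts - q[0] > self.window:  # drop events outside window
            q.popleft()
        return len(q) >= self.threshold       # True = suspicious burst

m = VelocityMonitor()
# Three applications from one device within an hour trip the flag;
# a later, isolated event does not.
hits = [m.observe("device-d1", t) for t in (0, 600, 1200, 7200)]
print(hits)
```

The same windowed counting generalizes to orchestration signals such as residential-proxy churn or bulk domain registrations mentioned above.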

Additionally, given the rapid scaling of fraud, we need more analysis of the complete ecosystem, including dynamic factors like device information, digital footprints, and behavioral biometrics so organizations can effectively distinguish genuine humans from machines. Ultimately, this layered and interconnected approach makes it significantly harder for malicious actors to recreate or steal identities at scale.

Fraud is no longer a series of isolated acts. It is a coordinated, global enterprise built on the exploitation of identity. Until our efforts reflect this new reality, we will continue to fight an imminent and ongoing threat with outdated tools and fall further behind. 

Now is the time to make this strategic shift and finally put fraudsters on their heels. 

Mike Cook serves as head of fraud insights at Socure, the identity and risk platform for the AI age.


Officials worry Salt Typhoon apathy is killing momentum for tougher telecom security rules

By: djohnson
12 March 2026 at 11:24

Two years ago, it was revealed that Chinese hackers had compromised at least ten U.S. telecoms, giving them broad access to phone data affecting nearly all Americans. Since then, public officials charged with responding to the campaign and bolstering the nation’s cyber defenses have reported a common problem.

Many of their constituents struggle to understand why the hacks – carried out by a group called Salt Typhoon – should rank among their top concerns, or how it impacts their day to day lives.

Some state and federal officials worry that this lack of interest is depriving policymakers the public pressure needed to build momentum for stronger action to improve the nation’s telecommunications cybersecurity.

Mike Geraghty, the CISO and director of the New Jersey Cybersecurity and Communications Integration Cell, said New Jersey is the nation’s most densely populated state, with a high concentration of critical infrastructure and a major telecommunications footprint. For that reason, a campaign like Salt Typhoon should, in theory, be of strong interest to Garden State residents.

“However, if you talk to a person on the street in New Jersey, they’ll say who cares that the Chinese are looking at – you know – what numbers I call?” he said Wednesday at the Billington State and Local Cybersecurity Summit. “It has a big role to play in my job, but trying to get people to understand what that means for New Jersey is really difficult.”

Congress hasn’t passed comprehensive privacy legislation in decades. Meanwhile, cyberattacks that expose sensitive data are widespread, and U.S. companies routinely collect and sell customers’ personal information. Some officials speculate that, taken together, these trends have left Americans numb to data theft and data-for-profit, so additional breaches feel like just another drop in the bucket.

Mischa Beckett, deputy chief information security officer and director of cyber threat intelligence at GDIT, said Salt Typhoon’s focus on telecom data can feel like an abstract threat to many Americans. By contrast, other Chinese hacking campaigns like Volt Typhoon suggest potential damage to water plants and electric grids that are easier to grasp.

“It’s maybe a little bit easier to write off a loss of data … and move on, as unfortunate but no big deal,” said Beckett. “I think that case is much harder to make when we’re talking about pre-positioning and critical infrastructure, things that touch all of our lives every day.”

Last year, a former intelligence official at the Office of the Director of National Intelligence told CyberScoop that a lack of outrage from the public following the Salt Typhoon attacks was dampening momentum for broader regulation or reforms to telecom cybersecurity.

“We can’t accept this level of espionage on our networks,” said Laura Galante, who led the Cyber Threat Intelligence Integration Center under the Biden administration. “If you had 50 Chinese [Ministry of State Security] spies or contractors sitting inside a major [telecom company’s] building, they would be walked out and it would be a full-scale effort. That’s in broad strokes what has happened, but the access was digital.”


The Caracas operation suggests cyber was part of the plan – just not the whole operation

By: Greg Otto
19 February 2026 at 06:00

The dominant narrative has framed the Jan. 3 Caracas power outage during the mission to capture Venezuelan leader Nicolás Maduro as a “precision cyberattack.” But publicly available information points to a more complicated picture: videos, photographs, and accounts published from Caracas show significant physical damage to at least three Venezuelan substations. Experts who reviewed that material say the observed kinetic damage could, on its own, account for the outages—raising questions about how much of the outage can be confidently attributed to cyber activity alone.

These experts say Operation Absolute Resolve appears to have involved more than a stand-alone “cyber blackout,” despite the framing of many early accounts. In their view, cyber operations may have played some role, but the visible physical attacks alone could plausibly explain the outages—and that kinetic dimension is largely absent from the dominant narrative.

Retired Rear Adm. Mark Montgomery, a former director of operations at US Indo-Pacific Command and now a senior cybersecurity expert at the Foundation for the Defense of Democracies, described the outage to CyberScoop as part of “a campaign that likely took months to source cyber targets, days to work kinetic targets, and then integrated them into a single campaign plan that took a night.”

How the outage is framed matters because it can shape accountability, influence how governments and utilities prioritize grid security, and affect perceptions of offensive cyber capabilities. If the episode is widely presented as a “cyber-only” success without clear, corroborated evidence, it may encourage outsized conclusions about what cyber tools can accomplish on their own. Over time, that framing can steer policy and spending toward the wrong lessons—emphasizing digital defenses while giving less attention to physical vulnerabilities that may be just as consequential.

How ‘cyber blackout’ became the headline

Immediate coverage of the operation largely treated cyber as the decisive cause of the outage. Much of that framing traced back to a cryptic line from President Donald Trump at a post-operation press conference: “It was dark, the lights of Caracas were largely turned off due to a certain expertise [emphasis added] that we have, it was dark, and it was deadly.” (Later, Trump suggested that the lights were turned out in Caracas by a “discombobulator.”)

The cyber narrative gained further momentum when Chairman of the Joint Chiefs of Staff Gen. Dan Caine said at the same press conference that US Cyber Command and Space Command provided “layering effects” for the operation. One widely cited report went further, citing anonymous “people briefed on the matter” to assert that a US cyberattack caused the blackout without offering forensic evidence, technical details, or independent corroboration.

Neither the Pentagon nor Cyber Command has publicly confirmed that a cyberattack caused the grid outage. US Cyber Command referred CyberScoop to the Department of War, which did not respond to our queries.

The grid damage is visible, not virtual

While cyber attribution largely rested on anonymous sourcing and inference, the evidence of physical damage was public, visual, and documented shortly after the attack.

Beginning on Jan. 5, publicly shared videos and photos appeared to show extensive physical damage at substations in Caracas owned by the government’s energy utility company, Corpoelec. The images included apparent bullet impacts, destroyed equipment, blown doors, and oil leaks at the Panamericana 69 kV and Escuela Militar 4.8 kV sites. In Venezuelan government statements, officials attributed the incidents to an attack and said the damage took multiple transmission lines out of service, including the OAM-Vega Caricuao-Panamericana 1 and 2 (69 kV) and Junquito-Panamericana 1 and 2 (69 kV). Electric grid security experts who reviewed the footage told CyberScoop it appeared credible and consistent with the kind of damage that could contribute to localized outages.

Local journalists noted physical attacks on these facilities, as well as a third substation at Fuerte Tiuna, a military installation in Caracas. Videos showing damage to the Fuerte Tiuna substation—some with fires still burning—were uploaded to YouTube on Jan. 12. AirWars, a not-for-profit group that describes itself as a civilian harm watchdog in conflict-affected nations, confirmed the geolocation of the affected substations and said “heavy weapons and explosive munitions” were used, though it reported no civilian harm.

The Venezuelan government did not respond to CyberScoop’s requests for comment, but it said in a press release that the damage was caused by “missiles.” Several experts with military or electric-sector cybersecurity backgrounds told CyberScoop that, based on what’s visible in the videos, the damage appears consistent with a kinetic attack—most likely carried out via helicopters and planes.

“There were obviously pretty large .50-caliber bullet holes in the walls,” Earl Shockley, president and CEO of INPOWERD, a military veteran and cybersecurity expert who worked for forty years as a power-grid operations engineer, told CyberScoop after viewing one of the videos.

“That’s a kinetic attack,” FDD’s Montgomery told CyberScoop after watching video of the Fuerte Tiuna substation incident.

Across interviews, grid operators, cybersecurity specialists, and military experts independently reached the same conclusion: the visible physical damage alone was enough to cause the outages observed.

An easy target, cyber or not

Experts note that cyber operations can sometimes produce kinetic effects—as they did in the highly complex US-Israeli operation known as Stuxnet—but they also say that taking down Caracas’s already fragile power grid would not necessarily have required that level of sophistication.

“All of us who are electric sector people, we’ve seen the videos,” Patrick Miller, president and CEO of Ampyx Cyber, told CyberScoop. “We’re all pretty much convinced that would definitely cause an outage. If you’re going to go in and shoot up the substations, why do you need cyber again?”

Miller said that temporarily disrupting the flow of power is a well-understood capability for any nation with the interest to do it, and that it often requires almost no precision or skill. “These are fragile systems,” he said.

“This was not a hard cyber target,” Montgomery said. “It’s an easy cyber target. These are older systems that we have worked on before in other countries. They’re not unique. We’re not talking about taking down Idaho National Labs here. We’re talking about taking down a poorly defended, underfunded, under-resourced network.”

Ron Brash, an operational technology and industrial control systems expert, told CyberScoop, “These energy management systems are probably relatively easy to infiltrate either because they haven’t updated the software or updated what they need to update, and you can exploit the vulnerabilities, or because you buy insider access.” Moreover, he said, “There’s probably so much analog stuff in there from the 1960s.”

Cyber to blind, kinetic to break

Experts generally agree that physical damage likely disabled at least parts of the power grid. But they also think cyber activity may still have played an important supporting role in Operation Absolute Resolve—one that could have enabled or amplified the operation, even if it wouldn’t fully account for where the outages occurred or how long they lasted without accompanying physical damage.

Some experts say that it’s possible the US used cyber capabilities to briefly disrupt power transmission in specific areas—potentially to reduce Venezuelan defenders’ situational awareness as they moved toward Maduro’s compound. “You want to reduce situational awareness, blind the enemy, break their coordination, and enable yourself to maneuver where you need to be. And all of those things just played out with that operation,” Shockley said.

“If we shut down the radars, if we shut down the power grid, they don’t see what’s going on,” he said. “Then we do some kinetic damage to prevent them from bringing the grid back up quickly. That way, we have plenty of time to do what we need to do.”

“A cyberattack is reversible, so it’s temporary,” Montgomery said. “It’s possible that cyber was attempted to take down power stations and equipment before the missiles came in to take down the power stations and equipment,” he added. “You have missiles coming in and taking down power, so nothing works. And before that, you do cyber so that more of your missiles get through. It is kind of a layer to the attack.”

Vice Adm. Heidi Berg, commander of 10th Fleet/Fleet Cyber Command, hinted at such layering at the WEST conference in San Diego earlier this week.

Cyber-based surveillance may also have been used for months in advance, giving the US military visibility into the grid’s weak points and helping inform where kinetic strikes would have the greatest effect. “It takes months to identify what the system does, what the software does, do we have access to their older systems,” and so forth, Montgomery said.

“If you monitor that system, you learn where the power flows go, you learn where the single points of failure are, you learn that if this thing blows up, man, I’m in trouble because I can’t get power from this area to that area,” Shockley said.

Trump said at the press briefing that the lights went out in Caracas, and some coverage interpreted that as widespread darkness across large parts of the city. That framing sits uneasily with the idea of narrowly targeted, area-specific disruption. At the same time, social media posts and news accounts from the incident did not indicate that a large portion of Caracas was plunged into darkness.

Valentina Aguana, a Venezuelan digital rights advocate and systems engineer now working in Spain, told CyberScoop that a widespread blackout “was never a thing for my team working in Venezuela. There were very few areas in which the power went down and it came back on in a few minutes,” the kind of quick restoration one would expect from a pure cyberattack. “All the areas that were left without power were left without power for a couple of hours,” she added, which experts say is consistent with a kinetic attack.

“I haven’t seen any real proof or even correlating proof that the outage was widespread,” Miller said, adding that he has an extensive network of electric system security contacts throughout South America.

What gets lost in a cyber-only framing

Given how quickly and widely videos, press releases, and other confirmation of physical damage to the Venezuelan substations circulated, it remains unclear why so many outlets gave little attention to the kinetic dimension of the outage.

Whatever the source of the omissions, recent reporting on Pentagon computer warfare doctrine has underscored that cyber operations are increasingly designed to shape battlefield conditions rather than function as stand-alone weapons, an approach that aligns with the expert assessments of the role of kinetic attacks in the Caracas operation.

However, continued accounts of what happened in Caracas that treat the sabotage as primarily “cyber” could skew risk assessments and preparedness—potentially leaving substations, transmission lines, and transformers less protected than they should be against the kind of real-world attacks that visible damage suggests are possible.

“This was a very complex thing, and it wasn’t just one thing; it wasn’t just a cyberattack,” Shockley said. “In my industry, we have regulations around how we’re supposed to protect our critical infrastructure, our substations, our power plants, our control centers. Physical security is a big thing that we do. We do physical security inspections, and we make recommendations.”


Cantwell claims telecoms blocked release of Salt Typhoon report 

By: djohnson
3 February 2026 at 18:09

More than a year after national security officials revealed that Chinese hackers had systematically infiltrated U.S. telecommunications networks, the top Senate Democrat on the committee overseeing the industry is calling for hearings with executives from the nation’s biggest telecom companies.

In a public letter released Tuesday, Sen. Maria Cantwell, D-Wash., called for the CEOs of Verizon and AT&T to appear before Congress and explain how the hacking group known as Salt Typhoon breached their networks, as well as what steps they’ve taken to prevent another intrusion.

“For months, I have sought specific documentation from AT&T and Verizon that would purportedly corroborate their claims that their networks are now secure from this attack,” Cantwell wrote to Sen. Ted Cruz, R-Texas, who is the Chair of the Senate Commerce, Science and Transportation Committee. “Unfortunately, both AT&T and Verizon have chosen not to cooperate, which raises serious questions about the extent to which Americans who use these networks remain exposed to unacceptable risk.”

Salt Typhoon’s intrusion into telecom networks exposed major security weaknesses and put sensitive communications and data belonging to U.S. politicians and policymakers at risk. The federal government has done little since to hold the industry publicly accountable.

Congress has neither proposed nor passed meaningful legislation to address the issue. While a handful of federal departments and agencies began public regulatory and oversight reviews, most of those efforts have been shut down or rolled back.

An investigation by the Cyber Safety Review Board at the Department of Homeland Security into the intrusions was abruptly stopped when the Trump administration eliminated the advisory body. One former member remarked recently that the failure to finish the investigation ranked among her biggest career regrets.

Weeks before President Joe Biden left office, his Federal Communications Commission issued emergency regulations aimed at holding telecom companies legally responsible – under federal wiretapping laws – for securing their communications. The rules would have also required carriers to file annual certifications with the FCC confirming they have cyber risk management plans in place. That certification would include addressing common security gaps, like lack of multifactor authentication, that are widely believed to have been exploited by Salt Typhoon.

While outgoing Chair Jessica Rosenworcel told CyberScoop the rules were badly needed to hold telecoms accountable for their cybersecurity, Brendan Carr, an FCC commissioner and Rosenworcel’s successor as chair, rescinded those rules, arguing they were unnecessary because the FCC and telecoms could work together voluntarily on cybersecurity. Another commissioner, Anna Gomez, told CyberScoop she had seen no evidence her agency had been meeting with telecoms on the issue.

At a hearing in December, Cruz endorsed the FCC’s elimination of the rules, arguing that improving the nation’s telecom cybersecurity “doesn’t come from imposing outdated checklists and top down regulations, it arises from a strong partnership between the private sector and government, working together to detect and deter attacks in real time.”

Cantwell, citing reporting from CyberScoop and other sources, argued that “telecommunications providers have taken few protective actions thus far due to the costs involved” and said the committee “must hear directly from the CEOs of AT&T and Verizon so Americans have clarity and confidence about the security of their communications.”

According to Cantwell, she has already requested documentation from AT&T CEO John Stankey and then-Verizon CEO Hans Vestberg on how they’ve responded to the breaches. Both confirmed that Mandiant, Google Cloud’s incident response and threat-intelligence division, wrote a report, one that Cantwell said “would presumably document the vulnerabilities identified and detail what corrective actions” telecoms took to improve their privacy and security.

She claimed after requesting the report from Mandiant, AT&T and Verizon “apparently intervened to block Mandiant from cooperating with my requests.”

AT&T and Verizon representatives did not immediately respond to a request for comment.

The post Cantwell claims telecoms blocked release of Salt Typhoon report  appeared first on CyberScoop.

Sean Cairncross’ cybersecurity agenda: less regulation, more cooperation

By: Greg Otto
3 February 2026 at 12:49

The Trump administration needs help from industry to reduce the cybersecurity regulatory burden and to back important cyber legislation on Capitol Hill, among other areas, National Cyber Director Sean Cairncross said Tuesday.

“You know your regulatory scheme better than I do: Where there’s friction, where there’s frustration with information sharing, what sort of information is shared, the process through which it’s shared,” he said. “It is helpful for us to hear that and have that feedback so that we can address it, engage it and try to make it better.”

The Trump administration is interested in being a partner with industry rather than a “scold,” Cairncross said at an Information Technology Industry Council event. The Biden administration sought to impose more cybersecurity rules on the private sector than prior administrations.

Cairncross also called on industry to help pass the Cybersecurity Information Sharing Act of 2015, which has expired and dealt with short-term extensions in recent months as Congress stalls on what to do with a law that provides legal protections to companies that share cyber threat data with the government and each other.

The Trump administration would like to see the law extended as-is for 10 years.

“What we need from industry is an echo chamber up on the Hill to help make that happen,” he said. “I can go tell people how important this is, or the White House can weigh in, and we have done that. But when the people who are actually affected by this start to weigh in with members, that has an even greater impact.”

Overall, Cairncross wants industry to “show up and engage,” he said, as the administration has done with its forthcoming cybersecurity strategy, something he said would be rolled out “sooner rather than later.”

“Reach out to us,” he urged. “We will certainly be reaching out how we have gone about this strategic piece of this. Just from the outset, we have had a heavy industry engagement side of this and looked for feedback and thoughts. It’s been extremely helpful, and hopefully it has been successful in sending the message that we want to, which is, we are here to do everything we can to partner with industry.”

The post Sean Cairncross’ cybersecurity agenda: less regulation, more cooperation appeared first on CyberScoop.

HackerOne rolls out industry framework to support ‘good faith’ AI research

By: djohnson
20 January 2026 at 15:59

Four years ago, the Department of Justice announced it would no longer seek criminal charges against independent and third-party security researchers for “good faith” security research under the Computer Fraud and Abuse Act.

Now, a prominent bug bounty platform is attempting to build a framework for industry to offer similar protections to researchers who study flaws in AI systems, including fields like AI safety and others that look at unintended behaviors and outputs that can impact security outcomes.

Ilona Cohen, chief legal and policy officer at HackerOne, told CyberScoop the Good Faith AI Research Safe Harbor is meant to build off previous efforts — like the DOJ policy change and the company’s own Gold Standard Safe Harbor framework — that provide wider legal freedom for third-party security researchers to prod and test commercial products and systems for flaws and expand them to the AI realm.

HackerOne previously pushed the DOJ to provide further guidance on how its good faith researcher policy would apply to AI systems. Cohen said the department’s language “provides a lot of clarity and helped security researchers have the comfort to be able to do the testing that’s so important to the backbone of our security industry, [but] it doesn’t necessarily apply to all AI research.”

The DOJ’s policy change in 2022 represented a hard-fought victory following years of advocacy by the cybersecurity community. Without further guidance from DOJ, Cohen said it was important for industry to do the same foundational work around advocacy and governance for AI testing that helped good faith hackers convince the agency that independent researchers are an asset to the broader cybersecurity ecosystem.

Participating companies can attach a “banner” to their HackerOne profile advertising their adoption of the protections, which commit them to, among other things, “refraining from legal action … and supporting researchers if third parties pursue claims related to authorized research.”

Even as the Trump administration signals little interest in safety or security issues around AI systems, other policymakers have said strong protections and guardrails should be one of the key differentiators when convincing other countries to adopt U.S.-made AI systems and models over authoritarian competitors like China. Cohen said it was especially critical to open testing of AI systems when they’re being broadly adopted across society.

“Since AI systems are essentially deploying a lot faster than any of the governance or legal frameworks can keep up, that creates some risk … for all of us when people are reluctant to do testing of AI systems,” Cohen said.

Frontier AI companies like OpenAI and Anthropic have generally kept a tighter grip on their own security research programs.

OpenAI, for instance, runs its own network of third-party red team researchers, vetting and selecting them through an application process. According to the company’s website, red-team engagements are commissioned by OpenAI and can be steered to different researchers at the company’s discretion, with some members participating as little as five to 10 hours per year. Researchers can also apply under a separate program that focuses on issues like AI safety and misuse.

Anthropic’s responsible disclosure policy defines “good faith” third-party security research as testing information systems “for the sole purpose” of identifying a reportable vulnerability. As such, researchers are expected to only take actions that are “minimally required to reasonably prove that such potential vulnerability exists” and avoid actual harmful actions, such as exfiltrating or deleting data.

It also requires the researcher to “avoid disclosing the existence of or any details relating to the discovered vulnerability to a third party or to the public” without “notice” from the company.

“We fully support researchers’ right to publicly disclose vulnerabilities they discover,” the terms state. “We ask only to coordinate on the timing of such disclosures to prevent potential harm to our services, customers and other parties.”

Anthropic’s terms also broadly disclaim liability for any negative outcomes related to the use or integration of its products, using all caps to emphasize that it will “EXPRESSLY DISCLAIM” all warranties of fitness its products may have in areas like “ACCURACY, AVAILABILITY, RELIABILLITY, SECURITY, PRIVACY, COMPATABILITY [and] NON-INFRINGEMENT.”

OpenAI and Anthropic did not respond to a request for comment by the time of publication.

The post HackerOne rolls out industry framework to support ‘good faith’ AI research appeared first on CyberScoop.

The quiet way AI normalizes foreign influence

By: Greg Otto
15 January 2026 at 09:30

Americans are being taught to trust propaganda. Often, it’s not intentional. A classic bit of advice for separating propaganda from real research is “Check the citations.” If the sources support the analysis, the material can be trusted. But AI is changing the rules of the game.

In December, the White House announced new guidance to ensure that AI tools procured for government use are “truthful” and “ideologically neutral,” including transparency around citation practices. But even with this new oversight, there is a structural issue that the memo can’t fix: authoritarian states are optimizing their propaganda for AI consumption while America’s most credible news sources are actively blocking AI tools. This means that even ideologically neutral AI directs users toward state-aligned propaganda — simply because that is what is freely available.

Those who trust AI citations wind up trusting propaganda while believing they are doing responsible research.

Most large language models (LLMs) provide sources along with their analysis. But these models do not choose what sources to cite based on credibility. Rather, they choose based on availability. Many of the best sources, like top U.S. news outlets, are behind paywalls or are blocking the automated systems that AI uses to scan and collect information. These legacy media companies are slowly litigating and negotiating individual licensing deals with AI unicorns.

Authoritarian states, on the other hand, have optimized their content for accessibility. State-run media, like Qatar’s Al Jazeera, or Russian and Chinese outlets published in English, are free. That results in students, academics and federal analysts seeking to understand Gaza, Ukraine, or Taiwan being more likely to engage with state-backed propaganda than independent journalism.

Research from the Foundation for Defense of Democracies analyzing three major LLMs (ChatGPT, Claude, and Gemini) found that 57 percent of responses to questions about current international conflicts cited state-aligned propaganda sources.

When AI tools answer questions about contested conflicts — including Gaza, Ukraine, and Taiwan — they draw on enormous training data. While not perfect, the responses are often more nuanced than any one commentator or media outlet. But LLMs then funnel their hundreds of millions of users to a narrow subset of sources that they serve up as citations. FDD research found that 70 percent of neutral questions about the Israel-Gaza conflict yielded Al Jazeera citations.

This isn’t a minor technical flaw — citations are the attribution architecture shaping what Americans learn to trust.

While Western legacy media certainly carries its own biases, there is a crucial difference between editorial bias and state-controlled narratives. In 2024 alone, Russia-backed propaganda aggregator Pravda flooded the internet with more than 3.6 million articles from pro-Kremlin influencers and government spokespeople, in order to saturate the space with pro-Russian narratives.

AI sometimes fabricates information, or “hallucinates,” and that presents real risks. But urging people to “check the linked sources” can end up steering them straight to state-controlled media. Those links aren’t citations in the traditional sense — they are traffic directions. And the traffic they generate turns into revenue, which ultimately determines which news outlets survive. AI platforms are becoming the internet’s traffic arbiters, and right now they’re systematically directing traffic away from independent journalism and toward state-controlled propaganda.

AI companies must bring credible journalism into their systems. There is no question that quality journalism requires resources and revenue to survive. Unfortunately, the licensing deals that are being negotiated now between LLM companies and media outlets are moving slowly. Every delay allows citation patterns to harden while we are increasingly vulnerable to foreign influence.

There’s no silver bullet, but a patchwork of solutions can help. The White House has already taken a strong stance by requiring agency heads to restrict AI procurement to LLMs that are “ideologically neutral” and not “in favor of ideological dogmas.” Vendors selling to the U.S. government should present data on citation influence.

An LLM literacy campaign is needed so users understand citation bias. But awareness alone isn’t enough — AI companies should give lower priority to state-controlled media in their outputs and label them as such. And as LLMs evolve from being a consumer technology into a common infrastructure like the internet itself, citation patterns should be considered in AI safety frameworks — because a healthy democratic society needs a broad array of media sources, and that means independent journalism will always need support.

Leah Siskind is director of impact and an AI research fellow at the Foundation for Defense of Democracies.

The post The quiet way AI normalizes foreign influence appeared first on CyberScoop.

DOJ announces takedown of alleged laundering platform used by cybercriminal groups

By: Greg Otto
17 December 2025 at 16:54

Federal prosecutors in Michigan say they have dismantled online infrastructure tied to an alleged money laundering operation that moved tens of millions of dollars in proceeds from ransomware and other cybercrime, along with indicting the service’s creator.

The U.S. Attorney’s Office for the Eastern District of Michigan announced a coordinated action with international partners and the Michigan State Police targeting E-Note, a cryptocurrency exchange and payment processing service used to launder illicit funds. The announcement coincided with the unsealing of an indictment charging a Russian national, Mykhalio Petrovich Chudnovets, with one count of money laundering conspiracy.

Authorities allege that Chudnovets controlled and operated E-Note and offered money-laundering services to cybercriminals for years, first as a more personal operation and later through a more scalable online platform. Prosecutors say he began providing laundering services in 2010 and ran a network between about 2011 and 2025. Court documents describe an evolution common in cybercrime ecosystems: services that start as ad-hoc arrangements using “money mules” can become streamlined online businesses that lower the barriers for criminals looking to move funds quickly and quietly across borders.

Authorities did not say whether Chudnovets is in U.S. custody, and the announcement did not indicate that he had been arrested, suggesting he may still be in Russia.

The FBI said it identified more than $70 million in illicit proceeds from ransomware attacks and account takeovers transferred via the E-Note service and associated money-mule network since 2017. The government said the funds included money stolen or extorted from U.S. victims, including organizations in health care and critical infrastructure — sectors that have faced mounting pressure from ransomware groups because of the potentially severe consequences of disrupted services.

As part of the operation, law enforcement seized servers hosting the alleged operation, along with mobile applications and websites listed as “e-note.com,” “e-note.ws,” and “jabb.mn.” U.S. authorities also said they had obtained earlier copies of servers that included customer databases and transaction records, suggesting investigators may be positioned not only to map past flows of money, but also to identify networks of users and intermediaries that relied on the service.

The indictment charges conspiracy to launder monetary instruments, an offense that carries a maximum penalty of 20 years in prison.

The Justice Department credited cooperation from the German Federal Criminal Police Office and the Finnish National Bureau of Investigation, along with Michigan State Police and its Michigan Cyber Command Center.


The post DOJ announces takedown of alleged laundering platform used by cybercriminal groups appeared first on CyberScoop.

Washington Post confirms data on nearly 10,000 people stolen from its Oracle environment

13 November 2025 at 12:30

The Washington Post said it, too, was impacted by the data theft and extortion campaign targeting Oracle E-Business Suite customers, compromising human resources data on nearly 10,000 current and former employees and contractors.

The company was first alerted to the attack and launched an investigation when a “bad actor” contacted the media company Sept. 29 claiming they gained access to the company’s Oracle applications, according to a data breach notification it filed in Maine Wednesday. The Washington Post later determined the attacker had access to its Oracle environment from July 10 to Aug. 22. 

The newspaper is among dozens of Oracle customers targeted by the Clop ransomware group, which exploited a zero-day vulnerability affecting Oracle E-Business Suite to steal heaps of data. Other confirmed victims include Envoy Air and GlobalLogic.

The Washington Post said it confirmed the extent of data stolen during the attack on Oct. 27, noting that personal information on 9,720 people, including names, bank account numbers and routing numbers, and Social Security numbers, was exposed. The company didn’t explain why it took almost a month to determine the amount of data stolen and has not responded to multiple requests for comment. 

Oracle disclosed and issued a patch for the zero-day vulnerability, CVE-2025-61882, affecting Oracle E-Business Suite in a security advisory Oct. 4, and previously said it was aware some customers had received extortion emails. Mandiant, responding to the immediate fallout from the attacks, said Clop exploited multiple vulnerabilities, including the zero-day, to access and steal large amounts of data from Oracle E-Business Suite customer environments.

Oracle, its customers and third-party researchers were not aware of the attacks until executives of alleged victim organizations received extortion emails from members of Clop demanding payment in late September. Cynthia Kaiser, senior vice president of Halcyon’s ransomware research center, previously told CyberScoop ransom demands reached up to $50 million.

Clop’s data-leak site included almost 30 alleged victims as of last week. The notorious ransomware group has threatened to leak alleged victims’ data unless it receives payment. 

The ransomware group has breached multiple technology vendors’ systems before, allowing it to steal data and extort many downstream customers. Clop specializes in exploiting vulnerabilities in file-transfer services and achieved mass exploitation in 2023 when it infiltrated MOVEit environments, ultimately exposing data from more than 2,300 organizations.

The post Washington Post confirms data on nearly 10,000 people stolen from its Oracle environment appeared first on CyberScoop.
