Normal view

There are new articles available, click to refresh the page.
Today — 12 May 2026CyberScoop

Pressure mounts on Canvas as data leak extortion deadline looms

11 May 2026 at 19:31

Pressure is mounting on Instructure, the company behind Canvas, as cybercriminals threaten to leak a trove of sensitive data they claim was stolen during a prolonged cyberattack on the widely used education tech platform.

Widespread outages left schools, students and teachers temporarily unable to access critical data late last week after the company took Canvas offline following additional malicious activity, including a defacement of the platform’s login page. By Friday, the company said Canvas — a central hub for K-12 and university coursework, exams, grades and communication — was back online and fully operational. 

ShinyHunters, a decentralized crew of prolific cybercriminals affiliated with The Com, claimed responsibility for the attack on its data leak site and is attempting to extort the company for an unknown ransom amount. Instructure hasn’t confirmed the existence of a ransom demand and declined to answer questions about its response.

The threat group initially set a deadline of May 6 — four days after Instructure previously said the incident was contained soon after it disclosed the attack — claiming it stole 3.65 terabytes of data spanning 275 million records across 8,809 school systems. 

When that deadline passed without payment, ShinyHunters escalated its pressure on the company by “injecting an extortion message directly into the Canvas login pages of roughly 330 institutions, and pivoted to school-by-school extortion with a current deadline of May 12,” Cynthia Kaiser, senior vice president of Halcyon’s Ransomware Research Center, told CyberScoop.

“The scope makes this one of the largest single education-sector exposures we’ve tracked,” she added.

The additional public pressure prompted Infrastructure to take Canvas offline, disrupting schoolwork and access to critical systems nationwide. 

Instructure CEO Steve Daly apologized over the weekend for the company’s inconsistent communication and deficient public response to the cyberattack. 

“Over the past few days, many of you dealt with real disruption. Stress on your teams. Missed moments in the classroom. Questions you couldn’t get answered. You deserved more consistent communication from us, and we didn’t deliver it. I’m sorry for that,” he said in a statement.

Daly acknowledged that the attack, which remains under investigation aided by CrowdStrike, exposed usernames, email addresses, course names, enrollment information and messages. He insisted that course content, submissions and credentials were not compromised.

The temporary but widespread disruption caused has spurred broad concern across the education sector as ransomware experts and threat hunters continue to track developments. The cyberattack also caught the attention of lawmakers on Capitol Hill. 

The House Homeland Security Committee on Monday published a letter to Daly seeking a briefing with him or a senior leader at Instructure by May 21. 

“The recurrence of an intrusion within days of an initial breach disclosure, and Instructure’s apparent failure to fully remediate the underlying vulnerabilities during that window, raise serious questions about the company’s incident response capabilities and its obligations to the institutions and individuals whose data it holds,” House Homeland Security Chairman Andrew Garbarino, R-N.Y., wrote in the letter to Daly.

The committee wants to learn more about the “circumstances of both intrusions, the the nature and volume of data accessed, the steps Instructure has taken and is taking to contain the threat and notify affected institutions, and the adequacy of the company’s coordination with federal law enforcement and the Cybersecurity and Infrastructure Security Agency,” he added. 

CISA did not describe the extent of its involvement in Instructure’s response. “CISA is aware of a potential cyber incident affecting Canvas. As the nation’s cyber defense agency, we provide voluntary support and cybersecurity services to organizations in responding to and recovering from incidents,” Chris Butera, the agency’s acting executive assistant director for cybersecurity, said in a statement.

Instructure’s timeline of the attack has changed and remains incomplete. The company said it first detected unauthorized activity in Canvas on April 29 and immediately revoked the attacker’s access and initiated an incident response. Researchers not directly involved with the formal investigation said ShinyHunters gained access to Canvas at least a few days earlier.

The follow-on malicious activity on May 7 — the defacement of public login pages — was tied to the same incident, the company said. 

“We have since confirmed that the unauthorized actor carried out this activity by exploiting an issue related to our Free-For-Teacher accounts. This is the same issue that led to the unauthorized access the prior week. As a result, we have made the difficult decision to temporarily shut down Free-For-Teacher accounts,” the company said in an updated post about the incident.

Instructure did not answer questions about the vulnerability or explain how attackers intruded its systems. The company said it also revoked privileged credentials and access tokens for affected systems, rotated internal keys, restricted token creation pathways, and deployed additional security controls and monitoring.

Canvas is fully operational and safe to use, the company said, adding that CrowdStrike has reviewed known indicators of compromise and “found no evidence that the threat actor currently has access to the platform.”

Access still remains spotty and unavailable for some Canvas users as school districts restore the platform in phases after conducting their own internal checks.

Halcyon published an alert about the attack Friday, including a screenshot of the message that some school staff, guardians and students encountered before Instructure took the learning management system offline.

ShinyHunters threatened Instructure and all affected schools to contact the threat group and reach a resolution by end of day Tuesday. The cybercrime group, which has a “known pattern of removing victim entries once communications and negotiations have started,” removed Instructure from its data leak site after it defaced the Canvas login pages, Halcyon said. 

ShinyHunters is a notorious data theft extortion group that previously hit major cloud platforms, including Salesforce and Snowflake, via voice phishing, credential theft and supply-chain attacks. 

“Historically, their claims of compromise typically hold up, but they often exaggerate the impact, scale, and type of data stolen,” Kaiser said.

Education is a recurring and consistent target for cybercriminals. Researchers at Halcyon tracked more than 250 ransomware attacks on education institutions globally last year. Yet, the attack on Canvas stands apart from most of these attacks because of its widespread use and downstream impact.

“This is student, parent, and staff data, including minors, which creates downstream phishing and impersonation risk that will outlast the immediate incident,” Kaiser said. 

“By compromising a shared platform used across thousands of schools, ShinyHunters hit the entire education sector in one move, which is the same playbook Clop ran against Oracle EBS customers last fall,” she added. “Among 2026 incidents against critical infrastructure, this is at or near the top for education-sector impact, and it highlights a trend of third-party software vendors now being part of an attack surface, and causing cascading effects across an entire sector.”

Cybersecurity professionals focused on ransomware and data theft extortion consistently encourage victims to not pay ransoms, but they also often acknowledge that companies have to make tough decisions based on their own interests and the security of their customers or users caught up in the aftermath.

Allison Nixon, chief research officer at Unit 221B, said the threat group claiming responsibility for the attack should not be trusted. 

“They are claiming they will delete the data after they are paid, and if they are not paid that they will leak the data,” she told CyberScoop. “This is in line with the past data extortion scams run by the same and related Com actors, who have made false statements to victims and to the public in the past.”

Instructure hasn’t indicated what it plans to do as part of any effort to prevent the leak of stolen data. 

Daly — a longtime security executive who was previously CEO at Ivanti — ended his mea culpa with a pledge to improve communications and provide a summary of a forensics report soon.

“Last week, we made a call to get the facts right before speaking publicly. That instinct isn’t wrong, but we got the balance wrong. We focused on fact-finding and went quiet when you needed consistent updates. You’ve been clear about that, and it’s fair feedback. We will change that moving forward,” he said. 

“Rebuilding trust takes time,” Daly added. “We’re going to earn it back through consistent action and honest communication.”

The post Pressure mounts on Canvas as data leak extortion deadline looms appeared first on CyberScoop.

Google spotted an AI-developed zero-day before attackers could use it

11 May 2026 at 09:00

Google researchers found a zero-day exploit developed by artificial intelligence and alerted the susceptible vendor to the imminent threat before a well-known cybercrime group initiated a mass-exploitation campaign, the company said in a report released Monday.

The averted disaster probably isn’t the first time attackers used AI to build a zero-day, but it is the first time Google Threat Intelligence Group found compelling evidence that this long-predicted and worrying escalation in vulnerability-exploit development is underway.

“We finally uncovered some evidence this is happening,” John Hultquist, chief analyst at GTIG, told CyberScoop. “This is probably the tip of the iceberg and it’s certainly not going to be the last.”

Google declined to identify the specific vulnerability, which has been patched, or name the “popular open-source, web-based administration tool” it affected. It did, however, note that the defect impacted a Python script that allows attackers to bypass two-factor authentication for the service.

Researchers also withheld details about how they discovered the zero-day exploit or the cybercrime group that was preparing to use it for a large-scale attack spree.

The threat group has a “strong record of high-profile incidents and mass exploitation,” Hultquist said, suggesting the attackers are prominent and well-known among cybersecurity practitioners. 

GTIG is fairly confident the threat group was using AI in a meaningful way throughout the entire process, but it has yet to determine if the technology also discovered the vulnerability it ultimately developed into an exploit.

Whichever AI model the attackers used — Google is confident it wasn’t Gemini or Anthropic’s Mythos — left artifacts throughout the exploit code that are inconsistent with human developers. This evidence, which included documentation strings in Python, highly annotated code and a hallucinated but non-existent CVSS score, tipped Google off to the fact AI was heavily involved, Hultquist said. 

GTIG has been warning about and expecting AI-developed exploits to hit systems in the wild, especially after its Big Sleep AI agent found a zero-day vulnerability in late 2024.

“I think the watershed moment was two years ago when we proved this was possible,” Hultquist said, adding that there are probably several other AI developed zero-days in play now. 

Yet, to him, the discovery of a zero-day exploit developed by AI is less concerning than what this single instance forebodes even further.

“The game’s already begun and we expect the capability trajectory is pretty sharp,” Hultquist said. “We do expect that this will be a much bigger problem, that there will be more devastating zero-day attacks done over this, especially as capabilities grow.”

The post Google spotted an AI-developed zero-day before attackers could use it appeared first on CyberScoop.

The missing cybersecurity leader in small business

By: Greg Otto
11 May 2026 at 06:00

The average cyberattack costs for a small- or medium-size business is more than $250,000. The salary for a chief information security officer (CISO) is about the same, pulling in between $250,000 and $400,000, according to the annual 2026 CISO Report from Sophos and Cybersecurity Ventures. Small- and medium-size businesses (SMBs) know they cannot afford the salary, so they roll the dice, hoping they will not be attacked. This is a dangerous gamble that these businesses, which make up the backbone of the American economy, should not have to take. A virtual (vCISO) or fractional CISO (fCISO) can provide a practical solution.

As the American economy goes digital, SMBs now rely on the same building blocks as big enterprises — cloud services, payment systems, remote access, customer data, and other third-party vendors.  But without senior cyber leadership, cybersecurity often becomes a patchwork of tools, checklists, insurance paperwork, and whatever guidance a vendor offers. That may get these companies through a questionnaire; it will not build real resilience. Nearly half, all reported cyber incidents, which is projected to cost the global economy $12.2 trillion annually by 2031, involve smaller firms.

The threat is growing in both size and sophistication. Adversaries are deploying AI to automate reconnaissance, develop malware, and run phishing campaigns at scale.  This reduces the cost and skill needed to target smaller firms at volume. Adversaries are also collecting encrypted data with the intent to decrypt it later when they have access to large enough quantum computers. SMBs in defense, healthcare, and financial supply chains often hold sensitive credentials that provide access into larger enterprise environments, but most are not prepared to adopt quantum-resistant encryption.

SMBs generally understand they face cyber risk. The real gap is leadership: someone who can turn technical vulnerabilities into business decisions, set priorities, brief executives, prepare for audits, and hold vendors accountable. For more SMBs, hiring a full-time CISO is financially unrealistic.

A Virtual CISO provides remote, on-demand cybersecurity leadership and advice, typically supporting several organizations at the same time. A fractional CISO is a dedicated, part-time executive who is more deeply integrated into one organization’s governance, security planning, and day-to-day operations. Both models give smaller organizations access to senior-level cybersecurity expertise in a flexible, more affordable way than hiring a full-time CISO.

Washington should make it easier for SMBs to hire fractional cybersecurity leaders, because the private market is not closing this gap on its own. The Cybersecurity and Infrastructure Security Agency (CISA) and the Small Business Administration (SBA) could help by publishing buyer guidance: vetted criteria for evaluating providers, example scopes of work and deliverables, and real-world case studies that show SMB owners what a high-quality vCISO or fCISO engagement should look like.

Clear guidance matters because many smaller firms cannot easily tell the difference between true cybersecurity leadership and a tool reseller, compliance-only consultant, or a generic managed services contract. Any vetted provider criteria should emphasize proven experience building and running security programs, independence from vendor incentives and product quotas, and the ability to tie security investment to real business risk, not just a list of certifications. Model scopes of work should also spell out the basics every engagement should deliver: an initial risk assessment, a prioritized remediation roadmap, and simple metrics that show whether security is improving over time. Without clear buyer criteria, federal efforts could end up funding low-quality services that add cost and paperwork without making companies safer.

The National Institute for Standards and Technology (NIST) should recognize these CISO models in its SMB-focused Cybersecurity Framework guidance. That would help smaller firms turn the framework’s Govern, Identify, Protect, Detect, Respond, and Recover functions into a clear, accountable leadership structure. This would make these roles less abstract: the point is not merely providing advice, but taking executive-level ownership of risk priorities, vendor oversight, incident readiness, and communication with the owner or board.

Congress and the Treasury Department should consider targeted tax incentives or credits for qualified cybersecurity leadership services, tied to measurable risk-reduction outcomes. Eligible activities could include completing a risk assessment, building a incident response plan, conducting vendor security reviews, running employee training, and producing a remediation roadmap. SMBs often defer cybersecurity because every dollar competes with payroll, inventory, and growth. A targeted incentive would make security leadership easier to justify as a business investment rather than an optional add-on.

Federal acquisition officials should require contractors that handle sensitive government data to show it has executive-level cybersecurity oversight, whether it is full-time, virtual, or fractional, and should extend that expectation down to relevant subcontractors and suppliers. This is necessary because SMBs serve as entry points into defense, healthcare, financial, and critical infrastructure supply chains.

Finally, CISA and the SBA should support vCISO- and fractional-CISO-led workforce training. Employees improve security when training comes with leadership, regular reinforcement, and clear accountability, not just annual awareness training. The aim is not to turn every SMB into a Fortune 500 security shop. It should be to give smaller firms access to the leadership they need before the next incident forces the issue.

Georgianna Shea, who is a Doctor of Computer Science, is chief technologist at the Foundation for Defense of Democracies’ Center on Cyber and Technology Innovation and its Transformative Cyber Innovation Lab, where Cason Smith served as a summer 2025 intern. Cason is studying integrated information technology at the University of South Carolina.

The post The missing cybersecurity leader in small business appeared first on CyberScoop.

Before yesterdayCyberScoop

Sen. Schumer seeks DHS plan on AI cyber coordination with state, local governments

8 May 2026 at 13:20

The Senate’s top Democrat called on the Department of Homeland Security Friday to work closely with state and local governments to defend against artificial intelligence-strengthened hacks. 

Senate Minority Leader Chuck Schumer, D-N.Y., wrote to DHS Secretary Markwayne Mullin to make sure state, local, tribal and territorial (SLTT) governments aren’t left behind as AI models advance, posing new hacking threats.

“There is a race between cybersecurity defenders and AI-enabled hacking — and there’s no time to waste,” Schumer wrote.

“While the White House has reportedly begun hosting meetings about its internal security priorities following these frontier AI cyber breakthroughs, it is glaringly obvious that the Department of Homeland Security needs an updated plan for coordinating these efforts with [state, local, tribal and territorial] governments and implementing procedures to reduce the risk of disruptive cyberattacks enabled by frontier AI,” he stated.

Schumer said he was worried about the capabilities of DHS and its Cybersecurity and Infrastructure Security Agency to carry out that coordination, given federal funding cuts to the Multistate Information Sharing and Analysis Center, and the lack of a Senate-confirmed CISA director for the duration of the second Trump administration.

Schumer wants a plan from DHS by July 1 on coordinating with state and local governments on a range of questions, such as how to identify top AI talent, carry out rapid patching and conduct risk assessments.

“AI is changing the cyber battlefield fast — and we cannot let hackers get there first,” Schumer said in comments accompanying the letter. “Hospitals, power grids, water systems, schools, elections, and emergency services cannot be left exposed while criminal gangs and state-backed hackers race to exploit new AI tools. DHS must immediately help states and localities find and fix vulnerabilities before Americans are hit with outages, disruptions, and attacks that could put lives and livelihoods at risk.”

CISA is using AI to help on the defensive side internally, agency officials recently said.

The post Sen. Schumer seeks DHS plan on AI cyber coordination with state, local governments appeared first on CyberScoop.

Flaw in Claude’s Chrome extension allowed ‘any’ other plugin to hijack victims’ AI

By: djohnson
8 May 2026 at 09:06

As businesses and governments turn to AI agents to access the internet and perform higher-level tasks, researchers continue to find serious flaws in large language models that can be exploited by bad actors.

The latest discovery comes from browser security firm LayerX, involving a bug in the Chrome extension for Anthropic’s Claude AI model that allows any other plugin – even ones without special permissions – to embed hidden instructions that can take over the agent

“The flaw stems from an instruction in the extension’s code that allows any script running in the origin browser to communicate with Claude’s LLM, but does not verify who is running the script,” wrote LayerX senior researcher Aviad Gispan. “As a result, any extension can invoke a content script (which does not require any special permissions) and issue commands to the Claude extension.”

Gispan said he was able to execute any prompt he wanted, blow through Claude’s safety guardrails, evade user confirmation and perform cross-site actions across multiple Google tools. As a proof of concept, LayerX was able to exploit the flaw to extract files from Google Drive folders and share them with unauthorized parties, surveil recent email activity and send emails on behalf of a user, and pilfer private source code from a connected GitHub repository.

The vulnerability “effectively breaks Chrome’s extension security” by creating “a privilege escalation primitive across extensions, something Chrome’s security model is explicitly designed to prevent,” Gispan wrote.

A graphic depicting how a vulnerability exploits the trust boundaries in Clade Chrome’s extension. (Source: LayerX)


Claude relies on text, user interface semantics, and interpretation of screenshots to make decisions, all things that an attacker can control on the input side. The researchers modified Claude’s user interface to remove labels and indicators around sensitive information, like passwords and sharing feedback, then prompted Claude to share the files with an outside server.

That means cybersecurity defenders often have nothing obviously malicious to detect. Where there is visible activity, the model can be prompted to cover its tracks by deleting emails and other evidence of its actions.

Ax Sharma, Head of Research at Manifold Security, called the vulnerability “a useful demonstration of why monitoring AI agents at the prompt layer is fundamentally insufficient.”

“The most sophisticated part of this attack isn’t the injection, but that the agent’s perceived environment was manipulated to produce actions that looked legitimate from the inside,” said Sharma. “That’s the class of threat the industry needs to be building defenses for.”

Gispan said LayerX reported the flaw to Anthropic on April 27, but claimed the company only issued a “partial” fix to the problem. According to LayerX, Anthropic responded a day later to say that the bug was a duplicate of another vulnerability already being addressed in a future update.   

While that fix, issued May 6, introduced new approval flows for privileged actions that made it harder to exploit the same flaw, Gispan said he was still able to take over Claude’s agent in some scenarios.

“Switching to ‘privileged’ mode, even without the user’s notification or consent, enabled circumventing these security checks and injecting prompts into the Claude extension, as before,” Gispan wrote.

Anthropic did not respond to a request for comment from CyberScoop on the research and mitigation efforts.

The post Flaw in Claude’s Chrome extension allowed ‘any’ other plugin to hijack victims’ AI appeared first on CyberScoop.

Ivanti customers confront yet another actively exploited zero-day

7 May 2026 at 17:50

Attackers are hitting Ivanti customers yet again — circling back to a common target and consistently susceptible vendor in the network edge space — by exploiting a zero-day vulnerability in one of the company’s most besieged products. 

Ivanti warned customers that attackers have successfully exploited CVE-2026-6973, an improper input validation defect in Ivanti Endpoint Manager Mobile (EPMM) that allows authenticated users with administrative privileges to run code remotely. The company alerted customers to the threat in a security advisory Thursday while also disclosing four additional high-severity vulnerabilities in the same product.

“At the time of disclosure, Ivanti is aware of very limited exploitation in the wild of CVE-2026-6973, which requires authenticated administrative access to implement,” a spokesperson for Ivanti said in a statement.

Ivanti did not say when the first instance of exploitation occurred, or precisely how many customers have already been impacted.

The Cybersecurity and Infrastructure Security Agency added the zero-day to its known exploited vulnerabilities catalog within hours of Ivanti’s disclosure.

The company released patches for all five vulnerabilities Thursday, including the four additional defects — CVE-2026-5787, CVE-2026-5788, CVE-2026-6973 and CVE-2026-7821 — which it said haven’t been exploited in the wild.

“Ivanti discovered these vulnerabilities in recent weeks through internal detection processes which are supported by advanced AI, customer collaboration, and responsible disclosure,” the company spokesperson said. One of the defects was discovered and responsibly reported to Ivanti by a former employee.

The company suggested at least one of the root causes for the latest zero-day may be traced to lingering risk posed by a pair of separate, critical zero-days — CVE-2026-1281 and CVE-2026-1340 — that were exploited starting in late January. The fallout from those exploited vulnerabilities in Ivanti EPMM spread to nearly 100 victims, including The Netherlands’ Dutch Data Protection Authority and the Council for the Judiciary, by early February.

The latest Ivanti EPMM zero-day “requires authenticated administrative access to exploit, which is why customers who followed Ivanti’s recommendation in January to rotate EPMM credentials are at significantly reduced risk. Customers unaffected by the prior vulnerability are also at a much lower risk,” the company spokesperson said.

Caitlin Condon, vice president of security research at VulnCheck, said the administrative privileges required to exploit CVE-2026-6973 indicates it was possibly exploited as part of an attack chain relying on another method for initial access. 

“No attribution was shared on threat actor exploitation of CVE-2026-6973, but two other 2026 CVEs in Ivanti EPMM — CVE-2026-1281 and CVE-2026-1340 — have been exploited by a range of threat actors, including China- and Iran-attributed groups,” Condon told CyberScoop. 

“Those vulnerabilities notably were code-injection vulnerabilities that were remotely exploitable without authentication, unlike CVE-2026-6973,” she added. “Both CVE-2026-1281 and CVE-2026-1340 appear to have been fixed in today’s Ivanti release. Comparatively, these earlier vulns were of higher initial concern than today’s fresh zero-day vulnerability, which requires admin authentication.”

Attacks involving Ivanti defects are a recurring problem for the vendor’s customers and security practitioners at large, including many vulnerabilities that attackers exploited before the company caught or fixed the errors. 

The Cybersecurity and Infrastructure Security Agency has flagged 34 Ivanti defects on its known exploited vulnerabilities catalog since late 2021. At least 22 defects across Ivanti products have been exploited in the past two years, including five vulnerabilities in Ivanti EPMM in the last year.

During an interview with CyberScoop in March at the RSAC Conference, Ivanti Chief Security Officer Daniel Spicer said the company’s transparency partly explains the high number of vulnerabilities reported and disclosed in its products. 

“My position here at Ivanti is it doesn’t do our customers any good to be quiet about this,” he said, describing the company’s communication stance with the public, CISA and global partners as “very aggressive.”

That’s not always the case with other vendors, Spicer said. “I don’t know that transparency is a core tenant of all other organizations.”

The company, which serves many government agencies and critical infrastructure operators, also routinely notes that highly skilled and resourced attackers, including those backed by nation-states, are often responsible for these waves of attacks on its customers.

Ivanti maintains that it’s trying to consistently improve the security of its products. “Through continued investment in its product security program, including the use of advanced AI paired with human verification, Ivanti is strengthening its ability to identify, remediate, and disclose issues quickly, helping customers stay ahead of an increasingly compressed threat landscape,” the spokesperson said.

The way Spicer put it in March: “We want to make sure that people understand that we are trying to do the right thing.”

The post Ivanti customers confront yet another actively exploited zero-day appeared first on CyberScoop.

Trump officials are steering a cybersecurity scholarship program toward AI

7 May 2026 at 15:57

The Trump administration is redirecting a cybersecurity scholarship program that requires recipients to work in government service toward artificial intelligence, leaving some current program scholars dismayed and bewildered.

In an email to participating school program coordinators obtained by CyberScoop, the Office of Personnel Management and National Science Foundation said the CyberCorps Scholarship For Service program would now be known as CyberAI SFS.

“The SFS students we enroll today will not be employable when they graduate in 2-3 years without significant AI background,” the email reads. “Any SFS student in this new program must be proficient in using AI in cybersecurity or providing security and resilience for AI systems. Therefore, new students in the legacy CyberCorps program must learn to acquire AI expertise to augment their cybersecurity expertise.”

“Effective immediately, new SFS scholars will not be accepted to the Legacy CyberCorps(C) program without a description on how they will develop competencies at the intersection of cybersecurity and AI,” the email continues. “The description of the competency development could include, but are not limited to, formal program of study, experimental learning, research activities, capstone projects, competitions, certifications, and/or no-credit professional development via external providers.”

One current program scholar graduating soon said they were “disappointed” by the change for several reasons. As of earlier this week, the agencies collectively running the program — OPM, NSF and the Department of Homeland Security — hadn’t notified any program participants that any changes were on the horizon.

For another: “I was a little bit surprised that it was coming out as so blatantly disregarding the people that haven’t graduated yet, that everyone in my cohort is already considered ‘legacy,’ and the fact that it said people in the program that I’m currently in will not be employable in the coming years,” they said.

The email leaves scholars uncertain about what will happen as they try to fulfill their side of the agreement, especially since doing so has  already been difficult amid cyber job cutbacks and other concerns about how the program has recently been administered. The scholar told CyberScoop there are around 300 people in this current group.

“I assume it will affect placements,” they said. “I can’t say for sure one way or another, because placements are already so impacted by everything that’s been going on. I don’t know what’s due to lack of AI background and what’s due to everything else.”

Another scholar said it was wrong for OPM “to keep claiming repeatedly that they’re acting in our best interests,” when “we’re left out to dry.” Already, the current group of scholars has been frustrated by their inability to get questions answered.

“If we’re legacy CyberCorps, then how does that address anything?” the scholar asked. “We’re just kind of being shoved into a closet and forgotten about. Now in that email, they were saying that we were going to be unhireable in two years time without all this AI stuff under our belt. But at the same time, almost all of our universities were actively discouraging the use of AI.”

Another part of the email brought welcome news to those scholars: a temporary easing of the program’s requirements, including the 70-20-10 rule that sets targets for jobs in the federal government, state and local governments, and the education sector, as well as the rules for securing an internship.. Even so, scholars say they still haven’t received any direct information about the changes.

A spokesperson for NSF said there have been some misunderstandings about the email to school program coordinators (known as principal investigators), but didn’t address current scholars’ concerns about communication.

“The guidance does not require scholars to possess these competencies upon entry,” said the spokesperson, Michael Englund. “Rather, it requires principal investigators (PIs) to clearly describe how their programs will prepare scholars to develop AI-related competencies by the time they graduate (typically within two to three years). In other words, programs must have a concrete and immediate plan to ensure scholars gain these skills during the course of their studies, not prior to admission.”

A spokesperson for OPM addressed the two biggest concerns of current participants.

“There are no changes to placement requirements,” the spokesperson said. “As noted, NSF’s updates are forward-looking to ensure future cohorts are prepared for evolving workforce needs. NSF has encouraged institutions to use professional development funds to expand AI-related training where needed. At OPM, we are also expanding AI training and have introduced AI ambassadors to support adoption.”

On communication: “Principal investigators (PIs) remain the primary point of contact for scholars, but OPM plans to increase direct outreach and plans to issue follow-up communication to scholars on placement efforts,” the spokesperson said.

Last week’s email is the latest turn for the program, with the Cybersecurity and Infrastructure Security Agency last month declaring that it was canceling summer internships due to the lapse in funding for some DHS agencies. Congress has since provided funding for CISA. 

The agency didn’t answer a question about whether that cancellation decision has been reversed as a result.

The post Trump officials are steering a cybersecurity scholarship program toward AI appeared first on CyberScoop.

American duo sentenced for hosting laptop farms for North Korean IT workers

By: Greg Otto
7 May 2026 at 09:56


Two U.S. nationals were sentenced to 18 months in prison for running laptop farms that facilitated North Korea’s expansive remote IT workers scheme, the Justice Department said Wednesday.

Matthew Issac Knoot and Erick Ntekereze Prince both received and hosted laptops at their residences to dupe U.S. companies into thinking remote IT workers they hired were located in the country. The pair’s separate schemes impacted almost 70 U.S. companies and generated a combined $1.2 million in revenue for the North Korean regime.

“The FBI and our partners will continue to disrupt North Korea’s ability to circumvent sanctions and fund its totalitarian regime,” Brett Leatherman, lead of the FBI’s Cyber Division, said in a statement. “These cases should leave no doubt that Americans who choose to facilitate these schemes will be identified and held accountable. Hosting laptops for DPRK IT workers is a federal crime which directly impacts our national security, and these sentences should serve as a warning to anyone considering it.”

Knoot, of Nashville, Tennessee, and Prince, of New York, received the laptops from unsuspecting U.S. companies and installed remote desktop applications on the machines to enable co-conspirators to work from anywhere while appearing to be based at their respective residences.

Prince’s company Taggcar was contracted to supply IT workers to victim U.S. companies from June 2020 through August 2024. He pleaded guilty in November 2025 to wire fraud conspiracy for his yearslong involvement in the North Korean IT worker scheme. 

Prince was indicted and charged in January 2025 along with his alleged co-conspirators, who collectively obtained work for North Korean IT workers at 64 U.S. companies, earning nearly $950,000 in salary payments. 

A federal judge sentenced Prince Wednesday and ordered him to forfeit $89,000, which is the amount he netted personally. 

Knoot was arrested in August 2024, a year after the FBI searched his home. Officials said he made multiple false and misleading statements and destroyed evidence to obstruct the investigation at that time. 

Victim companies paid North Korean workers linked to Knoot’s laptop farm more than $250,000 from July 2022 to August 2023. The remote IT workers transferred those funds to Knoot and accounts associated with North Korean and Chinese nationals, officials said. 

Knoot was sentenced May 1 and ordered to pay $15,100 in restitution to the victim companies and forfeit an additional $15,100, which is equivalent to the amount of his direct take from the scheme.

The pair of North Korean operatives join a growing list of people who have been charged and jailed for supporting the regime’s scheme that generates hundreds of millions of dollars annually for the country’s military and organizations involved in its weapons programs.

Authorities have been cracking down on the malicious insider activity by seizing cryptocurrency linked to the theft, and targeting U.S.-based facilitators who provided forged or stolen identities and hosted laptop farms for North Korean operatives. 

The countermeasures are stacking up, but the scheme is widespread and has infiltrated an undetermined number of businesses, including hundreds of Fortune 500 companies.

Federal judges previously sentenced other people to prison for their involvement in the scheme, including Keija Wang and Zhenxing Wang; Audricus Phagnasay, Jason Salazar and Alexander Paul Travis; Oleksandr Didenko and Christina Chapman

“These sentences hold accountable U.S nationals who enabled North Korea’s illicit efforts to infiltrate U.S. networks and profit on the back of U.S. companies,” John A. Eisenberg, assistant attorney general for national security, said in a statement. 

“These defendants helped North Korean ‘IT workers’ masquerade as legitimate employees, compromising U.S. corporate networks and helping generate revenue for a heavily sanctioned and rogue regime,” he added. “The National Security Division will continue to pursue those who, through deception and cyber-enabled fraud, threaten our national security.”

The post American duo sentenced for hosting laptop farms for North Korean IT workers appeared first on CyberScoop.

One House Democrat is pressing Commerce on the government’s spyware use

7 May 2026 at 06:00

A House Democrat who’s been at the forefront of congressional efforts to scrutinize the federal government’s use of commercial spyware wants the Commerce Department to brief Capitol Hill amid apprehension that the Trump administration might further embrace the technology.

Rep. Summer Lee, D-Pa., sent a letter to the department Thursday seeking a briefing on several developments stemming from Immigration and Customs Enforcement acknowledging its use of Paragon’s Graphite spyware, as well as an American company purchasing a controlling stake in Israel’s NSO Group. The Commerce Department sanctioned NSO Group under former President Joe Biden after widespread abuse allegations, including eavesdropping on government officials, activists and journalists.

“The Trump Administration appears to be broadly receptive to using commercial spyware to infiltrate cell phones and allowing U.S. investment in sanctioned spyware companies like NSO Group,” Lee wrote in her letter to Commerce Secretary Howard Lutnick, which CyberScoop is first reporting.

NSO Group’s new executive chairman, David Friedman, is a former Trump ambassador to Israel and was his bankruptcy attorney. He has said in November that he expects the administration will be “receptive” to using NSO Group tech.

“Given those close ties between NSO Group and the Trump Administration, and the serious concerns about how NSO’s technology could be used to spy on Americans, we write to request information regarding the purchase of NSO Group by an American company and the potential usage of NSO Group spyware by federal law enforcement,” wrote Lee, who sits on the Oversight and Government Reform panel and is the top Democrat on its Federal Law Enforcement Subcommittee.

Lee was one of the authors of a recent Democratic letter seeking confirmation of ICE’s use of Paragon’s Graphite, which ICE acknowledged. But they criticized the administration for not answering all their questions, in addition to being outraged.

In her latest letter, Lee asked the Commerce Department to brief Oversight and Government Reform Committee staff about internal department deliberations, Commerce communication with the White House and any outside conversations — including with Friedman — about government use of NSO Group technology or any other commercial spyware, and American investment in NSO.

NSO Group “appears to view the Trump administration as friendly to its interests in the United States, pitching itself as a vital tool for the U.S. government to safeguard national security,” Lee wrote, citing company court filings that it “is reasonably foreseeable that a law enforcement or intelligence agency of the United States will use Pegasus.”

The Biden administration sanctions, and court losses in a case against Meta, represented setbacks for NSO Group’s ambitions. And prior to the U.S. investment firm controlling stake purchase last fall, the Commerce Department under Trump rebuffed efforts to remove NSO Group from its sanctions list.

But the tens of millions of dollars worth of investment, following news that Israel had used Pegasus to track people kidnapped or murdered by Hamas, was a boon.

NSO Group maintains that its products are designed only to help law enforcement and intelligence fight terrorism and crime, and that it vets its customers in advance as well as investigates misuse. News accounts and other investigations have turned up a multitude of abuses.

There have been scattered reports of U.S. flirtation with using NSO Group technology. The FBI acknowledged it had bought a Pegasus license, but stopped short of deploying it. The Times of London reported that “it is believed” the Central Intelligence Agency used Pegasus spyware as part of a rescue mission last month for a U.S. airman downed in Iran.

You can read the full letter below.

The post One House Democrat is pressing Commerce on the government’s spyware use appeared first on CyberScoop.

A DOD contractor’s API flaw exposed military course data and service member records

By: Greg Otto
6 May 2026 at 17:15

A defense technology company with Department of Defense contracts exposed user records and military training materials through API endpoints that lacked meaningful authorization checks, according to an account published by Strix, an open-source autonomous security testing project.

The issue affected Schemata, an AI-powered virtual training platform used in military and defense settings. According to Strix, an ordinary low-privilege account was able to access data across multiple tenants, including user listings, organization records, course information, training metadata and direct links to documents hosted on the Schemata’s Amazon Web Services instances.

Strix said the exposed materials included a 3D virtual training course for naval maintenance personnel with documentation marked confidential and proprietary, a course containing Army field manuals on explosive ordnance handling and tactical deployment, and hundreds of user records linked to bases and training enrollments. Additionally, the exposed information included names, email addresses, enrollment details and the military bases where U.S. service members were stationed. 

Schemata acknowledged the affected endpoints were exposed May 1, after what Strix described as a 150-day disclosure process. Strix said it verified remediation before publication and published its account earlier this week, 152 days after its initial disclosure attempt.

The reported vulnerability did not require a complex exploit. Strix said it used a low-privilege account to watch normal browser traffic, identify API endpoints exposed through the application, and request high-value data using the same session. According to Strix, those requests returned records from outside the account’s own organization, suggesting the API was not properly enforcing tenant boundaries or user permissions.

In multi-tenant software, authorization controls are intended to ensure users can access only the data and functions assigned to their account or organization. The failure described by Strix would represent a basic breakdown in that model. The firm said some routes also appeared “write-enabled,” meaning a malicious actor could potentially modify or delete courses through update or delete requests, though the account does not say Strix performed destructive testing.

Strix did not respond to CyberScoop’s request for comment. 

Schemata’s platform serves military and defense training environments, where user identities, assignments and course enrollments can reveal sensitive operational context. Even when information is not classified, records showing where service members are based, what training they are enrolled in and which materials they can access may create risks if exposed outside intended channels.

In a statement posted on the company’s website, Schemata said it did not have “evidence that any third party exploited the vulnerability to access customer data.”

The disclosure timeline also raises questions about how companies handling sensitive government-related data receive and respond to vulnerability reports. Strix said it first contacted Schemata on Dec. 2, 2025. According to the account, Schemata’s CEO initially responded, “I would love to hear what the vulnerability is, but I assume you want to get paid for it. Is that the play?”

Strix said it clarified the same day that compensation was not required and that its priority was user safety. It said it sent multiple follow-ups from Dec. 8-29, warning that the vulnerability was critical and asking where to send details. Five months later, after telling Schemata that researchers were publishing the information publicly, Schemata responded, acknowledged the exposed endpoints and said it would patch the issue immediately.

“After we received actionable details about the vulnerability and confirmed the security researcher appeared to be legitimate, our team remediated the vulnerability the same day, and the researcher independently verified the fix before publishing their findings,” Schemata’s statement reads. “We appreciate the security researcher bringing this to our attention and their contribution to the security of our platform.”

Schemata said it’s working with cybersecurity consultants to assist with its response and improve its security posture. The company also said it is in contact with government authorities about the vulnerability.

Defense contractors that handle Controlled Unclassified Information, or CUI, must report cyber incidents to the Department of Defense Cyber Crime Center (DC3). The center did not respond to CyberScoop’s request for comment. 

According to contracting data, the company holds $3.4 million in contracts with the Department of Defense. In May 2025, Schemata announced $5 million in venture funding from several firms, including Andreessen Horowitz. 

The post A DOD contractor’s API flaw exposed military course data and service member records appeared first on CyberScoop.

A critical Palo Alto PAN-OS zero-day is being exploited in the wild

6 May 2026 at 15:48

Attackers are actively exploiting a zero-day vulnerability affecting some Palo Alto Networks’ customers’ firewalls, the security vendor said in an advisory Tuesday.

The critical memory corruption vulnerability — CVE-2026-0300 — affects the authentication portal of PAN-OS, and allows unauthenticated attackers to run  code with root privileges on the vendor’s PA-Series and VM-Series firewalls, the company said.

Palo Alto Networks did not say when or how it became aware of active exploitation, nor when the earliest known exploitation occurred. The Cybersecurity and Infrastructure Security Agency added the defect to its known exploited vulnerabilities catalog Wednesday.

The company hasn’t released a patch for the vulnerability or described the scope and objective of confirmed attacks.

“This vulnerability is specific to a limited number of customers with their User-ID Authentication Portal (Captive Portal) exposed to the public internet or untrusted IP addresses. We have observed limited exploitation of this issue and are working to release software fixes, with the first updates expected to be available on May 13,” a Palo Alto Networks spokesperson told CyberScoop.

The company said firewalls exposed to the buffer-overflow vulnerability, which has a CVSS rating of 9.3, are broadly exposed in real-world deployments, and it described the attack complexity as low.

Shadowserver scans found more than 5,800 publicly exposed VM-Series firewalls running PAN-OS as of Tuesday, yet it’s unknown how many of those instances have restricted authentication access to trusted internal IP addresses or disabled the feature altogether.

“We have provided clear mitigation guidance to our customers to secure their environments immediately. This issue does not impact Cloud NGFW or Panorama appliances. We remain committed to a transparent, security-first approach to protect our global customer base,” Palo Alto Networks’ spokesperson added.

Benjamin Harris, CEO and founder of watchTowr, noted that Palo Alto Networks proactively alerted customers to the zero-day, a step that allowed defenders to take action on potentially exposed instances. 

“In a bad situation, that is the best they can do immediately. However, that also alerts everyone to the existence of a vulnerability,” he told CyberScoop.

Despite the risk, Harris said watchTowr expects attacks linked to the zero-day exploit to be “very limited.” 

Palo Alto Networks and its impacted customers remain the only parties to have observed exploitation in the wild, but researchers warn that will likely change soon. 

“It’s likely rules will also start to fire in third-party organizations and honeypots shortly,” Caitlin Condon, vice president of security research at VulnCheck, told CyberScoop. 

“Management interfaces, login pages, and authentication portals have been common adversary targets for both opportunistic and targeted campaigns in recent years,” she added. “With researcher and community eyes on the vulnerability, it’s likely that we’ll see public exploits and broader exploitation quickly, provided the issue isn’t prohibitively difficult to exploit.”

Palo Alto Networks has yet to attribute the attacks to any known threat group, publish indicators or compromise, nor disclose the type of organizations that have been targeted and impacted. 

Researchers are hunting for malicious activity and advise customers to apply patches upon release.

The post A critical Palo Alto PAN-OS zero-day is being exploited in the wild appeared first on CyberScoop.

CISA wants critical infrastructure to operate ‘weeks to months’ in isolation during conflict

By: djohnson
5 May 2026 at 17:47

The Cybersecurity and Infrastructure Security Agency is urging critical infrastructure owners and operators to plan for delivering essential services under emergency conditions – potentially for months at a time.

The federal government’s top cybersecurity agency warned that state-sponsored hackers, particularly two Chinese groups known as Salt Typhoon and Volt Typhoon, continue to threaten critical sectors like electricity, water, and internet. 

The agency is now working with the private sector to protect operational technology – the systems that control the heavy machinery and equipment that powers most critical infrastructure – from attacks that enter through business IT systems or third-party vendor products.

The initiative  — known as CI Fortify – will include CISA conducting targeted technical assessments of critical infrastructure entities and aims to create plans that “allow for safe operations for weeks to months while isolated” from IT networks and third-party tools, according to the agency’s website.

Nick Andersen, CISA’s acting director, told reporters that the goal is “service delivery [that] can still reach critical infrastructure after the asset owner has disconnected with IT and OT, disconnected from third party vendors and service provider connections and disconnected from third party telecommunications equipment.”

Over the past two years, wars in Ukraine, Gaza, Iran and elsewhere have seen water plants, power substations, data centers and other critical infrastructure targeted by kinetic or cyberattacks.

Andersen said the agency has already begun engaging with some companies to pilot the assessments and expects that work to ramp up considerably as CISA hires additional staff in the coming months.

He declined to name the entities involved in the pilot program, but said they will focus on organizations that support national security, defense, public health and safety and economic continuity. He added that CISA’s assessments will vary from sector to sector depending on their unique needs.

“Water isn’t necessarily designed to prioritize specific customer needs outside of recovery periods, while energy and transportation have more immediate tradeoffs for selecting one load or one set of cargo over another,” Andersen said as an example.

One pillar of CISA’s strategy is isolation: essentially turning off all third-party and business network connections to an OT network when facing an emergency or unknown vulnerability.

Organizations also need to develop an internal plan for what acceptable service levels look like under those conditions and reach understandings with their critical customers, like U.S. military installations and lifeline services.

The second pillar, recovery, involves best practices for organizations: backing up files, documenting systems and having manual backups for operations when normal computer systems are down.

In conversations with cybersecurity specialists who focus on critical infrastructure and operational technology, it is widely assumed that China is not the only nation to have broadly compromised Americans critical infrastructure. That hacking groups tied to other nations have almost surely noticed and exploited the same basic vulnerabilities and hygiene issues found by the Typhoons.

Agencies like the FBI and Federal Communications Commission have touted efforts to purge Chinese hackers and work voluntarily with telecoms to harden their network security. But U.S. national security officials and cybersecurity defenders have consistently said both Salt Typhoon and Volt Typhoon remain active threats to U.S. critical infrastructure.

The post CISA wants critical infrastructure to operate ‘weeks to months’ in isolation during conflict appeared first on CyberScoop.

CISA boasts AI automation improvements to threat analysis, mission support

5 May 2026 at 15:18

The Cybersecurity and Infrastructure Security Agency has gotten “by far” the biggest gains from artificial intelligence automation in its security operations unit to help analysts sift through threats, but it’s also proven valuable elsewhere within the agency, CISA officials said Tuesday.

It’s “really allowing those analysts to do triage very fast, so they focus on what matters versus the noise,” Tammy Barbour, acting chief of application management at CISA, said. “They’re able to do a lot of real-time, quick looks before events happen in most places.”

Barbour, speaking at the UiPath FUSION Public Sector event hosted by Scoop News Group, said automation has also been a boon to CISA’s Technology Operations Center.

“The top analysts are able to quickly respond to customers who are reaching out to talk and asking questions, and be able to get real-time efficiencies with that,” she said. 

And it’s been a big help for data migration, Barbour said.

Lauren Wind, acting deputy chief technology officer at CISA, said from her wing of the department, it’s focused on finding benefits from automation in areas like human resources, contracting and finance.

“So we can continue to drive mission, but also accelerate the mission-supporting functions,” she said. “We really want to ensure that our cyber analysts are focusing on the things that matter, like malware.”

But there are some barriers to adoption of the technology, both said.

“We’re still kind of in our infancy,” Barbour said. “But we still struggle with the legacy workflows, processes. We still have some systems that need to be modernized, that we’re currently working towards adoption. People love their spreadsheets. I just can’t force it out of their hands, especially the — sorry, all the accountants in the room, I apologize, but you’ve got to let it go.”

AI governance needs to be laid out in advance, too, and transparently, Wind said.

“One of the biggest things is ensuring that the CTO is driving governance, whether that’s for data, whether that’s for AI,” she said. “I think we’re pretty good on generative, and everyone’s a little bit catching up to industry on agentic.”

How to handle data is another consideration, Wind said.

“Whether you’re on the cloud and you’re serverless or you’re still on prem, if you haven’t figured out what your structure of your data platform looks like, it makes automation a lot more difficult,” she said. 

The comments from Barbour and Wind offered a window into how CISA is viewing AI internally. Much of the agency’s recent work related to AI is focused on advice for safe deployment of agentic AI at other organizations, or examination of the way AI is deepening threats.

The post CISA boasts AI automation improvements to threat analysis, mission support appeared first on CyberScoop.

Latvian national sentenced for ransomware attacks run by former Conti leaders

5 May 2026 at 12:28

A federal judge sentenced a Latvian national to 102 months in prison for his involvement in a series of ransomware attacks for more than two years prior to his arrest in 2023, the Justice Department said Monday.

Deniss Zolotarjovs, a resident of Moscow at the time, helped an organization led by former leaders of the Conti ransomware group extort payments from more than 54 companies. 

The 35-year-old was mostly tasked with putting pressure on the crew’s victims. In one case, Zolotarjovs urged co-conspirators to leak or sell children’s health records stolen from a pediatric healthcare company and ultimately sent a collection of sensitive data to “hundreds of patients,” according to court records. 

The ransomware crew identified itself in ransom notes under multiple names during Zolotarjovs’ involvement, including Conti, Karakurt, Royal, TommyLeaks, SchoolBoys Ransomware, Akira and others. 

Zolotarjov and his co-conspirators extorted nearly $16 million in confirmed ransom payments from their victims. Officials estimate the group’s crimes resulted in hundreds of millions of dollars in losses, not including the psychological and future financial exposure confronting tens of thousands of people whose personal data was stolen.

“Deniss Zolotarjovs helped his ransomware gang profit from hacks of dozens of companies, and even on a government entity whose 911 system was forced offline,” A. Tysen Duva, assistant attorney general of the Justice Department’s Criminal Division, said in a statement. 

Officials said Zolotarjovs searched for points of leverage after researching victim companies and analyzing stolen data. Many of the victims impacted during his active participation between June 2021 and August 2023 were based in the United States.

Zolotarjov was arrested in the country of Georgia in December 2023 and extradited to the United States in August 2024. He pleaded guilty to money laundering and wire fraud in July 2025. 

“Cybercriminals might think they are invulnerable by hiding behind anonymizing tools and complex cryptocurrency patterns while they attack American victims from non-extradition countries,” Dominick S. Gerace II, U.S. attorney for the Southern District of Ohio, said in a statement. “But Zolotarjovs’s prosecution shows that federal law enforcement also has a global reach, and we will hold accountable bad actors like Zolotarjovs, who will now spend significant time in prison.”

The Russian ransomware crew was prolific and spread across multiple teams, relying on companies registered in Russia, Europe and the United States to conceal its operations. Authorities said the group included former Russian law enforcement officers whose connections allowed members to access Russian government databases to harass detractors and identify potential new recruits.

Conti was among the most prolific ransomware groups globally for a time, impacting hundreds of critical infrastructure providers, Costa Rica’s government in 2022, and ultimately leading the State Department to offer a $10 million reward for information related to Conti’s leaders. The group was notoriously resilient, bouncing back with new infrastructure and hitting new targets after a massive leak exposed chats between the group’s members in 2022.

Conti disbanded later that year, but members of the Cyrillic-language group rebranded under three subgroups: Zeon, Black Basta and Quantum, which quickly rebranded to Royal, before rebranding again to BlackSuit in 2024.

The post Latvian national sentenced for ransomware attacks run by former Conti leaders appeared first on CyberScoop.

‘Copy Fail’ is a real Linux security crisis wrapped in AI slop

4 May 2026 at 17:54

Attackers are actively exploiting a Linux vulnerability in the wild, and researchers warn that the fallout could be broad — anyone with authenticated local access can leverage it to gain total control of a system. 

But the story behind CVE-2026-31431 is almost as interesting as the bug itself. Theori, the company that discovered the bug, leaned heavily on AI to find and initially disclose it. The result is a case study that  underscores the challenges that occur when the relentless hunt for defects collides with marketing impulses and inflated AI-generated language that was long on bluster but lacked technical details. 

Theori dubbed the high-severity vulnerability “Copy Fail” with a vanity domain containing AI-generated content, and warned that every mainstream Linux kernel built since 2017 is in scope of potential exploitation resulting in root access. 

Theori’s AI-powered penetration testing platform, Xint, discovered the local privilege-escalation flaw in a Linux kernel module and reported it to the Linux kernel security team March 23. Major Linux distributions affected by the vulnerability had issued patches prior to Theori’s disclosure, which it published alongside a proof-of-concept exploit. 

The Cybersecurity and Infrastructure Security Agency added CVE-2026-31431 to its known exploited vulnerabilities catalog Friday.

Researchers have yet to determine how many organizations have been impacted by the flaw, but they noted that critical requirements for exploitation, specifically local access achieved through a separate exploit or pathway to unauthorized access, should limit potential exposure.

“The attacker would need to have already established a foothold on the target system either through some means of legitimate access or another exploit,” Spencer McIntyre, secure researcher at Rapid7, told CyberScoop. “That’s a large limiting factor since this vulnerability would therefore need to be paired with another.”

Theori’s disclosure turned heads among other vulnerability researchers who noted the defect’s broad potential impact, but also for lacking details about the proof-of-concept exploit. 

“The exploit is real, there is something to worry about, but understandably, teams now have to do additional validation to know how to parse the extreme AI FUD (fear, uncertainty and doubt) from [Theori’s] blog post,” Caitlin Condon, vice president of security research at VulnCheck, told CyberScoop. 

“It’s not helpful that the blog is AI slop, because it detracts from technical reality,” she added. 

Theori acknowledges it used AI to discover and describe the vulnerability, explaining that it’s focusing on finding and fixing a large amount of defects. 

“We used AI to help craft the disclosure site and the blog post to help speed things up, but all material was thoroughly reviewed by our internal teams for accuracy,” said Tim Becker, senior security researcher at Theori. 

Theori is intentionally withholding additional details until the patch is broadly applied, he added.

“We stand by our technical description of the vulnerability. Helping downstream users to understand the impact of a security bug has always been a challenge for security researchers,” Becker said. “Copy Fail allows for trivial privilege escalation on most desktop and server Linux distributions. It also has implications for containerization including Kubernetes.”

Other researchers have drawn similar conclusions, noting that exploitation can be automated and doesn’t require specialization. 

Meanwhile, hundreds of additional proof-of-concept exploits have surfaced since the vulnerability was disclosed five days ago. “As expected, the majority of these appear to be copycat AI PoCs that do nothing but add banners or different colors to the command-line interface. Many new PoCs are simply ports of the original AI PoC to a different programming language,” Condon said. 

“Organizations should exercise caution when running untested research artifacts, including AI-generated exploit code that isn’t fully explained,” she added. 

Becker said Theori is aware of the burden defenders confront, and insists the company’s reports contain enough information for organizations to quickly triage and validate its findings.

The post ‘Copy Fail’ is a real Linux security crisis wrapped in AI slop appeared first on CyberScoop.

A college student is suing a dating app that allegedly used her TikTok videos to target men in her dormitory

By: djohnson
4 May 2026 at 12:02

A 19-year-old woman is suing the makers of a dating app, alleging they took a video she posted online, repurposed it without her consent into an advertisement for the app, then used geofencing to target that ad to people in her area. 

According to the lawsuit filed Apr. 28 in Tennessee and an interview with her lawyer, the company allegedly used geotargeting to serve the ads on platforms like Snapchat to users near her, including men in her own dormitory. 

The allegations, if proven, offer another example of how modern technology has made it easier than ever today for bad actors to imitate, objectify, profit off and harass individuals, often women. Recent laws like the Take It Down Act have focused particularly on the use of AI to create sexualized imagery of their victims. In this case, the lawsuit alleges that Meete used not AI, but simple video editing, a voiceover and geofencing to create the same kind of deception. 

 On the day of her high school graduation, Kaelyn Lunglhofer posted a brief video to TikTok, wearing an orange outfit and saying a few words to her followers over background music. She went on to attend the University of Tennessee in the fall, where she began building a following as a TikTok influencer.

The complaint alleges that the makers behind the dating app Meete took that video without Lunglhofer’s consent, overlayed it with graphics advertising the app, and added a voiceover to make it appear she was saying “Are you looking for a friend with benefits? This app shows you women around you who are looking for some fun. You can video chat with them.”

Abe Pafford, Lunglhofer’s attorney, told CyberScoop that his client had no idea Meete was using her likeness until a male student in her dormitory told her he had repeatedly seen her in ads for the app on his Snapchat shortly after the two had met. 

Pafford called it “implausible” that this was a coincidence, pointing to Meete’s premise of connecting users with nearby women and the precision of geofencing technology. Before filing the case, Pafford’s law firm hired an investigative firm to gather additional evidence.

“I think the idea is they want[ed] viewers of these advertisements – and candidly this is pretty clearly targeted at male viewers – to have their eye caught by someone they may know or recognize or think they may have seen around, and that’s part of what makes it so disturbing,” he said.

Pafford said he believes Lunglhofer is far from the only person whose image Meete has misappropriated, and that most victims likely have no idea it’s happening. Lunglhofer herself only had evidence because the student who told her had saved recordings and screenshots of the ads featuring her video.

“The bottom line is we think there are likely others that have been victimized in a similar way, but finding out who they are and landing on tangible proof of that can be challenging,” he said.

After this story was published, Snap told CyberScoop it is investigating.

“Snap’s advertising policies require that advertisers have all necessary rights to the content in their ads, including the rights to any individuals featured,” Snap spokesperson Ahrim Nam said in an email. “Using someone’s likeness without their consent is a violation of our policies. Upon learning of these allegations, we are actively reviewing the matter and will take appropriate action.”

The lawsuit cites alleged violation of multiple federal and state laws, including the Lanham Act, the primary U.S. law governing trademark rights. The suit also alleges violations of Tennessee state law under the ELVIS Act, which prevents the unauthorized use of image or likeness for artists and musicians, and Tennessee common laws for defamation and right of publicity.

Lunglhofer is seeking $750,000 in punitive damages, as well as any revenue tied to the ads featuring her likeness. Pafford said that the advertisements damaged her online brand and reputation while also putting her at risk of harassment or falsely implying she was endorsing a local dating service and was open to casual hookups.

“It’s really kind of grotesque and it’s also kind of dangerous,” he said. “Someone may not be aware that this is happening and they’re targeted in this way, but you can put people at risk in ways that are really troubling if you stop to think about it.”

The suit names Quantum Communications Development Unlimited, based in the Virgin Islands, as well as Chinese companies Starpool Data Limited and Guangzhou Yuedong Interconnection Technology, as defendants. A judge has ordered representatives from all three to appear for depositions in the United States.

Quantum Communications Development Unlimited has a sparse internet footprint: their website consists of a single page with a message written in broken English and an email address that no longer appears to work. Efforts by CyberScoop to reach the company and other defendants for comment were not successful. The company is listed as Meete’s publisher on Apple’s App Store, where it describes the app as “a space where you can be yourself and meet people” and promises “safety and respect first” — adding that “Meete provides a secure environment where your privacy and safety are our top concerns.”

The description also claims the app adheres to Apple’s safety standards, citing a “Zero-Tolerance Policy regarding objectionable content and abusive behavior.” Listed safeguards include “24/7” manual reviews by moderation teams, instant reporting and blocking of other users, and AI filtering “to detect and prevent harassment before it happens.”

On Meete’s Google Play Store page, user reviews accuse the app of failing to match them to nearby users and being largely populated by bots posing as women to sell in-app currency.

Pafford acknowledged that the defendants being based overseas complicates efforts to hold them accountable under U.S. law, but argued that Meete is clearly designed to operate in the United States. The companies behind the app have filed U.S. patents and trademarks, for their business, and distribute their app through the Apple and Google Play Stores while advertising on major U.S. social media platforms like Snapchat.

Apple and Google did not respond to a request for comment.

You can read the full lawsuit below.


5/05/26: This story was updated to include comment from Snap received after publication.

The post A college student is suing a dating app that allegedly used her TikTok videos to target men in her dormitory appeared first on CyberScoop.

Why data centers now belong on the critical infrastructure list

By: Greg Otto
4 May 2026 at 06:00

Missile and drone attacks that took out cloud data centers in the Middle East underscored a critical vulnerability in the modern economy: reliance on digital infrastructure that sustains competitive advantage and operational continuity for corporations, nations, and militaries. 

The outages and downstream disruption were a preview of a new form of strategic and operational risk. Data centers have long been the backbone of the digital economy. What is changing is the scale of dependence as AI workloads dramatically increase the compute power required to run businesses, supply chains, and national security systems. 

Artificial intelligence has moved beyond business applications and into the core of warfare and national security. Last month, The New York Times reported that AI is “totally integrated” into the collection of intelligence and its use in strategic decision-making and military operations. Even if AI models are not directly firing weapons, AI-enabled analysis now plays a central role in how modern militaries gain visibility, find insights, and drive action.

That matters because it changes what should be considered critical infrastructure. If AI is a competitive advantage for companies and a battlefield advantage for warfighters, then the infrastructure that trains, hosts and runs AI becomes a high-value target. Attacks on the digital infrastructure organizations rely on can do more than inflict financial damage. They can slow decision-making, degrade logistics and reduce military effectiveness without ever engaging a conventional force.

Historically, nation-state campaigns targeting data centers and service providers focused on cyber intrusions for espionage or pre-positioning. What is different now is the emergence of physical attacks on digital infrastructure during active conflict. Russian military intelligence has been linked to campaigns aimed at digital infrastructure and managed services, often as part of a supply chain attack to compromise organizations at scale. Iran-aligned groups have repeatedly demonstrated willingness to target private sector entities to advance geopolitical goals. In many cases, the objective was access: steal data, implant persistence, map networks, and maintain a foothold that could be used later for espionage or disruption. 

What’s clearer now than ever before is that data centers and the AI workloads they support have become so vital to modern society, our adversaries will seek to degrade or destroy their efficacy as a tactic of both kinetic and cyber warfare.

We have already seen how quickly a digital incident can become real-world disruption. On March 11, reports surfaced of thousands of servers and endpoints wiped inside Stryker, a U.S.-based medical device manufacturer. A hacktivist group sympathetic to Iran, known as Handala, claimed responsibility. The incident reportedly halted Stryker’s global production after attackers accessed its Microsoft environment and issued a wipe command via Intune. Even without a single missile, the outcome looked like a strategic disruption: operations stopped and downstream customers felt it.

For business leaders, the imperative is clear: treat operational resilience as a board-level priority in the AI era.

In the world of corporate IT, cybersecurity prioritizes confidentiality: preventing theft of sensitive information. Resilience is a different discipline. It is the ability to sustain operations when systems are degraded, disrupted or actively under attack. For data centers and the businesses that depend on them, resilience comes down to preventing cascading failures and reducing the consequence when something inevitably goes wrong.

These developments carry an important implication for the private sector. Digital infrastructure is increasingly a strategic target, making resilience a core business priority rather than a narrow IT issue. For business leaders, the impact of data center disruption extends into multiple, often overlooked areas of cybersecurity risk.

For example, AI’s growth is colliding with a power wall in many regions where grid capacity cannot scale fast enough. That is driving facilities toward new power dependencies, including on-site generation through distributed energy and renewables, yielding more complex power management environments. This power infrastructure becomes a pressure point as interruptions to power supply or management systems can quickly force a data center offline. Russia has on several occasions demonstrated the ability to target and disrupt power generation and distribution in Ukraine in both 2015 and 2016.

Building management and automation systems, including HVAC and physical access controls, are another. These systems are essential to creating safe and supporting operational environments, but they typically have long capital depreciation cycles and inconsistent security safeguards. Frequently exposed to the Internet, and commonly misconfigured and not properly secured, they can become a pathway to outages by an attacker.

With an increasing density of computing infrastructure, thermal management has become a core environment control in data centers. As the industry adopts liquid cooling for dense AI loads, interference with cooling is no longer a niche technical issue. It is a risk vector that can cause downtime and potential equipment damage if breached by attackers.

Remote access creates another major exposure. Data centers rely on vendors, contractors, and systems integrators for maintenance, monitoring, and support, and each remote connection can become an entry point if it isn’t tightly controlled, centrally managed, and well secured. Adversaries often target these trusted access routes because they can be easier to compromise than a well-defended perimeter, allowing attackers to bypass standard controls and safeguards.

All of this has broader economic implications because data center disruption does not stay inside the technology sector. It cascades into the industries that keep society functioning and supply chains moving: hospitals, electric utilities, chemical production, food and beverage, oil and gas, and transportation. An extended outage becomes missed shipments, halted production, delayed care, safety concerns and lost trust.

What should leaders do now?

Start by defining resilience targets that match business reality: what must stay running, what can degrade, what cannot fail. Then invest in the controls that limit the impact of an incident. Segmentation between IT and OT assets should be non-negotiable. Remote access should be treated as a critical risk pathway with least privilege, strong authentication and continuous monitoring.

Manage facilities systems such as building management systems, power, and cooling controls as critical operational technology, with asset inventories, vulnerability management, logging, and incident response plans that anticipate disruption.

Finally, train to operate under degraded conditions. Tabletop exercises should include scenarios like loss of a cloud region, partial failure of a facility, or compromise of a management plane. Use these exercises to validate that the organization can maintain essential operations and recover quickly when disruptions occur. 

Policy is moving in this direction as well. Governments are increasingly treating data centers as critical infrastructure. Policies and frameworks such as the National Cybersecurity Strategy, CISA’s Secure by Design principles, and international standards like IEC 62443 all reflect a growing recognition that digital infrastructure is a national security issue. Companies that get ahead of this shift will not only reduce risk, they will build competitive advantage in a world where downtime can become a strategic weapon.

In the AI era, data centers are essential infrastructure for modern economies and national security. Their rising importance also makes them attractive targets in cyber and physical conflict. Protecting them is no longer just about safeguarding company operations, it is about protecting the systems society depends on every day. 

Grant Geyer is the chief strategy officer at Claroty.

The post Why data centers now belong on the critical infrastructure list appeared first on CyberScoop.

US government, allies publish guidance on how to safely deploy AI agents

By: Greg Otto
1 May 2026 at 12:49

Cybersecurity agencies from the United States, Australia, Canada, New Zealand and the United Kingdom jointly published guidance Friday urging organizations to treat autonomous artificial intelligence systems as a core cybersecurity concern, warning that the technology is already being deployed in critical infrastructure and defense sectors with insufficient safeguards.

The guidance focuses on agentic AI — software built on large language models that can plan, make decisions and take actions autonomously. In order for this software to function it needs to connect to external tools, databases, memory stores and automated workflows, allowing it to execute multi-step tasks without human review at each stage.

The guidance was co-authored by the U.S. Cybersecurity and Infrastructure Security Agency, the National Security Agency, the Australian Signals Directorate’s Australian Cyber Security Centre, the Canadian Centre for Cyber Security, New Zealand’s National Cyber Security Centre and the United Kingdom’s National Cyber Security Centre.

The agencies’ central message is that agentic AI does not require an entirely new security discipline. Organizations should fold these systems into the cybersecurity frameworks and governance structures they already maintain, applying established principles such as zero trust, defense-in-depth and least-privilege access.

The document identifies five broad categories of risk. The first is privilege: When agents are granted too much access, a single compromise can cause far more damage than a typical software vulnerability. The second covers design and configuration flaws, where poor setup creates security gaps before a system even goes live.

The third category covers behavioral risks, or cases where an agent pursues a goal in ways its designers never intended or predicted. The fourth is structural risk, where interconnected networks of agents can trigger failures that spread across an organization’s systems.

The fifth category is accountability. Agentic systems make decisions through processes that are difficult to inspect and generate logs that are hard to parse, making it difficult to trace what went wrong and why. The agencies also note that when these systems fail, the consequences can be concrete: altered files, changed access controls and deleted audit trails.

The guidance also flags prompt injection, where instructions embedded inside data can hijack an agent’s behavior to perform malicious tasks. Prompt injection has been a lingering problem with large language models, with some companies admitting that the problem may never be solved

Identity management gets significant attention throughout the document. The agencies recommend that each agent carry a verified, cryptographically secured identity, use short-lived credentials and encrypt all communications with other agents and services. For high-impact actions, a human should have to sign off, and the guidance is explicit that deciding which actions require that approval is a job for system designers, not the agent.

The agencies admit the security field has not fully caught up with agentic AI. Some risks unique to these systems are not yet covered by existing frameworks, and the guidance calls for more research and collaboration as the technology takes on a growing number of operational roles.

“Until security practices, evaluation methods and standards mature, organisations should assume that agentic AI systems may behave unexpectedly and plan deployments accordingly, prioritising resilience, reversibility and risk containment over efficiency gains,” the guidance reads. 

You can read the full guidance below.

The post US government, allies publish guidance on how to safely deploy AI agents appeared first on CyberScoop.

Former incident responders sentenced to 4 years in prison for committing ransomware attacks

30 April 2026 at 19:29

Two former cybersecurity professionals who moonlighted as cybercriminals, committing a series of ransomware attacks in 2023, were each sentenced to four years in prison, the Justice Department said Thursday.

Ryan Clifford Goldberg and Kevin Tyler Martin previously pleaded guilty to one of three charges brought against them in December and faced up to 20 years behind bars. 

Goldberg, who was a manager of incident response at Sygnia, and Martin, a ransomware negotiator at DigitalMint at the time, collaborated with Angelo John Martino III to attack victim computers and networks and use ALPHV, also known as BlackCat, ransomware to extort payments.

“These defendants exploited specialized cybersecurity knowledge not to protect victims, but to extort them,” Jason A. Reding Quiñones, U.S. attorney for the Southern District of Florida, said in a statement. “They used ransomware to lock down critical systems, steal sensitive data, and pressure American businesses into paying to regain access to their own information.”

Victims impacted by the attacks Goldberg and Martin participated in over a six-month period in 2023 included a medical company based in Florida, a pharmaceutical company based in Maryland, a California doctor’s office, an engineering company based in California and a drone manufacturer in Virginia. 

“They harmed important firms who were providing medical and engineering services. They played hardball with them, going so far as to cause the leak of patient data from a doctor’s office victim,” A. Tysen Duva, assistant attorney general of the Justice Department’s criminal division, said in a statement.

“These were supposed to be cybersecurity specialists who did good and helped businesses and people. Instead, they used their high-level cyber skills to feed their greed. Ransomware attackers like this should be punished and removed from society to serve their lawful sentences so they cannot harm others,” Duva added.

Goldberg and Martin received identical sentences for their crimes, despite significant differences surrounding their initial arrests. Martin was arrested without incident in October and freed on bond later that month.

Goldberg fled the country in June, 10 days after he was interviewed by the FBI. He was arrested Sept. 22 and ordered to remain in custody pending trial due to flight risk. 

Goldberg and his wife boarded a one-way flight to Paris from Atlanta on June 27 and remained in Europe until Sept. 21. When Goldberg flew directly from Amsterdam to Mexico City, he was arrested upon landing and deported to the United States.

“When Goldberg sought to flee abroad and escape prosecution, the FBI tracked him through 10 countries, demonstrating the lengths we will go to hold cyber criminals accountable and protect victims,” Brett Leatherman, assistant director of the FBI’s Cyber Division, said in a statement.

The cases against Golberg, Martin and their co-conspirator Martino showcase an extreme, albeit rare, example of the dark underbelly of ransomware negotiation as a practice. The pitfalls of ransomware negotiation are excessive and these backchannel negotiations, which remain largely unscrutinized, can go awry for various reasons.

Goldberg, 40, and Martin, 36, extorted a $1.3 million ransom payment from the medical company with Martino in May 2023, but did not receive ransom payments from their other victims.

Martino’s ransomware scheme went much further and caused significantly more damage, helping accomplices extort a combined $75.3 million in ransom payments. Five of Martino’s victims hired DigitalMint, which assigned the 41-year-old to conduct ransomware negotiations on their clients’ behalf — a rare position he exploited to play both sides.

He pleaded guilty earlier this month to sharing confidential information about victim organizations’ internal negotiating positions and insurance policy limits he gained from his work as a ransomware negotiator to extract the maximum ransom payment for himself and other BlackCat affiliates.

The five U.S.-based victims that hired DigitalMint and unwittingly tapped Martino to allegedly conduct ransomware negotiations with himself and his co-conspirators include a nonprofit and companies in the hospitality, financial services, retail and medical industries. All five of those victims paid a ransom.

Martino surrendered in March to the U.S. Marshals in Miami and was released on a $500,000 bond. He faces up to 20 years in federal prison and is scheduled for sentencing July 9.

Sygnia and DigitalMint are not accused of any knowledge or involvement in the crimes, and both previously said they fired their former employees once federal authorities alerted the companies to their alleged crimes. 

ALPHV/BlackCat was a notorious ransomware and extortion group linked to a series of attacks on critical infrastructure providers. The ransomware variant first appeared in late 2021, and was later used in dozens of attacks on organizations in the health care sector.

The group behind the ransomware strain also claimed responsibility for the February 2024 attack on UnitedHealth Group subsidiary Change Healthcare, which paid a $22 million ransom and became the largest health care data breach on record, compromising data on about 190 million people.

The post Former incident responders sentenced to 4 years in prison for committing ransomware attacks appeared first on CyberScoop.

❌
❌