
Two new extortion crews are speedrunning the Scattered Spider playbook

30 April 2026 at 11:00

A pair of persistent and problematic threat groups affiliated with The Com are actively targeting organizations across multiple critical infrastructure sectors for rapid data theft and extortion attacks, according to CrowdStrike.

The financially motivated attackers, which CrowdStrike tracks as Cordial Spider and Snarky Spider, have used voice-phishing and social engineering attacks to break into victims’ identity platforms and traverse SaaS environments since at least October 2025, the company said in a report Thursday, which it shared exclusively with CyberScoop prior to release.

Adam Meyers, senior vice president of counter adversary operations at CrowdStrike, said the subgroups composed of native English speakers primarily target U.S.-based organizations in the academic, aviation, retail, hospitality, automotive, financial services, legal and technology sectors.

This “new wave of ecrime threat actors” is closely aligned with Scattered Spider and linked to other subsets of The Com, including SLSH and ShinyHunters, Meyers said.

Because these attacks target identity systems and can expose data in other connected services beyond the initial breach point, it’s difficult to determine how many victims have been caught up in these campaigns. 

CrowdStrike’s warning closely follows research Palo Alto Networks’ Unit 42 and the Retail & Hospitality Information Sharing and Analysis Center shared last week about Cordial Spider’s string of attacks targeting organizations in the retail and hospitality industry, among others. 

Cordial and Snarky Spider have set lures via voice calls, text messages and emails directing targeted employees to phishing pages posing as their employer’s legitimate single sign-on page or primary identity provider, researchers said.

These phishing pages, which capture credentials, session keys or tokens, depending on the workflow, provide attackers an entry point into systems, which they exploit for widespread access across victims’ entire SaaS ecosystems.

Attackers use these initial hooks to remove victims’ existing multi-factor authentication devices and enroll their own, then delete emails and other alerts that would otherwise warn organizations of potential malicious activity, researchers said.

The data-theft-for-extortion campaigns share striking similarities, but CrowdStrike said each subgroup’s tactics, techniques and procedures are distinct. The variances include hours of operation, phishing domain providers, preferred operating systems, data leak sites, and the tools or devices used to register for multi-factor authentication.

The domain for BlackFile, Cordial Spider’s data-leak site, was offline as of Wednesday, according to Meyers.

CrowdStrike declined to put a range on the groups’ extortion demands, but Unit 42 previously said demands from Cordial Spider, which is also tracked as CL-CRI-1116 and UNC6671, are typically in the seven-figure range.

Some victims that didn’t pay extortion demands have been subjected to DDoS attacks, and Snarky Spider has used more aggressive follow-on harassment tactics, including the swatting of victim organizations’ employees, Meyers said. 

CrowdStrike said Cordial and Snarky Spider also use residential proxy networks — including Mullvad, Oxylabs, NetNut, 9Proxy, Infatica and NSOCKS — to evade IP-based detection and blend in with typical traffic. 

Residential proxy networks, which rely on IP addresses assigned to real home users, can serve a legitimate purpose, but researchers have been warning that unethical or outright criminal operators are abusing these networks to build and support botnets, cybercrime campaigns, espionage and other malicious activity.

Cordial and Snarky Spider haven’t achieved the impact or technical capability of Scattered Spider, but the groups share many commonalities and objectives, Meyers said. 

“They’ve kind of taken their playbook and they’re using a lot of their techniques, but we haven’t really seen the technical sophistication demonstrated by them that we saw from Scattered Spider,” he said. “It’s kind of the new generation of Scattered Spider.”


New cybersecurity guidance paves the way for AI in critical infrastructure 

By: Greg Otto
11 December 2025 at 07:00

Global cybersecurity agencies have issued the first unified guidance on applying artificial intelligence (AI) within critical infrastructure, signaling a major shift from theoretical debate to practical guardrails for safety and reliability.

The release of joint guidance on Principles for the Secure Integration of Artificial Intelligence in Operational Technology marks a meaningful milestone for critical infrastructure security because major global cybersecurity agencies, including CISA, the FBI, the NSA, the Australian Signals Directorate’s Australian Cyber Security Centre, and other partners, have aligned on a shared direction. As AI adoption accelerates across operational environments, this document moves us from theory to practice. It acknowledges AI’s promise while making clear that it also “introduces significant risks—such as operational technology (OT) process models drifting over time or safety-process bypasses” that operators must actively manage to ensure reliability.

The guidance draws a firm distinction between safety and security, emphasizing that large language models should not be used to make safety decisions for OT environments, and urging operators to adopt push-based architectures with strong boundaries, maintain human-in-the-loop oversight, and demand transparency from vendors embedding AI into industrial systems. It frames AI as an adviser rather than a controller, reinforcing that resilience depends on skilled operators, clear validation procedures, and visibility into how AI models interact with the physical world.

A central contribution of this guidance is its clear distinction between safety and security in the AI era. Protecting the integrity and availability of systems is not the same as preventing physical harm, and AI complicates this relationship in ways many CISOs are now expected to navigate. The guidance recognizes that AI’s non-deterministic nature can lead to unpredictable behaviors or hallucinations. This is why it draws an explicit line: “AI such as LLMs almost certainly should not be used to make safety decisions for OT environments.” 

The message is not a rejection of innovation. It is a pragmatic call to preserve the safety foundations that operational technology depends on. For example, in a water treatment facility, a generative model might misinterpret sensor anomalies and make a recommendation that inadvertently adjusts chemical dosing. Even if security controls are intact, the safety implications can be immediate and physical.

The architecture recommendations extend that safety-first mindset. The guidance maps where AI belongs within the OT hierarchy with clarity. Predictive Machine Learning can strengthen operations at levels 0 through 3, such as forecasting pump failures based on vibration patterns or identifying anomalies in turbine exhaust temperatures. Meanwhile, large language models are better suited for business functions at levels 4 and 5 where they assist with documentation, work order generation, or regulatory reporting. 
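
To make that lower-level use concrete, here is a minimal sketch, not drawn from the guidance itself: a simple statistical check that flags a pump whose latest vibration reading sits far outside its recent baseline. The readings, threshold, and units are invented for illustration.

```python
# Illustrative only: a simple "out of family" check on pump vibration, the kind
# of low-level predictive analytics the guidance places at levels 0 through 3.
from statistics import mean, stdev

def vibration_alert(history_mm_s: list[float], latest_mm_s: float, z_threshold: float = 3.0) -> bool:
    """Return True when the latest vibration reading deviates sharply from recent history."""
    if len(history_mm_s) < 10:
        return False  # not enough baseline data to judge
    mu, sigma = mean(history_mm_s), stdev(history_mm_s)
    if sigma == 0:
        return latest_mm_s != mu
    return abs(latest_mm_s - mu) / sigma > z_threshold

# Steady readings around 2.1 mm/s, then a jump to 4.8 mm/s
baseline = [2.0, 2.1, 2.2, 2.0, 2.1, 2.1, 2.2, 2.0, 2.1, 2.2]
print(vibration_alert(baseline, 4.8))  # True: worth a human look, not an automatic action
```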

The guidance also cautions against introducing new attack vectors. To reduce inbound risk, agencies recommend “push-based or brokered architectures that move required features or summaries out of OT without granting persistent inbound access”. This pattern prevents scenarios where an adversary could exploit a cloud-hosted AI system to pivot directly into OT networks. In other words, AI should act as an advisor rather than a controller, supporting operations without becoming an unseen entry point for adversaries.
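
The guidance describes this pattern rather than an implementation, but a hedged sketch helps show its shape: the OT side initiates every connection outbound, publishes only pre-approved summary fields to a broker, and exposes no listening endpoint that an external AI service could reach back into. The broker URL, field names, and transport here are assumptions for illustration.

```python
# Hypothetical sketch of the push-based pattern described in the guidance: the OT-side
# gateway initiates every connection outbound and publishes only an approved summary;
# nothing listens for inbound access into the OT network.
import json
import urllib.request

APPROVED_FIELDS = {"asset_id", "window_start", "window_end", "avg_vibration_mm_s", "alarm_count"}
BROKER_URL = "https://broker.example.internal/ot-summaries"  # assumed broker endpoint on the IT/DMZ side

def build_summary(raw_ot_record: dict) -> dict:
    """Keep only the pre-approved summary fields from an OT record."""
    return {k: v for k, v in raw_ot_record.items() if k in APPROVED_FIELDS}

def push_summary(raw_ot_record: dict) -> int:
    """Publish the summary outbound to the broker; AI tooling reads from the broker, never from OT."""
    payload = json.dumps(build_summary(raw_ot_record)).encode("utf-8")
    request = urllib.request.Request(
        BROKER_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        return response.status
```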

Importantly, the document looks beyond systems to the humans who operate them. It warns that “heavy reliance on AI may cause OT personnel to lose manual skills needed for managing systems during AI failures or system outages.” For critical infrastructure, this is not theoretical. Many power plant and water utility operators are already experiencing a loss of skilled workers as employees retire. The guidance encourages organizations to train operators not only on how to use AI, but also on how to challenge it. For example, personnel should be able to validate AI outputs using alternative sensors and observations to confirm that digital recommendations align with physical reality. A compressor temperature anomaly flagged by an ML model, for example, should still be correlated with on-floor readings by humans before operators take corrective action.
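
One way to picture that cross-check, strictly as a sketch with invented sensor names and tolerances: the model’s flag is acted on only after an independent reading, taken outside the model’s data path, agrees with it.

```python
# Hypothetical cross-check: an ML-flagged compressor temperature is only acted on
# after an independent, on-floor gauge reading agrees with it within a tolerance.
def confirm_before_action(ml_flagged_temp_c: float,
                          independent_gauge_temp_c: float,
                          tolerance_c: float = 5.0) -> str:
    """Compare the model's reported temperature against a manual gauge reading."""
    if abs(ml_flagged_temp_c - independent_gauge_temp_c) <= tolerance_c:
        return "confirmed: escalate to corrective action per procedure"
    return "not confirmed: hold action and investigate the sensor or model discrepancy"

print(confirm_before_action(ml_flagged_temp_c=96.0, independent_gauge_temp_c=94.5))
```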

The guidance also recommends that critical infrastructure owners develop strong procurement strategies that take AI into account. Organizations are encouraged to “demand transparency and security considerations from OT vendors regarding how AI technologies are embedded into their products.” This includes requiring SBOMs (or AIBOMs) that specify where models are sourced and hosted, and ensuring that vendors disclose whether they are training those models on an operator’s sensitive data.

Many CISOs are finding that AI-enabled features are being added quietly into third-party software and SaaS without clear disclosure. This guidance supports a shift toward secure by demand, giving operators the clarity to make informed choices before AI features are embedded deep into their environments.

Finally, the document reaffirms that accountability sits with people. It reminds us that “ultimately, humans are responsible for functional safety.” The recommended “human in the loop” model ensures that AI informs decisions but does not replace human judgment. This approach mitigates challenges such as “model drift” and avoids the risk of blindly executing “black box” outputs in environments where the stakes include real human safety. For example, as refinery equipment ages, model drift can cause a machine learning model to predict failure thresholds that are too low, making it critical for operators to regularly validate the model over the asset’s lifetime. 
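
A rough sketch of what that periodic validation could look like, using made-up numbers and a simple error metric rather than any method named in the guidance: compare recent predictions against observed outcomes and flag the model for revalidation once its error grows well past the level at which it was originally accepted.

```python
# Illustrative drift check: compare recent predictions against observed outcomes and
# flag the model for revalidation when its error creeps well past the baseline it was
# accepted at during initial validation.
from statistics import mean

def drift_detected(predicted: list[float], observed: list[float],
                   baseline_mae: float, degradation_factor: float = 1.5) -> bool:
    """Flag drift when the recent mean absolute error exceeds the accepted baseline by a margin."""
    recent_mae = mean(abs(p - o) for p, o in zip(predicted, observed))
    return recent_mae > baseline_mae * degradation_factor

# The model was originally validated at roughly 2.0 units of error; recent error is about 4.0
print(drift_detected([101, 98, 105], [97, 95, 100], baseline_mae=2.0))  # True: revalidate
```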

As we move forward, the path is both challenging and hopeful. This shared global guidance gives operators a clearer map, and it reinforces that resilience grows when humans and machines work in partnership. A practical next step is to review where AI already touches your OT landscape, then establish or refresh validation procedures that keep operators engaged and confident. You can also begin early conversations with vendors about transparency requirements, which helps set expectations before new capabilities are deployed. In a landscape shaped by rapid innovation, these proactive actions will help ensure that safety and trust remain at the center of progress.

Diana Kelley is the chief information security officer at Noma Security. She has also held senior leadership roles at major technology and cybersecurity companies, including Cybersecurity Field CTO at Microsoft, Global Executive Security Advisor at IBM Security, and GM at Symantec.


When trust turns toxic: Lessons from the Salesloft Drift incident

By: Greg Otto
24 November 2025 at 06:00

The recent Salesloft Drift breach offered a sobering reminder of how easily trust can be weaponized in today’s SaaS and AI-integrated environments. In this incident, hackers exploited the Drift chatbot, stole OAuth tokens, and used them to obtain data from CRM systems before the tokens could be revoked. In the wake of the incident, many deemed the weak spot to be the tokens, but they are missing the bigger issue: identity and permission sprawl, and the misuse of excessive trust.

Inside the Salesloft Drift Attack

With Drift, attackers used OAuth tokens to make legitimate API calls against CRM environments, and since the tokens were valid, the fraudulent activity didn’t raise any flags. To everyone watching, it was simply business as usual. Organizations later confirmed that data was stolen before the tokens could be revoked, including sensitive business records, contact information, support data, and, in some cases, embedded credentials, across more than 700 organizations using the compromised integration with Salesforce.

And while those impacted have traced the chain of compromise, the next step is to address the larger underlying problem of the chatbots and the excessive scopes they are given. 

Consider the following:

  • Exceedingly Broad Scopes: The chatbots don’t just have access to what they need; they have access to everything, including users’ credentials.
  • Ongoing Authorization: Chatbot credentials often remain valid indefinitely in the name of speed, in essence creating a permanent open door.
  • Standing Privileges: Permanent credentials mean chatbots stay connected even when not in use, making them targets ready to be exploited at any time.

Add it all up, and you can see how a single compromised credential can create significant exposure. And the risk is only growing, as SaaS and AI-powered integrations multiply the credentials and access paths that need protecting. Still, businesses treat integrations and agents as background utilities that have no ownership, governance, or lifecycle management. Ironically, it’s the absence of these controls that gives them greater operating privileges and reach than any human would ever be granted, while making them ideal targets for attackers.

The identity and access wake-up call

Whether or not an organization was impacted by Drift, it’s time to reassess all SaaS and AI integration footprints. This includes verifying every connected app, API bridge, and automation workflow. 

Start with addressing hygiene, including the following:

  • Remove and rotate any old tokens, as well as those with excessive permissions, especially those connected to third-party integrations. Where possible, static tokens should be eliminated entirely in favor of short-lived tokens with a narrow window of operation.
  • Replace blanket-scoped permissions with narrowly defined access that is tied to specific roles and actions. 
  • Audit logs and event data for unusual exports, API surges, or unexpected user agents. These checks can help surface silent compromises before they grow; a rough sketch of one such check follows this list.
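
Here is that rough sketch, covering the last item only and using an invented event schema rather than any specific SaaS platform’s audit format: a single pass over API events that flags hours with an export spike or a user agent absent from the established baseline.

```python
# Hypothetical log sweep: flag hours with an abnormal export count or a user agent
# that never appears in the established baseline. Event fields are assumed, e.g.
# {"hour": "2025-11-20T14", "action": "export", "user_agent": "python-requests/2.31"}.
from collections import Counter

def suspicious_hours(events: list[dict], known_agents: set[str], export_threshold: int = 500) -> list[str]:
    """Return the hours worth a closer look, sorted for readability."""
    exports_per_hour = Counter(e["hour"] for e in events if e["action"] == "export")
    flagged = {hour for hour, count in exports_per_hour.items() if count > export_threshold}
    flagged |= {e["hour"] for e in events if e["user_agent"] not in known_agents}
    return sorted(flagged)
```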

This tactical cleanup is not a one-time exercise. Everything must be re-evaluated on an ongoing basis. Even then, your work is not done. 

From static access to runtime authorization

The next generation of security requires using adaptive access models such as Zero Standing Privileges (ZSP), where “always-on” automation is replaced by dynamic, ephemeral identities and permissions that are enforceable at runtime.  With ZSP, every integration or AI agent receives temporary, just-in-time access that is created at runtime, bound by clear time-to-live parameters and contextual conditions. When the task ends, the permission disappears.
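
The mechanics can be sketched in a few lines. This is a toy model of the idea, not any particular vendor’s API: a grant is minted at runtime with one narrow scope and a time-to-live, and every later check fails once that window closes or the requested scope does not match.

```python
# Toy model of zero standing privileges: access is minted just in time, scoped to
# one task, and expires on its own, so nothing is left standing between tasks.
import time
import secrets
from dataclasses import dataclass

@dataclass
class EphemeralGrant:
    token: str
    scope: str          # e.g. "crm:read:contacts"
    expires_at: float   # epoch seconds

def issue_grant(scope: str, ttl_seconds: int = 300) -> EphemeralGrant:
    """Create a short-lived, narrowly scoped grant at runtime."""
    return EphemeralGrant(token=secrets.token_urlsafe(32), scope=scope,
                          expires_at=time.time() + ttl_seconds)

def is_allowed(grant: EphemeralGrant, requested_scope: str) -> bool:
    """Allow only an exact scope match inside the grant's time window."""
    return requested_scope == grant.scope and time.time() < grant.expires_at

grant = issue_grant("crm:read:contacts", ttl_seconds=300)
print(is_allowed(grant, "crm:read:contacts"))  # True while the window is open
print(is_allowed(grant, "crm:export:all"))     # False, outside the granted scope
```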

Because these are enabled through runtime authorization, businesses can easily verify not only who or what is making a request, but also why, for how long, and under what conditions. When paired with continuous monitoring, organizations can quickly spot anomalous activities and revoke privileges instantly when behavior deviates from policy.

Treat all integrations as identities

Another key to success is treating all integrations, whether they are human, machine, agentic AI, or AI-driven assistants, equally. Each should have a distinct identity, a defined purpose, an owner, and lifecycle stages. These controls give teams critical visibility across all identities and, when irregular activities are spotted, the answers to critical questions: who had access, what they did, and for how long.
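
One lightweight way to picture that, strictly as an illustration with invented field names: a registry entry per integration that records its identity, owner, purpose, and lifecycle state, so those questions have a place to be answered from.

```python
# Illustrative registry entry: every integration, whether human-driven or AI-driven,
# gets an identity with an accountable owner, a stated purpose, and a lifecycle state.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class IntegrationIdentity:
    identity_id: str
    kind: str              # "human", "service", or "ai_agent"
    owner: str             # accountable person or team
    purpose: str           # why this integration exists
    lifecycle_state: str   # "provisioned", "active", "suspended", or "retired"
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

chatbot = IntegrationIdentity(
    identity_id="website-chatbot-01",
    kind="ai_agent",
    owner="sales-ops@example.com",
    purpose="website visitor chat with limited CRM contact lookup",
    lifecycle_state="active",
)
```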

Pay special attention to AI-driven tools, ensuring that agents operating on behalf of humans only act within the parameters set by their sponsor. Helpful tools here include allowlisting and runtime guardrails that can keep agents in their assigned lane and, in doing so, prevent them from veering off and initiating unauthorized actions. This includes those that have been compromised or manipulated through prompt injection.
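
A minimal sketch of that guardrail idea, with invented action names: the agent may request anything, but only actions on its sponsor-approved allowlist execute, every decision is logged, and anything pushed outside the lane, including via prompt injection, is simply refused.

```python
# Sketch of an allowlist guardrail: the agent's human sponsor defines the only
# actions it may take; anything else, including actions coaxed out of it through
# prompt injection, is refused and recorded for review.
ALLOWED_ACTIONS = {"lookup_contact", "create_support_ticket"}  # set by the agent's sponsor

def guarded_execute(action: str, perform, audit_log: list) -> bool:
    """Run the action only if it is on the allowlist; log every decision either way."""
    permitted = action in ALLOWED_ACTIONS
    audit_log.append({"action": action, "permitted": permitted})
    if permitted:
        perform()
    return permitted

log: list = []
guarded_execute("lookup_contact", lambda: print("contact looked up"), log)  # allowed
guarded_execute("export_all_records", lambda: print("exporting"), log)      # refused
print(log)
```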

The bigger picture: trust as a dynamic perimeter

The Drift incident wasn’t an anomaly—it was a preview. As AI-driven automations and SaaS integrations multiply, every organization will face the same question: can you truly see, control, and verify who or what has access to your data at any given moment?

Security can no longer depend on static controls or the assumption that trusted systems will stay trustworthy. The future belongs to those who treat identity as the new perimeter and access as a living, breathing condition—not a one-time approval. When every token, credential, and agent is governed by context, time, and intent, trust becomes measurable—and defensible.

Because in a world where automation never sleeps, trust can’t either.

Art Poghosyan is the CEO of Britive, a cloud privileged access management software company. 

