
Everyone’s building AI agents. Almost nobody’s ready for what they do to identity.

By: Greg Otto
30 April 2026 at 06:00

Anthropic recently announced that it would not release Mythos, its most powerful AI model, to the public. The model discovered thousands of previously unknown software vulnerabilities — flaws that had sat undetected in major operating systems and web browsers for nearly three decades. Anthropic said the model was too dangerous to deploy broadly because the same capabilities that let it find and fix security flaws could let attackers exploit them. A single AI agent, the company warned, could scan for weaknesses faster and more persistently than hundreds of human hackers.

That decision tells you something important about where we are. The same AI systems that companies are racing to deploy as autonomous assistants — scheduling your appointments, writing your code, managing your workflows — are also capable of probing digital defenses at a speed and scale no human team can match. And most of the systems they’d be probing still rely on a security model designed for an era when a person sat behind every keyboard. 

Think of it like a building where every door has a lock, but the locks were all designed to recognize human hands. Now the building is full of robots — some of them authorized couriers, some of them intruders — and the locks can’t tell the difference. 

Not long ago, you could sit at your desk, glance at the sticky note on your monitor for your username and password, type them in, and grab a cup of coffee while your browser opened a doorway to the rest of the world. Every layer of security that followed — passwords, security questions, biometric scans, two-factor authentication — grew out of a single bedrock assumption: a person was on the other end. 

AI agents break that assumption from two directions at the same time. Legitimate agents need credentials to act like a human. OpenAI’s Operator navigates websites on your behalf. Google’s Gemini can plan your next family vacation while you sleep. Visa recently unveiled Intelligence Commerce Connect, a platform that lets AI agents do the shopping for consumers. These aren’t demos or hot takes from a tech conference floor. They’re shipping products that act on behalf of real people—and to do that, they need your identity. 

At the same time, adversaries can fake humanity at scale. The same AI that can convincingly play a helpful assistant can also be a malicious impersonator. Attackers don’t break in, they log in—through shared credentials, hiring pipelines, vendor onboarding portals, and collaboration tools. Most organizations still treat identity as a login problem—something IT handles with stronger passwords or additional authentication steps layered on top of existing systems. The harder challenge now is knowing who, or what, you’ve already let in.

That distinction is collapsing just as digital systems become more autonomous. 

When that distinction blurs, the damage is concrete. If a procurement workflow cannot distinguish between a human manager and an AI impersonator, purchase orders go out under false authority. When compliance logs cannot determine how a decision was authorized — by a person or a bot — the accountability chain falls apart. Regulators and customers will not accept “we’re not sure” as an explanation. 

The economics have tilted sharply toward the attacker. Sophisticated fraud once required coordination, with people researching targets, crafting messages, and adjusting tactics in real time. AI agents eliminate those constraints. One person can now supervise an army of autonomous systems, each running a valid persona across multiple interactions simultaneously. A single operator can field a hundred synthetic employees for the cost of one real salary. The barrier to large-scale impersonation is no longer skill or manpower. It is access to a capable model and a set of stolen credentials. 

Stronger identity controls do carry a cost. Every additional verification step is a moment when a customer might abandon a transaction, or an employee might lose patience with a security protocol. The goal is not to shut down automation. It is to make sure the systems acting in your name are authorized to do so. 

Some organizations are adapting. They are treating AI agents less like software and more like new employees, cataloging every agent in their environment, limiting permissions, requiring human approval for sensitive actions. They are moving beyond passwords to phishing-resistant authentication that binds access to a known device and a verified user. They are building behavioral baselines so that when a customer service bot suddenly queries a financial database, or a new hire accesses source code on day one, alarms go off. 
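The practices described above — cataloging agents, limiting permissions, and requiring human approval for sensitive actions — can be sketched as a simple in-process registry. This is an illustrative sketch only, not any vendor's API; the agent names, scopes, and sensitive-action list are all hypothetical.

```python
# Minimal sketch of an AI-agent identity registry: every agent is cataloged
# with a human owner and explicit scopes, and sensitive actions require a
# human in the loop. All names here are illustrative assumptions.
from dataclasses import dataclass, field

# Actions that should never run without explicit human approval (assumed list).
SENSITIVE_ACTIONS = {"transfer_funds", "export_customer_data"}

@dataclass
class Agent:
    name: str
    owner: str                            # the human accountable for this agent
    scopes: set = field(default_factory=set)

class AgentRegistry:
    def __init__(self):
        self._agents = {}

    def register(self, agent: Agent):
        self._agents[agent.name] = agent  # cataloging: no unregistered agents act

    def authorize(self, agent_name: str, action: str, human_approved: bool = False) -> bool:
        agent = self._agents.get(agent_name)
        if agent is None:
            return False                  # unknown actor: deny by default
        if action not in agent.scopes:
            return False                  # least privilege: scope must be explicit
        if action in SENSITIVE_ACTIONS and not human_approved:
            return False                  # sensitive actions need human sign-off
        return True
```

The design choice worth noting is the default-deny posture: an agent that was never cataloged, or an action that was never scoped, simply fails authorization rather than falling through to legacy credentials.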

Nobody keeps their password on a sticky note anymore (I hope). But the assumption behind the sticky note, that a human hand would type it in, still underpins most of the systems we depend on. These systems hold your medical records, process your mortgage, and let an AI assistant rebook your flight. In a world where AI agents act faster, more persistently, and more convincingly than any person, that assumption is the vulnerability. 

The organizations that can verify identity continuously — not just at the door, but at every action, for every actor, human or machine — will have a durable advantage. The ones that cannot will find out what ambiguity costs. 

Devin Lynch is Senior Director of the Paladin Global Institute and a former Director for Policy and Strategy Implementation at the Office of the National Cyber Director. 

The post Everyone’s building AI agents. Almost nobody’s ready for what they do to identity. appeared first on CyberScoop.

Stolen Logins Are Fueling Everything From Ransomware to Nation-State Cyberattacks

31 March 2026 at 11:04

Report shows how industrialized credential theft underpins ransomware, SaaS breaches, and geopolitical attacks, shifting security focus from prevention to detecting misuse of legitimate access.

The post Stolen Logins Are Fueling Everything From Ransomware to Nation-State Cyberattacks appeared first on SecurityWeek.

Salesforce issues new security alert tied to third customer attack spree in six months

11 March 2026 at 10:12

Threat hunters and a collection of unconfirmed victims are responding to a series of attacks targeting Salesforce customers, which the vendor disclosed in a security advisory Saturday. 

“Salesforce is actively monitoring threat activity targeting public-facing Experience Cloud sites, including attempts to take advantage of overly permissive guest user configurations,” the company said in the alert.

The campaign marks the third widespread attack spree targeting Salesforce customers in about six months. 

The number of victims ensnared by the latest attacks is unverified, but ShinyHunters, the threat group asserting responsibility for the attacks, claims about 100 companies have already been impacted. 

Researchers told CyberScoop they are confident the threat group behind the campaign is associated with ShinyHunters, an outfit that’s previously stolen data from Salesforce instances for extortion attempts.

Salesforce did not attribute the attacks, but pinned blame on a “known threat actor group,” adding that the issue is not due to a vulnerability in the company’s platform.

The company said the threat activity reflects a broader trend of identity-based targeting, in this case customer-configured guest user settings that expose publicly accessible Experience Cloud sites to potential attacks.

“We are aware of a threat actor attempting to identify misconfigurations within Salesforce Experience Cloud instances,” Charles Carmakal, chief technology officer at Mandiant Consulting, said in a statement. “We are working closely with Salesforce and our customers to provide the necessary telemetry and detection rules to mitigate potential risk.”

Salesforce said the threat actor is using a modified version of the Mandiant-developed open-source tool AuraInspector to scan for public-facing Experience Cloud sites and steal data from instances with a guest user profile. 

This setting is designed to provide unauthenticated users access to data intended for public consumption. Yet guest profiles with excessive permissions allow attackers to view additional data by directly querying Salesforce CRM objects without logging in, the company explained.

Salesforce did not say when or how it became aware of the latest campaign targeting its customers, nor how many companies have already been impacted. “We don’t have anything further to add at this time,” said Nicole Aranda, senior manager of corporate communications at Salesforce. 

The company advised customers to ensure guest user configurations are properly restricted.

“Any system exposed to the internet must be configured with the expectation that it will be continuously scanned,” Shane Barney, chief information security officer at Keeper Security, said in an email.

“At its core, this is an access governance issue,” he added. “Guest accounts, service accounts and API integrations must be treated with the same discipline as privileged users. Applying least privilege, restricting API access and continuously auditing permissions are foundational security controls.”
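The continuous permission auditing Barney describes can be reduced to a small check: given the guest profile's object-level permissions (assumed here to be already exported, for example from Salesforce Setup, into a plain dict), flag any object readable without authentication that is not on an approved public allowlist. The object names and allowlist below are hypothetical.

```python
# Hedged sketch of a guest-profile permission audit. Assumes object
# permissions were already exported into a dict; object names and the
# allowlist are illustrative, not real Salesforce schema.
PUBLIC_ALLOWLIST = {"Knowledge_Article", "FAQ"}  # objects approved for guest access

def audit_guest_permissions(object_perms: dict) -> list:
    """Return CRM objects the guest profile can read that are not
    explicitly approved for unauthenticated access."""
    findings = []
    for obj, perms in object_perms.items():
        if perms.get("read") and obj not in PUBLIC_ALLOWLIST:
            findings.append(obj)          # readable without login, not approved
    return sorted(findings)
```

Run periodically, a check like this turns "ensure guest user configurations are properly restricted" from a one-time setup task into the kind of continuous audit the advisory calls for.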

Salesforce customers confronted a pair of attack sprees involving third-party vendors last year. Google Threat Intelligence Group at the time said it was aware of more than 200 potentially affected Salesforce instances linked to malicious activity in Gainsight applications connected to Salesforce customer environments in November.

A more extensive downstream attack spree discovered in August impacted more than 700 companies that integrated the AI chat agent Salesloft Drift into their Salesforce environments. ShinyHunters or threat clusters affiliated with the extortion group were involved in both of those campaigns as well.

The post Salesforce issues new security alert tied to third customer attack spree in six months appeared first on CyberScoop.

FBI says even in an AI-powered world, security basics still matter

10 March 2026 at 15:31

Artificial intelligence may be enhancing cyber threats, but the defensive approach to those AI-amplified attacks remains the same, a top FBI official said Tuesday.

“We have seen actors both criminal and nation-state, they’re absolutely using AI to their advantage,” said Jason Bilnoski, deputy assistant director at the FBI’s cyber division. “But the way attacks unfold have not changed. Cyberattacks still follow basic steps. It just becomes an incredible speed now.”

The best way to deal with those attacks is to implement all the traditional defenses, like those the FBI has been emphasizing as part of its Operation Winter SHIELD media campaign, he said.

“Don’t worry about the speed and capability” of AI attacks, Bilnoski said at a Billington Cybersecurity conference. “If you’re focused on the basics, it’ll help prevent the actual intrusion from occurring.”

It’s a message that the acting director of the Cybersecurity and Infrastructure Security Agency, Nick Andersen, also shared at the conference. Sophisticated attackers are out there, he said, but the agency’s recent binding operational directive for federal agencies to get rid of unsupported edge devices was a way of shoring up basic vulnerabilities.

“We continue to see any non-zero-days continuing to be exploited within this environment,” he said. “The very least that we can do is harden that edge and make it just a little bit more difficult to take advantage in that regard.” 

His advice to state and local officials was to take a “back to the basics” approach, such as adopting multi-factor authentication.

Bilnoski offered further warnings about the threat, too.

“Identity is the new perimeter. You’re hunting legitimate traffic on your network,” he said. “So we’re no longer seeing malware drop. We’re no longer seeing these very noisy TTPs [tactics, techniques and procedures]. It’s legitimate credentials moving laterally throughout the network, as if it’s a legitimate user on the network. You need to hunt the adversaries as if they’re already on your network, because that’s the type of activity you’re looking for.”
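The hunting Bilnoski describes is, at its simplest, comparing authenticated activity against a behavioral baseline: valid credentials touching resources that user has never touched before. A minimal sketch, assuming activity logs have been reduced to (user, resource) pairs — the event shape and threshold are assumptions, not any specific product's detection logic:

```python
# Illustrative sketch of baseline-driven hunting: flag authenticated
# sessions that access resources outside a user's historical footprint.
# Log format is an assumption for the example.
from collections import defaultdict

def build_baselines(history):
    """history: iterable of (user, resource) pairs from known-good activity."""
    baselines = defaultdict(set)
    for user, resource in history:
        baselines[user].add(resource)
    return baselines

def hunt(events, baselines):
    """Return events where a valid credential touches a resource the user
    has never accessed -- the 'legitimate traffic' worth hunting."""
    return [(u, r) for u, r in events if r not in baselines.get(u, set())]
```

The point of the sketch is the inversion Bilnoski describes: nothing here looks for malware or noisy TTPs; every flagged event is a successful, authenticated action that is merely out of character.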

The post FBI says even in an AI-powered world, security basics still matter appeared first on CyberScoop.

Unit 42: Nearly two-thirds of breaches now start with identity abuse

17 February 2026 at 06:00

Identity is still the primary entry point for cyberattacks, according to Palo Alto Networks’ threat intelligence firm Unit 42. In its annual incident response report released Tuesday, Unit 42 found that identity-based techniques accounted for nearly two-thirds of all initial network intrusions last year. 

Social engineering was the leading attack method, accounting for one-third of the 750 incidents Unit 42 responded to in the one-year period ending in September 2025. Attackers also bypassed security controls with compromised credentials, brute-force attacks, overly permissive identity policies and insider threats, researchers said.

The persistent pitfalls of identity extended beyond initial access, with an identity-related element playing a critical role in nearly 90% of all incidents last year. Unit 42’s report highlights the explosive impact of identity abuse, and pins much of the problem on poor security controls and misconfigurations across interconnected tools and systems.

“Across the attack lifecycle, the biggest thing is that once you have an identity, you’ve got everything, you’ve got the key and you’re in,” Sam Rubin, senior vice president of consulting and threat intelligence at Unit 42, told CyberScoop. “From a defense standpoint, enterprises are still not very good at finding the signal in the noise, essentially the detection when an identity-based tactic is used because there isn’t unauthorized access per se from a technical telemetry standpoint, and it becomes a harder detection mechanism.”

Vulnerability exploits, an ever-moving target, were still prolific and accounted for 22% of initial intrusions across attacks, but humans remain the weakest link, Rubin said. 

The rise of machine-based identities and AI agents, which require an identity to take action, is expanding the attack surface for cybercriminals. Identity challenges are manifesting in the software supply chain as well, as API access and SaaS integrations become another weak link and way in for attackers if access keys aren’t properly controlled.

An attack on Salesloft Drift customers last summer highlighted how tightly integrated services can unravel and expose victims that are multiple layers removed from the vendor. More than 700 organizations were impacted directly, but Salesloft Drift’s integrations with dozens of third-party tools opened many additional paths of potential compromise. 

More broadly, attackers are jumping from branch offices into a victim’s headquarters or data centers because too many accounts remain over-permissioned and cloud-based accounts are established with too much privilege or a lack of segmentation, Rubin said.

These gaps allow threat groups to turn break-ins into significant attacks. 

“We just see this time and again that there could have been better identity-based practices that would have constrained the blast radius, even if it didn’t stop the initial access,” Rubin said. 

“It’s a problem of signal and noise,” he added. “Think about a global enterprise and all of this authenticated, legitimate activity happening every day. How do you see and identify the one instance where a user is already authenticated but doing something that they shouldn’t do?”

Large and older organizations are at a greater disadvantage, Rubin said. Over time, their technology stacks have evolved to include legacy systems acquired through various business deals. This leaves IT teams managing a patchwork of disparate systems that are poorly integrated, creating significant security vulnerabilities. 

“We forgot as defenders to consider the entire attack chain, because too often we see the defense happens in silos,” Rubin said, adding that attacks that pivot from endpoints to cloud-based services are commonly missed. 

Each of those jumps gives defenders a chance to thwart attacks. Nearly 90% of the attacks Unit 42 investigated last year involved malicious activity across multiple attack surfaces.

Financially motivated attacks accounted for most of the 750 incidents Unit 42 responded to last year. Unit 42 did not say how many of those attacks resulted in payments, but it said median payments increased 87% year-over-year to $500,000 last year. 

Attackers continue to pick up speed as well, exfiltrating data from victim networks in a median of two days. They stole data in under one hour in 22% of the attacks Unit 42 responded to last year.

Unit 42’s annual look-back spotlights critical areas of concern and attack trends that continue to take root, yet it’s not comprehensive. The report’s visibility is limited to incidents that went from bad to worse and prompted victims to seek help from Unit 42. 

“The hardest thing about incident response in cybersecurity,” Rubin said, “is there is no one global spot for how much is going on.”

The post Unit 42: Nearly two-thirds of breaches now start with identity abuse appeared first on CyberScoop.

Review of the Data Brokers

By: BHIS
5 September 2017 at 11:26

Jordan Drysdale // The following content is loosely based on a presentation I gave at BSides Denver. After speaking at BSides Denver, one of the audience members spent some time […]

The post Review of the Data Brokers appeared first on Black Hills Information Security, Inc..
