Securing and governing the rise of autonomous agents

In this blog, you will hear directly from Corporate Vice President and Deputy Chief Information Security Officer (CISO) for Identity, Igor Sakhnov, about how to secure and govern autonomous agents. This blog is part of a new ongoing series where our Deputy CISOs share their thoughts on what is most important in their respective domains. In this series you will get practical advice, forward-looking commentary on where the industry is going, things you should stop doing, and more.

By 2026, enterprises may have more autonomous agents than human users. Are we ready to secure and govern them?

2024 was a year defined by learning about generative AI. Organizations were experimenting with it: testing its boundaries and exploring its potential. In 2025, organizations moved into execution. Autonomous agents are no longer theoretical. They’re now being deployed across development, operations, and business workflows.

This shift is being driven by platforms like Microsoft Copilot Studio and Azure AI Foundry and accelerated by patterns like Model Context Protocol (MCP) and Agent-to-Agent (A2A) interactions. These agents are evolving from tools into digital actors—ones capable of reasoning, acting, and collaborating.

That evolution brings real value. But it also introduces a new class of risk—and with it, a new set of responsibilities.

The rise of the agent: What’s here and what’s next

To understand the rise of autonomous agents, it’s worth starting at the beginning. Generative AI first captured the spotlight with models that could produce human-like text, code, and imagery. Meanwhile, researchers were also advancing autonomous systems designed to perceive, decide, and act independently. As these two domains converged, a new class of AI emerged—agents capable not just of generating output, but of taking action towards goals with limited human input. Today, these agents are beginning to surface across each layer of the cloud stack, each designed to tackle a different level of complexity:

  • Software as a service (SaaS)-based agents, often built using low-code or no-code platforms like Copilot Studio, are enabling business users to automate tasks with minimal technical support.
  • Platform as a service (PaaS)-based agents support both low-code and pro-code development, offering flexibility for teams building more sophisticated solutions. Azure AI Foundry is a good example.
  • Infrastructure as a service (IaaS)-based agents are typically deployed in virtual networks (VNETs), virtual private clouds (VPCs), or on-premises environments, often as custom models or services integrated into enterprise infrastructure.

Each of these categories includes both custom-built first-party agents and third-party independent software vendor (ISV) agents, all of which are rapidly multiplying across the enterprise. As organizations embrace this diversity and scale, the number of agents will soon outpace human users—making visibility, oversight, and robust governance not just important, but essential.

The new risk landscape: Why agents are different

While autonomous agents unlock new levels of efficiency, scalability, and continuous operation for organizations, they also introduce a fundamentally different risk profile:

  • Self-initiating: Agents can act without direct human prompts, enabling automation and responsiveness at scale—but this autonomy also means they may take unintended actions or operate outside established guardrails.
  • Persistent: Running continuously with long-lived access allows agents to deliver ongoing value and handle tasks around the clock. However, persistent presence increases the risk of over-permissioning, lifecycle drift, and undetected misuse.
  • Opaque: Their ability to operate as “black boxes” can simplify complex workflows and abstract away technical details, but it also makes them difficult to audit, explain, or troubleshoot—especially when built on large language models (LLMs).
  • Prolific: The ease with which agents can be created, even by non-technical users, accelerates innovation and experimentation—while simultaneously increasing the risk of shadow agents, sprawl, and inconsistent governance.
  • Interconnected: By calling other agents and services, they can orchestrate complex, multi-step processes—but this interconnectedness creates complex dependencies and new attack surfaces that are challenging to secure and monitor.

Given this new risk profile, these autonomous agents aren’t a minor extension of existing identity or application governance—they’re a new workload. Treat them accordingly.

What’s more—as they scale, they will soon outnumber human users in the enterprise.

Common failure points in autonomous agents

Despite their impressive capabilities, AI agents can still make mistakes. These errors tend to arise during long-running tasks, where “task drift” can occur, or when the agent encounters malicious input such as a cross-prompt injection attack (XPIA). In these cases, the agent may veer off course or even be manipulated into acting against its intended purpose.

That’s why it’s useful to approach agent security the same way you would approach working with a junior employee: by setting clear guardrails, monitoring behavior, and establishing strong protections. Microsoft is addressing XPIA with prompt shields and evolving best practices. Robust authentication can help counter deepfakes, and improved prompt engineering through orchestration or employee training can reduce hallucinations and strengthen overall response accuracy.

Understanding Model Context Protocol for agent governance

One of the most powerful enablers of the growth of autonomous agents is the Model Context Protocol (MCP). MCP is an open standard that allows AI agents to securely and effectively connect with external data sources, tools, and services—providing flexibility to fetch real-time data, call external tools, and operate autonomously. This open standard essentially acts as a “USB-C port for AI.”

But with that flexibility comes risk. Poorly governed MCP implementations can expose agents to data exfiltration, prompt injection, or access to unvetted services. Because MCP servers are easy to create, they can proliferate quickly, often without proper access controls or oversight. This is where role-based access control (RBAC) becomes critical: MCP’s ability to connect agents to a wide range of resources means that robust, granular access controls are essential to prevent misuse. However, implementing effective RBAC for MCP-enabled agents is complex: it requires dynamic, context-aware permissions that can adapt to rapidly changing agent behaviors and access needs. Without this rigor, organizations risk over-permissioning agents, losing visibility into who can access what, and ultimately exposing sensitive data or critical services to unauthorized use.
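To make that concrete, here is a minimal sketch of what dynamic, context-aware authorization for MCP tool calls could look like. Everything in it (the AgentContext type, the POLICY table, and the authorize_tool_call function) is a hypothetical illustration, not part of the MCP specification or any Microsoft SDK:

```python
# Hypothetical sketch: dynamic, context-aware authorization for MCP tool calls.
# Not part of the MCP specification or any Microsoft SDK.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AgentContext:
    agent_id: str
    roles: set[str]            # roles granted to this agent identity
    on_behalf_of: str | None   # delegating user, if any
    token_expiry: datetime     # boundary of the short-lived credential

# Map each MCP tool to the roles allowed to call it, plus extra conditions.
POLICY = {
    "crm.read_contacts": {"roles": {"sales-agent"}, "requires_user": False},
    "crm.export_report": {"roles": {"sales-admin"}, "requires_user": True},
}

def authorize_tool_call(ctx: AgentContext, tool: str) -> bool:
    rule = POLICY.get(tool)
    if rule is None:
        return False  # default-deny: unknown tools can never be invoked
    if datetime.now(timezone.utc) >= ctx.token_expiry:
        return False  # expired credential: force re-issuance, enabling revocation
    if rule["requires_user"] and ctx.on_behalf_of is None:
        return False  # sensitive tools run only with a delegating user present
    return bool(ctx.roles & rule["roles"])  # least privilege via role scoping
```

The default-deny stance and the expiry check are the two properties that matter most here: an agent that has no rule for a tool simply cannot call it, and no grant outlives its short-lived credential.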

In short, agents don’t sleep, they don’t forget, and they don’t always follow the rules. That’s why governance and well-designed authorization can’t be optional, for agents and MCP servers alike.

Securing and governing agents starts with visibility

The first challenge customers raise is simple: “Do I even know which agents I have?” Before any meaningful governance or security can take place, organizations must achieve observability. Without a clear inventory of agents—across SaaS, PaaS, IaaS, and local environments—governance is guesswork. Visibility provides the foundation for everything that follows: it helps organizations to audit agent activity, understand ownership, and assess access patterns. Only with this single, unified view can organizations move from reactive oversight to proactive control.

Once visibility is in place, securing and governing agents requires a layered approach built on seven core capabilities:

Identity management

Agents must have unique, traceable identities. These identities might be derived from, yet distinguishable from, user identities, or they might be independent identities like those used by services. Either way, they need to be governed throughout their lifecycle (from creation to deactivation) with clear sponsorship and accountability to prevent sprawl.

Access control

Agents should operate with the minimum permissions required. Whether acting autonomously or on behalf of a user, access must be scoped, time-bound, and revocable in real time.
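As a rough sketch of those three properties (scoped, time-bound, revocable), consider the following in-memory token store. The names and the store itself are illustrative; in practice these guarantees come from your identity provider rather than hand-rolled code:

```python
# Illustrative only: an in-memory store showing scoped, time-bound,
# revocable access for agents. A real deployment delegates this to an IdP.
import secrets
from datetime import datetime, timedelta, timezone

TOKENS: dict[str, dict] = {}   # token -> {agent, scopes, expires}
REVOKED: set[str] = set()

def issue_token(agent_id: str, scopes: list[str], ttl_minutes: int = 15) -> str:
    token = secrets.token_urlsafe(32)
    TOKENS[token] = {
        "agent": agent_id,
        "scopes": set(scopes),  # scoped: only the permissions requested
        "expires": datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes),
    }
    return token

def revoke(token: str) -> None:
    REVOKED.add(token)          # revocable: takes effect on the next check

def is_allowed(token: str, scope: str) -> bool:
    entry = TOKENS.get(token)
    if entry is None or token in REVOKED:
        return False
    if datetime.now(timezone.utc) >= entry["expires"]:
        return False            # time-bound: expired tokens stop working
    return scope in entry["scopes"]
```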

Data security

Sensitive data must be protected at every step. This requires implementing inline data loss prevention (DLP), sensitivity-aware controls, and adaptive policies to prevent oversharing. These safeguards are especially critical in low-code environments where agents are created quickly and often without sufficient oversight.
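As a toy illustration of an inline DLP gate, the snippet below redacts obviously sensitive spans before an agent’s output leaves a trust boundary. The regex patterns are deliberately simplistic placeholders; real DLP engines such as Microsoft Purview use much richer classifiers:

```python
# Toy inline DLP gate: redact sensitive spans in agent output before it is
# shared externally. Patterns are simplistic placeholders, not production DLP.
import re

SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def dlp_gate(text: str) -> str:
    """Return text with sensitive matches replaced by labeled redactions."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text

print(dlp_gate("Card on file: 4111 1111 1111 1111"))
# -> Card on file: [REDACTED:credit_card]
```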

Posture management

Security posture must be continuously assessed. Organizations need to continually identify misconfigurations, excessive permissions, and vulnerable components across the agent stack to maintain a strong baseline.

Threat protection

Agents introduce new attack surfaces; therefore, prompt injection, misuse, and anomalous behavior must be detected early. To mitigate this increased surface area for attacks, signals from across the compute, data, and AI layers should feed into existing extended detection and response (XDR) platforms for proactive defense.

Network security

Just like users and devices, agents need secure network access. That includes controlling which agents can access which resources, inspecting traffic, and blocking access to malicious or non-compliant destinations.
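The sketch below shows the basic shape of such an egress decision: an allowlist of vetted destinations, a blocklist of known-bad ones, and default-deny for everything else. The host lists and function are hypothetical stand-ins for a real secure web gateway or Security Service Edge policy:

```python
# Hypothetical egress filter for agent traffic; a stand-in for a real
# secure web gateway or Security Service Edge policy.
from urllib.parse import urlparse

ALLOWED_HOSTS = {"api.internal.example.com", "graph.microsoft.com"}
BLOCKED_HOSTS = {"paste-site.example.net"}  # known-bad destinations

def egress_decision(url: str) -> str:
    host = urlparse(url).hostname or ""
    if host in BLOCKED_HOSTS:
        return "block"            # malicious or non-compliant destination
    if host in ALLOWED_HOSTS:
        return "allow"            # vetted resource this agent may reach
    return "block-and-log"        # default-deny unvetted destinations for review
```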

Compliance

Agent activities must align with internal policies and external regulations. Organizations should audit interactions, enforce retention policies, and demonstrate compliance across the agent lifecycle.

These are not theoretical requirements; they are essential for building trust in agentic systems at scale.

Building the foundation: Agent identity

To address the need for augmented governance, Microsoft is introducing Entra Agent ID—a new identity designed specifically for AI agents. You can think of these agent identities as analogous to managed identities (MSIs) with no default permissions. They can act on behalf of users, other agents, or independently, with just-in-time access that’s automatically revoked when no longer needed. They’re secure by default, auditable, and easy for developers to use. As organizations move beyond managing just users and applications, the need to extend these foundational identity principles to AI agents becomes increasingly important.

An emerging strategy to manage AI agents at scale and improve risk management is the concept of an agent registry. While the directory of Microsoft Entra ID is an authoritative source for both human users and application artifacts, there is a need to provide a similar authoritative store for all agent-specific metadata. This is where the concept of an agent registry comes in—serving as a natural extension to the directory, tailored to capture the unique attributes, relationships, and operational context of AI agents as they proliferate across the enterprise. As these registries evolve, they are likely to integrate with core components like MCP servers, reflecting the expanding role of agents within the ecosystem. Together, these tools will allow organizations to achieve observability, manage risk, and scale governance.
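To ground the idea, here is a speculative sketch of what a minimal agent registry might store. Microsoft has not published a schema for this, so every field below is an assumption drawn from the attributes discussed in this post:

```python
# Speculative sketch of an agent registry; the schema is an assumption,
# not a published Microsoft API.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentRecord:
    agent_id: str                  # unique, traceable identity
    owner: str                     # accountable human sponsor
    platform: str                  # e.g. "Copilot Studio", "Azure AI Foundry"
    mcp_servers: list[str] = field(default_factory=list)  # connected MCP endpoints
    created: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    status: str = "active"         # lifecycle: active -> suspended -> retired

class AgentRegistry:
    def __init__(self) -> None:
        self._records: dict[str, AgentRecord] = {}

    def register(self, record: AgentRecord) -> None:
        self._records[record.agent_id] = record

    def inventory(self, platform: str | None = None) -> list[AgentRecord]:
        """Answer the visibility question: which agents exist, and where?"""
        return [r for r in self._records.values()
                if platform is None or r.platform == platform]
```

A registry like this is what turns the visibility question ("Do I even know which agents I have?") into a query rather than a manual audit.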

Extending Microsoft Security to meet the moment

To meet organizational needs that come with autonomous agents, Microsoft is building on a strong foundation and extending our existing security products to meet the unique demands of the agentic era, grounded in a Zero Trust approach that protects both people and AI agents.

Microsoft’s security stack—including Entra, Purview, Defender, and more—adapts identity management, access control, data protection, secure network access, threat detection, posture management, and compliance to support AI agents across both first-party and third-party ecosystems. We are innovating from this baseline to deliver agent-specific capabilities:

  • Microsoft Entra extends identity management and access control to AI agents, ensuring each agent has a unique, governed identity and operates with just-in-time, least-privilege access.
  • Microsoft Purview brings robust data security and compliance controls to AI agents, helping organizations prevent data oversharing, manage regulatory requirements, and gain visibility into AI-specific risks.
  • Microsoft Defender integrates AI security posture management and runtime threat protection, empowering developers and security teams to proactively mitigate risks and respond to emerging threats in agentic environments.

This isn’t a separate security silo for AI. It’s agent governance becoming a natural extension of the security investments customers already trust—ones that are integrated, consistent, and ready to scale with them.

A call to action

The agentic era is here, and the opportunities are real—but so are the risks.

To move quickly without compromising trust, we need to integrate governance into the core of agent design. This begins with visibility, scales with identity, access, and data controls, and matures with posture, threat, and compliance capabilities that treat agents as first-class workloads.

Let’s build a future where agents are not just powerful—but trustworthy by design.

Learn more with Microsoft Security

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


Defending against evolving identity attack techniques

In today’s evolving cyber threat landscape, threat actors are committed to advancing the sophistication of their attacks. The increasing adoption of essential security features like multifactor authentication (MFA), passwordless solutions, and robust email protections has changed many aspects of the phishing landscape, and threat actors are more motivated than ever to acquire credentials—particularly for enterprise cloud environments. Despite these evolutions, social engineering—the technique of convincing or deceiving users into downloading malware, divulging credentials, or taking other risky actions—remains a key aspect of phishing attacks.

Implementing phishing-resistant and passwordless solutions, such as passkeys, can help organizations improve their security stance against advanced phishing attacks. Microsoft is dedicated to enhancing protections against phishing attacks and making it more challenging for threat actors to exploit human vulnerabilities. In this blog, I’ll cover techniques that Microsoft has observed threat actors use for phishing and social engineering attacks that aim to compromise cloud identities. I’ll also share what organizations can do to defend themselves against this constant threat.

While the examples in this blog do not represent the full range of phishing and social engineering attacks being leveraged against enterprises today, they demonstrate several efficient techniques of threat actors tracked by Microsoft Threat Intelligence. Understanding these techniques and hardening your organization with the guidance included here will contribute significantly to your defense-in-depth approach.

Pre-compromise techniques for stealing identities

Modern phishing techniques attempt to defeat authentication flows

Adversary-in-the-middle (AiTM)

Today’s authentication methods have changed the phishing landscape. The most prevalent example is the increase in adversary-in-the-middle (AiTM) credential phishing as the adoption of MFA grows. The phish kits available from phishing-as-a-service (PhaaS) platforms have further increased the impact of AiTM threats; the Evilginx phish kit, for example, has been used by multiple threat actors in the past year, from the prolific phishing operator Storm-0485 to the Russian espionage actor Star Blizzard.

Evilginx is an open-source framework that provides AiTM capabilities by deploying a proxy server between a target user and the website that the user wishes to visit (which the threat actor impersonates). Microsoft tracked Storm-0485 directing targets to Evilginx infrastructure using lures with themes such as payment remittance, shared documents, and fake LinkedIn account verifications, all designed to prompt a quick response from the recipient. Storm-0485 also consistently uses evasion tactics, notably passing initial links through obfuscated Google Accelerated Mobile Pages (AMP) URLs to make links harder to identify as malicious.

Figure 1. Example of Storm-0485’s fake LinkedIn verify account lure, which states “Account Action Required” and presents a Verify Account button

To protect against AiTM attacks, consider complementing MFA with risk-based Conditional Access policies, available in Microsoft Entra ID Protection, where sign-in requests are evaluated using additional identity-driven signals like IP address location information or device status, among others. These policies use real-time and offline detections to assess the risk level of sign-in attempts and user activities. This dynamic evaluation helps mitigate risks associated with token replay and session hijacking attempts common in AiTM phishing campaigns.
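For teams that manage policy as code, a risk-based Conditional Access policy can be created through Microsoft Graph. The sketch below follows the documented conditionalAccessPolicy resource shape but is illustrative: token acquisition is omitted, the display name is a placeholder, and starting in report-only mode is a deliberately cautious choice:

```python
# Hedged sketch: create a risk-based Conditional Access policy via Microsoft
# Graph. Requires the Policy.ReadWrite.ConditionalAccess permission and
# Microsoft Entra ID P2 for sign-in risk conditions; token acquisition omitted.
import requests

GRAPH = "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies"

policy = {
    "displayName": "Require MFA for medium-or-high sign-in risk (example)",
    "state": "enabledForReportingButNotEnforced",  # report-only to start
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["All"]},
        "signInRiskLevels": ["medium", "high"],
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}

def create_policy(access_token: str) -> dict:
    resp = requests.post(
        GRAPH,
        headers={"Authorization": f"Bearer {access_token}"},
        json=policy,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```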

Additionally, consider implementing Zero Trust network security solutions, such as Global Secure Access, which provides a single pane of glass for secure access management of networks, identities, and endpoints.

Device code phishing

Device code phishing is a relatively new technique that has been incorporated by multiple threat actors into their attacks. In device code phishing, threat actors like Storm-2372 exploit the device code authentication flow to capture authentication tokens, which they then use to access target accounts. Storm-1249, a China-based espionage actor, typically uses generic phishing lures—with topics like taxes, civil service, and even book pre-orders—to target high-level officials at organizations of interest. Microsoft has also observed device code phishing being used for post-compromise activity, which is discussed in the sections below.

At Microsoft, we strongly encourage organizations to block device code flow where possible; where it is still needed, restrict it through the device code flow controls in your Microsoft Entra ID Conditional Access policies.
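Building on the Graph sketch above, the policy body below illustrates blocking device code flow using the Conditional Access authentication flows condition. As with any blocking policy, scoping and exclusions should fit your environment, so treat this as a starting template rather than a drop-in configuration:

```python
# Illustrative Conditional Access policy body that blocks device code flow.
# Post this with the same create_policy() helper shown earlier.
block_device_code = {
    "displayName": "Block device code flow (example)",
    "state": "enabledForReportingButNotEnforced",  # validate before enforcing
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["All"]},
        "authenticationFlows": {"transferMethods": "deviceCodeFlow"},
    },
    "grantControls": {"operator": "OR", "builtInControls": ["block"]},
}
```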

OAuth consent phishing

Another modern phishing technique is OAuth consent phishing, where threat actors employ the Open Authorization (OAuth) protocol and send emails with a malicious consent link for a third-party application. Once the target clicks the link and authorizes the application, the threat actor gains access tokens with the requested scopes and refresh tokens for persistent access to the compromised account. In one OAuth consent phishing campaign recently identified by Microsoft, even if a user declines the requested app permissions (by clicking Cancel on the prompt), the user is still sent to the app’s reply URL, and from there redirected to an AiTM domain for a second phishing attempt.

Figure 2. OAuth app prompt requesting account permissions for an unverified “Share-File Point Document” app

You can prevent employees from providing consent to specific apps or categories of apps that are not approved by your organization by configuring app consent policies to restrict user consent operations. For example, configure policies to allow user consent only to apps requesting low-risk permissions with verified publishers, or apps registered within your tenant.
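One way to express that restriction as configuration is to assign the built-in “microsoft-user-default-low” permission grant policy, which limits user consent to low-impact permissions from verified publishers. The sketch below patches the tenant’s authorization policy through Microsoft Graph; token acquisition is omitted, and you should verify the policy ID and required permissions against current documentation before use:

```python
# Hedged sketch: restrict user consent to low-risk permissions from verified
# publishers by assigning the built-in permission grant policy. Verify the
# policy ID and required Graph permissions for your tenant before running.
import requests

URL = "https://graph.microsoft.com/v1.0/policies/authorizationPolicy"

body = {
    "defaultUserRolePermissions": {
        "permissionGrantPoliciesAssigned": [
            "ManagePermissionGrantsForSelf.microsoft-user-default-low"
        ]
    }
}

def restrict_user_consent(access_token: str) -> None:
    resp = requests.patch(
        URL,
        headers={"Authorization": f"Bearer {access_token}"},
        json=body,
        timeout=30,
    )
    resp.raise_for_status()  # expect 204 No Content on success
```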

Device join phishing

Finally, it’s worth highlighting recent device join phishing operations, where threat actors use a phishing link to trick targets into authorizing the domain-join of an actor-controlled device. Since April 2025, Microsoft has observed suspected Russian-linked threat actors using third-party application messages or emails referencing upcoming meeting invitations to deliver a malicious link containing a valid authorization code. When clicked, the link returns a token for the Device Registration Service, allowing registration of the threat actor’s device to the tenant. You can harden against this type of phishing attack by requiring authentication strength for device registration in your environment.

Lures remain an effective phishing weapon

While both end users and automated security measures have become more capable at identifying malicious phishing attachments and links, motivated threat actors continue to rely on exploiting human behavior with convincing lures. As these attacks hinge on deceiving users, user training and awareness of commonly identified social engineering techniques are key to defending against them.

Impersonation lures

One of the most effective ways Microsoft has observed threat actors deliver lures is by impersonating people familiar to the target or using malicious infrastructure spoofing legitimate enterprise resources. In the last year, Star Blizzard has shifted from primarily using weaponized document attachments in emails to spear phishing with a malicious link leading to an AiTM page, targeting the government, non-governmental organization (NGO), and academic sectors. The threat actor’s highly personalized emails impersonate individuals from whom the target would reasonably expect to receive emails, including known political and diplomatic figures, making the target more likely to be deceived by the phishing attempt.

Figure 3. Star Blizzard file share spear-phishing email, in which the embedded link behind the “Open” button was changed from a legitimate URL to an actor-controlled one

QR codes

We have seen threat actors regularly iterating on the types of lure links incorporated into their attacks to make social engineering more effective. As QR codes have become a ubiquitous feature in communications, threat actors have adopted their use as well. For example, over the past two years, Microsoft has seen multiple actors incorporate QR codes, encoded with links to AiTM phishing pages, into opportunistic tax-themed phishing campaigns.

The threat actor Star Blizzard has even leveraged nonfunctional QR codes as a part of a spear-phishing campaign offering target users an opportunity to join a WhatsApp group: the initial spear-phishing email contained a broken QR code to encourage the targeted users to contact the threat actor. Star Blizzard’s follow-on email included a URL that redirected to a webpage with a legitimate QR code, used by WhatsApp for linking a device to a user’s account, giving the actor access to the user’s WhatsApp account.

Use of AI

Threat actors are increasingly leveraging AI to enhance the quality and volume of phishing lures. As AI tools become more accessible, these actors are using them to craft more convincing and sophisticated lures. In a collaboration with OpenAI, Microsoft Threat Intelligence has seen threat actors such as Emerald Sleet and Crimson Sandstorm interacting with large language models (LLMs) to support social engineering operations. This includes activities such as drafting phishing emails and generating content likely intended for spear-phishing campaigns.

We have also seen suspected use of generative AI to craft messages in a large-scale credential phishing campaign against the hospitality industry, based on the variations of language used across identified samples. The initial email contains a request for information designed to elicit a response from the target and is then followed by a more generic phishing email containing a lure link to an AiTM phishing site.

Figure 4. One of multiple suspected AI-generated phishing emails in a widespread campaign, this one claiming to be hiring various services for a wedding

AI helps eliminate the common grammar mistakes and awkward phrasing that once made phishing attempts easier to spot. As a result, today’s phishing lures are more polished and harder for users to detect, increasing the likelihood of successful compromise. This evolution underscores the importance of securing identities in addition to user awareness training.

Phishing risks continue to expand beyond email

Enterprise communication methods have diversified to support distributed workforces and business operations, so phishing has expanded well beyond email messages. Microsoft has seen multiple threat actors abusing enterprise communication applications to deliver phishing messages, and we’ve also observed continued interest by threat actors in leveraging non-enterprise applications and social media sites to reach targets.

Teams phishing

Microsoft Threat Intelligence has been closely tracking and responding to the abuse of the Microsoft Teams platform in phishing attacks and has taken action against confirmed malicious tenants by blocking their ability to send messages. The cybercrime access broker Storm-1674, for example, sets up fraudulent tenants to create Teams meetings and send chat messages to potential victims through the meeting’s chat functionality; more recently, since November 2024, the threat actor has started compromising tenants and directly calling users over Teams to phish for credentials as well. Businesses can follow our security best practices for Microsoft Teams to further defend against attacks from external tenants.

Leveraging social media

Outside of business-managed applications, employees’ activity on social media sites and third-party communication platforms has widened the digital footprint for phishing attacks. For instance, while the Iranian threat actor Mint Sandstorm primarily uses spear-phishing emails, they have also sent phishing links to targets on social media sites, including Facebook and LinkedIn, to target high-profile individuals in government and politics. Mint Sandstorm, like many threat actors, also customizes and enhances their phishing messages by gathering publicly available information, such as personal email addresses and contacts, of their targets on social media platforms. Global Secure Access (GSA) is one solution that can reduce this type of phishing activity and manage access to social media sites on company-owned devices.

Post-compromise identity attacks

In addition to using phishing techniques for initial access, in some cases threat actors leverage the identity acquired from their first-stage phishing attack to launch subsequent phishing attacks. These follow-on phishing activities enable threat actors to move laterally within an organization, maintain persistence across multiple identities, and potentially acquire access to a more privileged account or to a third-party organization.

You can harden your environment against internal phishing activity by configuring the Microsoft Defender for Office 365 Safe Links policy to apply to internal recipients as well as by educating users to be wary of unsolicited documents and to report suspected phishing messages.

AiTM phishing crafted using legitimate company resources

Storm-0539, a threat actor that persistently targets the retail industry for gift card fraud, uses their initial access to a compromised identity to acquire legitimate emails—such as help desk tickets—that serve as templates for phishing emails. The crafted emails contain links directing users to AiTM phishing pages that mimic the federated identity service provider of the compromised organization. Because the emails resemble the organization’s legitimate messages, lead to convincing AiTM landing pages, and are sent from an internal account, they can be highly convincing. In this way, Storm-0539 moves laterally, seeking an identity with access to key cloud resources.

Intra-organization device code phishing

In addition to their use of device code phishing for initial access, Storm-2372 also leverages this technique in their lateral movement operations. The threat actor uses compromised accounts to send internal emails with subjects such as “Document to review” containing a device code authentication phishing payload. Because of the way device code authentication works, the payloads expire after 15 minutes, so Microsoft has seen multiple waves of post-compromise phishing attacks as the threat actor searches for additional credentials.

Figure 5. Storm-2372 lateral movement attempt containing a device code phishing payload

Defending against credential phishing and social engineering

Defending against phishing attacks begins at the primary gateways: email and other communication platforms. Review our recommended settings for Exchange Online Protection and Microsoft Defender for Office 365, or the equivalent for your email security solution, to ensure your organization has established essential defenses and knows how to monitor and respond to threat activity.

A holistic security posture for phishing must also account for the human aspect of social engineering. Investing in user awareness training and phishing simulations is critical for arming employees with the needed knowledge to defend against tried-and-true social engineering methods. Training can also help when threat actors inevitably refine and improve their techniques. Attack simulation training in Microsoft Defender for Office 365, which also includes simulating phishing messages in Microsoft Teams, is one approach to running realistic attack scenarios in your organization.

Hardening credentials and cloud identities is also necessary to defend against phishing attacks. By implementing the principles of least privilege and Zero Trust, you can significantly slow down determined threat actors who may have been able to gain initial access and buy time for defenders to respond. To get started, follow our steps to configure Microsoft Entra with increased security.

As part of hardening cloud identities, authentication using passwordless solutions like passkeys is essential, and implementing MFA remains a core pillar in identity security. Use the Microsoft Authenticator app for passkeys and MFA, and complement MFA with Conditional Access policies, where sign-in requests are evaluated using additional identity-driven signals. Conditional Access policies can also be scoped to strengthen privileged accounts with phishing-resistant MFA. Your passkey and MFA policy can be further secured by only allowing MFA and passkey registrations from trusted locations and devices.

Finally, a Security Service Edge solution like Global Secure Access (GSA) provides identity-focused secure network access. GSA can help to secure access to any app or resource using network, identity, and endpoint access controls.

Among Microsoft Incident Response cases over the past year where we identified the initial access vector, almost a quarter incorporated phishing or social engineering. To achieve phishing resistance and limit the opportunity to exploit human behavior, begin planning for passkey rollouts in your organization today, and, at a minimum, prioritize phishing-resistant MFA for privileged accounts as you evaluate the effect of this security measure on your wider organization. In the meantime, use the other defense-in-depth approaches I’ve recommended in this blog to defend against phishing and social engineering attacks.

Stay vigilant and prioritize your security at every step.

Recommendations

Several recommendations were made throughout this blog to address some of the specific techniques being used by threat actors tracked by Microsoft, along with essential practices for securing identities. Here is a consolidated list for your security team to evaluate.

At Microsoft, we are accelerating security with our work on the Secure by Default framework. Specific Microsoft-managed policies are enabled for every new tenant and raise your security posture with security defaults that provide a baseline of protection for Entra ID and resources like Office 365.

Learn more

  • For the latest security research from the Microsoft Threat Intelligence community, check out the Microsoft Threat Intelligence Blog.
  • To get notified about new publications and to join discussions on social media, follow us on LinkedIn, X (formerly Twitter), and Bluesky.
  • To hear stories and insights from the Microsoft Threat Intelligence community about the ever-evolving threat landscape, listen to the Microsoft Threat Intelligence podcast.
