Azure Status

Multi-service impact in Switzerland North

26 September 2025 at 20:03

Impact Statement: Starting at 23:54 UTC on 26 September 2025, customers in Switzerland North may experience service unavailability or degraded performance for resources hosted in the region. Virtual Machines may have shut down to preserve data integrity.

Current Status: We were alerted to this issue by telemetry showing a significant drop in traffic. Investigation found that a recent deployment introduced a malformed prefix in one of the certificates used for connection authorization. We have pinpointed the faulty deployment and are rolling it back to restore normal traffic flow and service availability.

The majority of the impacted services have fully recovered, and the remaining subset is nearing completion. We continue to monitor traffic and service stability to ensure full recovery.
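
Because the fault involved a malformed certificate prefix used for connection authorization, it may be useful to inspect the certificate a regional endpoint is actually presenting. The following is a minimal sketch using only the Python standard library; the hostname is a hypothetical placeholder for one of your own endpoints in the region.

    # Minimal sketch: inspect the certificate presented by a regional endpoint.
    # The hostname below is a hypothetical placeholder; substitute your own resource's endpoint.
    import socket
    import ssl

    HOST = "myresource.switzerlandnorth.cloudapp.azure.com"  # hypothetical endpoint
    PORT = 443

    context = ssl.create_default_context()
    with socket.create_connection((HOST, PORT), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            cert = tls.getpeercert()
            # Print the subject, issuer, and expiry so the certificate in use can be checked.
            print("subject:", cert.get("subject"))
            print("issuer:", cert.get("issuer"))
            print("notAfter:", cert.get("notAfter"))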

Increased network latency on traffic routes through the Middle East

9 September 2025 at 20:00
Starting at 05:45 UTC on 06 September 2025, network traffic traversing the Middle East may experience increased latency due to undersea fiber cuts in the Red Sea. Network traffic has not been interrupted, as Microsoft has rerouted it through alternate network paths; however, we expect higher latency on some traffic that previously traversed the Middle East until the fiber cuts are fully addressed. Network traffic that does not traverse the Middle East is not impacted. Latencies through the Middle East will not fully return to typical levels in the near term; this is the expected state until repairs can be completed in the Red Sea over the coming weeks. Our engineers continue to work in parallel on other methods to optimize latency. With several days of stable operations now behind us, we will remove this awareness banner from the Azure Status page on 10 September 2025 and continue further communications via Azure Service Health.
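
For customers who want to quantify the effect on their own traffic, the following is a minimal sketch that samples TCP connect latency to an endpoint whose path may traverse the affected routes; the hostname is a hypothetical placeholder, and the measurement reflects only connection setup time, not end-to-end application latency.

    # Minimal sketch: sample TCP connect latency to a placeholder endpoint whose
    # traffic may traverse the affected routes. Replace HOST with your own endpoint.
    import socket
    import statistics
    import time

    HOST = "myaccount.blob.core.windows.net"  # hypothetical storage endpoint
    PORT = 443
    SAMPLES = 10

    latencies_ms = []
    for _ in range(SAMPLES):
        start = time.perf_counter()
        with socket.create_connection((HOST, PORT), timeout=10):
            pass  # connection established; we only time the TCP handshake
        latencies_ms.append((time.perf_counter() - start) * 1000)
        time.sleep(1)

    print(f"median connect latency: {statistics.median(latencies_ms):.1f} ms")
    print(f"max connect latency:    {max(latencies_ms):.1f} ms")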

Windows Update is causing disruptions to virtual machines running Windows 11, versions 22H2 and 23H2

29 May 2025 at 01:47
Starting on May 13, 2025, some customers using Azure Virtual Machines, Azure Virtual Desktop (AVD), or on-premises virtual machines hosted on Citrix or Hyper-V and running Microsoft Windows 11, version 22H2 or 23H2 may have been impacted by a recent Windows update (KB 5058405). This update may cause affected virtual machine instances to boot into the system recovery screen and fail to start following its application. We advise our customers to refrain from installing Windows update (KB 5058405) to prevent potential issues. Additionally, our team has published a document that explains the issue in more detail and provides recovery methods at this link: https://learn.microsoft.com/en-us/windows/release-health/status-windows-11-23h2. The investigation is still ongoing; customers are advised to reach out to Azure Support for any help with recovery options. The next update will be provided in 60 minutes or as events warrant.
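
As a quick way to check exposure on a guest that still boots, the following is a minimal sketch that queries installed hotfixes through PowerShell's standard Get-HotFix cmdlet from Python; it assumes it is run inside the Windows guest with PowerShell available.

    # Minimal sketch: check from inside a Windows guest whether KB5058405 is installed.
    # Uses the standard PowerShell Get-HotFix cmdlet via subprocess.
    import subprocess

    KB = "KB5058405"

    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", f"Get-HotFix -Id {KB}"],
        capture_output=True,
        text=True,
    )

    if KB in result.stdout:
        print(f"{KB} is installed on this machine.")
    else:
        print(f"{KB} does not appear to be installed.")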

Mitigated – Networking reduced availability in East US

18 March 2025 at 09:09

What happened?

Between 13:09 UTC and 18:51 UTC on 18 March 2025, a platform issue resulted in an impact to a subset of Azure customers in the East US region. Customers may have experienced intermittent connectivity loss and increased network latency sending traffic within, as well as in and out of, the East US region.

At 23:21 UTC on 18 March 2025, another impact to network capacity occurred during the recovery of the underlying fiber, during which customers may have experienced the same intermittent connectivity loss and increased latency sending traffic within, to, and from the East US region.


What do we know so far?

We identified multiple fiber cuts affecting a subset of datacenters in the East US region at 13:09 UTC on 18 March 2025. The fiber cuts reduced capacity to those datacenters, increasing utilization of the remaining capacity serving them. At 13:55 UTC on 18 March 2025, we began mitigating the impact by load balancing traffic and restoring some of the impacted capacity; customers should have started to see services recover from this time. The restoration of traffic was fully completed by 18:51 UTC on 18 March 2025 and the issue was mitigated.

At 23:20 UTC on 18 March 2025, another impact was observed during the capacity repair process. This was due to a tooling failure that started adding traffic back into the network before the underlying capacity was ready. The impact was mitigated at 00:30 UTC on 19 March after isolating the capacity affected by the tooling failure.

At 01:52 UTC on 19 March, the underlying fiber cut was fully restored. We continue working to test and restore all capacity to pre-incident levels.

Our telemetry data shows that the customer impact has been fully mitigated. We are continuing to monitor the situation during our capacity recovery process before confirming complete resolution of the incident.

An update will be provided in 3 hours, or as events warrant.

Networking issues impacting Azure Services in East US2

8 January 2025 at 17:00

Summary of Impact: As early as 22:00 UTC on 08 Jan 2025, we noticed a partial impact to some Azure services in East US2 due to a configuration change in a regional networking service. The configuration change caused an inconsistent service state, which could have resulted in intermittent virtual machine connectivity issues or failures in allocating resources or communicating with resources in the region. The impacted services include Azure Databricks, Azure Container Apps, Azure Function Apps, Azure App Service, SQL Managed Instances, Azure Data Factory, Azure Container Instances, Power BI, Virtual Machine Scale Sets, and PostgreSQL flexible servers, among others. Customers using resources with Private Endpoint network security groups (NSGs) communicating with other services would also be impacted.

The impact is limited to a single zone in the East US2 region. No other regions are impacted by this issue.

Current Status:

As early as 22:00 UTC on 08 Jan 2025, service monitoring alerted us to a networking issue in East US2 impacting multiple services. As part of the investigation, we identified that a network configuration issue in one of the zones caused three Storage partitions to become unhealthy. As an immediate remediation measure, traffic was re-routed away from the impacted zone, which brought some relief to non-zonal services and helped with newer allocations. However, services that sent zonal requests to the impacted zone continued to be unhealthy, and some of the impacted services initiated their own disaster recovery options to mitigate the impact.

Additional workstreams to rehydrate the impacted zone by bringing the affected partitions back to a healthy state are ongoing as planned. To avoid any further impact, we validated the fix on one of the partitions first; that validation has completed successfully, and we are now applying the mitigation to the remaining unhealthy partitions. Once the mitigation is applied, we intend to complete additional validations before bringing the partitions online.

We do not have an ETA available at this time, but we expect to be able to share more details on our progress in the next update. We continue to advise customers to execute their disaster recovery plans to expedite recovery of their impacted services. Customers that have already failed out of the region should not fail back until this incident is fully mitigated. The next update will be provided in 1 hour or as events warrant.

For customers impacted through Private Link, a patch has been applied, and we can confirm that dependent services should be available.

We have been able to confirm that customers impacted by Azure Databricks, App Services multi-tenant, Azure Function Apps, Logic Apps, and Azure Synapse should start seeing some recovery.
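
For customers assessing which of their resources sit in the affected region before executing disaster recovery, the following is a minimal sketch, assuming the azure-identity and azure-mgmt-resource packages are installed and the subscription ID placeholder is replaced with your own; it simply enumerates resources deployed in East US2.

    # Minimal sketch: list resources deployed in the affected region so you can
    # decide which ones to fail over. Requires azure-identity and azure-mgmt-resource.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.resource import ResourceManagementClient

    SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder
    REGION = "eastus2"

    client = ResourceManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

    # Enumerate all resources in the subscription and keep those in the impacted region.
    for resource in client.resources.list():
        if resource.location and resource.location.lower() == REGION:
            print(f"{resource.type}\t{resource.name}")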

Active – Storage latency, timeouts, or HTTP 500 errors in South Central US

26 December 2024 at 13:44

Impact Statement: Starting at 18:44 UTC on 26 December 2024, a power incident in South Central US may have resulted in degraded service availability.

Current Status: We have determined that an unexpected power incident in one of the availability zones in South Central US impacted the availability of multiple Azure services. At approximately 20:43 UTC, power was confirmed to be fully restored, and services have started to recover.

Mitigation steps are being applied, and services are on the path to recovery.

  • Service Bus, Log Analytics, Logic Apps, Azure Firewall, Storage accounts, and Application Gateway have fully recovered.
  • Virtual Machines are close to mitigation.
  • Cosmos DB, SQL Database, and App Service are on the path to recovery.

We are actively monitoring recovery progress, and further updates will be provided in the next 2 hours, or as events develop.

If you are impacted and it is possible, we advise you to consider failing your services over to a different Availability Zone or region until service is fully restored.
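
As one way to act on this guidance, the following is a minimal sketch of a health probe that prefers a primary endpoint in the affected region and falls back to a secondary endpoint elsewhere; both hostnames are hypothetical placeholders for your own deployments, and production failover would typically be handled by services such as Azure Traffic Manager or Azure Front Door rather than a script like this.

    # Minimal sketch: probe a primary endpoint in the affected region and fall back
    # to a secondary endpoint in another region if it is unreachable.
    # Both hostnames are hypothetical placeholders.
    import socket

    PRIMARY = "myapp-southcentralus.azurewebsites.net"    # hypothetical
    SECONDARY = "myapp-northcentralus.azurewebsites.net"  # hypothetical

    def reachable(host: str, port: int = 443, timeout: float = 5.0) -> bool:
        """Return True if a TCP connection to host:port succeeds within the timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    active = PRIMARY if reachable(PRIMARY) else SECONDARY
    print(f"routing traffic to: {active}")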
