
AWS shifted traffic away from the affected zone for most services and warned of longer-than-usual provisioning times.
As the evening progressed, the company struggled to bring temperatures down. By 6:47 PM PDT, AWS warned that “Other AWS services that depend on the affected EC2 instances and EBS volumes in this Availability Zone may also experience impairments.” At 8:06 PM PDT, it conceded that “progress is slower than originally anticipated” and recommended that customers needing immediate recovery restore from EBS snapshots or launch resources in unaffected zones.
By 10:11 PM PDT, AWS reported “incremental progress to restore cooling systems” but said users were still “experiencing elevated error rates and latencies for some workflows.”
The May 7 incident is not the first time US-EAST-1 has gone down. The region suffered two outages in October 2025, including a 15-hour disruption spanning October 19 and 20 caused by a race condition in DynamoDB’s automated DNS management system, which affected over 70 AWS services and produced cascading failures across Slack, Atlassian, Snapchat, and other dependent services. AWS’s Ohio region (US-EAST-2) has also experienced power-related outages affecting EC2 instances in past years.
