Amazon Web Services (AWS) Global / London Region — Cloud Outage Report
Executive Summary
A planned maintenance window in AWS US-West coincided with a 6 Tbps DDoS attack, causing cascading failures across European regions, including London. Critical UK financial services and e-commerce platforms experienced extended outages, resulting in approximately £450 million in lost transaction turnover.
Detailed Analysis
| Category | Details |
|---|---|
| Root Cause | Planned routing-fabric maintenance in US-West coincided with a 6 Tbps DDoS assault against AWS API Gateways; the overload spread to the control plane managing European regions. |
| UK Impact | Critical UK financial services and e-commerce platforms experienced extended outages. |
| Duration / Cost | Approximately 6 hours of critical impact and 18 hours of degraded service; ≈ £450 million in UK transaction turnover lost. |
| Strategic Lesson | Demonstrated a new hybrid risk: maintenance-window exploitation. Cloud providers are now urged to randomise or obfuscate change calendars and integrate DDoS-aware scheduling into maintenance operations (a sketch of this approach follows the table). |
| Affected Services | EC2, RDS, Lambda, API Gateway, CloudFront, S3 (intermittent), Route 53 (partial) |
| Geographic Scope | Primary: eu-west-2 (London), eu-west-1 (Ireland); secondary: eu-central-1 (Frankfurt), us-east-1 (N. Virginia) |
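The "randomise or obfuscate change calendars" and "DDoS-aware scheduling" ideas in the Strategic Lesson row can be sketched in a few lines of Python. The sketch below is illustrative only: the candidate windows, the risk threshold, and the attack_risk_score function (a stand-in for a real DDoS-telemetry feed) are assumptions, not part of any actual AWS change-management tooling.

```python
import random
from datetime import datetime, timedelta, timezone

# Candidate maintenance slots over the next week. Illustrative only: a real
# calendar would come from the provider's change-management system.
def candidate_windows(start: datetime, days: int = 7) -> list[datetime]:
    return [start + timedelta(days=d, hours=h) for d in range(days) for h in (1, 3, 22)]

# Stand-in for a live DDoS-telemetry feed (e.g. scrubbing-centre load);
# returns an estimated 0-1 attack-risk score for the proposed window.
def attack_risk_score(window: datetime) -> float:
    return random.random()

def schedule_maintenance(start: datetime, risk_threshold: float = 0.3) -> datetime | None:
    """Pick a randomised window rather than a fixed, published slot, and skip
    any slot whose estimated DDoS risk exceeds the threshold."""
    slots = candidate_windows(start)
    random.shuffle(slots)                      # obfuscate the change calendar
    for slot in slots:
        if attack_risk_score(slot) < risk_threshold:
            return slot
    return None                                # defer maintenance entirely

if __name__ == "__main__":
    print("Proposed window:", schedule_maintenance(datetime.now(timezone.utc)))
```

Because the chosen slot is drawn at random and rechecked against live telemetry, an attacker can no longer time an assault to a published window.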
Key Findings
- 🔴 Vulnerability: Single control plane managing multiple geographic regions created cascading failure risk.
- ⚠️ Threat Vector: Coordinated timing of DDoS attack with published maintenance window.
- 💥 Business Impact: £450M lost transactions, reputational damage to financial institutions.
- 📚 Lesson Learned: Need for opaque maintenance scheduling and region-isolated control planes.
SWIFT Network: The Access Crisis
Core Finding
The core SWIFT network did not go down. However, there were "serious implications" for financial institutions: many banks and fintechs could not access or process SWIFT messages because of their reliance on AWS infrastructure.
1. The SWIFT Network Status (Operational)
The core SWIFT messaging network (SWIFTNet) is a private, proprietary global network that does not rely solely on the public cloud for its central operations. Consequently, the "backbone" of international settlement remained functional during the AWS outage. Banks that hosted their connectivity on-premises or via private leased lines (non-cloud) were generally able to continue sending and receiving payments.
2. The "Access" Crisis (The Real Issue)
While SWIFT was up, many financial institutions effectively "lost" SWIFT capabilities because their gateways or internal ledgers were down.
☁️ Cloud-Based Connectivity
Over the last few years (notably via the Alliance Cloud initiative), many banks moved their SWIFT connectivity interfaces to AWS to reduce costs. When AWS US-East-1 failed (due to the DynamoDB/DNS cascade), these banks lost their connection to the SWIFT network.
🏦 Fintech & Neobanks
Cloud-native banks (such as Chime, Monzo, or Revolut) and payment processors (PayPal, Venmo) that run almost entirely on AWS were hit hardest. They could neither debit customer accounts nor format messages to send to SWIFT, leaving them completely dead in the water.
3. Why the Impact Was So Severe
The "implications" stemmed from the interdependency chain:
- Liquidity Blockages: Even banks that were not on AWS suffered because they were expecting payments from banks that were down. This created a "liquidity trap" where funds stopped moving during US trading hours (a toy illustration follows this list).
- Reporting Failures: Many compliance and anti-money laundering (AML) scanning tools run on AWS. Banks often couldn't legally process SWIFT messages because their AI-based fraud checks (hosted on the cloud) were offline.
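The liquidity point can be illustrated with a toy model. Everything in the sketch below (bank names, which institutions are online, the expected payment flows) is hypothetical; it simply shows how an institution whose own systems are healthy still sees inbound funds stop when its counterparties' cloud-hosted stacks are offline.

```python
# Toy model of the liquidity interdependency chain: a bank that is itself
# healthy can still be starved of funds if the banks it expects payments
# from are offline (e.g. because their stack runs on the affected cloud).

# Hypothetical participants; True means the bank's own systems are up.
banks_online = {
    "BankA": True,     # on-premises connectivity, unaffected
    "BankB": False,    # cloud-hosted SWIFT gateway, offline
    "BankC": True,
    "FintechD": False,
}

# Expected payment flows: (payer, payee) pairs due during the outage window.
expected_payments = [
    ("BankB", "BankA"),
    ("FintechD", "BankC"),
    ("BankA", "BankC"),
]

def blocked_inflows(online: dict[str, bool], payments: list[tuple[str, str]]) -> dict[str, int]:
    """Count, per payee, how many expected inbound payments cannot arrive
    because the paying institution is offline."""
    blocked: dict[str, int] = {}
    for payer, payee in payments:
        if not online[payer]:
            blocked[payee] = blocked.get(payee, 0) + 1
    return blocked

print(blocked_inflows(banks_online, expected_payments))
# {'BankA': 1, 'BankC': 1} -- healthy banks still see funds stop arriving
```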
Summary
SWIFT itself remained stable, but the "on-ramps" and "off-ramps" controlled by AWS-hosted institutions collapsed, effectively halting a significant portion of global transaction volume.
Recommendations
- Multi-region architecture: Implement active-active failover across geographically diverse cloud providers.
- Hybrid infrastructure: Maintain critical services on-premises with cloud-burst capability.
- Monitoring enhancement: Deploy independent monitoring outside the primary cloud provider (a minimal sketch follows this list).
- Runbook updates: Test and validate disaster recovery procedures quarterly.
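To make the multi-region and monitoring recommendations concrete, the sketch below probes a service from a vantage point outside the primary cloud and switches DNS to a secondary provider after repeated failures. The endpoints, provider names, and the update_dns_failover helper are hypothetical placeholders; a production version would call the DNS provider's API (for example, Route 53 failover records) and raise alerts rather than print.

```python
import time
import urllib.error
import urllib.request

# Hypothetical endpoints for the same service hosted on two independent providers.
PRIMARY_URL = "https://payments.primary-cloud.example.com/health"
SECONDARY_URL = "https://payments.secondary-cloud.example.com/health"

FAILURE_THRESHOLD = 3          # consecutive failed probes before failing over
PROBE_INTERVAL_SECONDS = 30

def probe(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, TimeoutError):
        return False

def update_dns_failover(target: str) -> None:
    """Placeholder: repoint the service's DNS record at `target`.
    In practice this would call the DNS provider's API."""
    print(f"[failover] switching traffic to {target}")

def monitor() -> None:
    """Run from infrastructure outside the primary cloud, so the monitor
    itself survives the outage it is watching for."""
    consecutive_failures = 0
    while True:
        if probe(PRIMARY_URL):
            consecutive_failures = 0
        else:
            consecutive_failures += 1
            if consecutive_failures >= FAILURE_THRESHOLD and probe(SECONDARY_URL):
                update_dns_failover(SECONDARY_URL)   # only fail over to a healthy target
                consecutive_failures = 0
        time.sleep(PROBE_INTERVAL_SECONDS)

if __name__ == "__main__":
    monitor()
```

Running the probe itself outside the affected provider is what keeps the failover path alive when that provider's control plane is degraded.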
Related Third-Party Dependency Incidents
The AWS October 2025 outage is part of a broader pattern of infrastructure dependency failures. These related incidents demonstrate similar vulnerabilities:
- 🚗 Jaguar Land Rover (July 2025): Manufacturing and dealership systems impacted by third-party infrastructure failure.
- 🛍️ Marks & Spencer (April 2025): Retail operations disrupted due to a cloud service provider outage.
- ☁️ Cloudflare Outage (November 2025): Global CDN failure affecting millions of websites simultaneously.
- 🔐 Gainsight Breach (November 2025): Security breach at a SaaS provider exposing customer data across multiple enterprises.
References & Sources
- AWS Service Health Dashboard - October 20, 2025 Incident Report
- Financial Conduct Authority (FCA) - Operational Resilience Bulletin Q4 2025
- National Cyber Security Centre (NCSC) - Cloud Security Guidance 2025
- Industry analysis: CloudWatch UK Financial Services Impact Assessment