DigitalOcean Status
Real-time updates on DigitalOcean issues and outages
DigitalOcean status is Operational
Global
Active Incidents
No active incidents
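The overall status shown above can also be checked programmatically. A minimal sketch, assuming status.digitalocean.com is hosted on Atlassian Statuspage and exposes the standard /api/v2/status.json endpoint (an assumption worth verifying before relying on it):

```python
import json
import urllib.request

# Assumed public Statuspage endpoint; standard Atlassian Statuspage layout.
STATUS_URL = "https://status.digitalocean.com/api/v2/status.json"

def fetch_overall_status(url: str = STATUS_URL) -> str:
    """Return the page-wide status description, e.g. 'All Systems Operational'."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        payload = json.load(resp)
    # Statuspage nests the indicator and description under "status".
    return payload["status"]["description"]

if __name__ == "__main__":
    print(fetch_overall_status())
```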
Recently Resolved Incidents
Our Engineering team is currently investigating an issue with Managed Kubernetes clusters. During this time, some users may experience delays when provisioning Kubernetes clusters, specifically those on version 1.31.1-5.
We apologize for the inconvenience and will provide an update as soon as we have more information.
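While an incident like this is open, affected users can watch their own clusters directly rather than polling the status page. A hedged sketch against the DigitalOcean v2 API's Kubernetes cluster listing; the `kubernetes_clusters` and `status.state` field names follow the public API docs but should be confirmed before use:

```python
import json
import os
import urllib.request

API_URL = "https://api.digitalocean.com/v2/kubernetes/clusters"

def clusters_not_running(token: str) -> list[str]:
    """Names of clusters whose state is anything other than 'running'."""
    req = urllib.request.Request(
        API_URL, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        clusters = json.load(resp).get("kubernetes_clusters", [])
    # A cluster stuck in 'provisioning' would show up here.
    return [c["name"] for c in clusters
            if c.get("status", {}).get("state") != "running"]

if __name__ == "__main__":
    # DIGITALOCEAN_TOKEN is a placeholder env var name, not an official one.
    print(clusters_not_running(os.environ["DIGITALOCEAN_TOKEN"]))
```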
DigitalOcean Components
DigitalOcean API
DigitalOcean Billing
DigitalOcean Cloud Control Panel
DigitalOcean Cloud Firewall
DigitalOcean Community
DigitalOcean DNS
DigitalOcean Support Center
DigitalOcean Reserved IP
DigitalOcean WWW
DigitalOcean App Platform
Global
Amsterdam
Bangalore
Frankfurt
London
New York
San Francisco
Singapore
Sydney
Toronto
DigitalOcean Container Registry
Global
AMS3
BLR1
FRA1
NYC3
SFO2
SFO3
SGP1
SYD1
DigitalOcean Droplets
Global
AMS2
AMS3
BLR1
FRA1
LON1
NYC1
NYC2
NYC3
SFO1
SFO2
SFO3
SGP1
SYD1
TOR1
DigitalOcean Event Processing
Global
AMS2
AMS3
BLR1
FRA1
LON1
NYC1
NYC2
NYC3
SFO1
SFO2
SFO3
SGP1
SYD1
TOR1
DigitalOcean Functions
Global
AMS3
BLR1
FRA1
LON1
NYC1
SFO3
SGP1
SYD1
TOR1
DigitalOcean GPU Droplets
Global
NYC2
TOR1
DigitalOcean Managed Databases
Global
AMS3
BLR1
FRA1
LON1
NYC1
NYC2
NYC3
SFO2
SFO3
SGP1
SYD1
TOR1
DigitalOcean Monitoring
Global
AMS2
AMS3
BLR1
FRA1
LON1
NYC1
NYC2
NYC3
SFO1
SFO2
SFO3
SGP1
SYD1
TOR1
DigitalOcean Networking
Global
AMS2
AMS3
BLR1
FRA1
LON1
NYC1
NYC2
NYC3
SFO1
SFO2
SFO3
SGP1
SYD1
TOR1
DigitalOcean Kubernetes
Global
AMS3
BLR1
FRA1
LON1
NYC1
NYC3
SFO2
SFO3
SGP1
SYD1
TOR1
DigitalOcean Load Balancers
Global
AMS2
AMS3
BLR1
FRA1
LON1
NYC1
NYC2
NYC3
SFO1
SFO2
SFO3
SGP1
SYD1
TOR1
DigitalOcean Spaces
Global
AMS3
BLR1
FRA1
NYC3
SFO2
SFO3
SGP1
SYD1
DigitalOcean Spaces CDN
Global
AMS3
FRA1
NYC3
SFO3
SGP1
SYD1
DigitalOcean VPC
Global
AMS2
AMS3
BLR1
FRA1
LON1
NYC1
NYC2
NYC3
SFO1
SFO2
SFO3
SGP1
SYD1
TOR1
DigitalOcean Volumes
Global
AMS2
AMS3
BLR1
FRA1
LON1
NYC1
NYC2
NYC3
SFO1
SFO2
SFO3
SGP1
SYD1
TOR1
DigitalOcean Alternatives
Red Hat Quay.io
US-IAD (Washington)
Our team is investigating an issue affecting connectivity in our US-IAD (Washington) data center. During this time, users may experience intermittent connection timeouts and errors for all services deployed in this data center. Premium compute instances are not impacted by this issue. We will share additional updates as we have more information.
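For intermittent timeouts like these, a client-side retry with exponential backoff is the usual stopgap. A generic sketch, not specific to any provider's SDK:

```python
import time
import urllib.error
import urllib.request

def fetch_with_retries(url: str, attempts: int = 5,
                       base_delay: float = 1.0) -> bytes:
    """GET a URL, retrying transient connection errors with exponential backoff."""
    for attempt in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()
        except (urllib.error.URLError, TimeoutError):
            if attempt == attempts - 1:
                raise  # Out of retries; surface the last error.
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    raise RuntimeError("unreachable")
```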
US-LAX (Los Angeles)
Our team has identified an issue affecting our Block Storage service in our US-ORD (Chicago), US-IAD (Washington, D.C.), and US-LAX (Los Angeles) data centers between the following times:
- Chicago (ORD2): 21:57 UTC to 22:18 UTC
- Washington, D.C. (IAD3): 22:07 UTC to 22:20 UTC
- Los Angeles (LAX3): 21:32 UTC to 21:45 UTC
During this time, users may have experienced slow or failed storage operations for their Block Storage clients. We have corrected the issue, and the service has resumed normal operations. We will continue monitoring to ensure it remains stable. If you continue to experience problems, please open a Support ticket for assistance.
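After an incident window like this one, a quick probe of the mounted volume can confirm that reads and writes are healthy again before trusting it with traffic. A minimal sketch; the mount point and latency threshold are placeholders, not values from the notice:

```python
import os
import time

def volume_is_healthy(mount_point: str, threshold_s: float = 2.0) -> bool:
    """Time a small fsync'd write plus read-back on the volume."""
    probe = os.path.join(mount_point, ".storage_probe")
    start = time.monotonic()
    with open(probe, "wb") as f:
        f.write(b"probe")
        f.flush()
        os.fsync(f.fileno())  # Force the write through to the device.
    with open(probe, "rb") as f:
        data = f.read()
    os.remove(probe)
    elapsed = time.monotonic() - start
    return data == b"probe" and elapsed < threshold_s

if __name__ == "__main__":
    # "/mnt/volume-example" is a hypothetical mount point.
    print(volume_is_healthy("/mnt/volume-example"))
```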
QuadraNet Management Portal
Dear Customers,
For those of our clients who use the WHMCS part of our CRM: as a temporary workaround, please use our legacy NEO portal for any helpdesk requests and service communications via https://neo.quadranet.com. If you don't have NEO login credentials, simply email support@quadranet.com.
We will keep you updated once the WHMCS issue has been rectified.
Apologies for the inconvenience caused,
QuadraNet Team
QuadraNet LAX Downtown
We are currently experiencing a networking issue. Our networking engineers are working on it; once it is resolved, an RFO (Reason for Outage) will be provided.