
Public Cloud Status
Real-time updates of Public Cloud issues and outages
Public Cloud status: Minor Service Outage
BHS3
GRA4
GRA1
EU-CENTRAL-LZ-PRG
EU-NORTH-LZ-OSL
EU-NORTH-LZ-STO
EU-SOUTH-LZ-LIS
Active Incidents
We have determined the origin of the incident affecting our Object Storage offer.
Here are some supplementary details:
Start time: 25/03/2025 09:00 UTC
Impacted service(s): Object Storage offer
Customer impact: Between 25/03/2025 09:00 UTC and 18/04/2025 16:00 UTC, when customers deleted an object uploaded via multipart upload (MPU) with more than 100 parts, only the first 100 parts were deleted. Since 18/04/2025 16:00 UTC, the bug is fixed and all parts are now deleted correctly. Because of this bug, impacted customers will see a discrepancy between the actual bucket size and the size displayed in the Manager, caused by parts that should have been deleted but were not. In the meantime, our technical teams are working to correct the stored size for customers impacted between those two dates.
Root cause: This incident was caused by a software issue.
Ongoing actions: The incident has been identified and our teams are mobilized to restore service as quickly as possible.
We will keep you updated on the progress and resolution. We apologize for any inconvenience caused and appreciate your understanding.
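Customers who want to estimate the discrepancy themselves can sum the real size of a bucket through the S3-compatible API and compare it with the figure shown in the Manager. The sketch below is illustrative, not an official tool: the function accepts any S3-compatible client exposing the list_objects_v2 paginator (for example, a boto3 client pointed at your Object Storage endpoint), and the endpoint, credentials, and bucket name in the usage comment are placeholders.

```python
def actual_bucket_size(s3_client, bucket: str) -> int:
    """Sum the sizes (in bytes) of every object currently stored in
    `bucket`, using any S3-compatible client that provides the
    list_objects_v2 paginator (e.g. a boto3 client)."""
    total = 0
    paginator = s3_client.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket):
        # Pages with no objects omit the "Contents" key entirely.
        for obj in page.get("Contents", []):
            total += obj["Size"]
    return total

# Example usage with boto3 (endpoint and credentials are placeholders):
# import boto3
# s3 = boto3.client(
#     "s3",
#     endpoint_url="https://s3.<region>.example.cloud",
#     aws_access_key_id="<access-key>",
#     aws_secret_access_key="<secret-key>",
# )
# print(actual_bucket_size(s3, "my-bucket"))
```

If the value returned here is larger than the size displayed in the Manager, the difference corresponds to data the Manager has not yet accounted for; if the Manager shows more than the API reports, leftover undeleted parts from the bug above are a likely cause.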
We are currently experiencing an event affecting our Object Storage S3 offer.
Start time: 01/01/2025 00:00 UTC
Our teams are fully committed to investigating this issue and working towards a resolution as soon as possible. As investigations are ongoing, we will share any new findings or updates with you as soon as possible.
We apologize for any inconvenience caused and appreciate your understanding.
An incident is in progress on our Block Storage offer.
Please be aware that availability is currently affected.
Here are the details for this incident:
Start time: 19/12/2024 14:40 UTC
Service impact: Customers cannot access the Block Storage service.
Ongoing actions: Investigating
Our teams are fully mobilized and we will keep you informed of developments and the resolution of the incident.
We apologize for the inconvenience and thank you for your understanding.
An incident is in progress on our Compute-Instance offer.
Please be aware that functionality is currently affected.
Here are the details for this incident:
Start time: 14/12/2024 08:18 UTC
Service impact: Some instances in the GRA1 region are unreachable.
Ongoing actions: Investigating
Our teams are fully mobilized and we will keep you informed of developments and the resolution of the incident.
We apologize for the inconvenience and thank you for your understanding.
An incident is in progress on our public cloud offer.
Please be aware that availability is currently affected.
Here are the details for this incident:
Start time: 14/10/2024 17:10 UTC
Service impact: Some instances are temporarily unreachable.
Ongoing actions: Investigating
Our teams are fully mobilized and we will keep you informed of developments and the resolution of the incident.
We apologize for the inconvenience and thank you for your understanding.
Recently Resolved Incidents
We are currently experiencing an incident impacting our LocalZones Storage offer in the EU, except for EU-SOUTH-LZ-MAD and EU-WEST-LZ-BRU.
Our teams are fully committed to investigating this issue and working towards a resolution as soon as possible. As investigations are ongoing, we will share any new findings or updates with you as soon as possible.
We apologize for any inconvenience caused and appreciate your understanding.
We have determined the origin of the incident affecting our Compute - Instance offer in the BHS3 region.
Here are some supplementary details:
Start time: 11/05/2025 04:08 UTC
Impacted service(s): Some instances in the BHS3 region are unreachable.
Customer impact: Customers are temporarily unable to access and use their instances located in the specified region.
Root cause: This incident was caused by a network equipment issue.
Ongoing actions: The incident has been identified and our teams are mobilized to restore service as quickly as possible.
We will keep you updated on the progress and resolution. We apologize for any inconvenience caused and appreciate your understanding.
We are currently experiencing an event affecting our Compute - Instance offer in GRA1.
Our teams are fully committed to investigating this issue and working towards a resolution as soon as possible. As investigations are ongoing, we will share any new findings or updates with you as soon as possible.
We apologize for any inconvenience caused and appreciate your understanding.
Public Cloud Components
Public Cloud AI & Machine Learning || AI Dashboard
Public Cloud AI & Machine Learning || AI Deploy
BHS
GRA
Public Cloud AI & Machine Learning || AI Endpoint
Public Cloud AI & Machine Learning || AI Notebooks
BHS
GRA
Public Cloud AI & Machine Learning || AI Training
BHS
GRA
Public Cloud Compute - Instance || BHS
BHS1
BHS3
BHS5
Public Cloud Compute - Instance || ERI
UK1
Public Cloud Compute - Instance || EU-WEST-PAR
EU-WEST-PAR-A
GRA2
EU-WEST-PAR-B
GRA4
EU-WEST-PAR-C
Public Cloud Compute - Instance || GRA
GRA1
GRA3
GRA5
GRA7
GRA9
GRA11
Public Cloud Compute - Instance || India
AP-SOUTH-MUM-1
Public Cloud Compute - Instance || LIM
DE1
Public Cloud Compute - Instance || RBX
RBX-A
Public Cloud Compute - Instance || SGP
SGP1
SGP2
Public Cloud Compute - Instance || SBG
SBG5
SBG7
Public Cloud Compute - Instance || SYD
SYD1
AP-SOUTHEAST-SYD-2
Public Cloud Compute - Instance || USA
US-EAST-VA-1
US-WEST-OR-1
Public Cloud Compute - Instance || WAW
WAW1
Public Cloud Containers & Orchestration || Load Balancer
BHS5
DE1
GRA5
GRA7
GRA9
GRA11
SBG5
SGP1
SYD1
WAW1
UK1
US-EAST-VA-1
US-WEST-OR-1
Public Cloud Containers & Orchestration || Managed Kubernetes Service
BHS5
DE1
GRA5
GRA7
GRA9
GRA11
SBG5
SGP1
SYD1
WAW1
UK1
US-EAST-VA-1
US-WEST-OR-1
Public Cloud Containers & Orchestration || Managed Private Registry
GRA
DE
BHS
VA
Public Cloud Containers & Orchestration || Managed Rancher Service
Public Cloud Containers & Orchestration || Workflow Management
BHS1
BHS3
BHS5
DE1
GRA1
GRA3
GRA5
GRA7
GRA9
GRA11
SBG5
SBG7
SGP1
SGP2
SYD1
SYD2
SYD3
UK1
WAW1
WAW2
US-EAST-VA-1
US-WEST-OR-1
GRA
Public Cloud Databases || Cassandra
BHS
DE
GRA
SBG
UK
WAW
Public Cloud Databases || M3 Aggregator
BHS
DE
GRA
SBG
UK
WAW
Public Cloud Databases || M3DB
BHS
DE
GRA
SBG
UK
WAW
Public Cloud Databases || MongoDB
BHS
DE
GRA
SBG
UK
WAW
Public Cloud Databases || MySQL
BHS
DE
GRA
SBG
UK
WAW
Public Cloud Databases || PostgreSQL
BHS
DE
GRA
SBG
SGP
SYD
UK
WAW
Public Cloud Databases || Caching
BHS
DE
GRA
SBG
UK
WAW
Public Cloud Data & Analytics || Data Processing
GRA
Public Cloud Data & Analytics || Dataplatform
Public Cloud Data & Analytics || Grafana
BHS
DE
GRA
SBG
UK
WAW
Public Cloud Data & Analytics || Kafka
BHS
DE
GRA
SBG
UK
WAW
Public Cloud Data & Analytics || Kafka Connect
BHS
DE
GRA
SBG
UK
WAW
Public Cloud Data & Analytics || Kafka MirrorMaker
BHS
DE
GRA
SBG
UK
WAW
Public Cloud Data & Analytics || Logs Data Platform
BHS
GRA
Public Cloud Data & Analytics || OpenSearch
BHS
DE
GRA
SBG
UK
WAW
Public Cloud Identity, Security & Operations || Key Management Service
BHS
RBX
SBG
YYZ
Public Cloud LocalZones || AF-NORTH
AF-NORTH-LZ-RBA
Public Cloud LocalZones || EU-CENTRAL
EU-CENTRAL-LZ-PRG
Public Cloud LocalZones || EU-NORTH
EU-NORTH-LZ-OSL
EU-NORTH-LZ-STO
Public Cloud LocalZones || EU-SOUTH
EU-SOUTH-LZ-LIS
EU-SOUTH-LZ-MAD
Public Cloud Network || FailoverIP
EU-SOUTH-LZ-MIL
Public Cloud LocalZones || EU-WEST
EU-WEST-LZ-AMS
EU-WEST-LZ-BRU
EU-WEST-LZ-DLN
EU-WEST-LZ-LUX
EU-WEST-LZ-MNC
EU-WEST-LZ-MRS
EU-WEST-LZ-VIE
EU-WEST-LZ-ZRH
Public Cloud LocalZones || US-EAST
US-EAST-LZ-ATL
US-EAST-LZ-BOS
US-EAST-LZ-CHI
US-EAST-LZ-DAL
US-EAST-LZ-MIA
US-EAST-LZ-NYC
Public Cloud LocalZones || US-WEST
US-WEST-LZ-DEN
US-WEST-LZ-LAX
US-WEST-LZ-PAO
US-WEST-LZ-SEA
Public Cloud Management || Horizon
Public Cloud Network || Floating IP
BHS1
BHS3
BHS5
DE1
GRA1
GRA3
GRA5
GRA7
GRA9
GRA11
SBG5
SBG7
SGP1
SYD1
SYD3
UK1
WAW1
US-EAST-VA-1
US-WEST-OR-1
EU-WEST-PAR-A
EU-WEST-PAR-B
EU-WEST-PAR-C
AP-SOUTH-MUM-1
Public Cloud Network || Gateway
BHS1
BHS3
BHS5
DE1
GRA1
GRA3
GRA5
GRA7
GRA9
GRA11
SBG5
SBG7
SGP1
SYD1
SYD3
UK1
WAW1
US-EAST-VA-1
US-WEST-OR-1
AP-SOUTH-MUM-1
EU-WEST-PAR-A
EU-WEST-PAR-B
EU-WEST-PAR-C
Public Cloud Network || Load Balancer
BHS1
BHS3
BHS5
DE1
GRA1
GRA3
GRA5
GRA7
GRA9
GRA11
SBG5
SBG7
SGP1
SYD1
SYD3
UK1
WAW1
US-EAST-VA-1
US-WEST-OR-1
EU-WEST-PAR-A
EU-WEST-PAR-B
EU-WEST-PAR-C
AP-SOUTH-MUM-1
Public Cloud Network || Private network (vRack)
BHS1
BHS3
BHS5
DE1
GRA1
GRA3
GRA5
GRA7
GRA9
GRA11
RBX-A
SBG5
SBG7
SGP1
SGP2
SYD1
SYD3
UK1
WAW1
US-EAST-VA-1
US-WEST-OR-1
EU-WEST-PAR-A
EU-WEST-PAR-B
EU-WEST-PAR-C
AP-SOUTH-MUM-1
Public Cloud Network || Public network
BHS1
BHS3
BHS5
DE1
GRA1
GRA3
GRA5
GRA7
GRA9
GRA11
RBX-A
SBG1
SBG3
SBG5
SBG7
SGP1
SGP2
SYD1
SYD3
UK1
WAW1
US-EAST-VA-1
US-WEST-OR-1
EU-WEST-PAR-A
EU-WEST-PAR-B
EU-WEST-PAR-C
AP-SOUTH-MUM-1
Public Cloud Storage || Block storage
BHS1
BHS3
BHS5
DE1
GRA1
GRA3
GRA5
GRA6
GRA7
GRA9
GRA11
SBG5
SBG7
SGP1
SYD1
SYD3
UK1
US-EAST-VA-1
US-WEST-OR-1
WAW1
AP-SOUTH-MUM-1
EU-WEST-PAR-A
EU-WEST-PAR-B
EU-WEST-PAR-C
Public Cloud Storage || Cloud Archive
DE
GRA
UK
WAW
BHS
SYD
SGP
US-EAST-VA
US-WEST-OR
Public Cloud Storage || Cold Archive
RBX-ARCHIVE
Public Cloud Storage || Instance Backup
BHS1
BHS3
BHS5
DE1
GRA1
GRA3
GRA5
GRA7
GRA9
GRA11
SBG5
SBG7
SGP1
SYD1
SYD3
UK1
US-EAST-VA-1
US-WEST-OR-1
WAW1
AP-SOUTH-MUM-1
EU-WEST-PAR-A
EU-WEST-PAR-B
EU-WEST-PAR-C
Public Cloud Storage || Object storage
BHS
GRA
DE
RBX
SBG
SGP
PAR
SYD
UK
US-EAST-VA-1
US-WEST-OR-1
WAW
We are currently experiencing an event affecting our Object Storage S3 offer. Start time: 01/01/2025 00:00 UTC
Our teams are fully committed to investigating this issue and working towards a resolution as quickly as possible. As investigations are ongoing, we will share any new findings or updates with you as soon as they are available.
We apologize for any inconvenience caused and appreciate your understanding.
We have determined the origin of the incident affecting our Object Storage offer.
Here are some supplementary details:
Start time: 25/03/2025 09:00 UTC
Impacted service(s): Object Storage offer
Customer impact: Between 25/03/2025 09:00 UTC and 18/04/2025 16:00 UTC, when customers deleted an object uploaded via multipart upload (MPU) with more than 100 parts, only the first 100 parts were deleted. Since 18/04/2025 16:00 UTC, the bug is fixed and all parts are now correctly deleted. Because of this bug, affected customers will see a discrepancy between the actual bucket size and the size displayed in the Manager, caused by parts that should have been deleted but were not. In the meantime, our technical teams are working on correcting the storage size for customers impacted between those two dates.
Root cause: This incident was caused by a software issue.
Ongoing actions: The incident has been identified and our teams are mobilised to restore service as quickly as possible.
We will keep you updated on the progress and resolution. We apologize for any inconvenience caused and appreciate your understanding.
YYZ
We are currently experiencing an event affecting our Object Storage S3 offer. Start time: 01/01/2025 00:00 UTC
Our teams are fully committed to investigating this issue and working towards a resolution as quickly as possible. As investigations are ongoing, we will share any new findings or updates with you as soon as they are available.
We apologize for any inconvenience caused and appreciate your understanding.
We have determined the origin of the incident affecting our Object Storage offer.
Here are some supplementary details:
Start time: 25/03/2025 09:00 UTC
Impacted service(s): Object Storage offer
Customer impact: Between 25/03/2025 09:00 UTC and 18/04/2025 16:00 UTC, when customers deleted an object uploaded via multipart upload (MPU) with more than 100 parts, only the first 100 parts were deleted. Since 18/04/2025 16:00 UTC, the bug is fixed and all parts are now correctly deleted. Because of this bug, affected customers will see a discrepancy between the actual bucket size and the size displayed in the Manager, caused by parts that should have been deleted but were not. In the meantime, our technical teams are working on correcting the storage size for customers impacted between those two dates.
Root cause: This incident was caused by a software issue.
Ongoing actions: The incident has been identified and our teams are mobilised to restore service as quickly as possible.
We will keep you updated on the progress and resolution. We apologize for any inconvenience caused and appreciate your understanding.
LIM
We are currently experiencing an event affecting our Object Storage S3 offer. Start time: 01/01/2025 00:00 UTC
Our teams are fully committed to investigating this issue and working towards a resolution as quickly as possible. As investigations are ongoing, we will share any new findings or updates with you as soon as they are available.
We apologize for any inconvenience caused and appreciate your understanding.
We have determined the origin of the incident affecting our Object Storage offer.
Here are some supplementary details:
Start time: 25/03/2025 09:00 UTC
Impacted service(s): Object Storage offer
Customer impact: Between 25/03/2025 09:00 UTC and 18/04/2025 16:00 UTC, when customers deleted an object uploaded via multipart upload (MPU) with more than 100 parts, only the first 100 parts were deleted. Since 18/04/2025 16:00 UTC, the bug is fixed and all parts are now correctly deleted. Because of this bug, affected customers will see a discrepancy between the actual bucket size and the size displayed in the Manager, caused by parts that should have been deleted but were not. In the meantime, our technical teams are working on correcting the storage size for customers impacted between those two dates.
Root cause: This incident was caused by a software issue.
Ongoing actions: The incident has been identified and our teams are mobilised to restore service as quickly as possible.
We will keep you updated on the progress and resolution. We apologize for any inconvenience caused and appreciate your understanding.
AP-SOUTH-MUM-1
We have determined the origin of the incident affecting our Object Storage offer.
Here are some supplementary details:
Start time: 25/03/2025 09:00 UTC
Impacted service(s): Object Storage offer
Customer impact: Between 25/03/2025 09:00 UTC and 18/04/2025 16:00 UTC, when customers deleted an object uploaded via multipart upload (MPU) with more than 100 parts, only the first 100 parts were deleted. Since 18/04/2025 16:00 UTC, the bug is fixed and all parts are now correctly deleted. Because of this bug, affected customers will see a discrepancy between the actual bucket size and the size displayed in the Manager, caused by parts that should have been deleted but were not. In the meantime, our technical teams are working on correcting the storage size for customers impacted between those two dates.
Root cause: This incident was caused by a software issue.
Ongoing actions: The incident has been identified and our teams are mobilised to restore service as quickly as possible.
We will keep you updated on the progress and resolution. We apologize for any inconvenience caused and appreciate your understanding.
EU-WEST-PAR-A
We have determined the origin of the incident affecting our Object Storage offer.
Here are some supplementary details:
Start time: 25/03/2025 09:00 UTC
Impacted service(s): Object Storage offer
Customer impact: Between 25/03/2025 09:00 UTC and 18/04/2025 16:00 UTC, when customers deleted an object uploaded via multipart upload (MPU) with more than 100 parts, only the first 100 parts were deleted. Since 18/04/2025 16:00 UTC, the bug is fixed and all parts are now correctly deleted. Because of this bug, affected customers will see a discrepancy between the actual bucket size and the size displayed in the Manager, caused by parts that should have been deleted but were not. In the meantime, our technical teams are working on correcting the storage size for customers impacted between those two dates.
Root cause: This incident was caused by a software issue.
Ongoing actions: The incident has been identified and our teams are mobilised to restore service as quickly as possible.
We will keep you updated on the progress and resolution. We apologize for any inconvenience caused and appreciate your understanding.
EU-WEST-PAR-B
We have determined the origin of the incident affecting our Object Storage offer.
Here are some supplementary details:
Start time: 25/03/2025 09:00 UTC
Impacted service(s): Object Storage offer
Customer impact: Between 25/03/2025 09:00 UTC and 18/04/2025 16:00 UTC, when customers deleted an object uploaded via multipart upload (MPU) with more than 100 parts, only the first 100 parts were deleted. Since 18/04/2025 16:00 UTC, the bug is fixed and all parts are now correctly deleted. Because of this bug, affected customers will see a discrepancy between the actual bucket size and the size displayed in the Manager, caused by parts that should have been deleted but were not. In the meantime, our technical teams are working on correcting the storage size for customers impacted between those two dates.
Root cause: This incident was caused by a software issue.
Ongoing actions: The incident has been identified and our teams are mobilised to restore service as quickly as possible.
We will keep you updated on the progress and resolution. We apologize for any inconvenience caused and appreciate your understanding.
EU-WEST-PAR-C
We have determined the origin of the incident affecting our Object Storage offer.
Here are some supplementary details:
Start time: 25/03/2025 09:00 UTC
Impacted service(s): Object Storage offer
Customer impact: Between 25/03/2025 09:00 UTC and 18/04/2025 16:00 UTC, when customers deleted an object uploaded via multipart upload (MPU) with more than 100 parts, only the first 100 parts were deleted. Since 18/04/2025 16:00 UTC, the bug is fixed and all parts are now correctly deleted. Because of this bug, affected customers will see a discrepancy between the actual bucket size and the size displayed in the Manager, caused by parts that should have been deleted but were not. In the meantime, our technical teams are working on correcting the storage size for customers impacted between those two dates.
Root cause: This incident was caused by a software issue.
Ongoing actions: The incident has been identified and our teams are mobilised to restore service as quickly as possible.
We will keep you updated on the progress and resolution. We apologize for any inconvenience caused and appreciate your understanding.
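Until the stored sizes are corrected, customers who want to verify their own usage can compute the actual bucket size directly, independently of the figure the Manager displays, by summing object sizes from a paginated listing. A sketch assuming any S3-compatible client such as boto3; the endpoint URL and bucket name in the comment are placeholders, not specific to any region listed above:

```python
def bucket_size_bytes(s3_client, bucket):
    """Actual bucket size in bytes, computed by paging through
    list_objects_v2 and summing the size of every object."""
    total = 0
    paginator = s3_client.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            total += obj["Size"]
    return total

# Hypothetical usage (endpoint URL and bucket name are placeholders):
#   import boto3
#   s3 = boto3.client("s3", endpoint_url="https://s3.gra.io.cloud.ovh.net")
#   print(bucket_size_bytes(s3, "my-bucket"))
```

Note that this counts completed objects; parts left behind by the bug described above are multipart-upload remnants, so a gap between this figure and the Manager's figure is the expected symptom.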