All Systems Operational
No further failed SOS operations were observed.
We are back to a nominal state in all zones. All systems are operational.
We are back to a nominal state in all zones. All systems are operational.
Everything is back to normal.
We are back to a nominal state in all zones. All systems are operational.
All systems operational
Mitigations brought back the service to nominal.
Everything back to nominal.
Interaction with API is back to nominal state.
All systems operational.
The issue should be solved.
All systems operational
Mitigations brought back the service to nominal.
All systems operational
Traffic is back to normal
No changes since last update, closing the incident
All systems operational
Network latency issue is now resolved
All systems back to nominal
All systems operational
All systems operational
All systems operational
Service is back up and running
After monitoring for a while, it seems the issue is now resolved.
We reloaded the malfunctioning component. Service is restored. We're still closely monitoring the issue.
All systems are operational
Compute API situation is back to normal.
Situation is back to nominal.
The situation is now resolved. All services are running normally. We will provide a post mortem as soon as possible.
All systems operational
Fix webpage
No further outage was observed.
From 15:00 CET until 18:00 CET, some SKS clusters were not available in the Zürich zone.
We have identified the issue and deployed a fix. All systems are green.
All systems operational.
All systems operational
The situation is stable and nominal.
Issue has been resolved
The issue is now resolved.
Issue has been resolved.
All systems operational
The issue has been mitigated
All systems operational
Issue has been resolved
All systems operational
All systems operational
All systems operational
All systems operational
All systems operational
Copy-object operations have been re-enabled.
System back to nominal
Instance pool API is fully operational again.
DNS API fully operational.
Duplicate event.
Please see https://exoscalestatus.com/incidents/82153f8a-8914-4d38-81c5-80b91bdc22d5/
Duplicate event.
Please see https://exoscalestatus.com/incidents/82153f8a-8914-4d38-81c5-80b91bdc22d5/
All systems operational.
The issue has been resolved.
All systems operational
All systems operational
The issue has now been resolved.
All systems operational. We'll provide a post-mortem on this incident in the coming days.
duplicate
The issue has been resolved.
Situation is stable.
The service is fully back now. We are closing this incident.
The issue has been permanently fixed and full redundancy has been restored for all the customers.
No further issues have been observed since late yesterday.
The service is fully back now. We are closing this incident.
Incident closed
We monitored the service for several hours and can confirm that latency is back to nominal.
The situation is back to normal
The situation has been resolved
Faulty equipment has been replaced, all systems operational
All services are up and running.
All systems operational. We'll provide a brief post-mortem about this outage in the coming hours
All systems operational
The situation is now back to normal.
All systems operational
All systems are back to nominal.
We are monitoring the situation
All systems operational
All systems operational
All systems operational
All systems operational
All systems operational
We are back to nominal state.
This incident has been resolved.
All systems operational
All systems operational
The root cause has been identified and fixed. Closing the incident.
All systems operational
All systems operational
All systems operational
All systems operational
The API is up and running.
All systems operational
All systems operational
API is back in nominal state
duplicate
DNS is stable in FRA1 and MUC1.
All systems operational
All systems operational
The problem has been identified and solved by our upstream provider.
All systems are now back to nominal.
All systems operational
All systems operational
All systems operational
All systems operational
Service Operational
All systems operational (issue resolved on Aug 29 at 10:40)
The problem has been fixed.
Network connectivity is back to nominal.
All systems are operational.
The issue has been mitigated by Akamai. The CDN service is fully back.
All systems back to nominal
All systems operational
The issue is now resolved.
All systems operational
The DNS performance issue is now resolved.
All systems back to normal
All systems operational
All systems operational
All systems operational
The NLB FRA1 issue is resolved.
The situation is resolved.
All systems operational
All systems operational
All systems back to normal
All systems are operational
The incident in SOS MUC1 is resolved.
A fix has been rolled out and the situation is now stable.
All systems back to nominal
All systems back to nominal
All systems operational
All systems operational
All systems operational
All systems operational
Situation has been resolved. All systems green.
Portal and API are stable and operational.
All systems operational
The problem has been fixed.
All systems operational
The situation is now stable and everything is back to normal in MUC1.
The situation is now stable and everything is back to normal in FRA1.
All systems operational
All systems operational
All systems back to nominal
All systems operational
All systems operational
All systems operational
All systems operational
The incident on SOS FRA1 is resolved.
All systems operational
Duplicate incident
DNS systems working as expected.
All systems operational
The incident on SOS FRA1 is resolved.
All systems back to nominal
All systems operational
All systems operational
Our DNS partner is currently experiencing an issue in several of their datacenters.
All systems operational
All systems operational
All systems operational
All systems operational
All systems operational
All systems operational
All systems operational
The situation is back to normal.
All systems operational
The situation is back to normal.
The performance issue is now resolved.
Performance is now back to normal.
Incident is now resolved.
The situation is now back to normal.
All systems operational
Fiber link is up again.
Traffic is back to normal.
Root cause has been identified and fixed. The situation is now back to normal.
Situation is now back to normal.
The issues have been fixed.
We have managed to fix the redundancy mode and now everything is back to normal.
All systems operational
All systems operational
All systems operational
All systems operational
Issue has been identified and solved.
All systems back to nominal.
All systems operational
All systems operational
All systems operational
All systems operational
duplicate
All systems operational
After monitoring the equipment, we can confirm everything is back to normal.
Issue is related to https://exoscalestatus.com/incidents/d8b94e7e-ce96-40c4-a979-f9719fcff674/
All systems operational
After monitoring, everything is working fine.
All systems operational
The issue has been resolved.
The issue has been resolved.
The issue has been resolved.
All systems operational
The issue has been resolved
All systems operational
Services are back to normal; duplicate of https://exoscalestatus.com/incidents/f972ae99-1154-4aad-a0bc-87962ba1ea3e/
API performance is now back to normal.
API calls for private network management are back to normal.
The issue has been resolved.
The latest fix restored normal API service for privnets in GV2.
The issue has been resolved
The service is operational again; we are monitoring the situation.
The issue has been resolved
The issue has been resolved
The issue has been resolved
The issue has been resolved
Issue has been resolved
Issue is now resolved. It was related to a transit issue with one of our upstream providers
Object storage is now fully restored. Root cause of the routing issue is still to be determined.
The issue has been resolved
The error rate for SOS in Frankfurt is back to normal. The root cause was a faulty VPN link.
Issue has been resolved
Issue is resolved
The issue is now resolved
Issue is resolved.
The issue is now resolved
The root cause will be investigated with the provider.
The issue has been resolved.
The issue is now resolved. One of our upstream internet providers experienced a short outage.
The issue was resolved yesterday.
Buckets are available, but a subset has inaccessible objects.
The faulty hardware has been replaced.
The issue has been resolved
API issue has been resolved. There was a brief interruption.
The API service is operational again.
The issue is now resolved.
The issue has been resolved
The issue has been resolved
The issue has been resolved.
The issue has been resolved
The issue has been resolved.
The issue has been resolved
The issue has been resolved
Service in nominal state
GV2 is again available for deployment, sorry for the inconvenience.
The issue has been resolved.
The issue is now resolved. The root cause is still being investigated.
The issue has been resolved.
CH-GVA-2 is back to normal, and VM operations are working again. Sorry for this rough start to the week.
Issue has been resolved.
The issue has been resolved
The issue has been resolved
The issue has been resolved and traffic is being re-activated on the related links
Connectivity is back to normal. The issue was related to a power outage on the core router of our upstream provider.
The object storage is operational again. We are still investigating the root cause.
Snapshots are available
The issue is now resolved.
The issue is now resolved.
The issue is now resolved
The issue is now resolved. One of our upstream providers experienced a routing issue. We're still waiting on their root cause analysis.
The issue has been resolved
Snapshots are back, and so are VM deployments.
The increased rate of 500 errors is now resolved.
The issue is now resolved.
Issue is now resolved
Snapshots are now enabled
No ongoing network disruptions. Follow-up post-mortem to be posted
No further network issue during the night. Closing the incident.
No partial disruption for more than 7 hours. Closing.
Closing the incident as the attack did not come back today.
The service in DK2 is back to normal after monitoring the situation today.
The issue has been resolved
The issue is now resolved.
The issue has been resolved.
The issue has been resolved
The issue is now resolved. A full post-mortem will be provided once we get all the information from our datacenter provider
The issue is now resolved
The issue has been resolved
The missing template has been identified and the problem is fixed now.
The compute API service is back to normal.
The service is back to normal. Sorry for the inconvenience.
The issue has been resolved.
The issue is now resolved.
Closing as duplicate
The incident is now resolved. The root cause has been identified. A post-mortem will be posted as soon as possible.
The issue is now resolved
The issue has been resolved
The issue is now resolved.
The issue is now resolved
Routing was dysfunctional on a newly allocated hypervisor, which affected instance creation. Full connectivity is now restored.
All systems operational. We continue to monitor the situation. A post-mortem will be added to the incident after further investigation.
The issue has been fixed. Sorry for the inconvenience
The issue is now resolved
All systems operational
All systems operational
All systems operational.
All systems operational
All systems operational
All systems operational. We continue to monitor the situation. A post-mortem will be added to the incident after further investigation.
Incident has been escalated.
All systems operational
All systems operational. A post-mortem will be added to the incident as soon as we have more information from the datacenter provider.
Issue is now resolved. The corrupted routing table will be remediated in a separate scheduled maintenance.
All Systems Operational
The issue is now resolved; snapshots have been re-enabled except on one specific hypervisor.
The issue is now resolved
The issue is now resolved
Network connectivity is back to normal
The network incident is now resolved. A post mortem will follow as soon as we have the required information.
All the impacted VMs have been restarted. A post-mortem should follow shortly.
Issue is now resolved. Post-mortem to follow.
Object storage I/O operation latency is now back to normal
Connectivity issue is now resolved
All systems operational
The issue is now resolved
The disk storage issue has been resolved, the service is back to normal.
The maintenance has been successfully performed by our ISP
The disk latency issue at GV1 is now resolved
The network latency issue has been resolved
The issue has been resolved on Monday December 14th
The degraded I/O performance was identified and fixed. All systems now run as expected.
To ensure no malfunction would persist, the hypervisor was restarted. Affected virtual machines were rebooted in the process. Operations are now back in a nominal state.
The object storage network link is now stable, as the DDoS attack against our provider seems to be over.
The DDoS attack targeting our network provider seems to be over.
All systems operational
All systems operational
All systems operational
Object Storage connectivity is now back. Root cause is under investigation
All systems operational
All systems operational
MySQL is now available again.
Apps routing issues are now resolved. One of the routing engine nodes was mishandling traffic.
Network connectivity is back. Root cause under investigation
All systems operational
All systems operational
All systems operational
The network outage at GV1 is now resolved. Investigation of the root cause will begin. Network traffic was impacted from 20:15 to 20:21 CET.
All systems operational
The network connectivity issue at GV2 is now resolved.
Network connectivity issue is now resolved
Network connectivity issue is now resolved
Apps deployment issue is now fixed
Network connectivity is now recovering
Network connectivity is now recovering
The storage issue at GV1 is now resolved. No impact except higher storage latency for a few minutes.
The DNS maintenance is now completed
The network latency issue due to the DDoS is now resolved.
Network latency issue is now resolved
Incident is now resolved.
The previous issue with the hypervisor has now been resolved. All instances should have recovered network connectivity.
We had a power issue on one managed cloud compute hypervisor. All impacted VMs have been restarted on another hypervisor.