Update - The system is unable to initialize cloud tests due to a major AWS outage.

Tests started locally with cloud streaming continue to work (`k6 run -o cloud script.js`).
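The two commands differ only in where the test executes: `k6 cloud script.js` runs the test on k6 Cloud infrastructure (currently affected), while `k6 run -o cloud script.js` runs the test on your own machine and only streams metrics to the cloud, which is why it keeps working. A minimal example script that works with either mode (the target URL and load settings below are just placeholders):

```javascript
// script.js - a minimal k6 test script (placeholder target URL and load settings)
import http from 'k6/http';
import { sleep } from 'k6';

export const options = {
  vus: 10,         // virtual users
  duration: '30s', // test duration
};

export default function () {
  http.get('https://test.k6.io'); // placeholder endpoint
  sleep(1);
}

// Cloud execution (currently affected):     k6 cloud script.js
// Local run with cloud streaming (working): k6 run -o cloud script.js
```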

For more information and current status, please check: https://status.aws.amazon.com/
Dec 7, 18:58 CET
Identified - The outage is caused by issues with AWS services. For more information and current status, please check: https://status.aws.amazon.com/
Dec 7, 18:04 CET
Investigating - We are currently investigating an issue that causes k6 Cloud test runs to become stuck in the initializing state.
Dec 7, 17:43 CET
Investigating - We are seeing additional test runs being aborted by the system due to a transient issue with AWS ElastiCache being temporarily unavailable in some instances.

A number of occurrences of this issue have been reported or observed during the last 48-hour period, and we are raising this incident to increase visibility for our users. We will keep this incident open while we monitor the infrastructure and implement a mitigation strategy for similar interruptions.
Dec 3, 16:43 CET
k6.io website: Operational (100.0 % uptime over the past 90 days)
app.k6.io Frontend Application: Operational (100.0 % uptime over the past 90 days)
api.k6.io Cloud REST API: Partial Outage (99.86 % uptime over the past 90 days)
Test Scheduling: Operational (100.0 % uptime over the past 90 days)
Test Runners: Partial Outage (99.92 % uptime over the past 90 days)
"k6 cloud script.js" remote execution mode: Partial Outage (99.86 % uptime over the past 90 days)
"k6 run -o cloud script.js" streaming execution mode: Operational (99.98 % uptime over the past 90 days)
Load zones AWS: Partial Outage (99.92 % uptime over the past 90 days)
Payment processing: Operational (100.0 % uptime over the past 90 days)
Braintree PayPal Processing: Operational (100.0 % uptime over the past 90 days)
Braintree United States Processing: Operational (100.0 % uptime over the past 90 days)
Braintree European Processing: Operational (100.0 % uptime over the past 90 days)
Past Incidents
Dec 7, 2021

Unresolved incidents: Cloud execution disruption due to AWS outage, Tests being aborted by system.

Dec 6, 2021

No incidents reported.

Dec 5, 2021

No incidents reported.

Dec 4, 2021

No incidents reported.

Dec 3, 2021
Dec 2, 2021

No incidents reported.

Dec 1, 2021

No incidents reported.

Nov 30, 2021

No incidents reported.

Nov 29, 2021

No incidents reported.

Nov 28, 2021

No incidents reported.

Nov 27, 2021

No incidents reported.

Nov 26, 2021
Resolved - We have implemented a retry mechanism to mitigate the temporary unavailability of the AWS ElastiCache service. The intermittent connection problems with AWS ElastiCache should no longer impact k6 Cloud.
Nov 26, 19:23 CET
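For illustration only (this is not k6 Cloud's actual backend code): the general shape of such a retry mechanism is a bounded number of attempts with exponential backoff around the failing cache call. The `connectToCache` helper below is hypothetical and stands in for whatever operation was failing while ElastiCache was briefly unavailable.

```javascript
// Illustrative sketch of retry-with-backoff; not k6 Cloud's implementation.
async function withRetries(operation, maxAttempts = 5, baseDelayMs = 100) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await operation();
    } catch (err) {
      if (attempt === maxAttempts) throw err;           // give up after the last attempt
      const delayMs = baseDelayMs * 2 ** (attempt - 1); // 100 ms, 200 ms, 400 ms, ...
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

// Hypothetical usage:
// const cache = await withRetries(() => connectToCache());
```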
Update - The service is available and working.
This morning we had two interruptions, at 8:27 UTC and 10:23 UTC. In each case, running tests were aborted and the service was unavailable for 4 minutes. Since 10:28 UTC, the service has been stable (5 hours).
The incident was caused by the temporary unavailability of AWS ElastiCache.
We will keep this incident open while we monitor the infrastructure and implement a mitigation strategy for similar interruptions.
Nov 25, 16:41 CET
Monitoring - A fix has been implemented and we are monitoring the progress.
Nov 25, 13:58 CET
Investigating - We are currently investigating an issue that is causing a portion of k6 Cloud test runs to be aborted by the system.
Nov 25, 09:50 CET
Nov 25, 2021
Nov 24, 2021

No incidents reported.

Nov 23, 2021
Resolved - This incident was caused by unexpected memory consumption and table locking during a scheduled schema migration of our main metrics database. The database migration started on Monday at 20:00 UTC and ran in the background for 8 hours and 12 minutes without impacting the service. At 00:15 UTC, a large database table storing HTTP metrics began migrating. At 4:12 UTC, the migration had consumed about 60 GB of memory and started impacting INSERT performance, possibly due to a database lock (still under investigation). Affected users started seeing delays in metrics insertion and errors when retrieving data via app.k6.io. Our engineers began investigating the issue and looking for the cause. The database migration was aborted, and the service was fully restored by 7:30 UTC.

64 k6 test runs timed out or were aborted during this time window.

We are continuing to investigate the root cause of the incident and to revise our internal procedures for monitoring long-running database migrations.

We apologize for any impact the service disruption may have had on your organization.
Nov 23, 05:30 CET