We'd like to keep Spreaker up 100% of the time. When that doesn't happen, we write about it here.
|Service|Status|
|---|---|
|Website|So far so good|
|API|So far so good|
|Streaming|So far so good|
|Mobile apps|So far so good|
We’re currently experiencing slow response times due to a performance issue on our primary database server. We’re investigating it.
Streaming servers are currently under heavy load, and you may not be able to listen to Spreaker audio tracks. We’re turning on more servers; it should be fixed in a few minutes.
From 18:40 to 19:05 UTC we had a failure on one web server: during this timeframe, all requests to that server failed with a 503 error. The issue is now fixed.
From 01:33 to 09:04 UTC we had a failure on some European servers hosted in Topix, Italy. European users may have experienced a connection drop, but they were forwarded to other locations within a few minutes, thanks to our highly available and fault-tolerant infrastructure.
Our service provider AWS is experiencing serious networking issues with its CDN service (CloudFront), and you may run into some problems while using Spreaker. This outage is affecting Spreaker and thousands of other web applications around the globe. We’re really sorry for the inconvenience and we hope AWS will fix it soon.
UPDATE at 01:00 UTC: AWS is currently investigating increased error rates for DNS queries for CloudFront distributions.
UPDATE at 01:38 UTC: the service is gradually recovering. You should now be able to access most Spreaker services (including the website), though some random failures may still occur.
We’re experiencing intermittent networking issues on Spreaker. We’ve alerted Amazon Web Services and we hope they will fix the issue soon.
UPDATE at 15:26 UTC: AWS is investigating Internet provider connectivity issues in the EU-WEST-1 Region.
FIXED at 15:58 UTC: according to AWS, “we experienced impaired Internet connectivity affecting some instances in the EU-WEST-1 Region. The issue has been resolved and the service is operating normally.”
Due to a severe issue with our emailing system, we sent out blank emails over the last few hours. We’ve just pushed a fix to production and introduced additional checks to prevent such issues in the future.
We’re really sorry for the inconvenience.
Spreaker disabled #SSLv3 protocol support in response to the vulnerability published today. Update your browser if you have any issues browsing Spreaker via HTTPS.
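For operators of similar services: if TLS is terminated by nginx (the post doesn’t say which server Spreaker uses, so this is only an illustrative sketch), disabling SSLv3 is a one-line configuration change, since nginx enables exactly the protocols listed:

```nginx
# Only TLS versions are listed; SSLv3 is omitted and therefore disabled.
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
```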
We just resolved an issue with our Tube service that affected many US customers. We’re really sorry for the inconvenience and we’re working hard to avoid the same issue in the future.
A few months ago we switched our traffic to a new, highly available and fault-tolerant recording infrastructure. One of the core components of this infrastructure is the so-called “balancer”. The balancer is the entry point for each recording connection and routes traffic to the nearest available recording server.
Unfortunately, last night one of these balancers stopped working (after 91 days of uptime) in a bad way: the health check still passed, but the balancer was unable to route requests.
We’re currently investigating the root cause of the issue and working to detect this condition during health checks, so that if it happens again in the future we can recover from it automatically.
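The failure mode described above — a process that accepts connections but can no longer route them — is exactly what a shallow liveness check misses. A deep health check that sends a probe through the balancer and requires a routed reply would catch it. A minimal sketch in Python, where the `PING`/`PONG` probe protocol, host, and port are entirely hypothetical (not Spreaker’s actual protocol):

```python
import socket

def balancer_is_healthy(host, port, probe=b"PING\n",
                        expected=b"PONG", timeout=3.0):
    """Deep health check: the probe must travel through the balancer
    and come back routed from a backend. A shallow check would stop
    after the TCP connection succeeds — which stayed green during
    this incident even though routing was broken."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(probe)          # must be routed to a backend
            reply = sock.recv(64)        # shallow checks never read a reply
            return reply.startswith(expected)
    except OSError:                      # refused, reset, or timed out
        return False
```

The key design choice is that the check exercises the same code path real clients use, so "process up but routing broken" is reported as unhealthy.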
Sorry again for the inconvenience. If you have any further questions, don’t hesitate to contact us at http://help.spreaker.com
UPDATE (Oct 3rd)
We investigated the issue: it looks like it was a bug in a client library we use to connect to Redis. We’ve pushed a new balancer version to production that includes two changes.
Issue at 17:30 UTC: We’re currently experiencing some networking issues while publishing your on-demand tracks at the end of your live broadcasts. Don’t worry! Your tracks are not lost; they will just take longer to get published.
We’re working to fix it as soon as possible. New recording connections are not affected by the issue (we have disabled the affected servers).
Resolved at 21:00 UTC: The network issue is now fixed and all on-demand tracks have been published. We’re really sorry for the inconvenience.