We'd like to keep Spreaker up 100% of the time. When that doesn't happen, we write about it here.
|Component|Status|
|---|---|
|Website|So far so good|
|API|So far so good|
|Streaming|So far so good|
|Mobile apps|So far so good|
Due to a severe issue with our emailing system, we sent out blank emails over the past few hours. We’ve just pushed a fix to production, and we’ve also introduced more checks in order to avoid such issues in the future.
We’re really sorry for the inconvenience.
Spreaker disabled #SSLv3 protocol support in response to the vulnerability published today. Update your browser if you have any issues navigating Spreaker via HTTPS.
We just resolved an issue with our Tube service that affected many US customers. We’re really sorry for the inconvenience and we’re working hard to avoid the same issue in the future.
A few months ago we switched our traffic to a new highly available and fault-tolerant recording infrastructure. One of the core components of this infrastructure is the so-called “balancer”. The balancer is the entry point for each recording connection and routes the traffic to the nearest available recording server.
Unfortunately, last night one of these balancers failed (after 91 days of uptime) in a bad way: its health check still reported OK, but the balancer was unable to route requests.
We’re currently working to investigate the root cause of the issue and to detect such a condition during the health checks, so that if it happens again in the future we’ll be able to recover automatically.
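One way to catch this failure mode is to make the health check exercise the routing path end-to-end instead of only checking that the balancer accepts connections. The sketch below is purely illustrative (the function names are hypothetical, not Spreaker’s actual code):

```python
# Hypothetical sketch: a "deep" health check that exercises routing,
# not just liveness. A balancer can keep accepting TCP connections
# (so a shallow check passes) while its routing logic is wedged.

def shallow_check(accept_connection):
    """Passes as long as the balancer accepts a connection."""
    try:
        accept_connection()
        return True
    except OSError:
        return False

def deep_check(accept_connection, route_probe_request):
    """Also sends a probe that must be routed to a backend server."""
    if not shallow_check(accept_connection):
        return False
    try:
        # The probe only passes if a backend actually answered.
        return route_probe_request() == "ok"
    except OSError:
        return False
```

In the failure described above, `accept_connection` would still succeed while `route_probe_request` would fail, so the deep check would correctly flag the balancer as unhealthy.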
Sorry again for the inconvenience. If you have any further questions, don’t hesitate to contact us at http://help.spreaker.com
UPDATE (Oct 3rd)
We investigated the issue and it looks like it was a bug in a client library we use to connect to Redis. We just pushed a new balancer version to production that includes two changes:
Issue at 17:30 UTC: We’re currently experiencing some networking issues while publishing your on-demand tracks at the end of your live broadcast. No worries! Your tracks are not lost: they will just take more time to get published.
We’re working to fix it as soon as possible. New recording connections are not affected by the issue (we’ve temporarily disabled the affected servers).
Resolved at 21:00 UTC: The network issue is now fixed and all on-demand tracks have been published. We’re really sorry for the inconvenience.
We know how important reliability is to you, and so in these past weeks we’ve worked to provide you with a highly available and fault-tolerant recording infrastructure.
We’re progressively rolling out this new infrastructure to all users. Currently all PRO users that broadcast with 3rd party applications are routed to this new infrastructure; in the next weeks we’ll open it up to all users and apps.
In this post, we’d love to share some tech details about it with you, in order to show you how it works and how we handle interruptions.
How it works
The image below shows the big picture.
When an application starts live broadcasting, it connects to icecast.spreaker.com. This DNS entry is resolved to the load-balancer closest to you (latency-based routing), and then the connection is routed to an available server inside that datacenter.
This design guarantees that:
Spreaker’s recording infrastructure is currently deployed in 3 datacenters: Europe (Ireland), US East (Virginia), and US West (Oregon).
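Latency-based routing can be pictured with a small sketch. In our case the selection is done by DNS on the server side, but the idea is the same; the region names echo the datacenters above and the latency numbers are made up:

```python
# Hypothetical sketch of latency-based routing: given measured
# round-trip times from the client to each datacenter, pick the
# closest (lowest-latency) one.

def closest_datacenter(latencies_ms):
    """Return the datacenter with the lowest measured latency."""
    return min(latencies_ms, key=latencies_ms.get)

# Example measurements for a client on the US East Coast (made-up values):
latencies = {"eu-west": 95.0, "us-east": 12.0, "us-west": 78.0}
print(closest_datacenter(latencies))  # "us-east"
```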
What if the connection between the client and the balancer drops?
If the connection between the client and the load balancer drops, the client will automatically retry to connect to icecast.spreaker.com. Once the connection is re-established, the balancer will route the connection to the same exact server where the client was connected before, so that it can continue to broadcast.
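On the client side, this kind of reconnection logic usually boils down to a retry loop with backoff. A minimal sketch, under our own assumptions (the connect function is injected here; real broadcasting clients obviously do more):

```python
import time

def reconnect(connect, max_attempts=5, backoff_s=1.0, sleep=time.sleep):
    """Retry `connect()` with linear backoff until it succeeds.

    Returns the established connection, or re-raises the last error
    after `max_attempts` failures. `connect` and `sleep` are injectable
    so the loop is easy to test without a network.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return connect()
        except OSError:
            if attempt == max_attempts:
                raise
            sleep(backoff_s * attempt)
```

Since the balancer routes the re-established connection back to the same server, the client can simply resume broadcasting once `connect()` succeeds.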
What if a balancer is down?
The DNS entry for icecast.spreaker.com is managed by AWS Route 53, which constantly checks the health status of each balancer and, if a balancer is down, temporarily removes it from the pool of available ones.
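A Route 53 health check of this kind can be created with the AWS CLI. The command below is only an illustration with made-up values (IP, port, thresholds), not our actual configuration:

```shell
# Hypothetical example: a TCP health check probing a balancer every
# 10 seconds; after 3 consecutive failures Route 53 marks it unhealthy
# and stops returning it in DNS answers.
aws route53 create-health-check \
  --caller-reference balancer-eu-1 \
  --health-check-config \
    Type=TCP,IPAddress=203.0.113.10,Port=8000,RequestInterval=10,FailureThreshold=3
```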
So, when a balancer goes down:
What if a server is down?
The infrastructure constantly monitors the health of each server. When a server is down, it’s temporarily removed from the pool of available servers. The balancer will route new requests (or reconnection requests) to other available servers in the same datacenter.
The worst case scenario is when all servers in a datacenter are down. In this case, the balancer will route new requests (or reconnection requests) to available servers in other datacenters.
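The selection logic described above can be sketched roughly as follows (hypothetical names; the real balancer works from live health data):

```python
def pick_server(servers, local_dc):
    """Pick a healthy server, preferring the local datacenter.

    `servers` is a list of (datacenter, name, healthy) tuples.
    Falls back to any healthy server in another datacenter;
    returns None only if every server everywhere is down.
    """
    local = [s for s in servers if s[0] == local_dc and s[2]]
    if local:
        return local[0][1]
    remote = [s for s in servers if s[2]]
    return remote[0][1] if remote else None
```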
We’re experiencing some networking issues between two datacenters. Some of you could be temporarily unable to broadcast, or, once your live broadcast ends, the recorded track could take longer than usual to become ready. We’re working to fix it as soon as possible.
Issue opened at 1:40 UTC
We’re experiencing networking issues. The issue looks related to our provider’s internal network (Amazon Web Services; it’s also been confirmed by many other customers) and we’ve already alerted them. We’re waiting for a fix.
We apologize for any inconvenience.
Update at 2:10 UTC
The issue has been confirmed by AWS and their engineers are working on it.
As a side note, the chart below shows how the networking issues affected Spreaker users. It looks like **about 30%** of users are affected.
Update at 2:15 UTC
Many users are reporting that the networking issues have now been fixed, but we still haven’t received any official bulletin from AWS.
Resolved at 2:28 UTC
AWS confirms the issue has been fixed.
We’re currently experiencing some networking issues from/to the US. Spreaker’s web servers are currently hosted at AWS (Amazon Web Services) datacenters in Europe, and there are some networking failures between AWS Europe and some US nodes.
We’re really sorry for the inconvenience.
UPDATE at 21:12 UTC
The networking issues are caused by a severed Trans-Atlantic link. Telia advised that they’re working to resolve a major network issue.
Unlike other cloud providers, AWS hasn’t yet disclosed any action to route network traffic through other network providers, so some of you may still experience networking issues.
UPDATE at 21:53 UTC
The following map from Akamai shows the affected area.
UPDATE at 22:06 UTC
According to DigitalOcean, Telia has repaired their issue.
We’re experiencing a high failure rate on our API. We’re investigating it. More updates will be published here.
UPDATE: the issue was caused by a high load on our RabbitMQ servers (we currently have 2 masters). This caused a chain effect that led to a temporary service failure in our API. Since most of our applications (both web and mobile) are built upon our API, most of you were unable to use Spreaker. We’re really sorry for that, and we’ve already planned some improvements for next week in order to avoid such issues in the future.
As you may know, 2 years ago we deprecated XML and JSONP support, keeping JSON as the only officially supported format.
In the last 30 days we haven’t received a single request using the XML or JSONP response formats, so today we completely removed support for them. You should notice no difference, since all of you are now using JSON.
Thanks for your help to make it happen!