All Systems Operational

About This Site

This page will let you know if all the systems powering TaskRabbit are up and running. You can subscribe to updates about any downtime we might have.
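The overall and per-component states shown below can also be read programmatically. Here is a minimal sketch, assuming this page is hosted on Atlassian Statuspage (whose public pages expose a JSON API under /api/v2/); the host name status.taskrabbit.com is an assumption for illustration, not a documented endpoint.

```python
# Minimal sketch: poll the public status JSON for the overall state and each component.
# Assumption: the page is served by Atlassian Statuspage at status.taskrabbit.com (hypothetical host).
import json
import urllib.request

STATUS_URL = "https://status.taskrabbit.com/api/v2/summary.json"  # hypothetical host

def fetch_summary(url: str = STATUS_URL) -> dict:
    """Fetch the overall status plus per-component states."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    summary = fetch_summary()
    print("Overall:", summary["status"]["description"])  # e.g. "All Systems Operational"
    for component in summary["components"]:
        # Component status is one of: operational, degraded_performance,
        # partial_outage, major_outage, under_maintenance.
        print(f'{component["name"]}: {component["status"]}')
```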

Global API   Operational
www.TaskRabbit.com   Operational
www.TaskRabbit.co.uk   Operational
blog.taskrabbit.com   Operational
tech.taskrabbit.com   Operational
Third Party Services Operational
Twilio REST API   Operational
Twilio Outgoing SMS   Operational
Twilio Outgoing Phone Calls   Operational
Stripe API   Operational
Twilio Incoming Client Calls   Operational
GitHub   Operational
Stripe Webhooks   Operational
Braintree API   Operational
Urban Airship Automation Services   Operational
Braintree European Processing   Operational
Braintree United States Processing   Operational
Braintree PayPal Processing   Operational
Urban Airship API   Operational
Urban Airship Apple iOS Push Services   Operational
Urban Airship Google Android Push Services   Operational
AWS CloudFront   Operational
AWS ec2-us-east-1   Operational
AWS route53   Operational
AWS s3-us-standard   Operational
SendGrid API v2   Operational
SendGrid API   Operational
SendGrid API v3   Operational
SendGrid SMTP   Operational
Cloudinary Image Transformation API   Operational
Cloudinary Image CDN Delivery   Operational
Cloudinary Upload API   Operational
Status legend: Operational · Degraded Performance · Partial Outage · Major Outage · Maintenance
System Metrics
USA API Status Check
UK API Status Check
www.taskrabbit.com page load time
www.taskrabbit.co.uk page load time
Past Incidents
Aug 16, 2018

No incidents reported today.

Aug 15, 2018
Resolved - Our automated systems have detected a few issues, but they don't affect end users. We will continue investigating and monitoring them.
Aug 15, 16:13 PDT
Investigating - Our automated systems have detected a problem. Our engineers have been notified and we are looking into it.
Aug 15, 12:15 PDT
Resolved - We experienced a brief spike which triggered an alert. The load normalized shortly afterwards.
Aug 15, 11:07 PDT
Investigating - Our automated systems have detected a problem. Our engineers have been notified and we are looking into it.
Aug 15, 10:40 PDT
Resolved - A service was temporarily unavailable. Upon restart it recovered.
Aug 15, 00:30 PDT
Investigating - Our automated systems have detected a problem. Our engineers have been notified and we are looking into it.
Aug 14, 23:55 PDT
Aug 14, 2018
Resolved - While updating our system we triggered an alert. All is well and nothing was negatively impacted.
Aug 14, 22:09 PDT
Investigating - Our automated systems have detected a problem. Our engineers have been notified and we are looking into it.
Aug 14, 18:58 PDT
Resolved - A worker was briefly unavailable and caused a noisy alert. Services are running fine.
Aug 14, 15:49 PDT
Investigating - Our automated systems have detected a problem. Our engineers have been notified and we are looking into it.
Aug 14, 12:12 PDT
Resolved - A worker was briefly unavailable. The team is looking into adjusting our alert thresholds so that these updates are less noisy.
Aug 14, 11:06 PDT
Investigating - Our automated systems have detected a problem. Our engineers have been notified and we are looking into it.
Aug 14, 10:59 PDT
Aug 13, 2018
Resolved - A worker was briefly unavailable but recovered.
Aug 13, 18:59 PDT
Investigating - Our automated systems have detected a problem. Our engineers have been notified and we are looking into it.
Aug 13, 15:10 PDT
Resolved - All queues are below the alert threshold.
Aug 13, 13:16 PDT
Monitoring - A deploy caused some queues to get backed up. They are currently draining and we are monitoring progress.
Aug 13, 13:01 PDT
Investigating - Our automated systems have detected a problem. Our engineers have been notified and we are looking into it.
Aug 13, 11:11 PDT
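Several of these incidents come down to background queues backing up past an alert threshold during deploys. As a rough illustration only, here is what such a check might look like if the queues are Redis-backed lists (Resque/Sidekiq style); the queue names, threshold, and Redis location are assumptions, not TaskRabbit's actual setup.

```python
# Sketch of a queue-depth check against an alert threshold.
# Assumptions: Redis-backed list queues; names and threshold are hypothetical.
import redis

ALERT_THRESHOLD = 10_000  # hypothetical per-queue backlog limit
QUEUES = ["queue:default", "queue:mailers", "queue:metrics"]  # hypothetical names

def backed_up_queues(client: redis.Redis) -> dict:
    """Return the queues whose backlog currently exceeds the alert threshold."""
    depths = {name: client.llen(name) for name in QUEUES}
    return {name: depth for name, depth in depths.items() if depth > ALERT_THRESHOLD}

if __name__ == "__main__":
    client = redis.Redis(host="localhost", port=6379)
    offenders = backed_up_queues(client)
    if offenders:
        print("Queues above threshold:", offenders)  # this is what would trigger an alert
    else:
        print("All queues are below the alert threshold.")
```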
Aug 12, 2018
Resolved - The Elasticsearch cluster eventually recovered after we deleted a shard that was likely damaged when the disks filled up. We then re-ingested the dumped data. All systems are back to normal.
Aug 12, 19:36 PDT
Update - We are continuing to monitor cluster recovery. This will take several hours. Event data is being dumped so we can run it through later.
Aug 12, 14:24 PDT
Monitoring - We were unable to write metrics events to our Elasticsearch cluster. We have increased the disk space and are letting the cluster recover before turning the metrics workers back on and reprocessing the events.
Aug 12, 12:58 PDT
Investigating - Our automated systems have detected a problem. Our engineers have been notified and we are looking into it.
Aug 12, 10:51 PDT
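As context for the recovery described above, here is a minimal sketch of the two steps mentioned: wait for the Elasticsearch cluster to return to green health, then bulk re-ingest the dumped metrics events. It assumes elasticsearch-py and a newline-delimited JSON dump; the host, index name, and file name are illustrative, not TaskRabbit's actual configuration.

```python
# Hedged sketch: wait for cluster recovery, then re-ingest dumped events.
# Host, index name, and dump format (one JSON event per line) are assumptions.
import json
import time
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")  # hypothetical cluster address

def wait_for_green(poll_seconds: int = 30) -> None:
    """Block until the cluster reports 'green' health."""
    while es.cluster.health()["status"] != "green":
        time.sleep(poll_seconds)

def reingest(dump_path: str, index: str = "metrics-events") -> None:
    """Bulk index events that were dumped to disk while the cluster was unavailable."""
    def actions():
        with open(dump_path) as fh:
            for line in fh:
                yield {"_index": index, "_source": json.loads(line)}
    helpers.bulk(es, actions())

if __name__ == "__main__":
    wait_for_green()
    reingest("events.dump.jsonl")  # hypothetical dump file
```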
Aug 11, 2018
Resolved - A worker was briefly unavailable but recovered. Nothing was impacted.
Aug 11, 13:25 PDT
Investigating - Our automated systems have detected a problem. Our engineers have been notified and we are looking into it.
Aug 11, 12:12 PDT
Aug 10, 2018
Resolved - We experienced a buildup in our queues during a deploy. After the workers were turned back on, the queues drained. No systems were adversely impacted.
Aug 10, 15:27 PDT
Investigating - Our automated systems have detected a problem. Our engineers have been notified and we are looking into it.
Aug 10, 11:50 PDT
Aug 9, 2018
Resolved - A worker was temporarily unavailable but recovered shortly after the alert.
Aug 9, 15:15 PDT
Investigating - Our automated systems have detected a problem. Our engineers have been notified and we are looking into it.
Aug 9, 15:13 PDT
Aug 8, 2018
Resolved - A brief spike from a recurring job triggered the alert. Marking as resolved.
Aug 8, 15:22 PDT
Investigating - Our automated systems have detected a problem. Our engineers have been notified and we are looking into it.
Aug 8, 12:12 PDT
Aug 7, 2018
Resolved - A brief spike caused an alert. Marking as resolved.
Aug 7, 16:25 PDT
Investigating - Our automated systems have detected a problem. Our engineers have been notified and we are looking into it.
Aug 7, 13:14 PDT
Aug 6, 2018

No incidents reported.

Aug 5, 2018

No incidents reported.

Aug 4, 2018

No incidents reported.

Aug 3, 2018

No incidents reported.

Aug 2, 2018

No incidents reported.