RESOLVED: Monzo Services Degraded/Outage - Card Payments may fail (16/01/18)

Thanks @cookywook. Will certainly be good to get a write up like last time as people have previously said.

All seems to be working for me - just paid for a coffee and the notification showed up in my feed instantly.

1 Like

All working here, and thanks for the transparency. As a technology professional myself, I accept that things happen, and I hope we get a detailed technical write-up like the great one previously given by @oliver

If you do publish that, including what you’ve learnt from this, then my confidence in Monzo will actually increase!

3 Likes

Hi everyone, sorry about these issues. We will aim to share a post-mortem with you, but it might be a few days or next week until that’s possible. Hang tight :heart:

29 Likes

As someone who worked Poundworld support (often on my own for a third party company) years ago, yes it would have caused issues. All of their terminals are Verifone. Those days were often very interesting to say the least as the problem was out of my hands, 150 plus stores and just myself trying to contain calls on a Saturday… fun!

3 Likes

I know this is marked as resolved, but my partner’s card is still being declined. Are you aware of any other issues with card payments?

Hi Frank, is everything working now? The issue was definitely resolved yesterday, so there may be something else up with your partner’s card. Feel free to DM me with some info, or have your partner reach out directly via in-app chat :+1:

It was a message I got from her: “card declined again”, with no notification within the app. Will get her to start an in-app chat.

So… ?

Just want to hear some ‘rest assured…’ please.

1 Like

Hi everyone. I’m a backend engineer at Monzo and I’m here to provide an overview of what happened last week when we experienced an outage. Last week, we experienced problems that affected both the prepaid and current account for around 30 minutes. Similar to the last major incident we experienced, I’d like to provide an overview on the causes of this outage, the timeline, and the steps we’ve taken to prevent similar events from happening again.

Background

Unlike the outage in October 2017 in which our entire platform was unavailable, the outage last week was more localised. @oliver, our Head of Engineering, has previously posted an overview of our architecture on our blog (which is a great read) but here’s a rundown of the parts that were relevant during the outage last week.

Our microservice architecture requires us to be able to pass messages between different services (of which we have over 400). A message can represent something as simple as “send an email to a customer” or as complex as “apply this Direct Debit instruction to an account”. Within our system, we pass messages in many different ways. For time-sensitive messages, we might make a synchronous RPC directly to the service. For passing asynchronous messages, which are less time-dependent, we use two different message-passing systems, NSQ and Kafka. Both messaging systems have different use-cases and characteristics. This outage was caused by our use of NSQ.

When a service publishes messages to NSQ it only waits for an acknowledgement that the message was sent to NSQ successfully. The service doesn’t need confirmation that the message has been consumed (received) by another service, and in fact this may happen much later – this is why it is “asynchronous.” If messages are being sent faster than the downstream services can consume them, they are queued and consumed over a longer period of time.
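
To make the publish side concrete, here’s a rough sketch of what publishing looks like with the open-source go-nsq client. It’s for illustration only: the address, topic name, and payload are made-up examples, not a copy of our service code.

```go
package main

import (
	"log"

	nsq "github.com/nsqio/go-nsq"
)

func main() {
	// Illustrative only: "127.0.0.1:4150" and "transaction.created" are
	// example values, not our production addresses or topic names.
	producer, err := nsq.NewProducer("127.0.0.1:4150", nsq.NewConfig())
	if err != nil {
		log.Fatal(err)
	}
	defer producer.Stop()

	// Publish returns once nsqd has acknowledged receipt of the message.
	// There is no confirmation here that any consumer has processed it;
	// that happens asynchronously, possibly much later.
	if err := producer.Publish("transaction.created", []byte(`{"account_id":"acc_123"}`)); err != nil {
		log.Fatal(err)
	}
}
```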

Most messages are multicast to more than one service. Different services might process the same message in different ways and over different periods of time. We can also pause message processing for a particular service. For example, if we notice an issue with a service we can stop it from processing any more messages until we fix it.
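
On the consuming side, each service subscribes to a topic on its own channel, and every channel receives its own copy of each message; pausing a channel stops delivery to that one consumer while its messages queue up behind it. Here’s a hedged sketch of roughly what a consumer looks like with go-nsq (again, all the names are made up):

```go
package main

import (
	"log"

	nsq "github.com/nsqio/go-nsq"
)

func main() {
	// Each service consumes on its own channel; every channel gets a full
	// copy of the topic's messages, which is how one event can fan out to
	// the feed, push notifications, and so on independently.
	// "transaction.created" and "push-notifications" are example names.
	consumer, err := nsq.NewConsumer("transaction.created", "push-notifications", nsq.NewConfig())
	if err != nil {
		log.Fatal(err)
	}

	consumer.AddHandler(nsq.HandlerFunc(func(m *nsq.Message) error {
		// Returning nil marks the message as finished; returning an error
		// requeues it, so a paused or struggling service builds up a
		// backlog rather than losing data.
		log.Printf("got message: %s", m.Body)
		return nil
	}))

	if err := consumer.ConnectToNSQLookupd("127.0.0.1:4161"); err != nil {
		log.Fatal(err)
	}
	<-consumer.StopChan
}
```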

It is important to note that while NSQ is an important part of our message passing system, it is not a critical component. By design, delivery of most messages can be delayed indefinitely. Our core services are designed to work without reliance on NSQ, and this decoupling allowed many of our systems to continue operating through this outage.

Timeline

15th January

On Monday 15th January, we were alerted to higher-than-normal latencies when publishing messages to NSQ. For our users, this resulted in transactions being delayed before appearing in the app and delayed push notifications. We started investigating and mitigated these high latencies by “pausing” less time-sensitive services. Within half an hour we were back to normal operation and no further customer impact was experienced on Monday.

After the incident, we formed a team dedicated to finding and fixing the underlying cause for this increased latency. The cause was found to be that NSQ was overloaded at peak times. We had not increased the capacity of this cluster recently, and with the increase in customers and transactions, it could not keep up.

With our NSQ topology (and more generally in our platform), we can take nodes offline without any downtime. By Monday evening, we had upgraded a single node that formed part of our NSQ cluster and we monitored its performance for signs of improvement.

16th January

The team judged the change to be effective and decided to proceed with upgrading the rest of the nodes. During the upgrade we were pausing and unpausing individual, non-critical message channels in order to test the new nodes.

At 14:37 our COps (Customer Operations) team noticed an increase in support requests and escalated this to the engineering team. At this point we were experiencing a partial outage. While payments continued to process normally, customers stopped seeing new transactions in their feeds and stopped receiving push notifications and some other updates in their apps.

At 14:54 we identified that we had accidentally stopped message processing of all messages being queued to NSQ. Pausing some of these consumers has no impact on customers as messages can be delayed and processed later, but some of the consumers that had been accidentally paused were necessary to update customers’ apps. This caused a backlog of tens of millions of messages within minutes.

At 15:00, we unpaused all message consumption. As our services started to process the huge backlogs, some became saturated with messages. We had distributed millions of messages into our own services, effectively a self-DDoS, and this was when the problems became more serious: some customers experienced failures to load the app and a small number of payments began to fail.

Because the NSQ cluster was experiencing such extreme load, it was unresponsive to our commands to pause message processing to alleviate the load. :turtle: By 15:30 we did succeed in getting the load under control, and this restored most functionality for customers.

At this time, our team worked on getting our NSQ cluster into a state where we were ready to resume message delivery. However, due to the message-saturation issues described earlier, we could not unpause all channels at the same time; we had to unpause channels individually and monitor the performance of downstream services. In order to mitigate the effect of this on our customers, we created a new NSQ cluster and directed our services to use it for new messages.
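
For the curious, pausing and unpausing a channel is exposed by nsqd over its HTTP API. The sketch below shows the general shape of such a call against a local nsqd; the address, topic, and channel names are illustrative, and this is not our internal tooling.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"net/url"
)

// setChannelPaused pauses or unpauses a single channel via nsqd's HTTP API
// (default port 4151). Sketch only: names and addresses are examples.
func setChannelPaused(nsqdHTTPAddr, topic, channel string, paused bool) error {
	action := "unpause"
	if paused {
		action = "pause"
	}
	u := fmt.Sprintf("http://%s/channel/%s?topic=%s&channel=%s",
		nsqdHTTPAddr, action, url.QueryEscape(topic), url.QueryEscape(channel))

	resp, err := http.Post(u, "", nil)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("nsqd returned %s", resp.Status)
	}
	return nil
}

func main() {
	// Unpause one channel at a time and watch the downstream service
	// before moving on to the next.
	if err := setChannelPaused("127.0.0.1:4151", "transaction.created", "push-notifications", false); err != nil {
		log.Fatal(err)
	}
}
```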

By 18:30 we were processing all new messages normally and the backlog for payments and transactions made after this time had been processed.

Mitigation and Prevention

The original capacity issue that prompted the upgrade of NSQ nodes was not noticed until it started affecting our services. We have now improved our visibility into NSQ performance and will be able to identify potential issues before they affect our service. This includes more metrics around message-publishing latencies, as well as system-level monitoring of our NSQ nodes so we can identify when we are close to reaching resource limits.
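
As a rough illustration of the kind of publish-latency metric we mean, here’s a sketch that times each publish and records it in a histogram. It uses the Prometheus Go client purely as an example, and the metric and topic names are made up.

```go
package main

import (
	"log"
	"time"

	nsq "github.com/nsqio/go-nsq"
	"github.com/prometheus/client_golang/prometheus"
)

// Example publish-latency instrumentation. In a real service this would be
// exposed to the monitoring system via a /metrics endpoint and alerted on.
var publishLatency = prometheus.NewHistogramVec(
	prometheus.HistogramOpts{
		Name: "nsq_publish_duration_seconds",
		Help: "Time taken to publish a message to NSQ, by topic.",
	},
	[]string{"topic"},
)

func init() {
	prometheus.MustRegister(publishLatency)
}

// publishWithTiming wraps Publish and observes how long the call took.
func publishWithTiming(p *nsq.Producer, topic string, body []byte) error {
	start := time.Now()
	err := p.Publish(topic, body)
	publishLatency.WithLabelValues(topic).Observe(time.Since(start).Seconds())
	return err
}

func main() {
	producer, err := nsq.NewProducer("127.0.0.1:4150", nsq.NewConfig())
	if err != nil {
		log.Fatal(err)
	}
	defer producer.Stop()

	if err := publishWithTiming(producer, "transaction.created", []byte("{}")); err != nil {
		log.Fatal(err)
	}
}
```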

In retrospect, the decision to unpause all message consumption was poor, and could have been avoided had we been fully aware of the consequences of doing so in the presence of large backlogs. We are improving our team’s knowledge and awareness of how we use NSQ in order to prevent partial outages from escalating into full outages.

Looking further ahead, we realise that all core services need to be able to scale automatically and without human intervention, in response to increased or decreased load, and we will implement systems to make this possible.

We take all outages really seriously and we’re sorry for the impact this incident caused to our customers. Our aim has always been to build a bank that you can depend on. We’ve written this post-mortem because we believe you deserve to know what happened, and what steps we’ve taken to address these issues. If you have any questions, please do ask me and our team.

59 Likes

Most of it went over my head :exploding_head: but this is reassuring.

Thanks for being so open and for this detailed explanation. :+1::raised_hands:

2 Likes

And this is why I believe in Monzo.
Humans screw up sometimes, that is inevitable. I have huge respect :fist: for a company that can put their hand up and admit to making a mistake, explain clearly (even if it goes over my head a bit) why it happened, and what they’re doing so it doesn’t happen again.

Now if we could only get our politicians to admit when they were wrong, we’d be a lot better off :yum:

4 Likes

Some other banks should take note on how to inform their customers instead of hiding behind the curtains :eyes:

At least it’s fixed and lessons learned though :slight_smile:

Yes I am talking about them :joy_cat: :tea:

7 Likes
  • Are you going to monitor pausing?
  • Do you have anything neat like LastFM?
2 Likes

Are you going to monitor pausing?

Yes, we have shiny new graphs and alerts just for this.

Do you have anything neat like LastFM?

We do have dashboard TVs in the office, but nothing related to NSQ at the moment. This is a good idea though!

Awesome! Others should look to you as a model. When one of your competitors had an issue recently, they pretended it never happened and their CEO resorted to personal attacks. Go Monzo and team!

4 Likes

So, paraphrasing (and oversimplifying) to make it clearer for a non-technical audience:

On the 15th you experienced overloading of NSQ, so you upgraded part of it. On the 16th you proceeded to upgrade the rest of NSQ, made a mistake, and as a result overloaded a number of services.

You now have monitoring in place to make it easier to detect NSQ overloading, and processes in place to make it harder to make the same mistake again.

Is that correct or have I misunderstood?

2 Likes

Eliding some context, I think this is accurate. :+1:

3 Likes

Automate all the things! :metal: Don’t forget scalability, reliability, availability, security and everything nice… :sleepy:

1 Like

If you search Twitter for mentions of said CEO, there appear to be clusters of coordinated praise with similar timestamps and wording from different accounts of real people. Good PR dept / CPO there.

1 Like