Monzo on k8s - Ingress traffic management

Hi!

Long-time lurker, first-time poster!

This is a question about Monzo’s architecture, not one about the public API.

Background:
I’m currently designing a bespoke piece of software and we’re adopting a microservices architecture running on Kubernetes (AWS EKS).

Question:
How does Monzo route ingress traffic? I’m concerned that exposing one service via a load balancer and just tagging on a route per service, e.g. ‘api.monzo.com/customer/1’, is going to become unmanageable. Do you use an API gateway such as Gloo or Ambassador?

Luckily, all of our ingress/egress traffic will remain inside the cluster, so there’s no strict requirement to do this at all, but I wanted to get other opinions.

We’re planning to use Istio as a service mesh and Calico policies to restrict pod-to-pod access.

Whilst planning, we estimate we’ll deploy 500+ services, and I’ve read that Monzo runs many more while still seeming really organised.

If this question is out of scope, I apologise! Please delete/close the thread.

Thanks

I think @oliver is the head of backend.

But have a read of these threads and blog posts; they might help.

I’m sure @oliver would know, as he’s the Head of Engineering. But @chrisevans is the Head of Platform (which is specifically this domain) and might be able to provide some insight.

Thanks!

Is there a dev Slack that people can join?

No, it closed down earlier in the year.

Thanks so much!

Thanks so much - hopefully he sees this :slight_smile:

Hello :wave: Thanks for tagging me @danbeddows. I’d be happy to explain how this works.

To allow traffic into the cluster, we use a Kubernetes NodePort service to expose a fixed port on each of our nodes, and an externally Terraformed AWS Application Load Balancer (ALB) targeting those hosts. Whilst we could use a Kubernetes LoadBalancer service to do the whole job, we already use Terraform for the rest of the cluster, and like having direct control over the ALB.
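
If it helps to see the shape of that, here’s a minimal sketch of such a NodePort Service declared via client-go (Go being the language of this thread). The names, namespace, and port numbers are illustrative assumptions, not our real values, and in practice the Service would more likely live in a manifest alongside the Terraform:

```go
// Sketch only: declares a NodePort Service that exposes a fixed port on
// every node, which an external load balancer (like the ALB) can target.
// All names and ports here are made up for illustration.
package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "edge-proxy", Namespace: "default"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeNodePort,
			Selector: map[string]string{"app": "edge-proxy"},
			Ports: []corev1.ServicePort{{
				Port:       80,                   // cluster-internal port
				TargetPort: intstr.FromInt(8080), // the proxy container's port
				NodePort:   30080,                // fixed port opened on every node
			}},
		},
	}

	if _, err := client.CoreV1().Services(svc.Namespace).Create(
		context.Background(), svc, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("edge-proxy NodePort service created")
}
```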

On the Kubernetes side, we’ve written a custom proxy in Go, which acts similarly to an ingress controller like Nginx. Instead of using Kubernetes Ingress objects to define where requests should be routed, we rely on implicit routing based on a common naming convention. In slightly less abstract terms: if I write a service called service.api.foo, the edge proxy will automatically route requests from api.monzo.com/foo to it.

Overall the logical flow of traffic looks like:

Request for api.monzo.com/foo/something -> ALB -> one of the Kubernetes nodes -> a Kubernetes node running the edge proxy -> edge proxy process -> service.api.foo/something
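
To make the convention concrete (our real proxy is considerably more involved and not something I can share), a toy version of that path-to-service mapping in Go could look like this, assuming the service.api.<name> hostnames resolve via cluster DNS:

```go
// Toy convention-based edge proxy: routes /foo/something to
// http://service.api.foo/something. Illustration only; error handling,
// auth, metrics, etc. are all omitted.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"strings"
)

func main() {
	proxy := &httputil.ReverseProxy{
		Director: func(req *http.Request) {
			// "/foo/something" -> service segment "foo", remainder "something".
			parts := strings.SplitN(strings.TrimPrefix(req.URL.Path, "/"), "/", 2)
			rest := ""
			if len(parts) > 1 {
				rest = parts[1]
			}
			// Rewrite the request to target the conventionally named service.
			req.URL.Scheme = "http"
			req.URL.Host = "service.api." + parts[0]
			req.URL.Path = "/" + rest
		},
	}

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// No service segment in the path: nothing to route to.
		if strings.Trim(r.URL.Path, "/") == "" {
			http.NotFound(w, r)
			return
		}
		proxy.ServeHTTP(w, r)
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

With that running, a request to localhost:8080/foo/something would be forwarded to service.api.foo/something, matching the flow above.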

I believe much of Monzo’s success is down to conventions like this, which make it super easy to focus on differentiated business logic rather than repeated boilerplate.

Hi Chris!

Thanks for the detailed response. Super helpful :clap:

Is the custom proxy OSS?

This is really neat. We’re looking into Terraform now!

It’s not, I’m afraid. It’s quite heavily tied into other parts of our platform, which also aren’t open source.

If you’re just starting out, one of the off-the-shelf ingress controllers would be a reasonable and battle-tested approach :slight_smile: