Publish event and store at the same time (PostgreSQL/Cassandra -> Kafka)

Hi,

First of all, thank you all for sharing so much and being so open. I've watched the YouTube talks by Matt Heath, Oliver Beattie and Simon Vans Colina. I'm trying to understand your microservice architecture, specifically how you guarantee that data is written to the database (Cassandra for most services and SQL for the ledger, as far as I understand) and that a message is also published to Kafka. I don't think your system uses some kind of 2PC, so my guess is that you tail the database log for events and publish them to Kafka. Is that correct? If so, is there an off-the-shelf product or open-source project you are using, or one similar to yours?


Paging @matt, @oliver, and @simon!

Here’s Simon’s answer :slight_smile:

(The best place to ask these questions is the developer’s Slack channel).



As @simon says, we’ll be in a better position to talk about this in the coming months, but specifically to the point about writing to a database and publishing to a message queue reliably and simultaneously: this is a hard problem.

This is solved largely by flipping the model around: message publication comes first, and “defines” when something is committed. Other components of the system then consume from the queue and update their own databases. This obviously isn't practical in every situation – our ledger is an example where things don't work this way, and it has a lot of logic around handling failure cases.
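
To make that shape concrete, here's a rough Go sketch of the publish-first flow (illustrative only, not Monzo's actual code). It assumes the Shopify/sarama Kafka client; the topic name, the `AccountCredited` event and the `applyToDatabase` helper are made up for the example:

```go
// Sketch of "publish first, then consume and persist": a successful write to
// Kafka is treated as the commit point, and a separate component consumes the
// topic and updates its own database afterwards.
package main

import (
	"encoding/json"
	"log"

	"github.com/Shopify/sarama"
)

// AccountCredited is a hypothetical event payload.
type AccountCredited struct {
	AccountID   string `json:"account_id"`
	AmountPence int64  `json:"amount_pence"`
}

func main() {
	brokers := []string{"localhost:9092"}

	// 1. Publish the event. The successful Kafka write is the "commit":
	//    if this fails, the operation has not happened anywhere.
	cfg := sarama.NewConfig()
	cfg.Producer.Return.Successes = true // required by SyncProducer
	producer, err := sarama.NewSyncProducer(brokers, cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer producer.Close()

	payload, _ := json.Marshal(AccountCredited{AccountID: "acc_123", AmountPence: 250})
	if _, _, err := producer.SendMessage(&sarama.ProducerMessage{
		Topic: "account-credited",
		Value: sarama.ByteEncoder(payload),
	}); err != nil {
		log.Fatal("event not committed: ", err)
	}

	// 2. A separate component consumes the topic and updates its own
	//    database. If the database write fails it can retry, because the
	//    event is durably stored in Kafka.
	consumer, err := sarama.NewConsumer(brokers, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer consumer.Close()

	pc, err := consumer.ConsumePartition("account-credited", 0, sarama.OffsetOldest)
	if err != nil {
		log.Fatal(err)
	}
	defer pc.Close()

	for msg := range pc.Messages() {
		var event AccountCredited
		if err := json.Unmarshal(msg.Value, &event); err != nil {
			continue
		}
		applyToDatabase(event) // hypothetical: an idempotent upsert into Cassandra/SQL
	}
}

// applyToDatabase stands in for the consumer's own persistence logic.
func applyToDatabase(e AccountCredited) {
	log.Printf("applied %s: %d pence", e.AccountID, e.AmountPence)
}
```

The important property is that the Kafka write is the single commit point: if it fails, nothing has happened anywhere; if it succeeds, downstream databases catch up by consuming the topic and retrying their own writes as needed.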

I am definitely planning to write a blog post about how this works in lots of detail when the time is right :nerd:


You are all super helpful. I'm looking forward to the talk/blog post. I wish you all the best.
