Could the event store be a single point of failure?

For a couple of days now I’ve been trying to figure out how to tell other microservices that a new entity has been created in microservice A, which stores that entity in MongoDB.

I would like to:

  • Loose coupling between microservices

  • Avoid distributed transactions between microservices, such as Two-Phase Commit (2PC)

At first, a message broker such as RabbitMQ seems like the right tool for the job, but then I see the problem: inserting a new document into MongoDB and publishing a message to the broker is not atomic.

Why event sourcing? (slide from eventuate.io; diagram not reproduced)

One way to solve this problem is to make the document schema a little messier by adding a flag that indicates whether the document has been published to the broker, and to have a background process that finds unpublished documents in MongoDB and publishes them to the broker using publisher confirms; when the confirmation arrives, the document is marked as published (giving at-least-once delivery semantics). This solution is proposed in this question and these answers.
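A minimal sketch of that relay process, assuming pymongo and pika; the collection, queue, and field names here are illustrative, not prescribed by the pattern:

    # Sketch of the "flag + background poller" relay described above.
    # Assumes pymongo and pika; collection/queue names are illustrative.
    import time
    import pika
    from pymongo import MongoClient

    mongo = MongoClient("mongodb://localhost:27017")
    orders = mongo.shop.orders  # hypothetical collection

    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = conn.channel()
    channel.queue_declare(queue="order-events", durable=True)
    channel.confirm_delivery()  # broker confirms each publish

    while True:
        for doc in orders.find({"published": False}):
            # basic_publish blocks until the broker confirms and raises on
            # a nack, so a lost message is never marked as published
            channel.basic_publish(
                exchange="",
                routing_key="order-events",
                body=str(doc["_id"]),
                properties=pika.BasicProperties(delivery_mode=2),  # persistent
            )
            orders.update_one({"_id": doc["_id"]}, {"$set": {"published": True}})
        time.sleep(1)

A crash between the publish and the flag update simply causes a re-publish on the next pass, which is exactly why the consumers have to tolerate duplicates.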

After reading Chris Richardson's Introduction to Microservices, I ended up at a great presentation, Developing Functional Domain Models with Event Sourcing, where one of the slides asks:

How to atomically update the database and publish events without 2PC? (the dual-write problem)

The answer is simple (on the next slide):

Update the database and publish events

This is a different approach from this one, which is based on CQRS à la Greg Young.

The domain repository is responsible for publishing the events; this would typically happen within a single transaction together with storing the events in the event store.

I think that delegating the responsibility for storing and publishing events to the event store is good, because it avoids the need for 2PC or a background process.
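As a rough illustration of "store and publish within one transaction", here is a sketch with sqlite3 standing in for the event store; the class and table names are my own, not eventuate.io's API:

    # Sketch of an event store whose save() appends events in one ACID
    # transaction and hands them to subscribers after the commit.
    # sqlite3 stands in for the real store; all names are illustrative.
    import json
    import sqlite3

    class EventStore:
        def __init__(self, path=":memory:"):
            self.db = sqlite3.connect(path)
            self.db.execute(
                "CREATE TABLE IF NOT EXISTS events ("
                " id INTEGER PRIMARY KEY, stream TEXT, type TEXT, payload TEXT)"
            )
            self.subscribers = []

        def save(self, stream, events):
            with self.db:  # one transaction for the whole batch
                for etype, payload in events:
                    self.db.execute(
                        "INSERT INTO events (stream, type, payload) VALUES (?, ?, ?)",
                        (stream, etype, json.dumps(payload)),
                    )
            # publish only after the commit succeeded
            for event in events:
                for handler in self.subscribers:
                    handler(stream, event)

    store = EventStore()
    store.subscribers.append(lambda s, e: print("published:", s, e))
    store.save("account-1", [("AccountCreated", {"balance": 100})])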

However, in a way it is also true that:

If you rely on the event store to publish the events, you end up tightly coupled to the storage engine.

But we could say the same if we adopted a message broker for communication between microservices.

What bothers me most is that the event store seems to become a single point of failure.

If we look at this example from eventuate.io (diagram not reproduced), we see that if the event store goes down, we can no longer create accounts or money transfers, losing one of the advantages of microservices (although the system will continue to respond to queries).

So, is it correct to say that the event store used in the event sourcing example is a single point of failure?

+9

5 answers

What you are facing is an instance of the Two Generals Problem. Essentially, you want two entities on a network to agree on something, but the network can fail. Leslie Lamport proved that this is impossible.

So no matter how many entities you add to your network, even if it is just one message queue, you can never be 100% sure that agreement will be reached. In fact, the opposite holds: the more entities you add to your distributed system, the less sure you can be that agreement will eventually be reached.

The practical answer for your case is that 2PC is not so bad when the alternative is adding even more complexity and more single points of failure. If you absolutely cannot have a single point of failure and you are willing to assume that the network is reliable (in other words, that the network itself cannot be the single point of failure), you could try a P2P algorithm such as a DHT, but I would bet it eventually reduces to 2PC anyway.

+5

We handle this using the Outbox feature in NServiceBus:

http://docs.particular.net/nservicebus/outbox/

This approach requires that the initial trigger for the whole operation arrives as a message on the queue, but it works very well.

+1

You could also add a flag to each entry in the event store that records whether the event has already been published. Another process could poll the event store for unpublished events and put them on a message queue or topic. The drawback of this approach is that consumers of that queue or topic have to be designed to deduplicate incoming messages, since the pattern only guarantees at-least-once delivery. Another drawback may be the latency introduced by the polling interval. But since we are already in eventual-consistency territory here, that may not be a serious problem.
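A sketch of that consumer-side deduplication, assuming pymongo; the processed_ids collection and the event shape are assumptions made for illustration:

    # Sketch of an idempotent consumer: duplicates from the at-least-once
    # relay are detected by recording each processed event id under a
    # unique index. Collection name and event shape are illustrative.
    from pymongo import MongoClient
    from pymongo.errors import DuplicateKeyError

    db = MongoClient("mongodb://localhost:27017").shop
    db.processed_ids.create_index("event_id", unique=True)

    def handle(event):
        try:
            # insert first: the unique index makes this an atomic "claim"
            db.processed_ids.insert_one({"event_id": event["id"]})
        except DuplicateKeyError:
            return  # already processed, drop the duplicate
        # a crash here, after the claim but before the projection, would
        # drop the event; in production the claim and the projection
        # update should share a transaction
        apply_projection(event)

    def apply_projection(event):  # hypothetical projection update
        print("applying", event["id"])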

+1

How about having two event stores, and whenever a domain event is created, queuing it to both of them? The event handler on the query side would then process events coming out of both event stores.

Of course, every event handler must be idempotent. But wouldn’t this solve our problem of the event store being a single point of failure?
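As a toy illustration of that fan-in, with both "stores" reduced to in-memory lists: the handler dedupes by event id, so receiving each event from both stores is harmless, and either store alone is enough to rebuild the view:

    # Sketch of the two-event-store idea: each event is written to both
    # stores, and the query-side handler is idempotent. All names are
    # illustrative.
    store_a, store_b = [], []

    def append_event(event):
        # dual write: the event is queued to both stores
        store_a.append(event)
        store_b.append(event)

    seen, view = set(), {}

    def project(event):
        if event["id"] in seen:  # idempotent: the second copy is a no-op
            return
        seen.add(event["id"])
        view[event["account"]] = view.get(event["account"], 0) + event["amount"]

    append_event({"id": "e1", "account": "acc-1", "amount": 100})
    for event in store_a + store_b:  # events arrive from both stores
        project(event)
    print(view)  # {'acc-1': 100}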

+1

Not a MongoDB solution specifically, but have you considered using the Streams feature introduced in Redis 5 to implement a reliable event store? Check out this introduction here.

I found that it has a rich feature set, such as message tracking, message acknowledgment, and the ability to easily fetch unacknowledged messages. This certainly helps to implement at-least-once messaging guarantees. It also supports load balancing of messages across consumers using the "consumer group" concept, which can help with scaling the processing side.

As for your concern about a single point of failure: according to the documentation, streams and consumer information can be replicated across nodes and persisted to disk (I believe using the usual Redis mechanisms), which helps to eliminate the single point of failure. I am currently considering using this in one of my microservice projects.
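A sketch of that setup with redis-py; the stream, group, and consumer names are made up:

    # Sketch of Redis Streams as an event log, using redis-py.
    # Stream, group, and consumer names are illustrative.
    import redis

    r = redis.Redis()

    # append an event to the stream (the "event store")
    r.xadd("account-events", {"type": "AccountCreated", "account": "acc-1"})

    # create a consumer group once; ignore the error if it already exists
    try:
        r.xgroup_create("account-events", "projectors", id="0", mkstream=True)
    except redis.ResponseError:
        pass

    # read new events as part of the group; unacked entries stay pending
    # and can be fetched again, giving at-least-once delivery
    entries = r.xreadgroup("projectors", "worker-1",
                           {"account-events": ">"}, count=10, block=1000)
    for stream, messages in entries:
        for msg_id, fields in messages:
            print("handling", msg_id, fields)
            r.xack("account-events", "projectors", msg_id)  # acknowledge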

0
