Hazelcast and etcd are two very different systems. The reason is the CAP theorem.
The CAP theorem states that no distributed system can simultaneously provide consistency, availability, and partition tolerance. Distributed systems are typically either AP or CP. Hazelcast is an AP system, and etcd (an implementation of Raft) is CP. So your choice is between consistency and availability/performance.
Overall, Hazelcast will be much more performant and will tolerate more failures than Raft/etcd, but at the cost of potential data loss or consistency issues. The way Hazelcast works is by partitioning data and storing pieces of it on different nodes. So, in a 5-node cluster, the key "foo" may be stored on nodes 1 and 2, and the key "bar" may be stored on nodes 3 and 4. You can control the number of nodes to which Hazelcast replicates data via its cluster and map configurations. However, during a network partition or other failure, there is some risk that you will see stale data or even lose data in Hazelcast.
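The partitioning idea can be sketched as follows. This is a toy model, not Hazelcast's actual implementation: the node names, hash function, and partition count here are illustrative (real Hazelcast defaults to 271 partitions and uses its own placement scheme).

```python
import zlib

# Toy sketch of hash-partitioned placement with one backup replica.
# NODES, PARTITIONS, and the crc32 hash are illustrative assumptions.
PARTITIONS = 7
NODES = ["node1", "node2", "node3", "node4", "node5"]
BACKUP_COUNT = 1  # extra replicas per key, akin to a map's backup-count setting

def owners(key: str) -> list[str]:
    """Return the nodes holding the primary and backup copies of a key."""
    partition = zlib.crc32(key.encode()) % PARTITIONS  # key -> partition
    primary = partition % len(NODES)                   # partition -> owning node
    # backups land on the next nodes in the ring
    return [NODES[(primary + i) % len(NODES)] for i in range(BACKUP_COUNT + 1)]

print(owners("foo"))  # two distinct nodes hold "foo"
print(owners("bar"))  # "bar" may land on a different pair of nodes
```

With `BACKUP_COUNT = 1`, each key lives on exactly two nodes, so losing one node loses no data, but losing both owners of a partition does.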
Conversely, Raft/etcd is a single-leader, highly consistent system that stores data on all nodes. This means it is not ideal for storing large amounts of state. But even during a network failure, etcd can guarantee that your data will remain consistent. In other words, you will never see old/stale data. But this comes at a cost: CP systems require that a majority of the cluster be alive to operate normally.
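The majority requirement is plain quorum arithmetic; a minimal sketch (not etcd's API):

```python
# Quorum arithmetic behind CP systems like etcd: writes (and leader
# elections) succeed only while a majority of the cluster is reachable.

def majority(cluster_size: int) -> int:
    """Smallest number of nodes that constitutes a quorum."""
    return cluster_size // 2 + 1

def can_make_progress(cluster_size: int, reachable: int) -> bool:
    """A CP cluster keeps serving only while a quorum is reachable."""
    return reachable >= majority(cluster_size)

print(majority(5))              # 3: a 5-node cluster needs 3 nodes up
print(can_make_progress(5, 3))  # True
print(can_make_progress(5, 2))  # False: the minority side refuses writes
```

Note that a 4-node cluster also needs 3 nodes for a quorum, which is why CP clusters are usually sized with an odd number of nodes.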
The consistency issue may or may not be relevant for a basic key-value store, but it can be extremely relevant for locks. If you expect your locks to be consistent across the cluster (meaning only one node can hold a lock even during a network or other failure), do not use Hazelcast. Because Hazelcast sacrifices consistency in favor of availability (again, see the CAP theorem), it is entirely possible that a network failure could lead to two nodes both believing a lock is free to acquire.
Conversely, Raft guarantees that during a network failure only one node can remain the leader of the etcd cluster, and all decisions go through that node. This means etcd can guarantee that it has a consistent view of the cluster's state at all times and can ensure that something like a lock is acquired by only one process.
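The difference for locks can be shown with a toy split-brain simulation. The functions below are hypothetical and model only the quorum rule, not either system's real lock API:

```python
# Toy split-brain model: a 5-node cluster partitions into {3 nodes} and {2 nodes}.
# An AP lock service keeps answering in every partition, so each side can hand
# out "the" lock; a CP service answers only on the majority side.

CLUSTER_SIZE = 5

def ap_acquire(partition_sizes: list[int]) -> int:
    """Number of lock holders an AP service can produce: one per partition."""
    return len(partition_sizes)

def cp_acquire(partition_sizes: list[int]) -> int:
    """Number of lock holders a CP service allows: only majority partitions grant."""
    return sum(1 for size in partition_sizes if size > CLUSTER_SIZE // 2)

split = [3, 2]
print(ap_acquire(split))  # 2 -- split brain: both sides think the lock is free
print(cp_acquire(split))  # 1 -- only the 3-node majority side can grant it
```

If the cluster splits so that no partition holds a majority (e.g. 2/2/1), a CP service grants no locks at all: it chooses unavailability over handing out two.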
Really, you need to consider what you are looking for in your data store and choose accordingly. The use cases for CP and AP data stores are significantly different. If you want consistent storage of small amounts of state, consistent locks, leader election, and other coordination tools, use a CP system like ZooKeeper or Consul. If you need high availability and performance at the potential cost of consistency, use Hazelcast, Cassandra, or Riak.
Source: I am the author of a Raft implementation.