We connect to an external Hazelcast cluster (version 3.7.2) using the Java Hazelcast client, but we run into problems reconnecting after the cluster goes down and comes back.
We create our client using HazelcastClient.newHazelcastClient. Once we do this, we save a reference to the HazelcastInstance and use it to interact with the Hazelcast cluster (getMap, getSet, etc.). We also store the maps, sets, etc. that we get from the HazelcastInstance in potentially long-lived objects. Everything works fine on the happy path. However, if the cluster ever goes down and comes back, we get a HazelcastInstanceNotActiveException when we try to access those objects that were created before the cluster went down.
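Roughly what our setup looks like (a minimal sketch; the class name, map name, and field layout are placeholders, not our actual code):

```java
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

// Simplified sketch of the pattern described above.
public class CacheHolder {
    private final HazelcastInstance client = HazelcastClient.newHazelcastClient();

    // Proxy obtained once and kept in a long-lived field.
    private final IMap<String, String> cachedMap = client.getMap("example-map");

    public String lookup(String key) {
        // This call throws HazelcastInstanceNotActiveException once the
        // cluster has gone down, even after it comes back up.
        return cachedMap.get(key);
    }
}
```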
Is there a way for the client to reconnect automatically when the cluster comes back on the network, so that we can resume using the objects (maps, sets, etc.) we retrieved from Hazelcast before the cluster went down? Or do we need extra code to catch HazelcastInstanceNotActiveException, rebuild the HazelcastInstance, and re-fetch any objects we have stored in the client application? The latter seems quite invasive, and we definitely do not want to deal with it at every place where we hold one of these Hazelcast objects. An example of that workaround is sketched below.
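This is the kind of wrapper we would rather avoid having to write around every cached proxy (again, just an illustration with assumed names, not something we have in place):

```java
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.HazelcastInstanceNotActiveException;
import com.hazelcast.core.IMap;

// The invasive approach: wrap every access so the client and its proxies
// can be rebuilt after the cluster has been down.
public class ReconnectingMapAccess {
    private HazelcastInstance client = HazelcastClient.newHazelcastClient();
    private IMap<String, String> map = client.getMap("example-map");

    public synchronized String get(String key) {
        try {
            return map.get(key);
        } catch (HazelcastInstanceNotActiveException e) {
            // Rebuild the client, re-fetch the proxy, and retry once.
            client = HazelcastClient.newHazelcastClient();
            map = client.getMap("example-map");
            return map.get(key);
        }
    }
}
```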
Most of what I have read relates to the client network config parameters for connection timeout, connection attempt limit, and connection attempt period. We currently use the default values, but they make no difference when accessing an object we have already retrieved: any access to a pre-existing object immediately fails with HazelcastInstanceNotActiveException, even after the cluster is back up.
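For reference, these are the settings I mean (values here are examples only, not the defaults we currently run with):

```java
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.client.config.ClientNetworkConfig;
import com.hazelcast.core.HazelcastInstance;

// Reconnection-related client settings from the 3.x client configuration.
public class ClientSetup {
    public static HazelcastInstance createClient() {
        ClientConfig config = new ClientConfig();
        ClientNetworkConfig network = config.getNetworkConfig();
        network.addAddress("127.0.0.1:5701");      // cluster address is illustrative
        network.setConnectionTimeout(5000);        // per-connection timeout, ms
        network.setConnectionAttemptLimit(10);     // how many times to retry the cluster
        network.setConnectionAttemptPeriod(3000);  // wait between attempts, ms
        return HazelcastClient.newHazelcastClient(config);
    }
}
```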
This seems like a common problem that many people must have encountered. What is the best practice for solving it?
java hazelcast
nolt2232