MongoDB replica set with multiple primaries and pingMs = 0

I am trying to set up a replica set with two nodes, Node0 and Node1. From Node0, I initialized a replica set named "rs0" and added Node1 to it. The problem is that Node1 is added as a primary instead of a secondary, so I end up with a replica set that has two primary nodes.

This is the output of rs.status() run from Node0:

  "set" : "rs0", "date" : ISODate("2012-10-23T21:03:37Z"), "myState" : 1, "members" : [ { "_id" : 0, "name" : "Node0:27017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 61185, "optime" : Timestamp(1350967947000, 1), "optimeDate" : ISODate("2012-10-23T04:52:27Z"), "self" : true }, { "_id" : 1, "name" : "Node1:27017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 58270, "optime" : Timestamp(1350956423000, 1), "optimeDate" : ISODate("2012-10-23T01:40:23Z"), "lastHeartbeat" : ISODate("2012-10-23T21:03:37Z"), "pingMs" : 0 } ], 

If I run the same command from Node1, it shows only Node1 itself. Note that pingMs is 0. Attempting to add a third node or an arbiter gives similar results: each one is added as a primary, and pingMs is always 0.

+6
2 answers

You mentioned that you are running rs.initiate() on both servers. This needs to be done on only one of them.

I suggest you start from scratch by deleting the dbpath directory on each node (back up the data first if the database was not empty). Then start all the mongod processes, connect to one of them with the mongo shell, and run:

  • rs.initiate()
  • rs.add(<other node 1>)

The other node automatically receives the replica set configuration from the first one. Repeat the rs.add() command for each additional node you want to add.
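A minimal sketch of that sequence, assuming both mongod processes are started with --replSet rs0 and using the hostnames Node0:27017 and Node1:27017 from the question (the dbpath shown is an assumption):

  # start mongod on each machine (shown for Node0; do the same on Node1 with its own dbpath)
  mongod --replSet rs0 --dbpath /data/rs0 --port 27017

  # then, in the mongo shell connected to Node0 only:
  rs.initiate()
  rs.add("Node1:27017")
  rs.status()   // both members should now be listed, one PRIMARY and one SECONDARY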

+5

I ran into the same situation by mistakenly running rs.initiate() on both instances. I solved it by shutting down the second instance, deleting its data directory, and restarting that instance. After the restart it was correctly recognized as a member of the replica set, synced correctly, and, most importantly, there is only one primary.

This operation should not be dangerous because, as far as I know, the replica set replicates all the data across the nodes. Of course, you can simply move the data directory aside after shutting down the second node, to keep a backup in case something goes wrong.
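A rough sketch of those steps on the second node, assuming a dbpath of /data/rs0 and the default port (both are assumptions, not values from the question):

  # shut down the extra "primary"
  mongod --shutdown --dbpath /data/rs0

  # move the old data directory aside as a backup instead of deleting it outright
  mv /data/rs0 /data/rs0.bak
  mkdir /data/rs0

  # restart with the same replica set name; it will pick up the config from the other node and do an initial sync
  mongod --replSet rs0 --dbpath /data/rs0 --port 27017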

0