The replica set config is invalid or we are not a member of it; running in Kubernetes

I originally posted this problem here.

I run a sharded MongoDB cluster in a Kubernetes environment, with three shards and three instances per shard. For some reason, one of my MongoDB instances was moved to another machine.

The problem is that when a MongoDB instance is moved to another machine, its replica set config becomes invalid, resulting in the error below.

  > rs.status()
  {
      "state" : 10,
      "stateStr" : "REMOVED",
      "uptime" : 2110,
      "optime" : Timestamp(1448462710, 6),
      "optimeDate" : ISODate("2015-11-25T14:45:10Z"),
      "ok" : 0,
      "errmsg" : "Our replica set config is invalid or we are not a member of it",
      "code" : 93
  }

This is the config:

  > rs.config().members
  [
      {
          "_id" : 0,
          "host" : "mongodb-shard2-service:27038",
          "arbiterOnly" : false,
          "buildIndexes" : true,
          "hidden" : false,
          "priority" : 1,
          "tags" : { },
          "slaveDelay" : 0,
          "votes" : 1
      },
      {
          "_id" : 1,
          "host" : "shard2-slave2-service:27039",
          "arbiterOnly" : false,
          "buildIndexes" : true,
          "hidden" : false,
          "priority" : 1,
          "tags" : { },
          "slaveDelay" : 0,
          "votes" : 1
      },
      {
          "_id" : 2,
          "host" : "shard2-slave1-service:27033",
          "arbiterOnly" : false,
          "buildIndexes" : true,
          "hidden" : false,
          "priority" : 1,
          "tags" : { },
          "slaveDelay" : 0,
          "votes" : 1
      }
  ]

And a sample of db.serverStatus() from the migrated MongoDB instance:

  > db.serverStatus()
  {
      "host" : "mongodb-shard2-master-ofgrb",
      "version" : "3.0.7",
      "process" : "mongod",
      "pid" : NumberLong(8),

I hope this makes sense; I will be using this in production very soon. Thanks!

+7
mongodb google-container-engine kubernetes
3 answers

Finally, Kubernetes PetSet solves this problem. It works like magic.
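
For reference, a minimal sketch of the kind of manifest this means. PetSet was an alpha API and was later renamed StatefulSet, so the field names below follow the StatefulSet form; the names mongo-shard2 and rs2 and the 10Gi volume size are placeholders, not taken from the question:

  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: mongo-shard2            # placeholder name for one shard's replica set
  spec:
    serviceName: mongo-shard2     # headless Service that gives each pod a stable DNS name
    replicas: 3
    selector:
      matchLabels:
        app: mongo-shard2
    template:
      metadata:
        labels:
          app: mongo-shard2
      spec:
        containers:
        - name: mongod
          image: mongo
          args: ["mongod", "--replSet", "rs2"]
          ports:
          - containerPort: 27017
          volumeMounts:
          - name: mongo-data
            mountPath: /data/db
    volumeClaimTemplates:         # one PersistentVolumeClaim per pod, kept across rescheduling
    - metadata:
        name: mongo-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi

Each pod gets a stable identity (mongo-shard2-0, mongo-shard2-1, ...) and its own PersistentVolumeClaim, both of which survive rescheduling, which is exactly what the replica set config needs.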

-1

Sorry for the delay. Here is a post detailing how to bring up MongoDB replica sets on Kubernetes: https://medium.com/google-cloud/mongodb-replica-sets-with-kubernetes-d96606bd9474#.x197hr2ps

One Service and one ReplicationController with a single replica per instance is the current approach for a stateful application where each instance requires a stable, predictable identity. With this approach it is also easy to allocate a PersistentVolume for each pod.
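
A rough sketch of that pattern for a single replica set member; the names shard2-slave1-service, port 27033, and replica set rs2 are modeled on the question's config and are only illustrative:

  apiVersion: v1
  kind: Service
  metadata:
    name: shard2-slave1-service   # stable DNS name for this one member
  spec:
    selector:
      app: shard2-slave1
    ports:
    - port: 27033
      targetPort: 27017
  ---
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: shard2-slave1
  spec:
    replicas: 1                   # exactly one pod, so the Service always points at it
    selector:
      app: shard2-slave1
    template:
      metadata:
        labels:
          app: shard2-slave1
      spec:
        containers:
        - name: mongod
          image: mongo
          args: ["mongod", "--replSet", "rs2"]
          ports:
          - containerPort: 27017

The replica set members are then addressed by these Service names (as in the rs.config() output above), so a pod that gets rescheduled to another machine keeps the same stable address.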

Other solutions are possible, such as the sidecar approach shown in this example and the custom seed provider in the Cassandra example, but they are a little more complicated.
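
The sidecar variant usually takes the shape of a second container in the same pod that watches the Kubernetes API and adjusts the replica set membership through the local mongod. A sketch assuming the community cvallance/mongo-k8s-sidecar image (an assumption on my part; the example linked above may use something else):

  spec:
    containers:
    - name: mongod
      image: mongo
      args: ["mongod", "--replSet", "rs2"]
    # sidecar keeps rs.conf() membership in sync with the live pods
    - name: mongo-sidecar
      image: cvallance/mongo-k8s-sidecar
      env:
      - name: MONGO_SIDECAR_POD_LABELS   # label selector for this replica set's pods
        value: "app=mongo,shard=shard2"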

Kubernetes 1.2 will provide a means to set the hostname (as seen inside the container) for each pod. Kubernetes 1.3 will add a new controller (PetSet) for this kind of stateful instantiation.
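
That hostname mechanism started out as pod annotations and later became first-class pod fields. A sketch using the stable spec.hostname and spec.subdomain fields, assuming a headless Service named mongo exists in the same namespace:

  spec:
    hostname: mongo-node1   # hostname as seen inside the container
    subdomain: mongo        # must match a headless Service name
    # resolves as mongo-node1.mongo.<namespace>.svc.cluster.local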

+1

For those who want to use the old MongoDB configuration method (ReplicationControllers or Deployments instead of PetSet), the problem seems to be related to a delay in hostname assignment for Kubernetes Services. The workaround is to add a 10-second delay in the container entrypoint, before starting the real mongod:

  spec:
    containers:
    - name: mongo-node1
      image: mongo
      command: ["/bin/sh", "-c"]
      args: ["sleep 10 && mongod --replSet rs1"]
      ports:
      - containerPort: 27017
      volumeMounts:
      - name: mongo-persistent-storage1
        mountPath: /data/db

Related ticket: https://jira.mongodb.org/browse/SERVER-24778

+1
