MongoDB: A new shard appears, but no chunks show up on it. Is this expected?

I have a Mongo cluster with 2 shards, RS1 and RS2. RS1 has about 600G (*), RS2 about 460G. A few minutes ago I added a new shard, RS3. When I connect to the mongos and check the status, this is what I see:

mongos> db.printShardingStatus()
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3 }
  shards:
      { "_id" : "RS1", "host" : "RS1/dbs1d1:27018" }
      { "_id" : "RS2", "host" : "RS2/dbs1d2:27018" }
      { "_id" : "RS3", "host" : "RS3/dbs3a:27018" }
  databases:
      { "_id" : "admin", "partitioned" : false, "primary" : "config" }
      { "_id" : "demo", "partitioned" : false, "primary" : "RS1" }
      { "_id" : "cm_prod", "partitioned" : true, "primary" : "RS1" }
          cm_prod.profile_daily_stats chunks:
              RS2   16
              RS1   16
          too many chunks to print, use verbose if you want to force print
          cm_prod.profile_raw_stats chunks:
              RS2   157
              RS1   157
          too many chunks to print, use verbose if you want to force print
          cm_prod.video_latest_stats chunks:
              RS1   152
              RS2   153
          too many chunks to print, use verbose if you want to force print
          cm_prod.video_raw_stats chunks:
              RS1   3257
              RS2   3257
          too many chunks to print, use verbose if you want to force print
      [ ...various unpartitioned DBs snipped... ]

So, the new RS3 shard appears in the list of shards, but not in the per-collection breakdown of how many chunks each shard has. I would expect it to show up there with a count of 0 for each of the sharded collections.

Is this expected to sort itself out if I just wait a while?
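
For reference, those per-collection chunk counts come straight out of the config database, so the new shard can also be checked directly. A minimal sanity check, assuming the shell is connected through a mongos (the collection name below is just one of the ones from the output above):

 use config
 // total chunks currently assigned to the new shard (expected to grow from 0)
 db.chunks.find({ shard : "RS3" }).count()
 // the same check scoped to a single sharded collection
 db.chunks.find({ ns : "cm_prod.video_raw_stats", shard : "RS3" }).count()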

+6
2 answers

It will begin to move chunks over, yes. In fact, RS3 will essentially be the default target for every chunk migration for the foreseeable future (the balancer basically picks a chunk to move from the shard with the most chunks to the shard with the fewest). Each shard can only take part in one migration at a time, so with that many chunks to move it is going to take a while, especially if the other two shards are busy.

I have seen cases where people turned the balancer off and forgot about it. Given that your other 2 shards are balanced pretty evenly, I don't think that is what happened here, but just in case....

You can check the state of the balancer by connecting to a mongos and running the following:

 use config; db.settings.find( { _id : "balancer" } ) 

Make sure the stopped value is not set to true.
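
If it does turn out to be stopped, the balancer can be switched back on from a mongos. A short sketch, with the caveat that the sh.setBalancerState() helper only exists in newer (2.2+) shells, while the direct update to config.settings is the older equivalent:

 // newer shells:
 sh.setBalancerState(true)
 sh.getBalancerState()    // should now report true

 // older shells: flip the flag in config.settings directly
 use config
 db.settings.update({ _id : "balancer" }, { $set : { stopped : false } }, true)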

To find out which process currently holds the balancer lock, and is therefore the one doing the balancing right now:

 use config; db.locks.find({ _id : "balancer" }); 
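
Roughly how to read what comes back (the exact field set varies a little between 2.x versions, so treat this as a guide rather than a spec):

 // state : 0 -> the lock is not held, no balancing round in progress
 // state : 2 -> the lock is held, a balancing round / migration is running
 //              (a state of 1 means the lock is in the process of being acquired)
 // who / process -> which mongos process currently owns the lock
 // when / why    -> when the lock was taken and for what
 db.locks.find({ _id : "balancer" }).pretty()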

Finally, to see what the balancer is actually doing, look at the log of that mongos. The balancer logs its activity on lines prefixed with [Balancer]. You can also look for migration messages in the logs of the primary mongod instances on each shard.
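
The config database also keeps a changelog of balancer and migration events, which can be easier to skim than the raw log files. A sketch; the what values matched below (moveChunk.start, moveChunk.commit, moveChunk.from, moveChunk.to) are the standard migration event names in 2.x:

 use config
 // most recent migration-related events, newest first
 db.changelog.find({ what : /moveChunk/ }).sort({ time : -1 }).limit(5).pretty()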

EDIT: This was probably caused by SERVER-7003, a bug present in releases before 2.2.1. If deletes are issued against documents in the range (chunk) currently being migrated off the source shard, it can sometimes cause this kind of stall, where chunk migrations keep getting aborted because the target shard believes it is still taking part in a migration when in fact it is not.

Since this was fixed in 2.2.1, upgrading is the recommended way to resolve the problem, although it can also clear up after restarts and/or once the bad state on the target shard resolves itself, as turned out to be the case in the comments below.

+3

Use db.printShardingStatus(true) instead of db.printShardingStatus(); it will print the full list of shards, chunks, and all the other details.
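
For what it's worth, newer shells also expose sh.status(true), which wraps db.printShardingStatus() and prints the same verbose output, and the per-shard chunk totals can be summed straight from the metadata. A sketch, assuming a 2.2+ deployment (the aggregation framework is not available before 2.2, and newer shells may want the pipeline wrapped in [ ]):

 sh.status(true)    // verbose sharding status via the shell helper

 use config
 // chunk totals per shard across all sharded collections
 db.chunks.aggregate({ $group : { _id : "$shard", chunks : { $sum : 1 } } })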

+1
