It will begin to move chunks to it, yes; in fact, the new shard will be the default target for every chunk migration for the foreseeable future (the primary selection is to move a chunk from the shard with the most chunks to the shard with the fewest). Each shard can take part in only one migration at a time, so with that many chunks to move it is going to take some time, especially if the other two shards are busy.
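If you want to watch the migrations make progress, one way is to count chunks per shard in the config database. This is only a sketch, assuming you are on a version with the aggregation framework (2.2+); "mydb.mycoll" is a placeholder for your own sharded collection, and sh.status() summarises the same information:

use config;
// Count how many chunks each shard currently owns for one namespace.
// "mydb.mycoll" is a placeholder - substitute your sharded collection.
db.chunks.aggregate([
    { $match: { ns: "mydb.mycoll" } },
    { $group: { _id: "$shard", chunks: { $sum: 1 } } },
    { $sort: { chunks: -1 } }
]);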
I have seen cases where people turned off the balancer and forgot about it. Given that your other 2 shards are reasonably well balanced, I don't think that is the case here, but just in case....
You can check the status of the balancer by connecting to a mongos with the mongo shell and then doing the following:
use config;
db.settings.find( { _id : "balancer" } )
Make sure the stopped value is not set to true.
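For illustration only (not output captured from your cluster), the settings document for a disabled balancer would look something like this; newer shells also expose sh.getBalancerState(), which returns false when balancing is disabled:

{ "_id" : "balancer", "stopped" : true }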
To find out what is holding the lock, and is therefore doing the balancing at that time:
use config;
db.locks.find({ _id : "balancer" });
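To make that output easier to read, you can project just the fields that matter. As a rough guide (assuming the usual config.locks schema), a state of 2 means the lock is currently held and 0 means it is free, while who and why identify the mongos holding it and the reason:

use config;
// Show only the fields relevant to diagnosing the balancer lock:
// state 2 = lock held, 0 = free; "who"/"why" identify the holder and reason.
db.locks.find({ _id : "balancer" }, { state : 1, who : 1, why : 1, when : 1 });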
Finally, to check what the balancer is doing, look at the mongos log on that machine. The balancer logs lines with the [Balancer] prefix. You can also look for migration messages in the logs of the primary mongod instances.
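If the log file is not easy to reach, a rough alternative (assuming your version supports the getLog command) is to pull the recent in-memory log lines from the mongos you are connected to and filter for the balancer prefix; this only covers recent activity, so reading the actual log file is still more complete:

// Sketch: fetch recent log lines held in memory by this mongos and keep
// only the balancer-related ones. getLog returns a limited buffer, so it
// is no substitute for the full log file.
var res = db.adminCommand({ getLog: "global" });
res.log.filter(function (line) { return line.indexOf("[Balancer]") !== -1; });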
EDIT: This was probably caused by SERVER-7003 - a bug found after the 2.2.0 release. If there are deleted documents in the range (chunk) being migrated off the source shard, it can sometimes cause this kind of paralysis, whereby chunk migrations are aborted and the target shard is always considered to be taking part in a migration when in fact it is not.
Since this was fixed in 2.2.1, upgrading is the recommended way to resolve the problem. It can also be resolved by restarts and/or when the bad state on the target shard clears up on its own, as was the case in the comments below.