Why is mongorestore painfully slow?

I took a database dump containing only one collection with two indexes. The collection holds about 6.5 million documents. When I tried to restore it, to my great surprise, the process was painfully slow. Some statistics:
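For reference, the dump was created roughly like this (the database and collection names come from the restore log below; the output path is hypothetical):

 mongodump --db temp_raw_tweet_db --collection tweets --out ./dump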

 Wed Aug 14 12:34:08.346 Progress: 333818/1378309050 0% (bytes)
 Wed Aug 14 12:34:11.077 Progress: 1530246/1378309050 0% (bytes)
 Wed Aug 14 12:34:26.177 Progress: 2714544/1378309050 0% (bytes)
 Wed Aug 14 12:34:30.145 Progress: 3355627/1378309050 0% (bytes)
 Wed Aug 14 12:34:34.504 Progress: 3895668/1378309050 0% (bytes)
 Wed Aug 14 12:34:53.246 Progress: 4334159/1378309050 0% (bytes)
 Wed Aug 14 12:34:56.318 Progress: 4963878/1378309050 0% (bytes)
 Wed Aug 14 12:34:59.545 Progress: 5617794/1378309050 0% (bytes)
 Wed Aug 14 12:35:08.042 Progress: 6923804/1378309050 0% (bytes)
 Wed Aug 14 12:35:16.424 Progress: 7342576/1378309050 0% (bytes)
 Wed Aug 14 12:35:23.168 Progress: 7987560/1378309050 0% (bytes)
 Wed Aug 14 12:35:29.703 Progress: 9295140/1378309050 0% (bytes)
 Wed Aug 14 12:35:38.582 Progress: 9943758/1378309050 0% (bytes)
 Wed Aug 14 12:35:43.574 Progress: 11128693/1378309050 0% (bytes)
 Wed Aug 14 12:35:46.008 Progress: 11982044/1378309050 0% (bytes)
 Wed Aug 14 12:35:50.134 Progress: 12421241/1378309050 0% (bytes)
 Wed Aug 14 12:35:54.548 Progress: 13166696/1378309050 0% (bytes)
 Wed Aug 14 12:35:58.152 Progress: 13837935/1378309050 1% (bytes)

As you can see from the data above, the total dump (in BSON) is approximately 1.3 GB, and mongorestore needs about 110 seconds to restore 1% of it, which is roughly 13 MB — a throughput of only about 125 KB/s.

If anyone has an explanation, let me know. I hope I am doing something wrong, because these numbers are far too slow by the standards of computing in this century.

EDIT


I ran the command again, this time with the following two options, hoping they would speed up the process:

 --noobjcheck --noIndexRestore 
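The full invocation was along these lines (the dump path is hypothetical):

 mongorestore --noobjcheck --noIndexRestore ./dump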

But, to my great surprise, the process became even slower! Here are some statistics:

 Wed Aug 14 13:13:53.750 going into namespace [temp_raw_tweet_db.tweets]
 Wed Aug 14 13:14:00.258 Progress: 871186/1378309050 0% (bytes)
 Wed Aug 14 13:14:04.424 Progress: 2070390/1378309050 0% (bytes)
 Wed Aug 14 13:14:07.482 Progress: 2921304/1378309050 0% (bytes)
 Wed Aug 14 13:14:11.895 Progress: 3647526/1378309050 0% (bytes)
 Wed Aug 14 13:14:57.028 Progress: 4984815/1378309050 0% (bytes)
 Wed Aug 14 13:15:01.015 Progress: 6202286/1378309050 0% (bytes)
 Wed Aug 14 13:15:05.051 Progress: 6797800/1378309050 0% (bytes)
 Wed Aug 14 13:15:08.402 Progress: 8133842/1378309050 0% (bytes)
 Wed Aug 14 13:15:12.712 Progress: 8872607/1378309050 0% (bytes)
 Wed Aug 14 13:15:15.259 Progress: 9964997/1378309050 0% (bytes)
 Wed Aug 14 13:15:19.266 Progress: 14684145/1378309050 1% (bytes)
 Wed Aug 14 13:15:22.364 Progress: 16154567/1378309050 1% (bytes)
 Wed Aug 14 13:15:29.627 Progress: 16754495/1378309050 1% (bytes)
 Wed Aug 14 13:15:35.225 Progress: 17726291/1378309050 1% (bytes)
 Wed Aug 14 13:15:39.447 Progress: 18333902/1378309050 1% (bytes)
 Wed Aug 14 13:15:43.717 Progress: 19055308/1378309050 1% (bytes)
 Wed Aug 14 13:15:46.481 Progress: 19305912/1378309050 1% (bytes)
 Wed Aug 14 13:15:49.902 Progress: 20038391/1378309050 1% (bytes)
 Wed Aug 14 13:15:53.868 Progress: 20389108/1378309050 1% (bytes)
 Wed Aug 14 13:15:58.578 Progress: 21127296/1378309050 1% (bytes)
 Wed Aug 14 13:16:03.706 Progress: 21837923/1378309050 1% (bytes)
 Wed Aug 14 13:16:56.512 Progress: 22092536/1378309050 1% (bytes)
 Wed Aug 14 13:16:59.035 Progress: 22583057/1378309050 1% (bytes)
 Wed Aug 14 13:17:02.313 Progress: 22817464/1378309050 1% (bytes)
 Wed Aug 14 13:17:05.044 Progress: 23178521/1378309050 1% (bytes)
 Wed Aug 14 13:17:26.023 Progress: 23434010/1378309050 1% (bytes)
 Wed Aug 14 13:17:39.161 Progress: 23670222/1378309050 1% (bytes)
 Wed Aug 14 13:17:42.846 Progress: 24049639/1378309050 1% (bytes)
 Wed Aug 14 13:17:59.125 Progress: 24284177/1378309050 1% (bytes)
 Wed Aug 14 13:18:02.722 Progress: 24515270/1378309050 1% (bytes)
 Wed Aug 14 13:18:06.827 Progress: 25018013/1378309050 1% (bytes)
 Wed Aug 14 13:18:09.234 Progress: 25253850/1378309050 1% (bytes)
 Wed Aug 14 13:18:14.282 Progress: 25617812/1378309050 1% (bytes)
 Wed Aug 14 13:18:46.296 Progress: 25983107/1378309050 1% (bytes)
 Wed Aug 14 13:18:51.303 Progress: 26604320/1378309050 1% (bytes)
 Wed Aug 14 13:18:55.500 Progress: 26971559/1378309050 1% (bytes)
 Wed Aug 14 13:19:00.656 Progress: 27444735/1378309050 1% (bytes)
 Wed Aug 14 13:19:04.100 Progress: 28064675/1378309050 2% (bytes)

It takes about 4 minutes to go from 1% to 2%. Clearly something is wrong here.

mongodb ubuntu mongorestore
2 answers

This is an old thread, but I recently had a similar problem, possibly for different reasons, and came across this question.

If you are running MongoDB on AWS, make sure you are using the right instance and volume types.

T-type (burstable) instances have CPU credits that run out when a large mongorestore starts. Your restore will begin quickly and then slow to a crawl... It never stops entirely, but it can take days to complete.
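One way to confirm this is to watch the instance's CPU credit balance in CloudWatch while the restore runs. A sketch using the AWS CLI, assuming it is installed and configured, with a hypothetical instance ID:

 # Average CPU credit balance over the last hour, in 5-minute buckets
 # (date -d is GNU date syntax, as on Ubuntu)
 aws cloudwatch get-metric-statistics \
   --namespace AWS/EC2 --metric-name CPUCreditBalance \
   --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
   --start-time "$(date -u -d '-1 hour' +%Y-%m-%dT%H:%M:%SZ)" \
   --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
   --period 300 --statistics Average

If the balance drops toward zero at the same time the restore slows down, you have found your culprit.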

If you are trying to save money by using EBS magnetic volumes, that is a bad idea. sc1 is especially bad, because it has a similar credit system for I/O throughput... mongorestore will burn through all your I/O burst credits within minutes, no matter how many you have, after which the volume drops to about 3 operations per second and the restore can take WEEKS to finish.
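The analogous check for a volume's burst credits uses the BurstBalance CloudWatch metric (reported for gp2/st1/sc1 volumes; the volume ID below is hypothetical):

 # Remaining burst-credit percentage for the EBS volume, last hour
 aws cloudwatch get-metric-statistics \
   --namespace AWS/EBS --metric-name BurstBalance \
   --dimensions Name=VolumeId,Value=vol-0123456789abcdef0 \
   --start-time "$(date -u -d '-1 hour' +%Y-%m-%dT%H:%M:%SZ)" \
   --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
   --period 300 --statistics Average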

I hope this helps someone.


Unfortunately, these numbers are not unusual; a mongorestore of a collection with 300 million documents can take days.

You basically have two options.

First, just accept the long restore and run it overnight:

 nohup mongorestore [args] &

Second, copy the data files directly into your database directory (by default /data/db) instead of using mongodump/mongorestore. If you can take your database offline for a period of time, this is the best option. Otherwise, you should use filesystem snapshots or similar. See the official MongoDB documentation on backups for more detail: http://docs.mongodb.org/manual/core/backups/
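A minimal sketch of the file-copy approach on Ubuntu, assuming a standalone mongod with its data files in /data/db (the service name and paths vary by installation):

 sudo service mongodb stop                  # take the database offline first
 sudo cp -R /backup/data/db/* /data/db/     # copy the raw data files from the backup
 sudo chown -R mongodb:mongodb /data/db     # files must be owned by the mongod user
 sudo service mongodb start

This skips the document-by-document rebuild entirely, which is why it is so much faster than mongorestore.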

