Explain the "faults" metric in mongostat

I have a question about the faults metric in mongostat. I am running MongoDB 2.0 on Ubuntu, with two 32 GB disks in a RAID-0 configuration.

I load-test Mongo by inserting 5 million user profiles. I run this process in a single thread and use batch inserts (batches of 1000 records).

When I set up Mongo for the first time and load the profiles into it, I see a lot of faults in mongostat (2, 5 and even 15) during the load.

Then I run the load again: first I drop the old collection and then start the load. This time faults is 0 almost all the time.

Why is this?

1 answer

MongoDB delegates memory management to the OS through its memory-mapped file storage engine. This mechanism allows a process to map files much larger than the amount of installed RAM. When the program accesses part of such a file, the OS checks whether that part (a page) is resident in RAM. If it is not, a page fault occurs and the page is loaded from disk. The faults/s metric in mongostat shows exactly this: how many page faults occur per second.
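The mechanism above can be sketched with Python's `mmap` module. This is only an OS-level illustration of memory-mapped file access, not MongoDB code: the first access to each mapped page may trigger a page fault, and the OS transparently loads that page from disk.

```python
import mmap
import os
import tempfile

# Create a small data file to stand in for a MongoDB data file.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"x" * mmap.PAGESIZE * 4)  # four OS pages of data

with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    # The first read of each page can trigger a page fault: the OS
    # loads that page from disk into RAM; later reads of the same
    # page are served from RAM with no fault.
    first_byte = mm[0]
    last_page_byte = mm[mmap.PAGESIZE * 3]
    mm.close()

os.remove(path)
print(first_byte, last_page_byte)  # both 120, i.e. ord('x')
```

This is exactly the kind of access pattern that shows up in the faults column of mongostat when the working set is not yet resident in RAM.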

Now, when you start Mongo for the first time and load data into it, the data files are not yet mapped into memory, so their pages have to be loaded from disk (page faults). When you drop a collection, it is deleted only logically; the underlying physical files are not removed and will be reused. Since their pages are already in RAM, no page faults occur on the second load.

If you drop the database instead, the data files are deleted along with it, so the next time you load data you should see page faults again.
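The difference between the two drops can be shown with a file-level toy. This is an analogy, not the MongoDB API: the file here is a stand-in for a preallocated data file, which a collection drop leaves on disk (and warm in RAM) but a database drop unlinks.

```python
import os
import tempfile

# Stand-in for a preallocated MongoDB data file (e.g. "test.0").
datafile = os.path.join(tempfile.mkdtemp(), "test.0")
with open(datafile, "wb") as f:
    f.write(b"\x00" * 1024)

# "Drop collection": space is freed only logically; the file remains
# on disk and its pages stay cached in RAM for reuse.
after_drop_collection = os.path.exists(datafile)

# "Drop database": the data files themselves are unlinked, so the next
# load must fault every page back in from disk.
os.remove(datafile)
after_drop_database = os.path.exists(datafile)

print(after_drop_collection, after_drop_database)  # True False
```

That is why the second load after a collection drop shows faults near 0, while a load after a database drop behaves like the very first one.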

