I'm curious about MongoDB memory consumption. I have read the relevant sections of the manual and other questions on this topic, but I think this situation is different. May I ask you for your advice?
This is an error from the DB log file:
Fri Oct 26 20:34:00 [conn1] ERROR: mmap private failed with out of memory. (64 bit build)
Fri Oct 26 20:34:00 [conn1] Assertion: 13636:file /docdata/mongodb/data/xxx_letters.5 open/create failed in createPrivateMap (look in log for more information)
These are the data files:
total 4.0G
drwxr-xr-x 2 mongodb mongodb 4.0K 2012-10-26 20:21 journal
-rw------- 1 mongodb mongodb  64M 2012-10-25 19:34 xxx_letters.0
-rw------- 1 mongodb mongodb 128M 2012-10-20 22:10 xxx_letters.1
-rw------- 1 mongodb mongodb 256M 2012-10-24 09:10 xxx_letters.2
-rw------- 1 mongodb mongodb 512M 2012-10-26 10:04 xxx_letters.3
-rw------- 1 mongodb mongodb 1.0G 2012-10-26 19:56 xxx_letters.4
-rw------- 1 mongodb mongodb 2.0G 2012-10-03 11:32 xxx_letters.5
-rw------- 1 mongodb mongodb  16M 2012-10-26 19:56 xxx_letters.ns
This is the result of free -tm :
                   total       used       free     shared    buffers     cached
Mem:                3836       3804         31          0         65       2722
-/+ buffers/cache:  1016       2819
Swap:               4094        513       3581
Total:              7930       4317       3612
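In case it matters: as I understand it, with journaling enabled mongod maps each data file twice, as a shared view plus a private copy-on-write view, and the private view counts against the kernel's commit accounting. These are generic checks I plan to run (standard Linux interfaces, not output from this box):

ulimit -v                            # per-process virtual address space limit, in KB
cat /proc/sys/vm/overcommit_memory   # 0 = heuristic, 1 = always allow, 2 = strict accounting
cat /proc/sys/vm/overcommit_ratio    # only used when overcommit_memory = 2
grep Commit /proc/meminfo            # CommitLimit vs Committed_AS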
Is it really necessary to have enough system memory to hold the largest data file? And why do the files grow so much? (Following the doubling sequence above, I would expect the next file to be 4 GB.) I will try to add more RAM, but the data will only keep growing. Or is this perhaps not a memory problem at all?
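To compare what mongod has mapped against what is actually resident in RAM, I can query serverStatus from the shell; serverStatus is a standard command, though the exact fields reported vary by version:

mongo --eval "printjson(db.serverStatus().mem)"
# mem.mapped is the total size of the mapped data files in MB;
# with journaling on, mem.virtual should be roughly double that,
# and mem.resident is what actually sits in physical memory.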
I have a 64-bit Linux system and am using the 64-bit MongoDB 2.0.7-rc1. There is plenty of disk space, and the CPU load is 0.0. This is uname -a :
Linux xxx 2.6.32.54-0.3-default
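To rule out a per-process limit rather than a system-wide one, I will also look at the limits of the running mongod process (again a standard /proc interface; the pidof lookup is just my assumption about the process name):

cat /proc/$(pidof mongod)/limits | grep -i 'address\|memory'
# "Max address space" and "Max locked memory" are the relevant rows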