MongoDB: out of memory

I have a question about MongoDB memory consumption. I have read the relevant sections of the manual and other questions on this topic, but I think this situation is different. Could you advise me?

This is an error from the DB log file:

    Fri Oct 26 20:34:00 [conn1] ERROR: mmap private failed with out of memory. (64 bit build)
    Fri Oct 26 20:34:00 [conn1] Assertion: 13636:file /docdata/mongodb/data/xxx_letters.5 open/create failed in createPrivateMap (look in log for more information)

These are the data files:

    total 4.0G
    drwxr-xr-x 2 mongodb mongodb 4.0K 2012-10-26 20:21 journal
    -rw------- 1 mongodb mongodb  64M 2012-10-25 19:34 xxx_letters.0
    -rw------- 1 mongodb mongodb 128M 2012-10-20 22:10 xxx_letters.1
    -rw------- 1 mongodb mongodb 256M 2012-10-24 09:10 xxx_letters.2
    -rw------- 1 mongodb mongodb 512M 2012-10-26 10:04 xxx_letters.3
    -rw------- 1 mongodb mongodb 1.0G 2012-10-26 19:56 xxx_letters.4
    -rw------- 1 mongodb mongodb 2.0G 2012-10-03 11:32 xxx_letters.5
    -rw------- 1 mongodb mongodb  16M 2012-10-26 19:56 xxx_letters.ns

This is the output of free -tm:

                 total       used       free     shared    buffers     cached
    Mem:          3836       3804         31          0         65       2722
    -/+ buffers/cache:        1016       2819
    Swap:         4094        513       3581
    Total:        7930       4317       3612

Is it really necessary to have enough system memory to hold the largest data files? And why do the files grow so much? (From the sequence above, I expect the next file to be 4 GB.) I will try to add RAM, but the data will keep growing beyond that eventually. Or is this perhaps not a memory problem at all?
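One way to compare what mongod has mapped with what it actually keeps resident is shown below. This is only a rough check, and it assumes a single default local mongod with the mongo shell on the PATH; as I understand it, the mapped/virtual size tracks the total size of the data files, while only the working set needs to fit in RAM.

    # Rough check, assuming a default local mongod (localhost:27017) and a single mongod process.
    # "mapped" is roughly the total size of the data files; "resident" is actual RAM in use (MB).
    mongo --eval "printjson(db.serverStatus().mem)"

    # Compare with the process's own view of its address space and resident set:
    grep -E 'VmSize|VmRSS' /proc/$(pidof mongod)/status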

I have a 64-bit Linux system and am using the 64-bit MongoDB 2.0.7-rc1. There is plenty of disk space, and the CPU load is 0.0. This is uname -a:

 Linux xxx 2.6.32.54-0.3-default #1 SMP 2012-01-27 17:38:56 +0100 x86_64 x86_64 x86_64 GNU/Linux 
1 answer

Running ulimit -a revealed the problem:

    core file size          (blocks, -c) 1
    data seg size           (kbytes, -d) unlimited
    scheduling priority             (-e) 0
    file size               (blocks, -f) unlimited
    pending signals                 (-i) 30619
    max locked memory       (kbytes, -l) 64
    max memory size         (kbytes, -m) 3338968
    open files                      (-n) 1024
    pipe size            (512 bytes, -p) 8
    POSIX message queues     (bytes, -q) 819200
    real-time priority              (-r) 0
    stack size              (kbytes, -s) 8192
    cpu time               (seconds, -t) unlimited
    max user processes              (-u) 30619
    virtual memory          (kbytes, -v) 6496960
    file locks                      (-x) unlimited
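The two numbers that seem to matter are max memory size (about 3.2 GB) and virtual memory (about 6.2 GB). With journaling enabled, mongod also maps a private view of each data file (that is the createPrivateMap in the error), so the address space it needs is roughly twice the size of the data files, and a 6.2 GB cap is not enough for ~4 GB of data. A minimal sketch of how to lift the limits follows; the service user name, init script and config path are assumptions and depend on the distro.

    # Sketch only -- the "mongodb" user and /etc/mongodb.conf path are assumptions; adjust for your setup.
    # In the shell (or init script) that launches mongod:
    ulimit -m unlimited     # max memory size
    ulimit -v unlimited     # virtual memory / address space
    sudo -u mongodb mongod --config /etc/mongodb.conf

    # Or persistently in /etc/security/limits.conf (takes effect for new sessions):
    #   mongodb  soft  as   unlimited
    #   mongodb  hard  as   unlimited
    #   mongodb  soft  rss  unlimited
    #   mongodb  hard  rss  unlimited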

It worked after setting the max memory size and the virtual memory to unlimited and restarting everything. By the way, the next data file was again 2 GB, not 4 GB.
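To confirm that the restarted mongod actually picked up the new limits, a quick check (assuming a single mongod process and pidof available) is:

    # Verify the limits that apply to the running mongod process.
    grep -iE 'address space|resident set' /proc/$(pidof mongod)/limits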

Sorry to bother you, but I was desperate. Perhaps this helps someone searching for a similar problem.
