Mongo uses 26 GB of memory, is that bad?

Recently I was checking my MongoDB servers because I suspect they are overloaded. This is the output of serverStatus():

SECONDARY> db.serverStatus().mem
{
    "bits" : 64,
    "resident" : 26197,
    "virtual" : 161106,
    "supported" : true,
    "mapped" : 79994,
    "mappedWithJournal" : 159988
}

So, if I understood correctly, MongoDB is using 26 GB of memory. My server has 32 GB and runs only MongoDB; would it be a good idea to get another server and move my data?

+4
2 answers

Due to the way MongoDB's caching works, it will eventually use any available memory. Performance only degrades significantly once the resident set outgrows available physical memory, and even then it depends on your data access patterns. It is usually fine if not everything is in memory all the time, but you do want enough room for your working set. See the Working Set Size documentation and serverStatus().mem for general guidance and details.

+5

Everything Joshua said, with one caveat: you should always make sure you have swap configured. Otherwise you may run into OOM killer problems; see here:

http://www.mongodb.org/display/DOCS/The+Linux+Out+of+Memory+OOM+Killer
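As an illustration of that advice (the device name and swappiness value below are example assumptions, not settings taken from the linked page): enable a swap partition and keep swappiness low, so swap acts mainly as an OOM safety net rather than something mongod pages to regularly.

```
# /etc/fstab — enable a swap partition (device name is an example)
/dev/sda2  none  swap  sw  0  0

# /etc/sysctl.conf — prefer keeping mongod's pages in RAM; a low value
# like this is a common choice on database hosts, not a mandated one
vm.swappiness = 1
```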

Basically, a busy MongoDB instance with enough data will tend to use most of its memory and stay there. This is not a problem in itself; as a rule, it is exactly what should happen. You should look elsewhere to diagnose the source of any performance problems.

+3
