Solr always uses more than 90% of physical memory.

I have 300,000 documents stored in my Solr index, and the Solr server has 4 GB of RAM, yet Solr consumes more than 90% of physical memory. I therefore moved my data to a new server with 16 GB of RAM, and again Solr consumes more than 90% of the memory. I don't know how to solve this problem. I am using the standard MMapDirectory and Solr version 4.2.0. Can you explain the reason for this behaviour, or suggest a solution?

+8
solr
2 answers

MMapDirectory tries to make full use of available OS memory (the OS file cache) as far as possible; this is normal behavior. It will map as much of the index into memory as there is room for, which is actually a good thing: since the memory is available, it gets used, and if another application on the same machine needs more, the OS releases it. This is one of the reasons Solr/Lucene queries are fast: most requests are served from memory (depending on how much memory is available) rather than from disk.
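As a quick sanity check (assuming a Linux host; the command below is only an illustration), you can confirm that most of that "used" memory is reclaimable page cache rather than Solr's own heap:

    free -m    # the buffers/cached figures are OS page cache holding the mmap'd index files and can be reclaimed

If the cached figure accounts for most of the usage, nothing is wrong; the OS is simply caching the index files.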

JVM heap memory is a separate matter. It can be controlled, and only in-flight request/response objects and cache entries use it, so the JVM heap size can be tuned based on the number of concurrent requests and the number of cache entries.
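For example, the per-core caches that consume heap are sized in solrconfig.xml. A minimal sketch for Solr 4.x (the sizes below are illustrative, not recommendations) looks like this:

    <filterCache      class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="0"/>
    <queryResultCache class="solr.LRUCache"     size="512" initialSize="512" autowarmCount="0"/>
    <documentCache    class="solr.LRUCache"     size="512" initialSize="512" autowarmCount="0"/>

Smaller caches and fewer concurrent requests mean a smaller heap is enough; the rest of the RAM is best left to the OS file cache.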

+7

What -Xmx value do you use when starting the JVM? If you do not set one explicitly, the JVM picks a default based on the characteristics of the machine.
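If you are not sure what default was picked, you can ask the JVM directly (works on HotSpot JVMs; the command is just an illustration):

    java -XX:+PrintFlagsFinal -version | grep -i maxheapsize    # prints the effective MaxHeapSize in bytes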

Once you give Solr a maximum heap size, it will potentially use all of it if needed, and that is fine. If you want to limit the heap to 2 GB, pass -Xmx2000m (or -Xmx2g) when starting the JVM. Not sure how big your documents are, but 300k documents is considered a small index.
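For example, with the stock Solr 4.x example layout (start.jar in the example directory; adjust to however you actually launch Solr), capping the heap at 2 GB would look like:

    cd solr-4.2.0/example
    java -Xmx2000m -jar start.jar    # or add -Xms512m to also fix the initial heap size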

+2
