After doing some stability testing with Docker (docker 1.5 and 1.6, no memory limits set) on CentOS 7 / RHEL 7 and watching systemd-cgtop statistics for the running containers, I noticed that memory usage appears to be very high. The particular application, when run outside a container, typically uses only 200-300 MB of memory. Over a 3-day period, systemd-cgtop reported that my container had used up to 13G of memory. I am not a Linux expert, but digging into this pointed me to the following articles:
https://unix.stackexchange.com/questions/34795/correctly-determining-memory-usage-in-linux
http://corlewsolutions.com/articles/article-6-understanding-the-free-command-in-ubuntu-and-linux
So my basic understanding is that the actual free memory on the system is shown by the "-/+ buffers/cache:" line inside "free -m", not by the top line. I also noticed that the top line inside "free -m" constantly grows in used memory and shows a steadily shrinking amount of free memory, just like what I see for my container through systemd-cgtop, whereas the "-/+ buffers/cache:" line shows a stable amount of used/free memory. On top of that, if I watch the actual process in top on the host, I see that the process itself only uses about 1% of memory (0.8% of 32G).
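For reference, this is roughly how I've been comparing the two views. The cgroup path below is a guess for the systemd cgroup driver on CentOS 7, so it may need adjusting for your setup; memory.stat is what splits the cgroup's "used" figure into page cache vs. rss:

    # host-wide view: the "-/+ buffers/cache:" line is the stable used/free figure
    free -m

    # per-container view: the short ID from "docker ps -q" is enough for the glob
    CID=$(docker ps -q | head -n 1)
    grep -E '^(cache|rss) ' /sys/fs/cgroup/memory/system.slice/docker-${CID}*.scope/memory.stat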
I am a little confused about what is going on here. If I set a memory limit of 500-1000M for the container (I think the effective limit will be twice that because of swap), will my process eventually be killed once the container hits the memory limit, even though the process itself is not using anywhere near that much memory? If anyone has any feedback on this, that would be great. Thanks!
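For context, this is roughly the kind of limit I mean (500m is just a placeholder value, <image> and <full-id> are placeholders too, the cgroup path is the same guess as above, and memory.memsw.limit_in_bytes is only present if swap accounting is enabled in the kernel):

    # start the container with a 500 MB memory limit
    docker run -d -m 500m <image>

    # what the kernel actually enforces for that container's cgroup
    cat /sys/fs/cgroup/memory/system.slice/docker-<full-id>.scope/memory.limit_in_bytes
    cat /sys/fs/cgroup/memory/system.slice/docker-<full-id>.scope/memory.memsw.limit_in_bytes   # memory + swap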