Linux does not keep a memory-usage history by default, but you can collect one with a simple command-line tool such as sar (from the sysstat package).
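A minimal sketch of using sar for this, assuming a Debian/Ubuntu-style system (package names and the way collection is enabled vary by distro):

```
# Install sysstat and enable periodic collection
# (on Debian/Ubuntu you may also need ENABLED="true" in /etc/default/sysstat)
sudo apt-get install sysstat
sudo systemctl enable --now sysstat

# Report the memory-utilization samples collected today
sar -r

# Or sample live: memory statistics every 5 seconds, 10 times
sar -r 5 10
```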
As for your memory problem: if it was the OOM killer that messed up the machine, there is one good option to make sure it doesn't happen again (after reducing the heap size of the JVM, of course).
By default, the Linux kernel lets processes allocate more memory than is actually available (overcommit). In some cases this can lead to the OOM killer killing memory-hungry processes when no memory is left for kernel tasks. This behavior is controlled by the vm.overcommit_memory sysctl parameter.
So you can try setting vm.overcommit_memory = 2 in /etc/sysctl.conf and then running sysctl -p .
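For example (run as root; note that in mode 2 the commit limit is swap plus vm.overcommit_ratio percent of RAM, 50% by default):

```
# Check the current overcommit policy (0 = heuristic, the kernel default)
sysctl vm.overcommit_memory

# Persist strict accounting across reboots
echo "vm.overcommit_memory = 2" >> /etc/sysctl.conf

# Apply the settings from sysctl.conf immediately
sysctl -p
```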
This will forbid overcommit and make it much less likely that the OOM killer kills anything. You can also consider adding a small amount of swap space (if you don't already have any) and setting vm.swappiness to some really low value (e.g. 5; the default is 60), so that in a normal workflow your application will not be pushed into swap, but if you really run a bit short of memory it will start using swap temporarily, and you can even watch it happen with free .
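A sketch of adding a small swap file and lowering swappiness; the 1 GiB size and /swapfile path are just examples, and on filesystems other than ext4 you may need dd instead of fallocate:

```
# Create and enable a 1 GiB swap file
fallocate -l 1G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile

# Prefer reclaiming page cache over swapping out application pages
echo "vm.swappiness = 5" >> /etc/sysctl.conf
sysctl -p

# Watch swap usage
free -h
```

To make the swap file survive a reboot, also add an entry for it to /etc/fstab.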
WARNING: with strict overcommit, processes may start receiving "Cannot allocate memory" errors when your server runs short of memory. In this case:
- Try to limit memory usage by applications
- Move some of them to another machine.
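To see how close you are to the strict-overcommit limit, compare Committed_AS against CommitLimit in /proc/meminfo:

```
# With vm.overcommit_memory = 2:
# CommitLimit = swap + vm.overcommit_ratio% of RAM
grep -E 'CommitLimit|Committed_AS' /proc/meminfo
```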