Limit buffer cache used for mmap

I have a data structure that I would like to process page by page, on demand. mmap seems like an easy way to start some initial experiments. However, I want to limit the amount of buffer cache that mmap uses. The machine has enough memory to cache the entire data structure, but for testing reasons (and some production reasons) I do not want that to happen.

Is there a way to limit the amount of buffer cache used by mmap?

Alternatively, an mmap alternative that can achieve something similar while still limiting memory usage would also work.

+9
c++ memory mmap
4 answers

As far as I understand, this is not possible. Memory mapping is controlled by the operating system. The kernel makes decisions about how best to use the available memory, but it looks at the system as a whole. I am not aware of cache quotas being supported at the process level (at least I have not seen such APIs on Linux or BSD).

There is madvise to give hints to the kernel, but it does not support limiting the cache used by a single process. You can give it hints like MADV_DONTNEED, which will reduce the cache pressure your process puts on other applications, but I expect this to do more harm than good: it is likely to make caching less efficient, which will increase the total load on the I/O system.

I see only two alternatives: trying to solve the problem at the operating-system level, or trying to solve it at the application level.

At the OS level, I see two options:

  1. You could run a virtual machine, but most likely that is not what you want. I also expect it would not improve overall system performance. However, it would at least be a way to set an upper limit on memory consumption.
  2. Docker is another idea that comes to mind. It also works at the operating-system level, but as far as I know it does not support cache quotas, so I do not think it will work here.

That leaves only one option: looking at the application level. Instead of using memory-mapped files, you can use explicit file system operations. If you need full control over the buffer, I think this is the only practical option. It is more work than memory mapping, and it is not guaranteed to perform better either.

If you want to stay with memory mapping, you can also map only parts of the file into memory and unmap the other parts when you exceed your memory quota. This has the same problem as explicit file I/O (more implementation work and non-trivial tuning to find a good caching strategy).

Having said that, you could question the requirement to limit cache usage at all. I expect the kernel to do a good job of allocating memory resources; at the very least, it will probably do better than the solutions I sketched above. (Explicit file I/O plus an internal cache can be fast, but it is not easy to implement and tune. Here is a comparison of the trade-offs: mmap() versus reading blocks.)

During testing, you can run the application with ionice -c 3 and nice -n 20 to reduce its impact on other running applications. There is also a tool called nocache. I have never used it, but reading its documentation, it seems relevant to your question.

+3

It may be possible using mmap() and Linux control groups (cgroups; see here or here for an overview). Once set up, you can place arbitrary limits on, among other things, the amount of physical memory used by a process. As an example, here we limit physical memory to 128 megabytes and memory plus swap to 256 megabytes:

 cgcreate -g memory:/limitMemory
 echo $(( 128 * 1024 * 1024 )) > /sys/fs/cgroup/memory/limitMemory/memory.limit_in_bytes
 echo $(( 256 * 1024 * 1024 )) > /sys/fs/cgroup/memory/limitMemory/memory.memsw.limit_in_bytes
+1

I would go the route of mapping only part of the file at a time, so that you can fully control how much memory is used.

+1

You can use an IPC shared memory segment; that way you are the one managing the memory segment yourself.

0
