My first concern would be to make sure that you are measuring the right thing in the first place. "Memory" can mean many different things. There is a big difference between running out of virtual memory space and running out of RAM. There is a big difference between a performance problem caused by thrashing the page file and a performance problem caused by putting too much pressure on the GC.
If you don't have a solid understanding of the relationship between RAM, virtual memory, working set, and the page file, start by reading up until you understand all of that. The way you formulated this question makes me suspect that you think virtual memory and random access memory are the same thing. They are, of course, not.
I suspect the arithmetic you are doing is:
- I have eight processes, each of which consumes 500 million bytes of virtual address space.
- I have 4 billion bytes of RAM
- Therefore I'm about to get an OutOfMemoryException
That syllogism is invalid; it is like this syllogism:
- I have eight quarts of ice cream
- I have room for nine quarts of ice cream in the freezer
- Therefore, if I get two more quarts of ice cream, something will melt.
when in fact you have a whole walk-in cold-storage warehouse next door. Remember, RAM is just a convenient, fast place to keep things near where you need them, like your freezer. If you have more stuff to store, who cares if you run out of room in the freezer? You can always walk next door and put the stuff you use less often into cold storage - the page file. It is less convenient, but nothing melts.
You get an "out of memory" exception when a process runs out of virtual address space, not when all the RAM in the system is consumed. When all the RAM in the system is consumed you do not get an error; you get crap performance, because the operating system spends all its time paging RAM out to disk.
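To make that concrete, here is a minimal C# sketch - my illustration, not part of the original answer - that simply allocates until the address space runs out. Run as a 32-bit process, it typically throws OutOfMemoryException after a gigabyte or two of allocations, no matter how much RAM the machine has:

```csharp
using System;
using System.Collections.Generic;

class AddressSpaceDemo
{
    static void Main()
    {
        // Keep references to every block so the GC cannot reclaim anything;
        // the address space has to run out eventually.
        var blocks = new List<byte[]>();
        long totalBytes = 0;

        try
        {
            while (true)
            {
                blocks.Add(new byte[64 * 1024 * 1024]); // 64 MB per block
                totalBytes += 64L * 1024 * 1024;
            }
        }
        catch (OutOfMemoryException)
        {
            blocks.Clear(); // release the references so reporting itself can allocate
            Console.WriteLine("OOM after roughly {0} MB in a {1}-bit process.",
                totalBytes / (1024 * 1024),
                Environment.Is64BitProcess ? 64 : 32);
        }
    }
}
```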
So, anyway, start by understanding what you are measuring and how memory works in Windows. What you should actually be looking for is this (a small sketch for reading these numbers off a process follows the list):
- Is any process in danger of using more than two billion bytes of virtual memory on a 32-bit system? A process only gets 2 GB of virtual memory (not RAM; remember, virtual memory has nothing to do with RAM - that is why it is called "virtual": it is not hardware) addressable by user code on win32; if you try to use more, you get an OOM.
- Is any process likely to reach a point where it tries to allocate a huge block of virtual memory and there is no contiguous free block of that size? For example, are you likely to allocate ten million bytes of data in a single array? Again, you get an OOM.
- Is the working set - that is, the pages of a process's virtual memory that must be in RAM for performance reasons - of all the processes combined larger than the amount of RAM in the machine? If so, you will soon be thrashing, but you will not get an OOM.
- Is your page file large enough to handle the pages of virtual memory that may need to be paged out to disk if RAM starts to run short?
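If you want a quick first look at those numbers for a particular process, the standard System.Diagnostics.Process counters are enough. This is just a rough sketch of mine, reading the counters for the current process:

```csharp
using System;
using System.Diagnostics;

class MemorySnapshot
{
    static void Main()
    {
        using (Process p = Process.GetCurrentProcess())
        {
            const long MB = 1024 * 1024;

            Console.WriteLine("64-bit process:        {0}", Environment.Is64BitProcess);
            Console.WriteLine("Virtual address space: {0} MB", p.VirtualMemorySize64 / MB);
            Console.WriteLine("Working set (in RAM):  {0} MB", p.WorkingSet64 / MB);
            Console.WriteLine("Private bytes:         {0} MB", p.PrivateMemorySize64 / MB);
            Console.WriteLine("Paged memory:          {0} MB", p.PagedMemorySize64 / MB);
        }
    }
}
```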
None of this so far has anything to do with .NET. Once you have actually determined that there is a real problem - there might not be one - then start investigating based on what the real problem is. Use a memory profiler to see what the memory allocator and garbage collector are actually doing. Look for huge blocks on the large object heap, or unexpectedly large graphs of live objects that cannot be collected, or whatever it turns out to be. But use good engineering principles: understand the system, use tools to investigate its actual empirical characteristics, experiment with changes, and carefully measure the results. Do not just start randomly slapping magic IDisposable interfaces on a few classes and hope that makes the problem - if there is one - go away.
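Once you do get to the .NET side of the investigation, the built-in GC counters give a cheap first read before you reach for a full memory profiler. Again, this is a sketch of mine, not something prescribed above:

```csharp
using System;

class GcQuickLook
{
    static void Main()
    {
        // Approximate number of bytes currently allocated on the managed heap,
        // without forcing a collection first.
        long managedBytes = GC.GetTotalMemory(forceFullCollection: false);

        Console.WriteLine("Managed heap (approx): {0} MB", managedBytes / (1024 * 1024));
        Console.WriteLine("Gen 0 collections:     {0}", GC.CollectionCount(0));
        Console.WriteLine("Gen 1 collections:     {0}", GC.CollectionCount(1));
        Console.WriteLine("Gen 2 collections:     {0}", GC.CollectionCount(2));
    }
}
```

A managed heap that only grows, or a gen 2 collection count that climbs rapidly, is a hint worth following up with a real profiler; these counters alone will not tell you why.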
Eric Lippert