The maximum capacity of a `List<T>` collection is different than expected on x86

The question is about the maximum number of elements a collection such as `List<T>` can hold. I searched for answers here, but I do not follow the reasoning.

Suppose we are working with a `List<int>`, where `sizeof(int)` = 4 bytes. Everyone seems sure that on x64 you can have at most 268,435,456 ints and on x86 at most 134,217,728 ints. References:

However, when I tested this myself, I found that this is not the case on x86. Can someone point out where I might be wrong?

```csharp
// Test engine set to `x86` for `default processor architecture`
[TestMethod]
public void TestMemory()
{
    var x = new List<int>();
    try
    {
        for (long y = 0; y < long.MaxValue; y++)
            x.Add(0);
    }
    catch (Exception)
    {
        System.Diagnostics.Debug.WriteLine("Actual capacity (int): " + x.Count);
        System.Diagnostics.Debug.WriteLine("Size of objects: "
            + System.Runtime.InteropServices.Marshal.SizeOf(x.First().GetType())); // This gives us "4"
    }
}
```

For x64: 268435456 (expected)

For x86: 67108864 (half the expected value)

Why do people say that a list containing 134,217,728 ints is exactly 512 MB of memory, when 134,217,728 * sizeof(int) * 8 = 4,294,967,296 = 4 GB, which exceeds the 2 GB per-process limit? Meanwhile 67,108,864 * sizeof(int) * 8 = 2,147,483,648 = 2 GB, which would make sense.

I am using .NET 4.5 on a 64-bit Windows 7 machine with 8 GB of RAM, running my tests as both x64 and x86.

EDIT: When I set the capacity directly with `new List<int>(134217728)`, I get a `System.OutOfMemoryException`.

EDIT 2: I found the error in my calculations: multiplying by 8 is wrong; I was computing megabits, and MB != Mbit. However, 67,108,864 ints is only 256 MB, which is still much less than expected.
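The corrected byte math can be sketched as a quick sanity check (this is an illustrative snippet, not part of the original test; the counts are taken from the question):

```csharp
using System;

class ByteMath
{
    static void Main()
    {
        // Array sizes are measured in bytes, not bits -- no factor of 8.
        long intsExpected = 134217728; // the commonly quoted x86 maximum
        long intsObserved = 67108864;  // what the unit test actually reached

        long expectedBytes = intsExpected * sizeof(int); // 536,870,912 bytes
        long observedBytes = intsObserved * sizeof(int); // 268,435,456 bytes

        Console.WriteLine(expectedBytes / (1024 * 1024)); // prints 512 (MB)
        Console.WriteLine(observedBytes / (1024 * 1024)); // prints 256 (MB)
    }
}
```

So the quoted 134,217,728-int maximum corresponds to a single 512 MB array, well under the 2 GB per-process limit, which is exactly why the observed 256 MB ceiling is surprising.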

3 answers

The backing store for the `List<T>` class is an array `T[]`. The hard requirement for an array is that the process must be able to allocate a contiguous chunk of memory to store it.

That is a problem in a 32-bit process. Virtual memory is used for both code and data, and you allocate from the holes left between them. Although a 32-bit process has 2 gigabytes of address space, you will never get a hole anywhere near that size. The largest hole in the address space you can get, right after the program starts, is about 500 or 600 megabytes, give or take; it depends on which DLLs are loaded into the process. Not just the CLR, the jitter, and the native images of the framework assemblies, but also things that have nothing to do with managed code, such as anti-malware products and the raft of "helpful" utilities that worm themselves into every process, like Dropbox and shell extensions. A DLL with an unfortunate base address can cut one nice big hole into two smaller ones.

Those holes also shrink as the program allocates and frees memory over time, a general problem called address space fragmentation. A long-running process can fail on a 90 MB allocation even when there is plenty of unused virtual memory.
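A hypothetical sketch of how you might measure this yourself: binary-search for the largest contiguous `byte[]` the current process can allocate. The names and the 1 MB resolution are my own choices, and the result varies per run, per machine, and per bitness, so there is no single "right" output:

```csharp
using System;

class ContiguousProbe
{
    // Try to allocate one contiguous block of the given size.
    static bool CanAllocate(long bytes)
    {
        try
        {
            var block = new byte[bytes]; // requires one contiguous hole
            GC.KeepAlive(block);
            return true;
        }
        catch (OutOfMemoryException)
        {
            return false;
        }
    }

    static void Main()
    {
        // A single array is capped near 2 GB, so int.MaxValue is a safe upper bound.
        long lo = 0, hi = int.MaxValue;
        while (hi - lo > 1024 * 1024) // stop at 1 MB resolution
        {
            long mid = (lo + hi) / 2;
            if (CanAllocate(mid)) lo = mid; else hi = mid;
        }
        Console.WriteLine("Largest contiguous block: ~" + (lo / (1024 * 1024)) + " MB");
    }
}
```

Run as x86 and as x64 the reported number should differ dramatically, for the fragmentation reasons described above.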

You can use the SysInternals VMMap utility to get more insight. A copy of Russinovich's book "Windows Internals" is usually needed to make sense of what you see.


It may not help much, but I was able to reproduce this 67,108,864 limit by creating a unit-test project with the code provided.

In console, WinForms, and WPF projects, I got the 134,217,728 limit.

In ASP.NET, I got a 33,554,432 limit.

So, since you mentioned `[TestMethod]` in one of your comments, the test host itself seems to be the problem.
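A minimal way to confirm this is to print the bitness of the process your code actually runs in. In a 32-bit test host, `Environment.Is64BitProcess` is `false` and `IntPtr.Size` is 4, even on a 64-bit OS:

```csharp
using System;

class BitnessCheck
{
    static void Main()
    {
        Console.WriteLine("64-bit OS:      " + Environment.Is64BitOperatingSystem);
        Console.WriteLine("64-bit process: " + Environment.Is64BitProcess);
        Console.WriteLine("IntPtr.Size:    " + IntPtr.Size); // 4 on x86, 8 on x64
    }
}
```

Dropping those lines into the failing `[TestMethod]` would show whether the test runner is hosting your code in a different bitness (and hence a different address-space budget) than a plain console project.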


While in theory you can have `int.MaxValue` elements, in practice you will run out of memory long before that.

Running as x86, the most address space your process can even have is 4 GB; more likely 2 GB or 3 GB is the maximum if you are on an x86 version of Windows.

The usable amount is likely much smaller still, since what matters is the largest contiguous free block you can allocate for the array.
