The garbage collector doesn't just eliminate unreferenced objects, it also compacts the heap. That is a very important optimization. It not only makes memory usage more efficient (no unused holes), it also makes the CPU cache much more efficient. The cache is a big deal on modern processors; they are easily an order of magnitude faster than the memory bus.
Compacting is done simply by copying bytes. That, however, takes time. The larger the object, the more likely it is that the cost of copying it outweighs the possible CPU cache usage improvements.
So they ran a bunch of benchmarks to determine the break-even point, and arrived at 85,000 bytes as the cutoff where copying no longer improves performance. With a special exception for arrays of double: they are considered "large" when the array has more than 1000 elements. That is another optimization for 32-bit code; the large object heap allocator has the special property that it allocates memory at addresses that are aligned to 8, unlike the regular generational allocator, which only aligns to 4. That alignment is a big deal for double; reading or writing a misaligned double is very expensive. Oddly, the sparse Microsoft documentation never mentions arrays of long; not sure what that is about.
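A hedged way to see the cutoff in practice: on current .NET runtimes, objects that land on the LOH are reported as generation 2 immediately after allocation, so GC.GetGeneration can be used as a rough probe (the exact thresholds are an implementation detail and can differ between runtime versions and bitness):

```csharp
using System;

class LohThresholdDemo
{
    static void Main()
    {
        // Small array: allocated in the regular generational heap, starts in gen 0.
        var small = new byte[1000];
        Console.WriteLine(GC.GetGeneration(small));   // typically 0

        // 85,000-byte array: at the cutoff, goes to the LOH, reported as gen 2.
        var large = new byte[85000];
        Console.WriteLine(GC.GetGeneration(large));   // typically 2

        // The double[] special case: more than 1000 elements is "large"
        // on a 32-bit runtime (the 8-byte alignment optimization). On a
        // 64-bit runtime the regular heap is already 8-aligned, so this
        // array is usually NOT on the LOH.
        var doubles = new double[1001];
        Console.WriteLine(GC.GetGeneration(doubles));
    }
}
```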
Fwiw, there is a lot of programmer angst about the large object heap not getting compacted. It is invariably triggered when they write programs that consume more than half of the entire available address space. Followed by using a tool like a memory profiler to find out why the program bombed even though there was still plenty of unused virtual memory available. Such a tool shows the holes in the LOH: unused chunks of memory where a large object used to live but got garbage-collected. Such is the inevitable price of the LOH; a hole can only be reused by an allocation for an object of equal or smaller size. The real problem is assuming that a program should be allowed to consume all virtual memory at any time.
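A minimal sketch of that fragmentation effect, under the assumption that all of these arrays land on the LOH; whether a later allocation actually slots into a given hole is a runtime implementation detail, but the equal-or-smaller rule described above is what makes the last allocation grow the heap:

```csharp
using System;
using System.Collections.Generic;

class LohFragmentationSketch
{
    const int Large = 85000; // at the LOH cutoff

    static void Main()
    {
        // Lay down a row of large objects on the LOH.
        var keep = new List<byte[]>();
        for (int i = 0; i < 10; i++)
            keep.Add(new byte[Large]);

        // Drop every other one, leaving 85,000-byte holes behind
        // (the LOH is not compacted, so the holes persist).
        for (int i = 0; i < keep.Count; i += 2)
            keep[i] = null;
        GC.Collect();

        // An object of equal or smaller size can reuse a hole...
        var fits = new byte[Large];

        // ...but one bigger than any single hole forces the LOH to grow,
        // even though the total free space would have been enough.
        var doesNotFit = new byte[2 * Large];

        Console.WriteLine("allocated {0} and {1} bytes",
            fits.Length, doesNotFit.Length);
    }
}
```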
A problem that otherwise disappears completely by simply running the code on a 64-bit operating system. A 64-bit process has 8 terabytes of virtual memory address space available, 3 orders of magnitude more than a 32-bit process. You just can't run out of holes.
In short, the LOH makes code run more efficiently, at the cost of using the available virtual memory address space less efficiently.
UPDATE: .NET 4.5.1 now supports compacting the LOH via the GCSettings.LargeObjectHeapCompactionMode property. Beware of the consequences.
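For reference, the opt-in looks like this. Per the documentation, the LOH is compacted on the next full blocking garbage collection, and the setting then reverts to Default on its own:

```csharp
using System;
using System.Runtime;

class LohCompactionDemo
{
    static void Main()
    {
        // Request a one-time compaction of the large object heap.
        GCSettings.LargeObjectHeapCompactionMode =
            GCLargeObjectHeapCompactionMode.CompactOnce;

        // The LOH is compacted during this full blocking collection.
        GC.Collect();

        // Afterwards the setting automatically reverts to Default.
        Console.WriteLine(GCSettings.LargeObjectHeapCompactionMode);
    }
}
```

The "beware of the consequences" warning stands: compacting the LOH copies large objects, exactly the expense the 85,000-byte cutoff was chosen to avoid, so this is a deliberate trade of pause time for address space.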
— Hans Passant, Jan 21