When and how does the managed .NET heap get swapped (paged) to disk?

My little stress test, which allocates arrays of random lengths (100..200 MB each) in a loop, shows different behavior on a 64-bit Win7 machine and on 32-bit XP (in a virtual machine). Both systems initially allocate as many arrays as fit into the LOH (large object heap). The LOH then keeps growing until the virtual address space is exhausted - expected behavior so far. But on further allocation requests the two systems behave differently:

While an OutOfMemoryException (OOM) is thrown on Win7, on XP the heap seems to keep growing and is even paged out to disk - at least no OOM is thrown. (I don't know whether this could be due to the fact that XP is running in VirtualBox.)
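
For reference, here is roughly what the stress loop looks like - an illustrative sketch, not my exact code (class and variable names are made up):

    using System;
    using System.Collections.Generic;

    class LohStressTest
    {
        static void Main()
        {
            var rnd = new Random();
            var blocks = new List<byte[]>();   // keep references so nothing gets collected
            try
            {
                while (true)
                {
                    // 100..200 MB per array -> every allocation lands on the LOH
                    int size = rnd.Next(100, 201) * 1024 * 1024;
                    blocks.Add(new byte[size]);
                    Console.WriteLine("allocated {0} arrays, ~{1} MB managed",
                        blocks.Count, GC.GetTotalMemory(false) / (1024 * 1024));
                }
            }
            catch (OutOfMemoryException)
            {
                Console.WriteLine("OOM after {0} arrays", blocks.Count);
            }
        }
    }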

Question: How does the runtime (or the OS?) decide, for a managed allocation request that is too large to be satisfied, whether an OOM is thrown or the large object heap is simply grown further - to the point where it even gets paged out to disk? And if it does get paged out, when does an OOM occur at all?

IMO this question is important for all production environments that potentially deal with larger datasets. Somehow it feels “safer” to know that the system will just slow down significantly in such situations (because of paging) rather than simply throwing an OOM. At least the behavior ought to be deterministic, right?
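
For what it's worth, the only .NET facility I know of that makes this somewhat explicit is System.Runtime.MemoryFailPoint, which probes whether a large allocation is likely to succeed before you attempt it. A rough sketch (class and method names are made up for illustration):

    using System;
    using System.Runtime;

    static class SafeAlloc
    {
        // Returns null instead of letting the allocation die with an OOM.
        public static byte[] TryAllocateLarge(int megabytes)
        {
            try
            {
                // Ask the CLR up front whether 'megabytes' MB is likely to be
                // available before committing to the real allocation.
                using (new MemoryFailPoint(megabytes))
                {
                    return new byte[megabytes * 1024 * 1024];
                }
            }
            catch (InsufficientMemoryException)
            {
                // The probe failed: the caller can back off, free caches, etc.
                return null;
            }
        }
    }

This does not answer how the OS decides between paging and failing, but it at least turns the failure into something you can handle deterministically.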

@Edit: the application is a 32-bit application, so it also runs as a 32-bit process on Win7.
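
(To verify under which mode the process actually runs, I check something along these lines - Environment.Is64BitProcess requires .NET 4, IntPtr.Size works everywhere:)

    using System;

    class BitnessCheck
    {
        static void Main()
        {
            // 4 in a 32-bit process (also under WOW64 on 64-bit Windows), 8 in a 64-bit process.
            Console.WriteLine("IntPtr.Size = {0}", IntPtr.Size);

            // Available since .NET 4.
            Console.WriteLine("Is64BitProcess = {0}, Is64BitOperatingSystem = {1}",
                Environment.Is64BitProcess, Environment.Is64BitOperatingSystem);
        }
    }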

It is Windows, not the CLR, that ultimately hands out the memory: the managed heap segments are obtained from the operating system with VirtualAlloc(), and when such a call fails the CLR turns that into the OOM.

What matters here is free virtual address space, not physical RAM or the page file; once no sufficiently large contiguous free region is left in the process' address space, you get the OOM.

I can't really explain the difference you see between XP and Win7 x64, though. You would not normally get an OOM on x64 at all if the program were built as AnyCPU, because a 64-bit process has a practically unlimited virtual address space. Your 32-bit program runs under WOW there; it can get close to 4 GB of address space if you set the LARGEADDRESSAWARE flag on the executable with Editbin.exe (editbin /LARGEADDRESSAWARE yourapp.exe).

Use SysInternals' VMMap to get a look at how the virtual address space of your process is laid out and where it gets fragmented.
