16-bit Windows did some truly amazing feats of memory management, but it was hampered by being designed for a processor without hardware memory management, namely the i8086 of the original PC (hardware folks may point out that the original PC actually used the i8088, which was identical except for a narrower data bus).
Thus, in 16-bit Windows, memory was in general shared between all processes.
One problem with that is that a shared address space does not go very far when many processes each want their own chunk of it.
Another is that it is all too easy for processes to trample on each other's toes.
Windows offered some partial solutions, such as the ability for a process to tell Windows when it was actually using a piece of memory (the process locks that memory block while using it), which meant that Windows could move memory contents around when necessary to free up space; but it was all voluntary and not very safe.
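As a rough illustration of that voluntary lock/unlock discipline, here is a minimal C sketch using the GlobalAlloc family, which modern Win32 still carries around for compatibility; in real 16-bit Windows the handle-based indirection was exactly what allowed the system to relocate a block while it was unlocked:

```c
#include <windows.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Ask for a moveable block: we get back a handle, not an address.
       While the block is unlocked, (16-bit) Windows was free to move it. */
    HGLOBAL handle = GlobalAlloc(GMEM_MOVEABLE, 1024);
    if (handle == NULL) { return 1; }

    /* Lock the block to get a usable pointer; the block stays put
       until it is unlocked again. */
    char* p = (char*) GlobalLock(handle);
    if (p != NULL)
    {
        strcpy(p, "hello, moveable memory");
        printf("%s\n", p);
        GlobalUnlock(handle);   /* from here on Windows may move it again */
    }

    GlobalFree(handle);
    return 0;
}
```

A program that cached the pointer across the unlock, or that never bothered to unlock at all, defeated the whole scheme, which is what "voluntary and not very safe" means in practice.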
So 32-bit Windows, Windows NT, used the memory management hardware of the newer processors to automate the best practices that well-behaved 16-bit Windows programs were supposed to follow. In effect, a process now deals only with logical addresses, which the processor automatically translates to physical addresses (which the process never sees). Well, on a 32-bit PC the translation is really a two-step affair, with an internal intermediate form of address, but that is a complication you do not need to know about.
One of the nice consequences of this hardware address translation is that a process can be completely isolated from knowing which physical addresses it uses. For example, it is easy to have two instances of the same program running: they both think they are dealing with the same addresses, but those are only logical addresses; in reality their logical addresses translate to different physical addresses, so they do not stomp on each other's memory areas.
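You can see a hint of this isolation yourself with a minimal C sketch (the variable name is just my own choice): start two instances, and they will typically print the same logical address for the global, yet each instance only ever sees its own value:

```c
#include <stdio.h>

int global_counter = 0;   /* one copy per process, not shared */

int main(void)
{
    ++global_counter;
    /* Two concurrently running instances will usually report the same
       logical address here, but the hardware translation maps it to
       different physical memory in each process, so the values stay
       independent. */
    printf("address = %p, value = %d\n",
           (void*) &global_counter, global_counter);
    getchar();   /* keep this instance alive while you start another */
    return 0;
}
```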
And one of the consequences that, with 20/20 hindsight, we can say is not so nice, is that the translation scheme enables virtual memory, e.g. simulating RAM with disk space. Windows can copy the contents of a memory area to disk and then use that physical memory for something else. When the process that uses that memory area later writes to or reads from it, Windows engages in some frantic activity to load the data from disk into some area of physical memory (possibly the same one, possibly a different one) and map the process's logical addresses there. The result is that under low-memory conditions the PC turns from an electronic beastie into a mechanical beastie, running thousands and millions of times slooooower. Ungood. But back when RAM sizes were small, people thought virtual memory was neat.
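To get a feel for that frantic activity, here is a minimal C sketch (assuming you link with psapi.lib, or -lpsapi with MinGW) that commits a largish region, touches every page, and reports how many page faults that cost; note that the counter lumps together cheap soft faults and the slow hard faults that actually hit the disk:

```c
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

static DWORD faults_so_far(void)
{
    PROCESS_MEMORY_COUNTERS pmc;
    GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof pmc);
    return pmc.PageFaultCount;
}

int main(void)
{
    SIZE_T const size = 256u * 1024 * 1024;   /* 256 MiB */

    /* Committing only promises the address space; physical pages are
       handed out lazily, on first touch (demand paging). */
    char* block = (char*) VirtualAlloc(
        NULL, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    if (block == NULL) { return 1; }

    DWORD const before = faults_so_far();

    /* Touch one byte per 4 KiB page, forcing a fault per page. */
    for (SIZE_T i = 0; i < size; i += 4096) { block[i] = 1; }

    printf("page faults incurred: %lu\n",
           (unsigned long) (faults_so_far() - before));

    VirtualFree(block, 0, MEM_RELEASE);
    return 0;
}
```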
The main problem with virtual memory in today's Windows is that in practice it is almost impossible to turn the damn thing off. Even when there is just one “main” program running and there is far more than enough physical RAM, Windows will eagerly page data out to disk, just to be prepared for the possibility that the process might need even more logical memory. However, if that were fixed, then something else would most likely crop up instead; that is the nature of the universe.
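About the closest a single program can get to opting out is pinning its own critical memory, which is a partial workaround at best. A minimal sketch, assuming the working-set quota can be raised far enough for VirtualLock to succeed:

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SIZE_T const size = 16u * 1024 * 1024;   /* 16 MiB */

    /* VirtualLock fails if the locked pages would not fit in the
       minimum working set, so grow the limits first. */
    if (!SetProcessWorkingSetSize(GetCurrentProcess(),
                                  size + 4u * 1024 * 1024,
                                  size + 8u * 1024 * 1024))
    { return 1; }

    char* block = (char*) VirtualAlloc(
        NULL, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    if (block == NULL) { return 1; }

    /* Pinned pages stay in physical RAM and are not written to the
       paging file for as long as the lock is held. */
    if (!VirtualLock(block, size)) { return 1; }

    puts("block is pinned in physical RAM");

    VirtualUnlock(block, size);
    VirtualFree(block, 0, MEM_RELEASE);
    return 0;
}
```

Even so, this only protects that one region; the rest of the system keeps paging as it pleases.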
Cheers and hth.,