What is the difference between virtual memory and physical memory?

I am often confused by the concept of virtualization in operating systems. If RAM is the physical memory, why do we need virtual memory to execute a process?

Where is this virtual memory located when the process (program) is transferred from the external hard drive to main memory (physical memory) for execution?

Who is involved in managing virtual memory, and what is the size of the virtual memory?

Suppose the RAM size is 4 GB (i.e. 2^32 - 1 addresses); what is the size of the virtual memory?

+80
memory-management virtualization ram virtual-memory operating-system
Jan 15 '13 at 21:24
5 answers

Virtual memory is, among other things, an abstraction to give the programmer the illusion of having infinite memory available in their system.

Virtual memory mappings are made to correspond to actual physical addresses. The operating system creates and manages these mappings, using the page table, among other data structures, to maintain them. Virtual memory mappings are always found in the page table or some similar data structure (in the case of other virtual memory implementations, we perhaps shouldn't call it a "page table"). The page table itself is also in physical memory, often in kernel-reserved space that user programs cannot write to.
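To make the "page table" idea concrete, here is a minimal sketch in C. The field names and the single-level layout are illustrative assumptions, not any real OS's or CPU's format; it just records which physical frame currently backs a virtual page, plus a few bookkeeping bits.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* One entry of a toy, single-level page table. Real hardware packs these
       into a few bits of one word; the fields here are illustrative only. */
    typedef struct {
        uint32_t frame_number; /* which physical frame backs this virtual page       */
        bool     present;      /* is the page currently resident in RAM?             */
        bool     writable;     /* may user code write to it?                         */
        bool     user;         /* accessible from user mode, or kernel-only?         */
        bool     accessed;     /* touched recently? (input for replacement policies) */
        bool     dirty;        /* modified since it was loaded from disk?            */
    } pte_t;

    int main(void) {
        pte_t page_table[16] = { 0 };        /* one entry per virtual page */
        page_table[4] = (pte_t){ .frame_number = 0, .present = true, .writable = true };
        printf("virtual page 4 -> frame %u (present=%d)\n",
               page_table[4].frame_number, page_table[4].present);
        return 0;
    }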

Virtual memory is usually larger than physical memory; there would be little reason for virtual memory mappings if virtual memory and physical memory were the same size.

Only the needed part of a program is resident in memory at any time; this is a topic called paging. Virtual memory and paging are closely related, but they are not the same topic. There are other implementations of virtual memory, such as segmentation.

I could be wrong here, but I would bet that the things you are finding hard to wrap your head around are connected with specific implementations of virtual memory, most likely paging. There is more than one way to do paging: there are many implementations, and the one your textbook describes is most likely not the same as the one that appears in real operating systems such as Linux or Windows; there are probably subtle differences.

I could write a thousand paragraphs about paging, but I think that is better left to another question focused on that topic.

+69
Jan 15 '13 at 21:30

Programs run on an OS with a very simple premise: they require memory. The OS provides it in the form of RAM. The amount of memory required varies: some programs need huge amounts of memory, others need very little. Most (if not all) users run several applications at the same time, and given that memory is expensive (and the amount that fits in a device is finite), the amount of available memory is always limited. So, given that every program requires a certain amount of RAM, and that all of them may run at the same time, the OS has to take care of two things:

  1. That each program keeps running until the user terminates it, i.e. it is not killed automatically because the OS has run out of memory.
  2. The above, while maintaining respectable performance for the running programs.

Now the main question is how the memory is managed. What exactly determines where in memory the data belonging to a given program will live?

Possible Solution 1: Let individual programs explicitly specify the memory addresses they will use on the device. Suppose Photoshop declares that it will always use the addresses from 0 to 1023 (imagine memory as a linear array of bytes, so the first byte is at location 0 and the 1024th byte is at location 1023), i.e. it reserves 1 GB of memory for itself (treat these small numbers as stand-ins for ranges of that size). Similarly, VLC declares that it will occupy the range from 1244 to 1876, and so on.

Benefits:

  1. Each application is pre-assigned a memory slot, so when it is installed and executed, it simply saves its data in this memory area, and everything works fine.

Disadvantages:

  1. It does not scale. Theoretically, an application might require a huge amount of memory when it does something really heavy. So, to ensure that it never runs out of memory, the range allocated to it would always have to be at least as large as its worst-case requirement. But what if software whose theoretical maximum memory usage is 2 GB (and which therefore needs a 2 GB allocation from RAM) is installed on a machine with only 1 GB of memory? Should it simply abort at startup, saying the available RAM is less than 2 GB? Or should it keep going and, the moment the required memory exceeds 2 GB, abort with a message that there is not enough memory?

  2. It cannot prevent programs from clobbering each other's memory. There are millions of programs; even if each of them were allocated only 1 kB of memory, the total memory required would exceed 16 GB, which is more than most devices offer. How, then, can different programs be assigned memory slots that do not encroach on each other? Firstly, there is no centralized software marketplace that could enforce that every newly released program reserve its own range from some not-yet-occupied area; and secondly, even if there were, it could not be done, because the number of possible programs is practically endless (so endless memory would be needed to accommodate all of them), and the total RAM available on any device is not enough to hold even a fraction of what would be required, which makes one program invading the memory limits of another inevitable. So what happens when Photoshop is assigned the region from 1 to 1023 and VLC the region from 1000 to 1676? What if Photoshop stores some data at location 1008, then VLC overwrites it with its own data, and later Photoshop reads it back, thinking it is the same data it stored there before? As you can imagine, bad things happen (a tiny sketch of this clash appears right after this list).
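As a toy illustration of that clash (the array, the exact ranges and the value 42 are made up for this sketch; real programs would be separate processes, not two lines in one file), physical memory is modelled here as a flat byte array with no OS in between, and the two hypothetical programs simply overwrite each other:

    #include <stdio.h>

    /* Toy model: physical memory as a flat array of bytes, no OS in between.
       Two programs were (hypothetically) assigned overlapping ranges:
       "photoshop" uses 1..1023, "vlc" uses 1000..1676. */
    static unsigned char ram[2048];

    int main(void) {
        ram[1008] = 42;    /* photoshop stores a value at address 1008 */
        ram[1008] = 99;    /* vlc, unaware of the overlap, overwrites the same cell */
        printf("photoshop reads back %d, expected 42\n", ram[1008]);  /* prints 99 */
        return 0;
    }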

So clearly, as you see, this idea is rather naive.

Possible Solution 2: Let's try another scheme, in which the OS does most of the memory management. A program, when it needs some memory, simply asks the OS for it, and the OS accommodates it as best it can. Say the OS guarantees that whenever a new process requests memory, it allocates it starting from the lowest byte address still available (as mentioned earlier, RAM can be pictured as a linear array of bytes, so for 4 GB of RAM the byte addresses range from 0 to 2^32 - 1); and if it is an already running process requesting more memory, the allocation continues from the last memory area where that process currently ends. Since programs will emit addresses without regard to the actual physical address where the data ends up, the OS has to maintain, for each program, a mapping from the addresses the program emits to the actual physical addresses. (Note: this is one of the two reasons we call this concept Virtual Memory: programs do not care about the real memory address where their data is stored; they simply emit addresses on the fly, and the OS finds a suitable place for the data and finds it again later when required.)
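A minimal sketch of that bookkeeping, under the simplifying assumption that the OS tracks every single address a program uses (real systems map whole pages, not individual bytes, and the function names here are made up): the program speaks only in its own addresses, and a per-process table records where the OS actually put each value.

    #include <stdio.h>

    #define VSPACE 64              /* size of the toy per-process address space        */

    static unsigned char ram[16];  /* tiny "physical memory"                           */
    static int next_free = 0;      /* the OS hands out physical bytes from the bottom  */

    /* Per-process map: program-issued address -> physical address (-1 = unallocated). */
    static int vmap[VSPACE];

    static void vstore(int vaddr, unsigned char value) {
        if (vmap[vaddr] == -1)            /* first use: the OS picks a physical spot   */
            vmap[vaddr] = next_free++;
        ram[vmap[vaddr]] = value;
    }

    static unsigned char vload(int vaddr) {
        return ram[vmap[vaddr]];          /* translate, then access physical memory
                                             (assumes the address was stored first)    */
    }

    int main(void) {
        for (int v = 0; v < VSPACE; v++) vmap[v] = -1;

        vstore(45, 123);                  /* the program thinks "address 45 holds 123" */
        printf("virtual 45 -> physical %d, value %d\n", vmap[45], vload(45));
        return 0;
    }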

Say the device has just been turned on and the OS has just started; there is no other running process yet (ignoring the OS itself, which is also a process!), and you decide to launch VLC. VLC is therefore allocated a chunk of RAM starting from the lowest byte addresses. Good. Now, while the video plays, you launch a browser to view a web page. Then you start Notepad to jot down some text. And then Eclipse to do some coding. Pretty soon your 4 GB of memory is completely used up, and RAM looks like this:

[Image: 4 GB of RAM completely filled by VLC, the browser, Notepad and Eclipse]

Problem 1: You cannot start any other process, since all the RAM has been used up. Programs therefore have to be written with the maximum available memory in mind (in practice even less will be available, since other programs run in parallel!). In other words, you cannot run a memory-hungry application on your shabby 1 GB PC.

So now you decide that you no longer need to keep Eclipse and Chrome open, and you close them to free up some memory. The space those processes occupied in RAM is reclaimed by the OS, and it now looks like this:

[Image: RAM with two holes left by the closed Eclipse and Chrome]

Suppose these two free up 700 MB of space (400 + 300 MB). Now you need to run Opera, which needs 450 MB. Well, you have more than 450 MB free in total, but... it is not contiguous; it is split into separate chunks, none of which is large enough to hold 450 MB. So you hit upon a brilliant idea: move all the processes as far down as possible, leaving the 700 MB of free space in one block. This is called compaction. The catch is that all the processes there are running. Moving them means moving the addresses of everything they contain (remember, the OS maintains a mapping from the addresses a program emits to the actual memory addresses. Imagine the program emitted address 45 with data 123, the OS stored it at location 2012 and created a map entry relating 45 to 2012. If the program is now moved in memory, what used to be at location 2012 will no longer be at 2012 but at some new location, and the OS has to update the map so that 45 points to the new address and the program still gets the expected data (123) when it asks for memory cell 45. As far as the program is concerned, all it knows is that address 45 holds the data 123!). Now imagine a process referencing a local variable i. By the time it is accessed again, its address has changed, and the process can no longer find it. The same applies to all functions, objects and variables; basically everything has an address, and moving a process means changing the addresses of all of them. Which brings us to:

Problem 2: You cannot move processes. The variables, functions and objects in a process have hard-coded addresses emitted by the compiler at compile time; the process depends on them staying in the same place throughout its lifetime, and changing them is expensive. As a result, terminated processes leave behind large "holes". This is called External Fragmentation.
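Continuing that idea in a toy sketch (the numbers 45 and 2012 come from the example in the text; the 1000-byte shift and the map-per-address model are simplifying assumptions of this sketch): moving a process during compaction means rewriting every entry of its mapping, which is exactly the expense being described.

    #include <stdio.h>

    /* Toy per-process map: program-issued address -> physical address.
       Following the example in the text: virtual 45 lives at physical 2012. */
    #define VSPACE 64
    static int vmap[VSPACE];

    /* Compaction: the whole block of a process is shifted down by `delta` bytes.
       Every mapping the process owns must be rewritten, or later lookups break. */
    static void relocate_process(int delta) {
        for (int v = 0; v < VSPACE; v++)
            if (vmap[v] != -1)
                vmap[v] -= delta;          /* cost grows with the number of mappings */
    }

    int main(void) {
        for (int v = 0; v < VSPACE; v++) vmap[v] = -1;
        vmap[45] = 2012;                   /* virtual 45 currently maps to physical 2012 */

        relocate_process(1000);            /* process shifted 1000 bytes lower           */
        printf("virtual 45 now maps to physical %d\n", vmap[45]);  /* 1012 */
        return 0;
    }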

Fine. Suppose that, in some miraculous way, you managed to move the processes together. Now there is 700 MB of free space in one block:

[Image: RAM after compaction, with the 700 MB of free space gathered into one block]

Opera fits snugly into the freed block. Now your RAM looks like this:

[Image: RAM after Opera (450 MB) is loaded into the freed block]

Good. Everything looks fine. However, there is not much space left, and now you need to run Chrome again, the famous memory hog! It needs a lot of memory to launch, and you have almost none left... except that... you now notice that some processes that originally occupied a lot of space no longer need that much. Maybe you paused the video in VLC; it still takes up some space, but not as much as it needed while playing a high-resolution video. Similarly for Notepad and Photos. Your RAM now looks like this:

[Image: RAM with small internal holes, as VLC, Notepad and Photos now need less space than they occupy]

Holes, once again! Back to square one! Except that earlier the holes arose from processes terminating, and now from processes needing less space than before. And you have the same problem: the holes combined offer more than enough space, but they are scattered about, and none of them is of much use on its own. So you have to move processes around again, an expensive operation, and a frequent one at that, since processes often shrink over their lifetime.

Problem 3: Processes may shrink over their lifetime, leaving behind unused space that can only be reclaimed by the costly operation of moving many processes. This is called Internal Fragmentation.

Well, now your OS performs the necessary actions, moves processes and starts Chrome, and after a while your RAM looks like this:

[Image: RAM after the OS moves the processes around and Chrome is loaded]

Phew. Now suppose you resume watching Avatar in VLC. Its memory requirement grows! But... there is no room for it to grow, since Notepad is squeezed right up against it. So, again, all the processes have to move down until VLC finds enough space!

Problem 4: If a process needs to grow, it is a very expensive operation.

Fine. Now suppose Photos is used to load some pictures from an external hard drive. Accessing the hard drive takes you from the realm of caches and RAM to the realm of disks, which is several orders of magnitude slower. Painfully, irrevocably, transcendentally slower. It is an I/O operation, which means it is not CPU-bound (rather the exact opposite), which means it does not need to occupy RAM right now. Yet it still stubbornly holds on to its RAM. If you want to launch Firefox in the meantime, you cannot, because not much memory is available, whereas if Photos were taken out of memory for the duration of its I/O, that would free up a lot of memory, followed by (expensive) compaction, followed by Firefox squeezing in.

Problem 5: Tasks waiting on I/O keep occupying RAM, leading to under-utilization of RAM that CPU-bound tasks could be using in the meantime.

So, as we can see, we have plenty of problems even with virtual memory as introduced in Solution 2 above.




There are two approaches to solving these problems: paging and segmentation. Let's discuss paging. In this approach, the virtual address space of a process is mapped onto physical memory in chunks called pages. A typical page size is 4 kB. The mapping is maintained by something called a page table; given a virtual address, all we have to do is find out which page the address belongs to, look up in the page table the location of that page in real physical memory (where it is known as a frame), and, since the offset of the virtual address within the page is the same within the frame, compute the actual address by adding that offset to the frame address returned by the page table. For example:

[Image: a process's 40-location virtual address space (left) mapped, page by page, onto 24 locations of physical memory (right)]

On the left is the virtual address space of the process. Say the virtual address space spans 40 units of memory. If the physical address space (on the right) also had 40 units of memory, every location on the left could be mapped to a location on the right, and we would all be very happy. But, as luck would have it, physical memory not only has fewer available units (24 here), it also has to be shared among several processes! Fine, let's see how we handle that.

When the process starts, say a memory access is requested for location 35. The page size here is 8 (each page holds 8 locations, so the entire 40-location virtual address space spans 5 pages). So this location belongs to page no. 4 (35/8) and, within that page, has an offset of 3 (35%8). Thus this location can be identified by the tuple (pageIndex, offset) = (4,3). This is just the start, so no part of the process has been brought into physical memory yet, and the page table, which maintains the mapping of the pages on the left to the actual pages on the right (where they are called frames), is currently empty. So the OS relinquishes the CPU, lets a device driver access the disk and fetch page no. 4 for this process (basically the chunk of the program on disk whose addresses range from 32 to 39). When it arrives, the OS places the page somewhere in RAM, say the very first frame, and the page table for this process notes that page 4 is mapped to frame 0 in RAM. Now the data is finally in physical memory. The OS queries the page table again for the tuple (4,3), and this time the page table says that page 4 is already mapped to frame 0. So the OS simply goes to frame 0 in RAM, reads the data at offset 3 in that frame (take a minute to understand this: the entire page that was fetched from disk is moved into a frame, so whatever the offset of an individual memory location was within the page, it will be the same within the frame, since within the page/frame the memory cell keeps the same relative position!) and returns the data. Because the data was not found in memory on the first request and had to be fetched from disk, this is a miss.
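The arithmetic of this walkthrough can be written down directly. A small sketch, using the same assumptions as the example (a page size of 8, a 5-page virtual address space, and page 4 already loaded into frame 0; nothing here reflects a real machine): split the virtual address into (page, offset), look the page up in the page table, and re-attach the offset to the frame's base.

    #include <stdio.h>

    #define PAGE_SIZE 8                    /* 8 locations per page, as in the example  */
    #define NUM_PAGES 5                    /* 40-location virtual address space        */

    /* page_table[p] = frame that page p occupies in RAM, or -1 if not resident. */
    static int page_table[NUM_PAGES] = { -1, -1, -1, -1, -1 };

    /* Translate a virtual address; returns -1 on a miss (the OS must fetch the page). */
    static int translate(int vaddr) {
        int page   = vaddr / PAGE_SIZE;    /* which page the address belongs to        */
        int offset = vaddr % PAGE_SIZE;    /* position inside that page                */
        int frame  = page_table[page];
        if (frame == -1) return -1;        /* page not in RAM: page fault              */
        return frame * PAGE_SIZE + offset; /* same offset inside the frame             */
    }

    int main(void) {
        page_table[4] = 0;                 /* page 4 has been loaded into frame 0      */
        printf("virtual 35 -> physical %d\n", translate(35));  /* (4,3) -> 0*8+3 = 3   */
        printf("virtual 28 -> physical %d\n", translate(28));  /* page 3 absent: -1    */
        return 0;
    }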

Fine. Now suppose a memory access is made for location 28. This boils down to (3,4). The page table currently has only one entry, mapping page 4 to frame 0. So this is again a miss; the process relinquishes the CPU, the device driver fetches the page from disk, the process regains control of the CPU, and its page table is updated. Say page 3 is now mapped to frame 1 in RAM. So (3,4) becomes (1,4), and the data at that location in RAM is returned. Fine. Suppose the next memory access is for location 8, which translates to (1,0). Page 1 is not in memory yet, the same procedure is repeated, and that page is placed in frame 2 of RAM. Now the mapping of the process onto RAM looks like the picture above. At this point the RAM, which had only 24 units of memory available, is full. Suppose the next memory access request from this process is for address 30. It maps to (3,6), and the page table says that page 3 is in RAM, mapped to frame 1. Hurray! So the data is fetched from RAM location (1,6) and returned. This is a hit, since the requested data could be obtained directly from RAM, which makes it very fast. Similarly, the next few access requests, say for locations 11, 32, 26 and 27, are all hits, i.e. the data requested by the process is found directly in RAM without having to look anywhere else.

Now suppose a memory access request for location 3 arrives. It translates to (0,3), and the page table for this process, which currently has three entries, for pages 1, 3 and 4, says that this page is not in memory. As in the previous cases, it is fetched from disk; however, unlike before, RAM is now full! So what now? Here lies the beauty of virtual memory: a frame is evicted from RAM! (Various factors determine which frame should be evicted. It might be LRU, where the frame least recently accessed by the process is evicted; it might be first-come-first-evicted, where the frame that was allocated longest ago is removed; and so on.) So some frame is evicted. Say frame 1 (just picking it at random). But that frame is mapped to some page! (Currently our one and only process's page table maps page 3 to it.) So that process has to be told the tragic news: that one frame, which unfortunately belongs to you, has to be evicted from RAM to make room for another page. The process must update its page table with this information, i.e. remove the entry for that page-frame duo, so that the next time that page is requested, it correctly tells the process that the page is no longer in memory and has to be fetched from disk. Fine. So frame 1 is evicted, page 0 is brought in and placed there in RAM, the entry for page 3 is removed, and an entry mapping page 0 to that same frame 1 replaces it. So now our mapping looks like this (note the colour change in the second frame on the right side):

[Image: the updated mapping after page 3 is evicted from frame 1 and page 0 takes its place (note the second frame's colour change)]
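The whole walkthrough condenses into a short simulation. This is only a sketch of the idea, not a real OS: page size 8, three frames (24 locations of "RAM"), the access sequence from the text, and a FIFO eviction policy standing in for the answer's "randomly chosen" victim, so the evicted frame here differs from the one in the story.

    #include <stdio.h>

    #define PAGE_SIZE  8
    #define NUM_PAGES  5          /* 40-location virtual address space              */
    #define NUM_FRAMES 3          /* 24 locations of "RAM", as in the walkthrough   */

    static int page_table[NUM_PAGES];    /* page -> frame, or -1 if not resident    */
    static int frame_owner[NUM_FRAMES];  /* frame -> page currently held there      */
    static int frames_used = 0;          /* how many frames are occupied so far     */
    static int next_victim = 0;          /* FIFO pointer used when we must evict    */

    /* Access one virtual address, faulting the page in (and evicting if needed). */
    static void access_addr(int vaddr) {
        int page = vaddr / PAGE_SIZE, offset = vaddr % PAGE_SIZE;

        if (page_table[page] != -1) {                       /* hit */
            printf("addr %2d = (page %d, off %d): HIT  in frame %d\n",
                   vaddr, page, offset, page_table[page]);
            return;
        }

        int frame;
        if (frames_used < NUM_FRAMES) {
            frame = frames_used++;                          /* a free frame exists  */
        } else {                                            /* RAM full: evict (FIFO
                                                               here; the text picked
                                                               its victim at random) */
            frame = next_victim;
            next_victim = (next_victim + 1) % NUM_FRAMES;
            printf("            RAM full: evicting page %d from frame %d\n",
                   frame_owner[frame], frame);
            page_table[frame_owner[frame]] = -1;            /* old page not resident */
        }
        /* ...here the "device driver" would read the page in from disk... */
        page_table[page]   = frame;
        frame_owner[frame] = page;
        printf("addr %2d = (page %d, off %d): MISS -> page %d loaded into frame %d\n",
               vaddr, page, offset, page, frame);
    }

    int main(void) {
        for (int p = 0; p < NUM_PAGES; p++) page_table[p] = -1;

        int trace[] = { 35, 28, 8, 30, 11, 32, 26, 27, 3 };  /* sequence from the text */
        for (unsigned i = 0; i < sizeof trace / sizeof trace[0]; i++)
            access_addr(trace[i]);
        return 0;
    }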

Saw what just happened? A page had to be thrown out of RAM to make room for the page the process needed, the new page was read in from disk, and the page table was updated, and the process never knew any of this was going on; all it did was ask for an address. But wait: if a miss means a trip all the way to the disk, isn't this horribly slow? Fortunately no, thanks to locality of reference: a process tends to keep accessing memory locations close to the ones it has just used, so once a page has been brought into a frame, the next many accesses are very likely to fall on the same page and be hits. Misses still happen, but they are rare compared to hits, and the occasional disk access is a price worth paying.
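Locality of reference is easy to observe on an ordinary machine. In the sketch below (the array size and the claim about which pass is slower are assumptions about a typical system, not guarantees), walking a large array row by row touches each page many times in succession, while walking it column by column jumps to a different page on nearly every access; the second pass is usually noticeably slower even though both do identical work.

    #include <stdio.h>
    #include <time.h>

    #define ROWS 4096
    #define COLS 4096

    static int grid[ROWS][COLS];   /* ~64 MB, spread across many pages */

    int main(void) {
        long sum = 0;
        clock_t t0 = clock();
        for (int r = 0; r < ROWS; r++)        /* row-major: consecutive addresses,   */
            for (int c = 0; c < COLS; c++)    /* many accesses land on the same page */
                sum += grid[r][c];
        clock_t t1 = clock();
        for (int c = 0; c < COLS; c++)        /* column-major: each step jumps       */
            for (int r = 0; r < ROWS; r++)    /* COLS * sizeof(int) bytes ahead,     */
                sum += grid[r][c];            /* touching a different page each time */
        clock_t t2 = clock();

        printf("row-major pass   : %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
        printf("column-major pass: %.3f s\n", (double)(t2 - t1) / CLOCKS_PER_SEC);
        return (int)(sum & 1);   /* use `sum` so the loops aren't optimized away */
    }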

Solved Problem 4: When a process needs to grow, the OS simply maps additional pages of it onto whatever free frames exist (evicting something if it has to); nothing gets moved and no other process is disturbed.




Solved Problem 1: A process no longer needs all of its memory in RAM at once; only the pages it is actually using need frames at any moment. So a program whose total memory requirement is larger than the installed RAM can still run, and this is the second reason the whole concept is called Virtual Memory: each process sees a large address space of its own that never exists in RAM in its entirety!

Wow. And what about processes that shrink during their lifetime? Pages they no longer touch simply stop being accessed; sooner or later the replacement policy (say LRU) evicts those pages from their frames (and the page tables are updated accordingly), without anyone having to shuffle memory around.

Solved Problem 3: Frames holding pages that a shrinking process no longer uses are reclaimed through the LRU (or similar) eviction of those pages, so Internal Fragmentation no longer forces a costly compaction.

And Problem 2, the nastiest one of all, is gone as well! Pages are mapped to frames completely ad hoc, onto whatever free frames happen to exist. The pages of one process do not need contiguous frames, so a hole between two processes is no longer wasted; it is just a set of free frames like any other. If a process needs, say, 10 pages' worth of memory, it needs any 10 free frames anywhere in RAM, not one contiguous block. And since nothing ever has to be contiguous, no process ever has to be moved (and have all of its addresses rewritten)!

Solved Problem 2: Since pages map to arbitrary frames, a process needs no contiguous region of physical memory; holes, and with them External Fragmentation, stop being a problem.

And finally, the I/O problem! When a process is merely waiting on a slow device, its pages (apart from any directly involved in the transfer) can be evicted from RAM and the freed frames handed to processes that can actually use the CPU; when the I/O completes, the pages are simply brought back in on demand.

Solved Problem 5: RAM is no longer held hostage by tasks that are blocked on I/O, so it does not sit under-used while CPU-bound tasks are starved of memory.

One more thing worth noting: the mapping itself has to live somewhere, and that somewhere is also memory. Each process has its own page table, and the OS keeps these page tables in physical memory (typically in space reserved for the kernel), so part of RAM is always spent on the bookkeeping that makes the rest of the scheme work.

Note: the page table holds no program data of its own, only the page-to-frame mappings, yet without it none of the above would function.

That said, paging (and virtual memory in general) is not a magic wand free of problems of its own; it trades the problems above for a different, hopefully cheaper, set of costs. In particular:

  1. Paging adds overhead: every memory access now involves the page table, and, far worse, a page that is not resident triggers a page fault and a page swap, i.e. a trip to the disk, which is orders of magnitude slower than RAM. Most of the time locality keeps faults rare. But what happens when the pages a process needs simply do not fit in the frames it can get, for example because too many processes are competing for too little RAM? Then almost every access faults, pages are constantly swapped in and out, and the system spends nearly all of its time moving pages instead of doing useful work. This pathological state is called thrashing.
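A deliberately unfriendly access pattern makes the risk visible. The sketch below (the region size, the page-size constant and the touch count are arbitrary assumptions; keep REGION_SIZE comfortably below your free RAM unless you really want to watch the machine start swapping) touches pages of a large allocation in random order; once the region no longer fits in the frames the OS can spare, nearly every touch becomes a page fault.

    #include <stdio.h>
    #include <stdlib.h>

    #define PAGE_SIZE   4096
    #define REGION_SIZE ((size_t)512 * 1024 * 1024)   /* 512 MB; adjust for your machine */
    #define TOUCHES     (8u * 1024 * 1024)

    int main(void) {
        unsigned char *region = malloc(REGION_SIZE);
        if (region == NULL) { perror("malloc"); return 1; }

        size_t pages = REGION_SIZE / PAGE_SIZE;
        srand(42);
        for (unsigned i = 0; i < TOUCHES; i++) {
            size_t p = (size_t)rand() % pages;   /* jump to a random page...    */
            region[p * PAGE_SIZE] += 1;          /* ...and dirty one byte on it */
        }
        /* If `pages` exceeds the frames the OS can spare, most of these touches
           page-fault, and the run slows from seconds to (effectively) forever. */
        printf("touched %u random pages\n", TOUCHES);
        free(region);
        return 0;
    }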

Now, coming to the questions asked by the OP:

Why do we need virtual memory to execute a process? Because physical memory is limited and has to be shared by many processes, while each program wants to behave as if it had plenty of memory of its own. Virtual memory gives every process that illusion: the process works with its own addresses, and the OS decides, page by page, what actually occupies RAM at any given moment and what stays on disk.

Where is this virtual memory located when the process (program) is transferred from the hard drive to main memory for execution? Virtual memory is not a physical thing sitting in one place; it is an abstraction maintained by the OS (the page tables plus, when needed, swap space on disk). At any moment some of a process's pages are in RAM frames and some are still on disk, and the mapping between the process's addresses and those locations is what we call the virtual memory.

Who is involved in virtual memory? The OS, with help from the hardware that translates addresses, manages it on behalf of every process. Take a simple variable, say int i, declared somewhere in your program. When the program is compiled, i is given an address, and that address is a virtual one. At run time, when the process accesses i, the OS and the hardware translate that virtual address to whatever physical location (if any!) currently holds i; if the page containing i is not in RAM at that moment, it is first fetched from disk, and only then is the value of i returned, with the process none the wiser.
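You can peek at the virtual side of this from any ordinary program. The sketch below just prints the address of a local int i; what it prints is a virtual address, and the program has no portable way of learning which physical frame (if any) currently backs it.

    #include <stdio.h>

    int main(void) {
        int i = 123;
        /* %p prints the *virtual* address of i. Which physical frame (if any)
           currently backs that address is known only to the OS and the MMU. */
        printf("i lives at virtual address %p and holds %d\n", (void *)&i, i);
        return 0;
    }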

Suppose the RAM size is 4 GB (i.e. 2^32 - 1 addresses); what is the size of the virtual memory? The size of the virtual memory is not determined by the size of the RAM; it is decided by the OS and the processor architecture. For example, on 32-bit Windows it is 16 TB, and on 64-bit Windows it is 256 TB. Note that this is far more than the physical memory installed, which is exactly the point.
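A quick way to see that the answer has nothing to do with installed RAM: the width of a pointer already fixes how many distinct virtual addresses a process can name. The sketch below prints that width (on a typical 64-bit build it reports 64, even on a machine with 4 GB of RAM; how much of that range the OS actually lets a process use is a separate, OS-specific limit).

    #include <stdio.h>
    #include <limits.h>

    int main(void) {
        /* The pointer width fixes how many distinct virtual addresses a process
           can even name: 2^32 on a 32-bit build, 2^64 on a 64-bit build,
           regardless of how much RAM is installed. */
        printf("pointers here are %zu bits wide\n", sizeof(void *) * CHAR_BIT);
        return 0;
    }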

+49
16 . '18 22:15

VIRT - Virtual Image (kb), a column shown by the top utility: the total amount of virtual memory used by the task. It includes all code, data and shared libraries, plus pages that have been swapped out and pages that have been mapped but not used.

SWAP - Swapped size (kb): the part of the task's virtual memory image that is not resident in RAM, i.e. memory that has been swapped out to disk, together with any non-resident pages that have never been loaded.

+16
12 . '14 17:54


From: https://en.wikibooks.org/wiki/Operating_System_Design/Physical_Memory

Physical memory refers to the actual RAM of the system, which usually takes the form of cards (DIMMs) attached to the motherboard. It is the only storage directly accessible to the CPU and holds the instructions and data of the programs being executed. Physical memory is addressed linearly; memory addresses increase in a linear fashion, and each byte is directly addressable.

Virtual memory, on the other hand, is an abstraction (provided by the OS together with the hardware) that gives each process the illusion of having its own address space (of size 2^(width of the address bus)), independent of the amount of RAM actually installed; its contents are kept in RAM and/or on disk and moved between the two as needed.

0
13 . '18 6:47


