On Linux, when a process requests some (virtual) memory from the system, the request is simply recorded in a VMA (the virtual memory area descriptor of the process); no physical page is allocated for each virtual page at the time of the call. Later, when the process first touches such a page, a page fault occurs (#PF), and the #PF handler allocates a physical page and updates the process's page tables.
There are two cases: a read fault can be resolved with a reference to the zero page (a special, global, pre-zeroed page) that is write-protected; a write fault (whether on a zero-page mapping or on a mapped page with no physical backing yet) leads to the actual allocation of a private physical page.
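To make this concrete, here is a minimal user-space sketch (assuming Linux's shared zero page is used for anonymous read faults): it mmaps an anonymous region, reads every page, then writes every page, printing RSS from /proc/self/statm after each step. If zero-page reuse applies, RSS should barely grow after the reads and grow by roughly NPAGES after the writes.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>

    #define NPAGES 256
    #define PAGE   4096

    static long rss_pages(void) {
        long size = 0, rss = 0;
        FILE *f = fopen("/proc/self/statm", "r");
        if (!f || fscanf(f, "%ld %ld", &size, &rss) != 2)
            exit(1);
        fclose(f);
        return rss;  /* second field of statm: resident pages */
    }

    int main(void) {
        volatile char *p = mmap(NULL, NPAGES * PAGE, PROT_READ | PROT_WRITE,
                                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
            return 1;
        printf("after mmap:   rss = %ld pages\n", rss_pages());

        volatile char sink = 0;
        for (int i = 0; i < NPAGES; i++)
            sink += p[i * PAGE];            /* read faults: zero-page mappings */
        (void)sink;
        printf("after reads:  rss = %ld pages\n", rss_pages());

        for (int i = 0; i < NPAGES; i++)
            p[i * PAGE] = 1;                /* write faults: private pages */
        printf("after writes: rss = %ld pages\n", rss_pages());
        return 0;
    }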
For mmap (and brk/sbrk, which is internally also an mmap), this works page by page; every mmap'd region is registered as a whole in a VMA (it has a start and an end address). The stack, however, is handled differently, because it has only a fixed start address (the higher address on typical platforms; the stack grows toward lower addresses).
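One way to watch the stack VMA grow is to print its [stack] line from /proc/self/maps before and after touching memory well below the current stack top. A minimal sketch (exact addresses vary, and compilers with stack-probing options may touch intermediate pages themselves):

    #include <alloca.h>
    #include <stdio.h>
    #include <string.h>

    static void print_stack_vma(void) {
        char line[256];
        FILE *f = fopen("/proc/self/maps", "r");
        if (!f) return;
        while (fgets(line, sizeof line, f))
            if (strstr(line, "[stack]"))
                fputs(line, stdout);
        fclose(f);
    }

    int main(void) {
        print_stack_vma();                      /* [stack] VMA before growth */
        volatile char *p = alloca(512 * 1024);  /* move SP ~128 pages down */
        p[0] = 1;                               /* fault below the old vm_start */
        print_stack_vma();                      /* vm_start should have moved down */
        return 0;
    }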
The question arises:
When I access new, not-yet-allocated memory just below the stack, a #PF is taken and the stack grows. How is this handled if I touch not the page adjacent to the stack, but a page that is 10 or 100 pages below it?
E.g.:

    #include <alloca.h>

    int main(void) {
        int *a = alloca(100);
        int *b = alloca(50 * 4096); /* a ~50-page hole between a and c */
        int *c = alloca(100);
        (void)b;
        a[0] = 1;
        c[0] = 1;
        return 0;
    }
Will this program get 2 or 50 private physical pages allocated for its stack?
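A hedged way to check empirically: count minor page faults around the two stores with getrusage(); the ru_minflt delta approximates how many stack pages the kernel populates. (The measurement is noisy, since the helper calls themselves use stack below c, so treat the result as an estimate.)

    #include <alloca.h>
    #include <stdio.h>
    #include <sys/resource.h>

    static long minflt(void) {
        struct rusage ru;
        getrusage(RUSAGE_SELF, &ru);
        return ru.ru_minflt;  /* faults serviced without I/O */
    }

    int main(void) {
        int *a = alloca(100);
        int *b = alloca(50 * 4096); /* ~50-page hole, never touched directly */
        int *c = alloca(100);
        (void)b;

        long before = minflt();     /* this call's frame sits below c */
        a[0] = 1;
        c[0] = 1;
        long after = minflt();
        printf("minor faults taken by the two stores: %ld\n", after - before);
        return 0;
    }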
I think it might be beneficial to ask the kernel to allocate dozens of physical pages at once, rather than faulting them in one page at a time: 1 interrupt + 1 context switch + a simple, cache-hot loop over N page-allocation requests, compared with N interrupts + N context switches + N single-page allocations, where the mm code may be evicted from the I-cache between faults.
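As an aside, for mmap'd regions (though not for the automatically growing stack) Linux already offers batched population of this kind: MAP_POPULATE prefaults the whole mapping inside the mmap() call itself, and madvise(MADV_WILLNEED) hints prefaulting for an existing mapping. A sketch:

    #define _GNU_SOURCE
    #include <sys/mman.h>

    #define LEN (64 * 4096)

    int main(void) {
        /* One mmap() call prefaults all 64 pages (MAP_POPULATE). */
        char *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE, -1, 0);
        if (p == MAP_FAILED)
            return 1;

        /* For an existing mapping, madvise() can hint prefaulting; this is
           mainly useful for file-backed or swapped-out pages. */
        madvise(p, LEN, MADV_WILLNEED);

        p[0] = 1;  /* should not take a page fault: already populated */
        return 0;
    }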
memory-management linux linux-kernel page-fault
osgx