Maximum memory that can be allocated dynamically and at compile time in C++

I am trying to understand how much memory can be allocated. Initially, I thought that the maximum memory that can be allocated is equal to the physical memory (RAM). I checked my RAM on Ubuntu 12.04 by running the command shown below:

 ~$ free -b
              total       used       free     shared    buffers     cached
 Mem:    3170848768 2526740480  644108288          0  265547776 1360060416
 -/+ buffers/cache:  901132288 2269716480
 Swap:   2428497920          0 2428497920

As shown above, the total physical memory is roughly 3 GB (3170848768 bytes), of which only 644108288 bytes are free, so I assumed that was the most memory I could allocate. I tested this by writing a small program with just the two lines below:

 char *p1 = new char[644108290];
 delete[] p1;

Since the code ran fine, I concluded that it had successfully allocated the memory. I also tried to allocate more memory than the available free physical memory, and that did not produce an error either. Then, recalling the question

the maximum memory that malloc can allocate

I thought it must be using virtual memory, so I tested the code with the amount of free swap, and it worked too.

 char *p1 = new char[2428497920];
 delete[] p1;

Then I tried to allocate free swap plus free RAM bytes of memory:

 char *p1 = new char[3072606208];
 delete[] p1;

But this time the code failed, throwing a bad_alloc exception. Why did the code fail this time?

Now I allocated memory at compile time in a new program, as shown below:

 char p[3072606208];
 char p1[4072606208];
 char p2[5072606208];
 cout << "Size of array p  = " << sizeof p  << endl;
 cout << "Size of array p1 = " << sizeof p1 << endl;
 cout << "Size of array p2 = " << sizeof p2;

The output shows:

 Size of array p  = 3072606208
 Size of array p1 = 4072606208
 Size of array p2 = 777638912

Could you help me understand what is happening here? Why is it possible to allocate this much memory at compile time but not dynamically? When allocated at compile time, how could p and p1 reserve more memory than swap plus free RAM, and why did p2 fail? How exactly does this work? Is this some kind of undefined behavior or implementation-specific behavior? Thanks for your help. I am using Ubuntu 12.04 and gcc 4.6.3.

+8
c++ memory-management new-operator ubuntu
5 answers

Memory pages are not actually mapped into your program until you use them. All malloc does is reserve a range of virtual address space. No physical RAM is mapped to those virtual pages until you try to read or write them.

Even when you allocate global or stack ("automatic") memory, no physical pages are mapped until you touch them.

Finally, sizeof() is evaluated at compile time, when the compiler has no idea what the OS will do later. Therefore, it will just tell you the expected size of the object.
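As a minimal illustration of that last point (a sketch assuming a 64-bit build so the constant fits in size_t; the typedef name is just for the example), sizeof never creates an object, so even an absurdly large type poses no problem:

 #include <iostream>

 int main() {
     // sizeof inspects only the type; no array object is ever created,
     // so this compiles and runs regardless of how much RAM exists.
     typedef char Huge[3072606208UL];
     std::cout << sizeof(Huge) << '\n';  // prints 3072606208
 }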

You will find that things behave very differently if you memset the memory to 0 in each of your cases. Alternatively, you could try calloc, which returns zeroed memory.
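For example, a rough sketch of that experiment (the size is the free-RAM figure from the question; what you actually observe depends on the kernel's overcommit policy):

 #include <cstdlib>
 #include <cstring>
 #include <new>

 int main() {
     const std::size_t n = 644108290;

     char* a = new char[n];  // usually cheap: only address space is reserved
     std::memset(a, 0, n);   // touching every byte forces physical pages in
     delete[] a;

     // calloc hands back zeroed memory; a large request may be satisfied
     // with pre-zeroed pages from the OS, so even this is not necessarily
     // committed to physical RAM up front.
     char* b = static_cast<char*>(std::calloc(n, 1));
     if (b)
         std::free(b);
     return 0;
 }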

+5

Interesting... one point: when you write

 char p[1000]; 

you allocate (well, reserve) 1000 bytes on the stack.

When you write

 char* p = malloc(100); 

you allocate 100 bytes on the heap. A big difference. Now, I do not know why your stack allocations worked; perhaps the compiler read the value between the [] as an int, so it wrapped around and a much smaller block was reserved.

Most operating systems do not allocate physical memory right away in any case; they give you pages of virtual address space that remain untouched (and therefore unbacked) until you use them, at which point the CPU's memory-management unit and the OS step in to provide the memory you asked for. Try writing to the bytes you allocated and see what happens.
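To see that in action, here is a small sketch (assuming a typical 4 KiB page size) that touches one byte per page; resident memory, as reported by top, grows only as the loop proceeds:

 #include <cstddef>

 int main() {
     const std::size_t size = 1024UL * 1024UL * 1024UL;  // 1 GiB of address space
     char* p = new char[size];  // usually succeeds: nothing is physically mapped yet

     // Each write faults one physical page into existence.
     for (std::size_t i = 0; i < size; i += 4096)
         p[i] = 1;

     delete[] p;
     return 0;
 }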

Also, at least on Windows, when you allocate a block of memory you can only reserve the largest contiguous block available to the OS; as memory becomes fragmented by repeated allocations, the largest single block you can malloc shrinks. I do not know whether Linux has this problem.
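One way to probe this is a binary search using the non-throwing form of new (a sketch, not a definitive measurement; under Linux overcommit it reports the limit of what the kernel will promise rather than usable RAM):

 #include <cstddef>
 #include <iostream>
 #include <new>

 int main() {
     // Invariant: a block of size lo is grantable, a block of size hi is not.
     std::size_t lo = 0;
     std::size_t hi = ~static_cast<std::size_t>(0);
     while (hi - lo > 1) {
         std::size_t mid = lo + (hi - lo) / 2;
         char* p = new (std::nothrow) char[mid];  // NULL on failure, no exception
         if (p) {
             delete[] p;
             lo = mid;
         } else {
             hi = mid;
         }
     }
     std::cout << "largest single block ~ " << lo << " bytes\n";
     return 0;
 }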

+2

There is a huge difference between these two programs:

program1.cpp

 #include <iostream>

 int main() {
     char p1[3072606208];
     char p2[4072606208];
     char p3[5072606208];
     std::cout << "Size of array p1 = " << sizeof(p1) << std::endl;
     std::cout << "Size of array p2 = " << sizeof(p2) << std::endl;
     std::cout << "Size of array p3 = " << sizeof(p3) << std::endl;
 }

program2.cpp:

 #include <iostream>

 char p1[3072606208];
 char p2[4072606208];
 char p3[5072606208];

 int main() {
     std::cout << "Size of array p1 = " << sizeof(p1) << std::endl;
     std::cout << "Size of array p2 = " << sizeof(p2) << std::endl;
     std::cout << "Size of array p3 = " << sizeof(p3) << std::endl;
 }

The first allocates the memory on the stack; it is going to get a segmentation fault due to stack overflow. The second does almost nothing at all. That memory does not quite exist yet; it is in the form of data segments that are never touched. Let me modify the second program so that the data are touched:

 #include <iostream>

 char p1[3072606208];
 char p2[4072606208];
 char p3[5072606208];

 int main() {
     p1[3072606207] = 0;
     p2[3072606207] = 0;
     p3[3072606207] = 0;
     std::cout << "Size of array p1 = " << sizeof(p1) << std::endl;
     std::cout << "Size of array p2 = " << sizeof(p2) << std::endl;
     std::cout << "Size of array p3 = " << sizeof(p3) << std::endl;
 }

This does not allocate memory for p1, p2, or p3 on the heap or on the stack. That memory lives in the data segment; it is part of the application itself. There is one big problem with this: on my machine, this version does not even link.

+2

The first thing to note is that on modern computers, processes do not access RAM directly (at the application level). Rather, the OS gives each process a "virtual address space". Accesses to virtual memory are trapped by the OS, which maps real memory in as needed.

So when malloc or new says that enough memory has been found for you, it only means that enough memory has been found in the virtual address space. You can verify this by running the following program with the memset line present and with it commented out. (Caution: this program uses a busy loop.)

 #include <iostream>
 #include <new>
 #include <string.h>
 using namespace std;

 int main(int argc, char** argv)
 {
     size_t bytes = 0x7FFFFFFF;
     size_t len = sizeof(char) * bytes;
     cout << "len = " << len << endl;

     char* arr = new char[len];
     cout << "done new char[len]" << endl;

     memset(arr, 0, len);  // set all values in array to 0
     cout << "done setting values" << endl;

     while (1) {
         // stops program exiting immediately
         // press Ctrl-C to exit
     }

     return 0;
 }

When memset is part of the program, you will notice the memory used by your machine jump up massively; without it, you should barely notice any difference, if any. When memset is called, it accesses every element of the array, which forces the OS to make the space available in physical memory. Since the argument to new is a size_t (see here), the maximum argument you can call it with is 2^32 - 1, and even that is not guaranteed to succeed (it certainly does not succeed on my machine).
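A quick way to check that limit on your own machine (a sketch; on a 32-bit build the printed maximum is 2^32 - 1, on a 64-bit build it is far larger):

 #include <iostream>
 #include <limits>
 #include <new>

 int main() {
     std::cout << "max size_t: "
               << std::numeric_limits<std::size_t>::max() << '\n';

     // Asking new for that many bytes is legal but will almost certainly
     // fail; the nothrow form returns NULL instead of throwing bad_alloc.
     char* p = new (std::nothrow) char[std::numeric_limits<std::size_t>::max()];
     std::cout << (p ? "succeeded" : "failed") << '\n';
     delete[] p;  // deleting NULL is safe
     return 0;
 }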

As for your stack allocations: David Hammen's answer says it better than I could. I am surprised that you were able to compile those programs. Using the same setup as you (Ubuntu 12.04 and gcc 4.6), I get compilation errors such as:

 test.cpp: In function 'int main(int, char**)':
 test.cpp:14:6: error: size of variable 'arr' is too large

+1

Try the following code:

 #include <cstdio>
 #include <new>

 int main() {
     bool bExit = false;
     unsigned long long iAlloc = 0;

     do {
         char* test = NULL;
         try {
             test = new char[1]();  // allocate one byte at a time, never freed
             iAlloc++;
         } catch (std::bad_alloc&) {
             bExit = true;          // stop once allocation finally fails
         }
     } while (!bExit);

     char chBytes[130] = {0};
     sprintf(chBytes, "%llu", iAlloc);  // number of successful 1-byte allocations
     printf("%s\n", chBytes);
     return 0;
 }

Run it once with no other programs open, and run it again after loading several large files in an application that uses memory-mapped files.

It can help you understand.

0
