Live bytes versus resident size

I am profiling my application and trying to understand how it will behave in a low-memory situation.

I know that I can simulate a low memory warning in the Simulator, but I don't want to see how the application behaves when a memory warning is triggered; I want to see how it behaves when memory is pushed to its lowest level.

To increase memory usage, I created a method that calls

char *a = malloc(1024 * 1024);
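
A sketch of the kind of method I mean (the name growMemoryUsage and the block count are placeholders, not my exact code):

#include <stdlib.h>

// Allocate 1 MB blocks and never free them, so memory usage keeps
// growing while the app is being profiled.
static void growMemoryUsage(int blocks)
{
    for (int i = 0; i < blocks; ++i) {
        char *a = malloc(1024 * 1024);
        (void)a; /* intentionally leaked for the test */
    }
}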

The result is quite interesting. I am probably reading it wrong, and that is where I need help.

In the Allocations instrument, at some point the application showed more than 1 GB in the "Live Bytes" column, yet it did not crash on the real device and kept working fine.

[live bytes screenshot]

Looking at Resident/Dirty Size in the VM Tracker instrument, I got a different result: about 134 MB resident and 78 MB dirty.

[resident size screenshot]
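
As a side note, the resident size can also be read at runtime through the Mach task_info() API, which should roughly correspond to what VM Tracker reports; this is just a sanity-check sketch (the helper name is mine):

#include <mach/mach.h>
#include <stdio.h>

static void printResidentSize(void)
{
    struct mach_task_basic_info info;
    mach_msg_type_number_t count = MACH_TASK_BASIC_INFO_COUNT;

    // Ask the kernel for basic accounting info about the current task.
    kern_return_t kr = task_info(mach_task_self(), MACH_TASK_BASIC_INFO,
                                 (task_info_t)&info, &count);
    if (kr == KERN_SUCCESS) {
        printf("resident size: %.1f MB\n",
               info.resident_size / (1024.0 * 1024.0));
    }
}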

What am I missing here? According to the docs:

Live Bytes: the number of bytes that have been allocated but not released.

Resident Size: the amount of actual memory used.

Why are the results different? Thanks

1 answer

It is a bit tricky. I ran a few tests. Some comments:

1 - You should cast the result of malloc:

char *a = (char*) malloc(1024*1024);

2 - If you allocate 1 MB at a time but never touch the memory, and you run the allocation 1000 times in a for loop, like this:

for (int i = 0; i < 1000; ++i)
{
    char *a = malloc(1024 * 1024);
}

Live Bytes shows about 1 GB, but the resident size stays small, because the memory is never actually used: malloc only reserves address space, and the physical pages are not committed until you write to them. That is why the app keeps running even though Live Bytes reports 1 GB.
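
To make the page-level behaviour concrete, here is a small sketch of my own (the helper name is just for illustration): even inside a single allocation, only the pages that are actually written to become resident.

#include <stdlib.h>
#include <unistd.h>

static void touchHalfOfBlock(void)
{
    size_t pageSize = (size_t)getpagesize();
    char *a = malloc(1024 * 1024);
    if (a == NULL) return;

    // Write one byte per page, but only in the first half of the block.
    // Live Bytes counts the full 1 MB, yet only about 512 KB of it can
    // become resident, because the second half is never written.
    for (size_t offset = 0; offset < (1024 * 1024) / 2; offset += pageSize) {
        a[offset] = 'b';
    }
}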

3 - Now do the same thing, but actually write to the memory:

for (int i = 0; i < 1000; ++i)
{
    char *a = (char *)malloc(1024 * 1024);
    memset(a, 'b', 1024 * 1024);
}

"b" 1 . . , mallocs.

Hope that helps.

