Is it ever OK to *not* use free() on allocated memory?

I'm studying computer engineering, and my degree includes some electronics courses. I heard from two of my professors (of those courses) that it is possible to avoid using the free() function (after malloc(), calloc(), etc.), because the memory you allocate will most likely not be reused to satisfy other allocations anyway. That is, for example, if you allocate 4 bytes and then release them, you will have 4 bytes of free space that will most likely never be allocated again: you will have a hole.

I think that's crazy: you cannot have a non-toy program that allocates memory on the heap without ever releasing it. But I don't have the knowledge to explain exactly why it is so important that every malloc() has a matching free().

So: are there ever circumstances under which it might be appropriate to use malloc() without using free()? And if not, how can I explain this to my professors?

+83
c++ c memory-management heap memory-leaks
Mar 18 '14 at 13:38
11 answers

Simple: just read the source of almost any semi-serious implementation of malloc()/free(). By this I mean the actual memory manager that handles those calls. It may live in the runtime library, a virtual machine, or the operating system. Of course, the code is not equally accessible in all cases.

Keeping memory from fragmenting, by coalescing adjacent holes into larger ones, is very common. More serious allocators use more serious techniques to ensure it.

So, suppose you perform three allocations and de-allocations and get the blocks laid out in memory in this order:

 +-+-+-+
 |A|B|C|
 +-+-+-+

The sizes of the individual allocations don't matter. Then you free the first and the last, A and C:

 +-+-+-+
 | |B| |
 +-+-+-+

When you finally free B, you (initially, at least in theory) end up with:

 +-+-+-+
 | | | |
 +-+-+-+

which can be coalesced into simply

 +-+-+-+
 |     |
 +-+-+-+

i.e. one single larger free block, with no fragmentation left.
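To make the picture above concrete, here is a small experiment (my own sketch, not part of the original answer; the behaviour is typical of coalescing allocators such as glibc's, but nothing in the C standard guarantees it): free three adjacent blocks and then ask for one block bigger than any of them.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* Three allocations, as in the A|B|C diagram. */
        char *a = malloc(100);
        char *b = malloc(100);
        char *c = malloc(100);
        printf("A=%p B=%p C=%p\n", (void *)a, (void *)b, (void *)c);

        free(a);     /* first hole  */
        free(c);     /* second hole */
        free(b);     /* now all three holes are adjacent and can be merged */

        /* A request larger than any single original block: a coalescing
           allocator can carve it out of the merged region, often starting
           right where A used to be. Typical, but not guaranteed. */
        char *big = malloc(300);
        printf("big=%p\n", (void *)big);

        free(big);
        return 0;
    }

If the allocator does coalesce, the last pointer frequently comes back equal to the first one, which is exactly the "one big free block" from the diagram.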

References, as requested:

  • Try reading the dlmalloc code. It is much more advanced, being a full production-quality implementation.
  • Coalescing is implemented even in embedded environments. See, for example, these notes on the heap4.c code in FreeRTOS.
+100
Mar 18 '14 at 13:41

Other answers already explain perfectly well that real implementations of malloc() and free() do indeed coalesce (defragment) holes into larger free chunks. But even if that were not the case, it would still be a bad idea to forgo free().

The thing is, your program just allocated (and wants to free) those 4 bytes of memory. If it is going to run for a longer period of time, it is quite likely that it will need to allocate just 4 bytes of memory again. So even if those 4 bytes never coalesce into a larger contiguous space, they can still be reused by the program itself.
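As a small illustration of that reuse (my own sketch; the matching addresses are typical behaviour of common allocators, not something the C standard promises): free a tiny block, ask for the same size again, and you will usually get the very same block back.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        int *p = malloc(sizeof *p);      /* "those 4 bytes" */
        printf("first : %p\n", (void *)p);
        free(p);

        int *q = malloc(sizeof *q);      /* a same-size request some time later */
        printf("second: %p\n", (void *)q);
        /* With typical allocators the freed block sits on a free list and is
           handed straight back, so the two addresses usually match. */
        free(q);
        return 0;
    }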

+42
Mar 18 '14 at 14:16

This is complete nonsense: there are many different malloc implementations, and some of them try to make the heap more efficient, like Doug Lea's or this one.

+10
Mar 18 '14 at 13:42

How much do your professors work with POSIX, I wonder? If they are used to writing lots of small, minimalistic shell applications, that is a scenario where I can imagine this approach not being too bad — freeing the whole heap in one go at the OS's leisure is faster than freeing a thousand variables one by one. If you expect your application to run for a second or two, you can easily get away with no de-allocation at all.

It is still bad practice, of course (performance improvements should always be based on profiling, not a vague gut feeling), and it is not something you should tell students without explaining the other constraints, but I can imagine a lot of tiny piping shell applications being written this way (if not using static allocation outright). If you are working on something that genuinely benefits from not freeing its variables, you are either working under extreme low-latency conditions (in which case, how can you afford dynamic allocation and C++? :D), or you are doing something very, very wrong (like allocating an array of integers by allocating a thousand ints one by one rather than one block of memory).
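As a hedged sketch of such a short-lived pipe tool (my own example, not taken from the answer): it heap-allocates a copy of every input line and never frees anything, relying purely on process exit to return the whole heap to the OS.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Tiny filter: prints each line of stdin prefixed with its length.
       It never calls free(); that is defensible only because the process
       lives for a moment and exit() hands the whole heap back anyway. */
    int main(void) {
        char buf[4096];
        while (fgets(buf, sizeof buf, stdin)) {
            size_t n = strlen(buf);
            char *copy = malloc(n + 1);     /* leaked on purpose */
            if (!copy)
                return 1;
            memcpy(copy, buf, n + 1);
            printf("%zu\t%s", n, copy);
        }
        return 0;
    }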

+9
Mar 19 '14 at 8:49

You mentioned that they are professors of electronics. They may be used to writing firmware and real-time software, where being able to say exactly how long something takes to execute is often required. In those cases, knowing that you have enough memory for all your allocations, and never freeing and re-allocating memory, gives a much more easily computed bound on execution time.
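A minimal sketch of that style (my own illustration; the buffer names and sizes are invented): every allocation happens once at start-up, and the steady-state loop contains no malloc()/free() at all, so its worst-case timing does not depend on the state of the heap.

    #include <stdlib.h>

    #define SAMPLES_PER_CYCLE 256   /* sizes are assumptions, purely illustrative */
    #define FILTER_TAPS        64

    static double *samples;
    static double *coeffs;

    /* All dynamic allocation happens here, before the real-time work begins. */
    static int init(void) {
        samples = malloc(SAMPLES_PER_CYCLE * sizeof *samples);
        coeffs  = malloc(FILTER_TAPS * sizeof *coeffs);
        return (samples && coeffs) ? 0 : -1;
    }

    /* The control loop touches only pre-allocated memory: it can never fail
       to allocate, and each iteration does a predictable amount of work. */
    static void control_loop(void) {
        for (;;) {
            /* read sensors into samples[], filter with coeffs[], drive outputs */
        }
    }

    int main(void) {
        if (init() != 0)
            return 1;               /* fail early, before real-time operation */
        control_loop();             /* never returns; buffers are never freed */
    }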

In some such schemes, hardware-based memory protection may also be used to make sure a routine stays within its allocated memory, or generates a trap in what should be very exceptional cases.

+5
Mar 18 '14 at 19:13

Taking this from a different angle than the previous commenters and answers: one possibility is that your professors have experience with systems where memory was allocated statically (that is, when the program was compiled).

Static allocation happens when you do things like:

 #define MAX_SIZE 32
 int array[MAX_SIZE];

In many real-time and embedded systems (the kind an EE or CE professor is likely to encounter), it is usually preferable to avoid dynamic memory allocation altogether. So uses of malloc, new, and their deleting counterparts are rare. On top of that, the memory in computers has exploded in recent years.

If you have 512 MB available and you statically allocate 1 MB, you have roughly 511 MB to squander before your software blows up (well, not exactly... but bear with me here). Assuming you have 511 MB to abuse, if you malloc 4 bytes every second without ever freeing them, you can run for over four years before you run out of memory. Given that most machines are rebooted far more often than that, this means your program will effectively never run out of memory!

In the above example, the leak rate is 4 bytes per second, or 240 bytes per minute. Now imagine lowering that bytes-per-minute ratio. The lower the ratio, the longer your program can run without problems. If your mallocs are infrequent, that is a real possibility.

Heck, if you know you are only ever going to call malloc once, and that allocation will never be needed again, then it is very similar to static allocation, except that you don't need to know the exact size of what you are allocating up front. For example: say we have 512 MB again. We need to malloc 32 arrays of integers. These are typical integers — 4 bytes each. We know the sizes of these arrays will never exceed 1024 integers. No other memory allocations occur in our program. Do we have enough memory? 32 * 1024 * 4 = 131,072 bytes — 128 KB — so yes, we have plenty of room. If we know we will never allocate any more memory, we can safely malloc those arrays without freeing them. However, this can also mean that you have to reboot the machine/device if things go wrong: if that memory is somehow not reclaimed when the process dies (stuck or zombie processes, for example), then starting and stopping the program 4,096 times would eventually consume all 512 MB.
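Here is that worked example as a sketch (using the counts assumed in the paragraph above): everything is allocated once up front, deliberately never freed, and the process relies on exit to give the 128 KB back.

    #include <stdio.h>
    #include <stdlib.h>

    #define NUM_ARRAYS     32
    #define MAX_ELEMENTS 1024     /* the bound we claim each array never exceeds */

    int main(void) {
        int *arrays[NUM_ARRAYS];

        /* One-time allocation: 32 * 1024 * sizeof(int) = 128 KB with 4-byte
           ints -- a drop in the bucket next to 512 MB. */
        for (int i = 0; i < NUM_ARRAYS; i++) {
            arrays[i] = malloc(MAX_ELEMENTS * sizeof *arrays[i]);
            if (!arrays[i]) {
                fprintf(stderr, "out of memory\n");
                return 1;
            }
        }

        /* ... use the arrays for the whole lifetime of the program ... */
        arrays[0][0] = 42;
        printf("%d\n", arrays[0][0]);

        /* No free(): the blocks live until exit and the OS reclaims them then.
           This only works because nothing else is ever allocated. */
        return 0;
    }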

Save yourself the pain and misery, and use this mantra as The One Truth: malloc should always be paired with a free. new should always have a delete.

+2
Mar 19 '14 at 3:12

I think the claim stated in the question is nonsense if taken literally from the programmer's point of view, but it holds (at least some) truth from the operating system's point of view.

malloc() will eventually end up calling either mmap() or sbrk(), which will fetch a page from the OS.

In any non-trivial program, the chances that this page will ever be handed back to the OS during the lifetime of the process are very small, even if you free() most of the allocated memory. So free()'d memory will, most of the time, only be available to the same process again, not to others.
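On Linux you can watch this with a small experiment (my own sketch; exact behaviour depends on the allocator and its tuning — glibc, for instance, only trims the heap when the free space sits at its top, and serves very large requests through mmap() instead of sbrk()):

    #define _DEFAULT_SOURCE          /* for sbrk() with glibc */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void) {
        void *start = sbrk(0);              /* program break before any allocation */

        enum { N = 1000 };
        char *blocks[N];
        for (int i = 0; i < N; i++)
            blocks[i] = malloc(1024);       /* ~1 MB in total, served via brk */

        void *grown = sbrk(0);

        /* Free everything except the last, topmost block.  The ~1 MB below it
           is free again, but it sits "trapped" underneath a live allocation,
           so the allocator cannot shrink the heap and return it to the OS. */
        for (int i = 0; i < N - 1; i++)
            free(blocks[i]);

        void *after_free = sbrk(0);

        printf("break at start     : %p\n", start);
        printf("break after mallocs: %p\n", grown);
        printf("break after frees  : %p\n", after_free);
        /* The last two values typically match: the freed memory is still
           mapped into this process and reusable only by it. */

        free(blocks[N - 1]);
        return 0;
    }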

+2
Mar 19 '14 at 9:16

Your professors aren't wrong, but they aren't right either (they are at least misleading or oversimplifying). Memory fragmentation causes problems for performance and for efficient use of memory, so sometimes you do have to consider it and take measures to avoid it. One classic trick: if you allocate lots of things that are all the same size, grab a pool of memory at startup that is some multiple of that size and manage its use entirely yourself, thereby guaranteeing no fragmentation at the OS level (and the holes in your internal memory pool are always exactly the size of the next object of that type that comes along).
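A minimal sketch of such a fixed-size pool (my own illustration, not a production allocator): one up-front malloc, a free list threaded through the unused slots, and constant-time take/give operations. Every hole is exactly one object in size, so the pool itself cannot fragment.

    #include <stdio.h>
    #include <stdlib.h>

    /* The object we allocate lots of; the type is an assumption for the example. */
    typedef struct { double x, y, z; } Point;

    typedef union Slot {
        union Slot *next;   /* used while the slot is free  */
        Point       obj;    /* used while the slot is live  */
    } Slot;

    static Slot *pool;        /* one big block grabbed at startup   */
    static Slot *free_list;   /* singly linked list of unused slots */

    static int pool_init(size_t count) {
        if (count == 0 || !(pool = malloc(count * sizeof *pool)))
            return -1;
        for (size_t i = 0; i + 1 < count; i++)
            pool[i].next = &pool[i + 1];
        pool[count - 1].next = NULL;
        free_list = pool;
        return 0;
    }

    static Point *pool_take(void) {
        if (!free_list)
            return NULL;                /* pool exhausted */
        Slot *s = free_list;
        free_list = s->next;
        return &s->obj;
    }

    static void pool_give(Point *p) {
        Slot *s = (Slot *)p;            /* p always points into the pool */
        s->next = free_list;
        free_list = s;
    }

    static void pool_destroy(void) {
        free(pool);                     /* the pool itself is still freed */
        pool = free_list = NULL;
    }

    int main(void) {
        if (pool_init(10000) != 0)
            return 1;
        Point *p = pool_take();
        p->x = 1.0; p->y = 2.0; p->z = 3.0;
        printf("%g %g %g\n", p->x, p->y, p->z);
        pool_give(p);
        pool_destroy();                 /* released before exit, as advised below */
        return 0;
    }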

There are entire third-party libraries that do nothing but handle that kind of thing for you, and sometimes they are the difference between acceptable performance and something that runs far too slowly. malloc() and free() take a non-trivial amount of time to execute, which you will start to notice if you call them a lot.

So by avoiding just naively using malloc() and free(), you can avoid both fragmentation and performance problems — but when it comes down to it, you should always make sure you free() everything you malloc() unless you have a very good reason to do otherwise. Even when using an internal memory pool, a good application will free() the pool memory before it exits. Yes, the OS will clean it up, but if the application's life cycle is later changed, it would be easy to forget that the pool is still hanging around...

Long-running applications, of course, need to be utterly scrupulous about cleaning up or recycling everything they have allocated, or they end up running out of memory.

+2
Mar 20 '14 at 11:20

Your professors are raising an important point. Unfortunately, the English usage is such that I'm not entirely sure what it is they said. Let me answer the question in terms of non-toy programs that have certain memory-usage characteristics, and that I have personally worked with.

Some programs behave nicely. They allocate memory in waves: lots of small or medium allocations followed by lots of frees, in repeating cycles. In these programs, typical memory allocators do rather well. They coalesce the freed blocks, and at the end of a wave most of the free memory is in large contiguous chunks. These programs are quite rare.

Most programs behave badly. They allocate and free memory more or less randomly, in a variety of sizes from very small to very large, and they retain a high usage of allocated blocks. In these programs, the ability to coalesce blocks is limited, and over time they finish up with memory that is highly fragmented and relatively non-contiguous. If total memory usage exceeds about 1.5 GB in a 32-bit memory space, and there are allocations of (say) 10 MB or more, eventually one of the large allocations will fail. These programs are common.

Other programs free little or no memory until they stop. They progressively allocate memory while running, freeing only small amounts, and then stop, at which time all of their memory is released. A compiler is like this. So is a VM. For example, the .NET CLR runtime, itself written in C++, probably never frees any memory. Why should it?

And this is the final answer. In those cases where a program is heavy enough in its memory usage, managing memory with malloc and free alone is not a sufficient answer to the problem. Unless you are lucky enough to be dealing with a well-behaved program, you will need to design one or more custom memory allocators that pre-allocate big chunks of memory and then sub-allocate according to a strategy of your choice. You may not use free at all, except when the program stops.
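One common shape for such a custom allocator is an arena (a.k.a. region) allocator. Here is a hedged sketch of the idea (my own illustration, not code from this answer): it grabs memory from malloc in large chunks, hands out pieces by bumping a pointer, never frees individual objects, and releases everything in one sweep at shutdown.

    #include <stddef.h>
    #include <stdlib.h>

    #define CHUNK_PAYLOAD (1024 * 1024)    /* grab memory ~1 MB at a time (assumed) */

    typedef struct Chunk {
        struct Chunk *prev;                /* previously filled chunk, for cleanup */
        size_t        used;
        unsigned char data[CHUNK_PAYLOAD];
    } Chunk;

    typedef struct { Chunk *current; } Arena;

    /* Bump-pointer allocation: no per-object bookkeeping, no per-object free. */
    static void *arena_alloc(Arena *a, size_t size) {
        size = (size + 15) & ~(size_t)15;                /* keep 16-byte alignment */
        if (size > CHUNK_PAYLOAD)
            return NULL;                                 /* out of scope for this sketch */
        if (!a->current || a->current->used + size > CHUNK_PAYLOAD) {
            Chunk *c = malloc(sizeof *c);                /* one big gulp from malloc */
            if (!c)
                return NULL;
            c->prev = a->current;
            c->used = 0;
            a->current = c;
        }
        void *p = a->current->data + a->current->used;
        a->current->used += size;
        return p;
    }

    /* Everything comes back in one sweep -- typically only when the program,
       or one well-defined phase of it, is finished. */
    static void arena_release(Arena *a) {
        for (Chunk *c = a->current; c; ) {
            Chunk *prev = c->prev;
            free(c);
            c = prev;
        }
        a->current = NULL;
    }

    int main(void) {
        Arena a = { NULL };
        char *name = arena_alloc(&a, 32);
        int  *nums = arena_alloc(&a, 100 * sizeof *nums);
        if (!name || !nums)
            return 1;
        /* ... objects are used freely and never individually freed ... */
        arena_release(&a);          /* one call returns every chunk at once */
        return 0;
    }

A compiler, for example, can put everything belonging to one compilation phase into a single arena and drop the whole thing when that phase ends.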

Not knowing exactly what your professors said, for truly industrial-strength programs I would probably come down on their side.

EDIT

I will respond to some of the criticisms. Obviously, SO is not a good place for this kind of post. Just to be clear: I have about 30 years of experience writing this kind of software, including a couple of compilers. I have no academic references, only my own bruises. I can't help feeling the criticism comes from people with far narrower and shorter experience.

I will repeat my key message: balancing malloc and free is not a sufficient solution to large-scale memory allocation in real programs. Block coalescing is normal, and buys time, but it is not enough. You need serious, clever memory allocators, which tend to grab memory in chunks (using malloc or whatever) and free it rarely. This is probably the message the OP's professors had in mind, and which he misunderstood.

+1
Mar 18 '14 at 2:26

I am surprised that no one has quoted The Book yet:

This may not be true eventually, though, because memories may get large enough so that it would be impossible to run out of free memory in the lifetime of the computer. For example, there are about 3·10^13 microseconds in a year, so if we were to cons once per microsecond we would need about 10^15 cells of memory to build a machine that could operate for 30 years without running out of memory. That much memory seems absurdly large by today's standards, but it is not physically impossible. On the other hand, processors are getting faster and a future computer may have large numbers of processors operating in parallel on a single memory, so it may be possible to use up memory much faster than we have postulated.

http://sarabander.imtqy.com/sicp/html/5_002e3.xhtml#FOOT298

So, indeed, many programs may do just fine without ever bothering to free any memory.

+1
Mar 20 '14 at 3:33

I know of one case where explicitly freeing memory is worse than useless. That is when you need all your data until the very end of the process's lifetime; in other words, when freeing it only becomes possible right before the program terminates. Since any modern OS takes care of reclaiming memory when a program dies, calling free() is not necessary in that case. In fact, it may slow program termination down, since it may need to touch pages that are no longer resident in memory just to free them.

+1
Mar 22 '14 at 14:02


