Your professors are raising an important point. Unfortunately, the English usage is such that I'm not quite sure what it is they said. Let me answer the question in terms of non-toy programs that have certain characteristics of memory use, and that I have personally worked with.
Some programs behave nicely. They allocate memory in waves: lots of small or medium-sized allocations followed by lots of frees, in repeating cycles. In these programs typical memory allocators do rather well: they coalesce the freed blocks, and at the end of a wave most of the free memory sits in large contiguous chunks. These programs are quite rare.
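To make the "wave" pattern concrete, here is a minimal sketch (mine, not from the original answer) of a cycle that allocates many blocks and then frees them all, which is exactly the shape of traffic a coalescing allocator handles well:

```c
/* Illustrative sketch of the "wave" pattern: each cycle allocates
 * many blocks and then frees every one of them, so the allocator can
 * coalesce the freed blocks back into large contiguous runs. */
#include <stdlib.h>

void process_one_wave(void)
{
    enum { N = 1000 };
    void *blocks[N];

    for (int i = 0; i < N; i++)      /* many small/medium allocations */
        blocks[i] = malloc(64 + (i % 4096));

    /* ... do the wave's work ... */

    for (int i = 0; i < N; i++)      /* then free them all */
        free(blocks[i]);             /* adjacent freed blocks coalesce */
}
```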
Most programs behave badly. They allocate and free memory more or less randomly, in sizes ranging from very small to very large, and they retain a high proportion of the allocated blocks in use. In these programs the ability to coalesce blocks is limited, and over time the memory ends up highly fragmented and relatively non-contiguous. If total memory usage exceeds about 1.5 GB in a 32-bit address space, and there are allocations of (say) 10 MB or more, eventually one of the large allocations will fail. These programs are common.
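Here is a small sketch (my own illustration, not from the answer) of why that large allocation fails: after interleaved allocs and frees, plenty of memory is "free" in total, but no single hole is big enough. On a 64-bit build the final allocation will usually succeed; the point only bites in a constrained 32-bit address space.

```c
/* Fragmentation demo: fill the address space with 10 MB blocks, free
 * every other one, then ask for 20 MB. Half the memory is free, but
 * the live blocks in between stop the 10 MB holes from coalescing, so
 * on a 32-bit build the 20 MB request will typically fail. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    enum { N = 300, BLOCK = 10 * 1024 * 1024 };   /* up to 300 x 10 MB */
    void *blocks[N];

    int n = 0;                                    /* grab what we can */
    while (n < N && (blocks[n] = malloc(BLOCK)) != NULL)
        n++;

    for (int i = 0; i < n; i += 2) {              /* punch 10 MB holes */
        free(blocks[i]);
        blocks[i] = NULL;
    }

    void *big = malloc(2 * BLOCK);                /* needs 20 MB contiguous */
    printf("20 MB allocation %s\n", big ? "succeeded" : "FAILED");

    free(big);
    for (int i = 1; i < n; i += 2)
        free(blocks[i]);
    return 0;
}
```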
Other programs free little or no memory until they stop. They progressively allocate memory while running, freeing only small amounts, and then stop, at which point all memory is freed. A compiler is like this. So is a VM. For example, the .NET CLR runtime, itself written in C++, probably never frees any memory. Why should it?
And this is the final answer. In those cases where a program is sufficiently heavy in its memory usage, managing memory with malloc and free alone is not a sufficient answer to the problem. Unless you are lucky enough to be dealing with a well-behaved program, you will need to design one or more custom memory allocators that pre-allocate big chunks of memory and then sub-allocate according to a strategy of your choice. You may not use free at all, except when the program stops.
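For the flavour of such an allocator, here is a minimal bump-pointer arena, one of many possible sub-allocation strategies (the names and the 16-byte alignment choice are mine, purely for illustration): grab one big chunk up front, hand out pieces by advancing an offset, never free individual blocks, and release the whole chunk in a single call at the end.

```c
/* Minimal bump-pointer arena: one malloc up front, sub-allocation by
 * advancing an offset, and exactly one free() over its whole lifetime. */
#include <stddef.h>
#include <stdlib.h>

typedef struct {
    unsigned char *base;   /* start of the big chunk */
    size_t         size;   /* total capacity in bytes */
    size_t         used;   /* bytes handed out so far */
} Arena;

int arena_init(Arena *a, size_t size)
{
    a->base = malloc(size);
    a->size = size;
    a->used = 0;
    return a->base != NULL;
}

void *arena_alloc(Arena *a, size_t n)
{
    n = (n + 15) & ~(size_t)15;    /* round up to keep pointers aligned */
    if (a->used + n > a->size)
        return NULL;               /* arena exhausted */
    void *p = a->base + a->used;
    a->used += n;
    return p;
}

void arena_destroy(Arena *a)       /* the only free() ever needed */
{
    free(a->base);
    a->base = NULL;
}
```

A compiler or VM of the kind described above would create one arena per phase (or per compilation unit), allocate freely out of it, and throw the whole thing away at once, with no per-block bookkeeping and no fragmentation.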
Without knowing exactly what your professors said, for truly production-scale programs I would probably come down on their side.
EDIT
I'll have one go at answering some of the criticisms. Obviously SO is not a good place for posts of this kind. Just to be clear: I have around 30 years' experience writing this kind of software, including a couple of compilers. I have no academic references, just my own bruises. I can't help feeling the criticisms come from people with far narrower and shorter experience.
I'll repeat my key message: balancing malloc and free is not a sufficient solution to large-scale memory allocation in real programs. Block coalescing is normal, and it buys time, but it's not enough. You need serious, clever memory allocators that tend to grab memory in big chunks (using malloc or whatever) and free rarely. This is probably the message the OP's professors had in mind, which he misunderstood.