Why do we use memory managers?

I have seen that many code bases, especially server-side ones, contain basic (sometimes advanced) memory managers. Is the real goal of a memory manager to reduce the number of malloc calls, or is it mainly there for memory analysis and corruption checking, or does it serve other purposes entirely?

Is reducing the number of malloc calls even a reasonable argument, given that malloc is itself a memory manager? The only performance gain I can justify is the case where we know the system always requests blocks of the same size.

Or is the reason for having a memory manager that free does not return memory to the OS but keeps it on a list? In that case, over the lifetime of the process, the heap may keep growing if we keep doing malloc/free, due to fragmentation.

+7
c++ c memory-management memory
7 answers

malloc is a general-purpose allocator: "not slow" is more important than "always fast".

Consider a feature that would be a 10% improvement in many common cases but could cause a significant performance degradation in a few rare cases. An application-specific allocator can avoid the rare case and reap the benefit. A general-purpose allocator should not.


Besides the number of calls to malloc, there are other relevant attributes:

locality of allocations
On current hardware, this is easily the most important performance factor. An application has more knowledge of its access patterns and can optimize its allocations accordingly.

multithreading
A general-purpose allocator must support malloc and free calls from different threads. This usually requires a lock or similar concurrency handling. If the heap is very busy, this leads to massive contention.

An application that knows that some of its high-frequency alloc/free pairs come from a single thread can use its own thread-specific heap, which not only avoids contention for those allocations but also increases their locality and takes load off the default allocator.
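A minimal sketch of that idea (the Node type and its fields are hypothetical, made up for illustration): each thread keeps its own free list for one hot object type, so the fast path takes no lock and recycled nodes stay cache-warm.

    #include <cstddef>
    #include <new>

    struct Node {              // hypothetical high-frequency object
        Node* next;            // reused as the free-list link while dead
        int   payload;
    };

    // Each thread gets its own list head: no locks, no contention.
    thread_local Node* g_freeList = nullptr;

    Node* allocNode() {
        if (Node* n = g_freeList) {   // fast path: pop a recycled node
            g_freeList = n->next;
            return n;
        }
        return static_cast<Node*>(::operator new(sizeof(Node)));
    }

    void freeNode(Node* n) {          // cached for reuse by this thread,
        n->next = g_freeList;         // not returned to the system
        g_freeList = n;
    }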

fragmentation
This is still an issue for long-running applications on systems with limited physical memory or address space. Fragmentation may require claiming more and more memory or address space from the OS, even without an increase in the actual working set. This is a serious problem for applications that need to run uninterrupted.

The last time I looked deeper into allocators (which is probably half a decade ago), the consensus was that naive attempts to reduce fragmentation often conflict with the never slow rule.

Again, an application that knows (some of) its allocation patterns can take a lot of load off the default allocator. One very common use case is building a syntax tree or something similar: there are gazillions of small allocations which are never freed individually, only as a whole. Such a pattern can be served efficiently with a very trivial allocator.
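For example, a toy arena along these lines (my own sketch, not any particular library's API) just bumps a pointer inside large slabs and frees everything in one go when the tree is discarded:

    #include <cstddef>
    #include <cstdlib>
    #include <vector>

    class Arena {
        static const std::size_t BLOCK = 64 * 1024; // grab memory in big slabs
        std::vector<char*> blocks_;
        std::size_t used_ = BLOCK;                  // force a slab on first alloc

    public:
        // Hand out n bytes by bumping a pointer (assumes n <= BLOCK).
        void* alloc(std::size_t n) {
            n = (n + 7) & ~std::size_t(7);          // keep 8-byte alignment
            if (used_ + n > BLOCK) {                // current slab exhausted
                blocks_.push_back(static_cast<char*>(std::malloc(BLOCK)));
                used_ = 0;
            }
            void* p = blocks_.back() + used_;
            used_ += n;
            return p;
        }
        ~Arena() {                                  // free the whole tree at once
            for (char* b : blocks_) std::free(b);
        }
    };

There is no per-node free at all, which is exactly why it can be so much cheaper than a general-purpose heap for this pattern.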

fault tolerance and diagnostics
Last but not least, the default allocator's diagnostic and self-protection capabilities may not be sufficient for many applications.
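To illustrate the kind of self-protection such a layer can add, here is a hedged sketch (my own, not modeled on any specific debug allocator): each block is surrounded with guard bytes that are verified on free, catching out-of-bounds writes.

    #include <cassert>
    #include <cstdlib>
    #include <cstring>

    static const std::size_t GUARD = 8;   // bytes of 0xAB on each side

    // Layout: [size][front fence][user data][back fence]
    void* debugMalloc(std::size_t n) {
        unsigned char* raw = static_cast<unsigned char*>(
            std::malloc(sizeof(std::size_t) + 2 * GUARD + n));
        std::memcpy(raw, &n, sizeof n);                       // remember user size
        std::memset(raw + sizeof n, 0xAB, GUARD);             // front fence
        std::memset(raw + sizeof n + GUARD + n, 0xAB, GUARD); // back fence
        return raw + sizeof n + GUARD;                        // hand out the middle
    }

    void debugFree(void* p) {
        unsigned char* user = static_cast<unsigned char*>(p);
        unsigned char* raw  = user - GUARD - sizeof(std::size_t);
        std::size_t n;
        std::memcpy(&n, raw, sizeof n);
        for (std::size_t i = 0; i < GUARD; ++i) {
            assert(raw[sizeof n + i] == 0xAB);  // overwritten fence byte
            assert(user[n + i] == 0xAB);        // means a buffer overrun
        }
        std::free(raw);
    }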

+5

Why do we have custom memory managers rather than the built-in ones?

Perhaps the number one reason is that the code base was originally written 20-30 years ago, when the built-in one was no good, and nobody has dared to change it since.

But otherwise, as you say, it is because the application needs to manage fragmentation, grab memory at startup to ensure memory will always be available, for security, or for many other reasons, most of which can be achieved by using the built-in manager correctly.

+4

C and C++ are designed to be lean. They don't do much that they aren't explicitly asked to do, so when a program asks for memory, it gets the minimum effort necessary to deliver that memory.

In other words, if you do not need it, you do not pay for it.

If finer-grained memory management is needed, that is the programmer's domain. If a programmer wants to trade bare-metal speed for a system that provides higher performance on the target hardware combined with the program's often unique goals, better debugging support, or just likes the look and the warm fuzzies that come from using a manager, that is up to them. The programmer either writes something smarter or finds a third-party library to do what they want.

+2

You briefly touched on many of the reasons why you would use a memory manager in your question.

Is the real goal of a memory manager to reduce the number of malloc calls, or is it mainly for memory analysis, corruption checking, or other application-oriented purposes?

That is a great question. A memory manager in any application can be generic (like malloc) or it can be more specific. The more specialized a memory manager becomes, the more likely it is to be more efficient at the specific task it is meant to perform.

Take this oversimplified example:

    #include <cstdlib>

    struct Foo { int x; };            // stand-in for some application type

    #define MAX_OBJECTS 1000

    Foo globalObjects[MAX_OBJECTS];   // storage reserved up front

    int main(int argc, char** argv)
    {
        void* mallocObjects[MAX_OBJECTS] = {0};
        void* customObjects[MAX_OBJECTS] = {0};

        for (int i = 0; i < MAX_OBJECTS; ++i)
        {
            mallocObjects[i] = malloc(sizeof(Foo));  // general-purpose path
            customObjects[i] = &globalObjects[i];    // "custom allocator" path
        }
    }

In the example above, I am pretending that this list of global objects is our "specialized memory allocator". It is just there to simplify what I am explaining.

When allocating with malloc, there is no guarantee that an allocation is next to the previous one. malloc is a general-purpose allocator and does a good job at that, but it does not necessarily make the most efficient choice for every application.

With a custom allocator, you can reserve room for 1000 custom objects up front, and since they are a fixed size, you get back the exact amount of memory you need, preventing fragmentation and allocating that block efficiently.

There is also a difference between memory abstractions and specialized memory allocators. STL allocators are an abstraction model, not necessarily a specialized memory allocator.
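To make that distinction concrete, here is a minimal sketch of the abstraction side (MyAlloc is a made-up name): an STL-compatible allocator that merely forwards to operator new/delete, but whose allocate/deallocate could just as well be pointed at a specialized pool.

    #include <cstddef>
    #include <vector>

    template <class T>
    struct MyAlloc {
        using value_type = T;
        MyAlloc() = default;
        template <class U> MyAlloc(const MyAlloc<U>&) {}

        T* allocate(std::size_t n) {   // could delegate to a custom pool
            return static_cast<T*>(::operator new(n * sizeof(T)));
        }
        void deallocate(T* p, std::size_t) { ::operator delete(p); }
    };
    template <class T, class U>
    bool operator==(const MyAlloc<T>&, const MyAlloc<U>&) { return true; }
    template <class T, class U>
    bool operator!=(const MyAlloc<T>&, const MyAlloc<U>&) { return false; }

    // The container's interface is unchanged; only the source of
    // memory is abstracted away.
    std::vector<int, MyAlloc<int>> v{1, 2, 3};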

Take a look at this link for more information on custom allocators and why they are useful: gamedev.net link

+2

There are many reasons why we would want to do this, and it really depends on the application itself. Virtually all of the reasons you cited are valid.

I once created a very simple memory manager that kept track of shared_ptr allocations so that I could see what was not being released properly at the end of the application.
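A minimal reconstruction of that idea (my own sketch, not the original code): a factory wraps each object in a shared_ptr whose deleter decrements a live-object counter, so anything still counted at shutdown was never released.

    #include <atomic>
    #include <cstdio>
    #include <memory>
    #include <utility>

    std::atomic<long> g_live{0};             // tracked objects still alive

    template <class T, class... Args>
    std::shared_ptr<T> makeTracked(Args&&... args) {
        ++g_live;                            // count on creation...
        return std::shared_ptr<T>(new T(std::forward<Args>(args)...),
                                  [](T* p) { --g_live; delete p; }); // ...uncount on release
    }

    int main() {
        {
            auto ok = makeTracked<int>(1);   // released when this scope ends
        }
        auto leak = makeTracked<int>(2);     // still alive below
        std::printf("still alive: %ld\n", g_live.load());  // prints 1
    }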

I would say stick with your runtime's allocator unless you need something it does not provide.

+1

Memory managers are mainly used to manage memory reservations efficiently. A process usually has access to a limited amount of memory (4 GB on 32-bit systems); from this you must subtract the virtual address space reserved for the kernel (1 GB or 2 GB, depending on the OS configuration). So, in practice, a process has access to, say, 3 GB of memory, which is used to hold all of its segments (code, data, bss, heap and stack).

Memory managers (malloc, for example) try to fulfill the various memory reservation requests issued by the process by requesting new memory pages from the OS (using the sbrk or mmap system calls). Every time this happens, it implies an extra cost in program execution, since the OS has to look for a suitable memory page to assign to the process (physical memory is limited and all running processes want to use it) and update the process tables (TLB, etc.). These operations are time-consuming and hurt the process's execution and performance. So the memory manager usually tries to request pages from the OS cleverly; for example, it may request a few more pages than needed in order to avoid further mmap calls in the near future. It also tries to deal with issues like fragmentation, memory alignment, and so on. This essentially relieves the process of that responsibility; otherwise, everybody writing a program that needs dynamic memory allocation would have to do all of this by hand!
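As an illustration of that page-batching idea (a POSIX-only sketch of mine; Windows would use VirtualAlloc, and error handling is omitted): grab one large region from the OS with a single mmap call, then carve small allocations out of it with no further system calls.

    #include <sys/mman.h>   // POSIX mmap; region size chosen arbitrarily
    #include <cstddef>

    static char*       g_base = nullptr;
    static std::size_t g_off  = 0;
    static const std::size_t REGION = 1 << 20;   // 1 MiB up front

    void* bumpAlloc(std::size_t n) {
        if (!g_base) {                // one system call serves many allocs
            g_base = static_cast<char*>(mmap(nullptr, REGION,
                                             PROT_READ | PROT_WRITE,
                                             MAP_PRIVATE | MAP_ANONYMOUS,
                                             -1, 0));
        }
        n = (n + 15) & ~std::size_t(15);         // 16-byte alignment
        if (g_off + n > REGION) return nullptr;  // region exhausted
        void* p = g_base + g_off;
        g_off += n;
        return p;                     // no further OS involvement needed
    }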

Actually, there are cases where you may be interested in managing memory manually. This applies to embedded systems or high-availability systems which must run 24/7, 365 days a year. In these cases, even if memory fragmentation is low, it could become a problem after a very long period of operation (1 year, for example). So one of the solutions used in this case is a memory pool: allocate the memory for your application's objects up front. Afterwards, each time you need memory for an object, you just use the memory already reserved.
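A hedged sketch of such a pool (types and names are mine): all slots are reserved up front, and alloc/release just move links on an intrusive free list, so after startup the heap never grows and never fragments.

    #include <cstddef>

    template <class T, std::size_t N>
    class Pool {
        // Each slot is either free (holds the next link) or holds a T.
        union Slot { Slot* next; alignas(T) unsigned char storage[sizeof(T)]; };
        Slot  slots_[N];                  // all memory reserved at startup
        Slot* free_ = nullptr;

    public:
        Pool() {
            for (std::size_t i = 0; i < N; ++i) {  // chain every slot up front
                slots_[i].next = free_;
                free_ = &slots_[i];
            }
        }
        void* alloc() {                   // O(1), no call into the OS
            if (!free_) return nullptr;   // pool exhausted
            Slot* s = free_;
            free_ = s->next;
            return s->storage;
        }
        void release(void* p) {           // back to the pool, not the OS
            Slot* s = reinterpret_cast<Slot*>(p);
            s->next = free_;
            free_ = s;
        }
    };

For instance, Pool<Foo, 1000> would hand out Foo-sized slots; the caller placement-news a Foo into the result of alloc() and runs its destructor before calling release().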

+1

For a server, or any application that must run for long periods of time or indefinitely, the main issue is memory fragmentation. After a long series of mallocs/news and frees/deletes, the heap's pages can end up riddled with holes that waste space, and the process can eventually exhaust its virtual address space. Microsoft deals with this in the .NET framework by occasionally pausing a process to repack (compact) its paged memory.

To avoid the slowdown while a process's memory is being repacked, an application such as a server can run several processes for the same application, so that while one process is repacking, the others take up more of the load.

0
