One large pool or several type-specific pools?

I am working on a video game that requires high performance, so I am trying to set up a good memory strategy for a specific part of the game: the part that represents the "model" of the game, the game's internal representation. I have an object containing a whole game representation, with different managers inside to coordinate it according to the rules of the game. Each game object is currently created by a type-specific factory, so I have several factories that let me isolate and change the memory management of these objects as I wish.

Now I am choosing between these two alternatives:

  • One memory pool per object type: this allows really fast allocation/deallocation and minimal fragmentation, since each pool already knows the size of the objects it allocates. One thing that bothers me is that having multiple separate pools might make the other solution more memory-efficient overall...
  • One large memory pool shared by all factories of the same game representation (using something like boost::pool with some adapter functions): all game-object memory is allocated together, and I can make one big allocation for the game whose total size I sometimes already know (though not always). I'm not sure this is better than A, due to possible fragmentation inside the pool, since objects of different sizes would live in the same pool, but it looks easier to analyze memory use and track down problems.

Now, I have real-world experience with A, but I am not familiar with B, and I would like advice on these solutions for a long-term project. Which solution seems best for a long-term project, and why? (Note: a pool really is necessary in this case, because the game model is also used for editing the game, so there will be a lot of allocation/deallocation of small objects.)

Edit for clarification: I am using C++ (in case that was not clear).
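To make the question concrete, here is a minimal sketch of the factory layer described above (all names are my own invention, not from the actual project): each factory takes a pluggable memory strategy, so switching between per-type pools (option A) and one shared pool (option B) only means handing the factories a different strategy object.

```cpp
#include <cassert>
#include <cstddef>
#include <new>
#include <utility>

// Abstract memory strategy; a per-type pool or a shared pool would both
// implement this interface.
struct MemoryStrategy {
    virtual void* allocate(std::size_t bytes) = 0;
    virtual void  release(void* p, std::size_t bytes) = 0;
    virtual ~MemoryStrategy() = default;
};

// Trivial heap-backed strategy, used here only for illustration.
struct HeapStrategy : MemoryStrategy {
    void* allocate(std::size_t bytes) override { return ::operator new(bytes); }
    void  release(void* p, std::size_t) override { ::operator delete(p); }
};

// A factory constructs game objects in memory obtained from its strategy.
template <typename T>
class Factory {
public:
    explicit Factory(MemoryStrategy& mem) : mem_(mem) {}

    template <typename... Args>
    T* create(Args&&... args) {
        void* raw = mem_.allocate(sizeof(T));
        return new (raw) T(std::forward<Args>(args)...);  // placement new
    }

    void destroy(T* obj) {
        obj->~T();
        mem_.release(obj, sizeof(T));
    }

private:
    MemoryStrategy& mem_;
};
```

With this shape, the A-vs-B decision is localized to which `MemoryStrategy` each factory is constructed with.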

+7
c++ memory pool
6 answers

The correct answer depends on your problem domain. But in the problem domains I work in, the first one is usually what we choose.

I work on real-time or near-real-time code, mostly audio editing and playback. In that code, we generally cannot afford to allocate memory from the heap in the playback engine. Most of the time malloc returns quickly enough, but sometimes it doesn't. And that sometimes matters.

So our solution is to have specific pools for certain objects and to use a general allocator for everything else. The specific pools have a certain number of elements preallocated and are implemented as a linked list (actually a queue), so allocation and release are no more than a couple of pointer updates plus the cost of entering and leaving a critical section.

As a fallback for unusual cases: when someone needs an allocation from a specific pool and it is empty, we allocate a chunk of general memory (several objects' worth) and add it to that specific pool. Once memory becomes part of a specific pool, it will NEVER return to the general pool until the application exits or starts a new project.

Choosing good initial and maximum sizes for the specific pools is an important part of tuning the application.
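The scheme described above can be sketched as follows (a minimal illustration with names of my own choosing, not the answerer's code; the critical-section locking and alignment handling that real code needs are noted but elided):

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <vector>

// A per-type pool backed by an intrusive free list: free slots are
// chained through the slots themselves, so allocate/release are a
// couple of pointer writes. When the pool runs dry it grabs a whole
// chunk of several slots at once and keeps it for the pool's lifetime;
// memory never goes back to the general heap.
class ChunkedFreeList {
public:
    ChunkedFreeList(std::size_t slot_size, std::size_t slots_per_chunk)
        : slot_size_(slot_size < sizeof(void*) ? sizeof(void*) : slot_size),
          slots_per_chunk_(slots_per_chunk),
          free_head_(nullptr) {}

    void* allocate() {                 // real code would lock here
        if (!free_head_) grow();       // rare path: add one chunk
        void* slot = free_head_;
        free_head_ = *static_cast<void**>(slot);
        return slot;
    }

    void release(void* slot) {         // O(1): two pointer writes
        *static_cast<void**>(slot) = free_head_;
        free_head_ = slot;
    }

    std::size_t chunks() const { return chunks_.size(); }

private:
    void grow() {
        // Real code would ensure the chunk is aligned for the stored type.
        chunks_.push_back(std::make_unique<unsigned char[]>(
            slot_size_ * slots_per_chunk_));
        unsigned char* base = chunks_.back().get();
        for (std::size_t i = 0; i < slots_per_chunk_; ++i) {
            void* slot = base + i * slot_size_;
            *static_cast<void**>(slot) = free_head_;
            free_head_ = slot;
        }
    }

    std::size_t slot_size_;
    std::size_t slots_per_chunk_;
    void* free_head_;
    std::vector<std::unique_ptr<unsigned char[]>> chunks_;
};
```

The "initial size" tuning knob is how many chunks you pre-grow at startup; the "maximum size" knob would be a cap checked inside `grow()`.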

+7

One problem you will run into is that STL implementations are allowed to assume that two allocators of the same type are equivalent. This is the reason Boost.Pool uses only one pool (technically, it uses a separate pool per type). In other words, your allocators are not allowed to have any non-static members, in the general case. If you are making a video game and know that your STL implementation does not have this problem, then don't worry about it; however, there can still be problems with list::splice and std::swap on containers.
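A minimal sketch of the workaround this answer alludes to (my own illustration, not the actual Boost source): keep all allocator state in a per-type static, so every instance really is equivalent and operations like `list::splice` stay safe. Here a simple byte counter stands in for the pool state.

```cpp
#include <cassert>
#include <cstddef>
#include <list>

// A stateless C++11 allocator: all state lives in a per-type static,
// so any two instances are interchangeable, as the STL may assume.
template <typename T>
struct StaticPoolAllocator {
    using value_type = T;

    StaticPoolAllocator() = default;
    template <typename U>
    StaticPoolAllocator(const StaticPoolAllocator<U>&) {}

    T* allocate(std::size_t n) {
        allocated_bytes() += n * sizeof(T);    // stand-in for pool bookkeeping
        return static_cast<T*>(::operator new(n * sizeof(T)));
    }
    void deallocate(T* p, std::size_t n) {
        allocated_bytes() -= n * sizeof(T);
        ::operator delete(p);
    }

    // Shared per-type state (a real version would hold the pool here).
    static std::size_t& allocated_bytes() {
        static std::size_t bytes = 0;
        return bytes;
    }
};

// Equivalence is what makes splice/swap between containers legal.
template <typename T, typename U>
bool operator==(const StaticPoolAllocator<T>&, const StaticPoolAllocator<U>&) { return true; }
template <typename T, typename U>
bool operator!=(const StaticPoolAllocator<T>&, const StaticPoolAllocator<U>&) { return false; }
```

An allocator with non-static members would compare unequal between containers, and splicing nodes between lists whose allocators differ is undefined behavior.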

+4

For starters, I do not recommend using the STL or Boost for any kind of video game. You can be absolutely sure that the second you use even one STL container, your memory is fragmented and your performance is hopelessly in the toilet compared to the ideal (though most people in this category never notice, because they can never really compare against anything else). I didn't always think this way, but over time I have seen that even a couple of lines of code can act like little gremlins that will eventually cause you great pain.

The first method is the most common, and as someone who has done both, it is probably the only practical way unless you want to spend far more time and energy on the problem than it is probably worth. The second way is better because it is more general yet can still be adapted to your specific needs, but it is a lot of work and not something to jump into lightly.

+4

One possible solution is something between 1. and 2.

Use pools for small objects: one pool per object size. In that case you can find the right pool quickly by storing the pool pointers in an array indexed by size.

On top of that, you can have one pool for large objects. There, fragmentation is less likely, and the time overhead matters less, because large objects are not allocated and freed very often.
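The routing part of this hybrid can be sketched as follows (a minimal illustration with parameter values I chose arbitrarily; the pools themselves are elided, only the size-to-pool mapping is shown):

```cpp
#include <cassert>
#include <cstddef>

// Small requests go to one pool per size class, found by indexing an
// array; anything larger falls through to the large-object pool.
constexpr std::size_t kGranularity = 8;     // bytes per size class (assumed)
constexpr std::size_t kSmallLimit  = 256;   // biggest "small" object (assumed)
constexpr std::size_t kNumClasses  = kSmallLimit / kGranularity;

// Map a request size to its size-class index; kNumClasses means
// "use the large-object pool instead".
std::size_t size_class(std::size_t bytes) {
    if (bytes == 0 || bytes > kSmallLimit) return kNumClasses;
    return (bytes + kGranularity - 1) / kGranularity - 1;   // round up
}
```

An allocator would then keep `Pool* pools[kNumClasses]` and dispatch with `pools[size_class(bytes)]`, so the lookup is a couple of arithmetic operations and one array index.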

A note about boost::pool: when testing the performance of boost::pool, check not only allocation but also release. I have seen the release times of boost::pool and boost::fast_pool_allocator become extremely long. My case was allocating and releasing small objects of different sizes in one pool.

+2

I have no specific experience with the memory managers you are considering, but here are some general guidelines that might help:

  • If you do not expect to run out of memory, option 1 may be better, since, as you state, it is fast (faster than 2?), and with separate pools it is easier to track down allocation/free/buffer problems (provided the pool manager has decent error-detection capabilities).
  • If memory could be a problem (i.e., your game will use a lot of memory compared to what the target platform provides), one large pool gives more efficient use of memory. Also, if you cannot accurately predict the average and maximum memory requirements of each pool, it is the better choice, unless your memory manager can grow pools dynamically (and ideally allocate blocks from the pool dynamically). The only downsides I see are that it may be slower (is it?) and that memory-management errors can be harder to detect.

You could get the best of both worlds (assuming the speeds are similar) by developing with multiple pools but doing final testing and the production release with one pool. That way you can identify allocation/management problems during development, yet still benefit from the potentially more efficient single pool.
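One way to implement the switch this answer suggests is to pick the pooling scheme at compile time (a hypothetical sketch; the macro name and the two stub types are my own, standing in for real pool classes):

```cpp
#include <cassert>
#include <cstring>

// Stub strategy types; real code would put actual pool classes here.
struct PerTypePools { static const char* name() { return "per-type"; } };
struct SinglePool   { static const char* name() { return "single"; } };

// Development builds get per-type pools (better error isolation);
// defining GAME_RELEASE_BUILD switches the whole game to one pool.
#ifdef GAME_RELEASE_BUILD
using ActivePoolScheme = SinglePool;
#else
using ActivePoolScheme = PerTypePools;
#endif
```

Because the choice is a single type alias, the rest of the codebase never mentions which scheme is active.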

0

Actually, I would go with 2. I can give an example from the Linux kernel. In the kernel, dentries (directory entries) and inodes must be kept in memory for a long time to respond to users faster. Because the inode object depends on the file system, each file system creates its own object pool. Another thing you can do, if the objects are similar, is to abstract them: store the common attributes in one abstract object and keep the per-type information in a container. See the code below for the complete idea.

http://lxr.linux.no/linux+v2.6.32/fs/ext2/super.c#L149

0
