SUMMARY:
I have an application that consumes more memory than it should (about 250% of the expected amount), but I cannot find any memory leaks. Calling the same function (which makes many allocations) repeatedly increases memory usage up to a certain point, after which it stops growing and stays there.
PROGRAM DETAILS:
The application uses a quadtree data structure to store "Points". You can specify the maximum number of points that may be kept in memory (the cache size). Points are stored in "PointBuckets" (arrays of points attached to the leaf nodes of the quadtree). If the maximum total number of points in the quadtree is reached, buckets are serialized to temporary files and read back when needed. All of this works fine.
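Roughly, the structure looks like this (a simplified sketch; Point, PointBucket, QuadtreeNode and Quadtree are stand-ins for my real classes, which have more detail but the same ownership layout):

```cpp
#include <cstddef>
#include <memory>
#include <string>
#include <vector>

// Simplified stand-ins for my real classes; the actual code has more
// detail, but the ownership structure is the same.
struct Point {
    double x, y;
};

// A bucket of points attached to a leaf node. When the total number of
// points held in memory exceeds the cache size, buckets are serialized
// to temporary files and their memory is released; they are read back
// on demand.
struct PointBucket {
    std::vector<Point> points;
    std::string tempFile;   // set once the bucket has been swapped out
    bool onDisk = false;
};

struct QuadtreeNode {
    std::unique_ptr<QuadtreeNode> children[4];  // all null for a leaf
    std::unique_ptr<PointBucket>  bucket;       // only leaves own a bucket
};

class Quadtree {
public:
    explicit Quadtree(std::size_t cacheSize) : cacheSize_(cacheSize) {}
    void insert(const Point& p);   // may split nodes / spill buckets to disk
private:
    std::size_t cacheSize_;
    std::unique_ptr<QuadtreeNode> root_;
};
```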
Now, when a file is loaded, a new quadtree is created and the old one, if it exists, is deleted; then the points are read from the file and inserted into the quadtree one at a time. Many allocations and deallocations happen while buckets are created and destroyed during node splitting, etc.
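The reload path, using the types from the sketch above, looks roughly like this (PointFileReader is a simplified placeholder for my actual file parser):

```cpp
#include <fstream>
#include <memory>
#include <string>

// Hypothetical binary reader for the point file; my real parser is more
// involved, but the access pattern (one point at a time) is the same.
class PointFileReader {
public:
    explicit PointFileReader(const std::string& path)
        : in_(path, std::ios::binary) {}

    bool next(Point& p) {
        return static_cast<bool>(
            in_.read(reinterpret_cast<char*>(&p), sizeof(Point)));
    }

private:
    std::ifstream in_;
};

// Rough shape of what happens on every reload (simplified). Each insert()
// can split a leaf, which allocates four child nodes plus new PointBuckets,
// moves the points over, and frees the old bucket, so a single load performs
// a very large number of small, short-lived heap allocations and frees.
void reloadFile(std::unique_ptr<Quadtree>& tree,
                const std::string& path,
                std::size_t cacheSize)
{
    // Destroy the old tree (if there is one) and start from scratch.
    tree = std::make_unique<Quadtree>(cacheSize);

    PointFileReader reader(path);
    Point p;
    while (reader.next(p)) {
        tree->insert(p);   // may split nodes / spill full buckets to temp files
    }
}
```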
PROBLEM:
If I load a file that is expected to use 300 MB of memory once, I get roughly the expected memory consumption. All is well. If I keep loading the same file again and again, memory usage keeps growing (I'm watching the RES column in top, on Linux) until it reaches about 700 MB. That would suggest a memory leak. However, if I keep loading files after that point, memory usage stays at 700 MB and does not grow further.
Another thing: when I run the program under valgrind massif and look at the memory usage, it always stays within the expected limit. For example, if I set the cache size to 1.5 GB and just run my program on its own, it eventually consumes about 4 GB of memory. If I run it under massif, it stays below 2 GB the whole time, and the massif graphs show that it never actually allocated more than the expected 1.5 GB. My naive guess is that this is because massif uses its own allocator/memory pool, which somehow prevents fragmentation.
What do you think is going on here? And if it is memory fragmentation, what kind of solution should I be looking into?