I am currently working on a large-scale application (written in C++) that was started from scratch some time ago, and we have reached the point where we need to review our memory leak checks.
The application runs on Ubuntu Linux, handles a lot of multimedia content, and uses OpenGL, SDL and ffmpeg for various purposes, including 3D graphics rendering, windowing, audio and video. You can think of it as a video game; it is not one, but the application's responsibilities can be simplified by treating it as if it were.
I am currently having trouble determining whether we still have memory leaks or not. We identified and removed some in the past. However, now that the application is nearly complete, the tests we run give me results that I cannot interpret with confidence.
The first thing I tried was running the application through Valgrind... unfortunately, the application crashes when launched in the Valgrind environment. The crash is "non-deterministic", in the sense that it happens in different places each time. So I gave up on using Valgrind to easily pinpoint the source of potential leaks, and ended up using two Linux commands: free and top.
free is used to check system memory usage while the application is running.
top is used with the '-p' option to examine the memory usage of the application process while it is running.
The output from top and free is dumped to files for further processing. I made two graphs from that data, which are linked at the bottom of the question.
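For completeness, here is a minimal sketch of how the process's resident set size could also be sampled from inside the application itself, by reading the VmRSS field from /proc/self/status (this is only an illustrative assumption about how in-process sampling could be done; the actual test relies purely on the external top and free output):

#include <fstream>
#include <iostream>
#include <string>

// Returns the current resident set size (VmRSS) in kilobytes,
// or -1 if it could not be read. Linux-only: relies on /proc.
long currentRssKb()
{
    std::ifstream status("/proc/self/status");
    std::string line;
    while (std::getline(status, line)) {
        if (line.rfind("VmRSS:", 0) == 0) {
            // Line looks like "VmRSS:   123456 kB"
            return std::stol(line.substr(6));
        }
    }
    return -1;
}

int main()
{
    std::cout << "RSS (kB): " << currentRssKb() << '\n';
    return 0;
}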
The test is very simple: memory data is sampled once the application is already running and waiting for commands. Then I issue a sequence of commands that always do the same thing. The application is expected to load a lot of multimedia data into RAM and then unload it.
Unfortunately, the graphs do not show what I expected. Memory usage increases in three distinct steps and then stops. The memory is apparently never released, which at first hinted at a HUGE memory leak. That would actually be fine, as it would simply mean that we are most likely not freeing the memory used by the media.
But after the first three steps, memory usage is stable: there are no more big steps, just small ups and downs that correspond to the expected loading and unloading of data. What is unexpected here is that the data being loaded/unloaded amounts to hundreds of megabytes of RAM, whereas the ups and downs are only a few megabytes (say, 8-10 MB).
At this point, I am rather lost in interpreting this data.
Does anyone have any tips or suggestions? What am I missing? Is the method I am using to check for macroscopic memory leaks completely wrong? Do you know of any other (preferably free) tools, besides Valgrind, for checking memory leaks?
System memory usage graph
Process memory usage graph