C++ runs out of memory trying to draw a large image using OpenGL

I created a simple 2D C++ image viewer using MFC and OpenGL. This image viewer allows the user to open an image, zoom in and out, pan, and view the image in different color layers (cyan, yellow, magenta, black). The program works great for images of moderate size. However, when I stress-test it on some very large images, I quickly run out of memory. One such image I have is 16,700 x 15,700. My program runs out of memory before it can do anything, because I dynamically allocate a UCHAR[] of size height x width x 4. I multiply by 4 because there is one byte each for the R, G, B, and A values when I feed this array to glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid*)myArray)
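For reference, a minimal sketch of the allocation and upload path described above (the dimensions and buffer name are illustrative; texture parameters mirror the call in the question):

```cpp
#include <windows.h>   // must precede GL/gl.h on Windows/MFC
#include <GL/gl.h>
#include <vector>

void UploadWholeImage()
{
    // Illustrative dimensions from the question: one ~1 GB allocation.
    const int width  = 16700;
    const int height = 15700;

    // One byte per R, G, B, A channel per pixel. std::vector is used
    // here instead of a raw new UCHAR[] so the buffer frees itself.
    std::vector<GLubyte> myArray(static_cast<size_t>(width) * height * 4);

    // ... decode the image file into myArray ...

    // Upload the whole image as a single texture, as in the question
    // (GL_RGB8 internal format with RGBA client data, which is legal GL).
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, myArray.data());
}
```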

I did a few searches and read a few things about dividing my image into tiles instead of using one large texture on a single quad. Is this something I need to do? How would it help with my memory problem? Or is there something better I should be doing?

1 answer

Your allocation of 16.7k x 15.7k x 4 bytes comes to ~1 GB (16,700 x 15,700 x 4 is about 1.05 x 10^9 bytes). The rest of the answer depends on whether you are compiling a 32-bit or 64-bit executable, and whether you are using Physical Address Extension (PAE). If you are not familiar with PAE, you are most likely not using it.

Assuming 32 bits

If you have a 32-bit executable, you can address at most about 3 GB of memory, so this single allocation would consume a third of your address space. To make matters worse, when you allocate a chunk of memory, it must be available as a single contiguous range of free address space. You may have well over 1 GB free in total, but only in pieces smaller than 1 GB, which is why people suggest dividing the texture into tiles. Splitting it into a 32 x 32 grid of smaller tiles means making 1024 allocations of roughly 1 MB each (which is probably too fine-grained), as sketched below. (Citation needed, but some Linux configurations only allow 2 GB per process.)
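A minimal sketch of that tiling approach, assuming 1024 x 1024 tiles (a tuning choice: at 4 bytes per pixel these are ~4 MB each, coarser than the 32 x 32 grid mentioned above; `Tile` and the dimensions are illustrative):

```cpp
#include <windows.h>
#include <GL/gl.h>
#include <vector>

struct Tile {
    int x, y, w, h;   // tile's position and size within the full image
    GLuint texture;   // one GL texture per tile
};

std::vector<Tile> BuildTiles()
{
    const int imageW = 16700, imageH = 15700;
    const int tileW  = 1024,  tileH  = 1024;

    std::vector<Tile> tiles;
    for (int y = 0; y < imageH; y += tileH) {
        for (int x = 0; x < imageW; x += tileW) {
            Tile t;
            t.x = x;
            t.y = y;
            t.w = (x + tileW <= imageW) ? tileW : imageW - x;  // clamp edges
            t.h = (y + tileH <= imageH) ? tileH : imageH - y;

            // Per-tile buffer: ~4 MB here instead of one ~1 GB block, so
            // no single contiguous 1 GB address range is ever required.
            std::vector<GLubyte> pixels(static_cast<size_t>(t.w) * t.h * 4);
            // ... copy/decode this tile's region of the image into pixels ...

            glGenTextures(1, &t.texture);
            glBindTexture(GL_TEXTURE_2D, t.texture);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, t.w, t.h, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
            tiles.push_back(t);
        }
    }
    return tiles;
}
// At draw time, render one textured quad per tile at (t.x, t.y, t.w, t.h).
```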

Assuming 64 bits

It seems unlikely that you are building a 64-bit executable, but if you were, the logically addressable memory is much higher. Typical figures are 2^42 or 2^48 bytes (4096 GB and 256 TB, respectively). This means that large allocations should not fail under anything but artificial stress tests, and you will thrash your page file long before running out of logical address space.

If your constraints/hardware allow, I would suggest building 64-bit instead of 32-bit. Otherwise, see below.

Tiling vs. sub-sampling

Tiling and sub-sampling are not mutually exclusive; quite the opposite. You may need only one of them to solve your problem, but you could also choose a combined, more complex solution.

Tiling is a good idea if you are stuck in a 32-bit address space. It complicates the code, but removes the single ~1 GB contiguous block problem that you appear to be hitting. If you have to ship a 32-bit executable, I would prefer tiling over sub-sampling the image.

Sub-sampling the image means keeping an additional (though smaller) block of memory for the sub-sampled version of the original image. It may also have performance advantages inside OpenGL, but weigh those against the extra memory pressure.
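A minimal sketch of sub-sampling by taking every Nth pixel (illustrative; a box filter that averages the skipped pixels gives better quality):

```cpp
#include <vector>

// Produce a copy of an RGBA image reduced by an integer factor, keeping
// one source pixel per factor x factor block (point sampling).
std::vector<unsigned char> Subsample(const std::vector<unsigned char>& src,
                                     int srcW, int srcH, int factor)
{
    const int dstW = srcW / factor;
    const int dstH = srcH / factor;
    std::vector<unsigned char> dst(static_cast<size_t>(dstW) * dstH * 4);

    for (int y = 0; y < dstH; ++y) {
        for (int x = 0; x < dstW; ++x) {
            const size_t s = (static_cast<size_t>(y) * factor * srcW
                              + static_cast<size_t>(x) * factor) * 4;
            const size_t d = (static_cast<size_t>(y) * dstW + x) * 4;
            for (int c = 0; c < 4; ++c)   // copy R, G, B, A
                dst[d + c] = src[s + c];
        }
    }
    return dst;
}
```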

The third option, with additional complexity, is to stream the image from disk as needed. If you zoom out to display the entire image, you will be sub-sampling at more than 100 image pixels per screen pixel on a 1920 x 1200 monitor. You might want to build a heavily sub-sampled image by default and use it until the zoom level is high enough to require a higher-resolution subset. If you are on an SSD this can give acceptable performance, but it adds considerable extra complexity.
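A sketch of how the current zoom could select a sub-sample resolution, so full-resolution data is only streamed in at high magnification (power-of-two levels assumed; `ChooseLevel` is a hypothetical helper):

```cpp
// zoom is screen pixels per image pixel: 1.0 means 1:1, ~0.1 means the
// whole 16,700-wide image fits a 1920-wide monitor. Each level halves
// the resolution; stay coarse while one sub-sampled pixel still covers
// no more than one screen pixel.
int ChooseLevel(double zoom, int maxLevel)
{
    int level = 0;
    while (level < maxLevel && zoom * (1 << (level + 1)) <= 1.0)
        ++level;
    return level;
}
// Example: zoom = 0.1 -> level 3 (1/8 resolution); zoom = 1.0 -> level 0.
```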
