Your selection is 16.7k * 15.7k pixels at 4 bytes per pixel, i.e. ~1 GB. The rest of the answer depends on whether you are compiling a 32-bit or 64-bit executable, and whether you use Physical Address Extension (PAE). If you are not familiar with PAE, you are most likely not using it, by the way.
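For reference, the arithmetic (a trivial sketch; the 16.7k x 15.7k figures are rounded from the question):

    #include <cstddef>
    #include <cstdio>

    int main() {
        // 16.7k x 15.7k pixels at 4 bytes per pixel (RGBA)
        std::size_t bytes = std::size_t(16700) * 15700 * 4;
        std::printf("%zu bytes ~= %.2f GB\n", bytes, bytes / 1e9); // ~1.05 GB
    }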
Assuming 32 bits
If you have a 32-bit executable, you can address about 3 GB of memory [citation needed; some Linux configurations only allow 2 GB], so a single 1 GB allocation uses a third of your address space. To add to the problem, an allocation must be satisfied by a single contiguous range of free address space: you may well have more than 1 GB free in total, but only in pieces smaller than 1 GB. That is why people suggest splitting the texture into tiles. Splitting it into 32 x 32 fragments means making 1024 allocations of ~1 MB each (which is probably too fine-grained).
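A minimal C++ sketch of what tiled allocation looks like (the tile size and RGBA layout are my assumptions, not something from the question):

    #include <cstddef>
    #include <cstdint>
    #include <memory>
    #include <vector>

    // Instead of one ~1 GB block, hold one independently allocated buffer
    // per tile, so no single contiguous free range of ~1 GB is required.
    struct TiledImage {
        static constexpr std::size_t kTile = 512; // tile edge in pixels (assumed): ~1 MB per RGBA tile
        std::size_t width, height;                // full image size in pixels
        std::size_t tilesX, tilesY;
        std::vector<std::unique_ptr<std::uint8_t[]>> tiles;

        TiledImage(std::size_t w, std::size_t h)
            : width(w), height(h),
              tilesX((w + kTile - 1) / kTile),
              tilesY((h + kTile - 1) / kTile) {
            tiles.reserve(tilesX * tilesY);
            for (std::size_t i = 0; i < tilesX * tilesY; ++i)
                tiles.push_back(std::make_unique<std::uint8_t[]>(kTile * kTile * 4));
        }
    };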
Assuming 64 bits
It seems unlikely that you are creating a 64-bit executable, but if you were, the logically addressable memory is much larger. Typical figures are 2^42 or 2^48 bytes (4096 GB and 256 TB, respectively). This means large allocations should not fail except under artificial stress tests, and you will exhaust your page file long before you run out of logical address space.
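A quick way to check which of the two you are actually building (just a sketch):

    #include <climits>
    #include <cstdio>

    int main() {
        // 4-byte pointers => 32-bit build, 8-byte pointers => 64-bit build
        std::printf("%zu-bit build\n", sizeof(void*) * CHAR_BIT);
    }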
If your constraints/hardware allow, I would suggest building a 64-bit executable instead of a 32-bit one. Otherwise, see below.
Tiling vs. sub-sampling
Tiling and sub-sampling are not mutually exclusive, by the way. You may need only one of them to solve your problem, but you can also choose to combine them in a more complex solution.
Tiling is a good idea if you are stuck in a 32-bit address space. It complicates the code, but removes the single-1-GB-contiguous-block problem you seem to be running into. If I had to build a 32-bit executable, I would prefer this over sub-sampling the image.
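On the OpenGL side, tiling can be as simple as one texture per tile; a rough sketch (error handling omitted; GL_RGBA8 and the tile dimensions are assumptions):

    #include <GL/gl.h>
    #include <cstdint>

    // Upload one RGBA tile as its own texture, so neither the client nor
    // the driver ever needs a single ~1 GB allocation.
    GLuint uploadTile(const std::uint8_t* rgba, GLsizei w, GLsizei h) {
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, rgba);
        return tex;
    }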
Sub-sampling the image means you keep an additional (albeit smaller) memory block for the sub-sampled version on top of the original. This may have a performance advantage inside OpenGL, but weigh it against the additional memory pressure.
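For illustration, a naive box-filter sub-sampler (the factor and RGBA layout are assumptions; OpenGL mipmaps can also do this for you):

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Average each factor x factor block of the source into one
    // destination pixel, per RGBA channel.
    std::vector<std::uint8_t> subsample(const std::uint8_t* src,
                                        std::size_t w, std::size_t h,
                                        std::size_t factor) {
        std::size_t dw = w / factor, dh = h / factor;
        std::vector<std::uint8_t> dst(dw * dh * 4);
        for (std::size_t y = 0; y < dh; ++y)
            for (std::size_t x = 0; x < dw; ++x)
                for (int c = 0; c < 4; ++c) {
                    std::uint32_t sum = 0;
                    for (std::size_t sy = 0; sy < factor; ++sy)
                        for (std::size_t sx = 0; sx < factor; ++sx)
                            sum += src[((y * factor + sy) * w + (x * factor + sx)) * 4 + c];
                    dst[(y * dw + x) * 4 + c] =
                        static_cast<std::uint8_t>(sum / (factor * factor));
                }
        return dst;
    }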
The third option, with additional complexity, is to stream the image from disk as needed. If you zoom out to display the entire image, you will be sub-sampling at > 100 image pixels per screen pixel on a 1920 x 1200 monitor. You might want to keep a heavily sub-sampled image as the default, and use it until the user zooms in far enough to require a higher-resolution subset. With an SSD this may give acceptable performance, but it adds a lot of extra complexity.
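A sketch of the decision of when to switch representations (the thresholds and power-of-two level scheme are assumptions):

    #include <algorithm>
    #include <cmath>

    // Pick a sub-sampling level from the current zoom: level 0 is full
    // resolution, each level halves the resolution. While zoomed out, the
    // small default image suffices; only near 1:1 would you stream
    // full-resolution tiles from disk for the visible region.
    int chooseLevel(double imageWidth, double viewportWidth, double zoom) {
        double imagePixelsPerScreenPixel = imageWidth / (viewportWidth * zoom);
        int level = static_cast<int>(
            std::floor(std::log2(std::max(imagePixelsPerScreenPixel, 1.0))));
        return std::clamp(level, 0, 7); // assume 8 pre-built levels
    }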