I'm looking for ideas on how to convert a stack of 2000+ TIFF images (30+ GB in total) into a dataset that can be visualized at interactive frame rates using GPU-based volume rendering (OpenCL / OpenGL / GLSL). I want to use a direct volume rendering approach rather than surface fitting (i.e. raycasting instead of marching cubes).
The problem is twofold. First, I need to convert my images into a 3D dataset. The first thing that occurred to me was to treat all the images as 2D textures and simply stack them to create a 3D texture.
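To make the stacking step concrete, here is a minimal sketch of what I have in mind (C++ / OpenGL; it assumes the slices have already been decoded to 8-bit grayscale, e.g. with libtiff, and that the chosen resolution fits in GPU memory — for the full 30+ GB stack I would obviously have to upload a downsampled or bricked version instead):

```cpp
#include <GL/glew.h>
#include <vector>
#include <cstdint>

// Build one 3D texture from a stack of 2D grayscale slices.
GLuint createVolumeTexture(int width, int height, int depth,
                           const std::vector<std::vector<uint8_t>>& slices)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_3D, tex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

    // Allocate storage for the whole volume once.
    glTexImage3D(GL_TEXTURE_3D, 0, GL_R8, width, height, depth, 0,
                 GL_RED, GL_UNSIGNED_BYTE, nullptr);

    // Upload one TIFF slice per z-layer.
    for (int z = 0; z < depth; ++z) {
        glTexSubImage3D(GL_TEXTURE_3D, 0,
                        0, 0, z,           // x, y, z offset
                        width, height, 1,  // one slice
                        GL_RED, GL_UNSIGNED_BYTE, slices[z].data());
    }

    // Trilinear filtering so the renderer can sample between slices.
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
    return tex;
}
```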
The second problem is achieving interactive frame rates. For this I will probably need some sort of downsampling combined with "details on demand": loading a higher-resolution dataset when zooming in, or something along those lines.
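For the downsampling part, the simplest thing I can think of is a 2×2×2 box filter on the CPU to produce a coarse preview volume that always fits in GPU memory, with higher-resolution bricks streamed in on demand when the user zooms in. A minimal sketch (assuming even dimensions, purely for illustration):

```cpp
#include <vector>
#include <cstdint>
#include <cstddef>

// Halve the volume resolution along each axis by averaging 2x2x2 blocks.
std::vector<uint8_t> downsample2x(const std::vector<uint8_t>& src,
                                  int w, int h, int d)
{
    const int dw = w / 2, dh = h / 2, dd = d / 2;
    std::vector<uint8_t> dst(static_cast<size_t>(dw) * dh * dd);

    auto at = [&](int x, int y, int z) -> uint32_t {
        return src[(static_cast<size_t>(z) * h + y) * w + x];
    };

    for (int z = 0; z < dd; ++z)
        for (int y = 0; y < dh; ++y)
            for (int x = 0; x < dw; ++x) {
                uint32_t sum = 0;
                for (int dz = 0; dz < 2; ++dz)
                    for (int dy = 0; dy < 2; ++dy)
                        for (int dx = 0; dx < 2; ++dx)
                            sum += at(2 * x + dx, 2 * y + dy, 2 * z + dz);
                dst[(static_cast<size_t>(z) * dh + y) * dw + x] =
                    static_cast<uint8_t>(sum / 8);  // average of 8 voxels
            }
    return dst;
}
```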
For the first problem, the approach I found is (a sketch of this idea follows the list):
- slice the full volume into a stack of polygons (proxy geometry) and generate the corresponding texture data;
- perform all basic transformations in the vertex-processing stage;
- rasterize the slice polygons into fragments, with the corresponding depth and texture coordinates interpolated per fragment;
- in the fragment stage, use shader programming to improve how each fragment is rendered.
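To make the list above more concrete, here is a minimal sketch of the proxy-slice idea as I understand it: axis-aligned quads cutting through the unit cube, each carrying 3D texture coordinates, drawn back-to-front with alpha blending. The buffer/shader setup is omitted, and real implementations typically use view-aligned slices or a single-pass raycasting fragment shader, but the data flow is the same:

```cpp
#include <GL/glew.h>
#include <vector>

struct SliceVertex { float pos[3]; float tex[3]; };

// Generate numSlices quads along the z axis of the unit cube [0,1]^3.
// Positions double as 3D texture coordinates into the volume texture.
std::vector<SliceVertex> buildProxySlices(int numSlices)
{
    std::vector<SliceVertex> verts;
    verts.reserve(static_cast<size_t>(numSlices) * 6);
    for (int i = 0; i < numSlices; ++i) {
        const float z = (i + 0.5f) / numSlices;
        const float quad[6][2] = { {0,0},{1,0},{1,1}, {0,0},{1,1},{0,1} };
        for (const auto& q : quad)
            verts.push_back({ { q[0], q[1], z }, { q[0], q[1], z } });
    }
    return verts;
}

// Per frame: orient/sort the slices for the current view, bind the 3D
// texture, enable blending, and draw the slice triangles back-to-front.
void drawSlices(GLuint vao, GLuint volumeTex, int vertexCount)
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  // back-to-front compositing
    glBindTexture(GL_TEXTURE_3D, volumeTex);
    glBindVertexArray(vao);
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);
}
```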
But I have no concrete ideas on how to start implementing this approach.
I would like to hear some fresh ideas, or suggestions on how to start implementing the approach described above.