Idea: How to Interactively Display a Large Series of Images Using Direct GPU-Based Volume Rendering

I'm looking for ideas on how to convert a series of 2000+ TIFF images (30+ GB in total) into a dataset that can be visualized at interactive frame rates using GPU-based volume rendering (OpenCL / OpenGL / GLSL). I want to use a direct volume rendering approach instead of surface fitting (i.e. raycasting instead of marching cubes).

The problem is twofold. First, I need to convert my images into a 3D dataset. The first thing that occurred to me was to treat all the images as 2D textures and simply stack them to build a 3D texture.
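To make that concrete, here is a minimal sketch of the stacking step in C++/OpenGL as I currently imagine it (my own illustration; it assumes each TIFF slice has already been decoded into an 8-bit grayscale buffer, e.g. with libtiff, and omits error checking):

    #include <GL/glew.h>
    #include <cstdint>
    #include <vector>

    // Build one 3D texture from a stack of decoded slices (width x height each).
    GLuint createVolumeTexture(const std::vector<std::vector<uint8_t>>& slices,
                               int width, int height)
    {
        const int depth = static_cast<int>(slices.size());

        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_3D, tex);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // 8-bit rows of any width

        // Allocate the full volume first, then upload slice by slice so the
        // whole stack never has to sit in one contiguous host buffer.
        glTexImage3D(GL_TEXTURE_3D, 0, GL_R8, width, height, depth, 0,
                     GL_RED, GL_UNSIGNED_BYTE, nullptr);
        for (int z = 0; z < depth; ++z)
            glTexSubImage3D(GL_TEXTURE_3D, 0, 0, 0, z, width, height, 1,
                            GL_RED, GL_UNSIGNED_BYTE, slices[z].data());

        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
        return tex;
    }

Of course, a 30+ GB stack will not fit on the GPU as a single 3D texture, so in practice something like this would have to be applied per sub-volume (brick) rather than to the whole dataset, which ties into the second problem below.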

The second problem is achieving interactive frame rates. For this I will probably need some kind of downsampling in combination with "detail on demand": loading a high-resolution subset of the dataset when zooming in, or something like that.
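One building block I could imagine for the downsampling part (a sketch under my own assumptions, not a finished scheme) is a simple 2x2x2 box filter that produces each coarser level of a multi-resolution pyramid on the CPU:

    #include <cstdint>
    #include <vector>

    // Produce one coarser level by averaging 2x2x2 voxel blocks.
    // Assumes even dimensions for brevity.
    std::vector<uint8_t> downsample(const std::vector<uint8_t>& src,
                                    int w, int h, int d)
    {
        std::vector<uint8_t> dst((w / 2) * (h / 2) * (d / 2));
        auto at = [&](int x, int y, int z) -> int {
            return src[(z * h + y) * w + x];
        };
        for (int z = 0; z < d / 2; ++z)
            for (int y = 0; y < h / 2; ++y)
                for (int x = 0; x < w / 2; ++x) {
                    int sum = 0;
                    for (int dz = 0; dz < 2; ++dz)
                        for (int dy = 0; dy < 2; ++dy)
                            for (int dx = 0; dx < 2; ++dx)
                                sum += at(2 * x + dx, 2 * y + dy, 2 * z + dz);
                    dst[(z * (h / 2) + y) * (w / 2) + x] =
                        static_cast<uint8_t>(sum / 8);
                }
        return dst;
    }

The coarse levels could stay resident on the GPU as an always-available overview while full-resolution bricks are streamed in only for the region being inspected.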

The first approach I found is:

  • polygonize the full volume into proxy slices in a single pass and generate the corresponding texture for each slice;
  • perform all the basic transformations in the vertex processor (vertex shader);
  • rasterize the polygonal slices into smaller fragments, recording the corresponding depth and texture coordinates per fragment;
  • in the fragment stage, use fragment shader programming to do the actual per-fragment rendering (a minimal sketch of my reading of this follows the list).
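As far as I understand it, this boils down to rendering many proxy quads back-to-front with alpha blending and letting a fragment shader sample the 3D texture. Here is a minimal sketch of that (the blending setup, the variable names, and the trivial transfer function are my own assumptions; the GLSL is embedded as a C++ string):

    #include <GL/glew.h>

    // Fragment shader for the proxy-slice pass: sample the volume and apply a
    // trivial transfer function (density -> grey, density -> opacity).
    const char* sliceFragmentShader = R"(
        #version 330 core
        uniform sampler3D volume;   // the stacked TIFF slices
        in vec3 texCoord;           // 3D texture coordinate from the vertex shader
        out vec4 fragColor;
        void main() {
            float density = texture(volume, texCoord).r;
            fragColor = vec4(vec3(density), density);
        }
    )";

    // Back-to-front compositing of the slices via standard "over" blending.
    void setupSliceBlending()
    {
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glDepthMask(GL_FALSE);   // slices should not occlude one another
    }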

But beyond that sketch, I have no concrete idea of how to start implementing the full approach.

I would like to hear some fresh ideas, or suggestions on how to start implementing the approach shown above.

2 answers

If anyone has fresh ideas in this area, they are probably busy trying to develop and publish them; this is an active area of research.

In your bulleted approach, it sounds like you have outlined the basic slice-based (fragment-shaded) volume rendering method. This can give good results, but many people are switching to GPU raycasting. There is an example of this in the CUDA SDK if you are interested.
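To give a flavor of what the raycasting variant looks like, here is a minimal single-pass fragment shader, embedded as a C++ string (my own sketch, not the CUDA SDK sample; the uniform names, step size, and trivial transfer function are all placeholder assumptions). The entry points come from rasterizing the front faces of the volume's bounding box with their texture coordinates:

    const char* raycastFragmentShader = R"(
        #version 330 core
        uniform sampler3D volume;
        uniform vec3 rayOriginTex;   // camera position in [0,1]^3 texture space
        in vec3 entryPointTex;       // front face of the volume box, rasterized
        out vec4 fragColor;

        void main() {
            vec3 dir   = normalize(entryPointTex - rayOriginTex);
            vec3 pos   = entryPointTex;
            vec4 accum = vec4(0.0);
            const int   MAX_STEPS = 512;
            const float STEP      = 1.0 / 256.0;

            for (int i = 0; i < MAX_STEPS; ++i) {
                float density = texture(volume, pos).r;
                vec4  src     = vec4(vec3(density), density * 0.05);
                // Front-to-back "over" compositing.
                accum.rgb += (1.0 - accum.a) * src.a * src.rgb;
                accum.a   += (1.0 - accum.a) * src.a;
                if (accum.a > 0.99) break;   // early ray termination
                pos += dir * STEP;
                if (any(lessThan(pos, vec3(0.0))) ||
                    any(greaterThan(pos, vec3(1.0)))) break;
            }
            fragColor = accum;
        }
    )";

The early ray termination shown here is one of the reasons raycasting tends to beat slicing: the loop simply stops once the accumulated opacity saturates.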

A good method for hierarchical volume rendering has been described in detail by Crassin et al. in a paper called GigaVoxels. It uses an octree-based approach and only loads the bricks into memory when they are needed.
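Very roughly, the on-demand brick idea can be pictured like this (my own illustration of the concept, not code from the paper; the disk I/O and atlas upload are stubs):

    #include <cstdint>
    #include <map>
    #include <tuple>
    #include <vector>

    // One node of the octree: a fixed-size brick of voxels at some LOD level.
    struct BrickKey { int level, x, y, z; };

    bool operator<(const BrickKey& a, const BrickKey& b)
    {
        return std::tie(a.level, a.x, a.y, a.z) <
               std::tie(b.level, b.x, b.y, b.z);
    }

    // Placeholder: read a brick's voxels from the preprocessed dataset on disk.
    std::vector<uint8_t> loadBrickFromDisk(const BrickKey&) { return {}; }
    // Placeholder: copy voxels into a free slot of a large 3D texture atlas
    // and return an identifier for that slot.
    unsigned uploadBrickToAtlas(const std::vector<uint8_t>&) { return 0; }

    struct BrickCache {
        std::map<BrickKey, unsigned> resident;   // brick -> atlas slot

        // Called with requests fed back from the previous frame's render pass,
        // so only bricks the renderer actually needed get pulled into memory.
        void request(const BrickKey& key)
        {
            if (resident.count(key)) return;     // already on the GPU
            resident[key] = uploadBrickToAtlas(loadBrickFromDisk(key));
        }
    };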

A very good introductory book in this area is Real-Time Volume Graphics.


I have done a small amount of volume rendering, although my code generated an isosurface using marching cubes and displayed that. However, in my humble self-education in volume rendering, I came across an interesting short article on volume rendering on common computer hardware. It comes with example source code too. I never got around to trying it, but it seemed promising. It is DirectX, not OpenGL, but maybe it can give you some ideas and a place to start.

