Ray setup in CUDA using supplied OpenGL matrices

I am working on a project where I am porting my CUDA code to run as a module inside a large application that maintains its own OpenGL state. My module does volume rendering. I am now faced with the problem that I need to set up my camera rays to exactly reproduce the OpenGL camera, so that my rendering of the volume lines up with the rest of the scene.

At the point where my CUDA code is called, a view matrix (with no model matrix applied) and a projection matrix are available. I have already extracted the frustum parameters and the camera position in world space.
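The extraction step mentioned above can be sketched as follows. This is a host-side C++ sketch, not code from the question: it assumes a standard glFrustum/gluPerspective-style perspective matrix in OpenGL's column-major layout, and the names `FrustumParams`, `extractFrustum`, and `extractCamPos` are illustrative.

```cpp
#include <cassert>
#include <cmath>

// Column-major 4x4 matrix as OpenGL stores it: element (row r, col c) at m[c*4 + r].
struct FrustumParams { float right, top, near_; };  // near_ avoids the Windows 'near' macro

// Recover right/top/near from a standard perspective projection matrix.
// Assumes P was built like glFrustum/gluPerspective (no oblique terms).
FrustumParams extractFrustum(const float P[16]) {
    FrustumParams f;
    // P(2,2) = -(f+n)/(f-n) and P(2,3) = -2fn/(f-n)  =>  n = P(2,3) / (P(2,2) - 1)
    f.near_ = P[14] / (P[10] - 1.0f);
    f.right = f.near_ / P[0];   // P(0,0) = n / right
    f.top   = f.near_ / P[5];   // P(1,1) = n / top
    return f;
}

// Camera position in world space from the view matrix V:  camPos = -R^T * t,
// where R is the upper-left 3x3 rotation and t the translation column of V.
void extractCamPos(const float V[16], float out[3]) {
    for (int i = 0; i < 3; ++i)
        out[i] = -(V[i*4+0]*V[12] + V[i*4+1]*V[13] + V[i*4+2]*V[14]);
}
```

This only works for a symmetric, non-oblique frustum; an off-center projection would need the left/bottom terms P(0,2) and P(1,2) as well.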

    u *= -c_pp.right;
    v *= -c_pp.top;

    Ray eyeRay;
    eyeRay.o = make_float3(c_camPosition);               // origin
    eyeRay.d = normalize(make_float3(u, v, -c_pp.near)); // direction

u and v are normalized screen coordinates in [-1, 1]. c_pp describes the view frustum via top , right and near . Now I am looking for the matrix I have to multiply eyeRay.d by so that it points in the correct world-space direction — presumably the view matrix, or its transpose or inverse.
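On the last point (transpose vs. inverse): for a rigid view matrix (rotation plus translation, no scaling), the transpose of the rotation block equals its inverse, so for transforming a *direction* either one gives the same result. A host-side C++ sketch, with OpenGL's column-major storage assumed; both helper names are illustrative:

```cpp
#include <cassert>
#include <cmath>

// Apply the upper-left 3x3 of a column-major 4x4 matrix to a direction (w = 0,
// so the translation column is ignored).
void mulDir3x3(const float M[16], const float d[3], float out[3]) {
    for (int r = 0; r < 3; ++r)
        out[r] = M[0*4+r]*d[0] + M[1*4+r]*d[1] + M[2*4+r]*d[2];
}

// Same, but with the 3x3 block transposed: for a pure rotation this IS the inverse.
void mulDirTransposed3x3(const float M[16], const float d[3], float out[3]) {
    for (int r = 0; r < 3; ++r)
        out[r] = M[r*4+0]*d[0] + M[r*4+1]*d[1] + M[r*4+2]*d[2];
}
```

If the matrix carries any scale or shear, the transpose shortcut no longer holds and a true inverse is required.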

Update

Changed u *= -c_pp.right to u *= c_pp.right , and everything works after multiplying eyeRay.d by the inverse of the view matrix.

Full fixed code:

    u *= c_pp.right;
    v *= -c_pp.top;

    Ray eyeRay;
    eyeRay.o = make_float3(c_camPosition);     // origin
    eyeRay.d = make_float3(u, v, -c_pp.near);  // direction
    eyeRay.d = mul(c_invViewMatrix, eyeRay.d);

where c_invViewMatrix is the inverse of the view matrix.
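The names c_invViewMatrix and mul resemble the CUDA SDK volumeRender sample, where the inverse view matrix is stored as three float4 rows and the multiply uses only the 3x3 rotation block when transforming a direction. A host-side C++ sketch of such a helper is below; float3/float4 are redefined locally so it compiles outside CUDA, and mulDir/mulPoint are illustrative names, not the sample's API:

```cpp
#include <cassert>
#include <cmath>

struct float3 { float x, y, z; };
struct float4 { float x, y, z, w; };
struct float3x4 { float4 m[3]; };  // three rows of the (inverse) view matrix

// Transform a direction: uses only the 3x3 rotation block (w = 0, translation ignored).
float3 mulDir(const float3x4 &M, const float3 &v) {
    float3 r;
    r.x = M.m[0].x*v.x + M.m[0].y*v.y + M.m[0].z*v.z;
    r.y = M.m[1].x*v.x + M.m[1].y*v.y + M.m[1].z*v.z;
    r.z = M.m[2].x*v.x + M.m[2].y*v.y + M.m[2].z*v.z;
    return r;
}

// Transform a point: additionally adds the translation column (w = 1).
float3 mulPoint(const float3x4 &M, const float3 &v) {
    float3 r = mulDir(M, v);
    r.x += M.m[0].w;
    r.y += M.m[1].w;
    r.z += M.m[2].w;
    return r;
}
```

Using the direction variant for eyeRay.d is what makes the fixed code work: a ray direction must not pick up the camera translation, only its orientation. Note the fixed code no longer normalizes eyeRay.d; for a pure rotation the length is preserved, but renormalizing after the multiply is harmless if in doubt.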

1 answer

The original poster answered this question in an update to the question itself. The answer is to change u *= -c_pp.right to u *= c_pp.right (a sign change) and to multiply eyeRay.d by the inverse of the view matrix. See above.

I added this answer to reduce the number of unanswered questions in the CUDA tag and to make the solution easier to find.

