Help me track down this transform bug in my ray tracer

I won't post any code for this question because it would take too much space, but I will explain conceptually what I am doing.

I am building a simple ray tracer that uses affine transforms. That is, I intersect all camera rays against generic unit shapes. Every shape has an associated affine transformation, and each ray is first multiplied by the inverse of that shape's transformation before being intersected with the generic shape.

So, for example, say I need a sphere of radius 3 located at (10, 10, 10). I create a generic sphere and give it a transformation matrix representing that scale and translation.
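Conceptually, the matrix is built something like this NumPy sketch (the helper names are made up for illustration, since I'm not posting my real code): scale the unit sphere by 3, then translate it to (10, 10, 10).

```python
import numpy as np

def translate(tx, ty, tz):
    """4x4 homogeneous translation matrix."""
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

def scale(s):
    """4x4 homogeneous uniform scale matrix."""
    m = np.eye(4)
    m[0, 0] = m[1, 1] = m[2, 2] = s
    return m

# Radius-3 sphere at (10, 10, 10): scale first, then translate.
sphere_transform = translate(10, 10, 10) @ scale(3)
```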

I create a ray in camera coordinates. I multiply the ray by the inverse of the sphere's transformation matrix and intersect it with the generic sphere (r = 1 at (0, 0, 0)). I take the distance t along this transformed ray at the intersection point, use it to find the generic (object-space) normal and the point along the original ray, and save them in a hit record (together with the distance t and the shape's transformation).
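In code terms, the intersection step looks roughly like this (again a sketch with illustrative names, not my actual code):

```python
import numpy as np

def transform_point(m, p):
    """Apply a 4x4 affine matrix to a 3D point (w = 1 picks up translation)."""
    return (m @ np.array([p[0], p[1], p[2], 1.0]))[:3]

def transform_vector(m, v):
    """Apply a 4x4 affine matrix to a 3D direction (w = 0, no translation)."""
    return (m @ np.array([v[0], v[1], v[2], 0.0]))[:3]

def intersect_unit_sphere(origin, direction):
    """Smallest positive t where origin + t*direction hits the unit sphere, or None."""
    a = np.dot(direction, direction)
    b = 2.0 * np.dot(origin, direction)
    c = np.dot(origin, origin) - 1.0
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None
    t0 = (-b - np.sqrt(disc)) / (2.0 * a)
    t1 = (-b + np.sqrt(disc)) / (2.0 * a)
    if t0 > 1e-6:
        return t0
    return t1 if t1 > 1e-6 else None

def intersect(sphere_transform, ray_origin, ray_dir):
    inv = np.linalg.inv(sphere_transform)
    # Inverse-transform the ray into the sphere's object space.
    local_origin = transform_point(inv, ray_origin)
    local_dir = transform_vector(inv, ray_dir)  # do NOT renormalize, or t stops matching the world ray
    t = intersect_unit_sphere(local_origin, local_dir)
    if t is None:
        return None
    local_hit = local_origin + t * local_dir
    local_normal = local_hit  # unit sphere at origin: normal == hit position
    return t, local_normal
```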

When it comes time to figure out the color at this intersection, I take the inverse transpose of the transformation and multiply it by the generic normal to get the world-space normal. The intersection point is just the point along the original, untransformed ray when I use the t value from the intersection of the inverse-transformed ray.
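Roughly, the shading inputs are computed like this (sketch only):

```python
import numpy as np

def shade_inputs(sphere_transform, ray_origin, ray_dir, t, local_normal):
    """World-space hit point and normal for shading."""
    # The hit point is the original, untransformed ray evaluated at t.
    world_point = np.asarray(ray_origin) + t * np.asarray(ray_dir)
    # Normals transform by the inverse transpose of the upper-left 3x3.
    normal_matrix = np.linalg.inv(sphere_transform[:3, :3]).T
    world_normal = normal_matrix @ np.asarray(local_normal)
    world_normal /= np.linalg.norm(world_normal)
    return world_point, world_normal
```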

The problem is that when I do this, the transformations have weird effects. The main one is that the transformations seem to drag the lights in the scene along with them. If I render a series of frames and apply a slightly larger rotation to the sphere in each one, the lighting appears to rotate around the sphere with it. Here is an example.

Honestly, I can't figure out what I'm doing wrong here, and I'm tearing my hair out. I can't come up with any good explanation for this behavior. Any help would be greatly appreciated.

2 answers

DISCLAIMER: I'm not an expert in ray tracing, and I also initially missed the inverse-transpose step in your description of the problem.

When you calculate the normal at the intersection, you are in the transformed (object) coordinate space, right? So the normal is expressed in that coordinate space. Later you only carry this vector over to the real intersection point, but the vector itself is never rotated back, so the normal is still stuck in the rotated space.

Say you have the generic sphere, red on positive x and blue on negative x. Suppose the camera is at (20, 0, 0) and the sphere is rotated 180 degrees around the y axis (no translation). Then the camera ray with direction (-1, 0, 0) becomes the transformed ray with origin (-20, 0, 0) and direction (1, 0, 0), which hits the generic sphere from the negative-x side at (-1, 0, 0) with t = 19. The normal there is (-1, 0, 0). When you carry that normal over to the real intersection point without rotating it back, it will still be (-1, 0, 0). So following this normal you will get the right color, but the lighting will be computed as if the light were on the "back side" of the sphere.
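You can check this numerically with a quick NumPy sketch (names are illustrative, not your code):

```python
import numpy as np

# 180-degree rotation about the y axis (the sphere's only transform).
c, s = np.cos(np.pi), np.sin(np.pi)
rot_y = np.array([[ c, 0, s],
                  [ 0, 1, 0],
                  [-s, 0, c]])

ray_origin = np.array([20.0, 0.0, 0.0])
ray_dir = np.array([-1.0, 0.0, 0.0])

# Inverse-transform the ray into object space.
inv = np.linalg.inv(rot_y)
local_origin = inv @ ray_origin   # (-20, 0, 0)
local_dir = inv @ ray_dir         # (1, 0, 0)

t = 19.0                                     # hits the unit sphere at (-1, 0, 0)
local_normal = local_origin + t * local_dir  # (-1, 0, 0)

# Wrong: use the object-space normal directly -> lights appear to rotate.
# Right: rotate it back with the inverse transpose (== rot_y here, pure rotation).
world_normal = np.linalg.inv(rot_y).T @ local_normal  # (1, 0, 0)
```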


You have chosen to do your intersections in object coordinates rather than world coordinates. IMHO this is a mistake (unless you are doing a lot of instancing). Given that design, though, you must compute the intersection point in object space, as well as the normal in object space, and then convert both back to world coordinates using the object's transformation, NOT its inverse (strictly speaking, normals need the inverse transpose, which coincides with the transform for pure rotations). That is how the object gets into world space, and how anything in object space gets back into world space. I am not sure how to convert the t parameter, so I would start by converting the intersection point first, until you get correct results.
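Something like this, as a sketch (the function name is illustrative, and this assumes a 4x4 affine object transform):

```python
import numpy as np

def to_world(obj_transform, local_point, local_normal):
    """Convert an object-space hit back to world space."""
    # Points go through the object's forward transform (w = 1 picks up translation).
    world_point = (obj_transform @ np.append(local_point, 1.0))[:3]
    # Normals need the inverse transpose of the 3x3 part; for pure rotations this
    # equals the forward transform, but it differs under non-uniform scale.
    n = np.linalg.inv(obj_transform[:3, :3]).T @ np.asarray(local_normal)
    return world_point, n / np.linalg.norm(n)
```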

