I'm not going to post any code for this question because it would take too much context, but I'll explain conceptually what I'm doing.
I am building a simple ray tracer that uses affine transforms. That is, I intersect all rays, which start in camera coordinates, against canonical shapes. Each shape has an associated affine transform, and a ray is first multiplied by the inverse of that transform before being intersected with the shape's canonical form in object space.
So, for example, say I want a sphere of radius 3 located at (10, 10, 10). I create a sphere and give it a transformation matrix representing that scale and translation.
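Roughly, building that transform looks like the sketch below (Python/NumPy just for illustration; `translate` and `scale` are hypothetical helper names, not my actual code):

```python
import numpy as np

def translate(tx, ty, tz):
    """4x4 homogeneous translation matrix."""
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

def scale(sx, sy, sz):
    """4x4 homogeneous scale matrix."""
    return np.diag([sx, sy, sz, 1.0])

# Radius-3 sphere at (10, 10, 10): scale the unit sphere, then translate it.
sphere_transform = translate(10, 10, 10) @ scale(3, 3, 3)
sphere_inverse = np.linalg.inv(sphere_transform)
```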
I create a ray in camera coordinates. I multiply the ray by the inverse of the sphere's transformation matrix and intersect it with the canonical sphere (r = 1 centered at (0, 0, 0)). I take the distance t along this object-space ray at the intersection point and use it to find the object-space normal and the point along the original ray, and I save those into an intersection record (together with the distance t and the shape's transform).
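In code, that step is roughly the following (a minimal sketch with NumPy, assuming the ray is stored as a world-space origin and direction; the function names are illustrative):

```python
import numpy as np

def transform_ray(inverse, origin, direction):
    """Bring a world-space ray into the shape's object space.
    Points transform with w = 1, directions with w = 0 (no translation)."""
    o = (inverse @ np.append(origin, 1.0))[:3]
    d = (inverse @ np.append(direction, 0.0))[:3]
    return o, d

def intersect_unit_sphere(o, d):
    """Smallest non-negative t where o + t*d hits the sphere r = 1 at the
    origin, or None if the ray misses."""
    a = np.dot(d, d)
    b = 2.0 * np.dot(o, d)
    c = np.dot(o, o) - 1.0
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None
    t = (-b - np.sqrt(disc)) / (2.0 * a)
    if t < 0.0:
        t = (-b + np.sqrt(disc)) / (2.0 * a)
    return t if t >= 0.0 else None
```

For the unit sphere, the object-space normal at the hit is just the object-space hit point `o + t * d`.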
When it comes time to figure out the color at this intersection, I take the inverse transpose of the shape's transform and multiply it by the object-space normal to get the world-space normal. The intersection point is just the point along the original, untransformed ray at the t value found by intersecting the inverse-transformed ray.
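So at shading time I'm effectively doing something like this (sketch; `normal_obj` is the object-space normal from the canonical sphere, and reusing t on the original ray assumes the object-space direction was not renormalized after transforming):

```python
import numpy as np

def world_normal(transform, normal_obj):
    """Object-space normal -> world-space normal via the inverse transpose
    of the upper-left 3x3 of the shape's transform."""
    inv_transpose = np.linalg.inv(transform[:3, :3]).T
    n = inv_transpose @ normal_obj
    return n / np.linalg.norm(n)

def world_hit_point(ray_origin, ray_direction, t):
    """Point along the original, untransformed world-space ray,
    reusing the t from the object-space intersection."""
    return ray_origin + t * ray_direction
```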
The problem is that when I do this, the transformations have weird side effects. The main one is that transforming a shape seems to drag the lights in the scene along with it. If I render a sequence of frames and apply a slightly larger rotation to the sphere in each frame, the lights appear to rotate around the sphere with it. Here is an example:

[example images: the lighting appears to rotate around the sphere as the sphere's rotation increases]

Honestly, I can't figure out what I'm doing wrong here, and I'm tearing my hair out. I cannot come up with any good reason for this behavior. Any help would be greatly appreciated.