3D affine transform problem in ray tracing

Hi all,

I am writing a somewhat unusual ray tracer to calculate the radiative heat transfer properties of various objects in a scene. In this ray tracer, random rays are launched from the surface of each primitive object into the scene and checked for intersections.

This particular algorithm requires that each ray be generated in primitive space, transformed by the originating object's affine transform into world space, and then transformed back into the primitive space of every other object in the scene to check for intersection.

Everything works fine until I introduce an anisotropic scale, for example scaling an object by [2 2 1] (isotropic scales work beautifully). This makes me think I am not transforming the directional component of the ray correctly. Currently, I transform the ray direction from primitive space to world space by multiplying the direction by the transpose of the source object's inverse transformation matrix, and then transform the ray from world space into each destination primitive's space by multiplying by the transpose of that object's transformation matrix.
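For concreteness, here is a minimal sketch of what I am currently doing, written with numpy and a column-vector convention; M_src and M_dst are stand-ins for the source and destination objects' 4x4 matrices, and d is a 3-component ray direction (these names are just for illustration, not my actual code):

    import numpy as np

    def direction_to_world_current(M_src, d):
        # Current approach: multiply the direction by the transpose of the
        # source object's *inverse* transform (i.e. the inverse transpose).
        d4 = np.append(d, 0.0)                     # w = 0 for a direction
        return (np.linalg.inv(M_src).T @ d4)[:3]

    def direction_to_primitive_current(M_dst, d_world):
        # Current approach: multiply by the transpose of the destination
        # object's transform.
        d4 = np.append(d_world, 0.0)
        return (M_dst.T @ d4)[:3]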

I have also tried multiplying by the source primitive's transformation matrix to go from primitive space to world space, and by the destination object's inverse transformation to go from world space back to primitive space, but that was unsuccessful as well.

I believe that a ray launched from the surface of a primitive object (at a random point, in a random direction) should be transformed the same way a surface normal is in ordinary ray tracing, but I'm not sure.

Do any of you experts know what the flaw in my methodology is? Feel free to ask if additional information is required.


The basic algorithm for this ray tracer is as follows:

    for each object, i, in scene {
        for each ray, r, in number of rays per object {
            generate random ray from primitive i
            transform ray from primitive space of i to world space
            for each object, j, in scene {
                transform ray to primitive space of object j
                check for intersection with object j
            }
        }
    }

In the hope of making the issue clear, let's look at an example. Suppose I have a cylinder extending along the z axis (unit radius and height) and an annulus lying in the xy plane with an inner diameter of 7 and an outer diameter of 8. I want to scale the cylinder by a factor of 2 in the x and y directions (but not in z), so my affine transformation matrices look like this:

    M(cylinder) = | 2  0  0  0 |     M^-1(cylinder) = | 0.5  0    0  0 |
                  | 0  2  0  0 |                      | 0    0.5  0  0 |
                  | 0  0  1  0 |                      | 0    0    1  0 |
                  | 0  0  0  1 |                      | 0    0    0  1 |

    M(annulus)  = | 1  0  0  0 |     M^-1(annulus)  = | 1  0  0  0 |
                  | 0  1  0  0 |                      | 0  1  0  0 |
                  | 0  0  1  0 |                      | 0  0  1  0 |
                  | 0  0  0  1 |                      | 0  0  0  1 |
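In code, those matrices would be built as follows (a quick numpy sketch just to make the example concrete; M_cyl and M_ann are illustrative names):

    import numpy as np

    # Cylinder scaled by 2 in x and y, unchanged in z.
    M_cyl = np.diag([2.0, 2.0, 1.0, 1.0])
    M_cyl_inv = np.linalg.inv(M_cyl)        # diag(0.5, 0.5, 1, 1)

    # The annulus is untransformed, so both of its matrices are the identity.
    M_ann = np.eye(4)
    M_ann_inv = np.eye(4)

    assert np.allclose(M_cyl @ M_cyl_inv, np.eye(4))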

Suppose now that I have a ray with a random starting point s on the surface of the cylinder and a random direction c away from that surface, giving the ray r_os(t) = s + c*t in the cylinder's object space.

I want to transform this ray from the cylinder's primitive (object) space into world space, and then check it for intersection with the other objects in the scene (here, the annulus).

The first question: what is the correct way to transform the ray r_os into world space, r_ws, using M(cylinder) and/or M^-1(cylinder)? The second question: how do I correctly transform the ray r_ws from world space into another object's space to check for intersection, using M(annulus) and M^-1(annulus)?
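To make the two questions concrete, here is a minimal sketch (numpy, column vectors, reusing the M_cyl / M_ann_inv names from the sketch above) of the transformations I am asking about. The only part I'm reasonably sure of is the usual homogeneous convention that a point uses w = 1 and a direction uses w = 0; whether the direction should be multiplied by M, by M^-1, or by an inverse transpose is exactly what I'm unsure about:

    import numpy as np

    def transform_point(M, p):
        # Points are affected by translation: w = 1.
        return (M @ np.append(p, 1.0))[:3]

    def transform_direction(M, d):
        # Directions ignore translation: w = 0.
        return (M @ np.append(d, 0.0))[:3]

    # Question 1, cylinder object space -> world space, r_ws(t) = s_ws + c_ws*t:
    #   s_ws = transform_point(M_cyl, s)
    #   c_ws = transform_direction(M_cyl, c)        # or M_cyl's inverse transpose?

    # Question 2, world space -> annulus object space:
    #   s_ann = transform_point(M_ann_inv, s_ws)
    #   c_ann = transform_direction(M_ann_inv, c_ws)  # or M_ann's transpose?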

Additional information:

This application is designed to calculate radiative heat transfer between N objects. A ray is launched from a random point on an object, and its direction is chosen at random within a hemispherical distribution oriented about the surface normal at that point.
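The direction sampling looks roughly like the sketch below (uniform hemisphere sampling about a surface normal n; this is illustrative only, the exact distribution isn't the point of my question):

    import numpy as np

    def random_hemisphere_direction(n, rng=None):
        """Sample a unit direction uniformly over the hemisphere around normal n."""
        rng = np.random.default_rng() if rng is None else rng
        n = n / np.linalg.norm(n)
        # Sample a direction on the full sphere, then flip it into the hemisphere.
        v = rng.normal(size=3)
        v /= np.linalg.norm(v)
        return v if np.dot(v, n) >= 0.0 else -v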


Here are some visualizations of my problem.

Ray direction distribution when first generated: [figure: initial ray direction distribution]

Directions transformed to world coordinates using the transformation matrix M: [figure: directions transformed by M]

Directions transformed to world coordinates using the inverse transformation matrix M^-1: [figure: directions transformed by M^-1]

+4
2 answers

This came up the other day in this question.

One of the answers refers to an article in Ray Tracing News that discusses using the inverse transpose for transforming normals.

I have to agree with JCooper on the question of "what is actually going wrong?" My first thought is that since you appear to be simulating thermal radiation, you have to be careful with non-uniform scaling of objects. If you have a uniform distribution of "photon" launch points on an object's emitting surface and then apply a non-uniform scale to that object, you will end up with a non-uniform distribution of photons leaving the surface. That is one possible mistake, but since you don't say exactly what goes wrong, it's hard to tell whether this is your problem.
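A quick numeric sketch of that effect (illustrative only; the circle, scale factor, and point count are made up): sample points uniformly around a unit circle in object space, scale by (2, 1), and the spacing between neighboring points on the world-space boundary is no longer uniform, so neither is the emitted-photon density per unit length.

    import numpy as np

    theta = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)
    pts_obj = np.column_stack([np.cos(theta), np.sin(theta)])   # uniform on the circle
    pts_world = pts_obj * np.array([2.0, 1.0])                  # non-uniform scale

    # Spacing between consecutive world-space points now varies by roughly 2x.
    spacing = np.linalg.norm(np.diff(pts_world, axis=0), axis=1)
    print(spacing.min(), spacing.max())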

To answer your questions about the right way to perform the transformations, follow this link to Ray Tracing News.

+1

The inverse transpose keeps the rotation component unchanged but inverts the scale, which means the scaling is still there. This is the correct behavior for normals. Consider, in 2D, the segment from (0,0) to (.707, .707), with normal (-.707, .707). If we scale by (s, 1), the segment goes from (0,0) to (s*.707, .707). In the limit as s grows, we essentially have a flat line parallel to the x axis, which means the normal should point along the y axis; and indeed, the inverse transpose gives the normal (-.707/s, .707). However, it should be clear from this example that the transformed vector is no longer unit length. Perhaps you just need to normalize the direction component?
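Here is that example as a small numeric check (a numpy sketch): scale by (s, 1), transform the normal by the inverse transpose, and see that it stays perpendicular to the transformed segment but is no longer unit length.

    import numpy as np

    s = 2.0
    M = np.diag([s, 1.0])                      # scale x by s
    seg = np.array([0.707, 0.707])             # segment direction
    normal = np.array([-0.707, 0.707])

    seg_t = M @ seg                            # transformed segment direction
    normal_t = np.linalg.inv(M).T @ normal     # inverse transpose -> (-0.707/s, 0.707)

    print(np.dot(seg_t, normal_t))             # ~0: still perpendicular
    print(np.linalg.norm(normal_t))            # != 1: needs re-normalizing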

If we use the property that a transformation matrix can be represented as a scale sandwiched between two rotations (a la SVD), then your outgoing transformation matrix looks like R2out * Sout^-1 * R1out, and your incoming transformation matrix looks like R1in^-1 * Sin * R2in^-1 (how I wish SO supported MathJax...). That seems right, provided you normalize your vectors again afterwards.
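That decomposition claim is easy to check numerically (a sketch, assuming numpy's SVD convention M = U @ diag(S) @ Vt): the inverse transpose has the same two rotations with the scale inverted.

    import numpy as np

    rng = np.random.default_rng(0)
    M = rng.normal(size=(3, 3))                  # some (almost surely invertible) linear transform
    U, S, Vt = np.linalg.svd(M)                  # M = U @ diag(S) @ Vt

    inv_transpose = np.linalg.inv(M).T
    reconstructed = U @ np.diag(1.0 / S) @ Vt    # same rotations, inverted scale

    print(np.allclose(inv_transpose, reconstructed))   # True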


Edit:

Thinking about it overnight, I've decided that the inverse-transpose trick is only really valid for normals. Consider the example above. If s = 2, then the slope of the line segment, initially 1, becomes 1/2. Correspondingly, the slope of the normal goes from -1 to -2, and there is still a 90 degree angle between the segment and the normal. So far, so good. Now... what if the vector in question is actually parallel to the line segment? The inverse transpose gives it a slope of 2, so it is no longer parallel.
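Continuing the same numeric example (a sketch): a vector parallel to the segment, pushed through the inverse transpose, is no longer parallel to the transformed segment, whereas multiplying it by M itself keeps it parallel.

    import numpy as np

    def cross2(a, b):
        # z-component of the 2D cross product
        return a[0] * b[1] - a[1] * b[0]

    s = 2.0
    M = np.diag([s, 1.0])
    tangent = np.array([0.707, 0.707])        # parallel to the segment

    seg_t = M @ tangent                        # transformed segment: slope 1/2
    bad = np.linalg.inv(M).T @ tangent         # inverse transpose: slope 2
    good = M @ tangent                         # plain M: slope 1/2, still parallel

    print(cross2(seg_t, bad))                  # != 0 -> not parallel
    print(cross2(seg_t, good))                 # == 0 -> parallel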

So, in my opinion, there are two questions: what is actually going wrong in your program (i.e., what makes you think it isn't working?), and what should the correct behavior be? Perhaps you could make a 2D plot.

+3
