Normal maps versus normal coordinates

I am currently working with OpenGL ES 1.1 and render with glDrawElements, using vertex, normal, texture-coordinate, and index arrays.

I recently came across this page while exploring the idea of using normal/bump mapping, which I had previously assumed was not possible with OpenGL ES: http://iphone-3d-programming.labs.oreilly.com/ch08.html

I can already generate an object-space normal map from my 3D modelling tool, but what I haven't quite understood is whether the per-vertex normal array is still needed once a second texture unit is set up for the normal map, or whether the lighting comes from combining the color texture with the normal map via the GL_DOT3_RGB texture-environment option?

EDIT - After exploring DOT3 lighting a bit further, I'm not sure the answer given by ognian is correct. This page http://www.3dkingdoms.com/tutorial.htm gives an example of its use, and if you look at the "Rendering and Final Result" code snippet, there is no glEnableClientState call for a normal array.

I also found this post, What is DOT3 lighting?, which explains it well ... but it leads me to another question. In the comments it is stated that instead of transforming the normals, you transform the light direction. This confuses me: if I had a game with a stationary wall, why should I move the light for just one model? Hoping someone can give a good explanation of all this ...

1 answer

While tangent-space normal maps perturb the normals interpolated from the per-vertex normals, object-space normal maps already contain all the necessary information about the surface orientation. Therefore, if you are just doing DOT3 lighting in OpenGL ES 1.1, you don't need to submit normals at all.
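To make that concrete, here is a sketch of the OpenGL ES 1.1 state this implies (a configuration fragment, not a runnable program: it assumes a current GL context and an object-space normal map already bound to texture unit 0):

```c
#include <GLES/gl.h>

/* Illustrative setup for object-space DOT3 lighting.
 * Note: no GL_NORMAL_ARRAY is enabled anywhere. */
void setup_dot3(const GLfloat *verts, const GLfloat *uvs) {
    /* Encode the object-space light vector L into the primary
     * color as 0.5 * L + 0.5 (here L = (0, 0, 1)). */
    glColor4f(0.5f, 0.5f, 1.0f, 1.0f);

    /* Unit 0: dot the normal-map texel with the primary color. */
    glActiveTexture(GL_TEXTURE0);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_DOT3_RGB);
    glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_TEXTURE);
    glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_PRIMARY_COLOR);

    /* Only position and texcoord client arrays are needed. */
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts);
    glTexCoordPointer(2, GL_FLOAT, 0, uvs);
}
```

A second texture unit set to GL_MODULATE can then multiply the DOT3 result by the color texture, which answers the question about combining the two maps.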

The reason the other post mentioned transforming the light direction rather than the normals is that both arguments to the dot product (the per-pixel normal and the light vector) must be in the same coordinate space for the dot product to make any sense. Since you have an object-space normal map, your per-pixel normals will always be in your object's local coordinate space, and the texture environment provides no means of applying further transformations to them. Your light vector is most likely in some other space, so the transformation mentioned converts it from that space back into the object's local space.


Source: https://habr.com/ru/post/1315252/

