As I understand it, shadow mapping is done by rendering the scene from the light's point of view to create a depth map. Then you re-render the scene from the camera's POV, and for each point (fragment in GLSL) you calculate the distance from that point to the light source; if it matches what is stored in the shadow map, the point is lit, otherwise it is in shadow.
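To make sure I have the idea right, here is a minimal numeric sketch of that two-pass test (not real shader code; all names and the dictionary-based "map" are made up purely for illustration):

```python
# Pass 1: for each direction from the light, record the distance to the
# nearest occluding surface (this plays the role of the depth map).
def build_shadow_map(occluders):
    shadow_map = {}
    for direction, distance in occluders:
        shadow_map[direction] = min(distance, shadow_map.get(direction, float("inf")))
    return shadow_map

# Pass 2: a fragment is lit if nothing nearer blocks it along that direction.
# The small bias avoids self-shadowing from precision error.
def is_lit(shadow_map, direction, fragment_distance, bias=0.01):
    nearest = shadow_map.get(direction, float("inf"))
    return fragment_distance <= nearest + bias

shadow_map = build_shadow_map([("dir_a", 5.0), ("dir_b", 3.0)])
print(is_lit(shadow_map, "dir_a", 5.0))  # True: this fragment is the nearest surface
print(is_lit(shadow_map, "dir_a", 8.0))  # False: the surface at distance 5 occludes it
```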
I just read this tutorial to get an idea of how to do shadow mapping with a spot / omnidirectional light.
Section 12.2.2 states:
We use one shadow map for all light sources.
And then in 12.3.6 it says:
1) Calculate the square of the distance from the current pixel to the light source.
...
4) Compare the calculated distance value with the value sampled from the shadow map to determine whether we are in shadow.
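If I understand step 1, the *squared* distance is used so the comparison can skip a square root per fragment. A sketch of how I read steps 1 and 4 together (hypothetical names, not code from the tutorial):

```python
# Step 1: squared distance from the fragment to the light.
def squared_distance(p, light):
    dx, dy, dz = p[0] - light[0], p[1] - light[1], p[2] - light[2]
    return dx * dx + dy * dy + dz * dz

# Step 4: compare against the squared distance stored in the shadow map.
# Comparing squared values preserves the ordering, since both are non-negative.
def in_shadow(fragment_pos, light_pos, stored_sq_distance, bias=1e-3):
    return squared_distance(fragment_pos, light_pos) > stored_sq_distance + bias

light = (0.0, 0.0, 0.0)
print(in_shadow((3.0, 4.0, 0.0), light, 25.0))  # False: 25 <= 25 + bias, fragment is lit
print(in_shadow((6.0, 8.0, 0.0), light, 25.0))  # True: 100 > 25, something nearer occludes it
```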
This is roughly what I said above.
What I don't get is this: if we baked all our lights into one shadow map, which light's distance are we supposed to compare against? The distance baked into the map doesn't have to match any single light, because it's a mixture of all of them, right?
I'm sure I'm missing something, but hopefully someone can explain it to me.
Also, if we use a single shadow map, how can it combine all the light sources?
For one light source, a shadow map simply stores the distance from the light to the nearest object (i.e. a depth map), but with several light sources, what would it contain?
mpen