How to use one shadow map for multiple point light sources?

As I understand it, shadow mapping is done by rendering the scene from the light's point of view to create a depth map. Then you re-render the scene from the camera's point of view, and for each point (fragment in GLSL) you calculate its distance to the light source; if this matches what is stored in the shadow map, the point is lit, otherwise it is in shadow.
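For a single spot-style light, my mental model is roughly this fragment-shader comparison (just a sketch to show what I mean; the uniform and variable names are my own, not from any tutorial):

    #version 330 core

    // Rough sketch of the comparison step, assuming a spot/directional-style
    // light whose depth map was rendered with the (hypothetical) matrix
    // uLightViewProj. Names are illustrative.
    uniform sampler2D uShadowMap;   // depth as seen from the light
    uniform mat4      uLightViewProj;

    in  vec3 vWorldPos;             // fragment position in world space
    out vec4 fragColor;

    float shadowFactor(vec3 worldPos)
    {
        // Project the fragment into the light's clip space...
        vec4 lightClip = uLightViewProj * vec4(worldPos, 1.0);
        vec3 ndc = lightClip.xyz / lightClip.w;          // perspective divide
        vec3 uvz = ndc * 0.5 + 0.5;                      // [-1,1] -> [0,1]

        // ...and compare its depth with the depth stored in the shadow map.
        float storedDepth  = texture(uShadowMap, uvz.xy).r;
        float currentDepth = uvz.z;
        return currentDepth <= storedDepth ? 1.0 : 0.0;  // 1 = lit, 0 = shadowed
    }

    void main()
    {
        float lit = shadowFactor(vWorldPos);
        fragColor = vec4(vec3(lit), 1.0);                // visualize the shadow term
    }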

I just read this tutorial to get an idea of how to do shadow mapping with a spot/omnidirectional light.

Section 12.2.2 states:

We use one shadow map for all light sources.

And then in 12.3.6 it says:

1) Calculate the square of the distance from the current pixel to the light source.
...
4) Compare the calculated distance value with the selected shadow map value to determine if we are in the shadow.

This is roughly what I said above.
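
To be concrete, this is how I picture that comparison for a single point light (a rough GLSL sketch; the names, and the assumption that the map stores the squared distance to the nearest occluder in a cube map, are mine, not the tutorial's):

    #version 330 core

    // Sketch of steps 1) and 4) for one point light, assuming the shadow map
    // is a cube map storing the squared distance from the light to the
    // nearest occluder. uShadowCube and uLightPos are illustrative names.
    uniform samplerCube uShadowCube;
    uniform vec3        uLightPos;

    in  vec3 vWorldPos;
    out vec4 fragColor;

    void main()
    {
        vec3  toFrag   = vWorldPos - uLightPos;
        float distSq   = dot(toFrag, toFrag);            // 1) squared distance to the light
        float storedSq = texture(uShadowCube, toFrag).r; // shadow map value for this direction
        float lit      = distSq <= storedSq ? 1.0 : 0.0; // 4) compare -> lit or shadowed
        fragColor = vec4(vec3(lit), 1.0);
    }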

What I don't get is this: if we baked all our lights into one shadow map, which light do we compare the distance against? The distance stored in the map wouldn't correspond to any single light, because it's a mixture of all of them, right?

I'm sure I'm missing something, but hopefully someone can explain it to me.


Also, if we use a single shadow map, how can it combine all the light sources?

For one light source, a shadow map simply stores the distance of the nearest object to the light (i.e. a depth map), but for several light sources, what would it contain?
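For one light, I picture the map being filled by a light-POV pass along these lines (again a rough sketch with made-up names; I've written the squared distance to match the comparison above):

    #version 330 core

    // Sketch of the pass that fills the shadow map for ONE light: the scene
    // is rendered from the light's position and the (squared) distance to
    // each visible surface is written out. uLightPos is an illustrative name.
    uniform vec3 uLightPos;

    in  vec3 vWorldPos;
    out vec4 fragColor;   // assuming a single-channel float color attachment

    void main()
    {
        vec3 toFrag = vWorldPos - uLightPos;
        // Depth testing keeps the nearest fragment, so the surviving value is
        // the nearest occluder's squared distance.
        fragColor = vec4(dot(toFrag, toFrag));
    }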

1 answer

You cut the quoted sentence off too early:

We use one shadow map for all light sources, creating the image with multipass rendering and performing one pass for each light source.

So the shadow map only ever contains data for one light source at a time; they can get away with a single map because they render only one light per pass.

I think this also answers your second question: light is additive, so you combine the results from several light sources by simply adding them together. In the GPU Gems case they are accumulated directly in the framebuffer, no doubt because of the relatively limited number of texture samplers available on GPUs at the time. Nowadays you would probably do some combination of accumulating in the framebuffer and combining lights directly in the fragment shader.
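
For the fragment-shader route, a minimal sketch might look like this (the light count, uniform names, and the simple diffuse term are assumptions for illustration, not the GPU Gems code; each light gets its own shadow cube map and the contributions are just summed):

    #version 400 core

    // Several point lights combined additively in one pass. Dynamic indexing
    // of the sampler array needs GLSL 4.00+.
    #define NUM_LIGHTS 4

    uniform samplerCube uShadowCube[NUM_LIGHTS];
    uniform vec3        uLightPos[NUM_LIGHTS];
    uniform vec3        uLightColor[NUM_LIGHTS];

    in  vec3 vWorldPos;
    in  vec3 vNormal;
    out vec4 fragColor;

    void main()
    {
        vec3 n = normalize(vNormal);
        vec3 total = vec3(0.0);
        for (int i = 0; i < NUM_LIGHTS; ++i)
        {
            vec3  toFrag   = vWorldPos - uLightPos[i];
            float distSq   = dot(toFrag, toFrag);
            float storedSq = texture(uShadowCube[i], toFrag).r;
            float lit      = distSq <= storedSq ? 1.0 : 0.0;
            float diffuse  = max(dot(n, normalize(-toFrag)), 0.0);
            total += lit * diffuse * uLightColor[i];   // light is additive
        }
        fragColor = vec4(total, 1.0);
    }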

You also usually use the test "a pixel is lit if its distance is less than or equal to the distance in the shadow map plus a small bias", rather than an exact equality test, because of accumulated floating-point rounding error.
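
In the squared-distance terms used above, that biased test is just something like the helper below (the bias value is an arbitrary placeholder and normally needs per-scene tuning):

    // Biased shadow test: lit if within the stored distance plus a small
    // epsilon, never an exact-equality comparison.
    float shadowTest(float currentDistSq, float storedDistSq)
    {
        const float bias = 0.005;
        return (currentDistSq <= storedDistSq + bias) ? 1.0 : 0.0;
    }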
