Saving vertex depth information in a texture in OpenGL shadow mapping

I am currently implementing shadow mapping (more precisely, cascaded shadow mapping) in my C++ OpenGL engine. I want a texture containing the distance between my light source and every pixel of my shadow map. What texture type should I use?

I saw that there is a GL_DEPTH_COMPONENT internal texture format, but it stores the data remapped to the range [0,1]. Do I have to remap my distances once when I create the shadow map, and then undo that remapping a second time during my final rendering pass in order to get the real distance back? That seems completely wasteful!

Is there a way to store distances in a texture without remapping them twice (once when creating the texture, once when reading it)?

1 answer

I'm not sure what you mean by inverting (you certainly cannot just invert the distance; that would not work). What you are actually doing is mapping the distance to the light source into the range [0,1].

This can be done by constructing an ordinary projection matrix representing the light source's view and applying it to the vertices in the shadow map construction pass. That way their distance to the light source is written into the depth buffer (which you can capture into a texture with the GL_DEPTH_COMPONENT format, either via glCopyTexSubImage or with FBOs). In the final pass you of course use the same projection matrix to compute the texture coordinates into the shadow map via projective texturing (using a sampler2DShadow when working in GLSL).
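As a rough sketch of that final pass in legacy GLSL (the uniform and varying names here are made up for illustration, not from the original post): the vertex shader projects each fragment into the light's clip space, and the fragment shader lets the hardware do the depth comparison through a sampler2DShadow.

```glsl
// Vertex shader: compute projective shadow-map coordinates.
uniform mat4 uModel;
uniform mat4 uViewProj;
uniform mat4 uLightVP; // same view/projection used in the shadow pass
uniform mat4 uBias;    // 0.5-scale/offset matrix: clip space [-1,1] -> texture space [0,1]
attribute vec3 aPosition;
varying vec4 vShadowCoord;

void main() {
    vec4 world = uModel * vec4(aPosition, 1.0);
    vShadowCoord = uBias * uLightVP * world;
    gl_Position = uViewProj * world;
}
```

```glsl
// Fragment shader: hardware depth comparison via sampler2DShadow.
uniform sampler2DShadow uShadowMap;
varying vec4 vShadowCoord;

void main() {
    // shadow2DProj divides by w, compares the resulting depth against the
    // stored depth, and returns 0.0 (shadowed) or 1.0 (lit), possibly filtered.
    float lit = shadow2DProj(uShadowMap, vShadowCoord).r;
    gl_FragColor = vec4(vec3(lit), 1.0);
}
```

For this to work, the depth texture's comparison mode must be enabled on the application side (GL_TEXTURE_COMPARE_MODE set to GL_COMPARE_R_TO_TEXTURE).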

But this mapping is not linear, since the depth buffer has higher precision near the viewer (or near the light source, in this case). Another drawback is that you need to know the usable range of distance values (the farthest point your light source affects). Using shaders (which I assume you do), you can make this mapping linear by simply dividing the distance to the light source by this maximum distance and assigning that value as the fragment's depth yourself (gl_FragDepth in GLSL), which is probably what you mean by inverting.
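A minimal shadow-pass fragment shader for this linear mapping might look like the following (uLightPos, uMaxDistance, and the varying name are assumptions for illustration):

```glsl
// Shadow pass: write a linear depth value manually.
uniform vec3 uLightPos;     // light position in world space
uniform float uMaxDistance; // farthest distance the light affects
varying vec3 vWorldPos;     // interpolated world-space position

void main() {
    // Map the world-space distance linearly into [0,1] and store it in the
    // depth buffer instead of the usual non-linear z/w value.
    float d = length(vWorldPos - uLightPos);
    gl_FragDepth = clamp(d / uMaxDistance, 0.0, 1.0);
}
```

Note that writing gl_FragDepth disables early depth testing for this pass, which is one of the costs of the linear approach.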

The division (and the need to know the maximum distance) can be avoided by using a floating-point texture for the light's range and simply writing the distance into a color channel, then doing the depth comparison yourself in the final pass (using a normal sampler2D). But linear filtering of floating-point textures is only supported on newer hardware, and I'm not sure it would be faster than a single division per fragment. The advantage of this method, though, is that it paves the way for things like "variance shadow maps", which won't work well with ordinary ubyte textures (due to their low precision) nor with depth textures.
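Sketched in GLSL (again, uniform and varying names are illustrative assumptions), the two passes of this variant might look like:

```glsl
// Shadow pass: store the raw distance in a floating-point color target
// (e.g. a GL_R32F texture attached to an FBO).
uniform vec3 uLightPos;
varying vec3 vWorldPos;

void main() {
    gl_FragColor = vec4(length(vWorldPos - uLightPos));
}
```

```glsl
// Final pass: manual depth comparison with a normal sampler2D.
uniform sampler2D uShadowMap; // floating-point distance map
uniform vec3 uLightPos;
varying vec3 vWorldPos;
varying vec4 vShadowCoord;    // projective coordinates into the shadow map

void main() {
    float stored  = texture2DProj(uShadowMap, vShadowCoord).r;
    float current = length(vWorldPos - uLightPos);
    // A small bias avoids self-shadowing ("shadow acne"); the value 0.05
    // is an arbitrary example and depends on the scene's scale.
    float lit = (current - 0.05 > stored) ? 0.0 : 1.0;
    gl_FragColor = vec4(vec3(lit), 1.0);
}
```

Since no projection into [0,1] happens here, no maximum distance is needed; the comparison is done directly in world-space units.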

So, to summarize: GL_DEPTH_COMPONENT is a good compromise between ubyte textures (which lack the necessary precision, since GL_DEPTH_COMPONENT is required to have at least 16 bits) and floating-point textures (which are not as fast, or not fully supported, on older hardware). But because of its fixed-point format, you won't get around the mapping to the [0,1] range (whether the linear or the projective one). I'm not sure floating-point textures would be faster, since you only save a single division, but if you are on recent hardware that supports linear (or even trilinear) filtering of floating-point textures, along with 1- or 2-component floating-point texture formats and render targets, they may be worth a try.

Of course, if you use the fixed-function pipeline, GL_DEPTH_COMPONENT is your only option, but given your question I assume you are using shaders.
