Converting a Depth Texture Sample to a Distance

I am currently reading a depth texture in a post-process depth-of-field shader using the following GLSL code:

    // Unpack a 24-bit depth value packed into the RGB channels
    vec4 depthSample = texture2D(sDepthTexture, tcScreen);
    float depth = depthSample.x * 255.0 / 256.0 +
                  depthSample.y * 255.0 / 65536.0 +
                  depthSample.z * 255.0 / 16777216.0;

and then converting the depth value to a view-space distance using the near and far plane distances:

 float zDistance = (zNear * zFar) / (zFar - depth * (zFar - zNear)); 
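
(For context, this formula is just the standard perspective projection's depth mapping solved for the eye-space distance: with window depth d in [0, 1], the projection produces

    d = zFar / (zFar - zNear) - (zNear * zFar) / ((zFar - zNear) * zDistance)

and rearranging for zDistance gives the expression above.)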

All this works pretty well; however, I am interested to know how to perform the above calculation using only the current projection matrix, without needing separate zNear and zFar values.

My initial attempt was to multiply (vec4(tcScreen.x, tcScreen.y, depth, 1.0) * 2.0 - 1.0) by the inverse projection matrix, divide the result by w, and then take the resulting z value as the distance, but that did not seem to work. What is the right approach here?

Also, when using oblique frustum clipping to move the near plane onto a chosen clip plane, is the near plane distance now potentially different for each pixel? And if so, does that mean any shader that computes distances from the depth texture needs to be aware of this case and not assume a constant near plane distance?

Thanks!

2 answers

It turns out I had forgotten to negate the final Z value to get a positive distance in front of the camera (the OpenGL camera looks down -Z). For future reference, the GLSL code to obtain that distance is:

    float depth = /* sampled from depth texture as in the original question */;

    // Reconstruct the NDC-space position from the texture coordinates and depth
    vec4 screenPos = vec4(tcScreen.x, tcScreen.y, depth, 1.0) * 2.0 - 1.0;

    // Transform back into view space and apply the perspective divide;
    // negate because the OpenGL camera looks down -Z
    vec4 viewPosition = projectionMatrixInverse * screenPos;
    float z = -(viewPosition.z / viewPosition.w);

If you need a position in world space (as in SuperPro's answer below), it can be found by combining the view and projection matrices and then using the inverse of that combined matrix, rather than just the inverse projection matrix.
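
A minimal sketch of that variant, assuming a hypothetical viewProjectionMatrixInverse uniform holding the precomputed inverse of (projection * view):

    vec4 screenPos = vec4(tcScreen, depth, 1.0) * 2.0 - 1.0;
    vec4 worldPos = viewProjectionMatrixInverse * screenPos; // inverse of (projection * view)
    worldPos /= worldPos.w; // perspective divide gives the world-space position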

Since only the Z and W components of viewPosition are needed, the above GLSL for computing viewPosition can be simplified somewhat: two dot products suffice instead of a full matrix multiply, and there is no need to feed the full inverse projection matrix into the shader, since only its bottom two rows are needed:

    vec2 viewPositionZW = vec2(
        dot(projectionMatrixInverseRow2, screenPos),
        dot(projectionMatrixInverseRow3, screenPos)
    );
    float z = -(viewPositionZW.x / viewPositionZW.y);
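
If only the full inverse matrix is at hand, the two rows could also be extracted in the shader, though passing them as uniforms avoids the per-fragment work. A sketch, noting that GLSL mat4 indexing yields columns, so a transpose is needed (transpose() requires GLSL 1.20 or later):

    mat4 pInvT = transpose(projectionMatrixInverse);
    vec4 projectionMatrixInverseRow2 = pInvT[2]; // third row of the inverse projection matrix
    vec4 projectionMatrixInverseRow3 = pInvT[3]; // fourth row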

The performance of this is slightly worse than using the near and far distances, presumably because of the extra dot products; I saw about a 5% reduction. The near and far distance approach can itself be optimized by supplying (zNear * zFar) and (zFar - zNear) as precomputed constants, but I did not see any measurable improvement when doing this.
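
For illustration, a sketch of that constant-folding variant, with a hypothetical depthConstants uniform filled in once on the CPU:

    // depthConstants = vec3(zNear * zFar, zFar, zFar - zNear), computed once on the CPU
    uniform vec3 depthConstants;

    float zDistance = depthConstants.x / (depthConstants.y - depth * depthConstants.z);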

Interestingly, when I combine the above with a projection matrix that uses oblique frustum clipping, I cannot get anything sensible out of it, but I do get reasonable output when using the near and far distance equation with the same projection matrix, albeit with what appears to be some distortion of the depth values (though this may just be due to the loss of depth precision inherent in oblique frustum clipping). If anyone can shed light on exactly what is going on mathematically here, I would appreciate it, though perhaps that should be a separate question.


I use the following code in a lighting shader to calculate the light direction. The world position is likewise calculated by multiplying the screen position by the inverse view-projection matrix.

It is HLSL, unfortunately:

    float depth = tex2D(DepthMapSampler, PSIn.TexCoord).r;

    // Build the NDC-space position; D3D texture coordinates have Y pointing down,
    // hence the negation
    float4 screenPos;
    screenPos.x = PSIn.TexCoord.x * 2.0f - 1.0f;
    screenPos.y = -(PSIn.TexCoord.y * 2.0f - 1.0f);
    screenPos.z = depth;
    screenPos.w = 1.0f;

    // Unproject with the inverse view-projection matrix and divide by w
    float4 worldPos = mul(screenPos, xViewProjectionInv);
    worldPos /= worldPos.w;

Works great, so I believe the resulting world position is correct!

