I am trying to reconstruct world-space positions from the depth buffer in my deferred renderer, but I'm having a hard time. Most of the examples I find online assume a standard perspective projection, but I don't want to make that assumption.
In my geometry pass vertex shader, I compute gl_Position using:
gl_Position = wvpMatrix * vec4(vertexLocation, 1.0f);
and in my lighting pass fragment shader, I try to recover the world position using:
vec3 decodeLocation()
{
    vec4 clipSpaceLocation;
    clipSpaceLocation.xy = texcoord * 2.0f - 1.0f;
    clipSpaceLocation.z  = texture(depthSampler, texcoord).r;
    clipSpaceLocation.w  = 1.0f;

    vec4 homogenousLocation = viewProjectionInverseMatrix * clipSpaceLocation;
    return homogenousLocation.xyz / homogenousLocation.w;
}
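For reference, one variant of this decode that I have seen suggested (assuming the default glDepthRange of [0, 1], so that the sampled depth must be remapped to NDC z in [-1, 1] before unprojecting) looks like this:

```glsl
vec3 decodeLocation()
{
    // Depth buffer stores [0, 1]; NDC z spans [-1, 1] by default in OpenGL.
    float depth = texture(depthSampler, texcoord).r;

    vec4 clipSpaceLocation;
    clipSpaceLocation.xy = texcoord * 2.0f - 1.0f;
    clipSpaceLocation.z  = depth * 2.0f - 1.0f;  // remap [0, 1] -> [-1, 1]
    clipSpaceLocation.w  = 1.0f;

    vec4 homogenousLocation = viewProjectionInverseMatrix * clipSpaceLocation;
    return homogenousLocation.xyz / homogenousLocation.w;  // perspective divide
}
```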
I thought everything was correct, and indeed objects near the camera appear correctly lit. But I recently noticed that as objects move further away, they are lit as if they were farther from the camera than they actually are. I have experimented with my lighting pass and verified that the world positions are the only thing being miscalculated.
I can't help but think that my clipSpaceLocation.z and clipSpaceLocation.w values are the source of the problem, but I've tried every way of computing them that I can come up with, and the code above gives the most correct results.
Any ideas or suggestions?