Using DirectX 11, I created a 3D volume texture that can be bound as a render target:
D3D11_TEXTURE3D_DESC texDesc3d = {}; // zero-init so the remaining fields are valid
texDesc3d.Width = 256;  // example dimensions and format
texDesc3d.Height = 256;
texDesc3d.Depth = 256;
texDesc3d.MipLevels = 1;
texDesc3d.Format = DXGI_FORMAT_R16G16B16A16_FLOAT;
texDesc3d.Usage = D3D11_USAGE_DEFAULT;
texDesc3d.BindFlags = D3D11_BIND_RENDER_TARGET;
m_dxDevice->CreateTexture3D(&texDesc3d, nullptr, &m_tex3d);
m_dxDevice->CreateRenderTargetView(m_tex3d, nullptr, &m_tex3dRTView);
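(As I understand the documentation, passing nullptr for the view description gives a view over every W slice of mip 0; I believe the explicit equivalent would be something like this, with WSize = -1 meaning "all slices":)

D3D11_RENDER_TARGET_VIEW_DESC rtvDesc = {};
rtvDesc.Format = texDesc3d.Format;
rtvDesc.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE3D;
rtvDesc.Texture3D.MipSlice = 0;
rtvDesc.Texture3D.FirstWSlice = 0;
rtvDesc.Texture3D.WSize = (UINT)-1; // all W slices from FirstWSlice on
m_dxDevice->CreateRenderTargetView(m_tex3d, &rtvDesc, &m_tex3dRTView);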
Now I would like to update the entire render target, filling it with procedural data generated in a pixel shader, just as one would update a 2D render target with a "full-screen pass". All the data generation needs as input is the UVW coordinate of the pixel being shaded.
For 2D, you can write a simple vertex shader that emits a full-screen triangle:
struct VS_OUTPUT
{
    float4 position : SV_Position;
    float2 uv : TexCoord;
};

// No vertex buffer is bound; three vertices are generated from SV_VertexID
// to cover the screen with a single oversized triangle.
VS_OUTPUT main( uint vertexID : SV_VertexID )
{
    VS_OUTPUT result;
    // IDs 0,1,2 map to uv (0,0), (2,0), (0,2)
    result.uv = float2((vertexID << 1) & 2, vertexID & 2);
    result.position = float4(result.uv * float2(2.0f, -2.0f) + float2(-1.0f, 1.0f), 0.0f, 1.0f);
    return result;
}
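For completeness, this is how I issue the 2D pass (m_dxContext is my immediate context; no vertex or index buffer is bound):

m_dxContext->IASetInputLayout(nullptr); // vertices come from SV_VertexID alone
m_dxContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
m_dxContext->Draw(3, 0);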
It's hard for me to wrap my head around how to adapt this principle to 3D. Is this possible in DirectX 11, or do I need to render individual slices of the volume texture one by one, as described here?
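For what it's worth, here is my rough, untested guess at how the 3D version might look, based on the SV_RenderTargetArrayIndex documentation: draw the same triangle once per depth slice with DrawInstanced(3, DEPTH, 0, 0) and let a geometry shader route each copy to its W slice. DEPTH and the shader names are placeholders of mine:

#define DEPTH 64 // placeholder for the real volume depth

struct VS_OUT
{
    float4 position : SV_Position;
    float2 uv : TexCoord;
    uint slice : Slice;
};

struct GS_OUT
{
    float4 position : SV_Position;
    float3 uvw : TexCoord;
    uint slice : SV_RenderTargetArrayIndex; // selects the W slice of the 3D RTV
};

// Same full-screen triangle as in 2D, instanced once per depth slice.
VS_OUT VSMain( uint vertexID : SV_VertexID, uint instanceID : SV_InstanceID )
{
    VS_OUT result;
    result.uv = float2((vertexID << 1) & 2, vertexID & 2);
    result.position = float4(result.uv * float2(2.0f, -2.0f) + float2(-1.0f, 1.0f), 0.0f, 1.0f);
    result.slice = instanceID;
    return result;
}

// Pass the triangle through, tagging it with its render-target slice.
[maxvertexcount(3)]
void GSMain( triangle VS_OUT input[3], inout TriangleStream<GS_OUT> stream )
{
    [unroll]
    for (uint i = 0; i < 3; ++i)
    {
        GS_OUT output;
        output.position = input[i].position;
        // W coordinate sampled at the center of the slice.
        output.uvw = float3(input[i].uv, (input[i].slice + 0.5f) / DEPTH);
        output.slice = input[i].slice;
        stream.Append(output);
    }
}

// The pixel shader then receives the full UVW of the voxel being written.
float4 PSMain( GS_OUT input ) : SV_Target
{
    return float4(input.uvw, 1.0f); // placeholder procedural data
}

I assume the viewport would be set to the volume's width and height before the draw. Does that look like the right direction, or is there a way to avoid the geometry shader entirely?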