Why is the geometry shader processed after the vertex shader?

In both the OpenGL and Direct3D pipelines, the geometry shader is processed after the vertex shader and before the fragment/pixel shader. Obviously, processing the geometry shader after the fragment/pixel shader makes no sense, but what I'm wondering is: why not put it before the vertex shader?
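To make the current ordering concrete, here is a minimal GLSL sketch (variable names like vColor are just placeholders): the geometry shader's inputs are arrays with one entry per vertex of the assembled primitive, and those entries are the outputs the vertex shader has already produced.

    // Geometry shader skeleton (GLSL 1.50) -- illustrative only.
    // Its inputs are ARRAYS: one element per vertex of the assembled
    // primitive, each already processed by the vertex shader.
    #version 150
    layout(triangles) in;
    layout(triangle_strip, max_vertices = 3) out;

    in vec3 vColor[];      // vColor[0..2] were written by three vertex shader runs
    out vec3 gColor;

    void main() {
        for (int i = 0; i < 3; ++i) {
            gColor      = vColor[i];
            gl_Position = gl_in[i].gl_Position;  // computed by the vertex shader
            EmitVertex();
        }
        EndPrimitive();
    }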

From a software/high-level point of view, that ordering at least seems to make sense: first you run the geometry shader to create all the vertices you need (and discard any data only the geometry shader cares about), then you run the vertex shader on all the vertices created that way. The obvious drawback is that the vertex shader would now have to run on each of the newly created vertices, but any logic that needs to run there would presumably have to run per vertex inside the geometry shader in the current pipelines anyway, so there should not be much of a performance hit.

I assume there is either a hardware reason or a non-obvious pipeline reason why the geometry shader sits in this position in both pipelines, and that it does make sense.

(I realize that primitive assembly has to happen before the geometry shader runs (or maybe not, if it takes individual points as inputs?), but I also realize it has to happen after the geometry shader as well, so wouldn't it still make sense to run the vertex shader between those two steps?)

1 answer

This is mostly because "geometry shader" was a rather stupid choice of words on Microsoft's part. It should have been called a "primitive shader."

Geometry shaders make the primitive assembly stage programmable, and you cannot assemble primitives before you have an input stream of computed vertices. There is some overlap in functionality, since you can take one input primitive type and spit out a completely different type (often requiring additional vertices to be computed).
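As a hedged illustration (not part of the original answer), here is a sketch of a geometry shader that consumes one primitive type and emits another: each input point becomes a two-triangle strip (a quad), and the four extra vertices are computed right here. The halfSize uniform is an arbitrary placeholder.

    // Sketch: points in, a triangle strip (a quad) out (GLSL 1.50).
    #version 150
    layout(points) in;
    layout(triangle_strip, max_vertices = 4) out;

    uniform vec2 halfSize;   // placeholder: quad half-extent in clip space

    void main() {
        vec4 c = gl_in[0].gl_Position;   // the single already-shaded input vertex
        // Emit four brand-new vertices around the point; they are produced
        // here, in the geometry shader, not by re-running the vertex shader.
        gl_Position = c + vec4(-halfSize.x, -halfSize.y, 0.0, 0.0); EmitVertex();
        gl_Position = c + vec4( halfSize.x, -halfSize.y, 0.0, 0.0); EmitVertex();
        gl_Position = c + vec4(-halfSize.x,  halfSize.y, 0.0, 0.0); EmitVertex();
        gl_Position = c + vec4( halfSize.x,  halfSize.y, 0.0, 0.0); EmitVertex();
        EndPrimitive();
    }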

These extra emitted vertices do not need to be sent back around the pipeline to the vertex shader stage; they are fully computed during geometry shader execution. This concept should not be too alien, since the tessellation control and evaluation shaders are also very similar to vertex shaders in form and function.
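For comparison, here is a minimal tessellation evaluation shader sketch (assuming triangle patches; the mvp uniform is a placeholder). Like a vertex shader, it simply computes a final position for each generated vertex, and none of those vertices revisit the vertex shader stage.

    // Tessellation evaluation shader (GLSL 4.00): positions the vertices
    // generated by the fixed-function tessellator, much like a vertex shader.
    #version 400
    layout(triangles, equal_spacing, ccw) in;

    uniform mat4 mvp;   // placeholder transform

    void main() {
        // Barycentric interpolation of the patch's three corner positions.
        vec4 p = gl_TessCoord.x * gl_in[0].gl_Position
               + gl_TessCoord.y * gl_in[1].gl_Position
               + gl_TessCoord.z * gl_in[2].gl_Position;
        gl_Position = mvp * p;
    }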

There are many stages of vertex transformation, and what we call the vertex shader is just the tip of the iceberg. In a modern application you can expect the output of the vertex shader to go through several additional stages before you have a finalized vertex ready for rasterization and pixel shading (which is also poorly named).
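To make that concrete, here is a hedged sketch of a typical vertex shader (the names are placeholders); the comments list the later, mostly fixed-function, per-vertex steps that still happen after it.

    // Vertex shader (GLSL 1.50): outputs a clip-space position, nothing more.
    #version 150
    in vec3 position;
    uniform mat4 mvp;   // placeholder model-view-projection matrix

    void main() {
        gl_Position = mvp * vec4(position, 1.0);
        // After this stage the pipeline may still apply, per vertex:
        //   - tessellation and/or geometry shading (if enabled),
        //   - clipping against the view volume,
        //   - the perspective divide (clip space -> normalized device coordinates),
        //   - the viewport transform (NDC -> window coordinates),
        // before the primitive is rasterized and the fragment/pixel shader runs.
    }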
