In the OpenGL and Direct3D pipelines, the geometry shader runs after the vertex shader and before the fragment / pixel shader. Obviously, running the geometry shader after the fragment / pixel shader would not make sense, but what I wonder is: why not put it in front of the vertex shader?
From a software / high-level point of view, at least, the following would seem to make sense: first you run the geometry shader to create all the vertices you need (and discard any data that is only relevant to the geometry shader), then you run the vertex shader on all the vertices created this way. The obvious drawback is that the vertex shader would now have to run on each of the newly created vertices, but any logic it would execute there presumably has to be executed per vertex inside the geometry shader in the current pipelines anyway, so there shouldn't be much of a performance difference.
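To illustrate what I mean by the current ordering, here is a minimal GLSL pass-through geometry shader: it is invoked once per assembled primitive and reads the vertex shader's outputs through `gl_in[]` (the `vsNormal` / `gsNormal` varying names are just made up for this sketch):

```glsl
#version 330 core

// Invoked once per assembled primitive: here, one triangle.
layout(triangles) in;
// Emits a new primitive that the pipeline re-assembles before rasterization.
layout(triangle_strip, max_vertices = 3) out;

// Hypothetical per-vertex output from the vertex shader,
// received here as an array with one entry per input vertex.
in vec3 vsNormal[];
out vec3 gsNormal;

void main() {
    for (int i = 0; i < 3; ++i) {
        // gl_in[i].gl_Position holds the vertex shader's result for
        // vertex i of this triangle: the geometry shader consumes
        // vertex-shader outputs, which is the ordering I'm asking about.
        gl_Position = gl_in[i].gl_Position;
        gsNormal = vsNormal[i];
        EmitVertex();
    }
    EndPrimitive();
}
```

Note the `layout(triangles) in` declaration, which is why primitive assembly has to happen before this stage, and the `triangle_strip` output, which is re-assembled afterwards; that is the part I come back to below.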
I assume there is a reason the geometry shader sits in this position in both pipelines: either a hardware reason, or some non-obvious pipeline reason that makes it make sense.
(I know that primitive assembly must happen before the geometry shader runs (or maybe not, if it accepts individual points as input?), but I also know it must happen again after the geometry shader, so wouldn't it still make sense to run the vertex shader between these two steps?)