I render a lot of sprites, and I wanted to get feedback from people who have dug into performance in this area.
So I sort by shader and texture, and batch sprites with the same render settings into VBOs to send to the shaders for rendering. All the usual stuff. My sprites are square and share the same basic data: center position (P), orientation (O), scale (S), RGB color (Col), and overall opacity (Alpha). I need to update position and orientation in CPU-side code each frame (though roughly 50% of the sprites don't change between any given pair of frames), while scale, color, and opacity change rarely for a given sprite, but not never.
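For concreteness, here's a minimal sketch of that per-sprite state. The struct and field names are my own illustrative assumptions, not from the original setup; I'm assuming a 3-component position, which is what makes the "five extra words" in option 2 below add up.

```cpp
// Hypothetical per-sprite state matching the attributes described above.
// Names and layout are illustrative assumptions.
struct Sprite {
    float pos[3];    // center position P (x, y, z)
    float angle;     // orientation O, in radians
    float scale;     // scale S
    float color[3];  // RGB color Col
    float alpha;     // overall opacity Alpha
};
```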
I can't count on geometry shaders (I will support them, but where they're available this question is moot).
Should I:
1. When I update sprite positions, compute the vertex positions on the CPU, so the vertex shader is a simple pass-through transform. (Advantage: significantly less data to upload each frame; drawback: the CPU has to do a lot of trig.) See the first sketch after this list.
2. Put the P/O/S data into the VBO as extra attributes, duplicated across the 4 vertices, so the stored vertex positions are just corner offsets (-1,-1; -1,1; 1,1; 1,-1) and the trig runs in the vertex shader. (Advantage: the GPU does more of the work; drawback: five extra words of data per vertex.) See the second sketch after this list.
3. Decide it's not obvious which is better, and profile both approaches to see what happens.
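A sketch of option 1, using the hypothetical Sprite struct above and assuming an interleaved position-only dynamic VBO; the function name buildVertices and the layout are assumptions, and the math is just the standard 2D rotate-scale-translate done per corner on the CPU.

```cpp
#include <cmath>
#include <vector>

// Option 1 sketch: expand each sprite into 4 rotated, scaled corner
// positions on the CPU; the vertex shader then only applies the
// view-projection transform.
void buildVertices(const std::vector<Sprite>& sprites, std::vector<float>& out) {
    static const float corners[4][2] = {{-1,-1}, {-1,1}, {1,1}, {1,-1}};
    out.clear();
    out.reserve(sprites.size() * 4 * 3);  // 4 vertices, xyz each
    for (const Sprite& s : sprites) {
        const float c = std::cos(s.angle) * s.scale;
        const float n = std::sin(s.angle) * s.scale;
        for (int i = 0; i < 4; ++i) {
            // Standard 2D rotate + scale + translate, per vertex on the CPU.
            out.push_back(s.pos[0] + corners[i][0] * c - corners[i][1] * n);
            out.push_back(s.pos[1] + corners[i][0] * n + corners[i][1] * c);
            out.push_back(s.pos[2]);
        }
    }
    // 'out' would then be uploaded into the dynamic VBO,
    // e.g. with glBufferSubData.
}
```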
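And a sketch of option 2: the VBO carries the corner offset plus the five duplicated per-sprite words (P as three, O and S as one each, packed here into two attributes), and the vertex shader does the trig. The attribute and uniform names (corner, center, rotScale, viewProj) are assumptions; the GLSL is embedded as a C++ string literal.

```cpp
// Option 2 sketch: per-vertex attributes carry the corner offset plus the
// duplicated per-sprite data (P, O, S); the trig runs on the GPU.
static const char* kSpriteVertexShader = R"(
#version 120
attribute vec2 corner;    // (-1,-1), (-1,1), (1,1), (1,-1)
attribute vec3 center;    // sprite center P, duplicated across 4 vertices
attribute vec2 rotScale;  // x = orientation O (radians), y = scale S
uniform mat4 viewProj;

void main() {
    float c = cos(rotScale.x) * rotScale.y;
    float s = sin(rotScale.x) * rotScale.y;
    // Rotate + scale the unit-quad corner, then translate to the center.
    vec3 p = center + vec3(corner.x * c - corner.y * s,
                           corner.x * s + corner.y * c,
                           0.0);
    gl_Position = viewProj * vec4(p, 1.0);
}
)";
```

With this layout the per-frame upload for a moving sprite is just the duplicated center/rotScale attributes; the corner offsets never change.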
Obviously, I can just do 3, but I thought it would be worth asking to see whether anyone has enough of a gestalt feel for which should be faster. Either way, the answer may help other sprite/particle engine developers later.