So, as I understand it, there is no definitive answer. However, I ran some tests on my own hardware. I have two GPUs: an Intel HD Graphics 3000 and an NVidia GeForce GT 555M. I tested my program (written in Java/Scala) with the matrix multiplication done in the vertex shader, then moved the multiplication to the CPU side and measured again.
(sphereN is a continuously rotating sphere made of 2 * N ^ 2 quads drawn with glDrawElements(GL_QUADS, ...), with one texture and no lighting or other effects.)
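For context, the shader side of the comparison presumably looked something like the two variants sketched below (the uniform and attribute names are mine, not from the original program):

```scala
// Variant A: two uniform matrices multiplied per vertex in the shader.
val vertexShaderPerVertex = """
  uniform mat4 projection;
  uniform mat4 modelView;
  attribute vec4 position;
  void main() {
    // (projection * modelView) is a mat4 * mat4 product evaluated for
    // every vertex, unless the GLSL compiler hoists it out.
    gl_Position = projection * modelView * position;
  }
"""

// Variant B: the CPU premultiplies once per frame and uploads one matrix,
// so the shader only does a mat4 * vec4 per vertex.
val vertexShaderPremultiplied = """
  uniform mat4 modelViewProjection;
  attribute vec4 position;
  void main() {
    gl_Position = modelViewProjection * position;
  }
"""
```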
Matrix multiplication in the vertex shader:

    Intel:  sphere400: 57.17552887364208 fps,  sphere40: 128.1394156842645 fps
    NVidia: sphere400: 134.9527665317139 fps,  sphere40: 242.0135527589545 fps

Matrix multiplication on the CPU:

    Intel:  sphere400: 57.37234652897303 fps,  sphere40: 128.2051282051282 fps
    NVidia: sphere400: 142.28799089356858 fps, sphere40: 247.1576866040534 fps
The tests show that multiplying (uniform) matrices in the vertex shader is a bad idea, at least on this hardware: you can't generally rely on the GLSL compiler to optimize that multiplication away.
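A minimal sketch of the CPU-side approach (variant B above), assuming LWJGL 3 and JOML; the original program's actual math library is unknown, and the uniform location is a hypothetical placeholder:

```scala
import org.joml.Matrix4f
import org.lwjgl.BufferUtils
import org.lwjgl.opengl.GL20.glUniformMatrix4fv

val buf = BufferUtils.createFloatBuffer(16)

def uploadMvp(projection: Matrix4f, modelView: Matrix4f, location: Int): Unit = {
  // One mat4 * mat4 per frame on the CPU instead of one per vertex on the GPU.
  val mvp = new Matrix4f(projection).mul(modelView)
  glUniformMatrix4fv(location, false, mvp.get(buf))
}
```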