You can't directly compare frame rates across GPUs (or between WebGL and OpenGL) by counting frames. Instead, you need to figure out how much work you can do in a single frame.
So, basically, pick a target frame rate and then keep doing more and more work until you just hit that target. How much work you managed to do when you hit the target is your result. You can compare that to any other machine or GPU using the same technique.
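For reference, here's a minimal sketch (not from any particular sample, just an illustration) of measuring a smoothed frame rate with requestAnimationFrame; the later sketches assume an updateFPS helper like this exists:

```
// Minimal sketch: compute a smoothed frames-per-second value each frame.
let lastTime = 0;
let smoothedFPS = 60;

function updateFPS(now) {          // `now` is the timestamp from requestAnimationFrame
  if (!lastTime) {
    lastTime = now;
    return smoothedFPS;
  }
  const deltaSeconds = (now - lastTime) * 0.001;
  lastTime = now;
  const fps = 1 / deltaSeconds;
  smoothedFPS = smoothedFPS * 0.9 + fps * 0.1;  // smooth out per-frame jitter
  return smoothedFPS;
}
```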
Some people suggest using glFinish to measure time. Unfortunately, that doesn't actually work, because it stalls the graphics pipeline, and that stalling is not something that normally happens in a real application. It would be like measuring how fast a car can get from point A to point B, but instead of starting well before A and continuing well past B, you slam on the brakes before you reach B and measure the time when you finally come to a stop at B. That time includes all the time spent slowing down, which is different on every GPU, different between WebGL and OpenGL, and even different for every browser. You have no way of knowing how much of the measured time was spent slowing down and how much was spent on the thing you actually wanted to measure.
So instead, you need to run at full speed the entire time. Just like the car, you would accelerate to top speed before you reach point A and keep it at top speed until you pass B, the same way cars are timed on qualifying laps.
You don't normally stall the GPU by slamming on the brakes (glFinish), so adding that stopping time to your measurements tells you nothing useful. With glFinish you're timing draw + stop. If one GPU draws in 1 second and stops in 2, and another GPU draws in 2 seconds and stops in 1, your timing says 3 seconds for both GPUs. But run them for those same 3 seconds without stopping and the first GPU would draw 3 things while the second would draw only 1.5. One GPU is clearly faster, but with glFinish you'd never know.
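To be concrete, this is roughly the glFinish-based timing being argued against here (gl, numDrawCalls and numIndices are assumed to be set up elsewhere; this is the pattern to avoid, not a recommendation):

```
// DON'T benchmark like this: gl.finish() blocks until the GPU has drained
// its entire pipeline, so you end up timing "draw + stop".
const start = performance.now();
for (let i = 0; i < numDrawCalls; ++i) {
  gl.drawElements(gl.TRIANGLES, numIndices, gl.UNSIGNED_SHORT, 0);
}
gl.finish();  // the "slam on the brakes" step, whose cost varies per GPU/driver/browser
const elapsedMs = performance.now() - start;
```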
Instead, you run flat out, drawing as much as you can, and then measure how much you were able to draw while still maintaining full speed.
Here is an example: http://webglsamples.org/lots-o-objects/lots-o-objects-draw-elements.html
Basically, it draws things every frame. If the frame rate was 60 frames per second, it draws 10 more objects the next frame. If the frame rate was less than 60 frames per second, it draws fewer.
Because browser timing isn't perfect, you might choose a slightly lower target, such as 57fps, to find out how fast it can go.
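A hedged sketch of that adjust-the-workload loop (the numbers and the drawObject helper are placeholders, not the actual code from the sample above):

```
const targetFPS = 57;            // slightly under 60 because browser timing is imperfect
let numObjects = 10;

function render(now) {
  const fps = updateFPS(now);    // smoothed fps, e.g. from the earlier sketch

  if (fps >= targetFPS) {
    numObjects += 10;            // kept up: try 10 more objects next frame
  } else if (numObjects > 10) {
    numObjects -= 10;            // fell behind: draw fewer
  }

  for (let i = 0; i < numObjects; ++i) {
    drawObject(i);               // assumed helper that issues one object's draw call
  }

  requestAnimationFrame(render);
}
requestAnimationFrame(render);
```

Whatever value numObjects settles at is the number you compare across machines or APIs.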
On top of that, WebGL and OpenGL really just talk to the GPU, and it's the GPU that does the actual work. The work the GPU does takes exactly the same amount of time regardless of whether WebGL or OpenGL asked for it. The only difference is the overhead of setting up the GPU. That means you really don't want to draw anything heavy. Ideally you'd draw almost nothing. Make your canvas 1x1 pixels, draw a single triangle, and check the timing (as in, how many one-triangle draw calls can you issue in WebGL vs OpenGL while staying at 60 frames per second).
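As a sketch of that "measure overhead, not GPU work" setup (assuming the same adjust-the-count loop as above, just counting raw draw calls instead of objects):

```
// Tiny canvas + trivial geometry: the GPU work is negligible, so what you're
// really measuring is per-call API/driver overhead.
const canvas = document.createElement('canvas');
canvas.width = 1;
canvas.height = 1;
const gl = canvas.getContext('webgl');
// ...compile a minimal shader program and upload one triangle here...

// Inside the same adjust-the-count render loop as above, each unit of "work"
// is just one tiny draw call:
for (let i = 0; i < numDrawCalls; ++i) {
  gl.drawArrays(gl.TRIANGLES, 0, 3);
}
```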
It gets even worse though. A real application will switch shaders, switch buffers, switch textures, and update attributes and uniforms. So what do you time? How many times can you call gl.drawBuffers at 60 frames per second? How many times can you call gl.enable or gl.vertexAttribPointer or gl.uniform4fv at 60 frames per second? Some combination? What's a reasonable combination? 10% of the calls gl.vertexAttribPointer + 5% of the calls gl.bindBuffer + 10% of the calls gl.uniform? The timing of those calls is the only thing that differs between WebGL and OpenGL, since they both end up talking to the same GPU and that GPU runs at the same speed regardless.
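For illustration only, here is one possible (entirely made-up) mix plugged into the per-object loop from earlier; the exact percentages are the open question above, and positionBuffer, otherBuffer, positionLoc and colorLoc are assumed to have been created during setup:

```
function drawObject(i) {
  if (i % 10 === 0) {                    // ~10% of objects reset an attribute pointer
    gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
    gl.vertexAttribPointer(positionLoc, 3, gl.FLOAT, false, 0, 0);
  }
  if (i % 20 === 0) {                    // ~5% of objects switch buffers
    gl.bindBuffer(gl.ARRAY_BUFFER, otherBuffer);
  }
  if (i % 10 === 5) {                    // ~10% of objects update a uniform
    gl.uniform4fv(colorLoc, [1, 0, 0, 1]);
  }
  gl.drawArrays(gl.TRIANGLES, 0, 3);     // the draw itself
}
```

The CPU-side cost of issuing these calls is where a WebGL vs OpenGL comparison would actually show a difference.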