WebGL / OpenGL: performance comparison

For educational purposes, I need to compare WebGL performance with OpenGL. I have two equivalent programs, one written in WebGL and one in OpenGL, and now I need to measure their frame rates and compare them.

In JavaScript I use requestAnimationFrame for the animation, and I noticed that this makes the frame rate always sit at 60 FPS; frames are only dropped if I switch the tab or window. On the other hand, if I just call the rendering function recursively in a tight loop, the window freezes for obvious reasons.

This is how I measure FPS:

    var stats = new Stats();
    stats.domElement.style.position = 'absolute';
    stats.domElement.style.left = '450px';
    stats.domElement.style.top = '750px';
    document.body.appendChild( stats.domElement );

    setInterval( function () {
        stats.begin();
        stats.end();
    }, 1000 / 60 );

    var render = function() {
        requestAnimationFrame(render);
        renderer.render(scene, camera);
    };
    render();

Now the problem is that, with the scene always rendering at 60 FPS, I cannot really compare it with the OpenGL frame rate, because OpenGL redraws the scene only when it is somehow modified (for example, when I rotate the object) and glutPostRedisplay() is called.

So I wonder whether in WebGL there is a way to redraw the scene only when necessary, for example when the object is rotated or some attributes in the shaders are changed.
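What I am looking for is roughly something like the following (untested sketch; needsRedraw and the event handlers are placeholders for my own code):

    var needsRedraw = true;

    function requestRedraw() {
        // Called from event handlers (mouse drag, slider change, ...)
        // instead of rendering directly.
        needsRedraw = true;
    }

    function loop() {
        requestAnimationFrame(loop);
        if (needsRedraw) {
            needsRedraw = false;
            renderer.render(scene, camera);
        }
    }
    loop();

The flag would play the same role as glutPostRedisplay(): the scene is only re-rendered after something marks it dirty.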

javascript opengl-es opengl webgl
3 answers

You cannot directly compare GPU frame rates in WebGL by timing frames. Rather, you need to find out how much work you can do in a single frame.

So, basically, pick some target frame rate and then keep doing more and more work until you go over your target. When you hit your target, that is how much work you can do. You can compare that to any other machine or GPU using the same technique.

Some people suggest using glFinish to measure time. Unfortunately, that does not actually work, because it stalls the graphics pipeline, and that stalling is not something that normally happens in a real application. It would be like measuring how fast a car can go from point A to point B, but instead of starting long before A and ending long after B, you slam on the brakes before you get to B and measure the time when you reach B. That time includes all the time it took to slow down, which is different on every GPU, different between WebGL and OpenGL, and even different for every browser. You have no way of knowing how much of the measured time was spent slowing down and how much was spent on what you actually wanted to measure.

So instead, you need to go at full speed the entire time. Just like a car, you would accelerate to top speed before you reach point A and keep going at top speed until after you pass B. That is how cars are timed on qualifying laps.

You do not normally stall the GPU by slamming on the brakes (glFinish), so adding that stopping time to your measurements is irrelevant and does not give you useful information. Using glFinish, you would be timing drawing + stopping. If one GPU draws in 1 second and stops in 2, and another GPU draws in 2 seconds and stops in 1, your timing will say 3 seconds for both GPUs. But if you ran them without stopping, one GPU would draw 3 things per second and the other GPU would draw only 1.5 things per second. One GPU is clearly faster, but using glFinish you would never know it.

Instead, you run at full speed, drawing as much as possible, and then measure how much work you were able to get done while maintaining full speed.

Here is an example: http://webglsamples.org/lots-o-objects/lots-o-objects-draw-elements.html

It basically draws every frame. If the frame rate was 60 FPS, it draws 10 more objects the next frame. If the frame rate was less than 60 FPS, it draws fewer.

Since browser timing is not perfect, you might choose a slightly lower target, such as 57 FPS, to find out how fast it can go.
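A minimal sketch of that idea (assuming a target of 57 FPS and a hypothetical drawObjects(count) function standing in for whatever the benchmark draws; neither name comes from the linked sample):

    var targetFPS = 57;
    var objectCount = 10;
    var lastTime = 0;

    function frame(now) {
        requestAnimationFrame(frame);
        var elapsed = now - lastTime;   // milliseconds since the previous frame
        lastTime = now;

        var fps = 1000 / elapsed;
        if (fps >= targetFPS) {
            objectCount += 10;          // still hitting the target: add more work
        } else if (objectCount > 10) {
            objectCount -= 10;          // fell below the target: back off
        }

        drawObjects(objectCount);       // hypothetical: draw that many objects
    }
    requestAnimationFrame(frame);

The number of objects the loop settles on is the figure you compare across machines, GPUs, or between WebGL and OpenGL.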

On top of that, WebGL and OpenGL really just talk to the GPU, and the GPU does the real work. The work done by the GPU takes the same amount of time regardless of whether WebGL or OpenGL asked the GPU to do it. The only difference is the overhead of setting up the GPU. That means you really do not want to draw anything heavy. Ideally you would draw almost nothing. Make your canvas 1x1 pixels in size, draw a single triangle, and check the timing (as in, how many single triangles can you draw, one triangle per draw call, in WebGL vs OpenGL at 60 frames per second).

It gets even worse, though. A real application will switch shaders, switch buffers, switch textures, and update attributes and uniforms. So what do you time? How many times can you call gl.drawBuffers at 60 frames per second? How many times can you call gl.enable or gl.vertexAttribPointer or gl.uniform4fv at 60 frames per second? Some combination? What is a reasonable combination? 10% calls to gl.vertexAttribPointer + 5% calls to gl.bindBuffer + 10% calls to gl.uniform? The timing of those calls is the only thing that differs between WebGL and OpenGL, since they end up talking to the same GPU, and that GPU runs at the same speed regardless.
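A hedged sketch of what such a call-overhead micro-benchmark could look like (a 1x1 canvas with a trivial shader already bound; gl, uniformLocation, and the program setup are assumed to exist elsewhere, and callsPerFrame would be raised or lowered with the same keep-going-until-you-miss-the-target logic as above):

    var callsPerFrame = 1000;   // adjust until the frame rate target is just met

    function timedFrame() {
        requestAnimationFrame(timedFrame);
        for (var i = 0; i < callsPerFrame; i++) {
            // Redundant state changes plus a minimal draw, so that only the
            // API/driver overhead is being exercised, not real GPU work.
            gl.uniform4fv(uniformLocation, [(i % 255) / 255, 0, 0, 1]);
            gl.drawArrays(gl.TRIANGLES, 0, 3);
        }
    }
    requestAnimationFrame(timedFrame);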

Actually, you do not want to use frame rate to compare these things because, as you already mentioned, you are artificially capped at 60 FPS due to VSYNC.

The number of frames presented is limited by the buffer swap operation when VSYNC is in use, and you want that factored out of your performance measurement. What you should do is start a timer at the beginning of your frame, then at the end of the frame (right before the buffer swap) issue glFinish (...) and stop the timer. Compare the number of milliseconds it takes to draw (or whatever resolution your timer has) instead of the number of frames.
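In WebGL terms, a rough sketch of that measurement might look like this (using performance.now() for the timer and gl.finish() as the WebGL counterpart of glFinish; gl is the WebGL context and drawScene() stands in for your actual rendering calls):

    function measureFrame() {
        var start = performance.now();

        drawScene();        // your actual rendering calls
        gl.finish();        // block until the GPU has finished the submitted work

        var elapsedMs = performance.now() - start;
        console.log('frame took ' + elapsedMs.toFixed(2) + ' ms');
    }

WebGL has no explicit buffer swap in your own code; the browser presents the canvas after your callback returns, so calling gl.finish() just before returning plays roughly the role of the pre-swap glFinish here.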


The correct solution is to use the ANGLE_timer_query extension, if available.

Quoting from the specification:

OpenGL implementations have historically provided little to no useful timing information. Applications can get some idea of timing by reading timers on the CPU, but these timers are not synchronized with the graphics rendering pipeline. Reading a CPU timer does not guarantee the completion of a potentially large amount of graphics work accumulated before the timer is read, and will thus produce wildly inaccurate results. glFinish() can be used to determine when previous rendering commands have been completed, but will idle the graphics pipeline and adversely affect application performance.

This extension provides a query mechanism that can be used to determine the amount of time it takes to fully complete a set of GL commands, without stalling the rendering pipeline. It uses the query object mechanisms first introduced in the occlusion query extension, which allow time intervals to be polled by the application in an asynchronous manner.

(emphasis mine)
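A hedged sketch of how such a timer query might be used from WebGL (this assumes the EXT_disjoint_timer_query variant of the mechanism, which is how browsers commonly expose it; the exact extension name and entry points depend on the browser, and gl and drawScene() are assumed to exist elsewhere):

    var ext = gl.getExtension('EXT_disjoint_timer_query');

    if (ext) {
        var query = ext.createQueryEXT();
        ext.beginQueryEXT(ext.TIME_ELAPSED_EXT, query);

        drawScene();   // the GL commands you want to time

        ext.endQueryEXT(ext.TIME_ELAPSED_EXT);

        // Poll later (e.g. a few frames on), without stalling the pipeline.
        function poll() {
            var available = ext.getQueryObjectEXT(query, ext.QUERY_RESULT_AVAILABLE_EXT);
            var disjoint = gl.getParameter(ext.GPU_DISJOINT_EXT);
            if (available && !disjoint) {
                var nanoseconds = ext.getQueryObjectEXT(query, ext.QUERY_RESULT_EXT);
                console.log('GPU time: ' + (nanoseconds / 1e6) + ' ms');
            } else if (!available) {
                requestAnimationFrame(poll);
            }
        }
        requestAnimationFrame(poll);
    }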

