What causes run-time fluctuations when presenting a renderbuffer? (OpenGL)

Here's what happens:

  • The drawGL function is called at the exact end of the frame by means of usleep, as suggested. This already sustains a fairly constant frame rate.

  • The actual presentation of the renderbuffer happens in drawGL(). Measuring the time this takes gives me fluctuating runtimes, which leads to stuttering in my animation. The timer uses mach_absolute_time, so it is extremely accurate.

  • At the end of each frame I measure timeDifference. Yes, it averages 1 millisecond, but it deviates significantly: from 0.8 to 1.2 milliseconds, with peaks of more than 2 milliseconds.

Example:

    // Every so often (a fraction of a second) I call tick:
    - (void)tick {
        [self drawGL];
    }

    - (void)drawGL {
        // startTime using mach_absolute_time;
        glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
        [context presentRenderbuffer:GL_RENDERBUFFER_OES];
        // endTime using mach_absolute_time;
        // timeDifference = endTime - startTime;
    }
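For reference, here is a minimal sketch of the timing helper the comments above assume, converting mach_absolute_time ticks to milliseconds via mach_timebase_info (the helper name MachTicksToMilliseconds is mine, not from the original code):

    #include <mach/mach_time.h>

    // Convert an interval of mach_absolute_time ticks to milliseconds.
    static double MachTicksToMilliseconds(uint64_t ticks) {
        static mach_timebase_info_data_t timebase;
        if (timebase.denom == 0) {
            // Query the tick-to-nanosecond ratio once; it never changes.
            mach_timebase_info(&timebase);
        }
        double nanoseconds = (double)ticks * timebase.numer / timebase.denom;
        return nanoseconds / 1.0e6;
    }

    // Used in drawGL:
    //   uint64_t startTime = mach_absolute_time();
    //   ...present the renderbuffer...
    //   uint64_t endTime = mach_absolute_time();
    //   double timeDifference = MachTicksToMilliseconds(endTime - startTime);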

My understanding is that, once the framebuffer has been created, presenting the renderbuffer should always take the same amount of effort, regardless of the complexity of the frame. Is that true? And if not, how can I prevent this?

By the way, this is a sample application for the iPhone, so we are talking about OpenGL ES here, although I don't think this is a platform-specific problem. If it is, what is happening, and shouldn't it be avoidable? And again, if so, how can I prevent it?

+4
5 answers

The anomalies you encounter can be caused by many factors, including the OS scheduler kicking in and giving the CPU to another process, or similar effects. In practice, a person will not notice the difference between a render time of 1 ms and 2 ms. Motion pictures run at about 24-25 frames per second, which means each frame is displayed for roughly 40 ms, and that still looks fluid to the human eye.

As for the stuttering animation, you should look into maintaining a constant animation speed. The most common approach I've seen looks something like this:

    while (loop) {
        lastFrameTime = /* time it took the last frame to render */;
        timeSinceLastUpdate += lastFrameTime;

        if (timeSinceLastUpdate > (1.0 / DESIRED_UPDATES_PER_SECOND)) { // seconds per update
            updateAnimation(timeSinceLastUpdate);
            timeSinceLastUpdate = 0;
        }

        // do the drawing
        presentScene();
    }

Alternatively, you can simply pass lastFrameTime to the update, animate every frame, and interpolate between animation states. The result will be even more fluid.
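In C-style pseudocode, that variable-timestep variant might look like the following. This is a sketch, not code from the original answer; getElapsedSeconds() stands in for any monotonic clock (for example, one built on mach_absolute_time):

    double previousTime = getElapsedSeconds();     // hypothetical monotonic clock
    while (running) {
        double now = getElapsedSeconds();
        double lastFrameTime = now - previousTime; // actual duration of the last frame
        previousTime = now;

        // Scale all movement by the real elapsed time, so a slow frame
        // advances the animation further instead of making it stutter.
        updateAnimation(lastFrameTime);
        presentScene();
    }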

If you're already doing something like the above, perhaps you should look for the culprit in other parts of the rendering loop. In Direct3D the expensive things were draw-primitive calls and render-state changes, so you might check their OpenGL counterparts.

+1

My favorite OpenGL expression of all time is "implementation specific". I think it applies very well here.

+1

A quick search for mach_absolute_time leads to this article: Link

It seems that the resolution of this timer on the iPhone is only 166.67 ns (and possibly worse). While that might account for the size of the measured swings, it does not explain why there is any variation at all.

There are probably three main reasons:

  • Different execution paths during renderbuffer presentation. A lot can happen in 1 ms, and just because you call the same functions with the same parameters does not mean the exact same instructions are executed. This is especially true when other hardware is involved.
  • Interrupts / other processes: something else is always running and taking the CPU away. As far as I know, iPhone OS is not a real-time operating system, so there is no guarantee that any operation will complete within a given time frame (and even a real-time OS would show some timing variation).
  • Any other OpenGL calls still being processed by the GPU can delay presentRenderbuffer. To rule this out while testing, simply call glFinish() before taking the start time, as in the sketch below.
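For example, a minimal sketch of that glFinish() test, reusing the variable names from the question's code:

    glFinish(); // block until all previously issued GL commands have completed
    uint64_t startTime = mach_absolute_time();
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER_OES];
    uint64_t endTime = mach_absolute_time();
    // Now the measured difference reflects only the presentation itself.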
+1

It is better not to rely on a constantly high frame rate, for a number of reasons; the most important is that the OS can do something in the background that slows things down. It is better to sample a timer, determine how much time each frame actually took, and advance the animation by that amount; this should keep the animation smooth.
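On the iPhone specifically, one way to get both the pacing and the per-frame elapsed time is a display link (available from iPhone OS 3.1 on). This is my sketch, not the answerer's code; previousTimestamp and updateAnimation: are hypothetical members of the view controller:

    // Set up once: the display link fires the selector at the screen's refresh rate.
    CADisplayLink *displayLink =
        [CADisplayLink displayLinkWithTarget:self selector:@selector(tick:)];
    [displayLink addToRunLoop:[NSRunLoop currentRunLoop]
                      forMode:NSDefaultRunLoopMode];

    - (void)tick:(CADisplayLink *)link {
        // link.timestamp is the time of the last displayed frame; keep the
        // previous value to compute how long the last frame actually took.
        double lastFrameTime = link.timestamp - self.previousTimestamp;
        self.previousTimestamp = link.timestamp;

        [self updateAnimation:lastFrameTime]; // advance by real elapsed time
        [self drawGL];
    }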

0

Is it possible that the timer is simply not accurate at the sub-millisecond level, even though it returns fractional values in the 0.8 to 2.0 range?

0
