How can I implement an exact (but adjustable) FPS cap in my OpenGL application?

I am currently working on an OpenGL application that displays several 3D spheres to the user, which they can rotate, move, etc. There is not much complexity involved, so the application runs at a fairly high frame rate (~500 FPS).

Obviously this is overkill; even 120 would be more than enough. But my problem is that running the application at full tilt maxes out my CPU, causing excessive heat, power consumption, etc. What I want is to let the user set an FPS cap so that the CPU is not used excessively when it does not need to be.

I am working with freeglut and C++, and have already set up the animation/event handling to use timers (via glutTimerFunc). The problem is that glutTimerFunc only accepts an integer number of milliseconds, so if I want 120 FPS, the closest I can get is (int)1000/120 = 8 ms, which works out to 125 FPS (I know that is a negligible difference, but I still want to be able to set an FPS cap and hit exactly that FPS whenever the system can render faster).

Also, using glutTimerFunc to limit the FPS is never consistent. Say I cap the application at 100 FPS: it usually never rises above 90-95 FPS. Again, I have tried measuring the time delta between renders/calculations, but then it always overshoots the limit by 5-10 FPS (possibly due to timer resolution).

I think the best comparison here would be a game (for example, Half-Life 2): you set your FPS cap, and it always hits exactly that amount. I know I could measure the time delta before and after rendering each frame and then busy-loop until it is time to draw the next one, but that does not solve the 100% CPU usage problem, nor does it solve the timer-resolution problem.

Is there any way to implement an effective, cross-platform, variable frame-rate limiter in my application? Or, put another way, is there any cross-platform (and open-source) library that implements high-resolution timers and sleep functions?

Edit: I would prefer a solution that does not rely on the end user enabling VSync, since I am going to let them specify the FPS cap.

Edit #2: to anyone recommending SDL (which I did end up porting my application to): is there a difference between using glutTimerFunc to trigger a draw and using SDL_Delay to wait between draws? The documentation for each mentions the same caveats, but I was not sure whether one is more or less efficient than the other.

Edit #3: Basically, I am trying to figure out whether there is an (easy) way to implement an exact FPS limiter in my application (again, like Half-Life 2). If that is not possible, I will most likely switch to SDL (it makes more sense to me to use its delay function than to use glutTimerFunc to call the render function every x milliseconds).

+8
c++ cross-platform 3d opengl freeglut
5 answers

I would suggest using sub-millisecond system timers (QueryPerformanceCounter, gettimeofday) to get timing data. They will also help you profile performance in optimized builds.
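For illustration, here is a minimal sketch of such a timer; the hiresSeconds name and the Windows/POSIX split are my own example, not a fixed API. Take the difference of two calls around a frame to get a sub-millisecond frame time.

    // Cross-platform high-resolution "seconds since some epoch" helper.
    // Assumes Windows (QueryPerformanceCounter) or POSIX (gettimeofday).
    #ifdef _WIN32
    #include <windows.h>

    double hiresSeconds()
    {
        LARGE_INTEGER freq, now;
        QueryPerformanceFrequency(&freq);   // ticks per second
        QueryPerformanceCounter(&now);      // current tick count
        return static_cast<double>(now.QuadPart) / freq.QuadPart;
    }
    #else
    #include <sys/time.h>

    double hiresSeconds()
    {
        timeval tv;
        gettimeofday(&tv, 0);               // microsecond resolution on most systems
        return tv.tv_sec + tv.tv_usec / 1000000.0;
    }
    #endif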

+1

I would recommend using SDL. I personally use it to manage my timers. In addition, as of SDL 1.3 it can limit your FPS to the screen refresh rate (V-Sync). That lets you cap CPU usage while getting the best performance your screen can show (even if you render more frames, they cannot be displayed anyway, since your screen does not refresh fast enough).

The function is:

    SDL_GL_SetSwapInterval(1);
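To put it in context, here is a minimal sketch assuming an SDL2-style setup (the window/context creation is my own illustration, not part of the answer):

    // Create the window and GL context first, then request vsync on the buffer swap.
    SDL_Window *window = SDL_CreateWindow("spheres",
            SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
            800, 600, SDL_WINDOW_OPENGL);
    SDL_GLContext context = SDL_GL_CreateContext(window);

    if (SDL_GL_SetSwapInterval(1) != 0) {
        // The driver may refuse vsync; fall back to a manual limiter in that case.
    }

    // ... render ...
    SDL_GL_SwapWindow(window);   // now blocks until the next vertical retrace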

If you need some code for timers using SDL, you can see it here:

my timer class

Good luck :)

+4

You should not try to limit the rendering rate manually; instead, synchronize with the display's vertical refresh. This is done by enabling V-sync in the graphics driver settings. As well as keeping (your) programs from rendering at excessively high rates, it also improves image quality by avoiding tearing.

The swap interval extensions let your application fine-tune the V-sync behaviour. But in most cases, just enabling V-sync in the driver and having the buffer swap block until the sync is sufficient.
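As an illustration of those swap interval extensions, here is a rough sketch of setting the interval directly, assuming the driver exposes WGL_EXT_swap_control (Windows) or GLX_SGI_swap_control (X11); the enableVSync wrapper is my own naming, and a current GL context is required:

    #ifdef _WIN32
    #include <windows.h>
    typedef BOOL (WINAPI *SwapIntervalFn)(int interval);

    void enableVSync()
    {
        // wglSwapIntervalEXT has to be fetched from the driver at runtime.
        SwapIntervalFn wglSwapIntervalEXT =
            (SwapIntervalFn)wglGetProcAddress("wglSwapIntervalEXT");
        if (wglSwapIntervalEXT)
            wglSwapIntervalEXT(1);   // block each swap until the next vertical retrace
    }
    #else
    #include <GL/glx.h>
    typedef int (*SwapIntervalFn)(int interval);

    void enableVSync()
    {
        SwapIntervalFn glXSwapIntervalSGI =
            (SwapIntervalFn)glXGetProcAddress((const GLubyte *)"glXSwapIntervalSGI");
        if (glXSwapIntervalSGI)
            glXSwapIntervalSGI(1);   // same effect for GLX contexts
    }
    #endif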

+4

The easiest way to solve this is to enable VSync. That is what I do in most games to keep my laptop from getting too hot. As long as you make sure the speed of your rendering path is not tied to the rest of your logic, this should be fine.

There is a function glutGet(GLUT_ELAPSED_TIME) that returns the time since startup in milliseconds, but that resolution is probably still not fine enough.
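A small sketch of using it to measure per-frame time (the display callback and the frameMs bookkeeping are my own illustration):

    #include <GL/freeglut.h>

    void display(void)
    {
        static int lastMs = 0;
        int nowMs = glutGet(GLUT_ELAPSED_TIME);   // ms since glutInit(), 1 ms resolution
        int frameMs = nowMs - lastMs;             // time the previous frame took
        lastMs = nowMs;

        // ... render the scene, then use frameMs to decide how long to wait ...
        glutSwapBuffers();
    }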

An easy way is to build your own timer method that uses QueryPerformanceCounter on Windows and gettimeofday on POSIX systems.

Or you could always use the timer functions from SDL or SFML, which basically do the same as the above.

+3

I think a good way to achieve this, no matter which graphics library you use, is to take a single clock measurement in the game loop so that every tick (ms) is accounted for. That way the average FPS will be exactly the limit, just like in Half-Life 2. Hopefully the following code snippet explains what I am talking about:

    //FPS limit
    unsigned int FPS = 120;

    //clock time at the last measurement
    double clock = 0;

    while (cont) {
        //difference between clock times
        double deltaticks;

        //clock time in this new frame
        double newclock;

        //do stuff, update stuff, render stuff...

        //measure the clock time of this frame
        //this function can be replaced by any function returning the time in ms,
        //for example clock() from <time.h>
        newclock = SDL_GetTicks();

        //calculate the clock ticks missing until the next loop iteration should
        //run to achieve an average framerate of FPS
        //1000.0 / 120 makes 8.333... ticks per frame
        deltaticks = 1000.0 / FPS - (newclock - clock);

        /* if there is an integral number of ticks missing, wait the remaining time;
           SDL_Delay takes an integer number of ms to delay the program, like most
           delay functions do, and can be replaced by any delay function */
        if (floor(deltaticks) > 0)
            SDL_Delay(deltaticks);

        //the clock measurement is now shifted forward in time by the amount
        //SDL_Delay waited plus the fractional part that was not considered yet
        //(aka deltaticks); the fractional part is considered in the next frame
        if (deltaticks < -30) {
            /* don't try to compensate for being more than 30 ms (a few frames)
               behind the framerate; when the limit is higher than the achievable
               average fps, deltaticks would keep sinking without this 30 ms
               limitation; this keeps the fps stable even if the real achievable
               fps is macroscopically inconsistent */
            clock = newclock - 30;
        } else {
            clock = newclock + deltaticks;
        }

        /* deltaticks can be negative when a frame took longer than it should have,
           or when the measured time the frame took was zero; the next frame then
           won't be delayed as long, to compensate for the previous frame taking
           longer */

        //do some more stuff, swap buffers for example:
        SDL_RenderPresent(renderer); //this is SDL's swap-buffers function
    }

I hope this SDL example helps. It is important to measure the time only once per frame, so every frame is accounted for. I recommend wrapping this timing in a function, which also makes your code clearer. Here is the same snippet without the comments, in case they just annoyed you in the version above:

    unsigned int FPS = 120;

    void renderPresent(SDL_Renderer * renderer)
    {
        static double clock = 0;
        double deltaticks;
        double newclock = SDL_GetTicks();

        deltaticks = 1000.0 / FPS - (newclock - clock);

        if (floor(deltaticks) > 0)
            SDL_Delay(deltaticks);

        if (deltaticks < -30) {
            clock = newclock - 30;
        } else {
            clock = newclock + deltaticks;
        }

        SDL_RenderPresent(renderer);
    }

Now you can call this function in your main loop instead of the swap-buffer function (SDL_RenderPresent(renderer) in SDL). In SDL you must make sure the SDL_RENDERER_PRESENTVSYNC flag is turned off. This function relies on the global variable FPS, but you can think of other ways of storing it; I just put it all in my library's namespace.
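For example, a small usage sketch under those assumptions (SDL2, a window already created, and the renderPresent function above); note that SDL_RENDERER_PRESENTVSYNC is deliberately not passed:

    // Plain accelerated renderer, no PRESENTVSYNC flag, so the pacing comes
    // entirely from renderPresent().
    SDL_Renderer *renderer = SDL_CreateRenderer(window, -1, SDL_RENDERER_ACCELERATED);

    while (running) {
        // handle events, update the simulation, draw with the renderer ...
        renderPresent(renderer);   // replaces the direct SDL_RenderPresent() call
    }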


This method of limiting the frame rate delivers a precisely defined average frame rate, as long as there are no large variations in loop time across several frames, thanks to the 30 ms limit on deltaticks. That limit is needed: when the FPS limit is higher than the frame rate actually achievable, deltaticks would otherwise keep sinking indefinitely, and when the frame rate then rises above the FPS limit again, the code would try to compensate for the lost time by rendering each frame immediately, producing a huge frame rate until deltaticks climbs back to zero. You can change the 30 ms according to your needs; it is just an estimate of mine. I did a couple of tests with Fraps. It works with every imaginable frame rate and gives excellent results from what I have tested.


I must admit I only coded this yesterday, so a bug is not unlikely. I know this question was asked 5 years ago, but the existing answers did not satisfy me. Also, feel free to edit this post, as it is my first one and probably flawed.

EDIT: It has been brought to my attention that SDL_Delay is very inaccurate on some systems. I heard of a case where it delayed far too long on Android. This means my code might not be portable to all the systems you are targeting.

+3
