I'm currently working on an OpenGL application that displays several 3D spheres to the user, which they can rotate, move around, etc. There isn't much complexity involved, so the application runs at a fairly high frame rate (~500 FPS).
Obviously this is overkill; even 120 would be more than enough. But my problem is that running the application at full tilt maxes out my CPU, causing excessive heat, power consumption, etc. What I'd like to do is let the user set an FPS cap so that the CPU isn't used excessively when it doesn't need to be.
I'm working with freeglut and C++, and have already set up the animation/display handling to use timers (via glutTimerFunc). However, glutTimerFunc only accepts an integer number of milliseconds, so if I want 120 FPS, the closest I can get is (int)1000/120 = 8 ms, which actually corresponds to 125 FPS. (I know that's a negligible difference, but I'd still like to be able to set an FPS cap and hit exactly that FPS when I know the system can render faster.)
Also, using glutTimerFunc to limit the FPS never works consistently. Say I cap the application at 100 FPS; it usually never goes above 90-95 FPS. Again, I've tried accounting for the time spent rendering/computing, but then it always overshoots the limit by 5-10 FPS (possibly a timer-resolution issue).
I think the best comparison here would be a game (Half-Life 2, for example): you set your FPS cap and it always hits exactly that amount. I know I could measure the time delta before and after rendering each frame and then busy-loop until it's time to draw the next one, but that doesn't solve the 100% CPU usage problem, nor the timer-resolution problem.
Is there any way to implement an efficient, cross-platform, variable frame-rate limiter in my application? Or, put another way, is there a cross-platform (and open-source) library that provides high-resolution timers and sleep functions?
Edit: I would prefer a solution that doesn't rely on the end user having VSync enabled, since I'm going to let them specify the FPS cap.
Edit #2: to anyone recommending SDL (I have, in fact, ported my application to SDL): is there any difference between using glutTimerFunc to trigger a draw and using SDL_Delay to wait between draws? The documentation for each mentions the same caveats, but I wasn't sure whether one is more or less efficient than the other.
Edit #3: Basically, I'm trying to figure out whether there is an (easy) way to implement an accurate FPS limiter in my application (again, like Half-Life 2). If that isn't possible, I'll most likely switch to SDL (to me it makes more sense to use a delay function rather than using glutTimerFunc to call the render function every x milliseconds).