I have an asynchronous dataflow system written in C++. In this dataflow architecture, an application is a set of component instances that are initialized at startup and then exchange data via predefined messages. One component type, Pulsar, provides a "clock signal" to the components connected to it (for example, Delay): it emits a message (by calling the dataflow manager's API) every X ms, where X is the value of its "frequency" parameter, given in milliseconds.
In short, the task is to call a function (method) every X ms. What is the best/accepted way to do this? Is there a well-known pattern for it?
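To make the question concrete, here is roughly the loop I have in mind (a minimal sketch only; `fire()` and `freq_ms` are made-up names standing in for our real dataflow-manager API):

```cpp
#include <time.h>  // nanosleep

void fire() { /* placeholder: send the clock message via the DF manager */ }

// Naive periodic loop: fire, then sleep for the whole period.
void pulsar_loop(long freq_ms)
{
    struct timespec period;
    period.tv_sec  = freq_ms / 1000;
    period.tv_nsec = (freq_ms % 1000) * 1000000L;

    for (;;) {
        fire();
        nanosleep(&period, NULL);  // each cycle lasts period + cost of fire()
    }
}
```

The obvious flaw is drift: every cycle takes the period plus the cost of `fire()`, so the ticks slowly fall behind, which is part of why I am asking whether there is an accepted pattern.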
These are the approaches I have found so far:
- Use SIGALRM. I don't think a signal-based alarm fits this purpose, and alarm() has a resolution of one second, which is far too coarse.
- Use a hardware timer interrupt. I don't need that level of accuracy, and I want to avoid HW-specific solutions, because the server is compiled for several platforms (for example ARM).
- Measure the elapsed time and usleep() until the next call. I'm not sure it is wise to keep issuing timing system calls like this (say, on 5 threads, each firing 10 times per second), but maybe I'm wrong. A drift-free variant of this is sketched after the list.
- Use the real-time features of the kernel. I know nothing about these, and I don't need crystal-exact timing anyway (this is not a nuclear reactor). Also, I cannot install an RT kernel on every platform; some targets only offer a stock 2.6.x kernel.
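To illustrate the third option above, here is the drift-free variant I am considering: sleeping until an absolute deadline on CLOCK_MONOTONIC with clock_nanosleep(). This is a sketch only; it requires the POSIX timers option (present in Linux 2.6, link with -lrt on older glibc), and as far as I know it is not available on Mac OS X:

```cpp
#include <time.h>

void fire() { /* placeholder: send the clock message via the DF manager */ }

// Sleep until the next absolute tick instead of for a relative interval,
// so the cost of fire() does not accumulate as drift.
void pulsar_loop_abs(long freq_ms)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (;;) {
        // Advance the deadline by freq_ms, normalizing tv_nsec to < 1e9.
        next.tv_nsec += (freq_ms % 1000) * 1000000L;
        next.tv_sec  += freq_ms / 1000 + next.tv_nsec / 1000000000L;
        next.tv_nsec %= 1000000000L;

        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        fire();
    }
}
```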
Perhaps the best answer would be a short excerpt from the source of an audio/video player (which I could not find or understand on my own).
UPDATE (requested by @MSalters): the co-author of the DF project uses Mac OS X, so we need a solution that works on most POSIX-compliant systems, not just Linux. In the future there may also be target devices running BSD or a limited/embedded Linux.
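Given that requirement, the most portable absolute-deadline loop I can see is C++11's std::this_thread::sleep_until on a steady clock (again only a sketch, and it assumes every target toolchain ships the C++11 `<chrono>` and `<thread>` headers):

```cpp
#include <chrono>
#include <thread>

void fire() { /* placeholder: send the clock message via the DF manager */ }

// Drift-free periodic loop using only the C++11 standard library,
// so it should behave the same on Linux, Mac OS X and BSD.
void pulsar_loop_portable(long freq_ms)
{
    const std::chrono::milliseconds period(freq_ms);
    auto next = std::chrono::steady_clock::now();

    for (;;) {
        next += period;
        std::this_thread::sleep_until(next);  // wake at an absolute deadline
        fire();
    }
}
```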