Implement sub-millisecond processing in Python without busy-waiting

How can I implement array-based processing with millisecond precision in Python under Linux (running on a single Raspberry Pi core)?

I am trying to play back information from a MIDI file that has been pre-processed into an array, where every millisecond I check whether the array has entries at the current timestamp and run some functions if it does.

I am currently using time.time() and busy-waiting (as concluded here). This eats up an entire processor core, so I doubt it is the best solution.

    # iterate through all milliseconds
    for current_ms in xrange(0, last + 1):
        start = time()
        # check if events are to be processed
        try:
            events = allEvents[current_ms]
            # iterate over all events for this millisecond
            for event in events:
                # check if event contains note information
                if 'note' in event:
                    # check if a mapping to a pin exists
                    if event['note'] in mapping:
                        pin = mapping[event['note']]
                        # check if event contains on/off information
                        if 'mode' in event:
                            if event['mode'] == 0:
                                pin_off(pin)
                            elif event['mode'] == 1:
                                pin_on(pin)
                            else:
                                debug("unknown mode in event:" + str(event))
                    else:
                        debug("no mapping for note:" + str(event['note']))
        except KeyError:  # no events for this millisecond
            pass
        end = time()
        # fill the rest of the millisecond
        while (end - start) < (1.0 / 1000.0):
            end = time()

where last is the millisecond of the last event (known from pre-processing).

This is not a question of time() vs. clock(); it is more about sleeping vs. busy-waiting.

I can't really sleep in the "fill the rest of the millisecond" loop because the accuracy of sleep() is too low. If I were to use ctypes, how would I do that correctly?

Is there some Timer library that calls back every millisecond reliably?

My current implementation is on GitHub. With this approach I get a skew of about 4 or 5 ms on drum_sample, which is 3.7 s long (with mocks, so no real hardware is attached). On the 30.7 s sample, the skew is about 32 ms (so it is at least not linear!).
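Part of that skew may be structural: each iteration in the loop above times the wait relative to its own start, so the loop overhead outside the timed region accumulates over the whole run. A variant pinned to absolute deadlines (a sketch, still busy-waiting, same assumptions as above) would at least stop the error from growing:

    from time import time

    t0 = time()
    for current_ms in xrange(0, last + 1):
        # ... process allEvents[current_ms] as above ...
        # busy-wait until this millisecond's absolute deadline,
        # so per-iteration overhead cannot accumulate
        while time() < t0 + (current_ms + 1) / 1000.0:
            pass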

I tried time.sleep() and nanosleep() via ctypes with the following code:

    import time
    import timeit
    import ctypes

    libc = ctypes.CDLL('libc.so.6')

    class Timespec(ctypes.Structure):
        """timespec struct for nanosleep, see:
        http://linux.die.net/man/2/nanosleep
        """
        _fields_ = [('tv_sec', ctypes.c_long),
                    ('tv_nsec', ctypes.c_long)]

    libc.nanosleep.argtypes = [ctypes.POINTER(Timespec),
                               ctypes.POINTER(Timespec)]
    nanosleep_req = Timespec()
    nanosleep_rem = Timespec()

    def nsleep(us):
        """Delay `us` microseconds with libc nanosleep() using ctypes."""
        if us >= 1000000:
            sec = int(us / 1000000)  # tv_sec must be an integer
            us %= 1000000
        else:
            sec = 0
        nanosleep_req.tv_sec = sec
        nanosleep_req.tv_nsec = int(us * 1000)
        libc.nanosleep(nanosleep_req, nanosleep_rem)

    LOOPS = 10000

    def do_sleep(min_sleep):
        """Measure the average time a sleep of min_sleep seconds really takes."""
        total = 0.0
        for i in xrange(0, LOOPS):
            start = timeit.default_timer()
            nsleep(min_sleep * 1000 * 1000)
            #time.sleep(min_sleep)
            end = timeit.default_timer()
            total += end - start
        return total / LOOPS

    iterations = 5
    iteration = 1
    min_sleep = 0.001
    result = None
    while True:
        result = do_sleep(min_sleep)
        if result > 1.5 * min_sleep:
            if iteration > iterations:
                break
            else:
                min_sleep = result
                iteration += 1
        else:
            min_sleep /= 2.0

    print('FIN: {0:.9f}'.format(result))

The result on my i5 is

FIN: 0.000165443

and on the RPi:

FIN: 0.000578617

which suggests a minimum sleep period of about 0.1 and 0.5 milliseconds respectively, plus jitter (it usually sleeps longer). That would at most help me reduce the load a bit.
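If I go further down the ctypes route, clock_nanosleep() with the TIMER_ABSTIME flag looks more promising: it sleeps until an absolute point on CLOCK_MONOTONIC, so an oversleep in one tick does not shift the following ticks. A sketch (untested on the Pi; the constants are the Linux values, and on older glibc these functions live in librt rather than libc):

    import ctypes

    # on older glibc (as on Raspbian) clock_* functions live in librt
    librt = ctypes.CDLL('librt.so.1', use_errno=True)

    CLOCK_MONOTONIC = 1  # Linux clockid_t value
    TIMER_ABSTIME = 1    # flag: the request is an absolute time

    class Timespec(ctypes.Structure):
        _fields_ = [('tv_sec', ctypes.c_long),
                    ('tv_nsec', ctypes.c_long)]

    librt.clock_gettime.argtypes = [ctypes.c_int, ctypes.POINTER(Timespec)]
    librt.clock_nanosleep.argtypes = [ctypes.c_int, ctypes.c_int,
                                      ctypes.POINTER(Timespec),
                                      ctypes.POINTER(Timespec)]

    deadline = Timespec()
    librt.clock_gettime(CLOCK_MONOTONIC, ctypes.byref(deadline))

    for current_ms in xrange(0, last + 1):
        # ... process allEvents[current_ms] here ...
        # advance the absolute deadline by exactly one millisecond
        deadline.tv_nsec += 1000000
        if deadline.tv_nsec >= 1000000000:
            deadline.tv_nsec -= 1000000000
            deadline.tv_sec += 1
        # sleep until the absolute deadline; an oversleep in one tick
        # does not accumulate into the next
        librt.clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME,
                              ctypes.byref(deadline), None)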

1 answer

One possible solution using the sched module:

    import sched
    import time

    def f(t0):
        print 'Time elapsed since t0:', time.time() - t0

    s = sched.scheduler(time.time, time.sleep)
    t0 = time.time()
    for i in range(10):
        s.enterabs(t0 + 10 + i, 0, f, (t0,))
    s.run()

Result:

    Time elapsed since t0: 10.0058200359
    Time elapsed since t0: 11.0022959709
    Time elapsed since t0: 12.0017120838
    Time elapsed since t0: 13.0022599697
    Time elapsed since t0: 14.0022521019
    Time elapsed since t0: 15.0015859604
    Time elapsed since t0: 16.0023040771
    Time elapsed since t0: 17.0023028851
    Time elapsed since t0: 18.0023078918
    Time elapsed since t0: 19.002286911

Besides a constant offset of about 2 milliseconds (which you could calibrate out), the jitter seems to be on the order of 1 or 2 milliseconds (as measured with time.time itself). Not sure if this is good enough for your application.
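Applied to your use case, you could also schedule only the milliseconds that actually contain events rather than ticking through every single one; the scheduler then sleeps through the gaps. A sketch, assuming allEvents maps millisecond offsets to lists of events and reusing the pin handling from your question:

    import sched
    import time

    s = sched.scheduler(time.time, time.sleep)

    def handle(events):
        for event in events:
            # same per-event handling as in the question
            if 'note' in event and event['note'] in mapping:
                pin = mapping[event['note']]
                if event.get('mode') == 0:
                    pin_off(pin)
                elif event.get('mode') == 1:
                    pin_on(pin)

    t0 = time.time()
    for ms, events in allEvents.items():  # only timestamps that have events
        s.enterabs(t0 + ms / 1000.0, 0, handle, (events,))
    s.run()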

If you need to do some useful work in the meantime, you should look into multithreading or multiprocessing.
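For instance, the scheduler's blocking run() can be moved to a background thread while the main thread stays free (a minimal sketch; note that the sched module is not thread-safe, so enqueue all events before starting the thread):

    import threading

    worker = threading.Thread(target=s.run)
    worker.start()
    # ... the main thread is free for other work here ...
    worker.join()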

Note: the standard Linux distribution running on the RPi is not a hard real-time operating system. In addition, Python can show non-deterministic timing, e.g. when it starts collecting garbage. So your code may run fine with low jitter most of the time, but you may see occasional "hiccups" that introduce a small delay.
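If the garbage collector turns out to be a source of those hiccups, one mitigation (assuming allocations during playback are modest) is to suspend automatic collection while a sample plays:

    import gc

    gc.disable()      # no automatic collection cycles during playback
    try:
        s.run()       # play the sample
    finally:
        gc.enable()
        gc.collect()  # clean up between samples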
