Handling a blocking function call in Python

I am working with the GNU Radio framework. I run flow graphs that I generate to send/receive signals. These flow graphs are initialized and started, but they do not return control to my application:

time is imported earlier in the script:

    while time.time() < endtime:
        # invoke GRC flowgraph for 1st sequence
        if not seq1_sent:
            tb = send_seq_2.top_block()
            tb.Run(True)
            seq1_sent = True
            if time.time() < endtime:
                break
        # invoke GRC flowgraph for 2nd sequence
        if not seq2_sent:
            tb = send_seq_2.top_block()
            tb.Run(True)
            seq2_sent = True
            if time.time() < endtime:
                break

The problem is this: only the first if statement ever invokes the flow graph (which interacts with the hardware), and then I am stuck there. I could use a Thread, but I don't know how to kill threads in Python, and I doubt it is even possible, since killing threads does not seem to be part of the API. This script only has to work on Linux...

How do you handle blocking function calls in Python correctly, without killing the whole program? Another specific example of the problem:

    import signal, os

    def handler(signum, frame):
        # print 'Signal handler called with signal', signum
        # raise IOError("Couldn't open device!")
        import time
        print "wait"
        time.sleep(3)

    def foo():
        # Set the signal handler and a 5-second alarm
        signal.signal(signal.SIGALRM, handler)
        signal.alarm(3)

        # This open() may hang indefinitely
        fd = os.open('/dev/ttys0', os.O_RDWR)

        signal.alarm(0)  # Disable the alarm

    foo()
    print "hallo"

How do I get it to print "hallo"? ;)

Thanks Marius

+4
8 answers

First of all, you should avoid using signals at all costs:

1) They can lead to a deadlock: SIGALRM can reach the process before the syscall blocks (imagine ultra-high load on the system!), and then the syscall will not be interrupted. Deadlock.

2) Playing with signals can have unpleasant non-local consequences. For example, syscalls in other threads may get interrupted, which is usually not what you want. Normally, system calls are restarted when a (non-fatal) signal is received; when you set up a signal handler, that behavior is automatically disabled for the whole process, or group of threads, so to speak. See 'man siginterrupt' for details.
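
For reference, Python exposes that knob as signal.siginterrupt(); a minimal illustration, assuming a SIGALRM handler has already been installed and you want interrupted syscalls restarted rather than failing with EINTR:

    import signal

    def handler(signum, frame):
        pass

    signal.signal(signal.SIGALRM, handler)
    # flag=False: restart system calls that get interrupted by SIGALRM
    # instead of having them fail with EINTR (see man siginterrupt).
    signal.siginterrupt(signal.SIGALRM, False)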

Believe me, I have run into both of these problems before, and they are not fun at all.

In some cases, blocking can be avoided explicitly: I highly recommend using select() and friends (check the select module in Python) to handle blocking writes and reads. That does not solve a blocking open() call, though.

For that, I have tested this solution and it works well for named pipes: it opens the file in non-blocking mode, then turns non-blocking off again, and uses select() to wait with a timeout in case nothing is available.

    import sys, os, select, fcntl

    f = os.open(sys.argv[1], os.O_RDONLY | os.O_NONBLOCK)

    flags = fcntl.fcntl(f, fcntl.F_GETFL, 0)
    fcntl.fcntl(f, fcntl.F_SETFL, flags & ~os.O_NONBLOCK)

    r, w, e = select.select([f], [], [], 2.0)
    if r == [f]:
        print 'ready'
        print os.read(f, 100)
    else:
        print 'unready'
    os.close(f)

Check it out with

    mkfifo /tmp/fifo
    python <code_above.py> /tmp/fifo    # 1st terminal
    echo abcd > /tmp/fifo               # 2nd terminal

With some extra effort, the select() call can be used as the main loop of the entire program, aggregating all the events; you can use libev or libevent, or some Python wrappers around them.

If you cannot force non-blocking behavior explicitly, say because you are just using an external library, then it gets much more complicated. Threads can work, but obviously that is not the cleanest solution, and usually just the wrong one.

I'm afraid that in general you cannot solve this robustly; it really depends on what is blocking.

+6

IIUC, each top_block has a stop() method. So you could run the top_block in a thread and issue a stop if the timeout has arrived. It would be better if the top_block's wait() also had a timeout, but alas, it does not.

In the main thread you then need to wait for two cases: a) the top_block finishes, or b) the timeout expires. Busy waiting is evil :-), so you should use the thread's join-with-timeout to wait for the thread. If the thread is still alive after the join, you need to stop the top_block.
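
A minimal sketch of that idea, assuming the send_seq_2.top_block from the question and that its run() blocks until the flow graph finishes (TIMEOUT is a placeholder value):

    import threading

    TIMEOUT = 10.0  # seconds; pick whatever deadline you need

    tb = send_seq_2.top_block()
    worker = threading.Thread(target=tb.run)  # run() blocks inside the worker thread
    worker.start()

    worker.join(TIMEOUT)     # wait for the flow graph, but no longer than TIMEOUT
    if worker.is_alive():    # timeout expired and the flow graph is still running
        tb.stop()            # ask the scheduler to shut the flow graph down
        worker.join()        # run() should now return and the worker exits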

+4

You can set an alarm that will interrupt your call after a timeout:

http://docs.python.org/library/signal.html

    signal.alarm(1)  # 1 second
    my_blocking_call()
    signal.alarm(0)

You can also install a signal handler if you want to make sure it does not kill your application:

    def my_handler(signum, frame):
        pass

    signal.signal(signal.SIGALRM, my_handler)

EDIT: What is wrong with this code? It should not kill your application:

    import signal, time

    def handler(signum, frame):
        print "Timed-out"

    def foo():
        # Set the signal handler and a 5-second alarm
        signal.signal(signal.SIGALRM, handler)
        signal.alarm(3)

        # This open() may hang indefinitely
        time.sleep(5)

        signal.alarm(0)  # Disable the alarm

    foo()
    print "hallo"

The point is that:

  • The default action for SIGALRM is to terminate the application; if you install a handler, it should no longer kill the application.

  • Receiving the signal usually interrupts system calls (which then unblocks your application).

+2

The simple part of your question is the signal handling. From the perspective of the Python runtime, a signal received while the interpreter is blocked in a system call is presented to your Python code as an OSError exception whose errno attribute is set to the corresponding errno.EINTR.

So this probably works something like you expected:

    #!/usr/bin/env python
    import signal, os, errno, time

    def handler(signum, frame):
        # print 'Signal handler called with signal', signum
        # raise IOError("Couldn't open device!")
        print "timed out"
        time.sleep(3)

    def foo():
        # Set the signal handler and a 5-second alarm
        signal.signal(signal.SIGALRM, handler)
        try:
            signal.alarm(3)
            # This open() may hang indefinitely
            fd = os.open('/dev/ttys0', os.O_RDWR)
        except OSError, e:
            if e.errno != errno.EINTR:
                raise e
        signal.alarm(0)  # Disable the alarm

    foo()
    print "hallo"

Note: I have moved the time import out of the function definition, since it seems like bad form to hide an import that way. It is not at all clear to me why you sleep in your signal handler, and frankly that seems like a pretty bad idea.

The key point I am trying to make is that any (non-ignored) signal interrupts your main line of Python execution. Your handler is called with arguments indicating which signal number triggered it (allowing you to use one Python function to handle many different signals) and a frame object (which can be used for debugging or instrumentation of some kind).

Because the main thread of execution through your code gets interrupted, you need to wrap that code in some exception handling in order to regain control after such events occur. (Incidentally, if you were writing in C you would have the same concern: you must be prepared for any of your library functions that wrap system calls to return an error with EINTR in the system errno, and then either retry or branch to some alternative in your main line (for example, going on to a different file, or proceeding without any file/input, etc.).)
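
To make the retry idea concrete in Python, here is a minimal sketch; the open_with_retry helper is made up for illustration and simply retries the open() a few times when it is interrupted:

    import os, errno

    def open_with_retry(path, flags, retries=3):
        # Retry an open() that keeps getting interrupted by signals (EINTR),
        # instead of giving up on the first interruption.
        for attempt in range(retries):
            try:
                return os.open(path, flags)
            except OSError, e:
                if e.errno != errno.EINTR:
                    raise
        return None  # still interrupted after all retries; caller decides what to do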

As other respondents have indicated in their answers, your SIGALRM approach is likely to be fraught with portability and reliability issues. Worse, some of those issues can be race conditions that you will never hit in your test environment and that only occur under conditions that are extremely hard to reproduce. The ugly details usually lurk in re-entrancy: what happens if a signal arrives while your signal handler is executing?

I have used SIGALRM in some scripts and it was not a problem for me on Linux. The code I was working on was adequate for its task; it may be adequate for your needs as well.

Your main question is hard to answer without knowing more about how this GNU Radio code behaves, which objects you create from it, and what they return.

Looking at the docs you linked to, I see that they do not seem to offer any "timeout" argument or setting that could be used to limit the blocking behavior directly. In the table in the section on controlling a flow graph, I see that they specifically say .run() can execute indefinitely or until SIGINT is received. I also note that .start() can start threads in your application and seems to return control to your line of Python code while they run. (That seems to depend on the nature of your flow graphs, which I don't understand well enough.)

It sounds like you could create your flow graphs, .start() them, and then (after some time processing or sleeping in your main line of Python code) call the .lock() method on the control object (tb?). That, I gather, puts the Python representation of the state ... of the Python object ... into a quiescent mode so that you can query its state or, as they say, reconfigure the flow graph. If you call .run(), it calls .wait() after calling .start(); and .wait() apparently runs until either all blocks "indicate they are done" or you call the object's .stop() method.

It sounds like you want to use .start(), and neither .run() nor .wait(); then call .stop() after doing whatever other processing you want (including, possibly, time.sleep()).

Maybe something as simple as:

    tb = send_seq_2.top_block()
    tb.start()
    time.sleep(endtime - time.time())
    tb.stop()
    seq1_sent = True

    tb = send_seq_2.top_block()
    tb.start()
    seq2_sent = True

... though I am suspicious of my time.sleep() there. You might prefer to do something where you poll the state of the tb object: perhaps sleeping in shorter intervals, calling its .lock() method and looking at attributes I know nothing about, and then calling its .unlock() before sleeping again.
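
Purely as a sketch of that polling idea (the poll interval is an arbitrary choice, and the locked section is left empty because I do not know which attributes of tb are worth inspecting):

    import time

    POLL_INTERVAL = 0.5  # seconds between checks; arbitrary

    tb = send_seq_2.top_block()
    tb.start()

    while time.time() < endtime:
        time.sleep(POLL_INTERVAL)
        tb.lock()
        # ... inspect whatever state of tb you care about here ...
        tb.unlock()

    tb.stop()
    seq1_sent = True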

+2
    if not seq1_sent:
        tb = send_seq_2.top_block()
        tb.Run(True)
        seq1_sent = True
        if time.time() < endtime:
            break

If "if time.time () <endtime: ', then you exit the loop and the seq2_sent stuff will never suffer, maybe you mean" time.time ()> endtime "in this test?

+1

You could try deferred execution... the Twisted framework uses them a lot:

http://www6.uniovi.es/python/pycon/papers/deferex/

0

You mention killing threads in Python - this is partially possible, although you can kill/interrupt another thread only while it is executing Python code, not C code, so it may not help you the way you want.

See the answer to another question: python: how to send packets to multiple threads and then kill the thread

or google for "python killable threads" for more details: http://code.activestate.com/recipes/496960-thread2-killable-threads/

0

If you want to put a timeout on a blocking function, use threading.Thread, as it has a join(timeout) method that blocks for at most the given timeout.

Basically, something like this should do what you want:

    import threading

    my_thread = threading.Thread(target=send_seq_2.top_block)
    my_thread.start()
    my_thread.join(TIMEOUT)
-1
