Exit Error for Python Multiprocessing

I see this when I press Ctrl-C to exit the application

    Error in atexit._run_exitfuncs:
    Traceback (most recent call last):
      File "/usr/lib/python2.6/atexit.py", line 24, in _run_exitfuncs
        func(*targs, **kargs)
      File "/usr/lib/python2.6/multiprocessing/util.py", line 269, in _exit_function
        p.join()
      File "/usr/lib/python2.6/multiprocessing/process.py", line 119, in join
        res = self._popen.wait(timeout)
      File "/usr/lib/python2.6/multiprocessing/forking.py", line 117, in wait
        return self.poll(0)
      File "/usr/lib/python2.6/multiprocessing/forking.py", line 106, in poll
        pid, sts = os.waitpid(self.pid, flag)
    OSError: [Errno 4] Interrupted system call
    Error in sys.exitfunc:
    Traceback (most recent call last):
      File "/usr/lib/python2.6/atexit.py", line 24, in _run_exitfuncs
        func(*targs, **kargs)
      File "/usr/lib/python2.6/multiprocessing/util.py", line 269, in _exit_function
        p.join()
      File "/usr/lib/python2.6/multiprocessing/process.py", line 119, in join
        res = self._popen.wait(timeout)
      File "/usr/lib/python2.6/multiprocessing/forking.py", line 117, in wait
        return self.poll(0)
      File "/usr/lib/python2.6/multiprocessing/forking.py", line 106, in poll
        pid, sts = os.waitpid(self.pid, flag)
    OSError: [Errno 4] Interrupted system call

I use Twisted on top of my own code.

I registered a Ctrl-C signal handler with the following code:

    def sigHandler(self, arg1, arg2):
        if not self.backuped:
            self.stopAll()
        else:
            out('central', 'backuped ALREADY, now FORCE exiting')
            exit()

    def stopAll(self):
        self.parserM.shutdown()
        for each in self.crawlM:
            each.shutdown()
        self.backup()
        reactor.stop()

To tell the workers that they should finish their work, each of them is asked to stop through

    exit = multiprocessing.Event()

    def shutdown(self):
        self.exit.set()

and all of my processes run a loop of this general form, in one way or another:

    def run(self):
        while not self.exit.is_set():
            # do something
        out('crawler', 'crawler exited successfully')
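For reference, here is a minimal, self-contained sketch of that pattern as I understand it (the CrawlerProcess name, the sleep, and the three-worker driver are illustrative, not the original code):

    import multiprocessing
    import time

    class CrawlerProcess(multiprocessing.Process):
        # Worker that loops until its exit event is set.
        def __init__(self):
            multiprocessing.Process.__init__(self)
            self.exit = multiprocessing.Event()   # shared stop flag, as in the question

        def shutdown(self):
            self.exit.set()                       # ask the worker to finish its loop

        def run(self):
            while not self.exit.is_set():
                time.sleep(0.1)                   # placeholder for real crawl work
            print('crawler exited successfully')

    if __name__ == '__main__':
        workers = [CrawlerProcess() for _ in range(3)]
        for w in workers:
            w.start()
        time.sleep(1)
        for w in workers:
            w.shutdown()
        for w in workers:
            w.join()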

Any idea what the error is? I only get it when I have multiple instances of a specific thread.

2 answers

This is due to the interplay of OS system calls, signals, and the way the multiprocessing module handles them. I'm not sure whether it's a bug or a feature, but it sits in somewhat tricky territory where Python meets the OS.

The problem is that multiprocessing blocks in waitpid until the child it is waiting on finishes. However, since you installed a signal handler for SIGINT, when your program receives that signal the system call is interrupted so your handler can run, and waitpid returns indicating it was interrupted by a signal. Python reports this case by raising an exception: the OSError with errno 4 (EINTR) in your traceback.

As a workaround, you can wrap the offending sections in a retry loop with try/except, for example wherever you join a process or otherwise wait on multiprocessing's Popen:

    import errno
    from multiprocessing import Process

    p = Process(target=func, args=stuff)
    p.start()

    notintr = False
    while not notintr:
        try:
            p.join()            # "offending code"
            notintr = True
        except OSError, ose:
            if ose.errno != errno.EINTR:
                raise ose
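If you have to do this in several places, it can be tidier to wrap the retry in a small helper. A minimal sketch, assuming Python 2.6 (the helper name is mine, not part of the answer):

    import errno

    def retry_on_eintr(call, *args, **kwargs):
        # Retry `call` until it completes without being interrupted
        # by a signal (OSError with errno set to EINTR).
        while True:
            try:
                return call(*args, **kwargs)
            except OSError, ose:
                if ose.errno != errno.EINTR:
                    raise

    # usage: retry_on_eintr(p.join) instead of a bare p.join()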

To apply the same fix inside multiprocessing's Popen itself, you would need to do something like this:

    import errno
    import os
    from multiprocessing import Process
    from multiprocessing.forking import Popen

    # see /path/to/python/libs/multiprocessing/forking.py
    class MyPopen(Popen):
        def poll(self, flag=os.WNOHANG):                        # from forking.py
            if self.returncode is None:                         # from forking.py
                notintr = False
                while not notintr:
                    try:
                        pid, sts = os.waitpid(self.pid, flag)   # from forking.py
                        notintr = True
                    except OSError, ose:
                        if ose.errno != errno.EINTR:
                            raise ose
                # Rest of Popen.poll from forking.py goes here

    p = Process(target=func, args=stuff)
    p._Popen = p
    p.start()
    p.join()

I saw this too, but it went away when I rolled my own signal handlers. Use reactor.run(installSignalHandlers=False) and define your own handlers for SIGINT, SIGTERM, etc.
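A rough sketch of what that looks like, assuming a plain Twisted reactor (the handler body is a placeholder, not the answerer's code):

    import signal
    from twisted.internet import reactor

    def handle_shutdown(signum, frame):
        # do your own cleanup here (tell workers to stop, back up state, ...)
        reactor.callFromThread(reactor.stop)

    # register our own handlers instead of Twisted's defaults
    signal.signal(signal.SIGINT, handle_shutdown)
    signal.signal(signal.SIGTERM, handle_shutdown)

    # installSignalHandlers=False keeps Twisted from overriding the handlers above
    reactor.run(installSignalHandlers=False)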

