Catch Ctrl+C / SIGINT and exit all processes gracefully in Python

How do I catch Ctrl+C in a multiprocessing Python program and gracefully exit all processes? I need a solution that works on both Unix and Windows. I tried the following:

```python
import multiprocessing
import time
import signal
import sys

jobs = []

def worker():
    signal.signal(signal.SIGINT, signal_handler)
    while(True):
        time.sleep(1.1234)
        print "Working..."

def signal_handler(signal, frame):
    print 'You pressed Ctrl+C!'
    # for p in jobs:
    #     p.terminate()
    sys.exit(0)

if __name__ == "__main__":
    for i in range(50):
        p = multiprocessing.Process(target=worker)
        jobs.append(p)
        p.start()
```

And it kind of works, but I don't think this is the right solution.

EDIT: This may be a duplicate of this.

+58
python · signals · multiprocessing

3 answers

The previously accepted solution has race conditions and does not work with map and async functions.

The correct way to handle Ctrl+C / SIGINT with multiprocessing.Pool is to:

  • Make the parent process ignore SIGINT before the Pool is created. The spawned child processes then inherit the SIGINT handler, i.e. they ignore the signal too.
  • Restore the original SIGINT handler in the parent process after the Pool has been created.
  • Use map_async and apply_async instead of the blocking map and apply.
  • Wait for the results with a timeout, because the default blocking wait ignores all signals. This is Python bug https://bugs.python.org/issue8296.

Putting it all together:

```python
#!/bin/env python
from __future__ import print_function

import multiprocessing
import os
import signal
import time

def run_worker(delay):
    print("In a worker process", os.getpid())
    time.sleep(delay)

def main():
    print("Initializing 2 workers")
    original_sigint_handler = signal.signal(signal.SIGINT, signal.SIG_IGN)
    pool = multiprocessing.Pool(2)
    signal.signal(signal.SIGINT, original_sigint_handler)
    try:
        print("Starting 2 jobs of 5 seconds each")
        res = pool.map_async(run_worker, [5, 5])
        print("Waiting for results")
        res.get(60)  # Without the timeout this blocking call ignores all signals.
    except KeyboardInterrupt:
        print("Caught KeyboardInterrupt, terminating workers")
        pool.terminate()
    else:
        print("Normal termination")
        pool.close()
    pool.join()

if __name__ == "__main__":
    main()
```

As Yakov Shklarov noted, there is a window of time between ignoring the signal and un-ignoring it in the parent process, during which the signal can be lost. Using pthread_sigmask instead, to temporarily block delivery of the signal in the parent process, would prevent the signal from being lost; however, it is not available in Python 2.
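For illustration, a minimal sketch of that pthread_sigmask variant, assuming Python 3 with the fork start method on Unix (this is my adaptation of the example above, not code from the answer):

```python
#!/bin/env python3
# Sketch only: pthread_sigmask does not exist on Windows, so this does
# not satisfy the cross-platform requirement of the question. SIGINT is
# blocked (queued, not discarded) while the Pool is created, so no
# Ctrl+C can be lost in that window.
import multiprocessing
import signal
import time

def run_worker(delay):
    time.sleep(delay)

def main():
    # Block SIGINT; the forked pool workers inherit the blocked mask,
    # so they never see Ctrl+C, just as with SIG_IGN.
    signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGINT})
    pool = multiprocessing.Pool(2)
    # Unblock in the parent: any SIGINT that arrived in the meantime is
    # delivered now and raises KeyboardInterrupt instead of vanishing.
    signal.pthread_sigmask(signal.SIG_UNBLOCK, {signal.SIGINT})
    try:
        res = pool.map_async(run_worker, [5, 5])
        res.get(60)  # a timeout keeps the wait interruptible
    except KeyboardInterrupt:
        print("Caught KeyboardInterrupt, terminating workers")
        pool.terminate()
    else:
        pool.close()
    pool.join()

if __name__ == "__main__":
    main()
```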

+36
Feb 01 '16 at 15:33

The solution is based on this link and this link, and it solved the problem. I had to move to Pool, though:

```python
import multiprocessing
import time
import signal
import sys

def init_worker():
    signal.signal(signal.SIGINT, signal.SIG_IGN)

def worker():
    while(True):
        time.sleep(1.1234)
        print "Working..."

if __name__ == "__main__":
    pool = multiprocessing.Pool(50, init_worker)
    try:
        for i in range(50):
            pool.apply_async(worker)

        time.sleep(10)
        pool.close()
        pool.join()

    except KeyboardInterrupt:
        print "Caught KeyboardInterrupt, terminating workers"
        pool.terminate()
        pool.join()
```
+32

Just handle KeyboardInterrupt and SystemExit exceptions in your worker process:

```python
def worker(msg_queue):
    while True:
        try:
            msg = msg_queue.get()
            # ... handle msg ...
        except (KeyboardInterrupt, SystemExit):
            print("Exiting...")
            break
```
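For completeness, a minimal way to drive such a worker might look like this; the queue name and the wiring below are my illustration, not part of the answer:

```python
import multiprocessing

# Assumes the worker(msg_queue) function from the snippet above.
if __name__ == "__main__":
    msg_queue = multiprocessing.Queue()
    p = multiprocessing.Process(target=worker, args=(msg_queue,))
    p.start()
    msg_queue.put("hello")  # hand the worker something to do
    try:
        p.join()  # runs until Ctrl+C reaches both processes
    except KeyboardInterrupt:
        # The console typically delivers Ctrl+C to the child as well,
        # so its except clause prints "Exiting..." and the loop breaks.
        p.join()
```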
+10
May 01 '13 at 18:44


