This does not work the way you intend: calling sys.exit() inside a worker raises SystemExit, which terminates only that worker process. Because the pool workers and the parent are separate processes, the parent and the other workers are unaffected. You need to send a signal back to the parent process to tell it to shut down. One way to do that for your use case is to use an Event created via a multiprocessing.Manager server:
import multiprocessing

def myfunction(i, event):
    # Skip the work if another worker has already signalled shutdown
    if not event.is_set():
        print(i)
        if i == 20:
            event.set()

if __name__ == "__main__":
    p = multiprocessing.Pool(10)
    m = multiprocessing.Manager()
    event = m.Event()
    for i in range(100):
        p.apply_async(myfunction, (i, event))
    p.close()
    event.wait()   # We'll block here until a worker calls event.set()
    p.terminate()  # Terminate all processes in the Pool
Output:
0
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
As indicated in Luke's answer, there is a race here: there is no guarantee that the workers will execute their tasks in order, so it is possible that myfunction(20, ...) runs before myfunction(19, ...), for example. It is also possible that tasks queued after 20 execute before the main process can act on the set event. I reduced the size of the race window by adding the if not event.is_set(): check before printing i, but it still exists.
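If you would rather keep the shutdown coordination in the parent process, here is a minimal sketch of an alternative (the on_result callback and its wiring are my own illustration, not part of the answer above). apply_async callbacks run in a helper thread inside the parent process, so a plain threading.Event can replace the Manager's Event; the race among tasks that have already started still remains.

import multiprocessing
import threading

def myfunction(i):
    print(i)
    return i

if __name__ == "__main__":
    done = threading.Event()  # lives in the parent process only

    def on_result(result):
        # apply_async callbacks run in a thread of the parent process,
        # so an ordinary threading.Event is sufficient here.
        if result == 20:
            done.set()

    p = multiprocessing.Pool(10)
    for i in range(100):
        p.apply_async(myfunction, (i,), callback=on_result)
    p.close()
    done.wait()    # block until some worker has returned 20
    p.terminate()  # then kill any workers that are still running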
dano