I have code that needs to run against several other systems that may freeze or misbehave in ways that are not under my control. I would like to use Python multiprocessing to spawn child processes that run independently of the main program, and then terminate and respawn them when they freeze or run into problems, but I'm not sure of the best way to do this.
When terminate() is called, it kills the child process, but the child then becomes a defunct zombie that is not reaped until the process object goes away. The example code below, where the loop never ends, works to kill the child and allow a respawn when run() is called again, but it doesn't seem like a good way to go about this (i.e., it would be better to create the multiprocessing.Process() in __init__()).
Anyone have a suggestion?
import multiprocessing
import time


class Process(object):
    def __init__(self):
        self.thing = Thing()
        self.running_flag = multiprocessing.Value("i", 1)

    def run(self):
        self.process = multiprocessing.Process(
            target=self.thing.worker, args=(self.running_flag,))
        self.process.start()
        print(self.process.pid)

    def pause_resume(self):
        self.running_flag.value = not self.running_flag.value

    def terminate(self):
        self.process.terminate()


class Thing(object):
    def __init__(self):
        self.count = 1

    def worker(self, running_flag):
        while True:
            if running_flag.value:
                self.do_work()

    def do_work(self):
        print("working {0} ...".format(self.count))
        self.count += 1
        time.sleep(1)
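For context, one common way to avoid the zombie is to call join() on the child right after terminate(), which reaps the dead process, and to create a fresh multiprocessing.Process each time the worker is (re)started, since a Process object can only be started once. The sketch below illustrates this pattern; the Manager class and its method names are hypothetical, not part of the standard library.

```python
import multiprocessing
import time


def worker(running_flag):
    """Module-level worker so it is picklable under the spawn start method."""
    while True:
        if running_flag.value:
            pass  # placeholder for real work
        time.sleep(0.1)


class Manager(object):
    """Hypothetical wrapper around a restartable worker process."""

    def __init__(self):
        self.running_flag = multiprocessing.Value("i", 1)
        self.process = None

    def run(self):
        # Build a new Process each start; a terminated one cannot be reused.
        self.process = multiprocessing.Process(
            target=worker, args=(self.running_flag,))
        self.process.start()

    def terminate(self):
        self.process.terminate()
        # join() reaps the killed child, so no zombie lingers and the
        # Manager can spawn a replacement immediately.
        self.process.join()
```

After terminate() plus join(), the child's exit status is collected, so run() can be called again to respawn without leaving defunct entries in the process table.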
python multiprocessing python-multiprocessing
Dan littlejohn