I am working on a renderfarm, and I need my clients to be able to launch multiple instances of the renderer without blocking, so that each client can still receive new commands. The rendering itself works correctly, but I am having problems terminating the created processes.
At the global level, I define my pool (so that I can access it from any function):

    from multiprocessing import Pool

    p = Pool(2)
Then I invoke my renderer with apply_async:
    for i in range(totalInstances):
        p.apply_async(render, (allRenderArgs[i], args[2]), callback=renderFinished)
    p.close()
This function returns, the render processes keep running in the background, and the client waits for new commands. I also made a simple command that kills the client and stops rendering:
    def close():
        'close this client instance'
        tn.write("say " + USER + " is leaving the farm\r\n")
        try:
            p.terminate()
        except Exception, e:
            print str(e)
            sys.exit()
        sys.exit()
No error seems to be raised (the except clause would print one), and Python exits, but the background render processes are still running. Can anyone recommend a better way to manage these running processes?
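One likely issue is that close() calls terminate() but never waits for the workers to actually exit. A minimal sketch of a shutdown helper (Python 3 syntax; `render` and `stop_farm` are illustrative names, not from the original client):

```python
from multiprocessing import Pool
import time

def render(i):
    # stand-in for a long-running render job
    time.sleep(30)
    return i

def stop_farm(pool):
    # terminate() signals the worker processes to stop immediately;
    # join() then waits until they and the pool's bookkeeping threads
    # have actually exited, so no orphaned renderers are left behind
    pool.terminate()
    pool.join()

if __name__ == "__main__":
    p = Pool(2)
    for i in range(4):
        p.apply_async(render, (i,))
    p.close()
    stop_farm(p)
    print("all workers stopped")
```

Calling join() after terminate() is what guarantees the children are gone before the parent exits; without it, sys.exit() can leave the pool's processes running.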
python multiprocessing pool
tk421storm