Python multiprocessing manager list error: [Errno 2] No such file or directory

I am running a multiprocessing program in Python, and I am using multiprocessing.Manager().list() to share a list between subprocesses. First I add some tasks to the shared list from the main process. Then I start some subprocesses that perform the tasks in the shared list; the subprocesses also add new tasks to it. But I got this exception:

  Traceback (most recent call last):
    File "/usr/lib64/python2.6/multiprocessing/process.py", line 232, in _bootstrap
      self.run()
    File "/usr/lib64/python2.6/multiprocessing/process.py", line 88, in run
      self._target(*self._args, **self._kwargs)
    File "gen_friendship.py", line 255, in worker
      if tmpu in nodes:
    File "<string>", line 2, in __contains__
    File "/usr/lib64/python2.6/multiprocessing/managers.py", line 722, in _callmethod
      self._connect()
    File "/usr/lib64/python2.6/multiprocessing/managers.py", line 709, in _connect
      conn = self._Client(self._token.address, authkey=self._authkey)
    File "/usr/lib64/python2.6/multiprocessing/connection.py", line 143, in Client
      c = SocketClient(address)
    File "/usr/lib64/python2.6/multiprocessing/connection.py", line 263, in SocketClient
      s.connect(address)
    File "<string>", line 1, in connect
  error: [Errno 2] No such file or directory

I have found examples of how to use a shared list in Python multiprocessing, but I still get this exception, and I have no idea what it means. Also, what's the difference between a shared list and Manager().list()?

My code is as follows:

  import multiprocessing

  nodes = multiprocessing.Manager().list()
  lock = multiprocessing.Lock()
  AMOUNT_OF_PROCESS = 10

  def worker():
      lock.acquire()
      nodes.append({"name": "username", "group": 1})
      lock.release()

  if __name__ == "__main__":
      # add some initial tasks to the shared list
      for i in range(100):
          nodes.append({"name": "username", "group": 1})
      processes = [None for i in range(AMOUNT_OF_PROCESS)]
      for i in range(AMOUNT_OF_PROCESS):
          processes[i] = multiprocessing.Process(target=worker, args=())
          processes[i].start()
1 answer

The problem is that your main process exits immediately after starting all of your worker processes, which shuts down your Manager. When the Manager shuts down, its server process goes away, and none of the children can use the shared list it gave them: every operation on the proxy tries to reconnect to the manager's socket, which no longer exists, hence the [Errno 2] No such file or directory. You can fix this by using join to wait for all of the children to finish. Just make sure you actually start all of your processes before calling join:

  for i in range(AMOUNT_OF_PROCESS):
      processes[i] = multiprocessing.Process(target=worker, args=())
      processes[i].start()
  for process in processes:
      process.join()
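For reference, here is a minimal sketch of the complete corrected script. It assumes a fork-based platform like your Linux box (the globals nodes and lock are inherited by the children when they are forked), and the initial task count of 100 is just a placeholder:

  import multiprocessing

  nodes = multiprocessing.Manager().list()  # proxy to a list held in the manager's server process
  lock = multiprocessing.Lock()
  AMOUNT_OF_PROCESS = 10

  def worker():
      # Every operation on `nodes` talks to the manager process over a socket;
      # if the manager has already shut down, the connection attempt fails
      # with [Errno 2] No such file or directory.
      lock.acquire()
      try:
          nodes.append({"name": "username", "group": 1})
      finally:
          lock.release()

  if __name__ == "__main__":
      for i in range(100):  # placeholder: seed the shared list with some tasks
          nodes.append({"name": "username", "group": 1})
      processes = [multiprocessing.Process(target=worker, args=())
                   for i in range(AMOUNT_OF_PROCESS)]
      for process in processes:
          process.start()
      for process in processes:
          process.join()  # keeps the main process, and thus the Manager, alive

As for your second question: multiprocessing doesn't actually provide a "shared list" living in shared memory. Manager().list() gives each process a proxy object; the real list lives inside the separate server process that the Manager starts, and every read or write is forwarded to it over IPC. True shared memory in multiprocessing is limited to ctypes-backed types like multiprocessing.Value and multiprocessing.Array, which is why the error above is a connection error rather than a memory error.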
