I have a function that takes a node identifier of the graph as input, computes something on the graph (without modifying the graph object), and then saves the results to the file system. My code looks like this:
...
The problem is that loading the graph in Python takes a lot of memory (about 2 GB; it is a big graph with thousands of nodes), and when execution reaches the parallel part of the code (the parallel map call), each worker process seems to get its own separate copy of g, so my machine (6 GB of RAM and 3 GB of swap) simply runs out of memory. Is there a way to give every process access to the same copy of g, so that only enough memory for a single copy is needed? Any suggestions are welcome, and thanks in advance.
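For reference, a minimal sketch of the setup described above (the actual graph object and computation are assumptions; here the graph is a plain adjacency-list dict and the per-node computation is just the degree). On Linux, where multiprocessing uses fork by default, a module-level global like g is shared with the workers via copy-on-write, so read-only access should not duplicate it per process:

```python
import multiprocessing as mp

# Hypothetical stand-in for the large graph: an adjacency-list dict.
# With the "fork" start method, workers inherit this module-level
# global copy-on-write instead of receiving a pickled copy each.
g = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [0]}

def compute(node):
    # Read-only computation on the shared graph (here: the degree).
    # The original code saves results to the file system; returning
    # them keeps this sketch self-contained.
    return node, len(g[node])

if __name__ == "__main__":
    ctx = mp.get_context("fork")  # fork => copy-on-write sharing of g
    with ctx.Pool(2) as pool:
        results = dict(pool.map(compute, list(g)))
    print(results)
```

Note that copy-on-write only helps as long as the workers do not write to the shared object, and that reference-count updates in CPython can still gradually copy pages of a large object graph.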
python multiprocessing python-multiprocessing
habedi