I believe you can use a Manager to share a dict between processes. That should in theory let you use the same cache for all functions.
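A minimal sketch of that approach, assuming a stand-in slow_work as the expensive function: Manager().dict() hands every worker a proxy to one shared dict, and the proxy is picklable, so it can be passed straight into pool workers.

```python
import multiprocessing

def slow_work(x):
    # stand-in for the expensive computation (hypothetical)
    return x * x

def worker(cache, x):
    # look the argument up in the shared cache; compute and store on a miss
    # (two processes can race on the same key and both compute it)
    if x not in cache:
        cache[x] = slow_work(x)
    return cache[x]

if __name__ == "__main__":
    with multiprocessing.Manager() as manager:
        cache = manager.dict()  # proxy dict shared by all worker processes
        with multiprocessing.Pool() as pool:
            print(pool.starmap(worker, [(cache, n) for n in (2, 3, 2, 3)]))
```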
However, I think a saner design would be a single process that responds to requests by looking them up in the cache and, when they are missing, delegating the work to a subprocess and caching the result before returning it. You can do that easily with:
```python
import concurrent.futures
import functools

with concurrent.futures.ProcessPoolExecutor() as e:
    @functools.lru_cache()
    def work(*args, **kwargs):
        # on a miss, submit slow_work to the pool;
        # on a hit, lru_cache returns the stored Future
        return e.submit(slow_work, *args, **kwargs)
```
Note that work will return Future objects, which the consumer will have to wait on. lru_cache caches the Future objects themselves, so repeated calls hand back the same Future automatically; I believe you can read its data more than once (a finished Future stores its result), but I can't test that right now.
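A quick usage sketch (hypothetical argument value; this has to run inside the with block above, while the executor is still alive):

```python
first = work(42)        # cache miss: slow_work(42) is submitted to the pool
print(first.result())   # block until the subprocess finishes

second = work(42)       # cache hit: lru_cache returns the very same Future
assert second is first
print(second.result())  # the stored result comes back immediately
```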
If you are not using Python 3, you will need to install the backported versions of concurrent.futures and functools.lru_cache.
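For reference, a sketch of the Python 2 imports, assuming the commonly used PyPI backports (futures and functools32; verify the package names on PyPI for your setup):

```python
# pip install futures functools32
import concurrent.futures          # provided by the "futures" backport
from functools32 import lru_cache  # backport of functools.lru_cache from 3.2
```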