Creating a simple remote dispatcher using the multiprocessing module.

Consider the following code:

Server:

import sys
from multiprocessing import Process
from multiprocessing.managers import BaseManager, BaseProxy

def baz(aa):
    l = []
    for i in range(3):
        l.append(aa)
    return l

class SolverManager(BaseManager): pass

class MyProxy(BaseProxy): pass

manager = SolverManager(address=('127.0.0.1', 50000), authkey='mpm')
manager.register('solver', callable=baz, proxytype=MyProxy)

def serve_forever(server):
    try:
        server.serve_forever()
    except KeyboardInterrupt:
        pass

def runpool(n):
    server = manager.get_server()
    workers = []

    for i in range(int(n)):
        w = Process(target=serve_forever, args=(server,))
        workers.append(w)
        w.start()

if __name__ == '__main__':
    runpool(sys.argv[1])

Client:

import sys
from multiprocessing.managers import BaseManager, BaseProxy

import multiprocessing, logging

class SolverManager(BaseManager): pass

class MyProxy(BaseProxy): pass

def main(args):
    SolverManager.register('solver')
    m = SolverManager(address=('127.0.0.1', 50000), authkey='mpm')
    m.connect()

    print m.solver(args[1])._getvalue()

if __name__ == '__main__':
    sys.exit(main(sys.argv))

If I start the server with a single process (python server.py 1), the client works as expected. But if I start two processes (python server.py 2) listening for connections, I get an unpleasant error:

$python client.py ping
Traceback (most recent call last):
  File "client.py", line 24, in <module>
    sys.exit(main(sys.argv))
  File "client.py", line 21, in main
    print m.solver(args[1])._getvalue()
  File "/usr/lib/python2.6/multiprocessing/managers.py", line 637, in temp
    authkey=self._authkey, exposed=exp
  File "/usr/lib/python2.6/multiprocessing/managers.py", line 894, in AutoProxy
    incref=incref)
  File "/usr/lib/python2.6/multiprocessing/managers.py", line 700, in __init__
    self._incref()
  File "/usr/lib/python2.6/multiprocessing/managers.py", line 750, in _incref
    dispatch(conn, None, 'incref', (self._id,))
  File "/usr/lib/python2.6/multiprocessing/managers.py", line 79, in dispatch
    raise convert_to_error(kind, result)
multiprocessing.managers.RemoteError: 
---------------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.6/multiprocessing/managers.py", line 181, in handle_request
    result = func(c, *args, **kwds)
  File "/usr/lib/python2.6/multiprocessing/managers.py", line 402, in incref
    self.id_to_refcount[ident] += 1
KeyError: '7fb51084c518'
---------------------------------------------------------------------------
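For what it's worth, my reading of the traceback is that each forked server process keeps its own private id_to_refcount table, so a proxy created through one process cannot be increffed through another, hence the KeyError. A minimal sketch of that per-process state, using only the stdlib and hypothetical names (id_to_refcount, 'deadbeef'), assuming the fork start method:

```python
from multiprocessing import Process, Queue

# Stand-in for the manager's internal id_to_refcount table; every worker
# forked after this point gets an independent copy of it.
id_to_refcount = {}

def server(q):
    # "Register" an object in this process's private copy of the table.
    id_to_refcount['deadbeef'] = id_to_refcount.get('deadbeef', 0) + 1
    q.put(dict(id_to_refcount))

def demo():
    q = Queue()
    workers = [Process(target=server, args=(q,)) for _ in range(2)]
    for w in workers:
        w.start()
    snapshots = [q.get() for _ in workers]
    for w in workers:
        w.join()
    # Each server saw only its own increment; the parent's table is untouched.
    return snapshots, dict(id_to_refcount)
```

This mirrors the failing setup: a client whose proxy was registered by one server process may have its incref request answered by the other one, whose table has never heard of that id.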

My idea is pretty simple: I want one server that spawns several workers, all sharing the same listening socket and processing requests independently. Maybe I'm using the wrong tool here?
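One workaround I can imagine (a sketch, not a definitive fix): keep a single manager process owning the socket and the refcount table, and get the parallelism from a multiprocessing.Pool living inside that process. The names solver, demo, and the pool size are my own, not from the code above:

```python
from multiprocessing import Pool
from multiprocessing.managers import BaseManager

def baz(aa):
    # Same toy workload as in the question: repeat the argument 3 times.
    return [aa] * 3

_pool = None

def solver(aa):
    # Runs inside the single manager process; fans the work out to a
    # pool of workers created lazily on first use.
    global _pool
    if _pool is None:
        _pool = Pool(processes=2)
    return _pool.apply(baz, (aa,))

class SolverManager(BaseManager):
    pass

SolverManager.register('solver', callable=solver)

def demo(arg='ping'):
    # Start exactly one manager process (it alone owns the socket and
    # the proxy refcounts), call the solver through a proxy, shut down.
    mgr = SolverManager(address=('127.0.0.1', 0), authkey=b'mpm')
    mgr.start()
    try:
        return mgr.solver(arg)._getvalue()
    finally:
        mgr.shutdown()
```

Since every proxy is created and increffed by the same server process, the shared-refcount problem from the traceback should not arise; the pool, not the socket, is what is shared.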

The goal is a three-tier structure: all requests come in through an HTTP server, are dispatched to nodes in the cluster, and from the nodes are passed to workers via multiprocessing managers ...




Answer: use Celery.
