To be useful, your web application will presumably have a database already. I would create a table in that database designed specifically for these jobs, with a "state" kept for each job.
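Just as a sketch of what that table might look like (SQLite and the column names here are my own assumptions, not anything your app requires):

```python
import sqlite3

conn = sqlite3.connect("jobs.db")
# One row per job; the workers/updater keep the "state" column current
# (e.g. queued -> running -> done/failed).
conn.execute("""
    CREATE TABLE IF NOT EXISTS jobs (
        id         INTEGER PRIMARY KEY AUTOINCREMENT,
        payload    TEXT NOT NULL,
        state      TEXT NOT NULL DEFAULT 'queued',
        updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    )
""")
conn.commit()
```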
This simplifies your system because you can simply fire off the request to get things started and pass it along to the backend workers (zmq is a good solution for this, IMO). Since you are using Python on the backend, it is easy to have the workers either update the current job's state in the database themselves, or to have a separate "updater" whose only task is to update those fields (keeping that logic separate makes for a cleaner solution and lets you run several "updaters" if you do a lot of updates).
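A rough sketch of that split, with made-up endpoints, and with `do_the_work` and `write_state_to_db` as placeholders for your own code:

```python
import zmq

ctx = zmq.Context()

def do_the_work(job):
    pass  # placeholder for your actual task

def write_state_to_db(job_id, state):
    pass  # placeholder: the only place the jobs table gets written

def worker():
    jobs = ctx.socket(zmq.PULL)        # jobs pushed here by the web app
    jobs.connect("tcp://localhost:5555")
    updates = ctx.socket(zmq.PUSH)     # state changes go to the updater
    updates.connect("tcp://localhost:5556")
    while True:
        job = jobs.recv_json()
        updates.send_json({"job_id": job["id"], "state": "running"})
        do_the_work(job)
        updates.send_json({"job_id": job["id"], "state": "done"})

def updater():
    updates = ctx.socket(zmq.PULL)     # single writer for the jobs table
    updates.bind("tcp://*:5556")
    while True:
        msg = updates.recv_json()
        write_state_to_db(msg["job_id"], msg["state"])
```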
Then for your frontend, since you don't want to poll the server, I would do something like a "long poll". What you basically do is poll the server, but the server doesn't actually respond until there is a change in the data you are interested in. As soon as a change occurs, it responds to the pending request. On the frontend, you have JS that reconnects as soon as it receives the latest update. This solution is cross-browser compatible as long as you use a JS framework that is also cross-browser (I would suggest jQuery).
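On the web-app side, a long-poll handler is just a request that blocks until an update shows up or a timeout expires. A minimal sketch, with hypothetical names and an in-memory queue standing in for whatever actually receives the backend updates:

```python
import queue

# One in-memory queue per job; whatever receives updates from the backend
# puts state changes here (names are made up for illustration).
pending_updates = {}   # job_id -> queue.Queue

def long_poll(job_id, timeout_s=30):
    """Handler for the long-poll request: block until a state change for
    job_id shows up, or time out so the client simply reconnects."""
    q = pending_updates.setdefault(job_id, queue.Queue())
    try:
        return q.get(timeout=timeout_s)   # respond the moment something changes
    except queue.Empty:
        return None                       # nothing yet; the JS re-issues the request
```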
To eliminate polling of the database from your web application, follow these steps:
1. Make the initial request from the frontend a long-poll request to the web application.
2. The web application sends a zmq message to your backend (this may need to be done over a REQ/REP socket) and waits until it receives a status message back from the zmq backend.
3. When it receives a state change, it responds to the frontend with that change.
4. The frontend then sends a new long-poll request (carrying the current job id, which could double as its identity), and the web application connects to the backend again and waits for the next state change.

The trick that makes this work is setting ZMQ_IDENTITY on the socket when it is first created (on the initial request). This lets the web application reconnect to the same backend socket and keep receiving updates for that job. When the backend has a new update to send, it signals the web application, which in turn answers the pending long-poll request with the state change. This way there is no polling anywhere, no hammering of the backend database, and everything is driven by the active clients.
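A minimal sketch of the identity trick, assuming a DEALER socket in the web application talking to a ROUTER socket on the backend (the endpoint, the message format, and the `job-<id>` identity scheme are all my own assumptions):

```python
import json
import zmq

ctx = zmq.Context()

def wait_for_update(job_id, timeout_ms=30000):
    """Called by the web app once per long-poll request. Reusing the same
    ZMQ_IDENTITY lets it come back to the backend ROUTER and keep
    receiving updates for this job across reconnects."""
    sock = ctx.socket(zmq.DEALER)
    sock.setsockopt(zmq.IDENTITY, f"job-{job_id}".encode())
    sock.setsockopt(zmq.RCVTIMEO, timeout_ms)
    sock.connect("tcp://localhost:5558")          # made-up backend endpoint
    sock.send_json({"job_id": job_id, "cmd": "wait"})
    try:
        return sock.recv_json()                   # blocks until the backend signals
    except zmq.Again:
        return None                               # timed out; the frontend just re-polls
    finally:
        sock.close()

def signal_state_change(router, job_id, state):
    """On the backend: address the waiting web-app socket by its identity."""
    payload = json.dumps({"job_id": job_id, "state": state}).encode()
    router.send_multipart([f"job-{job_id}".encode(), payload])
```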
I would also set up some kind of watchdog so that, if the frontend goes away (the user switches pages or closes the browser), the backend sockets get properly closed. There is no need for them to sit around blocked, waiting for a state change nobody will read.
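A very rough version of that watchdog, assuming the web app records when each job's frontend was last heard from (every name here is a placeholder): close and forget any socket whose client has gone quiet.

```python
import time

last_seen = {}       # job_id -> timestamp of the last long-poll request
open_sockets = {}    # job_id -> zmq socket held open for that job

def touch(job_id):
    """Call this whenever a long-poll request for job_id arrives."""
    last_seen[job_id] = time.time()

def reap_stale(max_idle_s=120):
    """Watchdog loop body: close sockets whose frontend has gone away."""
    now = time.time()
    for job_id, seen in list(last_seen.items()):
        if now - seen > max_idle_s:
            sock = open_sockets.pop(job_id, None)
            if sock is not None:
                sock.close()
            del last_seen[job_id]
```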