I have a simple producer/consumer AMQP pipeline set up as follows:
producer -> e1:jobs_queue -> consumer -> e2:results_queue -> result_handler
The producer sends a number of jobs. The consumer pulls tasks one at a time, processes them, and pushes each result into another queue. Results are then pulled by the result_handler, which writes them to the database.
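To make the topology concrete, here is a minimal in-process sketch of the same pipeline, using Python's `queue.Queue` as a stand-in for the AMQP queues. All names (`jobs_queue`, `results_queue`, `NUM_JOBS`) mirror the description above but are illustrative, not real broker code.

```python
import queue
import threading

jobs_queue = queue.Queue()     # stand-in for e1:jobs_queue
results_queue = queue.Queue()  # stand-in for e2:results_queue
NUM_JOBS = 5
results = []

def producer():
    # The producer sends a number of jobs.
    for i in range(NUM_JOBS):
        jobs_queue.put(i)

def consumer():
    # The consumer pulls tasks one at a time and pushes results.
    for _ in range(NUM_JOBS):
        job = jobs_queue.get()
        results_queue.put(job * 2)  # "processing" is just doubling here

def result_handler():
    # Stand-in for publishing results to the database.
    for _ in range(NUM_JOBS):
        results.append(results_queue.get())

threads = [threading.Thread(target=f)
           for f in (producer, consumer, result_handler)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))  # → [0, 2, 4, 6, 8]
```

Note that this sketch shares the same weakness as the real setup: if the consumer thread died between `get()` and `put()`, the in-flight job would simply vanish.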
Sometimes a consumer fails: it may be killed by the operating system, or an exception may be thrown. If this happens in the middle of processing a message, that message is lost, no corresponding result is produced, and I am sad. I would be happy again if the failed job were requeued.
What I'm looking for is a design pattern that guarantees every task is either processed to completion by some consumer, with the corresponding result placed in *results_queue*, or, on failure, returned to *jobs_queue*. Since the consumer is the component that fails, the consumer itself should not be responsible for any of the bookkeeping around monitoring its own work.
We know that a consumer has failed to process a task if:

- it took the job from *jobs_queue* and, after some timeout, no result has arrived
- it took a job from *jobs_queue* and then died
For my application, we can probably fold the second case into the first by simply waiting until the job's timeout expires. In production there will be many workers being supervised, all pulling tasks from a shared jobs queue and publishing results into a single results exchange/queue.
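The timeout idea above can be sketched as a supervisor that records a deadline for each handed-out job and requeues any job whose result has not arrived in time, so the consumer never has to track itself. This is only an illustrative sketch; `JOB_TIMEOUT`, `hand_out`, `report_result`, and `requeue_expired` are hypothetical names, and plain deques stand in for the AMQP queues.

```python
from collections import deque

JOB_TIMEOUT = 1.0  # seconds; would be tuned per workload

jobs_queue = deque([1, 2, 3])
results_queue = deque()
in_flight = {}  # job -> deadline by which a result must appear

def hand_out(now):
    """Supervisor gives a job to a worker and starts its timer."""
    job = jobs_queue.popleft()
    in_flight[job] = now + JOB_TIMEOUT
    return job

def report_result(job, result):
    """A worker finished: record the result, stop tracking the job."""
    in_flight.pop(job, None)
    results_queue.append(result)

def requeue_expired(now):
    """Any job whose deadline has passed goes back to jobs_queue."""
    for job, deadline in list(in_flight.items()):
        if now >= deadline:
            del in_flight[job]
            jobs_queue.append(job)

# Simulate: job 1's worker succeeds, job 2's worker dies silently.
t = 0.0
j1 = hand_out(t)            # job 1 handed out at t=0
j2 = hand_out(t)            # job 2 handed out at t=0
report_result(j1, j1 * 10)  # worker 1 reports its result
requeue_expired(t + 2.0)    # sweep after the timeout: job 2 requeued

print(list(results_queue), list(jobs_queue))  # → [10] [3, 2]
```

The key property is that the lost job (2) reappears in `jobs_queue` without the dead worker doing anything, which matches the requirement that the failing consumer not manage its own recovery.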