Celery did not put the task back into the RabbitMQ queue after a timeout

I run Celery workers on Heroku, and one of my tasks hit the hard time limit. When I ran it again manually everything worked fine, so it was probably a connection problem. I use RabbitMQ as the broker, and Celery is configured for late acknowledgment (CELERY_ACKS_LATE = True). I expected the task to be returned to the RabbitMQ queue and picked up by another worker, but that did not happen. Do I need to configure anything else so that a task is returned to the RabbitMQ queue when its worker dies?
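For reference, the relevant configuration looks roughly like this (a minimal sketch; the setting names follow the Celery 3.x style implied by the question, and the broker URL is a placeholder):

    # Celery 3.x-style settings, as described above
    BROKER_URL = 'amqp://guest:guest@localhost//'  # placeholder RabbitMQ URL
    CELERY_ACKS_LATE = True        # ack only after the task completes
    CELERYD_TASK_TIME_LIMIT = 60   # hard time limit seen in the logs below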

Here are the logs:

Traceback (most recent call last):
  File "/app/.heroku/python/lib/python3.4/site-packages/billiard/pool.py", line 639, in on_hard_timeout
    raise TimeLimitExceeded(job._timeout)
billiard.exceptions.TimeLimitExceeded: TimeLimitExceeded(60,)
[2015-09-02 06:22:14,504: ERROR/MainProcess] Hard time limit (60s) exceeded for simulator.tasks.run_simulations[4e269d24-87a5-4038-b5b5-bc4252c17cbb]
[2015-09-02 06:22:18,877: INFO/MainProcess] missed heartbeat from celery@420cc07b-f5ba-4226-91c9-84a949974daa
[2015-09-02 06:22:18,922: ERROR/MainProcess] Process 'Worker-1' pid:9 exited with 'signal 9 (SIGKILL)'
1 answer

It looks like you hit Celery's time limits: http://docs.celeryproject.org/en/latest/userguide/workers.html#time-limits
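A soft time limit gives the task a chance to react before the hard limit SIGKILLs the worker process, as happened in your logs. A minimal sketch of that pattern (the task and helper names are illustrative, and the broker URL is a placeholder):

    from celery import Celery
    from celery.exceptions import SoftTimeLimitExceeded

    app = Celery('simulator', broker='amqp://...')  # placeholder broker URL

    @app.task(soft_time_limit=50, time_limit=60)
    def run_simulations(data):
        try:
            return simulate(data)  # hypothetical long-running work
        except SoftTimeLimitExceeded:
            # Raised inside the task when the soft limit (50s) expires,
            # leaving 10s to clean up before the hard limit kills the process.
            cleanup()              # hypothetical cleanup hook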

Celery does not retry failed tasks by default, because it cannot know whether retrying is safe for your tasks. Specifically, a task must be idempotent for retries to be safe.

Any retrying on task failure must therefore be implemented inside the task itself. See the example here: http://docs.celeryproject.org/en/latest/reference/celery.app.task.html#celery.app.task.Task.retry
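A minimal sketch of in-task retrying, assuming the failure is a connection error (names other than the retry API itself are illustrative):

    from celery import Celery

    app = Celery('simulator', broker='amqp://...')  # placeholder broker URL

    @app.task(bind=True, max_retries=3, default_retry_delay=30)
    def run_simulations(self, data):
        try:
            return simulate(data)        # hypothetical work function
        except ConnectionError as exc:
            # Re-queue this task to run again after the retry delay,
            # up to max_retries times.
            raise self.retry(exc=exc)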

There are several reasons why your task could have timed out, and you are in the best position to judge which applies. The task may have taken too long to process the data, or too long to retrieve it.

If you suspect the task timed out while trying to connect to some service, I suggest reducing the connection timeout and adding retry logic to your task. If your task takes too long processing the data, try splitting the data into pieces and processing them that way. Celery has good support for this: http://docs.celeryproject.org/en/latest/userguide/canvas.html#chunks
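For example (a sketch; process_item, simulate_one, and the item range are hypothetical):

    from celery import Celery

    app = Celery('simulator', broker='amqp://...')  # placeholder broker URL

    @app.task
    def process_item(item):
        return simulate_one(item)  # hypothetical per-item work

    # Split 1000 items into tasks of 10 items each, so no single
    # task runs long enough to hit the 60s hard time limit.
    job = process_item.chunks(zip(range(1000)), 10).group()
    result = job.apply_async()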
