Celery: storing fatal task failures for later resubmission

I use the djkombu transport for local development, but I will probably use amqp (RabbitMQ) in production.

I would like to be able to iterate over failures of a certain type and resubmit the corresponding tasks. This would cover cases like a failure on a third-party server, or an edge-case error triggered by some new data.

That way, I could resubmit the tasks up to 12 hours later, after the bug is fixed or the third-party site comes back up.

My question is: is there a way to access old failed tasks through the result backend and simply resubmit them with the same parameters, etc.?

2 answers

From IRC

<asksol> dpn`: task args and kwargs are not stored with the result

<asksol> dpn`: but you can create your own model and store them there (for example, using the task_sent signal)

<asksol> we do not store anything when sending a task, we only send a message. but it’s very easy to do it yourself.

This is what I expected, but had hoped to avoid.

At least I have an answer :)
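
For reference, here is a minimal sketch of what asksol describes: a Django model plus a task_sent handler that records each call so a task can be resubmitted by hand later. The SentTask model, record_task and resubmit functions are hypothetical names, and the code assumes the djcelery-era API (the task_sent signal and celery.execute.send_task were later renamed/deprecated). It also assumes the task arguments are JSON-serializable.

    import json

    from celery.execute import send_task
    from celery.signals import task_sent
    from django.db import models

    class SentTask(models.Model):
        # Hypothetical model: just enough to replay a task later.
        task_id = models.CharField(max_length=255, unique=True)
        name = models.CharField(max_length=255)
        args = models.TextField()    # JSON-serialized positional args
        kwargs = models.TextField()  # JSON-serialized keyword args
        sent_at = models.DateTimeField(auto_now_add=True)

    def record_task(sender=None, task_id=None, args=None, kwargs=None, **extra):
        # sender is the task name; persist the call so it can be replayed.
        SentTask.objects.create(task_id=task_id, name=sender,
                                args=json.dumps(args or []),
                                kwargs=json.dumps(kwargs or {}))

    task_sent.connect(record_task)

    def resubmit(task_id):
        # Look the stored call up and send it to the broker again.
        row = SentTask.objects.get(task_id=task_id)
        return send_task(row.name, json.loads(row.args), json.loads(row.kwargs))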


You can probably access old tasks using:

CELERY_RESULT_BACKEND = "database" 

and in your code:

    from djcelery.models import TaskMeta

    task = TaskMeta.objects.filter(task_id='af3185c9-4174-4bca-0101-860ce6621234')[0]

but I'm not sure you can recover the arguments the task was called with from TaskMeta ... Maybe TaskState (djcelery's monitoring model, populated by the celerycam event snapshots) stores them ...
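
To illustrate that limitation, here is a short sketch of what the database result backend does let you iterate over, assuming the djcelery TaskMeta model: you get the task id, state, exception and traceback for each failure, but no call arguments.

    from celery import states
    from djcelery.models import TaskMeta

    # Every failure stored by the database result backend.
    for meta in TaskMeta.objects.filter(status=states.FAILURE):
        # meta has task_id, status, result (the exception instance),
        # traceback and date_done -- but no record of args or kwargs.
        print(meta.task_id)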

I have never used it that way. But have you considered the task's retry() method? An example, adapted from the Celery docs:

    from celery.task import task

    @task()
    def process(*args):
        try:
            some_work()  # placeholder for the real work
        except SomeException as exc:
            # Retry this task in 24 hours, with the same arguments.
            raise process.retry(args=args, countdown=60 * 60 * 24, exc=exc)
