Resque: mission-critical jobs that must be processed sequentially per user

My application creates Resque jobs that must be processed sequentially per user, and they should be processed as quickly as possible (maximum delay of 1 second).

Example: job1 and job2 are created for user1 and job3 for user2. Resque can handle job1 and job3 in parallel, but job1 and job2 must be processed sequentially.

I have a few ideas for solving this:

  • I could use a fixed set of queues (e.g. queue_1 ... queue_10) and run a worker for each queue (e.g. rake resque:work QUEUE=queue_1). Users are assigned to a queue/worker at runtime (for example, when they log in, once a day, etc.).
  • I could use dynamic "user queues" (e.g. queue_#{user.id}) and try to extend Resque so that only one worker can process a given queue at a time (as asked in "Resque: one worker per queue"); see the sketch after this list.
  • I could put the jobs into a non-Resque list and use a per-user "meta job" with resque-lock ( https://github.com/defunkt/resque-lock ) that processes these jobs.
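For illustration, a minimal sketch of the second option, assuming Resque's enqueue_to is available in your Resque version; the UserJob class and the enqueue_for_user helper are hypothetical. Note that this only routes jobs into per-user queues; it does not by itself enforce that only one worker consumes each queue:

    require 'resque'

    class UserJob
      def self.perform(user_id, payload)
        # ... per-user work goes here ...
      end
    end

    # Hypothetical helper: route a job into the user's own queue,
    # e.g. queue_42 for user 42. Sequencing still requires that
    # exactly one worker polls each of these queues.
    def enqueue_for_user(user_id, payload)
      Resque.enqueue_to("queue_#{user_id}", UserJob, user_id, payload)
    end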

Do you have any experience with one of these scenarios in practice? Or are there other ideas worth thinking about? I would appreciate any input, thanks!

2 answers

Thanks to @Isotope's answer, I finally came up with a solution that seems to work, using resque-retry and a lock in Redis:

    class MyJob
      extend Resque::Plugins::Retry

      # queue name added so the job can be enqueued; pick any name
      @queue = :my_jobs

      # re-enqueue the job immediately when the lock is taken
      @retry_delay = 0
      # effectively no limit; the job is retried until the lock is released
      @retry_limit = 10000
      # only retry on lock timeouts
      @retry_exceptions = [Redis::Lock::LockTimeout]

      def self.perform(user_id, ...)
        # Lock the job for the given user.
        # If another job for this user is already in progress,
        # Redis::Lock::LockTimeout is raised and the job is requeued.
        Redis::Lock.new("my_job.user##{user_id}",
          :expiration => 1,
          # don't wait for the lock, just requeue the job as fast as possible
          :timeout => 0.1
        ).lock do
          # do your stuff here ...
        end
      end
    end

I am using Redis::Lock from https://github.com/nateware/redis-objects here (it encapsulates the pattern from http://redis.io/commands/setex ).
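For context, a hypothetical pair of enqueue calls for this job (the extra arguments are placeholders, since the perform signature above is elided). Both jobs can be picked up by different workers in parallel, but the Redis lock serializes their perform bodies per user; the one that loses the race raises Redis::Lock::LockTimeout and is immediately put back on the queue by resque-retry:

    Resque.enqueue(MyJob, 42, :first_action)   # acquires the lock for user 42
    Resque.enqueue(MyJob, 42, :second_action)  # requeued until the lock is released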


I have done this before.

The best way to guarantee the sequencing of these jobs is to have job1 enqueue job2 when it finishes. job1 and job2 can go into the same queue or into different queues; it does not matter for the sequencing, that part is up to you.
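A minimal sketch of that chaining, with illustrative class and queue names:

    require 'resque'

    class Job1
      @queue = :user_jobs

      def self.perform(user_id, payload)
        # ... job1 work for this user ...

        # Chaining: job2 is only enqueued once job1 has finished,
        # so the two can never run concurrently for this user.
        Resque.enqueue(Job2, user_id, payload)
      end
    end

    class Job2
      @queue = :user_jobs

      def self.perform(user_id, payload)
        # ... job2 work for this user ...
      end
    end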

Any other approach, such as enqueuing job1 and job2 at the same time but telling job2 to start 0.5 seconds later, will result in race conditions, so it is not recommended.

Having job1 trigger job2 at the end is also very easy to do.

If you want another option: my final suggestion would be to combine both jobs into a single job and add a parameter that triggers the second part when needed.

e.g.

    # arg_a/arg_b stand in for the original "etc, etc" placeholders
    def my_job(id, arg_a, arg_b, do_job_two = false)
      # ...job_1 stuff...

      if do_job_two
        # ...job_2 stuff...
      end
    end
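Hypothetical calls for the combined variant (the argument names are the placeholders from the sketch above); passing true makes the single job perform both parts in order:

    my_job(42, arg_a, arg_b)        # runs only the job_1 part
    my_job(42, arg_a, arg_b, true)  # runs job_1, then job_2, in order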
