quirkey/resque-status

Problem working with resque-retry

Opened this issue · 4 comments

Delayed retrying with the "resque-retry" gem is not working in a job class that has "resque-status" integrated.
Retrying jobs without a delay works fine, i.e. if I only specify @retry_limit = <no. of retries>, then it retries that many times.
But if I specify @retry_delay = <delay in seconds> or use exponential backoff like @backoff_strategy = [0, 60, 600, 3600, 10800, 21600], then it does not retry the job at all.
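
For reference, this is roughly the shape of the job class I am testing with (names are made up, and I am assuming both gems are loaded via Bundler); swapping @backoff_strategy for a plain @retry_delay behaves the same way:

    class ExampleJob
      include Resque::Plugins::Status             # status tracking via a uuid
      extend Resque::Plugins::ExponentialBackoff  # delayed retries via resque-retry

      @queue = :example
      @retry_limit = 6
      @backoff_strategy = [0, 60, 600, 3600, 10800, 21600]

      # resque-status jobs define an instance-level perform with no arguments;
      # the enqueue options are available through `options`
      def perform
        # any failure here should schedule a delayed retry, but it never fires
        raise "simulated failure"
      end
    end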

I am currently looking into this. Will let you know if i have a fix for this.

I'm seeing this as well; it appears that the resque-status uuid that resque-retry passes back via resque-scheduler causes the enqueue_to call to fail (splatting the args causes an ArgumentError).
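
To make the failure mode concrete, here is a standalone illustration (no Resque involved; the (queue, klass, options = {}) signature mirrors how I read resque-status's enqueue_to):

    # Stand-in for resque-status's enqueue_to, which (as I read it) takes a
    # single options hash after queue and klass
    def enqueue_to(queue, klass, options = {})
      [queue, klass, options]
    end

    # What resque-scheduler hands back for the retry: the status uuid plus the
    # original options hash (both values here are made up)
    args = ["c2fb4d60b4f0", { "record_id" => 42 }]

    begin
      enqueue_to(:example, Object, *args)   # the splat turns this into 4 arguments
    rescue ArgumentError => e
      puts e.message   # wrong number of arguments (given 4, expected 2..3)
    end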

Here's my hack; I don't think it's a good enough solution to submit as a pull request.

    # override because resque-retry passes in the resque-status uuid as the first arg
    def self.scheduled(queue, klass, *args)
      if args.size == 2
        uuid, options = args
        Resque.enqueue_to(queue, klass, uuid, options)
        uuid
      else
        self.enqueue_to(queue, self, *args)
      end
    end

This is almost the same type of hack I used in my system (with a little modification, though). See this:

    def self.scheduled(queue, klass, *args)
      # args is always an Array here (it's a splat), so look at its first
      # element to tell a plain options hash apart from the [uuid, options]
      # pair that resque-retry hands back through resque-scheduler
      # (HashWithIndifferentAccess is a Hash subclass, so one check covers both)
      if args.first.is_a?(Hash)
        self.enqueue_to(queue, self, *args)
      else
        uuid, options = args
        Resque.enqueue_to(queue, klass, uuid, options)
        uuid
      end
    end

I am also looking into a solution which could turn into a pull request.

self.scheduled

@stevenjackson Quick question: when the job gets retried by resque-retry, does the status tracked by resque-status automatically go from failed back to pending in Redis?
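
i.e. checking something like this before and after the scheduled retry fires (a sketch, assuming I'm reading the resque-status API right and that the job's uuid is at hand):

    # `uuid` is whatever was returned when the job was first enqueued
    status = Resque::Plugins::Status::Hash.get(uuid)
    puts status.status   # "failed" right after the failure?
    # ... wait for resque-scheduler to re-enqueue the retry ...
    puts Resque::Plugins::Status::Hash.get(uuid).status   # back to "queued", or still "failed"?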