Support job max retries
anh opened this issue · 3 comments
There is no way to stop retrying a job once job.ErrorCount > max_retries_allowed. I was thinking of adding MaxRetries to Job, but that would mean updating some of the job queries. Would it be better practice to add a MaxRetries attribute to the Worker and WorkerPool types instead, and then check the retry count inside WorkOne()?
if err = wf(j); err != nil {
	if j.ErrorCount > int32(w.MaxRetries) {
		// Too many failures: delete the job instead of rescheduling it.
		if err = j.Delete(); err != nil {
			log.Printf("attempting to delete job %d: %v", j.ID, err)
		}
	} else {
		// Record the error; the job will be retried later.
		j.Error(err.Error())
	}
	return
}
Have you taken a look at what the Ruby Que library does here? My goal is to
stay as close to that as possible, and maybe they already have a solution
for max retries.
Thank you for reminding me about the Ruby Que library. I had a look at it and found a section on "Not retrying certain failed jobs". As I understand it, it customizes Que::Job with SkipRetries, which deletes the failed job when an error occurs and then re-raises it. I'll study it further to see how to adapt that to que-go.