stackkit/laravel-google-cloud-tasks-queue

Failed Jobs: Integrity constraint violation Duplicate entry

lukasmedia opened this issue · 2 comments

Hi,

How should failed jobs be handled? I have several jobs that can take longer than expected, sometimes longer than the request timeout (Cloud Run). When that happens, the job is sent to "failed jobs" but is still active in Cloud Tasks because it keeps retrying. The insert into failed jobs is not what I want, because the job still exists on the queue.
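For context, here is a minimal sketch of the kind of job I mean (the class name and numbers are made up): the handler can run longer than the Cloud Run request timeout, so Laravel's own limits and the Cloud Tasks retry schedule can disagree about when the job is finished failing.

```php
<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;

class GenerateLargeExport implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable;

    // Laravel-side limits; Cloud Tasks keeps its own retry config on the queue.
    public $tries = 3;
    public $timeout = 900; // can exceed the Cloud Run request timeout

    public function handle(): void
    {
        // Long-running work. If this outlives the HTTP request, Cloud Run cuts
        // the request off and Cloud Tasks schedules another delivery, while
        // Laravel may already have written a row to failed_jobs.
    }
}
```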

Meanwhile, for every job timeout there is an insert into failed jobs, and the second (third, fourth) time I get:

SQLSTATE[23000]: Integrity constraint violation: 1062 Duplicate entry 'xxxx-yyyy-zzzz' for key 'failed_jobs_uuid_unique' (SQL: insert into ...

So could we get some more direction on the failed-jobs logic? Or is this more of a Laravel issue?

[Update]

I ran into this because after every failed job I got a "MaxAttemptsExceededException". I finally found the cause: "markJobAsFailedIfAlreadyExceedsMaxAttempts" (in worker.php) cannot deal with the value -1, so the maxAttempts value should be at least 1.
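For reference, a simplified paraphrase of that check (not the exact framework source) shows why -1 trips it: $job->attempts() is always at least 1, so the "still within max attempts" guard can never pass when maxTries is -1, and the job is failed immediately with a MaxAttemptsExceededException.

```php
// Simplified paraphrase of Worker::markJobAsFailedIfAlreadyExceedsMaxAttempts(),
// not the exact Laravel source.
protected function markJobAsFailedIfAlreadyExceedsMaxAttempts($connectionName, $job, $maxTries)
{
    $maxTries = ! is_null($job->maxTries()) ? $job->maxTries() : $maxTries;

    // 0 means "unlimited" here; -1 is not treated specially, and because
    // attempts() is always >= 1, "attempts <= -1" is never true.
    if ($maxTries === 0 || $job->attempts() <= $maxTries) {
        return;
    }

    $this->failJob($job, $e = $this->maxAttemptsExceededException($job));

    throw $e;
}
```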

Also, I could not find any implementation for "Max retry duration". This value needs to be checked. There is a "RetryConfig" class in cloud-tasks/V2, but it is not used. worker.php uses $job->retryUntil, which is currently always 01:00:00, and that could produce weird behaviour.
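As an illustration, the queue's retry settings (including max retry duration) can be read from the Cloud Tasks API. A rough sketch with the google/cloud-tasks V2 client; the project/location/queue names are placeholders, and the exact client surface depends on the installed version:

```php
use Google\Cloud\Tasks\V2\CloudTasksClient;

$client = new CloudTasksClient();

// Placeholders: substitute your own project, location and queue.
$queueName = CloudTasksClient::queueName('my-project', 'europe-west1', 'my-queue');

$retryConfig = $client->getQueue($queueName)->getRetryConfig();

$maxAttempts      = $retryConfig->getMaxAttempts();       // -1 means unlimited on the Cloud Tasks side
$maxRetryDuration = $retryConfig->getMaxRetryDuration();  // Google\Protobuf\Duration or null

if ($maxRetryDuration !== null) {
    // One way to derive a retryUntil timestamp for the worker (counted from now,
    // rather than from the task's first attempt, just to keep the sketch short).
    $retryUntil = time() + $maxRetryDuration->getSeconds();
}
```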

Hi, thanks for the info. I am working on some changes to the package so that it hopefully works better:

  • Setting maxAttempts to -1 now actually makes the job retry indefinitely
  • If a job fails, it is now removed from the queue so it won't be attempted again (see the sketch after this list)
  • Working on supporting the "Max retry duration"
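For the second point, roughly how removing the task could look (a sketch, not the exact package code; $taskName would be the full task resource name taken from the incoming Cloud Tasks request):

```php
use Google\Cloud\Tasks\V2\CloudTasksClient;

// Sketch: when Laravel marks the job as failed, also delete the Cloud Tasks task
// so it is not redelivered. $taskName looks like
// projects/<project>/locations/<location>/queues/<queue>/tasks/<task>.
$client = new CloudTasksClient();
$client->deleteTask($taskName);
```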

Released a new version 2.2.0 with the above changes. Closing this now, but feel free to reopen if needed.