ShouldBeUnique Support
Closed this issue · 12 comments
Hey there!
Is it possible to use the ShouldBeUniqueUntilProcessing and ShouldBeUnique interfaces with this package?
https://laravel.com/docs/8.x/queues#keeping-jobs-unique-until-processing-begins
Hi @zek
You mean that the same event could be published from different sources at the same time?
Actually no. I need to block tasks within the same group and make sure they're processed in order.
(Group key included in task data)
Let me tell you my use case.
I have a microservice which collects data and pushes it to RabbitMQ, and the data is handled by queue workers. There are 400 instances of this microservice.
For scaling purposes I have multiple queue workers on multiple machines.
In my case task order is very important.
I'm afraid the task order may get mixed up.
Right now I use the cache lock feature and block tasks as follows:
Cache::lock('group:'.$groupKey, 10)->block(...)
So multiple events may be pushed in the same second or even millisecond, and I need to make sure they're executed in order by multiple queue workers.
The order issue is clear to me. And I still don't see how to solve this without scanning the full list of current tasks. It would need some dispatcher which is the only point that returns messages, or something like that.
But this isn't an issue of uniqueness.
I truly believe that Rabbitevents works on the FIFO principle. So if many listeners are listening to one queue, messages will be handled in the right order.
Indeed, but in my case each task within the same group must wait for the previous tasks to be completed.
I guess I mentioned the wrong interface; it should be WithoutOverlapping.
Even Cache::lock
doesn't solve my issue :/
I have 10 workers and 2 events that were queued in the same second with the same group key.
Event A (group X) queued at 16:14:46.7181
Event B (group X) queued at 16:14:46.7237
I just need Event B to wait for Event A to be completed. But in my case Event B was processed before Event A.
I came across a feature in RabbitMQ. It might be a solution:
https://github.com/rabbitmq/rabbitmq-server/tree/master/deps/rabbitmq_consistent_hash_exchange
Regarding your issue: how should a listener work? Both listeners have taken their messages in the correct order, but for some reason (database load, server, etc.) the second listener could finish its job faster. But that is a problem which could be solved by atomic processing.
I see that the issue you have could be solved like this:
- a worker gets a message
- the worker creates a mutex
- no new messages can be taken while the mutex exists
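The steps above can be sketched roughly like this (an in-process illustration with a hypothetical `handle` function; real workers on multiple machines would need a shared lock store such as Redis rather than `threading.Lock`):

```python
import threading
from collections import defaultdict

# One mutex per group key. A worker holds the group's mutex for the
# whole job, so two jobs of the same group can never overlap.
# (Illustrative only: in-process locks do not span machines.)
_group_locks = defaultdict(threading.Lock)

processed = []

def handle(group_key, payload):
    with _group_locks[group_key]:  # no new message for this group until released
        processed.append((group_key, payload))

threads = [threading.Thread(target=handle, args=("X", i)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Note this only guarantees that jobs of a group never overlap; it does not by itself guarantee that waiting workers acquire the lock in FIFO order.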
I've tried an atomic lock but it didn't solve my issue. I've read a few blog posts, and it seems like this is a common problem in tech firms such as e-commerce companies. They mostly point to rabbitmq_consistent_hash_exchange.
According to the consistent hash exchange docs,
tasks in the same group go to the same worker, so that worker must finish task 1 before it can receive task 2.
That means none of the other listeners should ever get task 2, even if the queue only has task 2 left.
It's really easy to implement; however, it requires the routing key to be an integer. But rabbitevents uses the routing key as the event name, so the event name would need to be taken from the data or properties instead.
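For illustration only (this is not the plugin's actual algorithm — rabbitmq_consistent_hash_exchange distributes messages over the bound queues via a weighted hash ring), the core idea is that hashing the routing key deterministically picks one queue, so every message carrying the same group key lands on the same single consumer:

```python
import hashlib

# Toy stand-in for consistent-hash routing: hash the group key and
# map it to one of n worker queues. Same key -> same queue, always.
def worker_for(group_key: str, n_workers: int) -> int:
    digest = hashlib.sha256(group_key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % n_workers

# Events A and B both carry group "X", so they are routed to the
# same worker queue and consumed in FIFO order by that one worker.
assert worker_for("X", 10) == worker_for("X", 10)
```

Since a single worker consumes each group's queue in FIFO order, Event B for group X cannot start before Event A finishes.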
The routing key is part of the queue name too.