zarusz/SlimMessageBus

[Host.Redis] Scale out - multiple producers and multiple consumers in the same topic/queue

NinjaCross opened this issue · 5 comments

I've read the documentation and taken a general look at all the samples.
Everything seems well explained, but I admit it's still not clear to me (sorry if I missed it somewhere) how the system behaves in the following cloud/distributed scenario:

  • Multiple processes acting as producers
  • Multiple processes acting as consumers
  • A producer sends a message "X" into the topic/queue

How many consumers will receive message "X"?
Is it guaranteed (or can it be guaranteed) that exactly one consumer will receive message "X", and that it receives it only once?
Is the message always removed after a consumer receives it?
If that is not the default behaviour, what kind of configuration enforces it?

I'm particularly interested in using Redis and/or SQL Server as the storage/transport.
Thank you for any clarification.

zarusz commented

Hello @NinjaCross, thanks for the feedback on the docs.

It is a loaded question :)

In general, SMB does not make any guarantees of its own; the behaviour pretty much depends on the underlying broker/messaging system chosen and its delivery guarantees and semantics. Kafka, Azure Service Bus, Redis, etc. all have different characteristics.

Now, if we focus on Redis Pub/Sub in particular:

  • SMB is using StackExchange.Redis
  • The published message goes to a Redis pub/sub channel (a topic in SMB terms)
  • Every connected consumer process will get a copy (they all act as individual subscribers). That's how Redis pub/sub works (see the sketch after this list).
  • Delivery semantics are at-most-once in Redis.
  • Order-wise, messages sent by clients to a channel are pushed by Redis to all the subscribed clients, and subscribers receive them in the order they were published.
  • More here: https://redis.io/docs/manual/pubsub/
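
To make the fan-out concrete, here is a minimal sketch using StackExchange.Redis directly (not SMB); the channel name and the message are placeholders. Every process that runs the subscriber part gets its own copy of each published message:

```csharp
// Minimal pub/sub fan-out sketch with StackExchange.Redis (not SMB itself).
// Run the subscriber part in several processes: each subscriber receives its
// own copy of every message published to the channel (at-most-once delivery).
using System;
using StackExchange.Redis;

var redis = await ConnectionMultiplexer.ConnectAsync("localhost:6379");
var sub = redis.GetSubscriber();

// Consumer side: every connected subscriber gets the message.
await sub.SubscribeAsync("orders-topic", (channel, message) =>
{
    Console.WriteLine($"Got: {message}");
});

// Producer side: any process can publish to the channel.
await sub.PublishAsync("orders-topic", "message X");
```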

To meet the requirement that exactly one consumer gets the published message X, you'd have to use SMB's queue emulation, which is built on the Redis list type and gives you FIFO ordering. More here: https://redis.io/docs/data-types/lists/.
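
For reference, here is a rough sketch of what the SMB configuration for the two modes could look like. This is only an illustration: the message and consumer types are made up, and the builder/type names (MessageBusBuilder, DefaultTopic/Topic, DefaultQueue/Queue, WithProviderRedis, RedisMessageBusSettings) are taken from the Redis provider docs, so please double-check them against the SMB version you use:

```csharp
// Approximate SMB Redis transport configuration (verify names against your SMB version).
// OrderEvent/OrderCommand and their consumers are hypothetical types.
using SlimMessageBus.Host;
using SlimMessageBus.Host.Redis;
using SlimMessageBus.Host.Serialization.Json;

var mbb = MessageBusBuilder.Create()
    // Topic = Redis pub/sub: every subscribed consumer process gets a copy.
    .Produce<OrderEvent>(x => x.DefaultTopic("order-events"))
    .Consume<OrderEvent>(x => x.Topic("order-events").WithConsumer<OrderEventConsumer>())

    // Queue = Redis list emulation: exactly one consumer process picks up
    // each message, in FIFO order.
    .Produce<OrderCommand>(x => x.DefaultQueue("order-commands"))
    .Consume<OrderCommand>(x => x.Queue("order-commands").WithConsumer<OrderCommandConsumer>())

    .WithSerializer(new JsonMessageSerializer())
    .WithProviderRedis(new RedisMessageBusSettings("localhost:6379"));
```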

Does that explain it?
I would love some feedback or a PR to help clarify the Redis behaviour with SMB in the docs.

If you were to consider, let's say, the Kafka transport, Azure Service Bus with sessions, or Event Hubs, then what you need could also be achieved. For example, with Kafka, putting your consumer processes into one consumer group ensures that the published message X gets delivered to exactly one consumer in the group at a time. Combined with topic partitioning, you also get increased throughput.
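
To illustrate the consumer group idea outside of SMB, here is a hedged sketch using the Confluent.Kafka client directly; the broker address, topic, and group id are placeholders. Every process started with the same GroupId shares the topic's partitions, so each published message is handled by exactly one member of the group:

```csharp
// Kafka consumer group sketch with Confluent.Kafka (not SMB). Start this
// process several times with the same GroupId: Kafka spreads the topic's
// partitions across the group, so each message goes to exactly one member.
using System;
using Confluent.Kafka;

var config = new ConsumerConfig
{
    BootstrapServers = "localhost:9092",  // placeholder broker address
    GroupId = "order-processors",         // same group => messages are shared, not duplicated
    AutoOffsetReset = AutoOffsetReset.Earliest,
};

using var consumer = new ConsumerBuilder<Ignore, string>(config).Build();
consumer.Subscribe("order-events");

while (true)
{
    var result = consumer.Consume();
    Console.WriteLine($"Partition {result.Partition.Value}: {result.Message.Value}");
}
```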

NinjaCross commented

Thank you @zarusz for your feedback, that's exactly the kind of answer I was hoping to receive.
May I suggest that you add these clarifications to the documentation too?
I think it would be very useful :)

zarusz commented

@NinjaCross can you please review this docs improvement PR and let me know if it reads better?
#171

NinjaCross commented

@zarusz that's much better now, thanks :)

zarusz commented

Cool thanks :)