sky-uk/kafka-message-scheduler

Reader stream is terminated when the publisher queue is full

lacarvalho91 opened this issue · 0 comments

The scheduler gets into a zombie state when the publisher queue buffer is full: the reader stream is terminated (as though a fatal error had occurred) while the publisher stream keeps running.
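A minimal Akka Streams sketch of the failure mode described above (all names and the queue wiring are illustrative assumptions, not the scheduler's actual code): the reader offers each consumed record to a bounded `Source.queue`; when the buffer is full the offer resolves to `QueueOfferResult.Dropped`, the reader treats that as fatal and fails its own stream, but the separately materialized publisher stream stays alive.

```scala
import akka.actor.ActorSystem
import akka.stream.{OverflowStrategy, QueueOfferResult}
import akka.stream.scaladsl.{Sink, Source}
import scala.concurrent.Future

object ZombieSketch extends App {
  implicit val system: ActorSystem = ActorSystem("sketch")
  import system.dispatcher

  // Publisher: a bounded queue feeding Kafka-bound messages (illustrative).
  val publisherQueue = Source
    .queue[String](bufferSize = 16, OverflowStrategy.dropNew)
    .to(Sink.foreach(msg => println(s"publishing $msg")))
    .run()

  // Reader: offers each record to the publisher queue. Failing the offer
  // future on Dropped kills THIS stream only; the publisher stream above
  // remains materialized and running -- the zombie state.
  val reader = Source(1 to 100000)
    .mapAsync(1) { i =>
      publisherQueue.offer(s"msg-$i").flatMap {
        case QueueOfferResult.Enqueued => Future.successful(i)
        case other =>
          Future.failed(new IllegalStateException(s"offer failed: $other"))
      }
    }
    .runWith(Sink.ignore)

  reader.failed.foreach { e =>
    // The reader has died, but nothing stops the publisher or the app.
    println(s"Reader stream has died: ${e.getMessage}")
  }
}
```

Under this reading, a fix would either treat a failed reader stream as fatal for the whole application (shut down) or apply backpressure instead of failing when the queue is full.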

Excerpt from Slack conversation with @mishamo:

In one of our environments we're seeing `Publisher stream has died WARNING arguments left: 1`, then a bunch of

```
Message [akka.kafka.KafkaConsumerActor$Internal$Stop$] without sender to Actor[akka://kafka-message-scheduler/system/kafka-consumer-1#44317897] was not delivered. [10] dead letters encountered, no more dead letters will be logged. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
```

followed by

```
Publishing scheduled message bafd8c9b-f77c-4a52-a3bb-4616e52a164d to scheduler-healthcheck and deleting it from ACCOUNT_JANITOR-SCHEDULE
```

around 100ms later, without an actual shutdown happening.

If we see that first log line, we would expect an application shutdown, right?

Then, around 100ms later, it stops consuming and becomes a zombie.