logstash-plugins/logstash-input-kafka

Kafka auto-commit on failed index events

cdenneen opened this issue · 0 comments

Currently, if my Kafka input reads events and the Elasticsearch/OpenSearch output fails to index them (in my recent scenario the write index alias was missing), those events appear to be dropped on the floor, yet the consumer group offset still advances past them, so they are never replayed after the alias is fixed.

After the alias was fixed, all new events ingested properly.
However, the events from the roughly 15-hour window during which the alias was broken were never ingested.

We need a way to track those events so they can be re-synced. Changing the consumer's auto_offset_reset from latest to earliest would replay a large number of duplicate events, so that isn't an option.
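As a one-off recovery (not a fix for the commit semantics), Kafka's own tooling can rewind a consumer group to a timestamp rather than all the way to earliest, which would replay only the broken window. A sketch using `kafka-consumer-groups.sh` (Kafka 0.11+); the broker address, group id, topic name, and timestamp below are placeholders for illustration:

```shell
# Stop the Logstash pipeline first so the group is inactive,
# then rewind the group's offsets to just before the alias broke.
kafka-consumer-groups.sh \
  --bootstrap-server localhost:9092 \
  --group logstash-consumer-group \
  --topic my-topic \
  --reset-offsets \
  --to-datetime 2020-01-01T00:00:00.000 \
  --execute    # omit --execute (or use --dry-run) to preview the new offsets first
```

This still re-delivers every event after the chosen timestamp, including the ones indexed successfully once the alias was restored, so duplicates are only bounded, not eliminated.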

The offset commit (to ZooKeeper, or to Kafka's `__consumer_offsets` topic with newer clients) should account for the fact that the event wasn't successfully indexed.
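One existing mitigation worth noting: Logstash's dead letter queue captures events that the Elasticsearch output fails to index with a 400 or 404 response (a missing write alias typically returns 404), so the events land on disk instead of being dropped while the offset advances. A sketch, assuming the default `path.data` layout and a hypothetical downstream pipeline:

```
# logstash.yml — enable the dead letter queue for this pipeline
dead_letter_queue.enable: true

# Separate recovery pipeline: re-read failed events once the alias is fixed.
# The pipeline_id and path below are illustrative assumptions.
input {
  dead_letter_queue {
    path => "/var/lib/logstash/dead_letter_queue"
    pipeline_id => "main"
    commit_offsets => true
  }
}
output {
  elasticsearch {
    # point at the now-working write alias
  }
}
```

This doesn't change the Kafka commit behavior the issue asks for, but it prevents the failed-index window from being unrecoverable.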