Avoid duplicates on Kafka
eldimious commented
- V1: commit offsets on the consumer manually (`commitSync` / `commitAsync`). We can still get duplicates if the service crashes after processing but before the offset is committed, which is why we keep the `proceed_events` table: it lets us skip events we have already handled (sketch 1 below).
- V2: `proceed_events` helps with de-duplication here as well, and in addition we store the committed offset in the database, inside the same transaction as the processing results. On startup or rebalance the consumer should `seek` to the stored offset and consume from there. Storing the offset in the DB avoids the duplicates created when the service throws an exception after processing but before committing the offset; writing it inside the same transaction makes the whole operation atomic (sketch 2 below).
- V3: we use `exactly_once`, so Kafka handles the transactions internally, but it is worth re-checking failures in REST calls: external side effects are not covered by Kafka transactions, so a retried batch repeats them (sketch 3 below).
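
Sketch 1 (V1): a minimal at-least-once consumer with manual commits plus the de-duplication check. The topic name, group id, and the in-memory stand-in for the `proceed_events` table are assumptions for illustration; in the real service the lookup would hit the database.

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.Collections;
import java.util.HashSet;
import java.util.Properties;
import java.util.Set;

public class V1ManualCommitConsumer {
    // In-memory stand-in for the proceed_events table; a real service queries the DB.
    private static final Set<String> proceedEvents = new HashSet<>();

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "orders-service");   // assumed group id
        props.put("enable.auto.commit", "false");  // we commit manually
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("orders")); // assumed topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    String eventId = record.key(); // assumed: key carries a unique event id
                    // Skip events already handled; this is what saves us when the
                    // service crashed after processing but before the commit below.
                    if (!proceedEvents.contains(eventId)) {
                        handle(record.value());
                        proceedEvents.add(eventId); // mark only after successful handling
                    }
                }
                // A crash before this line re-delivers the batch, and the
                // proceed_events check above filters out the duplicates.
                consumer.commitSync();
            }
        }
    }

    private static void handle(String payload) {
        System.out.println("processing " + payload);
    }
}
```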
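Sketch 2 (V2): the offset is written in the same DB transaction as the result, and restored with `seek` on partition assignment. The JDBC URL, the `processed_events` / `consumer_offsets` tables, and the PostgreSQL `ON CONFLICT` upsert are all assumptions, not part of the original setup.

```java
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.time.Duration;
import java.util.Collection;
import java.util.Collections;
import java.util.Properties;

public class V2DbOffsetConsumer {
    public static void main(String[] args) throws Exception {
        Connection db = DriverManager.getConnection("jdbc:postgresql://localhost/app"); // assumed DB
        db.setAutoCommit(false); // one DB transaction per record

        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "orders-service");
        props.put("enable.auto.commit", "false"); // offsets live in the DB, not in Kafka
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("orders"), new ConsumerRebalanceListener() {
            @Override public void onPartitionsRevoked(Collection<TopicPartition> partitions) {}
            @Override public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                // Resume from the offsets recorded in the DB, not Kafka's own.
                for (TopicPartition tp : partitions) {
                    consumer.seek(tp, loadOffset(db, tp));
                }
            }
        });

        while (true) {
            for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                try (PreparedStatement result = db.prepareStatement(
                         "INSERT INTO processed_events (event_id, payload) VALUES (?, ?)");
                     PreparedStatement offset = db.prepareStatement(
                         "INSERT INTO consumer_offsets (topic, partition_id, next_offset) VALUES (?, ?, ?) " +
                         "ON CONFLICT (topic, partition_id) DO UPDATE SET next_offset = EXCLUDED.next_offset")) {
                    result.setString(1, record.key());
                    result.setString(2, record.value());
                    result.executeUpdate();
                    offset.setString(1, record.topic());
                    offset.setInt(2, record.partition());
                    offset.setLong(3, record.offset() + 1); // next offset to read
                    db.commit(); // result + offset land atomically, or not at all
                } catch (Exception e) {
                    db.rollback(); // nothing recorded, so the record is safely re-read
                    throw e;
                }
            }
        }
    }

    private static long loadOffset(Connection db, TopicPartition tp) {
        try (PreparedStatement ps = db.prepareStatement(
                "SELECT next_offset FROM consumer_offsets WHERE topic = ? AND partition_id = ?")) {
            ps.setString(1, tp.topic());
            ps.setInt(2, tp.partition());
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getLong(1) : 0L;
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```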
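Sketch 3 (V3): the transactional consume-process-produce loop. Kafka's transaction covers the produced records and the consumed offsets atomically, but the REST call in the middle sits outside it, which is exactly the failure mode worth re-checking: if the transaction aborts and the batch is reprocessed, the REST call runs again, so the downstream endpoint must be idempotent. Topic names, the transactional id, and `callRestEndpoint` are assumptions.

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;

import java.time.Duration;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

public class V3ExactlyOncePipeline {
    public static void main(String[] args) {
        Properties cProps = new Properties();
        cProps.put("bootstrap.servers", "localhost:9092");
        cProps.put("group.id", "orders-service");
        cProps.put("enable.auto.commit", "false");
        cProps.put("isolation.level", "read_committed"); // don't read aborted data
        cProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        cProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        Properties pProps = new Properties();
        pProps.put("bootstrap.servers", "localhost:9092");
        pProps.put("transactional.id", "orders-pipeline-1"); // assumed; must be stable per instance
        pProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        pProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(cProps);
             KafkaProducer<String, String> producer = new KafkaProducer<>(pProps)) {
            producer.initTransactions();
            consumer.subscribe(Collections.singletonList("orders"));

            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                if (records.isEmpty()) continue;

                producer.beginTransaction();
                try {
                    Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
                    for (ConsumerRecord<String, String> record : records) {
                        // NOT covered by the Kafka transaction: if we abort and the
                        // batch is reprocessed, this call runs again, so the
                        // endpoint must be idempotent.
                        callRestEndpoint(record.value());
                        producer.send(new ProducerRecord<>("orders-enriched", record.key(), record.value()));
                        offsets.put(new TopicPartition(record.topic(), record.partition()),
                                new OffsetAndMetadata(record.offset() + 1));
                    }
                    // Output records and consumed offsets commit atomically.
                    producer.sendOffsetsToTransaction(offsets, consumer.groupMetadata());
                    producer.commitTransaction();
                } catch (Exception e) {
                    // In a real service, rewind to the last committed offsets (or
                    // restart) so the batch is re-processed.
                    producer.abortTransaction();
                }
            }
        }
    }

    private static void callRestEndpoint(String payload) {
        // hypothetical external side effect (HTTP call), outside Kafka's guarantees
    }
}
```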