BulkIngester retry policies
swallez opened this issue · 2 comments
The BulkProcessor in the High Level Rest Client (HLRC) has two kinds of retries:
- re-sending the request if the ES server replied with a 429 (Too Many Requests)
- retrying the failed items when a bulk response contains item-level failures.
The new BulkIngester added in #474 doesn't retry for now:
- for the 429 handling, we can argue that this belongs to the transport layer (low-level REST client), which already retries on all cluster nodes in case of failure and should also handle 429 responses.
- for individual item retries, the approach used in the BulkProcessor of retrying all failed items has some shortcomings: a number of errors will fail again in the same way when retried, e.g. a version conflict, a partial update failure caused by a script error or bad document structure, deletion of a non-existing document, etc. The items worth retrying are probably those with a 429 status, which may happen if the coordinating node accepted the request but the target node for the item's operation was overloaded.
A way to handle this in the new BulkIngester would be to define a retry policy combining a delay behavior (linear, exponential, etc.) like in the HLRC with a predicate that selects the failed items that should be retried.
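As a rough illustration of that proposal, here is a minimal sketch of what such a policy could look like: a backoff schedule plus a predicate over item status codes. All names here (`backoffDelay`, `SHOULD_RETRY`) are hypothetical, not part of any existing BulkIngester API.

```java
import java.time.Duration;
import java.util.function.IntPredicate;

// Hypothetical sketch: a retry policy as (1) a delay behavior and
// (2) a predicate selecting which failed items are worth retrying.
public class RetryPolicySketch {

    // Exponential backoff: delay = initial * 2^attempt, capped at max.
    static Duration backoffDelay(Duration initial, Duration max, int attempt) {
        long millis = initial.toMillis() << attempt; // doubles each attempt
        return millis >= max.toMillis() ? max : Duration.ofMillis(millis);
    }

    // Retry only items that failed with 429: the coordinating node accepted
    // the request, but the target node for this item was overloaded.
    // Errors like version conflicts or script failures would fail again.
    static final IntPredicate SHOULD_RETRY = status -> status == 429;

    public static void main(String[] args) {
        System.out.println(backoffDelay(Duration.ofMillis(100), Duration.ofSeconds(5), 3).toMillis());
        System.out.println(SHOULD_RETRY.test(429));
        System.out.println(SHOULD_RETRY.test(409)); // version conflict: not retried
    }
}
```

The predicate keeps the item-selection logic pluggable, so applications with different idempotency guarantees can widen or narrow which failures get re-sent.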
> for the 429 handling, we can argue that this belongs to the transport layer (low-level REST client), which already retries on all cluster nodes in case of failure and should also handle 429 responses.
I would just like to point out this issue where adding such behavior to the low level rest client was discussed: elastic/elasticsearch#21141 (comment)
I also think that this is more of a feature for a high-level client rather than a low-level one.
That is to say, please get some consensus on where this would belong, because right now it seems like every application developer has to roll their own solution.