Some bulk requests are retried forever even when we are trying to stop a pipeline
pfcoperez opened this issue · 1 comment
Logstash information:
- Logstash version: >=8.5.2, maybe previous versions too
- Installation source: Any
- How is it being run? As part of a managed Docker container by internal Elastic Cloud orchestration system.
- The plugin is included in the default LS distribution.
JVM (e.g. `java -version`): Bundled
OS version (`uname -a` if on a Unix-like system): N/A
Description of the problem including expected versus actual behavior:
At Elastic Cloud we deploy some Logstash instances as managed services, that is, their configurations get updated automatically by an external service reacting to cloud configuration changes.
One scenario we contemplate is the possibility of evicting certain pipelines when their target ES cluster becomes unhealthy, unresponsive, or unavailable. That eviction action is little more than a dynamic reconfiguration of the pipelines, which boils down to stopping the affected pipeline.
For some of the failure modes we tested this works nicely, as the forever-retry loops are stopped when the pipeline is stopping. For others, such as a bad authentication response code (`401`), the pipeline never stops as it gets trapped in the retry loop forever.
By reading the plugin source code we found that both connectivity errors and unexpected errors have a retry safeguard stopping the retry loop when the pool/pipeline is stopping (e.g. the last line in the referenced snippet).
However, other response codes (like `401`) don't.
Our reasoning is that this latter branch should have the same safeguard: in this use case, as in others, the pipeline may have been configured with parameters that are no longer valid (e.g. expired API keys), and the best solution to the management problem is to just reconfigure the pipeline without restarting the whole Logstash instance.
We are filing a PR replacing `retry` with `retry unless @stopping.true?` in the branch dealing with known retryable events.
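To make the proposed safeguard concrete, here is a minimal, self-contained sketch. All names (`StopFlag`, `RetryingSender`, `submit`) are illustrative, not the plugin's API, and a tiny Mutex-backed flag stands in for the `Concurrent::AtomicBoolean` the plugin actually uses for `@stopping`:

```ruby
# Mutex-backed stand-in for Concurrent::AtomicBoolean (keeps the sketch
# dependency-free).
class StopFlag
  def initialize
    @mutex = Mutex.new
    @value = false
  end

  def true?
    @mutex.synchronize { @value }
  end

  def make_true
    @mutex.synchronize { @value = true }
  end
end

class RetryingSender
  attr_reader :attempts

  def initialize
    @stopping = StopFlag.new
    @attempts = 0
  end

  def stop!
    @stopping.make_true
  end

  # Stand-in for the plugin's bulk submission: it always fails with a
  # "retryable" error so the loop's behaviour is observable.
  def submit
    begin
      @attempts += 1
      raise "401 Unauthorized"
    rescue
      retry unless @stopping.true? # the proposed safeguard
      :gave_up
    end
  end
end
```

With a plain `retry` the loop above would never terminate on a persistent `401`; with the guard, setting the flag (from the shutdown path) lets the sender give up on the next failure.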
The problem is not solved by just conditioning the retry attempt on `@stopping.true?`, i.e. guarding all the bulk request branches with `retry unless @stopping.true?`.
We found that `@stopping.true?` never becomes true. The reason for this, as discussed with the Logstash team, is that `close` on the output is not invoked until all the batches are completed:
```ruby
def shutdown_workers
  @shutdownRequested.set(true)

  @worker_threads.each do |t|
    @logger.debug("Shutdown waiting for worker thread", default_logging_keys(:thread => t.inspect))
    t.join
  end

  filters.each(&:do_close)
  outputs.each(&:do_close)
end
```
The logic is as follows:
- The shutdown flag is set, thus triggering the shut-down process.
- When all workers have finished shutting down, they get their `close` method invoked.
- This `close` method is what sets the `@stopping` flag:
`logstash-output-elasticsearch/lib/logstash/outputs/elasticsearch.rb` (lines 441 to 445 in `4507240`)
- Which is, in turn, the signal to stop retrying.

So the condition to stop retrying can only be fulfilled after the plugin has already stopped retrying, leading to a deadlock.
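The ordering problem can be sketched with plain threads. This is illustrative only: a local flag stands in for `@stopping`, and a polling loop stands in for the bulk-retry loop:

```ruby
stopping = false
mutex = Mutex.new

# Stand-in for a worker stuck in the forever-retry loop: it only exits
# once the stop flag is set.
worker = Thread.new do
  sleep 0.01 until mutex.synchronize { stopping }
end

# Shutdown as currently implemented: join the worker first...
joined = worker.join(0.2)  # times out, because the worker is still "retrying"
deadlocked = joined.nil?

# ...and only then set the flag (what outputs.each(&:do_close) would do).
mutex.synchronize { stopping = true }
worker.join
```

Because `join` cannot complete until the flag is set, and the flag is only set after `join` completes, the bounded `join` above times out, which is the deadlock in miniature.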
The incoming fix PR will cover this too by replacing checks on `@stopping` with the following condition, encapsulated in a helper method:
```ruby
def shutting_down?
  @stopping.true? || (execution_context.pipeline.shutdown_requested? && !execution_context.pipeline.worker_threads_draining?)
end
```
Where:
- `execution_context.pipeline.shutdown_requested?` is set right in the first line of the body of `shutdown_workers`.
- `!execution_context.pipeline.worker_threads_draining?` makes sure we keep retrying when the pipeline is configured to drain the events before being stopped.
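The helper's truth table can be modeled with a small stub. The `Pipeline` struct below is hypothetical scaffolding standing in for `execution_context.pipeline`, not Logstash's actual API:

```ruby
# Stub pipeline exposing the two predicates the helper consults.
Pipeline = Struct.new(:requested, :draining) do
  def shutdown_requested?
    requested
  end

  def worker_threads_draining?
    draining
  end
end

# Free-function version of the proposed helper; `stopping` is a plain
# boolean here rather than the plugin's Concurrent::AtomicBoolean.
def shutting_down?(stopping, pipeline)
  stopping || (pipeline.shutdown_requested? && !pipeline.worker_threads_draining?)
end
```

The key case is a requested shutdown with no draining: the helper returns true even though `@stopping` has not flipped yet, which is what breaks the deadlock. When the pipeline is draining, the helper stays false so in-flight events are still retried.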