Encountered warning [413]: Failed to flush the buffer. error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error=could not push logs to Elasticsearch
latan9 opened this issue · 3 comments
Dear team,
Our Fluentd pods are frequently showing warnings with error code [413]:
[warn] : [out_es] Failed to flush the buffer. retry_times=1 next_retry_time=2023-04-04 03:39:56 +0000 chunk="5f74fb50c7268c61110c61bdd7f31b7a" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>"elasticsearch-sample-elastic.apps.example.locadomain", :port=>443, :scheme=>"https", :user=>"elastic", :password=>"obfuscated"}): 413"
I am using the following Fluentd buffer configuration:

```
bulk_message_request_threshold 10MB
slow_flush_log_threshold 200s
@type "file"
path "/var/log/fluentd/buffers/containerlogs"
total_limit_size 5120MB
flush_thread_count 4
flush_interval 10s
overflow_action drop_oldest_chunk
```
Dear team,
Please let us know how this issue can be fixed. We are stuck on it.
Dear team,
Please reply to this issue, as this error is occurring frequently.
[warn] : [out_es] Failed to flush the buffer. retry_times=1 next_retry_time=2023-04-04 03:39:56 +0000 chunk="5f74fb50c7268c61110c61bdd7f31b7a" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>"elasticsearch-sample-elastic.apps.example.locadomain", :port=>443, :scheme=>"https", :user=>"elastic", :password=>"obfuscated"}): 413"
The trailing error code 413 means "Content Too Large": the Elasticsearch side is rejecting bulk requests that exceed its configured request-size limit. You are probably sending overly large requests because of a high input rate. How about limiting the amount of data in one request with chunk_limit_size?
https://docs.fluentd.org/configuration/buffer-section#buffering-parameters
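A minimal sketch of how that suggestion could fit into the buffer configuration posted above. The 4MB value is an assumption for illustration, not a recommendation: pick a value safely below whatever request-size limit the receiving side enforces (Elasticsearch's own http.max_content_length defaults to 100MB, but a proxy or ingress in front of it is often configured much lower).

```
<buffer>
  @type file
  path /var/log/fluentd/buffers/containerlogs
  # Cap the size of each buffer chunk so that a single flush
  # (one bulk request) stays under the server's 413 threshold.
  # 4MB is an illustrative value, not a recommendation.
  chunk_limit_size 4MB
  total_limit_size 5120MB
  flush_thread_count 4
  flush_interval 10s
  overflow_action drop_oldest_chunk
</buffer>
```

Smaller chunks mean more frequent, smaller bulk requests, trading some throughput for staying under the server's payload limit.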
BTW, I believe this is not a Fluentd bug but a usage issue, so I am closing this issue.
If you have any questions about Fluentd usage, please post them to the community:
https://github.com/fluent/fluentd#more-information