kiwigrid/helm-charts

Fluentd stops sending logs to elasticsearch all of a sudden

Closed this issue · 1 comments

Is this a request for help?:

Yes

Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Version of Helm and Kubernetes:

K8S version: 1.13

helm version:
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}

Which chart in which version:
Chart: fluentd-elasticsearch-4.4.0
Version: 2.6.0

What happened:
fluentd intermittently stops sending data to Elasticsearch even though logs are still being generated

What you expected to happen:

Logs to be published to Elasticsearch and then be viewable in Kibana

How to reproduce it (as minimally and precisely as possible):

The behaviour is very dynamic; I'm not sure how to reproduce it at any given time

Anything else we need to know:

I see this warning when fluentd publishes logs:
2019-11-26 12:41:16 +0000 [warn]: [elasticsearch] buffer flush took longer time than slow_flush_log_threshold: elapsed_time=74.0003387699835 slow_flush_log_threshold=20.0 plugin_id="elasticsearch"

I'm not sure whether this is what stops data from being sent to Elasticsearch. If it is, is there a way to change the slow_flush_log_threshold value? And if so, would that affect anything else? I'm doing this on a production cluster.
Please help.
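
From what I can tell, slow_flush_log_threshold is a standard parameter of Fluentd output plugins and only controls when the slow-flush warning is logged, not whether data keeps flowing. If raising it is possible, I assume it would be set directly in the Elasticsearch <match> block of the output config, roughly like the sketch below (the host, port and buffer settings here are placeholders, not the chart's actual output.conf):

<match **>
  @type elasticsearch
  @id elasticsearch
  host elasticsearch-client   # placeholder host
  port 9200
  # Raise the threshold (seconds) above which a flush is reported as slow.
  # This only changes when the warning fires, not the flush behaviour itself.
  slow_flush_log_threshold 60.0
  <buffer>
    @type file
    path /var/log/fluentd-buffers/kubernetes.system.buffer
    flush_interval 5s
    retry_forever true
  </buffer>
</match>

In the chart I assume this would be applied by overriding the output configmap, though I haven't verified exactly where in values.yaml that lives.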

This was due to JVM memory pressure and all my disks being full; the issue is fixed now. Thanks.