kiwigrid/helm-charts

[fluentd-elasticsearch] not all logs making it to elastic

Closed this issue · 1 comment

Is this a BUG REPORT or FEATURE REQUEST? (choose one): Bug Report

Version of Helm and Kubernetes:
helm: 3.1.2
k8s: 1.17.5

Which chart in which version:
fluentd-elasticsearch-8.0.1

What happened:
Not all logs are making it to Elasticsearch. I have a cronjob that runs every 5 minutes and brings up 5 pods, each running a simple dig against github.com (see the sketch after the log excerpt below). The output is placed at /var/log/containers/dns-timeout-test-1587900600-xzjzm_rzneo_dns-timeout-test-6031e396704949d5b68ee2974c8a5cd32a5fa6e7619620946453082e99cd7335.log

 cat dns-timeout-test-1587900600-xzjzm_rzneo_dns-timeout-test-6031e396704949d5b68ee2974c8a5cd32a5fa6e7619620946453082e99cd7335.log
{"log":"github.com.\n","stream":"stdout","time":"2020-04-26T11:30:18.233996891Z"}
{"log":"140.82.118.4\n","stream":"stdout","time":"2020-04-26T11:30:18.234020291Z"}

However, these logs are not making it to Elasticsearch, while logs from other containers arrive properly. Is the current configuration checking whether the pod is still running before grabbing the logs?
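For context: the chart's fluentd daemonset tails the log files on each node and never consults pod state; only the files on disk matter. A paraphrased sketch of the usual containers input (not the chart's exact ConfigMap) looks roughly like this:

<source>
  @id containers.log
  @type tail
  # Tails the node-level container log files directly;
  # fluentd does not check whether the pod is still running.
  path /var/log/containers/*.log
  pos_file /var/log/containers.log.pos
  # Read pre-existing files from the beginning on (re)start,
  # not just new writes.
  read_from_head true
  tag raw.kubernetes.*
  <parse>
    @type json
    time_format %Y-%m-%dT%H:%M:%S.%NZ
  </parse>
</source>

Note that in_tail discovers new files matching path only on its refresh_interval (60 seconds by default), so output from a pod that lives just a few seconds may show up with a delay rather than being lost, as long as the file is still on the node when the next scan runs.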

What you expected to happen:
The stdout logs of these cronjob-triggered pods should appear in Elasticsearch.

How to reproduce it (as minimally and precisely as possible):
Deploy the helm chart as-is, run a container that writes output to stdout, and let the container terminate. Then check Elasticsearch to see whether the logs are available. A concrete version of these steps is sketched below.
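One way these steps could look in practice; the job name, image, and Elasticsearch host are assumptions:

# Assuming kubectl access to the cluster; job name and image are illustrative.
kubectl create job dns-test --image=alpine:3 -- \
  sh -c 'apk add --no-cache bind-tools >/dev/null && dig +short github.com'
kubectl wait --for=condition=complete job/dns-test --timeout=120s
# The chart writes to logstash-* indices by default; the host is a placeholder.
curl 'http://elasticsearch:9200/logstash-*/_search?q=kubernetes.pod_name:dns-test*&pretty'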

It took some time, but now they are there.
No idea what the reason is, but I am okay with it this way.