mittwald/kubernetes-replicator

How to reduce the log level from info to warning

Mahesh-Gunda-Maersk opened this issue · 10 comments

Describe the bug
Right now, the replicator logs at the info level and emits entries every 30 minutes. We would like to reduce the log level to warning, since we forward these logs to a central monitoring system. How do we achieve that?
We couldn't find an option for this in the Helm chart. Please share the relevant arguments, if any; then we could change the log level to info or debug whenever we want by passing arguments in the Deployment YAML file.

Example Log Trace

time="2023-04-17T14:08:25Z" level=info msg="Checking if kube-public/cluster-tls-certificate exists? true" kind=Secret source=hybrid-cloud/cluster-tls-certificate target=kube-public/cluster-tls-certificate

Expected behavior
Log only on errors, not everything.

Environment:

  • Kubernetes version: 1.25
  • kubernetes-replicator version: v2.7.3

Additional context
We would like to pass the log level via arguments in the Deployment YAML.

The replicator supports a --log-level flag, which you should be able to use.

flag.StringVar(&f.LogLevel, "log-level", "info", "Log level (trace, debug, info, warn, error)")
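
For context, the replicator's log output format (time="..." level=info msg="...") suggests it uses logrus; below is a minimal sketch of how such a flag is typically wired up. This is an assumption about the internals for illustration, not the replicator's actual code:

package main

import (
	"flag"

	log "github.com/sirupsen/logrus"
)

func main() {
	// Declare the flag the same way the replicator does.
	var logLevel string
	flag.StringVar(&logLevel, "log-level", "info", "Log level (trace, debug, info, warn, error)")
	flag.Parse()

	// ParseLevel maps the string to a severity threshold;
	// entries below the threshold are suppressed entirely.
	level, err := log.ParseLevel(logLevel)
	if err != nil {
		log.Fatalf("invalid log level %q: %v", logLevel, err)
	}
	log.SetLevel(level)

	log.Info("suppressed when the threshold is warn")
	log.Warn("emitted at threshold warn or lower")
}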

The Helm chart has an args value which can be used to pass custom command-line arguments to the Deployment. Something like the following in your Helm values should do the trick:

args:
  - --log-level=warn
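
If you manage the Deployment manifest directly instead of through Helm, the same flag can be set on the container spec; a sketch (the container name here is illustrative):

spec:
  template:
    spec:
      containers:
        - name: kubernetes-replicator
          args:
            - --log-level=warn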

OK, I was looking for something like this. I will test it. Thank you.

Although I set log-level to warn, the log entries still show level=error. Is this expected? Also, when I set it to trace, I see debug, info, and error logs. Please help us understand this behaviour.

[screenshots of replicator log output; the error field is cropped]

Hi, please check whether this behavior is expected. Thank you.

I'm unsure if I understand the issue. Setting a log level customarily results in logging items with a severity equal to OR HIGHER than the configured log level, so a log level of warn will also log anything more severe than warnings (like errors and panics).
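
Concretely, assuming the usual trace < debug < info < warn < error ordering, the flag acts as a minimum severity threshold:

--log-level=warn    emits warn and error; suppresses info, debug, trace
--log-level=error   emits only error (and anything more severe)
--log-level=trace   emits everything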

Yes, we are on the same page. I have set log-level to warn, but I still see log entries with severity error. I just want to understand whether this behavior is expected, and whether it is anything we need to worry about.

Alright, then. 🙂 Regarding the actual logs with error severity: No, in normal operation, the replicator should not log any errors. What is the actual error that is being reported? I'm referring to the error field that is cropped in your screenshot.

Below is a log snippet with log-level=warn:

time="2023-04-19T06:13:15Z" level=error msg="could not replicate object to other namespaces" error="Replicated xyz-streams/local-abc-configuration to 7 out of 17 namespaces: 10 errors occurred:\n\t* Failed to replicate Secret xyzl-streams/local-abc-configuration -> kube-public: Failed to update secret kube-public/local-abc-configuration: secrets \"local-kafka-configuration\" already exists: Failed to update secret kube-public/local-abc-configuration: secrets \"local-kafka-configuration\" already exists\n\t ...........................

The same error message appears for the other secrets.

Hello @martin-helmich, were you able to reproduce this issue on your end? I hope the error snippet I shared helped.

Sorry to bother you, @martin-helmich. Did you get a chance to look at the log snippet? Do you have anything to comment or advise on this behavior?