pinojs/pino

Pino stopped writing logs

Closed this issue · 8 comments

Hi!
Yesterday I encountered a problem. Our heap size (old_space) started increasing throughout the day.

Here are the metrics:
[screenshot, 2023-10-11: heap (old_space) usage climbing throughout the day]

There wasn't any unusual activity from the users (we are a b2b app, so the load on our servers is pretty predictable).

In the process of trying to find out what was happening, I found that only a single Kubernetes pod was having that problem. We have 3 in total, and only 1 was affected.

I decided to check the logs to see if anything there could help me, and realised that logs were missing:
[screenshot, 2023-10-11: log output with a gap where logs stop being written]

So logs were not being written at the time the heap was increasing. My guess is that pino was storing them in memory because it could not write them to stdout, but I'm not sure where to look at the moment. Could you help me understand where the problem could be?

The problem was a one-time thing; nothing like it has happened before or after (yet).

The app has been using pino as its logger for years; the only thing that changed is that we updated our dependency versions and the Node version a few weeks back.

pino-http configuration:

{
  pinoHttp: {
    genReqId: (req) => req['requestId'],
    level: logLevel ?? LogLevel.info,
    base: undefined,
    autoLogging: false,
    formatters: {
      level: (label) => ({ level: label.toString().toUpperCase() }),
    },
    redact: {
      remove: true,
      paths: REDACTED_FIELDS,
    },
  },
}

Node: 20.6.1 (alpine docker image)
We're using nestjs-pino version 3.4.0 and pino-http version 8.5.0.
Sorry if the problem turns out to be caused not by pino but by something else; I'm a bit lost about where to look.

If you need any more information about our configuration, app, or anything else, please tell me; I will provide everything to the best of my ability.

Thanks!

What's your pino configuration?

@mcollina I edited the issue to contain the exact config

Just to add to the question, we are logging a lot of objects and stuff we get from other services in the company (due to legal requirements we have to log basically everything we send/receive).
Could we encounter a problem like this because of the size, depth, or types of the fields/objects we are logging?

It seems you are logging to stdout on Kubernetes, which means your logs are picked up by the Kubernetes logging system, and that system can saturate. In that case, the logs will accumulate inside the process.

You can configure https://github.com/pinojs/sonic-boom to drop logs over a threshold.
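
For anyone reading along, a minimal sketch of that idea, assuming sonic-boom's maxLength option and its drop event (the 16 MiB threshold, the counter, and the import style are illustrative and may need adjusting for your sonic-boom version):

import pino from 'pino';
import SonicBoom from 'sonic-boom';

// Once the internal buffer holds more than maxLength bytes of unwritten data,
// further writes are dropped instead of piling up in the process heap.
const stdoutStream = new SonicBoom({
  fd: process.stdout.fd,
  maxLength: 16 * 1024 * 1024, // 16 MiB
});

// Optional: count drops so the saturation shows up in metrics
// instead of disappearing silently.
let droppedLines = 0;
stdoutStream.on('drop', () => {
  droppedLines++;
});

const logger = pino({ level: 'info' }, stdoutStream);
logger.info('logs now go through a bounded buffer');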

You are correct about Kubernetes.
I'll look into configuring sonic-boom, thanks for the suggestion.

FYI for anyone who encounters the issue, adding sonic-boom as a destination stream like this:

const mebibyteInBytes = 1024 * 1024;

// drop new log lines instead of buffering them once 16 MiB of unwritten data has accumulated
new SonicBoom({
    fd: process.stdout.fd,
    maxLength: 16 * mebibyteInBytes,
})

solved the issue of infinitely increasing RAM usage, although the underlying problem, where logs just stop being written and start piling up in memory, still exists. I have yet to figure out which part of the infrastructure or application is the culprit, but I will post an update when/if I find out.
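
In case it helps someone, wiring that stream into nestjs-pino can apparently be done by passing an [options, stream] tuple as pinoHttp; a rough sketch under that assumption, with our options abbreviated:

import { Module } from '@nestjs/common';
import { LoggerModule } from 'nestjs-pino';
import SonicBoom from 'sonic-boom';

@Module({
  imports: [
    LoggerModule.forRoot({
      pinoHttp: [
        {
          // ...the pino-http options from the top of the issue
          autoLogging: false,
        },
        // bounded destination stream: drops log lines instead of
        // buffering them once 16 MiB of unwritten data has accumulated
        new SonicBoom({
          fd: process.stdout.fd,
          maxLength: 16 * 1024 * 1024, // 16 MiB
        }),
      ],
    }),
  ],
})
export class AppModule {}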

We are also experiencing the exact same problem. We have the same stack: Kubernetes, Nest.

Also, after a random amount of time, memory consumption begins to increase sharply and logs disappear. Our DevOps team believes there are no oversaturation or infrastructure problems here.

This issue has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.