pinojs/pino

`pino.transport` with `pipeline` may lose logs

Opened this issue · 10 comments

const pino = require("pino").default;
const log = pino(
  {},
  pino.transport({
    pipeline: [
      {
        target: "pino-syslog",
        options: {
          enablePipelining: true,
        },
      },
    ],
  }),
);

const jj = async () => {
  log.info("bbbb");
};
async function test() {
  await jj();
  log.info("cccccccccccc");
}
test();

I only added pino-syslog output, plus a console.log in its transform for debugging.
source code: https://github.com/pinojs/pino-syslog/blob/91669fbeb108ae1bb7298d289c2af19001a1fe31/lib/transport.js#L26-L29

      transform (obj, enc, cb) {
        const msg = processMessage(obj)
        console.log("[syslog] msg %s", msg)
        cb(null, msg)
      }

Here is a screenshot of my test; it shows that the `cccccccccccc` log output is sometimes lost.

Here are my versions:

 node: '20.10.0'
pino: '8.19.0'
os: '14.2.1 (23C71)'

Workaround:

// If this code is added at the end of the script above, the `cccccccccccc` log is never lost
process.once("beforeExit", () => {
  setTimeout(() => {}, 1_000);
});

Can you reproduce without pino-syslog? I think the bug is there and not in pino proper.

More specifically, pino-syslog does not implement the close() callback (https://github.com/pinojs/pino-abstract-transport?tab=readme-ov-file#buildfn-opts--stream) to tear down the destination stream properly. As a result, whatever is "in flight" there will be lost.
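For reference, this is roughly the shape the linked pino-abstract-transport docs describe (a sketch, not pino-syslog's actual code; the SonicBoom destination and `opts.destination` are placeholders):

const build = require("pino-abstract-transport");
const SonicBoom = require("sonic-boom");
const { once } = require("node:events");

// Sketch: a transport that tears down its destination via the `close` option,
// so pino can wait for in-flight writes before the worker thread exits.
module.exports = async function (opts) {
  const destination = new SonicBoom({ dest: opts.destination || 1, sync: false });
  await once(destination, "ready");

  return build(
    async function (source) {
      for await (const obj of source) {
        const writable = destination.write(JSON.stringify(obj) + "\n");
        if (!writable) await once(destination, "drain");
      }
    },
    {
      // Called on shutdown: flush and close the destination before resolving.
      async close (err) {
        destination.end();
        await once(destination, "close");
      },
    }
  );
};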

@mcollina

I tried adding a close callback function, but it did not prevent or "delay" the thread's (Worker's) exit.

In other words, the thread had already closed before I could do anything.

https://github.com/pinojs/thread-stream/blob/main/lib/worker.js#L147-L153

You linked the logic that catches unhandledRejections... how is that related?

> You linked the logic that catches unhandledRejections... how is that related?

Sorry, the link above is incorrect, and in the text above I mistakenly referred to the main process as a thread.

Please add reproductions.

Thank you for your prompt reply.

pino/lib/transport.js, lines 60 to 67 in c109804:

function autoEnd (stream) {
  stream.ref()
  stream.flushSync()
  stream.end()
  stream.once('close', function () {
    stream.unref()
  })
}

My confusion stems from the fact that once Node.js enters the exit event, even if we add a ref to the worker stream, the main process will not wait for any active resources and will terminate immediately after the autoEnd function completes. Is there a mechanism within Pino itself that can prompt the main process to wait?

The transport functionality I’m aiming to implement is as follows:

  1. To collect logs passed down from upstream and, depending on their level, write them to different files (similar to how pm2 does it).
  2. Then, to synchronize these logs with a syslog server, which is why I need pino-syslog.

Like #318.
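For the per-level file part, a rough sketch (file paths and levels are placeholders, not taken from this issue) using `targets` with a `level` per target:

const pino = require("pino");

// Sketch: route records to different files by level. `level` is the minimum
// level for a target; `dedupe: true` sends each record only to the target
// with the highest matching level, so errors don't also land in app.log.
const transport = pino.transport({
  targets: [
    { target: "pino/file", level: "info", options: { destination: "./app.log" } },
    { target: "pino/file", level: "error", options: { destination: "./error.log" } },
  ],
  dedupe: true,
});

const log = pino({}, transport);
log.info("goes to app.log");
log.error("goes to error.log");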

> My confusion stems from the fact that once Node.js enters the exit event, even if we add a ref to the worker stream, the main process will not wait for any active resources and will terminate immediately after the autoEnd function completes. Is there a mechanism within Pino itself that can prompt the main process to wait?

Yes, pino has its logic there to handle that case, while pino-syslog doesn't use it.

> Yes, pino has its logic there to handle that case, while pino-syslog doesn't use it.

I’m seeking guidance. How can I ‘delay’ the exit process?


> My confusion stems from the fact that once Node.js enters the exit event, even if we add a ref to the worker stream, the main process will not wait for any active resources and will terminate immediately after the autoEnd function completes. Is there a mechanism within Pino itself that can prompt the main process to wait?

> Yes, pino has its logic there to handle that case, while pino-syslog doesn't use it.

Add `autoEnd: false`?

if (workerOpts.autoEnd !== false) {
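If I'm reading pino/lib/transport.js right, `workerOpts` there comes from the `worker` field of the `pino.transport()` options, so opting out would look roughly like this (a sketch, untested). Note that with `autoEnd: false` it becomes the caller's job to end the stream and wait for it to close:

const { once } = require("node:events");
const pino = require("pino");

// Sketch: disable pino's exit-time autoEnd and flush/close the transport
// explicitly, so nothing is left "in flight" when the process exits.
const transport = pino.transport({
  pipeline: [{ target: "pino-syslog" }],
  worker: { autoEnd: false },
});
const log = pino({}, transport);

async function main() {
  log.info("cccccccccccc");
  transport.end();                 // same ThreadStream#end call autoEnd makes
  await once(transport, "close");  // keep the event loop alive until the worker is done
}

main();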