pinojs/pino-elasticsearch

writing to elastic isn't working

eran10 opened this issue · 8 comments

Hi, I would like to send my pino logs to Elasticsearch. We are using Elasticsearch 6 with the latest versions of pino and pino-elasticsearch, and I configure pino as below:

const pino = require('pino');
const pinoElastic = require('pino-elasticsearch');

const streamToElastic = pinoElastic({
    index: `log-test-%{DATE}`,
    type: 'info',
    consistency: 'one',
    node: 'http://myuser:mypass@localhost:9200',
    'trace-level': 'info',
    'es-version': 6,
    'bulk-size': 1,
    ecs: true
});

and then

const logger = pino(pinoOptions, streamToElastic);
logger.info('test');

The app runs fine, but no logs are printed to the console and no logs are sent to Elasticsearch, with no errors at all.
Am I missing something?

You should use https://github.com/pinojs/pino-multi-stream to print to stdout as well.

As for the reason logs are not popping up in Elastic... I don't know. cc @delvedor

Thanks, I will check.

@eran10, have there been any developments regarding this issue?

I can share an update: we have been working on improving the ECS support, and if you are using Pino v6 you can now use @elastic/ecs-pino-format instead of enabling the ecs option here.
@mcollina we should probably deprecate it :)

I found that if I write more logs, it sends some of them.
So I think it might be caused by flushBytes.

If I set flushBytes to 10, all logs appear.

This might be an issue where the last few logs cannot be flushed before the Node.js app terminates.

I found that if I write more logs, it sends some of them.
So I think it might be caused by flushBytes.
If I set flushBytes to 10, all logs appear.

This is the correct behavior :) By default we collect 5 MB of logs before sending them, to avoid overloading Elasticsearch. You can easily change that limit with the --flush-bytes option.
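The batching behavior described here can be sketched with a plain class: records accumulate until their total size reaches the configured byte threshold, then the whole batch is handed off at once. The `BulkBuffer` name and the `send` callback below are illustrative, not pino-elasticsearch's internals; a tiny threshold makes the effect visible:

```javascript
// Sketch of flush-bytes batching: nothing is sent until the buffered
// records reach `flushBytes` bytes, then the whole batch goes at once.
class BulkBuffer {
  constructor(flushBytes, send) {
    this.flushBytes = flushBytes;
    this.send = send; // called with the array of buffered records
    this.records = [];
    this.bytes = 0;
  }
  write(record) {
    this.records.push(record);
    this.bytes += Buffer.byteLength(record);
    if (this.bytes >= this.flushBytes) this.flush();
  }
  flush() {
    if (this.records.length === 0) return;
    this.send(this.records);
    this.records = [];
    this.bytes = 0;
  }
}

const batches = [];
const buf = new BulkBuffer(10, (records) => batches.push(records));

buf.write('{"a":1}'); // 7 bytes, below the threshold: stays buffered
buf.write('{"b":2}'); // total reaches 14 bytes: the batch is sent
```

This is why a single small `logger.info('test')` never shows up against a 5 MB default: it simply sits in the buffer.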

This might be an issue where the last few logs cannot be flushed before the Node.js app terminates.

It should not: as soon as the process ends, the bulk indexer does a final flush.

Anyhow, in the next version of the bulk indexer there will be a flush timeout option as well :)

This might be an issue where the last few logs cannot be flushed before the Node.js app terminates.

It should not: as soon as the process ends, the bulk indexer does a final flush.

I didn't see this behavior.
How do I trigger it?
Or could you point me to the code related to this behavior?
Thanks

If you run the main process and pipe it into this transport, it works automatically: when the stream from the main process ends, the transport ends too.

node example.js | ./cli.js

If you pass this library directly to the pino options and then kill the process, there is no guarantee that all the logs will be sent, as the process will be destroyed.
As I was saying, the next version will support a flush interval.
We can also think about a force flush method.