corelight/json-streaming-logs

create ephemeral file with datestamp instead of count for rollover

gimmic opened this issue · 3 comments

I'm running into an issue where the process watching the logs is not smart enough to understand that the file was renamed, and instead re-reads the entire file when it is renamed from .log to .1.log.

It might be better to initially create the file as json_streaming_conn_{ts}.log and, on rollover, create a new file with the new timestamp. After N files, the oldest timestamps are deleted (in order).

This way the newest file is always the most recent timestamp, and the files themselves are never actually renamed but rather just stop being written to.
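The scheme described above can be sketched roughly as follows. This is a minimal illustration of the idea, not the plugin's actual implementation; the directory, prefix, and retention count are all assumptions for the example.

```python
import glob
import os
import time

LOG_DIR = "logs"                  # assumed output directory
PREFIX = "json_streaming_conn"    # assumed log name prefix
MAX_FILES = 5                     # hypothetical retention limit (N files)

def prune_old_logs() -> None:
    """Delete the oldest timestamped files beyond the retention limit.

    Lexicographic sort works here because epoch-second timestamps are
    fixed-width, so the oldest files sort first.
    """
    files = sorted(glob.glob(os.path.join(LOG_DIR, f"{PREFIX}_*.log")))
    for old in files[: max(0, len(files) - (MAX_FILES - 1))]:
        os.remove(old)

def open_rotated_log():
    """On rollover, prune old files and open a brand-new timestamped file.

    No file is ever renamed, so an inode-watching log shipper never
    confuses the new file with the old one.
    """
    os.makedirs(LOG_DIR, exist_ok=True)
    prune_old_logs()
    ts = int(time.time())
    return open(os.path.join(LOG_DIR, f"{PREFIX}_{ts}.log"), "a")
```

Because each rollover only creates and deletes files, the newest timestamp always identifies the live log, and older files simply stop receiving writes until they age out.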

I hope that makes sense. I'll try to hack it in and if it works properly will submit a pull request.

Wow, I'm coming to this bug late. If you're still getting email about this, what log shipper were you using that got confused by the rename? From what I understand, most of these tools track the inode number, which doesn't change when file rotation occurs.

Hey Seth,
Yeah, this is old. I fixed it with some Zeek scripting, but at this point it feels like another lifetime ago. I think the Corelight plugin now basically does what I was hacking in at the time. (Elastic) Filebeat (or Logstash... or syslog-ng?) was what I was using, if I recall correctly.

Either way, I know it isn't a problem today, and I've since changed environments so I can't check easily.

Thanks!