Timestamp is from filebeat processing, not actual mail.log
popindavibe opened this issue · 2 comments
Hello,
I managed to set things up following the README file, but I realized today that the timestamp inserted in Elasticsearch is not the one from the mail.log file; it's from Filebeat's time of processing.
Is there something extra to do to prevent filebeat / logstash from overwriting the original timestamp?
You might look at https://github.com/whyscream/postfix-grok-patterns/blob/54618d75501d1e98c227f4d18b5b75891a459d65/ALTERNATIVE-INPUTS.md.
Are you sure the timestamp isn't correct?
Thanks, I didn't spot the extra 'timestamp' handling in the link you provided (compared to the README instructions of this repo).
I guess it should work; it appears to achieve the same thing as what I ended up doing in my 48-beats-postfix-prepare.conf (while being simpler):
filter {
if [postfix] {
grok {
match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST} %{DATA:program}(?:\[%{POSINT}\])?: %{GREEDYDATA:message}" }
overwrite => [ "message" ]
}
date {
match => [ "syslog_timestamp" , "MMM dd HH:mm:ss" , "MMM d HH:mm:ss" ]
remove_field => [ "syslog_timestamp" ]
}
}
}
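For anyone wanting to sanity-check what the filter above does, here is a rough Python sketch of the same two steps: a regex mimicking the grok pattern pulls the syslog timestamp out of a mail.log line, and strptime plays the role of the date filter. This is an illustration only, not how Logstash actually runs it; the regex and the sample log line are my own approximations, and note that syslog timestamps carry no year, so one has to be supplied (the date filter assumes the current year).

```python
import re
from datetime import datetime

# Approximation of the grok pattern:
# %{SYSLOGTIMESTAMP} %{SYSLOGHOST} %{DATA:program}(?:\[%{POSINT}\])?: %{GREEDYDATA}
SYSLOG_RE = re.compile(
    r"^(?P<syslog_timestamp>\w{3}\s+\d{1,2} \d{2}:\d{2}:\d{2}) "
    r"(?P<host>\S+) "
    r"(?P<program>[\w./-]+)(?:\[(?P<pid>\d+)\])?: "
    r"(?P<message>.*)$"
)

def parse_mail_log_line(line, year):
    """Return (timestamp, program, message) extracted from a mail.log line."""
    m = SYSLOG_RE.match(line)
    if m is None:
        return None
    # "%b %d" handles both "Mar 15" and "Mar  5" (single-digit day),
    # matching the filter's "MMM dd" / "MMM d" pair.
    ts = datetime.strptime(m.group("syslog_timestamp"), "%b %d %H:%M:%S")
    return ts.replace(year=year), m.group("program"), m.group("message")
```

With a sample line such as `Mar  5 10:11:12 mailhost postfix/smtpd[1234]: connect from unknown`, this yields the original log timestamp rather than the processing time, which is exactly what the date filter writes into `@timestamp`.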
Closing this.