robcowart/synesis_lite_snort

[event][host] problem

HyperDevil opened this issue · 8 comments

Logstash error:

[2018-10-26T20:14:26,568][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"snort-1.0.0-2018.10.26", :_type=>"doc", :_routing=>nil}, #LogStash::Event:0x749d4fc9], :response=>{"index"=>{"_index"=>"snort-1.0.0-2018.10.26", "_type"=>"doc", "_id"=>"lB-VsWYB9Ov0hgQBpt4F", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [event.host]", "caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"Can't get text on a START_OBJECT at 1:608"}}}}}
[2018-10-26T20:14:26,573][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"snort-1.0.0-2018.10.26", :_type=>"doc", :_routing=>nil}, #LogStash::Event:0x688bf2c6], :response=>{"index"=>{"_index"=>"snort-1.0.0-2018.10.26", "_type"=>"doc", "_id"=>"lR-VsWYB9Ov0hgQBpt4H", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [event.host]", "caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"Can't get text on a START_OBJECT at 1:608"}}}}}
[2018-10-26T20:14:41,307][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"snort-1.0.0-2018.10.26", :_type=>"doc", :_routing=>nil}, #LogStash::Event:0x5da37f7e], :response=>{"index"=>{"_index"=>"snort-1.0.0-2018.10.26", "_type"=>"doc", "_id"=>"lh-VsWYB9Ov0hgQB396Y", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [event.host]", "caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"Can't get text on a START_OBJECT at 1:608"}}}}}

snort alert:

10/26-18:12:37.582706 [] [1:100000:1] HTTP Web Viewing [] [Classification: Potentially Bad Traffic] [Priority: 2] {TCP} x.x.x.x:58756 -> x.x.x.x:80

Someone over in robcowart/synesis_lite_suricata#1 already found the right fix.

People using pfSense can use this grok pattern:

"%{DATESTAMP:[snort_timestamp]}%{SPACE},%{INT:[gid]},%{INT:[sid]},%{INT:[rev]},%{QUOTEDSTRING:[signature]},%{WORD:[proto]},%{IPORHOST:[src_ip]},%{INT:[src_port]},%{IPORHOST:[dest_ip]},%{INT:[dest_port]},%{INT:[xref]},%{CISCO_REASON:[class]},%{INT:[priority]}"

11/06/18-23:10:42.252414 ,1,2008578,6,"ET SCAN Sipvicious Scan",UDP,x.x.x.x,5063,x.x.x.x,5060,13918,Attempted Information Leak,2

Hi,
I also got the exact same error as HyperDevil. Do you recommend a specific Snort output configuration (e.g. unified2, tcpdump format, etc.), since the Snort output format will probably affect the grok patterns?
Thanks for an amazing project.

No. To fix the error from HyperDevil's first post, you have to change this line in 20_filter_suricata.logstash.conf from:

"[host]" => "[event][host]"
to
"[host][hostname]" => "[event][host]"

If you have pfSense and use Filebeat to ship the Snort logs, you need to add this grok pattern

"%{DATESTAMP:[snort_timestamp]}%{SPACE},%{INT:[gid]},%{INT:[sid]},%{INT:[rev]},%{QUOTEDSTRING:[signature]},%{WORD:[proto]},%{IPORHOST:[src_ip]},%{INT:[src_port]},%{IPORHOST:[dest_ip]},%{INT:[dest_port]},%{INT:[xref]},%{CISCO_REASON:[class]},%{INT:[priority]}"

to parse this format
11/06/18-23:10:42.252414 ,1,2008578,6,"ET SCAN Sipvicious Scan",UDP,x.x.x.x,5063,x.x.x.x,5060,13918,Attempted Information Leak,2
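
A minimal sketch of how that pattern could be wired into the Logstash pipeline as a grok filter (the filter placement and the source field name "message" are assumptions; the pattern itself is the one quoted above):

    filter {
      grok {
        # Parses the CSV-style Snort alert line that pfSense/Filebeat ships.
        match => { "message" => "%{DATESTAMP:[snort_timestamp]}%{SPACE},%{INT:[gid]},%{INT:[sid]},%{INT:[rev]},%{QUOTEDSTRING:[signature]},%{WORD:[proto]},%{IPORHOST:[src_ip]},%{INT:[src_port]},%{IPORHOST:[dest_ip]},%{INT:[dest_port]},%{INT:[xref]},%{CISCO_REASON:[class]},%{INT:[priority]}" }
      }
    }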


Hi,
the problem is that you need to update your Logstash filter config to match the latest version of Filebeat (if you are using 6.4.2 and up), so that the fields Filebeat sends line up with what the filter config expects.
Good luck.
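
For reference, newer Filebeat versions report host as an object rather than a plain string, which is why renaming the whole [host] field into the keyword-mapped [event][host] fails; the shape is roughly the following (the values are illustrative, the field names match the template further down this thread):

    "host": {
      "hostname": "sensor01",
      "name": "sensor01",
      "id": "0123456789abcdef",
      "architecture": "x86_64",
      "containerized": false,
      "os": {
        "platform": "freebsd",
        "name": "FreeBSD",
        "family": "freebsd",
        "version": "11.2",
        "kernel": "11.2-RELEASE"
      }
    }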

I have the same issue in logstash-plain.log

when I follow the link below:

https://github.com/robcowart/synesis_lite_snort

[2019-02-19T17:18:15,169][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"snort-1.0.0-2019.02.19", :_type=>"logs", :_routing=>nil}, 2019-02-19T06:12:57.313Z %{host} %{message}], :response=>{"index"=>{"_index"=>"snort-1.0.0-2019.02.19", "_type"=>"logs", "_id"=>"t_NnBGkBJ7r60F19pLY4", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [event.host] of type [keyword]", "caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"Can't get text on a START_OBJECT at 1:508"}}}}}

The [event][host] mapping in synlite_snort.template.json needs to be modified to be compatible with later versions of Filebeat and Logstash. I changed it to:
...
"host": {
"type": "object",
"properties": {
"hostname": {
"type": "keyword"
},
"os": {
"type": "object",
"properties": {
"kernel": {
"type": "keyword"
},
"codename": {
"type": "keyword"
},
"name": {
"type": "keyword"
},
"family": {
"type": "keyword"
},
"version": {
"type": "keyword"
},
"platform": {
"type": "keyword"
}
}
},
"containerized": {
"type": "boolean"
},
"name": {
"type": "keyword"
},
"id": {
"type": "keyword"
},
"architecture": {
"type": "keyword"
}
}
},
...
Then PUT the modified template manually using curl, with the query parameter include_type_name=true.
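
A rough sketch of that PUT, assuming Elasticsearch is on localhost:9200, the template is installed under the name synlite_snort, and the modified file is synlite_snort.template.json (all three names are assumptions):

    # Legacy index template PUT; include_type_name=true is needed because the
    # template still contains a mapping type.
    curl -X PUT "http://localhost:9200/_template/synlite_snort?include_type_name=true" \
      -H "Content-Type: application/json" \
      -d @synlite_snort.template.json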
It worked on ELK version 7.6.2. However, the dashboard.json isn't usable anymore, since newer versions of Kibana import saved objects as ndjson instead. I had to create the visualizations myself.
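
For anyone who does end up with an ndjson export of the dashboards, Kibana 7.x can load it through the saved objects import API; a rough sketch (the Kibana address and the export.ndjson file name are assumptions):

    # Import an ndjson saved-objects export into Kibana 7.x.
    curl -X POST "http://localhost:5601/api/saved_objects/_import" \
      -H "kbn-xsrf: true" \
      --form file=@export.ndjson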

Closing all issues as this project has been archived.