robcowart/synesis_lite_suricata

Unable to index more than 8 GB of Suricata logs

vivekshwarup opened this issue · 6 comments

Dear Team,

We are unable to index more than 8 GB of network data; the errors are below. Please suggest a fix.
WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400,

yellow open suricata_stats-1.1.0-2019.11.19 E56hJBsWT7SLwwdBe9fAxQ 3 1 54 0 1.3mb 1.3mb
green open .opendistro-alerting-alerts IGu4a-sYR9K5qsSc1MonzQ 1 0 0 0 283b 283b
green open .kibana_1 pNwqmVMpT2auYw4awigqTw 1 0 912 53 425.2kb 425.2kb
green open .opendistro-alerting-alert-history-2019.11.18-000002 YEElqcjvSgeplAIg3SCPUw 1 0 0 0 283b 283b
yellow open suricata-1.1.0-2019.11.19 409tfWehQvynPZKOHEpj3Q 3 1 591230 0 156.2mb 156.2mb
yellow open suricata-1.1.0-2019.11.18 ZY4zVSvhRx2BR8kYl_kCGg 3 1 2945 0 1.3mb 1.3mb
yellow open suricata_stats-1.1.0-2019.11.18 cExiCSn0TxCeY90qTJZYMA 3 1 4 0 150.3kb 150.3kb

Can you share the full log message?

[2019-11-19T16:29:31,368][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://127.0.0.1:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2019-11-19T16:29:33,849][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>48, "name"=>"[synlite_suricata]<beats", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/logstash-input-beats-6.0.3-java/lib/logstash/inputs/beats.rb:204:in `run'"}], ["LogStash::Filters::Mutate", {"remove_field"=>["[metadata]"], "id"=>"bf572865074c03e2b6cdae88fd91e150be92113146940a399fecda825e13bb6a"}]=>[{"thread_id"=>46, "name"=>"[synlite_suricata]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>47, "name"=>"[synlite_suricata]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
[2019-11-19T16:29:36,375][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://127.0.0.1:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2019-11-19T16:29:36,702][ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>16}
[2019-11-19T16:29:37,136][ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>16}
[2019-11-19T16:29:38,935][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>48, "name"=>"[synlite_suricata]<beats", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/logstash-input-beats-6.0.3-java/lib/logstash/inputs/beats.rb:204:in `run'"}], ["LogStash::Filters::Mutate", {"remove_field"=>["[metadata]"], "id"=>"bf572865074c03e2b6cdae88fd91e150be92113146940a399fecda825e13bb6a"}]=>[{"thread_id"=>46, "name"=>"[synlite_suricata]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>47, "name"=>"[synlite_suricata]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
[2019-11-19T16:29:41,384][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://127.0.0.1:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2019-11-19T16:29:44,009][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>48, "name"=>"[synlite_suricata]<beats", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/logstash-input-beats-6.0.3-java/lib/logstash/inputs/beats.rb:204:in `run'"}], ["LogStash::Filters::Mutate", {"remove_field"=>["[metadata]"], "id"=>"bf572865074c03e2b6cdae88fd91e150be92113146940a399fecda825e13bb6a"}]=>[{"thread_id"=>46, "name"=>"[synlite_suricata]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>47, "name"=>"[synlite_suricata]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
[2019-11-19T16:29:46,394][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://127.0.0.1:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2019-11-19T16:29:49,065][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>48, "name"=>"[synlite_suricata]<beats", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/logstash-input-beats-6.0.3-java/lib/logstash/inputs/beats.rb:204:in `run'"}], ["LogStash::Filters::Mutate", {"remove_field"=>["[metadata]"], "id"=>"bf572865074c03e2b6cdae88fd91e150be92113146940a399fecda825e13bb6a"}]=>[{"thread_id"=>46, "name"=>"[synlite_suricata]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>47, "name"=>"[synlite_suricata]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
[2019-11-19T16:29:51,420][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://127.0.0.1:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2019-11-19T16:29:52,716][ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>32}
[2019-11-19T16:29:53,154][ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>32}
[2019-11-19T16:29:54,140][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>48, "name"=>"[synlite_suricata]<beats", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/logstash-input-beats-6.0.3-java/lib/logstash/inputs/beats.rb:204:in `run'"}], ["LogStash::Filters::Mutate", {"remove_field"=>["[metadata]"], "id"=>"bf572865074c03e2b6cdae88fd91e150be92113146940a399fecda825e13bb6a"}]=>[{"thread_id"=>46, "name"=>"[synlite_suricata]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>47, "name"=>"[synlite_suricata]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
[2019-11-19T16:29:56,430][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://127.0.0.1:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2019-11-19T16:29:59,225][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>48, "name"=>"[synlite_suricata]<beats", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/logstash-input-beats-6.0.3-java/lib/logstash/inputs/beats.rb:204:in `run'"}], ["LogStash::Filters::Mutate", {"remove_field"=>["[metadata]"], "id"=>"bf572865074c03e2b6cdae88fd91e150be92113146940a399fecda825e13bb6a"}]=>[{"thread_id"=>46, "name"=>"[synlite_suricata]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>47, "name"=>"[synlite_suricata]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
[2019-11-19T16:30:01,442][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://127.0.0.1:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2019-11-19T16:30:04,259][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>48, "name"=>"[synlite_suricata]<beats", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/logstash-input-beats-6.0.3-java/lib/logstash/inputs/beats.rb:204:in `run'"}], ["LogStash::Filters::Mutate", {"remove_field"=>["[metadata]"], "id"=>"bf572865074c03e2b6cdae88fd91e150be92113146940a399fecda825e13bb6a"}]=>[{"thread_id"=>46, "name"=>"[synlite_suricata]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>47, "name"=>"[synlite_suricata]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
[2019-11-19T16:30:06,458][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://127.0.0.1:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2019-11-19T16:30:09,314][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>48, "name"=>"[synlite_suricata]<beats", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/logstash-input-beats-6.0.3-java/lib/logstash/inputs/beats.rb:204:in `run'"}], ["LogStash::Filters::Mutate", {"remove_field"=>["[metadata]"], "id"=>"bf572865074c03e2b6cdae88fd91e150be92113146940a399fecda825e13bb6a"}]=>[{"thread_id"=>46, "name"=>"[synlite_suricata]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>47, "name"=>"[synlite_suricata]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
[2019-11-19T16:30:11,469][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://127.0.0.1:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2019-11-19T16:30:14,372][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>48, "name"=>"[synlite_suricata]<beats", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/logstash-input-beats-6.0.3-java/lib/logstash/inputs/beats.rb:204:in `run'"}], ["LogStash::Filters::Mutate", {"remove_field"=>["[metadata]"], "id"=>"bf572865074c03e2b6cdae88fd91e150be92113146940a399fecda825e13bb6a"}]=>[{"thread_id"=>46, "name"=>"[synlite_suricata]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>47, "name"=>"[synlite_suricata]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
[2019-11-19T16:30:16,476][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://127.0.0.1:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2019-11-19T16:30:19,405][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>48, "name"=>"[synlite_suricata]<beats", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/logstash-input-beats-6.0.3-java/lib/logstash/inputs/beats.rb:204:in `run'"}], ["LogStash::Filters::Mutate", {"remove_field"=>["[metadata]"], "id"=>"bf572865074c03e2b6cdae88fd91e150be92113146940a399fecda825e13bb6a"}]=>[{"thread_id"=>46, "name"=>"[synlite_suricata]>worker0", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}, {"thread_id"=>47, "name"=>"[synlite_suricata]>worker1", "current_call"=>"[...]/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}

[2019-11-19T17:01:03,234][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"suricata-1.1.0-2019.11.19", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0x2932ec93>], :response=>{"index"=>{"_index"=>"suricata-1.1.0-2019.11.19", "_type"=>"_doc", "_id"=>"9JJug24B_YcKlTLnHOge", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [dns.flags] of type [long] in document with id '9JJug24B_YcKlTLnHOge'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"For input string: \"81a0\""}}}}}
[2019-11-19T17:01:03,247][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"suricata-1.1.0-2019.11.19", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0x45b8ba7f>], :response=>{"index"=>{"_index"=>"suricata-1.1.0-2019.11.19", "_type"=>"_doc", "_id"=>"9ZJug24B_YcKlTLnHOge", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [dns.flags] of type [long] in document with id '9ZJug24B_YcKlTLnHOge'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"For input string: \"81a0\""}}}}}
[2019-11-19T17:01:03,252][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"suricata-1.1.0-2019.11.19", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0x14c09e72>], :response=>{"index"=>{"_index"=>"suricata-1.1.0-2019.11.19", "_type"=>"_doc", "_id"=>"9pJug24B_YcKlTLnHOge", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [dns.flags] of type [long] in document with id '9pJug24B_YcKlTLnHOge'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"For input string: \"81a0\""}}}}}
[2019-11-19T17:01:03,256][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"suricata-1.1.0-2019.11.19", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0x4c7887e2>], :response=>{"index"=>{"_index"=>"suricata-1.1.0-2019.11.19", "_type"=>"_doc", "_id"=>"95Jug24B_YcKlTLnHOge", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [dns.flags] of type [long] in document with id '95Jug24B_YcKlTLnHOge'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"For input string: \"81a0\""}}}}}
[2019-11-19T17:01:03,264][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"suricata-1.1.0-2019.11.19", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0x2a89574d>], :response=>{"index"=>{"_index"=>"suricata-1.1.0-2019.11.19", "_type"=>"_doc", "_id"=>"-JJug24B_YcKlTLnHOge", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [dns.flags] of type [long] in document with id '-JJug24B_YcKlTLnHOge'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"For input string: \"81a0\""}}}}}
[2019-11-19T17:01:03,266][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"suricata-1.1.0-2019.11.19", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0xd1c7646>], :response=>{"index"=>{"_index"=>"suricata-1.1.0-2019.11.19", "_type"=>"_doc", "_id"=>"-ZJug24B_YcKlTLnHOge", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [dns.flags] of type [long] in document with id '-ZJug24B_YcKlTLnHOge'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"For input string: \"81a0\""}}}}}

In the first log, it looks like your Elasticsearch node is unavailable: connections to http://127.0.0.1:9200 are being refused, so Logstash cannot deliver anything until the node comes back up.
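
A quick way to confirm that, and to find out why the node went down (the service name and port below assume a default package install):

curl -s http://127.0.0.1:9200/
sudo systemctl status elasticsearch
sudo journalctl -u elasticsearch --since "1 hour ago"

If the process died, the journal or the Elasticsearch log will usually show the cause, e.g. an out-of-memory kill or a disk watermark problem.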

The second is a field type conflict: dns.flags is mapped as a long, but the events contain hex strings such as "81a0". I will need to change the mapping for this field in the index template from long to keyword.
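
If you need a stop-gap before the bundled template is updated, you can layer a higher-priority template over it. This is only a sketch, not the repo's actual template: the template name and index pattern are assumptions based on the index names above, and on Elasticsearch 6.x the properties block must additionally be wrapped in a _doc type.

curl -XPUT 'http://127.0.0.1:9200/_template/suricata_dns_flags_fix' -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["suricata-1.1.0-*"],
  "order": 100,
  "mappings": {
    "properties": {
      "dns": {
        "properties": {
          "flags": { "type": "keyword" }
        }
      }
    }
  }
}'

Template changes only apply to indices created afterwards, so the fix takes effect with the next day's suricata-1.1.0-* index (or after you delete/reindex today's).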

Yesterday it was working and indexed up to 8 GB of data, but since this morning it has been unable to index anything.
curl localhost:9200/_cat/indices
green open .opendistro-alerting-alerts IGu4a-sYR9K5qsSc1MonzQ 1 0 0 0 283b 283b
green open .kibana_1 pNwqmVMpT2auYw4awigqTw 1 0 912 53 425.2kb 425.2kb
green open .opendistro-alerting-alert-history-2019.11.18-000002 YEElqcjvSgeplAIg3SCPUw 1 0 0 0 283b 283b
green open suricata-1.1.0-2019.11.19 X-iCI27JR5qrJ5L59k4_8w 30 0 270282 0 228.4mb 228.4mb
green open suricata_stats-1.1.0-2019.11.19 pQ8tl0mgR2mHWWd3tvDDYQ 30 0 24 0 904.5kb 904.5kb

Issue resolved! I added pipeline workers along with a pipeline batch size. This is on Ubuntu 18.04 64-bit, 4 cores, 16 GB RAM.

vi /etc/logstash/pipelines.yml

- pipeline.id: synlite_suricata
  path.config: "/etc/logstash/synlite_suricata/conf.d/*.conf"
  pipeline.workers: 4
  pipeline.batch.size: 1000
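
After editing pipelines.yml, restart Logstash so the new settings take effect (assuming the default service name):

sudo systemctl restart logstash

Larger batches mean fewer but bigger bulk requests to Elasticsearch, at the cost of more memory per in-flight batch, so keep an eye on the Logstash heap with 16 GB RAM shared across the whole stack.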