pfelk/docker

Logstash PipelineAction ERROR on startup

8270647 opened this issue · 3 comments

Continuing to see Logstash failure at startup when docker-compose set to one node

```
logstash     | [ERROR] 2020-10-18 19:10:35.633 [Converge PipelineAction::Create<main>] agent - Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<main>, action_result: false", :backtrace=>nil}
logstash     | [INFO ] 2020-10-18 19:10:35.661 [Ruby-0-Thread-9: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.6.2-java/lib/logstash/outputs/elasticsearch/common.rb:40] elasticsearch - Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
logstash     | [INFO ] 2020-10-18 19:10:35.699 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
logstash     | [INFO ] 2020-10-18 19:10:40.721 [LogStash::Runner] runner - Logstash shut down.
logstash exited with code 1
```
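For context, a `PipelineAction::Create<main>` failure like the one above usually means Logstash could not load its pipeline configuration or reach Elasticsearch at startup. A minimal single-node compose sketch is below; the image tags, service names, and volume paths are illustrative assumptions, not the actual pfelk files.

```yaml
# Minimal single-node sketch (illustrative only; not the pfelk compose file).
version: "3"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
    environment:
      # Lets a lone Elasticsearch node start without trying to form a cluster.
      - discovery.type=single-node
      - ES_JAVA_OPTS=-Xms1g -Xmx1g
  logstash:
    image: docker.elastic.co/logstash/logstash:7.9.2
    depends_on:
      - elasticsearch
    volumes:
      # The pipeline config mounted here must parse cleanly, otherwise
      # the "main" pipeline fails to create and the container exits.
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
```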

Thanks! Will be updating the Docker instance shortly. However, the current setup was tested as depicted in the instructions, and I'm unable to replicate the error.

@8270647 - Try again, but use this yml file instead.

Couldn't get it to start. I can close this issue and wait for a future release; I'll go with the scripted install for now. Thanks for looking!