logstash-plugins/logstash-input-s3

S3 authentication error kills entire logstash pipeline


Logstash Version: logstash-6.7.1-1.noarch (RPM)

If I put bogus values in for the authentication tokens on the S3 input and attempt to run Logstash, the entire pipeline fails. See the (truncated) logs below.

Apr 29 00:35:06 esx-devbox logstash: [2019-04-29T00:35:06,233][ERROR][logstash.pipeline ] Error registering plugin {:pipeline_id=>"main", :plugin=>"<LogStash::Inputs::S3 bucket=>\"asdfsadfsd\", access_key_id=>\"asdfasdfsdf\", backup_to_bucket=>\"sdfsdsdfdf\", codec=><LogStash::Codecs::CloudTrail id=>\"cloudtrail_a5a6ba3a-5237-4663-affc-35f57f5a9aaa\",

Apr 29 00:35:17 esx-devbox logstash: [2019-04-29T00:35:17,730][ERROR][logstash.pipeline ] Pipeline aborted due to error {:pipeline_id=>"main", :exception=>Aws::S3::Errors::Forbidden, :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/aws-sdk-core-2.11.236/lib/seahorse/client/plugins/raise_response_errors.rb:15:in"
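For reference, here is a minimal sketch of the kind of config that triggers this; the bucket names, credentials, and the second input are placeholders standing in for what we actually run in our "main" pipeline:

```
input {
  s3 {
    bucket            => "example-bucket"        # placeholder
    access_key_id     => "AKIAINVALIDEXAMPLE"    # deliberately bad credentials
    secret_access_key => "not-a-real-secret"
    backup_to_bucket  => "example-backup-bucket" # placeholder
    codec             => cloudtrail
  }
  # Anything else sharing the same pipeline, e.g.:
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```

When the s3 input fails to register with Aws::S3::Errors::Forbidden, the whole pipeline aborts, so the beats input (and everything else in "main") stops along with it.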

To me, it would make more sense to log the error than to abort the entire pipeline. We have many other things running in our "main" pipeline and don't want a simple auth error to bring the whole thing crashing down. What if AWS had a short outage?

Thanks for your time,
Nick