Errors when using jdbc_default_timezone with logstash 6.6.0
Crazybus opened this issue · 15 comments
- Version: 6.6.0
- Operating System: CentOS Linux release 7.6.1810 (official docker image)
- Description:
When using jdbc_default_timezone with logstash-input-jdbc in logstash 6.6.0, an error occurs originating from the sequel library (version 5.16.0). The error is not present in 6.5.4 with the same config, which uses sequel 5.14.0. The sequel gem is not currently pinned to a specific version in the gemspec. I'm currently trying to figure out how to test downgrading this gem to see whether the issue goes away.
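One way to test the downgrade (a sketch, untested; the path assumes the official docker image, and I haven't confirmed Logstash honors a pin added this way) is to pin sequel in Logstash's bundled Gemfile:

```ruby
# /usr/share/logstash/Gemfile (path assumed from the official docker image):
# add an explicit pin for the sequel gem, e.g.
gem "sequel", "= 5.14.0"
```

After editing, re-running bin/logstash-plugin update should make Bundler re-resolve the bundle against the pinned version (again, hedged: I haven't verified this on 6.6.0).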
- Config File:
input {
  jdbc {
    jdbc_driver_library => "./mysql-connector-java-5.1.36.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:32771/test"
    jdbc_user => "root"
    jdbc_password => "my-secret-pw"
    schedule => "* * * * *"
    statement => 'SELECT t1 FROM test_timestamp'
    jdbc_default_timezone => "Etc/UTC"
  }
}
output {
  stdout {}
}
- Steps to Reproduce:
Start a mysql docker container (note: -p 3306 publishes the container's port 3306 on a random host port; the 32771 in jdbc_connection_string is the host port Docker assigned, which can be found with docker port some-mysql)
docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -p 3306 -d mysql:5.7
Attach into the mysql docker container
docker exec -ti some-mysql sh
Connect to mysql and add some test data with a timestamp
mysql -pmy-secret-pw
create database test;
use test;
CREATE TABLE IF NOT EXISTS test_timestamp (
t1 TIMESTAMP
);
SET time_zone='+00:00';
INSERT INTO test_timestamp
VALUES('2008-01-01 00:00:01');
SELECT t1 FROM test_timestamp;
Run logstash 6.6.0 with the same config as shown above. jdbc_default_timezone needs to be set to trigger the error.
The error:
[2019-01-31T14:53:00,295][INFO ][logstash.inputs.jdbc ] (0.002092s) SELECT t1 FROM test_timestamp
{ 2032 rufus-scheduler intercepted an error:
2032 job:
2032 Rufus::Scheduler::CronJob "* * * * *" {}
2032 error:
2032 2032
2032 Sequel::InvalidValue
2032 NoMethodError: undefined method `utc_total_offset_rational' for #<TZInfo::OffsetTimezonePeriod:0x55b909f8>
Did you mean? utc_total_offset
2032 /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sequel-5.16.0/lib/sequel/extensions/named_timezones.rb:71:in `convert_input_datetime_other'
2032 /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sequel-5.16.0/lib/sequel/timezones.rb:119:in `convert_input_datetime_no_offset'
2032 /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sequel-5.16.0/lib/sequel/timezones.rb:156:in `convert_input_timestamp'
2032 /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sequel-5.16.0/lib/sequel/timezones.rb:77:in `convert_timestamp'
2032 /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sequel-5.16.0/lib/sequel/database/misc.rb:298:in `to_application_timestamp'
2032 /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sequel-5.16.0/lib/sequel/adapters/jdbc.rb:686:in `timestamp_convert'
2032 org/jruby/RubyMethod.java:119:in `call'
2032 /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sequel-5.16.0/lib/sequel/adapters/jdbc.rb:793:in `block in process_result_set'
2032 org/jruby/RubyArray.java:1734:in `each'
2032 /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sequel-5.16.0/lib/sequel/adapters/jdbc.rb:792:in `process_result_set'
2032 /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sequel-5.16.0/lib/sequel/adapters/jdbc.rb:726:in `block in fetch_rows'
2032 /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sequel-5.16.0/lib/sequel/adapters/jdbc.rb:244:in `block in execute'
2032 /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sequel-5.16.0/lib/sequel/adapters/jdbc.rb:675:in `statement'
2032 /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sequel-5.16.0/lib/sequel/adapters/jdbc.rb:239:in `block in execute'
2032 /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sequel-5.16.0/lib/sequel/connection_pool/threaded.rb:92:in `hold'
2032 /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sequel-5.16.0/lib/sequel/database/connecting.rb:270:in `synchronize'
2032 /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sequel-5.16.0/lib/sequel/adapters/jdbc.rb:238:in `execute'
2032 /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sequel-5.16.0/lib/sequel/dataset/actions.rb:1083:in `execute'
2032 /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sequel-5.16.0/lib/sequel/adapters/jdbc.rb:726:in `fetch_rows'
2032 /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/sequel-5.16.0/lib/sequel/dataset/actions.rb:151:in `each'
2032 /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.13/lib/logstash/plugin_mixins/jdbc/jdbc.rb:257:in `perform_query'
2032 /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.13/lib/logstash/plugin_mixins/jdbc/jdbc.rb:229:in `execute_statement'
2032 /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.13/lib/logstash/inputs/jdbc.rb:277:in `execute_query'
2032 /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.13/lib/logstash/inputs/jdbc.rb:258:in `block in run'
2032 /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/jobs.rb:234:in `do_call'
2032 /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/jobs.rb:258:in `do_trigger'
2032 /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/jobs.rb:300:in `block in start_work_thread'
2032 /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/jobs.rb:299:in `block in start_work_thread'
2032 org/jruby/RubyKernel.java:1292:in `loop'
2032 /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/jobs.rb:289:in `block in start_work_thread'
2032 tz:
2032 ENV['TZ']:
2032 Time.now: 2019-01-31 14:53:00 UTC
2032 scheduler:
2032 object_id: 2012
2032 opts:
2032 {:max_work_threads=>1}
2032 frequency: 0.3
2032 scheduler_lock: #<Rufus::Scheduler::NullLock:0xfbb6a5b>
2032 trigger_lock: #<Rufus::Scheduler::NullLock:0x1100c8>
2032 uptime: 560.731469 (9m20s731)
2032 down?: false
2032 threads: 2
2032 thread: #<Thread:0x16aa34ac>
2032 thread_key: rufus_scheduler_2012
2032 work_threads: 1
2032 active: 1
2032 vacant: 0
2032 max_work_threads: 1
2032 mutexes: {}
2032 jobs: 1
2032 at_jobs: 0
2032 in_jobs: 0
2032 every_jobs: 0
2032 interval_jobs: 0
2032 cron_jobs: 1
2032 running_jobs: 1
2032 work_queue: 0
} 2032 .
So it is actually the update of tzinfo from 1.2.5 to 2.0.0 that broke this functionality. From their changelog:
> The TZInfo::TimezonePeriod#utc_total_offset_rational method has been removed. Equivalent information can be obtained using the TZInfo::TimezonePeriod#observed_utc_offset method.
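For context, the removed method returned the UTC offset as a Rational fraction of a day (the form DateTime uses for offsets), while tzinfo 2's observed_utc_offset returns an integer number of seconds. The equivalent conversion is simple (a minimal sketch in plain Ruby, without tzinfo itself):

```ruby
# tzinfo 1.x: period.utc_total_offset_rational gave the UTC offset as a
# fraction of a day, which is what DateTime expects for its offsets.
# tzinfo 2.x: period.observed_utc_offset gives the offset in whole seconds;
# dividing by the number of seconds in a day recovers the old value.
def offset_rational(offset_seconds)
  Rational(offset_seconds, 86_400)
end

offset_rational(0)      # Etc/UTC   => (0/1)
offset_rational(19_800) # UTC+05:30 => (11/48)
```

This is essentially the conversion the upstream sequel fix performs on tzinfo 2's integer offsets.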
The issue filed upstream (jeremyevans/sequel#1596) has been resolved and is awaiting a release (which appears to happen on a monthly cadence).
Got hit with this, and it hurt. Thanks for opening and tracking.
Is it fixed in logstash 7.0.0?
Edit: I used logstash 7 and the issue persists
@datablackhole we are still waiting on an upstream dependency to release its next version, which they do on a monthly cadence. When they do, we'll release an updated version of this plugin that links to it.
+1
+1
+1
I also need this. This makes all my datetimes wrong if I remove the jdbc_default_timezone. Anyone found a workaround for this?
+1
This seems to be fixed in logstash 6.7.0, which uses sequel 5.18.0. Thanks :)
+1
+1
To all writing +1, can you confirm this is fixed in 6.7.0?
> To all writing +1, can you confirm this is fixed in 6.7.0?

Just upgraded to 6.7.0 and the error described in the original post is no longer present. The output also seems to be correct for me.