yokawasa/fluent-plugin-azure-loganalytics

Error configuring buffer with copy output

Closed this issue · 7 comments

Hello,

Thanks for your work on integrating fluentd with Azure Log Analytics. I'm trying to configure the output to go to both Azure Log Analytics (while adjusting the buffer settings) and stdout (for debugging purposes), but shipping to Azure Log Analytics fails with this configuration:

<match system.**>
  @type copy

  <store>
    @type azure-loganalytics

    customer_id XXX
    shared_key XXX
    log_type onedrive

    <buffer time>
      timekey 1m
      timekey_wait 1m
    </buffer>
  </store>
  <store>
    @type stdout
  </store>
</match>

The error I'm getting is:

2019-08-29 15:31:02 +0000 [error]: #0 fluent/log.rb:362:error: error on output thread error_class=NoMethodError error="undefined method `-' for nil:NilClass"
  2019-08-29 15:31:02 +0000 [error]: #0 plugin_helper/thread.rb:78:block in thread_create: /var/lib/gems/2.3.0/gems/fluentd-1.7.0/lib/fluent/plugin/buffer.rb:460:in `block in dequeue_chunk'
  2019-08-29 15:31:02 +0000 [error]: #0 plugin_helper/thread.rb:78:block in thread_create: /usr/lib/ruby/2.3.0/monitor.rb:214:in `mon_synchronize'
  2019-08-29 15:31:02 +0000 [error]: #0 plugin_helper/thread.rb:78:block in thread_create: /var/lib/gems/2.3.0/gems/fluentd-1.7.0/lib/fluent/plugin/buffer.rb:453:in `dequeue_chunk'
  2019-08-29 15:31:02 +0000 [error]: #0 plugin_helper/thread.rb:78:block in thread_create: /var/lib/gems/2.3.0/gems/fluentd-1.7.0/lib/fluent/plugin/output.rb:1086:in `try_flush'
  2019-08-29 15:31:02 +0000 [error]: #0 plugin_helper/thread.rb:78:block in thread_create: /var/lib/gems/2.3.0/gems/fluentd-1.7.0/lib/fluent/plugin/output.rb:1428:in `flush_thread_run'
  2019-08-29 15:31:02 +0000 [error]: #0 plugin_helper/thread.rb:78:block in thread_create: /var/lib/gems/2.3.0/gems/fluentd-1.7.0/lib/fluent/plugin/output.rb:458:in `block (2 levels) in start'
  2019-08-29 15:31:02 +0000 [error]: #0 plugin_helper/thread.rb:78:block in thread_create: /var/lib/gems/2.3.0/gems/fluentd-1.7.0/lib/fluent/plugin_helper/thread.rb:78:in `block in thread_create'
2019-08-29 15:31:02 +0000 [warn]: #0 fluent/log.rb:342:warn: thread exited by unexpected error plugin=Fluent::Plugin::AzureLogAnalyticsOutput title=:flush_thread_0 error_class=NoMethodError error="undefined method `-' for nil:NilClass"
2019-08-29 15:31:02 +0000 [error]: #0 fluent/log.rb:362:error: unexpected error error_class=NoMethodError error="undefined method `-' for nil:NilClass"
  2019-08-29 15:31:02 +0000 [error]: #0 fluent/supervisor.rb:551:block in run_worker: /var/lib/gems/2.3.0/gems/fluentd-1.7.0/lib/fluent/plugin/buffer.rb:460:in `block in dequeue_chunk'
  2019-08-29 15:31:02 +0000 [error]: #0 fluent/supervisor.rb:551:block in run_worker: /usr/lib/ruby/2.3.0/monitor.rb:214:in `mon_synchronize'
  2019-08-29 15:31:02 +0000 [error]: #0 fluent/supervisor.rb:551:block in run_worker: /var/lib/gems/2.3.0/gems/fluentd-1.7.0/lib/fluent/plugin/buffer.rb:453:in `dequeue_chunk'
  2019-08-29 15:31:02 +0000 [error]: #0 fluent/supervisor.rb:551:block in run_worker: /var/lib/gems/2.3.0/gems/fluentd-1.7.0/lib/fluent/plugin/output.rb:1086:in `try_flush'
  2019-08-29 15:31:02 +0000 [error]: #0 fluent/supervisor.rb:551:block in run_worker: /var/lib/gems/2.3.0/gems/fluentd-1.7.0/lib/fluent/plugin/output.rb:1428:in `flush_thread_run'
  2019-08-29 15:31:02 +0000 [error]: #0 fluent/supervisor.rb:551:block in run_worker: /var/lib/gems/2.3.0/gems/fluentd-1.7.0/lib/fluent/plugin/output.rb:458:in `block (2 levels) in start'
  2019-08-29 15:31:02 +0000 [error]: #0 fluent/supervisor.rb:551:block in run_worker: /var/lib/gems/2.3.0/gems/fluentd-1.7.0/lib/fluent/plugin_helper/thread.rb:78:in `block in thread_create'
2019-08-29 15:31:02 +0000 [error]: #0 fluent/log.rb:362:error: unexpected error error_class=NoMethodError error="undefined method `-' for nil:NilClass"
  2019-08-29 15:31:02 +0000 [error]: #0 fluent/supervisor.rb:732:main_process: suppressed same stacktrace

I'm quite convinced it's my configuration, as I can see logs shipping fine without the copy plugin ...

Actually, I just tried it without the copy plugin, and I think it's adding the <buffer> section that makes it generate that error.
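
For reference, the variant that reportedly ships fine, i.e. the same azure-loganalytics output without an explicit <buffer> section, would look roughly like this (a sketch, with credentials redacted as above):

<match system.**>
  @type azure-loganalytics

  customer_id XXX
  shared_key XXX
  log_type onedrive
</match>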

@marono I'll take a look and get back to you soon

@marono
The copy plugin is included in fluentd core. So you didn't get the error when you tried without the <buffer> section?

Unfortunately, I haven't been able to reproduce the problem in my environment.
Can you please share the following info:

  • fluentd version
  • local gem list (the one you get with gem list)
  • ruby version (ruby -v)

I've even tried:

<match system.**>
  @type azure-loganalytics

  customer_id XXX
  shared_key XXX
  log_type onedrive

  <buffer time>
    timekey 1m
    timekey_wait 1m
  </buffer>
</match>

with the same results. My debug info is:

fluentd 1.7.0

gem list
*** LOCAL GEMS ***

azure-loganalytics-datacollector-api (0.1.5)
bigdecimal (1.2.8)
concurrent-ruby (1.1.5)
cool.io (1.5.4)
did_you_mean (1.0.0)
dig_rb (1.0.1)
domain_name (0.5.20190701)
fluent-plugin-azure-loganalytics (0.4.1)
fluent-plugin-td (1.0.0)
fluentd (1.7.0, 0.12.43)
http-accept (1.7.0)
http-cookie (1.0.3)
http_parser.rb (0.6.0)
httpclient (2.8.3)
io-console (0.4.5)
json (1.8.3)
mime-types (3.2.2)
mime-types-data (3.2019.0331)
minitest (5.9.0)
msgpack (1.3.1)
net-telnet (0.1.1)
netrc (0.11.0)
power_assert (0.2.7)
psych (2.1.0)
rake (10.5.0)
rdoc (4.2.1)
rest-client (2.1.0)
serverengine (2.1.1)
sigdump (0.2.4)
string-scrub (0.0.5)
strptime (0.2.3)
td-client (1.0.7)
test-unit (3.1.7)
tzinfo (2.0.0)
tzinfo-data (1.2019.2)
unf (0.1.4)
unf_ext (0.0.7.6)
yajl-ruby (1.4.1)

ruby -v
ruby 2.3.3p222 (2016-11-21) [arm-linux-gnueabihf]

This is running on a Raspberry Pi. To test, I run fluentd, generate some events in syslog, and then hit Ctrl+C; at that point it tries to ship the data points and fails with the error above.

All I want to do with this configuration is decrease the shipping interval to 1 minute.
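
If the goal is just a one-minute flush rather than time-keyed chunks, one thing worth trying is the standard fluentd v1 buffer parameters flush_mode and flush_interval instead of timekey. A rough, untested sketch:

<match system.**>
  @type azure-loganalytics

  customer_id XXX
  shared_key XXX
  log_type onedrive

  <buffer>
    # flush buffered chunks on a fixed interval instead of cutting them by time key
    flush_mode interval
    flush_interval 1m
  </buffer>
</match>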

@marono thanks for the info! Even with the same ruby & fluentd versions, I couldn't reproduce the problem in my environment.

Can you please try this configuration and see if you still get the same error? The purpose is to clarify whether the problem is caused by fluent-plugin-azure-loganalytics.

<match system.**>
  @type file
  path /var/log/fluentd
  <buffer time>
    timekey 1m
    timekey_wait 1m
  </buffer>
</match>

Sorry for the big delay ...
I tried the file output config above and the issue doesn't reproduce. I also noticed the resulting folder contained lots of buffer.* files, which I assume is expected ...
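
Those buffer.* files are most likely the file buffer's staged chunks (the file output buffers to file chunks under the output path by default), so that should be expected. If they shouldn't sit next to the flushed logs, here is a sketch that moves them elsewhere via the standard buffer @type file and path parameters (the buffer path below is just an example):

<match system.**>
  @type file
  path /var/log/fluentd
  <buffer time>
    @type file
    # example location for chunk files, to keep them out of the output directory
    path /var/log/fluentd-buffer
    timekey 1m
    timekey_wait 1m
  </buffer>
</match>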

@marono
Sorry for not updating this. I don't think I can reproduce the issue, so let me close it. Please create a new issue if you observe the same problem again.