fluent/fluent-logger-java

Increased connections to the in_forward plugin server

shinji62 opened this issue · 5 comments

Hi

We are sending logs from the client to the in_forward input plugin. In
production, the number of connections to the in_forward port increases over
time, which results in huge memory usage in our applications.

In my initial investigation, we think that when a key expires in [1], the
sender instance for that key isn't closed properly, so when [2] recreates the
key, another connection is made. Is this a correct interpretation of what's
happening?

[1] https://github.com/fluent/fluent-logger-java/blob/master/src/main/java/org/fluentd/logger/FluentLoggerFactory.java#L36
[2] https://github.com/fluent/fluent-logger-java/blob/master/src/main/java/org/fluentd/logger/FluentLoggerFactory.java#L56
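
To make the suspected pattern concrete, here is a minimal sketch of a weakly-keyed logger cache. The class names (LoggerCacheSketch, SketchLogger) are made up for illustration and this is not the library's actual code, but it shows how an entry can be dropped by GC without its socket ever being closed:

import java.util.Map;
import java.util.WeakHashMap;

// Illustrative sketch only, not the library's actual code.
class LoggerCacheSketch {
    private final Map<String, SketchLogger> loggers = new WeakHashMap<String, SketchLogger>();

    synchronized SketchLogger getLogger(String tag, String host, int port) {
        String key = String.format("%s_%s_%d", tag, host, port);
        SketchLogger logger = loggers.get(key);
        if (logger == null) {
            // If a previous entry was collected but its sender was never closed,
            // this opens a second connection to in_forward: the leak described above.
            logger = new SketchLogger(host, port); // opens a new TCP connection
            loggers.put(key, logger);
        }
        return logger;
    }
}

class SketchLogger {
    SketchLogger(String host, int port) { /* connect to host:port here */ }
    void close() { /* close the underlying socket here */ }
}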

muga commented

Hi @shinji62,

Thank you for reporting and investigating this. I will discuss it with the other collaborators, so please wait.

Thanks,
Muga

@shinji62 ,

Thanks for the report. We want to reproduce the connection leak. Could you let me know the following?

  1. How many Loggers did you create? In other words, how often did you change the arguments to FluentLogger.getLogger()? (See the sketch after this list.)
  2. How many of those connections remained, and what was their status ("ESTABLISHED", "CLOSE_WAIT", etc.)?
  3. Did you use multiple FluentLoggerFactory instances?
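
For question 1, here is roughly what we mean by "creating Loggers". This is a hedged sketch using the public FluentLogger.getLogger(tag, host, port) API with placeholder tag, host, and port values:

import org.fluentd.logger.FluentLogger;

public class LoggerCountExample {
    public static void main(String[] args) {
        // Same arguments: the factory should hand back the same cached logger,
        // so only one connection to in_forward is expected.
        FluentLogger a = FluentLogger.getLogger("app", "localhost", 24224);
        FluentLogger b = FluentLogger.getLogger("app", "localhost", 24224);
        System.out.println("same instance: " + (a == b));

        // Different arguments: a separate logger, and a separate connection.
        FluentLogger c = FluentLogger.getLogger("other-app", "localhost", 24224);
        System.out.println("same instance: " + (a == c));
    }
}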

We wrap the FluentLogger in a logback appender like the following:

protected void doAppend(Map<String, Object> stuffToLog) {
    // Same tag, host, and port arguments on every append (24224 stands in for our port).
    FluentLogger logger = FluentLogger.getLogger("sametag", "samehost", 24224);

    logger.log("someLabel", stuffToLog);
}

So getLogger gets called with the same arguments on every log call.

In our runs, we see an increasing number of connections in the ESTABLISHED
state. In our app, which constantly receives traffic, the count just keeps
growing until we hit the system memory limit.

I couldn't reproduce this connection leak with the logback AppenderBase class, even while concurrently forcing a lot of GC.

Regarding the code, FluentLogger calls RawSocketSender#close() in its finalize(). So it looks like FluentLogger closes the TCP connection even when it's released from the WeakHashMap.
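
Schematically, the cleanup path described above looks like the following. This is a simplified sketch with made-up class names, not the actual source:

// Simplified sketch of the finalize()-based cleanup described above;
// the method bodies are illustrative, not copied from the real classes.
public class FluentLoggerSketch {
    private final RawSocketSenderSketch sender = new RawSocketSenderSketch();

    public void close() {
        sender.close(); // closes the TCP connection to in_forward
    }

    @Override
    protected void finalize() throws Throwable {
        try {
            // Runs only if/when the GC actually finalizes this object;
            // until then the connection can stay in the ESTABLISHED state.
            close();
        } finally {
            super.finalize();
        }
    }
}

class RawSocketSenderSketch {
    void close() { /* close the socket here */ }
}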

One question: did you use the logger in a multi-threaded environment?

Also, can you create a minimal piece of code that reproduces this issue? That would be very helpful for fixing it.
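
For example, a reproduction attempt could look like the sketch below, with tag, host, port, and label as placeholders, while the number of connections to in_forward is watched externally (e.g. with netstat):

import java.util.HashMap;
import java.util.Map;

import org.fluentd.logger.FluentLogger;

// Sketch of a reproduction attempt: several threads repeatedly fetch the logger
// with identical arguments and log through it.
public class ReproSketch {
    public static void main(String[] args) {
        for (int t = 0; t < 4; t++) {
            new Thread(() -> {
                for (int i = 0; i < 100_000; i++) {
                    FluentLogger logger = FluentLogger.getLogger("sametag", "localhost", 24224);
                    Map<String, Object> data = new HashMap<String, Object>();
                    data.put("message", "hello");
                    logger.log("someLabel", data);
                    if (i % 1000 == 0) {
                        System.gc(); // encourage collection of weakly-referenced cache entries
                    }
                }
            }).start();
        }
    }
}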

Thanks

I know it has been some time, but maybe this can help others.
Could it happen because logger.close() is never called?

protected void doAppend(Map<String, Object> stuffToLog) {
    FluentLogger logger = null;
    try {
        logger = FluentLogger.getLogger("sametag", "samehost", 24224);
        logger.log("someLabel", stuffToLog);
    } catch (Exception e) {
        // handle or report the failure
    } finally {
        if (logger != null) {
            logger.close();
        }
    }
}