Direct buffer leaks on large log messages
fred01 opened this issue · 5 comments
GELF appender 1.12.1
Usually I log messages of various sizes, often larger than 8k. As time goes by, I see constant growth of memory allocated for direct buffers, and it is never freed. I made a patch to limit the size of logged messages to 8k. Here is a Prometheus graph showing the size of allocated direct buffers. You can see that the yellow line, where message size is limited to 8k, is steady, while the green one, with unlimited message size, is growing constantly.
logstash-gelf uses pooled buffers, and memory consumption grows over time (with the number of threads that write GELF messages through a logger).
Duplicate of #169.
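The growth-with-thread-count behavior described above can be reproduced in isolation. Below is a minimal sketch: the `ThreadLocal` pool here is a stand-in for an appender's per-thread buffer caching, not logstash-gelf's actual code, and the 256 KiB buffer size is an arbitrary assumption for illustration. It measures direct memory via the JDK's `BufferPoolMXBean`:

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class DirectBufferGrowth {

    // Stand-in for per-thread pooled buffers: each logging thread pins
    // its own direct buffer for as long as the thread (and its
    // ThreadLocal map) is reachable.
    static final ThreadLocal<ByteBuffer> POOL =
            ThreadLocal.withInitial(() -> ByteBuffer.allocateDirect(256 * 1024));

    // Reads the JVM's "direct" buffer pool usage (same figure that
    // typically backs Prometheus direct-memory metrics).
    static long directBytes() {
        for (BufferPoolMXBean pool :
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            if ("direct".equals(pool.getName())) {
                return pool.getMemoryUsed();
            }
        }
        return -1;
    }

    public static void main(String[] args) throws InterruptedException {
        long before = directBytes();

        List<Thread> threads = new ArrayList<>();
        for (int i = 0; i < 50; i++) {
            // Each thread touches its pooled buffer once, as a logger
            // thread would when formatting a message.
            Thread t = new Thread(() -> POOL.get().put((byte) 1));
            threads.add(t);
            t.start();
        }
        for (Thread t : threads) {
            t.join();
        }

        long after = directBytes();
        // Direct memory grows roughly linearly with thread count and is
        // only reclaimed after GC collects the threads' ThreadLocal values.
        System.out.println("direct bytes before=" + before + " after=" + after);
        System.out.println("grew=" + (after - before >= 50L * 256 * 1024));
    }
}
```

With 400-500 threads and buffers sized for large messages, the same mechanism scales into the multi-gigabyte range reported here.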
Some missing info:
After three or four days of using the GELF appender, the direct buffer size grows to 8.5 GB and is never freed. Is this expected behavior?
I use UDP to send messages.
<if condition='!property("gralylog").equals("skip")'>
<then>
<appender name="GELF" class="biz.paluch.logging.gelf.logback.GelfLogbackAppender">
<host>udp:${gralylog}</host>
<port>12201</port>
<version>1.1</version>
<facility>avanpos-fn-service</facility>
<extractStackTrace>true</extractStackTrace>
<filterStackTrace>true</filterStackTrace>
<mdcProfiling>true</mdcProfiling>
<timestampPattern>yyyy-MM-dd HH:mm:ss,SSSS</timestampPattern>
<maximumMessageSize>8192</maximumMessageSize>
<filter class="ch.qos.logback.classic.filter.ThresholdFilter">
<level>INFO</level>
</filter>
</appender>
</then>
</if>
It's somewhat expected, but 8.5 GB is a lot. How many threads do you have running? You can disable buffer pooling; see #169.
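For reference, my understanding from the logstash-gelf documentation is that pooling is controlled by a system property; the exact property name below is an assumption, so please verify it against the docs for your version:

```shell
# Assumed flag: setting the pooled buffer size to 0 disables
# logstash-gelf's buffer pooling (verify against your version's docs).
java -Dlogstash-gelf.buffer.size=0 -jar app.jar
```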
Will disabling pooling dramatically affect performance? These are production servers, so I need to be careful. Is disabling the pool enough, or do I need to set the buffer size to 0 as well?
The app has about 400-500 threads at peak load.
I know it's bad practice to log big messages, but I don't think 8.5 GB of unfreed memory for a log appender is expected, even with default settings. Maybe I'm wrong, sorry if so :).
Rolled out GELF with pooling disabled and saw no visible performance impact according to my metrics. Now the buffer memory gets freed over time, which seems like an acceptable result.
Although I still think it's not good for an appender to use this much memory, and it even sometimes drove the app to an OutOfMemoryError: Direct buffer memory, I've worked around it, so I'm closing this for now. Thank you for your attention.