Possible memory leak - datasource proxy
Hello, today I came across a possible memory leak in the datasource proxy, inside TracingListenerStrategy.
After 250k requests to the server, the strategy instance accumulates nearly 900 MB of memory and never releases it.
I am using the latest version of the datasource-proxy Spring Boot starter.
```xml
<dependency>
    <groupId>com.github.gavlyukovskiy</groupId>
    <artifactId>datasource-proxy-spring-boot-starter</artifactId>
    <version>1.5.3</version>
</dependency>
```
Configuration properties:
```yaml
decorator:
  datasource:
    datasource-proxy:
      query:
        enable-logging: false
      slow-query:
        threshold: 1
      count-query: true
```
When using p6spy, the problem does not seem to be present.
Hi @tomorrow10, thank you for reporting the issue.
The only difference between p6spy and datasource-proxy here is that they use different keys in the ConcurrentHashMap that tracks the spans produced by connections, statements and result sets. For p6spy those keys are ConnectionInformation, StatementInformation and ResultSetInformation, none of which override equals()/hashCode(). The datasource-proxy implementation, on the other hand, uses the actual Statement and ResultSet implementations coming from the driver (not a great idea, in hindsight), whose equals()/hashCode() might depend on mutable state, so entries might get stuck inside the ConcurrentHashMap.
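For illustration, here is a minimal, self-contained sketch of how a key whose equals()/hashCode() depend on mutable state can get stranded in a ConcurrentHashMap. The MutableKeyStatement class below is purely hypothetical, not taken from any JDBC driver:

```java
import java.util.Objects;
import java.util.concurrent.ConcurrentHashMap;

public class StuckKeyDemo {

    // Hypothetical statement-like class whose equals()/hashCode()
    // depend on mutable state; purely illustrative.
    static class MutableKeyStatement {
        private Object delegate = new Object();

        void close() {
            delegate = null; // closing clears the delegate, which changes hashCode()/equals()
        }

        @Override
        public boolean equals(Object o) {
            return o instanceof MutableKeyStatement
                    && Objects.equals(delegate, ((MutableKeyStatement) o).delegate);
        }

        @Override
        public int hashCode() {
            return Objects.hashCode(delegate);
        }
    }

    public static void main(String[] args) {
        ConcurrentHashMap<MutableKeyStatement, String> spans = new ConcurrentHashMap<>();
        MutableKeyStatement statement = new MutableKeyStatement();

        spans.put(statement, "statement span"); // stored under the original hashCode()
        statement.close();                      // hashCode() changes after insertion
        spans.remove(statement);                // new hash no longer matches the stored entry

        System.out.println(spans.size());       // prints 1 -> the entry is leaked
    }
}
```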
I didn't find evidence of such mutable equals()/hashCode() behavior in ojdbc6 12.1.0.2, but I'm probably looking in the wrong place or at the wrong version.
It would be really helpful if you could share the exact version of the Oracle JDBC driver you're using and its configuration, so I can reproduce it on my end.
Hi @gavlyukovskiy, I have found it with the UCP connection pool and the ojdbc8 driver:
```xml
<!--oracle dependencies-->
<dependency>
    <groupId>com.oracle.jdbc</groupId>
    <artifactId>ucp</artifactId>
    <version>12.2.0.1</version>
</dependency>
<dependency>
    <groupId>com.oracle.jdbc</groupId>
    <artifactId>ojdbc8</artifactId>
    <version>12.2.0.1</version>
</dependency>
```
@tomorrow10 I tried to configure JDBC using OracleDataSource both directly and with PoolDataSource on top of it, but I still cannot reproduce the issue.
If you can provide a sample application, that would be helpful.
Hi @gavlyukovskiy,
We ran into the memory leak issue as well. I have captured the details but have not been able to come up with a fix yet; I think the problem could be with datasource-proxy (I may need more time to understand it). I will be working on helping to address the leak, but here is what is happening: a mix of multi-layered proxies (introduced by some JDBC pools) and real classes.
Components (please find the complete project at https://github.com/reta/sleuth-spring-boot-2-datasource-leak):
- Spring Boot 2.1.4
- Spring Cloud Sleuth 2.1.0
- Tomcat JDBC Pool 9.0.17
- datasource-proxy-spring-boot-starter 1.5.3
- datasource-proxy 1.5 (1.5.1 has the same behavior)
At trace time, the following sequence of events happens (I am omitting some that are not really relevant):
- TracingListenerStrategy::afterQuery: statementKey is a JDK proxy created by Tomcat JDBC (StatementFacade$StatementProxy)
- TracingListenerStrategy::beforeResultSetNext: statementKey is the real class (depends on the JDBC driver)
- TracingListenerStrategy::afterConnectionClose: nestedStatements is a collection of JDK proxy objects, whereas nestedResultSetsByStatement is a map keyed by the real class. Because of that, the keys do not match and the nestedResultSetsByStatement::remove operation removes nothing; entries only accumulate. Over time the map grows a lot and causes the leak (see the sketch below).
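A distilled sketch of that mismatch, greatly simplified: the real proxy is Tomcat's StatementFacade$StatementProxy and the real map lives inside TracingListenerStrategy, whereas FakeStatement and the identity-based invocation handler below are purely illustrative:

```java
import java.lang.reflect.Proxy;
import java.util.HashMap;
import java.util.Map;

public class KeyMismatchSketch {

    // Stand-in for a JDBC Statement; the real code deals with java.sql.Statement.
    interface FakeStatement {
        void execute();
    }

    public static void main(String[] args) {
        FakeStatement real = () -> { }; // the "real class" coming from the driver

        // Pool-style JDK dynamic proxy around the real statement. Its equals()/hashCode()
        // are identity-based here, so it never matches the wrapped object as a map key.
        FakeStatement proxy = (FakeStatement) Proxy.newProxyInstance(
                FakeStatement.class.getClassLoader(),
                new Class<?>[] {FakeStatement.class},
                (p, method, methodArgs) -> {
                    switch (method.getName()) {
                        case "hashCode": return System.identityHashCode(p);
                        case "equals":   return p == methodArgs[0];
                        default:         return method.invoke(real, methodArgs);
                    }
                });

        Map<Object, String> nestedResultSetsByStatement = new HashMap<>();

        // One callback stores the span under the real statement...
        nestedResultSetsByStatement.put(real, "result set span");

        // ...another callback tries to clean up using the proxy: different key, nothing removed.
        nestedResultSetsByStatement.remove(proxy);

        System.out.println(nestedResultSetsByStatement.size()); // prints 1 -> the entry accumulates
    }
}
```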
It is 100% reproducible. Sadly, I don't have a clear solution for the issue yet, but hopefully we can hammer it out together; I will keep the issue updated with my discoveries.
Thank you.
Hi @reta. Thank you very much for the sample project you provided. I've found that the problem occurs with this driver because when a Statement is closed, the delegate inside the proxy is set to null, so the proxy no longer matches its key in the Map.
Initially I created this logic of tracking statements and result sets to avoid span leaks when resources weren't closed, or were closed in a different order, for example closing only the Connection without closing any of the statements or result sets it produced.
I refactored the tracing strategy to use the connection as the key to the whole tree. With this driver, a statement span may still be leaked temporarily until the connection is closed, but once that happens the whole tree becomes garbage.
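Roughly, such a connection-rooted structure could look like the following (a simplified sketch under assumed names like SpanTree and openConnections, not the actual code in the PR):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified sketch of a connection-rooted span tree; names and structure are
// illustrative, not the actual TracingListenerStrategy implementation.
class ConnectionRootedSpans {

    // Per-connection subtree: statement (and result set) spans hang off it.
    static class SpanTree {
        final Object connectionSpan;
        final Map<Object, Object> statementSpans = new ConcurrentHashMap<>();

        SpanTree(Object connectionSpan) {
            this.connectionSpan = connectionSpan;
        }
    }

    // The only long-lived map is keyed by the connection.
    private final Map<Object, SpanTree> openConnections = new ConcurrentHashMap<>();

    void afterGetConnection(Object connectionKey, Object connectionSpan) {
        openConnections.put(connectionKey, new SpanTree(connectionSpan));
    }

    void afterQuery(Object connectionKey, Object statementKey, Object statementSpan) {
        SpanTree tree = openConnections.get(connectionKey);
        if (tree != null) {
            tree.statementSpans.put(statementKey, statementSpan);
        }
    }

    void afterConnectionClose(Object connectionKey) {
        // Removing the single root entry releases the whole subtree, including
        // statement spans whose keys no longer match (e.g. a proxy whose
        // delegate was nulled on close), so nothing accumulates indefinitely.
        openConnections.remove(connectionKey);
    }
}
```

The key point is that there is only one long-lived entry per connection, so any statement or result set spans whose keys stop matching are still released together with their connection.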
I would like you to review my PR to fix this issue: #32
Thanks a lot, @gavlyukovskiy, looking!
My apologies, @gavlyukovskiy, would you mind cutting a release with this fix? It would be very helpful, thanks a lot!
@reta version 1.5.5 (release 1.5.4 was made from the wrong branch) is released and will be available in Maven Central shortly.
Thank you very much!