OpenJP2ImageReader memory leak
opaetzel opened this issue · 3 comments
Hello,
there is a memory leak even when only getting the info of a JPEG 2000 file. Here is a minimal example of this behaviour; I am using version 0.2.8 from Maven and openjpeg built from the current master (51f097e):
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;

import javax.imageio.ImageIO;
import javax.imageio.ImageReader;
import javax.imageio.stream.ImageInputStream;

public class TestMemoryLeak {

    public static void main(String[] args) throws InterruptedException, IOException {
        ImageReader reader = ImageIO.getImageReadersByFormatName("JPEG2000").next();
        for (int i = 0; i < 1000; i++) {
            try (InputStream inputStream = Files.newInputStream(Paths.get("example.jp2"))) {
                ImageInputStream iis = ImageIO.createImageInputStream(inputStream);
                reader.setInput(iis);
                reader.getWidth(0);
                reader.dispose();
            } catch (FileNotFoundException e1) {
                e1.printStackTrace();
            }
        }
        System.out.println("done. sleeping");
        Thread.sleep(10000);
    }
}
I read the image 1000 times to make the leak more visible.
I have tracked down part of the problem: after the getInfo() call, the streamWrapper is re-initialized and its memory is never freed. To handle this, and to close the stream object as well, I overrode the dispose() method in OpenJp2ImageReader:
@Override
public void dispose() {
    if (this.streamWrapper != null) {
        try {
            this.streamWrapper.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    if (this.stream != null) {
        try {
            this.stream.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    this.info = null;
}
This already helps a lot (from a 2 GB leak down to a 200 MB leak with my example image), but there is still a leak. Do you have any hints about where I might have overlooked some native memory that needs to be cleaned up? I would then create a pull request with a complete fix for the problem.
It is also possible that I am using the library in the wrong way. If so, please tell me how to do it right :-)
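For comparison, here is a variant of the reproducer that also closes the ImageInputStream each iteration and disposes the reader only once, after the loop. This is just a sketch based on standard javax.imageio semantics (the class name is made up); it does not by itself free the native allocations discussed below.

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;

import javax.imageio.ImageIO;
import javax.imageio.ImageReader;
import javax.imageio.stream.ImageInputStream;

public class TestMemoryLeakVariant {

    public static void main(String[] args) throws IOException, InterruptedException {
        ImageReader reader = ImageIO.getImageReadersByFormatName("JPEG2000").next();
        try {
            for (int i = 0; i < 1000; i++) {
                // Both streams are closed at the end of each iteration.
                try (InputStream inputStream = Files.newInputStream(Paths.get("example.jp2"));
                     ImageInputStream iis = ImageIO.createImageInputStream(inputStream)) {
                    reader.setInput(iis);
                    reader.getWidth(0);
                }
            }
        } finally {
            // dispose() releases the reader's resources; a disposed reader should not be reused.
            reader.dispose();
        }
        System.out.println("done. sleeping");
        Thread.sleep(10000);
    }
}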
I'll have a look and will also try to get Valgrind set up with a small test; I'll get back to you soon.
Thank you for all the research and the great writeup!
I think your implementation of dispose() was actually all that was needed!
I just monitored a run of 10e6 iterations of the test in your issue with memleak from the bcc eBPF tools.
Before the fix, the memory leak is striking:
$ sudo /usr/sbin/bpfcc-memleak -p 21180
[20:42:19] Top 10 stacks with outstanding allocations:
# ....
23632200 bytes in 196935 allocations from stack
opj_stream_create+0x19 [libopenjp2.so.2.3.0]
206773944320 bytes in 197195 allocations from stack
opj_stream_create+0x2d [libopenjp2.so.2.3.0]
After inserting your dispose() implementation, memory usage stays constant and the probes no longer report any unfreed allocations (apart from false positives from the JVM itself).
I also ran the test in a JVM profiler; there doesn't seem to be a memory leak on the JVM side either, and the heap memory always stays within the limits defined by -Xmx.
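As a side note, heap usage can also be logged from inside the test itself with the standard java.lang.management API. The helper below is only a hypothetical sketch (not part of the issue) and observes the Java heap only, not the native allocations that memleak tracks.

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

// Hypothetical helper: call HeapSnapshot.print(i) inside the test loop to log
// heap usage and confirm it stays below the -Xmx limit. Native memory held by
// libopenjp2 is invisible here and still needs an external tool such as memleak.
public class HeapSnapshot {

    public static void print(int iteration) {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        System.out.printf("iteration=%d heapUsed=%d bytes, heapMax=%d bytes%n",
                iteration, heap.getUsed(), heap.getMax());
    }
}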
Could you try running memleak on your setup where you are still seeing leaks and report on your findings?
I just tested with the current master (0.2.9-SNAPSHOT), which includes the fixes, and I don't see any memory leak.
Thanks for looking at this and sorting it out!