twitter/hadoop-lzo

LzoCompressor.realloc() fails to free the old buffer via cleaner

Closed this issue · 0 comments

I noticed this in the log while running the unit tests:

WARNING: Couldn't realloc bytebuffer
java.lang.IllegalAccessException: Class com.hadoop.compression.lzo.LzoCompressor can not access a member of class java.nio.DirectByteBuffer with modifiers "public"
at sun.reflect.Reflection.ensureMemberAccess(Reflection.java:95)
at java.lang.reflect.AccessibleObject.slowCheckMemberAccess(AccessibleObject.java:261)
at java.lang.reflect.AccessibleObject.checkAccess(AccessibleObject.java:253)
at java.lang.reflect.Method.invoke(Method.java:594)
at com.hadoop.compression.lzo.LzoCompressor.realloc(LzoCompressor.java:249)
at com.hadoop.compression.lzo.LzoCompressor.init(LzoCompressor.java:264)
at com.hadoop.compression.lzo.LzoCompressor.reinit(LzoCompressor.java:216)
at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:105)
at com.hadoop.compression.lzo.TestLzoCodec.testCodecPoolChangeBufferSize(TestLzoCodec.java:57)

The old buffer may eventually be freed by the garbage collector, but at least the portion of code that frees the direct buffer explicitly via the cleaner isn't working. I see this on both Linux (OpenJDK) and Mac.

I suspect this is because we're not setting the accessible flag to true before invoking the method, which should be easy to remedy. The `cleaner()` method itself is public, but its declaring class `java.nio.DirectByteBuffer` is package-private, so the default reflective access check fails unless `setAccessible(true)` is called first.
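As a rough sketch of the fix (names are illustrative, not the actual hadoop-lzo code), the reflective cleaner invocation would look something like this, with `setAccessible(true)` added so the access check on the package-private `DirectByteBuffer` class is bypassed:

```java
import java.lang.reflect.Method;
import java.nio.ByteBuffer;

public class DirectBufferFreer {

  /**
   * Hypothetical helper: explicitly free a direct ByteBuffer via its
   * cleaner (pre-Java-9 idiom). Without setAccessible(true), the invoke
   * fails with IllegalAccessException because the declaring class
   * java.nio.DirectByteBuffer is package-private, even though cleaner()
   * itself is public.
   */
  public static void free(ByteBuffer buffer) {
    if (buffer == null || !buffer.isDirect()) {
      return;
    }
    try {
      Method cleanerMethod = buffer.getClass().getMethod("cleaner");
      // This is the missing step that causes the warning in the log.
      cleanerMethod.setAccessible(true);
      Object cleaner = cleanerMethod.invoke(buffer);
      if (cleaner != null) {
        Method cleanMethod = cleaner.getClass().getMethod("clean");
        cleanMethod.setAccessible(true);
        cleanMethod.invoke(cleaner);
      }
    } catch (Exception e) {
      // If reflection fails for any reason, fall back to letting the
      // garbage collector reclaim the buffer eventually.
    }
  }
}
```

On failure this falls through silently, matching the existing behavior of logging a warning and letting the GC handle the buffer.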

FYI, you won't see this stack trace in the current master because with hadoop 2.0 the log4j logging isn't coming out properly. But it still happens there too. If you go back one commit and run the unit tests, you'll easily see this in the log.