eventd.log WARN [Camel Thread #11 - NettyServerTCPWorker] i.n.c.AbstractChannelHandlerContext: Failed to mark a promise as failure because it has failed already
mscbpi opened this issue · 4 comments
Hi,
Thanks for the effort on the Docker image. It works well; I have it deployed from image 21.0.3-1 with a docker-compose file close to the one here, except for volume management, which I adapted to my infrastructure.
Out of the box it launches, but even before doing any configuration, eventd.log gets polluted with LOTS of these:
Has anyone seen this as well? It might not be related to the dockerization itself, but since it happens on a vanilla boot of this image, I was wondering.
2018-02-22 05:26:11,641 WARN [Camel Thread #11 - NettyServerTCPWorker] i.n.c.AbstractChannelHandlerContext: Failed to mark a promise as failure because it has failed already: DefaultChannelPromise@6cec21fa(failure: java.nio.channels.ClosedChannelException), unnotified cause: java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
io.netty.util.IllegalReferenceCountException: refCnt: 0, decrement: 1
at io.netty.buffer.AbstractReferenceCountedByteBuf.release0(AbstractReferenceCountedByteBuf.java:101) ~[netty-all-4.1.9.Final.jar:4.1.9.Final]
at io.netty.buffer.AbstractReferenceCountedByteBuf.release(AbstractReferenceCountedByteBuf.java:89) ~[netty-all-4.1.9.Final.jar:4.1.9.Final]
at io.netty.util.ReferenceCountUtil.release(ReferenceCountUtil.java:84) ~[netty-all-4.1.9.Final.jar:4.1.9.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:793) ~[netty-all-4.1.9.Final.jar:4.1.9.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1291) ~[netty-all-4.1.9.Final.jar:4.1.9.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:738) [netty-all-4.1.9.Final.jar:4.1.9.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:730) [netty-all-4.1.9.Final.jar:4.1.9.Final]
at io.netty.channel.AbstractChannelHandlerContext.access$1900(AbstractChannelHandlerContext.java:38) [netty-all-4.1.9.Final.jar:4.1.9.Final]
at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:1089) [netty-all-4.1.9.Final.jar:4.1.9.Final]
at io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:1136) [netty-all-4.1.9.Final.jar:4.1.9.Final]
at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:1078) [netty-all-4.1.9.Final.jar:4.1.9.Final]
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163) [netty-all-4.1.9.Final.jar:4.1.9.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403) [netty-all-4.1.9.Final.jar:4.1.9.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442) [netty-all-4.1.9.Final.jar:4.1.9.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-all-4.1.9.Final.jar:4.1.9.Final]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]
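For context, the IllegalReferenceCountException ("refCnt: 0, decrement: 1") in the trace above is Netty's guard against releasing a reference-counted buffer that has already been freed; here the failed write on the closed channel apparently leads to a second release of the same ByteBuf. A minimal sketch of the reference-count semantics, using a hypothetical RefCountedBuf class (not Netty's actual implementation):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical illustration of Netty-style reference counting; NOT Netty code.
class RefCountedBuf {
    private final AtomicInteger refCnt = new AtomicInteger(1);

    // Returns true when this call drops the count to zero (buffer deallocated).
    boolean release() {
        for (;;) {
            int cur = refCnt.get();
            if (cur == 0) {
                // Already freed: a second release is a caller bug, so fail loudly
                // rather than corrupt the buffer pool.
                throw new IllegalStateException("refCnt: 0, decrement: 1");
            }
            if (refCnt.compareAndSet(cur, cur - 1)) {
                return cur == 1;
            }
        }
    }
}

public class Main {
    public static void main(String[] args) {
        RefCountedBuf buf = new RefCountedBuf();
        System.out.println(buf.release()); // first release frees the buffer
        try {
            buf.release();                 // double release triggers the guard
        } catch (IllegalStateException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

The point is only that the WARN is a symptom of a buffer being released twice during error handling, not of data loss on the operator's side.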
Found the trigger of this event: it's Grafana, running in the same Docker stack. I'd just leave this issue open in case it makes sense to anyone.
This issue is documented here and is not specifically related to the Docker image. Can we move the discussion to this issue: https://issues.opennms.org/browse/NMS-9873?
@mscbpi thank you very much for your help investigating this problem. The hint that Grafana, consuming from the REST API, triggers it is very helpful.
Thanks for the feedback! It's indeed the same message; however, my SNMP works. The errors appear in the log when I click refresh in Grafana.