netty/netty-incubator-codec-http3

unsupported message type: DefaultHttp3DataFrame

poisonriver opened this issue · 10 comments

Hi,
I'm running a sample Http3 server code and sometimes get the following exception:

Caused by: java.lang.UnsupportedOperationException: unsupported message type: DefaultHttp3DataFrame
  at io.netty.incubator.codec.quic.QuicheQuicStreamChannel$QuicStreamChannelUnsafe.write(QuicheQuicStreamChannel.java:682)
  at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367)
  at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:877)
  at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:863)
  at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:968)
  at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:856)
  at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:865)
  at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:968)
  at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:856)
  at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:865)
  at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:968)
  at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:856)
  at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:865)
  at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1245)
  at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:173)
  at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:166)
  at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470)
  at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:566)
  at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
  at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
  at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)

This happens when writing a DefaultHttp3DataFrame to the ChannelHandlerContext.
Can you advise, please?

Looks like the HTTP/3 codec was not set up on the pipeline. Can you share exactly how to reproduce this?

Everything is set up the same way as in the example class for the HTTP/3 server. Most requests are processed correctly and clients receive data, but sometimes I see this error in the logs. The code is below:

...
        int maxThreads = http3ServerConfig.getMaxThreads() > 0 ? http3ServerConfig.getMaxThreads() : Runtime.getRuntime().availableProcessors();
        nioEventLoopGroup = new NioEventLoopGroup(maxThreads);
        QuicSslContext sslContext = QuicSslContextBuilder.forServer(new File(http3ServerConfig.getSSLPrivateKey()),
                http3ServerConfig.getSSLPassword(), new File(http3ServerConfig.getSSLCert()))
            .applicationProtocols(Http3.supportedApplicationProtocols())
            .earlyData(http3ServerConfig.isEarlyDataEnabled())
            .build();
        List<InetAddress> addresses = http3ServerConfig.getBindingAddresses();
        for (InetAddress address : addresses) {
            ChannelHandler handler = Http3.newQuicServerCodecBuilder()
                .sslContext(sslContext)
                .maxIdleTimeout(http3ServerConfig.getMaxIdleTimeout(), TimeUnit.MILLISECONDS)
                .initialMaxData(http3ServerConfig.getSessionRecvWindow())
                .initialMaxStreamDataBidirectionalLocal(http3ServerConfig.getBiStreamRecvWindow())
                .initialMaxStreamDataBidirectionalRemote(http3ServerConfig.getBiStreamRecvWindow())
                .initialMaxStreamsBidirectional(http3ServerConfig.getMaxBiStreams())
                .congestionControlAlgorithm(http3ServerConfig.getCongestionControl())
                .tokenHandler(InsecureQuicTokenHandler.INSTANCE)
                .handler(new Http3ChannelInitializer()).build();
            Channel quicChannel = new Bootstrap().group(nioEventLoopGroup)
                .channel(NioDatagramChannel.class)
                .option(ChannelOption.SO_SNDBUF, http3ServerConfig.getUDPSndSocketBufferSize())
                .option(ChannelOption.SO_RCVBUF, http3ServerConfig.getUDPRcvSocketBufferSize())
                .handler(handler)
                .bind(address, http3ServerConfig.getPort()).sync().channel();
        }
...

private class Http3ChannelInitializer extends ChannelInitializer<QuicChannel> {
        // Called for each connection
        @Override
        protected void initChannel(QuicChannel ch) {
            ch.pipeline().addLast(new Http3ServerConnectionHandler(
                new ChannelInitializer<QuicStreamChannel>() {
                    // Called for each request stream
                    @Override
                    protected void initChannel(QuicStreamChannel ch) {
                        Object quicheQuicConnection = getQuicheQuicConnection(ch.parent()); // helper method, definition not shown
                        ch.pipeline().addLast(new Http3RequestStreamInboundHandler() {
                            private Http3Headers headers;

                            @Override
                            protected void channelRead(ChannelHandlerContext ctx, Http3HeadersFrame frame) throws Exception {
                                headers = frame.headers();
                                ReferenceCountUtil.release(frame);
                            }

                            @Override
                            protected void channelRead(ChannelHandlerContext ctx, Http3DataFrame frame) throws Exception {
                                ReferenceCountUtil.release(frame);
                            }

                            @Override
                            protected void channelInputClosed(ChannelHandlerContext ctx) throws Exception {
                                if (headers != null) { // this is not a broken connection
                                    processRequest(ctx, headers);
                                }
                            }

                            private void processRequest(ChannelHandlerContext ctx, Http3Headers http3Headers) throws Exception {
                                // process request based on headers
                            }
                        });
                    }
                }, null, null, null, http3ServerConfig.isQPackDynamicTableDisabled()));
        }
    }
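
For reference, the official Http3ServerExample handles a request at this point by writing a headers frame followed by a data frame and then shutting down the stream's output. A minimal sketch of what processRequest could do along those lines (status, header values, and body are illustrative, not the poster's actual logic; uses DefaultHttp3HeadersFrame, CharsetUtil, and QuicStreamChannel from the netty and netty-incubator artifacts):

private void processRequest(ChannelHandlerContext ctx, Http3Headers http3Headers) {
    // Illustrative response; mirrors the pattern from the netty HTTP/3 server example.
    byte[] content = "hello".getBytes(CharsetUtil.US_ASCII);
    Http3HeadersFrame headersFrame = new DefaultHttp3HeadersFrame();
    headersFrame.headers().status("200")
            .add("server", "netty")
            .addInt("content-length", content.length);
    ctx.write(headersFrame);
    // SHUTDOWN_OUTPUT signals end-of-stream once the data frame is flushed.
    ctx.writeAndFlush(new DefaultHttp3DataFrame(
            ctx.alloc().buffer(content.length).writeBytes(content)))
            .addListener(QuicStreamChannel.SHUTDOWN_OUTPUT);
}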

@poisonriver can you show me where you call the write*(...) method? From the stack trace it seems like it is triggered outside of the EventLoop.

Hi @normanmaurer ,
Yes, write methods are triggered outside of the EventLoop from a separate thread pool, but this shouldn't be an issue, right?
Here's the code:

public int write(byte[] data, int offset, int count, int flags) throws IOException {
        DefaultHttp3DataFrame frame = new DefaultHttp3DataFrame(Unpooled.wrappedBuffer(data, offset, count));
        if (last) { // 'last' and 'size' are fields set elsewhere in this class
            context.writeAndFlush(frame).addListener(new SendFrameCallback(size, true));
        } else {
            context.write(frame).addListener(new SendFrameCallback(size, false));
        }
        return count; // assuming the contract is to report the bytes accepted
}

SendFrameCallback just updates the counters for the data being sent.
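
Since the writes come from a separate thread pool, a defensive pattern is to run the write on the stream channel's EventLoop and verify there that the handler is still in the pipeline, so the check is ordered with any teardown. A minimal sketch using standard Netty APIs, reusing frame, context, size, and last from the snippet above:

// Sketch: ordering the state check with pipeline teardown by running on the EventLoop.
context.channel().eventLoop().execute(() -> {
    if (context.isRemoved() || !context.channel().isActive()) {
        frame.release(); // stream already torn down; drop the frame instead of writing
        return;
    }
    context.writeAndFlush(frame).addListener(new SendFrameCallback(size, last));
});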

That's right... I just wonder if there is maybe a race, in the sense that we already tear down the QuicStreamChannel and so remove the codec from the pipeline.

Also, can you tell me where the "context" is coming from?

The context comes from this callback:

protected void channelInputClosed(ChannelHandlerContext ctx) throws Exception {
    if (headers != null) { // this is not a broken connection
        processRequest(ctx, headers);
    }
}

If a channel is shutting down, then the codec is removed, right?
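
That would match the stack trace at the top: once the stream channel's pipeline is torn down and the HTTP/3 frame codec removed, the Http3DataFrame reaches the raw QuicStreamChannel, which cannot encode it. One hedged way for the application to notice teardown is the stream channel's close future (standard Netty API; the flag is hypothetical):

// Sketch: flip a flag on teardown so outside writer threads stop submitting frames.
// Requires java.util.concurrent.atomic.AtomicBoolean.
AtomicBoolean streamClosed = new AtomicBoolean(); // hypothetical field alongside 'context'
context.channel().closeFuture().addListener(f -> streamClosed.set(true));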

@poisonriver sorry, I didn't have time to look into this... was this resolved, since you closed it?

Looks like it was resolved by upgrading to 0.0.23.

@poisonriver what version did you use before?