nats-io/nats.net.v2

Sending too large payloads causes timeouts

theast opened this issue · 3 comments

Observed behavior

Sending too-large payloads (larger than the max_payload configured on the NATS server) causes timeouts: "The operation has timed out.". Messages sent after the oversized one also time out, even when their payloads are well within max_payload.

Expected behavior

  1. A more accurate exception message would be helpful, something tied to the payload size rather than a generic timeout.
  2. Should the subsequent messages fail?
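Until the client raises a dedicated error, a client-side guard could surface the problem before the publish is attempted. A minimal sketch, assuming the connection exposes the server-advertised limit as `ServerInfo.MaxPayload` (populated from the server's INFO message after connecting):

```csharp
using NATS.Client.Core;

await using var nats = new NatsConnection();
await nats.ConnectAsync(); // populates ServerInfo from the server's INFO

var payload = new byte[1024 * 1024 + 1];

// Guard sketch: compare against the server-advertised max_payload
// instead of relying on a timeout to surface the violation.
var maxPayload = nats.ServerInfo?.MaxPayload ?? long.MaxValue;
if (payload.Length > maxPayload)
{
    throw new InvalidOperationException(
        $"Payload of {payload.Length} bytes exceeds server max_payload of {maxPayload} bytes.");
}

await nats.PublishAsync("x", payload);
```

This only checks the raw payload length; headers and subject also count toward the server-side limit, so the real check in the client would need to be slightly more conservative.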

Server and client version

server: 2.10.5
client: 2.1.0-preview.5

Host environment

No response

Steps to reproduce

No response

mtmk commented

Thanks for the report. This is a bug.

using Microsoft.Extensions.Logging;
using NATS.Client.Core;

var nats = new NatsConnection(new NatsOpts
{
    LoggerFactory = LoggerFactory.Create(builder => builder.AddConsole().SetMinimumLevel(LogLevel.Error)),
});
try
{
    // One byte over the server's default max_payload of 1 MiB
    await nats.PublishAsync("x", new byte[1024 * 1024 + 1]);
}
catch (Exception e)
{
    Console.WriteLine($">>> Error: {e.GetType()}: {e.Message}");
}

// Well within max_payload, but this publish times out as well
await nats.PublishAsync("x", new byte[128]);
fail: NATS.Client.Core.Internal.NatsReadProtocolProcessor[1005]
      Server error Maximum Payload Violation
fail: NATS.Client.Core.Commands.CommandWriter[1007]
      Unexpected error in send buffer reader loop
      System.Net.Sockets.SocketException (10054): An existing connection was forcibly closed by the remote host.
         at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.CreateException(SocketError error, Boolean forAsyncThrow)
         at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.SendAsync(Socket socket, CancellationToken cancellationToken)
         at System.Net.Sockets.Socket.SendAsync(ReadOnlyMemory`1 buffer, SocketFlags socketFlags, CancellationToken cancellationToken)
         at NATS.Client.Core.Commands.CommandWriter.ReaderLoopAsync(ILogger`1 logger, ISocketConnection connection, PipeReader pipeReader, Channel`1 channelSize, CancellationToken cancellationToken) in C:\Users\mtmk\src\nats.net.v2\src\NATS.Client.Core\Commands\CommandWriter.cs:line 359
         at System.Runtime.CompilerServices.AsyncTaskMethodBuilder`1.AsyncStateMachineBox`1.ExecutionContextCallback(Object s)
         at System.Threading.ExecutionContext.RunFromThreadPoolDispatchLoop(Thread threadPoolThread, ExecutionContext executionContext, ContextCallback callback, Object state)
         at System.Runtime.CompilerServices.AsyncTaskMethodBuilder`1.AsyncStateMachineBox`1.MoveNext(Thread threadPoolThread)
         at System.Runtime.CompilerServices.AsyncTaskMethodBuilder`1.AsyncStateMachineBox`1.ExecuteFromThreadPool(Thread threadPoolThread)
         at System.Threading.ThreadPoolWorkQueue.Dispatch()
         at System.Threading.PortableThreadPool.WorkerThread.WorkerThreadStart()
      --- End of stack trace from previous location ---
         at NATS.Client.Core.Commands.CommandWriter.ReaderLoopAsync(ILogger`1 logger, ISocketConnection connection, PipeReader pipeReader, Channel`1 channelSize, CancellationToken cancellationToken) in C:\Users\mtmk\src\nats.net.v2\src\NATS.Client.Core\Commands\CommandWriter.cs:line 406
fail: NATS.Client.Core.Internal.NatsReadProtocolProcessor[1005]
      Server error Maximum Payload Violation
>>> Error: System.OperationCanceledException: The operation was canceled.
Unhandled exception. System.InvalidOperationException: Concurrent reads or writes are not supported.
   at System.IO.Pipelines.Pipe.GetFlushResult(FlushResult& result)
   at System.IO.Pipelines.Pipe.GetFlushAsyncResult()
   at System.IO.Pipelines.Pipe.DefaultPipeWriter.GetResult(Int16 token)
   at System.Threading.Tasks.ValueTask`1.ValueTaskSourceAsTask.<>c.<.cctor>b__4_0(Object state)
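Until a fix ships, giving each publish its own deadline at least bounds how long a follow-up publish can hang after the connection has been torn down. This is a workaround sketch, not the fix:

```csharp
using NATS.Client.Core;

await using var nats = new NatsConnection();

// Workaround sketch: a per-publish deadline turns a broken writer loop
// into a prompt cancellation rather than an indefinite hang.
using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(2));
try
{
    await nats.PublishAsync("x", new byte[128], cancellationToken: cts.Token);
}
catch (OperationCanceledException)
{
    // The publish did not complete in time, likely because the connection
    // was closed by the server after an earlier max_payload violation.
}
```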

thinkbeforecoding commented

I also noted this today.

Good to see it has been solved. Do you have an idea of when the next version will be released?

mtmk commented

> I also noted this today.
>
> Good to see it has been solved. Do you have an idea of when the next version will be released?

@thinkbeforecoding we can make another quick preview release if that's good enough for you in the short term. The stable release will take a little more time; I need to run a few more tests.