rabbitmq/rabbitmq-dotnet-client

Intermittent deadlock when closing a channel using CloseAsync in 7.x

Opened this issue · 7 comments

Describe the bug

Hi there,

Ever since upgrading from 6.x to 7.x, I've been running into intermittent deadlocks whenever I try to close a channel via CloseAsync.
I haven't been able to reproduce it locally. I have done some remote debugging, but it didn't give me any real insight (all thread pool threads are idle, waiting for work).

I did, however, manage to run dotnet-dump's dumpasync command during one of these deadlocks and got the following info:

First dump

00007ebcbe050400 00007efcd1b1e298 ( ) System.Threading.Tasks.Task
  00007ebcbe9d7c48 00007efcd74554b8 (0) RabbitMQ.Client.ConsumerDispatching.ConsumerDispatcherChannelBase+<WaitForShutdownAsync>d__17
    00007ebcbe915d68 00007efcd7453028 (3) RabbitMQ.Client.Impl.Channel+<CloseAsync>d__73
      00007ebcbe915e10 00007efcd74533e0 (0) RabbitMQ.Client.Impl.AutorecoveringChannel+<CloseAsync>d__53
        00007ebcbe915ea0 00007efcd7453788 (0) <My code>

Second dump (another instance)

00007f0a56a17290 00007f4a69375380 ( ) System.Threading.Tasks.Task
  00007f0a597ed238 00007f4a6d2dd968 (0) RabbitMQ.Client.ConsumerDispatching.ConsumerDispatcherChannelBase+<WaitForShutdownAsync>d__17
    00007f0a59573cf8 00007f4a6d2da998 (3) RabbitMQ.Client.Impl.Channel+<CloseAsync>d__73
      00007f0a59573da0 00007f4a6d2dad50 (0) RabbitMQ.Client.Impl.AutorecoveringChannel+<CloseAsync>d__53
        00007f0a59573e30 00007f4a6d2db0f8 (0) <My code>

I noticed that in both dumps the stacks aren’t displayed with the usual Awaiting: notation you often see in async stack traces, but that might be normal.

Reproduction steps

I haven’t pinned down a reliable way to reproduce this, but calling CloseAsync more frequently seems to increase the chances of hitting the deadlock. It also appears more common on Linux than Windows, though that might just be due to hardware differences rather than OS behavior.

Expected behavior

When calling CloseAsync, I’d expect the channel to close normally without causing a deadlock.

Additional context

No response

Hi, thanks for the report. As I'm sure you're aware, there's not much to work with here 😸 Obviously, the gold standard is to provide code that reproduces this issue, or at least some idea of the steps to do so.

calling CloseAsync more frequently

What does this mean? Do you have some way in your application to increase the frequency of channel closure?

What does this mean? Do you have some way in your application to increase the frequency of channel closure?

We're running tests that create and close channels very frequently, and it appears that the test suite that does this the most is the one that usually gets stuck.

Anyhow, I can try to dig into this further and see if I can provide something that will help you reproduce it.

Thanks

@Andersso channel and connection churn are workloads explicitly recommended against.

We're running tests that create and close channels very frequently, and it appears that the test suite that does this the most is the one that usually gets stuck.

It would be extremely helpful for you to share your test code. If you can't do that, describe the test as best you can:

  • How many channels are created at any given point?
  • Are they created concurrently?
  • Is your test code a separate console app, or does it use a test framework like xunit?

My guess is that you could be hitting a ThreadPool limit which prevents a Task from being scheduled, while another Task waits for the result. If you'd like to test that theory, please add the following code to the startup of your test program / test suite:

ThreadPool.SetMinThreads(16 * Environment.ProcessorCount, 16 * Environment.ProcessorCount);
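
If that setting doesn't change anything, it can also help to log the thread pool state while the process is stuck. A minimal monitoring sketch (the helper name and interval are placeholders; it assumes a modern runtime, .NET 6+, where these ThreadPool counters are available):

    using System;
    using System.Threading;
    using System.Threading.Tasks;

    static class ThreadPoolMonitor
    {
        // Periodically prints thread pool statistics so starvation (a growing
        // work queue with a flat thread count) shows up in the output while
        // CloseAsync appears to hang.
        public static void Start(TimeSpan interval)
        {
            _ = Task.Run(async () =>
            {
                while (true)
                {
                    ThreadPool.GetMinThreads(out int minWorker, out int minIo);
                    Console.WriteLine(
                        $"threads={ThreadPool.ThreadCount} " +
                        $"queued={ThreadPool.PendingWorkItemCount} " +
                        $"completed={ThreadPool.CompletedWorkItemCount} " +
                        $"min={minWorker}/{minIo}");
                    await Task.Delay(interval);
                }
            });
        }
    }

If the queued count keeps growing while the thread count stays flat, that would support the starvation theory.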

This is a related issue:
#1354

Also note that the management UI has connection and channel churn metrics, on the Overview page but also on the node page IIRC.

So at the very least it should be easy to see the churn rate: is it 50 channels opened per second? Is it 200?

@Andersso @ZajacPiotr98 - I've modified a test app in this project to try and trigger the error in this issue, or the error in #1749, and it works fine every time in my environment:

https://github.com/rabbitmq/rabbitmq-dotnet-client/blob/rabbitmq-dotnet-client-1749/projects/Applications/CreateChannel/Program.cs
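
For comparison, the kind of churn workload being described boils down to something like the following standalone sketch against the 7.x async API (this is not the actual test app; the URI, iteration count, and single shared connection are assumptions):

    using System;
    using System.Threading.Tasks;
    using RabbitMQ.Client;

    class ChannelChurn
    {
        static async Task Main()
        {
            var factory = new ConnectionFactory { Uri = new Uri("amqp://localhost") };
            await using IConnection connection = await factory.CreateConnectionAsync();

            for (int i = 0; i < 10_000; i++)
            {
                // Open and immediately close a channel; the reported hang is an
                // intermittent deadlock inside this CloseAsync call.
                IChannel channel = await connection.CreateChannelAsync();
                await channel.CloseAsync();
                await channel.DisposeAsync();

                if (i % 100 == 0)
                {
                    Console.WriteLine($"closed {i} channels");
                }
            }
        }
    }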

Hi again, and sorry for the delayed response. I hope you guys had a good Christmas and new year!

  • How many channels are created at any given point?
    The application creates a total of 20 channels, with 5 being recreated between each test case.

  • Are they created concurrently?
    No, they are created one at a time and stopped one at a time.

  • Is your test code a separate, console app, or using a test framework like xunit?
    The actual application is a console application, running separately from the test application (different instances). The tests use RabbitMQ to send messages to the application but maintain their own connection/channel, which is kept alive for the entire test suite.

I’ve been working on reproducing the issue in a test project but haven’t had any success. I’ve tried experimenting with different thread pool sizes, but it didn’t seem to affect the outcome. Based on my investigation of my latest memory dump, there’s no indication of thread pool starvation; all the threads in the pool are idle and waiting for work. It is also worth mentioning that my application is a console app so it does not have any synchronization context.
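
For what it's worth, a quick check like the following (just a sketch) confirms there is no ambient context for an await to deadlock against in this app:

    using System;
    using System.Threading;

    // In a plain console app this prints "<null>": continuations resume on the
    // thread pool instead of being marshalled back to a captured context.
    Console.WriteLine(SynchronizationContext.Current?.GetType().FullName ?? "<null>");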

Regarding the connection churn, wouldn’t that have caused issues in the 6.x versions as well? We’ve had this setup running fine for years without any problems until the upgrade to 7.x.

I’ve done some additional digging by analyzing the memory dump. Specifically, I’ve looked at the tasks being awaited in the method that always seems to get stuck (according to the async dump):

    class ConsumerDispatcherChannelBase
    {
        public async Task WaitForShutdownAsync()
        {
            if (_disposed) // Value=0
            {
                return;
            }

            if (_quiesce) // Value=1
            {
                try
                {
                    await _reader.Completion; // m_stateFlags=33555520 (WaitingForActivation)
                    await _worker; // m_stateFlags=117441536 (RanToCompletion)
                } // catch/finally elided from this excerpt
            }
        }
    }

It appears that the dispatcher's internal channel (the one behind _reader.Completion) never gets completed, which prevents WaitForShutdownAsync, and therefore CloseAsync, from ever completing.
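
For context on why that _reader.Completion task can sit in WaitingForActivation indefinitely: a ChannelReader's Completion task only finishes after the writer side has been marked complete (via Complete/TryComplete) and any buffered items have been drained. A small standalone sketch of that behavior (this is System.Threading.Channels in general, not the client's code):

    using System;
    using System.Threading.Channels;
    using System.Threading.Tasks;

    class CompletionDemo
    {
        static async Task Main()
        {
            Channel<int> channel = Channel.CreateUnbounded<int>();

            // Before the writer is completed, the Completion task just waits.
            Task completion = channel.Reader.Completion;
            Console.WriteLine(completion.Status); // WaitingForActivation

            // Marking the writer complete (with no unread items) lets it finish.
            channel.Writer.TryComplete();
            await completion;
            Console.WriteLine(completion.Status); // RanToCompletion
        }
    }

So if nothing ever completes the writer side of the dispatcher's channel on this code path, the state observed in the dump is exactly what you'd expect.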