golang/go

proposal: spec: add support for unlimited capacity channels

rgooch opened this issue · 67 comments

Proposal: when creating a channel, if the capacity passed to the make builtin function is negative, the channel will have unlimited capacity. Such channels will never block when sending and will always be ready for sending.

Rationale: channels are a natural way to implement queues. When processing streams of data, it is unknown how many data elements will be sent. In some cases, fixed length channels can lead to deadlocks. These deadlocks can be eliminated with unlimited capacity channels.

This is how I currently work around the limitation:

import "container/list"

func NewQueue() (chan<- interface{}, <-chan interface{}) {
    send := make(chan interface{}, 1)
    receive := make(chan interface{}, 1)
    go manageQueue(send, receive)
    return send, receive
}

func manageQueue(send <-chan interface{}, receive chan<- interface{}) {
    queue := list.New()
    for {
        if front := queue.Front(); front == nil {
            if send == nil {
                close(receive)
                return
            }
            value, ok := <-send
            if !ok {
                close(receive)
                return
            }
            queue.PushBack(value)
        } else {
            select {
            case receive <- front.Value:
                queue.Remove(front)
            case value, ok := <-send:
                if ok {
                    queue.PushBack(value)
                } else {
                    send = nil
                }
            }
        }
    }
}

The disadvantage of this workaround is that it forces all users to perform type assertions, so you lose compile-time type checking. If you want compile-time type checking, you need to re-implement the above code over and over again for each queue type. Unlimited capacity channels would avoid the need for all that boilerplate.
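For example, every consumer of the interface{}-based queue must recover the concrete type with a run-time assertion (a minimal sketch using the NewQueue above):

send, receive := NewQueue()
send <- 42
n := (<-receive).(int) // run-time type assertion; panics if a non-int was queued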

All language changes are currently on hold so don't expect a timely response for this proposal. Others are welcome to discuss, though.

cznic commented

Unlimited capacity channels ask for a machine with unlimited memory.

Yes, technically this assumes a machine with unlimited memory, in the case where there's no bound on the work and nothing is dequeued before memory runs out. That's a narrow subset of workloads. That doesn't invalidate the merit of this proposal.

cznic commented

That's a narrow subset of workloads.

This happens in every long-running process where any of the producers outraces, even by a little, the respective consumer(s). That's why channel operations block in the first place. The blocking is not evil, it's the necessary synchronization between producers and consumers.

@cznic: Does that mean that you shouldn't be able to expand slices with append() because you might run out of RAM?

Maybe there could be an append-like functionality for channels... Might give more control over it that way.

@cznic: Just because you have a long-running process with producers and consumers does not imply that producers which sometimes outrace consumers are always outracing them. It's a common pattern to have a producer periodically stream a bounded (but unknowable ahead of time) quantity of data; the consumer sometimes falls behind for a while and then either catches up, or the producer stops for a while or permanently.

Regarding the question that @DeedleFake poses: it does seem inconsistent to say "growth through append is OK but growth through channels is unsafe".

My proposal does not force you to accept unbounded memory growth: you can still set the size of channels as before. My proposal simply gives people the option to allow for automatic growth, in a way that is clean and efficient (compared to say the workaround I described in my opening post).

@DeedleFake: What syntax would you propose? Would it work seamlessly with the existing syntax for reading, writing and selecting on channels? How exactly would your proposal give more control? With my proposal, if you are concerned about bounding growth - but don't want to set a hard cap - you have the option of checking the length of the channel and applying some application specific back-pressure.
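For instance, a sender could poll the channel's length and throttle itself (a sketch against the proposed semantics; ch, item, and softLimit are illustrative names, not from the proposal):

if len(ch) > softLimit {
    // Apply application-specific back-pressure: sleep, shed load,
    // coalesce messages, etc.
    time.Sleep(10 * time.Millisecond)
}
ch <- item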

@rsc: What will it take to move this from "hold" to active consideration?

I'm not sure about syntax. I was just thinking out loud, mostly. The idea was that with a channel, if len becomes greater than cap, it blocks when sending. append() increases cap for slices, although it does involve reallocation and copying, obviously. I was just thinking that there's no way to change the capacity of a channel without making an entirely new channel, manually copying everything from the previous channel, and making sure that the new channel replaces the old one in every goroutine.

rsc commented

We're not considering any significant language changes today. I'm just organizing.

To my mind, the value of building this into the language (or enabling implementation in a plugin?) is in reducing the expense of the goroutine-plus-two-channels scheme, which costs much more per channel than an ordinary channel.

I'm working on an app where every recently-connected client needs one of these.

@rsc: How is a "significant" change to the language defined? This is 100% backwards compatible and is a very minor API tweak.

What's the timeline for Go 2?

@rgooch: "significant" is anything that's more than an obvious bug fix (say, because compilers do something else than what the spec says) or clarification (compilers disagree, spec unclear). That is, anything that's an actual language change.

@rsc will talk about "The future of Go" at GopherCon in Denver (https://www.gophercon.com/schedule, Day 1, Main Stage). If you're not attending, all talks will be recorded; and I'm sure important things will be tweeted as well. That's probably a good talk to listen to regarding a future Go 2.

kjk commented

@griesemer

https://www.youtube.com/watch?v=6GMkuPiIZ2k ?

https://github.com/golang/proposal/blob/master/design/18130-type-alias.md changed the syntax of the language and was accepted for 1.9.

This proposal asks for ch := make(chan bool, -1) to create an infinite channel instead of the current behavior of producing a compile-time error or panicking at runtime.

If type aliases satisfy Go1 compatibility guidelines, then so does this proposal.

A buffered channel could dynamically allocate its buffer, instead of malloc on make:

make(chan bool, -200)

A virtually infinite buffer:

make(chan bool, math.MinInt64)

The present alternative to the goroutine-plus-two-channels method is to select on every put and take evasive action in the default case. Lower memory cost, but higher CPU.

select {
  case ch <- i: // thank goodness
  default: // hm, push i to storage?
}

@kjk, Google engineers were asking for type aliases. Membership has its privileges :-)

@kjk Thanks for the Pirates of the Caribbean reference; much appreciated! The Spec however defines the plot, and thus is more than just a guideline... :-)

Type aliases are crucial for refactoring at scale and arguably an oversight in the original design (I've commented on that at length in the type alias discussion). They were discussed in excruciating detail (in fact it will have taken a year from initial discussion to actual release). The proposed feature here, while perhaps desirable, doesn't quite carry the same weight (at least I don't see the respective strong demand from the community).

Again, for reasons discussed elsewhere, we have stopped adding backward-compatible language changes, however small and compatible, for the time being so that they can be considered as a whole. If it's any consolation, there are several small, "obvious", and backwards-compatible language changes that were proposed by the Go Team (myself included), and we also postponed them just the same.

I believe Russ will discuss a plan for next steps at his GopherCon talk, and we will be looking for community input. No matter what, the tree is frozen for such changes for Go 1.9 anyway.

rsc commented

The limited capacity of channels is an important source of backpressure in a set of communicating goroutines. It is typically a mistake to use an unbounded channel, because you lose that backpressure. If one goroutine falls sufficiently behind, you usually want to take some action in response, not just queue its messages forever. The appropriate response varies by situation: maybe you want to drop messages, maybe you want to keep summary messages, maybe you want to take different responses as the goroutine falls further and further behind. Making it trivial to reach for unbounded channels keeps developers from thinking about this, which I believe is a strong disadvantage.
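For illustration (a sketch, not from this thread; events and e are placeholder names), one such explicit response is to drop the oldest queued message when the consumer falls behind. Note that this two-step drop is only safe with a single sender:

select {
case events <- e:
    // Consumer is keeping up.
default:
    // Consumer is behind: discard the oldest message to make room.
    select {
    case <-events:
    default:
    }
    select {
    case events <- e:
    default: // Still full (racy with multiple senders); drop the new message.
    }
}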

The point is not that we certainly shouldn't do this - I don't know - but only that the decision is more complex than it may seem at first glance. Yes, language changes right now must be backwards compatible with earlier versions of Go, but we're not going to take every backwards-compatible change. In fact, as I said before, we're not considering significant language changes (or in fact any language changes) today.

rsc commented

@networkimprov, Type aliases did not happen because "Google engineers were asking for them". Rob, Robert, and I observed a recurring problem in managing large code bases and proposed a solution, to make Go more useful when scaling to large code bases, one of its explicit goals. We definitely did not communicate the motivation well enough in the initial alias proposal, and we tried to (and I think did) do better in the type alias proposal. For more details about the motivation, please see the article and videos linked at #18130.

As I said, we definitely did not communicate the motivation or criteria for significant language changes well enough in the handling of the original alias proposal. My upcoming Gophercon talk is in part an attempt to do that better. If you won't be at Gophercon, don't worry, I will publish a blog post shortly after the talk too.

@rsc, it's fine to provide back-pressure if it can be detected efficiently; Posix has EWOULDBLOCK. From what I gather, select { ... default: } is not similarly inexpensive?

But the real problem with channel buffers isn't that they're not infinite, but that they're not dynamically allocated/allocable. This would seem to be easily fixed. One should be able to instantiate a large number of channels with sizeable buffers and only use some of them without incurring the buffer overhead for all of them.

I am inclined to agree with @rsc here on the subject of the proposal.

My first exposure to the message-passing style of concurrency was in Erlang, whose model of communication is similar to, but not the same as, Go's:

  • Rather than channels being first-class objects, every process (Erlang process, not system process) has its own message queue.
  • Queues are unlimited.
  • Messages can be of any type.
  • When receiving a message, the receiving process can bind it via pattern matching, and it will grab the first message from the queue that matches. This effectively allows it to behave as if it has multiple separate queues by having, say, errors conform to a separate pattern so that they can be checked for independently.

Because queues are unlimited, a naively written pipeline can behave quite poorly under load. The queue for the bottleneck process will grow without bound, and there is no easy way to resolve this (one article I found recommended simply putting your entire request pipeline into a single process!). In the worst case this causes crashes as the bottlenecked processes' queues grow without bound. (Note: Erlang allows multiple nodes [VMs] running on multiple machines to all participate in one shared runtime and send native messages to one another; this can exacerbate the problem if the bottleneck is on a different machine from the senders, since the sending machine is not constrained by the resources consumed by the bottleneck.)

By contrast, bounded channels provide very useful backpressure. When something gets overwhelmed, the entire pipeline grinds to a halt and stops processing more data. In cases where the feed into the pipeline might still operate, such as an HTTP server, it's still much easier to handle throttling. You can explicitly check for blocking with select and behave differently, such as by rejecting an incoming request with a 503 Service Unavailable.
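A minimal sketch of that pattern (the work channel and handler are illustrative, not from the thread):

var work = make(chan string, 128) // bounded; a full channel signals overload

func handler(w http.ResponseWriter, r *http.Request) {
    select {
    case work <- r.URL.Path: // enqueue a job derived from the request
        w.WriteHeader(http.StatusAccepted)
    default:
        // The pipeline is saturated; reject rather than queue forever.
        http.Error(w, "overloaded", http.StatusServiceUnavailable)
    }
}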

What an unlimited buffer lets you do is move the bottleneck from a visibly stuck goroutine to an invisibly growing channel, and it will probably be misused in exactly that way by people who really should not. Consider this: if your pipeline is generating input faster than it can output it, then you have a bug, because the queue will grow indefinitely.

The only case I can think of where blocking is not desirable behaviour is when the messages are coming (directly or indirectly) from an external source that there are reasons to drain as quickly as possible. For example, after executing a database lookup that reads a large number of rows which can be freed after they are read, it might be desirable to move the rows into memory quickly so that the query can be released server-side. In this case, an unlimited channel buffer could store the rows. But I don't think a langspace solution to this problem is worth the benefit over a codespace one, especially given the way that this feature would become a trap for inexperienced users of the language.

rsc commented

... it's fine to provide back-pressure if it can be detected efficiently; Posix has EWOULDBLOCK. From what I gather, select { ... default: } is not similarly inexpensive?

Do you have measurements showing that?

A select with a single case and a default is a special-case fast path in the implementation that - in the case of falling into the default - doesn't even acquire a lock. It should be far cheaper than any system call that might return EWOULDBLOCK.

I agree with @networkimprov. Let's change the implementation to dynamically grow large channel buffers as needed. I think that's a good idea anyhow. If a program is running so close to memory limits that it can't allocate a new page for a channel buffer, then it is already in trouble; having the buffer pre-allocated would not save it. If we make that implementation change, I see no need for this language change.

rsc commented

Disagree - that's still effectively a language change, and we're not doing language changes today.

@rsc, @ianlancetaylor and I are not suggesting a language change (i.e., a negative size in make(chan ...)).

Non-small channel buffers should never be malloc'd on make. There is no need to change the spec, just fix the memory allocation mistake, perhaps by applying the algorithm from append().

cznic commented

Non-small channel buffers should never be malloc'd on make. There is no need to change the spec, just fix the memory allocation mistake, perhaps by applying the algorithm from append().

I'd call it a mistake if the channel buffer were lazily allocated.

@cznic what use cases would suffer with dynamic allocation of a large channel buffer?

To address the "why would you do that" implicit question: consider the use-case in stream/event processing where the streams are bounded in length and will fit into memory, but their lengths are not known ahead of time. The producer asynchronously processes responses: it only needs to wait for the last response. Backpressure from channels is undesirable in this case, as the producer would stall waiting for responses whenever its send channel fills up and the consumer would stall on sending responses (and thus consuming requests) whenever its send channel fills up. Also, there is more potential for deadlocks, since you basically have two locks (channels) dependent on each other.
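To make the two-lock deadlock concrete (a sketch with illustrative names; neither goroutine can make progress once both buffers fill):

requests := make(chan Request, 8)
responses := make(chan Response, 8)

// Producer: sends everything first, collects responses at the end.
go func() {
    for _, r := range work {
        requests <- r // blocks once requests is full...
    }
    for range work {
        <-responses
    }
}()

// Consumer:
for r := range requests {
    responses <- process(r) // ...while this blocks once responses is full.
}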

If you've got an event processing hub which is routing requests to different consumer/responders, the likelihood of stalls and deadlocks due to channel backpressure increases.

With unlimited buffering, no one is ever blocked on send, and thus everyone is always available to process incoming data.

cznic commented

@cznic what use cases would suffer with dynamic allocation of a large channel buffer?

Dynamic allocation was not the problem discussed in this thread; in fact, it's so far the only way to create a buffered channel. Lazy allocation is the problem. It amplifies all the complications overcommit already brings on the OS level by recreating them once again in yet another layer within user space.

It amplifies all the complications overcommit already brings on the OS level...

That is a general statement; I was hoping for use cases. Lazy allocation of large channel buffers allows informal memory pooling among a large set of channels. Explicit pooling is preferable (see below), but if a "language change" is verboten in Go 1, on-demand malloc by the runtime would work OK.

var pool [1<<26]byte
ch1, ch2 := make(chan int, pool), make(chan string, pool)

@rgooch I think you should change your proposal to the above, assuming it satisfies your req's :-)

cznic commented

That is a general statement;

Imagine: "allocate" a [too] big channel in the proposed on-demand way. Don't use it yet, so the process starts doing something useful. Later on, when the channel is used for the first time, OOM and work lost. It would be better if the process had not started in the first place.

Even if a cautious administrator has tuned the OS overcommit so that this should not happen, this proposal sneaks the problem back in while taking away any control over the bad behavior.

when the channel is used for the first time, OOM and work lost

My proposal is to grow the channel buffer gradually as needed up to the specified size, not malloc the whole size on first use. I agree the latter would be bad.

cznic commented

My proposal is to grow the channel buffer gradually as needed up to the specified size, not malloc the whole size on first use.

It makes no difference, except that the problem may arise later, which is actually worse in some scenarios (more work lost).

It's all just cheating on the resources. And the more of that cheating there is, the less some coders have to care about proper resource planning, and the more suddenly-failing programs they will produce.

The limited capacity of channels is an important source of backpressure in a set of communicating goroutines.

These (@rsc's) words shall be put in stone, IMO.

I'm not opposed to backpressure, and I'm not advocating infinite channel buffers. One can know the overall workload without knowing how it is divided at a given moment; therefore I am advocating memory pooling for a set of channel buffers. Such a feature is aligned with Go's ambition to compete with C and C++.

Alternatively, a mechanism to make plugins that implement channels in custom ways would do the trick.

Maybe it would clarify my perspective to say that my app needs thousands of channels, and possibly millions, with average buffer sizes that are small, but the occasional large one. I will shut down a large one at some upper bound, but it is not reasonable to limit the channel count simply because channels cannot pool buffer memory.

A dynamically allocated channel buffer (from dedicated mem pool or malloc) probably should free memory that's gone unused for a while. That shouldn't require any more user intervention than the allocation side.

Firstly, I proposed an explicit memory pool for a subset of channels. The pool would be fully allocated before its channels are made. However in a situation where no "language changes" are considered, that seems to be off the table. So then we are concocting workarounds.

One workaround is the goroutine-plus-two-channels scheme described in the original proposal; it is expensive. Another could be to reserve some channels with large buffers and switch over to one of them if a small-buffer channel blocks; that is a lot of extra work. Another is incremental allocation of the buffer by the runtime; that's no extra expense or work, but could allow OOM bugs if you oversize many buffers and can't keep pace with channel traffic.

It is widely regarded as the duty of software engineers to understand the requirements of their application and limitations of its platform. Languages cannot save those who lack said understanding, and must not restrain those who possess it :-)

It's an amusing blog post, and while relevant for some patterns, isn't relevant for the pattern I described earlier.

I filed a proposal for channel buffer pools, #20868. These cannot cause the surprise OOM failures possible with incrementally-allocated (aka infinite) channel buffers.

I'd be interested to know if that proposal would address your case?

rsc commented

@AlekSi, thank you very much for the link to http://ferd.ca/queues-don-t-fix-overload.html!

Lazy allocation of channel buffers is a change that would be required in all implementations or else programs would run in some implementations but fail in others. That's the definition of a language change as I use the term. The language where make(chan int, 1e9) allocates nothing is qualitatively different from the language where it allocates 8 GB of memory.

I'm not sure I agree with that being a language change. The Go spec is admittedly vague on the memory model so it is difficult to arrive at an objective conclusion about this, but in most languages, how the compiler or library allocates memory is generally considered an implementation detail. Memory is always going to be system-specific anyway.

For instance, if you were running on a system with a fixed amount of memory (avoiding issues of memory availability and overcommit varying by machine), the behaviour might still be different from compiler to compiler since perhaps one compiler GCs more aggressively and the program relies on this. Or perhaps one compiler chooses a less memory-efficient representation of some structure in exchange for another tradeoff, but this makes enough difference to trigger an OOM on one compiler but not the other. Or perhaps additional instrumentation like profiling causes additional allocations, creating failure points where none existed before.

I have seen nothing in Go documentation or the spec that implies that any operation, including a channel send, must be incapable of triggering an OOM panic. So if the implementation changed to allocate large channel buffers lazily, I would personally attribute that to just being one of the many vagaries of memory allocation in Go, rather than being a true language change.

rsc commented

When I wrote "that's still effectively a language change, and we're not doing language changes today", I meant my definition of language change. You can argue that I meant something else, but I didn't.

The most productive way to move this conversation forward would be to document real production examples, including real code, where the lack of unlimited-capacity channels harms your ability to write or deploy or manage Go systems. Thanks.

I feel the ability to change the capacity of a chan at run time would be a better solution for this issue.

If a chan becomes full, I think it is more flexible to check that the chan is full (e.g. using select), perform some actions based on that, and then, if appropriate, explicitly increase the chan capacity, rather than rely on chan capacity growing implicitly.

@networkimprov: your channel buffer pools proposal doesn't address my use case. I have just a couple of channels which are created when a burst stream starts and are closed when the burst finishes. It's unknowable ahead of time how long the burst will be, but the machine is sized so that any reasonable burst will fit.

On the topic of lazy allocation, I don't like APIs which do this when you've specified "I want N entries". Those APIs make it impossible to manage your memory consumption and pre-allocate so that you know (or have less risk) you won't run out of memory. This is why my proposal doesn't change existing behaviour: if you ask for a channel with N entries, you get exactly that many. Instead, it allows you to explicitly say "I don't know how many I'll need, so please allocate as needed".

@rsc: Here's a real production example: code to support adding objects in an objectserver, where new objects have to be stashed, sent upstream for hash collision detection and then committed locally: https://github.com/Symantec/Dominator/blob/master/objectserver/rpcd/lib/addObjectsWithMaster.go
See the ugly code where I have to create a pair of channels and a manager goroutine for each queue. I'd like to be able to eliminate this code with a simple ch := make(chan T, -1).

Since you only need a couple of burst-stream channels, the resource cost of the channel goroutine is negligible, and its code is pretty simple. (Looking at your linked source, I imagine you could unify the two new*Queue and manage*Queue functions.)

You'll only advance this proposal with a problem that cries out for a language solution. How much pain is your current solution causing?

I have code that makes the queues generic: https://github.com/Symantec/Dominator/tree/master/lib/queue
but that then requires run-time type assertions, which throw away the benefits of compile-time type checking.

As I said at the start of this thread, I have a solution, but it's a bit ugly. With a simple tweak to the API I (and anyone else who wants queues) can throw away a bunch of boilerplate. Russ asked for examples, I provided one. Others can chime in with their examples :-)

Btw, not sure if it was mentioned earlier, but unlimited capacity channels are already possible, like this:

// make an unbuffered channel
ch := make(chan int)
// and here's how we send
go func() {
    ch <- 2
}()

That's not an unlimited capacity channel. That's a work-around that allows you to buffer writes to a channel. Syntactically it's quite different. In my opening post I explain the problems with work-arounds.

Well, your downsides are that it's a lot of code and it's not statically type-checked, neither of which applies to what I wrote. Additionally, channels just happen to behave like queues, but they were never intended for implementing the queue data structure.

It seems then you've missed the point I was making. Yes, it's a lot of code, and yes, the generic version doesn't have static type checking. I originally stated that I didn't like these work-arounds and that the clean solution is to allow for creating unlimited capacity channels.

A specific disadvantage of your approach is that it creates a goroutine for each object you put on the queue. That costs far more memory, as the memory consumption of a goroutine is typically far greater than the size of an object.

I understand that, but I still say that channels are not for creating queues; they're for communicating between goroutines. Here's simple and actually fast code to have a queue in Go:

var queue []T

// push
queue = append(queue, x)

// pop
x, queue = queue[0], queue[1:]

It seems like this code would cause too many allocations, but it's really fast in my experience (I use it for real-time audio processing).

Well, I disagree about channels not being appropriate for creating queues.

Perhaps in your application the approach you outline is not too expensive. In my application, the queue would continue to grow until the stream burst is fully processed and the job terminates. With my work-around or with unlimited capacity channels (which I expect would be implemented using a linked list), the queue is only ever as large as the number of unconsumed objects. While the worst-case behaviour is similar, the typical behaviour is better with one of my approaches.

I implemented an optimized linked-list queue too (because I was worried that simply using a slice would be too slow), but it turned out not to be faster than the slice approach. I believe it could work for your use-case too; you never know the performance unless you try ;)

(EDIT: just in case of misunderstanding, the slice queue correctly garbage collects the popped elements after reaching capacity and growing, which happens quite often.)
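For reference (a sketch, not the commenter's code), a pop can also release the popped element immediately by clearing the vacated slot, rather than waiting for the backing array to be reallocated:

var zero T
x, queue[0] = queue[0], zero // clear the slot so the element can be collected
queue = queue[1:]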

As I said at the start, I have a work-around that works for me. I have a generic package (using reflect) and I have concrete typed versions of the queue. It is memory efficient and the computational overhead of managing the (concrete typed) queue is negligible; event processing and network traffic dominate. My motivation at this point is code cleanliness and eliminating boilerplate. The approach you appear to actually be using is quite similar to the concrete typed queue I have, although with my approach I have real channels to send/receive to/from, so I can use them in select statements. I consider this preferable as it allows me to follow the preferred patterns in Go for event processing.

@faiface

channels just happen to behave like queues, but they were never intended for implementing the queue data structure

A channel is a simple, thread-safe interface to a queue, and that is exactly what @rgooch coded. I don't agree that it's a lot of code, nor a workaround. However if you have a LOT of queues, this goroutine-plus-two-channels method is memory hungry relative to an ordinary channel with a relatively large static buffer. Hence proposal #20868.

I +1 this not because I want unlimited channel buffer length, but because I want unlimited size to the data passed through the channel. I'm currently at a point where I cannot add 1 more field to the JSON I'm passing through the channel, with a buffer length of 1, without a panic. This may be the wrong discussion for this, but it's the closest one I've found to addressing this issue.

fatal error: newproc: function arguments too large for new goroutine

I want unlimited size to the data passed through the channel

There must be a reason you're not sending pointers to json strings on the channel?

@swizzley: Channel elements are restricted to 64KB. Looking at the code I see no real reason for this, we could change it to 4GB with no trouble.

That said, you probably don't want to be sending large items by value. As @networkimprov said, passing by pointer is much more efficient: fewer copies, and less work to do while holding the channel lock.

@swizzley, If you'd like to pursue raising the max element size, please open a separate issue. Let's keep this one about number of elements, not their size.

If we had generic types, unlimited channels could be implemented in a library with full type safety. A library would also make it possible to improve the implementation easily over time as we learn more. As various people said above, putting unlimited channels into the language seems like an attractive nuisance; most programs do need backpressure.

Closing on the assumption that we will get some type of generics. We can reopen if we decide that that is definitely not happening.
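For a picture of what such a library might look like once some form of generics exists (a sketch under that assumption, not a committed design or API):

// NewUnbounded returns a linked send/receive pair backed by an unbounded
// in-memory buffer. Closing the send side drains the buffer and then
// closes the receive side.
func NewUnbounded[T any]() (chan<- T, <-chan T) {
    in := make(chan T)
    out := make(chan T)
    go func() {
        var buf []T
        src := (<-chan T)(in) // local alias; set to nil once closed
        for src != nil || len(buf) > 0 {
            var send chan T // nil disables the send case below
            var next T
            if len(buf) > 0 {
                send, next = out, buf[0]
            }
            select {
            case v, ok := <-src:
                if !ok {
                    src = nil // sender closed: drain buf, then exit
                    continue
                }
                buf = append(buf, v)
            case send <- next:
                buf = buf[1:]
            }
        }
        close(out)
    }()
    return in, out
}

Callers would use it as send, receive := NewUnbounded[int]() and otherwise read, write, and select on the two ends as ordinary channels.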

@ianlancetaylor "channels ... implemented in a library" implies a forthcoming mechanism to allow third-party channel implementations that are accessible with <- etc.

That would be a most welcome addition... is it true?

@networkimprov I think that kind of feature, which I usually call operator methods a la C++, would be fairly unlikely. I'm not aware of any current proposals for that.

Then how could a library, even given generics, "implement" unlimited channels? It would either have to reinvent the channel API and supporting runtime mechanism, or simply encapsulate the expensive goroutine-plus-two-channels scheme discussed above.

A scheme to allow third-party channel implementations would not look like C++ operator overloading per se. Such a channel implementation would have to provide a specific set of methods, as we do for an interface.

@networkimprov It seems to me that any system that needs to handle a lot of incoming messages to a channel faster than they can be processed directly would need a dedicated goroutine to drain that channel quickly; and possibly store the data for further processing (e.g., sending to a slower channel). Using an unlimited channel for this scenario is simply shifting the problem elsewhere (to the channel's implementation in the runtime). I'm not convinced that making the channel implementation more complex for this (I suspect) rare scenario is justified. It seems better to handle that via a dedicated library. If there is a form of genericity, that library can also be type-safe. I don't see why there's a need for operator overloading.

Such a library should do well in cases of very fast, "bursty" messages. A large enough buffered channel should be able to absorb bursts while a fast dedicated goroutine drains the channel into a ring buffer from which the messages are delivered at a slower pace to the final consumer of the messages. That ring buffer will need to be efficiently implemented, and will need to be able to grow efficiently (irrespective of size) and that will require some careful engineering. Better to leave that code to a library that can be tuned as needed than baking it into the runtime (and then possibly being at the mercy of release cycles).
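As a rough illustration of the structure such a library might tune (again a sketch assuming some form of genericity; the names are made up), a growable ring buffer that doubles its backing array when full:

type ring[T any] struct {
    buf        []T
    head, size int
}

func (r *ring[T]) push(v T) {
    if r.size == len(r.buf) { // full (or empty initial state): grow
        grown := make([]T, 2*len(r.buf)+1)
        for i := 0; i < r.size; i++ {
            grown[i] = r.buf[(r.head+i)%len(r.buf)]
        }
        r.buf, r.head = grown, 0
    }
    r.buf[(r.head+r.size)%len(r.buf)] = v
    r.size++
}

func (r *ring[T]) pop() T { // callers must check r.size > 0 first
    v := r.buf[r.head]
    var zero T
    r.buf[r.head] = zero // release the reference for the GC
    r.head = (r.head + 1) % len(r.buf)
    r.size--
    return v
}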

If you disagree, it would be useful for us (and everybody else) to have a concrete scenario (experience report) showing how such an approach is not sufficient.

@griesemer I'm not actually a proponent of unlimited channels. The use case that concerns me is a large group of channels, any of which can see heavy traffic for limited periods. For that, I proposed #20868.

Re third-party channel implementations, which we might call "channel plugins," they would be useful where you wish to select on a non-channel I/O source along with some channels. However one can accomplish this today with a goroutine that links the I/O with a channel, and I assume that's not very expensive since no one has proposed channel plugins yet :-)

BTW, my great gratitude to you and your colleagues for this wonderful language! <3

For queuing this might be of interest:
https://apenwarr.ca/log/?m=201708#14

That's not really relevant to the use-case that I described at the start of this thread.

Like bolting on async I/O operations, an unlimited channel contradicts the go keyword. A limited buffer channel is fine, because the user can create as many goroutines as desired. A blocked channel write does no harm to the process as a whole: the CPU is soon consumed by other goroutines, without incurring a context-switch performance penalty.

If you want async I/O or unlimited channels, maybe you should consider switching to another language. Doing this would make Go's key feature meaningless: writing code in a synchronous style that has the same effect as async code in languages without language-level coroutines.