golang/go

proposal: context: add Merge

navytux opened this issue · 81 comments

EDIT 2023-02-05: Last try: #36503 (comment).
EDIT 2023-01-18: Updated proposal with 2 alternatives: #36503 (comment).
EDIT 2023-01-06: Updated proposal: #36503 (comment).
EDIT 2020-07-01: The proposal was amended to split cancellation and values concerns: #36503 (comment).


( This proposal is an alternative to #36448. It proposes to add context.Merge instead of exposing a general context API for linking up third-party contexts into the parent-children tree for efficiency )

The current context package API provides primitives to derive a new context from one parent - WithCancel, WithDeadline and WithValue. This functionality covers many practical needs, but not merging - the case where it is necessary to derive a new context from multiple parents. While it is possible to implement merge functionality in a third-party library (e.g. lab.nexedi.com/kirr/go123/xcontext), with the current state of the context package such implementations are inefficient, as they need to spawn an extra goroutine to propagate cancellation from parents to child.

To solve this inefficiency I propose to add Merge functionality to the context package. The other possibility would be to expose a general mechanism to glue arbitrary third-party contexts into the context tree. However, since a) Merge is a well-defined concept, and b) there are (currently) no other well-known cases where a third-party context would need to allocate its own done channel (see #28728; this is the case where an extra goroutine for cancel propagation currently needs to be spawned), I tend to think that it makes more sense to add Merge support to the context package directly instead of exposing a general mechanism for gluing arbitrary third-party contexts.

Below is a description of the proposed API and rationale:

---- 8< ----

Merging contexts

Merge can be handy in situations where a spawned job needs to be canceled whenever either of two contexts becomes done. This frequently arises with service methods that accept a context as argument while the service itself, on another control line, can be instructed to become non-operational. For example:

func (srv *Service) DoSomething(ctx context.Context) (err error) {
	defer xerr.Contextf(&err, "%s: do something", srv)

	// srv.serveCtx is context that becomes canceled when srv is
	// instructed to stop providing service.
	origCtx := ctx
	ctx, cancel := xcontext.Merge(ctx, srv.serveCtx)
	defer cancel()

	err = srv.doJob(ctx)
	if err != nil {
		if ctx.Err() != nil && origCtx.Err() == nil {
			// error due to service shutdown
			err = ErrServiceDown
		}
		return err
	}

	...
}

func Merge

func Merge(parent1, parent2 context.Context) (context.Context, context.CancelFunc)

Merge merges two contexts into one.

The result context:

  • is done when parent1 or parent2 is done, or cancel called, whichever happens first,
  • has deadline = min(parent1.Deadline, parent2.Deadline),
  • has associated values merged from parent1 and parent2, with parent1 taking precedence.

Canceling this context releases resources associated with it, so code should call cancel as soon as the operations running in this Context complete.

---- 8< ----

To merge the done channels of ctx and srv.serveCtx, the current implementation has to allocate its own done channel and spawn a corresponding goroutine:

https://lab.nexedi.com/kirr/go123/blob/5667f43e/xcontext/xcontext.go#L90-118
https://lab.nexedi.com/kirr/go123/blob/5667f43e/xcontext/xcontext.go#L135-150

context.WithCancel, when called on the resulting merged context, has to spawn its own propagation goroutine too.

For reference, here is the context.Merge implementation in Pygolang that does parent-child binding via plain data structures:

https://lab.nexedi.com/kirr/pygolang/blob/64765688/golang/context.cpp#L74-76
https://lab.nexedi.com/kirr/pygolang/blob/64765688/golang/context.cpp#L347-352
https://lab.nexedi.com/kirr/pygolang/blob/64765688/golang/context.cpp#L247-251
https://lab.nexedi.com/kirr/pygolang/blob/64765688/golang/context.cpp#L196-226

/cc @Sajmani, @rsc, @bcmills

Judging by #33502 this proposal seems to have been missed. Could someone please add it to Proposals/Incoming project? Thanks.

	ctx, cancel := xcontext.Merge(ctx, srv.serveCtx)

Isn't the struct-held reference to a context a smell (regardless of whether it is long-lived)? If your server must be cancellable, isn't it better practice to establish a "done" channel for it (and select on that + ctx in the main thread) and write to it when the server should be done? This does not incur an extra goroutine.

@dweomer, as I already explained in the original proposal description, there are two cancellation sources: 1) the server can be requested to shut down by its operator, and 2) a request can be requested to be canceled by the client who issued it. This means that any handler that is spawned to serve a request must be canceled whenever either "1" or "2" triggers. How does "select on done + ctx in main thread" help here? Which context should one pass into a request handler when spawning it? Or do you propose we pass both ctx and done into all handlers and add done to every select where previously only ctx was there? If that is indeed what you are proposing, I perceive Merge as a much cleaner solution, because handlers still receive only one ctx and the complexity of merging cancellation channels is not exposed to users.

Re smell: I think it is not. Go itself actually uses this approach in database/sql, net/http (2, 3, 4, 5, 6) and os/exec. I suggest reading Go and Dogma as well.

seebs commented

I just reinvented this independently. The situation is that I have two long-lived things: a shared work queue which several things could be using, and the individual things. It's conceptually possible to want to close the shared work queue and make a new one for the things to use... And then there's an operation where the work queue scans through one of the things to look for extra work. That operation should shut down if either the thing it's scanning shuts down, or the shared work queue in general gets shut down.

Of course, context.Merge wouldn't quite help as one of them currently exposes a chan struct{}, not a Context.

rsc commented

I think I understand the proposal.

I'm curious how often this comes up. The Merge operation is significantly more complex to explain than any existing context constructor we have. On the other hand, the example of "server has a shutdown context and each request has its own, and have to watch both" does sound pretty common. I guess I'm a little confused about why the request context wouldn't already have the server context as a parent to begin with.

I'm also wondering whether Merge should take a ...context.Context instead of hard-coding two (think io.MultiReader, io.MultiWriter).

seebs commented

I do like the MultiReader/MultiWriter parallel; that seems like it's closer to the intent.

In our case, we have a disk-intensive workload that wants to be mediated, and we might have multiple network servers running independently, all of which might want to do some of that kind of work. So we have a worker that sits around waiting for requests, which come from those servers. And then we want to queue up background scans for "work that got skipped while we were too busy but we wanted to get back to it". The background scan of any given individual network server's workload is coming in parented by the network server, but now it also wants to abort if the worker decides it's closing. But the worker's not really contingent on the network server, and in some cases could be stopped or restarted without changing the network servers.

It's sort of messy, and I'm not totally convinced that this design is right. I think it may only actually matter during tests, because otherwise we wouldn't normally be running multiple network servers like this at once in a single process, or even on a single machine.

@seebs, if the background work is continuing after the network handler returns, it's generally not appropriate to hold on to arbitrary values from the handler's Context anyway. (It may include stale values, such as tracing or logging keys, and could end up preventing a lot of other data reachable via ctx.Value() from being garbage-collected.)

seebs commented

... I think I mangled my description. That's true, but it doesn't happen in this case.

Things initiated from the network requests don't keep the network request context if any of their work has to happen outside of that context. They drop something in a queue and wander off.

The only thing that has a weird hybrid context is the "background scanning", because the background scanning associated with a given network server should stop if that server wants to shut down, but it should also stop if the entire worker queue wants to shut down even when the network server is running. But the background scanning isn't triggered by network requests, it's something the network server sets up when it starts. It's just that it's contingent on both that server and the shared background queue which is independent from all the servers.

@rsc, thanks for feedback.

Yes, as you say, the need for merge is pretty common - practically in almost all client-server cases, on both the client and server sides.

I guess I'm a little confused about why the request context wouldn't already have the server context as a parent to begin with.

For the networked case - when client and server interoperate via some connection where messages go serialized - it is relatively easy to derive the handler context from the server's base context and manually merge it with the context of the request:

  • client serializes context values into message on the wire;
  • client sends corresponding message when client-side context is canceled;
  • server creates context for handler deriving it from base server context;
  • server applies decoded values from wire message to derived context;
  • server stores derived context cancel in data structure associated with stream through which request was received;
  • server calls handler.cancel() when receiving through the stream a message corresponding to request cancellation.

Here merging can happen manually because the client request arrives at the server in serialized form.
The cancellation linking for the client-server branch is implemented via message passing and a serve loop. The data structures used for the gluing resemble what Merge would do internally.

In other cases - where requests are not serialized/deserialized - the merge is needed for real, for example:

  1. on the server, a handler might need to call another internal in-process service run with its own context;
  2. client and server are in the same process, each run with their own context;
  3. on the client, every RPC stub that is invoked with a client-provided context needs to make sure to send an RPC-cancellation whenever either that user-provided context is canceled or the underlying stream is closed;
  4. etc...

Since "1" and "2", even though they are found in practice, might be viewed as a bit artificial, let's consider "3", which happens in practice all the time:

Consider any client method for e.g. an RPC call - it usually looks like this:

func (cli *Client) DoSomething(ctx context.Context, ...) {
    cli.conn.Invoke(ctx, "DoSomething", ...)
}

conn.Invoke needs to make sure to issue the request to the server under a context that is canceled whenever ctx is canceled or whenever cli.conn is closed. For e.g. gRPC, cli.conn is a multiplexed stream over an HTTP/2 transport, and the stream itself must be closed whenever the transport link is closed or brought down. This is usually implemented by associating corresponding contexts with the stream and the link, and propagating cancellation from link.ctx to stream.ctx on link close/down. cli.conn.Invoke(ctx, ...) should thus do exactly what Merge(ctx, cli.conn.ctx) is doing.

Now, since there is no Merge, everyone implements this functionality by hand, either with an extra goroutine or by doing something like

reqCtx, reqCancel = context.WithCancel(ctx)

, keeping a registry of issued requests with their cancels in the link/stream data structures, and explicitly invoking all those cancels when the link/stream goes down.

Here is e.g. how gRPC implements it:

And even though such explicit gluing is possible for users to implement, people get tired of it and start to use the "extra goroutine" approach at some point:

In other words, the logic and complexity that Merge could handle internally, once and for everyone, is without Merge scattered across every user and intermixed with the rest of the application-level logic.

On my side I would need Merge e.g. on the client,

and on the server, where the context of spawned handlers is controlled by messages from another server, which can tell the first server to stop being operational (it can as well later be told by a similar message from a second server to resume providing operational service):

https://lab.nexedi.com/kirr/neo/blob/85658a2c/go/neo/storage.go#L52-56
https://lab.nexedi.com/kirr/neo/blob/85658a2c/go/neo/storage.go#L422-431
https://lab.nexedi.com/kirr/neo/blob/85658a2c/go/neo/storage.go#L455-457
https://lab.nexedi.com/kirr/neo/blob/85658a2c/go/neo/storage.go#L324-343

and in many other places...


I often see simplicity as complexity put under control and wrapped into a simple interface.
From this point of view Merge is a perfect candidate, because 1) it is a well-defined concept, 2) it offloads users from spreading that complexity throughout their libraries/applications, and 3) it completes a kind of closure for the group of context operations, which was incomplete without it.

On "3" I think the following analogies are appropriate:

Without Merge the context package is like

  • Git with commit and branches, but no merge;
  • Go with go and channels, but no select;
  • SSA without φ nodes,
  • ...

In other words Merge is a fundamental context operation.

Yes, Merge requires willingness from the Go team to take that complexity and absorb it into the Go API. Given that we often see reluctance to do so in other cases, I, sadly, realize that it is very unlikely to happen. On the other hand there is still a tiny bit of hope on my side, so I would be glad to actually be wrong on this...

Kirill

P.S. I tend to agree about converting Merge to accept (parentv ...context.Context) instead of (parent1, parent2 context.Context).

P.P.S. merging was also discussed a bit in #30694 where @taralx wrote: "While it is possible to do this by wrapping the handler and merging the contexts, this is error-prone and requires an additional goroutine to properly merge the Done channels."

rsc commented

@Sajmani and @bcmills, any thoughts on whether we should add context.Merge as described here? (See in particular the top comment.)

rsc commented

/cc @neild @dsnet as well for more context opinions

neild commented

Within Google's codebase, where the context package originated, we follow the rule that a context.Context should only be passed around via the call stack.

From https://github.com/golang/go/wiki/CodeReviewComments#contexts:

Don't add a Context member to a struct type; instead add a ctx parameter to each method on that type that needs to pass it along. The one exception is for methods whose signature must match an interface in the standard library or in a third party library.

This rule means that at any point in the call stack, there should be exactly one applicable Context, received as a function parameter. When following this pattern, the merge operation never makes sense.

While merging context cancellation signals is straightforward, merging context values is not. Contexts can contain trace IDs and other information; which value would we pick when merging two contexts?

I also don't see how to implement this efficiently without runtime magic, since it seems like we'd need to spawn a goroutine to wait on each parent context. Perhaps I'm missing something.

For values, Merge would presumably bias toward one parent context or the other. I don't see that as a big problem.

I don't think runtime magic is needed to avoid goroutines, but we would at least need some (subtle) global lock-ordering for the cancellation locks, since we could no longer rely on the cancellation graph being tree-structured. It would at least be subtle to implement and test, and might carry some run-time overhead.

Context combines two somewhat-separable concerns: cancelation (via the Deadline, Done, and Err methods) and values. The proposed Merge function combines these concerns again, defining how cancelation and values are merged. But the example use case only relies on cancelation, not values: https://godoc.org/lab.nexedi.com/kirr/go123/xcontext#hdr-Merging_contexts

I would feel more comfortable with this proposal if we separated these concerns by providing two functions, one for merging two cancelation signals, another for merging two sets of values. The latter came up in a 2017 discussion on detached contexts: #19643 (comment)

For the former, we'd want something like:

ctx = context.WithCancelContext(ctx, cancelCtx)

which would arrange for ctx.Done to be closed when cancelCtx.Done is closed and ctx.Err to be set from cancelCtx.Err, if it's not set already. The returned ctx would have the earlier Deadline of ctx and cancelCtx.

We can bikeshed the name of WithCancelContext, of course. Other possibilities include WithCanceler, WithCancelFrom, CancelWhen, etc. None of these capture Deadline, too, though.

rsc commented

@navytux, what do you think about Sameer's suggestion to split the two operations of WithContextCancellation and WithContextValues (with better names, probably)?

@Sajmani, @rsc, everyone, thanks for feedback.

First of all I apologize for the delay in replying, as I'm very busy these days and it is hard to find time to reply properly. This issue was filed 7 months ago, when things were very different on my side. Anyway, I quickly looked into what @Sajmani referenced in #36503 (comment) and into what others say; my reply is below:

Indeed, Context combines two things in one interface: cancellation and values. Those things, however, are orthogonal. While merging cancellation is straightforward, merging values is not - in general, merging values requires a merging strategy that decides how to combine values from multiple sources. And in the general case the merging strategy is custom and application-dependent.

My initial proposal uses a simple merging strategy, with values from parent1 taking precedence over values from parent2. It is the simplest strategy I came up with while trying to make Merge work universally. However, the values part of my proposal, as others have noted, is indeed the weakest, as that merging strategy is not always appropriate.

Looking into what @Sajmani said in #19643 (comment) and #19643 (comment), and with the idea to separate the cancellation and values concerns, I propose to split the Context interface into a cancellation part and a values part and rework the proposal as follows:

// CancelCtx carries deadline and cancellation signal across API boundaries.
type CancelCtx interface {
        Deadline() (deadline time.Time, ok bool)
        Done() <-chan struct{}
        Err() error
}

// CancelFunc cancels a CancelCtx, telling an operation to abandon its work.
type CancelFunc func()

// Values carries set of key->value pairs across API boundaries.
type Values interface {
        Value(key interface{}) interface{}
}

// Context carries deadline, cancellation signal, and other values across API boundaries.
type Context interface {
        CancelCtx
        Values
}

// ... (unchanged)
func WithCancel   (parent Context) (ctx Context, cancel CancelFunc)
func WithDeadline (parent Context, d  time.Time) (ctx Context, cancel CancelFunc)
func WithTimeout  (parent Context, dt time.Duration) (ctx Context, cancel CancelFunc)
func WithValue    (parent Context, key, val interface{}) Context


// MergeCancel merges cancellation from parent and set of cancel contexts.
//
// It returns a copy of parent with a new Done channel that is closed whenever
//
//      - parent.Done is closed, or
//      - any of CancelCtx from cancelv is canceled, or
//      - cancel called
//
// whichever happens first.
//
// Returned context has Deadline as the earliest of parent and any of the cancels.
// Returned context inherits values from parent only.
func MergeCancel(parent Context, cancelv ...CancelCtx) (ctx Context, cancel CancelFunc)

// WithNewValues returns a Context with a fresh set of Values. 
//
// It returns a Context that satisfies Value calls using vs.Value instead of parent.Value.
// If vs is nil, the returned Context has no values. 
//
// Returned context inherits deadline and cancellation only from parent. 
//
// Note: WithNewValues can be used to extract "only-cancellation" and
// "only-values" parts of a Context via
//
//      ctxNoValues := WithNewValues(ctx, nil)           // only cancellation
//      ctxNoCancel := WithNewValues(Background(), ctx)  // only values
func WithNewValues(parent Context, vs Values) Context 

Values and WithNewValues essentially come from #19643. Merge is reworked into MergeCancel, which merges only the cancellation signal, not values. This separates the values and cancellation concerns, is general (does not hardcode any merging strategy for values), and can be implemented without an extra goroutine.

For reference, here is how the originally-proposed Merge could be implemented in terms of MergeCancel and WithNewValues:

// Merge shows how to implement Merge from https://github.com/golang/go/issues/36503
// in terms of MergeCancel and WithNewValues.
func Merge(parent1, parent2 Context) (Context, CancelFunc) {
        ctx, cancel := MergeCancel(parent1, parent2)
        v12 := &vMerge{[]Values{parent1, parent2}}
        ctx = WithNewValues(ctx, v12)
        return ctx, cancel
}

// vMerge implements a simple merging strategy: values from vv[i] take
// precedence over values from vv[j] for i<j.
type vMerge struct {
        vv []Values
}

func (m *vMerge) Value(key interface{}) interface{} {
        for _, v := range m.vv {
                val := v.Value(key)
                if val != nil {
                        return val
                }
        }
        return nil
}

Regarding the implementation: it was already linked in my original message but, as people still raise concerns about whether the "no extra goroutine" property is achievable, and about lock ordering, here once again is how libgolang implements cancellation merging without an extra goroutine and without any complex lock ordering:

https://lab.nexedi.com/nexedi/pygolang/blob/0e3da017/golang/context.h
https://lab.nexedi.com/nexedi/pygolang/blob/0e3da017/golang/context.cpp

Maybe I'm missing something, and of course it will have to be adapted to MergeCancel and WithNewValues, but to me the implementation is relatively straightforward.

Kirill

/cc @zombiezen, @jba, @ianlancetaylor, @rogpeppe for #19643

rsc commented

Thanks for the reply. We're probably not going to split the Context interface as a whole at this point.
Note that even the ...CancelCtx would not accept a []Context, so that would be a stumbling block for users.

The value set merge can be done entirely outside the context package without any inefficiency. And as @neild points out, it's the part that is the most problematic.

The cancellation merge needs to be inside context, at least with the current API, or else you'd have to spend a goroutine on each merged context. (And we probably don't want to expose the API that would be required to avoid that.)

So maybe we should focus only on the cancellation merge and ignore the value merge entirely.

It still doesn't seem like we've converged on the right new API to add, though.

@bradfitz points out that not just the cancellation but also the timeouts get merged, right?
(And the error that distinguishes between those two cases gets propagated?)
So it's not really only merging cancellation.

It does seem like the signature is

func SOMETHING(parent Context, cancelv ...CancelCtx) (ctx Context, cancel CancelFunc)

Or maybe the op to expose is being able to cancel one context when another becomes done, like:

// Link arranges for context x to become done when context y does.
func Link(x, y Context) 

(with better names).

?

It seems like we're not yet at an obviously right answer.

@rsc, thanks for feedback. I think I need to clarify my previous message:

  • last time I did not propose to include Merge - I only showed how it would be possible to implement Merge functionality in a third-party package with what comes in the proposal;

  • my proposal consists only of context.MergeCancel and context.WithNewValues;

  • context.WithNewValues comes directly from #19643 (comment) and #19643 (comment) - it is exactly what @Sajmani proposed there;

  • context.WithNewValues, and thinking about context.Context as being composed of two parts (cancellation + values), comes from @Sajmani's request to do so in #36503 (comment), where he says:

    "Context combines two somewhat-separable concerns: cancelation (via the Deadline, Done, and Err methods) and values
    ...
    I would feel more comfortable with this proposal if we separated these concerns by providing two functions, one for merging two cancelation signals, another for merging two sets of values..."

  • as @Sajmani explains, WithNewValues cannot be efficiently implemented outside the context package; that's why I brought it in, to show the full picture.

  • regarding merging of cancellation it is

    func MergeCancel(parent Context, cancelv ...CancelCtx) (ctx Context, cancel CancelFunc)

    the only potential drawback here is that automatic conversion of []Context to []CancelCtx does not work.

    However, there is the same issue with e.g. io.MultiReader(readerv ...io.Reader): if, for example, someone has a []io.ReadCloser, or a []OTHERTYPE with OTHERTYPE implementing io.Reader, it won't be possible to pass that slice to io.MultiReader directly without an explicit conversion:

    package main
    
    import "io"
    
    func something(rv ...io.Reader) {}
    
    func f2() {
            var xv []io.ReadCloser
            something(xv...)
    }
    ./argv.go:9:11: cannot use xv (type []io.ReadCloser) as type []io.Reader in argument to something
    

    From this point of view, personally, I think it is ok for cancelv to be ...CancelCtx, because

    • explicit usage with direct arguments - even of Context type - without ... will work;
    • for the rare cases where users want to use ..., they will have to explicitly convert to []CancelCtx, which is generally unavoidable today, as the io.MultiReader example shows.

    That said, I'm ok if we change MergeCancel to accept ...Context, or even just one extra explicit argument, MergeCancel(parent Context, cancelCtx Context). However I think that would be worse overall because it loses generality.

  • timeouts are indeed merged as part of cancellation, because timeouts are directly related to cancellation and are included into CancelCtx for that reason. They are not values. We cannot skip merging timeouts when merging cancellation.

    For example after

    func (srv) doSomething(ctx) {
        ctx, cancel := context.Merge(ctx, srv.serveCtx)

    I, as a user, expect

    • ctx to be cancelled in particular when srv.serveCtx is cancelled;
    • ctx to have deadline not longer than deadline for srv.serveCtx.

    if we merge only the cancel signal but not the deadline, the resulting context can be canceled early - due to the srv.serveCtx timeout - while its deadline could be infinity, if the original ctx is e.g. Background. To me it is not good to report deadline=∞ when it is known in advance that the operation will be canceled due to a timeout.

    That's my rationale for treating and merging deadlines together with cancellation.

    I think it coincides with @Sajmani's treatment that cancellation is constituted by Deadline, Done and Err, instead of only Done and Err without Deadline.


Regarding Link - I think it is indeed better to avoid exposing this general functionality in the API. Link can create cycles, and besides that it is not possible to implement Link for an arbitrary third-party context: given only the Context interface there is no way to cancel it, even via an extra goroutine or whatever - at least not without introducing extra interfaces a context must expose to be linkable. Contrary to that, MergeCancel is a well-defined operation and can be implemented generally - efficiently if all arguments are native to the context package, and via an extra goroutine to propagate cancellation for contexts coming from third-party places.

What do you think? Does my feedback clarify anything?
It would be good to also see what @Sajmani thinks.

Kirill

rsc commented

@navytux,

FWIW, @Sajmani's comment from 2017 #19643 (comment) is out of date. WithNewValues can be implemented efficiently outside the context package, after changes we made recently.

Re: MergeCancel(parent Context, cancelCtx ...Context) being "worse overall because it loses generality", what generality does it lose? No one has any values of type CancelCtx today, so there is nothing to generalize. Even if we added the CancelCtx type, wouldn't every instance be a Context anyway? Certainly the only ones we can handle efficiently would be contexts.

It does sound like we're converging on

// MergeCancel returns a copy of parent with additional deadlines and cancellations applied
// from the list of extra contexts. The returned context's Done channel is closed
// when the returned cancel function is called or when parent's Done channel is closed,
// or when any of the extra contexts' Done channels are closed.
//
// Canceling this context releases resources associated with it, so code should
// call cancel as soon as the operations running in this Context complete.
func MergeCancel(parent Context, extra ...Context) (ctx Context, cancel CancelFunc)

Does anyone object to those semantics? Maybe it should be MergeDone? Some better name?

@rsc

WithNewValues can be implemented efficiently outside the context package, after changes we made recently.

Can you elaborate on those changes?

I think if we want to use only a subset of the Context methods, we should require only the necessary subset of those methods, not the full Context interface. Otherwise we still have the same awkward asymmetry from the straight-up Merge with Value fallback - it's just that that asymmetry happens after the first argument instead of uniformly across all arguments. (Eliminating the Value method also doesn't provide much benefit in terms of implementation complexity: it addresses the straightforward Value-chaining problem, but not the more difficult lock-ordering problem.)

The issue of assignability from []Context could be addressed using the current generics draft instead, although I'm not sure whether that's better or worse:

type DoneCtx interface {
	Done() <-chan struct{}
	Err() error
}

func MergeDone[type DC DoneCtx](parent Context, extra ...DC) (ctx Context, cancel CancelFunc)

@rsc, everyone, thanks for feedback.

FWIW, @Sajmani's comment from 2017 #19643 (comment) is out of date. WithNewValues can be implemented efficiently outside the context package, after changes we made recently.

@rsc, here you probably mean commit 0ad3686 (CL196521), which implemented done propagation through foreign contexts by introducing a dedicated cancelCtxKey value type:

go/src/context/context.go

Lines 288 to 302 in 11f92e9

// &cancelCtxKey is the key that a cancelCtx returns itself for.
var cancelCtxKey int
// parentCancelCtx returns the underlying *cancelCtx for parent.
// It does this by looking up parent.Value(&cancelCtxKey) to find
// the innermost enclosing *cancelCtx and then checking whether
// parent.Done() matches that *cancelCtx. (If not, the *cancelCtx
// has been wrapped in a custom implementation providing a
// different done channel, in which case we should not bypass it.)
func parentCancelCtx(parent Context) (*cancelCtx, bool) {
	done := parent.Done()
	if done == closedchan || done == nil {
		return nil, false
	}
	p, ok := parent.Value(&cancelCtxKey).(*cancelCtx)

go/src/context/context.go

Lines 353 to 358 in 11f92e9

func (c *cancelCtx) Value(key interface{}) interface{} {
	if key == &cancelCtxKey {
		return c
	}
	return c.Context.Value(key)
}

go/src/context/context.go

Lines 264 to 285 in 11f92e9

if p, ok := parentCancelCtx(parent); ok {
	p.mu.Lock()
	if p.err != nil {
		// parent has already been canceled
		child.cancel(false, p.err)
	} else {
		if p.children == nil {
			p.children = make(map[canceler]struct{})
		}
		p.children[child] = struct{}{}
	}
	p.mu.Unlock()
} else {
	atomic.AddInt32(&goroutines, +1)
	go func() {
		select {
		case <-parent.Done():
			child.cancel(false, parent.Err())
		case <-child.Done():
		}
	}()
}

In other words, in today's implementation, for cancellation to work efficiently the cancelCtxKey value has to be reachable through the values chain; otherwise, the next time e.g. WithCancel is called, it will have to spawn a goroutine to propagate cancellation.

If we imagine WithNewValues implemented outside of the context package, how would that third-party implementation a) preserve cancelCtxKey when switching values to the new set, and b) avoid injecting cancelCtxKey from the new set of values, which would corrupt cancellation? All that given that cancelCtxKey is private to the context package.

Maybe I'm missing something, but to me this suggests that even today, WithNewValues cannot be implemented outside of the context package efficiently, or even correctly.


Regarding cancellation: it is good we start to converge to common understanding, thanks.

For naming I think MergeCancel is a good name. As we discussed above, cancellation consists not only of the done channel - it also has a deadline and an associated error. The name also aligns well with the usage of the word "cancel" in other places in the package, for example in the package overview, the cancellation description in the Context interface, and WithCancel. I see MergeDone as a less good alternative.

Regarding ...Context vs ...CancelCtx in the MergeCancel argument: the problem here is that once we establish the signature of MergeCancel, backward compatibility will likely prevent us from changing it later if/when we decide to introduce a CancelCtx type. In other words, if the whole Context interface is not reduced to only the cancellation part (CancelCtx) and only the values part (Values), people will still have to propagate and use the whole Context, even if inside a function only one part is used. This can prevent a cleaner API and mislead programmers into thinking that whenever a context is passed in, the corresponding operation can be cancelled and errored out, or that it can use values associated with the context, where in fact it must not. This coincides with what @bcmills says in #36503 (comment), and is also exactly the same reason @Sajmani was pointing out in #19643 (comment):

In an earlier comment, I proposed defining this interface:

type Values interface {
	Value(key interface{}) interface{}
}

For use with a context.WithNewValues function.

It occurred to me that the ability to separate a context's values from the rest of the context (notably its deadline and cancelation) is also useful in logging and tracing. Some logging APIs want values from a context, but it is somewhat confusing to pass a Context to a Log call, since this suggests that the Log call might be canceled. With a Values interface, we can define:

func Log(vals context.Values, ...)

Which makes it clear that the logger is only consuming the values from the Context.

I hope this clarifies a bit what kind of generality we can lose if we establish cancelv ...Context instead of cancelv ...CancelCtx now.


With all that feedback, I would still like to see, and would appreciate, feedback from @Sajmani.

It was he who raised the "separate cancellation and values" concern in #36503 (comment), and the way I reworked my proposal in #36503 (comment) was directly due to that request.

I feel we are likely to miss the bigger picture without getting feedback from Sameer; that's why I'm asking for it.

Thanks beforehand,
Kirill

rsc commented

I spoke to @Sajmani about this for a while last week. (He doesn't have much time for direct use of the GitHub issue tracker these days.)

He was in favor of defining a type:

// A NameTBD is an interface capturing only the deadline and cancellation functionality of a context.
type NameTBD interface {
	Deadline() (deadline time.Time, ok bool)
	Done() <-chan struct{}
	Err() error
}

func MergeCancel(parent Context, extra ...NameTBD) (ctx Context, cancel CancelFunc)

That makes very clear that the extra parameters have no influence over the values in the result.

And then it would also make sense to do MergeValues:

type NameTBD2 interface {
    Value(interface{}) interface{}
}

func MergeValues(parent Context, extra ...NameTBD2) Context

Then the question is what names to use. For MergeValues and NameTBD2, context.Values seems like a good name for that interface. Having named the interface after the one method (Values not Valueser), maybe using one of the methods in the NameTBD would work for that. context.Done sounds like a predicate function, but maybe context.Deadline?

// A Deadline is an interface capturing only the deadline and cancellation functionality of a context.
type Deadline interface {
	Deadline() (deadline time.Time, ok bool)
	Done() <-chan struct{}
	Err() error
}

// A Values is an interface capturing only the values functionality of a context.
type Values interface {
	Value(interface{}) interface{}
}

And then at that point MergeCancel would actually be MergeDeadline instead.

Thoughts?

cretz commented

Alternatively, context.Valuer and context.Deadliner, or if not wanting er suffix, context.Valued and context.Deadlined (in other OO worlds, it might be "Valueable"/"WithValue" and "Cancelable"/"WithDeadline"/"WithDone"). But now that I think about it, context.Deadline and context.Values works just fine.

Also, one wonders if instead of context.MergeCancel returning (context.Context, context.CancelFunc), it would return context.Deadline and there is a context.Combine(context.Deadline, context.Values) (context.Context, context.CancelFunc). Then you could call context.Combine(context.MergeCancel(ctx1, ctx2), context.MergeValues(ctx1, ctx2)) to get deadline and values combined (and maybe that's what a context.Merge might do anyways as a shortcut), or to only merge cancels w/out values, context.Combine(context.MergeCancel(ctx1, ctx2), context.Background()) so the caller at least isn't confused on why the result of context.MergeCancel lost all of their values. Same with context.MergeValues returning context.Values instead. Granted, I'm probably overthinking it.

jba commented

Deadline sounds like it's some sort of time.Duration-like thing that specifies a deadline.

If you were describing a Context to someone, you would say that it holds values and also has a cancellation aspect. So Values and Cancellation sound like good names to me, even though they're not parallel.

How would MergeCancel interact with the optimizations from #28728, given that the fast-path optimization relies on a key accessed via the Value method?

(Would we type-assert the NameTBD to check whether it has a Value method in order to facilitate a similar fast-path?)

rsc commented

@bcmills, yes, I guess we might have to. But at least the public API would be clear about the main requirements, and also about the fact that the extra cancellation contexts really do not affect the outgoing values.

rsc commented

OK, so it sounds like maybe @jba's Values and Cancellation work for people, with - I assume - MergeValues and MergeCancellation (or MergeCancel)?

Does anyone object to that? Thanks.

( I'm taking a time break due to overload; I hope to review the recent feedback in one month. It would be a pity to accept the proposal without proper review from the original reporter. I apologize for the inconvenience )

rsc commented

On hold for @navytux to weigh in when convenient.

Adding a quick +1 on this thread - my use-case is the same as mentioned in #36503 (comment) , with "server has a shutdown context and each request has its own, and have to watch both"

I have the same issue - a long-running worker has its own context and each task is supplied with another context. All these contexts don't store any values; they are used only for cancellation.
In my case I have even more cancellation mechanisms in different places - contexts and channels (closed to signal cancellation). So I'm searching for a way to combine context+context, context+chan, chan+chan. I can do it manually, but it looks like a lot of people have such demands.

+1
My use case seems slightly different from the above.
I have a "standard" select with cases for a read channel and ctx.Done(). However, the value read will be sent to each of a collection of handlers, each one having its own context. I want to cancel the read if any of the handlers cancels (hence the merged context). At this point I will detect the cancelled handler, remove it, and re-read if any handlers still exist.

Kindly ping @navytux

bjwrk commented

If the values aren't merged, is there a risk that this proposal won't play nice with Cause()? #51365

Hello everyone.

First of all I apologize for being silent here for so long.

I've squeezed some time tonight to reread carefully this conversation and below is how I would go further:

  1. As negotiated with @Sajmani through @rsc, we split context.Context into two interfaces that represent cancellation and values. @jba suggested the names Cancellation and Values, which I find good. This way we have:
// Context carries deadline, cancellation signal, and other values across API boundaries.
type Context interface {
        Cancellation
        Values
}

// Cancellation is an interface capturing only the deadline and cancellation functionality of a context.
type Cancellation interface {
        Deadline() (deadline time.Time, ok bool)
        Done() <-chan struct{}
        Err() error
}

// Values is an interface capturing only the values functionality of a context.
type Values interface {
        Value(key interface{}) interface{}
}
  2. Then for merging cancellation we establish MergeCancel:
// MergeCancel merges cancellation from parent and a set of cancel contexts.
//
// It returns a copy of parent with a new Done channel that is closed whenever
//
//      - parent.Done is closed, or
//      - any of the Cancellations from cancelv is canceled, or
//      - cancel is called,
//
// whichever happens first.
//
// The returned context's Deadline is the earliest of parent's and the cancels'.
// The returned context inherits values from parent only.
func MergeCancel(parent Context, cancelv ...Cancellation) (ctx Context, cancel CancelFunc)

MergeCancel uses ...Cancellation, not ...Context as agreed with @Sajmani in #36503 (comment), and in particular "That makes very clear that the extra parameters have no influence over the values in the result".

  3. For Values it was said (2) that we want to use MergeValues. However, here I'm not so sure that MergeValues is a good choice. As I explained earlier, merging values is inherently custom and requires the application to provide a merge strategy. The proposed API of MergeValues (e.g. just recently at #40221 (comment)) hardcodes the builtin strategy of "first parent wins". This strategy is too limiting, I think. Still, with MergeCancel, Background and this MergeValues - even with this simple builtin strategy - there is a way to build anything, I think - e.g.

    • drop cancellation: MergeValues(Background, ctx),
    • drop values: MergeCancel(Background, ctx),
    • implement a custom merging strategy for values: MergeValues(DropCancel(ctx), vMerge{ctx, ctx2}) (vMerge is something similar to the one in #36503 (comment)).

For the Values part I don't have a strong preference between WithNewValues (as originally suggested in #36503 (comment)) and this MergeValues. I don't have a particular use-case where merging values is necessary, as my primary concern is merging cancellation. Still, if values handling is needed for completeness to progress here, and @Sajmani prefers MergeValues, I would say I should be ok with something close to his proposal:

// MergeValues merges the values of parent and a set of values.
//
// It returns a copy of parent whose Value(key) method works by merging
// parent and values with a "first-win" strategy:
//
//      - it returns parent.Value(key) if that is non-nil,
//      - else it returns values[i].Value(key) for the minimum i where that is non-nil,
//      - else, if no such i exists, it returns nil.
//
// The returned context inherits cancellation from parent only.
func MergeValues(parent Context, values ...Values) Context

Would that be ok?

I'm sorry once again for the delay in replying, and I hope my message is at least a bit useful.

Kirill

neild commented

Looking at the original problem statement in this issue, this stands out to me:

While it is possible to implement merge functionality in third-party library (ex. lab.nexedi.com/kirr/go123/xcontext), with current state of context package, such implementations are inefficient as they need to spawn extra goroutine to propagate cancellation from parents to child.

Another example of this inefficiency is the difficulty of integrating context-based cancellation with a sync.Cond. If you want to wait on a sync.Cond and a context.Context, you need to do so in two goroutines. There have been various proposals to address this by providing a context-aware condition variable, but perhaps there's a more general solution to both these issues.

cancel := context.OnDone(ctx, func() {
  // This func is called when ctx is canceled or expires.
  // It is called at most once.
})
cancel() // Don't call the OnDone func, we aren't interested in it any more.

func MergeCancel(valueCtx context.Context, cancelCtxs ...context.Context) context.Context {
  ctx, cancel := context.WithCancel(valueCtx)
  for _, c := range cancelCtxs {
    context.OnDone(c, cancel)
  }
  return ctx
}

func CondWait(ctx context.Context, cond *sync.Cond) error {
  // The broadcast does wake other waiters on the Cond.
  cancel := context.OnDone(ctx, cond.Broadcast)
  defer cancel()
  cond.Wait()
  return ctx.Err()  
}

I believe it should be possible to implement context.OnDone without an extra goroutine for contexts created by the context package. (Third-party contexts would require a goroutine, as is already the case for context.WithCancel.)

@neild I like this idea, but doesn't this look like a separate proposal?

@neild I'm confused by the cancel function returned by OnDone. Does it cancel ctx, or just cancel the OnDone function? I'm assuming the latter, since OnDone doesn't return a new Context. I would expect any function registered with OnDone to be called whenever ctx is canceled. If I've got that right, we should probably name the function returned from OnDone something else to avoid confusion with the context.CancelFuncs.

Would the context.Detach proposal solve the complexity of cancellation with context.Merge?

Detach + Merge -> MergeValues

If we make a WithoutValues(Context) Context func, then we can easily express merge cancellations

WithoutValues + Merge -> MergeCancellation

A small bonus is that this leans a bit towards the imperative style.

neild commented

@neild I'm confused by the cancel function returned by OnDone. Does it cancel ctx, or just cancel the OnDone function?

The func() returned by OnDone would make it so the OnDone function will no longer be called on the context becoming done. You're right that cancel isn't a good name for it.

Another possibility might be to say that there's no way to cancel an OnDone. You could still limit the scope of one with something like:

ctx, cancel := context.WithCancel(ctx) // create a new cancel context
stopOnDone := make(chan struct{})
context.OnDone(ctx, func() {
  select {
  case <-stopOnDone:
    return // stopped: ignore the done signal
  default:
  }
  // ... react to ctx becoming done ...
})
// ...
close(stopOnDone)
cancel()

A question would be what goroutine the OnDone func runs in. Synchronously with the call to the CancelFunc? Or in a new goroutine? Synchronously means an OnDone can block a CancelFunc.

@neild To me this looks both race-prone and not obvious: 😞

cancel := context.OnDone(ctx, func() {
  // This func is called when ctx is canceled or expires.
  // It is called at most once.
})
cancel() // Don't call the OnDone func, we aren't interested in it any more.
  • What happens if ctx is cancelled in parallel with (right before) the OnDone call - will the callback be called?
  • What happens if your cancel() is called in parallel with ctx's deadline/cancel - will the callback be called?
  • In any case this callback will be called in another goroutine, so does your idea actually solve the "extra goroutine" issue?
  • OnDone itself creates a new way to handle context cancellation (a callback in another goroutine), while Merge* does not (which is good IMO).

To me MergeCancel and MergeValues as described by @navytux look more consistent.
Also I agree MergeCancel is much more important and has a lot of real use cases, so if there is no consensus about MergeValues then it's worth accepting and implementing everything except MergeValues as a first step forward.

BTW, I have one real use case for MergeValues. We put prometheus metrics initialized by helper packages into the context. E.g. there is a helper package cool/rest used to send http requests with extra metrics/rate limiting/other cool features, and its metrics have to be initialized once, using a non-global *prometheus.Registry. So, on service start it does ctxApp = rest.NewMetricsCtx(ctxApp, reg) to store that helper package's metrics in ctxApp. Next, the service gets an incoming gRPC call, which comes with its own request-related ctx (which also contains important values related to the request). Then the handler of that gRPC call wants to call cool/rest for something. This call may use both some values (e.g. an auth token) from ctx and the metrics from ctxApp (and of course this call should be cancelled both by ctxApp - in case of service graceful shutdown - and by ctx).

rsc commented

This proposal has been added to the active column of the proposals project
and will now be reviewed at the weekly proposal review meetings.
โ€” rsc for the proposal review group

rsc commented

Given that we just did errors.Join maybe this is context.Join? Or is there an argument for Merge instead?

What are the exact semantics we want for J = Join(A, B)?

For cancellation/deadlines, it sounds like we want the semantics to be that J is cancelled/timed out if either of A or B is cancelled/timed out.

For values, it sounds like we want the semantics to be either

  • J.Value(key) = A.Value(key) unless that's nil, in which case it's B.Value(key) (A on top of B)
  • J.Value(key) = B.Value(key) unless that's nil, in which case it's A.Value(key) (A amended by B)

Which one do we want? The os/exec semantics for environment are that later entries win, and on the command line if you say

foo -flag=a -flag=b

then you get b (later things win). So maybe we want Join to do the same - things later in the argument list win. That would be "A amended by B".

Is that the proposal? Do I have that right?

There was discussion above about only taking deadlines from one context, but if you want that you can use J = Join(A, Detach(B)) for example, which should be very clear.

Is there anything that would prevent Merge/Join from being variadic?

I can't think of many good usages for more than 2 or 3 contexts being combined, but for 3+, having to do multiple Join calls, getting the ordering right, and making it act as expected would get confusing quite quickly.

Given that, I don't know if we should strictly limit Join to 2 arguments.

edit: I stumbled across someone supporting variadic after someone else already suggested it here, though I don't know if it got any major discussion otherwise. This thread has gotten quite long.

@rsc, thanks for feedback.

In my view Join is about combining two things without changing what they each
provide; the result contains both. For example path.Join("a", "b") gives "a/b",
and errors.Join("mistake1", "mistake2") gives "mistake1\nmistake2". Contrary to
that, Merge takes two things and synthesizes some third state from them. For
merging cancellation I believe Merge is a better word.

Now regarding this proposal and Detach (#40221):
I thought about those two problems together, and I now believe a consistent
solution should solve both of them together.

But before we continue I would like us to clearly hear from @Sajmani whether we
actually need and want to do Values merging. Depending on that, there are two
different schemes for how to go. For reference, as I already explained in
#36503 (comment), merging of values is the weakest point of the current
proposal.

Anyway, please find below the two schemes. The first one does not do, nor
depend on, merging of values. The second is an alternative that does handle
values merging and is simple, but hardcodes the strategy for how values are
merged. I actually doubt that Merge(values) would be useful even if a
non-trivial use case for values merging popped up.

Before we begin let me also note an analogy: a Context could be represented by
a vector (c, v) where c denotes cancellation and v denotes values. Background
is the vacuum - (ø, ø). Functions in the context package then provide
operations that modify such vectors - for example WithCancel transforms (c, v)
-> (c*, v), while WithValue transforms (c, v) -> (c, v*). It is useful to
think about contexts in terms of such vectors. For example the Detach semantic
is to transform (c, v) -> (ø, v). By the way, from this point of view I would
say a better name for that operation would be OnlyValues, and for symmetry we
could also consider OnlyCancel, which transforms (c, v) -> (c, ø). Those names
are also in symmetry with the WithValues and WithCancel operations and
complement them.

Now to the proposed schemes:

First we have the common part that defines Context, Cancellation and Values the same way as in #36503 (comment):

type Context interface {
        Cancellation
        Values
}

type Cancellation interface { ... }
type Values interface { ... }

Scheme A (no merging of values)

then the context package already has WithValue as follows:

// WithValue returns a copy of parent in which the value associated with key is val.
func WithValue(parent Context, key, val any) Context

This WithValue already specifies kind of merge strategy for values: even if
key was present in parent, it is overwritten with newly provided value. So what
we can do in that line is to provide new function WithValues that allows to
add whole Values over parent:

// WithValues returns a copy of parent with adjusted values.
//
// It returns a copy of parent whose Value(key) method works as follows:
//
//      - it returns values.Value(key) if it gives non-nil,  else
//      - it returns parent.Value(key) if it gives non-nil,  else
//      - it returns nil.
func WithValues(parent Context, values Values) Context

this is simple, goes in line with the existing WithValue semantics, and allows
building Detach = OnlyValues via WithValues(Background, ctx). I think for
detaching the explicit usage of Background is good because it emphasizes that
the context is rederived from background from scratch, and how.

Then, having solved the problem of detach, the solution to merging cancellation
comes as MergeCancel taken as-is directly from #36503 (comment)

func MergeCancel(parent Context, cancelv ...Cancellation) (ctx Context, cancel CancelFunc)

Scheme B (alternative variant if we do want to "merge" values)

Alternatively if we do not want to introduce MergeCancel and want to have a single Merge
that handles both cancellation and values in one go, then there is another variant:

First we start with the same common definition of Context, Cancellation and Values.

Then we deploy functions that allow selecting only one part of a Context:

// OnlyCancel returns new context with cancellation part taken from ctx.
func OnlyCancel(ctx Context) Context

// OnlyValues returns new context with values part taken from ctx.
func OnlyValues(ctx Context) Context

The Detach is then OnlyValues(ctx).

And if we have a Merge() that handles both cancellation and values, merging in
only cancellation is then Merge(ctx1, OnlyCancel(ctx2)).

The Merge definition could be taken as-is directly from
#36503 (comment). In particular I suggest, if we go this way, that it uses the
"first-win" strategy when merging values. That is, Merge(A, B) gives A.Value
|| B.Value. I suggest it be this way because, contrary to adjustments where
later things win (e.g. WithValue(k, v) and -flag=a -flag=b in your example),
in merging the first parent is usually considered to be the primary one. But
once again, I believe that "Scheme A" would be more natural and less
ambiguous.


Once again, I suggest we first clearly decide whether we want a uniform Merge
that handles both cancellation and values in one go, or whether we avoid
merging values with a hardcoded strategy and do things in a clearer, less
ambiguous way.

And I would also appreciate to know what @Sajmani thinks about this.

Kirill

Thanks for the detailed outline of alternatives, @navytux

I like WithValues as the solution for merging values. There's no need to make it variadic as calls can be nested to achieve the same result: WithValues(ctx1, WithValues(ctx2, ctx3)). I agree the Values interface makes the role of the parameters clear, though if there's an objection to adding a new exported type just for that purpose, we can just use Context for both parameters and document the behavior.

I like MergeCancel but would rename it WithFirstCancel. As with WithValues, WithFirstCancel need not be variadic as calls can be nested to achieve the same result: WithFirstCancel(ctx1, WithFirstCancel(ctx2, ctx3)). We can just have func WithFirstCancel(Context, Cancellation) Context.

In the call WithFirstCancel(ctx1, ctx2) where both ctx1 and ctx2 are already canceled, we need to define which Err is returned. We have three choices: specify ctx1, specify ctx2, or say it's unspecified. Of these, I think ctx1 is the most intuitive. However, we might instead imagine WithFirstCancel(ctx1, ctx2).Err() as a select statement, in which case the choice would be random:

select {
case <-ctx1.Done():
    return ctx1.Err()
case <-ctx2.Done():
    return ctx2.Err()
default:
    return nil
}

A final concern is the behavior of the new Cause(ctx) error function for Contexts merged using WithValues or WithFirstCancel. The implementation of Cause uses context values to find the nearest parent cancelCtx. (This implementation technique is also used to avoid spawning a goroutine for cancelation when there's a custom context implementation in the chain.) This implementation detail may need to be rethought for merged contexts.

neild commented

I do not support adding WithValues or MergeCancel, or any variant thereof.

(1) Contexts are not a good mechanism for bounding object lifetimes, which calls the motivating example into question.

Contexts carry a cancellation signal, but do not provide a corresponding completion signal to indicate when operations using the context have completed. This lack of a completion signal is by design, since a function which accepts a context signals completion by returning. The operation started by f(ctx) finishes when f returns.

This association of operation lifetimes with function calls is a primary reason why Google's Go style guidelines forbid storing contexts within other types.

In addition, contexts provide no facility for ordering cleanup operations.

These limitations make contexts a poor choice for bounding the lifetime of an object. For example, consider a file handle with an associated context which closes itself when the context becomes done. The user of this file has no way to tell when the file has been closed, and no good way to specify that one file should be closed before another. An explicit Close call is simpler and more robust, since it can ensure that cleanup has completed before returning.

The motivating example for this proposal is a service object which is instructed to shut down when a context is cancelled. This is a dubious design; a long-running object of this nature should almost certainly have an explicit Close or Shutdown operation rather than relying on a context.

It is notable that the motivating example would not be permitted in code following the Google style guidelines, because it violates the rule "do not add a context member to a struct type". Obviously, these guidelines are not the final word on Go style, but given that the context package originated in Google's codebase I find this deeply concerning.

(2) There are no clear motivating examples for combining context values.

It seems to me that we've jumped to designing a mechanism for merging values from two contexts without sufficiently understanding why this is necessary.

(3) Splitting cancellation and values adds confusion.

Separate facilities for combining context cancellation and values means that one context may inherit from another in four possible ways: Not at all, cancellation only, values only, or both. I do not understand how one will choose between these options in a principled fashion. At a minimum, before we do this I would like someone to write the style guide entry explaining how to properly use these facilities.

(4) I do not see how to implement cancellation merging efficiently.

We can implement this efficiently for first-party context implementations, but it seems to me that MergeCancel will need to start a new goroutine for each third-party context in the cancellation group.

As a more general point, I would like to see a reference implementation before any proposal here is accepted, so we can properly evaluate the amount of complexity being taken on.

(5) This proposal does not sufficiently address the functionality gap.

It is possible to implement MergeCancel in third-party code today. The motivation for this proposal is to do so efficiently, without the need to start a goroutine for each context past the first.

This is an example of a more general problem: Combining context cancellation with other cancellation mechanisms is inefficient. For example, bounding the duration of a read from a net.Conn or a wait on a sync.Cond with a context requires starting a goroutine to watch the context Done channel and propagate the cancellation signal. While goroutines are not particularly expensive in general, this can be a significant cost in the common case where operations are not canceled.

This proposal addresses the inefficiency of cancelling one context when another context completes, but it does not address cases such as cancelling an operation on a net.Conn. If we address the general case, then MergeCancel can be efficiently and simply implemented in user code.

There are no clear motivating examples for combining context values.

If I may offer up a motivating example:

I have a package wqueue which orchestrates a fixed-size pool of goroutines to run functions inserted into named/ordered work queues; the specifics aren't too important, but the goroutines I'm running need to be cancelled, so all of the goroutines share a context for cancellation of the entire pool.

Separately, my application has an NSQ connection, which calls a handler for each incoming message and provides its own context with important values (like a correlation ID) and its own cancellation. If I want to handle that request via the pool, I'm now in a situation where I have two contexts, both of which may cancel my request.

This isn't the only case, either; I also have scheduled cron-like tasks which have to perform work on the same queues, and those too have their own contexts.

I ended up writing my own library which implements what I believe Merge does (https://github.com/zikaeroh/ctxjoin), which combines two contexts so that when either is cancelled, the derived context is cancelled.

If this doesn't go into the stdlib, I'll just keep using my package, but it was not totally trivial (100 lines) to implement, and it does seem to me to be useful.

I agree with the core point that @neild is making, which is: is there a clear enough need for these new APIs to justify the added complexity?

For Detach (#40221), we can find 45 reimplementations in open source code.

For Merge we see 8.
Using "Find references" on these turns up few results.

Based on this, I'm not sure it's appropriate to add these functions to the standard context package at this time.

@neild This is going to be a long reply, but my point (tldr) is: there are clearly some common use cases and demand for merging contexts (the many existing implementations mentioned by @Sajmani prove it), so the main question is "is it worth/possible to add an efficient merge implementation to the stdlib" and not "should we merge contexts at all".

(1) Contexts are not a good mechanism for bounding object lifetimes, which calls the motivating example into question.

Probably this is because they're bounding operation lifetimes, not object lifetimes.

There are already some motivating examples above, including a really common one about cancelling a gRPC handler either on gRPC request cancellation or on whole-app graceful shutdown. One may say this is the gRPC implementation's bad design, because it should use a user-provided base context instead of Background. But there are other cases where a handler gets input data from different sources with different cancellation - and it's unclear whether all of them should or can be fixed by using the "right design".

This association of operation lifetimes with function calls is a primary reason why Google's Go style guidelines forbid storing contexts within other types.

Sure, to have the app's shutdown context in a gRPC handler you have to store it in the handler's object. Sure, there are really good reasons to avoid storing contexts within other types, but that's because in most cases doing so leads to misusing contexts. But every rule has an exception, and I don't see anything wrong with using this to deliver the app's shutdown context into gRPC handlers, simply because there is no other way to do it.

In addition, contexts provide no facility for ordering cleanup operations.

defer already does this well enough. Anyway, how is this related to the proposal?

These limitations make contexts a poor choice for bounding the lifetime of an object.

Yep. Probably this is because they're bounding the lifetime of an operation, not an object.

It sounds like there is some misunderstanding here, on your side or mine. The goal of merging contexts in a gRPC handler is not to limit the lifetime of the object which contains these handlers (and holds the app's ctx in a field); the goal is to interrupt the operation itself using two different sources of cancellation signals.

To make this a bit more clear, let's imagine a case where a user pays for CPU time and then runs long operations: cancellation may come both from billing and from the user (who cancels the current request) - and both should be able to cancel the user's current operations. Neither of these bounds the lifetime of any object - it's not app shutdown, the app continues to work and handle other users' operations, and even this user may continue using the app after depositing some more money - only the operations of that user which were running at the moment the user's balance became zero should be cancelled.

Sure, there are ways to implement this without merging contexts. E.g. a gRPC handler may, on start, create a new CancelFunc from the gRPC context and call some global app.RegisterShutdownHandler(cancel) to make sure this handler's context will be cancelled in both cases, and then in a defer also call app.UnregisterShutdownHandler(cancel). To me, merging contexts looks much more natural than this.

The motivating example for this proposal is a service object which is instructed to shut down when a context is cancelled. This is a dubious design; a long-running object of this nature should almost certainly have an explicit Close or Shutdown operation rather than relying on a context.

How exactly can Close/Shutdown methods on an object which contains gRPC method handlers be used to cancel such gRPC handlers? By doing the same Register/Unregister dance as above?

(2) There are no clear motivating examples for combining context values.

My example of having metrics in the app's context and gRPC metadata in gRPC's context doesn't count?

(3) Splitting cancellation and values adds confusion.

There is another point of view on this: joining both in a single value has already added a LOT of confusion. Anything we do here is unlikely to add more. To me it's actually the opposite: the ability to split these values and manage them as we need makes things clearer and easier to use.

(4) I do not see how to implement cancellation merging efficiently.

As a more general point, I would like to see a reference implementation before any proposal here is accepted, so we can properly evaluate the amount of complexity being taken on.

Here I'm 100% with you!

(5) This proposal does not sufficiently address the functionality gap.

This proposal addresses the inefficiency of cancelling one context when another context completes, but it does not address cases such as cancelling an operation on a net.Conn. If we address the general case, then MergeCancel can be efficiently and simply implemented in user code.

This may be a valid point, but it may also result in us ending up with nothing, because we turn the simple "merge contexts" issue into something too general and huge.

rsc commented

Talked with @Sajmani a bit yesterday about this. It seems like there's some evidence (and intuition) for MergeCancel, but not as much for MergeValues. So focusing on cancellation may make sense.

rsc commented

What would be most useful is a compelling use case for MergeCancel. #36503 (comment) pokes holes in some of the uses that have been suggested.

What would be most useful is a compelling use case for MergeCancel. #36503 (comment) pokes holes in some of the uses that have been suggested.

Did you mean MergeValues? (Given the previous comment is saying there's evidence for MergeCancel, I assume so?)

My use case is calling cleanup/deferred close-like function that also takes context though.
Something like this

If the original context was cancelled, then passing it to those would be pointless.
So mostly, I use context.Background + timeout.
That solves the cancellation, but not context values.
Let's say that the shutdown uses OpenTracing; now I'm missing span information.

TBH,

context.WithTimeout(ctx.OnlyValues(), 4*time.Day)

or

context.WithTimeout(context.MergeValues(context.Background(), ctx), 4*time.Day)

I don't care much.
Maybe, I would vote for the less verbose here. But that would lead to context.Context interface bloat.

rsc commented

I was asking for compelling use cases for MergeCancel. I thought we had them, but #36503 (comment) pointed out, for example, that server shutdown should probably not be represented as a context, which was fundamental to the motivating example I've had in my head.

I am assuming this issue is only about MergeCancel. MergeValues is both easier to implement and more difficult to imagine use cases for.

Thanks for clarifying; I guess my example on #36503 (comment) wasn't all that compelling, maybe. In my case, contexts are more likely to be canceled by timeouts/deadlines; either imposed by my own handlers (just so things don't go awry), or by the message handler library (as it may have its own expected deadline before telling the message broker to try again). Shutdown is another source as my application has a root context that uses the new os/signal-based cancellation, but is of course much rarer than those other sources.

I will say that it feels like this proposal could be implemented efficiently using #57928 (and I'll probably do so in my version of this function, if that proposal is accepted, as it does save a goroutine), so to me that strikes out the point of "can't be implemented efficiently".

I thought we had them, but #36503 (comment) pointed out, for example, that server shutdown should probably not be represented as a context, which was fundamental to the motivating example I've had in my head.

It's really not obvious, and counter-intuitive (even for you, @rsc 😄), that server shutdown should PROBABLY NOT be represented as a context. But let's assume @neild is right about this for a minute. Can someone show me how server shutdown should be implemented the "right way" (which, if I get @neild's idea right, means using a Shutdown() method on the object which contains the gRPC method handlers)? What exactly should such a Shutdown() method do to interrupt already-running gRPC method handlers?

I'm asking because the obvious solution which doesn't involve contexts and merging cancellation looks like func (server *Server) Shutdown() { close(server.shutdown) }, but this means each and every gRPC handler method (and all code it calls) must check both the gRPC-provided ctx and one more shutdown channel in each and every place where they currently handle ctx.

If the original context was cancelled, then passing it to those would be pointless.
So mostly, I use context.Background + timeout.
That solves the cancellation, but not context values.
Let's say that the shutdown uses OpenTracing; now I'm missing span information.

@prochac, would the context.Detach proposal solve your issue?

I thought that MergeCancel is more about merging multiple cancellations, for example merging the shutdown signal and request cancellation.

neild commented

Can someone show me how server shutdown should be implemented in a "right way"

The way net/http.Server handles shutdown and contexts seems fairly reasonable to me (aside from the need to pass the Context to handlers hidden inside the Request, since the handler interface predates contexts).

For an object which contains handler methods called by an http.Server (which seems analogous to the gRPC case you describe), I'd stop calls to those methods by shutting down the http.Server.

For an object which contains handler methods called by an http.Server (which seems analogous to the gRPC case you describe), I'd stop calls to those methods by shutting down the http.Server.

The problem with gRPC here is support for very long-running operations (streaming RPCs), so this (stop calling NEW methods) won't work. But *grpc.Server does have a GracefulStop() method, which may cancel request contexts (this isn't documented, so it needs some testing).

So, if I get your idea right: if we have two independent contexts A and B and want to cancel B when A is cancelled, then we should ask the code which creates B to cancel it (if it supports such a feature).

rsc commented

At the moment we appear to be waiting for a compelling use case for MergeCancel (#36503 (comment)). Perhaps we should wait until we have a clearer use case to move forward.

MergeValues doesn't have a use case either but it can be done much more easily outside the standard library.

rsc commented

Based on the discussion above, this proposal seems like a likely decline.
— rsc for the proposal review group

I still think that the points that @powerman brought up here are good enough support for MergeCancel(). Even if a graceful-shutdown method is the preferred approach, I feel it's much more difficult to track the flow that way.

Beyond that however, I'm making a service that multiple clients connect to via web socket. When one client in a group closes/leaves, some common cleanup needs to be performed. To me, the concept of merging contexts and waiting on the single resulting context to be done just makes sense for this.

I have a use case with context as the carrier for the cancellation signal.
It doesn't help MergeCancel's case, but it's an example of when the context is used to signal shutdown.

It allows me to achieve simplicity by decoupling my component from application lifecycle management.
For example, I have a high-level component to handle shutdown signals and background jobs,
and these jobs use a context as a parent context for all sub-interactions.
If the shutdown is signalled, all cleanup is taken care of as the synchronous process calls finish their execution and return.

Here is an example package that describes the approach in more detail:
https://github.com/adamluzsi/frameless/tree/main/pkg/jobs

@Sajmani, @rsc, everyone, thanks for feedback.
I could find the time to reply only today. I apologize for the delay.

The main criticisms of this proposal are

a) that there is no real compelling use case for MergeCancel, and
b) that the number of places where MergeCancel could be useful is small.

Let's go through those:

The need for MergeCancel

Even though the original overview of this proposal does not say anything
about the Service being stopped via context cancellation, the argument that
it is not ok to bind a Service lifetime to a context was used against this
proposal. Given that argument, I believe it makes sense to describe
everything once again from scratch:

Imagine we are implementing a Service. This service has dedicated methods to
be created, started and stopped. For example:

type Service struct { ... }
func NewService() *Service
func (srv *Service) Run()
func (srv *Service) Stop()

The Service provides operations to its users, e.g.

func (srv *Service) DoSomething(ctx context.Context) error

and we want those operations to be canceled on both

a) cancellation of the ctx provided by the user, and
b) the service being stopped by way of srv.Stop called from another goroutine.

So let's look at how that could be implemented. Since DoSomething might
need to invoke arbitrary code, including code from other packages, it needs to
organize a context that is cancelled on "a" or "b" and invoke that, potentially
third-party, code with that context.

To organize a context that is cancelled whenever either user-provided ctx is
cancelled, or whenever Service.Stop is called we could create a new context
via WithCancel, and register its cancel to be invoked by Service.Stop:

type Service struct {
	...

	// Stop invokes functions registered in cancelOnStop
	stopMu       sync.Mutex
	stopped      bool
	cancelOnStop set[context.CancelFunc]
}

func (srv *Service) Stop() {
	srv.stopMu.Lock()
	defer srv.stopMu.Unlock()

	srv.stopped = true
	for cancel := range srv.cancelOnStop {
		cancel()
	}

	...
}

func (srv *Service) DoSomething(ctx context.Context) (err error) {
	ctx, cancel := context.WithCancel(ctx)
	defer cancel()

	srv.stopMu.Lock()
	if srv.stopped {
		srv.stopMu.Unlock()
		return ErrServiceDown
	}
	srv.cancelOnStop.Add(cancel)
	srv.stopMu.Unlock()

	defer func() {
		srv.stopMu.Lock()
		defer srv.stopMu.Unlock()
		srv.cancelOnStop.Del(cancel)

		if err != nil && srv.stopped {
			err = ErrServiceDown
		}
	}()

	return thirdparty.RunJob(ctx)
}

This pattern is used, in scattered form, practically everywhere, including in
the gRPC internals I explained in detail in #36503 (comment).

It is also not always implemented in such a simple form: people sometimes use
a dedicated registry for cancels, and sometimes they keep them on other objects
and retrieve the created cancel func indirectly from there, which makes the
logic more scattered and harder to follow.

But what all this code is doing, in a sense, is duplicating the
functionality of the context package by manually propagating cancellation from
Stop to every spawned job. Let me say it once again:

_Every_ service implementation has to _manually_ propagate the
cancellation from its `Stop` or `Shutdown` to _every_ spawned job in
_every_ one of its methods.

Even though this can be done, doesn't the need to manually duplicate
context functionality suggest that this functionality should be provided by the
context package in the first place?

And because this propagation is usually scattered, searching code for it does
not yield results with simple queries like Merge(Context). Most people keep
copying the pattern from one place to another, preferring not to go against the
official guideline not to store contexts in structures: storing the context and
the propagation logic in expanded form hides the problem.

For the reference: the Detach operation is significantly easier to implement and
does not need to go against the style guide saying that contexts should not be
stored anywhere. That's why searching for it yields more results. But the
interpretation of the search should be normalized for difficulty and for
willingness to go against an existing recommendation at the risk of receiving
"obey the dogma" feedback.

For the reference 2: Go itself actually stores contexts inside structures, e.g. in database/sql, net/http (2, 3, 4, 5, 6) and os/exec.

Actually in 2017 in #22602 (comment) @Sajmani wrote

I had also pushed for the explicit parameter restriction so that we could
more easily automate refactorings to plumb context through existing code. But
seeing as we've failed to produce such tools, we should probably loosen up
here and deal with the tooling challenges later.

so I guess that the restriction that contexts should not be kept in structs should not be that strong now (/cc @cespare).

For the reference 3: even if the context.OnDone proposal is accepted, without
adding MergeCancel to the standard context package, the scheme of manually
propagating cancellation from Stop to operations will need to be kept in
exactly the same form as now, everywhere, because people still get the message
that "storing serverCtx in a struct is not ok". By the way, why is it
context.OnDone instead of chan.OnDone? If we had a way to attach a callback to
channel operations, many things would become possible without any need to
modify context or other packages. Is that path worth going down? I mean,
internally OnDone might be useful, but is it a good idea to expose it as
public API?

For the reference 4: for cancelling operations on a net.Conn the solution, from
my point of view, should be to extend all IO operations to also take a ctx
parameter and cancel the IO on ctx cancellation. This is the path e.g. the xio
package takes. Internally, after an IO operation has been submitted, the
Go-side implementation of that IO selects on IO completion or ctx cancellation,
and if ctx is cancelled it issues a cancel command via io_uring. That's how it
should work. True, we could use wrappers over existing Readers and Writers that
call Close via ctx.OnDone. While that somewhat works, closing the IO link on
read cancel is not what the read canceller really expects. And going this way
also exposes the inner details of the context machinery as public API. While
that, once again, might work in the short term, in my view it won't be a good
evolution in the longer run.


Another critical note regarding this proposal was that no
proof-of-concept implementation was shown.

But the proof-of-concept implementation is there and was pointed out originally,
right in the first message of this proposal. Here it is once again:

https://lab.nexedi.com/kirr/pygolang/blob/39dde7eb/golang/context.h
https://lab.nexedi.com/kirr/pygolang/blob/39dde7eb/golang/context.cpp
https://lab.nexedi.com/kirr/pygolang/blob/39dde7eb/golang/context_test.py
https://lab.nexedi.com/kirr/pygolang/blob/39dde7eb/golang/libgolang.h

It is in C++, not Go, but it shows that the implementation is straightforward.


At this stage I have very little hope that this proposal will be accepted. By
the look of it, it will likely be declined next Wednesday and closed. That's
sad, but I will hopefully find my way.

Kirill

To organize a context that is cancelled whenever either user-provided ctx is
cancelled, or whenever Service.Stop is called we could create a new context
via WithCancel, and register its cancel to be invoked by Service.Stop:

@navytux While I like the MergeCancel idea, it should be noted that the actual implementation can be much simpler - at the cost of 1 extra goroutine per method call (the same 1 extra goroutine which is currently created anyway by 3rd-party MergeCancel implementations):

type Service struct {
	...
	stopped      chan struct{}
}

func (srv *Service) Stop() {
	close(srv.stopped)
}

func (srv *Service) DoSomething(ctx context.Context) (err error) {
	ctx, cancel := context.WithCancel(ctx)
	defer cancel()

	go func() {
		select {
		case <-srv.stopped:
			cancel()
		case <-ctx.Done():
		}
	}()

	return thirdparty.RunJob(ctx)
}
rsc commented

Will move back to active instead of likely decline.

rsc commented

Note that #57928 may be a better or equally good answer, since it would let people implement Merge efficiently outside the standard library.

Thanks, Russ.

Regarding your note, I'd like to point out that "let people implement X outside the project" might not be a universally good criterion. For example, from this point of view we could also say that with callbacks there is no need for channels and goroutines to be built in, since with callbacks those primitives could be implemented outside. I might be missing something, but in my view what makes sense is to provide a carefully selected and carefully thought-out, reasonably small set of high-level primitives instead of lower-level ones for "doing anything outside".

In #36503 (comment) I already explained that MergeCancel is a fundamental context operation, paralleling it to merges in git, select in Go and φ in SSA. From this point of view, adding MergeCancel makes sense because it makes the set of context operations a kind of closure, which was previously incomplete.

Kirill

P.S. @powerman, thanks. I was implicitly assuming we want to avoid that extra goroutine cost, but you are right that it would be better to state this explicitly and to show the plain solution as well.

@navytux, thanks for the detailed scenario for helping to think about this proposal.

When people have asked me about how to handle these situations in the past I have usually encouraged them to figure out how to get the two contexts to have a common parent context rather than try to figure out how to merge unrelated contexts.

In your example it doesn't appear that is possible at first glance because the context passed to Service.DoSomething isn't related to anything inside Service.Stop. But maybe it's not as difficult as it seems.

Isn't there always a higher scope in the application that manages the lifetime of the Service and also the code that calls DoSomething? It seems possible that the higher scope can ensure that there is a base context that gets canceled when Service.Stop is called.

For example:

func main() {
    ctx, cancel := context.WithCancel(context.Background())
    defer cancel()

    svc := NewService()
    go func() {
        svc.Run() // FWIW, I usually write Run(ctx) here, and avoid Stop methods.
    }()

    go func() {
        <-ctx.Done()
        svc.Stop()
    }()

    DoThingsWithService(ctx, svc)
}

func DoThingsWithService(ctx context.Context, svc *Service) {
    ctx, cancel := context.WithTimeout(ctx, time.Second)
    defer cancel()

    // ....

    svc.DoSomething(ctx)
}

Granted, the higher scope has to orchestrate more this way, but I haven't found that difficult to manage in the code I've worked on. I am curious if there is a fundamental reason why it shouldn't be done this way?

neild commented

Abstract examples are difficult to work with. What is a Service? Why does it stop? Should stopping a service (possibly optionally) block until outstanding requests have been canceled? Should stopping a service (possibly optionally) let outstanding requests finish before stopping? If either of these features is desired, then context cancellation isn't sufficient; the service needs to be aware of the lifetimes of outstanding requests, which contexts don't provide.

I'd expect any network server, such as an HTTP or RPC server, to support graceful shutdown where you stop accepting new requests but let existing ones run to completion. But perhaps a Service in this example is something else.

I still feel that the motivating case for MergeCancel is unconvincing. However, even granting a compelling use case, MergeCancel confuses the context model by separating cancellation and value propagation, cannot be implemented efficiently in the general case, and does not address common desires such as bounding a network operation or condition variable wait by a context's lifetime. context.OnDone/context.AfterFunc as proposed in #57928 fits neatly within the context model, lets us efficiently implement operations on third-party contexts that are inefficient today, and can be easily used to implement MergeCancel or the equivalent.

rsc commented

This proposal has been added to the active column of the proposals project
and will now be reviewed at the weekly proposal review meetings.
— rsc for the proposal review group

I'd expect any network server, such as an HTTP or RPC server, to support graceful shutdown where you stop accepting new requests but let existing ones run to completion.

@neild Please don't take this reply as one "for MergeCancel" and "against context.After" - I'd just like to clarify this use case for you.

For trivial fast RPC calls - you're right, it's usually preferable to complete current requests on graceful shutdown instead of interrupting them. But some RPCs can be slow, and RPCs can also be streaming (i.e. never-ending) - in these cases cancelling them using a context is a really natural way to go.

Also, once again, it's worth remembering the less common but valid use case where we need to cancel some group of requests but not all of them - i.e. not the graceful-shutdown case. It may be the requests of some user account which was blocked by an admin or ran out of money, maybe requests from some external service/client whose certificate has just expired, maybe requests related to some deprecated API whose deprecation happens to begin right now, cancelling a group of jobs, etc. And, yeah, here I'm still talking about slow/long-running/streaming requests.

rsc commented

Waiting on #57928, but assuming we do find a good answer there, it will probably make sense to let that one go out and get used before we return to whether Merge needs to be in the standard library.

rsc commented

Having accepted #57928, it seems like we can decline this and let people build it as needed with AfterFunc.
(They could build it before using a goroutine, but now AfterFunc lets them avoid the goroutine.)

Do I have that right?

neild commented

I believe that's right; AfterFunc should make it simple to efficiently implement MergeCancel.

Apologies folks, if you got this via email: I got stung by GitHub's "oops, you hit control-enter and now it's sent" misfeature and sent this too early; I have since edited it to clean up some of the clumsy wording.

Seems better to me to leave it on hold. Merge is a very useful building block, and even if it's built on AfterFunc I think it still merits inclusion, but I don't think we'll know until AfterFunc has been out in the wild for a bit.

The shutdown propagation thing is the most valuable use case I've encountered. I'm aware that there are other preferences for how to implement that, but nothing to me has been cleaner or easier to explain to other engineers than merged contexts. Graceful shutdowns in complex systems are hard enough that I rarely see them even attempted in practice, much less done well.

I think there are many ways of composing context cancellations that are underserved by the standard library, not just Merge. Merge is a useful primitive building block that I think can help unlock this space a bit (along with relaxing the guidance on context retention, which I think is restrictive, in favour of something more nuanced). And if it's simple enough to implement on top of AfterFunc, I can't help but wonder if that's actually a reason to include it, not a reason not to, especially given there's a lot of interest here, a lot of examples, and a lot of +1s.

Part of the problem with some of the examples given in this thread, all of which seem somewhat familiar to me in some ways and very alien in others, is that it's really hard to give a good, narrow example of exactly how we have seen a need for something like this. It's usually in the context of a much bigger problem, with constraints and nuances that are very hard to relay in the bounds of a small example. It's really one of those things where you have to feel the fiber of the fabric of it between your fingers to know. This can lead us to a lot of strawmanning of examples and suggestions that inappropriate alternatives are the Absolute Right Way, even though we don't really know enough about somebody's problem, and whether this would in fact be a good solution, to make that judgement. This feels like a case where the challenge has been to come up with a small-enough demonstration to be convincing. Maybe that's a sign that in this case, "is there a concrete use case?" is the wrong question, and a better one might be "do enough folks see value in the potential unlocked by something composable, and is the implementation simple enough to just let the flowers bloom?"

Just my 2c, but I think it's best to leave this on hold for a while, see how things shake out after AfterFunc has had a chance to percolate, then re-evaluate with a less hard-edged approach to demanding perfectly sculpted examples. If it is indeed simple enough to build on top of AfterFunc, I think that's a reason to include it.

rsc commented

We can decline this for now and then feel free to open a new proposal for Merge in a year or so if there is new evidence.

rsc commented

Based on the discussion above, this proposal seems like a likely decline.
— rsc for the proposal review group

rsc commented

No change in consensus, so declined.
— rsc for the proposal review group