creachadair/jrpc2

Separate dispatch queue for server push requests


While I appreciate the concurrency from a performance perspective, we had to reduce it to 1 early on in our language server: LSP makes certain assumptions about the ordering of requests, and capping dispatch at a single handler lets us ensure that requests are processed in the same order they arrive.
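For reference, this is roughly the shape of that setup in jrpc2 (a minimal sketch, not our actual handler table; the "example/echo" method and the stdio channel are placeholders):

```go
package main

import (
	"context"
	"os"

	"github.com/creachadair/jrpc2"
	"github.com/creachadair/jrpc2/channel"
	"github.com/creachadair/jrpc2/handler"
)

func main() {
	srv := jrpc2.NewServer(handler.Map{
		// Placeholder method; the real server registers its
		// textDocument/* handlers here.
		"example/echo": handler.New(func(ctx context.Context, args []string) ([]string, error) {
			return args, nil
		}),
	}, &jrpc2.ServerOptions{
		Concurrency: 1, // dispatch at most one handler at a time
	})
	srv.Start(channel.Line(os.Stdin, os.Stdout))
	srv.Wait()
}
```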

(Anecdotally, this doesn't seem to affect performance much in the context of standard language server usage, as the user is typically editing just one file at a time anyway, which limits the number of requests eligible for dispatch at any given moment.)

However, the negative side effect of this decision is that it effectively prevents the server from sending requests back to the client while a handler/task is being dispatched, which is important e.g. for cancellation or progress reporting.
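For example, a handler that wants to report progress would look something like this (a sketch; `progressParams` and the `$/progress` method follow the LSP convention and are not jrpc2-specific, and the server must have been created with `AllowPush: true`):

```go
package lsp

import (
	"context"

	"github.com/creachadair/jrpc2"
)

// progressParams is a placeholder payload type for illustration only.
type progressParams struct {
	Percent int `json:"percent"`
}

// longRunningTask sketches a handler that pushes "$/progress"
// notifications back to the client as it works. ServerFromContext is
// only valid inside a running handler, and the server must have been
// constructed with AllowPush: true for Notify to be permitted.
func longRunningTask(ctx context.Context) (bool, error) {
	srv := jrpc2.ServerFromContext(ctx)
	for pct := 0; pct <= 100; pct += 25 {
		if err := srv.Notify(ctx, "$/progress", progressParams{Percent: pct}); err != nil {
			return false, err
		}
		// ... perform the next chunk of work here ...
	}
	return true, nil
}
```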

Therefore I was wondering whether you would consider processing push requests/notifications sent from the server in a separate queue, so that the server and client don't block each other when Concurrency is set to 1.
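Conceptually I mean something like this (purely hypothetical; none of these names exist in jrpc2 today):

```go
package push

// Hypothetical sketch of the idea: server pushes drain through their
// own queue and goroutine, so they never wait behind the handler
// dispatch slots controlled by Concurrency.
type pushQueue struct {
	sends chan func() // queued push operations (Notify/Callback closures)
}

func newPushQueue() *pushQueue {
	q := &pushQueue{sends: make(chan func(), 64)}
	go func() {
		for send := range q.sends {
			send() // pushes still run in FIFO order relative to each other
		}
	}()
	return q
}

// enqueue schedules a push without occupying a handler dispatch slot.
func (q *pushQueue) enqueue(send func()) { q.sends <- send }
```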

I'm willing to consider this, but it would be a substantial change—and not one that fits well with the existing design. I would have structured things differently if I'd been attempting to implement a true "peer" rather than separate client and server. Ironically, I considered that approach when I was first building it, but wound up not going that direction.

Let me start by asking: Is it still necessary to set concurrency to 1 to achieve what you are describing?

Since #24 et seq., requests should be properly sequenced (not just notifications): the handlers for two requests should not overlap in time unless the requests are concurrent (as defined in the docs).
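For example, even with Concurrency above 1, the following should record the handler events strictly in arrival order (a sketch relying on the sequencing described above; "task" is a made-up method):

```go
package main

import (
	"context"
	"fmt"
	"sync"

	"github.com/creachadair/jrpc2"
	"github.com/creachadair/jrpc2/channel"
	"github.com/creachadair/jrpc2/handler"
)

func main() {
	var mu sync.Mutex
	var events []string
	record := func(s string) { mu.Lock(); defer mu.Unlock(); events = append(events, s) }

	cch, sch := channel.Direct()
	srv := jrpc2.NewServer(handler.Map{
		"task": handler.New(func(ctx context.Context, name []string) (bool, error) {
			record(name[0] + " start")
			record(name[0] + " end")
			return true, nil
		}),
	}, &jrpc2.ServerOptions{Concurrency: 4}).Start(sch)

	cli := jrpc2.NewClient(cch, nil)
	ctx := context.Background()
	cli.Notify(ctx, "task", []string{"A"}) // notifications are sequenced too
	cli.Notify(ctx, "task", []string{"B"})
	cli.Call(ctx, "task", []string{"C"}) // should not start until A and B finish
	cli.Close()
	srv.Wait()

	fmt.Println(events) // [A start A end B start B end C start C end]
}
```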

> Let me start by asking: Is it still necessary to set concurrency to 1 to achieve what you are describing?

I'm tempted to say yes, but I don't have any data to prove it at this point. We can try raising the concurrency and adding some telemetry to see how it behaves in practice now, to confirm one way or the other.

We previously had no way of collecting such data other than having problems surface to the user, so I'd have been worried about taking such a step in the past, but I think we can take the (calculated) risk now. I will try to find a way to measure this, PR it in, and get it released in the coming weeks, then report back.

Thank you for your patience and the quick response, as usual.

We have raised the concurrency to 2+ (scaling with available CPUs) in a recent release of terraform-ls, and so far we have not received any reports of wrongly ordered requests, so I'm going to close this issue for now and raise a new one with more concrete data if/when we do receive any.
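For the record, the selection logic is roughly along these lines (an illustrative sketch, not the exact terraform-ls code):

```go
package lsp

import (
	"runtime"

	"github.com/creachadair/jrpc2"
)

// serverOptions sketches the "2+ depending on available CPUs" choice;
// this is an illustration, not the exact terraform-ls code.
func serverOptions() *jrpc2.ServerOptions {
	conc := runtime.NumCPU()
	if conc < 2 {
		conc = 2
	}
	return &jrpc2.ServerOptions{Concurrency: conc}
}
```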

Thank you!