coder/websocket

docs: Rationalize wsjson buffer pooling and explain tradeoffs

mitar opened this issue · 8 comments

mitar commented

I see that this library uses sync.Pool to pool buffers in the wsjson sub-package. sync.Pool is suitable only for pooling objects of roughly the same size. If the objects vary in size, you have a problem: those buffers will grow and grow and never shrink.

See golang/go#23199 for more background.

nhooyr commented

So we don't use sync.Pool directly. We use json.Encoder, which uses sync.Pool under the hood.

I haven't looked at that issue in detail yet, but I would say it's on the Go team to fix json.Encoder if there are performance concerns with its internal use of sync.Pool.

See also #409

nhooyr commented

Ah right, sorry. I don't know that I agree the concern outlined in that issue applies here. Most websocket protocols have similarly sized messages, and certainly nothing near 256 MiB in one frame and 1 KiB in another. For the general use case, I think wsjson reusing memory is the correct decision.

I would say, though, that documentation regarding this tradeoff is warranted. It's also easy to write your own helpers with encoding/json if you don't want to reuse memory, or if you want to reuse memory in more clever ways, like limiting the size of buffers returned to the pool.

nhooyr commented

Maybe we adopt the same fix as the fmt package and cap pooled buffers at 64 KiB? https://go-review.googlesource.com/c/go/+/136116

mitar commented

The issue is that if only 1% of messages are 1 MB, then quickly all buffers become 1 MB forever, even if 99% of messages are 1 KB.

nhooyr commented

Right, but most websocket protocols in my experience don't have that kind of variability between their messages. For example, the default read limit in this library is just 32768 bytes (32 KiB). While it's adjustable, there haven't been any complaints that this isn't a reasonable default.

More potential solutions/ideas at:

nhooyr commented

Also your PR here: rs/zerolog#602

nhooyr commented

Good summary here from you: rs/zerolog#602 (comment)