hashicorp/yamux

Stream receivebuffer continually grows

Matthias247 opened this issue · 8 comments

The library currently uses bytes.Buffer to buffer data received from the network connection until the user fetches it through a Read() call.
The problem with that is that all writes from the network append to the end of the buffer. Reads consume data from the current position and advance the read pointer, but the already-read data stays at the beginning of the buffer until the stream is closed.

This means that, for example, transferring a 100MB file through yamux will grow the stream's receive buffer to 100MB.

This could be fixed with a ring buffer big enough to hold the maximum window size, or with a window-sized buffer where each read copies the remaining data to the front, resets the read pointer to 0, and adjusts the write pointer accordingly.
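
For illustration, here is a minimal sketch of the ring-buffer option (my own example, not yamux code). The capacity would be the stream's maximum receive window, so per-stream memory stays bounded no matter how much data is transferred:

package main

import "fmt"

// Ring is a minimal fixed-capacity ring buffer (illustration only).
// Sized to the stream's maximum receive window, it keeps per-stream
// memory bounded regardless of how much data flows through the stream.
type Ring struct {
	buf  []byte
	r, w int // read / write positions
	size int // number of unread bytes
}

func New(capacity int) *Ring { return &Ring{buf: make([]byte, capacity)} }

// Write stores as much of p as fits and returns the number of bytes stored;
// with working flow control the buffer should never be asked to overflow.
func (rb *Ring) Write(p []byte) int {
	n := 0
	for n < len(p) && rb.size < len(rb.buf) {
		rb.buf[rb.w] = p[n]
		rb.w = (rb.w + 1) % len(rb.buf)
		rb.size++
		n++
	}
	return n
}

// Read copies unread bytes into p; the freed space is immediately reusable.
func (rb *Ring) Read(p []byte) int {
	n := 0
	for n < len(p) && rb.size > 0 {
		p[n] = rb.buf[rb.r]
		rb.r = (rb.r + 1) % len(rb.buf)
		rb.size--
		n++
	}
	return n
}

func main() {
	rb := New(8)
	rb.Write([]byte("hello"))
	out := make([]byte, 8)
	n := rb.Read(out)
	fmt.Println(string(out[:n])) // hello
}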

It's not entirely true that data will stay at the beginning of the buffer until the stream is closed, is it? If you reach a point where the bytes.Buffer is completely emptied, its internal position tracker will be reset to point to the beginning of its buffer and its capacity will be reused (https://golang.org/src/bytes/buffer.go?s=8509:8559#L248).

But I agree that even if you are constantly reading (e.g. doing an io.Copy into a file), you might never actually reach that condition unless you happen to do just the right read with just the right buffer size, and thus the buffer would continue to grow. Is that what you see in practice? I ask as someone considering using this library.

@havoc-io Yes, bytes.Buffer has this property. However, if you look at https://github.com/hashicorp/yamux/blob/master/stream.go#L107-L112 you will see that this branch is never taken, because Read() is only called if there is something in the buffer. And even if it did apply, we can't guarantee that the reader is faster than the writer and that the buffer will ever be empty, so it would not be the best solution.

You can see the problem happen in the unit tests if you print the capacity of the buffer after TestSendData_Large with fmt.Printf("Recv buf cap: %d\n", stream.recvBuf.Cap()).

I would fix it by replacing the bytes.Buffer with a simple slice whose behavior is similar to this one: https://github.com/bradfitz/http2/blob/master/buffer.go

Ah, that's true. It seems like a weird design decision on the part of bytes.Buffer to have truncation triggered at the beginning of Read rather than the end. Triggering at the end would at least potentially truncate in this case, though of course something like a ring buffer would make more sense. Thanks for clarifying!
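
For anyone curious, a small standalone demonstration of that bytes.Buffer behavior (my own example): capacity is reused only once the buffer has been completely drained, while a single leftover byte forces further growth on the next write:

package main

import (
	"bytes"
	"fmt"
)

// Demonstrates the bytes.Buffer behavior discussed above: its storage is
// only reclaimed when the buffer is found completely empty, so one unread
// byte is enough to force the underlying slice to grow again.
func main() {
	const chunk = 64 * 1024

	// Case 1: fully drained between writes, so the capacity is reused.
	var a bytes.Buffer
	a.Write(make([]byte, chunk))
	a.Read(make([]byte, chunk)) // drain completely
	a.Write(make([]byte, chunk))
	fmt.Println("fully drained: cap =", a.Cap())

	// Case 2: one byte left unread, so the next write has to grow the
	// underlying slice instead of reusing it.
	var b bytes.Buffer
	b.Write(make([]byte, chunk))
	b.Read(make([]byte, chunk-1)) // leave a single byte behind
	b.Write(make([]byte, chunk))
	fmt.Println("one byte left: cap =", b.Cap())
}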

The problem is that the window update logic is broken: a Stream#Read that does not empty recvBuf in a single call will cause unbounded memory growth for large streams.
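
To illustrate the general flow-control principle involved (a toy simulation with hypothetical names, not yamux's actual code): if window credit is returned to the sender as soon as data is buffered rather than only when Read consumes it, a slow reader lets the receive buffer grow far beyond the window.

package main

import "fmt"

// Toy comparison of two window-update policies for a sender that is faster
// than the reader: returning credit on receipt versus on consumption.
const window = 256 * 1024 // e.g. yamux's default stream window

func minInt(a, b int) int {
	if a < b {
		return a
	}
	return b
}

// simulate returns the peak number of buffered-but-unread bytes over 1000
// ticks in which the sender writes up to 16 KiB and the reader reads 4 KiB.
func simulate(creditOnReceive bool) (peak int) {
	credit := window // bytes the sender is still allowed to send
	buffered := 0    // bytes received but not yet read by the application
	for tick := 0; tick < 1000; tick++ {
		sent := minInt(credit, 16*1024)
		credit -= sent
		buffered += sent
		if creditOnReceive {
			credit += sent // broken: credit returned before data is consumed
		}
		read := minInt(4*1024, buffered)
		buffered -= read
		if !creditOnReceive {
			credit += read // fixed: credit returned only for consumed bytes
		}
		if buffered > peak {
			peak = buffered
		}
	}
	return peak
}

func main() {
	fmt.Println("credit on receive, peak buffered bytes:", simulate(true))
	fmt.Println("credit on read, peak buffered bytes:   ", simulate(false))
}

With credit granted on receipt the backlog grows without bound; with credit granted on consumption it stays within the window.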

I believe this issue was solved by #53, so I think it can be closed. Using the following test (which mismatches write and read speeds) on darwin/amd64, I see memory growing extremely rapidly pre-#53 and stopping at around 5.0 MB post-#53. There's a bit of a slow asymptotic growth towards 5.0 MB upon starting the executable, perhaps due to the Go runtime/GC behavior, but it does seem to stop eventually.

package main

import (
	"crypto/rand"
	"io"
	"io/ioutil"
	"net"
	"time"

	"github.com/hashicorp/yamux"
)

// sleepReader delays each Read so that the application-level reader lags
// behind the writer.
type sleepReader struct {
	reader io.Reader
}

func (r *sleepReader) Read(buffer []byte) (int, error) {
	time.Sleep(10 * time.Millisecond)
	return r.reader.Read(buffer)
}

func main() {
	// Connect a yamux client and server over an in-memory pipe.
	clientConn, serverConn := net.Pipe()
	clientSession, _ := yamux.Client(clientConn, yamux.DefaultConfig())
	serverSession, _ := yamux.Server(serverConn, yamux.DefaultConfig())

	// Server side: accept the stream and drain it slowly through sleepReader.
	go func() {
		c, _ := serverSession.Accept()
		reader := &sleepReader{c}
		io.Copy(ioutil.Discard, reader)
	}()

	// Client side: write random data as fast as the pipe allows.
	c, _ := clientSession.Open()
	io.Copy(c, rand.Reader)
}
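
One way to watch this while the repro runs (my addition, not part of the original comment) is to start go reportHeap() at the top of main; the helper only needs the additional log and runtime imports:

// reportHeap prints heap usage once per second so the growth (or the
// post-#53 plateau) is visible without attaching a profiler.
func reportHeap() {
	var m runtime.MemStats
	for {
		runtime.ReadMemStats(&m)
		log.Printf("heap in use: %.1f MB", float64(m.HeapInuse)/(1<<20))
		time.Sleep(time.Second)
	}
}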

I have a similar problem under production conditions, like the OP.
My memory is not growing 1MB per 1MB of streamed data, but it does grow over time.

Here is a pprof dump (linux, amd64), that pointed me to this issue:

Showing nodes accounting for 9243.28kB, 100% of 9243.28kB total
Showing top 10 nodes out of 14
      flat  flat%   sum%        cum   cum%
 8731.23kB 94.46% 94.46%  8731.23kB 94.46%  bytes.makeSlice
  512.05kB  5.54%   100%   512.05kB  5.54%  github.com/hashicorp/yamux.newStream
         0     0%   100%  8731.23kB 94.46%  bytes.(*Buffer).ReadFrom
         0     0%   100%  8731.23kB 94.46%  bytes.(*Buffer).grow
         0     0%   100%   512.05kB  5.54%  github.com/hashicorp/yamux.(*Session).Open
         0     0%   100%   512.05kB  5.54%  github.com/hashicorp/yamux.(*Session).OpenStream
         0     0%   100%  8731.23kB 94.46%  github.com/hashicorp/yamux.(*Session).handleStreamMessage
         0     0%   100%  8731.23kB 94.46%  github.com/hashicorp/yamux.(*Session).recv
         0     0%   100%  8731.23kB 94.46%  github.com/hashicorp/yamux.(*Session).recvLoop
         0     0%   100%  8731.23kB 94.46%  github.com/hashicorp/yamux.(*Stream).readData

The biggest jumps occur when a new stream (OpenStream) starts streaming. I was wondering whether the problem is on my side, i.e. whether my code retains open streams, but I'm pretty sure it doesn't.

Is there anything else I could provide, apart from a test case that reproduces the issue?
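
For reference, a typical way to capture a heap profile like the one above (my sketch; not necessarily how this particular dump was produced) is to expose the standard net/http/pprof endpoint in the running process:

package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers on the default mux
)

func main() {
	// The application's real work (yamux sessions etc.) would run alongside
	// this; here we only expose the profiling endpoint.
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()
	select {}
}

Running go tool pprof -top http://localhost:6060/debug/pprof/heap against the live process then prints a table in the same format as the dump above.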

@pbedat What version of yamux are you using? Could you retest with the latest? Any repro would be great as well!

@evanphx I was using github.com/hashicorp/yamux v0.0.0-20200609203250-aecfd211c9ce (Go 1.15)

I switched to smux right after that post, which served me well. I could try to reproduce this, but I don't think it would be worth it. A lot has changed in 1.5 years 😉
If you really, really need a repro, I will try my best.