studio-b12/gowebdav

panic due to concurrent map writes

jkowalski opened this issue · 8 comments

I'm sometimes getting this panic when running multiple requests in parallel using a single webdav client:

	/usr/local/Cellar/go/1.14.6/libexec/src/runtime/panic.go:1116 +0x72 fp=0xc0001d9448 sp=0xc0001d9418 pc=0x10351d2
runtime.mapassign_faststr(0x1c15400, 0xc00001f0b0, 0x1ce5cd4, 0xd, 0x64)
	/usr/local/Cellar/go/1.14.6/libexec/src/runtime/map_faststr.go:291 +0x3de fp=0xc0001d94b0 sp=0xc0001d9448 pc=0x101529e
net/textproto.MIMEHeader.Set(...)
	/usr/local/Cellar/go/1.14.6/libexec/src/net/textproto/header.go:22
net/http.Header.Set(...)
	/usr/local/Cellar/go/1.14.6/libexec/src/net/http/header.go:37
github.com/studio-b12/gowebdav.(*BasicAuth).Authorize(0xc0005585c0, 0xc00001f0e0, 0x1ce1500, 0x8, 0x1cdc5ce, 0x1)
	/Users/jarek/go/pkg/mod/github.com/studio-b12/gowebdav@v0.0.0-20200303150724-9380631c29a1/basicAuth.go:32 +0x1d0 fp=0xc0001d9578 sp=0xc0001d94b0 pc=0x18d1c80
github.com/studio-b12/gowebdav.(*Client).req(0xc00001f0e0, 0x1ce1500, 0x8, 0x1cdc5ce, 0x1, 0x1e96e40, 0xc0002807a0, 0xc0001d97a0, 0x0, 0x0, ...)
	/Users/jarek/go/pkg/mod/github.com/studio-b12/gowebdav@v0.0.0-20200303150724-9380631c29a1/requests.go:28 +0x283 fp=0xc0001d9740 sp=0xc0001d9578 pc=0x18d6353
github.com/studio-b12/gowebdav.(*Client).propfind(0xc00001f0e0, 0x1cdc5ce, 0x1, 0x26cdc00, 0x1d1eebf, 0xcb, 0x1acf280, 0xc00028d2f0, 0xc0001d98b0, 0x0, ...)
	/Users/jarek/go/pkg/mod/github.com/studio-b12/gowebdav@v0.0.0-20200303150724-9380631c29a1/requests.go:93 +0x114 fp=0xc0001d9808 sp=0xc0001d9740 pc=0x18d7544
github.com/studio-b12/gowebdav.(*Client).ReadDir(0xc00001f0e0, 0x1cdc5ce, 0x1, 0x1cdc5ce, 0x1, 0x1cdc5ce, 0x1, 0xc0002a8520)
	/Users/jarek/go/pkg/mod/github.com/studio-b12/gowebdav@v0.0.0-20200303150724-9380631c29a1/client.go:164 +0x1a6 fp=0xc0001d98e8 sp=0xc0001d9808 pc=0x18d2476

Should parallel calls using a single webdav client be supported?

@jkowalski Can you please provide code to reproduce this behavior? And which WebDAV server do you use?

package main

import (
	"log"
	"net/http"
	"net/http/httptest"
	"sync"

	"github.com/studio-b12/gowebdav"
	"golang.org/x/net/webdav"
)

func basicAuth(h http.Handler) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		if user, passwd, ok := r.BasicAuth(); ok {
			if user == "user" && passwd == "password" {
				h.ServeHTTP(w, r)
				return
			}

			http.Error(w, "not authorized", 403)
		} else {
			w.Header().Set("WWW-Authenticate", `Basic realm="x"`)
			w.WriteHeader(401)
		}
	}
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", basicAuth(&webdav.Handler{
		FileSystem: webdav.Dir("/"),
		LockSystem: webdav.NewMemLS(),
	}))

	server := httptest.NewServer(mux)
	defer server.Close()

	cli := gowebdav.NewClient(server.URL, "user", "password")

	var wg sync.WaitGroup

	for i := 0; i < 2; i++ {
		wg.Add(1)

		go func() {
			defer wg.Done()
			f, err := cli.ReadDir("/")
			log.Printf("f: %v err: %v", f, err)
		}()
	}

	wg.Wait()
}

Run this with go run -race repro.go and it prints:

go run -race repro
==================
WARNING: DATA RACE
Read at 0x00c00011cf50 by goroutine 9:
  github.com/studio-b12/gowebdav.(*Client).req()
      /Users/jarek/Projects/gowebdav/requests.go:45 +0x1276
  github.com/studio-b12/gowebdav.(*Client).propfind()
      /Users/jarek/Projects/gowebdav/requests.go:94 +0x15a
  github.com/studio-b12/gowebdav.(*Client).ReadDir()
      /Users/jarek/Projects/gowebdav/client.go:164 +0x1fb
  main.main.func1()
      /Users/jarek/Projects/repro/main.go:48 +0x9f

Previous write at 0x00c00011cf50 by goroutine 8:
  github.com/studio-b12/gowebdav.(*Client).req()
      /Users/jarek/Projects/gowebdav/requests.go:54 +0xd48
  github.com/studio-b12/gowebdav.(*Client).propfind()
      /Users/jarek/Projects/gowebdav/requests.go:94 +0x15a
  github.com/studio-b12/gowebdav.(*Client).ReadDir()
      /Users/jarek/Projects/gowebdav/client.go:164 +0x1fb
  main.main.func1()
      /Users/jarek/Projects/repro/main.go:48 +0x9f

Goroutine 9 (running) created at:
  main.main()
      /Users/jarek/Projects/repro/main.go:46 +0x4de

Goroutine 8 (running) created at:
  main.main()
      /Users/jarek/Projects/repro/main.go:46 +0x4de

The bug is that two concurrent invocations of req() race to update the request headers stored in c.headers.
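
To make the failure mode concrete outside of gowebdav: http.Header is a plain Go map, so two unsynchronized Set calls on the same map are a data race and can crash with "concurrent map writes". A minimal, self-contained sketch (not the library's code):

package main

import (
	"net/http"
	"sync"
)

func main() {
	// Stands in for a client-wide header map that is shared by all requests.
	shared := http.Header{}

	var wg sync.WaitGroup
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// Unsynchronized writes to the same map: go run -race reports
			// a data race, and under enough load the runtime panics with
			// "concurrent map writes".
			shared.Set("Authorization", "Basic dXNlcjpwYXNzd29yZA==")
		}()
	}
	wg.Wait()
}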

I think two fixes are required:

  1. Ensure c.auth is set only once - use a mutex or sync.Once for that.
  2. Change the Authorize() method to take an http.Header and set authentication directly on each request instead of on the Client - move the call to after the headers are copied from c.headers (see the sketch below).
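
Roughly, the shape of the fix could look like this - the names and signatures below are illustrative, not gowebdav's actual API:

package sketch

// Illustrative sketch only - not gowebdav's actual types or signatures.

import (
	"encoding/base64"
	"net/http"
	"sync"
)

// Authorizer applies credentials to a single request's header map
// instead of mutating shared client state.
type Authorizer interface {
	Authorize(h http.Header)
}

type basicAuth struct{ user, pass string }

func (a *basicAuth) Authorize(h http.Header) {
	cred := base64.StdEncoding.EncodeToString([]byte(a.user + ":" + a.pass))
	h.Set("Authorization", "Basic "+cred)
}

type client struct {
	authOnce sync.Once
	auth     Authorizer
	headers  map[string]string // client-wide defaults, read-only after construction
}

func (c *client) req(method, url string) (*http.Request, error) {
	rq, err := http.NewRequest(method, url, nil)
	if err != nil {
		return nil, err
	}
	// Fix 1: pick the authorizer at most once, even when several
	// goroutines issue their first request concurrently.
	c.authOnce.Do(func() { c.auth = &basicAuth{user: "user", pass: "password"} })
	// Fix 2: copy the client-wide defaults first, then authorize the
	// individual request, so the credentials land on rq.Header only and
	// c.headers is never written.
	for k, v := range c.headers {
		rq.Header.Set(k, v)
	}
	c.auth.Authorize(rq.Header)
	return rq, nil
}

The ordering matters: because per-request authorization runs after the defaults are copied, nothing ever writes to the shared client, which is what makes concurrent use of one client safe.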

I have actually made a fix and will send a PR shortly.

@jkowalski thank you for your report and patch.
I skimmed through your improvement.
It's definitely a solution.

Here are the points I still need to think more about.

  1. The API change in Authorize. I'd love to avoid it.
  2. Losing the context (the Client) in Authorize.
  3. The mutex, which is needed just for concurrency.

Any updates on this? Without the fix in #37 I can reproduce this issue very frequently.

We are getting the same error and I would love to see the fix merged :)

@jozuenoon I suppose you could use @jkowalski's fork for the time being.
@jkowalski What is the reason for moving to http.Request from Client?
Also, in requests.go, why are the Authorize call and the Header.Add loop rearranged? It's been quite a while since I've worked on this, but if I remember correctly, that order was important.

The race condition is caused by unsynchronized writes to c.auth and parallel modifications of the Client from two separate goroutines.

While I'm sure this could be made safer with a mutex around the Client, it's not really necessary to share the client (for writes) at all, since all requests are individually authorized anyway. In this case the client just provides additional headers to be set on each request, which is what my PR does.

@jkowalski thank you!
I changed my mind and applied your patch to keep the library working.
We should tackle #38 as the next task.
Welcome!