Really large file with concurrent access hits EIO
diwu1989 opened this issue · 3 comments
When I run ./fakedatafs /tmp/test -m 3000000000 -n 1
and start multiple concurrent readers, halfway through the ingest I end up hitting EIO.
There might be a race condition or a data corruption issue with large files over 1TB.
Oh, interesting. I haven't used this project in a long while, so I'm not sure what's going on. I can try to look into the issue; could you provide instructions on how to reproduce this?
I seem to have fixed it by adding a mutex to the contFileReader struct in file.go and locking/unlocking it in func (rd *contFileReader) Read(p []byte) (int, error).
This makes me think the same contFileReader instance was being accessed by multiple threads, and the readers were clobbering each other's state.
We're running with this patch in place, but I did notice that throughput dropped from over 500MB/s to around 300MB/s due to the added synchronization.
Can you please confirm it's a race condition by building fakedatafs with go build -race and running it again? It should then print a stack trace whenever a data race is detected.
How can I test with multiple concurrent readers? How did you reproduce the issue?
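For what it's worth, a reproduction along these lines should drive concurrent readers against the mount. The file name under the mountpoint is hypothetical; substitute whatever fakedatafs actually exposes there:

```shell
# Build with the race detector and mount the synthetic filesystem
# (flags taken from the report above).
go build -race
./fakedatafs /tmp/test -m 3000000000 -n 1 &

# Hypothetical file name: read the same large file from several
# processes at once and watch for EIO errors from dd.
for i in 1 2 3 4; do
  dd if=/tmp/test/file-1 of=/dev/null bs=1M &
done
wait
```

Any data race the detector catches should show up as a WARNING with stack traces on the fakedatafs process's stderr while the readers run.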