Slow seeking through large file
Hello,
I wrote a program that extracts certain metadata from a large file (~100 GB) mounted via sshfs. The extracted data amounts to only about 1 MB, but it is spread across the large file in many small chunks at known byte offsets.
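To make the access pattern concrete, here is a minimal sketch of what the program does (the mount point, offsets, and chunk sizes are made up for illustration):

```python
# Hypothetical mount point and offsets; the real offsets come from an index.
OFFSETS = [(1_000_000, 64), (5_000_000_000, 128), (99_000_000_000, 256)]

# buffering=0 avoids Python's own block-sized reads on top of FUSE's.
with open("/mnt/sshfs/large.bin", "rb", buffering=0) as f:
    for offset, size in OFFSETS:
        f.seek(offset)
        chunk = f.read(size)  # only `size` bytes are wanted here
```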
Now the problem is that seeking through this file takes much longer than expected, so I suspect that too many (unwanted) bytes are being read, either because of overly large buffer/block sizes or because of some readahead optimization.
I already tried setting `-o no_readahead` and reducing `-o max_read=1000`. Neither seems to have an effect in my case. Does anyone have an idea what else I could try? Does anyone know whether there is additional buffering or a large block size in the kernel or on the server side?
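One client-side kernel knob that might be relevant is `posix_fadvise` with `POSIX_FADV_RANDOM`, which asks the kernel to turn off readahead for a file descriptor; whether the FUSE layer honors this hint over sshfs is unclear to me. A sketch, with a hypothetical path:

```python
import os

# POSIX_FADV_RANDOM asks the kernel to disable readahead on this
# descriptor (Linux-only; the path here is hypothetical).
fd = os.open("/mnt/sshfs/large.bin", os.O_RDONLY)
os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_RANDOM)  # length 0 covers the whole file
chunk = os.pread(fd, 64, 1_000_000)  # pread reads at an offset without seeking
os.close(fd)
```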
Thank you very much in advance
Is it any faster if you don't go through SSHFS, i.e. if you run the program directly on your remote machine?
Just an idea to determine whether the latency comes from the disk of your remote machine (because reading files is slow there) or whether it's SSHFS or the network speed that's causing the problem.
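As a rough way to measure that, something like the following could be run twice, once against the sshfs mount and once directly on the remote machine (paths and offsets are placeholders):

```python
import time

def time_reads(path, offsets, size=64):
    """Time scattered small reads against one path."""
    start = time.monotonic()
    with open(path, "rb", buffering=0) as f:
        for off in offsets:
            f.seek(off)
            f.read(size)
    return time.monotonic() - start

# Placeholder offsets: one small read per GB across ~100 GB.
offsets = range(0, 100 * 10**9, 10**9)
print(time_reads("/mnt/sshfs/large.bin", offsets))   # through sshfs
# On the remote host: print(time_reads("/data/large.bin", offsets))
```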
Which version of SSHFS are you using?