avoid indexing every time?
wakamex opened this issue · 1 comment
Is there a feasible way to do this? Maybe it makes no sense if you're displaying file sizes, but if I'm using this as an `ls` substitute, it feels weird for it to take 5 seconds every time. Maybe let me set an `ls` alias that uses a cached index, and have other commands update that index?
Whether or not to leverage the cache is a decision made by the underlying `read_dir` API from libc; but rest assured the cache is indeed being utilized automatically. Check out the following screenshot: I color-coordinated the highlights for the stats from `iostat` on the left with the timed `erd` runs on the right. You'll notice two things:
- The subsequent `erd` run is approx 1.8 times faster than the first
- The amount of 4KB transfers per second (tps) is lower on the subsequent `erd` run
This all lends credence to the fact that we're not always making round-trips to the disk when disk data already exists in the cache.
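If you want to reproduce the effect yourself, here is a minimal Rust sketch that times two back-to-back traversals of the same tree. The `walk` function is a hypothetical stand-in, not `erd`'s actual traversal (which is parallel); the point is that the caching happens in the kernel as a side effect of the first pass, so the second pass is typically served from memory:

```rust
use std::fs;
use std::io;
use std::path::Path;
use std::time::Instant;

/// Recursively count entries under `path` — a stand-in for a full
/// directory traversal like the one erd performs.
fn walk(path: &Path) -> io::Result<u64> {
    let mut n = 0;
    for entry in fs::read_dir(path)? {
        let entry = entry?;
        n += 1;
        if entry.file_type()?.is_dir() {
            n += walk(&entry.path())?;
        }
    }
    Ok(n)
}

fn main() -> io::Result<()> {
    let target = Path::new(".");

    // First pass may make round-trips to disk; the directory metadata it
    // touches lands in the kernel page cache as a side effect.
    let t0 = Instant::now();
    let n1 = walk(target)?;
    let cold = t0.elapsed();

    // Second pass over the same tree is usually served from the cache,
    // without the program doing anything special.
    let t1 = Instant::now();
    let n2 = walk(target)?;
    let warm = t1.elapsed();

    assert_eq!(n1, n2);
    println!("{} entries; first: {:?}, second: {:?}", n1, cold, warm);
    Ok(())
}
```

The exact speedup depends on how warm the cache already was, so treat the printed timings as illustrative rather than a benchmark.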
Now as to why `erd` may not be 100% suitable as an `ls` alternative: ultimately, in order to get disk usage information for directories, we need to do a full-depth traversal of a directory's descendants. Even if you limit the display depth to 1 (`-L1`), `erd` still accurately reports disk usage, which necessitates the full traversal.
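The reason a display-depth limit can't save any work can be seen in a small sketch. The `dir_size` function below is illustrative only (erd's real implementation is parallel and reports block-level usage, not just logical file sizes), but it shows why the size shown for a top-level directory depends on every one of its descendants:

```rust
use std::fs;
use std::io;
use std::path::Path;

/// Recursively sum the byte sizes of all files beneath `path`.
/// Even if the UI only *displays* depth 1, the number shown next to a
/// directory is only knowable after visiting its entire subtree, so the
/// traversal must go to full depth regardless of display depth.
fn dir_size(path: &Path) -> io::Result<u64> {
    let mut total = 0;
    for entry in fs::read_dir(path)? {
        let entry = entry?;
        let meta = entry.metadata()?;
        if meta.is_dir() {
            // Must recurse past any display-depth cutoff.
            total += dir_size(&entry.path())?;
        } else {
            total += meta.len();
        }
    }
    Ok(total)
}

fn main() -> io::Result<()> {
    // Build a tiny tree: root/sub/file.txt (9 bytes) + root/top.txt (4 bytes).
    let root = std::env::temp_dir().join("erd_size_demo");
    let _ = fs::remove_dir_all(&root);
    fs::create_dir_all(root.join("sub"))?;
    fs::write(root.join("sub").join("file.txt"), "nine byte")?;
    fs::write(root.join("top.txt"), "four")?;

    // The 13 bytes reported for `root` require descending into `sub`,
    // even though a depth-1 listing never shows `sub`'s contents.
    let total = dir_size(&root)?;
    assert_eq!(total, 13);
    println!("total bytes: {}", total);

    fs::remove_dir_all(&root)?;
    Ok(())
}
```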
There IS a `--suppress-size` option, but this issue is still ongoing.
Would you want `ls`-like performance at the cost of disk usage information? Could I also ask how you're currently using `erd` as an `ls` alternative?