A command-line benchmarking tool.
Demo: Benchmarking `fd` and `find`.
- Statistical analysis across multiple runs.
- Support for arbitrary shell commands.
- Constant feedback about the benchmark progress and current estimates.
- Warmup runs can be executed before the actual benchmark.
- Cache-clearing commands can be set up before each timing run.
- Statistical outlier detection to identify interference from other programs and caching effects.
- Export results to various formats: CSV, JSON, Markdown, AsciiDoc.
- Parameterized benchmarks (e.g. vary the number of threads).
- Cross-platform.
To run a benchmark, you can simply call `hyperfine <command>...`. The argument(s) can be any
shell command. For example:

```
hyperfine 'sleep 0.3'
```
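Since each argument is interpreted by a shell, compound commands such as pipelines also work. A minimal illustration (the pipeline itself is arbitrary and only serves as an example):

```
# Benchmark a whole pipeline; the quotes keep it a single argument
hyperfine 'seq 100000 | sort -n > /dev/null'
```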
Hyperfine will automatically determine the number of runs to perform for each command. By default,
it will perform at least 10 benchmarking runs. To change this, you can use the `-m`/`--min-runs` option:

```
hyperfine --min-runs 5 'sleep 0.2' 'sleep 3.2'
```
If the program execution time is limited by disk I/O, the benchmarking results can be heavily influenced by disk caches and whether they are cold or warm.
If you want to run the benchmark on a warm cache, you can use the `-w`/`--warmup` option to perform
a certain number of program executions before the actual benchmark:

```
hyperfine --warmup 3 'grep -R TODO *'
```
Conversely, if you want to run the benchmark for a cold cache, you can use the `-p`/`--prepare`
option to run a special command before each timing run. For example, to clear hard disk caches
on Linux, you can run:

```
sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
```
To use this specific command with Hyperfine, call `sudo -v` to temporarily gain sudo permissions
and then call:

```
hyperfine --prepare 'sync; echo 3 | sudo tee /proc/sys/vm/drop_caches' 'grep -R TODO *'
```
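Putting the two steps together, a typical cold-cache session looks like this (identical commands, shown in the order they should be run):

```
# Cache sudo credentials first so the prepare command never blocks on a password prompt
sudo -v
# Drop the page cache before each timing run, then benchmark
hyperfine --prepare 'sync; echo 3 | sudo tee /proc/sys/vm/drop_caches' 'grep -R TODO *'
```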
If you want to run a benchmark where only a single parameter is varied (say, the number of
threads), you can use the `-P`/`--parameter-scan` option and call:

```
hyperfine --prepare 'make clean' --parameter-scan num_threads 1 12 'make -j {num_threads}'
```

This runs a separate benchmark for each value of `num_threads` from 1 to 12, substituting the current value for the `{num_threads}` placeholder.
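The placeholder mechanism is not tied to `make`. As a self-contained sketch that runs anywhere `sleep` is available, you could scan over sleep durations instead:

```
# Runs one full benchmark per parameter value: sleep 1, sleep 2, sleep 3
hyperfine --parameter-scan delay 1 3 'sleep {delay}'
```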
Hyperfine has multiple options for exporting benchmark results: CSV, JSON, Markdown (see the `--help`
text for details). To export results to Markdown, for example, you can use the `--export-markdown`
option that will create tables like this:
| Command | Mean [s] | Min [s] | Max [s] | Relative |
|:---|---:|---:|---:|---:|
| `find . -iregex '.*[0-9]\.jpg$'` | 2.395 ± 0.033 | 2.355 | 2.470 | 7.7 |
| `find . -iname '*[0-9].jpg'` | 1.416 ± 0.029 | 1.389 | 1.494 | 4.6 |
| `fd -HI '.*[0-9]\.jpg$'` | 0.309 ± 0.005 | 0.305 | 0.320 | 1.0 |
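The corresponding invocation could look like this; the output file name and the benchmarked commands are just placeholders:

```
# Benchmark two commands and write the comparison table to a Markdown file
hyperfine --export-markdown results.md 'sleep 0.2' 'sleep 0.3'
```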
The JSON output is useful if you want to analyze the benchmark results in more detail. See the
`scripts/` folder for some examples.
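As a minimal sketch, assuming `jq` is installed and that the exported JSON contains a `results` array with a `mean` field per command, you could extract a single statistic on the command line:

```
# Export raw timing data, then pull out the mean runtime of the first command
hyperfine --export-json results.json 'sleep 0.3'
jq '.results[0].mean' results.json
```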
On Debian-based systems, download the appropriate `.deb` package from the Release page
and install it via `dpkg`:

```
wget https://github.com/sharkdp/hyperfine/releases/download/v1.6.0/hyperfine_1.6.0_amd64.deb
sudo dpkg -i hyperfine_1.6.0_amd64.deb
```
On Alpine Linux, hyperfine can be installed from the official repositories:

```
apk add hyperfine
```
On Arch Linux, hyperfine can be installed from the AUR:

```
yaourt -S hyperfine
```
On Void Linux, hyperfine can be installed via `xbps`:

```
xbps-install -S hyperfine
```
Hyperfine can be installed via Homebrew:

```
brew install hyperfine
```
Hyperfine can be installed via `conda` from the `conda-forge` channel:

```
conda install -c conda-forge hyperfine
```
Hyperfine can be installed via `cargo`:

```
cargo install hyperfine
```

Make sure that you use Rust 1.30 or higher.
Alternatively, you can download a prebuilt binary archive for your platform from the Release page.
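Whichever installation method you use, you can verify afterwards that hyperfine is available:

```
# Prints the installed version if the binary is on your PATH
hyperfine --version
```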
Hyperfine is inspired by bench.
The name hyperfine was chosen in reference to the hyperfine levels of caesium 133 which play a crucial role in the definition of our base unit of time — the second.