Python's timeit module, rewritten in C++11: a simple, header-only benchmarking library for quick experiments. Probably not only for those :)
Download timeit.hpp. The minimal example is:
#include "timeit.hpp"
void main() {
timeit([] {
pow(1, 2);
});
}
min: 85.780ns, mean: 87.365ns (3 runs, 10000 loops each)
You can also access the results and disable the default output:
_timeit::autoprint = false;
long double exact = 0, approx = 0;
_timeit::Stats results1 = timeit([&] { exact += distance(10., 100.); });
_timeit::Stats results2 = timeit([&] { approx += approx_distance(10., 100.); });
cout << "Accuracy: " << (1. - abs((exact - approx) / exact)) * 100. << "%" << endl;
cout << "Performance: " << fixed << setprecision(2) << (results1.fast / results2.fast) * 100. << "%" << endl;
Accuracy: 97.8629%
Performance: 204.77%
Please have a look at the source file for the exact source code.
Features:
- Checks the granularity/resolution of the timer in use and guesses the number of iterations based on that, so it should take neither too much nor too little time (a rough sketch of the idea follows this list).
- Precision down to nanoseconds.
- Should also work well on the scale of years (though that hasn't been tested so heavily).
- There are some unit tests, and they run at program startup, so you have a better chance of being warned about possible mistakes if there are any (the project is tiny and young :).
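Below is a minimal sketch of the idea behind that first feature, not timeit's actual implementation: measure the smallest step the clock can resolve, then grow the loop count until a single run is long enough (here, 1000x the resolution) for quantization error to be negligible. The names resolution_ns and guess_loops are made up for this illustration.

#include <chrono>
#include <cmath>
#include <iostream>

using Clock = std::chrono::steady_clock;

// Smallest observable non-zero step of the clock, in nanoseconds.
long long resolution_ns() {
    auto t0 = Clock::now();
    auto t1 = t0;
    while (t1 == t0) t1 = Clock::now();          // busy-wait for one tick
    return std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0).count();
}

// Double the loop count until one run lasts at least ~1000x the clock
// resolution, so the timer's granularity contributes roughly 0.1% error.
template <class F>
long long guess_loops(F&& f, long long res_ns) {
    for (long long loops = 1; ; loops *= 2) {
        auto start = Clock::now();
        for (long long i = 0; i < loops; ++i) f();
        auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(
                      Clock::now() - start).count();
        if (ns >= 1000 * res_ns) return loops;
    }
}

int main() {
    volatile double sink = 1.0;                  // volatile keeps the work from being removed
    long long res = resolution_ns();
    long long loops = guess_loops([&] { sink = std::pow(sink, 2.0); }, res);
    std::cout << "resolution: " << res << " ns, suggested loops: " << loops << "\n";
}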
TODO:
- Find the best number of repetitions
- Show standard deviation as well
- Rewrite the main function as macros to make sure that the code under benchmark is also inlined in debug mode (see the first sketch after this list)
- How to avoid the benchmarking loop being optimized out in release mode?
- How to take into account the cost of an empty loop (since it is optimized out)? The last two questions are touched on in the second sketch after this list.
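On the macro idea, here is a hypothetical sketch (TIMEIT_EXPR is not part of the library) of why it could help in debug builds: the expression is pasted directly into the timing loop, so no lambda call sits between the timer readings and the measured code, even when the optimizer does no inlining.

#include <chrono>
#include <cmath>
#include <iostream>

// Hypothetical macro, not timeit's API: the expression is expanded right
// inside the timing loop, so even an unoptimized build does not pay for a
// lambda call on every iteration.
#define TIMEIT_EXPR(expr, loops)                                           \
    do {                                                                   \
        auto _start = std::chrono::steady_clock::now();                    \
        for (long _i = 0; _i < (loops); ++_i) { expr; }                    \
        auto _ns = std::chrono::duration_cast<std::chrono::nanoseconds>(   \
                       std::chrono::steady_clock::now() - _start).count(); \
        std::cout << double(_ns) / (loops) << " ns per loop\n";            \
    } while (0)

int main() {
    volatile double x = 1.0;                 // volatile keeps the work alive
    TIMEIT_EXPR(x = std::pow(x, 2.0), 10000);
}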
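For the last two questions, one common answer (borrowed from benchmark libraries such as Google Benchmark, not something timeit does today) is an "escape" that makes the compiler assume a value is observed externally, plus a baseline run with an escaped empty body whose per-loop cost is subtracted from the real measurement. The helpers escape and ns_per_loop below are illustrative names, and the inline asm trick works on GCC and Clang only.

#include <chrono>
#include <cmath>
#include <iostream>

// Make the compiler assume the value is read by the outside world, so the
// computation producing it cannot be dropped (GCC/Clang inline asm trick).
template <class T>
inline void escape(T const& value) {
    asm volatile("" : : "g"(&value) : "memory");
}

// Average time of one loop iteration, in nanoseconds.
template <class F>
double ns_per_loop(F&& f, long loops) {
    auto start = std::chrono::steady_clock::now();
    for (long i = 0; i < loops; ++i) f();
    auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(
                  std::chrono::steady_clock::now() - start).count();
    return double(ns) / loops;
}

int main() {
    const long loops = 1000000;
    volatile double x = 1.1;  // volatile input, so pow is not constant-folded
    // Baseline: an "empty" body kept alive by escaping a dummy value.
    double overhead = ns_per_loop([] { int dummy = 0; escape(dummy); }, loops);
    // Payload: escape the result so the whole loop is not optimized away.
    double payload = ns_per_loop([&] { double r = std::pow(x, 2.2); escape(r); }, loops);
    std::cout << "raw: " << payload << " ns/loop, corrected: "
              << (payload - overhead) << " ns/loop\n";
}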