Option to not re-run failed benchmarks / per-benchmark blacklist
mstimberg opened this issue · 4 comments
Hi,
in our project we run a benchmark that fails for very old revisions, simply because the functionality being tested did not exist back then. Unfortunately, every time we run all the benchmarks, vbench tries to re-run this benchmark for the revisions where it failed before. Is there any way to prevent this behaviour? Or would it be possible to blacklist revisions on a per-benchmark basis?
We could of course wrap the code in a try/except, but that would lead to very short run times for those revisions, somewhat spoiling the data.
Thanks
What I do is pass a `start_date` to the `Benchmark`, for example for new APIs that only exist after a certain date. Is that an option for you?
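To illustrate the idea, here is a minimal self-contained sketch (not vbench's actual internals; the `Benchmark`/`should_run` names and signatures here are hypothetical) of how a per-benchmark `start_date` lets a runner skip revisions that predate the feature under test:

```python
from datetime import datetime

# Hypothetical sketch, NOT vbench's real implementation: a benchmark
# carries a start_date, and the runner skips any revision older than it.
class Benchmark:
    def __init__(self, code, setup="", start_date=None, name=None):
        self.code = code
        self.setup = setup
        self.start_date = start_date  # revisions before this date are skipped
        self.name = name or code

def should_run(benchmark, revision_date):
    """Return True if the benchmark applies to a revision of this date."""
    if benchmark.start_date is None:
        return True  # no restriction: run for every revision
    return revision_date >= benchmark.start_date

# Benchmark for an API that (in this example) only exists from April 2012 on.
bm = Benchmark("do_new_api_thing()",
               start_date=datetime(2012, 4, 1),
               name="new_api_bench")

print(should_run(bm, datetime(2011, 1, 1)))  # → False: revision predates the API
print(should_run(bm, datetime(2013, 1, 1)))  # → True
```

The point is that the cutoff lives on each individual benchmark, so one old benchmark's restriction does not affect the rest of the suite.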
Oh, indeed it is! I did not see that every `Benchmark` has its own `start_date`; I was only using it on the `BenchmarkRunner`. This solves my current problem, though it still leaves the case where some intermediate revisions are broken only for a specific benchmark.
I think having a per-benchmark blacklist is a good idea, though. Maybe after the 2nd failure it gets blacklisted?
Sorry for the late reply... Yes, blacklisting a revision for a benchmark if it failed twice sounds good to me.