Testing framework of benchmarks
fontikar opened this issue · 2 comments
The benchmark tests keep failing, possibly related to seq_len. The benchmark and actual output look different; there seems to be some sort of shift in the columns.
My suggestion is to move to snapshot testing, which is designed for "Text output that includes many characters like quotes and newlines that require special handling in a string."
I don't think we can get around the fact that the tests will fail whenever we improve the algorithm, but updating the benchmarks with snapshot tests is actually very simple. When a snapshot changes, testthat tells you how to accept or review it:
#> * Run `testthat::snapshot_accept('snapshotting.Rmd')` to accept the change.
#> * Run `testthat::snapshot_review('snapshotting.Rmd')` to interactively review the change.
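For reference, a minimal sketch of what a snapshot test could look like here; `build_benchmark_output()` is a hypothetical stand-in for whatever function currently produces the benchmarked table, not a function in this package:

```r
library(testthat)

# Hypothetical helper that produces the benchmarked table; stubbed here so
# the sketch is self-contained.
build_benchmark_output <- function() {
  data.frame(original_name = c("a", "b"), aligned_name = c("A", "B"))
}

test_that("benchmark output is stable", {
  # Records the printed table under tests/testthat/_snaps/ on the first run,
  # then fails with a readable diff whenever the output changes.
  expect_snapshot(print(build_benchmark_output()))
})
```

If a change is an intentional improvement, `testthat::snapshot_accept()` updates the stored snapshot, and `testthat::snapshot_review()` shows the diff interactively.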
Happy to discuss more and trial this on a branch if you both think it's helpful.
Having just worked through some refactoring, I can confidently say the testing is very good and useful!
I've implemented some changes in #196 and c68dc70 to:
- compare only a few hundred lines for the state diversity tests
- ignore taxon_ID for the alignment tests
- add extra tests for functions like standardise_names and extract_genus (see the sketch below)
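For illustration, a minimal sketch of the kind of unit tests added for these helpers; the expected values and behaviours below are assumptions for the example, not taken from the package's actual test suite:

```r
library(testthat)

test_that("extract_genus() returns the genus component", {
  # Illustrative expectation only: assumes the first word of a binomial
  # is returned as the genus.
  expect_equal(extract_genus("Eucalyptus globulus"), "Eucalyptus")
})

test_that("standardise_names() returns a character vector of matching length", {
  input <- c("Eucalyptus globulus", "Acacia dealbata")
  out <- standardise_names(input)
  expect_type(out, "character")
  expect_length(out, length(input))
})
```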