donmccurdy/glTF-Transform

Continuous benchmarking

Basic benchmarks were added to the project in #1202. I'd like to set up continuous benchmarking, such that:

  1. benchmark results are written to a CSV or JSON file
  2. each release appends its results to the archive
  3. each PR compares its results against those of the last release and the last commit
  4. non-trivial regressions are flagged in PR review

I've reviewed existing GitHub Actions and they don't fit these goals well, so this will likely need a somewhat custom setup.
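
For the comparison and flagging steps, a minimal sketch of what such a custom script could look like is below. It assumes results are archived as JSON; the file paths (`results/baseline.json`, `results/current.json`), the result shape (`name`, `meanMs`), and the 10% regression threshold are all placeholder assumptions for illustration, not the project's actual conventions.

```ts
// compare-benchmarks.ts: hypothetical comparison script, not actual project tooling.
// Assumes each run writes its results as JSON: an array of { name, meanMs } entries.
import { readFileSync, existsSync } from 'node:fs';

interface BenchResult {
  name: string;
  meanMs: number; // mean wall-clock time per iteration, in milliseconds
}

const BASELINE_PATH = 'results/baseline.json'; // assumed: archived results from the last release/commit
const CURRENT_PATH = 'results/current.json';   // assumed: results from the current run
const THRESHOLD = 0.1; // assumed: flag anything more than 10% slower as a regression

function loadResults(path: string): BenchResult[] {
  return JSON.parse(readFileSync(path, 'utf8')) as BenchResult[];
}

function main(): void {
  if (!existsSync(BASELINE_PATH)) {
    console.log('No baseline results found; skipping comparison.');
    return;
  }

  const baseline = new Map(loadResults(BASELINE_PATH).map((r) => [r.name, r]));
  const current = loadResults(CURRENT_PATH);

  let regressions = 0;
  for (const result of current) {
    const prev = baseline.get(result.name);
    if (!prev) continue; // new benchmark, nothing to compare against

    const delta = (result.meanMs - prev.meanMs) / prev.meanMs;
    const sign = delta >= 0 ? '+' : '';
    console.log(
      `${result.name}: ${prev.meanMs.toFixed(2)}ms → ${result.meanMs.toFixed(2)}ms` +
        ` (${sign}${(delta * 100).toFixed(1)}%)`
    );
    if (delta > THRESHOLD) regressions++;
  }

  if (regressions > 0) {
    console.error(`${regressions} benchmark(s) regressed by >${THRESHOLD * 100}%.`);
    process.exitCode = 1; // fail the CI job so the regression shows up in PR review
  }
}

main();
```

A CI workflow could then restore the archived baseline, run the benchmarks, and invoke a script along these lines; a non-zero exit code fails the check (or triggers a PR comment), which covers the regression-flagging goal.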