tlc-pack/TLCBench

Roadmap for a Reproducible TVM Benchmark

merrymercy opened this issue · 3 comments

Motivation

Currently, TVM lacks an up-to-date and reproducible benchmark. The only existing benchmark lives in tvm/apps/benchmark, but it is outdated and has several flaws.

  1. The results were obtained 2 years ago.
  2. The deep learning models are old. It does not include newer models (e.g., BERT, EfficientNet).
  3. The input format is TVM's internal Relay format. It does not accept models from high-level frameworks (e.g., PyTorch, MXNet) or open exchange formats (e.g., ONNX).
  4. It does not cover Intel CPUs.
  5. It only provides pre-tuned configurations by tophub, but does not provide the scripts to generate these configurations.

This repo aims to build a new, open, reproducible benchmark for TVM. When the repo is ready, we can run evaluations nightly and auto-tuning weekly or monthly.

Approach

As the first step, we target three models, three hardware platforms, and four code generation strategies.
To make comparison with other frameworks easier, we choose ONNX as the input model format.

  • models: resnet-50, mobilenet v2 and BERT with batch size 1
  • hardware platforms: NVIDIA GPU, Intel CPU, ARM CPU
  • code generation strategies: autotvm, auto-scheduler, tvm + manual library, ONNX-runtime.

All logs generated during auto-tuning should be uploaded for future reference.

Roadmap

Task 1: Add autotvm benchmark

reference: the old autotvm benchmark

  • Implement auto-tuning scripts by following the tutorials
  • Implement evaluation scripts by following the old benchmark
  • Use ONNX as the input format by following the front end tutorials. You can find models in the ONNX model zoo or other reliable sources.

Task 2: Add auto-scheduler benchmark

  • Implement auto-tuning scripts by following the tutorials
  • Implement evaluation scripts by following the old autotvm benchmark

Task 3: Add ONNX-runtime benchmark

reference: https://github.com/microsoft/onnxruntime

Task 4: Add tvm + manual library benchmark

reference: https://tvm.apache.org/docs/tutorials/frontend/using_external_lib.html

cc @tlc-pack/tlcpack-committer

ShuffleNet is also a popular model in production usage. Any plan to support it?

@hanzz2007 It is not on my agenda, but contributions are welcome.
I pushed updated scripts and some results to the main branch. You can easily plug in your own model.