A DNN tuning framework for mobile devices
This project automates benchmarking of AI workloads on Android mobile phones.
- One Android phone with system version >= 5.0 and a 64-bit ARM processor
- Make sure Android developer mode (USB debugging) is enabled
- Install the Android SDK tools on your PC and make sure you can connect to the phone via adb
- For performance profiling, it is better to root the phone
- Download and install Snapdragon Profiler (optional, for energy profiling)
All models and benchmark tools are hosted on Google Drive.
- Click the link and download `tf_benchmark_model` and `tflite_benchmark_model`.
- Push the benchmark tools to your device and make them executable:

```shell
adb push path/to/tf_benchmark_model /data/local/tmp/
adb shell "chmod +x /data/local/tmp/tf_benchmark_model"
adb push path/to/tflite_benchmark_model /data/local/tmp/
adb shell "chmod +x /data/local/tmp/tflite_benchmark_model"
```
Model files whose names start with `frozen_` are TensorFlow models. Files ending in `.tflite` are TFLite models, and files ending in `quantized.tflite` are quantized TFLite models. Download the model(s) you are interested in.
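The naming convention above can be sketched as a small helper. This is an illustrative function, not part of the repository; the `classify_model` name and its return labels are assumptions:

```python
import os

def classify_model(filename):
    """Classify a downloaded model file by its naming convention:

    - names starting with 'frozen_'        -> TensorFlow frozen graph
    - names ending with 'quantized.tflite' -> quantized TFLite model
    - other names ending with '.tflite'    -> float TFLite model
    """
    name = os.path.basename(filename)
    if name.startswith("frozen_"):
        return "TensorFlow"
    if name.endswith("quantized.tflite"):
        return "TFLite (quantized)"
    if name.endswith(".tflite"):
        return "TFLite"
    return "unknown"

print(classify_model("frozen_mobilenet-v1.pb"))         # TensorFlow
print(classify_model("mobilenet-v1.tflite"))            # TFLite
print(classify_model("mobilenet-v1_quantized.tflite"))  # TFLite (quantized)
```

Note that the quantized check must come before the generic `.tflite` check, since every quantized model name also ends in `.tflite`.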
- Push model(s) to your Android device:

```shell
adb shell "mkdir -p /sdcard/dnntune_models"
adb push path/to/your_downloaded_model_file /sdcard/dnntune_models
```
- Start benchmarking. Assume you have `mobilenet-v1.tflite` in `/sdcard/dnntune_models`:

```shell
python dnntune_models.py --framework TFLite --model_name mobilenet-v1 --device CPU --thread_number 2 --use_quantization 0
```
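When sweeping over several configurations (e.g. thread counts), it can help to build the command line programmatically. The sketch below is a hypothetical helper, not part of the repository; it only assumes the flag names shown in the example above:

```python
def build_benchmark_cmd(framework, model_name, device="CPU",
                        thread_number=2, use_quantization=0):
    """Build a dnntune_models.py invocation as an argv list.

    Flag names mirror the README example; defaults here are
    illustrative assumptions, not documented defaults.
    """
    return [
        "python", "dnntune_models.py",
        "--framework", framework,
        "--model_name", model_name,
        "--device", device,
        "--thread_number", str(thread_number),
        "--use_quantization", str(use_quantization),
    ]

# Sweep thread counts for the same model without retyping the command:
for threads in (1, 2, 4):
    print(" ".join(build_benchmark_cmd("TFLite", "mobilenet-v1",
                                       thread_number=threads)))
```

The argv list can be handed directly to `subprocess.run` on the host PC once the device is connected via adb.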
- Supported models are listed in `dnntune_models.py`.
Our work is based on open-source frameworks including TensorFlow, TFLite, MACE, and MNN.