- Obtain access to the ImageNet data using this link.
- This repository contains an evaluation script to measure the performance of the selected model. However, the Coral Dev Board does not appear to be supported as a delegate runtime, so we have slightly modified the original script to handle the Edge TPU device.
- The data used to evaluate the performance of these models is the validation set of ILSVRC2012 (6.3 GB).
- Install the specific version of `tflite_runtime` for your device - Releases. For Windows 10 with Python 3.7:

  ```
  pip install tflite_runtime-2.5.0-cp37-cp37m-win_amd64.whl
  ```
- Install the requirements:

  ```
  pip install -r requirements.txt
  ```
- Clone the PyCoral package to the Coral device: https://github.com/google-coral/pycoral
- Go to the `pycoral/test_data` path:

  ```
  cd pycoral/test_data
  ```
- Download the models to the Coral Dev Board from the official website - here:

  ```
  wget <url_model>
  ```
- Go to `pycoral/benchmarks/reference` and modify the file `inference_reference_aarch64.csv` with the models selected for the benchmarking.
- Before running the benchmark test we need to install `cpupower`:

  ```
  sudo apt-get install linux-cpupower
  ```
- Run the benchmark script using:

  ```
  python3 inference_benchmarks.py
  ```
- The script generates a `.csv` file with the results. The file is saved in the `tmp/results` folder.
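The exact column layout of the generated results file depends on the benchmark script; as a rough sketch, assuming a hypothetical `MODEL,INFERENCE_TIME` header, the results could be loaded like this:

```python
import csv


def summarize_results(path):
    """Read a benchmark results CSV and return {model_name: inference_time}.

    Assumes a hypothetical MODEL,INFERENCE_TIME header; adjust the field
    names to match the file actually produced in tmp/results.
    """
    results = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            results[row["MODEL"]] = float(row["INFERENCE_TIME"])
    return results
```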
- The models from the official Coral website have been trained using only 1000 labels from the ImageNet dataset. We need to obtain the labels map file from here.
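PyCoral itself ships a `read_label_file` helper for this, but a minimal stand-in is easy to sketch. This assumes the common label-file layout of one `index label` pair per line (e.g. `0  background`):

```python
def read_label_file(path):
    """Parse a labels map file into {class_index: label}.

    Assumes each non-empty line holds an integer index followed by the
    label text, e.g. "0  background".
    """
    labels = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            idx, label = line.split(maxsplit=1)
            labels[int(idx)] = label
    return labels
```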
- Extract the validation set downloaded previously to a folder.
- Extract labels from the ImageNet validation set using this official script (also included in this repository).
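The idea behind that extraction step can be sketched as follows. This is a hypothetical illustration, not the official script: it assumes a ground-truth file with one class index per line, ordered to match the numbered validation images (`ILSVRC2012_val_00000001.JPEG`, `..._00000002.JPEG`, and so on):

```python
def build_validation_labels(ground_truth_path, output_path):
    """Write "<image_name> <class_index>" lines for the validation set.

    Hypothetical sketch: assumes ground_truth_path lists one class index
    per line, in the same order as the numbered validation images.
    """
    with open(ground_truth_path) as src, open(output_path, "w") as dst:
        for i, line in enumerate(src, start=1):
            # Validation images are numbered sequentially from 00000001.
            name = "ILSVRC2012_val_%08d.JPEG" % i
            dst.write("%s %s\n" % (name, line.strip()))
```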
- Execute `imagenet_evaluate.py` using:

  ```
  python imagenet_evaluate.py -m path/to/edgetpu_model.tflite -i path/to/imagenet/validation/folder -v path/to/generated_validation_labels.txt -l path/to/model_labels.txt
  ```
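At its core, this evaluation boils down to comparing each image's predicted class against its ground-truth label. A minimal sketch of that tally (the function name and inputs here are illustrative, not the script's actual API):

```python
def top_k_accuracy(predictions, truths, k=1):
    """Fraction of images whose true class is among the k highest-scored classes.

    predictions: list of per-image score lists (one score per class index).
    truths: list of ground-truth class indices, in the same order.
    """
    hits = 0
    for scores, truth in zip(predictions, truths):
        # Indices of the k highest-scoring classes for this image.
        top_k = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
        if truth in top_k:
            hits += 1
    return hits / len(truths)
```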