Benchmarking custom model
Closed this issue · 3 comments
aqibsaeed commented
Hi,
Is it possible to benchmark a custom model built and trained with Larq, following the Android phone section of the benchmarking guide (https://docs.larq.dev/compute-engine/benchmark/)?
Thanks in advance.
aqibsaeed commented
I was able to run it on an Android phone, but I have another question about the inference times.
In "Inference timings in us", what does "us" mean? Is it microseconds?
aqibsaeed commented
Got it from here: https://docs.larq.dev/compute-engine/end_to_end/
Tombana commented
Good to hear that you got it working!
"Inference timings in us" what does "us" mean? is it micro seconds?
That is correct.
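Since the benchmark reports times in microseconds, a small helper can make the numbers easier to read. Below is a minimal sketch that parses the average inference time from a benchmark summary line and converts it to milliseconds. The exact line format is an assumption modeled on the TFLite benchmark tool's output, not something confirmed in this thread.

```python
import re

def parse_avg_inference_us(line: str) -> float:
    """Extract the average inference time (in microseconds) from a
    benchmark summary line. The 'Inference (avg): <value>' format is
    an assumption based on the TFLite benchmark tool's output."""
    match = re.search(r"Inference \(avg\): ([\d.]+)", line)
    if match is None:
        raise ValueError(f"no average timing found in: {line!r}")
    return float(match.group(1))

# Hypothetical output line, for illustration only:
line = "Inference timings in us: Init: 1200, First inference: 3500, Inference (avg): 2980.5"
avg_us = parse_avg_inference_us(line)
print(f"average: {avg_us} us ({avg_us / 1000} ms)")
```

Dividing by 1000 converts microseconds to milliseconds; divide by 1,000,000 for seconds.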