larq/compute-engine

Benchmarking custom model

Closed this issue · 3 comments

Hi,

Is it possible to run benchmarking on a model (e.g., a custom model built/trained using larq) as described in the "Android phone" section of the benchmarking guide (https://docs.larq.dev/compute-engine/benchmark/)?

Thanks in advance.
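For anyone landing here with the same question: the workflow sketched below is one way to do this, based on the LCE converter API (`convert_keras_model`) and the `lce_benchmark_model` binary described in the benchmarking guide linked above. The file names and the `--num_runs` flag value are placeholders; check the guide for the exact binary build/download steps.

```shell
# Convert the custom Larq Keras model to an LCE-compatible .tflite
# flatbuffer using the documented converter API.
# 'my_model.h5' and 'my_model.tflite' are hypothetical file names.
python -c "
import tensorflow as tf
import larq_compute_engine as lce

model = tf.keras.models.load_model('my_model.h5')
with open('my_model.tflite', 'wb') as f:
    f.write(lce.convert_keras_model(model))
"

# Push the LCE benchmark binary and the converted model to the phone,
# then run the benchmark over ADB.
adb push lce_benchmark_model /data/local/tmp
adb push my_model.tflite /data/local/tmp
adb shell chmod +x /data/local/tmp/lce_benchmark_model
adb shell /data/local/tmp/lce_benchmark_model \
    --graph=/data/local/tmp/my_model.tflite --num_runs=50
```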

I was able to run it on an Android phone, but I have another question about the inference times.

"Inference timings in us" what does "us" mean? is it micro seconds?

Good to hear that you got it working!

"Inference timings in us" what does "us" mean? is it micro seconds?

That is correct.
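Since the benchmark reports timings in microseconds ("us", i.e. µs), a small conversion helper can make the numbers easier to read. This is just a sketch; the sample timing values below are hypothetical, not from a real benchmark run.

```python
def us_to_ms(us: float) -> float:
    """Convert a timing in microseconds to milliseconds (1 ms = 1000 us)."""
    return us / 1000.0

# Hypothetical benchmark output values in microseconds.
timings_us = {"init": 1234.0, "inference (avg)": 45000.0}

# Convert everything to milliseconds for readability.
timings_ms = {name: us_to_ms(value) for name, value in timings_us.items()}
print(timings_ms["inference (avg)"])  # 45.0
```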