Question about the microlite.interpreter API
mocleiri opened this issue · 3 comments
I'm splitting this question from surajkumarpandey in #99 out into a separate issue.
Is there documentation for the functions/objects used in the hello_world example? I am trying to build a custom network that requires a different configuration to run. For instance, I wanted to know about the second parameter of microlite.interpreter(): what does its size refer to, and are the callbacks optional?
The only documentation is the examples themselves and the microlite module code.
The parameters for microlite.interpreter are (see the sketch after this list):
- TensorFlow Lite for Microcontrollers model loaded into a bytearray. The size of the array should match the file size of the model.
- Arena size. TFLM needs a memory area to run inference within. This number varies depending on the model being used.
- Callback function to setup input tensor.
- Callback function to extract data from output tensor.
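Putting those four parameters together, a minimal sketch might look like the following. This is not verbatim from the repo: the tensor accessor names (getInputTensor, setValue, getValue) follow the hello-world example, and "model.tflite" plus the 2048-byte arena are placeholders you would adjust for your own model.

```python
import microlite

# Load the model; the bytearray size must match the model's file size.
model_file = open("model.tflite", "rb")  # placeholder filename
model = bytearray(model_file.read())
model_file.close()

def input_callback(interpreter):
    # Fill the input tensor before inference runs.
    input_tensor = interpreter.getInputTensor(0)
    input_tensor.setValue(0, 0)  # (index, value) -- adapt to your input shape

def output_callback(interpreter):
    # Read the result out of the output tensor after inference.
    output_tensor = interpreter.getOutputTensor(0)
    print(output_tensor.getValue(0))

# Arena size of 2048 bytes is a placeholder; your model may need more.
interp = microlite.interpreter(model, 2048, input_callback, output_callback)
interp.invoke()  # runs input_callback, inference, then output_callback
```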
The callbacks are called when the microlite.interpreter.invoke() method is called.
They can be stubbed with no-op functions if you want to do the data setup and data extraction in your main loop before and after invoking the interpreter.
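A sketch of that stubbed approach, assuming the same hello-world-style accessors are available on the interpreter object itself (it is the same object passed into the callbacks):

```python
import microlite

def noop(interpreter):
    pass  # data handling happens in the main loop instead

model = bytearray(open("model.tflite", "rb").read())
interp = microlite.interpreter(model, 2048, noop, noop)

# Main loop: set up the input, invoke, then read the output directly.
interp.getInputTensor(0).setValue(0, 0)
interp.invoke()
result = interp.getOutputTensor(0).getValue(0)
```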
I would suggest starting from the callback approach. Use Netron to understand the shape of the input and output tensors of your custom model, then adjust each callback to work with the shape of your specific model, as in the sketch below.
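For example, supposing (hypothetically) Netron reports an input shape of [1, 28, 28, 1] (784 values) and an output shape of [1, 10], the callbacks would loop over the flattened tensors:

```python
pixels = bytearray(28 * 28)  # placeholder for your flattened input data

def input_callback(interpreter):
    input_tensor = interpreter.getInputTensor(0)
    for i in range(28 * 28):
        input_tensor.setValue(i, pixels[i])

def output_callback(interpreter):
    output_tensor = interpreter.getOutputTensor(0)
    scores = [output_tensor.getValue(i) for i in range(10)]
    print(scores)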
Hey, just to confirm: to build an example like hello-world from scratch for a custom network, is referring to the following files enough: openmv-libtf.cpp and tensorflow-microlite.c?
Are there any other files I should refer to for the custom network?
The purpose of the micropython firmware is that plugging in and experimenting with a new model can all be done from the micropython side.
It's true that you can look at those C++ and C files for additional context, but in general you just need to upload the model into the file system, then adjust the input callback to set up the input tensor and the output callback to read out the inference result for that input.