larq/compute-engine

Inference using TFLite with Larq custom ops

tch2841 opened this issue · 6 comments

Hi all, I am trying to perform inference with a custom TFLite model built using Larq custom ops. Sorry, I am new to TensorFlow; I strongly suspect I need a way for the TFLite interpreter to recognize the Larq custom ops, but I am a little confused about how to do so.

My bug:

RuntimeError: Encountered unresolved custom op: LceQuantize.Node number 3 (LceQuantize) failed to prepare.

To reproduce:

from tflite_runtime.interpreter import Interpreter
interpreter = Interpreter('my_model.tflite')
interpreter.allocate_tensors()

where my_model.tflite is a custom TFLite model.

The question I have is:

  1. How do I get the tflite interpreter to recognize these custom ops?

Thank you!

Did you already check out https://docs.larq.dev/compute-engine/api/python/#interpreter?
We provide a wrapper around the TFLite interpreter which includes our custom ops and matches the Keras predict API.

I did, but I am wondering what it means that the first argument, "flatbuffer model", must be of type "bytes"... is this the .tflite file itself, or do I need to find a way to convert the .tflite file to flatbuffer bytes?

This is the content of the .tflite file, as returned by larq_compute_engine.convert_keras_model.
If you have a .tflite file on disk, you can read it using:

import larq_compute_engine

with open("my_model.tflite", "rb") as f:
    lce_model = f.read()
interpreter = larq_compute_engine.testing.Interpreter(lce_model)
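To make the `bytes` requirement concrete, here is a minimal, self-contained sketch. The placeholder file contents are hypothetical stand-ins, not a real flatbuffer; a real .tflite file would come from `larq_compute_engine.convert_keras_model` or a converter writing to disk.

```python
import os
import tempfile

# Write stand-in bytes to a temporary .tflite file, mimicking a model on disk.
with tempfile.NamedTemporaryFile(suffix=".tflite", delete=False) as f:
    f.write(b"placeholder flatbuffer contents")  # NOT a real model
    path = f.name

# Opening in binary mode ("rb") and calling read() yields a `bytes` object,
# which is exactly what the Interpreter's first argument must be.
with open(path, "rb") as f:
    lce_model = f.read()

print(type(lce_model).__name__)  # prints "bytes"
os.unlink(path)

# With larq-compute-engine installed, you would then pass the bytes along:
#   interpreter = larq_compute_engine.testing.Interpreter(lce_model)
```

So no conversion step is needed: reading the .tflite file in binary mode already gives you the flatbuffer bytes the constructor expects.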

I'm closing this issue due to inactivity. Feel free to re-open if there are more questions.

Hi @Tombana ,

I have the same problem as @tch2841.

Briefly, I want to use the Python API like this:

import larq_compute_engine

with open("my_model.tflite", "rb") as f:
    lce_model = f.read()
interpreter = larq_compute_engine.testing.Interpreter(lce_model)

on a Raspberry Pi or an NVIDIA Jetson Xavier. When I try it, it requires the larq_compute_engine Python module, which cannot be installed on the Raspberry Pi; installation on that hardware does not seem to be supported.

Is there any way to use the Python API on a Raspberry Pi or Jetson Xavier?

Many thanks

@godhj93 Please see this issue: #653 . It will allow you to build a Python package for the Raspberry Pi.