peteryuX/retinaface-tf2

How to convert to .tflite?

xieshenru opened this issue · 5 comments

When using the following code:
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_model = converter.convert()
open("./tflite_models/face_retinaface_mobilenetv2.tflite", "wb").write(tflite_model)
print('saved tflite model!')
an error appears: "Tensor 'input_image' has invalid shape '[None, None, None, 3]'."
How can I convert the model to .tflite? Looking forward to your reply.

I have converted it to a TFLite file successfully in TF 2.3, but you cannot convert it with versions below TF 2.3.
This is because, as of now, only TF 2.3 supports the ResizeNearestNeighbor operation, which is used in the FPN layers.

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.experimental_new_converter = True
tflite_model = converter.convert()
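
For reference, a quick way to sanity-check the converted model is to run a dummy image through the TF Lite interpreter (a minimal sketch; the 640x640 input size is an assumption):

import numpy as np
import tensorflow as tf

# Load the converted flatbuffer.
interpreter = tf.lite.Interpreter(model_content=tflite_model)

# If the model kept a dynamic height/width, pin it to a concrete size first.
input_index = interpreter.get_input_details()[0]['index']
interpreter.resize_tensor_input(input_index, [1, 640, 640, 3])
interpreter.allocate_tensors()

# Run a dummy image and print the output shapes.
dummy = np.zeros((1, 640, 640, 3), dtype=np.float32)
interpreter.set_tensor(input_index, dummy)
interpreter.invoke()
for detail in interpreter.get_output_details():
    print(detail['name'], interpreter.get_tensor(detail['index']).shape)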

For me, it was not so straightforward. However, I was able to convert using custom_opdefs. But it is not easy to do the post-training integer quantization. Have you tried this? I am talking about the MobileNet backbone.
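
(Side note: the generic converter knob for letting unsupported ops pass through is allow_custom_ops; this is only a minimal sketch, not necessarily the exact custom_opdefs route mentioned above, and any custom op then needs an implementation registered at runtime:)

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.experimental_new_converter = True
converter.allow_custom_ops = True  # emit unsupported ops as custom ops in the flatbuffer
tflite_model = converter.convert()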

The RetinaFace model uses a dynamic input shape; you should fix the input shape to a static one before converting to TF Lite.
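
A minimal sketch of fixing the shape (the 640x640 size and the wrapping approach are assumptions, not the repo's exact API; `model` is the loaded RetinaFace Keras model):

import tensorflow as tf

INPUT_SIZE = 640  # assumed inference resolution

# Wrap the trained model so the converter sees a fully static input
# shape instead of [None, None, None, 3].
static_input = tf.keras.Input(shape=(INPUT_SIZE, INPUT_SIZE, 3), name='input_image')
static_model = tf.keras.Model(static_input, model(static_input))

converter = tf.lite.TFLiteConverter.from_keras_model(static_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()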

@DavorJordacevic Could you please share your method for converting RetinaFace to tflite using custom_opdefs?

I get this error: [screenshot of the error attached]

Thanks.

fmobrj commented

Hi @DavorJordacevic. Have you managed to deal with the integer quantization of the TFLite model for RetinaFace? The TFLite version gives the same results as the original PyTorch model, but when I convert to a quantized integer (8-bit) model, the results are messy, even if I use the quantization parameters to transform the output. Could you solve this?
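
For reference, a minimal full-integer post-training quantization sketch (the calibration data, its preprocessing, and the 640x640 size are assumptions; real calibration images should replace the random placeholder, and detection accuracy under int8 is not guaranteed):

import numpy as np
import tensorflow as tf

def representative_dataset():
    # Yield ~100 preprocessed calibration images. Random data is shown
    # here only as a placeholder for real, representative images.
    for _ in range(100):
        yield [np.random.rand(1, 640, 640, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(static_model)  # static-shape model from the sketch above
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model_int8 = converter.convert()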