espressif/esp-tflite-micro

How to make a personal model more compact to fit in the project?

thomasliebgott opened this issue · 1 comment

Hello everyone, I'm curious how you achieved such a small person_detect_model_data in this project on GitHub. Despite quantizing to int8 and converting to a C array, I haven't been able to get a model as compact as yours: my quantized TFLite model is approximately 1,900 KB, while the file size after conversion to a C array reaches about 12,000 KB. Could you please share how you trained your model to achieve such compactness?

I appreciate your response in advance.
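For reference, the conversion flow described above typically looks something like the minimal sketch below. The model path, calibration generator, and the 96x96x1 grayscale input shape are illustrative assumptions, not details taken from this issue.

```python
import numpy as np
import tensorflow as tf

# Load a trained Keras model (path is a placeholder).
model = tf.keras.models.load_model("person_detect_model")

def representative_data_gen():
    # Yield a few calibration samples shaped like the model input
    # (96x96 grayscale here; replace with real training images).
    for _ in range(100):
        sample = np.random.rand(1, 96, 96, 1).astype(np.float32)
        yield [sample]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
# Force full int8 quantization of weights, activations, and I/O.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("person_detect.tflite", "wb") as f:
    f.write(tflite_model)

print("quantized model size:", len(tflite_model), "bytes")
```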

Hi @thomasliebgott

Conversion to a C array doesn't really increase the size of the model. Only the file size looks bigger, because each byte of the binary is written out as text in a character array; the actual weights stay the same. You should judge the size by the value in the model_size variable, not by the size of the generated source file.
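To illustrate why the generated source file looks so much larger, here is a rough sketch of what an `xxd -i`-style conversion does. The file names are placeholders, and the `g_person_detect_model_data` / `g_person_detect_model_data_size` names just follow the naming convention used in the person_detection example; the key point is that every byte of the binary becomes roughly six characters of text, so the .cc file grows several times over while the compiled array keeps the original byte count.

```python
# Sketch of converting a .tflite binary into a C array source file.
with open("person_detect.tflite", "rb") as f:
    model_bytes = f.read()

lines = []
for i in range(0, len(model_bytes), 12):
    chunk = model_bytes[i:i + 12]
    # Each byte becomes text like "0x5c, " (~6 characters per byte).
    lines.append("  " + ", ".join(f"0x{b:02x}" for b in chunk) + ",")

cc_source = (
    "const unsigned char g_person_detect_model_data[] = {\n"
    + "\n".join(lines)
    + "\n};\n"
    + f"const int g_person_detect_model_data_size = {len(model_bytes)};\n"
)

with open("person_detect_model_data.cc", "w") as f:
    f.write(cc_source)

print("binary size :", len(model_bytes), "bytes")  # what actually ends up in flash
print("source size :", len(cc_source), "bytes")    # the much larger text file
```

So a roughly 1,900 KB .tflite model can easily produce a roughly 12,000 KB .cc file, yet the array compiled into the firmware is still about 1,900 KB.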

If you're interested, the person_detect model is the one from Google's original repo, used as is; the training README for it is provided here: https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/examples/person_detection/training_a_model.md