tensorflow/tensorrt

Quantized saved_model size increased.

JuHyung-Son opened this issue · 1 comment

I followed the Colab tutorial and noticed that the quantized model size increases after conversion.
Is this an expected result? I expected the quantized model size to decrease.

I also checked PR 30789, but it doesn't work as expected.

TF 2.0.0
TensorRT 5.1.5
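
For reference, the INT8 conversion I ran follows the tutorial roughly like the sketch below (a minimal sketch of the TF 2.0 TF-TRT converter; the batch size, calibration loop, and workspace size are placeholder assumptions, not the exact tutorial values):

import numpy as np
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Request INT8 precision with calibration; the workspace size is an assumed value.
params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
    precision_mode=trt.TrtPrecisionMode.INT8,
    max_workspace_size_bytes=1 << 30,
    use_calibration=True)

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir='resnet50_saved_model',
    conversion_params=params)

def calibration_input_fn():
    # A few synthetic batches stand in for real images so TensorRT can
    # collect the activation ranges needed for INT8 calibration.
    for _ in range(10):
        yield (np.random.random((8, 224, 224, 3)).astype(np.float32),)

converter.convert(calibration_input_fn=calibration_input_fn)
converter.save('resnet50_saved_model_TFTRT_INT8')

The resulting directory sizes are below: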

du -h -d 1

88K	./.config
448M	./resnet50_saved_model_TFTRT_FP16
386M	./resnet50_saved_model_TFTRT_INT8
924K	./data
298M	./resnet50_saved_model_TFTRT_FP32
103M	./resnet50_saved_model
55M	./sample_data
1.3G	.

I have the same problem. I'm quantizing a model to reduce its size, not to make it larger :/