tensorflow/model-optimization

trainable=False doesn't work for QuantizeWrapperV2

jiannanWang opened this issue · 1 comment

Prior to filing: check that this should be a bug instead of a feature request. Everything supported, including the compatible versions of TensorFlow, is listed in the overview page of each technique. For example, the overview page of quantization-aware training is here. An issue for anything not supported should be a feature request.

Describe the bug
Setting layer.trainable=False on a Dense layer wrapped in QuantizeWrapperV2 does not convert all of its trainable weights to non-trainable.

System information

TensorFlow version (installed from source or binary): 2.12.0

TensorFlow Model Optimization version (installed from source or binary): 0.7.4

Python version: 3.10.11

Describe the expected behavior
As with an unquantized model, setting layer.trainable=False on the quantized layer should make all of the layer's weights non-trainable.
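
For reference, this is the stock Keras behavior for an unwrapped Dense layer (a minimal sketch; nothing tfmot-specific is involved):

```python
import tensorflow as tf

dense = tf.keras.layers.Dense(4)
dense.build((None, 8))                     # creates kernel and bias
assert len(dense.trainable_weights) == 2

dense.trainable = False                    # freezing the layer...
assert len(dense.trainable_weights) == 0   # ...removes all trainable weights
```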

Describe the current behavior
After setting layer.trainable=False on a quantized Dense layer, the layer still contains trainable weights.

Code to reproduce the issue
This Colab contains code to reproduce the bug:
https://colab.research.google.com/drive/1KYnZkBI_g3Pu9Vqz4UneXCtNOs3_kB39?usp=sharing
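
For convenience, here is a minimal standalone sketch of the same experiment (my own reconstruction, not the exact Colab code; variable counts and names will differ from the output quoted below):

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Plain model: freezing the Dense layer removes all trainable variables.
model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(8,))])
print("trainable variables in model: ", len(model.trainable_variables))
model.layers[0].trainable = False
print("trainable variables in model after setting trainable to false: ",
      len(model.trainable_variables))

# Quantized model: quantize_model wraps each layer in QuantizeWrapperV2.
quantized_model = tfmot.quantization.keras.quantize_model(
    tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(8,))]))
print("trainable variables in quantized model: ",
      len(quantized_model.trainable_variables))

# Freezing every wrapper should also freeze the inner Dense kernel/bias,
# but the kernel stays in trainable_variables.
for layer in quantized_model.layers:
    layer.trainable = False
print("trainable variables in quantized model after setting trainable to false: ",
      len(quantized_model.trainable_variables))
for v in quantized_model.trainable_variables:
    print(v.name)
```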


Additional context
The output from the Colab is below. Note that setting layer.trainable=False makes the original model's Dense layer weights non-trainable, as expected. However, dense_11/kernel:0 from the Dense layer in the quantized model remains trainable after setting layer.trainable=False.

```
trainable variables in model:  2
trainable variables in model after setting trainable to false:  0
trainable variables in quantized model:  1
trainable variables in quantized model after setting trainable to false:  1
dense_11/kernel:0
```
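
In case it helps triage, and continuing from the repro sketch above: QuantizeWrapperV2 subclasses the standard Keras Wrapper, which exposes the wrapped layer as .layer, so one can check where the lingering kernel is tracked and try freezing the inner layer directly. This is only a diagnostic sketch; whether it restores the expected count depends on where the wrapper collects its weights.

```python
# Try freezing the inner wrapped layer as well as the wrapper itself.
# QuantizeWrapperV2 subclasses tf.keras.layers.Wrapper, so the wrapped
# Dense layer is available as .layer.
for layer in quantized_model.layers:
    layer.trainable = False
    if isinstance(layer, tf.keras.layers.Wrapper):
        layer.layer.trainable = False

# Show which layer still reports trainable weights after freezing.
for layer in quantized_model.layers:
    print(layer.name, type(layer).__name__,
          [w.name for w in layer.trainable_weights])
print("trainable variables after freezing inner layers: ",
      len(quantized_model.trainable_variables))
```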

Any progress?