tensorflow/model-optimization
A toolkit to optimize ML models for deployment, including quantization and pruning, for Keras and TensorFlow.
Python · Apache-2.0
Issues
ValueError: `prune_low_magnitude` can only prune an object of the following types: keras.models.Sequential, keras functional model, keras.layers.Layer, list of keras.layers.Layer. You passed an object of type: Sequential.
#1167 opened by RifatUllah102 - 0
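Issues #1167 and #1141 both report `prune_low_magnitude` rejecting a `Sequential` model while the error message itself lists `keras.models.Sequential` as supported. A plausible cause is the Keras 3 / tf-keras package split: the toolkit's `isinstance` check targets one package's `Sequential` class while the user's model was built from another. A minimal, TensorFlow-free sketch of that failure mode (the class names here are stand-ins, not the toolkit's actual code):

```python
# Build two distinct classes that both carry the name "Sequential",
# standing in for tf_keras.Sequential (what the toolkit checks against)
# and keras.Sequential from Keras 3 (what the user may have built).
def make_sequential_class():
    class Sequential:
        pass
    return Sequential

TfKerasSequential = make_sequential_class()  # the type the toolkit expects
Keras3Sequential = make_sequential_class()   # the type the user's model has

model = Keras3Sequential()

# The isinstance check fails even though both classes print as
# "Sequential" -- hence the confusing "You passed an object of type:
# Sequential" message.
print(isinstance(model, TfKerasSequential))  # False
print(type(model).__name__)                  # Sequential
```

Under this hypothesis, building the model with the same Keras package that the installed toolkit targets (e.g. `tf_keras`) would make the check pass.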
Any plans to support keras3?
#1119 opened by Krovatkin - 3
quantize_model() cannot detect a keras.Sequential model
#1144 opened by DKMaCS - 1
TFOpLambda not supported in INT8 Quantization Aware Training (Mobilenetv3)
#1145 opened by pedrofrodenas - 0
Failed to apply the QAT function 'quantize_model' to the sequential model that is defined using tensorflow.keras
#1140 opened by wwwind - 3
Determinism is not yet supported in GPU implementation of FakeQuantWithMinMaxVarsGradient
#1087 opened by puelon - 1
ValueError: `prune_low_magnitude` can only prune an object of the following types: keras.models.Sequential, keras functional model, keras.layers.Layer, list of keras.layers.Layer. You passed an object of type: Sequential.
#1141 opened by FrancescoSorrentino - 7
MobileNetV3 QAT TFLite Conversion Issue
#1107 opened by tarushbansal - 2
Add batch norm to default_n_bit_quantize_registry and default_8_bit_quantize_registry
#1099 opened by DerryFitz - 1
RuntimeError: Layer tf.__operators__.getitem:<class 'tf_keras.src.layers.core.tf_op_layer.SlicingOpLambda'> is not supported.
#1139 opened by reganh98 - 0
Unexpected Inference Time and Model Size for TensorFlow Lite and Pruned Models
#1135 opened by experimentsym3 - 0
Is it possible to use this tool to optimize (quantize) a model trained with PyTorch?
#1134 opened by zhuoran-guo - 0
Does structural pruning support pre-trained models?
#1133 opened by aidevmin - 0
Per-tensor QAT model Conv2d+BN+relu folding issue
#1131 opened by sheh - 0
Failed quantization of dilated convolution layers: tensorflow or tensorflow-model-optimization bug?
#1130 opened by Ebanflo42 - 3
float16 quantization runs out of memory for LSTM model
#1091 opened by Black3rror - 1
16x8 Quantization fails for RNN model - Max and min for dynamic tensors should be recorded during calibration
#1090 opened by Black3rror - 4
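The error in #1090 ("Max and min for dynamic tensors should be recorded during calibration") refers to how post-training 16x8 quantization derives scales: the converter runs a representative dataset and records each tensor's observed min/max range; a dynamic tensor (e.g. recurrent state in an RNN) that is never observed has no range, so conversion fails. A hypothetical minimal calibrator illustrating the idea (not the TFLite implementation):

```python
# Records the min/max range of observed activations and derives a
# symmetric quantization scale from it, as 16x8 calibration does
# conceptually. Class and method names are illustrative assumptions.
class RangeCalibrator:
    def __init__(self):
        self.min = float("inf")
        self.max = float("-inf")

    def observe(self, values):
        # Called once per representative-dataset sample.
        self.min = min(self.min, *values)
        self.max = max(self.max, *values)

    def scale(self, num_bits=16):
        # Symmetric scale covering the widest observed magnitude.
        bound = max(abs(self.min), abs(self.max))
        return bound / (2 ** (num_bits - 1) - 1)

cal = RangeCalibrator()
cal.observe([-0.5, 0.25, 1.0])
print(cal.min, cal.max)  # -0.5 1.0
```

A tensor whose `observe` is never called still holds infinite bounds, which is the calibration gap the error message describes.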
Quant aware training in tensorflow model optimization
#1100 opened by ardeal - 1
Can't use TFMOT version 0.8.0 due to missing dependency
#1117 opened by vloncar - 4
`tf.split` or `tf.transpose` cause errors for quantize-aware training with `quantize_apply`
#1062 opened by Janus-Shiau - 2
Strange behavior when quantizing a model
#1109 opened by IdrissARM - 0
Error in MovingAverageQuantizer with per_axis=True due to missing parameters in _add_range_weights
#1108 opened by Litschi123 - 2
An error about quantization-aware training
#1105 opened by panhu - 2
Module Import Error
#1104 opened by surajpandey353 - 1
[COLAB] No module named 'tensorflow_model_optimization'
#1101 opened by rjtp5670 - 2
Add a default PruningPolicy that filters out any layers not supported by the API
#1098 opened by annietllnd - 6
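Issue #1098 requests a default `PruningPolicy` that skips unsupported layers instead of erroring. A hypothetical sketch of what such a filter could look like; the class shape loosely mirrors tfmot's `PruningPolicy` idea, but the names and allow-list are assumptions, not the library's API:

```python
# Hypothetical policy: prune only layer types on an allow-list and
# silently pass everything else through unwrapped.
class SupportedLayersOnlyPolicy:
    # Illustrative allow-list; the real set of prunable layers is
    # defined by the toolkit, not by this sketch.
    SUPPORTED = {"Dense", "Conv2D", "DepthwiseConv2D"}

    def allow_pruning(self, layer_type_name: str) -> bool:
        return layer_type_name in self.SUPPORTED

policy = SupportedLayersOnlyPolicy()
layers = ["Dense", "Lambda", "Conv2D", "BatchNormalization"]
prunable = [t for t in layers if policy.allow_pruning(t)]
print(prunable)  # ['Dense', 'Conv2D']
```

The point of the request is that this filtering would happen by default, so models containing e.g. `Lambda` layers could still be partially pruned without a hand-written policy.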
Cannot prune models with tensorflow operations
#1051 opened by useruser2023 - 2
tfl.concatenation op quantization parameters violate the same scale constraint
#1053 opened by akrapukhin - 2
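The constraint violated in #1053 exists because int8 values are only comparable when they share a scale and zero-point, and `tfl.concatenation` copies raw quantized buffers. A pure-Python sketch of affine quantization showing why mismatched input scales corrupt a concatenation:

```python
# Affine quantization: q = round(x / scale) + zero_point.
def quantize(x, scale, zp=0):
    return round(x / scale) + zp

def dequantize(q, scale, zp=0):
    return (q - zp) * scale

# The same real value 0.5 coming from two concat inputs with
# different scales maps to different integers.
a = quantize(0.5, scale=0.02)  # -> 25
b = quantize(0.5, scale=0.05)  # -> 10
print(a, b)  # 25 10

# Concatenating the raw int8 buffers and dequantizing with a single
# output scale would therefore misread one of the inputs, which is why
# all concat inputs (and the output) must share one scale/zero-point.
```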
Cannot `pip install` nightly
#1079 opened by christian-steinmeyer - 1
float16 quantization runs out of memory for LSTM model
#1092 opened by Black3rror - 2
batch norm layer quantization error
#1089 opened by mhyeonsoo - 1
AttributeError: 'CustomLayerMaxPooling1D' object has no attribute 'kernel'
#1085 opened by konstantinatopali - 1
Quantization-aware training for MobileNetV2 not working
#1086 opened by christophezeinaty - 2
Custom_Layer Quantization with custom training (QAT)
#1084 opened by ManishwarG - 1
trainable=False doesn't work for QuantizeWrapperV2
#1067 opened by jiannanWang - 1
An error when using tfmot.quantization.keras.quantize_model to quantize a Keras model
#1031 opened by DuJiahao1996 - 1
Does Post-training full integer quantization support BERT?
#1066 opened by MrRace - 0
some questions about quantization in TensorFlow
#1064 opened by rthenamvar - 0
Stripping disconnects input layer from graph
#1063 opened by christian-steinmeyer - 1
CQAT fails to preserve clusters on ResNet-50
#1056 opened by funkyyyyyy - 1
The input order of the concatenate layer is changed when using the quantize_model interface
#1061 opened by fhahaha - 0
Consider reorganizing file layout to shorten path lengths
#1032 opened by scdub