Issues
- Support Python versions <3.10 (#703, opened by mar-muel, 0 comments)
- Gradient quantization support (#691, opened by lixcli, 0 comments)
- flax_e2e_model.py example fails (#667, opened by Jconn, 0 comments)
- AqtEinsum 'not enough values to unpack' (#498, opened by brandnewchoppa, 0 comments)
- dilated conv layer that can be injected into flax.nn.Conv (#664, opened by Jconn, 0 comments)
- generalized einsum or matmul api for pure jax (#600, opened by sh0416, 0 comments)
- NormalFloat4 support (#599, opened by sh0416, 1 comment)
- How to use it with jnp.einsum? (#595, opened by sh0416, 0 comments)
- Binary quantization? (#548, opened by kishorenc, 1 comment)
- Add functionality to allow QK cache quantization. (#284, opened by lukaszlew, 2 comments)
- Can AQT be used to calculate qk score? (#514, opened by Lisennlp, 2 comments)
- Implement FP8 Numerics. (#283, opened by lukaszlew, 2 comments)
- Does AQTv2 allow for arbitrary quantization? (#312, opened by fabrizio-ottati, 1 comment)
- Performance of MNIST example (#325, opened by mar-muel, 1 comment)
- How to use this package to quantize a pretrained model in huggingface, such as BERT / Roberta? (#263, opened by TrueNobility303, 0 comments)
- Implement backprop quantization for convolution. (#285, opened by lukaszlew, 0 comments)
- Refactor config/code classes to follow Flax. (#281, opened by lukaszlew, 0 comments)
- Port static quantization from AQTv1 to AQTv2 (#282, opened by lukaszlew, 1 comment)
- Quantized Batch Normalization? (#60, opened by sunlex0717, 1 comment)
- Difference between jax and jax legacy? (#180, opened by chenho74, 5 comments)
- Does this framework support "what you serve is what you train" for weight only quantization? (#179, opened by chenho74, 1 comment)
- Publish updated version? (#72, opened by steve-marmalade, 1 comment)
- ckpt of 8-bit ResNet-50 teacher model (#96, opened by xvyaward, 1 comment)
- After installation, can not import aqt.common (#110, opened by maxwillzq)