Issues
[Request] Code for calibration dataset generation and Generic code for any diffusion model
#40 opened by Ali-Flt - 0
Support for Facebook DiT
#39 opened by Ali-Flt - 0
Why does this quantized model need more than 24GB of GPU memory, far larger than the ideal 500M?
#14 opened by felixslu - 2
How to obtain the quantization parameters
#28 opened by miaott1234 - 0
How to apply this to SDXL-Turbo?
#35 opened by ApolloRay - 3
Does q-diffusion work on SDXL?
#22 opened by TruthSearcher - 0
'functions' package error & how to extract FID
#34 opened by parkjjoe - 0
End-to-End Quantization for Speedup and Memory Savings: Inviting Contributions!
#23 opened by Xiuyu-Li - 2
Why does the w4a8 quantization method not accelerate the inference speed of Stable Diffusion models?
#15 opened by felixslu - 0
About the quantized model
#12 opened by shiyuetianqiang - 2
Extremely high VRAM usage
#7 opened by easonoob - 7
BOPs measurement
#32 opened by rocco-manz - 0
Compatibility with MacOSX?
#29 opened by nachiket - 6
Model sizes
#19 opened by prinshul - 0
QDiffusion for Stable Diffusion
#27 opened by stein-666 - 0
What is the minimum number of datasets that meet the requirements of text-to-image calibration?
#25 opened by hanhanpp - 0
calibration datasets
#21 opened by hanhanpp - 0
Errors when executing LSUN-bed w8a8 quantization
#24 opened by Sugar929 - 1
Would you please provide the parameters for LSQ and the block reconstruction? Thanks a lot
#18 opened by yuzheyao22 - 5
Code for model calibration
#10 opened by Cheeun - 1
Open-source more code?
#17 opened by lingffff - 1
Question about the inference process
#16 opened by JiaojiaoYe1994 - 4
load cifar_w8a8_ckpt.pth
#11 opened by foreverlove944 - 1
Loading quantized model checkpoint error
#8 opened by arthursunbao - 1
Model size
#9 opened by ZhibinPeng - 1
Images are broken in readme.md
#6 opened by 6174 - 1
Is the weight format of w4a8 fp32?
#3 opened by gongqiang