luohao123/gaintmodels

int8 support

harishprabhala opened this issue · 0 comments

Hi - the repo says you have int8 quantization for Stable Diffusion, but in the trtexec command only fp16 is passed as a flag.

Have you guys tried it on int8 as well?

If yes, don't you need a calibration dataset for int8 compilation?
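For context, an int8 build with trtexec is normally driven by a calibration cache generated ahead of time from a representative dataset. A minimal sketch of what such an invocation might look like (the model and file names here are assumptions for illustration, not taken from this repo):

```shell
# Build an int8 TensorRT engine with trtexec.
# --calib points to a calibration cache produced beforehand by running a
# calibrator (e.g. IInt8EntropyCalibrator2) over representative inputs.
trtexec --onnx=unet.onnx \
        --int8 \
        --calib=calibration.cache \
        --saveEngine=unet_int8.plan
```

Without `--calib` (or per-tensor dynamic ranges set programmatically), TensorRT has no scale information for the int8 tensors, which is why the calibration-dataset question matters.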