chengzeyi/stable-fast
Best inference performance optimization framework for HuggingFace Diffusers on NVIDIA GPUs.
Python · MIT License
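A minimal usage sketch of wrapping a Diffusers pipeline with stable-fast, assuming the `compile` entry point and `CompilationConfig` from `sfast.compilers.diffusion_pipeline_compiler` as shown in the project README; the exact module path and available flags may differ between releases:

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumed import path; some releases expose the same API under
# sfast.compilers.stable_diffusion_pipeline_compiler instead.
from sfast.compilers.diffusion_pipeline_compiler import compile, CompilationConfig

# Load an ordinary Diffusers pipeline in half precision on the GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.to("cuda")

# Enable the optional backends only if they are installed.
config = CompilationConfig.Default()
config.enable_xformers = True
config.enable_triton = True
config.enable_cuda_graph = True

# compile() returns the same pipeline with its modules swapped for optimized ones.
pipe = compile(pipe, config)

# The first call triggers tracing/compilation; subsequent calls run at full speed.
image = pipe("a photo of an astronaut riding a horse", num_inference_steps=30).images[0]
image.save("astronaut.png")
```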
Issues
- Save compiled model and reuse? (#111, 2 comments)
- Support for DeepCache (#110, 3 comments)
- Error when running on A100 Replicate (#109, 0 comments)
- Errors when running on A100 on Replicate (#108, 2 comments)
- torch.jit encounters xformers::efficient_attention_forward_cutlass and throws a RuntimeError (#107, 6 comments)
- help me (#105, 2 comments)
- Dynamically Switch LoRA error (#104, 1 comment)
- GPU memory usage is higher than before use (#103, 1 comment)
- How to use the inpainting pipeline? (#99, 2 comments)
- Can't load/unload lora dynamically (#91, 0 comments)
- support animatediff (#87, 7 comments)
- Build from source failed (#86, 6 comments)
- SDXL Swap lora Issue (#75, 3 comments)
- Potential improvements to stable-fast (#73, 2 comments)
- Trying to get it working on Windows (#72, 2 comments)
- Does xformers still matter? (#71, 5 comments)
- Stable Video Optimizations? (#67, 4 comments)
- SDXL inference speed up compare? (#63, 2 comments)
- What's the advantage compared to TensorRT? (#59, 1 comment)
- Compatible with ComfyUI? (#52, 11 comments)
- RuntimeError: _Map_base::at (#49, 1 comment)
- Let's chat on WeChat (#42, 3 comments)
- Tiny AutoEncoder support (#40)