Issues
Support for Gradio WebUI
#15 opened by 7enChan - 6
Failing tests for mlx > 0.19.0
#93 opened by filipstrand - 3
Info about Python 3.13 compatibility
#74 opened by anthonywu - 2
A Presumptuous Suggestion: Adding Localized Redraw Based on Image-to-Image (Like Inpainting)
#88 opened by raysers - 1
Make binary adhere to signals
#79 opened by Morriz - 15
If possible, could the official team consider making it compatible with ComfyUI?
#56 opened by raysers - 6
proposal: automatic output file naming
#80 opened by anthonywu - 2
Performance Report - 2020 M1 Air 8GB
#13 opened by mbvillaverde - 3
Feature Request: Train lora
#78 opened by kirel - 1
Would it be possible to also implement PulID?
#68 opened by azrahello - 2
Support for different LoRA formats
#47 opened by filipstrand - 3
`seed` can't be `None`
#69 opened by Morriz - 6
Can't install
#41 opened by ckizer - 2
Fetching files freezes
#42 opened by filippkowalski - 2
Project needs a license
#54 opened by anthonywu - 2
I manually downloaded the 4-bit quantized model. Can I avoid downloading the full model from Hugging Face? If I can directly use my downloaded model, what command should I use to specify it?
#52 opened by raysers - 1
TypeError: unsupported operand type(s) for |: 'type' and 'NoneType' when importing / running a command
#53 opened by pelayomartinez - 1
Question about generation speed
#33 opened by TuiKiken - 16
Installation (Macbook Pro M3)
#44 opened by light-merlin-dark - 5
4-bit model requires more than 9GB
#36 opened by sudhamjayanthi - 1
Support for Different Samplers and Schedulers?
#45 opened by andyw-0612 - 2
Error installing 0.2.0
#38 opened by heavysixer - 0
ControlNet support?
#43 opened by AugustRush - 2
Where do I need to place the Lora file for it to work as shown in the examples?
#39 opened by meetbryce - 1
How to use other Flux base models?
#34 opened by douenergy - 1
Parameter for limiting CPU usage
#17 opened by kennell - 3
Load model from drive, note hf cache
#19 opened by stefanvarunix - 2
Feature Request: Show Prompt Metadata in Get Info
#26 opened by vincyb - 1
Is there a max prompt length limit?
#28 opened by smoothdvd - 2
Why is GPU usage low?
#25 opened by smoothdvd - 3
Where does it download the model?
#20 opened by dostarora97 - 10
Performance report - 2023 M2 Max 96GB
#6 opened by explorigin - 2
Performance report - 2023 M3 Pro 36GB
#11 opened by kush-gupt - 1
`--output` not being used
#12 opened by rafrafek - 1
Great Work! and works like a charm
#8 opened by vincyb - 1
Performance report - 2021 M1 Pro 16GB
#7 opened by qw-in