Issues
Cannot apply trained SD 3.5 LoRA in Diffusers
#1743 opened - 1
Problem training LoRA
#1740 opened - 1
A massive NPZ file when using alpha mask
#1739 opened - 2
save_last_n_steps is ignored in the SD3/FLUX branch
#1738 opened - 5
(sd3-Flux) returned non-zero exit status 3221225477. 13:02:14-584959 INFO Training has ended
#1737 opened - 5
Odd technical problem.
#1735 opened - 3
Improvement to lora.py and flux lora.py
#1730 opened - 1
'Max Token Length' setting for flux
#1729 opened - 3
SD3.5 train LoRA error when sampling.
#1726 opened - 2
Error when training SD3.5 LoRA
#1725 opened - 7
SD3.5 LoRA training error
#1724 opened - 3
Meaningless loss in TensorBoard.
#1712 opened - 5
"cat_cuda" not implemented for 'Float8_e4m3fn'
#1711 opened - 1
Training with latent cache
#1709 opened - 4
--resume decreases training speed by 4-10x
#1708 opened - 3
"mixed_precision fp16" not working for Flux
#1707 opened - 1
Specify the resolution of sample images
#1706 opened - 0
Error for FLUX LoRA training with finetune
#1705 opened - 4
Flux De-distilled Training Samples Garbled
#1703 opened - 13
Flux De-distilled / fluxdev2pro support?
#1702 opened - 4
OOM using AdamW8bit since recent update
#1700 opened - 1
[Feature] svd_merge_lora ver.flux
#1698 opened - 4
Flux AdamWScheduleFree on 24GB
#1697 opened - 6
Support REPA
#1694 opened - 3
Best Code for Full SDXL finetuning?
#1688 opened - 1
Blurry images from schnell LoRA
#1687 opened - 1
T5XXL tokenizer deadlocks when num_workers > 0
#1684 opened - 3
Error while training LyCORIS for FLUX.1
#1683 opened - 0
GGUF finetune?
#1682 opened