SDXL rotation mixes CLIP-G attention layers' to_q/to_k/to_v
ljleb commented
Minor bug that allows rotate to over-align by mixing the neurons of the attention layers: CLIP-G stores q, k and v fused in a single in_proj weight, so rotate treats the fused matrix as one layer and mixes neurons across the three projections (see the sketch after this list). Multiple ways to fix:
- switch SDXL arch txt2 keys to transformers keys (the transformers format stores q_proj/k_proj/v_proj as separate tensors)
- add a way for architecture extensions to manually convert weights to neurons
- something else
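
For illustration, here is a minimal sketch of the over-alignment and of fix option 2, assuming rotate computes an orthogonal Procrustes transform over the row ("neuron") dimension of each weight. The function names and signatures below are hypothetical, not the actual merge code; the one load-bearing assumption is that CLIP-G fuses q/k/v into a single in_proj weight of shape `(3*d, d)`.

```python
import torch

def procrustes_rotation(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # R = U V^T from the SVD of B A^T minimizes ||R A - B||_F over orthogonal R.
    u, _, v_t = torch.linalg.svd(b @ a.T)
    return u @ v_t

def rotate_fused_qkv_current(a_in_proj: torch.Tensor, b_in_proj: torch.Tensor) -> torch.Tensor:
    # Current behavior: the fused (3*d, d) in_proj weight is rotated as one
    # matrix, so R can mix rows across the q, k and v projections.
    return procrustes_rotation(a_in_proj, b_in_proj) @ a_in_proj

def rotate_fused_qkv_split(a_in_proj: torch.Tensor, b_in_proj: torch.Tensor) -> torch.Tensor:
    # Fix option 2: convert the fused weight into per-projection neuron
    # blocks, align each block independently, then re-fuse.
    d = a_in_proj.shape[0] // 3
    rotated = [
        procrustes_rotation(a_chunk, b_chunk) @ a_chunk
        for a_chunk, b_chunk in zip(a_in_proj.split(d), b_in_proj.split(d))
    ]
    return torch.cat(rotated)
```

Fix option 1 would sidestep the chunking entirely, since the transformers key layout already keeps q_proj/k_proj/v_proj as separate tensors.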