pykeio/diffusers

Roadmap

decahedron1 opened this issue

  • Img2img - March 2023

  • CLIP layer skip - March 2023

  • Textual inversion - March 2023

  • Upload more pre-converted models - March 2023

  • Scheduler rewrite (#16) - ?

  • "Hi-res fix" from A1111 webui - ? (as soon as I can buy a better GPU, I can't test with 6 GB of VRAM...)

  • Web UI - Q2 2023

oovm commented

How about inferring prompt words from pictures?

I have completed the DeepDanbooru inference part: oovm/deep-danbooru

The other one is CLIP inference.
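Roughly, the tagging step looks like this. A minimal Python sketch, assuming the tagger has been exported to ONNX alongside a tags.txt that lists one tag per output index; the file names, 512x512 input size, and threshold are placeholders of mine, not the actual oovm/deep-danbooru API:

# Minimal DeepDanbooru-style tagger sketch; paths and input size are assumptions.
import numpy as np
import onnxruntime as ort
from PIL import Image

def infer_tags(image_path, model_path="deepdanbooru.onnx", tags_path="tags.txt", threshold=0.5):
    # One tag per line, in the same order as the model's output vector (assumed layout).
    tags = [line.strip() for line in open(tags_path, encoding="utf-8")]
    session = ort.InferenceSession(model_path)
    # DeepDanbooru models conventionally take an RGB image scaled to [0, 1], NHWC.
    image = Image.open(image_path).convert("RGB").resize((512, 512))
    x = np.asarray(image, dtype=np.float32)[None] / 255.0
    (scores,) = session.run(None, {session.get_inputs()[0].name: x})
    # Keep tags whose confidence clears the threshold; these become prompt words.
    return [(tag, float(p)) for tag, p in zip(tags, scores[0]) if p >= threshold]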

oovm commented

Does adding --ema and --simplify-unet improve generation quality?


Uploaded the Anything models at: oovm/anything

# anything-v2.1-fp16
rm -rf ./anything-v2.1-fp16
wget https://huggingface.co/swl-models/anything-v2.1/resolve/main/anything-V2.1-pruned-fp16.safetensors -c
python scripts/sd2pyke.py ./anything-V2.1-pruned-fp16.safetensors ./anything-v2.1-fp16 --fp16 -C v1-inference.yaml
# anything-v3.0-fp16
rm -rf ./anything-v3.0-fp16
wget https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/anything-v3.0-pruned-fp16.safetensors -c
python scripts/sd2pyke.py ./anything-v3.0-pruned-fp16.safetensors ./anything-v3.0-fp16 --fp16 -C v1-inference.yaml
# anything-v4.0-fp16
rm -rf ./anything-v4.0-fp16
wget https://huggingface.co/andite/anything-v4.0/resolve/main/anything-v4.0-pruned-fp16.safetensors -c
python scripts/sd2pyke.py ./anything-v4.0-pruned-fp16.safetensors ./anything-v4.0-fp16 --fp16 -C v1-inference.yaml
# anything-v4.5-fp16
rm -rf ./anything-v4.5-fp16
wget https://huggingface.co/andite/anything-v4.0/resolve/main/anything-v4.5-pruned-fp16.ckpt -c
python scripts/sd2pyke.py ./anything-v4.5-pruned-fp16.ckpt ./anything-v4.5-fp16 --fp16 -C v1-inference.yaml
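Every model above follows the same download-and-convert pattern, so it could be wrapped in a small shell helper. This function is just a sketch of mine, not part of the repo's scripts:

convert() {
  # usage: convert <checkpoint-url> <output-dir>
  rm -rf "$2"
  wget -c "$1"
  python scripts/sd2pyke.py "./$(basename "$1")" "$2" --fp16 -C v1-inference.yaml
}
# e.g. convert https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/anything-v3.0-pruned-fp16.safetensors ./anything-v3.0-fp16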

Uploaded the AOM (AbyssOrangeMix) models at: oovm/aom

# aom-v1.0-safe-fp16
rm -rf ./aom-v1.0-safe-fp16
wget https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix/AbyssOrangeMix_base.ckpt -c
python scripts/sd2pyke.py ./AbyssOrangeMix_base.ckpt ./aom-v1.0-safe-fp16 --fp16 -C v1-inference.yaml
# aom-v1.0-soft-fp16
rm -rf ./aom-v1.0-soft-fp16
# aom-v1.0-hard-fp16
rm -rf ./aom-v1.0-hard-fp16
wget https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors -c
python scripts/sd2pyke.py ./AbyssOrangeMix.safetensors ./aom-v1.0-hard-fp16 --fp16 -C v1-inference.yaml
# aom-v2.0-safe-fp16
rm -rf ./aom-v2.0-safe-fp16
wget https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix2/AbyssOrangeMix2_sfw.safetensors -c
python scripts/sd2pyke.py ./AbyssOrangeMix2_sfw.safetensors ./aom-v2.0-safe-fp16 --fp16 -C v1-inference.yaml
# aom-v2.0-soft-fp16
rm -rf ./aom-v2.0-soft-fp16
# aom-v2.0-hard-fp16
rm -rf ./aom-v2.0-hard-fp16
wget https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix2/Pruned/AbyssOrangeMix2_hard_pruned_fp16_with_VAE.safetensors -c
python scripts/sd2pyke.py ./AbyssOrangeMix2_hard_pruned_fp16_with_VAE.safetensors ./aom-v2.0-hard-fp16 --fp16 -C v1-inference.yaml
# aom-v3.0-safe-fp16
rm -rf ./aom-v3.0-safe-fp16
wget https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix3/AOM3_orangemixs.safetensors -c
python scripts/sd2pyke.py ./AOM3_orangemixs.safetensors ./aom-v3.0-safe-fp16 --fp16 -C v1-inference.yaml

decahedron1 commented

> How about inferring prompt words from pictures?
>
> I have completed the DeepDanbooru inference part: oovm/deep-danbooru
>
> The other one is CLIP inference.

Interesting, I'll have a look at deep-danbooru 🙂
By "clip inference", do you mean CLIP guidance?

> Does adding --ema and --simplify-unet improve generation quality?

--ema may or may not improve quality. I've never tested it thoroughly, and I've seen people both recommend and advise against it for inference, so I'm not sure. A basic test with AOM2 gave identical results, but YMMV.
--simplify-unet does not affect image quality; it just makes the UNet run faster.
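If you want to compare for yourself, both flags are applied at conversion time. A sketch reusing the AOM2 command from above (the -ema output directory name is my own):

# Convert the same checkpoint with and without the flags, then generate with
# identical prompts and seeds to compare.
python scripts/sd2pyke.py ./AbyssOrangeMix2_sfw.safetensors ./aom-v2.0-safe-fp16-ema --fp16 --ema --simplify-unet -C v1-inference.yaml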