Core ML VAEs

Original Stable Diffusion VAEs already converted to MLMODELC

Usage · Choosing a VAE · Comparison

Usage

Download the .mlmodelc folders and swap them with the ones found in your model folder.
You can visit my Hugging Face repo if you prefer to download them as .zip files. I couldn't upload them here already packed as .zip, since GitHub's limit for individual files is 100 MB and I don't want to pay for Git LFS storage.
Based on the version you are using (split-einsum, original, original_512x768, or original_768x512), pick the corresponding VAE.
Always make a backup of your original files in case you don't like the new look 😉
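If you prefer to script the swap, here is a minimal Python sketch. It assumes the compiled VAE decoder lives in a folder named VAEDecoder.mlmodelc inside your model folder (the layout produced by Apple's ml-stable-diffusion conversion tools); the folder name and all paths below are assumptions, so adjust them to your setup.

```python
# swap_vae.py -- back up the current VAE decoder and drop a new one into place.
# Assumptions: the compiled decoder folder is named "VAEDecoder.mlmodelc" and
# MODEL_DIR / NEW_VAE point at your own model folder and downloaded VAE.
import shutil
from pathlib import Path

MODEL_DIR = Path("~/models/my-model_original_512x768").expanduser()          # assumed path
NEW_VAE = Path("~/Downloads/orangemix-vae_original_512x768.mlmodelc").expanduser()  # assumed path

current = MODEL_DIR / "VAEDecoder.mlmodelc"
backup = MODEL_DIR / "VAEDecoder.mlmodelc.bak"

if current.exists():
    if backup.exists():
        # A backup of the original already exists; just remove the current copy.
        shutil.rmtree(current)
    else:
        # Keep the original decoder around so you can revert later.
        shutil.move(str(current), str(backup))

# Copy the downloaded .mlmodelc folder into place under the expected name.
shutil.copytree(NEW_VAE, current)
print(f"Swapped VAE decoder in {MODEL_DIR}")
```

Running it again with a different NEW_VAE simply replaces the decoder while keeping the original .bak backup untouched, so reverting is just renaming the backup folder back.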

Other Resources

If you want to convert a model yourself, you can use my SD to Core ML script.
Alternatively, already compiled models can be found here.

Choosing a VAE

  • anything-vae: for use with anime models; found in Anything v3. Intended for ANE use only, since SPLIT_EINSUM models combined with CPU_AND_NE sometimes produce washed-out images; orangemix-vae yields far superior results if used with CPU_AND_GPU (see the compute-unit sketch after this list).
  • kl-f8-anime: for use with anime models; found in Waifu Diffusion v1.4. Output is smoother and has more contrast than orangemix-vae. Very similar to sd-vae-ft-mse.
  • orangemix-vae: for use with anime models; found in OrangeMixs. The absolute best if used with CPU_AND_GPU. Produces washed-out images if used with CPU_AND_NE.
  • sd-vae-ft-mse: for use with realistic models; found in Stable Diffusion. Basically every new realistic model has it already embedded.
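
Since the recommendations above hinge on which compute units the VAE runs on, here is a minimal sketch of loading a compiled .mlmodelc with an explicit compute unit via coremltools (CompiledMLModel requires coremltools 6.1 or later; the file path is an assumed example):

```python
# Load a compiled Core ML VAE decoder with an explicit compute unit.
# Requires coremltools >= 6.1 (CompiledMLModel); the path below is an assumed example.
import coremltools as ct

vae_path = "orangemix-vae_original_512x768.mlmodelc"  # assumed local path

# CPU_AND_GPU is the recommendation for orangemix-vae; CPU_AND_NE routes work to the
# Apple Neural Engine, which is where some VAEs produce washed-out images.
vae = ct.models.CompiledMLModel(vae_path, compute_units=ct.ComputeUnit.CPU_AND_GPU)

print("Loaded", vae_path, "with CPU_AND_GPU")
```

Apps built on Apple's ml-stable-diffusion pipeline typically expose the same compute-unit choice in their settings, so in practice you pick CPU_AND_GPU or CPU_AND_NE there rather than in code.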