n00mkrad/text2image-gui

add Kandinsky 2.0 - the first multilingual text2image model


Ability to write prompts in more than 100 languages.

Kandinsky 2.0
https://github.com/ai-forever/Kandinsky-2.0
https://huggingface.co/sberbank-ai/Kandinsky_2.0

Model architecture:
It is a latent diffusion model with two multilingual text encoders:

mCLIP-XLMR 560M parameters
mT5-encoder-small 146M parameters
Together with the multilingual training datasets, these encoders enable genuinely multilingual text-to-image generation.

Kandinsky 2.0 was trained on a large multilingual dataset of roughly 1B samples, including the samples used to train the original Kandinsky model.

For the diffusion part, Kandinsky 2.0 uses a U-Net with 1.2B parameters.
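
To make the dual-encoder conditioning concrete, here is a small illustrative sketch (my own, not code from the Kandinsky repo) of one common way two text encoders can feed a single latent-diffusion U-Net; the projection width, sequence lengths, and the concatenation scheme are assumptions, not the model's actual values:

```python
import torch

# Illustrative only: placeholder shapes, not the real Kandinsky 2.0 dimensions.
batch, seq_len = 1, 77
mclip_hidden = torch.randn(batch, seq_len, 1024)  # stand-in for mCLIP-XLMR token features
mt5_hidden = torch.randn(batch, seq_len, 512)     # stand-in for mT5-encoder-small features

# One plausible wiring (an assumption): project both streams to a shared width and
# concatenate along the sequence axis, giving the U-Net a single cross-attention context.
proj_mclip = torch.nn.Linear(1024, 768)
proj_mt5 = torch.nn.Linear(512, 768)
context = torch.cat([proj_mclip(mclip_hidden), proj_mt5(mt5_hidden)], dim=1)
print(context.shape)  # torch.Size([1, 154, 768])
```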

Interesting, but quality doesn't seem to be that good.
I will keep an eye on it though.

Kandinsky 2.1 inherits best practices from DALL-E 2 and latent diffusion, while introducing some new ideas.

As text and image encoders it uses CLIP, plus a diffusion image prior that maps between the latent spaces of the CLIP modalities. This approach improves the visual quality of the model and opens up image blending and text-guided image manipulation.

For the diffusion mapping between latent spaces, a transformer with num_layers=20, num_heads=32, and hidden_size=2048 is used.
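
As a rough sanity check (my own arithmetic, assuming a standard transformer block with about 12 * hidden_size^2 parameters: ~4h^2 for attention plus ~8h^2 for a 4x-expanded MLP), those settings land right around the 1B figure listed for the image prior below:

```python
# Back-of-the-envelope estimate; the 12*h^2-per-layer rule is an approximation.
hidden_size, num_layers = 2048, 20
params_per_layer = 12 * hidden_size ** 2   # attention (~4h^2) + MLP (~8h^2)
total = num_layers * params_per_layer
print(f"~{total / 1e9:.2f}B parameters")   # ~1.01B parameters
```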

Other architecture parts:

Text encoder (XLM-Roberta-Large-Vit-L-14) - 560M parameters
Diffusion image prior - 1B parameters
CLIP image encoder (ViT-L/14) - 427M parameters
Latent diffusion U-Net - 1.22B parameters
MoVQ encoder/decoder - 67M parameters

Kandinsky 2.1 was trained on the large-scale LAION HighRes image-text dataset and fine-tuned on internal datasets.
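
For anyone who wants to try the two-stage setup (prior, then decoder), here is a minimal sketch using the Hugging Face diffusers pipelines. It assumes the community-hosted weights kandinsky-community/kandinsky-2-1-prior and kandinsky-community/kandinsky-2-1 (not mentioned in this thread) and a GPU with enough VRAM for fp16:

```python
import torch
from diffusers import KandinskyPriorPipeline, KandinskyPipeline

# Stage 1: the diffusion image prior maps the CLIP text embedding to a CLIP image embedding.
# The model IDs below are the community-hosted conversions, an assumption on my part.
prior = KandinskyPriorPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-1-prior", torch_dtype=torch.float16
).to("cuda")

# Stage 2: the latent-diffusion U-Net plus the MoVQ decoder turn that embedding into an image.
decoder = KandinskyPipeline.from_pretrained(
    "kandinsky-community/kandinsky-2-1", torch_dtype=torch.float16
).to("cuda")

prompt = "a red cat, 4k photo"
image_embeds, negative_image_embeds = prior(prompt, guidance_scale=1.0).to_tuple()
image = decoder(
    prompt=prompt,
    image_embeds=image_embeds,
    negative_image_embeds=negative_image_embeds,
    height=768,
    width=768,
    num_inference_steps=50,
).images[0]
image.save("kandinsky_2_1_sample.png")
```

If that is simpler to integrate into the GUI, newer diffusers releases also expose a combined text-to-image pipeline (via AutoPipelineForText2Image) that hides the prior step, as far as I can tell.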