Ever wanted to generate images in a particular style with state-of-the-art Stable Diffusion models? With this repo, you can fine-tune Stable Diffusion to generate a wide range of styles.
Example prompts from the sample gallery:

- Girl with bunny ears
- A man wearing a mask
- A powerful man with fire around his body
- An old portrait of a woman
- A girl playing with a dog
- Two boys celebrating a soccer goal
- A portrait of a blonde man
- A powerful female warrior
- Install dependencies
```bash
git clone https://github.com/RSDP101/Anime-SD.git
# Create and activate a virtual environment
python -m venv venv
source venv/bin/activate
cd Anime-SD/examples/text_to_image/
pip install git+https://github.com/huggingface/diffusers.git
pip install -U -r requirements.txt
accelerate config default
```
- Run training on an SD model and dataset. The defaults are `CompVis/stable-diffusion-v1-4` and `rod101/Anime1K`.
```bash
# Change the model and dataset configuration in the training.py script.
python3 training.py
```
Training saves the model's weights to `sd-anime-model`.
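The exact variable names inside `training.py` aren't fixed by this README; a hypothetical configuration block for the defaults above might look like:

```python
# Hypothetical configuration — the actual variable names in training.py may differ.
MODEL_NAME = "CompVis/stable-diffusion-v1-4"  # base SD checkpoint to fine-tune
DATASET_NAME = "rod101/Anime1K"               # Hugging Face dataset of image/caption pairs
OUTPUT_DIR = "sd-anime-model"                 # where the fine-tuned weights are saved
```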
- Run inference on your fine-tuned model.
```bash
python3 inference.py
```
Set the prompt you want to generate in the `inference.py` script; the model and dataset configuration lives in `training.py`.
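For reference, loading the fine-tuned weights follows the standard `diffusers` API. A minimal sketch of what `inference.py` could do (the `generate` helper and `output.png` filename are illustrative, not the script's actual contents; `sd-anime-model` is the output directory from training):

```python
import torch
from diffusers import StableDiffusionPipeline

def generate(prompt: str, weights_dir: str = "sd-anime-model") -> None:
    # Load the fine-tuned pipeline and move it to GPU if one is available.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    pipe = StableDiffusionPipeline.from_pretrained(weights_dir)
    pipe = pipe.to(device)
    # Generate a single image for the prompt and save it next to the script.
    image = pipe(prompt).images[0]
    image.save("output.png")

if __name__ == "__main__":
    # One of the example prompts from this README.
    generate("A girl playing with a dog")
```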