We implement a simple module that generates images from user-defined text prompts. The module uses the pretrained Stable Diffusion v1.5 model provided by runwayml on Hugging Face.
- Install Conda, if not already installed.
- Clone the repository:
git clone https://github.com/byrkbrk/generating-via-prompt-sd.git
- Change the directory:
cd generating-via-prompt-sd
- For macOS, run:
conda env create -f generating-via-prompt_macos.yaml
For Linux or Windows, run:
conda env create -f generating-via-prompt_linux.yaml
- Activate the environment:
conda activate generating-via-prompt-sd
Check out how to use the module:
python3 generate.py -h
Output:
Generate image by text prompts using Stable Diffusion
positional arguments:
  text_prompts          Text prompts for image generation

options:
  -h, --help            show this help message and exit
  --scheduler_name SCHEDULER_NAME
                        Scheduler name that be used during inference. Default:
                        'pndm'
  --device DEVICE       Name of the device that be used during inference.
                        Default: None
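The help text above implies a command-line interface along the following lines. This is a hedged reconstruction using Python's argparse; the option names and defaults mirror the help output, but the actual argument handling inside generate.py may differ.

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    # Reconstruction of the CLI implied by the help output above;
    # generate.py may define these arguments differently.
    parser = argparse.ArgumentParser(
        description="Generate image by text prompts using Stable Diffusion"
    )
    parser.add_argument(
        "text_prompts",
        nargs="+",
        help="Text prompts for image generation",
    )
    parser.add_argument(
        "--scheduler_name",
        default="pndm",
        help="Scheduler name used during inference. Default: 'pndm'",
    )
    parser.add_argument(
        "--device",
        default=None,
        help="Name of the device used during inference. Default: None",
    )
    return parser


# Parse a sample invocation (mirrors the usage example in this README)
args = build_parser().parse_args(
    ["an image of turtle in Picasso style", "--device", "cuda"]
)
```

With `nargs="+"`, several prompts can be passed in a single invocation, which matches the two-prompt example shown below.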
Example usage:
python3 generate.py \
    "an image of turtle in Picasso style" \
    "an image of turtle in Camille Pissarro style"
The output images seen below (left: Picasso style, right: Pissarro style) are saved into the generated-images folder.
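The saving step can be sketched as follows. The folder name matches the text above, but the filename scheme here is a hypothetical illustration; generate.py may name the output files differently.

```python
from pathlib import Path


def output_path(prompt: str, folder: str = "generated-images") -> Path:
    # Hypothetical filename scheme: lowercase the prompt, join words
    # with hyphens, and save as a PNG inside the output folder.
    Path(folder).mkdir(exist_ok=True)
    stem = "-".join(prompt.lower().split())
    return Path(folder) / f"{stem}.png"


path = output_path("an image of turtle in Picasso style")
```

Deriving the filename from the prompt keeps each generated image traceable to the text that produced it.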
To run the Gradio app on your local computer, execute:
python3 app.py
Then visit the URL http://127.0.0.1:7860 to open the interface.
See the display below for an example usage of the module via Gradio, for the prompt 'a picture of a lion in Claude Monet style'.