A ready-to-deploy inference endpoint for InstructPix2Pix.

This repo uses the Hugging Face diffusers implementation of the InstructPix2Pix model by Tim Brooks et al. - https://www.timothybrooks.com/instruct-pix2pix
There are two options to run it:

Without Docker:

```shell
pip install -r requirements.txt
python3 server.py
```

Or with Docker:

```shell
docker build -t pix2pix .
docker run -p 8000:8000 pix2pix
```
The model accepts the following inputs:

- `prompt` (str, required)
- `image` (base64 str, required) - a base64 string of the image (`data:image/type;base64,....` is also accepted); should be 512x512 or another standard Stable Diffusion 1.5 resolution for best results
- `seed` (int, optional, defaults to 42)
- `text_cfg_scale` (float, optional, defaults to 7)
- `image_cfg_scale` (float, optional, defaults to 1.5)
- `steps` (int, optional, defaults to 20)
- `randomize_cfg` (boolean, optional, defaults to False)
- `randomize_seed` (boolean, optional, defaults to True)
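As a sketch of how these inputs fit together, the snippet below base64-encodes an image and builds a request payload with the documented defaults. The endpoint URL and route are assumptions based on the default port above; check `server.py` for the actual route.

```python
import base64
import json

# Base64-encode the input image bytes. In practice you would read a real
# 512x512 file, e.g.:
#   with open("input.png", "rb") as f:
#       image_b64 = base64.b64encode(f.read()).decode("utf-8")
image_b64 = base64.b64encode(b"example image bytes").decode("utf-8")  # placeholder

payload = {
    "prompt": "Turn her into a cyborg",
    "image": image_b64,          # plain base64 or a data URL are both accepted
    "seed": 42,
    "text_cfg_scale": 7.0,
    "image_cfg_scale": 1.5,
    "steps": 20,
    "randomize_cfg": False,
    "randomize_seed": True,
}
body = json.dumps(payload)

# To send it to the running server (URL is an assumption from the Docker
# port mapping above):
#   import requests
#   resp = requests.post("http://localhost:8000", json=payload)
```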
Additional parameters:

- `test_mode` (boolean, optional, defaults to False)
- `toDataUrl` (boolean, optional, defaults to False) - set to True if you want the output as `data:image/type;base64,....`
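When `toDataUrl` is enabled, the returned image string carries a `data:image/...;base64,` prefix. A small helper to recover the raw base64 portion either way (the prefix format is an assumption based on the description above):

```python
def strip_data_url(s: str) -> str:
    """Return the base64 payload of a data URL, or the string unchanged."""
    if s.startswith("data:") and "," in s:
        return s.split(",", 1)[1]
    return s
```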
Not implemented:

- `negative_prompt`
- `num_images_per_prompt`
The model outputs a list of image objects, each with the following properties:

- `image` (base64 str) - base64, or base64 with a data URL prefix if specified
- `seed` (int)
- `text_cfg_scale` (float)
- `image_cfg_scale` (float)
- `steps` (int)
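A minimal sketch of consuming that output, assuming the response body parses into the list of objects described above (the field values here are fabricated for illustration):

```python
import base64

# Hypothetical parsed response matching the output schema above.
results = [
    {
        "image": base64.b64encode(b"fake-image-bytes").decode("utf-8"),
        "seed": 42,
        "text_cfg_scale": 7.0,
        "image_cfg_scale": 1.5,
        "steps": 20,
    }
]

for i, item in enumerate(results):
    # Decode the base64 string back into image bytes and save to disk.
    raw = base64.b64decode(item["image"])
    with open(f"output_{i}.png", "wb") as f:
        f.write(raw)
```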
Check out `test.py` for an example.
Examples (input image | prompt):

| Input | Prompt |
|---|---|
| Venus de Milo | Turn her into a cyborg |
| Elon | Turn him into a cyborg |
Learn more about InstructPix2Pix here - https://www.timothybrooks.com/instruct-pix2pix
And find the Hugging Face model card here - https://huggingface.co/timbrooks/instruct-pix2pix
Learn about the 🍌 Serverless framework and the function of each file within it.