Want to figure out what a good prompt might be to create new images like an existing one? The CLIP Interrogator is here to get you answers!
Run Version 2 on Colab, HuggingFace, and Replicate!
Version 1 is still available in Colab for comparing different CLIP models
The CLIP Interrogator is a prompt engineering tool that combines OpenAI's CLIP and Salesforce's BLIP to optimize text prompts to match a given image. Use the resulting prompts with text-to-image models like Stable Diffusion on DreamStudio to create cool art!
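To give a rough sense of the CLIP half of that process, here is a minimal, illustrative sketch of ranking a few candidate phrases against an image by CLIP image-text similarity. This is not the library's internal implementation; the example.jpg path and the short candidates list are placeholders (the real tool scores much larger phrase lists for mediums, artists, styles, and so on).

import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-L/14", device=device)

# Placeholder candidate phrases; the CLIP Interrogator uses much larger lists.
candidates = ["oil painting", "digital art", "studio photograph", "pixel art"]

image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)  # placeholder image path
tokens = clip.tokenize(candidates).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(tokens)

# Normalize and compute cosine similarity between the image and each phrase
image_features /= image_features.norm(dim=-1, keepdim=True)
text_features /= text_features.norm(dim=-1, keepdim=True)
similarity = (image_features @ text_features.T).squeeze(0)

# Print the phrases from best to worst match
for score, phrase in sorted(zip(similarity.tolist(), candidates), reverse=True):
    print(f"{score:.3f}  {phrase}")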
Create and activate a Python virtual environment
python3 -m venv ci_env
source ci_env/bin/activate
Install with PIP
pip install -e git+https://github.com/openai/CLIP.git@main#egg=clip
pip install -e git+https://github.com/pharmapsychotic/BLIP.git@lib#egg=blip
pip install clip-interrogator
You can then use it in your script:

from PIL import Image
from clip_interrogator import Interrogator, Config

image_path = "my_image.jpg"  # placeholder: path to the image you want a prompt for
image = Image.open(image_path).convert('RGB')
ci = Interrogator(Config(clip_model_name="ViT-L/14"))  # loads the CLIP and BLIP models (downloads weights on first run)
print(ci.interrogate(image))
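If you want prompts for several images, the same Interrogator instance can be reused in a loop so the models are only loaded once. This is a small sketch that assumes a hypothetical images/ folder of .jpg files:

from pathlib import Path
from PIL import Image
from clip_interrogator import Interrogator, Config

ci = Interrogator(Config(clip_model_name="ViT-L/14"))  # load the models once and reuse them

for image_file in sorted(Path("images").glob("*.jpg")):  # hypothetical folder of images
    image = Image.open(image_file).convert('RGB')
    print(f"{image_file.name}: {ci.interrogate(image)}")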