Text-Guided-DDPM-Preserving-Semantics

We implement a model for text-guided style manipulation. Given an input image and a text prompt, the goal is to keep the image's identifiable structure and form intact while adopting the style and local texture associated with the text. The model is a residual UNet sampled with DDIM (denoising diffusion implicit models) and guided by CLIP-based loss functions.
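CLIP guidance of this kind is typically expressed as a loss between CLIP embeddings of the edited image and the text prompt. As a minimal sketch of one common choice, the directional CLIP loss, here is a NumPy version that operates on precomputed embeddings; the function name and the idea of passing embeddings directly (rather than raw images and text through a real CLIP model) are illustrative assumptions, not this repository's actual interface:

```python
import numpy as np

def directional_clip_loss(e_img_src, e_img_out, e_txt_src, e_txt_tgt):
    """Directional CLIP loss on precomputed embeddings (illustrative sketch).

    Encourages the shift in CLIP image space (source image -> edited image)
    to align with the shift in CLIP text space (source prompt -> target
    prompt), returning 1 - cosine similarity of the two normalized shifts.
    """
    d_img = e_img_out - e_img_src          # image-embedding direction
    d_txt = e_txt_tgt - e_txt_src          # text-embedding direction
    d_img = d_img / np.linalg.norm(d_img)  # unit-normalize both directions
    d_txt = d_txt / np.linalg.norm(d_txt)
    return 1.0 - float(d_img @ d_txt)      # 0 when aligned, 2 when opposed
```

During editing, this loss (or a similar CLIP similarity term) is backpropagated through the denoising step to steer the DDIM trajectory toward the text-described style while a separate reconstruction or identity term preserves structure.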

Primary language: Python
