Plan for integrating stable diffusion?
jeeyung opened this issue · 5 comments
Thank you for sharing this great work!
I am wondering if you have a plan to integrate the feature of fine-tuning CLIP with Stable Diffusion.
Yes, we plan to integrate it in 1-2 weeks or so.
The current code base does support test-time adaptation (TTA) of CLIP classifiers with Stable Diffusion.
In the camera-ready paper, we report CLIP results using more advanced text-prompt engineering and deploy LoRA to update Stable Diffusion. The current code base uses simpler text prompts ("a photo of [bla]") and does not update Stable Diffusion, but it still works fine.
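As a rough illustration of the LoRA idea mentioned above (this is a conceptual sketch in NumPy, not the repo's actual implementation, which would apply LoRA to the Stable Diffusion UNet's attention weights): a frozen weight matrix W is augmented with a trainable low-rank update, W_eff = W + (alpha / r) * B @ A, and only A and B are trained.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 4

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection (zero-init)

def lora_forward(x):
    # Base path plus the low-rank update, scaled by alpha / r
    return x @ (W + (alpha / r) * (B @ A)).T

x = rng.standard_normal((1, d_in))
# Because B is zero-initialized, the LoRA branch contributes nothing
# before training, so the output matches the frozen model exactly.
assert np.allclose(lora_forward(x), x @ W.T)
```

Zero-initializing B is the standard LoRA trick: it guarantees the adapted model starts out identical to the pretrained one, so fine-tuning departs smoothly from the frozen weights.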
You can try using the following command:
```
python main.py +experiment=sd model.class_arch=clipb32 input.dataset_name=FGVCAircraftSubset
```
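For reference, the simple prompt template mentioned above can be sketched as follows (a hypothetical helper; the repo's actual template and function names may differ):

```python
def make_prompts(class_names):
    """Build one text prompt per class using the simple template
    "a photo of [class]" described above."""
    return [f"a photo of {name}" for name in class_names]

# Example with a couple of placeholder FGVC-Aircraft-style class names
print(make_prompts(["707-320", "A310"]))
```

More advanced prompt engineering, as used in the camera-ready results, would swap this single template for an ensemble of templates per class.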
Thank you for sharing this :)
Have you tried this framework during training? I am just curious.
If so, could you share the results?
No, we haven't tried it, although I too would be interested to see that!