Fine-tuning
tekmen0 opened this issue · 5 comments
Thank you for such great work. I need to train models for image translation tasks (no text conditioning needed). Do you plan to release instructions for fine-tuning, or can I integrate LoRA or DreamBooth methods into your code? Do you have any plans to release the source code?
Releasing the source code requires approval from my superior. It is also feasible to fine-tune the model with conventional SD training methods, and the model will still retain its ability to sample in a few steps (for example, 4 steps) after a short period of training. We are currently researching how to distill existing LoRAs into SDXS, and if successful, the training code will be released together with it (if the company permits me to release the code).
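For anyone who wants to try the "conventional SD training" route before any official code lands, here is a minimal sketch based on the standard diffusers text-to-image loop. The model id `IDKiro/sdxs-512-0.9`, the epsilon-prediction MSE objective, the learning rate, and the `train_dataloader` are my own assumptions, not the author's recipe:

```python
# Minimal sketch: conventional SD-style finetuning of the SDXS UNet.
# Assumptions: Hugging Face id "IDKiro/sdxs-512-0.9", an epsilon-prediction
# MSE objective, and a user-provided `train_dataloader` yielding
# {"pixel_values": float tensor in [-1, 1], "input_ids": tokenized captions}.
import torch
import torch.nn.functional as F
from diffusers import StableDiffusionPipeline, DDPMScheduler

device = "cuda"
pipe = StableDiffusionPipeline.from_pretrained("IDKiro/sdxs-512-0.9").to(device)
unet, vae, text_encoder = pipe.unet, pipe.vae, pipe.text_encoder

# Train only the UNet; keep the VAE and text encoder frozen.
unet.train()
vae.requires_grad_(False)
text_encoder.requires_grad_(False)

# Standard DDPM noising schedule built from the pipeline's scheduler config.
noise_scheduler = DDPMScheduler.from_config(pipe.scheduler.config)
optimizer = torch.optim.AdamW(unet.parameters(), lr=1e-5)

for batch in train_dataloader:
    # Encode images to latents (handle both AutoencoderKL and tiny-VAE outputs).
    enc = vae.encode(batch["pixel_values"].to(device))
    latents = enc.latent_dist.sample() if hasattr(enc, "latent_dist") else enc.latents
    latents = latents * getattr(vae.config, "scaling_factor", 1.0)

    # Standard diffusion objective: add noise at a random timestep, predict it.
    noise = torch.randn_like(latents)
    timesteps = torch.randint(
        0, noise_scheduler.config.num_train_timesteps,
        (latents.shape[0],), device=device,
    ).long()
    noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)

    encoder_hidden_states = text_encoder(batch["input_ids"].to(device))[0]
    model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample

    loss = F.mse_loss(model_pred.float(), noise.float())
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

Treat this as a starting point only; whether plain epsilon-prediction training preserves the few-step behaviour is exactly what the comments below discuss.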
I've already submitted the open-source application to the company; it may go through within a week.
I am very sorry, but the open-source application was rejected by the company. I will close the issue for the time being and reopen it if there is further progress. Again, I apologize.
I tried to finetune sdxs-512-0.9 using the diffusers text-to-image training example.
As mentioned, the finetuned model generates images at 4 steps after a short period of finetuning. However, 1-step image generation disappeared in the finetuned model, and using more than 4 steps (e.g. 8 or 10) does not improve image quality much.
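For reference, this is roughly how I compare the step counts (a minimal sketch; the model id `IDKiro/sdxs-512-0.9`, the prompt, and `guidance_scale=0.0` are my own choices, and the path would be swapped for the finetuned checkpoint):

```python
# Minimal sketch: compare 1-step vs multi-step sampling with the same seed.
# Assumption: base model id "IDKiro/sdxs-512-0.9"; replace it with the
# finetuned checkpoint directory to reproduce the behaviour described above.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "IDKiro/sdxs-512-0.9", torch_dtype=torch.float16
).to("cuda")

prompt = "a photo of a red sports car on a mountain road"

for steps in (1, 4, 8):
    # Re-seed each run so the comparison across step counts is fair.
    generator = torch.Generator("cuda").manual_seed(0)
    image = pipe(
        prompt,
        num_inference_steps=steps,
        guidance_scale=0.0,  # few-step SDXS sampling is typically run without CFG
        generator=generator,
    ).images[0]
    image.save(f"sample_{steps}_steps.png")
```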
It feels as if a different model is being trained during finetuning, since 1-step generation disappeared. Is that the case?
Is it possible to reach 4-step image quality comparable to sdxs-512-0.9 just by continuing the finetuning, or do I need to change training settings such as the noise scheduler, batch size, etc. to improve quality?
Try this https://github.com/tianweiy/DMD2