peterwilli/sd-leap-booster

Poor results when training with default parameters

Closed this issue · 10 comments

I've trained an embedding on a specific subject with 35 training images and all the default parameters, except for mixed precision=bf16, xformers enabled, and resolution=768. These are the results for "Portrait photo of (token)":
[image]
Not using mixed precision didn't change much:
[image]

Try using fp32 precision. I haven't tested with embeddings, but training models while the UNet is in fp16 yielded bad results (rough sketch of the setting below).

Edit: My bad, didn't see the text in the middle
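For anyone comparing, here's a minimal sketch of what the precision settings amount to, assuming a diffusers/accelerate-style training setup; the model id and the way this repo actually wires things up are assumptions on my part, not its real code:

```python
import torch
from accelerate import Accelerator
from diffusers import UNet2DConditionModel

# Full fp32 everywhere -- the setting suggested above.
accelerator = Accelerator(mixed_precision="no")
unet = UNet2DConditionModel.from_pretrained(
    "stabilityai/stable-diffusion-2-1",  # placeholder model id
    subfolder="unet",
    torch_dtype=torch.float32,
)

# The reporter's original setting -- bf16 mixed precision. Fine for
# inference, but half-precision weights during training have given
# poor results in my experience with full-model finetuning.
# accelerator = Accelerator(mixed_precision="bf16")
```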

Increased steps to 500; still no discernible training subject in the results:
[image]

Okay, now after 3000 steps I'm getting somewhat better results:
[image]
That took about 3 hours on an RTX 3090.

Hey there. I've run into similar problems with my own photos, even though it worked fine during testing. I don't know what's going on; maybe I uploaded the wrong weights, or something else was up. I'm currently a little overworked, but tomorrow or the day after I'll be able to tackle this!

Hey lovelies, I decided to do a 180 and figure out the bug after all. I think I found it and am testing it out now.
I couldn't stand the idea that I uploaded something faulty. When it's fixed, I'll update it in Thingy and this project.

Awesome! Can't wait to continue testing it!

Hey lovely @marianobastiUNRN, thanks for your bug report! I just pushed a new update. After some checking, I found out that I had accidentally left the learning rate scheduler set to cosine. I changed it back, ran multiple tests with my own pics on the default values, and this is the result after 4 minutes (a rough sketch of the scheduler change is at the bottom of this comment). Please let me know how your progress goes:

[image]

This was my test data:
[image]

I know I look like a drug dealer in AI, and I guess I have to roll with it.
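For anyone curious what that fix looks like in code, here's a rough sketch using diffusers' get_scheduler; the optimizer, learning rate, and step count below are placeholders for illustration, not this repo's actual defaults:

```python
import torch
from diffusers.optimization import get_scheduler

# Placeholder embedding parameter and optimizer, just for illustration.
embedding = torch.nn.Parameter(torch.zeros(1, 1024))
optimizer = torch.optim.AdamW([embedding], lr=5e-3)

# What the buggy default amounted to: a cosine schedule, so the learning
# rate decayed over the run instead of staying where it should.
cosine_lr = get_scheduler(
    "cosine",
    optimizer=optimizer,
    num_warmup_steps=0,
    num_training_steps=300,
)

# The fix described above: a constant learning rate for the whole run.
constant_lr = get_scheduler(
    "constant",
    optimizer=optimizer,
    num_warmup_steps=0,
    num_training_steps=300,
)
```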

Much better results! Trying it right now. How many steps was your embedding trained for?

Ah, I'll have to redo my testing as well. I noticed that the learning rate seemed to be fluctuating, but I thought that might have been related to the LEAP part. Do you plan on doing a whitepaper on how LEAP actually works and how it was trained?

Hi @sALTaccount, you're the second one to ask this. I'm more than happy to try, but I don't know how or where to start; I'm fully self-taught, not an academic. If you're interested in helping out, I'm happy to talk about it with you in DMs. If you'd like, what's your preferred method of communication?

(Closing issue because the quality is better now)