Running on a Low-Memory GPU (8 GB)
hqnicolas opened this issue · 2 comments
Hello, I have been following your wonderful work.
I saw that many users are actively training these new StyleGAN techniques on Google Colab and Microsoft Azure.
To help spread this knowledge, I would like to propose a fork of stylegan3-editing with an architecture change that reduces VRAM usage, even though this is known to impact learning performance.
Original Proposal
Rosinality Resized
InsightFace IR-SE50 Resized
Yuval-Alaluf Restyle Resized
At the moment I'm still training the pre-trained weights for the distributions listed above; I'll make all the resized files available soon, trained at 256x256 with args.latent = 32.
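For concreteness, here is a minimal sketch of what that resized configuration could look like, assuming the model classes from rosinality's stylegan2-pytorch (`Generator(size, style_dim, n_mlp)` and `Discriminator(size)`); the variable names below are mine, not options exposed by any of the repos above.

```python
# Hedged sketch of the proposed low-VRAM configuration, using the
# constructor signatures from rosinality/stylegan2-pytorch. The values
# below are the proposal from this thread, not tested defaults.
from model import Generator, Discriminator  # rosinality/stylegan2-pytorch

size = 256    # proposed 256x256 output instead of 1024x1024
latent = 32   # proposed latent/style dimension instead of 512
n_mlp = 8     # mapping network depth (unchanged)

generator = Generator(size, latent, n_mlp)
discriminator = Discriminator(size)
```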
I would like to know if you have any references for this type of modification to the StyleGAN architecture.
I haven't had a chance to go through the different modifications in detail, but what do you mean by setting the latent to 32?
To change the latent code to 32, you'd need to modify StyleGAN's architecture quite a lot and retrain it accordingly. It could also be that 32 is simply too small a latent space to accurately encode images or to find meaningful latent directions.
These are just some things to keep in mind.
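To make the retraining point concrete, here is a toy illustration in plain PyTorch (no StyleGAN code) of why pretrained 512-d weights cannot simply be reused with a 32-d latent:

```python
# Toy illustration: the mapping network's weights are sized for the
# original latent dimension, so a 32-d code cannot pass through them.
import torch

pretrained_mapping_layer = torch.nn.Linear(512, 512)  # shaped for SG's 512-d space
z_small = torch.randn(1, 32)  # proposed 32-d latent code

try:
    pretrained_mapping_layer(z_small)
except RuntimeError as e:
    # Shape mismatch: every layer that touches the latent must be
    # rebuilt at the new width and retrained from scratch.
    print(e)
```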
A different approach would be to keep the same StyleGAN generator and use model compression techniques to shrink the encoder network.
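One concrete form of that compression would be knowledge distillation: train a narrow student encoder to mimic the W+ latents predicted by the full pretrained encoder. A minimal sketch, where SmallEncoder, the input resolution, and all shapes are hypothetical placeholders rather than code from this repo:

```python
# Hedged distillation sketch: shrink the encoder while keeping the same
# StyleGAN generator. All module names and shapes here are illustrative.
import torch
import torch.nn as nn

class SmallEncoder(nn.Module):
    """Narrow stand-in for a compressed encoder (a real one would be convolutional)."""
    def __init__(self, n_styles=14, style_dim=512):
        super().__init__()
        self.n_styles, self.style_dim = n_styles, style_dim
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 64 * 64, 256), nn.ReLU(),
            nn.Linear(256, n_styles * style_dim),
        )

    def forward(self, x):  # x: (batch, 3, 64, 64)
        return self.net(x).view(-1, self.n_styles, self.style_dim)

student = SmallEncoder()
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)

def distill_step(images, teacher_latents):
    # Match the student's predicted latent codes to the frozen teacher's.
    loss = nn.functional.mse_loss(student(images), teacher_latents)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage: teacher_latents = frozen_full_encoder(images), computed offline or per batch.
```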
Thanks for sharing your journey @hotnikq
Seems like training with a lower-resolution StyleGAN generator and a small batch size did the trick :)
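For anyone landing here with the same 8 GB constraint, a hedged recap of that recipe (keeping the standard 512-d latent, just lowering resolution and batch size; the exact values were not reported in this thread):

```python
# Illustrative low-VRAM recipe: 256x256 generator + small batches.
# Uses rosinality/stylegan2-pytorch's Generator, whose forward pass
# takes a list of latent codes and returns (image, latents).
import torch
from model import Generator

generator = Generator(256, style_dim=512, n_mlp=8).cuda()

batch = 4  # a small batch keeps activations within ~8 GB of VRAM
z = torch.randn(batch, 512).cuda()
fake, _ = generator([z])
print(fake.shape)  # torch.Size([4, 3, 256, 256])
```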