NVlabs/few-shot-vid2vid

Our Training Tutorial, hope you like it!

Opened this issue · 2 comments

We made a tutorial on training the few-shot vid2vid network and StyleGAN, hope you like it!
You can use StyleGAN and its latent code to generate few-shot-vid2vid input data with spatial continuity, which helps train a vid2vid network with higher accuracy and finer details such as teeth.
https://www.youtube.com/watch?v=zkWHTHFUYrM&lc=Ugwp3pNEoUC5m98xzfB4AaABAg
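The idea described above, smoothly varying a generator's latent code so that consecutive frames stay continuous, can be sketched independently of any particular StyleGAN implementation. The sketch below only produces the sequence of latent codes; feeding each code to an actual GAN generator is left as a placeholder, and the 512-dim latent size is an assumption borrowed from StyleGAN's defaults:

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical linear interpolation between two latent vectors.
    Keeps intermediate codes on the same hypersphere, which tends to
    give smoother image transitions than plain linear blending."""
    z0n = z0 / np.linalg.norm(z0)
    z1n = z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(z0n, z1n), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        # Vectors are (nearly) parallel; fall back to linear blending.
        return (1 - t) * z0 + t * z1
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

def latent_sequence(z_start, z_end, n_frames):
    """Latent codes for n_frames frames moving smoothly from z_start to z_end."""
    return [slerp(z_start, z_end, i / (n_frames - 1)) for i in range(n_frames)]

rng = np.random.default_rng(0)
z_a = rng.standard_normal(512)
z_b = rng.standard_normal(512)
codes = latent_sequence(z_a, z_b, 30)  # each code would be fed to the GAN generator
```

Because neighboring codes differ only slightly, the rendered frames vary smoothly, which is what gives the generated vid2vid training data its frame-to-frame continuity.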

This video is incomplete.

This is so nice of you.