YuanBoot/Intrinsic_Garment_Space

Mie_eval is not generalizing to any new motions. Is more data needed?

Closed this issue · 2 comments

Hi there, it's me again. First, I wanted to thank you for providing this working implementation; I was able to run all the example scripts and produce very high quality results! Once I had the networks trained, I experimented a bit to see how well they work on new inputs. Unfortunately, running 06.mie_eval.py on new data, even just a copy of data_set/anim/anim_01/Dude.anim.npy with slight perturbations, produced very unstable results. I was digging in to debug the problem when it occurred to me that 05.mie_train.py only trains on the 800 frames of animation in Dude.anim.npy. That doesn't seem like enough data to allow much, if any, generalization to new motions. Checking the original paper, 30,000 frames of animation were used, which I imagine leads to much better results. Is there any chance you can provide that data set?
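For reference, the perturbation I tried was along these lines (a rough sketch; the exact array layout of the .npy file and the noise scale are assumptions, adjust to the repo's actual data format):

```python
# Sketch: copy Dude.anim.npy, add small random noise, and save the result
# so it can be fed to 06.mie_eval.py. The file is assumed to hold a plain
# float array; if the repo stores a structured/object array, this needs
# adjusting (e.g. allow_pickle=True and per-field perturbation).
import numpy as np

anim = np.load("data_set/anim/anim_01/Dude.anim.npy")
perturbed = anim + np.random.normal(scale=0.01, size=anim.shape)  # small jitter
np.save("data_set/anim/anim_01/Dude.anim.perturbed.npy", perturbed)
```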

Thanks!
-- Paul

Hi @pkanyuk , that's great that you got very high quality results! If you use enough training data, the network can give stable results for any reasonable input. However, that is not the vital point. The network's key feature is that it can reconstruct meshes between different latent vectors (which correspond to different materials). For instance, suppose the character wears a denim skirt at the 1st frame and you want the skirt to become softer (like a silk skirt) by the 50th frame, with the skirt transforming smoothly from the 1st frame to the 50th. The network can do that: you can interpolate between the latent vectors. If you have time, please try that, and if you encounter any question, please message me without any hesitation.
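A minimal sketch of what that interpolation could look like (the latent vectors `z_denim` and `z_silk` and the `decode_mesh` call are hypothetical placeholders, not the repo's actual API; substitute the latents and decoder produced by the trained networks, e.g. from 05.mie_train.py):

```python
# Linearly blend from one material's latent vector (frame 1) to another's
# (frame num_frames), producing one latent per frame.
import numpy as np

def interpolate_latents(z_start, z_end, num_frames=50):
    """Return an array of latents blending from z_start to z_end."""
    latents = []
    for i in range(num_frames):
        t = i / (num_frames - 1)  # 0.0 at the first frame, 1.0 at the last
        latents.append((1.0 - t) * z_start + t * z_end)
    return np.stack(latents)

# Usage sketch: z_denim and z_silk would come from encoding a denim and a
# silk garment with the trained shape-descriptor network.
# blended = interpolate_latents(z_denim, z_silk, num_frames=50)
# meshes = [decode_mesh(z) for z in blended]  # decode_mesh is hypothetical
```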

Hi @YuanBoot, thanks for clarifying! It seems that with the amount of training data you provided, I can pass in almost any shape and the shape descriptor network produces a plausible cloth shape, which is awesome. I was hoping to get similar robustness to new motion inputs as well, but it's good to know that wasn't the point of this example. Sure thing, I'll try the latent vector interpolation too. I have access to a lot of mocap data, so I'll see whether increasing the amount of training data helps with handling novel motion input.