zhengyuf/IMavatar

Cross-subject Reenactment

uniBruce opened this issue · 4 comments

Hi, thanks for the amazing work. I am still wondering how to achieve cross-subject reenactment after reading the README file. Could you please provide some instructions?

Hey,

For cross-subject reenactment, move the test sequence's json file into the training subject's folder (preserving the same directory structure), then edit the trained avatar's config file, setting dataset.test.subdir to the test sequence's name. Set --only_json True to skip loading the GT images, and run inference.

Yufeng
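The steps above can be sketched in a small helper. This is illustrative only: the function name and paths are hypothetical, not part of the IMavatar codebase, and the config/CLI step still has to be done separately as described.

```python
import shutil
from pathlib import Path

def prepare_cross_subject(test_json: str, train_subject_dir: str, subdir_name: str) -> Path:
    """Copy the test sequence's json file into the training subject's
    folder under a new subdirectory, mirroring the expected structure.
    (Helper name and layout are illustrative, not from the repo.)"""
    dest_dir = Path(train_subject_dir) / subdir_name
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / Path(test_json).name
    shutil.copy(test_json, dest)
    return dest
```

After copying, point dataset.test.subdir in the trained avatar's config at `subdir_name` and run inference with `--only_json True`.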


Thanks for your reply. I am trying to perform cross-subject experiments, but the inference speed seems very slow (160~280s per frame). Is that normal? Or could I omit some unnecessary operations to accelerate the testing? Btw, I just need the RGB outputs.

Hi,

I don't think there is an easy way to speed up inference, other than reducing the resolution, which speeds things up quadratically (render time scales with the number of pixels).
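The quadratic scaling can be made concrete with a back-of-envelope estimate (assuming square frames and per-pixel cost; the function name and numbers below are illustrative, not measurements from IMavatar):

```python
def estimate_frame_time(t_seconds: float, old_res: int, new_res: int) -> float:
    """Estimate per-frame render time at a new resolution, assuming
    cost is proportional to pixel count (quadratic in side length)."""
    return t_seconds * (new_res / old_res) ** 2

# e.g. halving the side length cuts the per-frame time to a quarter:
# estimate_frame_time(200, 512, 256) -> 50.0
```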

One way to speed things up is to extract the canonical mesh, the blendshapes, and the skinning weights, and render IMavatar as a rigged mesh. However, the mesh normals will be slightly different, so the rendering would also differ slightly.

I didn't implement this in the released code, though, so if you are not in a hurry I would suggest just waiting for it.

Yufeng

Thanks a lot for your reply!