LizhenWangT/FaceVerse

How to learn the first 20 shape principal components from the detailed dataset?

Opened this issue · 7 comments

Nice work on learning an expressive face model from a hybrid dataset. I wonder how the first 20 shape principal components are learned from the 3D scan dataset.

I think you mean the first 20 shape components mentioned in our paper. We also fit the base model to our detailed dataset, which has the same topology, and the fitted models are released in our dataset as well. So it is just standard PCA on the neutral-expression models.
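
(For illustration only, a minimal sketch of what standard PCA on the fitted neutral-expression meshes could look like; the array names, shapes, and the `fit_shape_pca` helper are assumptions for this example, not the released FaceVerse code.)

```python
import numpy as np

def fit_shape_pca(neutral_verts, n_components=20):
    """neutral_verts: (N_subjects, N_vertices*3) flattened neutral-expression meshes
    fitted to the same base-model topology (illustrative shapes)."""
    mean_shape = neutral_verts.mean(axis=0)            # mean neutral face
    centered = neutral_verts - mean_shape              # remove the mean
    # SVD of the centered data gives the principal directions as rows of Vt
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    components = Vt[:n_components]                     # (n_components, N_vertices*3)
    std = S[:n_components] / np.sqrt(len(neutral_verts) - 1)  # per-component std
    return mean_shape, components, std
```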

Thank you for your reply! As you already have 100 PCA shape components from the coarse dataset, how can you ensure that the 20 standard PCA components from the detailed dataset are orthogonal to the previous 100 components?

That's a good question. Actually, they are not orthogonal. But we found this works better in practice, and there are almost no artifacts caused by the non-orthogonality, such as clipping. The 51 expression blendshapes of Apple's face model are also not orthogonal, but they still work quite well in practice.

@LizhenWangT Then is FaceVerse able to generate 51 blendshapes that match Apple's expression definition (so that the many models based on Apple's definition can be driven)?


Maybe simply applying Gram-Schmidt orthogonalization to the component matrix would generate an orthogonal basis.
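
(A minimal sketch of this suggestion, assuming the 100 coarse components are already orthonormal: project the 20 detailed components out of the coarse subspace and re-orthonormalize the residual via QR, which performs the Gram-Schmidt step. The names `coarse_basis`, `detail_basis`, and the `orthogonalize` helper are hypothetical and not the released v2 code.)

```python
import numpy as np

def orthogonalize(coarse_basis, detail_basis):
    """coarse_basis: (100, D), detail_basis: (20, D), with D = N_vertices*3.
    coarse_basis is assumed orthonormal (illustrative assumption)."""
    # Remove the part of each detail component that lies in the coarse subspace
    proj = detail_basis @ coarse_basis.T @ coarse_basis
    residual = detail_basis - proj
    # QR of the residual performs Gram-Schmidt, giving an orthonormal set
    # that is also orthogonal to the coarse basis
    Q, _ = np.linalg.qr(residual.T)
    return Q.T  # (20, D): orthonormal, orthogonal to coarse_basis
```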

@jinfagang This may still need several weeks.

The orthogonalization has been done in version 2.