cleardusk/3DDFA

Face reconstruction

Opened this issue · 12 comments

Hello. How do I get a face like this? How do I reconstruct a face?
image

Whatever image I use with the command python3 main.py -f samples/image.jpg, all the faces look the same:
image

Should I train it first? Could you provide more detailed documentation?

I think the ckpt assigned by main.py is already prepared in this repo (and it's trained!!!).
https://github.com/cleardusk/3DDFA/blob/master/models/phase1_wpdc_vdc.pth.tar
And if you can run main.py, it's obvious that the program can find this ckpt.
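If you want to double-check that it really is a trained model, a minimal sketch (assuming PyTorch is installed and the path matches your local checkout) is to load it and print the stored weight shapes:

import torch

# Minimal sketch: peek inside the pretrained checkpoint shipped with the repo.
# Assumes PyTorch is installed and the path matches your local checkout.
ckpt = torch.load('models/phase1_wpdc_vdc.pth.tar', map_location='cpu')

# The checkpoint is a dict; the learned weights usually sit under 'state_dict'.
state_dict = ckpt.get('state_dict', ckpt)
for name, tensor in state_dict.items():
    print(name, tuple(tensor.shape))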

After running the program, you should get the files shown below. Don't you?
image

The first and second files are what you need!!

Same outcome?? Maybe you should check whether the assigned image is right!!

I think the ckpt assigned by main.py is already prepared in this repo (and it's trained!!!). https://github.com/cleardusk/3DDFA/blob/master/models/phase1_wpdc_vdc.pth.tar And if you can run main.py, it's obvious that the program can find this ckpt.

image

After running the program, you should get the files shown below. Don't you? image

Yes, I got these files:

image

The first and second files are what you need!!

Same outcome?? Maybe you should check whether the assigned image is right!!

So, look at what the .ply and .obj files look like: just the same as the other outputs. The pose estimation was correct:
image

And this is the .ply file:
image

What is inside "phase1_wpdc_vdc.pth.tar"? Should I use the same file for new input?

@Zvyozdo4ka
I don't know whether it is possible, but maybe it's just because your test images are too similar?

Here are some suggestions:

  • Can you show the rendered mesh from the .obj file? I use MeshLab as my viewer.
    image

  • I think the defect of this model is that it doesn't really care about shape parameters...

  • All the loss functions used involve only the 68 keypoints and 132 resampled (randomly sampled) vertices.

  • The .ply file can be viewed in any text editor, so you can easily check the difference between two .ply files (see the sketch after this list).
    image

  • What is inside "phase1_wpdc_vdc.pth.tar"?
    - It records the parameters of the trained model, so of course you need the same file (the trained model) to align any face you want.
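For the .ply comparison mentioned above, here is a minimal sketch. It assumes the exported files are plain ASCII .ply (which is why they open in a text editor) and that both meshes share the same vertex count; the file names are just placeholders for your own outputs:

import numpy as np

# Minimal sketch: compare the vertex coordinates of two ASCII .ply files.
# Assumes each file has an "element vertex N" line and an "end_header" line,
# followed by N lines whose first three fields are the x y z coordinates.
def read_ply_vertices(path):
    with open(path) as f:
        lines = f.read().splitlines()
    n_vertex = next(int(l.split()[-1]) for l in lines if l.startswith('element vertex'))
    start = lines.index('end_header') + 1
    return np.array([[float(v) for v in l.split()[:3]] for l in lines[start:start + n_vertex]])

# Placeholder file names; substitute the .ply files main.py wrote for your images.
a = read_ply_vertices('face_a.ply')
b = read_ply_vertices('face_b.ply')
print('mean per-vertex difference:', np.abs(a - b).mean())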

Thank you for your hints!
It turned out that in CloudCompare the .obj was rendered without texture.

But look at how it looks:

image

And your 3D models without textures look the same for both your face and that celebrity's face. I needed a facial reconstruction as close to the face in the image as possible, like a sculpture.

Are you aware of this? The pose parameter of the 300W-LP dataset is 7-dimensional.
However, in the implementation code of the 3DDFA paper, the 12 regressed dimensions are used directly.
I don't know the connection between them. Do you? Many people are asking this question on GitHub, but I can't find the answer. Can you help me?

@ZHJNCUT you have to use these 7 parameters to reconstruct the transform matrix (it's a homogeneous 4x4 matrix, and only the 3x4 part is adjustable), which includes the scaling, rotation, and translation effects.
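I don't have the original script at hand, but a minimal sketch of the idea looks like this. The parameter order [pitch, yaw, roll, tx, ty, tz, scale] and the Rx·Ry·Rz composition are assumptions you should verify against your copy of 300W-LP:

import numpy as np

# Minimal sketch: rebuild the adjustable 3x4 part of the homogeneous transform
# from the 7 pose parameters. The order [pitch, yaw, roll, tx, ty, tz, scale]
# and the Rx @ Ry @ Rz composition are assumptions; check them against 300W-LP.
def matrix_from_pose(pitch, yaw, roll, tx, ty, tz, scale):
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(pitch), -np.sin(pitch)],
                   [0, np.sin(pitch),  np.cos(pitch)]])
    Ry = np.array([[ np.cos(yaw), 0, np.sin(yaw)],
                   [0, 1, 0],
                   [-np.sin(yaw), 0, np.cos(yaw)]])
    Rz = np.array([[np.cos(roll), -np.sin(roll), 0],
                   [np.sin(roll),  np.cos(roll), 0],
                   [0, 0, 1]])
    R = Rx @ Ry @ Rz
    P = np.hstack([scale * R, np.array([[tx], [ty], [tz]])])  # 3x4: scaled rotation + translation
    return P  # flattening this 3x4 matrix gives 12 values, matching the regressed pose

# The full homogeneous matrix is np.vstack([P, [0, 0, 0, 1]]); its last row stays fixed.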

Do you have the relevant code? I have tried several methods, but the results are all wrong, which is very upsetting. I hope you can help me.
The other thing is: does the reconstructed transformation matrix have anything to do with the size of the picture? For the same picture at a different size, are the 12 pose parameters the same? Thank you for your answer.

@ZHJNCUT you have to use these 7 parameters to reconstruct the transform matrix (it's a homogeneous 4x4 matrix, and only the 3x4 part is adjustable), which includes the scaling, rotation, and translation effects.

If you have relevant code, I hope you can send it to my email (hjzhang817@163.com), which would be greatly appreciated.

It has been a while since I last worked on cleardusk/3DDFA. Here are some links for getting a handle on it:
https://github.com/XingangPan/GAN2Shape.git
https://github.com/barisgecer/facegan.git
https://github.com/microsoft/DiscoFaceGAN.git
https://github.com/microsoft/Deep3DFaceReconstruction.git

Okay, thank you. Thank you very much.

I saw that the evaluation code in 3DDFA only computes the NME error of the two-dimensional feature points; there is no NME error for the three-dimensional vertices. I learned that the three-dimensional vertex NME is normalized by the face's 3D outer interocular distance. Do you know how to find this? What are its index values?
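For reference, I am not sure which indices 3DDFA's evaluation would use, but in the usual 68-landmark markup the outer eye corners are indices 36 and 45 (0-based), so a hedged sketch of a 3D vertex NME normalized by the outer interocular distance could look like this:

import numpy as np

# Hedged sketch: 3D vertex NME normalized by the outer interocular distance.
# Assumes 68-point landmarks where index 36 is the outer corner of one eye and
# index 45 the outer corner of the other (the common iBUG convention); verify
# these indices against the evaluation protocol you are reproducing.
def nme_3d(pred_vertices, gt_vertices, gt_landmarks68):
    # pred_vertices, gt_vertices: (N, 3) arrays of corresponding 3D vertices
    # gt_landmarks68: (68, 3) ground-truth 3D landmarks
    outer_interocular = np.linalg.norm(gt_landmarks68[36] - gt_landmarks68[45])
    per_vertex_error = np.linalg.norm(pred_vertices - gt_vertices, axis=1)
    return per_vertex_error.mean() / outer_interocular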
