HAMR

This repo is the source code for End-to-end Hand Mesh Recovery from a Monocular RGB Image by Xiong Zhang, Qiang Li, Hong Mo, Wenbo Zhang, and Wen Zheng. HAMR tackles the problem of reconstructing the full 3D mesh of a human hand from a single RGB image. In contrast to existing research on 2D and/or 3D hand pose estimation from RGB and/or depth data, HAMR provides a more expressive and useful mesh representation for monocular hand image understanding. In particular, the mesh representation is obtained by parameterizing a generic 3D hand model with shape and relative 3D joint angles.
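
To make the last sentence concrete, below is a minimal, self-contained sketch (not the released HAMR code) of the overall idea: an image encoder predicts shape coefficients and relative 3D joint angles, and a generic parametric hand model (MANO-like) decodes them into a full mesh, from which 3D joints are regressed. All class names, layer choices, and tensor sizes here (e.g. `HandModel`, `HAMRSketch`, 778 vertices, 16 joints) are illustrative assumptions, not the paper's actual architecture.

```python
# Illustrative sketch only: image -> (shape, relative joint angles) -> hand mesh.
import torch
import torch.nn as nn


class HandModel(nn.Module):
    """Toy stand-in for a generic parametric hand model:
    vertices = template + shape blendshapes (+ pose-driven deformation)."""

    def __init__(self, n_verts=778, n_shape=10, n_joints=16):
        super().__init__()
        self.register_buffer('template', torch.zeros(n_verts, 3))
        # Linear shape basis mapping shape coefficients to vertex offsets.
        self.register_buffer('shape_basis', torch.randn(n_shape, n_verts * 3) * 1e-3)
        # Linear regressor mapping mesh vertices to 3D joint locations.
        regressor = torch.rand(n_joints, n_verts)
        self.register_buffer('joint_regressor',
                             regressor / regressor.sum(dim=1, keepdim=True))

    def forward(self, shape, pose):
        b = shape.shape[0]
        # Shape-dependent vertex offsets added to the template mesh.
        verts = self.template + (shape @ self.shape_basis).view(b, -1, 3)
        # A real model would also apply linear blend skinning driven by the
        # relative joint angles in `pose`; this sketch keeps only the interface.
        joints = torch.einsum('jv,bvc->bjc', self.joint_regressor, verts)
        return verts, joints


class HAMRSketch(nn.Module):
    """Image -> (shape, relative joint angles) -> hand mesh + 3D joints."""

    def __init__(self, n_shape=10, n_joints=16):
        super().__init__()
        self.encoder = nn.Sequential(  # placeholder for a CNN backbone
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_shape + n_joints * 3),
        )
        self.hand_model = HandModel(n_shape=n_shape, n_joints=n_joints)
        self.n_shape = n_shape

    def forward(self, image):
        params = self.encoder(image)
        shape, pose = params[:, :self.n_shape], params[:, self.n_shape:]
        return self.hand_model(shape, pose)


if __name__ == "__main__":
    model = HAMRSketch()
    verts, joints = model(torch.randn(1, 3, 224, 224))
    print(verts.shape, joints.shape)  # (1, 778, 3) and (1, 16, 3)
```

In the actual method the pose parameters drive the articulation of the hand model and the recovered mesh is further supervised through projected keypoints; the sketch above only shows the parameterization interface described in the paragraph.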

Remarks

We would like to thank the reviewers of our ICCV 2019 submission. The following is a selected review comment from Reviewer 1.

The authors mistakenly insist that their Equation 3 is right. Different regressors are applied differently ([16] uses 2 different ones), but there is only 1 correct way for each. The source code of [16] (cited in the rebuttal at L032) verifies my comment, please see:

Only recently did we find that our Equation 3 is indeed incorrect (as Reviewer 1 pointed out).

Citation

If you use this code for your research, please cite:

@inproceedings{zhang2019end,
  title     = {End-to-end Hand Mesh Recovery from a Monocular RGB Image},
  author    = {Zhang, Xiong and Li, Qiang and Mo, Hong and Zhang, Wenbo and Zheng, Wen},
  booktitle = {The International Conference on Computer Vision (ICCV)},
  year      = {2019}
}