GraspTTA

Hand-Object Contact Consistency Reasoning for Human Grasps Generation (ICCV 2021).

Project Page with Videos

[Teaser figure]

Demo

Quick Results Visualization

We provide generated grasps on the out-of-domain HO-3D dataset (saved at ./diverse_grasp/ho3d). You can visualize the results with:

python vis_diverse_grasp.py --obj_id=6
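If you want to poke at the saved grasp files directly first, the sketch below simply lists and loads them. The *.npy pattern and array layout are assumptions here, not something this repo documents; check what vis_diverse_grasp.py actually reads and adjust:

# Peek at the saved grasp files before visualizing them.
# NOTE: the *.npy glob pattern and the array layout are assumptions --
# inspect vis_diverse_grasp.py for the authoritative format.
import glob
import numpy as np

for path in sorted(glob.glob("./diverse_grasp/ho3d/*.npy"))[:3]:
    data = np.load(path, allow_pickle=True)
    print(path, type(data), getattr(data, "shape", None))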

The visualization will look like this:

[Grasp visualization figure]

Generate diverse grasps on the out-of-domain HO-3D dataset (the model is trained on the ObMan dataset)

You can also generate the grasps yourself:

  • First, download the pretrained weights, unzip them, and put them into checkpoints/.

  • Second, download the MANO model files (mano_v1_2.zip) from the MANO website. Unzip it and put mano/models/MANO_RIGHT.pkl into models/mano.

  • Third, download the HO-3D object models, unzip them, and put them into models/HO3D_Object_models.

  • The structure should look like this:

GraspTTA/
  checkpoints/
    model_affordance_best_full.pth
    model_cmap_best.pth
  models/
    HO3D_Object_models/
      003_cracker_box/
        points.xyz
        textured_simple.obj
        resampled.npy
       ......
    mano/
      MANO_RIGHT.pkl
  • Then, install V-HACD, which is used to simulate grasp displacement. Change this line to your own path.
  • Finally, run run.sh to install the remaining dependencies and start generating grasps (a quick path check is sketched after this list).
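Before launching run.sh, it can be worth confirming the downloads landed where the tree above expects them. A minimal check, with paths copied directly from that tree:

# Verify the checkpoint, MANO, and HO-3D object files are in place.
# Paths are taken from the directory tree above.
import os

required = [
    "checkpoints/model_affordance_best_full.pth",
    "checkpoints/model_cmap_best.pth",
    "models/mano/MANO_RIGHT.pkl",
    "models/HO3D_Object_models",
]
for path in required:
    print(("ok      " if os.path.exists(path) else "MISSING ") + path)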

Generate grasps on custom objects

  • First, resample 3000 points on the object surface as the input to the network. You can use this function (a resampling sketch follows this list).
  • Second, write your own dataloader and related code in gen_diverse_grasp_ho3d.py (a minimal stub is also sketched below).
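For the resampling step, here is a minimal sketch using trimesh; trimesh and the file names are assumptions on our part, and the function linked above may differ in detail:

# Sample 3000 points uniformly on the object surface and save them in
# the same resampled.npy convention used by the HO-3D object models above.
import numpy as np
import trimesh

mesh = trimesh.load("my_object.obj", force="mesh")  # hypothetical input file
points, _ = trimesh.sample.sample_surface(mesh, 3000)
np.save("resampled.npy", points.astype(np.float32))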
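For the dataloader step, a hypothetical stub: the class name and paths are placeholders, and the only requirement taken from the step above is that each item yields the 3000 resampled points.

# A minimal Dataset that yields a (3000, 3) point cloud per object.
# CustomObjectDataset is a hypothetical name; wire something like it
# into gen_diverse_grasp_ho3d.py in place of the HO-3D dataloader.
import numpy as np
import torch
from torch.utils.data import Dataset

class CustomObjectDataset(Dataset):
    def __init__(self, npy_paths):
        self.npy_paths = list(npy_paths)

    def __len__(self):
        return len(self.npy_paths)

    def __getitem__(self, idx):
        pts = np.load(self.npy_paths[idx])  # (3000, 3) resampled points
        return torch.from_numpy(pts).float()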

Training code

Coming soon.

Citation

@inproceedings{jiang2021graspTTA,
  title={Hand-Object Contact Consistency Reasoning for Human Grasps Generation},
  author={Jiang, Hanwen and Liu, Shaowei and Wang, Jiashun and Wang, Xiaolong},
  booktitle={Proceedings of the International Conference on Computer Vision},
  year={2021}
}

Acknowledgments

We thank:

  • The MANO code provided by Omid Taheri.
  • This implementation of PointNet.
  • This implementation of CVAE.