Self-Supervised-Human-Depth

Demo code and other resources for the paper "Self-Supervised Human Depth Estimation from Monocular Videos"

Primary language: Python | License: GNU General Public License v3.0 (GPL-3.0)

Self-Supervised Human Depth Estimation from Monocular Videos

Requirements

Linux Setup with virtualenv

virtualenv self_human
source self_human/bin/activate
pip install -U pip
deactivate
source self_human/bin/activate
pip install -r requirements.txt

Install TensorFlow

With GPU:

pip install tensorflow-gpu==1.14

Without GPU:

pip install tensorflow==1.14
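
After installing, a quick sanity check can confirm that the environment matches the TF 1.x API this code targets (a minimal sketch; the helper function below is not part of the repository):

```python
def tf1_compatible(version):
    """Return True when a TensorFlow version string matches the 1.x API."""
    return version.startswith("1.")

# The demo code uses the TF 1.x graph API, so a 2.x install will not work
# without modification.
try:
    import tensorflow as tf
    if tf1_compatible(tf.__version__):
        print("TensorFlow %s OK" % tf.__version__)
    else:
        print("Found TensorFlow %s; the demo expects 1.x." % tf.__version__)
except ImportError:
    print("TensorFlow is not installed; run one of the pip commands above.")
```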

Demo

  1. Download the pre-trained models
wget http://vault.sfu.ca/index.php/s/wZDseYefjvFPImZ/download && tar -xf download
mv finetuned_hmr_model ./tracknet
mv self_human_depth_model ./reconnet
  2. Predict base depth with the finetuned HMR model
cd ./tracknet
python generate_tracknet_depth.py
  3. Predict detail depth
cd ./../reconnet/predict
python demo_tang_2019.py
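
The two scripts above form a two-stage pipeline: the finetuned HMR model first predicts a coarse base depth of the body, which the second network then refines with fine detail. The sketch below is a conceptual illustration only; the array shapes, value ranges, and the additive combination are assumptions, not the scripts' actual interface:

```python
import numpy as np

# Hypothetical stand-ins for the two stages' outputs (shapes and values are
# illustrative assumptions, not the real scripts' formats).
base_depth = np.full((224, 224), 2.0, dtype=np.float32)  # coarse depth from the body model
detail = np.random.uniform(-0.02, 0.02, (224, 224)).astype(np.float32)  # fine surface residuals

# One plausible way the stages compose: detail refines the base prediction.
refined_depth = base_depth + detail
print(refined_depth.shape, refined_depth.dtype)  # (224, 224) float32
```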

Citation

If you use this code for your research, please consider citing:

@inproceedings{tan2020self,
  title={Self-Supervised Human Depth Estimation from Monocular Videos},
  author={Tan, Feitong and Zhu, Hao and Cui, Zhaopeng and Zhu, Siyu and Pollefeys, Marc and Tan, Ping},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={650--659},
  year={2020}
}