- Retarget motion from an RGB video to RaBit models using SMPL
sudo apt install unzip cmake
-
Download the codebase
git clone --recursive https://github.com/shubhMaheshwari/SMPL2RaBit-Blender.git
-
RaBit installation
Details
1. Clone the RaBit library:
git clone https://github.com/kulendu/RaBit.git
cd RaBit
-
Download the model data from the provided link into
<HOME_PATH>/RaBit
-
Unzip
unzip rabit_data.zip
- Python dependencies
pip install joblib torch openmesh
or
pip install -r requirements.txt
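After installing, a quick check that the dependencies resolved can save debugging time later. The sketch below (not part of RaBit; module names are assumed to match the pip packages above) reports which ones are importable:

```python
# Sketch: verify the RaBit Python dependencies resolved correctly.
import importlib.util

def check_deps(modules):
    """Map each module name to whether it can be imported."""
    return {m: importlib.util.find_spec(m) is not None for m in modules}

for name, ok in check_deps(["joblib", "torch", "openmesh"]).items():
    print(f"{name}: {'installed' if ok else 'missing'}")
```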
-
-
Download Blender
Note: Raise an issue if you are having trouble installing any of the above packages.
To setup VIBE, run the following code chunks:
NOTE: This is the fine-tuned version of VIBE, maintained by Kulendu.
- Clone the repo:
git clone https://github.com/kulendu/VIBE.git
- Install the requirements using virtualenv or conda:
# pip (virtualenv)
source scripts/install_pip.sh
# conda
source scripts/install_conda.sh
- To run VIBE on an arbitrary video, download the required data (i.e., the trained model and SMPL model parameters). To do this you can just run:
source scripts/prepare_data.sh
- Then, for running the demo:
# Run on a local video
python demo.py --vid_file sample_video.mp4 --output_folder output/ --display
# Run on a YouTube video
python demo.py --vid_file https://www.youtube.com/watch?v=wPZP8Bwxplo --output_folder output/ --display
Refer to VIBE/doc/demo.md for more details about the demo code.
Sample demo output with the --sideview flag:
- For running the demo on CPU:
# demo.py: load the checkpoint onto the CPU device
ckpt = torch.load(pretrained_file, map_location=torch.device('cpu'))

# lib/models/vibe.py: load the pretrained dict and checkpoints onto the CPU device
# line 96
pretrained_dict = torch.load(pretrained, map_location=torch.device('cpu'))['model']
# line 147
checkpoint = torch.load(pretrained, map_location=torch.device('cpu'))
# line 154
pretrained_dict = torch.load(pretrained, map_location=torch.device('cpu'))['model']
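All of these call sites make the same one-line change. As an alternative to editing each file by hand, a small helper (hypothetical, not part of VIBE) can rewrite simple `torch.load(...)` calls in a source string; it only handles calls without nested parentheses:

```python
import re

# Matches simple torch.load(...) calls with no nested parentheses.
_LOAD_CALL = re.compile(r"torch\.load\(([^()]*?)\)")

def force_cpu_loads(source):
    """Add map_location=torch.device('cpu') to torch.load calls lacking it."""
    def add_map_location(match):
        args = match.group(1)
        if "map_location" in args:
            return match.group(0)  # already device-aware; leave untouched
        return f"torch.load({args}, map_location=torch.device('cpu'))"
    return _LOAD_CALL.sub(add_map_location, source)

print(force_cpu_loads("ckpt = torch.load(pretrained_file)"))
```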
For further installation details and inference instructions, refer to the official VIBE documentation.
To set up DECO, follow these steps:
- First clone the repo, then create a conda env and install the necessary dependencies:
git clone https://github.com/sha2nkt/deco.git
cd deco
conda create -n deco python=3.9 -y
conda activate deco
pip install torch==1.13.0+cu117 torchvision==0.14.0+cu117 --extra-index-url https://download.pytorch.org/whl/cu117
This creates a conda environment with Python 3.9 and compatible dependencies.
- Install PyTorch3D from source:
git clone https://github.com/facebookresearch/pytorch3d.git
cd pytorch3d
pip install .
cd ..
- Install the other dependencies and download the required data:
pip install -r requirements.txt
sh fetch_data.sh
- Please download SMPL (version 1.1.0) and SMPL-X (v1.1) files into the data folder. Please rename the SMPL files to SMPL_FEMALE.pkl, SMPL_MALE.pkl, and SMPL_NEUTRAL.pkl. The directory structure for the data folder is elaborated below:
├── preprocess
├── smpl
│ ├── SMPL_FEMALE.pkl
│ ├── SMPL_MALE.pkl
│ ├── SMPL_NEUTRAL.pkl
│ ├── smpl_neutral_geodesic_dist.npy
│ ├── smpl_neutral_tpose.ply
│ ├── smplpix_vertex_colors.npy
├── smplx
│ ├── SMPLX_FEMALE.npz
│ ├── SMPLX_FEMALE.pkl
│ ├── SMPLX_MALE.npz
│ ├── SMPLX_MALE.pkl
│ ├── SMPLX_NEUTRAL.npz
│ ├── SMPLX_NEUTRAL.pkl
│ ├── smplx_neutral_tpose.ply
├── weights
│ ├── pose_hrnet_w32_256x192.pth
├── J_regressor_extra.npy
├── base_dataset.py
├── mixed_dataset.py
├── smpl_partSegmentation_mapping.pkl
├── smpl_vert_segmentation.json
└── smplx_vert_segmentation.json
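A quick sanity check of the layout above can be scripted. The helper below is a sketch (not part of DECO) that reports which of a few key model files are missing under the data directory:

```python
from pathlib import Path

# A sample of the files the data/ tree above must contain.
REQUIRED = [
    "smpl/SMPL_FEMALE.pkl",
    "smpl/SMPL_MALE.pkl",
    "smpl/SMPL_NEUTRAL.pkl",
    "smplx/SMPLX_NEUTRAL.npz",
    "weights/pose_hrnet_w32_256x192.pth",
]

def missing_files(data_dir, required=REQUIRED):
    """Return the required paths that do not exist under data_dir."""
    root = Path(data_dir)
    return [rel for rel in required if not (root / rel).exists()]

print(missing_files("data") or "data/ layout looks complete")
```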
NOTE: Sometimes running inference on CPU might cause a device-definition issue, which can be resolved by changing line 30 of inference.py to:
checkpoint = torch.load(args.model_path, map_location=torch.device('cpu'))
NOTE: Sometimes the Mapping import might cause a compatibility issue on newer Python versions (the alias was removed from the top-level collections module in Python 3.10); to resolve this, use the following import:
from collections.abc import Mapping
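A defensive way to write that import, if the same code must also run on very old interpreters, is a try/except shim (a sketch, not how DECO itself writes it):

```python
# Prefer the modern location; fall back for interpreters predating
# collections.abc (the old alias was removed in Python 3.10).
try:
    from collections.abc import Mapping
except ImportError:
    from collections import Mapping

# Any dict is a Mapping, so this confirms the import resolved.
print(isinstance({}, Mapping))
```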
python inference.py \
--img_src example_images \
--out_dir demo_out
This command runs DECO on the images stored in the example_images/ directory specified by --img_src, saving a rendering and a colored mesh in the demo_out/ directory.
For more in-depth training and testing directions, refer to the official DECO implementation.
Renders a video of the motion transferred from an SMPL file dataset to the RaBit model.
-
Installation
Details
* Command Line - Open a terminal using Ctrl-Shift-T or Cmd-Shift-T, then paste:
<blender-python-path> -m pip install meshio
-
Example in Linux
/home/shubh/blender-4.0.1-linux-x64/4.0/python/bin/python3.10 -m pip install meshio
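The exact path varies per install. A small helper (hypothetical, assuming the standard Blender directory layout of <version>/python/bin/python3*) can locate the bundled interpreter and run pip through it:

```python
import glob
import subprocess

def blender_pip_install(blender_root, package="meshio"):
    """Install `package` with Blender's bundled Python interpreter.

    blender_root: e.g. /home/user/blender-4.0.1-linux-x64 (hypothetical path).
    """
    candidates = sorted(glob.glob(f"{blender_root}/*/python/bin/python3*"))
    if not candidates:
        raise FileNotFoundError(f"no bundled Python found under {blender_root}")
    # Invoke pip as a module of the bundled interpreter, not as a script.
    return subprocess.run([candidates[0], "-m", "pip", "install", package],
                          check=True)
```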
-
Input details
<sample-filepath> is the path to the .pkl file containing the SMPL data:
- pose params - TxJx3, rotation of SMPL joints (24-joint version)
- body params - vec(10), body parameters of the SMPL mesh (10-dimension version)
- camera ext - Tx6 or None, 6D camera pose
- camera int - K or None, camera intrinsic params
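For illustration, a sample .pkl with this structure could be written as below. The key names are assumptions based on the field descriptions above, not necessarily the exact keys the renderer expects:

```python
import pickle
import numpy as np

T, J = 120, 24  # hypothetical clip: 120 frames, 24 SMPL joints

sample = {
    "pose_params": np.zeros((T, J, 3)),  # per-frame SMPL joint rotations
    "body_params": np.zeros(10),         # SMPL shape parameters
    "camera_ext": None,                  # Tx6 6D camera pose, or None
    "camera_int": None,                  # intrinsic matrix K, or None
}

with open("sample.pkl", "wb") as f:
    pickle.dump(sample, f)
```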
-
Command Line
blender --background --python rabit_render.py # For the complete dataset
or
python3 renderer.py <smpl-filepath> # For a specific file
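The two commands above can be wrapped in a small launcher. This is a sketch that assumes `blender` is on your PATH and the scripts sit in the working directory:

```python
import shutil
import subprocess

def render(smpl_filepath=None, blender="blender"):
    """Render the whole dataset via Blender, or one file via renderer.py."""
    if smpl_filepath is None:
        # Headless Blender run over the complete dataset.
        if shutil.which(blender) is None:
            raise FileNotFoundError(f"{blender} is not on PATH")
        cmd = [blender, "--background", "--python", "rabit_render.py"]
    else:
        # Render a single .pkl file with the standalone renderer.
        cmd = ["python3", "renderer.py", smpl_filepath]
    return subprocess.run(cmd, check=True)
```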