Paper | Project page | Video
This is the official PyTorch code repository for the paper "Point-Based Modeling of Human Clothing" (accepted to ICCV 2021).
- Prerequisites: your NVIDIA driver should support CUDA 10.2; Windows and macOS are not supported.
- Clone repo:
```bash
git clone https://github.com/izakharkin/point_based_clothing.git
cd point_based_clothing
git submodule init && git submodule update
```
- Docker setup:
- Install docker engine
- Install nvidia-docker
- Set nvidia as your default runtime for docker
- Make docker run without sudo: create a docker group and add the current user to it:
```bash
sudo groupadd docker
sudo usermod -aG docker $USER
```
- Reboot
- Download `10_nvidia.json` and place it in the `docker/` folder
- Create the docker image:
- Build it on your own: run the two commands
- Inside the docker container:
```bash
source activate pbc
```
- Download the SMPL neutral model from the SMPLify project page:
  - Register, go to the Downloads section, download `SMPLIFY_CODE_V2.ZIP`, and unpack it;
  - Move `smplify_public/code/models/basicModel_neutral_lbs_10_207_0_v1.0.0.pkl` to `data/smpl_models/SMPL_NEUTRAL.pkl`.
- Download the model checkpoints (~570 MB) from Google Drive and place them in the `checkpoints/` folder;
- Download the sample data we provide to check the appearance fitting (~480 MB) from Google Drive, unpack it, and place the `psp/` folder into the `samples/` folder (a quick path check is sketched after this list).
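Before running anything, it can help to verify that the downloaded assets landed where the code expects them. Below is a minimal sanity-check sketch (our own addition, not part of the repository) using the paths from the setup steps above:

```python
from pathlib import Path

# Hypothetical sanity check: verify that the assets from the setup
# steps above ended up in the expected locations.
expected = [
    Path("data/smpl_models/SMPL_NEUTRAL.pkl"),  # SMPL neutral model
    Path("checkpoints"),                        # model checkpoints (~570 MB)
    Path("samples/psp"),                        # sample data (~480 MB)
]
for path in expected:
    print(("ok     " if path.exists() else "MISSING"), path)
```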
We provide scripts for geometry fitting and inference, as well as for appearance fitting and inference.
To fit an outfit style code to a single image, one can run:
```bash
python fit_outfit_code.py --config_name=outfit_code/psp
```
The learned outfit codes are saved to `out/outfit_code/outfit_codes_<dset_name>.pkl` by default (a loading sketch follows the list below). The visualization of the process is in `out/outfit_code/vis_<dset_name>/`:
- Coarse fitting stage: four outfit codes are initialized randomly and optimized simultaneously.
- Fine fitting stage: the mean of the found outfit codes is optimized further to possibly improve the reconstruction.
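To inspect the fitted codes, one can load the pickle directly. A minimal sketch, assuming `<dset_name>` is `psp` as in the config above; the internal layout of the file (a dict here) is our assumption, not the repository's documented format:

```python
import pickle

# Hypothetical inspection of the saved outfit codes; the filename assumes
# <dset_name> = "psp", and the dict layout is an assumption.
with open("out/outfit_code/outfit_codes_psp.pkl", "rb") as f:
    outfit_codes = pickle.load(f)

print(type(outfit_codes))
if isinstance(outfit_codes, dict):
    for name, code in outfit_codes.items():
        print(name, getattr(code, "shape", code))
```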
Note: the `visibility_thr` hyperparameter in `fit_outfit_code.py` may affect the quality of the resulting point cloud (e.g. make it more sparse). Feel free to tune it if the result seems imperfect.
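For intuition, here is a toy illustration (not the repository's actual logic) of how a visibility threshold sparsifies a point cloud: points whose visibility score falls below the threshold are dropped, so a higher threshold keeps fewer points:

```python
import numpy as np

# Toy illustration only: filter a dummy point cloud by a per-point
# visibility score; raising visibility_thr yields a sparser cloud.
points = np.random.rand(1000, 3)    # dummy 3D points
visibility = np.random.rand(1000)   # dummy per-point visibility scores
visibility_thr = 0.5
kept = points[visibility > visibility_thr]
print(f"kept {len(kept)} of {len(points)} points")
```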
To further infer the fitted outfit style on the train subjects or on new ones, please see `infer_outfit_code.ipynb`. To run a jupyter notebook server from the docker container, run this inside it (make sure port 8087 is published when starting the container, e.g. with `-p 8087:8087`):
```bash
jupyter notebook --ip=0.0.0.0 --port=8087 --no-browser
```
To fit the clothing appearance to a sequence of frames, one can run:
```bash
python fit_appearance.py --config_name=appearance/psp_male-3-casual
```
The learned neural descriptors `ntex0_<epoch>.pth` and the neural rendering network weights `model0_<epoch>.pth` are saved to `out/appearance/<dset_name>/<subject_id>/<experiment_dir>/checkpoints/` by default. The visualization of the process is in `out/appearance/<dset_name>/<subject_id>/<experiment_dir>/visuals/`.
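To inspect these checkpoints outside the provided scripts, plain `torch.load` works. A minimal sketch; the `<...>` placeholders must be replaced with values from an actual run, and the assumption that the file holds a flat state dict is ours:

```python
import torch

# Hypothetical checkpoint inspection; fill in the <...> placeholders
# with real values from your run. The flat state-dict layout is an
# assumption, not the repository's documented format.
ckpt = "out/appearance/<dset_name>/<subject_id>/<experiment_dir>/checkpoints/model0_<epoch>.pth"
state = torch.load(ckpt, map_location="cpu")

if isinstance(state, dict):
    for name, tensor in state.items():
        print(name, getattr(tensor, "shape", type(tensor)))
```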
To further infer the fitted clothing point cloud and its appearance on the train subjects or on new ones, please see `infer_appearance.ipynb`. As above, to run a jupyter notebook server from the docker container, run this inside it:
```bash
jupyter notebook --ip=0.0.0.0 --port=8087 --no-browser
```
If you find our work helpful, please do not hesitate to cite us:
```
@InProceedings{Zakharkin_2021_ICCV,
    author    = {Zakharkin, Ilya and Mazur, Kirill and Grigorev, Artur and Lempitsky, Victor},
    title     = {Point-Based Modeling of Human Clothing},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {14718-14727}
}
```
Non-commercial use only.
We also thank the authors of the Cloth3D and PeopleSnapshot datasets.