Fit an SMPL body model (BM) to a given scan and view the optimization process in a plotly dashboard. Supported fitting modes:
- fit the body model parameters (shape, pose, translation, scale)
- fit the vertices to the scan

The code supports fitting a single scan or a whole dataset.
(See `smpl_fitting_dashboard.mp4` for a demo of the fitting dashboard.)
You can use a docker container to facilitate running the code. After cloning the repo, run in a terminal:

```bash
cd docker
sh build.sh
sh docker_run.sh CODE_PATH
```

adjusting `CODE_PATH` to the location of the `SMPL-Fitting` directory. This creates a `smpl-fitting-container` container. You can attach to it by running:

```bash
docker exec -it smpl-fitting-container /bin/bash
```
🚧 If you do not want to use docker, you can install `docker/requirements.txt` into your own environment. 🚧
Next, initialize the chamfer distance submodule by running:

```bash
git submodule update --init --recursive
```
Necessary files:
- Put the `SMPL_{GENDER}.pkl` (MALE, FEMALE and NEUTRAL) models into the `data/body_models/smpl` folder. You can obtain the files here.
- Put the `gmm_08.pkl` prior into the `data/prior` folder. You can obtain the files here.
- [OPTIONAL] We provide a demo for fitting the whole FAUST dataset. To do that, download the FAUST dataset here and put the `FAUST/training/scans` and `FAUST/training/registrations` folders into the `data/FAUST/training` folder in this repository. We already provide the landmarks for fitting in `data/FAUST/training/landmarks`.
The configuration files for the fitting are stored in the `configs` folder:
- `config.yaml` stores general variables and optimization-specific variables
- `loss_weight_configs.yaml` stores the loss weight strategy for the fitting process, defined as `iteration: dict of loss weights` pairs. For example, `4: {"data": 1, "smooth": 150, "landmark": 100}` means that at iteration 4 the data loss will be multiplied by 1, the smoothness loss by 150, etc. (see the sketch after this list)
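A minimal sketch of how such a schedule could be resolved at run time, assuming the weights of the most recent entry at or below the current iteration apply (the dictionary below is a hypothetical example, not the shipped defaults):

```python
# Hypothetical loss-weight schedule: iteration -> dict of loss weights.
schedule = {
    0: {"data": 1, "smooth": 300, "landmark": 100},
    4: {"data": 1, "smooth": 150, "landmark": 100},
}

def weights_at(iteration, schedule):
    """Return the weights of the latest schedule entry at or below iteration."""
    key = max(k for k in schedule if k <= iteration)
    return schedule[key]

assert weights_at(2, schedule)["smooth"] == 300  # before iteration 4
assert weights_at(7, schedule)["smooth"] == 150  # from iteration 4 onwards
```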
The general configuration variables are listed below. The specific variables for fitting the body model are given here, and the specific variables for fitting the vertices are given here.
General variables:
- `verbose` - (bool) print out losses and variable values at each step
- `default_dtype` - (torch.dtype) data type for the shape, pose, etc. tensors
- `pause_script_after_fitting` - (bool) pause the script after the fitting is done so you can visualize in peace
- `experiment_name` - (string) name your experiment
Visualization variables:
- `socket_type` - (string) type of socket, only zmq supported
- `socket_port` - (int) port for visualizations, localhost:socket_port
- `error_curves_logscale` - (bool) visualize loss curves in log scale
- `visualize` - (bool) whether or not to visualize the fitting
- `visualize_steps` - (list / range) iterations to visualize; can be defined as summed ranges and lists, e.g. `range(0, 500, 50)+[10,30,499]` (see the sketch after this list)
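The summed range-and-list syntax reads like plain Python; a minimal sketch of the iterations the example above selects (assuming the expression is evaluated into a list of iteration indices):

```python
# "range(0, 500, 50)+[10,30,499]" interpreted as iteration indices:
steps = list(range(0, 500, 50)) + [10, 30, 499]
print(sorted(set(steps)))
# [0, 10, 30, 50, 100, 150, 200, 250, 300, 350, 400, 450, 499]
```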
Path variables:
- `body_models_path` - (string) path to SMPL, SMPLX, .. body models
- `prior_path` - (string) path to the gmm prior loss .pkl file
- `save_path` - (string) path to save the results
Dataset variables for FAUST (these are dataset-specific for each dataset you implement):
- `data_dir` - (string) path to the FAUST dataset
- `load_gt` - (bool) load ground truth SMPL fitting or not
Optimize the body model parameters of shape and pose (including translation and scale) that best fit the given scan. Check notes on losses to see the losses used.
The optimization-specific configurations to fit a BM to a scan are set under `fit_body_model_optimization` in `config.yaml` with the following variables:
- `iterations` - (int) number of iterations
- `lr` - (float) learning rate
- `start_lr_decay_iteration` - (int) iteration at which to start the learning rate decay, calculated as `lr * (iterations - current iteration) / iterations` (see the sketch after this list)
- `body_model` - (string) which BM to use (smpl, smplx, ..). See Notes for supported models
- `use_landmarks` - (string / list) which body landmarks to use for fitting. Can be `All` to use all possible landmarks, `{BM}_INDEX_LANDMARKS` defined in landmarks.py, or a list of landmark names, e.g. `["Lt. 10th Rib", "Lt. Dactylion", ..]`, defined in landmarks.py
- `loss_weight_option` - the strategy for the loss weights, defined in `loss_weights_configs.yaml` under `fit_bm_loss_weight_strategy`

The default variables already set should work well for the fitting process.
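As a reading aid, a minimal sketch of the linear decay rule described above (function and variable names are illustrative, not taken from the code):

```python
def decayed_lr(lr, iteration, iterations, start_lr_decay_iteration):
    """Keep lr constant until the decay starts, then decay it
    linearly as lr * (iterations - iteration) / iterations."""
    if iteration < start_lr_decay_iteration:
        return lr
    return lr * (iterations - iteration) / iterations

# Example: lr=0.1 over 500 iterations, decay starting at iteration 250.
print(decayed_lr(0.1, 100, 500, 250))  # 0.1 (decay has not started yet)
print(decayed_lr(0.1, 400, 500, 250))  # 0.1 * 100 / 500 = 0.02
```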
Fit the BM to a single scan with:

```bash
python fit_body_model.py onto_scan --scan_path {path-to-scan} --landmark_path {path-to-landmarks}
```
Check Notes to see the supported scan and landmark file extensions.
Fit the BM to a whole dataset with:

```bash
python fit_body_model.py onto_dataset --dataset_name {dataset-to-fit}
```

The dataset you want to fit needs to be defined in `datasets.py` as a torch dataset. Check notes on datasets for more details. We already provide the FAUST dataset in `datasets.py`.
Optimize the vertices of a BM (or mesh) that best fit the given scan. Check notes on losses to see the losses used.
The optimization-specific configuration to fit the vertices to a scan is set under `fit_vertices_optimization` in `config.yaml` with the following variables:
- `max_iterations` - (int) maximal number of iterations
- `stop_at_loss_value` - (float) stop fitting if the loss falls below this threshold
- `stop_at_loss_difference` - (float) stop fitting if the difference between the loss at iteration `i-1` and iteration `i` is less than this threshold
- `use_landmarks` - (string / list) which body landmarks to use for fitting. Can be `nul` to not use landmarks, `All` to use all possible landmarks, `{BM}_INDEX_LANDMARKS` defined in landmarks.py, or a list of landmark names, e.g. `["Lt. 10th Rib", "Lt. Dactylion", ..]`, defined in landmarks.py
- `random_init_A` - (bool) random initialization of the vertices transformation
- `seed` - (float) seed for the random initialization of the vertices transformation
- `use_losses` - (list) losses to use. The complete list of losses is `["data","smooth","landmark","normal","partial_data"]`. Check notes on losses
- `loss_weight_option` - (string) the strategy for the loss weights, defined in `loss_weights_configs.yaml` under `fit_verts_loss_weight_strategy`
- `lr` - (float) learning rate
- `normal_threshold_angle` - (float) used if the normal loss is included in `use_losses`. Penalizes knn points only if the angle is lower than this threshold; otherwise the points are ignored (see the sketch after this list)
- `normal_threshold_distance` - (float) used if the normal loss is included in `use_losses`. Penalizes knn points only if the distance is lower than this threshold; otherwise the points are ignored
- `partial_data_threshold` - (float) used if the partial_data loss is included in `use_losses`. Chamfer distance from BM to scan for points that are closer than this threshold; otherwise the points are ignored
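A minimal numpy sketch of how the two normal-loss thresholds act as filters on knn correspondences (array names, shapes and values are illustrative assumptions, not the repository's implementation):

```python
import numpy as np

def normal_loss_mask(dists, angles_deg, threshold_distance, threshold_angle):
    """Keep a knn correspondence only if it is close enough and its
    normals are aligned within the angle threshold; ignore the rest."""
    return (dists < threshold_distance) & (angles_deg < threshold_angle)

# Four correspondences: distances (data units) and angles between
# the matched normals (degrees).
dists = np.array([0.005, 0.020, 0.003, 0.050])
angles = np.array([10.0, 15.0, 80.0, 5.0])
mask = normal_loss_mask(dists, angles, threshold_distance=0.01, threshold_angle=30.0)
print(mask)  # [ True False False False] -> only the first pair is penalized
```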
Fit the vertices to a single scan with:

```bash
python fit_vertices.py onto_scan --scan_path {path-to-scan} --landmark_path {path-to-landmarks} --start_from_previous_results {path-to-YYYY_MM_DD_HH_MM_SS-folder}
```

Check Notes to see the supported scan and landmark file extensions. You can either use `--start_from_previous_results` to fit the vertices of the BM previously fitted with the `fit_body_model.py` script (point it to the `YYYY_MM_DD_HH_MM_SS` results folder where the fitted `.npz` is located) or use `--start_from_body_model` to start fitting a BM with zero shape and pose to the scan.
Fit the vertices to a whole dataset with:

```bash
python fit_vertices.py onto_dataset --dataset_name {dataset-name} --start_from_previous_results {path-to-previously-fitted-bm-results}
```

You can either use `--start_from_previous_results` to fit the vertices of the BM previously fitted with the `fit_body_model.py` script (point it to the results folder where the fitted `.npz` files are located) or use `--start_from_body_model` to start fitting a BM with zero shape and pose to the scan. The dataset you want to fit needs to be defined in `datasets.py` as a torch dataset. Check notes on datasets for more details. We already provide the FAUST dataset in `datasets.py`.
If you already have body model parameters (pose, shape, translation and scale) given, but they are not ideal, you can refine them.
The optimization-specific configuration to refine the parameters is set under `refine_bm_fitting` in `config.yaml` with the following variables:
- `iterations` - (int) number of iterations
- `start_lr_decay_iteration` - (int) iteration at which to start the learning rate decay, calculated as `lr * (iterations - current iteration) / iterations`
- `body_model` - (string) which BM to use (smpl, smplx, ..). See Notes for supported models
- `use_landmarks` - (string / list) which body landmarks to use for fitting. Can be `nul` to not use landmarks, `All` to use all possible landmarks, `{BM}_INDEX_LANDMARKS` defined in landmarks.py, or a list of landmark names, e.g. `["Lt. 10th Rib", "Lt. Dactylion", ..]`, defined in landmarks.py
- `refine_params` - (list of strings) parameters you want to refine; can contain: pose, shape, transl, scale (see the sketch after this list)
- `use_losses` - (list) losses to use. The complete list of losses is `["data","smooth","landmark","normal","partial_data"]`. Check notes on losses
- `loss_weight_option` - (string) the strategy for the loss weights, defined in `loss_weights_configs.yaml` under `fit_verts_loss_weight_strategy`
- `prior_folder` - (string) path to the gmm prior loss .pkl file
- `num_gaussians` - (float) number of gaussians to use for the prior
- `lr` - (float) learning rate
- `normal_threshold_angle` - (float) used if the normal loss is included in `use_losses`. Penalizes knn points only if the angle is lower than this threshold; otherwise the points are ignored
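One way to read `refine_params`: only the listed parameters receive gradients while the rest stay frozen. A minimal PyTorch sketch of that idea (the tensor names and the optimizer choice are illustrative, not the repository's):

```python
import torch

# Hypothetical previously fitted parameters (SMPL dimensions).
params = {
    "pose": torch.zeros(1, 72),
    "shape": torch.zeros(1, 10),
    "transl": torch.zeros(1, 3),
    "scale": torch.ones(1),
}

refine_params = ["pose", "transl"]  # as listed in config.yaml

# Only the selected parameters are optimized; the others stay fixed.
for name, tensor in params.items():
    tensor.requires_grad_(name in refine_params)

optimizer = torch.optim.Adam(
    [t for n, t in params.items() if n in refine_params], lr=0.001
)
```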
Refine the fitting on a dataset with:

```bash
python refine_fitting.py onto_dataset --dataset_name {dataset-name}
```
Use the `evaluate_fitting.py` script to evaluate the fitting.
Evaluate the per vertex error (pve), which is the average euclidean distance between the given ground truth BM and the fitted BM:

```bash
python evaluate_fitting.py pve -F {path-to-results}
```
The pve unit is determined by the data. For the FAUST dataset the unit is given in meters.
You can use:
- `-V` - to visualize the pve for each example
- `--select_examples` - (list) to select a subset of examples to evaluate (only if evaluating fitting to a dataset)
- `--ground_truth_path` - (string) to set the path to the ground truth body model (only if evaluating fitting to a scan)
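A minimal sketch of the pve metric described above, assuming the fitted and ground truth BM are in vertex correspondence:

```python
import numpy as np

def per_vertex_error(verts_fit, verts_gt):
    """Average euclidean distance between corresponding vertices."""
    return np.linalg.norm(verts_fit - verts_gt, axis=1).mean()

# Example with two dummy 6890-vertex bodies (the SMPL vertex count),
# where every fitted vertex is offset by 1 cm along x.
verts_gt = np.zeros((6890, 3))
verts_fit = verts_gt + np.array([0.01, 0.0, 0.0])
print(per_vertex_error(verts_fit, verts_gt))  # 0.01 (meters for FAUST)
```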
Evaluate the (various definitions of) chamfer distance (CD) from the estimated body model to the scan with:
```bash
python evaluate_fitting.py chamfer -F {path-to-results}
```
where the different definitions are (see the sketch below):
- `Chamfer standard` is mean(dists_bm2scan) + mean(dists_scan2bm)
- `Chamfer bidirectional` is mean(concatenation(dists_bm2scan, dists_scan2bm))
- `Chamfer from body model to scan` is mean(dists_bm2scan)
- `Chamfer from scan to body model` is mean(dists_scan2bm)

and are averaged over the examples. The unit of these metrics is determined by the data. For the FAUST dataset the unit is given in meters.
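A minimal numpy sketch of the four definitions, given the two directional nearest-neighbor distance arrays (the distance values are illustrative):

```python
import numpy as np

# Nearest-neighbor distances in both directions.
dists_bm2scan = np.array([0.010, 0.020, 0.030])  # one per BM vertex
dists_scan2bm = np.array([0.015, 0.025])         # one per scan point

chamfer_standard = dists_bm2scan.mean() + dists_scan2bm.mean()
chamfer_bidirectional = np.concatenate([dists_bm2scan, dists_scan2bm]).mean()
chamfer_bm2scan = dists_bm2scan.mean()
chamfer_scan2bm = dists_scan2bm.mean()

print(chamfer_standard)       # 0.02 + 0.02 = 0.04
print(chamfer_bidirectional)  # mean over all five distances = 0.02
```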
You can use:
- `--select_examples` - (list) to select a subset of examples to evaluate (only if evaluating fitting to a dataset)
- `--device` - to set the gpu for running a faster chamfer distance (use `cuda:{gpu-index}`)
- `--scan_path` - (string) to set the path to the scan you are evaluating (only if evaluating fitting to a scan and not a whole dataset)
- Visualize SMPL landmarks with:

  ```bash
  python visualization.py visualize_smpl_landmarks
  ```

- Visualize scan landmarks with:

  ```bash
  python visualization.py visualize_scan_landmarks --scan_path {path-to-scan} --landmark_path {path-to-landmarks}
  ```

  Check the Notes section to find out the possible landmark definitions.

- Visualize the fitting with:

  ```bash
  python visualization.py visualize_fitting --scan_path {path-to-scan} --fit_paths {path-to-.npz-file}
  ```

  where the `.npz` is obtained with the fitting scripts.
The available landmarks for each BM are listed in `landmarks.py`.
The supported ways of loading landmarks for a scan are:
- `.txt` extension has two options:
  - `x y z landmark_name`
  - `landmark_index landmark_name`
- `.json` extension has two options:
  - `{landmark_name: [x,y,z]}`
  - `{landmark_name: landmark_index}`

where `x y z` indicate the coordinates of the landmark and `landmark_index` indicates the index of the scan vertex representing the landmark.
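A minimal sketch of a loader for the two `.json` options above (a hypothetical helper, not the repository's implementation), normalizing both variants to `{landmark_name: [x, y, z]}`:

```python
import json

def load_json_landmarks(landmark_path, scan_vertices):
    """Load landmarks stored either as coordinates or as vertex indices.

    scan_vertices: (N, 3) array of scan vertices, used to resolve the
    {landmark_name: landmark_index} variant.
    """
    with open(landmark_path, "r") as f:
        raw = json.load(f)
    landmarks = {}
    for name, value in raw.items():
        if isinstance(value, int):  # {landmark_name: landmark_index}
            landmarks[name] = list(scan_vertices[value])
        else:                       # {landmark_name: [x, y, z]}
            landmarks[name] = list(value)
    return landmarks
```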
Losses for fitting the BM:
- `data loss` - chamfer distance between the BM and the scan
- `landmark loss` - L2 distance between the BM landmarks and the scan landmarks
- `prior shape loss` - L2 norm of the BM shape parameters
- `prior pose loss` - gmm prior loss from [1]
Losses for fitting the vertices:
- `data loss` - directional chamfer distance from the BM to the scan
- `smoothness loss` - difference between the transformations of neighboring BM vertices (see the sketch after this list)
- `landmark loss` - L2 distance between the BM landmarks and the scan landmarks
- `normal loss` - L2 distance between points whose normals are within the angle threshold
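As a reading aid, a minimal sketch of the smoothness idea (the per-vertex transformation shape and the neighbor graph are illustrative assumptions, not the repository's implementation):

```python
import torch

def smoothness_loss(A, edges):
    """Penalize differing transformations of neighboring vertices.

    A: (N, 3, 4) per-vertex transformation matrices.
    edges: (E, 2) long tensor of neighboring vertex index pairs.
    """
    diff = A[edges[:, 0]] - A[edges[:, 1]]
    return (diff ** 2).sum(dim=(1, 2)).mean()

A = torch.randn(100, 3, 4, requires_grad=True)
edges = torch.randint(0, 100, (300, 2))
print(smoothness_loss(A, edges))
```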
The dataset you want to fit needs to be defined in `datasets.py` as a torch dataset with the following variables (a dataset sketch is given after the lists below):
- `name` - (string) name of the scan
- `vertices` - (np.ndarray) vertices of the scan
- `faces` - (np.ndarray) faces of the scan (set to `None` if there are no faces)
- `landmarks` - (dict) of (landmark_name: landmark_coords) pairs, where landmark_coords is a list of 3 floats
If you additionally want to evaluate the per vertex error (pve) after fitting (check the Evaluate section above), which compares the mean L2 distance between the fitted BM and the ground truth BM, you need to provide the ground truth BM as:
- `vertices_gt` - (np.ndarray) ground truth vertices of the BM
- `faces_gt` - (np.ndarray) ground truth faces of the BM
If you want to refine parameters that have already been fitted, the dataset needs to additionally return:
- `pose` - (torch.tensor) fitted pose parameters of dim 1 x 72
- `shape` - (torch.tensor) fitted shape parameters of dim 1 x 10
- `trans` - (torch.tensor) fitted translation of dim 1 x 3
- `gender` - (str) gender of the body model
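A minimal sketch of such a dataset (a hypothetical example with random geometry; only the returned keys follow the conventions above):

```python
import numpy as np
from torch.utils.data import Dataset

class MyScanDataset(Dataset):
    """Hypothetical torch dataset returning the variables listed above."""

    def __init__(self, scan_names):
        self.scan_names = scan_names

    def __len__(self):
        return len(self.scan_names)

    def __getitem__(self, index):
        # In practice, load the scan from disk; random data keeps this runnable.
        vertices = np.random.rand(1000, 3).astype(np.float32)
        return {
            "name": self.scan_names[index],
            "vertices": vertices,
            "faces": None,  # set to None if the scan has no faces
            "landmarks": {"Lt. 10th Rib": vertices[0].tolist()},
        }
```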
We provide the FAUST, CAESAR and 4DHumanOutfit dataset implementations in `datasets.py`. You can obtain the datasets from here, here and here.
Currently, we support the SMPL body model. If you want to add another BM, you can follow these steps:
- Add the body models into `data/body_models`
- Implement the body model in `body_models.py`
- Implement the body model parameters in `body_parameters.py`
- Implement the body landmarks in `landmarks.py`
Fit body model onto scan:

```bash
python fit_body_model.py onto_scan --scan_path data/demo/tr_scan_000.ply --landmark_path data/demo/tr_scan_000_landmarks.json
```

Fit body model onto dataset (🚧 you need to provide the FAUST dataset files as mentioned above 🚧):

```bash
python fit_body_model.py onto_dataset -D FAUST
```

Fit the vertices of the previously fitted BM onto the scan even further:

```bash
python fit_vertices.py onto_scan --scan_path data/FAUST/training/scans/tr_scan_000.ply --landmark_path data/FAUST/training/landmarks/tr_scan_000_landmarks.json --start_from_previous_results data/demo
```

Fit the vertices of the previously fitted BM onto the FAUST dataset even further:

```bash
python fit_vertices.py onto_dataset --dataset_name FAUST --start_from_previous_results data/demo
```

🚧 We provide the fitted paths only for scans tr_scan_000 and tr_scan_001, so the rest of the scans are going to be skipped. 🚧

Evaluate the pve of the fitted scan for the two provided fittings:

```bash
python evaluate_fitting.py pve -F data/demo -G data/demo
```

Evaluate the chamfer distance of the fitted scan for the two provided fittings:

```bash
python evaluate_fitting.py chamfer -F data/demo
```

Visualize SMPL landmarks:

```bash
python visualization.py visualize_smpl_landmarks
```

Visualize FAUST scan landmarks:

```bash
python visualization.py visualize_scan_landmarks --scan_path data/demo/tr_scan_000.ply --landmark_path data/demo/tr_scan_000_landmarks.json
```

Visualize the fitted vertices of the BM onto the FAUST scan:

```bash
python visualization.py visualize_fitting --scan_path data/demo/tr_scan_000.ply --fit_paths data/demo/tr_scan_000.npz
```
Please cite our work and leave a star ⭐ if you find the repository useful.
```bibtex
@misc{SMPL-Fitting,
  author = {Bojani\'{c}, D.},
  title = {SMPL-Fitting},
  year = {2024},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/DavidBoja/SMPL-Fitting}},
}
```
Todo:
- Implement the SMPL-X body model
References:

[1] Keep it SMPL: Automatic Estimation of 3D Human Pose and Shape from a Single Image; Bogo et al., ECCV 2016.