PyTorch implementation of our CVPR 2020 paper:
Fast-MVSNet: Sparse-to-Dense Multi-View Stereo With Learned Propagation and Gauss-Newton Refinement
Zehao Yu, Shenghua Gao
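For context, the "Gauss-Newton refinement" in the title refers to iteratively improving an initial estimate by linearizing the residual around the current solution. The snippet below is a generic, self-contained sketch of a Gauss-Newton step on a toy curve-fitting problem (it is *not* the paper's depth-refinement implementation, which operates on feature residuals across views):

```python
import numpy as np

def gauss_newton_step(residual_fn, jacobian_fn, x):
    """One Gauss-Newton update: x <- x - (J^T J)^{-1} J^T r."""
    r = residual_fn(x)
    J = jacobian_fn(x)
    delta = np.linalg.solve(J.T @ J, J.T @ r)
    return x - delta

# Toy example: recover (a, b) = (2.0, 0.5) from y = a * exp(b * t).
t = np.linspace(0.0, 1.0, 10)
y = 2.0 * np.exp(0.5 * t)

def residual(p):
    a, b = p
    return a * np.exp(b * t) - y

def jacobian(p):
    a, b = p
    # Partial derivatives w.r.t. a and b, stacked as columns.
    return np.stack([np.exp(b * t), a * t * np.exp(b * t)], axis=1)

p = np.array([1.0, 0.0])  # rough initial guess
for _ in range(10):
    p = gauss_newton_step(residual, jacobian, p)
# p converges toward [2.0, 0.5]
```

In Fast-MVSNet the same idea is applied per pixel: the sparse-to-dense depth prediction provides the initial guess, and a few Gauss-Newton iterations refine it.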
```bash
git clone git@github.com:svip-lab/FastMVSNet.git
cd FastMVSNet
pip install -r requirements.txt
```
- Download the preprocessed DTU training data from MVSNet and unzip it to `data/dtu`.
- Train the network:

  ```bash
  python fastmvsnet/train.py --cfg configs/dtu.yaml
  ```

  You can change the batch size in the configuration file to match your GPU memory.
- Download the rectified images from the DTU benchmark and unzip them to `data/dtu/Eval`.
- Test with the pretrained model:

  ```bash
  python fastmvsnet/test.py --cfg configs/dtu.yaml TEST.WEIGHT outputs/pretrained.pth
  ```
Depth fusion with `tools/depthfusion.py` is required to obtain the complete point cloud; please refer to MVSNet for more details.

```bash
python tools/depthfusion.py -f dtu -n flow2
```
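As noted in the training step, the batch size lives in the configuration file. The fragment below is purely illustrative: the key names (`TRAIN`, `BATCH_SIZE`) are an assumption based on PointMVSNet-style YAML configs, so check `configs/dtu.yaml` for the actual schema before editing:

```yaml
# Illustrative fragment of configs/dtu.yaml; key names may differ
# in the actual file shipped with this repository.
TRAIN:
  BATCH_SIZE: 4   # reduce this if you run out of GPU memory
```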
Most of the code is borrowed from PointMVSNet. We thank Rui Chen for his great work and repos.
Please cite our paper if you find this work useful:
```bibtex
@inproceedings{Yu_2020_fastmvsnet,
  author    = {Zehao Yu and Shenghua Gao},
  title     = {Fast-MVSNet: Sparse-to-Dense Multi-View Stereo With Learned Propagation and Gauss-Newton Refinement},
  booktitle = {CVPR},
  year      = {2020}
}
```