RayMVSNet

RayMVSNet: Learning Ray-based 1D Implicit Fields for Accurate Multi-View Stereo

Junhua Xi* Yifei Shi* Yijie Wang Yulan Guo Kai Xu†

National University of Defense Technology


How to use

Environment

  • CUDA 11.2
  • Python 3.8.5
  • torch 1.7.1+cu110
  • Install the remaining dependencies: pip install -r requirements.txt
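
To confirm that the CUDA build of PyTorch is actually picked up before training, a minimal sanity check (not part of this repository) is:

# check_env.py -- minimal environment sanity check (not part of this repository)
import torch

print("torch version:", torch.__version__)           # expect 1.7.1+cu110
print("CUDA available:", torch.cuda.is_available())  # expect True on a CUDA 11.2 machine
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))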

Data

  • Download the preprocessed DTU data and unzip it to data/dtu.
./dtu
    ├── Rectified
    │   ├── scan1_train
    │   ├── scan2_train
    │   └── ...
    ├── Cameras
    │   ├── pair.txt
    │   └── train
    │       ├── 00000000_cam.txt
    │       ├── 00000001_cam.txt
    │       └── ...
    └── Depths
        ├── scan1_train
        ├── scan2_train
        └── ...
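
For reference, Cameras/pair.txt follows the standard MVSNet convention: the first line gives the total number of views, then each reference view contributes one line with its id and one line listing the number of source views followed by (view id, matching score) pairs. A minimal parser sketch under that assumption (not part of this repository):

# read_pairs.py -- sketch of parsing Cameras/pair.txt (assumes the standard MVSNet format)
def read_pair_file(path):
    pairs = {}
    with open(path) as f:
        num_views = int(f.readline())
        for _ in range(num_views):
            ref_id = int(f.readline())
            tokens = f.readline().split()
            num_src = int(tokens[0])
            # remaining tokens alternate: source view id, matching score
            pairs[ref_id] = [int(tokens[1 + 2 * i]) for i in range(num_src)]
    return pairs

print(read_pair_file("data/dtu/Cameras/pair.txt"))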

Training

  • Train the network
  • python train.py

Testing

  • python test.py

You can test with the pretrained model: ./model.ckpt
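
To verify that the checkpoint file loads before running test.py, you can inspect it with plain PyTorch (what exactly is stored inside model.ckpt, e.g. a bare state_dict or a full training checkpoint, is an assumption here):

# inspect_ckpt.py -- load the pretrained checkpoint and list its top-level keys
import torch

ckpt = torch.load("./model.ckpt", map_location="cpu")
print(type(ckpt))
if isinstance(ckpt, dict):
    print(list(ckpt.keys()))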

Depth Fusion

The testing process generates a per-view depth map. We then apply depth fusion (fusion.py) to merge the per-view depth maps into a complete point cloud. Please refer to MVSNet for more details; a sketch of the geometric consistency check used in this kind of fusion is given after the command below.

  • python fusion.py
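
The core of MVSNet-style fusion is a geometric consistency check: each reference depth is reprojected into a source view, the source depth is read back, and the pixel is kept only if it lands close to where it started. The sketch below illustrates that check for one reference/source pair; the function names, thresholds (1 pixel, 1% relative depth) and camera convention (x ~ K [R|t] X with depth along the camera z-axis) are assumptions for illustration, not the exact code in fusion.py.

# fusion_sketch.py -- geometric consistency check between a reference and a source view
# (illustrative sketch only; thresholds and camera conventions are assumptions)
import numpy as np

def reproject(depth_ref, K_ref, E_ref, depth_src, K_src, E_src):
    """Project reference pixels into the source view, read the source depth there,
    and project back into the reference view. E_* are 4x4 world-to-camera extrinsics."""
    h, w = depth_ref.shape
    x, y = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([x, y, np.ones_like(x)], axis=0).reshape(3, -1).astype(np.float64)

    # reference pixels -> 3D points in the reference camera -> world
    cam_ref = np.linalg.inv(K_ref) @ (pix * depth_ref.reshape(1, -1))
    world = np.linalg.inv(E_ref) @ np.vstack([cam_ref, np.ones((1, h * w))])

    # world points -> source camera -> source pixels
    cam_src = (E_src @ world)[:3]
    pix_src = K_src @ cam_src
    x_src = (pix_src[0] / pix_src[2]).reshape(h, w)
    y_src = (pix_src[1] / pix_src[2]).reshape(h, w)

    # sample the source depth map at the projected locations (nearest neighbour)
    xs = np.clip(np.round(x_src).astype(int), 0, w - 1)
    ys = np.clip(np.round(y_src).astype(int), 0, h - 1)
    d_src = depth_src[ys, xs].reshape(1, -1)

    # source pixels -> world -> back into the reference view
    cam_src2 = np.linalg.inv(K_src) @ (np.stack([x_src.ravel(), y_src.ravel(),
                                                 np.ones(h * w)]) * d_src)
    world2 = np.linalg.inv(E_src) @ np.vstack([cam_src2, np.ones((1, h * w))])
    cam_ref2 = (E_ref @ world2)[:3]
    pix_ref2 = K_ref @ cam_ref2
    x_back = (pix_ref2[0] / pix_ref2[2]).reshape(h, w)
    y_back = (pix_ref2[1] / pix_ref2[2]).reshape(h, w)
    depth_back = cam_ref2[2].reshape(h, w)
    return x_back, y_back, depth_back

def consistency_mask(depth_ref, K_ref, E_ref, depth_src, K_src, E_src,
                     pix_thresh=1.0, depth_thresh=0.01):
    h, w = depth_ref.shape
    x, y = np.meshgrid(np.arange(w), np.arange(h))
    x_back, y_back, depth_back = reproject(depth_ref, K_ref, E_ref,
                                           depth_src, K_src, E_src)
    pix_err = np.sqrt((x_back - x) ** 2 + (y_back - y) ** 2)
    depth_err = np.abs(depth_back - depth_ref) / np.maximum(depth_ref, 1e-8)
    return (pix_err < pix_thresh) & (depth_err < depth_thresh)

In MVSNet-style fusion, a depth value is typically kept only if it passes this check in a minimum number of source views, and the surviving pixels are then back-projected and merged into the final point cloud.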

Evaluation

  • Download the official evaluation tool from the DTU benchmark
  • We provide our pre-computed point clouds for your convenience