This is the original implementation for the Computer Graphics Forum (2022) paper:
"Deep Flow Rendering: View Synthesis via Layer-aware Reflection Flow",
by Pinxuan Dai & Ning Xie, UESTC.
- The open-access paper is available at the Eurographics Digital Library.
- The oral recording from EGSR 2022 and the supplementary video are here.
This code depends on TensorFlow 1.15.0 and nvdiffrast 0.3.0; install them and the other required packages via:
conda env create -f requirements.yml
conda activate dfr
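For orientation, a minimal requirements.yml might look like the sketch below. This is an assumption for illustration, not the repository's actual file; the real pins (and the nvdiffrast install route, which is often from source rather than pip) are defined there.

```yaml
# Hypothetical sketch of requirements.yml -- the repo's own file is authoritative.
name: dfr
channels:
  - defaults
dependencies:
  - python=3.7          # assumed: a Python version compatible with TF 1.15
  - pip
  - pip:
      - tensorflow-gpu==1.15.0   # version stated in this README
      - nvdiffrast==0.3.0        # version stated in this README; may need a source install
      - numpy
      - imageio                  # assumed helper for image I/O
```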
- Clone this repository and prepare test data as below.
- Specify the data path, model name, and training configuration directly in code/main.py.
- Run:
cd dfr/code
python main.py
- Download example data used in the paper from here.
- Move it to the dfr base dir and unzip it:
mv path_to_download/dfr_data.zip ./
unzip dfr_data.zip
- Run COLMAP:
    - Sparse reconstruction for camera poses (use the pinhole camera model and txt output) to obtain cameras.txt and images.txt.
    - Dense reconstruction for the mesh (manual parameter tuning may be needed to get a fine mesh); convert the result to .obj format.
- Use Blender (or an equivalent tool such as xatlas) to generate a texture atlas for the reconstructed mesh.obj.
- Arrange your custom data dir custom_scene in the same way as the example data:
dfr/
|—— code/...
|—— result/...
|—— data/
| |—— custom_scene/
| | |—— images/
| | | |—— img_0.jpg
| | | |—— ...
| | | |—— img_n.jpg
| | |—— cameras.txt
| | |—— images.txt
| | |—— mesh.obj
| |—— ...
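To sanity-check the COLMAP txt export before training, the intrinsics can be read with a few lines of Python. This is an illustrative sketch, not part of the repository's code; it assumes COLMAP's documented cameras.txt layout, where each non-comment line is CAMERA_ID, MODEL, WIDTH, HEIGHT, PARAMS[], and the PINHOLE model's params are fx, fy, cx, cy.

```python
def read_pinhole_cameras(path):
    """Parse a COLMAP cameras.txt (txt export) into {camera_id: intrinsics}.

    Only the PINHOLE model is handled, matching the data-prep step above.
    """
    cameras = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines and '#' comment lines in the COLMAP header.
            if not line or line.startswith("#"):
                continue
            parts = line.split()
            cam_id, model = int(parts[0]), parts[1]
            if model != "PINHOLE":
                raise ValueError(f"expected PINHOLE model, got {model}")
            width, height = int(parts[2]), int(parts[3])
            fx, fy, cx, cy = map(float, parts[4:8])
            cameras[cam_id] = {"width": width, "height": height,
                               "fx": fx, "fy": fy, "cx": cx, "cy": cy}
    return cameras
```

A quick check that focal lengths and the principal point look plausible (e.g. cx ≈ width/2) catches most export mistakes early.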
@article{DaiDFR_CGF2022,
author = {Dai, Pinxuan and Xie, Ning},
title = {Deep Flow Rendering: View Synthesis via Layer-aware Reflection Flow},
journal = {Computer Graphics Forum},
volume = {41},
number = {4},
pages = {139-148},
doi = {10.1111/cgf.14593},
year = {2022}
}