StereoNet: Guided Hierarchical Refinement for Real-Time Edge-Aware Depth Prediction (ECCV 2018), implemented in PyTorch
If you want to discuss StereoNet, please do not hesitate to contact me. My email:
The model currently runs at 25-60 FPS on 540x960 images. With end-to-end training on the SceneFlow dataset it reaches the following EPE_all results:
- 1.87 with the 16X multi model
- 1.95 with the 16X single model
- 1.32 with the 8X single model
- 1.48 with the 8X multi model

The side outputs and a prediction example are shown below.
- Reference [1]:
If you find our work useful in your research, please consider citing:
```
@inproceedings{khamis2018stereonet,
  title={Stereonet: Guided hierarchical refinement for real-time edge-aware depth prediction},
  author={Khamis, Sameh and Fanello, Sean and Rhemann, Christoph and Kowdle, Adarsh and Valentin, Julien and Izadi, Shahram},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany},
  pages={8--14},
  year={2018}
}
```
I implemented the real-time stereo model from the StereoNet paper in PyTorch. It reaches 30 FPS at the highest-accuracy setting and 60 FPS at a lower-accuracy setting.
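For readers unfamiliar with the paper, the rough idea is: a Siamese tower downsamples both images by 8X or 16X, a low-resolution cost volume is filtered and turned into disparity with a soft arg-min, and a color-guided network refines the upsampled disparity (once in the "single" models, at every scale in the "multi" models). The sketch below only illustrates that pipeline under my reading of the paper; it is not the code in this repository, and all class and function names (`StereoNetSketch`, `build_cost_volume`, `Refinement`, ...) are made up for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_bn_relu(in_ch, out_ch, k=3, stride=1, dilation=1):
    pad = dilation * (k // 2)
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, k, stride, pad, dilation=dilation, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.2, inplace=True),
    )


class FeatureExtractor(nn.Module):
    """Siamese feature tower: num_down stride-2 convs give the 8X (3) or 16X (4) factor."""

    def __init__(self, num_down=3, channels=32):
        super().__init__()
        layers = [conv_bn_relu(3, channels, k=5, stride=2)]
        layers += [conv_bn_relu(channels, channels, k=5, stride=2) for _ in range(num_down - 1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)


def build_cost_volume(left_feat, right_feat, max_disp):
    """Feature-difference cost volume at low resolution, shape (B, C, D, H, W)."""
    b, c, h, w = left_feat.shape
    cost = left_feat.new_zeros(b, c, max_disp, h, w)
    for d in range(max_disp):
        if d == 0:
            cost[:, :, d] = left_feat - right_feat
        else:
            cost[:, :, d, :, d:] = left_feat[:, :, :, d:] - right_feat[:, :, :, :-d]
    return cost


class Refinement(nn.Module):
    """Edge-aware refinement: a disparity residual predicted from color image + disparity."""

    def __init__(self, channels=32):
        super().__init__()
        self.head = conv_bn_relu(4, channels)  # 3 RGB channels + 1 disparity channel
        self.body = nn.Sequential(*[conv_bn_relu(channels, channels, dilation=d)
                                    for d in (1, 2, 4, 8, 1, 1)])
        self.tail = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, disp, image):
        residual = self.tail(self.body(self.head(torch.cat([image, disp], dim=1))))
        return F.relu(disp + residual)  # keep disparity non-negative


class StereoNetSketch(nn.Module):
    """Coarse disparity from a low-resolution cost volume, then one guided refinement pass."""

    def __init__(self, num_down=3, max_disp=192, channels=32):
        super().__init__()
        self.scale = 2 ** num_down              # 8 for the 8X model, 16 for the 16X model
        self.max_disp_low = max_disp // self.scale
        self.feature = FeatureExtractor(num_down, channels)
        filters = [nn.Sequential(nn.Conv3d(channels, channels, 3, padding=1),
                                 nn.BatchNorm3d(channels),
                                 nn.LeakyReLU(0.2, inplace=True)) for _ in range(4)]
        self.cost_filter = nn.Sequential(*filters, nn.Conv3d(channels, 1, 3, padding=1))
        self.refine = Refinement(channels)

    def forward(self, left, right):
        cost = build_cost_volume(self.feature(left), self.feature(right), self.max_disp_low)
        cost = self.cost_filter(cost).squeeze(1)            # (B, D, H/scale, W/scale)
        prob = F.softmax(-cost, dim=1)                      # soft arg-min over disparities
        disps = torch.arange(self.max_disp_low, device=cost.device, dtype=cost.dtype)
        disp_low = (prob * disps.view(1, -1, 1, 1)).sum(dim=1, keepdim=True)
        # "Single" refinement: upsample straight to full resolution, refine once.
        disp_full = F.interpolate(disp_low * self.scale, size=left.shape[-2:],
                                  mode="bilinear", align_corners=False)
        return self.refine(disp_full, left)


if __name__ == "__main__":
    net = StereoNetSketch(num_down=3)                       # roughly an "8X single" variant
    left, right = torch.rand(1, 3, 128, 256), torch.rand(1, 3, 128, 256)
    print(net(left, right).shape)                           # torch.Size([1, 1, 128, 256])
```

A 16X variant would use `num_down=4`, and a "multi" variant would apply the refinement block repeatedly while upsampling the disparity 2X at a time instead of jumping straight to full resolution.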
Method | EPE_all (SceneFlow) | EPE_all (KITTI 2012) | EPE_all (KITTI 2015)
--- | --- | --- | ---
Ours (16X multi) | 1.32 | |
Reference [1] | 1.525 | |
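EPE_all in the table above is the end-point error: the mean absolute difference between predicted and ground-truth disparity over all evaluated pixels. A minimal sketch of how it is typically computed (the function name and the exact validity mask are my own choices, not necessarily what this repo uses):

```python
import torch


def epe_all(pred_disp, gt_disp, max_disp=192):
    """End-point error averaged over all valid pixels: mean |pred - gt| in disparity units."""
    # Validity mask: finite ground truth inside the disparity range. SceneFlow ground
    # truth is dense; KITTI ground truth is sparse, so invalid pixels must be skipped.
    valid = torch.isfinite(gt_disp) & (gt_disp > 0) & (gt_disp < max_disp)
    return (pred_disp[valid] - gt_disp[valid]).abs().mean()
```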
- Our code is released under MIT License (see LICENSE file for details).
- Python 3.6
- PyTorch 0.4
- Run `main8Xmulti.py` (the 8X multi model): `python main8Xmulti.py`
- Fine-tune the model to beat the original paper's performance.
- Optimize the inference speed.
- More updates coming soon.
- Thanks to Sameh Khamis for his help.