
MoCha-Stereo

[CVPR2024] The official implementation of "MoCha-Stereo: Motif Channel Attention Network for Stereo Matching".


MoCha-Stereo: Motif Channel Attention Network for Stereo Matching
Ziyang Chen†, Wei Long†, He Yao†, Yongjun Zhang✱, Bingshu Wang, Yongbin Qin, Jia Wu
CVPR 2024
Correspondence: ziyangchen2000@gmail.com; zyj6667@126.com
We are grateful to Prof. Wenting Li, Prof. Huamin Qu, and the anonymous reviewers for their comments on this work.

Demo video: Demo.mp4

If you find this work helpful, please cite:
@inproceedings{chen2024mocha,
  title={MoCha-Stereo: Motif Channel Attention Network for Stereo Matching},
  author={Chen, Ziyang and Long, Wei and Yao, He and Zhang, Yongjun and Wang, Bingshu and Qin, Yongbin and Wu, Jia},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2024}
}

or

@article{chen2024mocha,
  title={MoCha-Stereo: Motif Channel Attention Network for Stereo Matching},
  author={Chen, Ziyang and Long, Wei and Yao, He and Zhang, Yongjun and Wang, Bingshu and Qin, Yongbin and Wu, Jia},
  journal={arXiv preprint arXiv:2404.06842},
  year={2024}
}

Todo List

  • [CVPR2024] V1 version
    • Preprint paper
    • Code of MoCha-Stereo (1. MoCha-Stereo will be released in this repository in July 2024. 2. For researchers at Guizhou University, the code is already available in our internal repository, so there is no need to contact me for it; simply request access to that repository.)
    • Code of MoCha-MVS

The code and checkpoints are still being prepared and will be released once they are ready!

Acknowledgements

This project borrows code from IGEV, DLNR, RAFT-Stereo, and GwcNet. We thank the original authors for their excellent work!