Code repository for our paper entitled "Accurate RGB-D Salient Object Detection via Collaborative Learning", accepted at ECCV 2020 (poster).
- PyTorch 1.0.0+
- torchvision
- PIL (Pillow)
- numpy
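A minimal sketch (not part of the repository) to verify that the dependencies above are importable, assuming the standard import names:

```python
# Quick environment check for the dependencies listed above.
import torch
import torchvision
import PIL        # provided by the Pillow package
import numpy

print("PyTorch       :", torch.__version__)        # expected 1.0.0+
print("torchvision   :", torchvision.__version__)
print("Pillow (PIL)  :", PIL.__version__)           # requires Pillow >= 5.2
print("NumPy         :", numpy.__version__)
print("CUDA available:", torch.cuda.is_available())
```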
git clone https://github.com/jiwei0921/CoNet.git
cd CoNet/
- test
Our test datasets link and checkpoint link use extraction code 12yn. You need to set the dataset path and checkpoint name correctly.
Set '--phase' to test in demo.py.
Set '--param' to True in demo.py.
python demo.py
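If '--phase' and '--param' are exposed as command-line options of demo.py (an assumption; otherwise edit their default values inside demo.py), the test run would be equivalent to:

python demo.py --phase test --param True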
- train
Our training dataset link uses extraction code 203g. You need to set the dataset path and checkpoint name correctly.
Set '--phase' to train in demo.py.
Set '--param' to True or False in demo.py.
Note: True loads a checkpoint before training (resume); False starts training without loading a checkpoint.
python demo.py
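Assuming again that the options are command-line flags of demo.py (otherwise edit the defaults inside the script), training from scratch or resuming would be equivalent to:

python demo.py --phase train --param False
python demo.py --phase train --param True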
We provide the saliency maps (extraction code: qrs2) of our CoNet on 8 datasets (DUT-RGBD, STEREO, NJUD, LFSD, RGBD135, NLPR, SSD, SIP), as well as on 2 extended datasets (NJU2K and STERE1000) following CPFP_CVPR19.
- Note: For evaluation, all results are computed with this ready-to-use toolbox.
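The toolbox above is the reference for all reported numbers; the following is only a minimal sketch of how MAE and adaptive-threshold F-measure are typically computed from one predicted saliency map and its ground truth, with file names chosen purely for illustration:

```python
import numpy as np
from PIL import Image

def mae(pred, gt):
    """Mean absolute error between a saliency map and ground truth, both in [0, 1]."""
    return np.abs(pred - gt).mean()

def f_measure(pred, gt, beta2=0.3):
    """F-measure with adaptive threshold (twice the prediction mean), beta^2 = 0.3."""
    thr = min(2 * pred.mean(), 1.0)
    binary = pred >= thr
    tp = (binary & (gt > 0.5)).sum()
    precision = tp / (binary.sum() + 1e-8)
    recall = tp / ((gt > 0.5).sum() + 1e-8)
    return (1 + beta2) * precision * recall / (beta2 * precision + recall + 1e-8)

# Hypothetical file names for illustration only.
pred = np.asarray(Image.open("pred.png").convert("L"), dtype=np.float32) / 255.0
gt   = np.asarray(Image.open("gt.png").convert("L"), dtype=np.float32) / 255.0
print("MAE:", mae(pred, gt), "F-measure:", f_measure(pred, gt))
```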
All common RGB-D saliency datasets we collected are shared in a ready-to-use manner.
- The web link is here.
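A minimal sketch of loading one RGB-D sample with the dependencies listed above; the directory layout and file names below are assumptions for illustration, not the datasets' actual structure:

```python
import numpy as np
from PIL import Image

# Hypothetical paths; adjust to the layout of the downloaded dataset.
rgb   = Image.open("DUT-RGBD/test_images/0001.jpg").convert("RGB")
depth = Image.open("DUT-RGBD/test_depth/0001.png").convert("L")
gt    = Image.open("DUT-RGBD/test_masks/0001.png").convert("L")

rgb_arr   = np.asarray(rgb, dtype=np.float32) / 255.0    # H x W x 3
depth_arr = np.asarray(depth, dtype=np.float32) / 255.0  # H x W
gt_arr    = (np.asarray(gt) > 127).astype(np.float32)    # binary ground truth

print(rgb_arr.shape, depth_arr.shape, gt_arr.shape)
```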
@InProceedings{Wei_2020_ECCV,
  author    = {Ji, Wei and Li, Jingjing and Zhang, Miao and Piao, Yongri and Lu, Huchuan},
  title     = {Accurate {RGB-D} Salient Object Detection via Collaborative Learning},
  booktitle = {European Conference on Computer Vision},
  year      = {2020}
}
- For more information about CoNet, please refer to the Manuscript.
- Thanks to the related authors for providing code or results, in particular Deng-ping Fan, Hao Chen, and Chun-biao Zhu.
More details can be found on Wei Ji's GitHub.
If you have any questions, please contact us at weiji.dlut@gmail.com.