Unified-modal Salient Object Detection via Adaptive Prompt Learning

UniSOD

This repository provides the source code and results for the paper entitled "Unified-modal Salient Object Detection via Adaptive Prompt Learning".

arXiv version: https://arxiv.org/abs/2311.16835.

Thank you for your attention.

Citing our work

If you find our work helpful, please cite:

@article{wang2023unified,
  title={Unified-modal Salient Object Detection via Adaptive Prompt Learning},
  author={Wang, Kunpeng and Li, Chenglong and Tu, Zhengzheng and Luo, Bin},
  journal={arXiv preprint arXiv:2311.16835},
  year={2023}
}

Overview

Framework

Baseline SOD framework

RGB SOD Performance

RGB-D SOD Performance

RGB-T SOD Performance

Predictions

The predicted RGB, RGB-D, and RGB-T saliency maps can be found here. [baidu pan fetch code: vpvt]

Pretrained Models

The pretrained parameters of our models can be found here. [baidu pan fetch code: o8yx]

Usage

Requirement

  1. Download the datasets for training and testing from here. [baidu pan fetch code: 2sfr]
  2. Download the pretrained parameters of the backbone from here. [baidu pan fetch code: mad3]
  3. Organize the dataset directories for pre-training and fine-tuning.
  4. Create the directories for experiment and parameter files.
  5. Use conda to install torch 1.12.0 and torchvision 0.13.0.
  6. Install the remaining packages: pip install -r requirements.txt.
  7. Set the paths of all datasets in ./options.py.

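Step 7 suggests the dataset paths are exposed as command-line options. A minimal sketch of what ./options.py might look like; the flag names and default paths below are illustrative assumptions, not the repository's actual contents:

```python
# Hypothetical sketch of ./options.py -- flag names and default paths
# are assumptions for illustration, not the repository's actual code.
import argparse

parser = argparse.ArgumentParser(description="UniSOD options (sketch)")
# Paths to the training/testing datasets downloaded in step 1.
parser.add_argument("--rgb_root", type=str, default="./datasets/RGB/")
parser.add_argument("--rgbd_root", type=str, default="./datasets/RGB-D/")
parser.add_argument("--rgbt_root", type=str, default="./datasets/RGB-T/")
# Path to the pretrained backbone parameters from step 2.
parser.add_argument("--backbone_path", type=str,
                    default="./pretrained/backbone.pth")

# Parse defaults here for demonstration; a real script would parse sys.argv.
opt = parser.parse_args([])
```

Point each `--*_root` flag at the corresponding dataset directory from step 3 before pre-training or fine-tuning.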
Pre-training

python -m torch.distributed.launch --nproc_per_node=2 --master_port=2024 train_parallel.py
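The launcher above spawns one process per GPU (--nproc_per_node=2) and hands each process its rank through environment variables and a --local_rank argument. A hedged sketch of the per-process setup a script like train_parallel.py typically performs; the repository's actual code may differ:

```python
# Sketch of the per-process configuration that torch.distributed.launch
# provides; the actual contents of train_parallel.py may differ.
import os

def get_dist_config():
    """Read the rank/world-size variables set by the launcher.

    With --nproc_per_node=2, two processes start with LOCAL_RANK 0 and 1,
    WORLD_SIZE is 2, and MASTER_PORT matches the --master_port value.
    """
    return {
        "local_rank": int(os.environ.get("LOCAL_RANK", 0)),
        "world_size": int(os.environ.get("WORLD_SIZE", 1)),
        "master_port": os.environ.get("MASTER_PORT", "2024"),
    }

# In the real script, each process would then call
# torch.distributed.init_process_group(backend="nccl") and pin its GPU
# with torch.cuda.set_device(cfg["local_rank"]).
cfg = get_dist_config()
```

Change --nproc_per_node to match the number of available GPUs, and pick a free --master_port if 2024 is taken.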

Fine-tuning

python -m torch.distributed.launch --nproc_per_node=2 --master_port=2024 train_parallel_multi.py

Test

python test_produce_maps.py
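test_produce_maps.py writes the predicted saliency maps to disk. A common post-processing recipe for this, sketched here under assumptions rather than taken from the repository's exact code, is to apply a sigmoid to the network logits, min-max normalize, and save 8-bit grayscale PNGs:

```python
# Hedged sketch of typical saliency-map post-processing; the function
# name and exact steps are assumptions, not the repository's code.
import numpy as np

def logits_to_saliency_map(logits):
    """Convert raw network logits to an 8-bit saliency map."""
    prob = 1.0 / (1.0 + np.exp(-logits))  # sigmoid -> values in (0, 1)
    # Min-max normalize so the map spans the full grayscale range.
    prob = (prob - prob.min()) / (prob.max() - prob.min() + 1e-8)
    return np.round(prob * 255).astype(np.uint8)  # 8-bit grayscale

# A real script would then save each map, e.g. with
# PIL: Image.fromarray(smap).save("name.png").
smap = logits_to_saliency_map(np.array([[-4.0, 0.0], [2.0, 6.0]]))
```

The resulting maps can then be compared against ground truth with standard SOD metrics (S-measure, F-measure, E-measure, MAE).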

Acknowledgement

The implementation of this project is based on the following link.

Contact

If you have any questions, please contact us (kp.wang@foxmail.com).