I can find you! Boundary-guided Separated Attention Network for Camouflaged Object Detection (AAAI-22)
This repo is an official implementation of BSA-Net, which has been published at the 36th AAAI Conference on Artificial Intelligence (AAAI-22).
Authors: Hongwei Zhu, Peng Li, Haoran Xie, Xuefeng Yan, Dong Liang, Dapeng Chen, Mingqiang Wei and Jing Qin
The main pipeline of our BSA-Net is shown below:
BSA-Net simulates how humans detect camouflaged objects. We adopt Res2Net as the backbone encoder. After capturing rich context information with the Residual Multi-scale Feature Extractor (RMFE), we design the Separated Attention (SEA) module to distinguish the subtle differences between foreground and background. The Boundary Guider (BG) module is included in the SEA module to strengthen the model's ability to understand object boundaries. Finally, we employ the Shuffle Attention (SHA) block and a feature fusion module to refine our COD result.
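The BSA-Net implementation itself is not reproduced here, but the channel-shuffle step at the heart of a Shuffle Attention block can be sketched as follows (a minimal NumPy illustration; the group count of 2 is an arbitrary choice, not the value used in the repo):

```python
import numpy as np

def channel_shuffle(x, groups):
    """Shuffle channels across groups: (B, C, H, W) -> (B, C, H, W).

    Reshape to (B, groups, C // groups, H, W), swap the two group axes,
    and flatten back, so channels from different groups interleave.
    """
    b, c, h, w = x.shape
    assert c % groups == 0, "channel count must be divisible by groups"
    x = x.reshape(b, groups, c // groups, h, w)
    x = x.transpose(0, 2, 1, 3, 4)
    return x.reshape(b, c, h, w)

# Example: 4 channels split into 2 groups -> channel order [0, 2, 1, 3]
x = np.arange(4).reshape(1, 4, 1, 1)
print(channel_shuffle(x, 2).reshape(-1))  # [0 2 1 3]
```

After the shuffle, a convolution that follows sees channels from every group, which is how the block mixes information cheaply across channel groups.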
Here are the experimental results.
Please refer to `requirements.txt` and install the necessary packages: `pip install -r requirements.txt`.
Once finished, please move the train/test datasets into `./Dataset/`.
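A possible layout of `./Dataset/` (inferred from the paths referenced later in this README; the `TrainDataset` and image folder names are assumptions, only `TestDataset/<name>/GT` is stated explicitly):

```
Dataset/
├── TrainDataset/        # training images and ground-truth masks (assumed name)
└── TestDataset/
    └── <dataset name>/  # one folder per benchmark dataset
        ├── Imgs/        # input images (assumed folder name)
        └── GT/          # ground-truth masks, used by the evaluation step
```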
After you download the train dataset, just run `MyTrain.py`. You can change the arguments to customize your training settings. The trained model will be saved in `./Snapshot/`.
- BSA-Net uses Res2Net as its backbone, so please download Res2Net's pretrained model here and put it into `./Src/Backbone`.
- The BSA-Net pretrained model and prediction maps on 3 benchmark datasets can be found here. Please put the pretrained model (`final_35.pth`) into `./Snapshot/`.
- After you download all the pretrained models, just run `MyTest.py` to generate the final prediction maps: set your trained model directory (`--model_path`) and the save directory for the inferred masks (`--test_save`). (Better not to change `--test_save`, since its default path is used by the evaluation step.)
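As a sketch, an inference launch might look like the following. The `--model_path` value matches the pretrained checkpoint mentioned above; the `--test_save` default shown here is a placeholder, not necessarily the repo's default:

```python
import subprocess
from pathlib import Path

def build_test_cmd(model_path="./Snapshot/final_35.pth",
                   test_save="./res/"):  # placeholder save dir, not the repo default
    """Assemble the MyTest.py command line with the two documented flags."""
    return ["python", "MyTest.py",
            "--model_path", model_path,
            "--test_save", test_save]

if __name__ == "__main__":
    cmd = build_test_cmd()
    # Launch only when the checkpoint is actually present.
    if Path(cmd[3]).exists():
        subprocess.run(cmd, check=True)
```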
We provide a complete and fair one-key evaluation toolbox for benchmarking within a uniform standard. Please refer to the following links for more information:
- Matlab version: https://github.com/DengPingFan/CODToolbox
- Python version: https://github.com/lartpang/PySODMetrics
Copy the testing GT maps (`./Dataset/TestDataset/*/GT`) into `./evaluation/GT`, then run `./evaluation/evaluation.py`. When the evaluation finishes, the metric results will be saved to `./result.txt`.
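The copy step can be scripted; here is a minimal sketch assuming the `Dataset/TestDataset/<name>/GT` layout above. It keeps one subfolder per dataset so same-named masks do not collide (whether the evaluation script expects this per-dataset structure is an assumption, so check against your `./evaluation/GT` layout first):

```python
import shutil
from pathlib import Path

def collect_gt(src_root="./Dataset/TestDataset", dst_root="./evaluation/GT"):
    """Copy each test dataset's GT folder into the evaluation directory.

    Creates one subfolder per dataset under dst_root and returns the
    list of destination folders that were written.
    """
    copied = []
    for gt_dir in sorted(Path(src_root).glob("*/GT")):
        dst = Path(dst_root) / gt_dir.parent.name
        shutil.copytree(gt_dir, dst, dirs_exist_ok=True)
        copied.append(dst)
    return copied

if __name__ == "__main__":
    collect_gt()
    # then run: python ./evaluation/evaluation.py
```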
If you have any questions, feel free to e-mail me at zhuhongwei1999@gmail.com.