This is the SALient Object Detection (SALOD) benchmark (paper link), published in Pattern Recognition.
We have re-implemented over 20 SOD methods using the same settings, including input size, data loader and evaluation metrics (thanks to Metrics). Other networks are still being debugged; contributions to these models are welcome.
Our new unsupervised A2S-v2 method was accepted by CVPR 2023!
You can contact me through the official email: zhouhj26@mail2.sysu.edu.cn
- MENet (CVPR 2023) is available, but is not guaranteed to achieve SOTA performance. You may need to set up its loss function and training strategy.
- New loss_factory formatting style. See base/loss.py for details.
Our SALOD dataset can be downloaded from: SALOD.
Original SOD datasets from: SOD, including DUTS-TR, DUTS-TE, ECSSD, SOD, PASCAL-S, HKU-IS and DUT-OMRON.
COD datasets from: COD, including COD-TR (COD-TR + CAMO-TR), COD-TE, CAMO-TE, NC4K.
All models are trained with the following settings:
- `--strategy=sche_f3net` for the latest training strategy, as in the original F3Net, LDF, PFSNet and CTDNet;
- `--multi` for multi-scale training;
- `--data_aug` for random cropping;
- 1 * BCE_loss + 1 * IOU_loss as the loss function.
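The default objective above can be sketched in pure Python. This is a minimal illustration over flat lists of probabilities, not the tensor-based implementation in base/loss.py; the function name and smoothing constants are our own.

```python
import math

def bce_iou_loss(pred, target, w_bce=1.0, w_iou=1.0):
    """Combined 1 * BCE + 1 * IoU loss over flat lists in [0, 1].

    A hedged, pure-Python sketch of the default objective; the repo
    computes this on tensors with its own loss factory.
    """
    eps = 1e-7
    # Binary cross-entropy, averaged over pixels.
    bce = -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
               for p, t in zip(pred, target)) / len(pred)
    # Soft IoU loss with +1 smoothing to avoid division by zero.
    inter = sum(p * t for p, t in zip(pred, target))
    union = sum(p + t for p, t in zip(pred, target)) - inter
    iou = 1.0 - (inter + 1.0) / (union + 1.0)
    return w_bce * bce + w_iou * iou
```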
Following the above settings, we list the benchmark results here.
All weights can be downloaded from Baidu Disk [pqn6].
Note that FPS is tested on our device with batch_size=1; you should re-test all methods and report the scores measured on your own device.
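A throughput measurement like the one behind the FPS column can be sketched as a simple timing loop. This is a generic illustration (the `infer` callable stands in for one batch_size=1 forward pass); the repo's test_fps.py may differ in details.

```python
import time

def measure_fps(infer, n_warmup=5, n_runs=20):
    """Rough frames-per-second measurement for a zero-argument
    inference call; a sketch, not the benchmark's exact procedure."""
    for _ in range(n_warmup):
        infer()  # warm-up runs, excluded from timing
    start = time.perf_counter()
    for _ in range(n_runs):
        infer()
    return n_runs / (time.perf_counter() - start)
```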
Methods | #Para. | GMACs | FPS | max-F | ave-F | Fbw | MAE | SM | EM |
---|---|---|---|---|---|---|---|---|---|
DHSNet | 24.2 | 13.8 | 49.2 | .909 | .871 | .863 | .037 | .905 | .925 |
Amulet | 79.8 | 1093.8 | 35.1 | .897 | .856 | .846 | .042 | .896 | .919 |
NLDF | 41.1 | 115.1 | 30.5 | .908 | .868 | .859 | .038 | .903 | .930 |
SRM | 61.2 | 20.2 | 34.3 | .893 | .851 | .841 | .042 | .892 | .925 |
DSS | 134.3 | 35.3 | 27.3 | .906 | .868 | .859 | .038 | .901 | .933 |
PiCANet | 106.1 | 36.9 | 14.8 | .900 | .864 | .852 | .043 | .896 | .924 |
BASNet | 95.5 | 47.2 | 32.8 | .911 | .872 | .863 | .040 | .905 | .925 |
CPD | 47.9 | 14.7 | 22.7 | .913 | .884 | .874 | .034 | .911 | .938 |
PoolNet | 68.3 | 66.9 | 33.9 | .916 | .882 | .875 | .035 | .911 | .938 |
EGNet | 111.7 | 222.8 | 10.2 | .913 | .884 | .875 | .036 | .908 | .936 |
SCRN | 25.2 | 12.5 | 19.3 | .916 | .881 | .872 | .035 | .910 | .935 |
F3Net | 25.5 | 13.6 | 39.2 | .911 | .878 | .869 | .036 | .908 | .932 |
GCPA | 67.1 | 54.3 | 37.8 | .914 | .884 | .874 | .036 | .910 | .937 |
ITSD | 25.7 | 19.6 | 29.4 | .918 | .880 | .873 | .037 | .910 | .932 |
MINet | 162.4 | 87 | 23.5 | .912 | .874 | .866 | .038 | .908 | .931 |
LDF | 25.2 | 12.8 | 37.5 | .913 | .879 | .873 | .035 | .909 | .938 |
GateNet | 128.6 | 96 | 25.9 | .912 | .882 | .870 | .037 | .906 | .934 |
PFSNet | 31.2 | 37.5 | 21.7 | .912 | .879 | .865 | .038 | .904 | .931 |
CTDNet | 24.6 | 10.2 | 64.2 | .918 | .887 | .880 | .033 | .913 | .940 |
EDN | 35.1 | 16.1 | 27.4 | .916 | .883 | .875 | .036 | .910 | .934 |
Here, 'orig.' denotes the results of the official saliency predictions, while 'ours' denotes the re-implemented results in our benchmark. The weights of these models can be downloaded from: Baidu Disk (cs6u).
Method | Src | PASCAL-S max-F | MAE | ECSSD max-F | MAE | HKU-IS max-F | MAE | DUTS-TE max-F | MAE | DUT-OMRON max-F | MAE |
---|---|---|---|---|---|---|---|---|---|---|---|
DHSNet | orig. | .820 | .091 | .906 | .059 | .890 | .053 | .808 | .067 | -- | -- |
DHSNet | ours | .870 | .063 | .944 | .036 | .935 | .031 | .887 | .040 | .805 | .062 |
Amulet | orig. | .828 | .100 | .915 | .059 | .897 | .051 | .778 | .085 | .743 | .098 |
Amulet | ours | .871 | .066 | .936 | .045 | .928 | .036 | .871 | .044 | .791 | .065 |
NLDF | orig. | .822 | .098 | .905 | .063 | .902 | .048 | .813 | .065 | .753 | .080 |
NLDF | ours | .872 | .064 | .937 | .042 | .927 | .035 | .882 | .044 | .796 | .068 |
SRM | orig. | .838 | .084 | .917 | .054 | .906 | .046 | .826 | .059 | .769 | .069 |
SRM | ours | .854 | .069 | .922 | .046 | .904 | .043 | .846 | .049 | .774 | .068 |
DSS | orig. | .831 | .093 | .921 | .052 | .900 | .050 | .826 | .065 | .769 | .063 |
DSS | ours | .870 | .063 | .937 | .039 | .924 | .035 | .878 | .040 | .800 | .059 |
PiCANet | orig. | .857 | .076 | .935 | .046 | .918 | .043 | .860 | .051 | .803 | .065 |
PiCANet | ours | .867 | .074 | .938 | .044 | .927 | .036 | .879 | .046 | .798 | .077 |
BASNet | orig. | .854 | .076 | .942 | .037 | .928 | .032 | .859 | .048 | .805 | .056 |
BASNet | ours | .884 | .057 | .950 | .034 | .943 | .028 | .907 | .033 | .833 | .052 |
CPD | orig. | .859 | .071 | .939 | .037 | .925 | .034 | .865 | .043 | .797 | .056 |
CPD | ours | .883 | .057 | .946 | .034 | .934 | .031 | .892 | .037 | .815 | .059 |
PoolNet | orig. | .863 | .075 | .944 | .039 | .931 | .034 | .880 | .040 | .808 | .056 |
PoolNet | ours | .877 | .062 | .946 | .035 | .936 | .030 | .895 | .037 | .812 | .063 |
EGNet | orig. | .865 | .074 | .947 | .037 | .934 | .032 | .889 | .039 | .815 | .053 |
EGNet | ours | .880 | .060 | .948 | .032 | .937 | .030 | .892 | .037 | .812 | .058 |
SCRN | orig. | .877 | .063 | .950 | .037 | .934 | .034 | .888 | .040 | .811 | .056 |
SCRN | ours | .871 | .063 | .947 | .037 | .934 | .032 | .895 | .039 | .813 | .063 |
F3Net | orig. | .872 | .061 | .945 | .033 | .937 | .028 | .891 | .035 | .813 | .053 |
F3Net | ours | .884 | .057 | .950 | .033 | .937 | .030 | .903 | .034 | .819 | .053 |
GCPA | orig. | .869 | .062 | .948 | .035 | .938 | .031 | .888 | .038 | .812 | .056 |
GCPA | ours | .885 | .056 | .951 | .031 | .941 | .028 | .905 | .034 | .820 | .055 |
ITSD | orig. | .872 | .065 | .946 | .035 | .935 | .030 | .885 | .040 | .821 | .059 |
ITSD | ours | .880 | .067 | .950 | .036 | .939 | .030 | .895 | .040 | .817 | .072 |
MINet | orig. | .867 | .064 | .947 | .033 | .935 | .029 | .884 | .037 | .810 | .056 |
MINet | ours | .874 | .064 | .947 | .036 | .937 | .031 | .893 | .039 | .816 | .061 |
LDF | orig. | .874 | .060 | .950 | .034 | .939 | .028 | .898 | .034 | .820 | .052 |
LDF | ours | .883 | .058 | .951 | .032 | .940 | .029 | .903 | .035 | .818 | .058 |
GateNet | orig. | .869 | .067 | .945 | .040 | .933 | .033 | .888 | .040 | .818 | .055 |
GateNet | ours | .867 | .066 | .944 | .037 | .934 | .031 | .891 | .039 | .803 | .062 |
PFSNet | orig. | .875 | .063 | .952 | .031 | .943 | .026 | .896 | .036 | .823 | .055 |
PFSNet | ours | .883 | .060 | .950 | .034 | .939 | .030 | .899 | .037 | .816 | .063 |
CTDNet | orig. | .878 | .061 | .950 | .032 | .941 | .027 | .897 | .034 | .826 | .052 |
CTDNet | ours | .885 | .057 | .950 | .031 | .940 | .028 | .904 | .033 | .821 | .055 |
EDN | orig. | .880 | .062 | .951 | .032 | .941 | .026 | .895 | .035 | .828 | .049 |
EDN | ours | .891 | .058 | .953 | .031 | .945 | .027 | .910 | .032 | .837 | .055 |
Methods | Publish. | Paper | Src Code |
---|---|---|---|
MENet | CVPR 2023 | openaccess | PyTorch |
EDN | TIP 2022 | TIP | PyTorch |
CTDNet | ACM MM 2021 | ACM | PyTorch |
PFSNet | AAAI 2021 | AAAI.org | PyTorch |
GateNet | ECCV 2020 | Springer | PyTorch |
LDF | CVPR 2020 | openaccess | PyTorch |
MINet | CVPR 2020 | openaccess | PyTorch |
ITSD | CVPR 2020 | openaccess | PyTorch |
GCPA | AAAI 2020 | aaai.org | PyTorch |
F3Net | AAAI 2020 | aaai.org | PyTorch |
SCRN | ICCV 2019 | openaccess | PyTorch |
EGNet | ICCV 2019 | openaccess | PyTorch |
PoolNet | CVPR 2019 | openaccess | PyTorch |
CPD | CVPR 2019 | openaccess | PyTorch |
BASNet | CVPR 2019 | openaccess | PyTorch |
DSS | TPAMI 2019 | IEEE/ArXiv | PyTorch |
PiCANet | CVPR 2018 | openaccess | PyTorch |
SRM | ICCV 2017 | openaccess | PyTorch |
Amulet | ICCV 2017 | openaccess | PyTorch |
NLDF | CVPR 2017 | openaccess | PyTorch/TF |
DHSNet | CVPR 2016 | openaccess | PyTorch |
Tuning | Publish. | Paper | Src Code |
---|---|---|---|
*PAGE | CVPR 2019 | openaccess | TF |
*PFA | CVPR 2019 | openaccess | PyTorch |
*PFPN | AAAI 2020 | aaai.org | PyTorch |
```shell
# model_name: lower-cased method name, e.g. poolnet, egnet, gcpa, dhsnet or minet.
python3 train.py model_name --gpus=0 --trset=[DUTS-TR,SALOD,COD-TR]
python3 test.py model_name --gpus=0 --weight=path_to_weight [--save]
python3 test_fps.py model_name --gpus=0

# To evaluate generated maps:
python3 eval.py --pre_path=path_to_maps
```
We supply a Loss Factory for an easier way to tune the loss functions. Losses are defined by `--loss=loss1,loss2,loss3`, where each loss is formatted as `name_type#weight`. `name` is one of the keys in `loss_dict`, `type` is usually one of (`sal`, `edge`), and `weight` is a float number.
Here are some examples:

```shell
# For saliency prediction:
# loss = 1 * bce_loss + 1 * dice_loss
python train.py basnet --loss=bce_sal,dice

# For saliency prediction:
# loss = 0.3 * bce_loss + 0.7 * ssim_loss
python train.py basnet --loss=bce_sal#0.3,ssim_sal#0.7

# For saliency prediction:
# loss = 0.3 * bce_loss + 0.1 * ssim_loss + 0.5 * iou_loss
# For edge prediction:
# loss = 0.2 * bce_loss
python train.py basnet --loss=bce#0.3,ssim#0.1,iou#0.5,bce_edge#0.2
```
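The spec format above can be illustrated with a small parser. This is a hypothetical re-implementation for clarity only; the actual parsing lives in base/loss.py and may differ in its defaults and error handling.

```python
def parse_loss_spec(spec):
    """Parse a --loss string like 'bce_sal#0.3,ssim_sal#0.7' into
    (name, type, weight) triples.

    A sketch of the assumed grammar: items are comma-separated,
    '#weight' defaults to 1.0, and '_type' defaults to 'sal'.
    """
    losses = []
    for item in spec.split(','):
        if '#' in item:
            key, weight = item.split('#')
            weight = float(weight)
        else:
            key, weight = item, 1.0  # weight defaults to 1
        if '_' in key:
            name, ltype = key.split('_', 1)
        else:
            name, ltype = key, 'sal'  # type defaults to saliency
        losses.append((name, ltype, weight))
    return losses
```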
2023/06/27:
- MENet (CVPR 2023) is available, but needs more time to achieve SOTA performance.
2023/03/17:
- Re-organize the structure of our code.
2022/12/07:
- Update conventional SOD results and weights.
2022/10/17:
- Use the `timm` library for more backbones.
- Code update.
- Benchmark results update.
2022/08/09:
- Remove loss.py for each method. The loss functions are defined in config.py now.
- Weights are uploaded to Baidu Disk.
2022/06/14:
- New model: EDN (TIP 2022).
2022/05/25:
- In previous versions, images with large salient regions received ave-F scores of 0, so our ave-F scores were lower than those reported in the original papers. We fixed this bug by adding a round operation before evaluation.
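The failure mode can be illustrated with a simplified, hypothetical adaptive-threshold F-score (this is not the benchmark's actual metric code): when a prediction map has a high mean, the adaptive threshold 2 * mean saturates at 1 and can exceed every pixel value, producing an empty binary map and a score of 0; rounding the map first avoids this.

```python
def ave_f(pred, gt, beta2=0.3, use_round=True):
    """Adaptive-threshold F-score on flat lists in [0, 1].

    A hedged sketch of the fix described above; the benchmark's real
    evaluation code differs in detail.
    """
    if use_round:
        pred = [round(p) for p in pred]  # quantize before thresholding
    thr = min(2 * sum(pred) / len(pred), 1.0)  # adaptive threshold, capped at 1
    binary = [1 if p >= thr else 0 for p in pred]
    tp = sum(b * g for b, g in zip(binary, gt))
    prec = tp / (sum(binary) + 1e-8)
    rec = tp / (sum(gt) + 1e-8)
    return (1 + beta2) * prec * rec / (beta2 * prec + rec + 1e-8)
```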
2022/05/15:
- New models: F3Net (AAAI 2020), LDF (CVPR 2020), GateNet (ECCV 2020), PFSNet (AAAI 2021), CTDNet (ACM MM 2021). More models for SOD and COD tasks are coming soon.
- New dataset: training on COD task is available now.
- Training strategy update. We notice that training strategy is very important for achieving SOTA performance. A new strategy factory is added to /base/strategy.py.
Thanks for citing our work:
@article{zhou2024benchmarking,
title={Benchmarking deep models on salient object detection},
author={Zhou, Huajun and Lin, Yang and Yang, Lingxiao and Lai, Jianhuang and Xie, Xiaohua},
journal={Pattern Recognition},
volume={145},
pages={109951},
year={2024},
publisher={Elsevier}
}