This repository is cloned from backseason/PoolNet and modified for research.
This is a PyTorch implementation of our CVPR 2019 paper.
- conda-py38torch17.yml can be used to create a conda environment for this repository.
- Please refer to [Managing environments — conda documentation](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#creating-an-environment-from-an-environment-yml-file) for more details.
Requirements:
- cudatoolkit>=10.1.0
- torch>=1.7.0
- torchvision>=0.8.0
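As a quick sanity check, the version minimums above can be verified with a small stdlib-only helper (`meets_minimum` is a hypothetical name, not part of this repo; in practice you would compare against `torch.__version__` and `torchvision.__version__`):

```python
def meets_minimum(installed: str, required: str) -> bool:
    """Compare dotted version strings numerically, e.g. '1.10.0' >= '1.7.0'."""
    def parse(v):
        # Drop local suffixes like '+cu101', keep the first three numeric parts.
        return tuple(int(p) for p in v.split("+")[0].split(".")[:3])
    return parse(installed) >= parse(required)

print(meets_minimum("1.7.1+cu101", "1.7.0"))  # True: satisfies torch>=1.7.0
print(meets_minimum("0.7.0", "0.8.0"))        # False: torchvision too old
```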
- We have released our code for joint training with edge, which also yields our best-performing model.
- You may refer to the SalMetric repo for evaluating results.
- Clone the repository:
  $ git clone https://github.com/backseason/PoolNet.git
  $ cd ./PoolNet/
- Create the conda environment:
  $ conda env create -f ${REPO_ROOT}/dev-envs/conda-py38torch17.yml
Download the following datasets and unzip them into the data folder.
- MSRA-B and HKU-IS datasets. The .lst file for training is data/msrab_hkuis/msrab_hkuis_train_no_small.lst.
- DUTS dataset. The .lst file for training is data/DUTS/DUTS-TR/train_pair.lst.
- BSDS-PASCAL dataset. The .lst file for training is ./data/HED-BSDS_PASCAL/bsds_pascal_train_pair_r_val_r_small.lst.
- Datasets for testing.
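A .lst training file is typically a plain-text list of image/ground-truth path pairs. A minimal reading sketch, assuming one whitespace-separated pair per line (the sample paths below are hypothetical; check the actual files under data/ for the exact format):

```python
def read_pairs(text):
    """Parse 'image_path gt_path' lines into a list of (image, gt) tuples."""
    pairs = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue                      # skip blank lines
        img_path, gt_path = line.split()  # whitespace-separated pair
        pairs.append((img_path, gt_path))
    return pairs

sample = "imgs/0001.jpg masks/0001.png\nimgs/0002.jpg masks/0002.png\n"
print(read_pairs(sample))
```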
Models | Download | FPS
---|---|---
PoolNet-ResNet50 w/o edge model | GoogleDrive / BaiduYun (pwd: 2uln) | 1.29 (CPU), 29.82 (GPU)
PoolNet-ResNet50 w/ edge model (best performance) | GoogleDrive / BaiduYun (pwd: ksii) | -
PoolNet-VGG16 w/ edge model (pre-computed maps) | GoogleDrive / BaiduYun (pwd: 3wgc) | -
Unspecified | GoogleDrive / BaiduYun (pwd: 27p5) | -

FPS is measured with batch size 1, including pre/post-processing. CPU FPS is measured on a Threadripper 2950X; GPU FPS on an RTX 2080Ti.
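Numbers like these can be reproduced with a simple wall-clock loop. A framework-agnostic sketch (`dummy_step` below is a stand-in for pre-processing + forward pass + post-processing, not PoolNet itself; for GPU timing you would also need to synchronize, e.g. with torch.cuda.synchronize()):

```python
import time

def measure_fps(step, n_warmup=3, n_runs=10):
    """Average frames per second of step() over n_runs timed calls."""
    for _ in range(n_warmup):       # warm-up calls are excluded from timing
        step()
    start = time.perf_counter()
    for _ in range(n_runs):
        step()
    elapsed = time.perf_counter() - start
    return n_runs / elapsed

def dummy_step():
    # Stand-in workload for one batch-size-1 inference.
    return sum(i * i for i in range(10000))

print(f"{measure_fps(dummy_step):.2f} FPS")
```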
✋ Note
- Only batch_size=1 is supported.
- Except for the backbone, we do not use BN layers.
- To run inference, execute:
  $ python ${REPO_ROOT}/model_inspect.py --runmode infer --model_path ${PTH_PATH} --input_img_path ${INPUT_IMG_PATH} --output_img_path ${OUTPUT_IMG_PATH}
- To measure FPS on CPU, execute:
  $ python ${REPO_ROOT}/model_inspect.py --runmode fps --model_path ${PTH_PATH} --input_img_path ${INPUT_IMG_PATH} --cpu
- Set the --train_root and --train_list paths in train.sh correctly.
- For the demo we use ResNet-50 as the network backbone and train with an initial lr of 5e-5 for 24 epochs; the lr is divided by 10 after 15 epochs.
  ./train.sh
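The schedule above (an initial lr of 5e-5, divided by 10 after 15 epochs) is a plain step function; in PyTorch it would correspond to torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[15], gamma=0.1). A dependency-free sketch of the same rule:

```python
def lr_at(epoch, base_lr=5e-5, milestone=15, gamma=0.1):
    """Step schedule: base_lr before `milestone`, base_lr * gamma afterwards."""
    return base_lr * gamma if epoch >= milestone else base_lr

for epoch in (0, 14, 15, 23):
    print(f"epoch {epoch:2d}: lr = {lr_at(epoch):.0e}")
```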
- We demo joint training with edge using ResNet-50 as the network backbone, training with an initial lr of 5e-5 for 11 epochs; the lr is divided by 10 after 8 epochs. Each epoch runs for 30000 iters.
  ./joint_train.sh
- After training, the resulting model will be stored under the results/run-* folder.
For single-dataset testing, * changes accordingly and --sal_mode indicates the dataset (details can be found in main.py):
python main.py --mode='test' --model='results/run-*/models/final.pth' --test_fold='results/run-*-sal-e' --sal_mode='e'
For testing all datasets used in our paper (2 indicates the GPU to use):
./forward.sh 2 main.py results/run-*
For joint training, to get salient object detection results use
./forward.sh 2 joint_main.py results/run-*
and to get edge detection results use
./forward_edge.sh 2 joint_main.py results/run-*
All resulting saliency maps will be stored under the results/run-*-sal-* folders in .png format.
If you have any questions, feel free to contact me via j04.liu(at)gmail.com.
@inproceedings{Liu2019PoolSal,
title={A Simple Pooling-Based Design for Real-Time Salient Object Detection},
author={Jiang-Jiang Liu and Qibin Hou and Ming-Ming Cheng and Jiashi Feng and Jianmin Jiang},
booktitle={IEEE CVPR},
year={2019},
}
Thanks to DSS and DSS-pytorch.