Code for the ICCV 2019 paper "Joint learning of saliency detection and weakly supervised semantic segmentation".
Clone the repo and its submodules:

```
git clone https://github.com/zengxianyu/jsws.git
cd jsws
git submodule init
git submodule update
```
Prepare the environment:

```
conda env create --file=pytorch_environments.yml
```
Required training data:

- PASCAL VOC 2012 segmentation dataset; only its image-level class labels are used. Put the folder `VOC2012` in `data/datasets/segmentation_Dataset/VOCdevkit/`. The 10,582 extra training samples introduced by Hariharan et al. [15] are included: unzip `SegmentationClassAug.zip` and put it in `VOCdevkit`. (The expected layout is sketched after this list.)
- DUTS saliency dataset, training split. Put the folder `DUT-train` in `data/datasets/saliency_Dataset/`.
- (Optional) ECSSD dataset, for testing and validation on the saliency task. Put the folder `ECSSD` in `data/datasets/saliency_Dataset/`.
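After these steps the data directory should look roughly like this (a sketch assembled from the paths above):

```
data/datasets/
├── segmentation_Dataset/
│   └── VOCdevkit/
│       ├── VOC2012/
│       └── SegmentationClassAug/
└── saliency_Dataset/
    ├── DUT-train/
    └── ECSSD/          # optional
```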
Train using image-level class labels and saliency ground truth:

```
python weak_seg_full_sal_train.py
```

Open http://<host ip>:8000/savefiles/jlsfcn_dense169.html in a browser to visualize the training process.

Reaching mIoU > 54 should be easy, but you may need several runs to match or exceed the mIoU of 57.1 reported in Table 5 of the paper.
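For reference, mIoU here is the mean over the 21 VOC classes of per-class intersection over union, accumulated over the whole test set. A minimal sketch of the computation (these helpers are illustrative, not part of this repo):

```python
import numpy as np

def update_confusion(conf, gt, pred, num_classes=21, ignore=255):
    """Accumulate one image into a (C, C) confusion matrix.
    conf[i, j] counts pixels with ground-truth class i predicted as class j."""
    mask = gt != ignore                          # VOC marks 'ignore' pixels with 255
    idx = num_classes * gt[mask].astype(np.int64) + pred[mask].astype(np.int64)
    conf += np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)
    return conf

def mean_iou(conf):
    """Mean IoU over the classes that actually appear in gt or predictions."""
    inter = np.diag(conf)                        # true positives per class
    union = conf.sum(0) + conf.sum(1) - inter    # TP + FP + FN per class
    valid = union > 0
    return (inter[valid] / union[valid]).mean()

# usage: conf = np.zeros((21, 21), np.int64); update per image; then mean_iou(conf)
```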
Train a more complex model using the predictions of the stage 1 model:

- Make training data: `weak_seg_full_sal_syn.py`
- Train (optional: post-processing with DenseCRF; see the sketch below): `self_seg_full_sal_train.py`
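DenseCRF post-processing is commonly done with the pydensecrf package; a minimal sketch, assuming softmax outputs and an RGB image (the kernel parameters below are illustrative defaults, not necessarily this repo's settings):

```python
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def crf_refine(image, probs, n_iters=5):
    """Refine per-pixel class probabilities with a fully connected CRF.
    image: (H, W, 3) uint8 RGB array; probs: (C, H, W) softmax output."""
    c, h, w = probs.shape
    d = dcrf.DenseCRF2D(w, h, c)
    d.setUnaryEnergy(unary_from_softmax(probs))  # unary term: -log p
    d.addPairwiseGaussian(sxy=3, compat=3)       # location-only smoothness kernel
    d.addPairwiseBilateral(sxy=80, srgb=13,      # appearance kernel (color + location)
                           rgbim=np.ascontiguousarray(image), compat=10)
    q = d.inference(n_iters)
    return np.array(q).reshape(c, h, w).argmax(axis=0)  # hard labels, (H, W)
```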
Test the stage 1 model:

```
python weak_seg_full_sal_test.py
```

Test the stage 2 model:

```
python self_seg_full_sal_test.py
```

By default these scripts call the function `test(...)` to test on the segmentation task; change them to call `test_sal(...)` to test on the saliency task.
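For example, the switch is just a matter of which function the script invokes (illustrative structure; check the script for the exact arguments):

```python
# at the bottom of weak_seg_full_sal_test.py / self_seg_full_sal_test.py
if __name__ == "__main__":
    test()        # segmentation task (default)
    # test_sal()  # call this instead for the saliency task
```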
Download saliency maps on the ECSSD, PASCAL-S, HKU-IS, DUT-OMRON, DUTS-test, and SOD datasets: Google Drive; OneDrive
Citation:

```
@inproceedings{zeng2019joint,
  title={Joint learning of saliency detection and weakly supervised semantic segmentation},
  author={Zeng, Yu and Zhuge, Yunzhi and Lu, Huchuan and Zhang, Lihe},
  booktitle={Proceedings of the IEEE International Conference on Computer Vision},
  year={2019}
}
```