Local to Global

The official PyTorch code for "L2G: A Simple Local-to-Global Knowledge Transfer Framework for Weakly Supervised Semantic Segmentation", implemented on top of the code of OAA-PyTorch. The segmentation framework is borrowed from deeplab-pytorch.


Installation

Use the following command to prepare your environment:

pip install -r requirements.txt

Download the PASCAL VOC dataset and the MS COCO dataset.

L2G uses the off-the-shelf saliency maps generated by PoolNet. Download them and move them to a folder named Sal.
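Each saliency map is a single-channel image in which bright pixels mark salient objects, and the training scripts use them as a foreground/background cue. A minimal sketch of reading and thresholding one map (the file name and the 0.5 threshold are illustrative assumptions, not values from the released scripts):

import numpy as np
from PIL import Image

# Load one PoolNet saliency map and normalize it to [0, 1].
sal = np.array(Image.open('data/voc12/Sal/2007_000032.png').convert('L'), dtype=np.float32) / 255.0
fg = sal >= 0.5   # pixels likely to belong to a salient object
bg = ~fg          # pixels treated as a background cue
print('foreground ratio: %.2f' % fg.mean())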

The data folder structure should be like:

L2G
├── models
├── scripts
├── utils
├── data
│   ├── voc12
│   │   ├── JPEGImages
│   │   ├── SegmentationClass
│   │   ├── SegmentationClassAug
│   │   ├── Sal
│   ├── coco14
│   │   ├── JPEGImages
│   │   ├── SegmentationClass
│   │   ├── Sal
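Before training, you can quickly verify that this layout is in place. A small sanity-check sketch, assuming it is run from the L2G/ root:

import os

# Folders expected by the training scripts, as listed in the tree above.
expected = [
    'data/voc12/JPEGImages',
    'data/voc12/SegmentationClass',
    'data/voc12/SegmentationClassAug',
    'data/voc12/Sal',
    'data/coco14/JPEGImages',
    'data/coco14/SegmentationClass',
    'data/coco14/Sal',
]
for d in expected:
    print(('ok      ' if os.path.isdir(d) else 'missing ') + d)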

Download the pretrained model used to initialize the classification network and put it into ./models/.
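A minimal sketch of loading such a checkpoint, assuming it is a regular PyTorch state dict; the file name below is a placeholder and torchvision's VGG16 merely stands in for the classification network built by the training scripts:

import torch
import torchvision

model = torchvision.models.vgg16()  # stand-in for the actual classification network
state = torch.load('./models/pretrained.pth', map_location='cpu')  # placeholder file name
missing, unexpected = model.load_state_dict(state, strict=False)
print('missing keys: %d, unexpected keys: %d' % (len(missing), len(unexpected)))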

L2G

To train an L2G model on VOC2012, run the following commands:

cd L2G/
./train_l2g_sal_voc.sh 

For COCO:

cd L2G/
./train_l2g_sal_coco.sh 

We also provide pretrained classification models for PASCAL VOC and MS COCO.

After training, use the following commands to generate pseudo labels and check their quality.
For VOC:

./test_l2g_voc.sh

For COCO:

./test_l2g_coco.sh
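Checking quality here means comparing the generated pseudo labels against the ground-truth annotations. A rough sketch of computing mIoU for this purpose, assuming the pseudo labels are single-channel PNGs holding class indices, 255 marks ignored pixels, and the output folder name is a placeholder:

import os
import numpy as np
from PIL import Image

pred_dir = 'runs/pseudo_labels'              # placeholder output folder
gt_dir = 'data/voc12/SegmentationClassAug'   # ground-truth annotations
n_cls = 21                                   # 20 VOC classes + background
hist = np.zeros((n_cls, n_cls), dtype=np.int64)

for name in os.listdir(pred_dir):
    pred = np.array(Image.open(os.path.join(pred_dir, name)))
    gt = np.array(Image.open(os.path.join(gt_dir, name)))
    valid = gt != 255                        # skip ignored pixels
    hist += np.bincount(n_cls * gt[valid].astype(np.int64) + pred[valid],
                        minlength=n_cls * n_cls).reshape(n_cls, n_cls)

iou = np.diag(hist) / (hist.sum(0) + hist.sum(1) - np.diag(hist) + 1e-10)
print('mIoU: %.2f' % (100 * iou.mean()))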

Weakly Supervised Segmentation

To train a segmentation model, first generate pseudo segmentation labels by

./gen_gt_voc.sh

This code will generate pseudo segmentation labels in ./data/voc12/pseudo_seg_labels/.
For COCO:

./gen_gt_coco.sh

This code will generate pseudo segmentation labels in ./data/coco14/pseudo_seg_labels/.
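To eyeball individual pseudo labels, you can colorize them with the standard PASCAL VOC palette. A small sketch (the file name is an illustrative placeholder):

import numpy as np
from PIL import Image

def voc_palette(n=256):
    # Standard PASCAL VOC color map: bits of the class index are spread
    # across the R, G and B channels, from the most significant bit down.
    palette = []
    for i in range(n):
        r = g = b = 0
        c = i
        for j in range(8):
            r |= ((c >> 0) & 1) << (7 - j)
            g |= ((c >> 1) & 1) << (7 - j)
            b |= ((c >> 2) & 1) << (7 - j)
            c >>= 3
        palette += [r, g, b]
    return palette

label = np.array(Image.open('data/voc12/pseudo_seg_labels/2007_000032.png'))
vis = Image.fromarray(label.astype(np.uint8), mode='P')
vis.putpalette(voc_palette())
vis.save('vis_2007_000032.png')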

Then switch to the segmentation framework:

cd deeplab-pytorch

Download the pretrained models and put them into the pretrained folder.

Train the DeepLabv2-ResNet101 model by

python main.py train \
      --config-path configs/voc12_resnet_dplv2.yaml

Test the segmentation model by

python main.py test \
    --config-path configs/voc12_resnet_dplv2.yaml \
    --model-path data/models/voc12/voc12_resnet_v2/train_aug/checkpoint_final.pth

Apply the CRF post-processing by

python main.py crf \
    --config-path configs/voc12_resnet_dplv2.yaml
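Under the hood, the crf step refines the saved class probability maps with a fully connected CRF. A condensed sketch of that operation using the pydensecrf package; the kernel parameters below are illustrative placeholders, the actual values come from the yaml config:

import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def crf_refine(image, probs, n_iters=10):
    # image: (H, W, 3) uint8 RGB image; probs: (n_classes, H, W) softmax scores.
    h, w = image.shape[:2]
    d = dcrf.DenseCRF2D(w, h, probs.shape[0])
    d.setUnaryEnergy(unary_from_softmax(probs.astype(np.float32)))
    d.addPairwiseGaussian(sxy=3, compat=3)                 # smoothness kernel
    d.addPairwiseBilateral(sxy=67, srgb=3, compat=4,       # appearance kernel
                           rgbim=np.ascontiguousarray(image))
    q = np.array(d.inference(n_iters)).reshape(probs.shape[0], h, w)
    return q.argmax(axis=0)                                # refined label map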

Performance

Dataset       mIoU (val)   mIoU (test)
PASCAL VOC    72.1         71.7
MS COCO       44.2         ---

If you have any questions about L2G, please feel free to contact me (pt.jiang AT mail DOT nankai.edu.cn).

Citation

If you use our code and models in your research, please cite:

@inproceedings{jiang2022l2g,
  title={L2G: A Simple Local-to-Global Knowledge Transfer Framework for Weakly Supervised Semantic Segmentation},
  author={Jiang, Peng-Tao and Yang, Yuqi and Hou, Qibin and Wei, Yunchao},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2022}
}

License

The code is released under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License for non-commercial use only. Any commercial use requires formal permission first.

Acknowledgement

Some parts of this code are borrowed from EPS.