CL4WSIS


Class-incremental Continual Learning for Instance Segmentation with Image-level Weak Supervision (ICCV 2023)

Yu-Hsing Hsieh, Guan-Sheng Chen, Shun-Xian Cai, Ting-Yun Wei, Huei-Fang Yang, Chu-Song Chen

Official PyTorch Implementation

Instance segmentation requires labor-intensive manual labeling of the contours of complex objects in images for training. In practice, the labels can also be provided incrementally to balance the human labor across different time steps. However, research on incremental learning for instance segmentation with only weak labels is still lacking. In this paper, we propose a continual-learning method to segment object instances from image-level labels. Unlike most weakly-supervised instance segmentation (WSIS) methods, which rely on traditional object proposals, we transfer the semantic knowledge from weakly-supervised semantic segmentation (WSSS) to WSIS to generate instance cues. To address the background-shift problem in continual learning, we employ the old-class segmentation results generated by the previous model to provide more reliable semantic and peak hypotheses. To our knowledge, this is the first work on weakly-supervised continual learning for instance segmentation of images. Experimental results show that our method achieves better performance on the Pascal VOC and COCO datasets under various incremental settings.

How to run

Requirements

Our code is tested under:

python 3.8

If you want to set up a custom environment for this code, you can run the following using conda:

conda install pytorch torchvision cudatoolkit=10.1 -c pytorch
conda install tensorboard
conda install jupyter
conda install matplotlib
conda install tqdm
conda install imageio

pip install inplace-abn # this builds a CUDA extension; use the same CUDA version as PyTorch
pip install wandb # to use the WandB logger
pip install opencv-python
pip install pycocotools
pip install chainercv
pip install numpy==1.23.1

Datasets

We use Pascal SBD, Pascal VOC 2012, and COCO (objects only). For the Pascal dataset, you can download the data from here; we need SegmentationClassAug and SegmentationObjectAug. For the COCO dataset, we follow the splits and annotations that you can see here. We use the Thing-only COCO-style annotations.

If your datasets are in a different folder, make a soft-link from the target dataset to the data folder. We expect the following tree:

data/voc/
    SegmentationClassAug/
        <Image-ID>.png
    SegmentationObjectAug/
        <Image-ID>.png
    JPEGImages/
        <Image-ID>.jpg
    split/
    ... other files 
    
data/coco/
    annotations/
        instances_train2017.json
        instances_val2017.json
    images/
        train2017/
            <Image-ID>.jpg
        val2017/
            <Image-ID>.jpg
    ... other files 
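
For example, assuming your datasets are stored under /path/to/datasets (a placeholder for your actual location), the soft-links can be created with:

mkdir -p data
# link the existing dataset folders into the locations the code expects
ln -s /path/to/datasets/voc data/voc
ln -s /path/to/datasets/coco data/coco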

Finally, to prepare the COCO-to-VOC setting, we need to map the VOC labels into COCO. Do that by running:

python data/make_cocovoc.py

ImageNet Pretrained Models

Once you have prepared the datasets, you can obtain the ImageNet pre-trained models via InPlaceABN. For example, download the pre-trained weights for ResNet-101 and rename the file to resnet101_iabn_sync.pth.tar. Then put the pretrained model in the pretrained folder.
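
For instance, assuming the checkpoint was downloaded to the current directory (the downloaded file name below is only illustrative), placing it might look like:

mkdir -p pretrained
# rename the downloaded checkpoint to the file name the code expects
mv resnet101_iabn.pth.tar pretrained/resnet101_iabn_sync.pth.tar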

Training

Different scripts are provided in the scripts folder to run the experiments. In the following, we describe the basic parameters needed to run an experiment.

First, we assume that we have the following command:

exp='python -m torch.distributed.launch --nproc_per_node=<num GPUs> --master_port <PORT> run.py --num_workers <N_Workers>'
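
For example, a possible instantiation on a machine with 2 GPUs (the port and worker count here are illustrative, not required values):

exp='python -m torch.distributed.launch --nproc_per_node=2 --master_port 29500 run.py --num_workers 4'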

For step 0 (the base step, fully supervised), you can run:

exp --name Base --step 0 --bce --lr 0.00005 --dataset <dataset> --task <task> --batch_size 16 --epochs 100 --optim adam --weight_decay 0 [--overlap]

where --bce trains the semantic part with the binary cross-entropy loss. dataset can be voc or coco-voc. A concrete example follows the task list below.

The tasks are:

voc: (you can set overlap here)
    15-5, 10-10
coco: (overlap is not used)
    voc 
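
For instance, a base-step run on the overlapped 15-5 VOC task might look like (this simply substitutes the placeholders above):

exp --name Base --step 0 --bce --lr 0.00005 --dataset voc --task 15-5 --batch_size 16 --epochs 100 --optim adam --weight_decay 0 --overlap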

After this, you can run the incremental steps using only image-level labels (set the weakly parameter).

For phase 1: CL for WSSS:

exp --name OURS --step 1 --weakly --lr 0.001 --alpha 0.5 --step_ckpt <pretr> --loss_de 1 --lr_policy warmup --affinity \
    --optim sgd --phase 1 --dataset <dataset> --task <task> --batch_size 16 --epochs 40 [--overlap]

where pretr should be the path to the pretrained model (usually checkpoints/step/<dataset>-<task>/<name>.pth). phase is set to 1 to train CL for WSSS. Please set --alpha 0.9 on the COCO dataset.
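
Continuing the overlapped 15-5 VOC example, and assuming the base step above was named Base (so its checkpoint follows the pattern as checkpoints/step/voc-15-5/Base.pth), a phase-1 command might look like:

exp --name OURS --step 1 --weakly --lr 0.001 --alpha 0.5 --step_ckpt checkpoints/step/voc-15-5/Base.pth --loss_de 1 --lr_policy warmup --affinity \
    --optim sgd --phase 1 --dataset voc --task 15-5 --batch_size 16 --epochs 40 --overlap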

For phase 2: CL4WSIS:

exp --name OURS --step 1 --weakly --lr 0.00005 --alpha 0.5 --step_ckpt <pretr> --loss_de 1 --lr_policy warmup --affinity \
    --optim adam --weight_decay 0 --seg_ckpt <pretr_seg> --phase 2 --dataset <dataset> --task <task> --batch_size 16 --epochs 50 [--overlap]

where pretr_seg should be the path to the model after phase 1 training. phase is set to 2 for training CL4WSIS.
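
Again for the overlapped 15-5 VOC example, and assuming the phase-1 model was saved under the same pattern (checkpoints/step/voc-15-5/OURS.pth is an assumption about the saved name), a phase-2 command might look like:

exp --name OURS --step 1 --weakly --lr 0.00005 --alpha 0.5 --step_ckpt checkpoints/step/voc-15-5/Base.pth --loss_de 1 --lr_policy warmup --affinity \
    --optim adam --weight_decay 0 --seg_ckpt checkpoints/step/voc-15-5/OURS.pth --phase 2 --dataset voc --task 15-5 --batch_size 16 --epochs 50 --overlap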

Although we set the random seed, our implementation may still produce minor differences between runs.

Cite us

If you find this work helpful to your research, please consider citing:

@InProceedings{Hsieh_2023_ICCV,
    author    = {Hsieh, Yu-Hsing and Chen, Guan-Sheng and Cai, Shun-Xian and Wei, Ting-Yun and Yang, Huei-Fang and Chen, Chu-Song},
    title     = {Class-incremental Continual Learning for Instance Segmentation with Image-level Weak Supervision},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {1250-1261}
}

Acknowledgement

Our implementation is based on these repositories: WILSON and BESTIE.