Install the dependencies listed in requirements.txt, then clone the repository:
git clone https://github.com/khangt1k25/Contrastive-Segmentation.git
cd Contrastive-Segmentation/
Download the MoCo v2 800-epoch pretrained checkpoint from https://github.com/facebookresearch/moco.
1. Change the data path in data/util/mypath.py (the dataset is downloaded automatically).
2. Change the result path in configs/env.yml (where results are saved).
3. Choose a config: configs/VOCSegmentation_unsupervised_saliency_model.yml or configs/VOCSegmentation_supervised_saliency_model.yml.
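The data-path step above edits data/util/mypath.py. As a rough sketch of what that file usually looks like in SCAN-style repositories (the class and method names here are assumptions, so check the actual file):

```python
import os

class MyPath(object):
    """Maps a dataset name to its root directory on disk (sketch only)."""

    @staticmethod
    def db_root_dir(database=''):
        db_root = '/path/to/datasets'  # <-- edit this line to your data path
        if database == 'VOCSegmentation':
            return os.path.join(db_root, database)
        raise ValueError('Unknown database: {}'.format(database))
```

The dataset loaders call this lookup, so changing `db_root` in one place redirects all downloads and reads.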
To train with the unsupervised saliency model, run:
cd pretrain
python main.py --config_env configs/env.yml --config_exp configs/VOCSegmentation_unsupervised_saliency_model.yml
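The configs/env.yml referenced in these commands typically holds only the output location; a minimal sketch (the key name is an assumption, check the shipped file):

```yaml
# configs/env.yml (sketch; key name assumed)
root_dir: /path/to/your/results   # checkpoints and logs are written here
```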
Linear finetuning
Change the segmentation result path in configs/env.yml (the same result path as above) and the data path (the same as above).
Change the pretraining path in configs/linear_finetune/linear_finetune_VOCSegmentation_unsupervised_saliency.yml to point to your pretrained model path.
Then run:
cd segmentation
python linear_finetune.py --config_env configs/env.yml --config_exp configs/linear_finetune/linear_finetune_VOCSegmentation_unsupervised_saliency.yml
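Linear finetuning keeps the pretrained backbone frozen and trains only a per-pixel linear classifier on top of its features. Reduced to a toy least-squares stand-in on synthetic features (an illustration of the idea, not the repository's SGD training loop):

```python
import numpy as np

# Toy stand-in for a linear probe: features from a frozen backbone are fixed,
# and only a linear map W from feature space to class scores is fitted.
rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 8))               # (n_pixels, feat_dim), frozen
labels = (feats[:, 0] > 0).astype(int)          # pretend ground-truth classes
Y = np.eye(2)[labels]                           # one-hot targets
W, *_ = np.linalg.lstsq(feats, Y, rcond=None)   # fit the linear probe
pred = (feats @ W).argmax(axis=1)
accuracy = (pred == labels).mean()
```

If the frozen features are good, even this crude linear fit separates the classes, which is exactly what the linear-probe evaluation measures.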
-
K-means clustering
Change the pretraining path in configs/kmeans/kmeans_VOCSegmentation_unsupervised_saliency.yml to point to your pretrained model path, then run:
cd segmentation
python kmeans.py --config_env configs/env.yml --config_exp configs/kmeans/kmeans_VOCSegmentation_unsupervised_saliency.yml
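kmeans.py clusters the frozen pixel embeddings and scores the clusters against the ground truth. The clustering itself is plain K-means; a toy NumPy version of that inner loop (illustration only, not the repository's implementation):

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Toy K-means: alternate nearest-centroid assignment and centroid update."""
    # Deterministic init: spread the initial centroids across the data points.
    centroids = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # distances: (n_points, k)
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)               # assign each point
        for j in range(k):
            pts = X[labels == j]
            if len(pts):                        # keep empty clusters in place
                centroids[j] = pts.mean(axis=0)
    return labels, centroids
```

Because the clusters carry no class names, the evaluation must match them to ground-truth classes before computing accuracy.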
-
Retrieval
Change the pretraining path in configs/retrieval/retrieval_VOCSegmentation_unsupervised_saliency.yml to point to your pretrained model path, then run:
cd segmentation
python retrieval.py --config_env configs/env.yml --config_exp configs/retrieval/retrieval_VOCSegmentation_unsupervised_saliency.yml
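retrieval.py evaluates the representation by nearest-neighbour search in embedding space. The core operation is a cosine-similarity lookup, sketched here on toy vectors (not the repository's code):

```python
import numpy as np

def retrieve(query, gallery, topk=3):
    """Indices of the top-k gallery vectors by cosine similarity to the query."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ q                         # cosine similarity to every item
    return np.argsort(-sims)[:topk]      # highest similarity first
```

A good representation places same-class masks close together, so the retrieved neighbours share the query's class.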
Evaluation
To evaluate a trained model, run:
cd segmentation
python eval.py --config_env configs/env.yml --config_exp configs/VOCSegmentation_supervised_saliency_model.yml --state-dict $PATH_TO_MODEL
This code is based on the SCAN and MoCo repositories. If you find this repository useful for your research, please consider citing the following papers:
@article{vangansbeke2020unsupervised,
  title={Unsupervised Semantic Segmentation by Contrasting Object Mask Proposals},
  author={Van Gansbeke, Wouter and Vandenhende, Simon and Georgoulis, Stamatios and Van Gool, Luc},
  journal={arXiv preprint arXiv:2102.06191},
  year={2021}
}
@inproceedings{vangansbeke2020scan,
  title={SCAN: Learning to Classify Images without Labels},
  author={Van Gansbeke, Wouter and Vandenhende, Simon and Georgoulis, Stamatios and Proesmans, Marc and Van Gool, Luc},
  booktitle={Proceedings of the European Conference on Computer Vision},
  year={2020}
}
@inproceedings{he2019moco,
  title={Momentum Contrast for Unsupervised Visual Representation Learning},
  author={He, Kaiming and Fan, Haoqi and Wu, Yuxin and Xie, Saining and Girshick, Ross},
  booktitle={Conference on Computer Vision and Pattern Recognition},
  year={2020}
}
For any enquiries, please contact the main authors.
For an overview on self-supervised learning, have a look at the overview repository.
This software is released under a Creative Commons license that allows personal and research use only. For a commercial license, please contact the authors. You can view a license summary here.
This work was supported by Toyota, and was carried out at the TRACE Lab at KU Leuven (Toyota Research on Automated Cars in Europe - Leuven).