Official repository for the CVPR 2024 Oral paper: "From SAM to CAMs: Exploring Segment Anything Model for Weakly Supervised Semantic Segmentation" by Hyeokjun Kweon and Kuk-Jin Yoon.
- Tested on Ubuntu 18.04 with Python 3.8, PyTorch 1.8.2, CUDA 11.4, and 4 GPUs.
- PASCAL VOC 2012 development kit: download it and place VOC2012 under the ./data folder.
- ImageNet-pretrained weights for ResNet-38d are from [resnet_38d.params]. Place the weights at ./pretrained/resnet_38d.params.
- Please install SAM and download the vit_h checkpoint, placing it at ./pretrained/sam_vit_h.pth.
- Run SAM's Segment-Everything option as a preprocessing step. Please refer to get_se_map.py for further details.
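The exact preprocessing lives in get_se_map.py, but conceptually it fuses the class-agnostic masks produced by SAM's automatic ("segment everything") mask generator into a single per-image segment map. A minimal sketch of that fusion step, assuming each mask is a dict with a boolean array under the "segmentation" key (the format returned by SamAutomaticMaskGenerator); the function name and overwrite policy here are illustrative, and the actual script may differ:

```python
import numpy as np

def build_se_map(masks):
    """Fuse binary SAM masks into one integer segment map.

    masks: list of dicts whose "segmentation" entry is an H x W bool
    array (SAM automatic-mask-generator format). Pixels covered by no
    mask keep label 0. Masks are painted largest-area first, so smaller
    masks overwrite larger ones and fine-grained segments survive.
    """
    if not masks:
        raise ValueError("no masks given")
    h, w = masks[0]["segmentation"].shape
    se_map = np.zeros((h, w), dtype=np.int32)
    # Sort by area (descending) so smaller masks are painted last.
    order = sorted(masks, key=lambda m: m["segmentation"].sum(), reverse=True)
    for idx, m in enumerate(order, start=1):
        se_map[m["segmentation"]] = idx
    return se_map
```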
- This repository generates CAMs (seeds) to train the segmentation network.
- For further refinement, refer to RIB and SAM_WSSS.
- Please specify the name of your experiment.
- Training results are saved at ./experiment/[exp_name]
```
python train.py --name [exp_name] --model s2c
python evaluation.py --name [exp_name] --task cam --dict_dir dict
```
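The evaluation script's exact implementation is in the repo; for reference, the standard seed-quality metric for WSSS on PASCAL VOC is mean IoU over the 21 classes (background plus 20 objects), accumulated through a confusion matrix. A minimal sketch of that metric (the function name and signature are illustrative, not taken from evaluation.py):

```python
import numpy as np

def mean_iou(preds, gts, num_classes):
    """Mean IoU over a dataset, accumulated via a confusion matrix.

    preds, gts: iterables of integer label maps of identical shape.
    """
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    for pred, gt in zip(preds, gts):
        # Row = ground-truth class, column = predicted class.
        idx = gt.astype(np.int64) * num_classes + pred.astype(np.int64)
        conf += np.bincount(idx.ravel(),
                            minlength=num_classes ** 2).reshape(num_classes,
                                                                num_classes)
    inter = np.diag(conf)
    union = conf.sum(axis=0) + conf.sum(axis=1) - inter
    iou = inter / np.maximum(union, 1)  # guard against empty classes
    return iou.mean()
```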
If our code is useful to you, please consider citing our CVPR 2024 paper using the following BibTeX entry.
```
@inproceedings{kweon2024sam,
  title={From SAM to CAMs: Exploring Segment Anything Model for Weakly Supervised Semantic Segmentation},
  author={Kweon, Hyeokjun and Yoon, Kuk-Jin},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={19499--19509},
  year={2024}
}
```