
TransWCD: Transformer-based Weakly-Supervised Change Detection

📔 Code for Paper: Exploring Effective Priors and Efficient Models for Weakly-Supervised Change Detection [arXiv]

Update

Higher-performing TransWCD baselines have been released, with F1 score gains of +2.47 on LEVIR-CD and +5.72 on DSIFN-CD over the results reported in our paper.

Abstract

Weakly-supervised change detection (WSCD) aims to detect pixel-level changes with only image-level (i.e., scene-level) annotations. We develop TransWCD, a simple yet powerful transformer-based model, showcasing the potential of weakly-supervised learning in change detection.

💬 TransWCD Architectures (Encoder-Only):

A. Preparations

1. Download Dataset

You can download WHU-CD, DSIFN-CD, LEVIR-CD, and other CD datasets, then use our data_and_label_processing scripts to convert these raw change detection datasets into cropped weakly-supervised change detection datasets.

Or use the processed weakly-supervised datasets from here. Please cite their papers and ours.

WSCD dataset with image-level labels:
├─A
├─B
├─label
├─imagelevel_labels.npy
└─list
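Once a dataset follows this layout, the samples can be assembled by pairing the bi-temporal images in A and B with their image-level change flags. Below is a minimal sketch, assuming imagelevel_labels.npy stores a dict mapping each filename stem to a 0/1 change flag (this mapping format is an assumption; adapt it to the actual dataset files).

```python
import os
import numpy as np

def load_wscd_samples(root):
    """Pair bi-temporal images (A/B) with image-level change labels.

    Assumes the directory layout shown above, and that
    `imagelevel_labels.npy` holds a {filename stem: 0/1} dict
    (an illustrative assumption, not the repo's exact format).
    """
    labels = np.load(os.path.join(root, "imagelevel_labels.npy"),
                     allow_pickle=True).item()
    samples = []
    for name in sorted(os.listdir(os.path.join(root, "A"))):
        stem = os.path.splitext(name)[0]
        samples.append({
            "image_a": os.path.join(root, "A", name),   # pre-change image
            "image_b": os.path.join(root, "B", name),   # post-change image
            "changed": int(labels[stem]),               # image-level label
        })
    return samples
```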

2. Download Pre-trained Weights

Download the pre-trained weights from SegFormer and move them to transwcd/pretrained/.

3. Create and Activate Conda Environment

conda create --name transwcd python=3.6
conda activate transwcd
pip install -r requirements.txt

B. Train and Test

# train 
python train_transwcd.py

To train on a different dataset, switch among the corresponding configuration files WHU.yaml, LEVIR.yaml, and DSIFN.yaml in train_transwcd.py.
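Since each dataset is driven by its own YAML file, a small helper can load one and apply ad-hoc overrides when experimenting. This is a minimal sketch using PyYAML; the keys shown are illustrative assumptions, not the exact schema used by train_transwcd.py.

```python
import yaml

def load_config(path, **overrides):
    """Load a dataset YAML (e.g. WHU.yaml) and apply keyword overrides.

    Top-level keys are merged flatly; the schema here is a hypothetical
    example, so check the shipped YAML files for the real keys.
    """
    with open(path) as f:
        cfg = yaml.safe_load(f)
    cfg.update(overrides)  # e.g. load_config("WHU.yaml", batch_size=4)
    return cfg
```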

# test
python test.py

Please remember to modify the corresponding configurations in test.py. The visual results are saved to transwcd/results/.

C. Performance and Best Models

TransWCD        WHU-CD               LEVIR-CD             DSIFN-CD
Single-Stream   67.81 / Best model   51.06 / Best model   57.28 / Best model
Dual-Stream     68.73 / Best model   62.55 / Best model   59.13 / Best model

*Average F1 score / Best model

On both the WHU-CD and LEVIR-CD datasets, test performance closely matches validation performance, with F1 score differences of less than 3%.
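The F1 scores above are pixel-level scores on binary change masks. For reference, here is a generic sketch of precision/recall/F1 for the change class; it follows the standard definition and is not the repo's exact evaluation script.

```python
import numpy as np

def change_f1(pred, gt, eps=1e-10):
    """Pixel-level F1 for the change class (1 = changed, 0 = unchanged).

    pred, gt: binary numpy arrays of the same shape. Standard
    definition; the repo's own metric code may differ in details.
    """
    tp = np.logical_and(pred == 1, gt == 1).sum()  # true positives
    fp = np.logical_and(pred == 1, gt == 0).sum()  # false positives
    fn = np.logical_and(pred == 0, gt == 1).sum()  # false negatives
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    return 2 * precision * recall / (precision + recall + eps)
```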

Citation

If this work is helpful to your research, please cite our paper. Here is an example BibTeX entry:

@article{zhao2023exploring,
  title={Exploring Effective Priors and Efficient Models for Weakly-Supervised Change Detection},
  author={Zhao, Zhenghui and Ru, Lixiang and Wu, Chen},
  journal={arXiv preprint arXiv:2307.10853},
  year={2023}
}

Acknowledgement

Thanks to these brilliant works BGMix, ChangeFormer, and SegFormer!