Out-of-Domain Robustness via Targeted Augmentations

Code for the paper Out-of-Domain Robustness via Targeted Augmentations by Irena Gao*, Shiori Sagawa*, Pang Wei Koh, Tatsunori Hashimoto, and Percy Liang. Model weights are also available at this CodaLab worksheet.

This repository was originally forked from WILDS.

Abstract

Models trained on one set of domains often suffer performance drops on unseen domains, e.g., when wildlife monitoring models are deployed on new camera locations. In this work, we study principles for designing data augmentations for out-of-domain (OOD) generalization. In particular, we focus on real-world scenarios in which some domain-dependent features are robust, i.e., some features that vary across domains are predictive OOD. For example, in the wildlife monitoring application above, image backgrounds vary across camera locations but indicate habitat type, which helps predict the species of photographed animals. Motivated by theoretical analysis on a linear setting, we propose targeted augmentations, which selectively randomize spurious domain-dependent features while preserving robust ones. We prove that targeted augmentations improve OOD performance, allowing models to generalize better with fewer domains. In contrast, existing approaches such as generic augmentations, which fail to randomize domain-dependent features, and domain-invariant augmentations, which randomize all domain-dependent features, both perform poorly OOD. In experiments on three real-world datasets, we show that targeted augmentations set new states-of-the-art for OOD performance by 3.2–15.2%.

Code

To install dependencies, run

pip install -r requirements.txt

The repository supports running the experiments from the paper.

Training with targeted augmentations

We also provide implementations for the three targeted data augmentations studied in the paper:

  1. Copy-Paste (Same Y) for iWildCam2020-WILDS. In iWildCam, image backgrounds are domain-dependent features with both spurious and robust components. While low-level background features are spurious, habitat features are robust. Copy-Paste (Same Y) transforms input $(x, y)$ by pasting the animal foreground onto a random training set background, but only onto backgrounds from training cameras that also observe $y$. This randomizes low-level background features while roughly preserving habitat (see the first sketch after this list).
python examples/run_expt.py --root_dir path/to/data --lr 3.490455181206744e-05 --weight_decay 0 --transform_p 0.5682688104816859 --train_additional_transforms copypaste_same_y --algorithm ERM --dataset iwildcam --download
  2. Stain Color Jitter for Camelyon17-WILDS. In Camelyon17, stain color is a spurious domain-dependent feature, while stage-related features are robust domain-dependent features. Stain Color Jitter (Tellez et al., 2018) transforms $x$ by jittering its color in the hematoxylin and eosin staining color space (see the second sketch after this list).
python examples/run_expt.py --root_dir path/to/data --lr 0.0030693212138627936 --weight_decay 0.01 --transform_p 0.5682688104816859 --train_additional_transforms camelyon_color --transform_kwargs sigma=0.1 --algorithm ERM --dataset camelyon17 --download
  3. Copy-Paste + Jitter (Region) for BirdCalls. In BirdCalls, low-level noise and gain levels are spurious domain-dependent features, while habitat-specific noise is a robust domain-dependent feature. Copy-Paste + Jitter (Region) leverages time-frequency bounding boxes to paste bird calls onto other training set recordings from the same geographic region (Southwestern Amazon Basin, Hawaii, or Northeastern United States). After pasting the bird call, we also jitter hue levels of the spectrogram to simulate randomizing microphone gain settings (see the third sketch after this list).
python examples/run_expt.py --root_dir path/to/data --lr 0.00044964663762800047 --weight_decay 0.001 --transform_p 0.5983713912982213 --train_additional_transforms copypaste_same_region --algorithm ERM --dataset birdcalls --download
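
Below are brief schematic sketches of the three augmentations. First, a minimal sketch of Copy-Paste (Same Y), assuming precomputed foreground masks and empty backgrounds grouped by the labels their source cameras observe; mask and backgrounds_by_label are hypothetical inputs for illustration, not names from this codebase (the actual transform is registered as copypaste_same_y above).

import random

def copy_paste_same_y(image, mask, y, backgrounds_by_label, rng=random):
    """Paste the animal foreground of `image` onto a random empty background
    from a training camera that also observes label `y`.

    image: HxWx3 array; mask: HxW boolean foreground mask;
    backgrounds_by_label: dict mapping label -> list of HxWx3 backgrounds
    with the same spatial size as `image`.
    """
    background = rng.choice(backgrounds_by_label[y]).copy()
    background[mask] = image[mask]  # keep the animal, randomize the rest
    return background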
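
Next, a sketch of Stain Color Jitter in the spirit of Tellez et al. (2018), assuming scikit-image is available. It perturbs each channel of the hematoxylin-eosin-DAB decomposition with a random affine map controlled by sigma (matching the --transform_kwargs sigma=0.1 flag above), though the exact parameterization here is illustrative.

import numpy as np
from skimage.color import rgb2hed, hed2rgb

def stain_color_jitter(image, sigma=0.1, rng=None):
    """Jitter an RGB image (floats in [0, 1]) in the H&E staining color space.

    Each deconvolved stain channel s is remapped to alpha * s + beta with
    alpha ~ U(1 - sigma, 1 + sigma) and beta ~ U(-sigma, sigma).
    """
    rng = rng or np.random.default_rng()
    hed = rgb2hed(image)  # color deconvolution into stain channels
    alpha = rng.uniform(1 - sigma, 1 + sigma, size=3)
    beta = rng.uniform(-sigma, sigma, size=3)
    return np.clip(hed2rgb(hed * alpha + beta), 0.0, 1.0)  # recompose, clip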
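
Finally, a sketch of Copy-Paste + Jitter (Region) on spectrograms, assuming each example carries time-frequency bounding boxes for its calls and that training recordings are grouped by geographic region. A simple additive offset stands in for the hue jitter described above, and all names here are illustrative rather than taken from the codebase.

import random

def copy_paste_jitter_region(spec, boxes, region, recordings_by_region,
                             jitter=0.1, rng=random):
    """Paste the boxed bird calls from `spec` onto a random recording from
    the same region, then jitter levels to mimic varying microphone gain.

    spec: FxT spectrogram; boxes: list of (f0, f1, t0, t1) index tuples;
    recordings_by_region: dict mapping region -> list of FxT spectrograms.
    """
    target = rng.choice(recordings_by_region[region]).copy()
    for f0, f1, t0, t1 in boxes:
        target[f0:f1, t0:t1] = spec[f0:f1, t0:t1]  # transplant the call
    return target + rng.uniform(-jitter, jitter)   # crude stand-in for gain jitter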

Citation

If this codebase or these models are useful in your work, please consider citing our paper:

@inproceedings{gao2023out,
  title={Out-of-Domain Robustness via Targeted Augmentations},
  author={Gao, Irena and Sagawa, Shiori and Koh, Pang Wei and Hashimoto, Tatsunori and Liang, Percy},
  booktitle={International Conference on Machine Learning (ICML)},
  year={2023},
  url={https://arxiv.org/abs/2302.11861}
}