RobustSSL_Benchmark

Benchmark of robust self-supervised learning (RobustSSL) methods & Code for AutoLoRa (ICLR 2024).

Benchmarking Transferability of Robust Self-Supervised Learning (RobustSSL)

The wide-ranging applications of foundation models, especially in safety-critical areas, necessitate robust self-supervised learning (RobustSSL), which can yield strong adversarial robustness in downstream tasks via fine-tuning. In this repo, we provide a benchmark for the robustness transferability of robust pre-training.

The leaderboard is available at robustssl.github.io.

RobustSSL: Methods and Model Zoo

We consider the following RobustSSL methods:

Model Zoo: We release all the pre-trained checkpoints in Dropbox.
| Pre-trained weights of ResNet-18 encoder | ACL (Jiang et al., NeurIPS'20) | AdvCL (Fan et al., NeurIPS'21) | A-InfoNCE (Yu et al., ECCV'22) | DeACL (Zhang et al., ECCV'22) | DynACL (Luo et al., ICLR'23) | DynACL++ (Luo et al., ICLR'23) | DynACL-AIR (Xu et al., NeurIPS'23a) | DynACL-AIR++ (Xu et al., NeurIPS'23a) | DynACL-RCS (Xu et al., NeurIPS'23b) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CIFAR-10 | link* | link | link | link* | link* | link* | link | link | link |
| CIFAR-100 | link* | link | link | - | link* | link* | link | link | link |
| STL10 | link | - | - | - | link* | link* | link | link | link |

Acknowledgements: The superscript * denotes that the pre-trained encoders were provided in the original authors' GitHub repositories and copied into our Dropbox directory; otherwise, the encoders were pre-trained by us.
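As a rough illustration of how a downloaded checkpoint might be loaded, here is a minimal sketch. The checkpoint file name (acl_cifar10.pt) and the state-dict layout are assumptions that vary across the methods above, so strict=False is used to surface any mismatched keys.

```python
import torch
from torchvision.models import resnet18

# Hypothetical example: the checkpoint name and state-dict layout are
# assumptions; different RobustSSL methods save their encoders slightly
# differently, so inspect the missing/unexpected keys after loading.
encoder = resnet18(num_classes=10)  # CIFAR-10-style classifier head
ckpt = torch.load("acl_cifar10.pt", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt)  # some checkpoints wrap the weights
missing, unexpected = encoder.load_state_dict(state_dict, strict=False)
print("missing keys:", missing)
print("unexpected keys:", unexpected)
```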

To provide a comprehensive benchmark, we welcome contributions of new self-supervised robust pre-training methods to our repo!

Fine-Tuning

Here, we provide two kinds of fine-tuning methods:

  • Vanilla Fine-tuning: You need to specify the hyper-parameters, such as the learning rate and the batch size, for each pre-trained model. We provide all the scripts for fine-tuning and evaluation in the file run_vanilla_tune.sh.
  • AutoLoRa (Xu et al., ICLR'24): A parameter-free, automated robust fine-tuning framework. You DO NOT need to search for appropriate hyper-parameters. We provide all the scripts for fine-tuning and evaluation in the file run_autolora.sh.

To provide a comprehensive benchmark, we welcome contributions of new robust fine-tuning methods to our repo!

We consider the following three fine-tuning modes (a minimal PyTorch sketch follows the list):

  • Standard linear fine-tuning (SLF): standardly fine-tune only the classifier while keeping the encoder frozen.
  • Adversarial linear fine-tuning (ALF): adversarially fine-tune only the classifier while keeping the encoder frozen.
  • Adversarial full fine-tuning (AFF): adversarially fine-tune both the encoder and the classifier.
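The sketch below illustrates how the three modes differ in a generic PyTorch training step. It is not the repository's actual training script: the PGD settings, the encoder/classifier split, and the helper names (pgd_attack, finetune_step) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """L-inf PGD used for adversarial fine-tuning (illustrative settings)."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0.0, 1.0)
    return x_adv.detach()

def finetune_step(encoder, classifier, x, y, mode, optimizer):
    """One fine-tuning step under SLF / ALF / AFF (hypothetical helper).

    The optimizer is assumed to be built over the trainable parameters
    (classifier only for SLF/ALF, encoder + classifier for AFF).
    """
    model = nn.Sequential(encoder, classifier)
    # SLF and ALF freeze the encoder; AFF updates it as well.
    encoder.requires_grad_(mode == "AFF")
    # ALF and AFF train on adversarial examples; SLF trains on clean inputs.
    inputs = pgd_attack(model, x, y) if mode in ("ALF", "AFF") else x
    loss = F.cross_entropy(model(inputs), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```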

Requirements

  • Python 3.8
  • PyTorch 1.13
  • CUDA 11.6
  • AutoAttack (Install AutoAttack via pip install git+https://github.com/fra31/auto-attack)
  • robustbench (Install robustbench via pip install git+https://github.com/RobustBench/robustbench.git)
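As a reference for how these two packages are typically combined, here is a minimal evaluation sketch. The placeholder model, the number of test examples, and the epsilon/batch-size choices are illustrative assumptions rather than this repository's evaluation settings.

```python
import torch
from torchvision.models import resnet18
from autoattack import AutoAttack
from robustbench.data import load_cifar10

device = "cuda" if torch.cuda.is_available() else "cpu"

# Placeholder model: in practice, load a fine-tuned encoder + classifier
# that returns logits for CIFAR-10 inputs in [0, 1].
model = resnet18(num_classes=10).to(device).eval()

# robustbench ships a convenient CIFAR-10 test-set loader.
x_test, y_test = load_cifar10(n_examples=1000)
x_test, y_test = x_test.to(device), y_test.to(device)

# Standard AutoAttack evaluation at the common L-inf budget of 8/255
# (the epsilon and batch size here are illustrative choices).
adversary = AutoAttack(model, norm="Linf", eps=8/255, version="standard")
x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=128)
```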

References

If you find our code useful, please cite the following papers by copying the BibTeX entries below.

@inproceedings{
xu2024autolora,
title={AutoLoRa: A Parameter-Free Automated Robust Fine-Tuning Framework},
author={Xilie Xu and Jingfeng Zhang and Mohan Kankanhalli},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024},
url={https://openreview.net/forum?id=09xFexjhqE}
}

@inproceedings{
xu2023efficient,
title={Efficient Adversarial Contrastive Learning via Robustness-Aware Coreset Selection},
author={Xilie Xu and Jingfeng Zhang and Feng Liu and Masashi Sugiyama and Mohan Kankanhalli},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=fpzA8uRA95}
}

@inproceedings{
xu2023enhancing,
title={Enhancing Adversarial Contrastive Learning via Adversarial Invariant Regularization},
author={Xilie Xu and Jingfeng Zhang and Feng Liu and Masashi Sugiyama and Mohan Kankanhalli},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=zuXyQsXVLF}
}

@inproceedings{luo2023DynACL,
    title={Rethinking the Effect of Data Augmentation in Adversarial Contrastive Learning},
    author={Rundong Luo and Yifei Wang and Yisen Wang},
    booktitle={The Eleventh International Conference on Learning Representations},
    year={2023},
    url={https://openreview.net/forum?id=0qmwFNJyxCL}
}

@inproceedings{zhang2022DeACL,
  title={Decoupled Adversarial Contrastive Learning for Self-supervised Adversarial Robustness},
  author={Zhang, Chaoning and Zhang, Kang and Zhang, Chenshuang and Niu, Axi and Feng, Jiu and Yoo, Chang D and Kweon, In So},
  booktitle={European Conference on Computer Vision},
  pages={725--742},
  year={2022},
  organization={Springer}
}

@inproceedings{yu2022AInfoNCE,
  title={Adversarial Contrastive Learning via Asymmetric InfoNCE},
  author={Yu, Qiying and Lou, Jieming and Zhan, Xianyuan and Li, Qizhang and Zuo, Wangmeng and Liu, Yang and Liu, Jingjing},
  booktitle={European Conference on Computer Vision},
  pages={53--69},
  year={2022},
  organization={Springer}
}

@article{fan2021AdvCL,
  title={When Does Contrastive Learning Preserve Adversarial Robustness from Pretraining to Finetuning?},
  author={Fan, Lijie and Liu, Sijia and Chen, Pin-Yu and Zhang, Gaoyuan and Gan, Chuang},
  journal={Advances in Neural Information Processing Systems},
  volume={34},
  pages={21480--21492},
  year={2021}
}

@article{jiang2020ACL,
  title={Robust pre-training by adversarial contrastive learning},
  author={Jiang, Ziyu and Chen, Tianlong and Chen, Ting and Wang, Zhangyang},
  journal={Advances in Neural Information Processing Systems},
  volume={33},
  pages={16199--16210},
  year={2020}
}

@article{kim2020RoCL,
  title={Adversarial self-supervised contrastive learning},
  author={Kim, Minseon and Tack, Jihoon and Hwang, Sung Ju},
  journal={Advances in Neural Information Processing Systems},
  volume={33},
  pages={2983--2994},
  year={2020}
}

Contact

Please contact xuxilie@comp.nus.edu.sg and jingfeng.zhang@auckland.ac.nz if you have any questions about the code.