
Mixup Toolbox and Benchmark for Supervised, Semi- and Self-Supervised Learning


OpenMixup

News

  • OpenMixup v0.2.0 is now released, which supports new features as described in #3.
  • OpenMixup v0.1.3 is now available (code refactoring finished and bugs fixed). It steadily supports ViTs, self-supervised methods (e.g., MoCo V3 and MAE), and online analysis (kNN metrics and visualization). Rebuilding OpenMixup is required (install mmcv-full to support ViTs). More results are provided in the Model Zoos.
  • OpenMixup v0.1.1 is released, supporting various backbones (ConvNets and ViTs), various mixup methods (e.g., PuzzleMix, AutoMix, SAMix, etc.), various classification datasets, benchmarks (model_zoo), config file generation, and FP16 training (Apex or MMCV).

Introduction

The master branch works with PyTorch 1.6 or higher.

OpenMixup is an open-source toolbox for supervised, semi- and self-supervised representation learning based on PyTorch, especially for mixup-related methods.

What does this repo do?

Learning discriminative visual representations efficiently that facilitate downstream tasks is one of the fundamental problems in computer vision. Data mixing techniques substantially improve the generalization of deep neural networks (DNNs) in various scenarios. Since mixup techniques are used as augmentations or auxiliary tasks in a wide range of cases, this repo focuses on mixup-related methods for Supervised, Self- and Semi-Supervised Representation Learning. Thus, we name this repo OpenMixup.
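To make the data-mixing idea concrete, below is a minimal, self-contained sketch of vanilla input mixup (the convex combination that more advanced methods such as PuzzleMix, AutoMix, and SAMix build on). It uses plain PyTorch and is only an illustration, not OpenMixup's actual API:

    import numpy as np
    import torch
    import torch.nn.functional as F

    def mixup_batch(x, y, alpha=1.0):
        """Vanilla input mixup: blend each sample with a randomly paired one.

        The mixing ratio lambda is drawn from Beta(alpha, alpha); both labels
        are returned so the loss can be interpolated with the same coefficient.
        """
        lam = float(np.random.beta(alpha, alpha))
        index = torch.randperm(x.size(0))
        mixed_x = lam * x + (1.0 - lam) * x[index]
        return mixed_x, y, y[index], lam

    # Usage inside a training step (logits come from any classifier):
    # mixed_x, y_a, y_b, lam = mixup_batch(images, labels, alpha=0.2)
    # logits = model(mixed_x)
    # loss = lam * F.cross_entropy(logits, y_a) + (1 - lam) * F.cross_entropy(logits, y_b)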

Major features

This repo will be continuously updated over the next two months. Please watch us for the latest updates!

Change Log

Please refer to CHANGELOG.md for details and release history.

[2022-04-08] Configs reorganized and new methods supported in OpenMixup v0.2.0.
[2022-03-31] OpenMixup v0.2.0 is released.

Installation

Please refer to INSTALL.md for installation and dataset preparation.

Get Started

Please see Getting Started for the basic usage of OpenMixup (based on MMSelfSup). Then, see the tutorials for more technical details (based on MMClassification).

Benchmark and Model Zoo

Model Zoos and lists of Awesome Mixups have been released and will be updated in the next two months. Checkpoints and training logs will be updated soon!

License

This project is released under the Apache 2.0 license.

Acknowledgement

  • OpenMixup is an open source project for mixup methods created by researchers in CAIRI AI LAB. We encourage researchers interested in mixup methods to contribute to OpenMixup!
  • This repo borrows the architecture design and part of the code from MMSelfSup and MMClassification.

Citation

If you find this project useful in your research, please consider citing our repo:

@misc{2022openmixup,
    title={{OpenMixup}: Open Mixup Toolbox and Benchmark for Visual Representation Learning},
    author={Li, Siyuan and Liu, Zicheng and Wu, Di and Li, Stan Z.},
    howpublished = {\url{https://github.com/Westlake-AI/openmixup}},
    year={2022}
}

Contributors

For now, the direct contributors include: Siyuan Li (@Lupin1998), Zicheng Liu (@pone7), and Di Wu (@wudi-bu). We thank the contributors of MMSelfSup and MMClassification.

Contact

This repo is currently maintained by Siyuan Li (lisiyuan@westlake.edu.cn) and Zicheng Liu (liuzicheng@westlake.edu.cn).