News
- `GaussianBlur` is switched from OpenCV to PIL, and MoCo v2 training speed doubles! (time/iter 0.35s --> 0.16s; SimCLR and BYOL are also affected. A sketch of the PIL-based blur follows this list.)
- OpenSelfSup now supports Mixed Precision Training (apex AMP)!
- A bug of MoCo v2 has been fixed and now the results are reproducible.
- OpenSelfSup now supports BYOL!
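For reference, here is a minimal sketch of a PIL-based Gaussian blur augmentation in the MoCo v2 style (the class name and sigma range follow the common MoCo v2 recipe; treat this as an illustration, not necessarily the repo's exact code):

```python
import random

from PIL import ImageFilter


class GaussianBlur(object):
    """PIL-based Gaussian blur with a randomly sampled sigma, as in the MoCo v2 recipe."""

    def __init__(self, sigma=(0.1, 2.0)):
        self.sigma = sigma

    def __call__(self, img):
        # Sample a sigma per call so each augmented view gets a different blur strength.
        sigma = random.uniform(self.sigma[0], self.sigma[1])
        return img.filter(ImageFilter.GaussianBlur(radius=sigma))
```

The mixed-precision support follows apex's standard AMP usage pattern; a minimal sketch (the toy model and `opt_level="O1"` are assumptions for illustration, not necessarily the repo's defaults):

```python
import torch
from apex import amp  # requires NVIDIA apex

model = torch.nn.Linear(8, 1).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
# Wrap model and optimizer so forward/backward run in mixed precision.
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

loss = model(torch.randn(4, 8).cuda()).mean()
with amp.scale_loss(loss, optimizer) as scaled_loss:
    scaled_loss.backward()  # loss scaling avoids fp16 gradient underflow
optimizer.step()
```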
The master branch works with PyTorch 1.1 or higher.
OpenSelfSup is an open-source unsupervised representation learning toolbox based on PyTorch.
Among Unsupervised Learning, Self-Supervised Learning and Representation Learning, this repo focuses on their intersection, i.e., Unsupervised Representation Learning, of which Self-Supervised Representation Learning is the major branch. Since in many cases we do not strictly distinguish between Self-Supervised Representation Learning and Unsupervised Representation Learning, we still name this repo OpenSelfSup.
- All methods in one repository

For a comprehensive comparison on all benchmarks, refer to MODEL_ZOO.md. Most of the self-supervised pretraining methods are under the batch_size=256, epochs=200 setting.

| Method | VOC07 SVM (best layer) | ImageNet (best layer) |
| --- | --- | --- |
| ImageNet | 87.17 | 76.17 |
| Random | 30.54 | 16.21 |
| Relative-Loc | 64.78 | 49.31 |
| Rotation-Pred | 67.38 | 54.99 |
| DeepCluster | 74.26 | 57.71 |
| NPID | 74.50 | 56.61 |
| ODC | 78.42 | 57.70 |
| MoCo | 79.18 | 60.60 |
| MoCo v2 | 84.26 | 67.69 |
| SimCLR | 78.95 | 61.57 |
| BYOL (epoch=300) | 86.58 | 72.35 |
- Flexibility & Extensibility

OpenSelfSup follows a code architecture similar to MMDetection's while being even more flexible, since it integrates various self-supervised tasks: classification, joint clustering and feature learning, contrastive learning, tasks with a memory bank, etc.

For existing methods in this repo, you only need to modify config files to adjust hyper-parameters (see the sketch below). It is also simple to design your own methods; please refer to GETTING_STARTED.md.
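As an illustration of the config-driven workflow, here is a hypothetical MMDetection-style config fragment (the exact field names and values are illustrative assumptions; see the repo's configs for the real schema):

```python
# hypothetical_pretrain_config.py -- illustrative fragment only
optimizer = dict(type='SGD', lr=0.03, weight_decay=0.0001, momentum=0.9)
lr_config = dict(policy='CosineAnnealing', min_lr=0.0)
total_epochs = 200  # matches the batch_size=256, epochs=200 setting above
data = dict(imgs_per_gpu=32)  # batch size = imgs_per_gpu * num_gpus
```

Changing the learning rate or schedule then requires no code changes, only a config edit.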
- Efficiency

All methods support multi-machine multi-GPU distributed training.
- Standardized Benchmarks

We standardize the benchmarks, including logistic regression, SVM / Low-shot SVM on linearly probed features, semi-supervised classification, and object detection. Below are the settings of these benchmarks (a minimal SVM cost-sweep sketch follows the table).

| Benchmarks | Setting | Remarks |
| --- | --- | --- |
| ImageNet Linear Classification (Multi) | goyal2019scaling | Evaluate different layers. |
| ImageNet Linear Classification (Last) | MoCo | Evaluate the last layer after global pooling. |
| Places205 Linear Classification | goyal2019scaling | Evaluate different layers. |
| ImageNet Semi-Sup Classification | | |
| PASCAL VOC07 SVM | goyal2019scaling | Costs="1.0,10.0,100.0" to save evaluation time w/o change of results. |
| PASCAL VOC07 Low-shot SVM | goyal2019scaling | Costs="1.0,10.0,100.0" to save evaluation time w/o change of results. |
| PASCAL VOC07+12 Object Detection | MoCo | |
| COCO17 Object Detection | MoCo | |
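To make the restricted cost grid concrete, here is a minimal sketch of a linear SVM sweep over Costs="1.0,10.0,100.0", using scikit-learn and random stand-in features (the repo's actual benchmark trains SVMs on frozen-backbone features and derives from fair_self_supervision_benchmark; see Acknowledgements):

```python
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.svm import LinearSVC

# Stand-in data; the real benchmark uses frozen-backbone features of VOC07 images.
rng = np.random.RandomState(0)
train_feats, train_labels = rng.randn(200, 2048), rng.randint(0, 2, 200)
val_feats, val_labels = rng.randn(100, 2048), rng.randint(0, 2, 100)

best_cost, best_acc = None, -1.0
for cost in (1.0, 10.0, 100.0):  # the restricted cost grid from the table above
    clf = LinearSVC(C=cost).fit(train_feats, train_labels)
    acc = accuracy_score(val_labels, clf.predict(val_feats))
    if acc > best_acc:
        best_cost, best_acc = cost, acc
print(f"best cost: {best_cost}, val accuracy: {best_acc:.3f}")
```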
Please refer to CHANGELOG.md for details and release history.
[2020-10-14] OpenSelfSup v0.3.0 is released with some bugs fixed and support of new features.
[2020-06-26] OpenSelfSup v0.2.0 is released with benchmark results and support of new features.
[2020-06-16] OpenSelfSup v0.1.0 is released.

Please refer to INSTALL.md for installation and dataset preparation.
Please see GETTING_STARTED.md for the basic usage of OpenSelfSup.
Please refer to MODEL_ZOO.md for a comprehensive set of pre-trained models and benchmarks.
This project is released under the Apache 2.0 license.
If you use this toolbox in your research, please consider citing:
@inproceedings{zhan2020online,
  title={Online Deep Clustering for Unsupervised Representation Learning},
  author={Zhan, Xiaohang and Xie, Jiahao and Liu, Ziwei and Ong, Yew-Soon and Loy, Chen Change},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={6688--6697},
  year={2020}
}
- This repo borrows the architecture design and part of the code from MMDetection.
- The implementation of MoCo and the detection benchmark borrow the code from moco.
- The SVM benchmark borrows the code from fair_self_supervision_benchmark.
- openselfsup/third_party/clustering.py is borrowed from deepcluster.
We encourage researchers interested in Self-Supervised Learning to contribute to OpenSelfSup. Your contributions, including implementing or transferring new methods to OpenSelfSup, performing experiments, reproducing results, parameter studies, etc., will be recorded in MODEL_ZOO.md. For now, the contributors include: Xiaohang Zhan (@XiaohangZhan), Jiahao Xie (@Jiahao000), Enze Xie (@xieenze), Zijian He (@scnuhealthy).
This repo is currently maintained by Xiaohang Zhan (@XiaohangZhan), Jiahao Xie (@Jiahao000) and Enze Xie (@xieenze).