This project provides a strong single-stage baseline for Long-Tailed Classification (on the ImageNet-LT and Long-Tailed CIFAR-10/-100 datasets), Detection, and Instance Segmentation (on the LVIS dataset). It is also a PyTorch implementation of the NeurIPS 2020 paper Long-Tailed Classification by Keeping the Good and Removing the Bad Momentum Causal Effect, which proposes a general solution for removing the bad momentum causal effect in a variety of Long-Tailed Recognition tasks. The code is organized into three folders:
- The classification folder supports long-tailed classification on the ImageNet-LT and Long-Tailed CIFAR-10/CIFAR-100 datasets.
- The lvis_old folder (deprecated) supports long-tailed object detection and instance segmentation on the LVIS V0.5 dataset, built on top of mmdet V1.1.
- The latest version of the long-tailed detection and instance segmentation code is in the lvis1.0 folder. Since both LVIS V0.5 and mmdet V1.1 are no longer available from their homepages, we re-implemented our method on mmdet V2.4 using LVIS V1.0 annotations.
If my open-source projects have inspired you, your sponsorship would be a great help to my subsequent open-source work. ❤️🙏
If you want to present our work at a group meeting, introduce it to your friends, or clarify ambiguous parts of the paper, feel free to use our slides. They come in two versions: a one-hour full version and a five-minute short version.
If you are interested in a more general long-tailed classification setting that considers both class-wise (inter-class) and attribute-wise (intra-class) imbalance, please refer to our ECCV 2022 paper Invariant Feature Learning for Generalized Long-Tailed Classification and the corresponding project.
The classification part works with lower versions of the following requirements. For detection and instance segmentation (mmdet V2.4), however, I tested several lower versions of Python and PyTorch, all of which failed. If you want to try other environments, please check the mmdetection updates.
- PyTorch >= 1.6.0
- Python >= 3.7.0
- CUDA >= 10.1
- torchvision >= 0.7.0
- gcc version >= 5.4.0
```bash
conda create -n longtail pip python=3.7 -y
source activate longtail
conda install pytorch torchvision cudatoolkit=10.1 -c pytorch
pip install pyyaml tqdm matplotlib scikit-learn h5py
```
```bash
# download the project
git clone https://github.com/KaihuaTang/Long-Tailed-Recognition.pytorch.git
cd Long-Tailed-Recognition.pytorch

# the following part is only needed to build mmdetection
cd lvis1.0
pip install mmcv-full
pip install mmlvis
pip install -r requirements/build.txt
pip install -v -e .  # or "python setup.py develop"
```
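To sanity-check that an environment meets the version floors listed above, a quick script like the following can help. This is our own sketch, not part of the repo; the `meets_floor` helper is a hypothetical convenience:

```python
import sys

def meets_floor(version_str, floor):
    """Numerically compare dotted version strings, e.g. '1.10.2' >= '1.6.0'.
    Local build suffixes such as '+cu101' are stripped first."""
    parse = lambda s: tuple(int(p) for p in s.split('+')[0].split('.')[:3])
    return parse(version_str) >= parse(floor)

assert sys.version_info[:2] >= (3, 7), "Python >= 3.7 required"
try:
    import torch, torchvision
    print("torch >= 1.6.0:", meets_floor(torch.__version__, "1.6.0"))
    print("torchvision >= 0.7.0:", meets_floor(torchvision.__version__, "0.7.0"))
    print("CUDA build:", torch.version.cuda)
except ImportError:
    print("PyTorch/torchvision not installed yet")
```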
When we wrote the paper, we were using LVIS V0.5 and mmdet V1.1 for our long-tailed instance segmentation experiments, but both have since been deprecated. If you want to reproduce our results on LVIS V0.5, you will have to build an mmdet V1.1 environment yourself and use the code in the lvis_old folder.
ImageNet-LT is a long-tailed subset of the original ImageNet; you can download it from its homepage. After downloading, change the data_root of 'ImageNet' in the ./classification/main.py file.
When you run the code for the first time, our dataloader automatically downloads CIFAR-10/-100. Set the data_root in ./classification/main.py to the path where you want all CIFAR data stored.
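For reference, the data_root setting looks roughly like the following. This is a hypothetical sketch of a config fragment; the exact variable names and keys in ./classification/main.py may differ:

```python
# In ./classification/main.py (hypothetical sketch; check the actual file):
data_root = {
    'ImageNet': '/path/to/ImageNet-LT',  # where the downloaded ImageNet-LT lives
    'CIFAR10':  '/path/to/cifar',        # CIFAR-10 is auto-downloaded here
    'CIFAR100': '/path/to/cifar',        # CIFAR-100 is auto-downloaded here
}
```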
The Large Vocabulary Instance Segmentation (LVIS) dataset uses the COCO 2017 train, validation, and test image sets. If you have already downloaded the COCO images, you only need to download the LVIS annotations. Note that the LVIS val set contains images from the COCO 2017 train split in addition to the COCO 2017 val split.
You need to put all the annotations and images under ./data/LVIS like this:
```
data
  |-- LVIS
      |-- lvis_v1_train.json
      |-- lvis_v1_val.json
      |-- images
          |-- train2017
          |   |-- ... (images)
          |-- test2017
          |   |-- ... (images)
          |-- val2017
              |-- ... (images)
```
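If the COCO 2017 images are already on disk, one way to build this layout without copying them is to symlink the image folders. This is a sketch: `COCO_DIR` is a placeholder for your own COCO path, and the two annotation files must still be downloaded from the LVIS website and placed manually:

```shell
# Create the folders; put lvis_v1_train.json / lvis_v1_val.json under data/LVIS.
mkdir -p data/LVIS/images

# Reuse already-downloaded COCO 2017 images via symlinks instead of copying.
COCO_DIR=/path/to/coco   # placeholder: your existing COCO 2017 root
ln -s "$COCO_DIR/train2017" data/LVIS/images/train2017
ln -s "$COCO_DIR/test2017"  data/LVIS/images/test2017
ln -s "$COCO_DIR/val2017"   data/LVIS/images/val2017
```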
For long-tailed classification, please go to [link]
For long-tailed object detection and instance segmentation, please go to [link]
- Compared with the previous state of the art, Decoupling, our method requires only one-stage training.
- Most existing methods for long-tailed problems use the data distribution to conduct re-sampling or re-weighting during training, which rests on the inappropriate assumption that the future test distribution is known before learning starts. In contrast, the proposed method does not need to know the data distribution during training; we only need an average feature for inference after the model is trained.
- Our method can be easily transferred to other tasks. We outperform the previous state-of-the-art methods Decoupling, BBN, and OLTR in image classification, and we achieve better results than EQL (winner of the 2019 LVIS challenge) in long-tailed object detection and instance segmentation (under the same settings, with even fewer GPUs).
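The distribution-free correction described above can be sketched in a few lines. This is a hypothetical simplification of the paper's counterfactual TDE inference, not the repo's exact code: during training we track an exponential moving average of the features (the accumulated "momentum" direction d), and at test time we subtract the part of the prediction explained by that direction:

```python
import torch
import torch.nn.functional as F

class TDEClassifier(torch.nn.Module):
    """Sketch of counterfactual (TDE) inference: remove the effect of the
    accumulated 'momentum' feature direction at test time.
    Hypothetical simplification; see the paper/repo for the exact form."""

    def __init__(self, feat_dim, num_classes, alpha=1.0, mu=0.9):
        super().__init__()
        self.fc = torch.nn.Linear(feat_dim, num_classes, bias=False)
        self.alpha = alpha  # strength of the removed bad effect
        self.mu = mu        # EMA momentum for the feature direction
        self.register_buffer("d", torch.zeros(feat_dim))

    def forward(self, x, training=True):
        if training:
            # accumulate the exponential moving average of batch features
            self.d = self.mu * self.d + (1 - self.mu) * x.detach().mean(0)
            return self.fc(x)
        # inference: subtract the logits explained by the momentum direction,
        # scaled by how strongly x aligns with it (cosine similarity)
        d_hat = F.normalize(self.d, dim=0).unsqueeze(0)       # (1, feat_dim)
        cos = F.cosine_similarity(x, d_hat, dim=1, eps=1e-8)  # (batch,)
        return self.fc(x) - self.alpha * cos.unsqueeze(1) * self.fc(d_hat)
```

Note that nothing here depends on the class frequencies: the only extra state is the running feature average, which is why no re-sampling or re-weighting schedule is needed during training.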
If you find our paper or this project helpful to your research, please kindly consider citing our paper in your publications.
```bibtex
@inproceedings{tang2020longtailed,
  title={Long-Tailed Classification by Keeping the Good and Removing the Bad Momentum Causal Effect},
  author={Tang, Kaihua and Huang, Jianqiang and Zhang, Hanwang},
  booktitle={NeurIPS},
  year={2020}
}
```