H2T

This is the example code for H2T.


Feature Fusion from Head to Tail for Long-Tailed Visual Recognition


This repo contains the sample code for our AAAI 2024 paper, Feature Fusion from Head to Tail for Long-Tailed Visual Recognition. The core code is in methods.py: H2T.
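
For orientation, the sketch below illustrates the channel-grafting idea behind H2T: a fraction of the channels of a tail-biased (class-balanced) feature map is replaced by the corresponding channels of a head-biased (instance-balanced) feature map before the classifier is fine-tuned. This is a minimal sketch based on the paper, not the implementation in methods.py; the function name h2t_fuse, the ratio argument rho, and the choice of which channels to swap are illustrative assumptions.

import torch

def h2t_fuse(feat_tail, feat_head, rho=0.5):
    """Replace a fraction `rho` of the tail-branch channels with the
    corresponding channels of the head-branch feature map.
    Both tensors are expected to have shape [B, C, H, W] or [B, C]."""
    assert feat_tail.shape == feat_head.shape
    k = int(rho * feat_tail.size(1))    # number of channels grafted from the head branch
    fused = feat_tail.clone()
    fused[:, :k] = feat_head[:, :k]     # illustrative choice: swap the first k channels
    return fused

# Rough usage during stage-2 fine-tuning (two samplers, frozen stage-1 backbone):
# feat_tail = backbone(x_class_balanced)      # tail-biased batch
# feat_head = backbone(x_instance_balanced)   # head-biased batch
# logits = classifier(h2t_fuse(feat_tail, feat_head, rho=0.5))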

To-do list:

  • The camera-ready version of the paper, including the appendix, has been updated! [link]
  • Slides and the poster are released. [Slides (pptx), Slides (pdf), Poster]
  • Code for CE loss on CIFAR-100-LT is released.
  • Code for other datasets and baseline methods is somewhat messy 😆😆😆. Detailed running instructions and the organized code for more datasets and baselines will be released later. (This repository reserves some interfaces for other loss functions and backbones, which have not yet been integrated into the training and configuration files.)

Training

Stage-1:

(e.g. CIFAR100-LT, imbalance ratio = 100, CrossEntropy Loss, MixUp, training from scratch)

python train_stage1.py --cfg ./config/cifar100_imb001_stage1_ce_mixup.yaml
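
Stage-1 combines cross-entropy with standard mixup. As a reference, a generic mixup training step looks like the sketch below; the Beta parameter alpha and its integration with the training loop are set by the config file, so treat this as an illustration rather than the exact code in train_stage1.py.

import numpy as np
import torch
import torch.nn.functional as F

def mixup_ce_step(model, images, labels, alpha=1.0):
    """One mixup + cross-entropy step: mix two images with a Beta(alpha, alpha)
    coefficient and weight the CE loss of both label sets accordingly."""
    lam = float(np.random.beta(alpha, alpha))
    index = torch.randperm(images.size(0), device=images.device)
    mixed = lam * images + (1.0 - lam) * images[index]
    logits = model(mixed)
    return lam * F.cross_entropy(logits, labels) + (1.0 - lam) * F.cross_entropy(logits, labels[index])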

Stage-2:

(e.g. CIFAR100-LT, imbalance ratio = 100, CrossEntropy Loss, H2T)

python train_stage2.py --cfg ./config/cifar100_imb001_stage2_ce_H2T.yaml resume /path/to/checkpoint/stage1

The saved folder (including logs, code, and checkpoints) is organized as follows.

H2T
├── saved
│   ├── modelname_date
│   │   ├── ckps
│   │   │   ├── current.pth.tar
│   │   │   └── model_best.pth.tar
│   │   ├── logs
│   │   │   └── modelname.txt
│   │   └── codes
│   │       └── relevant code without data
│   ...   
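
The checkpoints under ckps/ can be inspected directly with torch.load. The snippet below is a generic way to peek at what was saved; the exact dictionary keys (e.g. 'state_dict', 'epoch') depend on how train_stage*.py writes them, so check them before loading into a model.

import torch

ckpt_path = "saved/modelname_date/ckps/model_best.pth.tar"  # path follows the layout above
checkpoint = torch.load(ckpt_path, map_location="cpu")

# Inspect the stored keys before assuming a particular layout.
if isinstance(checkpoint, dict):
    print(list(checkpoint.keys()))

# A common convention is a 'state_dict' entry that can be restored with:
# model.load_state_dict(checkpoint["state_dict"])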

Evaluation

To evaluate a trained model, run:

(e.g. CIFAR100-LT, imbalance ratio = 100, CrossEntropy Loss, Stage-1)

python eval-modified.py --cfg ./config/cifar100_imb001_stage1_ce_mixup.yaml resume /path/to/checkpoint/stage1

(e.g. CIFAR100-LT, imbalance ratio = 100, CrossEntropy Loss, Stage-2)

python eval.py --cfg ./config/cifar100_imb001_stage2_ce_H2T.yaml resume /path/to/checkpoint/stage2
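
For reference, the overall top-1 accuracy reported below can be computed as in the following sketch; eval.py may additionally break results down per class or into many-/medium-/few-shot groups, which is common practice on long-tailed benchmarks.

import torch

@torch.no_grad()
def top1_accuracy(model, loader, device="cuda"):
    """Overall top-1 accuracy over a dataloader of (image, label) batches."""
    model.eval()
    correct, total = 0, 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.size(0)
    return 100.0 * correct / total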

Results and Models

1) CIFAR-10-LT and CIFAR-100-LT

  • Stage-1 (CE with mixup):

    Dataset                Top-1 Accuracy   Model
    CIFAR-100-LT IF=50     45.40%           link
    CIFAR-100-LT IF=100    39.55%           link
    CIFAR-100-LT IF=200    36.01%           link

  • Stage-2 (CE with H2T):

    Dataset                Top-1 Accuracy   Model
    CIFAR-100-LT IF=50     52.95%           link
    CIFAR-100-LT IF=100    47.80%           link
    CIFAR-100-LT IF=200    43.95%           link

Note: I reran Stage-2 with the configs from this repository and got slightly better results than those reported in the AAAI paper.
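
For context, IF (imbalance factor) is the ratio between the largest and smallest class sizes, and config names such as imb001 presumably encode its inverse (0.01 = 1/100). CIFAR-100-LT is commonly built with the exponential profile sketched below; the dataset code in this repository may differ in rounding details.

def cifar_lt_class_counts(n_max=500, num_classes=100, imb_factor=100):
    """Class i keeps n_max * (1/imb_factor) ** (i / (num_classes - 1)) samples,
    so the head class has n_max images and the tail class n_max / imb_factor."""
    return [int(n_max * (1.0 / imb_factor) ** (i / (num_classes - 1)))
            for i in range(num_classes)]

counts = cifar_lt_class_counts()
print(counts[0], counts[-1])  # 500 and 5 for IF=100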

Misc

If you find our paper and repo useful, please cite our paper:

@inproceedings{li2024feature,
  title={Feature Fusion from Head to Tail for Long-Tailed Visual Recognition},
  author={Li, Mengke and Hu, Zhikai and Lu, Yang and Lan, Weichao and Cheung, Yiu-ming and Huang, Hui},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={38},
  number={12},
  pages={13581--13589},
  year={2024}
}

Acknowledgment

Our code architecture is adapted from MisLAS. Many thanks to the authors.