Awesome-Backbones

Integrated deep learning models for image classification | A project for learning, comparing, and customizing backbones


Awesome backbones for image classification


Before You Start

  • If training results are poor, start by tuning the learning rate and batch size; these two hyperparameters largely determine convergence. Next, try turning off image augmentation (especially on small datasets), since some augmentation methods can contaminate the data.

  How do you remove the augmentation? For example, in the efficientnetv2-b0 config file, train_pipeline can be changed to the following:

train_pipeline = [
    dict(type='LoadImageFromFile'),
    # crop/resize the image to the 192x192 training input size
    dict(
        type='RandomResizedCrop',
        size=192,
        efficientnet_style=True,
        interpolation='bicubic'),
    # img_norm_cfg (mean/std normalization values) is defined earlier in the same config file
    dict(type='Normalize', **img_norm_cfg),
    dict(type='ImageToTensor', keys=['img']),
    dict(type='ToTensor', keys=['gt_label']),
    dict(type='Collect', keys=['img', 'gt_label'])
]

  If your dataset has already been resized to the input shape the network expects, the resize step can be removed as well, as in the sketch below.
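
  A minimal sketch of that variant, assuming the images on disk already match the network's input size (img_norm_cfg still comes from the same config file):

train_pipeline = [
    dict(type='LoadImageFromFile'),
    # crop/resize removed: images are assumed to already have the target shape
    dict(type='Normalize', **img_norm_cfg),
    dict(type='ImageToTensor', keys=['img']),
    dict(type='ToTensor', keys=['gt_label']),
    dict(type='Collect', keys=['img', 'gt_label'])
]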

Changelog

2023.08.05

  • Added TinyViT (pre-trained weights do not match), DeiT3, EdgeNeXt, and RevVisionTransformer

2023.03.07

  • Added MobileViT, DaViT, RepLKNet, BEiT, EVA, MixMIM, and EfficientNetV2

2022.11.20

  • Added an option to choose whether the test set is used as the validation set. If it is not, a validation set is split from the training set by a given ratio: one fold of the training set is picked at random to serve as validation (similar in spirit to k-fold, though not true k-fold; a small modification would give full k-fold). See the Training tutorial for details; a minimal sketch of the splitting idea follows below.
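
  The following is a generic, hypothetical illustration of that splitting idea only, not the repository's actual implementation; the real option and its parameters are described in the Training tutorial.

import random

def split_train_val(samples, ratio=0.2, seed=0):
    """Split a validation fold off the training samples.

    Illustration only: samples are shuffled, cut into round(1 / ratio)
    folds, and one fold is picked at random as the validation set,
    i.e. a single split of a k-fold scheme rather than full k-fold.
    """
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    num_folds = max(1, round(1 / ratio))
    fold_size = len(shuffled) // num_folds
    fold_idx = rng.randrange(num_folds)
    start, end = fold_idx * fold_size, (fold_idx + 1) * fold_size
    val = shuffled[start:end]
    train = shuffled[:start] + shuffled[end:]
    return train, val

# e.g. 100 samples with ratio=0.2 -> 80 for training, 20 for validation
train, val = split_train_val(range(100), ratio=0.2)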

2022.11.06

  • Added HorNet, EfficientFormer, Swin Transformer V2, and MViT models

Tested Environment

  • PyTorch 1.7.1+
  • Python 3.6+

Resources

Dataset: flower dataset (extraction code: 0zat)
Video tutorials: click to jump
AI technology discussion groups: Group 1: 78174903, Group 3: 584723646

Quick Start

python tools/single_test.py datas/cat-dog.png models/mobilenet/mobilenet_v3_small.py --classes-map datas/imageNet1kAnnotation.txt
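
Here datas/cat-dog.png is the image to classify, models/mobilenet/mobilenet_v3_small.py is the model config, and --classes-map supplies the class-index-to-name file (ImageNet-1k in this example); swap in your own image and config to test other models.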

Tutorials

Models

Pre-trained Weights

Each model family is listed with its available pre-trained weights (None means no pre-trained weight is provided):

LeNet5: None
AlexNet: None
VGG: VGG-11, VGG-13, VGG-16, VGG-19, VGG-11-BN, VGG-13-BN, VGG-16-BN, VGG-19-BN
ResNet: ResNet-18, ResNet-34, ResNet-50, ResNet-101, ResNet-152
ResNetV1C: ResNetV1C-50, ResNetV1C-101, ResNetV1C-152
ResNetV1D: ResNetV1D-50, ResNetV1D-101, ResNetV1D-152
ResNeXt: ResNeXt-50, ResNeXt-101, ResNeXt-152
SEResNet: SEResNet-50, SEResNet-101
SEResNeXt: None
RegNet: RegNetX-400MF, RegNetX-800MF, RegNetX-1.6GF, RegNetX-3.2GF, RegNetX-4.0GF, RegNetX-6.4GF, RegNetX-8.0GF, RegNetX-12GF
MobileNetV2: MobileNetV2
MobileNetV3: MobileNetV3-Small, MobileNetV3-Large
ShuffleNetV1: ShuffleNetV1
ShuffleNetV2: ShuffleNetV2
EfficientNet: EfficientNet-B0, EfficientNet-B1, EfficientNet-B2, EfficientNet-B3, EfficientNet-B4, EfficientNet-B5, EfficientNet-B6, EfficientNet-B7, EfficientNet-B8
RepVGG: RepVGG-A0, RepVGG-A1, RepVGG-A2, RepVGG-B0, RepVGG-B1, RepVGG-B1g2, RepVGG-B1g4, RepVGG-B2, RepVGG-B2g4, RepVGG-B3, RepVGG-B3g4, RepVGG-D2se
Res2Net: Res2Net-50-14w-8s, Res2Net-50-26w-8s, Res2Net-101-26w-4s
ConvNeXt: ConvNeXt-Tiny, ConvNeXt-Small, ConvNeXt-Base, ConvNeXt-Large, ConvNeXt-XLarge
HRNet: HRNet-W18, HRNet-W30, HRNet-W32, HRNet-W40, HRNet-W44, HRNet-W48, HRNet-W64
ConvMixer: ConvMixer-768/32, ConvMixer-1024/20, ConvMixer-1536/20
CSPNet: CSPDarkNet50, CSPResNet50, CSPResNeXt50
Swin Transformer: tiny-224, small-224, base-224, large-224, base-384, large-384
Vision Transformer: vit_base_p16_224, vit_base_p32_224, vit_large_p16_224, vit_base_p16_384, vit_base_p32_384, vit_large_p16_384
Transformer in Transformer: TNT-small
MLP Mixer: base_p16, large_p16
DeiT: DeiT-tiny, DeiT-tiny distilled, DeiT-small, DeiT-small distilled, DeiT-base, DeiT-base distilled, DeiT-base 384px, DeiT-base distilled 384px
Conformer: Conformer-tiny-p16, Conformer-small-p32, Conformer-small-p16, Conformer-base-p16
T2T-ViT: T2T-ViT_t-14, T2T-ViT_t-19, T2T-ViT_t-24
Twins: PCPVT-small, PCPVT-base, PCPVT-large, SVT-small, SVT-base, SVT-large
PoolFormer: PoolFormer-S12, PoolFormer-S24, PoolFormer-S36, PoolFormer-M36, PoolFormer-M48
DenseNet: DenseNet121, DenseNet161, DenseNet169, DenseNet201
Visual Attention Network (VAN): VAN-Tiny, VAN-Small, VAN-Base, VAN-Large
Wide-ResNet: WRN-50, WRN-101
HorNet: HorNet-Tiny, HorNet-Tiny-GF, HorNet-Small, HorNet-Small-GF, HorNet-Base, HorNet-Base-GF, HorNet-Large, HorNet-Large-GF, HorNet-Large-GF384
EfficientFormer: efficientformer-l1, efficientformer-l3, efficientformer-l7
Swin Transformer V2: tiny-256 window 8, tiny-256 window 16, small-256 window 8, small-256 window 16, base-256 window 8, base-256 window 16, large-256 window 16, large-384 window 24
MViTv2: MViTv2-Tiny, MViTv2-Small, MViTv2-Base, MViTv2-Large
MobileViT: MobileViT-XXSmall, MobileViT-XSmall, MobileViT-Small
DaViT: DaViT-T, DaViT-S, DaViT-B
RepLKNet: RepLKNet-31B-224, RepLKNet-31B-384, RepLKNet-31L-384, RepLKNet-XL
BEiT: BEiT-base
EVA: EVA-G-p14-224, EVA-G-p14-336, EVA-G-p14-560, EVA-G-p16-224, EVA-L-p14-224, EVA-L-p14-196, EVA-L-p14-336
MixMIM: mixmim-base
EfficientNetV2: EfficientNetV2-b0, EfficientNetV2-b1, EfficientNetV2-b2, EfficientNetV2-b3, EfficientNetV2-s, EfficientNetV2-m, EfficientNetV2-l, EfficientNetV2-xl
DeiT3: deit3_small_p16, deit3_small_p16_384, deit3_base_p16, deit3_base_p16_384, deit3_medium_p16, deit3_large_p16, deit3_large_p16_384, deit3_huge_p16
EdgeNeXt: edgenext-base, edgenext-small, edgenext-X-small, edgenext-XX-small
RevVisionTransformer: revvit-small, revvit-base

Other Projects I Maintain

Reference

@misc{2020mmclassification,
    title={OpenMMLab's Image Classification Toolbox and Benchmark},
    author={MMClassification Contributors},
    howpublished = {\url{https://github.com/open-mmlab/mmclassification}},
    year={2020}
}