Official PyTorch Implementation
Emanuel Ben-Baruch, Tal Ridnik, Nadav Zamir, Asaf Noy, Itamar Friedman, Matan Protter, Lihi Zelnik-Manor
DAMO Academy, Alibaba Group
Abstract
In a typical multi-label setting, a picture contains on average few positive labels, and many negative ones. This positive-negative imbalance dominates the optimization process, and can lead to under-emphasizing gradients from positive labels during training, resulting in poor accuracy. In this paper, we introduce a novel asymmetric loss ("ASL"), which operates differently on positive and negative samples. The loss dynamically down-weights and hard-thresholds easy negative samples, while also discarding possibly mislabeled samples. We demonstrate how ASL can balance the probabilities of different samples, and how this balancing is translated to better mAP scores. With ASL, we reach state-of-the-art results on multiple popular multi-label datasets: MS-COCO, Pascal-VOC, NUS-WIDE and Open Images. We also demonstrate ASL applicability to other tasks, such as single-label classification and object detection. ASL is effective, easy to implement, and does not increase the training time or complexity.
In a newly released article, we share pretrained weights for different models that dramatically outperform standard pretraining on downstream tasks, including multi-label ones.
In the article, we also compare multi-label pretraining with ASL on ImageNet-21K to pretraining with standard loss functions (cross-entropy and focal loss).
Thanks to a great collaboration with @GhostWnd, we now provide a script for fully reproducing the article results, and a modern multi-label training codebase is finally available to the community.
Some questions are asked repeatedly in the issues section. Make sure to review them before opening a new issue:
- Regarding combining ASL with other techniques, see link
- Regarding implementation of asymmetric clipping, see link
- Regarding disable_torch_grad_focal_loss option, see link
- Regarding squish vs. crop resizing, see link
- Regarding training tricks, see link
- How to apply ASL to your own dataset, see link
In this PyTorch file, we provide implementations of our new loss function, ASL, which can serve as a drop-in replacement for standard loss functions (cross-entropy and focal loss).
For the multi-label case (sigmoids), the two implementations are:
class AsymmetricLoss(nn.Module)
class AsymmetricLossOptimized(nn.Module)
The two losses are bit-accurate. However, AsymmetricLossOptimized() implements ASL in a more optimized (and more complicated) way that minimizes memory allocations and GPU transfers, and favors in-place operations.
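For intuition, here is a minimal, self-contained sketch of the multi-label ASL computation. It is an illustration of the paper's formulas, not the repository's exact code; gamma_neg, gamma_pos, and clip correspond to the paper's γ-, γ+, and probability margin m:

```python
import torch

def asl_sketch(logits, targets, gamma_neg=4, gamma_pos=1, clip=0.05, eps=1e-8):
    """Illustrative multi-label ASL: asymmetric focusing plus
    asymmetric probability clipping of negative samples."""
    p = torch.sigmoid(logits)
    # Asymmetric clipping: shift negative probabilities down by margin `clip`,
    # so very easy negatives (p < clip) contribute exactly zero loss.
    p_neg = (p - clip).clamp(min=0)
    # Positive part: focal-style down-weighting with the (small) gamma_pos.
    loss_pos = targets * (1 - p).pow(gamma_pos) * torch.log(p.clamp(min=eps))
    # Negative part: stronger down-weighting with gamma_neg, applied to the
    # shifted probability so easy negatives are both down-weighted and thresholded.
    loss_neg = (1 - targets) * p_neg.pow(gamma_neg) * torch.log((1 - p_neg).clamp(min=eps))
    return -(loss_pos + loss_neg).sum()
```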
For the single-label case (softmax), the implementation is called:
class ASLSingleLabel(nn.Module)
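A typical usage sketch follows. The import path and the hyperparameter values are assumptions based on the repository layout and the paper's recommended settings; adjust them if your checkout differs:

```python
import torch
# Import path is an assumption; point it to wherever losses.py lives in your checkout.
from src.loss_functions.losses import AsymmetricLoss, ASLSingleLabel

# Multi-label: independent sigmoid per class, binary targets.
criterion = AsymmetricLoss(gamma_neg=4, gamma_pos=1, clip=0.05)
logits = torch.randn(8, 80)                     # batch of 8, 80 classes (e.g. MS-COCO)
targets = torch.randint(0, 2, (8, 80)).float()
loss = criterion(logits, targets)

# Single-label: softmax over classes, integer class indices as targets.
criterion_sl = ASLSingleLabel(gamma_neg=4, gamma_pos=0)
logits_sl = torch.randn(8, 1000)
labels = torch.randint(0, 1000, (8,))
loss_sl = criterion_sl(logits_sl, labels)
```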
In this link, we provide pre-trained models on various datasets.
Thanks to an external contribution from @hellbell, we now provide validation code that reproduces the article results on MS-COCO:
python validate.py \
--model_name=tresnet_l \
--model_path=./models_local/MS_COCO_TRresNet_L_448_86.6.pth
We provide inference code that demonstrates how to load our model, pre-process an image, and run actual inference. Example run on the MS-COCO model (after downloading the relevant model):
python infer.py \
--dataset_type=MS-COCO \
--model_name=tresnet_l \
--model_path=./models_local/MS_COCO_TRresNet_L_448_86.6.pth \
--pic_path=./pics/000000000885.jpg \
--input_size=448
which will output the detected classes for the image.
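If you want to embed inference in your own code rather than call infer.py, the following hedged sketch shows the general pattern. The backbone here is a torchvision placeholder so the snippet is self-contained; in practice you would build the repo's TResNet-L and load the downloaded checkpoint:

```python
import torch
import torchvision.transforms as transforms
from PIL import Image
from torchvision.models import resnet50  # placeholder backbone, NOT the repo's TResNet

# Stand-in network so the sketch runs as-is; swap in TResNet-L plus the
# downloaded checkpoint for real MS-COCO results.
model = resnet50(num_classes=80)  # 80 classes for MS-COCO
model.eval()

# Pre-processing consistent with --input_size=448.
tfm = transforms.Compose([
    transforms.Resize((448, 448)),
    transforms.ToTensor(),
])
img = tfm(Image.open('./pics/000000000885.jpg').convert('RGB')).unsqueeze(0)

# Multi-label inference: independent sigmoid per class, then threshold.
with torch.no_grad():
    probs = torch.sigmoid(model(img)).squeeze(0)
detected = (probs > 0.5).nonzero(as_tuple=True)[0].tolist()
print('detected class indices:', detected)
```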
Example run of OpenImages model:
python infer.py \
--dataset_type=OpenImages \
--model_name=tresnet_l \
--model_path=./models_local/Open_ImagesV6_TRresNet_L_448.pth \
--pic_path=./pics/000000000885.jpg \
--input_size=448
@misc{benbaruch2020asymmetric,
      title={Asymmetric Loss For Multi-Label Classification},
      author={Emanuel Ben-Baruch and Tal Ridnik and Nadav Zamir and Asaf Noy and Itamar Friedman and Matan Protter and Lihi Zelnik-Manor},
      year={2020},
      eprint={2009.14119},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
Feel free to contact us with any questions or issues - Emanuel Ben-Baruch (emanuel.benbaruch@alibaba-inc.com) or Tal Ridnik (tal.ridnik@alibaba-inc.com).