Confidence-Aware Calibration and Scoring Functions for Curriculum Learning

Despite the great success of state-of-the-art deep neural networks, several studies have reported that models are over-confident in their predictions, indicating miscalibration. Label Smoothing has been proposed as a solution to the over-confidence problem and works by softening hard targets during training, typically by distributing part of the probability mass from a one-hot label uniformly to all other labels. However, neither model nor human confidence in a label is likely to be uniformly distributed in this manner, since some labels are more likely to be confused than others. In this paper we integrate notions of model confidence and human confidence into label smoothing, yielding Model Confidence LS and Human Confidence LS respectively, to achieve better model calibration and generalization. To further enhance generalization, we show how our model and human confidence scores can be applied to curriculum learning, a training strategy inspired by presenting easier tasks before harder ones. A higher model or human confidence score indicates a more recognisable, and therefore easier, sample, and can thus be used as a scoring function to rank samples in curriculum learning. We evaluate our proposed methods with four state-of-the-art architectures for image and text classification tasks, using datasets with multi-rater label annotations by humans. We report that integrating model or human confidence information in label smoothing and curriculum learning improves both model performance and model calibration.
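The following is a minimal NumPy sketch contrasting standard uniform label smoothing with a confidence-aware variant, and showing how a per-sample confidence score could rank samples for curriculum learning. The exact formulations of Model Confidence LS and Human Confidence LS are not given in the abstract; mixing the one-hot target with a per-sample human (or model) label distribution instead of the uniform distribution is an illustrative assumption, as is using the confidence on the true class as the curriculum scoring function.

    # Sketch only: confidence-aware smoothing and curriculum ordering are
    # assumed formulations, not the paper's exact definitions.
    import numpy as np

    def uniform_label_smoothing(one_hot, alpha=0.1):
        """Standard LS: move a fraction alpha of the mass to a uniform distribution."""
        k = one_hot.shape[-1]
        return (1.0 - alpha) * one_hot + alpha / k

    def confidence_label_smoothing(one_hot, conf_dist, alpha=0.1):
        """Assumed variant: smooth towards a per-sample confidence distribution
        (e.g. the empirical human label distribution) instead of uniform."""
        return (1.0 - alpha) * one_hot + alpha * conf_dist

    def curriculum_order(confidence_scores):
        """Rank samples from easiest (highest confidence) to hardest."""
        return np.argsort(-confidence_scores)

    # Toy usage: 3 samples, 4 classes.
    one_hot = np.eye(4)[[0, 2, 1]]
    human_dist = np.array([[0.7, 0.2, 0.1, 0.0],
                           [0.1, 0.1, 0.6, 0.2],
                           [0.0, 0.5, 0.4, 0.1]])
    targets = confidence_label_smoothing(one_hot, human_dist, alpha=0.2)
    # Confidence on the true class -> easy-to-hard ordering.
    easy_to_hard = curriculum_order((one_hot * human_dist).sum(axis=1))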

Datasets

CIFAR10-H provides a full human label distribution for each of the 10,000 images in the CIFAR10 test set. It comprises around 500,000 crowdsourced human labels, with roughly 50 annotators labelling each image on average.
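As a hedged sketch of how such annotations can be turned into per-image human label distributions and a scalar human-confidence score, the snippet below assumes a (10000, 10) array of per-class annotation counts; this input format is an assumption for illustration, not the dataset's official file layout.

    # Assumes `counts[i, c]` = number of annotators who labelled image i as class c.
    import numpy as np

    def human_label_distribution(counts):
        """Normalize per-image annotation counts into a probability distribution."""
        return counts / counts.sum(axis=1, keepdims=True)

    def human_confidence(counts, true_labels):
        """Fraction of annotators who chose the ground-truth class for each image."""
        dist = human_label_distribution(counts)
        return dist[np.arange(len(true_labels)), true_labels]

    # Toy example: 2 images, 10 classes, ~50 annotations each.
    counts = np.zeros((2, 10), dtype=float)
    counts[0, 3], counts[0, 5] = 45, 5    # strong agreement -> easy sample
    counts[1, 3], counts[1, 5] = 26, 24   # heavily confused -> hard sample
    scores = human_confidence(counts, true_labels=np.array([3, 3]))  # [0.9, 0.52]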

The WikiArt dataset captures the interrelations between visual art, the text describing it, and human emotion. It is relatively small, consisting of 4,105 pieces of art selected from the WikiArt.org collection across twenty-two style categories (cubism, baroque, etc.), with roughly 200 items per category so that the dataset is balanced. The image of each artwork and its corresponding title are independently annotated through crowdsourcing using a 20-way classification of the emotion (contentment, amusement, sadness, etc.) evoked by the artwork or its title.