Noisy_label_papers

This repository is used to record current noisy label papers published in mainstream ML and CV conferences and journals.

The following list is arranged mainly by publication venue. For chronological order, please see Chronological Order.

Survey

  • TNNLS - 2014 - Classification in the presence of label noise: a survey
  • ESANN - 2014 - A comprehensive introduction to label noise (condensed version of the preceding survey)
  • Arxiv - 2019 - Image Classification with Deep Learning in the Presence of Noisy Labels: A Survey
  • Arxiv - 2020 - Label Noise Types and Their Effects on Deep Learning
  • Arxiv - 2020 - Learning from Noisy Labels with Deep Neural Networks: A Survey
  • Arxiv - 2020 - A Survey of Label-noise Representation Learning: Past, Present and Future
  • Arxiv - 2020 - A Survey on Deep Learning with Noisy Labels: How to train your model when you cannot trust on the annotations?

ICML

  • Arxiv - 2017 - Learning with bounded instance- and label-dependent label noise
  • ICML - 2020 - Learning with Bounded Instance- and Label-dependent Label Noise
  • ICML - 2020 - Does label smoothing mitigate label noise?
  • ICML - 2020 - Error-Bounded Correction of Noisy Labels
  • ICML - 2020 - Deep k-NN for Noisy Labels
  • ICML - 2020 - Searching to Exploit Memorization Effect in Learning from Noisy Labels
  • ICML - 2020 - Beyond Synthetic Noise: Deep Learning on Controlled Noisy Labels
  • ICML - 2020 - SIGUA: Forgetting May Make Learning with Noisy Labels More Robust
  • ICML - 2020 - Peer Loss Functions: Learning from Noisy Labels without Knowing Noise Rates
  • ICML - 2020 - Improving Generalization by Controlling Label-Noise Information in Neural Network Weights
  • ICML - 2020 - Normalized Loss Functions for Deep Learning with Noisy Labels
  • ICML - 2020 - Training Binary Neural Networks through Learning with Noisy Supervision
  • ICML-reject - 2020 - IMAE for Noise-Robust Learning: Mean Absolute Error Does Not Treat Examples Equally and Gradient Magnitude’s Variance Matters
  • ICML - 2020 - Strength from Weakness: Fast Learning Using Weak Supervision
  • ICML - 2020 - Understanding and Mitigating the Tradeoff between Robustness and Accuracy
  • ICML - 2020 - Overfitting in adversarially robust deep learning
  • ICML - 2020 - On the Noisy Gradient Descent that Generalizes as SGD
  • ICML - 2019 - Using Pre-Training Can Improve Model Robustness and Uncertainty
  • ICML - 2019 - Learning with bad training data via iterative trimmed loss minimization
  • ICML - 2019 - SELFIE: Refurbishing Unclean Samples for Robust Deep Learning
  • ICML - 2019 - Unsupervised Label Noise Modeling and Loss Correction
  • ICML - 2019 - On Symmetric Losses for Learning from Corrupted Labels
  • ICML - 2019 - Understanding and Utilizing Deep Neural Networks Trained with Noisy Labels
  • ICML - 2019 - Robust Inference via Generative Classifiers for Handling Noisy Labels
  • ICML - 2019 - Fast Rates for a kNN Classifier Robust to Unknown Asymmetric Label Noise
  • ICML - 2019 - Combating Label Noise in Deep Learning using Abstention
  • ICML - 2019 - How does Disagreement Help Generalization against Label Corruption?
  • ICML - 2018 - MentorNet: Learning Data-Driven Curriculum for Very Deep Neural Networks on Corrupted Labels
  • ICML - 2018 - Dimensionality-Driven Learning with Noisy Labels
  • ICML - 2018 - Learning to reweight examples for Robust Deep learning
  • ICML - 2018 - Does Distributionally Robust Supervised Learning Give Robust Classifiers?
  • ICML - 2016 - Mixture proportion estimation via kernel embeddings of distributions
  • ICML - 2016 - Robust Probabilistic Modeling with Bayesian Data Reweighting
  • ICML - 2016 - Loss factorization, weakly supervised learning and label noise robustness
  • ICML - 2015 - Learning from Corrupted Binary Labels via Class-Probability Estimation
  • ICML - 2012 - Learning to label aerial images from noisy data
  • ICML - 2008 - Random classification noise defeats all convex potential boosters
  • MLJ - 2010 - Random classification noise defeats all convex potential boosters
  • ICML - 2008 - Deep learning via semi-supervised embedding
  • ICML - 2003 - Eliminating class noise in large datasets
  • ICML - 2001 - Estimating a Kernel Fisher Discriminant in the Presence of Label Noise

NIPS

  • NIPS-reject - 2020 - Which Strategies Matter for Noisy Label Classification? Insight into Loss and Uncertainty
  • NIPS-reject - 2020 - Analysis of Softmax Approximation for Deep Classifiers under Input-Dependent Label Noise
  • NIPS-reject - 2020 - Weak and Strong Gradient Directions: Explaining Memorization, Generalization, and Hardness of Examples at Scale
  • NIPS - 2020 - What Neural Networks Memorize and Why: Discovering the Long Tail via Influence Estimation
  • NIPS - 2020 - Early-Learning Regularization Prevents Memorization of Noisy Labels
  • NIPS - 2020 - Coresets for Robust Training of Deep Neural Networks against Noisy Labels
  • NIPS - 2020 - A Topological Filter for Learning with Label Noise
  • NIPS - 2020 - Identifying Mislabeled Data using the Area Under the Margin Ranking
  • NIPS - 2020 - Dual T: Reducing Estimation Error for Transition Matrix in Label-noise Learning
  • NIPS - 2020 - Parts-dependent Label Noise: Towards Instance-dependent Label Noise
  • NIPS - 2020 - Rethinking Importance Weighting for Deep Learning under Distribution Shift
  • NIPS - 2020 - What Do Neural Networks Learn When Trained With Random Labels?
  • NIPS - 2019 - When does label smoothing help?
  • NIPS-reject - 2019 - Understanding generalization of deep neural networks trained with noisy labels
  • NIPS - 2019 - Robust bi-tempered logistic loss based on Bregman divergence
  • NIPS - 2019 - Meta-Weight-Net: Learning an Explicit Mapping For Sample Weighting
  • NIPS - 2019 - Combinatorial Inference against Label Noise
  • NIPS - 2019 - L_DMI: A Novel Information-theoretic Loss Function for Training Deep Nets Robust to Label Noise
  • NIPS - 2019 - Are Anchor Points Really Indispensable in Label-Noise Learning?
  • NIPS - 2018 - Co-teaching: Robust training of deep neural networks with extremely noisy labels
  • NIPS - 2018 - Using Trusted Data to Train Deep Networks on Labels Corrupted by Severe Noise
  • NIPS - 2018 - Robustness of conditional GANs to noisy labels
  • NIPS - 2018 - Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels
  • NIPS - 2018 - Masking: A New Perspective of Noisy Supervision
  • NIPS-reject-ACCESS - 2018 - Limited Gradient Descent: Learning with Noisy Labels
  • NIPS - 2017 - Decoupling "when to update" from "how to update"
  • NIPS - 2017 - Toward Robustness against Label Noise in Training Deep Discriminative Neural Networks
  • NIPS-workshop - 2017 - Learning to Learn from Weak Supervision by Full Supervision
  • NIPS - 2017 - Active bias: Training more accurate neural networks by emphasizing high variance samples
  • NIPS - 2016 - beta-risk: a New Surrogate Risk for Learning from Weakly Labeled Data
  • NIPS - 2015 - Learning with Symmetric Label Noise: The Importance of Being Unhinged
  • NIPS - 2013 - Learning with Noisy Labels
  • NIPS - 2010 - Exploiting weakly-labeled Web images to improve object classification: a domain adaptation approach
  • NIPS - 2009 - On the design of loss functions for classification: theory, robustness to outliers, and SavageBoost

MLJ-JMLR

  • MLJ - 2018 - Learning from Binary Labels with Instance-Dependent Corruption
  • JMLR - 2018 - Cost-Sensitive Learning with Noisy Labels
  • NeuroComputing - 2015 - Making risk minimization tolerant to label noise
  • Cybernetics - 2013 - Noise tolerance under risk minimization
  • TNNLS - 2016 - Multiclass learning with partially corrupted labels
  • TPAMI - 2015 - Classification with Noisy Labels by Importance Reweighting
  • TPAMI - 2019 - Learning from Large-scale Noisy Web Data with Ubiquitous Reweighting for Image Classification
  • TIP - 2018 - Deep learning from noisy image labels with quality embedding
  • MLJ - 1988 - Learning from noisy examples
  • JMLR - 2018 - A Theory of Learning with Corrupted Labels
  • STOC - 2017 - Learning from untrusted Data
  • COLT - 2013 - Classification with asymmetric label noise: Consistency and maximal denoising
  • ECML-PKDD - 2012 - Label-noise robust logistic regression and its applications
  • AIR - 2010 - A study of the effect of different types of noise on the precision of supervised learning techniques
  • AIR - 2004 - Class noise vs attribute noise: A quantitative study
  • JASA - 2006 - Convexity, classification, and risk bounds
  • JAIR - 1999 - Identifying mislabeled training data
  • JMLR - 2010 - Composite Binary Losses
  • JMLR - 2006 - Consistency of Multiclass Empirical Risk Minimization Methods Based on Convex Loss
  • ECML-PKDD - 2014 - Consistency of Losses for Learning from Weak Labels
  • Arxiv - 2018 - On the Resistance of Nearest Neighbor to Random Noisy Labels
  • COLT - 2019 - Classification with unknown class conditional label noise on non-compact feature space
  • KDD - 2019 - Learning from Incomplete and Inaccurate Supervision
  • KDD - 2018 - Active Deep Learning to Tune Down the Noise in Labels

ICLR

  • ICLR-accept - 2021 - MoPro: Webly Supervised Learning with Momentum Prototypes
  • ICLR-spotlight - 2021 - Noise against noise: stochastic label noise helps combat inherent label noise
  • ICLR-poster - 2021 - When Optimizing f-Divergence is Robust with Label Noise
  • ICLR-poster - 2021 - Learning with Instance-Dependent Label Noise: A Sample Sieve Approach
  • ICLR-spotlight - 2021 - Learning with feature dependent label noise: a progressive approach
  • ICLR-accept - 2021 - Robust Curriculum Learning: from clean label detection to noisy label self-correction
  • ICLR-accept - 2021 - Robust early-learning: Hindering the memorization of noisy labels
  • ICLR-poster - 2021 - In Defense of Pseudo-Labeling: An Uncertainty-Aware Pseudo-label Selection Framework for Semi-Supervised Learning
  • ICLR-poster - 2021 - Multiscale Score Matching for Out-of-Distribution Detection
  • ICLR-spotlight - 2021 - Sharpness-aware Minimization for Efficiently Improving Generalization
  • ICLR-spotlight - 2021 - How Benign is Benign Overfitting?
  • ICLR-Oral - 2021 - Theoretical Analysis of Self-Training with Deep Networks on Unlabeled Data
  • ICLR-spotlight - 2021 - How Does Mixup Help With Robustness and Generalization?
  • ICLR-spotlight - 2021 - Understanding the role of importance weighting for deep learning
  • ICLR-withdraw - 2021 - Contrast to Divide: self-supervised pre-training for learning with noisy labels
  • ICLR-reject - 2021 - Learning from Noisy Data with Robust Representation Learning
  • ICLR-reject - 2021 - Robust Temporal Ensembling
  • Arxiv - 2020 - Multi-Class Classification from Noisy-Similarity-Labeled Data
  • ICLR-reject - 2021 - Class2Simi: A New Perspective on Learning with Label Noise
  • ICLR-reject - 2021 - Me-momentum: Extracting Hard Confident Examples From Noisily Labeled Data
  • ICLR-reject - 2021 - Catastrophic Fisher Explosion: Early Phase Fisher Matrix Impacts Generalization
  • ICLR-reject - 2021 - Provable Robust Learning under Agnostic Corrupted Supervision
  • ICLR-reject - 2021 - An information-theoretic framework for learning models of instance-independent label noise
  • ICLR-reject - 2021 - Robust Learning via Golden Symmetric Loss of (un)Trusted Labels
  • ICLR-reject - 2021 - Bayesian Metric Learning for Robust Training of Deep Models under Noisy Labels
  • ICLR-withdraw - 2021 - Searching for Robustness: Loss Learning for Noisy Classification Tasks
  • ICLR-reject - 2021 - Robust Meta-learning with Noise via Eigen-Reptile
  • ICLR-reject - 2021 - Robust Loss Functions for Complementary Labels Learning
  • ICLR-withdraw - 2021 - ProSelfLC: Progressive Self Label Correction for Training Robust Deep Neural Networks
  • ICLR-withdraw - 2021 - Derivative Manipulation for General Example Weighting
  • ICCV-reject - 2019 - Improved Mean Absolute Error for Learning Meaningful Patterns from Abnormal Training Data
  • ICLR-withdraw - 2021 - A Spectral Perspective of Neural Networks Robustness to Label Noise
  • ICLR-reject - 2021 - Implicit Regularization Effects of Unbiased Random Label Noises with SGD
  • ICLR-withdraw - 2021 - Learning Image Labels On-the-fly for Training Robust Classification Models
  • ICLR-reject - 2021 - Towards Noise-resistant Object Detection with Noisy Annotations
  • ICLR-withdraw - 2021 - Exploring Sub-Pseudo Labels for Learning from Weakly-Labeled Web Videos
  • ICLR-reject - 2021 - Towards Robust Graph Neural Networks against Label Noise
  • Arxiv - 2021 - Unified Robust Training for Graph Neural Networks against Label Noise
  • ICLR-workshop - 2019 - Learning Graph Neural Networks With Noisy Labels
  • ICLR-reject - 2020 - Graph convolutional networks for learning with few clean and many noisy labels
  • ICLR-reject - 2020 - Transfer Active Learning For Graph Neural Networks
  • ICLR-reject - 2020 - Learning in Confusion: Batch Active Learning with Noisy Oracle
  • ICLR-reject - 2020 - Rethinking deep active learning: Using unlabeled data at model training
  • ICLR-reject - 2020 - Combining Mixmatch And Active Learning For Better Accuracy With Fewer Labels
  • ICLR - 2020 - Mutual Mean-Teaching: Pseudo Label Refinery for Unsupervised Domain Adaptation on Person Re-identification
  • ICLR-reject - 2020 - Wildly Unsupervised Domain Adaptation and Its Powerful and Efficient Solution
  • ICLR-reject - 2020 - Semi-Supervised Boosting via Self Labelling
  • ICLR - 2020 - Robust training with ensemble consensus
  • ICLR - 2020 - DivideMix: Learning with Noisy Labels as Semi-supervised Learning
  • ICLR - 2020 - SELF: Learning to Filter Noisy Labels with Self-Ensembling
  • Arxiv - 2020 - Robust Learning under Label Noise with Iterative Noise-Filter
  • ICLR-accept - 2020 - Simple and Effective Regularization Methods for Training on Noisily Labeled Data with Generalization Guarantee
  • aistats - 2020 - Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks
  • ICLR-accept - 2020 - Can gradient clipping mitigate label noise?
  • ICLR-accept - 2020 - Curriculum Loss: Robust Learning and Generalization against Label Corruption
  • ICLR-withdraw - 2020 - IEG: Robust neural net training with severe label noises
  • ICLR-reject - 2020 - Is The Label Trustful: Training Better Deep Learning Model Via Uncertainty Mining Net
  • ICLR-reject - 2020 - Detecting Noisy Training Data with Loss Curves
  • ICLR-reject - 2020 - Confidence Scores Make Instance-dependent Label-noise Learning Possible
  • ICLR-reject - 2020 - Searching to Exploit Memorization Effect in Learning from Corrupted Labels
  • ICLR-reject - 2020 - Synthetic vs Real: Deep Learning on Controlled Noise
  • ICLR-reject - 2020 - Meta Label Correction for Learning with Weak Supervision
  • ICLR-reject - 2020 - Prestopping: How Does Early Stopping Help Generalization Against Label Noise?
  • ICLR-reject - 2020 - Peer Loss Functions: Learning from Noisy Labels without Knowing Noise Rates
  • ICLR-reject - 2020 - A Simple Approach to the Noisy Label Problem Through the Gambler's Loss
  • ICLR-reject - 2020 - Deep k-NN for Noisy Labels
  • ICLR - 2019 - Benchmarking neural network robustness to common corruptions and perturbations
  • ICLR-reject - 2019 - An Energy-Based Framework for Arbitrary Label Noise Correction
  • ICLR-workshop-reject - 2019 - A Simple Yet Effective Baseline For Robust Deep Learning With Noisy Labels
  • Arxiv - 2019 - Countering Noisy Labels by Learning from Auxiliary Clean Labels
  • ICLR-reject - 2019 - Pumpout: A Meta Approach For Robustly Training Deep Neural Networks With Noisy Labels
  • ICLR-reject - 2019 - ChoiceNet: Robust Learning By Revealing Output Correlations
  • ICLR - 2018 - mixup: Beyond empirical risk minimization
  • ICLR - 2018 - Learning From Noisy Singly-labeled Data
  • ICLR-workshop - 2018 - How Do Neural Networks Overcome Label Noise?
  • ICLR - 2017 - Training deep neural-networks using a noise adaptation layer
  • ICLR - 2017 - A baseline for detecting misclassified and out-of-distribution examples in neural networks
  • ICLR - 2016 - Auxiliary Image Regularization for Deep CNNs with Noisy Labels
  • ICLR-workshop - 2015 - Training Deep Neural Networks on Noisy Labels with Bootstrapping
  • Arxiv - 2014 - Learning from Noisy Labels with Deep Neural Networks
  • ICLR-workshop - 2015 - Training convolutional networks with noisy labels
  • ICML-workshop - 2020 - How Does Early Stopping Help Generalization Against Label Noise?
  • project - 2017 - Self-Error-Correcting Convolutional Neural Network for Learning with Noisy Labels

AAAI

  • AAAI - 2021 - Learning to Purify Noisy Labels via Meta Soft Label Corrector
  • AAAI - 2021 - Meta Label Correction for Learning with Weak Supervision
  • AAAI - 2021 - Analysing the Noise Model Error for Realistic Noisy Label Data
  • AAAI - 2021 - Learning from Noisy Labels with Complementary Loss Functions
  • AAAI - 2021 - Learning with Group Noise
  • AAAI - 2021 - Tackling Instance-Dependent Label Noise via a Universal Probabilistic Model
  • AAAI - 2021 - Beyond Class-Conditional Assumption: A Primary Attempt to Combat Instance-Dependent Label Noise
  • AAAI - 2021 - Robustness of Accuracy Metric and its Inspirations in Learning with Noisy Labels
  • AAAI - 2020 - Deep Discriminative CNN with Temporal Ensembling for Ambiguously-Labeled Image Classification
  • AAAI - 2020 - Self-Paced Robust Learning for Leveraging Clean Labels in Noisy Data
  • AAAI - 2020 - Partial Multi-label Learning with Noisy Label Identification
  • AAAI - 2020 - Coupled-view Deep Classifier Learning from Multiple Noisy Annotators
  • AAAI - 2019 - How Does Knowledge of the AUC Constrain the Set of Possible Ground-Truth Labelings?
  • AAAI - 2019 - Single-Label Multi-Class Image Classification by Deep Logistic Regression
  • AAAI - 2019 - Adversarial Label Learning
  • AAAI - 2019 - Learning to Localize Objects with Noisy Labeled Instances
  • AAAI - 2019 - Exploiting Class Learnability in Noisy Data
  • AAAI - 2019 - Safeguarded Dynamic Label Regression for Noisy Supervision
  • AAAI - 2019 - Learning from Web Data Using Adversarial Discriminative Neural Networks for Fine-Grained Classification
  • AAAI - 2018 - Label Distribution Learning by Exploiting Label Correlations
  • AAAI - 2018 - Label Distribution Learning by Exploiting Sample Correlations Locally
  • AAAI - 2017 - Robust Loss Functions under Label Noise for Deep Neural Networks
  • AAAI - 2016 - Risk Minimization in the Presence of Label Noise
  • AAAI - 2016 - Learning with Marginalized Corrupted Features and Labels Together
  • AAAI - 2016 - Robust Semi-Supervised Learning through Label Aggregation
  • AAAI - 2015 - Spectral Label Refinement for Noisy and Missing Text Labels
  • AAAI - 2015 - OMNI-Prop: Seamless Node Classification on Arbitrary Label Correlation
  • AAAI - 2015 - Modelling class noise with symmetric and asymmetric distributions
  • AAAI - 2014 - Robust Distance Metric Learning in the Presence of Label Noise
  • AAAI - 2014 - Multilabel Classification with Label Correlations and Missing Labels
  • AAAI - 2013 - Imbalanced Multiple Noisy Labeling for Supervised Learning
  • AAAI - 2012 - Multi-Label Learning by Exploiting Label Correlations Locally

IJCAI

  • IJCAI - 2020 - Learning from Noisy Similar and Dissimilar Data
  • IJCAI - 2020 - A Bi-level Formulation for Label Noise Learning with Spectral Cluster Discovery
  • IJCAI - 2020 - Label Distribution for Learning with Noisy Labels
  • IJCAI - 2020 - Can Cross Entropy Loss be Robust to Label Noise?
  • IJCAI - 2019 - Multiple Noisy Label Distribution Propagation for Crowdsourcing
  • IJCAI - 2019 - Learning Sound Events from Webly Labeled Data
  • IJCAI - 2003 - Evaluating Classifiers by Means of Test Data with Noisy Labels

CVPR

  • CVPR - 2021 - Partially View-aligned Representation Learning with Noise-robust Contrastive Loss
  • CVPR - 2021 - Multi-Objective Interpolation Training for Robustness to Label Noise
  • CVPR - 2021 - Noise-resistant Deep Metric Learning with Ranking-based Instance Selection
  • CVPR - 2021 - Joint Negative and Positive Learning for Noisy Labels
  • CVPR-oral - 2021 - A Second-Order Approach to Learning with Instance-Dependent Label Noise
  • CVPR - 2021 - MetaCorrection: Domain-aware Meta Loss Correction for Unsupervised Domain Adaptation in Semantic Segmentation
  • CVPR - 2021 - AutoDO: Robust AutoAugment for Biased Data with Label Noise via Scalable Probabilistic Implicit Differentiation
  • CVPR - 2021 - Jo-SRC: A Contrastive Approach for Combating Noisy Labels
  • CVPR - 2021 - Augmentation Strategies for Learning with Noisy Labels
  • CVPR-submit - 2021 - Decoupling Representation and Classifier for Noisy Label Learning
  • CVPR - 2020 - IEG: Robust Neural Network Training to Tackle Severe Label Noise
  • CVPR - 2020 - Distilling Effective Supervision from Severe Label Noise
  • CVPR - 2020 - Training Noise-Robust Deep Neural Networks via Meta-Learning
  • CVPR - 2020 - Task Agnostic Robust Learning on Corrupt Outputs by Correlation-Guided Mixture Density Networks
  • CVPR - 2020 - Combating Noisy Labels by Agreement: A Joint Training Method with Co-Regularization
  • CVPR - 2019 - Label-Noise Robust Generative Adversarial Networks
  • CVPR - 2019 - Learning From Noisy Labels by Regularized Estimation of Annotator Confusion
  • CVPR - 2019 - Learning to Learn from Noisy Labeled Data
  • CVPR - 2019 - Weakly Supervised Image Classification through Noise Regularization
  • CVPR - 2019 - MetaCleaner: Learning to Hallucinate Clean Representations for Noisy-Labeled Visual Recognition
  • CVPR - 2019 - Probabilistic End-to-End Noise Correction for Learning with Noisy Labels
  • CVPR - 2018 - Webly Supervised Learning Meets Zero-Shot Learning: A Hybrid Approach for Fine-Grained Classification
  • CVPR - 2018 - Learning From Noisy Web Data With Category-Level Supervision
  • CVPR - 2018 - An efficient and provable approach for mixture proportion estimation using linear independence assumption
  • CVPR - 2018 - CleanNet: Transfer Learning for Scalable Image Classifier Training With Label Noise
  • CVPR - 2018 - Joint Optimization Framework for Learning With Noisy Labels
  • CVPR - 2018 - Iterative Learning With Open-Set Noisy Labels
  • CVPR - 2018 - Deep Unsupervised Saliency Detection: A Multiple Noisy Labeling Perspective
  • CVPR - 2017 - Attend in Groups: A Weakly-Supervised Deep Learning Framework for Learning from Web Data
  • CVPR - 2017 - Making Deep Neural Networks Robust to Label Noise: A Loss Correction Approach
  • CVPR - 2017 - Learning from noisy large-scale datasets with minimal supervision
  • CVPR - 2016 - Seeing through the human reporting bias: Visual classifiers from noisy human-centric labels
  • CVPR - 2015 - Visual recognition by learning from web data: A weakly supervised domain generalization approach
  • CVPR - 2015 - Learning from massive noisy labeled data for image classification
  • CVPR - 2012 - Robust Non-negative Graph Embedding: Towards noisy data, unreliable graphs, and noisy labels
  • CVPR - 2008 - Keywords to visual categories: Multiple-instance learning for weakly supervised object categorization

ICCV/ECCV

  • ICCV - 2019 - Co-Mining: Deep face recognition with noisy labels
  • ICCV - 2019 - Deep Self-Learning From Noisy Labels
  • ICCV - 2019 - Symmetric Cross Entropy for Robust Learning With Noisy Labels
  • ICCV - 2019 - NLNL: Negative Learning for Noisy Labels
  • ICCV - 2019 - O2U-Net: A Simple Noisy Label Detection Approach for Deep Neural Networks
  • ICCV - 2017 - Learning from Noisy Labels with Distillation
  • ICCV - 2015 - Webly supervised learning of convolutional networks
  • ECCV - 2020 - Webly supervised image classification with self-contained confidence
  • ECCV - 2020 - Robust and On-the-fly Dataset Denoising for Image Classification
  • ECCV - 2020 - Learn to Propagate Reliably on Noisy Affinity Graphs
  • ECCV - 2020 - Learning with Noisy Class Labels for Instance Segmentation
  • ECCV - 2020 - Weakly Supervised Learning with Side Information for Noisy Labeled Images
  • ECCV - 2020 - NoiseRank: Unsupervised Label Noise Reduction with Dependence Models
  • ECCV - 2018 - Deep bilevel learning
  • ECCV - 2018 - CurriculumNet: Weakly Supervised Learning from Large-Scale Web Images
  • ECCV - 2018 - Cross-Modal Ranking with Soft Consistency and Noisy Labels for Robust RGB-T Tracking
  • ECCV - 2016 - The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition
  • ECCV - 2016 - learning visual features from large weakly supervised data
  • ECCV - 2014 - Exploiting privileged information from web data for image categorization
  • ICASSP - 2016 - Training deep neural-networks based on unreliable labels
  • ICDM-short - 2016 - Learning deep networks from noisy labels with dropout regularization
  • Arxiv - 2017 - Deep Learning is robust to massive label noise
  • cs231n course report - 2017 - On the robustness of convnets to training on noisy labels
  • Blog - 2019 - Weak Supervision: The New Programming Paradigm for Machine Learning
  • ICML-reject - 2020 - Self-Adaptive Training beyond Empirical Risk Minimization
  • ICML-reject - 2020 - Learning Not to Learn in the Presence of Noisy Labels
  • CVPR-reject - 2020 - Learning from Noisy Labels with Noise Modeling Network
  • WACV - 2021 - Noisy Concurrent Training for Efficient Learning under Label Noise
  • WACV - 2021 - EvidentialMix: Learning with Combined Open-set and Closed-set Noisy Labels
  • WACV - 2021 - Do We Really Need Gold Samples for Sample Weighting Under Label Noise?
  • WACV - 2020 - Learning from Noisy Labels via Discrepant Collaborative Training
  • WACV - 2018 - Iterative Cross Learning on Noisy Labels
  • WACV - 2018 - A Semi-Supervised Two-Stage Approach to Learning from Noisy Labels
  • BMVC - 2019 - What Happens when self-supervision meets noisy labels
  • aistats-JAIR - 2019 - Confident Learning: Estimating Uncertainty in Data Labels
  • UAI - 2017 - Learning with Confident Examples: Rank Pruning for Robust Classification with Noisy Labels
  • aistats - 2018 - Robust Active Label Correction
  • aistats - 2014 - Learning and evaluation in presence of non-iid label noise
  • ECML/PKDD - 2016 - Interactive Learning from Multiple Noisy Labels
  • ECML/PKDD - 2016 - On the convergence of a family of robust losses for stochastic gradient descent
  • TNNLS - 2018 - Progressive stochastic learning for noisy labels
  • Access - 2019 - Making Deep Neural Networks Robust to Label Noise: Cross-Training With a Novel Loss Function
  • Access - 2019 - Recycling: Semi-supervised Learning with noisy labels in deep neural networks
  • ICPR - 2020 - Towards robust learning with different label noise distributions
  • Arxiv - 2019 - Leveraging inductive bias of neural networks for learning without explicit human annotations
  • Arxiv - 2019 - Uncertainty Based Detection and Relabeling of Noisy Image Labels
  • Arxiv - 2020 - Learning Halfspaces with Tsybakov Noise
  • Arxiv - 2020 - Efficient active learning of sparse halfspaces with arbitrary bounded noise
  • Arxiv - 2020 - Classification with imperfect training labels
  • BMVC - 2020 - ExpertNet: Adversarial Learning and Recovery Against Noisy Labels
  • ECCV-submit - 2020 - TrustNet: Learning from Trusted Data Against (A)symmetric Label Noise
  • Arxiv - 2020 - Universal Lower-Bounds on Classification Error under Adversarial Attacks and Random Corruption
  • IJCNN - 2020 - Temporal Calibrated Regularization for Robust Noisy Label Learning
  • Arxiv - 2020 - Particle Competition and Cooperation for Semi-Supervised Learning with Label Noise
  • Arxiv - 2020 - On-line Active Learning for Noisy Labeled Stream Data
  • ICPR (CCF-C) - 2020 - Meta Soft Label Generation for Noisy Labels
  • ICML-reject - 2020 - Meta Transition Adaptation for Robust Deep Learning with Noisy Labels
  • Arxiv - 2020 - Learning to Purify Noisy Labels via Meta Soft Label Corrector
  • Arxiv - 2020 - Learning Adaptive Loss for Robust Learning with Noisy Labels
  • Arxiv - 2020 - Deep learning classification with noisy labels
  • Arxiv - 2020 - Combined Cleaning and Resampling Algorithm for Multi-Class Imbalanced Data with Label Noise
  • Arxiv - 2020 - Addressing Neural Network Robustness with Mixup and Targeted Labeling Adversarial Training
  • Arxiv - 2020 - Using Under-trained Deep Ensembles to Learn Under Extreme Label Noise
  • Arxiv - 2020 - Identifying noisy labels with a transductive semi-supervised leave-one-out filter
  • Arxiv - 2020 - Handling Noisy Labels via One-Step Abductive Multi-Target Learning
  • Arxiv - 2020 - Robust Federated Learning with Noisy Labels
  • Arxiv - 2020 - Robust Optimal Classification Trees under Noisy Labels
  • Arxiv - 2020 - Identifying Training Stop Point with Noisy Labeled data
  • Arxiv - 2020 - Attention-Aware Noisy Label Learning for Image Classification
  • Arxiv - 2020 - Regularization in neural network optimization via trimmed stochastic gradient descent with noisy label
  • Arxiv - 2020 - KNN-enhanced Deep Learning Against Noisy Labels
  • Arxiv - 2020 - SemiNLL: A Framework of Noisy-Label Learning by Semi-Supervised Learning
  • Arxiv - 2020 - Extended T: Learning with Mixed Closed-set and Open-set Noisy Labels
  • AAAI-reject - 2021 - Two-Phase Learning for Overcoming Noisy Labels
  • CVPR-21-submit - 2020 - No Regret Sample Selection with Noisy Labels
  • ICDM-short - 2020 - Robust Collaborative Learning with Noisy Labels
  • ICDM-short - 2019 - Collaborative Label Correction via
  • Arxiv - 2020 - Self-semi-supervised Learning to Learn from Noisy Labeled Data
  • Arxiv - 2020 - Noisy Labels Can Induce Good Representations
  • Arxiv - 2021 - Transform consistency for learning with noisy labels
  • Arxiv - 2021 - Co-matching: Combating Noisy Labels by Augmentation Anchoring
  • Arxiv - 2021 - On the Robustness of Monte Carlo Dropout Trained with Noisy Labels
  • Arxiv - 2021 - Detecting Label Noise via Leave-One-Out Cross Validation
  • TNNLS - 2021 - MetaLabelNet: Learning to Generate Soft-Labels from Noisy-Labels
  • TMM - 2021 - Exploiting Web Images for Fine-Grained Visual Recognition by Eliminating Noisy Samples and Utilizing Hard Ones
  • TMM - 2021 - Ensemble Learning with Manifold-Based Data Splitting for Noisy Label Correction
  • CVPR-submit - 2021 - DST: Data Selection and Joint Training for Learning with Noisy Labels
  • CVPR-reject - 2021 - Exploiting Class Similarity for Machine Learning with Confidence Labels and Projective Loss Functions
  • Arxiv - 2021 - A Novel Perspective for Positive-Unlabeled Learning via Noisy Labels
  • Arxiv - 2021 - Searching for Robustness: Loss Learning for Noisy Classification Tasks
  • ICML-submit - 2021 - ScanMix: Learning from Severe Label Noise via Semantic Clustering and Semi-Supervised Learning
  • ICML-submit - 2021 - LongReMix: Robust Learning with High Confidence Samples in a Noisy Label Environment
  • ICML-submit - 2021 - Multiplicative Reweighting for Robust Neural Network Optimization
  • ICML-submit - 2021 - Winning Ticket in Noisy Image Classification
  • ICML-submit - 2021 - Self-Supervised Noisy Label Learning for Source-Free Unsupervised Domain Adaptation
  • ICLR-reject - 2020 - Unsupervised domain adaptation through self-supervision
  • ICML-submit - 2021 - Evaluating Multi-label Classifiers with Noisy Labels

Arxiv

  • Arxiv - 2021 - The importance of understanding instance-level noisy labels
  • Arxiv - 2021 - Clusterability as an Alternative to Anchor Points When Learning with Noisy Labels
  • Arxiv - 2021 - Learning Noise Transition Matrix from Only Noisy Labels via Total Variation Regularization
  • Arxiv - 2021 - Approximating Instance-Dependent Noise via Instance-Confidence Embedding
  • Arxiv - 2021 - Provably End-to-end Label-Noise Learning without Anchor Points
  • Arxiv - 2021 - Understanding the Interaction of Adversarial Training with Noisy Labels
  • Arxiv - 2021 - Learning to Combat Noisy Labels via Classification Margins
  • Arxiv - 2021 - Optimizing Black-box Metrics with Iterative Example Weighting
  • Arxiv - 2021 - Co-Seg: An Image Segmentation Framework Against Label Corruption
  • Arxiv - 2021 - Towards Robustness to Label Noise in Text Classification via Noise Modeling
  • Arxiv - 2021 - Learning from How Humans Correct
  • Arxiv - 2021 - Auto-weighted Robust Federated Learning with Corrupted Data Sources
  • ICASSP - 2021 - Semi-Supervised Singing Voice Separation with Noisy Self-Training
  • WWW - 2021 - Data Poisoning Attacks and Defenses to Crowdsourcing Systems
  • NIPS - 2019 - Input Similarity from the Neural Network Perspective
  • NIPS - 2019 - Towards Understanding the Importance of Shortcut Connections in Residual Networks
  • ACL - 2019 - Feature-Dependent Confusion Matrices for Low-Resource NER Labeling with Noisy Labels
  • NAACL - 2021 - Self-Training with Weak Supervision
  • NAACL - 2021 - Noisy-Labeled NER with Confidence Estimation
  • Arxiv - 2021 - A Theoretical Analysis of Learning with Noisily Labeled Data
  • Arxiv - 2021 - Harmless label noise and informative soft-labels in supervised classification
  • Arxiv - 2021 - Learning from Noisy Labels via Dynamic Loss Thresholding
  • Arxiv - 2021 - Friends and Foes in Learning from Noisy Labels
  • Arxiv - 2021 - Pervasive Label Errors in Test Sets Destabilize Machine Learning Benchmarks

Acknowledgements:

Thanks to the inspiring repos that informed this list.