
An up-to-date list of papers on label-noise representation learning.

Label Noise Papers

This repository contains Label-Noise Representation Learning (LNRL) papers mentioned in our survey "A Survey of Label-noise Representation Learning: Past, Present, and Future".

We will update this paper list to include new LNRL papers periodically.

Citation

Please cite our paper if you find it helpful.

@article{han2020survey,
  title={A survey of label-noise representation learning: Past, present and future},
  author={Han, Bo and Yao, Quanming and Liu, Tongliang and Niu, Gang and Tsang, Ivor W and Kwok, James T and Sugiyama, Masashi},
  journal={arXiv preprint arXiv:2011.04406},
  year={2020}
}

Content

  1. Survey
  2. Data
    1. Transition Matrix
    2. Adaptation Layer
    3. Loss Correction
    4. Prior Knowledge
    5. Others
  3. Objective
    1. Regularization
    2. Reweighting
    3. Redesigning
    4. Others
  4. Optimization
    1. Memorization Effect
    2. Self-training
    3. Co-training
    4. Beyond Memorization
    5. Others
  5. Future Directions
    1. New Datasets
    2. Instance-dependent LNRL
    3. Adversarial LNRL
    4. Automated LNRL
    5. Noisy Data
    6. Double Descent

Survey

  1. B. Frénay and M. Verleysen, Classification in the presence of label noise: a survey, IEEE Transactions on Neural Networks and Learning Systems, vol. 25, no. 5, pp. 845–869, 2014. paper

  2. G. Algan and I. Ulusoy, Image classification with deep learning in the presence of noisy labels: A survey, arXiv preprint arXiv:1912.05170, 2019. paper

  3. D. Karimi, H. Dou, S. K. Warfield, and A. Gholipour, Deep learning with noisy labels: exploring techniques and remedies in medical image analysis, Medical Image Analysis, 2020. paper

  4. H. Song, M. Kim, D. Park, and J.-G. Lee, Learning from noisy labels with deep neural networks: A survey, arXiv preprint arXiv:2007.08199, 2020. paper

Data

Transition Matrix

  1. B. van Rooyen and R. C. Williamson, A theory of learning with corrupted labels, Journal of Machine Learning Research, vol. 18, no. 1, pp. 8501–8550, 2017. paper

  2. G. Patrini, A. Rozza, A. Krishna Menon, R. Nock, and L. Qu, Making deep neural networks robust to label noise: A loss correction approach, in CVPR, 2017. paper
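
Both entries above model label noise through a class-transition matrix T, with T[i, j] = P(noisy label = j | true label = i). As a minimal NumPy sketch, here are the two noise models most often simulated in this literature (symmetric flipping and pair flipping); the function names are ours, chosen for illustration:

import numpy as np

def symmetric_noise_matrix(num_classes, noise_rate):
    # Uniform (symmetric) flipping: each class flips to every other class equally often.
    T = np.full((num_classes, num_classes), noise_rate / (num_classes - 1))
    np.fill_diagonal(T, 1.0 - noise_rate)
    return T

def pair_flip_noise_matrix(num_classes, noise_rate):
    # Pair flipping: each class is flipped only to its "next" class with probability noise_rate.
    T = np.eye(num_classes) * (1.0 - noise_rate)
    for i in range(num_classes):
        T[i, (i + 1) % num_classes] += noise_rate
    return T

print(symmetric_noise_matrix(3, 0.2))   # rows sum to 1
print(pair_flip_noise_matrix(3, 0.2))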

Adaptation Layer

  1. S. Sukhbaatar, J. Bruna, M. Paluri, L. Bourdev, and R. Fergus, Training convolutional networks with noisy labels, in ICLR Workshop, 2015. paper

  2. J. Goldberger and E. Ben-Reuven, Training deep neural-networks using a noise adaptation layer, in ICLR, 2017. paper

  3. I. Misra, C. Lawrence Zitnick, M. Mitchell, and R. Girshick, Seeing through the human reporting bias: Visual classifiers from noisy human-centric labels, in CVPR, 2016. paper

Loss Correction

  1. G. Patrini, A. Rozza, A. Krishna Menon, R. Nock, and L. Qu, Making deep neural networks robust to label noise: A loss correction approach, in CVPR, 2017. paper

  2. D. Hendrycks, M. Mazeika, D. Wilson, and K. Gimpel, Using trusted data to train deep networks on labels corrupted by severe noise, in NeurIPS, 2018. paper

  3. M. Lukasik, S. Bhojanapalli, A. K. Menon, and S. Kumar, Does label smoothing mitigate label noise? in ICML, 2020. paper
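
The forward correction of Patrini et al. (item 1) pushes the model's clean-class probabilities through the transition matrix before computing cross-entropy against the observed noisy labels. A rough PyTorch sketch, assuming T is a known (or already estimated) num_classes x num_classes float tensor whose rows sum to 1; names are illustrative, not taken from any cited code base:

import torch
import torch.nn.functional as F

def forward_corrected_loss(logits, noisy_targets, T):
    # T[i, j] = P(noisy = j | clean = i)
    clean_probs = F.softmax(logits, dim=1)   # model's clean-class posterior
    noisy_probs = clean_probs @ T            # predicted noisy-label posterior
    return F.nll_loss(torch.log(noisy_probs.clamp_min(1e-12)), noisy_targets)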

Prior Knowledge

  1. B. Han, J. Yao, G. Niu, M. Zhou, I. Tsang, Y. Zhang, and M. Sugiyama, Masking: A new perspective of noisy supervision, in NeurIPS, 2018. paper

  2. X. Xia, T. Liu, N. Wang, B. Han, C. Gong, G. Niu, and M. Sugiyama, Are anchor points really indispensable in label-noise learning? in NeurIPS, 2019. paper

  3. Y. Li, J. Yang, Y. Song, L. Cao, J. Luo, and L.-J. Li, Learning from noisy labels with distillation, in ICCV, 2017. paper
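
Xia et al. (item 2) revisit the common recipe of estimating T from anchor points, i.e. instances that almost surely belong to a single clean class: for such an instance, the model's noisy-label posterior approximates one row of T. A toy NumPy sketch under that assumption, with hypothetical names:

import numpy as np

def estimate_T_from_anchors(noisy_posteriors, anchor_indices):
    # noisy_posteriors: (num_examples, num_classes) predicted P(noisy label | x)
    # anchor_indices[i]: index of an instance believed to have clean class i
    T = np.stack([noisy_posteriors[anchor_indices[i]]
                  for i in range(len(anchor_indices))])
    return T / T.sum(axis=1, keepdims=True)   # renormalize rows to sum to 1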

Others

  1. J. Krause, B. Sapp, A. Howard, H. Zhou, A. Toshev, T. Duerig, J. Philbin, and L. Fei-Fei, The unreasonable effectiveness of noisy data for fine-grained recognition, in ECCV, 2016. paper

  2. C. G. Northcutt, T. Wu, and I. L. Chuang, Learning with confident examples: Rank pruning for robust classification with noisy labels, in UAI, 2017. paper

  3. Y. Kim, J. Yim, J. Yun, and J. Kim, NLNL: Negative learning for noisy labels, in ICCV, 2019. paper

  4. P. H. Seo, G. Kim, and B. Han, Combinatorial inference against label noise, in NeurIPS, 2019. paper

  5. T. Kaneko, Y. Ushiku, and T. Harada, Label-noise robust generative adversarial networks, in CVPR, 2019. paper

  6. A. Lamy, Z. Zhong, A. K. Menon, and N. Verma, Noise-tolerant fair classification, in NeurIPS, 2019. paper

  7. J. Yao, H. Wu, Y. Zhang, I. W. Tsang, and J. Sun, Safeguarded dynamic label regression for noisy supervision, in AAAI, 2019. paper

Objective

Regularization

  1. S. Azadi, J. Feng, S. Jegelka, and T. Darrell, Auxiliary image regularization for deep CNNs with noisy labels, in ICLR, 2016. paper

  2. D.-H. Lee, Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks, in ICML Workshop, 2013.

  3. S. Reed, H. Lee, D. Anguelov, C. Szegedy, D. Erhan, and A. Rabinovich, Training deep neural networks on noisy labels with bootstrapping, in ICLR Workshop, 2015. paper

  4. H. Zhang, M. Cisse, Y. N. Dauphin, and D. Lopez-Paz, mixup: Beyond empirical risk minimization, in ICLR, 2018. paper

  5. T. Miyato, S.-i. Maeda, M. Koyama, and S. Ishii, Virtual adversarial training: a regularization method for supervised and semi-supervised learning, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 41, no. 8, pp. 1979–1993, 2018. paper

  6. B. Han, G. Niu, X. Yu, Q. Yao, M. Xu, I. Tsang, and M. Sugiyama, SIGUA: Forgetting may make learning with noisy labels more robust, in ICML, 2020. paper
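
Several of these regularizers are one-liners in practice. For instance, mixup (item 4) trains on convex combinations of input pairs and their labels, which empirically damps memorization of corrupted labels. A minimal PyTorch-style sketch with illustrative names:

import numpy as np
import torch

def mixup_batch(x, y_onehot, alpha=0.2):
    lam = np.random.beta(alpha, alpha)    # mixing coefficient
    perm = torch.randperm(x.size(0))      # random pairing within the batch
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    return x_mix, y_mix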

Reweighting

  1. T. Liu and D. Tao, Classification with noisy labels by importance reweighting, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 3, pp. 447–461, 2016. paper

  2. Y. Wang, A. Kucukelbir, and D. M. Blei, Robust probabilistic modeling with bayesian data reweighting, in ICML, 2017. paper

  3. E. Arazo, D. Ortego, P. Albert, N. E. O’Connor, and K. McGuinness, Unsupervised label noise modeling and loss correction, in ICML, 2019. paper

  4. J. Shu, Q. Xie, L. Yi, Q. Zhao, S. Zhou, Z. Xu, and D. Meng, Meta-weight-net: Learning an explicit mapping for sample weighting, in NeurIPS, 2019. paper
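
The common thread above is assigning each example an importance weight so that likely-mislabeled examples contribute less to the gradient. Below is a deliberately simple loss-based reweighting sketch, not any cited method exactly (the real approaches estimate weights via noise rates, Bayesian modelling, mixture fitting, or a meta-learned weighting network):

import torch
import torch.nn.functional as F

def loss_reweighted_objective(logits, noisy_targets, temperature=1.0):
    per_example = F.cross_entropy(logits, noisy_targets, reduction="none")
    # Downweight large-loss (likely mislabeled) examples; weights are treated as constants.
    weights = torch.softmax(-per_example / temperature, dim=0).detach()
    return (weights * per_example).sum()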

Redesigning

  1. A. K. Menon, A. S. Rawat, S. J. Reddi, and S. Kumar, Can gradient clipping mitigate label noise? in ICLR, 2020. paper

  2. Z. Zhang and M. Sabuncu, Generalized cross entropy loss for training deep neural networks with noisy labels, in NeurIPS, 2018. paper

  3. N. Charoenphakdee, J. Lee, and M. Sugiyama, On symmetric losses for learning from corrupted labels, in ICML, 2019. paper

  4. S. Thulasidasan, T. Bhattacharya, J. Bilmes, G. Chennupati, and J. Mohd-Yusof, Combating label noise in deep learning using abstention, in ICML, 2019. paper

  5. Y. Lyu and I. W. Tsang, Curriculum loss: Robust learning and generalization against label corruption, in ICLR, 2020. paper

  6. S. Laine and T. Aila, Temporal ensembling for semi-supervised learning, in ICLR, 2017. paper

  7. D. T. Nguyen, C. K. Mummadi, T. P. N. Ngo, T. H. P. Nguyen, L. Beggel, and T. Brox, SELF: Learning to filter noisy labels with self-ensembling, in ICLR, 2020. paper

  8. X. Ma, Y. Wang, M. E. Houle, S. Zhou, S. M. Erfani, S.-T. Xia, S. Wijewickrema, and J. Bailey, Dimensionality-driven learning with noisy labels, in ICML, 2018. paper
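
As a concrete example of loss redesign, the generalized cross-entropy of Zhang and Sabuncu (item 2) interpolates between cross-entropy (q -> 0) and the noise-tolerant MAE (q = 1). A short PyTorch sketch:

import torch
import torch.nn.functional as F

def generalized_cross_entropy(logits, targets, q=0.7):
    probs = F.softmax(logits, dim=1)
    p_y = probs.gather(1, targets.unsqueeze(1)).squeeze(1).clamp_min(1e-12)
    return ((1.0 - p_y.pow(q)) / q).mean()   # L_q loss of Zhang & Sabuncu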

Others

  1. S. Branson, G. Van Horn, and P. Perona, Lean crowdsourcing: Combining humans and machines in an online system, in CVPR, 2017. paper

  2. A. Vahdat, Toward robustness against label noise in training deep discriminative neural networks, in NeurIPS, 2017. paper

  3. H.-S. Chang, E. Learned-Miller, and A. McCallum, Active bias: Training more accurate neural networks by emphasizing high variance samples, in NeurIPS, 2017. paper

  4. A. Khetan, Z. C. Lipton, and A. Anandkumar, Learning from noisy singly-labeled data, in ICLR, 2018. paper

  5. D. Tanaka, D. Ikami, T. Yamasaki, and K. Aizawa, Joint optimization framework for learning with noisy labels, in CVPR, 2018. paper

  6. Y. Wang, W. Liu, X. Ma, J. Bailey, H. Zha, L. Song, and S.-T. Xia, Iterative learning with open-set noisy labels, in CVPR, 2018. paper

  7. S. Jenni and P. Favaro, Deep bilevel learning, in ECCV, 2018. paper

  8. Y. Wang, X. Ma, Z. Chen, Y. Luo, J. Yi, and J. Bailey, Symmetric cross entropy for robust learning with noisy labels, in ICCV, 2019. paper

  9. J. Li, Y. Song, J. Zhu, L. Cheng, Y. Su, L. Ye, P. Yuan, and S. Han, Learning from large-scale noisy web data with ubiquitous reweighting for image classification, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019. paper

  10. Y. Xu, P. Cao, Y. Kong, and Y. Wang, L_DMI: A novel information-theoretic loss function for training deep nets robust to label noise, in NeurIPS, 2019. paper

  11. Y. Liu and H. Guo, Peer loss functions: Learning from noisy labels without knowing noise rates, in ICML, 2020. paper

  12. X. Ma, H. Huang, Y. Wang, S. Romano, S. Erfani, and J. Bailey, Normalized loss functions for deep learning with noisy labels, in ICML, 2020. paper

Optimization

Memorization Effect

  1. C. Zhang, S. Bengio, M. Hardt, B. Recht, and O. Vinyals, Understanding deep learning requires rethinking generalization, in ICLR, 2017. paper

  2. D. Arpit, S. Jastrzębski, N. Ballas, D. Krueger, E. Bengio, M. S. Kanwal, and S. Lacoste-Julien, A closer look at memorization in deep networks, in ICML, 2017. paper

Self-training

  1. L. Jiang, Z. Zhou, T. Leung, L.-J. Li, and L. Fei-Fei, Mentornet: Learning data-driven curriculum for very deep neural networks on corrupted labels, in ICML, 2018. paper

  2. M. Ren, W. Zeng, B. Yang, and R. Urtasun, Learning to reweight examples for robust deep learning, in ICML, 2018. paper

  3. L. Jiang, D. Huang, M. Liu, and W. Yang, Beyond synthetic noise: Deep learning on controlled noisy labels, in ICML, 2020. paper
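
These self-training methods build on the memorization effect: deep networks fit clean patterns before noisy ones, so small-loss examples are more likely to be correctly labeled. A single-network small-loss selection sketch in that spirit (the schedule and names are illustrative, not MentorNet's actual learned curriculum):

import torch
import torch.nn.functional as F

def small_loss_objective(logits, noisy_targets, epoch, noise_rate, warmup=10):
    per_example = F.cross_entropy(logits, noisy_targets, reduction="none")
    keep_frac = 1.0 - noise_rate * min(1.0, epoch / warmup)   # drop more examples as training proceeds
    num_keep = max(1, int(keep_frac * per_example.numel()))
    kept, _ = per_example.topk(num_keep, largest=False)       # smallest-loss examples
    return kept.mean()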

Co-training

  1. B. Han, Q. Yao, X. Yu, G. Niu, M. Xu, W. Hu, I. Tsang, and M. Sugiyama, Co-teaching: Robust training of deep neural networks with extremely noisy labels, in NeurIPS, 2018. paper

  2. X. Yu, B. Han, J. Yao, G. Niu, I. W. Tsang, and M. Sugiyama, How does disagreement help generalization against label corruption? in ICML, 2019. paper

  3. Q. Yao, H. Yang, B. Han, G. Niu, and J. T. Kwok, Searching to exploit memorization effect in learning with noisy labels, in ICML, 2020. paper
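
Co-teaching (item 1) keeps the small-loss idea but maintains two networks that select likely-clean examples for each other, so that neither peer reinforces its own selection errors. A rough sketch of one update, assuming standard PyTorch models and optimizers; details such as the keep-rate schedule are omitted:

import torch
import torch.nn.functional as F

def coteaching_step(model_a, model_b, opt_a, opt_b, x, noisy_y, keep_ratio):
    num_keep = max(1, int(keep_ratio * x.size(0)))
    with torch.no_grad():                      # rank examples by each network's current loss
        loss_a = F.cross_entropy(model_a(x), noisy_y, reduction="none")
        loss_b = F.cross_entropy(model_b(x), noisy_y, reduction="none")
    idx_from_a = loss_a.argsort()[:num_keep]   # A's small-loss picks train B
    idx_from_b = loss_b.argsort()[:num_keep]   # B's small-loss picks train A

    opt_a.zero_grad()
    F.cross_entropy(model_a(x[idx_from_b]), noisy_y[idx_from_b]).backward()
    opt_a.step()

    opt_b.zero_grad()
    F.cross_entropy(model_b(x[idx_from_a]), noisy_y[idx_from_a]).backward()
    opt_b.step()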

Beyond Memorization

  1. J. Li, R. Socher, and S. C. Hoi, DivideMix: Learning with noisy labels as semi-supervised learning, in ICLR, 2020. paper

  2. D. Hendrycks, K. Lee, and M. Mazeika, Using pre-training can improve model robustness and uncertainty, in ICML, 2019. paper

  3. D. Bahri, H. Jiang, and M. Gupta, Deep k-nn for noisy labels, in ICML, 2020. paper

  4. P. Chen, B. Liao, G. Chen, and S. Zhang, Understanding and utilizing deep neural networks trained with noisy labels, in ICML, 2019. paper
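
As one example of going beyond pure loss-based selection, the deep k-NN approach (item 3) flags an example as suspect when its label disagrees with the majority label of its nearest neighbours in a learned feature space. A brute-force NumPy sketch, suitable for small datasets only; names are ours:

import numpy as np

def knn_label_filter(features, noisy_labels, k=10):
    dists = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)                 # exclude each point from its own neighbourhood
    neighbours = np.argsort(dists, axis=1)[:, :k]
    keep = np.array([np.bincount(noisy_labels[nbrs]).argmax() == noisy_labels[i]
                     for i, nbrs in enumerate(neighbours)])
    return keep                                     # True where the label agrees with its neighbours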

Others

  1. A. Veit, N. Alldrin, G. Chechik, I. Krasin, A. Gupta, and S. Belongie, Learning from noisy large-scale datasets with minimal supervision, in CVPR, 2017. paper

  2. B. Zhuang, L. Liu, Y. Li, C. Shen, and I. Reid, Attend in groups: a weakly-supervised deep learning framework for learning from web data, in CVPR, 2017. paper

  3. K.-H. Lee, X. He, L. Zhang, and L. Yang, CleanNet: Transfer learning for scalable image classifier training with label noise, in CVPR, 2018. paper

  4. S. Guo, W. Huang, H. Zhang, C. Zhuang, D. Dong, M. R. Scott, and D. Huang, CurriculumNet: Weakly supervised learning from large-scale web images, in ECCV, 2018. paper

  5. J. Deng, J. Guo, N. Xue, and S. Zafeiriou, ArcFace: Additive angular margin loss for deep face recognition, in CVPR, 2019. paper

  6. X. Wang, S. Wang, J. Wang, H. Shi, and T. Mei, Co-mining: Deep face recognition with noisy labels, in ICCV, 2019. paper

  7. J. Huang, L. Qu, R. Jia, and B. Zhao, O2U-Net: A simple noisy-label detection approach for deep neural networks, in ICCV, 2019. paper

  8. J. Han, P. Luo, and X. Wang, Deep self-learning from noisy labels, in ICCV, 2019. paper

  9. H. Harutyunyan, K. Reing, G. V. Steeg, and A. Galstyan, Improving generalization by controlling label-noise information in neural network weights, in ICML, 2020. paper

  10. H. Wei, L. Feng, X. Chen, and B. An, Combating noisy labels by agreement: A joint training method with co-regularization, in CVPR, 2020. paper

  11. Z. Zhang, H. Zhang, S. O. Arik, H. Lee, and T. Pfister, Distilling effective supervision from severe label noise, in CVPR, 2020. paper

Future Directions

New Datasets

  1. T. Xiao, T. Xia, Y. Yang, C. Huang, and X. Wang, Learning from massive noisy labeled data for image classification, in CVPR, 2015. paper

  2. L. Jiang, D. Huang, M. Liu, and W. Yang, Beyond synthetic noise: Deep learning on controlled noisy labels, in ICML, 2020. paper

  3. W. Li, L. Wang, W. Li, E. Agustsson, and L. Van Gool, WebVision database: Visual learning and understanding from web data, arXiv preprint arXiv:1708.02862, 2017. paper

Instance-dependent LNRL

  1. A. Menon, B. Van Rooyen, and N. Natarajan, Learning from binary labels with instance-dependent corruption, Machine Learning, vol. 107, pp. 1561–1595, 2018. paper

  2. J. Cheng, T. Liu, K. Ramamohanarao, and D. Tao, Learning with bounded instance- and label-dependent label noise, in ICML, 2020. paper

  3. A. Berthon, B. Han, G. Niu, T. Liu, and M. Sugiyama, Confidence scores make instance-dependent label-noise learning possible, arXiv preprint arXiv:2001.03772, 2020. paper

Adversarial LNRL

  1. Y. Wang, D. Zou, J. Yi, J. Bailey, X. Ma, and Q. Gu, Improving adversarial robustness requires revisiting misclassified examples, in ICLR, 2020. paper

  2. J. Zhang, X. Xu, B. Han, G. Niu, L. Cui, M. Sugiyama, and M. Kankanhalli, Attacks which do not kill training make adversarial learning stronger, in ICML, 2020. paper

Automated LNRL

  1. Q. Yao, H. Yang, B. Han, G. Niu, and J. T. Kwok, Searching to exploit memorization effect in learning with noisy labels, in ICML, 2020. paper code

Noisy Data

  1. J. Zhang, B. Han, L. Wynter, K. H. Low, and M. Kankanhalli, Towards robust ResNet: A small step but a giant leap, in IJCAI, 2019. paper

  2. B. Han, Y. Pan, and I. W. Tsang, Robust Plackett–Luce model for k-ary crowdsourced preferences, Machine Learning, vol. 107, no. 4, pp. 675–702, 2018. paper

  3. Y. Pan, B. Han, and I. W. Tsang, Stagewise learning for noisy k-ary preferences, Machine Learning, vol. 107, no. 8-10, pp. 1333–1361, 2018. paper

  4. F. Liu, J. Lu, B. Han, G. Niu, G. Zhang, and M. Sugiyama, Butterfly: A panacea for all difficulties in wildly unsupervised domain adaptation, arXiv preprint arXiv:1905.07720, 2019. paper

  5. X. Yu, T. Liu, M. Gong, K. Zhang, K. Batmanghelich, and D. Tao, Label-noise robust domain adaptation, in ICML, 2020. paper

  6. S. Wu, X. Xia, T. Liu, B. Han, M. Gong, N. Wang, H. Liu, and G. Niu, Multi-class classification from noisy-similarity-labeled data, arXiv preprint arXiv:2002.06508, 2020. paper

  7. C. Wang, B. Han, S. Pan, J. Jiang, G. Niu, and G. Long, Crossgraph: Robust and unsupervised embedding for attributed graphs with corrupted structure, in ICDM, 2020. paper

  8. Y.-H. Wu, N. Charoenphakdee, H. Bao, V. Tangkaratt, and M. Sugiyama, Imitation learning from imperfect demonstration, in ICML, 2019. paper

  9. D. S. Brown, W. Goo, P. Nagarajan, and S. Niekum, Extrapolating beyond suboptimal demonstrations via inverse reinforcement learning from observations, in ICML, 2019. paper

  10. J. Audiffren, M. Valko, A. Lazaric, and M. Ghavamzadeh, Maximum entropy semi-supervised inverse reinforcement learning, in IJCAI, 2015. paper

  11. V. Tangkaratt, B. Han, M. E. Khan, and M. Sugiyama, Variational imitation learning with diverse-quality demonstrations, in ICML, 2020. paper

Double Descent

  1. P. Nakkiran, G. Kaplun, Y. Bansal, T. Yang, B. Barak, and I. Sutskever, Deep double descent: Where bigger models and more data hurt, in ICLR, 2020. paper

  2. Z. Yang, Y. Yu, C. You, J. Steinhardt, and Y. Ma, Rethinking bias-variance trade-off for generalization of neural networks, in ICML, 2020. paper