
Welcome to Add Paper for Masked Image Modeling (MIM)

Welcome Contributions to Awesome MIM

We are currently working on a survey of Masked Image Modeling (MIM) pre-training and its applications to various vision tasks. Feel free to send pull requests that add more paper links in the following Markdown format; a filled-in example is given after the template. Note that the Abbreviation, the code link, and the figure link are optional attributes.

* **Abbreviation**: Author List.
  - Paper Name. [[Conference'Year](link)] [[code](link)]
  <p align="center"><img width="90%" src="link_to_image" /></p>
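For reference, a filled-in entry following this template might look as below (MAE is used here only as a well-known example; the `link` and `link_to_image` fields are placeholders to be replaced with the actual paper, code, and figure URLs):

```markdown
* **MAE**: Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick.
  - Masked Autoencoders Are Scalable Vision Learners. [[CVPR'2022](link)] [[code](link)]
  <p align="center"><img width="90%" src="link_to_image" /></p>
```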

Related Projects

We acknowledge the following related awesome lists and self-supervised learning repositories:

Paper List of Masked Image Modeling

Projects of Self-supervised Learning

- OpenMixup: CAIRI Supervised, Semi- and Self-Supervised Visual Representation Learning Toolbox and Benchmark.
- MMSelfSup: OpenMMLab self-supervised learning toolbox and benchmark.
- solo-learn: A library of self-supervised methods for visual representation learning powered by PyTorch Lightning.
- unilm: Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities.
- VISSL: FAIR's library of extensible, modular and scalable components for SOTA Self-Supervised Learning with images.

We have uploaded our survey, *Masked Modeling for Self-supervised Representation Learning on Vision and Beyond*. Welcome to open a new issue for any relevant Masked Modeling paper.