Welcome to Add Papers for Masked Image Modeling (MIM)
Closed · 1 comment
Lupin1998 commented
Welcome Contributions to Awesome MIM
Currently, we are working on a survey of Masked Image Modeling (MIM) pre-training and its applications to various vision tasks. Feel free to send pull requests to add more paper links in the following Markdown format (a filled-in example follows the template). Note that the abbreviation, the code link, and the figure link are optional attributes.
* **Abbreviation**: Author List.
  - Paper Name. [[Conference'Year](link)] [[code](link)]
  <p align="center"><img width="90%" src="link_to_image" /></p>
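For instance, a filled-in entry for MAE might look like the sketch below; the figure URL is a placeholder, and the arXiv and code links are the commonly used ones, so please double-check them before submitting:

```markdown
* **MAE**: Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick.
  - Masked Autoencoders Are Scalable Vision Learners. [[CVPR'2022](https://arxiv.org/abs/2111.06377)] [[code](https://github.com/facebookresearch/mae)]
  <p align="center"><img width="90%" src="link_to_mae_figure" /></p>
```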
Related Projects
We acknowledge similar awesome list projects and self-supervised learning repositories as follows:
Paper Lists of Masked Image Modeling
- Awesome-Masked-Autoencoders: A collection of literature after or concurrent with Masked Autoencoder (MAE).
- awesome-MIM: Reading list for research topics in Masked Image Modeling.
- Awesome-MIM: Awesome list of masked image modeling methods for self-supervised visual representation learning.
- awesome-self-supervised-learning: A curated list of awesome self-supervised methods.
Projects of Self-supervised Learning
- OpenMixup: CAIRI Supervised, Semi- and Self-Supervised Visual Representation Learning Toolbox and Benchmark.
- MMSelfSup: OpenMMLab self-supervised learning toolbox and benchmark.
- solo-learn: A library of self-supervised methods for visual representation learning powered by PyTorch Lightning.
- unilm: Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities.
- VISSL: FAIR's library of extensible, modular and scalable components for SOTA Self-Supervised Learning with images.
Lupin1998 commented
We have uploaded our survey, Masked Modeling for Self-supervised Representation Learning on Vision and Beyond. You are welcome to open a new issue for any relevant masked modeling papers.