
A collection of literature following or concurrent with Masked Autoencoders (MAE) (Kaiming He et al.).


Awesome Masked Autoencoders


Fig. 1. Masked Autoencoders from Kaiming He et al.

Masked Autoencoders (MAE, Kaiming He et al.) have sparked a renewed surge of interest due to their capacity to learn useful representations from abundant unlabeled data. Recently, MAE and its follow-up works have advanced the state of the art and provided valuable insights, particularly in vision research. Here I list several works that follow or are concurrent with MAE, in the hope of inspiring future research.
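For readers new to the paradigm: the core of MAE is to mask a large random subset of image patches (75% in the paper) and train an autoencoder to reconstruct the missing ones from the visible remainder. The snippet below is a minimal, illustrative sketch of that random-masking step in NumPy; the function name and shapes are my own choices, not the official implementation.

```python
import numpy as np

def random_masking(patches, mask_ratio=0.75, seed=0):
    """MAE-style random masking (illustrative sketch, not the official code).

    patches: array of shape (num_patches, dim).
    Returns the visible patches, their (sorted) indices, and a binary
    mask over all patches (0 = visible, 1 = masked).
    """
    rng = np.random.default_rng(seed)
    n = patches.shape[0]
    num_keep = int(n * (1 - mask_ratio))
    # Shuffle patch indices and keep the first `num_keep` as visible
    perm = rng.permutation(n)
    keep_idx = np.sort(perm[:num_keep])
    mask = np.ones(n, dtype=np.int64)
    mask[keep_idx] = 0
    return patches[keep_idx], keep_idx, mask

# Example: 16 patches of dimension 4; with mask_ratio=0.75,
# only 4 patches remain visible for the encoder.
patches = np.arange(16 * 4, dtype=np.float64).reshape(16, 4)
visible, keep_idx, mask = random_masking(patches, mask_ratio=0.75)
```

In the full MAE pipeline, only `visible` is fed to the encoder, which is what makes pretraining cheap; the decoder then reconstructs pixels at the masked positions indicated by `mask`.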

*:octocat: code link, 🌐 project page

Vision

Audio

Graph

Point Cloud

Language (Omitted)

There has been a surge of language research built on this masking-and-predicting paradigm (e.g., BERT), so those works are not listed here.

Miscellaneous

TODO List

  • Add code links
  • Add authors list
  • Add conference/journal venues
  • Add more illustrative figures