- CPCv1: Representation learning with contrastive predictive coding [arxiv:1807]
- CMC: Contrastive multiview coding [arxiv:1906]
- MoCo_v2: Improved Baselines with Momentum Contrastive Learning [arxiv:2003]
- PCL: Prototypical Contrastive Learning of Unsupervised Representations [arxiv:2005]
- Hard Negative Mixing for Contrastive Learning [arxiv:2010]
- Propagate Yourself: Exploring Pixel-Level Consistency for Unsupervised Visual Representation Learning [arxiv:2011] [Pixel-level Contrast Learning] [code]
- ImCLR: Implicit Contrastive Learning for Image Classification [arxiv:2011]
- FNC: Boosting Contrastive Self-Supervised Learning with False Negative Cancellation [arxiv:2011] [*****]
- Boosting the Performance of Semi-Supervised Learning with Unsupervised Clustering [arxiv:2012]
- Contrastive Learning for Label-Efficient Semantic Segmentation [arxiv:2012] [Pixel-level Contrast Learning]
- Hierarchical Semantic Aggregation for Contrastive Representation Learning [arxiv:2012] [*****]
- Contrastive Transformation for Self-supervised Correspondence Learning [arxiv:2012]
- Joint Generative and Contrastive Learning for Unsupervised Person Re-identification [arxiv:2012]
- Self-Supervised Learning with Fully Convolutional Networks [arxiv:2012]
- Information-Preserving Contrastive Learning for Self-Supervised Representations [arxiv:2012]
- Online Bag-of-Visual-Words Generation for Unsupervised Representation Learning [arxiv:2012] [*****]
- Improving Unsupervised Image Clustering With Robust Learning [arxiv:2012]
- Three Ways to Improve Semantic Segmentation with Self-Supervised Depth Estimation [arxiv:2012]
- Self-Supervision based Task-Specific Image Collection Summarization [arxiv:2012]
- Training data-efficient image transformers & distillation through attention [arxiv:2012]
- Spatial Contrastive Learning for Few-Shot Classification [arxiv:2012]
- Few-Shot Learning with No Labels [arxiv:2012]
- Self-supervised Pre-training with Hard Examples Improves Visual Representations [arxiv:2012]
- Adversarial Momentum-Contrastive Pre-Training [arxiv:2012]
- P4Contrast: Contrastive Learning with Pairs of Point-Pixel Pairs for RGB-D Scene Understanding [arxiv:2012]
- Explicit homography estimation improves contrastive self-supervised learning [arxiv:2101]
- Momentum^2 Teacher: Momentum Teacher with Momentum Statistics for Self-Supervised Learning [arxiv:2101]
- Understanding self-supervised Learning Dynamics without Contrastive Pairs [arxiv:2102]
- Barlow Twins: Self-Supervised Learning via Redundancy Reduction [arxiv:2103]
- Deep Clustering by Semantic Contrastive Learning [arxiv:2103]
- SimTriplet: Simple Triplet Representation Learning with a Single GPU [arxiv:2103]
- Doubly Contrastive Deep Clustering [arxiv:2103]
- Beyond Self-Supervision: A Simple Yet Effective Network Distillation Alternative to Improve Backbones [arxiv:2103]
- Information Maximization Clustering via Multi-View Self-Labelling [arxiv:2103]
- Self-Feature Regularization: Self-Feature Distillation Without Teacher Models [arxiv:2103]
- Self-supervised Pretraining of Visual Features in the Wild [arxiv:2103]
- UniMoCo: Unsupervised, Semi-Supervised and Full-Supervised Visual Representation Learning [arxiv:2103] [****]
- Cluster Contrast for Unsupervised Person Re-Identification [arxiv:2103]
- Bootstrapped Self-Supervised Training with Monocular Video for Semantic Segmentation and Depth Estimation [arxiv:2103]
- Self-Supervised Classification Network [arxiv:2103]
- Leveraging background augmentations to encourage semantic focus in self-supervised contrastive learning [arxiv:2103]
- Self-Supervised Pretraining Improves Self-Supervised Pretraining [arxiv:2103]
- BossNAS: Exploring Hybrid CNN-transformers with Block-wisely Self-supervised Neural Architecture Search [arxiv:2103]
- Self-Supervised Training Enhances Online Continual Learning [arxiv:2103]
- Contrasting Contrastive Self-Supervised Representation Learning Models [arxiv:2103]
- Contrast to Divide: Self-Supervised Pre-Training for Learning with Noisy Labels [arxiv:2103]
- Rethinking Self-Supervised Learning: Small is Beautiful [arxiv:2103] [******]
- Vision Transformers for Dense Prediction [arxiv:2103]
- 3D Point Cloud Registration with Multi-Scale Architecture and Self-supervised Fine-tuning [arxiv:2103]
- Self-supervised Discriminative Feature Learning for Multi-view Clustering [arxiv:2103]
- Rethinking image mixture for unsupervised visual representation learning [arxiv:2103]
- Quantum Self-Supervised Learning [arxiv:2103]
- An Empirical Study of Training Self-Supervised Visual Transformers [arxiv:2104] [******]
- SiT: Self-supervised vIsion Transformer [arxiv:2104]
- Pseudo-supervised Deep Subspace Clustering [arxiv:2104]
- Self-Supervised Learning of Remote Sensing Scene Representations Using Contrastive Multiview Coding [arxiv:2104]
- Self-supervised Multi-view Stereo via Effective Co-Segmentation and Data-Augmentation [arxiv:2104]
- Self-supervised Video Object Segmentation by Motion Grouping [arxiv:2104]
- Self-supervised Video Retrieval Transformer Network [arxiv:2104]
- Pareto Self-Supervised Training for Few-Shot Learning [arxiv:2104]
- Contrastive Learning with Stronger Augmentations [arxiv:2104]
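Most of the contrastive methods above (the CPC, SimCLR, and MoCo families, and many of their variants) train on some form of the InfoNCE / NT-Xent objective. As a quick orientation, here is a minimal NumPy sketch of that loss; the function name, batch shapes, and temperature value are illustrative assumptions, not taken from any single paper:

```python
# Minimal NumPy sketch of the NT-Xent (InfoNCE) loss shared by SimCLR/MoCo-style
# methods. Shapes and the temperature are illustrative, not from a specific paper.
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss for two batches of paired embeddings, each of shape (N, D)."""
    N = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)              # (2N, D): both views stacked
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize embeddings
    sim = z @ z.T / temperature                       # pairwise cosine similarities
    np.fill_diagonal(sim, -np.inf)                    # mask each sample's self-similarity
    # the positive for sample i is its augmented twin at index (i + N) mod 2N
    pos = np.concatenate([np.arange(N) + N, np.arange(N)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * N), pos] - logsumexp)  # cross-entropy vs. the positive
    return loss.mean()
```

Papers above differ mainly in where the negatives in `sim` come from (in-batch, a memory queue, mixed or adversarial negatives) and in how the two views `z1`, `z2` are produced.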
- Discriminative unsupervised feature learning with convolutional neural networks [NIPS2014]
- Unsupervised Deep Embedding for Clustering Analysis [ICML2016]
- Learning deep parsimonious representations [NIPS2016]
- Joint unsupervised learning of deep representations and image clusters [CVPR2016]
- Deep Clustering for Unsupervised Learning of Visual Features [ECCV2018]
- Unsupervised feature learning via non-parametric instance discrimination [CVPR2018]
- Unsupervised Representation Learning by Predicting Image Rotations [ICLR2018]
- DeeperCluster: Unsupervised Pre-Training of Image Features on Non-Curated Data [ICCV2019]
- Local Aggregation for Unsupervised Learning of Visual Embeddings [CVPR2019]
- Online Deep Clustering for Unsupervised Representation Learning [CVPR2020]
- CPCv2: Data-Efficient Image Recognition with Contrastive Predictive Coding [ICML2020]
- Generative Pretraining from Pixels [ICML2020]
- SimCLR: A Simple Framework for Contrastive Learning of Visual Representations [ICML2020]
- Self-labelling via simultaneous clustering and representation learning [ICLR2020]
- MoCo: Momentum contrast for unsupervised visual representation learning [CVPR2020]
- Learning Representations by Predicting Bags of Visual Words [CVPR2020] [*****]
- SimCLR_v2: Big Self-Supervised Models are Strong Semi-Supervised Learners [NIPS2020] [code]
- BYOL: Bootstrap Your Own Latent - A New Approach to Self-Supervised Learning [NIPS2020]
- SwAV: Unsupervised learning of visual features by contrasting cluster assignments [NIPS2020]
- InfoMin: What Makes for Good Views for Contrastive Learning? [NIPS2020]
- Self-Supervised Relational Reasoning for Representation Learning [NIPS2020]
- Supervised Contrastive Learning [NIPS2020]
- Self-Supervised Graph Transformer on Large-Scale Molecular Data [NIPS2020]
- Region Similarity Representation Learning [CVPR2021]
- VideoMoCo: Contrastive Video Representation Learning with Temporally Adversarial Examples [CVPR2021] [arxiv:2103]
- Exploring Simple Siamese Representation Learning [CVPR2021] [arxiv:2011]
- Self-supervised Geometric Perception [CVPR2021]
- Removing the Background by Adding the Background: Towards Background Robust Self-supervised Video Representation Learning [CVPR2021]
- Spatially Consistent Representation Learning [CVPR2021]
- Self-supervised Video Representation Learning by Context and Motion Decoupling [CVPR2021] [arxiv:2104]
- Reconsidering Representation Alignment for Multi-view Clustering [CVPR2021]
- Model-based 3D Hand Reconstruction via Self-Supervised Learning [CVPR2021]
- Dense Contrastive Learning for Self-Supervised Visual Pre-Training [CVPR2021] [arxiv:2011] [Pixel-level Contrast Learning] [code]
- PLADE-Net: Towards Pixel-Level Accuracy for Self-Supervised Single-View Depth Estimation with Neural Positional Encoding and Distilled Matting Loss [CVPR2021]
- AdCo: Adversarial Contrast for Efficient Learning of Unsupervised Representations from Self-Trained Negative Adversaries [CVPR2021] [*****]
- SEED: Self-supervised Distillation For Visual Representation [ICLR2021] [*****]
- Self-supervised Learning from a Multi-view Perspective [ICLR2021]
- SSD: A Unified Framework for Self-Supervised Outlier Detection [ICLR2021]
- Self-supervised Representation Learning with Relative Predictive Coding [ICLR2021] [*****]
- Theoretical Analysis of Self-Training with Deep Networks on Unlabeled Data [ICLR2021] [***]
- CoCon: A Self-Supervised Approach for Controlled Text Generation [ICLR2021]
- Self-supervised Adversarial Robustness for the Low-label, High-data Regime [ICLR2021]
- Model-Based Visual Planning with Self-Supervised Functional Distances [ICLR2021]
- Self-supervised Visual Reinforcement Learning with Object-centric Representations [ICLR2021]
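Several entries in this list (MoCo, BYOL, Momentum^2 Teacher, and the SEED-style distillation work) build on a momentum-updated teacher/target network rather than, or in addition to, explicit negatives. A minimal sketch of that exponential-moving-average update, with network parameters flattened to plain Python lists purely for illustration:

```python
# Minimal sketch of the momentum (EMA) teacher update used by MoCo/BYOL-style
# methods; parameters are plain Python lists here purely for illustration.
def ema_update(teacher, student, momentum=0.99):
    """Return teacher' = m * teacher + (1 - m) * student, element-wise."""
    return [momentum * t + (1.0 - momentum) * s for t, s in zip(teacher, student)]

# The teacher drifts smoothly toward the (here frozen) student over updates.
teacher, student = [0.0, 0.0], [1.0, -1.0]
for _ in range(200):
    teacher = ema_update(teacher, student, momentum=0.9)
```

The momentum coefficient controls how slowly the teacher tracks the student; the papers above use values close to 1 (e.g. 0.99-0.999) so the teacher provides stable targets.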