MULTI-AUGMENTATION FOR EFFICIENT VISUAL REPRESENTATION LEARNING FOR SELF-SUPERVISED PRE-TRAINING

This repo is the official TensorFlow implementation of MASSRL.

[MASSRL paper](https://arxiv.org/abs/2205.11772)

[Blog Post]("Coming Soon")

This repo contains the source code for the MASSRL multi-augmentation strategies, implemented in TensorFlow to make experimentation effortless and less error-prone.

Table of Contents

- Installation
- Visualizing the MASSRL Multi-Augmentation Strategies
- Configuring Self-Supervised Pretraining
- Dataset
- Contribution Guidelines
- Citation for Our Paper

Installation

Install the following dependencies on your local machine with pip or conda (an example pip command follows the list):
  • tensorflow==2.7.0, tensorflow-addons==0.15.0, tensorflow-datasets==4.4.0, tensorflow-estimator==2.7.0
  • tqdm
  • wandb
  • imgaug
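
For example, installing everything at once with pip (conda users can substitute the equivalent conda packages):

```bash
pip install tensorflow==2.7.0 tensorflow-addons==0.15.0 \
    tensorflow-datasets==4.4.0 tensorflow-estimator==2.7.0 \
    tqdm wandb imgaug
```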

Visualizing the MASSRL Multi-Augmentation Strategies

Open the Multi-Augmentation Strategies visualization notebook in Google Colab: https://colab.research.google.com/drive/1fquGOr_psJfDXxOmdFVkfrbedGfi1t-X?usp=sharing

Note: the augmentation visualization does not require any training --- we only visualize images after applying the different augmentation transformations. However, you need to make sure that the dataset is appropriately passed down to the constructor of all submodules. If you want to see this happen, please upvote [this Repo issue].
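
For a quick offline look at the same idea, here is a minimal sketch using the imgaug dependency listed above; the placeholder image and the RandAugment settings (n=2, m=9) are illustrative assumptions, not the notebook's exact code:

```python
import numpy as np
import imgaug.augmenters as iaa
import matplotlib.pyplot as plt  # bundled with Colab

# Placeholder input: replace with a real image loaded as a uint8 HxWx3 array.
image = np.random.randint(0, 255, size=(224, 224, 3), dtype=np.uint8)

# RandAugment-style pipeline: n random ops per image at magnitude m
# (available in imgaug >= 0.4.0).
augmenter = iaa.RandAugment(n=2, m=9)

# Apply the pipeline several times to see different random transformations.
augmented = [augmenter(image=image) for _ in range(4)]

fig, axes = plt.subplots(1, 5, figsize=(15, 3))
axes[0].imshow(image)
axes[0].set_title("original")
for ax, img in zip(axes[1:], augmented):
    ax.imshow(img)
    ax.set_title("augmented")
for ax in axes:
    ax.axis("off")
plt.show()
```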

Configuring Self-Supervised Pretraining

This implementation supports both single-GPU and multi-GPU training.

To run self-supervised pre-training of a ResNet-50 model on ImageNet with 1-8 GPUs, follow these steps:

**1. Configure the training hyperparameters**:

- You can change the training hyperparameter settings (dataset paths and all other training hyperparameters) using
config/non_contrast_config_v1.py as the reference configuration; an illustrative excerpt is sketched below.
- Make sure each of your GPUs has >= 12 GB of memory for ResNet-50; we recommend training on 4 to 8 GPUs.
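
A hypothetical excerpt of what such a configuration might contain; every field name and value below is an illustrative assumption, so treat config/non_contrast_config_v1.py as the source of truth:

```python
# Hypothetical configuration excerpt -- the real schema lives in
# config/non_contrast_config_v1.py; all names and values here are
# illustrative placeholders.
config = dict(
    train_path="/path/to/imagenet/train",  # dataset location (placeholder)
    val_path="/path/to/imagenet/val",
    image_size=224,
    train_batch_size=128,  # pairs with base_lr=0.2 on 8 GPUs (see Note below)
    base_lr=0.2,
    num_gpus=8,
)
```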

**2. Execute MASSRL with the three augmentation strategies (SimCLR's augmentation pipeline, RandAugment, AutoAugment)**:

- Navigate to the directory containing
self_supervised_learning_frameworks/none_contrastive_framework/run_MASSRL.py
- Execute the 🏃‍♀️ file:

```bash
python run_MASSRL.py
```

Note: for 8-GPU training, we recommend following the linear lr scaling recipe: --lr 0.2 --batch-size 128; the other hyperparameters can be left at their defaults. For 1-GPU training, we recommend --lr 0.3 --batch-size 256, again leaving the other hyperparameters at their defaults.
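
Assuming run_MASSRL.py exposes these recipes as the command-line flags named in the note above (an assumption on our part), the invocations would look like:

```bash
# 8-GPU training (linear lr scaling recipe)
python run_MASSRL.py --lr 0.2 --batch-size 128

# single-GPU training
python run_MASSRL.py --lr 0.3 --batch-size 256
```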

Dataset

Note: this work uses the public ImageNet dataset; if you have your own dataset, you can change the path accordingly.

Download the ImageNet-1K dataset from https://www.image-net.org/download.php.
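
As a generic sketch of reading an ImageNet-style class-per-folder layout in TensorFlow 2.7 (this is not the repo's actual input pipeline, and the path is a placeholder):

```python
import tensorflow as tf

# Generic illustration only -- the repo's input pipeline may differ.
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "/path/to/imagenet/train",  # placeholder path
    label_mode=None,            # self-supervised pre-training needs no labels
    image_size=(224, 224),
    batch_size=128,
)
train_ds = train_ds.prefetch(tf.data.AUTOTUNE)
```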

Using your own dataset

Update Soon

Changing the dataset path (your path) in the pretraining flags:

Update Soon

Hyperparameter Setting

Update Soon

Number of Augmentation Strategies Implementation

Update Soon
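
Until this section is written, here is a conceptual sketch of the multi-augmentation idea: each strategy produces one view of the same image, so the number of enabled strategies sets the number of views. The three functions below are simplified stand-ins, not the repo's actual SimCLR/RandAugment/AutoAugment implementations:

```python
import tensorflow as tf

# Simplified stand-ins for the three augmentation strategies.
def simclr_style(image):
    image = tf.image.random_flip_left_right(image)
    return tf.image.random_brightness(image, max_delta=0.4)

def randaugment_style(image):
    return tf.image.random_contrast(image, 0.5, 1.5)

def autoaugment_style(image):
    return tf.image.random_saturation(image, 0.6, 1.4)

STRATEGIES = [simclr_style, randaugment_style, autoaugment_style]

def multi_augment(image):
    # One view per enabled strategy; the number of strategies controls how
    # many views of each image the pre-training objective sees.
    return [fn(image) for fn in STRATEGIES]

views = multi_augment(tf.random.uniform((224, 224, 3)))
print(len(views))  # 3 views
```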

Training on a Single GPU or Multiple GPUs

Update Soon
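
Until this section is written, here is a minimal sketch of the standard TensorFlow multi-GPU pattern (tf.distribute.MirroredStrategy); whether run_MASSRL.py uses exactly this strategy is an assumption:

```python
import tensorflow as tf

# Standard TensorFlow multi-GPU pattern, shown only as orientation.
strategy = tf.distribute.MirroredStrategy()  # uses all visible GPUs
print("Number of replicas:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Model and optimizer must be created inside the strategy scope so
    # their variables are mirrored across GPUs.
    backbone = tf.keras.applications.ResNet50(include_top=False, weights=None)
    optimizer = tf.keras.optimizers.SGD(learning_rate=0.2, momentum=0.9)
```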

Contribution Guidelines

Awesome! Thank you for being a part of this project. Before you start contributing to this repository, please quickly go through the guidelines. Update Soon

See Also

Citation for Our Paper

@article{TranMASSRL,
  author  = {Tran, Van-Nhiem and Huang, Chi-En and Liu, Shen-Hsuan and Yang, Kai-Lin and Ko, Timothy and Li, Yung-Hui},
  title   = {Multi-Augmentation for Efficient Visual Representation Learning for Self-Supervised Pre-training},
  journal = {arXiv preprint arXiv:2205.11772},
  year    = {2022},
}