
Lightning-Hydra-Template


A clean and scalable template to kickstart your deep learning project 🚀⚡🔥
Click on Use this template to initialize a new repository.

Suggestions are always welcome!



📌  Introduction

This template tries to be as general as possible.

Effective usage requires learning a couple of technologies: PyTorch, PyTorch Lightning and Hydra. Knowledge of an experiment logging framework like Weights&Biases, Neptune or MLFlow is also recommended.

Why you should use it: it allows you to rapidly iterate over new models/datasets and scale your projects from small single experiments to hyperparameter searches on computing clusters, without writing any boilerplate code. To my knowledge, it's one of the most convenient all-in-one technology stacks for deep learning prototyping, and a quick starting point for reproducing papers, hackathons, Kaggle competitions or small-team research projects. It's also a collection of best practices for efficient workflow and reproducibility.

Why you shouldn't use it: this template is not designed to be a production/deployment environment; it should be used as a fast experimentation tool instead. Apart from that, Lightning and Hydra are still evolving and integrate many libraries, which means things sometimes break - for the list of currently known bugs, visit this page. Also, even though Lightning is very flexible, it's not well suited for every possible deep learning task. See #Limitations for more.

Why PyTorch Lightning?

PyTorch Lightning is a lightweight PyTorch wrapper for high-performance AI research. It keeps your code neatly organized and provides lots of useful features, like the ability to run your model on CPU, GPU, multi-GPU clusters and TPUs.

Why Hydra?

Hydra is an open-source Python framework that simplifies the development of research and other complex applications. The key feature is the ability to dynamically create a hierarchical configuration by composition and override it through config files and the command line. It allows you to conveniently manage experiments and provides many useful plugins, like Optuna Sweeper for hyperparameter search, or Ray Launcher for running jobs on a cluster.


Main Ideas Of This Template

  • Predefined Structure: clean and scalable so that work can easily be extended and replicated | #Project Structure
  • Rapid Experimentation: thanks to automating the pipeline with config files and Hydra command line superpowers | #Your Superpowers
  • Reproducibility: obtaining similar results is supported in multiple ways | #Reproducibility
  • Little Boilerplate: so the pipeline can be easily modified | #How It Works
  • Main Configuration: main config file specifies default training configuration | #Main Project Configuration
  • Experiment Configurations: can be composed out of smaller configs and override chosen hyperparameters | #Experiment Configuration
  • Workflow: comes down to 4 simple steps | #Workflow
  • Experiment Tracking: many logging frameworks can be easily integrated, like Tensorboard, MLFlow or W&B | #Experiment Tracking
  • Logs: all logs (checkpoints, data from loggers, hparams, etc.) are stored in a convenient folder structure imposed by Hydra | #Logs
  • Hyperparameter Search: made easier with Hydra built-in plugins like Optuna Sweeper | #Hyperparameter Search
  • Tests: unit tests and shell/command based tests for speeding up the development | #Tests
  • Best Practices: a couple of recommended tools, practices and standards for efficient workflow and reproducibility | #Best Practices

Project Structure

The directory structure of a new project looks like this:

├── configs                   <- Hydra configuration files
│   ├── callbacks                <- Callbacks configs
│   ├── datamodule               <- Datamodule configs
│   ├── debug                    <- Debugging configs
│   ├── experiment               <- Experiment configs
│   ├── hparams_search           <- Hyperparameter search configs
│   ├── local                    <- Local configs
│   ├── log_dir                  <- Logging directory configs
│   ├── logger                   <- Logger configs
│   ├── model                    <- Model configs
│   ├── trainer                  <- Trainer configs
│   │
│   ├── test.yaml             <- Main config for testing
│   └── train.yaml            <- Main config for training
│
├── data                   <- Project data
│
├── logs                   <- Logs generated by Hydra and PyTorch Lightning loggers
│
├── notebooks              <- Jupyter notebooks. Naming convention is a number (for ordering),
│                             the creator's initials, and a short `-` delimited description,
│                             e.g. `1.0-jqp-initial-data-exploration.ipynb`.
│
├── scripts                <- Shell scripts
│
├── src                    <- Source code
│   ├── datamodules              <- Lightning datamodules
│   ├── models                   <- Lightning models
│   ├── utils                    <- Utility scripts
│   ├── vendor                   <- Third party code that cannot be installed using PIP/Conda
│   │
│   ├── testing_pipeline.py
│   └── training_pipeline.py
│
├── tests                  <- Tests of any kind
│   ├── helpers                  <- A couple of testing utilities
│   ├── shell                    <- Shell/command based tests
│   └── unit                     <- Unit tests
│
├── test.py               <- Run testing
├── train.py              <- Run training
│
├── .env.example              <- Template of the file for storing private environment variables
├── .gitignore                <- List of files/folders ignored by git
├── .pre-commit-config.yaml   <- Configuration of pre-commit hooks for code formatting
├── requirements.txt          <- File for installing python dependencies
├── setup.cfg                 <- Configuration of linters and pytest
└── README.md

🚀  Quickstart

# clone project
git clone https://github.com/ashleve/lightning-hydra-template
cd lightning-hydra-template

# [OPTIONAL] create conda environment
conda create -n myenv python=3.8
conda activate myenv

# install pytorch according to instructions
# https://pytorch.org/get-started/

# install requirements
pip install -r requirements.txt

The template contains an example with MNIST classification.
Running python train.py will train it with the default configuration and print progress to the terminal.

⚡  Your Superpowers

Override any config parameter from command line

Hydra allows you to easily overwrite any parameter defined in your config.

python train.py trainer.max_epochs=20 model.lr=1e-4

You can also add new parameters with + sign.

python train.py +model.new_param="uwu"
Train on CPU, GPU, multi-GPU and TPU

PyTorch Lightning makes it easy to train your models on different hardware.

# train on CPU
python train.py trainer.gpus=0

# train on 1 GPU
python train.py trainer.gpus=1

# train on TPU
python train.py +trainer.tpu_cores=8

# train with DDP (Distributed Data Parallel) (4 GPUs)
python train.py trainer.gpus=4 +trainer.strategy=ddp

# train with DDP (Distributed Data Parallel) (8 GPUs, 2 nodes)
python train.py trainer.gpus=4 +trainer.num_nodes=2 +trainer.strategy=ddp
Train with mixed precision
# train with pytorch native automatic mixed precision (AMP)
python train.py trainer.gpus=1 +trainer.precision=16
Train model with any logger available in PyTorch Lightning, like Weights&Biases or Tensorboard

PyTorch Lightning provides convenient integrations with the most popular logging frameworks, like Tensorboard, Neptune or simple csv files. Read more here. Using wandb requires you to set up an account first. After that, just complete the config as shown below.
> Click here to see an example wandb dashboard generated with this template.

# set project and entity names in `configs/logger/wandb`
wandb:
  project: "your_project_name"
  entity: "your_wandb_team_name"
# train model with Weights&Biases (link to wandb dashboard should appear in the terminal)
python train.py logger=wandb
Train model with chosen experiment config

Experiment configurations are placed in configs/experiment/.

python train.py experiment=example
Attach some callbacks to run

Callbacks can be used for things such as model checkpointing, early stopping and many more.
Callbacks configurations are placed in configs/callbacks/.

python train.py callbacks=default
Use different tricks available in Pytorch Lightning

PyTorch Lightning provides over 40 useful trainer flags.

# gradient clipping may be enabled to avoid exploding gradients
python train.py +trainer.gradient_clip_val=0.5

# stochastic weight averaging can make your models generalize better
python train.py +trainer.stochastic_weight_avg=true

# run validation loop 4 times during a training epoch
python train.py +trainer.val_check_interval=0.25

# accumulate gradients
python train.py +trainer.accumulate_grad_batches=10

# terminate training after 12 hours
python train.py +trainer.max_time="00:12:00:00"
Easily debug

Visit configs/debug/ for different debugging configs.

# runs 1 epoch in default debugging mode
# changes logging directory to `logs/debugs/...`
# sets level of all command line loggers to 'DEBUG'
# enables extra trainer flags like tracking gradient norm
# enforces debug-friendly configuration
python train.py debug=default

# runs test epoch without training
python train.py debug=test_only

# run 1 train, val and test loop, using only 1 batch
python train.py +trainer.fast_dev_run=true

# raise exception if there are any numerical anomalies in tensors, like NaN or +/-inf
python train.py +trainer.detect_anomaly=true

# print execution time profiling after training ends
python train.py +trainer.profiler="simple"

# try overfitting to 1 batch
python train.py +trainer.overfit_batches=1 trainer.max_epochs=20

# use only 20% of the data
python train.py +trainer.limit_train_batches=0.2 \
+trainer.limit_val_batches=0.2 +trainer.limit_test_batches=0.2

# log the 2-norm of the model gradients
python train.py +trainer.track_grad_norm=2
Resume training from checkpoint

The checkpoint can be either a path or a URL.

python train.py trainer.resume_from_checkpoint="/path/to/ckpt/name.ckpt"

⚠️ Currently loading a ckpt in Lightning doesn't resume the logger experiment, but it will be supported in a future Lightning release.

Execute evaluation for a given checkpoint

The checkpoint can be either a path or a URL.

python test.py ckpt_path="/path/to/ckpt/name.ckpt"
Create a sweep over hyperparameters
# this will run 6 experiments one after the other,
# each with a different combination of batch_size and learning rate
python train.py -m datamodule.batch_size=32,64,128 model.lr=0.001,0.0005

⚠️ This sweep is not failure resistant (if one job crashes, then the whole sweep crashes).

Create a sweep over hyperparameters with Optuna

Using the Optuna Sweeper plugin doesn't require you to code any boilerplate into your pipeline - everything is defined in a single config file!

# this will run hyperparameter search defined in `configs/hparams_search/mnist_optuna.yaml`
# over chosen experiment config
python train.py -m hparams_search=mnist_optuna experiment=example_simple

⚠️ Currently this sweep is not failure resistant (if one job crashes, then the whole sweep crashes). This might be supported in a future Hydra release.

Execute all experiments from folder

Hydra provides special syntax for controlling the behavior of multiruns. Learn more here. The command below executes all experiments from the folder configs/experiment/.

python train.py -m 'experiment=glob(*)'
Execute sweep on a remote AWS cluster

This should be achievable with a simple config using the Ray AWS launcher for Hydra. An example is not yet implemented in this template.

Use Hydra tab completion

Hydra allows you to autocomplete config argument overrides in the shell as you write them, by pressing the tab key. Learn more here.

Apply pre-commit hooks

Apply pre-commit hooks to automatically format your code and configs, perform code analysis and remove output from jupyter notebooks. See # Best Practices for more.

pre-commit run -a

❤️  Contributions

Have a question? Found a bug? Missing a specific feature? Have an idea for improving documentation? Feel free to file a new issue, discussion or PR with respective title and description. If you already found a solution to your problem, don't hesitate to share it. Suggestions for new best practices are always welcome!


ℹ️  Guide

How To Get Started


How It Works

All PyTorch Lightning modules are dynamically instantiated from module paths specified in the config. Example model config:

_target_: src.models.mnist_module.MNISTLitModule
lr: 0.001

net:
  _target_: src.models.components.simple_dense_net.SimpleDenseNet
  input_size: 784
  lin1_size: 256
  lin2_size: 256
  lin3_size: 256
  output_size: 10

Using this config we can instantiate the object with the following line:

model = hydra.utils.instantiate(config.model)

This allows you to easily iterate over new models! Every time you create a new one, just specify its module path and parameters in the appropriate config file.

Switch between models and datamodules with command line arguments:

python train.py model=mnist

The whole pipeline managing the instantiation logic is placed in src/training_pipeline.py.
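Below is a simplified sketch of that logic, assuming a standard Hydra entry point (the real pipeline also instantiates callbacks and loggers, applies the seed and runs testing):

# a simplified sketch of the instantiation logic (not the full pipeline)
import hydra
from omegaconf import DictConfig
from pytorch_lightning import LightningDataModule, LightningModule, Trainer


def train(config: DictConfig):
    # each config group carries a `_target_` key pointing at the class to build
    datamodule: LightningDataModule = hydra.utils.instantiate(config.datamodule)
    model: LightningModule = hydra.utils.instantiate(config.model)
    trainer: Trainer = hydra.utils.instantiate(config.trainer)

    trainer.fit(model=model, datamodule=datamodule)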


Main Project Configuration

Location: configs/train.yaml
The main project config contains the default training configuration.
It determines how the config is composed when simply executing the command python train.py.

Show main project config
# specify here default training configuration
defaults:
  - _self_
  - datamodule: mnist.yaml
  - model: mnist.yaml
  - callbacks: default.yaml
  - logger: null # set logger here or use command line (e.g. `python train.py logger=tensorboard`)
  - trainer: default.yaml
  - log_dir: default.yaml

  # experiment configs allow for version control of specific configurations
  # e.g. best hyperparameters for each combination of model and datamodule
  - experiment: null

  # debugging config (enable through command line, e.g. `python train.py debug=default`)
  - debug: null

  # config for hyperparameter optimization
  - hparams_search: null

  # optional local config for machine/user specific settings
  # it's optional since it doesn't need to exist and is excluded from version control
  - optional local: default.yaml

  # enable color logging
  - override hydra/hydra_logging: colorlog
  - override hydra/job_logging: colorlog

# path to original working directory
# hydra hijacks working directory by changing it to the new log directory
# https://hydra.cc/docs/next/tutorials/basic/running_your_app/working_directory
original_work_dir: ${hydra:runtime.cwd}

# path to folder with data
data_dir: ${original_work_dir}/data/

# pretty print config at the start of the run using Rich library
print_config: True

# disable python warnings if they annoy you
ignore_warnings: True

# set False to skip model training
train: True

# evaluate on test set, using best model weights achieved during training
# lightning chooses best weights based on the metric specified in checkpoint callback
test: True

# seed for random number generators in pytorch, numpy and python.random
seed: null

# default name for the experiment, determines logging folder path
# (you can overwrite this name in experiment configs)
name: "default"

Experiment Configuration

Location: configs/experiment
Experiment configs allow you to overwrite parameters from the main project configuration.
For example, you can use them to version control best hyperparameters for each combination of model and dataset.

Show example experiment config
# to execute this experiment run:
# python train.py experiment=example

defaults:
  - override /datamodule: mnist.yaml
  - override /model: mnist.yaml
  - override /callbacks: default.yaml
  - override /logger: null
  - override /trainer: default.yaml

# all parameters below will be merged with parameters from default configurations set above
# this allows you to overwrite only specified parameters

# name of the run determines folder name in logs
name: "simple_dense_net"

seed: 12345

trainer:
  min_epochs: 10
  max_epochs: 10
  gradient_clip_val: 0.5

model:
  lr: 0.002
  net:
    lin1_size: 128
    lin2_size: 256
    lin3_size: 64

datamodule:
  batch_size: 64

logger:
  wandb:
    tags: ["mnist", "${name}"]

Local Configuration

Location: configs/local
Some configurations are user/machine/installation specific (e.g. configuration of a local cluster, or hard drive paths on a specific machine). For such scenarios, a file configs/local/default.yaml can be created, which is automatically loaded but not tracked by Git.

Show example local Slurm cluster config
# @package _global_

defaults:
  - override /hydra/launcher@_here_: submitit_slurm

data_dir: /mnt/scratch/data/

hydra:
  launcher:
    timeout_min: 1440
    gpus_per_task: 1
    gres: gpu:1
  job:
    env_set:
      MY_VAR: /home/user/my/system/path
      MY_KEY: asdgjhawi8y23ihsghsueity23ihwd

Workflow

  1. Write your PyTorch Lightning module (see models/mnist_module.py for example, or the minimal sketch after this list)
  2. Write your PyTorch Lightning datamodule (see datamodules/mnist_datamodule.py for example)
  3. Write your experiment config, containing paths to your model and datamodule
  4. Run training with chosen experiment config: python train.py experiment=experiment_name
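
For illustration, here is a minimal sketch of step 1 (all names below are placeholders, not code from the template):

# a minimal sketch of a LightningModule (placeholder names, not template code)
import torch
from pytorch_lightning import LightningModule


class MyLitModule(LightningModule):
    def __init__(self, net: torch.nn.Module, lr: float = 0.001):
        super().__init__()
        self.net = net
        self.lr = lr
        self.criterion = torch.nn.CrossEntropyLoss()

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = self.criterion(self.net(x), y)
        self.log("train/loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.lr)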

Logs

Hydra creates a new working directory for every executed run. By default, logs have the following structure:

├── logs
│   ├── experiments                     # Folder for the logs generated by experiments
│   │   ├── runs                          # Folder for single runs
│   │   │   ├── experiment_name             # Experiment name
│   │   │   │   ├── YYYY-MM-DD_HH-MM-SS       # Datetime of the run
│   │   │   │   │   ├── .hydra                  # Hydra logs
│   │   │   │   │   ├── csv                     # Csv logs
│   │   │   │   │   ├── wandb                   # Weights&Biases logs
│   │   │   │   │   ├── checkpoints             # Training checkpoints
│   │   │   │   │   └── ...                     # Any other thing saved during training
│   │   │   │   └── ...
│   │   │   └── ...
│   │   │
│   │   └── multiruns                     # Folder for multiruns
│   │       ├── experiment_name             # Experiment name
│   │       │   ├── YYYY-MM-DD_HH-MM-SS       # Datetime of the multirun
│   │       │   │   ├── 1                        # Multirun job number
│   │       │   │   ├── 2
│   │       │   │   └── ...
│   │       │   └── ...
│   │       └── ...
│   │
│   ├── evaluations                       # Folder for the logs generated during testing
│   │   └── ...
│   │
│   └── debugs                            # Folder for the logs generated during debugging
│       └── ...

You can change this structure by modifying paths in hydra configuration.
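
For example, the directory for single runs could be overridden like this (a hedged sketch; the exact keys live in the configs/log_dir configs and the path pattern below is illustrative, not the template's default):

# a hedged example of overriding Hydra's run directory
hydra:
  run:
    dir: logs/experiments/runs/${name}/${now:%Y-%m-%d}_${now:%H-%M-%S}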


Experiment Tracking

PyTorch Lightning supports many popular logging frameworks:
Weights&Biases · Neptune · Comet · MLFlow · Tensorboard

These tools help you keep track of hyperparameters and output metrics and allow you to compare and visualize results. To use one of them simply complete its configuration in configs/logger and run:

python train.py logger=logger_name

You can use many of them at once (see configs/logger/many_loggers.yaml for example).

You can also write your own logger.

Lightning provides a convenient method for logging custom metrics from inside the LightningModule. Read the docs here or take a look at the MNIST example.
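
A minimal sketch of such logging (the module and the loss helper are illustrative):

# a minimal sketch of logging a custom metric from inside a LightningModule
from pytorch_lightning import LightningModule


class LitModel(LightningModule):
    def training_step(self, batch, batch_idx):
        loss = self.compute_loss(batch)  # hypothetical helper
        # aggregated per epoch and shown under the "train/" section in loggers
        self.log("train/loss", loss, on_step=False, on_epoch=True)
        return loss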


Hyperparameter Search

Defining hyperparameter optimization is as easy as adding a new config file to configs/hparams_search.

Show example
defaults:
  - override /hydra/sweeper: optuna

# choose metric which will be optimized by Optuna
optimized_metric: "val/acc_best"

hydra:
  # here we define Optuna hyperparameter search
  # it optimizes for value returned from function with @hydra.main decorator
  # learn more here: https://hydra.cc/docs/next/plugins/optuna_sweeper
  sweeper:
    _target_: hydra_plugins.hydra_optuna_sweeper.optuna_sweeper.OptunaSweeper
    storage: null
    study_name: null
    n_jobs: 1

    # 'minimize' or 'maximize' the objective
    direction: maximize

    # number of experiments that will be executed
    n_trials: 20

    # choose Optuna hyperparameter sampler
    # learn more here: https://optuna.readthedocs.io/en/stable/reference/samplers.html
    sampler:
      _target_: optuna.samplers.TPESampler
      seed: 12345
      consider_prior: true
      prior_weight: 1.0
      consider_magic_clip: true
      consider_endpoints: false
      n_startup_trials: 10
      n_ei_candidates: 24
      multivariate: false
      warn_independent_sampling: true

    # define range of hyperparameters
    search_space:
      datamodule.batch_size:
        type: categorical
        choices: [32, 64, 128]
      model.lr:
        type: float
        low: 0.0001
        high: 0.2
      model.net.lin1_size:
        type: categorical
        choices: [32, 64, 128, 256, 512]
      model.net.lin2_size:
        type: categorical
        choices: [32, 64, 128, 256, 512]
      model.net.lin3_size:
        type: categorical
        choices: [32, 64, 128, 256, 512]

Next, you can execute it with: python train.py -m hparams_search=mnist_optuna

Using this approach doesn't require you to add any boilerplate into your pipeline - everything is defined in a single config file.

You can use different optimization frameworks integrated with Hydra, like Optuna, Ax or Nevergrad.

The optimization_results.yaml file will be available under the logs/multirun folder.

This approach doesn't support advanced techniques like pruning - for more sophisticated searches, you probably shouldn't use the Hydra multirun feature and should instead write your own optimization pipeline.
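
As a rough illustration of that alternative, a hand-rolled Optuna study with a pruner might look like this (train_fn is a hypothetical function that runs training, reports intermediate values to the trial and returns the optimized metric):

# a rough sketch of a custom Optuna study with pruning (train_fn is hypothetical)
import optuna


def objective(trial: optuna.Trial) -> float:
    lr = trial.suggest_float("lr", 1e-4, 0.2, log=True)
    batch_size = trial.suggest_categorical("batch_size", [32, 64, 128])
    return train_fn(lr=lr, batch_size=batch_size, trial=trial)


study = optuna.create_study(direction="maximize", pruner=optuna.pruners.MedianPruner())
study.optimize(objective, n_trials=20)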


Inference

The following code is an example of loading a model from a checkpoint and running predictions.

Show example
from PIL import Image
from torchvision import transforms

from src.models.mnist_module import MNISTLitModule


def predict():
    """Example of inference with trained model.
    It loads trained image classification model from checkpoint.
    Then it loads example image and predicts its label.
    """

    # ckpt can be also a URL!
    CKPT_PATH = "last.ckpt"

    # load model from checkpoint
    # model __init__ parameters will be loaded from ckpt automatically
    # you can also pass some parameter explicitly to override it
    trained_model = MNISTLitModule.load_from_checkpoint(checkpoint_path=CKPT_PATH)

    # print model hyperparameters
    print(trained_model.hparams)

    # switch to evaluation mode
    trained_model.eval()
    trained_model.freeze()

    # load data
    img = Image.open("data/example_img.png").convert("L")  # convert to grayscale
    # img = Image.open("data/example_img.png").convert("RGB")  # convert to RGB

    # preprocess
    mnist_transforms = transforms.Compose(
        [
            transforms.ToTensor(),
            transforms.Resize((28, 28)),
            transforms.Normalize((0.1307,), (0.3081,)),
        ]
    )
    img = mnist_transforms(img)
    img = img.reshape((1, *img.size()))  # reshape to form batch of size 1

    # inference
    output = trained_model(img)
    print(output)


if __name__ == "__main__":
    predict()

Tests

The template comes with example tests implemented with the pytest library. To execute them simply run:

# run all tests
pytest

# run tests from specific file
pytest tests/shell/test_basic_commands.py

# run all tests except the ones marked as slow
pytest -k "not slow"

To speed up the development, you can once in a while execute tests that run a couple of quick experiments, like training 1 epoch on 25% of the data, executing a single train/val/test step, etc. Those kinds of tests don't check for any specific output - they simply verify that executing certain commands doesn't throw exceptions. You can find them implemented in the tests/shell folder.

You can easily modify the commands in the scripts for your use case. If 1 epoch is too much for your model, then make it run for a couple of batches instead (by using the right trainer flags).
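
A minimal sketch of such a test (assuming a run_command helper like the one in tests/helpers, which fails the test when the command exits with an error):

# a minimal sketch of a shell/command based test
from tests.helpers.run_command import run_command  # assumed helper


def test_fast_dev_run():
    """Verify that a quick dev run doesn't throw exceptions."""
    run_command(["train.py", "+trainer.fast_dev_run=true"])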


Callbacks

The branch wandb-callbacks contains example callbacks enabling better Weights&Biases integration, which you can use as a reference for writing your own callbacks (see wandb_callbacks.py).

Callbacks which support reproducibility:

  • WatchModel
  • UploadCodeAsArtifact
  • UploadCheckpointsAsArtifact

Callbacks which provide examples of logging custom visualisations:

  • LogConfusionMatrix
  • LogF1PrecRecHeatmap
  • LogImagePredictions

To try all of the callbacks at once, switch to the right branch:

git checkout wandb-callbacks

And then run the following command:

python train.py logger=wandb callbacks=wandb

To see the result of all the callbacks attached, take a look at this experiment dashboard.


Multi-GPU Training

Lightning supports multiple ways of doing distributed training. The most common one is DDP, which spawns a separate process for each GPU and averages gradients between them. To learn about other approaches, read the lightning docs.

You can run DDP on mnist example with 4 GPUs like this:

python train.py trainer.gpus=4 +trainer.strategy=ddp

⚠️ When using DDP you have to be careful how you write your models - learn more here.
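
As hedged examples, two common DDP-friendly patterns inside a LightningModule (the module and metric below are illustrative):

# hedged examples of DDP-friendly patterns (placeholder names)
import torch
from pytorch_lightning import LightningModule


class DDPSafeModel(LightningModule):
    def validation_step(self, batch, batch_idx):
        x, y = batch
        # create new tensors on the module's device instead of hard-coding .cuda()
        noise = torch.randn(x.shape, device=self.device)
        loss = torch.nn.functional.mse_loss(x + noise, x)
        # sync_dist reduces the logged value across all DDP processes
        self.log("val/loss", loss, sync_dist=True)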


Docker

First you will need to install Nvidia Container Toolkit to enable GPU support.

The template Dockerfile is provided on branch dockerfiles. Copy it to the template root folder.

To build the container use:

docker build -t <project_name> .

To mount the project to the container use:

docker run -v $(pwd):/workspace/project --gpus all -it --rm <project_name>

Reproducibility

What provides reproducibility:

  • Hydra manages your configs.
  • Hydra manages your logging paths and makes every executed run store its hyperparameters and config overrides in a separate file in logs.
  • LightningDataModule allows you to encapsulate data split, transformations and default parameters in a single, clean abstraction.
  • LightningModule separates your research code from engineering code in a clean way.
  • Experiment tracking frameworks take care of logging metrics and hparams; some can also store results and artifacts in the cloud.
  • Pytorch Lightning takes care of creating training checkpoints.
  • Example callbacks for wandb show how you can save and upload a snapshot of codebase every time the run is executed, as well as upload ckpts and track model gradients.

Limitations

  • Currently, the template doesn't support k-fold cross validation, but it's possible to achieve it with the Lightning Loop interface. See the official example. Implementing it requires rewriting the training pipeline.
  • Currently, hyperparameter search with the Hydra Optuna Plugin doesn't support pruning.
  • Hydra changes the working directory to a new logging folder for every executed run, which might not be compatible with the way some libraries work.
  • Restoring logger state is currently not supported. This might change in a future Lightning release.

Useful Tricks

Accessing datamodule attributes in model
  1. The simplest way is to pass the datamodule attribute directly to the model on initialization:

    # ./src/training_pipeline.py
    datamodule = hydra.utils.instantiate(config.datamodule)
    model = hydra.utils.instantiate(config.model, some_param=datamodule.some_param)

    This is not a very robust solution, since it assumes all your datamodules have a some_param attribute available (otherwise the run will crash).

  2. If you only want to access the datamodule config, you can simply pass it as an init parameter:

    # ./src/training_pipeline.py
    model = hydra.utils.instantiate(config.model, dm_conf=config.datamodule, _recursive_=False)

    Now you can access any datamodule config part like this:

    # ./src/models/my_model.py
    from pytorch_lightning import LightningModule

    class MyLitModel(LightningModule):
        def __init__(self, dm_conf, param1, param2):
            super().__init__()

            batch_size = dm_conf.batch_size
  3. If you need to access the datamodule object attributes, a slightly hacky solution is to add an Omegaconf resolver to your datamodule:

    # ./src/datamodules/my_datamodule.py
    from omegaconf import OmegaConf
    from pytorch_lightning import LightningDataModule

    class MyDataModule(LightningDataModule):
        def __init__(self, param1, param2):
            super().__init__()

            self.param1 = param1

            resolver_name = "datamodule"
            OmegaConf.register_new_resolver(
                resolver_name,
                lambda name: getattr(self, name),
                use_cache=False,
            )

    This way you can reference any datamodule attribute from your config like this:

    # this will return attribute 'param1' from datamodule object
    param1: ${datamodule:param1}

    When later accessing this field, say in your lightning model, it will get automatically resolved based on all registered resolvers. Remember not to access this field before the datamodule is initialized, or it will crash.

    You also need to set resolve=False in print_config(...) in utils to prevent config printing from accessing the parameter before the datamodule is initialized:

    # ./src/utils/__init__.py
    def extras(config: DictConfig) -> None:

        ...

        log.info("Printing config tree with Rich! <config.print_config=True>")
        print_config(config, resolve=False)

        ...
Automatic activation of virtual environment and tab completion when entering folder
  1. Create a new file called .autoenv (this name is excluded from version control in .gitignore).
    You can use it to automatically execute shell commands when entering a folder. Add some commands to your .autoenv file, like in the example below:

    # activate conda environment
    conda activate myenv
    
    # activate hydra tab completion for bash
    eval "$(python train.py -sc install=bash)"

    (these commands will be executed whenever you're opening or switching a terminal to a folder containing the .autoenv file)

  2. To set up this automation for bash, execute the following line (it will append to your .bashrc file):

    echo "autoenv() { if [ -x .autoenv ]; then source .autoenv ; echo '.autoenv executed' ; fi } ; cd() { builtin cd \"\$@\" ; autoenv ; } ; autoenv" >> ~/.bashrc
  3. Lastly, add execution privileges to your .autoenv file:

    chmod +x .autoenv
    

    (for safety, only an .autoenv file with execution privileges will be executed)

Explanation

The mentioned line appends 2 commands to your .bashrc file:

  1. autoenv() { if [ -x .autoenv ]; then source .autoenv ; echo '.autoenv executed' ; fi } - this declares the autoenv() function, which executes the .autoenv file if it exists in the current work dir and has execution privileges
  2. cd() { builtin cd \"\$@\" ; autoenv ; } ; autoenv - this extends the behaviour of the cd command, making it execute the autoenv() function each time you change folders in the terminal or open a new terminal

Best Practices

Use Miniconda for GPU environments

Use miniconda for your python environments (it's usually unnecessary to install the full anaconda environment; miniconda should be enough). It makes it easier to install some dependencies, like cudatoolkit for GPU support. It also allows you to access your environments globally.

Example installation:

wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh

Create new conda environment:

conda create -n myenv python=3.8
conda activate myenv
Use automatic code formatting

Use pre-commit hooks to standardize code formatting of your project and save mental energy.
Simply install pre-commit package with:

pip install pre-commit

Next, install hooks from .pre-commit-config.yaml:

pre-commit install

After that your code will be automatically reformatted on every new commit.
Currently the template contains configurations of black (python code formatting), isort (python import sorting), docformatter (python docstring formatting), flake8 (python code analysis), prettier (yaml formatting) and nbstripout (clearing output from jupyter notebooks).

To reformat all files in the project use command:

pre-commit run -a
Set private environment variables in .env file

System specific variables (e.g. absolute paths to datasets) should not be under version control, or it will result in conflicts between different users. Your private keys also shouldn't be versioned, since you don't want them to be leaked.

The template contains a .env.example file, which serves as an example. Create a new file called .env (this name is excluded from version control in .gitignore). You should use it for storing environment variables like this:

MY_VAR=/home/user/my_system_path

All variables from .env are loaded in train.py automatically.
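
Under the hood, this loading typically boils down to a call like the following (assuming the python-dotenv package):

# a sketch of .env loading (assuming the python-dotenv package)
import dotenv

# read key=value pairs from the .env file into os.environ
dotenv.load_dotenv(override=True)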

Hydra allows you to reference any env variable in .yaml configs like this:

path_to_data: ${oc.env:MY_VAR}
Name metrics using '/' character

Depending on which logger you're using, it's often useful to define metric names with the / character:

self.log("train/loss", loss)

This way loggers will treat your metrics as belonging to different sections, which helps to keep them organized in the UI.

Use torchmetrics

Use the official torchmetrics library to ensure proper calculation of metrics. This is especially important for multi-GPU training!

For example, instead of calculating accuracy by yourself, you should use the provided Accuracy class like this:

from torchmetrics.classification.accuracy import Accuracy


class LitModel(LightningModule):
    def __init__(self):
        super().__init__()
        self.train_acc = Accuracy()
        self.val_acc = Accuracy()

    def training_step(self, batch, batch_idx):
        ...
        acc = self.train_acc(predictions, targets)
        self.log("train/acc", acc)
        ...

    def validation_step(self, batch, batch_idx):
        ...
        acc = self.val_acc(predictions, targets)
        self.log("val/acc", acc)
        ...

Make sure to use a different metric instance for each step to ensure proper value reduction over all GPU processes.

Torchmetrics provides metrics for most use cases, like F1 score or confusion matrix. Read the documentation for more.

Follow PyTorch Lightning style guide

The style guide is available here.

  1. Be explicit in your init. Try to define all the relevant defaults so that the user doesn’t have to guess. Provide type hints. This way your module is reusable across projects!

    class LitModel(LightningModule):
        def __init__(self, layer_size: int = 256, lr: float = 0.001):
  2. Preserve the recommended method order.

    class LitModel(LightningModule):
    
        def __init__():
            ...
    
        def forward():
            ...
    
        def training_step():
            ...
    
        def training_step_end():
            ...
    
        def training_epoch_end():
            ...
    
        def validation_step():
            ...
    
        def validation_step_end():
            ...
    
        def validation_epoch_end():
            ...
    
        def test_step():
            ...
    
        def test_step_end():
            ...
    
        def test_epoch_end():
            ...
    
        def configure_optimizers():
            ...
    
        def any_extra_hook():
            ...
Version control your data and models with DVC

Use DVC to version control big files, like your data or trained ML models.
To initialize the dvc repository:

dvc init

To start tracking a file or directory, use dvc add:

dvc add data/MNIST

DVC stores information about the added file (or a directory) in a special .dvc file named data/MNIST.dvc, a small text file with a human-readable format. This file can be easily versioned like source code with Git, as a placeholder for the original data:

git add data/MNIST.dvc data/.gitignore
git commit -m "Add raw data"
Support installing project as a package

It allows other people to easily use your modules in their own projects. Change the name of the src folder to your project name and add a setup.py file:

from setuptools import find_packages, setup


setup(
    name="src",  # change "src" folder name to your project name
    version="0.0.0",
    description="Describe Your Cool Project",
    author="...",
    author_email="...",
    url="https://github.com/ashleve/lightning-hydra-template",  # replace with your own github project link
    install_requires=[
        "pytorch>=1.10.0",
        "pytorch-lightning>=1.4.0",
        "hydra-core>=1.1.0",
    ],
    packages=find_packages(),
)

Now your project can be installed from local files:

pip install -e .

Or directly from git repository:

pip install git+https://github.com/YourGithubName/your-repo-name.git --upgrade

So any file can be easily imported into any other file like so:

from project_name.models.mnist_module import MNISTLitModule
from project_name.datamodules.mnist_datamodule import MNISTDataModule

Other Repositories

Inspirations

This template was inspired by: PyTorchLightning/deep-learning-project-template, drivendata/cookiecutter-data-science, tchaton/lightning-hydra-seed, Erlemar/pytorch_tempest, lucmos/nn-template.

Useful repositories

License

This project is licensed under the MIT License.

MIT License

Copyright (c) 2021 ashleve

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.




DELETE EVERYTHING ABOVE FOR YOUR PROJECT


Your Project Name


Description

What it does

How to run

Install dependencies

# clone project
git clone https://github.com/YourGithubName/your-repo-name
cd your-repo-name

# [OPTIONAL] create conda environment
conda create -n myenv python=3.8
conda activate myenv

# install pytorch according to instructions
# https://pytorch.org/get-started/

# install requirements
pip install -r requirements.txt

Train model with default configuration

# train on CPU
python train.py trainer.gpus=0

# train on GPU
python train.py trainer.gpus=1

Train model with chosen experiment configuration from configs/experiment/

python train.py experiment=experiment_name.yaml

You can override any parameter from the command line like this

python train.py trainer.max_epochs=20 datamodule.batch_size=64