MONAI Tutorials
This repository hosts the MONAI tutorials.
1. Requirements
Most of the examples and tutorials require matplotlib and Jupyter Notebook.
These can be installed with:
python -m pip install -U pip
python -m pip install -U matplotlib
python -m pip install -U notebook
Some of the examples may require optional dependencies. In case of any optional import errors, please install the relevant packages according to MONAI's installation guide, or install all optional requirements with:
pip install -r https://raw.githubusercontent.com/Project-MONAI/MONAI/dev/requirements-dev.txt
Run the notebooks from Colab
Most of the Jupyter Notebooks have an "Open in Colab" button. Please right-click on the button, and select "Open Link in New Tab" to start a Colab page with the corresponding notebook content.
To use GPU resources through Colab, please remember to change the runtime type to `GPU`:

- From the `Runtime` menu select `Change runtime type`
- Choose `GPU` from the drop-down menu
- Click `SAVE`
This will reset the notebook and may ask you if you are a robot (these instructions assume you are not).
Running `!nvidia-smi` in a cell will verify this has worked and show you what kind of hardware you have access to.
Data
Some notebooks will require additional data. They can be downloaded by running the `runexamples.sh` script.
2. Questions and bugs
- For questions relating to the use of MONAI, please use our Discussions tab on the main repository of MONAI.
- For bugs relating to MONAI functionality, please create an issue on the main repository.
- For bugs relating to the running of a tutorial, please create an issue in this repository.
3. Note to developers
During integration testing, we run these notebooks. To save time, we modify variables to avoid unnecessary `for` loop iterations. Hence, during training please use the variables `max_epochs` and `val_interval` for the number of training epochs and validation interval, respectively.
If your notebook doesn't use the idea of epochs, then please add it to the variable `doesnt_contain_max_epochs` in `runner.sh`. This lets the runner know that it's not a problem if it doesn't find `max_epochs`.
If you have any other variables that would benefit from being set to `1` during testing, add them to `strings_to_replace` in `runner.sh`.
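For example, a training cell in a notebook might look roughly like this (an illustrative sketch only; `train_one_epoch` and `run_validation` are hypothetical placeholders for the notebook's own logic):

```python
def train_one_epoch():
    ...  # the notebook's training logic goes here

def run_validation():
    ...  # the notebook's validation logic goes here

max_epochs = 600   # the test runner patches this to a small value during integration testing
val_interval = 2   # run validation every `val_interval` epochs

for epoch in range(max_epochs):
    train_one_epoch()
    if (epoch + 1) % val_interval == 0:
        run_validation()
```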
4. List of notebooks and examples
2D classification
mednist_tutorial
This notebook shows how to easily integrate MONAI features into existing PyTorch programs. It's based on the MedNIST dataset, which is well suited to beginners as a tutorial. This tutorial also makes use of MONAI's built-in occlusion sensitivity functionality.
2D segmentation
torch examples
Training and evaluation examples of 2D segmentation based on UNet and a synthetic dataset. The examples are standard PyTorch programs and have both dictionary-based and array-based versions.
3D classification
ignite examples
Training and evaluation examples of 3D classification based on DenseNet3D and the IXI dataset. The examples are PyTorch Ignite programs and have both dictionary-based and array-based transformation versions.
torch examples
Training and evaluation examples of 3D classification based on DenseNet3D and the IXI dataset. The examples are standard PyTorch programs and have both dictionary-based and array-based transformation versions.
3D segmentation
ignite examples
Training and evaluation examples of 3D segmentation based on UNet3D and a synthetic dataset. The examples are PyTorch Ignite programs and have both dictionary-based and array-based transformations.
torch examples
Training, evaluation and inference examples of 3D segmentation based on UNet3D and a synthetic dataset. The examples are standard PyTorch programs and have both dictionary-based and array-based versions.
brats_segmentation_3d
This tutorial shows how to construct a training workflow for a multi-label segmentation task based on the MSD Brain Tumor dataset.
spleen_segmentation_3d_lightning
This notebook shows how MONAI may be used in conjunction with the PyTorch Lightning framework.
spleen_segmentation_3d
This notebook is an end-to-end training and evaluation example of 3D segmentation based on the MSD Spleen dataset. The example shows the flexibility of MONAI modules in a PyTorch-based program:
- Transforms for dictionary-based training data structure.
- Load NIfTI images with metadata.
- Scale medical image intensity to an expected range.
- Crop out a batch of balanced image patch samples based on positive / negative label ratio.
- Cache IO and transforms to accelerate training and validation.
- 3D UNet, Dice loss function, Mean Dice metric for 3D segmentation task.
- Sliding window inference.
- Deterministic training for reproducibility.
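As a rough illustration of how these pieces fit together (a hedged sketch assuming a recent MONAI version; file paths and parameter values are placeholders, not the tutorial's actual settings):

```python
import torch
from monai.data import CacheDataset, DataLoader
from monai.inferers import sliding_window_inference
from monai.losses import DiceLoss
from monai.networks.nets import UNet
from monai.transforms import (
    Compose, EnsureChannelFirstd, LoadImaged,
    RandCropByPosNegLabeld, ScaleIntensityRanged,
)

# Dictionary-based transforms: load NIfTI images with metadata, scale the intensity
# to an expected range, then crop balanced positive/negative patch samples.
train_transforms = Compose([
    LoadImaged(keys=["image", "label"]),
    EnsureChannelFirstd(keys=["image", "label"]),
    ScaleIntensityRanged(keys="image", a_min=-57, a_max=164, b_min=0.0, b_max=1.0, clip=True),
    RandCropByPosNegLabeld(keys=["image", "label"], label_key="label",
                           spatial_size=(96, 96, 96), pos=1, neg=1, num_samples=4),
])

# CacheDataset keeps the deterministic transform results in memory to speed up training.
data_dicts = [{"image": "img0.nii.gz", "label": "seg0.nii.gz"}]  # placeholder file list
train_ds = CacheDataset(data=data_dicts, transform=train_transforms)
train_loader = DataLoader(train_ds, batch_size=2, shuffle=True)

model = UNet(spatial_dims=3, in_channels=1, out_channels=2,
             channels=(16, 32, 64, 128, 256), strides=(2, 2, 2, 2), num_res_units=2)
loss_fn = DiceLoss(to_onehot_y=True, softmax=True)

# At evaluation time, sliding window inference handles volumes larger than the patch size.
val_volume = torch.rand(1, 1, 160, 160, 160)  # placeholder validation volume
with torch.no_grad():
    val_output = sliding_window_inference(val_volume, roi_size=(96, 96, 96),
                                          sw_batch_size=4, predictor=model)
```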
unet_segmentation_3d_catalyst
This notebook shows how MONAI may be used in conjunction with the Catalyst framework.
unet_segmentation_3d_ignite
This notebook is an end-to-end training & evaluation example of 3D segmentation based on a synthetic dataset. The example is a PyTorch Ignite program and shows several key features of MONAI, especially with medical-domain-specific transforms and event handlers for profiling (logging, TensorBoard, MLFlow, etc.).
COVID 19-20 challenge baseline
This folder provides a simple baseline method for training, validation, and inference for COVID-19 LUNG CT LESION SEGMENTATION CHALLENGE - 2020 (a MICCAI Endorsed Event).
unetr_btcv_segmentation_3d
This notebook demonstrates how to construct a training workflow of UNETR on a multi-organ segmentation task using the BTCV challenge dataset.
unetr_btcv_segmentation_3d_lightning
This tutorial demonstrates how MONAI can be used in conjunction with the PyTorch Lightning framework to construct a training workflow of UNETR on a multi-organ segmentation task using the BTCV challenge dataset.
2D registration
registration using mednist
This notebook shows a quick demo of learning-based affine registration of `64 x 64` X-Ray hands.
3D registration
3D registration using paired lung CT
This tutorial shows how to use MONAI to register lung CT volumes acquired at different time points for a single patient.
DeepAtlas
This tutorial demonstrates the use of MONAI for training of registration and segmentation models together. The DeepAtlas approach, in which the two models serve as a source of weakly supervised learning for each other, is useful in situations where one has many unlabeled images and just a few images with segmentation labels. The notebook works with 3D images from the OASIS-1 brain MRI dataset.
deepgrow
Deepgrow
This example shows how to train/validate a 2D/3D deepgrow model. It also demonstrates running inference for trained deepgrow models.
deployment
BentoML
This is a simple example of training and deploying a MONAI network with BentoML as a web server, either locally using the BentoML repository or as a containerized service.
Ray
This uses the previous notebook's trained network to demonstrate deployment as a web server using Ray.
federated learning
NVFlare
The examples show how to train federated learning models with NVFlare and MONAI-based trainers.
OpenFL
The examples show how to train federated learning models based on OpenFL and MONAI.
Substra
This example shows how to execute the 3D segmentation torch tutorial on a federated learning platform, Substra.
Digital Pathology
Whole Slide Tumor Detection
This example shows how to train and evaluate a tumor detection model (based on patch classification) on whole-slide histopathology images.
Profiling Whole Slide Tumor Detection
This example shows how to use MONAI NVTX transforms to tag and profile pre- and post-processing transforms in the digital pathology whole-slide tumor detection pipeline.
acceleration
fast_model_training_guide
This document introduces details of how to profile the training pipeline, how to analyze the dataset and select suitable algorithms, and how to optimize GPU utilization in single-GPU, multi-GPU, and multi-node settings.
distributed_training
The examples show how to execute distributed training and evaluation based on 3 different frameworks:

- PyTorch native `DistributedDataParallel` module with `torch.distributed.launch`.
- Horovod APIs with `horovodrun`.
- PyTorch Ignite and MONAI workflows.

They can run on several distributed nodes with multiple GPU devices on every node.
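As a minimal illustration of the first option (not taken from the examples themselves; the network is a placeholder), a worker script typically initializes a process group and wraps the model in `DistributedDataParallel`:

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel

def main():
    # torch.distributed.launch / torchrun set LOCAL_RANK for every worker process
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(10, 2).cuda(local_rank)  # placeholder network
    model = DistributedDataParallel(model, device_ids=[local_rank])

    # ... build a DistributedSampler-based DataLoader and run the usual training loop ...

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Such a script would be launched with e.g. `torchrun --nproc_per_node=<num_gpus> script.py` (or the older `python -m torch.distributed.launch`).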
automatic_mixed_precision
This tutorial shows how to enable Automatic Mixed Precision (AMP) in MONAI training programs and compares the training speed and memory usage with and without AMP.
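A minimal sketch of enabling AMP in a plain PyTorch training step (illustrative only, not the notebook's exact code):

```python
import torch

model = torch.nn.Linear(10, 2).cuda()          # placeholder network
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()           # scales the loss to avoid fp16 underflow

inputs = torch.rand(4, 10).cuda()
targets = torch.rand(4, 2).cuda()

optimizer.zero_grad()
with torch.cuda.amp.autocast():                # run the forward pass in mixed precision
    loss = torch.nn.functional.mse_loss(model(inputs), targets)
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
```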
dataset_type_performance
This notebook compares the performance of `Dataset`, `CacheDataset` and `PersistentDataset`. These classes differ in how data is stored (in memory or on disk), and at which moment transforms are applied.
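For reference, the three classes are constructed in much the same way; only the caching behaviour differs (a hedged sketch with placeholder data):

```python
from monai.data import CacheDataset, Dataset, PersistentDataset
from monai.transforms import Compose, EnsureChannelFirstd, LoadImaged

data_dicts = [{"image": "img0.nii.gz"}]   # placeholder file list
transforms = Compose([LoadImaged(keys="image"), EnsureChannelFirstd(keys="image")])

plain_ds = Dataset(data=data_dicts, transform=transforms)        # transforms run on every access
cached_ds = CacheDataset(data=data_dicts, transform=transforms)  # deterministic transforms cached in memory
persistent_ds = PersistentDataset(data=data_dicts, transform=transforms,
                                  cache_dir="./cache")           # cached results stored on disk
```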
fast_training_tutorial
This tutorial compares the training performance of a pure PyTorch program and an optimized program in MONAI based on an NVIDIA GPU device and the latest CUDA library.
The optimization methods mainly include: `AMP`, `CacheDataset` and `Novograd`.
multi_gpu_test
This notebook is a quick demo of running the Ignite trainer engine on CPU, GPU and multiple GPUs.
threadbuffer_performance
Demonstrates the use of the `ThreadBuffer` class, used to generate data batches during training in a separate thread.
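In rough terms, `ThreadBuffer` wraps an existing iterable such as a `DataLoader` (an illustrative sketch with a placeholder dataset):

```python
from monai.data import DataLoader, Dataset, ThreadBuffer

ds = Dataset(data=[{"val": i} for i in range(8)])   # placeholder dataset
loader = DataLoader(ds, batch_size=2)

# Batches are generated in a background thread while the main thread trains.
for batch in ThreadBuffer(loader):
    pass  # training step would go here
```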
transform_speed
Illustrates reading NIfTI files and tests the speed of different transforms on different devices.
modules
engines
Training and evaluation examples of 3D segmentation based on UNet3D and a synthetic dataset with MONAI workflows, which contain engines, event handlers, and post-transforms. Also included is a GAN training and evaluation example for a medical image generative adversarial network: an easy-to-run training script uses `GanTrainer` to train a 2D CT scan reconstruction network, and an evaluation script generates random samples from the trained network.
The examples are built with MONAI workflows and mainly contain: trainer/evaluator, handlers, post-transforms, etc.
3d_image_transforms
This notebook demonstrates the transformations on volumetric images.
2d_inference_3d_volume
Tutorial that demonstrates how MONAI's `SlidingWindowInferer` can be used when a 3D volume input needs to be provided slice-by-slice to a 2D model and finally aggregated into a 3D volume.
autoencoder_mednist
This tutorial uses the MedNIST hand CT scan dataset to demonstrate MONAI's autoencoder class. The autoencoder is used with an identity encode/decode (i.e., what you put in is what you should get back), as well as demonstrating its usage for de-blurring and de-noising.
batch_output_transform
Tutorial to explain and show how to set the `batch_transform` and `output_transform` of handlers to work with MONAI engines.
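For instance, assuming a MONAI version that provides the `monai.handlers.from_engine` utility, a metric and a logging handler might select fields from the engine output like this (illustrative only):

```python
from monai.handlers import MeanDice, StatsHandler, from_engine

# output_transform / batch_transform select which fields of the engine state a handler consumes
val_metric = MeanDice(output_transform=from_engine(["pred", "label"]))
train_logger = StatsHandler(output_transform=from_engine(["loss"], first=True),
                            name="train_log")
```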
compute_metric
This example shows how to compute metrics from saved predictions and labels with PyTorch multi-processing support.
csv_datasets
This tutorial shows the usage of `CSVDataset` and `CSVIterableDataset`, loading multiple CSV files and executing postprocessing logic.
decollate_batch
This tutorial shows how to decollate batch data to simplify post-processing transforms and execute more flexible follow-up operations.
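Roughly, `decollate_batch` turns a batched tensor or dictionary into a list of per-sample items so that post-processing transforms can run on each one (an illustrative sketch):

```python
import torch
from monai.data import decollate_batch
from monai.transforms import Activations, AsDiscrete, Compose

post_pred = Compose([Activations(softmax=True), AsDiscrete(argmax=True)])

batch_output = torch.rand(4, 2, 64, 64)        # placeholder model output (batch of 4)
per_sample = decollate_batch(batch_output)     # list of 4 tensors, shape (2, 64, 64) each
processed = [post_pred(item) for item in per_sample]
```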
image_dataset
This notebook introduces the basic usage of the `monai.data.ImageDataset` module.
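A minimal construction might look like this (a hedged sketch; the file lists are placeholders):

```python
from monai.data import DataLoader, ImageDataset
from monai.transforms import ScaleIntensity

images = ["img0.nii.gz", "img1.nii.gz"]   # placeholder image paths
segs = ["seg0.nii.gz", "seg1.nii.gz"]     # placeholder segmentation paths

# ImageDataset loads image/segmentation pairs and applies array-based transforms
ds = ImageDataset(image_files=images, seg_files=segs, transform=ScaleIntensity())
loader = DataLoader(ds, batch_size=2)
```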
dynunet_tutorial
This tutorial shows how to train 3D segmentation tasks on all 10 Decathlon datasets with the reimplementation of dynUNet in MONAI.
integrate_3rd_party_transforms
This tutorial shows how to integrate 3rd party transforms into a MONAI program. It mainly shows transforms from BatchGenerator, TorchIO, Rising and ITK.
inverse transformations and test-time augmentations
This notebook demonstrates the use of invertible transforms, and then leveraging inverse transformations to perform test-time augmentations.
layer wise learning rate
This notebook demonstrates how to select or filter out expected network layers and set customized learning rate values.
learning rate finder
This notebook demonstrates how to use the `LearningRateFinder` API to tune the learning rate values for the network.
load_medical_images
This notebook introduces how to easily load different formats of medical images in MONAI and execute many additional operations.
mednist_GAN_tutorial
This notebook illustrates the use of MONAI for training a network to generate images from a random input tensor. A simple GAN is employed, with separate Generator and Discriminator networks.
mednist_GAN_workflow_dict
This notebook shows the `GanTrainer`, a MONAI workflow engine for modularized adversarial learning. It trains a medical image reconstruction network using the MedNIST hand CT scan dataset. Dictionary version.
mednist_GAN_workflow_array
This notebook shows the `GanTrainer`, a MONAI workflow engine for modularized adversarial learning. It trains a medical image reconstruction network using the MedNIST hand CT scan dataset. Array version.
cross_validation_models_ensemble
This tutorial shows how to leverage the `CrossValidation`, `EnsembleEvaluator`, `MeanEnsemble` and `VoteEnsemble` modules in MONAI to set up a cross-validation and ensemble program.
nifti_read_example
Illustrates reading NIfTI files and iterating over image patches of the volumes loaded from them.
network_api
This tutorial illustrates the flexible network APIs and utilities.
postprocessing_transforms
This notebook shows the usage of several postprocessing transforms based on the model output of a spleen segmentation task.
public_datasets
This notebook shows how to quickly set up a training workflow based on `MedNISTDataset` and `DecathlonDataset`, and how to create a new dataset.
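For instance (a hedged sketch; the root directory is a placeholder and `download=True` requires network access):

```python
from monai.apps import DecathlonDataset, MedNISTDataset
from monai.transforms import Compose, EnsureChannelFirstd, LoadImaged, ScaleIntensityd

transforms = Compose([
    LoadImaged(keys="image"),
    EnsureChannelFirstd(keys="image"),
    ScaleIntensityd(keys="image"),
])

# Both datasets download and extract the data on first use, then apply the transforms.
mednist_ds = MedNISTDataset(root_dir="./data", section="training",
                            transform=transforms, download=True)
spleen_ds = DecathlonDataset(root_dir="./data", task="Task09_Spleen",
                             section="training", transform=transforms, download=True)
```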
tcia_csv_processing
This notebook shows how to load TCIA data from a CSV file with `CSVDataset` and extract information to fetch DICOM images based on the TCIA REST API.
transforms_demo_2d
This notebook demonstrates the image transformations on histology images using the GlaS Contest dataset.
UNet_input_size_constrains
This tutorial shows how to determine a reasonable spatial size of the input data for MONAI UNet, which not only supports residual units, but also can use more hyperparameters (such as `strides`, `kernel_size` and `up_kernel_size`) than the basic UNet implementation.
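As a rough illustration of the hyperparameters involved (a hedged sketch assuming a recent MONAI version where the first argument is `spatial_dims`): each entry in `strides` downsamples the feature maps, so the input spatial size should be divisible by the product of the strides.

```python
from monai.networks.nets import UNet

# With strides (2, 2, 2) the input is downsampled by a factor of 2*2*2 = 8 along each axis,
# so spatial sizes divisible by 8 (e.g. 96) are a safe choice for this configuration.
net = UNet(
    spatial_dims=3,
    in_channels=1,
    out_channels=2,
    channels=(16, 32, 64, 128),
    strides=(2, 2, 2),
    num_res_units=2,
    kernel_size=3,
    up_kernel_size=3,
)
```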
TorchIO, MONAI, PyTorch Lightning
This notebook demonstrates how the three libraries from the official PyTorch Ecosystem can be used together to segment the hippocampus on brain MRIs from the Medical Segmentation Decathlon.
varautoencoder_mednist
This tutorial uses the MedNIST scan (or alternatively the MNIST) dataset to demonstrate MONAI's variational autoencoder class.
interpretability
Tutorials in this folder demonstrate model visualisation and interpretability features of MONAI. Currently, it consists of class activation mapping and occlusion sensitivity for 3D classification model visualisations and analysis.
Transfer learning with MMAR
This tutorial demonstrates a transfer learning pipeline from a pretrained model in Clara Train's Medical Model Archive format. The notebook also shows the use of an LMDB-based dataset.
Transform visualization
This tutorial shows several visualization approaches for 3D images during transform augmentation.