keywords: Vision Transformer, Swin Transformer, convolutional neural networks, image registration
This is a PyTorch implementation of my paper: TransMorph: Transformer for unsupervised medical image registration.
03/24/2022 - TransMorph is currently ranked 1st place on the TEST set of task03 (brain MR) @ MICCAI 2021 L2R challenge (results obtained from the Learn2Reg challenge organizers). The training scripts, dataset, and the pretrained models are available here: TransMorph on OASIS
02/03/2022 - TransMorph is currently ranked 1st place on the VALIDATION set of task03 (brain MR) @ MICCAI 2021 L2R challenge.
12/29/2021 - Our preprocessed IXI dataset and the pre-trained models are now publicly available! Check out this page for more information: TransMorph on IXI
There are four TransMorph variants: TransMorph, TransMorph-diff, TransMorph-bspl, and TransMorph-Bayes.
Training and inference scripts are in `TransMorph/`, and the models are contained in `TransMorph/model/`.
- TransMorph: A hybrid Transformer-ConvNet network for image registration.
- TransMorph-diff: A probabilistic TransMorph that ensures a diffeomorphism.
- TransMorph-bspl: A B-spline TransMorph that ensures a diffeomorphism.
- TransMorph-Bayes: A Bayesian TransMorph that produces registration uncertainty estimates.
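For the Bayesian variant, voxel-wise uncertainty is commonly obtained by drawing several stochastic forward passes (Monte Carlo dropout). Below is a minimal, illustrative sketch of that idea; the forward signature (concatenated moving/fixed input returning a warped image and a flow field) is an assumption here, so check the model files in `TransMorph/model/` for the actual interface.

```python
# Minimal sketch (assumed interface): Monte Carlo sampling with a Bayesian
# registration network to obtain a registration uncertainty estimate.
import torch

def mc_uncertainty(model, moving, fixed, n_samples=15):
    """Run the network several times with its stochastic (dropout) layers active
    and return the mean warped image and its voxel-wise variance."""
    model.train()  # keep dropout active at inference time (MC dropout)
    warped_samples = []
    with torch.no_grad():
        for _ in range(n_samples):
            # assumed forward signature: returns (warped image, flow field)
            warped, flow = model(torch.cat((moving, fixed), dim=1))
            warped_samples.append(warped)
    stack = torch.stack(warped_samples, dim=0)
    return stack.mean(dim=0), stack.var(dim=0)
```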
The scripts for the TransMorph affine model are in the `TransMorph_affine/` folder.
04/27/2022 - We provided a Jupyter notebook for training and testing TransMorph-affine here. The pre-trained weights can be downloaded here, along with two sample datasets here.
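As a quick illustration of what the affine stage produces, here is a hedged sketch of warping a volume with a 3×4 affine matrix (as such a network might predict) using PyTorch's `affine_grid`/`grid_sample`; the function and tensor shapes are illustrative, not the notebook's exact code.

```python
# Minimal sketch: applying a 3x4 affine matrix to warp a 3-D moving volume
# with PyTorch's built-in grid utilities.
import torch
import torch.nn.functional as F

def affine_warp(moving, theta):
    """moving: (N, 1, D, H, W) tensor; theta: (N, 3, 4) affine matrix in the
    normalized coordinate convention used by affine_grid."""
    grid = F.affine_grid(theta, moving.size(), align_corners=False)
    return F.grid_sample(moving, grid, mode='bilinear', align_corners=False)

# Example: the identity affine leaves the volume unchanged.
moving = torch.rand(1, 1, 64, 64, 64)
theta = torch.eye(3, 4).unsqueeze(0)  # (1, 3, 4) identity transform
warped = affine_warp(moving, theta)
```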
`train_xxx.py` and `infer_xxx.py` are the training and inference scripts for the TransMorph models.
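For orientation, a minimal inference sketch is shown below. The import path, config registry, checkpoint key, and forward signature are assumptions based on typical usage of this repository; refer to the actual `infer_xxx.py` scripts for the exact pipeline (data loading, normalization, and evaluation).

```python
# Minimal inference sketch (assumed names/paths): load a trained TransMorph
# model and register one moving/fixed pair.
import torch

from models.TransMorph import CONFIGS, TransMorph  # assumed import path

config = CONFIGS['TransMorph']                      # assumed config registry
model = TransMorph(config).cuda()
checkpoint = torch.load('TransMorph.pth.tar')       # assumed checkpoint layout
model.load_state_dict(checkpoint['state_dict'])
model.eval()

moving = torch.rand(1, 1, 160, 192, 224).cuda()     # replace with a real volume
fixed = torch.rand(1, 1, 160, 192, 224).cuda()

with torch.no_grad():
    # assumed forward signature: returns (warped image, flow field)
    warped, flow = model(torch.cat((moving, fixed), dim=1))
```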
TransMorph supports both mono- and multi-modal registration. We provide the following loss functions for measuring image similarity (the links will take you directly to the code):
- Mean squared error (MSE)
- Normalized cross correlation (NCC)
- Structural similarity index (SSIM)
- Mutual information (MI)
- Local mutual information (LMI)
- Modality independent neighbourhood descriptor with self-similarity context (MIND-SSC)
as well as several deformation regularizers (see the code for the available options). A minimal sketch of how a similarity term and a regularizer combine into the training objective is shown below.
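This sketch uses a plain MSE similarity term and an inline diffusion (gradient) regularizer purely for illustration; in practice, the loss classes provided in this repository (NCC, MI, MIND-SSC, ...) would be swapped in for the MSE term, and the regularization weight is a hypothetical value.

```python
# Minimal sketch of an unsupervised registration objective: an image similarity
# term plus a smoothness (diffusion) regularizer on the predicted flow field.
import torch

def diffusion_regularizer(flow):
    """Mean squared spatial gradient of a dense 3-D displacement field
    of shape (N, 3, D, H, W)."""
    dz = flow[:, :, 1:, :, :] - flow[:, :, :-1, :, :]
    dy = flow[:, :, :, 1:, :] - flow[:, :, :, :-1, :]
    dx = flow[:, :, :, :, 1:] - flow[:, :, :, :, :-1]
    return (dz.pow(2).mean() + dy.pow(2).mean() + dx.pow(2).mean()) / 3.0

def registration_loss(warped, fixed, flow, reg_weight=0.02):
    similarity = torch.mean((warped - fixed) ** 2)  # MSE; replace with NCC/MI/...
    smoothness = diffusion_regularizer(flow)
    return similarity + reg_weight * smoothness
```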
We compared TransMorph with eight baseline registration methods and four Transformer architectures.
The links will take you to their official repositories.
Baseline registration methods:
Training and inference scripts are in `Baseline_registration_models/`
- SyN/ANTsPy (Official Website)
- NiftyReg (Official Website)
- LDDMM (Official Website)
- deedsBCV (Official Website)
- VoxelMorph-1 & -2 (Official Website)
- CycleMorph (Official Website)
- MIDIR (Official Website)
Baseline Transformer architectures:
Training and inference scripts are in `Baseline_Transformers/`
- PVT (Official Website)
- nnFormer (Official Website)
- CoTr (Official Website)
- ViT-V-Net (Official Website)
Due to restrictions, we cannot distribute our brain MRI and CT data. However, several brain MRI datasets are publicly available online: ADNI, OASIS, ABIDE, etc. Note that these datasets may not include segmentation labels. To generate labels, you can use FreeSurfer, an open-source software suite for processing and segmenting brain MRI scans. Here are some useful commands in FreeSurfer: Brain MRI preprocessing and subcortical segmentation using FreeSurfer.
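If you generate labels with FreeSurfer, the resulting segmentation can be read in Python for evaluation (e.g., Dice on subcortical structures). A minimal sketch using nibabel is shown below; the file name `aseg.mgz` and the label ID follow standard FreeSurfer conventions, but verify them against your own FreeSurfer output and the FreeSurferColorLUT.

```python
# Minimal sketch: loading a FreeSurfer subcortical segmentation (aseg.mgz)
# with nibabel and turning one structure into a binary mask.
import nibabel as nib
import numpy as np

aseg = nib.load('subject/mri/aseg.mgz')   # path inside a FreeSurfer subject dir
labels = np.asarray(aseg.dataobj)

# Label 17 is Left-Hippocampus in the standard FreeSurferColorLUT; verify locally.
left_hippocampus = (labels == 17).astype(np.uint8)
print('Left hippocampus voxels:', int(left_hippocampus.sum()))
```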
You may find our preprocessed IXI dataset in the next section.
You may find the preprocessed IXI dataset, the pre-trained baseline and TransMorph models, and the training and inference scripts for the IXI dataset here 👉 TransMorph on IXI
If you find this code useful in your research, please consider citing:
@article{chen2021transmorph,
title={TransMorph: Transformer for unsupervised medical image registration},
author={Chen, Junyu and Frey, Eric C and He, Yufan and Segars, William P and Li, Ye and Du, Yong},
journal={arXiv preprint arXiv:2111.10480},
year={2021}
}