Pyramid Transformer Net (PTNet)

PyTorch implementation of PTNet for high-resolution and longitudinal infant MRI synthesis.

PTNet: A High-Resolution Infant MRI Synthesizer Based on Transformer
Xuzhe Zhang (1), Xinzi He (1), Jia Guo (2), Nabil Ettehadi (1), Natalie Aw (2), David Semanek (2), Jonathan Posner (2), Andrew Laine (1), Yun Wang (2)
(1) Columbia University, Department of Biomedical Engineering; (2) CUMC, Department of Psychiatry

Reminder:

This 2D-only PTNet repo has been deprecated. Please visit our latest repo, https://github.com/XuzheZ/PTNet3D, which contains both 2D and 3D versions with a better data sampling strategy.

This repo contains the code for the first (preprint) version of our paper. This version of PTNet is designed only for a pure MAE/MSE loss; combining it with adversarial training will significantly impair performance. If you want to integrate an adversarial training framework, please refer to the updated version for the journal paper, which introduces substantial improvements (e.g., a 3D version, perceptual and adversarial losses): https://github.com/XuzheZ/PTNet3D
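The note above amounts to training the generator with only a pixel-wise reconstruction loss. Below is a minimal, hypothetical PyTorch sketch of that objective; the stand-in generator, tensor shapes, and learning rate are illustrative assumptions, not this repo's actual API or configuration.

import torch
import torch.nn as nn

# Stand-in generator for illustration only; replace with the 2D PTNet model from this repo.
generator = nn.Conv2d(1, 1, kernel_size=3, padding=1)
criterion = nn.L1Loss()                                        # pure MAE (use nn.MSELoss() for MSE)
optimizer = torch.optim.Adam(generator.parameters(), lr=2e-4)  # assumed learning rate

for _ in range(10):                       # placeholder loop over paired slices
    source = torch.randn(4, 1, 256, 256)  # e.g., T1w input slices
    target = torch.randn(4, 1, 256, 256)  # corresponding T2w targets
    pred = generator(source)
    loss = criterion(pred, target)        # no adversarial term in this version
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()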

Usage and Demo

This code synthesizes high-resolution infant brain MRI.

Prerequisites

  • Linux
  • Python 3.6
  • NVIDIA GPU (11 GB memory or larger) + CUDA/cuDNN (a quick check is sketched below)
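To verify the GPU prerequisite before running anything, the following sketch (assuming PyTorch is already installed) checks that a CUDA device with at least 11 GB of memory is visible:

import torch

# Confirm a CUDA-capable NVIDIA GPU is visible and report its memory.
assert torch.cuda.is_available(), "No CUDA-capable NVIDIA GPU found"
total_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
print(f"GPU: {torch.cuda.get_device_name(0)}, {total_gb:.1f} GB")
assert total_gb >= 11, "At least 11 GB of GPU memory is expected"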

Getting Started

Installation

git clone https://github.com/XuzheZ/PTNet.git

Testing

Coming soon.

Dataset

In the first (preprint) version of our paper, we conducted experiments only on the dHCP dataset (http://www.developingconnectome.org/). For more challenging longitudinal tasks, please refer to the updated version for the journal paper: https://github.com/XuzheZ/PTNet3D
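As a rough illustration of how paired dHCP volumes could be prepared for 2D synthesis, here is a hypothetical preprocessing sketch using nibabel; the file names and min-max normalization are assumptions for illustration, not this repo's actual pipeline:

import nibabel as nib
import numpy as np

def load_normalized(path):
    # Load a NIfTI volume and rescale intensities to [0, 1].
    vol = nib.load(path).get_fdata().astype(np.float32)
    vol -= vol.min()
    if vol.max() > 0:
        vol /= vol.max()
    return vol

t1 = load_normalized("sub-001_T1w.nii.gz")  # example source modality file name
t2 = load_normalized("sub-001_T2w.nii.gz")  # example target modality file name
pairs = [(t1[..., i], t2[..., i]) for i in range(t1.shape[-1])]  # paired axial slices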

Training

Coming soon.

More Training/Test Details

Coming soon.

Citation

If you find this work useful for your research, please cite the following.

@article{zhang2021ptnet,
  title={PTNet: A High-Resolution Infant MRI Synthesizer Based on Transformer},
  author={Zhang, Xuzhe and He, Xinzi and Guo, Jia and Ettehadi, Nabil and Aw, Natalie and Semanek, David and Posner, Jonathan and Laine, Andrew and Wang, Yun},
  journal={arXiv preprint arXiv:2105.13993},
  year={2021}
}

Acknowledgments

This code borrows heavily from Tokens-to-Token ViT (Training Vision Transformers from Scratch on ImageNet), pix2pixHD, and pytorch-CycleGAN-and-pix2pix.