MultiRegEval

Evaluation Framework for Multimodal Biomedical Image Registration Methods


Code for the paper Is Image-to-Image Translation the Panacea for Multimodal Image Registration? A Comparative Study

Open-access data: Datasets for Evaluation of Multimodal Image Registration

Overview

This repository provides an open-source quantitative evaluation framework for multimodal biomedical registration, aiming to contribute to the openness and reproducibility of future research.

  • evaluate.py is the main script to call the registration methods and calculate their performance.

  • ./Datasets/ contains detailed descriptions of the evaluation datasets, and instructions and scripts to customise them.

  • The *.sh scripts provide examples to set up large-scale evaluations.

  • plot.py and show_samples.py can be used to plot the registration performance and visualise the modality-translation results (see paper for examples).

  • Each method folder contains a modified implementation of that method, tested for compatibility with this evaluation framework (see paper for details).

  • Other files should be self-explanatory; otherwise, please open an issue.

Usage

Image-to-Image translation

  • pix2pix and CycleGAN: run commands_*.sh to train and predict_*.sh to translate
# train and test 
cd pytorch-CycleGAN-and-pix2pix/
./commands_{dataset}.sh {fold} {gpu_id} # no {fold} for Histological data

# modality mapping of evaluation data
# {Dataset}_patches -> {Dataset}_patches_fake
./predict_{dataset}.sh
  • DRIT: run commands_*.sh to train and predict_*.sh to translate
# train and test
cd ../DRIT/src/
./commands_{dataset}.sh

# modality mapping of evaluation data
# {Dataset}_patches -> {Dataset}_patches_fake
./predict_{dataset}.sh
  • StarGANv2: run commands_*.sh to train and predict_*.sh to translate
# train (for all datasets)
cd ../stargan-v2/
./commands_{dataset}.sh

# test
# modality mapping of evaluation data
# {Dataset}_patches -> {Dataset}_patches_fake
./predict_{dataset}.sh
  • CoMIR: run commands_train.sh to train and predict_all.sh to translate
# train and test (for all datasets)
cd ../CoMIR/
./commands_train.sh

# modality mapping of evaluation data
# {Dataset}_patches -> {Dataset}_patches_fake
./predict_all.sh {gpu_id}
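In each of the steps above, translated patches are written to a sibling directory following the `{Dataset}_patches -> {Dataset}_patches_fake` naming convention. A minimal sketch of that mapping (the `fake_dir` helper is hypothetical, not part of the repo):

```python
from pathlib import Path

def fake_dir(patches_dir: str) -> str:
    """Map an evaluation-patch directory to its modality-translated counterpart."""
    p = Path(patches_dir)
    assert p.name.endswith("_patches"), "expected a *_patches directory"
    # e.g. "Zurich_patches" -> "Zurich_patches_fake"
    return str(p.with_name(p.name + "_fake"))
```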

Evaluate registration performance

Run python evaluate.py -h to see the options.
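As a rough illustration only, registration performance is commonly quantified as the mean displacement of reference corner points after registration; this sketch is an assumption for illustration, not the repo's implementation (see the paper for the exact error measure used):

```python
import numpy as np

def mean_corner_error(ref_corners, reg_corners):
    """Mean Euclidean distance between ground-truth and registered corner points."""
    ref = np.asarray(ref_corners, dtype=float)
    reg = np.asarray(reg_corners, dtype=float)
    # one distance per corner point, averaged over all corners
    return float(np.linalg.norm(ref - reg, axis=1).mean())
```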

Dependencies

environment.yml includes the full list of packages used to run most of the experiments; some of them may be unnecessary for your setup.
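Assuming conda is available, the environment can typically be created from the provided spec with:

```shell
# create the environment from the spec file
conda env create -f environment.yml
# activate it under whatever name environment.yml defines
conda activate <env-name>
```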

Citation

Please consider citing our paper and dataset if you find the code useful for your research.

@article{luImagetoImageTranslationPanacea2021,
  title = {Is {{Image}}-to-{{Image Translation}} the {{Panacea}} for {{Multimodal Image Registration}}? {{A Comparative Study}}},
  shorttitle = {Is {{Image}}-to-{{Image Translation}} the {{Panacea}} for {{Multimodal Image Registration}}?},
  author = {Lu, Jiahao and {\"O}fverstedt, Johan and Lindblad, Joakim and Sladoje, Nata{\v s}a},
  year = {2021},
  month = mar,
  archiveprefix = {arXiv},
  eprint = {2103.16262},
  eprinttype = {arxiv},
  journal = {arXiv:2103.16262 [cs, eess]},
  primaryclass = {cs, eess}
}

@dataset{luDatasetsEvaluationMultimodal2021,
  title = {Datasets for {{Evaluation}} of {{Multimodal Image Registration}}},
  author = {Lu, Jiahao and {\"O}fverstedt, Johan and Lindblad, Joakim and Sladoje, Nata{\v s}a},
  year = {2021},
  month = apr,
  publisher = {{Zenodo}},
  doi = {10.5281/zenodo.4587903},
  language = {eng}
}

Code Reference