CMT-AMAI24paper

Official repo for the paper: "Quantifying Knee Cartilage Shape and Lesion: From Image to Metrics"

Paper

Quantifying Knee Cartilage Shape and Lesion: From Image to Metrics

AMAI’24 (MICCAI workshop) (in press)

TL;DR

CMT is a toolbox for knee MRI analysis, model training, and visualization.

Contributions

  • Joint Template-Learning and Registration Mode – CMT-reg
  • CartiMorph Toolbox (CMT)

Models for CMT

  • segmentation and registration models for this work – these can be loaded into CMT
  • more models from the CMT models page

Quick Start

  • For model evaluation and for training the other SoTA models:
git clone https://github.com/YongchengYAO/CMT-AMAI24paper.git
cd CMT-AMAI24paper
conda create --name CMT_AMAI24paper --file env.txt
conda activate CMT_AMAI24paper

Code

We compared the proposed CMT-reg with other template-learning and/or registration models – Aladdin and LapIRN.

Data

This is the data used for reproducing Tables 2 & 3.

Data in this repo for model training, inference, and evaluation:

# data folder structure
├── Code
  ├── Aladdin
    ├── Model
  ├── LapIRN
    ├── Model
├── Data
  ├── Aladdin
  ├── CMT_data4AMAI 
  ├── LapIRN
  • How to use files in the Data folder?
    1. clone this repo: CMT-AMAI24paper
    2. put the Data folder under CMT-AMAI24paper/
  • How to use files in the Code folder?
    1. clone this repo: CMT-AMAI24paper
    2. put the corresponding Model folders into
      • CMT-AMAI24paper/Code/Aladdin/
      • CMT-AMAI24paper/Code/LapIRN/
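
After the placement steps above, the repo should match the folder tree shown earlier. A minimal offline sketch of that expected layout (built in a temp directory, no downloads; useful as a sanity check before copying real data):

```shell
# Recreate the expected directory layout in a temp dir (no network needed),
# then print it. Paths mirror the "data folder structure" tree above.
tmp=$(mktemp -d)
mkdir -p "$tmp/CMT-AMAI24paper/Code/Aladdin/Model" \
         "$tmp/CMT-AMAI24paper/Code/LapIRN/Model" \
         "$tmp/CMT-AMAI24paper/Data/Aladdin" \
         "$tmp/CMT-AMAI24paper/Data/CMT_data4AMAI" \
         "$tmp/CMT-AMAI24paper/Data/LapIRN"
find "$tmp/CMT-AMAI24paper" -type d | sort
```

If your tree matches this output, the training/inference scripts should resolve the Data and Model paths as expected.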

Raw Data

  • MR Image: OAI
  • Annotation: OAI-ZIB

Data information: here (maps CMT-ID to OAI-SubjectID)

Processed Data from CartiMorph

This is the data used for training CMT-reg and nnU-Net in CMT.

If you use the processed data, please note that the manual segmentation annotations come from this work:

Citation

(conference proceedings in press)
@misc{yao2024quantifyingkneecartilageshape,
      title={Quantifying Knee Cartilage Shape and Lesion: From Image to Metrics}, 
      author={Yongcheng Yao and Weitian Chen},
      year={2024},
      eprint={2409.07361},
      archivePrefix={arXiv},
      primaryClass={eess.IV},
      url={https://arxiv.org/abs/2409.07361}, 
}

Acknowledgment

The training, inference, and evaluation code for Aladdin and LapIRN is adapted from these GitHub repos:

CMT is based on CartiMorph: https://github.com/YongchengYAO/CartiMorph

@article{YAO2024103035,
title = {CartiMorph: A framework for automated knee articular cartilage morphometrics},
journal = {Medical Image Analysis},
author = {Yongcheng Yao and Junru Zhong and Liping Zhang and Sheheryar Khan and Weitian Chen},
volume = {91},
pages = {103035},
year = {2024},
issn = {1361-8415},
doi = {10.1016/j.media.2023.103035}
}