A multi-scale approach identifies and leverages patterns across multiple scales within a deep neural network.
The feature patterns at each scale are encoded as a binary pattern code, converted to a decimal number,
and then embedded back into the classification model.
Link to J-BHI paper.
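Below is a minimal, self-contained sketch of the encoding idea in PyTorch. The class name, the pooled-feature interface, and the hard thresholding are illustrative assumptions, not the exact layer in `model/`; the actual layer follows the formulation in the paper (for instance, how the scale-vs-mean comparison is made differentiable may differ).

```python
import torch
import torch.nn as nn


class BinaryPatternEncoding(nn.Module):
    """Toy binary pattern encoding over multi-scale features (illustrative only).

    Each scale's pooled feature is compared against the mean across scales;
    the resulting per-scale bits are weighted by powers of two to form a
    decimal code, which is mapped to a learnable embedding and can be fed
    back into the classifier.
    """

    def __init__(self, num_scales: int, embed_dim: int):
        super().__init__()
        # one learnable embedding per possible binary pattern (2**S codes)
        self.embed = nn.Embedding(2 ** num_scales, embed_dim)
        # fixed powers of two used for the binary-to-decimal conversion
        self.register_buffer("pow2", 2 ** torch.arange(num_scales))

    def forward(self, feats):
        # feats: list of S tensors of shape (B, C), one per scale, after global pooling
        z = torch.stack(feats, dim=1)                      # (B, S, C)
        mu = z.mean(dim=1, keepdim=True)                   # mean over scales
        bits = (z >= mu).long()                            # (B, S, C) binary pattern
        code = (bits * self.pow2.view(1, -1, 1)).sum(1)    # (B, C) decimal codes
        return self.embed(code).mean(dim=1)                # (B, embed_dim) descriptor
```

With three scales, for example, each channel receives a code between 0 and 7 before being embedded.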
All the models in this project were evaluated on the following datasets:
- Colon_KBSMC (Colon TMA from Kangbuk Samsung Hospital)
- Colon_KBSMC (Colon WSI from Kangbuk Samsung Hospital)
- Prostate_KBSMC (Prostate WSI from Kangbuk Samsung Hospital)
```
conda env create -f environment.yml
conda activate msbp_net
pip install torch~=1.8.1+cu111
```
Above, we install PyTorch 1.8.1 with CUDA 11.1. The code also works with older PyTorch versions (PyTorch >= 1.1).
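A quick sanity check of the installation might look like this (the printed values depend on your setup):

```python
import torch

print(torch.__version__)           # e.g. 1.8.1+cu111
print(torch.cuda.is_available())   # True if the CUDA build matches your driver
```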
Below are the main directories in the repository:
- `dataset/`: the data loader and augmentation pipeline
- `docs/`: figures/GIFs used in the repo
- `model/`: model definition, along with the main run step and hyperparameter settings
- `prenet/`: model definition, along with the main run step and hyperparameter settings
- `script/`: defines the training/inference loop
Below are the main executable scripts in the repository:
- `config.py`: configuration file
- `dataset.py`: defines the dataset classes
- `define_network.py`: defines the network
- `trainer.py`: main training script
- `infer_produce_predict_map_wsi.py`: generates a prediction map for WSI images in a sliding-window fashion
```
python trainer.py [--gpu=<id>] [--network_name=<network_name>] [--dataset=<colon/prostate>]
```
Options: **our proposed method and other common/state-of-the-art multi-scale and single-scale methods, including:**
| METHOD | run_info | Description |
|---|---|---|
| ResNet | Resnet | Feature extractor: ResNet50 (code from the PyTorch library) |
| VGG | VGG | Feature extractor: VGG16 (code from the PyTorch library) |
| MobileNetV1 | MobileNetV1 | Feature extractor: MobileNetV1 (code from the PyTorch library) |
| EfficientNet | EfficientNet | Feature extractor: EfficientNetB1 (code from lukemelas) [Github] |
| ResNeSt | ResNeSt | Feature extractor: ResNeSt50 (code from the PyTorch library) |
| MuDeep | MuDeep | Multi-scale: Multi-scale deep learning architectures for person re-identification [paper] [code] |
| MSDNet | MSDNet | Multi-scale: Multi-scale dense networks for resource-efficient image classification [paper] [code] |
| Res2Net | Res2Net | Multi-scale: Res2Net: A New Multi-scale Backbone Architecture [paper] [code] |
| FFN_concat | ResNet_concat | Multi-scale: Concat(multi-scale features) |
| FFN_add | ResNet_add | Multi-scale: Add(multi-scale features) |
| FFN_conv | ResNet_conv | Multi-scale: Conv(multi-scale features) |
| FFN_concat(z−µ) | ResNet_concat_zm | Multi-scale: Concat(multi-scale features − mean(multi-scale features)) |
| FFN_conv(z−µ) | ResNet_conv_zm | Multi-scale: Conv(multi-scale features − mean(multi-scale features)) |
| MSBP-Net | ResNet_MSBP | Multi-scale: binary pattern encoding layer (ours) |
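For example, a training run of our MSBP-Net with the ResNet backbone on the colon dataset might look like this (the GPU id is only an illustration):

```
python trainer.py --gpu=0 --network_name=ResNet_MSBP --dataset=colon
```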
```
python infer_produce_predict_map_wsi.py [--gpu=<id>] [--network_name=<network_name>]
```
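A minimal sketch of the sliding-window idea behind this script is shown below; the helper name, tile size, and the way the model is called are assumptions for illustration, not the script's actual interface.

```python
import torch


def predict_wsi_map(model, wsi, patch_size=512, stride=512, device="cuda"):
    """Slide a window over a WSI tensor (C, H, W) and tile the per-patch
    class predictions into a coarse 2D prediction map (hypothetical helper)."""
    model = model.to(device).eval()
    _, H, W = wsi.shape
    rows, cols = H // stride, W // stride
    pred_map = torch.zeros(rows, cols, dtype=torch.long)
    with torch.no_grad():
        for i in range(rows):
            for j in range(cols):
                y, x = i * stride, j * stride
                patch = wsi[:, y:y + patch_size, x:x + patch_size]
                if patch.shape[1] < patch_size or patch.shape[2] < patch_size:
                    continue  # skip incomplete border tiles in this sketch
                logits = model(patch.unsqueeze(0).to(device))
                pred_map[i, j] = logits.argmax(dim=1).item()
    return pred_map
```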
Model weights obtained from training MSBP-Net are available here:
Access the full set of checkpoints here.
If any of the above checkpoints are used, please cite the corresponding paper.
- Trinh T. L. Vuong, Boram Song, Kyungeun Kim, Yong M. Cho, and Jin Tae Kwak
If any part of this code is used, please cite our paper.
BibTex entry:
```
@ARTICLE{9496153,
  author={Vuong, Trinh T. L. and Song, Boram and Kim, Kyungeun and Cho, Yong M. and Kwak, Jin T.},
  journal={IEEE Journal of Biomedical and Health Informatics},
  title={Multi-Scale Binary Pattern Encoding Network for Cancer Classification in Pathology Images},
  year={2022},
  volume={26},
  number={3},
  pages={1152-1163},
  doi={10.1109/JBHI.2021.3099817}}
```