A new, stronger pre-trained transformer model (CTransPath) has been released.
- 128GB of RAM
- 32 × Nvidia V100 32GB GPUs
1. Download all TCGA WSIs.
2. Download all PAIP WSIs.
New: There are now about 15,000,000 images.
Old: We crop these WSIs into patch images and randomly select 100 patches from each WSI, yielding about 2,700,521 unlabeled histopathological images. If you want these images, you can contact me.
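The per-slide sampling step (randomly selecting 100 patches from each WSI) can be sketched as below. The patch size of 224 and the grid-aligned cropping are illustrative assumptions, not the exact values used to build the dataset:

```python
import random

def sample_patch_coords(wsi_width, wsi_height, patch_size=224, n_patches=100, seed=0):
    """Randomly pick top-left coordinates of grid-aligned patches from one WSI."""
    rng = random.Random(seed)
    # All grid-aligned patch positions that fit fully inside the slide.
    coords = [(x, y)
              for x in range(0, wsi_width - patch_size + 1, patch_size)
              for y in range(0, wsi_height - patch_size + 1, patch_size)]
    # Select up to n_patches positions without replacement.
    return rng.sample(coords, min(n_patches, len(coords)))

coords = sample_patch_coords(10000, 8000)
print(len(coords))  # 100
```

In practice a tissue-mask filter is usually applied first so that background patches are discarded before sampling.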
It is recommended to use CTransPath as the preferred feature extractor for histopathology images.
Install the modified timm library
pip install timm-0.5.4.tar
The pre-trained models can be downloaded
python get_features_CTransPath.py
It is recommended to first extract features at 1.0 mpp (microns per pixel) and then try other magnifications.
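Matching a target resolution such as 1.0 mpp usually means picking the closest pyramid level of the WSI; a minimal sketch is below. The per-level mpp values here are hypothetical and would normally come from the slide metadata (e.g. OpenSlide properties):

```python
def closest_level(level_mpps, target_mpp=1.0):
    """Return the index of the pyramid level whose resolution is closest to target_mpp."""
    return min(range(len(level_mpps)), key=lambda i: abs(level_mpps[i] - target_mpp))

# Example: a slide scanned at 0.25 mpp with 4x downsampling per pyramid level.
level_mpps = [0.25, 1.0, 4.0, 16.0]
print(closest_level(level_mpps, target_mpp=1.0))  # 1
```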
For linear classification on frozen features/weights
python ctrans_lincls.py
Usage is similar to Swin or ViT; please see the Swin or DeiT instructions.
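Linear classification on frozen features trains only a linear classifier on top of precomputed embeddings, leaving the backbone untouched. A minimal stand-in sketch using closed-form ridge regression on one-hot labels (not the actual SGD training in ctrans_lincls.py):

```python
import numpy as np

def fit_linear_probe(features, labels, n_classes, l2=1e-3):
    """Ridge regression on one-hot labels: a simple stand-in for a linear probe."""
    X = np.asarray(features, dtype=np.float64)
    Y = np.eye(n_classes)[labels]  # one-hot targets
    # Closed-form solution of (X^T X + l2 I) W = X^T Y.
    W = np.linalg.solve(X.T @ X + l2 * np.eye(X.shape[1]), X.T @ Y)
    return W

def predict(W, features):
    return np.argmax(features @ W, axis=1)

# Toy example: two linearly separable clusters standing in for frozen features.
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(-2, 0.5, (50, 8)), rng.normal(2, 0.5, (50, 8))])
labels = np.array([0] * 50 + [1] * 50)
W = fit_linear_probe(feats, labels, n_classes=2)
print((predict(W, feats) == labels).mean())  # 1.0
```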
We also trained MoCo v3 on these histopathological images. The pre-trained models can be downloaded as follows:
Updated: the latest weights have been uploaded (1/10/2022).
Please see the instructions.
python get_features_mocov3.py \
-a vit_small
To perform end-to-end fine-tuning for ViT, use our script to convert the pre-trained ViT checkpoint to DeiT format:
python convert_to_deit.py \
--input [your checkpoint path]/[your checkpoint file].pth.tar \
--output [target checkpoint file].pth
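Converting a self-supervised checkpoint for fine-tuning typically amounts to keeping the backbone weights and renaming their state-dict keys. A minimal sketch; the exact prefix that convert_to_deit.py strips is an assumption here:

```python
def strip_prefix(state_dict, prefix="module.base_encoder."):
    """Keep only backbone weights and drop the training-time prefix from their keys."""
    return {k[len(prefix):]: v for k, v in state_dict.items() if k.startswith(prefix)}

# Hypothetical self-supervised checkpoint contents.
ckpt = {
    "module.base_encoder.patch_embed.proj.weight": "w0",
    "module.base_encoder.blocks.0.attn.qkv.weight": "w1",
    "module.predictor.fc.weight": "w2",  # projection head, not needed for fine-tuning
}
print(sorted(strip_prefix(ckpt)))
# ['blocks.0.attn.qkv.weight', 'patch_embed.proj.weight']
```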
Then run the training (in the DeiT repo) with the converted checkpoint:
python $DEIT_DIR/main.py \
--resume [target checkpoint file].pth \
--epochs 150
The pre-trained models can be downloaded
This code is partly based on BYOL and MoCo v2.
python main_byol_transpath.py \
--lr 0.0001 \
--batch-size 256 \
--dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 --mlp --moco-t 0.2 --aug-plus --cos
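MoCo-style training maintains a momentum (key) encoder whose weights are an exponential moving average of the query encoder. A minimal sketch of that update with plain Python lists standing in for parameter tensors (the small m used in the example is for illustration; MoCo commonly uses m close to 1, e.g. 0.999):

```python
def momentum_update(key_params, query_params, m=0.999):
    """EMA update applied parameter-wise: key = m * key + (1 - m) * query."""
    return [m * k + (1.0 - m) * q for k, q in zip(key_params, query_params)]

key = [0.0, 1.0]
query = [1.0, 1.0]
key = momentum_update(key, query, m=0.5)
print(key)  # [0.5, 1.0]
```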
python get_feature_transpath.py
Use our script to convert the pre-trained ViT checkpoint to Transformers format:
python convert_to_transpath.py
Please open a new issue or address all questions to xiyue.wang.scu@gmail.com
TransPath is released under the GPLv3 License and is available for non-commercial academic purposes.
If you find our work useful in your research, please cite our papers using the BibTeX entries below.
@article{wang2022,
title={Transformer-based Unsupervised Contrastive Learning for Histopathological Image Classification},
author={Wang, Xiyue and Yang, Sen and Zhang, Jun and Wang, Minghui and Zhang, Jing and Yang, Wei and Huang, Junzhou and Han, Xiao},
journal={Medical Image Analysis},
year={2022},
publisher={Elsevier}
}
@inproceedings{wang2021transpath,
title={TransPath: Transformer-Based Self-supervised Learning for Histopathological Image Classification},
author={Wang, Xiyue and Yang, Sen and Zhang, Jun and Wang, Minghui and Zhang, Jing and Huang, Junzhou and Yang, Wei and Han, Xiao},
booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention},
pages={186--195},
year={2021},
organization={Springer}
}