A contrastive learning based semi-supervised segmentation network for medical image segmentation

This repository contains the implementation of a novel contrastive learning based semi-supervised segmentation network for segmenting surgical tools.
Fig. 1. The architecture of Min-Max Similarity.
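As Fig. 1 suggests, two segmentation networks interact through a contrastive similarity objective computed on projected features. For intuition only, below is a minimal sketch of an NT-Xent-style contrastive loss; the function name, temperature, and feature shapes are illustrative assumptions, not the repository's exact implementation (see `train_mms.py` for that).

```python
import torch
import torch.nn.functional as F

def contrastive_loss(feat_a, feat_b, temperature=0.5):
    # feat_a, feat_b: (batch, dim) projected features from the two views.
    # Matching rows are positives; all other pairings act as negatives.
    a = F.normalize(feat_a, dim=1)
    b = F.normalize(feat_b, dim=1)
    logits = a @ b.t() / temperature               # (batch, batch) cosine similarities
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)        # diagonal = matching pairs
```

The min-max aspect comes from one set of modules being trained to minimize a similarity-based loss of this kind while another is updated to oppose it; see the paper for the exact formulation.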
🔥 NEWS 🔥 The full paper is available: Min-Max Similarity
🔥 NEWS 🔥 The paper has been accepted by IEEE Transactions on Medical Imaging. The early-access version is available Here.
- python==3.6
- packages:

```
conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 cudatoolkit=11.1 -c pytorch -c conda-forge
pip install opencv-python pillow numpy matplotlib
```
- Clone this repository:

```
git clone https://github.com/AngeLouCN/Min_Max_Similarity
```
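With the environment ready, a quick sanity check (assuming the versions above) can confirm that PyTorch sees the GPU:

```python
import torch

print(torch.__version__)           # expect 1.8.0
print(torch.cuda.is_available())   # True once cudatoolkit 11.1 is detected
```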
We use five datasets to evaluate its performance:
- Kvasir-instrument
- EndoVis'17
- Cochlear Implant
- RoboTool
- ART-NET
File structure

```
|-- data
|   |-- kvasir
|   |   |-- train
|   |   |   |-- image
|   |   |   |-- mask
|   |   |-- test
|   |   |   |-- image
|   |   |   |-- mask
|   |-- EndoVis17
|   |   |-- train
|   |   |   |-- image
|   |   |   |-- mask
|   |   |-- test
|   |   |   |-- image
|   |   |   |-- mask
|   ......
```
You can also test on other public medical image segmentation datasets that follow the above file structure; a minimal loading sketch for this layout is shown below.
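The class below is a hypothetical loader for the `image`/`mask` layout, assuming image and mask files share file names; the repository's own data pipeline may differ.

```python
import os
from PIL import Image
from torch.utils.data import Dataset

class SegmentationFolder(Dataset):
    """Minimal loader for the image/mask folder layout shown above."""

    def __init__(self, root, split="train", transform=None):
        self.image_dir = os.path.join(root, split, "image")
        self.mask_dir = os.path.join(root, split, "mask")
        self.names = sorted(os.listdir(self.image_dir))
        self.transform = transform  # optional joint transform on (image, mask)

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        image = Image.open(os.path.join(self.image_dir, name)).convert("RGB")
        mask = Image.open(os.path.join(self.mask_dir, name)).convert("L")
        if self.transform is not None:
            image, mask = self.transform(image, mask)
        return image, mask
```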
- Training: You can change hyper-parameters such as the labeled ratio and learning rate in `train_mms.py`, then run the code directly (see the sketch after this list).
- Testing: You can change the dataset name in `test.py` and run the code.
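The hyper-parameter names below are hypothetical stand-ins for the kind of settings edited in `train_mms.py`; check the script for the actual variables, then launch training with `python train_mms.py`.

```python
# Hypothetical hyper-parameter block; actual names in train_mms.py may differ.
ratio = 0.5              # labeled ratio: fraction of training images with masks
lr = 1e-4                # learning rate
batch_size = 8
num_epochs = 200
dataset_name = "kvasir"  # or EndoVis17, RoboTool, ART-NET, cochlear implant
```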
Fig. 2. Visual comparison of our method with state-of-the-art models. Segmentation results are shown for 50% labeled training data for Kvasir-instrument, EndoVis'17, ART-NET, and RoboTool, and 2.4% labeled training data for the cochlear implant dataset. From left to right: EndoVis'17, Kvasir-instrument, ART-NET, RoboTool, Cochlear implant, and region of interest (ROI) of the cochlear implant.
```
@article{lou2023min,
  title={Min-Max Similarity: A Contrastive Semi-Supervised Deep Learning Network for Surgical Tools Segmentation},
  author={Lou, Ange and Tawfik, Kareem and Yao, Xing and Liu, Ziteng and Noble, Jack},
  journal={IEEE Transactions on Medical Imaging},
  year={2023},
  publisher={IEEE}
}
```
Our code is based on Duo-SegNet; we thank the authors for their excellent work and repository.