This GitHub repository contains the code used to produce the results reported in [Transfer Learning in Polyp and Endoscopic Tool Segmentation from Colonoscopy Images](https://journals.uio.no/NMI/article/view/9132).
Full reference to the original paper:
N. P. Tzavara and B.-J. Singstad, 'Transfer Learning in Polyp and Endoscopic Tool Segmentation from Colonoscopy Images', NMI, vol. 1, no. 1, pp. 32–34, Nov. 2021, doi: 10.5617/nmi.9132.
The code in hyperkvasir-polyp-cv.ipynb is used to cross-validate the performance of the different models on the provided development set, which contains 1000 polyp images and masks. The results from our experiments are available in Neptune.ai.
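As a rough illustration of the cross-validation step, the sketch below shows a K-fold loop ranked by Dice score. It is not the notebook's actual code: `load_development_set` and `build_model` are hypothetical placeholders, and the fold count, epochs, and batch size are only examples.

```python
# Hypothetical sketch of a K-fold cross-validation loop ranked by Dice score.
# load_development_set() and build_model() are placeholders, not functions from this repository.
import numpy as np
from sklearn.model_selection import KFold

images, masks = load_development_set()   # e.g. the 1000 polyp images and binary masks
fold_dice_scores = []

for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=42).split(images):
    model = build_model()                 # e.g. a U-Net with a pretrained encoder
    model.fit(images[train_idx], masks[train_idx],
              validation_data=(images[val_idx], masks[val_idx]),
              epochs=50, batch_size=8)

    # Threshold the predicted probabilities and compute the Dice score for this fold.
    pred = (model.predict(images[val_idx]) > 0.5).astype(np.uint8).squeeze()
    dice = 2 * (pred * masks[val_idx]).sum() / (pred.sum() + masks[val_idx].sum())
    fold_dice_scores.append(dice)

print(f"Dice: {np.mean(fold_dice_scores):.3f} +/- {np.std(fold_dice_scores):.3f}")
```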
In hyperkvasir-polyp-testset-prediction.ipynb we train the final model, based on the best model from cross-validation on the development set (ranked by Dice score). The final model is then applied to the test set and the predicted masks are saved at the same resolution as the original images.
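The sketch below shows one way to save a predicted mask at the original image resolution, assuming a Keras-style `model` and OpenCV for resizing; the input size and threshold are illustrative, not the notebook's exact settings.

```python
# Hedged sketch: predict a mask at the network input size, then upscale it back
# to the original image resolution before saving.
import cv2
import numpy as np

def predict_and_save_mask(model, image_path, out_path, input_size=(256, 256)):
    original = cv2.imread(image_path)
    h, w = original.shape[:2]

    resized = cv2.resize(original, input_size) / 255.0
    prob = model.predict(resized[np.newaxis, ...])[0, ..., 0]
    mask = (prob > 0.5).astype(np.uint8) * 255

    # Nearest-neighbour interpolation keeps the upscaled mask binary.
    mask_full = cv2.resize(mask, (w, h), interpolation=cv2.INTER_NEAREST)
    cv2.imwrite(out_path, mask_full)
```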
Our model achieves the following performance on HyperKvasir:
Results: polyp segmentation

| Metric | Training set (cross-validation) | Test set |
|--------|---------------------------------|----------|
| Dice   | 0.874 $\pm$ 0.011               | 0.857    |
| IoU    | 0.804 $\pm$ 0.013               | 0.800    |
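For reference, the Dice and IoU metrics reported in the tables can be computed from binary masks as in the sketch below (a generic NumPy formulation, not code taken from the notebooks).

```python
# Hedged sketch of the Dice and IoU metrics on binary (0/1) masks.
import numpy as np

def dice_score(pred, target, eps=1e-7):
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def iou_score(pred, target, eps=1e-7):
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)
```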
The code in kvasir-instrument-cv.ipynb is used to cross-validate the performance of the different models on the provided development set, which contains 590 endoscopic tool images and masks. The results from our experiments are available in Neptune.ai.
In kvasir-instrument-testset-prediction.ipynb we train the final model, based on the best model from cross-validation on the development set (ranked by Dice score). The final model is then applied to the test set and the predicted masks are saved at the same resolution as the original images.
Our model achieves the following performance on Kvasir-Instrument:
Results: instrument segmentation

| Metric | Training set (cross-validation) | Test set |
|--------|---------------------------------|----------|
| Dice   | 0.937 $\pm$ 0.015               | 0.948    |
| IoU    | 0.893 $\pm$ 0.020               | 0.911    |
The pretrained segmentation models were downloaded from this source: segmentation_models
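A minimal example of loading a pretrained encoder with the segmentation_models library is shown below; the backbone, loss, and compile settings are illustrative and not necessarily the exact configuration used in the notebooks.

```python
# Illustrative use of the segmentation_models library; the ResNet-34 backbone and
# Dice loss shown here are examples, not necessarily the notebooks' configuration.
import segmentation_models as sm

BACKBONE = "resnet34"
preprocess_input = sm.get_preprocessing(BACKBONE)  # apply to images before training

# U-Net with an ImageNet-pretrained encoder and a single-channel (binary) output.
model = sm.Unet(BACKBONE, encoder_weights="imagenet", classes=1, activation="sigmoid")
model.compile(optimizer="adam",
              loss=sm.losses.dice_loss,
              metrics=[sm.metrics.iou_score])
```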
In addition to the segmentation model, we have also developed an algorithm that detects the borders of and counts the segmented polyps in the dataset ( polyp-counter.ipynb ). We believe this feature has clinical relevance, because the clinician may then spend time interpreting only the images with detected polyps, rather than the images without polyps. On the other hand, this should be used with caution, because undetected polyps will not be counted and thus not be reviewed by the clinician. Figure 1 shows an example where the counting algorithm has successfully counted one and two segmented polyps; a sketch of this kind of counting step is shown below Figure 1.
Figure 1: Counting number of polyps in the segmented image
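The sketch below illustrates the general idea of such a counting step using OpenCV connected contours; it is a simplified stand-in for polyp-counter.ipynb, and the file name is hypothetical.

```python
# Hedged sketch of a polyp-counting step: treat each external contour in the
# binary predicted mask as one segmented polyp (OpenCV 4.x API).
import cv2

def count_polyps(mask_path):
    mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return len(contours), contours

n_polyps, contours = count_polyps("predicted_mask.png")  # hypothetical file name
print(f"Detected {n_polyps} polyp(s)")
```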
In some cases the predicted masks are fragmented, and the counter algorithm may interpret one polyp as two or more, as in Figure 2. One possible mitigation, filtering out small fragments by area, is sketched after Figure 2.
Figure 2: One polyp, segmented with a small outlier, interpreted as two polyps by the counter algorithm
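One way to reduce this kind of over-counting is to ignore contours below a minimum area, as in the sketch below; the threshold value is illustrative and would need tuning on the actual mask resolution.

```python
# Hedged sketch of an area-based filter for fragmented masks: small outlier
# contours are not counted as separate polyps.
import cv2

MIN_AREA = 500  # pixels; illustrative threshold, depends on mask resolution

def count_polyps_filtered(binary_mask):
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return sum(1 for c in contours if cv2.contourArea(c) >= MIN_AREA)
```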
The same Jupyter notebooks that are available here are also available on Kaggle, which provides the free GPU needed to train the models and reproduce the results:
- The hyperkvasir-polyp-cv.ipynb is available as a Kaggle notebook here
- The kvasir-instrument-cv.ipynb is available as a Kaggle notebook here
- The final training and prediction on the test set, hyperkvasir-polyp-testset-prediction.ipynb, is available as a Kaggle notebook here
- The final training and prediction on the test set, kvasir-instrument-testset-prediction.ipynb, is available as a Kaggle notebook here