
Progressive Attention Guidance for Whole Slide Vulvovaginal Candidiasis Screening

Authors: Jiangdong Cai¹, Honglin Xiong¹, Maosong Cao¹, Luyan Liu¹, Lichi Zhang², Qian Wang¹
¹ShanghaiTech University, ²Shanghai Jiao Tong University

This repository contains the implementation of the methods described in "Progressive Attention Guidance for Whole Slide Vulvovaginal Candidiasis Screening", submitted to MICCAI 2023.

Paper link: https://arxiv.org/abs/2309.02670


Abstract: Vulvovaginal candidiasis (VVC) is the most prevalent human candidal infection, estimated to afflict approximately 75% of all women at least once in their lifetime. It leads to symptoms including pruritus, vaginal soreness, and others. Automatic whole slide image (WSI) classification is in high demand, given the heavy burden of disease control and prevention. However, WSI-based computer-aided VVC screening methods are still lacking due to the scarcity of labeled data and the unique properties of candida. Candida in WSIs is difficult for conventional classification models to capture because of its distinctive elongated shape, the small proportion of its spatial distribution, and the style gap across WSIs. To help the model focus on candida more easily, we propose an attention-guided method that yields a robust diagnostic classification model. Specifically, we first use a pre-trained detection model as prior instruction to initialize the classification model. Then we design a Skip Self-Attention module to refine the attention onto the fine-grained features of candida. Finally, we use a contrastive learning method to alleviate the overfitting caused by the style gap of WSIs and to suppress attention to false positive regions. Our experimental results demonstrate that our framework achieves state-of-the-art performance.


Method

Attention Guided Image-level Classification

Requirements

See requirements.txt for the prerequisites.

Example usage

Image-level pre-training

RetinaNet is used for detection pre-training, following https://github.com/yhenon/pytorch-retinanet. The trained detection model is then kept to initialize the subsequent classification training.
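The initialization step above can be sketched as follows. This is a hedged illustration, not the repository's actual code: the helper name `init_from_detection` and the "backbone.body." key prefix are assumptions about how the detection checkpoint is laid out.

```python
import torch
import torch.nn as nn

def init_from_detection(classifier: nn.Module, det_state: dict,
                        prefix: str = "backbone.body.") -> int:
    """Copy every detection-backbone weight whose name and shape match
    a parameter of the classifier; return how many tensors were copied.
    The key prefix is an assumption about the checkpoint layout."""
    cls_state = classifier.state_dict()
    copied = 0
    for key, weight in det_state.items():
        if not key.startswith(prefix):
            continue  # skip detection-head weights
        name = key[len(prefix):]
        if name in cls_state and cls_state[name].shape == weight.shape:
            cls_state[name] = weight
            copied += 1
    classifier.load_state_dict(cls_state)
    return copied
```

Copying only name- and shape-matched tensors lets the classifier keep its own head while inheriting the detector's backbone features.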

Image-level training

Modify the dataloader in utils/data_loading.py to match the format of your dataset.
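As a starting point for that modification, a minimal Dataset might look like the sketch below. The directory layout (root/negative, root/positive) and the class `VVCImageDataset` are assumptions for illustration; adapt both to your own data.

```python
import os
from PIL import Image
from torch.utils.data import Dataset

class VVCImageDataset(Dataset):
    """Hypothetical layout: root/<label_name>/<image>.png with the
    label folders "negative" and "positive"."""
    LABELS = {"negative": 0, "positive": 1}

    def __init__(self, root, transform=None):
        self.samples = []
        for label_name, label in self.LABELS.items():
            folder = os.path.join(root, label_name)
            if not os.path.isdir(folder):
                continue
            for fname in sorted(os.listdir(folder)):
                self.samples.append((os.path.join(folder, fname), label))
        self.transform = transform

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        path, label = self.samples[idx]
        image = Image.open(path).convert("RGB")
        if self.transform is not None:
            image = self.transform(image)
        return image, label
```

Wrap an instance in a torch.utils.data.DataLoader as usual once the paths and labels match your dataset.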

Run python train.py to train the image-level model.

Sample-level training

Use save_feature.py to save the features and scores generated by the image-level model.
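The gist of that step can be sketched as below. This is a hedged approximation of what such a script does, not the repository's save_feature.py: the function name, the assumption that the model returns (features, logits), and the output dictionary keys are all illustrative.

```python
import torch

@torch.no_grad()
def save_sample_features(model, loader, out_path):
    """Run the trained image-level model over a sample's images and
    store features plus softmax scores for sample-level training.
    Assumes model(images) returns a (features, logits) pair."""
    model.eval()
    feats, scores = [], []
    for images, _ in loader:
        f, logits = model(images)
        feats.append(f.cpu())
        scores.append(logits.softmax(dim=1).cpu())
    torch.save({"features": torch.cat(feats),
                "scores": torch.cat(scores)}, out_path)
```

The saved tensors can then be loaded with torch.load as the input of the sample-level training stage.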

Run python train.py again to train the sample-level model on the saved features.