SAM4MIS

Segment Anything Model for Medical Image Segmentation: paper list and open-source project summary

Segment Anything Model (SAM) for Medical Image Segmentation.

  • [New] We have updated our annual review of the Segment Anything Model for medical image segmentation to cover 2023. Please refer to the paper for more details.

  • Due to the inherent flexibility of prompting, foundation models have emerged as the predominant force in the fields of natural language processing and computer vision. The recent introduction of the Segment Anything Model (SAM) signifies a noteworthy expansion of the prompt-driven paradigm into the domain of image segmentation, thereby introducing a plethora of previously unexplored capabilities. However, the viability of its application to medical image segmentation remains uncertain, given the substantial distinctions between natural and medical images.

  • In this work, we provide a comprehensive overview of recent endeavors aimed at extending the efficacy of SAM to medical image segmentation tasks, encompassing both empirical benchmarking and methodological adaptations. Additionally, we discuss potential directions for future research on SAM's role in medical image segmentation.

  • This repo will continue to track and summarize the latest research progress of SAM in medical image segmentation to support ongoing research endeavors. If you find this project helpful, please consider starring the repository or citing our papers. Feel free to contact us with any suggestions.

@article{SAM4MIS-2024,
  title={Segment Anything Model for Medical Image Segmentation: Current Applications and Future Directions},
  author={Zhang, Yichi and Shen, Zhenrong and Jiao, Rushi},
  journal={arXiv preprint arXiv:2401.03495},
  year={2024}
}

@article{SAM4MIS-2023,
  title={How Segment Anything Model (SAM) Boost Medical Image Segmentation?},
  author={Zhang, Yichi and Jiao, Rushi},
  journal={arXiv preprint arXiv:2305.03678},
  year={2023}
}

A brief chronology of Segment Anything Model (SAM) and its variants for medical image segmentation in 2023.

About Segment Anything Model (SAM)

The Segment Anything Model (SAM) uses a vision transformer-based image encoder to extract image features and compute an image embedding, and a prompt encoder to embed prompts and incorporate user interactions. The extracted information from the two encoders is then fed to a lightweight mask decoder, which generates segmentation results based on the image embedding, the prompt embedding, and an output token. For more details, please refer to the original paper.
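To make this pipeline concrete, the minimal sketch below runs SAM's point-prompted inference with the official segment-anything package. It is an illustrative example only: the ViT-B checkpoint filename, the input image path example.png, and the prompt coordinates are placeholder assumptions, not part of this repository.

# Minimal sketch of SAM inference: image encoder -> prompt encoder -> mask decoder.
# Assumes `pip install segment-anything` and a downloaded ViT-B checkpoint;
# the image path and prompt coordinates are illustrative placeholders.
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

# Load the model and build a predictor (the heavy image encoder runs once per image).
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

# Read an image as an RGB uint8 array and compute its embedding.
image = cv2.cvtColor(cv2.imread("example.png"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# A single foreground point prompt (label 1 = foreground, 0 = background).
point_coords = np.array([[256, 256]])
point_labels = np.array([1])

# The lightweight mask decoder combines the image and prompt embeddings;
# multimask_output=True returns three candidate masks with quality scores.
masks, scores, logits = predictor.predict(
    point_coords=point_coords,
    point_labels=point_labels,
    multimask_output=True,
)
best_mask = masks[np.argmax(scores)]  # keep the highest-scoring candidate mask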

Large-scale Datasets for Foundation Models for Medical Imaging.

Date Authors Title Dataset
202311 J. Ye et al. SA-Med2D-20M Dataset: Segment Anything in 2D Medical Imaging with 20 Million masks (paper) Link

Literature Reviews of Applying SAM for Medical Image Segmentation.

Date Authors Title Code
202401 J. Wan et al. TriSAM: Tri-Plane SAM for zero-shot cortical blood vessel segmentation in VEM images (paper) None
202401 S. Na et al. Segment Any Cell: A SAM-based Auto-prompting Fine-tuning Framework for Nuclei Segmentation (paper) None
202401 H. Gu et al. SegmentAnyBone: A Universal Model that Segments Any Bone at Any Location on MRI (paper) Code
202401 S. Li et al. ClipSAM: CLIP and SAM Collaboration for Zero-Shot Anomaly Segmentation (paper) Code
202401 JD. Gutiérrez et al. No More Training: SAM's Zero-Shot Transfer Capabilities for Cost-Efficient Medical Image Segmentation (paper) None
202401 H. Wang et al. Leveraging SAM for Single-Source Domain Generalization in Medical Image Segmentation (paper) Code
202401 Z. Feng et al. Swinsam: Fine-Grained Polyp Segmentation in Colonoscopy Images Via Segment Anything Model Integrated with a Swin Transformer Decoder (paper) None
202312 Z. Zhao et al. One Model to Rule them All: Towards Universal Segmentation for Medical Images with Text Prompts (paper) Code
202312 W. Yue et al. Part to Whole: Collaborative Prompting for Surgical Instrument Segmentation (paper) Code
202312 ZM. Colbert et al. Repurposing Traditional U-Net Predictions for Sparse SAM Prompting in Medical Image Segmentation (paper) None
202312 W. Xie et al. SAM Fewshot Finetuning for Anatomical Segmentation in Medical Images (paper) None
202312 JG. Almeida et al. Testing the Segment Anything Model on radiology data (paper) None
202312 M. Barakat et al. Towards SAMBA: Segment Anything Model for Brain Tumor Segmentation in Sub-Saharan African Populations (paper) None
202312 Y. Zhang et al. SQA-SAM: Segmentation Quality Assessment for Medical Images Utilizing the Segment Anything Model (paper) Code
202312 S. Chen et al. ASLseg: Adapting SAM in the Loop for Semi-supervised Liver Tumor Segmentation (paper) None
202312 HE. Wong et al. ScribblePrompt: Fast and Flexible Interactive Segmentation for Any Medical Image (paper) Code
202312 Y. Zhang et al. SemiSAM: Exploring SAM for Enhancing Semi-Supervised Medical Image Segmentation with Extremely Limited Annotations (paper) None
202312 Y. Zhao et al. Segment Anything Model-guided Collaborative Learning Network for Scribble-supervised Polyp Segmentation (paper) None
202311 N. Li et al. Segment Anything Model for Semi-Supervised Medical Image Segmentation via Selecting Reliable Pseudo-Labels (paper) None
202311 X. Wei et al. I-MedSAM: Implicit Medical Image Segmentation with Segment Anything (paper) None
202311 Z. Shui et al. Unleashing the Power of Prompt-driven Nucleus Instance Segmentation (paper) Code
202311 M. Li and G. Yang et al. Where to Begin? From Random to Foundation Model Instructed Initialization in Federated Learning for Medical Image Segmentation (paper) None
202311 AK. Tyagi et al. Guided Prompting in SAM for Weakly Supervised Cell Segmentation in Histopathological Images (paper) Code
202311 Y. Du et al. SegVol: Universal and Interactive Volumetric Medical Image Segmentation (paper) Code
202311 DM. Nguyen et al. On the Out of Distribution Robustness of Foundation Models in Medical Image Segmentation (paper) None
202311 U. Israel et al. A Foundation Model for Cell Segmentation (paper) Code
202311 Q. Quan et al. Slide-SAM: Medical SAM Meets Sliding Window (paper) None
202311 Y. Zhang et al. Segment Anything Model with Uncertainty Rectification for Auto-Prompting Medical Image Segmentation (paper) Code
202311 Y. Wang et al. SAMIHS: Adaptation of Segment Anything Model for Intracranial Hemorrhage Segmentation (paper) Code
202311 H. Jiang et al. GlanceSeg: Real-time microangioma lesion segmentation with gaze map-guided foundation model for early detection of diabetic retinopathy (paper) None
202311 Y. Xu et al. EviPrompt: A Training-Free Evidential Prompt Generation Method for Segment Anything Model in Medical Images (paper) None
202311 DL. Ferreira and R. Arnaout Are foundation models efficient for medical image segmentation? (paper) Code
202310 H. Li et al. Promise: Prompt-driven 3D Medical Image Segmentation Using Pretrained Image Foundation Models (paper) Code
202310 D. Anand et al. One-shot Localization and Segmentation of Medical Images with Foundation Models (paper) None
202310 H. Wang et al. SAM-Med3D (paper) Code
202310 SK. Kim et al. Evaluation and improvement of Segment Anything Model for interactive histopathology image segmentation (paper) Code
202310 X. Chen et al. SAM-OCTA: Prompting Segment-Anything for OCTA Image Segmentation (paper) Code
202310 M. Peivandi et al. Empirical Evaluation of the Segment Anything Model (SAM) for Brain Tumor Segmentation (paper) None
202310 H. Ravishankar et al. SonoSAM - Segment Anything on Ultrasound Images (paper) None
202310 A. Ranem et al. Exploring SAM Ablations for Enhancing Medical Segmentation in Radiology and Pathology (paper) None
202310 S. Pandey et al. Comprehensive Multimodal Segmentation in Medical Imaging: Combining YOLOv8 with SAM and HQ-SAM Models (paper) None
202309 Y. Li et al. nnSAM: Plug-and-play Segment Anything Model Improves nnUNet Performance (paper) Code
202309 Y. Zhao et al. MFS Enhanced SAM: Achieving Superior Performance in Bimodal Few-shot Segmentation (paper) Code
202309 C. Wang et al. SAM-OCTA: A Fine-Tuning Strategy for Applying Foundation Model to OCTA Image Segmentation Tasks (paper) Code
202309 Y. Zhang et al. 3D-U-SAM Network For Few-shot Tooth Segmentation in CBCT Images (paper) None
202309 CJ. Chao et al. Comparative Eminence: Foundation versus Domain-Specific Model for Cardiac Ultrasound Segmentation (paper) None
202309 H. Ning et al. An Accurate and Efficient Neural Network for OCTA Vessel Segmentation and a New Dataset (paper) Code
202309 C. Chen et al. MA-SAM: Modality-agnostic SAM Adaptation for 3D Medical Image Segmentation (paper) Code
202309 P. Zhang and Y. Wang Segment Anything Model for Brain Tumor Segmentation (paper) None
202309 B. Fazekas et al. Adapting Segment Anything Model (SAM) for Retinal OCT (paper) None
202309 X. Lin et al. SAMUS: Adapting Segment Anything Model for Clinically-Friendly and Generalizable Ultrasound Image Segmentation (paper) Code
202309 X. Xing et al. SegmentAnything helps microscopy images based automatic and quantitative organoid detection and analysis (paper) Code
202309 NT. Bui et al. SAM3D: Segment Anything Model in Volumetric Medical Images (paper) Code
202308 Y. Zhang et al. Self-Sampling Meta SAM: Enhancing Few-shot Medical Image Segmentation with Meta-Learning (paper) None
202308 J. Cheng et al. SAM-Med2D (paper) Code
202308 C. Li et al. Auto-Prompting SAM for Mobile Friendly 3D Medical Image Segmentation (paper) None
202308 W. Feng et al. Cheap Lunch for Medical Image Segmentation by Fine-tuning SAM on Few Exemplars (paper) None
202308 Y. Zhang et al. SamDSK: Combining Segment Anything Model with Domain-Specific Knowledge for Semi-Supervised Learning in Medical Image Segmentation (paper) None
202308 A. Lou et al. SAMSNeRF: Segment Anything Model (SAM) Guides Dynamic Surgical Scene Reconstruction by Neural Radiance Field (NeRF) (paper) Code
202308 A. Archit et al. Segment Anything for Microscopy (paper) Code
202308 X. Yao et al. False Negative/Positive Control for SAM on Noisy Medical Images (paper) Code
202308 B. Fazekas et al. SAMedOCT: Adapting Segment Anything Model (SAM) for Retinal OCT (paper) None
202308 W. Yue et al. SurgicalSAM: Efficient Class Promptable Surgical Instrument Segmentation (paper) Code
202308 H. Zhang et al. CARE: A Large Scale CT Image Dataset and Clinical Applicable Benchmark Model for Rectal Cancer Segmentation (paper) Code
202308 Q. Wu et al. Self-Prompting Large Vision Models for Few-Shot Medical Image Segmentation (paper) Code
202308 A. Wang et al. SAM Meets Robotic Surgery: An Empirical Study on Generalization, Robustness and Adaptation (paper) None
202308 D. Shin et al. CEmb-SAM: Segment Anything Model with Condition Embedding for Joint Learning from Heterogeneous Datasets (paper) None
202308 R. Biswas Polyp-SAM++: Can A Text Guided SAM Perform Better for Polyp Segmentation? (paper) Code
202308 S. Cao et al. TongueSAM: An Universal Tongue Segmentation Model Based on SAM with Zero-Shot (paper) Code
202308 X. Li et al. Leverage Weakly Annotation to Pixel-wise Annotation via Zero-shot Segment Anything Model for Molecular-empowered Learning (paper) None
202308 JN. Paranjape et al. AdaptiveSAM: Towards Efficient Tuning of SAM for Surgical Scene Segmentation (paper) Code
202308 Z. Huang et al. Push the Boundary of SAM: A Pseudo-label Correction Framework for Medical Segmentation (paper) None
202307 J. Zhang et al. SAM-Path: A Segment Anything Model for Semantic Segmentation in Digital Pathology (paper) None
202307 MS. Hossain et al. Robust HER2 Grading of Breast Cancer Patients using Zero-shot Segment Anything Model (SAM) (paper) None
202307 C. Wang et al. SAM^Med: A medical image annotation framework based on large vision model (paper) None
202307 G. Deng et al. SAM-U: Multi-box prompts triggered uncertainty estimation for reliable SAM in medical image (paper) None
202307 H. Kim et al. Empirical Analysis of a Segmentation Foundation Model in Prostate Imaging (paper) None
202307 X. Shi et al. Cross-modality Attention Adapter: A Glioma Segmentation Fine-tuning Method for SAM Using Multimodal Brain MR Images (paper) None
202307 C. Cui et al. All-in-SAM: from Weak Annotation to Pixel-wise Nuclei Segmentation with Prompt-based Finetuning (paper) None
202306 E. Kellener et al. Utilizing Segment Anything Model for Assessing Localization of Grad-CAM in Medical Imaging (paper) None
202306 F. Hörst et al. CellViT: Vision Transformers for Precise Cell Segmentation and Classification (paper) Code
202306 W. Lei et al. MedLSAM: Localize and Segment Anything Model for 3D Medical Images (paper) Code
202306 X. Hu et al. How to Efficiently Adapt Large Segmentation Model (SAM) to Medical Images (paper) Code
202306 S. Gong et al. 3DSAM-adapter: Holistic Adaptation of SAM from 2D to 3D for Promptable Medical Image Segmentation (paper) Code
202306 DMH. Nguyen et al. LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching (paper) Code
202306 S. Chai et al. Ladder Fine-tuning approach for SAM integrating complementary network (paper) Code
202306 L. Zhang et al. Segment Anything Model (SAM) for Radiation Oncology (paper) None
202306 G. Ning et al. The potential of 'Segment Anything' (SAM) for universal intelligent ultrasound image guidance (paper) None
202306 C. Shen et al. Temporally-Extended Prompts Optimization for SAM in Interactive Medical Image Segmentation (paper) None
202306 T. Shaharabany et al. AutoSAM: Adapting SAM to Medical Images by Overloading the Prompt Encoder (paper) None
202306 Y. Gao et al. DeSAM: Decoupling Segment Anything Model for Generalizable Medical Image Segmentation (paper) Code
202305 D. Lee et al. IAMSAM : Image-based Analysis of Molecular signatures using the Segment-Anything Model (paper) Code
202305 M. Hu et al. BreastSAM: A Study of Segment Anything Model for Breast Tumor Detection in Ultrasound Images (paper) None
202305 J. Wu PromptUNet: Toward Interactive Medical Image Segmentation (paper) Code
202305 Y. Li et al. Polyp-SAM: Transfer SAM for Polyp Segmentation (paper) Code
202305 C. Mattjie et al. Exploring the Zero-Shot Capabilities of the Segment Anything Model (SAM) in 2D Medical Imaging: A Comprehensive Evaluation and Practical Guideline (paper) None
202305 D. Cheng et al. SAM on Medical Images: A Comprehensive Study on Three Prompt Modes (paper) None
202304 A. Wang et al. SAM Meets Robotic Surgery: An Empirical Study in Robustness Perspective (paper) None
202304 Y. Huang et al. Segment Anything Model for Medical Images? (paper) None
202304 M. Hu et al. SkinSAM: Empowering Skin Cancer Segmentation with Segment Anything Model (paper) None
202304 B. Wang et al. GazeSAM: What You See is What You Segment (paper) Code
202304 K. Zhang and D. Liu Customized Segment Anything Model for Medical Image Segmentation (paper) Code
202304 Z. Qiu et al. Learnable Ophthalmology SAM (paper) Code
202304 P. Shi et al. Generalist Vision Foundation Models for Medical Imaging: A Case Study of Segment Anything Model on Zero-Shot Medical Segmentation (paper) None
202304 J. Wu et al. Medical SAM Adapter: Adapting Segment Anything Model for Medical Image Segmentation (paper) Code
202304 J. Ma and B. Wang Segment Anything in Medical Images (paper) Code
202304 Y. Zhang et al. Input Augmentation with SAM: Boosting Medical Image Segmentation with Segmentation Foundation Model (paper) None
202304 MA. Mazurowski et al. Segment Anything Model for Medical Image Analysis: an Experimental Study (paper) Code
202304 S. He et al. Accuracy of Segment-Anything Model (SAM) in medical image segmentation tasks (paper) None
202304 T. Chen et al. SAM Fails to Segment Anything? – SAM-Adapter: Adapting SAM in Underperformed Scenes: Camouflage, Shadow, Medical Image Segmentation, and More (paper) Code
202304 C. Hu and X. Li When SAM Meets Medical Images: An Investigation of Segment Anything Model (SAM) on Multi-phase Liver Tumor Segmentation (paper) None
202304 F. Putz et al. The “Segment Anything” foundation model achieves favorable brain tumor autosegmentation accuracy on MRI to support radiotherapy treatment planning (paper) None
202304 T. Zhou et al. Can SAM Segment Polyps? (paper) Code
202304 Y. Liu et al. SAMM (Segment Any Medical Model): A 3D Slicer Integration to SAM (paper) Code
202304 S. Roy et al. SAM.MD: Zero-shot medical image segmentation capabilities of the Segment Anything Model (paper) None
202304 S. Mohapatra et al. SAM vs BET: A Comparative Study for Brain Extraction and Segmentation of Magnetic Resonance Images using Deep Learning (paper) None
202304 R. Deng et al. Segment Anything Model (SAM) for Digital Pathology: Assess Zero-shot Segmentation on Whole Slide Imaging (paper) None