
SAM & SAM 2 for Medical Image Segmentation: Open-Source Project Summary


  • Due to the inherent flexibility of prompting, foundation models have emerged as the predominant force in natural language processing and computer vision. The introduction of the Segment Anything Model (SAM) (paper) and SAM 2 (paper) marks a noteworthy extension of the prompt-driven paradigm to image and video segmentation, opening up a range of previously unexplored capabilities.

  • We provide a comprehensive survey of recent efforts to extend SAM to medical image segmentation tasks, covering both empirical benchmarking and methodological adaptation. We also discuss potential directions for future research on SAM's role in medical image segmentation. Please refer to the paper for more details.

  • This repo will continue to track and summarize the latest research progress of SAM in medical image segmentation to support ongoing research endeavors. If you find this project helpful, please consider starring the repo or citing our paper. Feel free to contact us with any suggestions. If you would like to contribute, please open an issue.

@article{SAM4MIS,
  title={Segment Anything Model for Medical Image Segmentation: Current Applications and Future Directions},
  author={Zhang, Yichi and Shen, Zhenrong and Jiao, Rushi},
  journal={Computers in Biology and Medicine},
  volume={171},
  pages={108238},
  year={2024}
}
  • Last update: 2024-08-21

Table of Contents

  • About Segment Anything Model (SAM)
  • Literature Reviews of SAM 2 Adaptations for Medical Image Segmentation
  • Literature Reviews of Foundation Models / SAM for Medical Image Segmentation
  • Large-Scale Datasets for Developing Medical Foundation Models
  • CVPR2024 Workshop: Segment Anything in Medical Images on Laptop

About Segment Anything Model (SAM)

The Segment Anything Model (SAM) uses a vision transformer-based image encoder to extract image features and compute an image embedding, and a prompt encoder to embed prompts and incorporate user interactions. The outputs of the two encoders are then fed into a lightweight mask decoder, which generates segmentation results from the image embedding, prompt embedding, and output token. For more details, please refer to the original SAM paper. A minimal usage sketch is given after the figure below.

[Figure: A brief chronology of the Segment Anything Model (SAM) and its variants for medical image segmentation in 2023.]
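
The promptable workflow described above can be reproduced with Meta's open-source segment-anything package. The snippet below is a minimal, illustrative sketch rather than code from any surveyed paper; the checkpoint path, the dummy image, and the point prompt are placeholders.

# Minimal sketch of the SAM workflow described above, using Meta's open-source
# segment-anything package (pip install segment-anything). The checkpoint path,
# the dummy image, and the point prompt are placeholders for illustration only.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Image encoder, prompt encoder, and mask decoder are bundled in one model.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")  # local checkpoint (placeholder path)
predictor = SamPredictor(sam)

image = np.zeros((512, 512, 3), dtype=np.uint8)  # stand-in for an RGB-converted medical slice
predictor.set_image(image)                       # runs the ViT image encoder once per image

# A single foreground point prompt at (x, y); label 1 = foreground, 0 = background.
masks, scores, logits = predictor.predict(
    point_coords=np.array([[256, 256]]),
    point_labels=np.array([1]),
    multimask_output=True,  # returns three candidate masks ranked by predicted IoU
)
print(masks.shape, scores)  # (3, 512, 512) boolean masks and their quality scores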

Literature Reviews of SAM 2 Adaptations for Medical Image Segmentation.

Date Authors Title Code
202408 H. Liu et al. Surgical SAM 2: Real-time Segment Anything in Surgical Video by Efficient Frame Pruning (paper) Code
202408 Y. Yamagishi et al. Zero-shot 3D Segmentation of Abdominal Organs in CT Scans Using Segment Anything Model 2: Adapting Video Tracking Capabilities for 3D Medical Imaging (paper) None
202408 M. Mansoori et al. Polyp SAM 2: Advancing Zero shot Polyp Segmentation in Colorectal Cancer Detection (paper) Code
202408 AS. Yu et al. Novel adaptation of video segmentation to 3D MRI: efficient zero-shot knee segmentation with SAM2 (paper) None
202408 J. Yu et al. SAM 2 in Robotic Surgery: An Empirical Evaluation for Robustness and Generalization in Surgical Video Segmentation (paper) None
202408 T. Chen et al. SAM2-Adapter: Evaluating & Adapting Segment Anything 2 in Downstream Tasks: Camouflage, Shadow, Medical Image Segmentation, and More (paper) None
202408 S. Sengupta et al. Is SAM 2 Better than SAM in Medical Image Segmentation? (paper) None
202408 Y. Shen et al. Performance and Non-adversarial Robustness of the Segment Anything Model 2 in Surgical Video Segmentation (paper) None
202408 M. Zhang et al. SAM2-PATH: A better segment anything model for semantic segmentation in digital pathology (paper) Code
202408 J. Ma et al. Segment Anything in Medical Images and Videos: Benchmark and Deployment (paper) Code
202408 Z. Yan et al. Biomedical SAM 2: Segment Anything in Biomedical Images and Videos (paper) None
202408 C. Shen et al. Interactive 3D Medical Image Segmentation with SAM 2 (paper) Code
202408 A. Lou et al. Zero-Shot Surgical Tool Segmentation in Monocular Video Using Segment Anything Model 2 (paper) Code
202408 J. Zhu et al. Medical SAM 2: Segment medical images as video via Segment Anything Model 2 (paper) Code
202408 H. Dong et al. Segment anything model 2: an application to 2D and 3D medical images (paper) None
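
Several of the entries above (e.g., zero-shot 3D segmentation of abdominal organs, knee MRI segmentation, interactive 3D segmentation) exploit SAM 2's video memory mechanism by treating the slices of a 3D scan as video frames, so that a prompt placed on one slice is propagated through the rest of the volume. The following is a hedged sketch of this slice-as-frame idea, assuming the video predictor API of the official sam2 package; the config, checkpoint, slice directory, and prompt coordinates are placeholders.

# Hedged sketch: slices of a 3D scan exported as a folder of JPEG frames are
# treated as a video, and a point prompt on one slice is propagated to the
# remaining slices by SAM 2's memory-based tracker. Assumes the official sam2
# package; config, checkpoint, paths, and prompt coordinates are placeholders.
import numpy as np
import torch
from sam2.build_sam import build_sam2_video_predictor

predictor = build_sam2_video_predictor(
    "configs/sam2.1/sam2.1_hiera_t.yaml",   # model config shipped with the sam2 repo
    "checkpoints/sam2.1_hiera_tiny.pt",     # downloaded checkpoint (placeholder path)
)

with torch.inference_mode():
    # Directory of per-slice JPEGs named 00000.jpg, 00001.jpg, ...
    state = predictor.init_state(video_path="ct_case_as_frames/")

    # One foreground click on a middle slice seeds the target organ or lesion.
    predictor.add_new_points_or_box(
        inference_state=state,
        frame_idx=40,                        # slice index used as the "prompt frame"
        obj_id=1,
        points=np.array([[200, 150]], dtype=np.float32),
        labels=np.array([1], dtype=np.int32),
    )

    # Propagate the mask through the remaining slices to obtain a 3D segmentation.
    volume_masks = {}
    for frame_idx, obj_ids, mask_logits in predictor.propagate_in_video(state):
        volume_masks[frame_idx] = (mask_logits[0] > 0.0).cpu().numpy()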

Literature Reviews of Foundation Models / SAM for Medical Image Segmentation.

Date Authors Title Code
202408 S. Yang et al. SAM-UNet: Enhancing Zero-Shot Segmentation of SAM for Universal Medical Images (paper) Code
202408 J. Wei et al. SAM-FNet: SAM-Guided Fusion Network for Laryngo-Pharyngeal Tumor Detection (paper) Code
202408 X. Wei et al. PromptSAM+: Malware Detection based on Prompt Segment Anything Model (paper) Code
202407 J. Cai et al. PESAM: Privacy-Enhanced Segment Anything Model for Medical Image Segmentation (paper) None
202407 M. Asokan et al. A Federated Learning-Friendly Approach for Parameter-Efficient Fine-Tuning of SAM in 3D Segmentation (paper) Code
202407 SN. Gowda et al. CC-SAM: SAM with Cross-feature Attention and Context for Ultrasound Image Segmentation (paper) None
202407 X. Huo et al. Dr-SAM: U-Shape Structure Segment Anything Model for Generalizable Medical Image Segmentation (paper) None
202407 H. Fang et al. SAM-MIL: A Spatial Contextual Aware Multiple Instance Learning Approach for Whole Slide Image Classification (paper) None
202407 Q. Xu et al. ESP-MedSAM: Efficient Self-Prompting SAM for Universal Domain-Generalized Medical Image Segmentation (paper) Code
202407 X. Zhao et al. SAM-Driven Weakly Supervised Nodule Segmentation with Uncertainty-Aware Cross Teaching (paper) None
202407 Q. Xu et al. ProtoSAM: One Shot Medical Image Segmentation With Foundational Models (paper) Code
202407 A. Murali et al. CycleSAM: One-Shot Surgical Scene Segmentation using Cycle-Consistent Feature Matching to Prompt SAM (paper) None
202407 T. Song et al. TinySAM-Med3D: A Lightweight Segment Anything Model for Volumetric Medical Imaging with Mixture of Experts (paper) None
202407 Y. Gao et al. MBA-Net: SAM-driven Bidirectional Aggregation Network for Ovarian Tumor Segmentation (paper) None
202407 J. Miao et al. Cross Prompting Consistency with Segment Anything Model for Semi-supervised Medical Image Segmentation (paper) Code
202407 G. Wang et al. SAM-Med3D-MoE: Towards a Non-Forgetting Segment Anything Model via Mixture of Experts for 3D Medical Image Segmentation (paper) None
202407 Z. Zhang et al. Quantification of cardiac capillarization in basement-membrane-immunostained myocardial slices using Segment Anything Model (paper) None
202407 H. Li et al. ASPS: Augmented Segment Anything Model for Polyp Segmentation (paper) Code
202406 Y. Xie et al. SimTxtSeg: Weakly-Supervised Medical Image Segmentation with Simple Text Cues (paper) None
202406 X. Deng et al. MemSAM: Taming Segment Anything Model for Echocardiography Video Segmentation (paper) Code
202406 Yunhe Gao Training Like a Medical Resident: Context-Prior Learning Toward Universal Medical Image Segmentation (paper) Code
202406 C.D Albelda et al. How SAM Perceives Different mp-MRI Brain Tumor Domains? (paper) Code
202406 T. Huang et al. Improving Segment Anything on the Fly: Auxiliary Online Learning and Adaptive Fusion for Medical Image Segmentation (paper) Code
202406 B. Towle et al. SimSAM: Zero-shot Medical Image Segmentation via Simulated Interaction (paper) Code
202405 Y. Gu et al. LeSAM: Adapt Segment Anything Model for medical lesion segmentation (paper) None
202405 J. Leng et al. Development of UroSAM: A Machine Learning Model to Automatically Identify Kidney Stone Composition from Endoscopic Video (paper) None
202405 MM. Rahman et al. PP-SAM: Perturbed Prompts for Robust Adaptation of Segment Anything Model for Polyp Segmentation (paper) Code
202405 X. Zhang et al. A Foundation Model for Brain Lesion Segmentation with Mixture of Modality Experts (paper) Code
202405 TJ. Chan et al. SAM3D: Zero-Shot Semi-Automatic Segmentation in 3D Medical Images with the Segment Anything Model (paper) None
202405 HL. Zedda et al. SAMMI: Segment Anything Model for Malaria Identification (paper) None
202404 H. Zhou et al. AGSAM: Agent-Guided Segment Anything Model for Automatic Segmentation in Few-Shot Scenarios (paper) None
202404 V. Zohranyan et al. Dr-SAM: An End-to-End Framework for Vascular Segmentation, Diameter Estimation, and Anomaly Detection on Angiography Images (paper) Code
202404 Z. Tu et al. Ultrasound SAM Adapter: Adapting SAM for Breast Lesion Segmentation in Ultrasound Images (paper) Code
202404 Y. Sheng et al. Surgical-DeSAM: Decoupling SAM for Instrument Segmentation in Robotic Surgery (paper) None
202404 J. Yu et al. Adapting SAM for Surgical Instrument Tracking and Segmentation in Endoscopic Submucosal Dissection Videos (paper) None
202404 H. Gu et al. How to build the best medical image segmentation algorithm using foundation models: a comprehensive empirical study with Segment Anything Model (paper) Code
202404 W. Abebe et al. SAM-I-Am: Semantic Boosting for Zero-shot Atomic-Scale Electron Micrograph Segmentation (paper) None
202404 S. Aleem et al. Test-Time Adaptation with SaLIP: A Cascade of SAM and CLIP for Zero-shot Medical Image Segmentation (paper) Code
202404 Z. Su et al. Adapting SAM to histopathology images for tumor bud segmentation in colorectal cancer (paper) None
202404 Y. Ding et al. Barely-supervised Brain Tumor Segmentation via Employing Segment Anything Model (paper) None
202404 Y. Zhu et al. SAM-Att: A Prompt-free SAM-related Model with an Attention Module for Automatic Segmentation of the Left Ventricle in Echocardiography (paper) None
202404 Y. Liu et al. Universal 3D CT lesion segmentation using SAM with RECIST annotation (paper) None
202403 Z. Cheng et al. Unleashing the Potential of SAM for Medical Adaptation via Hierarchical Decoding (paper) Code
202403 Y. Liu et al. Segment Any Medical Model Extended (paper) None
202403 P. Kulkarni et al. Anytime, Anywhere, Anyone: Investigating the Feasibility of Segment Anything Model for Crowd-Sourcing Medical Image Annotations (paper) None
202403 H. Guo et al. Towards a Comprehensive, Efficient and Promptable Anatomic Structure Segmentation Model using 3D Whole-body CT Scans (paper) None
202403 S. Li et al. Concatenate, Fine-tuning, Re-training: A SAM-enabled Framework for Semi-supervised 3D Medical Image Segmentation (paper) Code
202403 M. Jiang et al. Uncertainty-Aware Adapter: Adapting Segment Anything Model (SAM) for Ambiguous Medical Image Segmentation (paper) None
202403 Z. Chen et al. Cardiac Magnetic Resonance 2D+T Short- and Long-axis Segmentation via Spatio-temporal SAM Adaptation (paper) None
202403 Y. Shen et al. FastSAM3D: An Efficient Segment Anything Model for 3D Volumetric Medical Images (paper) Code
202403 H. Liu et al. WSI-SAM: Multi-resolution Segment Anything Model (SAM) for histopathology whole-slide images (paper) Code
202403 YX. Teoh et al. Segmentation of Knee Bones for Osteoarthritis Assessment: A Comparative Analysis of Supervised, Few-Shot, and Zero-Shot Learning Approaches (paper) None
202403 Y. Wang et al. SAMDA: Leveraging SAM on Few-Shot Domain Adaptation for Electronic Microscopy Segmentation (paper) None
202403 Y. Liu et al. FedFMS: Exploring Federated Foundation Models for Medical Image Segmentation (paper) Code
202403 C. Zhao et al. Part-aware Personalized Segment Anything Model for Patient-Specific Segmentation (paper) None
202403 J. Wang et al. ProMISe: Promptable Medical Image Segmentation using SAM (paper) None
202402 L. Zhang et al. BLO-SAM: Bi-Level Optimization Based Finetuning of the Segment Anything Model for Overfitting-Preventing Semantic Segmentation (paper) Code
202402 KJ. Oguine et al. From Generalization to Precision: Exploring SAM for Tool Segmentation in Surgical Environments (paper) None
202402 J. Ren et al. Segment anything model for head and neck tumor segmentation with CT, PET and MRI multi-modality images (paper) None
202402 Z. Chen et al. UN-SAM: Universal Prompt-Free Segmentation for Generalized Nuclei Images (paper) Code
202402 H. Wu et al. Tumor segmentation on whole slide images: training or prompting? (paper) None
202402 P. Farmanifard et al. Iris-SAM: Iris Segmentation Using a Foundational Model (paper) None
202402 A. Guo et al. ClickSAM: Fine-tuning Segment Anything Model using click prompts for ultrasound image segmentation (paper) None
202401 J. Wan et al. TriSAM: Tri-Plane SAM for zero-shot cortical blood vessel segmentation in VEM images (paper) None
202401 S. Na et al. Segment Any Cell: A SAM-based Auto-prompting Fine-tuning Framework for Nuclei Segmentation (paper) None
202401 H. Gu et al. SegmentAnyBone: A Universal Model that Segments Any Bone at Any Location on MRI (paper) Code
202401 S. Li et al. ClipSAM: CLIP and SAM Collaboration for Zero-Shot Anomaly Segmentation (paper) Code
202401 JD. Gutiérrez et al. No More Training: SAM's Zero-Shot Transfer Capabilities for Cost-Efficient Medical Image Segmentation (paper) None
202401 H. Wang et al. Leveraging SAM for Single-Source Domain Generalization in Medical Image Segmentation (paper) Code
202401 Z. Feng et al. Swinsam: Fine-Grained Polyp Segmentation in Colonoscopy Images Via Segment Anything Model Integrated with a Swin Transformer Decoder (paper) None
202312 Z. Zhao et al. One Model to Rule them All: Towards Universal Segmentation for Medical Images with Text Prompts (paper) Code
202312 W. Yue et al. Part to Whole: Collaborative Prompting for Surgical Instrument Segmentation (paper) Code
202312 ZM. Colbert et al. Repurposing Traditional U-Net Predictions for Sparse SAM Prompting in Medical Image Segmentation (paper) None
202312 W. Xie et al. SAM Fewshot Finetuning for Anatomical Segmentation in Medical Images (paper) None
202312 JG. Almeida et al. Testing the Segment Anything Model on radiology data (paper) None
202312 M. Barakat et al. Towards SAMBA: Segment Anything Model for Brain Tumor Segmentation in Sub-Sharan African Populations (paper) None
202312 Y. Zhang et al. SQA-SAM: Segmentation Quality Assessment for Medical Images Utilizing the Segment Anything Model (paper) Code
202312 S. Chen et al. ASLseg: Adapting SAM in the Loop for Semi-supervised Liver Tumor Segmentation (paper) None
202312 HE. Wong et al. ScribblePrompt: Fast and Flexible Interactive Segmentation for Any Medical Image (paper) Code
202312 Y. Zhang et al. SemiSAM: Exploring SAM for Enhancing Semi-Supervised Medical Image Segmentation with Extremely Limited Annotations (paper) None
202312 Y. Zhao et al. Segment Anything Model-guided Collaborative Learning Network for Scribble-supervised Polyp Segmentation (paper) None
202311 N. Li et al. Segment Anything Model for Semi-Supervised Medical Image Segmentation via Selecting Reliable Pseudo-Labels (paper) None
202311 X. Wei et al. I-MedSAM: Implicit Medical Image Segmentation with Segment Anything (paper) None
202311 Z. Shui et al. Unleashing the Power of Prompt-driven Nucleus Instance Segmentation (paper) Code
202311 M. Li and G. Yang et al. Where to Begin? From Random to Foundation Model Instructed Initialization in Federated Learning for Medical Image Segmentation (paper) None
202311 AK. Tyagi et al. Guided Prompting in SAM for Weakly Supervised Cell Segmentation in Histopathological Images (paper) Code
202311 Y. Du et al. SegVol: Universal and Interactive Volumetric Medical Image Segmentation (paper) Code
202311 DM. Nguyen et al. On the Out of Distribution Robustness of Foundation Models in Medical Image Segmentation (paper) None
202311 U. Israel et al. A Foundation Model for Cell Segmentation (paper) Code
202311 Q. Quan et al. Slide-SAM: Medical SAM Meets Sliding Window (paper) None
202311 Y. Zhang et al. Segment Anything Model with Uncertainty Rectification for Auto-Prompting Medical Image Segmentation (paper) Code
202311 Y. Wang et al. SAMIHS: Adaptation of Segment Anything Model for Intracranial Hemorrhage Segmentation (paper) Code
202311 H. Jiang et al. GlanceSeg: Real-time microangioma lesion segmentation with gaze map-guided foundation model for early detection of diabetic retinopathy (paper) None
202311 Y. Xu et al. EviPrompt: A Training-Free Evidential Prompt Generation Method for Segment Anything Model in Medical Images (paper) None
202311 DL. Ferreira and R. Arnaout Are foundation models efficient for medical image segmentation? (paper) Code
202310 H. Li et al. Promise: Prompt-driven 3D Medical Image Segmentation Using Pretrained Image Foundation Models (paper) Code
202310 D. Anand et al. One-shot Localization and Segmentation of Medical Images with Foundation Models (paper) None
202310 H. Wang et al. SAM-Med3D (paper) Code
202310 SK. Kim et al. Evaluation and improvement of Segment Anything Model for interactive histopathology image segmentation (paper) Code
202310 X. Chen et al. SAM-OCTA: Prompting Segment-Anything for OCTA Image Segmentation (paper) Code
202310 M. Peivandi et al. Empirical Evaluation of the Segment Anything Model (SAM) for Brain Tumor Segmentation (paper) None
202310 H. Ravishankar et al. SonoSAM - Segment Anything on Ultrasound Images (paper) None
202310 A. Ranem et al. Exploring SAM Ablations for Enhancing Medical Segmentation in Radiology and Pathology (paper) None
202310 S. Pandey et al. Comprehensive Multimodal Segmentation in Medical Imaging: Combining YOLOv8 with SAM and HQ-SAM Models (paper) None
202309 Y. Li et al. nnSAM: Plug-and-play Segment Anything Model Improves nnUNet Performance (paper) Code
202309 Y. Zhao et al. MFS Enhanced SAM: Achieving Superior Performance in Bimodal Few-shot Segmentation (paper) Code
202309 C. Wang et al. SAM-OCTA: A Fine-Tuning Strategy for Applying Foundation Model to OCTA Image Segmentation Tasks (paper) Code
202309 Y. Zhang et al. 3D-U-SAM Network For Few-shot Tooth Segmentation in CBCT Images (paper) None
202309 CJ. Chao et al. Comparative Eminence: Foundation versus Domain-Specific Model for Cardiac Ultrasound Segmentation (paper) None
202309 H. Ning et al. An Accurate and Efficient Neural Network for OCTA Vessel Segmentation and a New Dataset (paper) Code
202309 C. Chen et al. MA-SAM: Modality-agnostic SAM Adaptation for 3D Medical Image Segmentation (paper) Code
202309 P. Zhang and Y. Wang Segment Anything Model for Brain Tumor Segmentation (paper) None
202309 B. Fazekas et al. Adapting Segment Anything Model (SAM) for Retinal OCT (paper) None
202309 X. Lin et al. SAMUS: Adapting Segment Anything Model for Clinically-Friendly and Generalizable Ultrasound Image Segmentation (paper) Code
202309 X. Xing et al. SegmentAnything helps microscopy images based automatic and quantitative organoid detection and analysis (paper) Code
202309 NT. Bui et al. SAM3D: Segment Anything Model in Volumetric Medical Images (paper) Code
202308 Y. Zhang et al. Self-Sampling Meta SAM: Enhancing Few-shot Medical Image Segmentation with Meta-Learning (paper) None
202308 J. Cheng et al. SAM-Med2D (paper) Code
202308 C. Li et al. Auto-Prompting SAM for Mobile Friendly 3D Medical Image Segmentation (paper) None
202308 W. Feng et al. Cheap Lunch for Medical Image Segmentation by Fine-tuning SAM on Few Exemplars (paper) None
202308 Y. Zhang et al. SamDSK: Combining Segment Anything Model with Domain-Specific Knowledge for Semi-Supervised Learning in Medical Image Segmentation (paper) None
202308 A. Lou et al. SAMSNeRF: Segment Anything Model (SAM) Guides Dynamic Surgical Scene Reconstruction by Neural Radiance Field (NeRF) (paper) Code
202308 A. Archit et al. Segment Anything for Microscopy (paper) Code
202308 X. Yao et al. False Negative/Positive Control for SAM on Noisy Medical Images (paper) Code
202308 B. Fazekas et al. SAMedOCT: Adapting Segment Anything Model (SAM) for Retinal OCT (paper) None
202308 W. Yue et al. SurgicalSAM: Efficient Class Promptable Surgical Instrument Segmentation (paper) Code
202308 H. Zhang et al. CARE: A Large Scale CT Image Dataset and Clinical Applicable Benchmark Model for Rectal Cancer Segmentation (paper) Code
202308 Q. Wu et al. Self-Prompting Large Vision Models for Few-Shot Medical Image Segmentation (paper) Code
202308 A. Wang et al. SAM Meets Robotic Surgery: An Empirical Study on Generalization, Robustness and Adaptation (paper) None
202308 D. Shin et al. CEmb-SAM: Segment Anything Model with Condition Embedding for Joint Learning from Heterogeneous Datasets (paper) None
202308 R. Biswas Polyp-SAM++: Can A Text Guided SAM Perform Better for Polyp Segmentation? (paper) Code
202308 S. Cao et al. TongueSAM: An Universal Tongue Segmentation Model Based on SAM with Zero-Shot (paper) Code
202308 X. Li et al. Leverage Weakly Annotation to Pixel-wise Annotation via Zero-shot Segment Anything Model for Molecular-empowered Learning (paper) None
202308 JN. Paranjape et al. AdaptiveSAM: Towards Efficient Tuning of SAM for Surgical Scene Segmentation (paper) Code
202308 Z. Huang et al. Push the Boundary of SAM: A Pseudo-label Correction Framework for Medical Segmentation (paper) None
202307 J. Zhang et al. SAM-Path: A Segment Anything Model for Semantic Segmentation in Digital Pathology (paper) Code
202307 MS. Hossain et al. Robust HER2 Grading of Breast Cancer Patients using Zero-shot Segment Anything Model (SAM) (paper) None
202307 C. Wang et al. SAM^Med: A medical image annotation framework based on large vision model (paper) None
202307 G. Deng et al. SAM-U: Multi-box prompts triggered uncertainty estimation for reliable SAM in medical image (paper) None
202307 H. Kim et al. Empirical Analysis of a Segmentation Foundation Model in Prostate Imaging (paper) None
202307 X. Shi et al. Cross-modality Attention Adapter: A Glioma Segmentation Fine-tuning Method for SAM Using Multimodal Brain MR Images (paper) None
202307 C. Cui et al. All-in-SAM: from Weak Annotation to Pixel-wise Nuclei Segmentation with Prompt-based Finetuning (paper) None
202306 E. Kellener et al. Utilizing Segment Anything Model for Assessing Localization of Grad-CAM in Medical Imaging (paper) None
202306 F. Hörst et al. CellViT: Vision Transformers for Precise Cell Segmentation and Classification (paper) Code
202306 W. Lei et al. MedLSAM: Localize and Segment Anything Model for 3D Medical Images (paper) Code
202306 X. Hu et al. How to Efficiently Adapt Large Segmentation Model (SAM) to Medical Images (paper) Code
202306 S. Gong et al. 3DSAM-adapter: Holistic Adaptation of SAM from 2D to 3D for Promptable Medical Image Segmentation (paper) Code
202306 DMH. Nguyen et al. LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching (paper) Code
202306 S. Chai et al. Ladder Fine-tuning approach for SAM integrating complementary network (paper) Code
202306 L. Zhang et al. Segment Anything Model (SAM) for Radiation Oncology (paper) None
202306 G. Ning et al. The potential of 'Segment Anything' (SAM) for universal intelligent ultrasound image guidance (paper) None
202306 C. Shen et al. Temporally-Extended Prompts Optimization for SAM in Interactive Medical Image Segmentation (paper) None
202306 T. Shaharabany et al. AutoSAM: Adapting SAM to Medical Images by Overloading the Prompt Encoder (paper) None
202306 Y. Gao et al. DeSAM: Decoupling Segment Anything Model for Generalizable Medical Image Segmentation (paper) Code
202305 D. Lee et al. IAMSAM : Image-based Analysis of Molecular signatures using the Segment-Anything Model (paper) Code
202305 M. Hu et al. BreastSAM: A Study of Segment Anything Model for Breast Tumor Detection in Ultrasound Images (paper) None
202305 J. Wu PromptUNet: Toward Interactive Medical Image Segmentation (paper) Code
202305 Y. Li et al. Polyp-SAM: Transfer SAM for Polyp Segmentation (paper) Code
202305 C. Mattjie et al. Exploring the Zero-Shot Capabilities of the Segment Anything Model (SAM) in 2D Medical Imaging: A Comprehensive Evaluation and Practical Guideline (paper) None
202305 D. Cheng et al. SAM on Medical Images: A Comprehensive Study on Three Prompt Modes (paper) None
202304 A. Wang et al. SAM Meets Robotic Surgery: An Empirical Study in Robustness Perspective (paper) None
202304 Y. Huang et al. Segment Anything Model for Medical Images? (paper) None
202304 M. Hu et al. SkinSAM: Empowering Skin Cancer Segmentation with Segment Anything Model (paper) None
202304 B. Wang et al. GazeSAM: What You See is What You Segment (paper) Code
202304 K. Zhang and D. Liu Customized Segment Anything Model for Medical Image Segmentation (paper) Code
202304 Z. Qiu et al. Learnable Ophthalmology SAM (paper) Code
202304 P. Shi et al. Generalist Vision Foundation Models for Medical Imaging: A Case Study of Segment Anything Model on Zero-Shot Medical Segmentation (paper) None
202304 J. Wu et al. Medical SAM Adapter: Adapting Segment Anything Model for Medical Image Segmentation (paper) Code
202304 J. Ma and B. Wang Segment Anything in Medical Images (paper) Code
202304 Y. Zhang et al. Input Augmentation with SAM: Boosting Medical Image Segmentation with Segmentation Foundation Model (paper) None
202304 MA. Mazurowski et al. Segment Anything Model for Medical Image Analysis: an Experimental Study (paper) Code
202304 S. He et al. Accuracy of Segment-Anything Model (SAM) in medical image segmentation tasks (paper) None
202304 T. Chen et al. SAM Fails to Segment Anything? – SAM-Adapter: Adapting SAM in Underperformed Scenes: Camouflage, Shadow, Medical Image Segmentation, and More (paper) Code
202304 C. Hu and X. Li When SAM Meets Medical Images: An Investigation of Segment Anything Model (SAM) on Multi-phase Liver Tumor Segmentation (paper) None
202304 F. Putz et al. The “Segment Anything” foundation model achieves favorable brain tumor autosegmentation accuracy on MRI to support radiotherapy treatment planning (paper) None
202304 T. Zhou et al. Can SAM Segment Polyps? (paper) Code
202304 Y. Liu et al. SAMM (Segment Any Medical Model): A 3D Slicer Integration to SAM (paper) Code
202304 S. Roy et al. SAM.MD: Zero-shot medical image segmentation capabilities of the Segment Anything Model (paper) None
202304 S. Mohapatra et al. SAM vs BET: A Comparative Study for Brain Extraction and Segmentation of Magnetic Resonance Images using Deep Learning (paper) None
202304 R. Deng et al. Segment Anything Model (SAM) for Digital Pathology: Assess Zero-shot Segmentation on Whole Slide Imaging (paper) None
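
Many of the adaptation methods listed above (e.g., Medical SAM Adapter, SAM-Med2D, SAMUS, MA-SAM) follow a parameter-efficient fine-tuning recipe: SAM's pre-trained weights are frozen and small trainable bottleneck adapters are inserted into the ViT image encoder. The sketch below illustrates this generic idea in plain PyTorch with a stand-in transformer stack; it is not the implementation of any specific paper.

# Illustrative sketch of parameter-efficient adapter tuning as used by many of
# the works above: freeze the pre-trained encoder, train only small bottleneck
# adapters appended to each block. Generic PyTorch; not any paper's exact code.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project -> GELU -> up-project, added residually."""
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))

class AdaptedBlock(nn.Module):
    """Wraps a frozen transformer block and appends a trainable adapter."""
    def __init__(self, block: nn.Module, dim: int):
        super().__init__()
        self.block = block
        self.adapter = Adapter(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.adapter(self.block(x))

def add_adapters(encoder_blocks: nn.ModuleList, dim: int) -> nn.ModuleList:
    """Freeze the original blocks and interleave trainable adapters."""
    for p in encoder_blocks.parameters():
        p.requires_grad = False                      # keep the pre-trained weights fixed
    return nn.ModuleList(AdaptedBlock(b, dim) for b in encoder_blocks)

if __name__ == "__main__":
    # Stand-in for a ViT image encoder: a stack of TransformerEncoderLayer blocks.
    dim, tokens = 256, 196
    blocks = nn.ModuleList(
        nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        for _ in range(4)
    )
    adapted = add_adapters(blocks, dim)

    x = torch.randn(2, tokens, dim)
    for blk in adapted:
        x = blk(x)

    trainable = sum(p.numel() for p in adapted.parameters() if p.requires_grad)
    total = sum(p.numel() for p in adapted.parameters())
    print(f"trainable params: {trainable} / {total}")  # only adapter weights are trained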

Large-Scale Datasets for Developing Medical Foundation Models.

Date Authors Title Dataset
202404 F. Bai et al. M3D: Advancing 3D Medical Image Analysis with Multi-Modal Large Language Models (paper) Link
202311 J. Ye et al. SA-Med2D-20M Dataset: Segment Anything in 2D Medical Imaging with 20 Million masks (paper) Link

CVPR2024 Workshop: Segment Anything in Medical Images on Laptop.

(Challenge Website) (Papers)

The field of medical image segmentation is undergoing a paradigm shift, moving from specialized models designed for individual tasks to foundation models capable of handling a wide range of segmentation scenarios. This challenge seeks universal promptable medical image segmentation models that are deployable on laptops or other edge devices without reliance on GPUs.