Medical_SAM

Step0: Install

Follow the installation instructions of GroundingDINO and Segment-anything.


Step1: Convert the data format

Please follow the instructions in the ./data directory.


The baseline experiment:

Step1: Extract the image embeddings

# e.g.
python extract_fm.py \
    --dataset PanNuke \
    --weight_path WEIGHT_DIR/sam_vit_h_4b8939.pth \
    --model_type vit_h

Arguments:
    dataset: the folder name in ./datasets
    weight_path: the path to the pretrained model weights, downloaded from SAM
    model_type: must correspond to weight_path; the valid pairings are listed on SAM's GitHub page
Output:
    The image embeddings are saved as torch tensors in ./datasets/DATASET_NAME/features
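
As a quick sanity check, the saved features can be loaded back with torch. A minimal sketch, assuming one .pt file per image (the exact file naming is repository-specific and assumed here):

import torch

# Load one saved image embedding; the file name "0.pt" is an assumption.
emb = torch.load("./datasets/PanNuke/features/0.pt")
# A SAM ViT-H image embedding is typically of shape (256, 64, 64),
# or (1, 256, 64, 64) if the batch dimension was kept.
print(emb.shape)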

Step2: Evaluate the dataset with different prompts

# e.g.
python evaluate_from_pt.py \
    --dataset MoNuSeg \
    --weight_path WEIGHT_DIR/sam_vit_h_4b8939.pth \
    --model_type vit_h \
    --prompt_type One_Point \
    --wandb_log
Arguments:
    the shared arguments are the same as in Step1
    prompt_type: the type of prompt to use (e.g., One_Point); a sketch of what this means follows this list
    wandb_log: log the scores and visualizations to wandb
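
For reference, this is roughly what a One_Point prompt amounts to under the standard segment-anything API. A minimal sketch, not the evaluation script's exact code; image, x, and y are placeholders:

import numpy as np
from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["vit_h"](checkpoint="WEIGHT_DIR/sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)
predictor.set_image(image)  # image: HxWx3 uint8 RGB numpy array

# A single foreground point (label 1) placed on the target object.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[x, y]]),
    point_labels=np.array([1]),
    multimask_output=False,
)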

MaskGroundingDINO

cd ./multi_modal

Finetune

python finetune_GDINO.py --dataset DATASET_NAME --output_dir SAVE_PATH

Evaluate the pretrained or finetuned model, e.g. on SegPC-2021's validation set

python Grounding_dino_infer.py --dataset SegPC-2021 --dataset_type valid
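
Under the hood, this kind of evaluation rests on GroundingDINO's standard inference utilities. A minimal sketch, assuming the stock GroundingDINO API; the config path, caption, and thresholds below are illustrative assumptions, not the script's defaults:

from groundingdino.util.inference import load_model, load_image, predict

model = load_model("CONFIG_PATH/GroundingDINO_SwinT_OGC.py",
                   "WEIGHT_DIR/groundingdino_swint_ogc.pth")
image_source, image = load_image("example.png")

boxes, logits, phrases = predict(
    model=model,
    image=image,
    caption="cell",        # the text prompt here is an assumption
    box_threshold=0.35,
    text_threshold=0.25,
)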

Generate the segmentation masks, e.g. for SegPC-2021's validation set

python generate_mask.py --dataset SegPC-2021 --dataset_type valid --wandb_log
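
Conceptually, mask generation feeds the detected boxes to SAM as box prompts. A minimal sketch, assuming predictor is a SamPredictor with the image already set; note that GroundingDINO returns normalized cxcywh boxes, which must first be converted to pixel-space xyxy:

import numpy as np

# box_xyxy: a pixel-space [x0, y0, x1, y1] box from the detector
# (converted from GroundingDINO's normalized cxcywh output).
masks, scores, _ = predictor.predict(
    box=np.array(box_xyxy),
    multimask_output=False,
)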

YoloV8 + SAM

cd ./box_prompt and follow the instructions there. A sketch of the detection step is shown below.
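
The pipeline is the same idea with YoloV8 as the detector. A minimal sketch using the ultralytics API; the checkpoint path is a placeholder assumption for the finetuned weights in the Model Zoo:

from ultralytics import YOLO

model = YOLO("WEIGHT_DIR/medical_yolov8.pt")  # finetuned weights, path assumed
results = model("example.png")

# Pixel-space xyxy boxes, ready to be used as SAM box prompts.
boxes_xyxy = results[0].boxes.xyxy.cpu().numpy()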


SPPnet: point prompts

tar -xf ./sppnet.tar

Then follow the instructions inside.


Model Zoo

Pretrained models

  1. SAM: please download from Segment-anything
  2. Multi-modal pretrained model: please download from GroundingDINO

Finetuned models

  1. MaskGroundingDINO: weights, config
  2. Medical YoloV8: weights