Segment-Anything-CLIP

Connecting segment-anything's output masks with the CLIP model; Awesome-Segment-Anything-Works


Connect Segment-Anything with CLIP

We aim to classify the output masks of segment-anything with off-the-shelf CLIP models: the cropped image corresponding to each mask is sent to the CLIP model for classification.
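Below is a minimal sketch of this pipeline, assuming the segment-anything and CLIP packages from the demo instructions; the checkpoint path, image path, and label vocabulary are placeholders, and the repository's own script may differ in detail.

# Sketch: generate masks with SAM, crop each mask's bounding box,
# and classify the crop with CLIP. Paths and labels are placeholders.
import numpy as np
import torch
import clip
from PIL import Image
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

device = "cuda" if torch.cuda.is_available() else "cpu"

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth").to(device)
mask_generator = SamAutomaticMaskGenerator(sam)

clip_model, preprocess = clip.load("ViT-B/32", device=device)
labels = ["dog", "cat", "tree"]  # placeholder vocabulary
text_tokens = clip.tokenize([f"a photo of a {l}" for l in labels]).to(device)

image = np.array(Image.open("example.jpg").convert("RGB"))  # placeholder path
masks = mask_generator.generate(image)

with torch.no_grad():
    text_features = clip_model.encode_text(text_tokens)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    for m in masks:
        # "bbox" is the mask's bounding box in XYWH format
        x, y, w, h = [int(v) for v in m["bbox"]]
        crop = Image.fromarray(image[y:y + h, x:x + w])
        image_input = preprocess(crop).unsqueeze(0).to(device)
        image_features = clip_model.encode_image(image_input)
        image_features /= image_features.norm(dim=-1, keepdim=True)
        probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
        print(labels[probs.argmax().item()])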

Other Nice Works

Editing-Related Works

  1. sail-sg/EditAnything
  2. IDEA-Research/Grounded-Segment-Anything
  3. geekyutao/Inpaint-Anything
  4. Luodian/RelateAnything

NeRF-Related Works

  1. ashawkey/Segment-Anything-NeRF
  2. Anything-of-anything/Anything-3D
  3. Jun-CEN/SegmentAnyRGBD
  4. Pointcept/SegmentAnything3D

Segmentation-Related Works

  1. maxi-w/CLIP-SAM
  2. Curt-Park/segment-anything-with-clip
  3. kadirnar/segment-anything-video
  4. fudan-zvg/Semantic-Segment-Anything
  5. continue-revolution/sd-webui-segment-anything
  6. RockeyCoss/Prompt-Segment-Anything
  7. ttengwang/Caption-Anything
  8. ngthanhtin/owlvit_segment_anything
  9. lang-segment-anything
  10. helblazer811/RefSAM
  11. Hedlen/awesome-segment-anything
  12. ziqi-jin/finetune-anything
  13. ylqi/Count-Anything
  14. xmed-lab/CLIP_Surgery
  15. segments-ai/panoptic-segment-anything
  16. Cheems-Seminar/grounded-segment-any-parts
  17. aim-uofa/Matcher
  18. SysCV/sam-hq
  19. CASIA-IVA-Lab/FastSAM
  20. ChaoningZhang/MobileSAM
  21. JamesQFreeman/Sam_LoRA
  22. UX-Decoder/Semantic-SAM
  23. cskyl/SAM_WSSS
  24. ggsDing/SAM-CD
  25. yformer/EfficientSAM
  26. XiaRho/SEMat

Labelling-Related Works

  1. vietanhdev/anylabeling
  2. anuragxel/salt

Tracking-Related Works

  1. gaomingqi/track-anything
  2. z-x-yang/Segment-and-Track-Anything
  3. achalddave/segment-any-moving

Medical-Related Works

  1. bowang-lab/medsam
  2. hitachinsk/SAMed
  3. cchen-cc/MA-SAM
  4. OpenGVLab/SAM-Med2D

Todo

  1. We plan to connect segment-anything with MaskCLIP.
  2. We plan to finetune on the COCO and LVIS datasets.

Run Demo

Download the sam_vit_h_4b8939.pth checkpoint from the SAM repository and put it at ./SAM-CLIP/. Then install the segment-anything and CLIP packages with the following commands:

cd SAM-CLIP; pip install -e .
pip install git+https://github.com/openai/CLIP.git

Then run the following script:

sh run.sh

Example

Feed an example image and a point (250, 250) to the SAM model. The input image and the three output masks are shown below:

The three masks and their corresponding predicted categories are as follows:

You can change the point location at L273-274 of scripts/amp_points.py:

## input points 
input_points_list = [[250, 250]]
label_list = [1]
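
For reference, here is a hedged sketch of how such a point prompt is typically passed to SAM's predictor; the exact variable names and structure of scripts/amp_points.py may differ, and the image path is a placeholder.

import numpy as np
from PIL import Image
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

image = np.array(Image.open("example.jpg").convert("RGB"))  # placeholder path
predictor.set_image(image)

masks, scores, logits = predictor.predict(
    point_coords=np.array([[250, 250]]),  # the (x, y) point used above
    point_labels=np.array([1]),           # 1 marks a foreground point
    multimask_output=True,                # SAM returns three candidate masks
)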