FT-SAM

This repo is an implementation for fine-tuning the Segment Anything Model (SAM). It supports both binary segmentation and multi-class segmentation. Please try it with your own datasets.

Installation

Following Segment Anything, FT-SAM uses Python 3.8.16, PyTorch 1.8.0, and torchvision 0.9.0.

  1. Clone this repository.
    git clone https://github.com/usagisukisuki/FT-SAM.git
    cd FT-SAM
    
  2. Install PyTorch and TorchVision (you can follow the official installation instructions).
  3. Install other dependencies.
    pip install -r requirements.txt
    

Checkpoints

We use the vit_b checkpoint of SAM, and additionally the MobileSAM checkpoint. Please download them from the SAM and MobileSAM repositories and place them under "models/Pretrained_model":

models
├── Pretrained_model
    ├── sam_vit_b_01ec64.pth
    ├── mobile_sam.pt
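
Before training, a quick sanity check like the following can confirm the checkpoints are in place. This is an illustrative helper, not part of this repo; the paths match the tree above.

```python
from pathlib import Path

# Expected checkpoint locations (matching the tree above).
CHECKPOINTS = [
    Path("models/Pretrained_model/sam_vit_b_01ec64.pth"),
    Path("models/Pretrained_model/mobile_sam.pt"),
]

def check_checkpoints(paths=CHECKPOINTS):
    """Return the list of missing checkpoint files (empty if all present)."""
    return [p for p in paths if not p.is_file()]

if __name__ == "__main__":
    missing = check_checkpoints()
    if missing:
        print("Missing checkpoints:", ", ".join(str(p) for p in missing))
    else:
        print("All checkpoints found.")
```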

Dataset

As examples, two biological segmentation datasets can be evaluated in this repo: ISBI2012 (2 classes) and ssTEM (5 classes).

Please download them from [FT-SAM] and extract them under "Dataset" so that the directory looks like this:

Dataset
├── ISBI2012
    ├── Image
        ├── train_volume00
        ├── train_volume01
        ├── ...
    ├── Label

├── ssTEM
    ├── data
    ├── ...
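
Given the layout above, image/label pairs for ISBI2012 can be collected by matching filenames across the Image and Label folders. This is a sketch assuming the two folders use identical file names (e.g. train_volume00); it is not this repo's loader, so adjust it if your naming differs.

```python
from pathlib import Path

def list_pairs(root="Dataset/ISBI2012"):
    """Pair each training image with its label by matching filenames.

    Assumes the Image/ and Label/ folders shown above contain files
    with the same names -- an illustrative convention, adjust as needed.
    """
    root = Path(root)
    images = sorted((root / "Image").iterdir())
    labels = {p.name: p for p in (root / "Label").iterdir()}
    return [(img, labels[img.name]) for img in images if img.name in labels]
```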

Fine tuning on SAM

Binary segmentation (ISBI2012)

Once the ISBI2012 dataset is prepared, we can run the following command to train the model on a single GPU.

python3 train.py --gpu 0 --dataset 'ISBI2012' --out result_sam --modelname 'SAM' --batchsize 8

If we want to utilize multiple GPUs, we can run the following command instead.

CUDA_VISIBLE_DEVICES=0,1 python3 train.py --dataset 'ISBI2012' --out result_sam --modelname 'SAM' --batchsize 8 --multi

Multi-class segmentation (ssTEM)

Once the ssTEM dataset is prepared, we can run the following command to train the model on a single GPU.

python3 train.py --gpu 0 --dataset 'ssTEM' --out result_sam --modelname 'SAM' --batchsize 8 --num_classes=5 --multimask_output=True
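
The difference between the two modes shows up at the output head: binary segmentation typically thresholds a single sigmoid channel, while the multi-class setting (`--num_classes=5 --multimask_output=True`) takes an argmax over per-class logits. A minimal NumPy sketch of that post-processing (illustrative, not this repo's code):

```python
import numpy as np

def logits_to_mask(logits, threshold=0.5):
    """Turn network logits into an integer label map.

    logits: (C, H, W) array. C == 1 is treated as binary (sigmoid +
    threshold); C > 1 as multi-class (argmax over channels).
    """
    if logits.shape[0] == 1:                      # binary segmentation
        prob = 1.0 / (1.0 + np.exp(-logits[0]))  # sigmoid
        return (prob > threshold).astype(np.int64)
    return np.argmax(logits, axis=0)              # multi-class
```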

Fine tuning on SAM with Anything

We can try various adaptation methods. Please run one of the following commands to train the adapted SAM.

Fine tuning with LoRA [paper]

python3 train.py --gpu 0 --dataset 'ISBI2012' --modelname 'SAM_LoRA' 
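
The idea behind LoRA: the pretrained weight W stays frozen and only a low-rank update (alpha/r)·A·B is trained, cutting trainable parameters from d_in·d_out to r·(d_in+d_out). A minimal NumPy sketch of the concept (not this repo's implementation, which wraps SAM's attention layers):

```python
import numpy as np

class LoRALinear:
    """LoRA sketch: y = x @ (W + (alpha/r) * A @ B).

    W (d_in x d_out) is frozen; only the low-rank factors A (d_in x r)
    and B (r x d_out) would be trained.
    """
    def __init__(self, W, r=4, alpha=4, seed=0):
        rng = np.random.default_rng(seed)
        self.W = W                               # frozen pretrained weight
        self.A = rng.normal(0, 0.01, (W.shape[0], r))
        self.B = np.zeros((r, W.shape[1]))       # zero-init: no change at start
        self.scale = alpha / r

    def __call__(self, x):
        return x @ (self.W + self.scale * (self.A @ self.B))
```

Because B is zero-initialized, the adapted layer reproduces the frozen layer exactly at the start of training.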

Fine tuning with ConvLoRA [paper]

python3 train.py --gpu 0 --dataset 'ISBI2012' --modelname 'SAM_ConvLoRA'

Fine tuning with AdaptFormer [paper]

python3 train.py --gpu 0 --dataset 'ISBI2012' --modelname 'SAM_AdaptFormer'
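
AdaptFormer adds a lightweight bottleneck adapter (down-projection, nonlinearity, up-projection, scaled residual) in parallel to the frozen MLP block. A minimal NumPy sketch of that bottleneck (illustrative only; the shapes and scaling here are assumptions, not this repo's code):

```python
import numpy as np

class BottleneckAdapter:
    """AdaptFormer-style parallel adapter (sketch).

    Augments the frozen MLP output with s * relu(x @ down) @ up,
    where down/up form a low-dimensional bottleneck and s is a
    scaling factor.
    """
    def __init__(self, dim, bottleneck=8, scale=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.down = rng.normal(0, 0.02, (dim, bottleneck))
        self.up = np.zeros((bottleneck, dim))  # zero-init keeps the frozen path intact
        self.scale = scale

    def __call__(self, x, mlp_out):
        h = np.maximum(x @ self.down, 0.0)     # down-project + ReLU
        return mlp_out + self.scale * (h @ self.up)
```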

Fine tuning with SAMUS [paper]

python3 train.py --gpu 0 --dataset 'ISBI2012' --modelname 'SAMUS'

Fine tuning on MobileSAM [paper]

python3 train.py --gpu 0 --dataset 'ISBI2012' --modelname 'MobileSAM'

Fine tuning on MobileSAM with AdaptFormer

python3 train.py --gpu 0 --dataset 'ISBI2012' --modelname 'MobileSAM_AdaptFormer'

Testing

python3 test.py --gpu 0 --dataset 'ISBI2012' --out result_sam --modelname 'SAM'
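
For reference, segmentation quality is commonly summarized with the Dice coefficient averaged over classes. The helper below is an illustrative metric, not necessarily what test.py reports:

```python
import numpy as np

def dice_score(pred, target, num_classes):
    """Mean Dice coefficient over classes.

    pred, target: integer label maps of the same shape. Classes absent
    from both maps are skipped rather than counted as perfect.
    """
    scores = []
    for c in range(num_classes):
        p, t = (pred == c), (target == c)
        denom = p.sum() + t.sum()
        if denom == 0:
            continue  # class absent in both prediction and target
        scores.append(2.0 * (p & t).sum() / denom)
    return float(np.mean(scores))
```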