Fast_Segmentation

Semantic Segmentation Toys

pytorch-semseg

Personal Use

  • source activate oldtorch
  • python train.py --arch gcnnet --dataset cellcancer --n_epoch 10 --batch_size 2
  • source activate deepenv (PyTorch 0.4.0)
  • python train.py --arch bisenet3D --dataset brats17_loader --n_epoch 10 --batch_size 2 (currently not working because of the 3D inputs)
  • tensorboard --logdir runs

Semantic Segmentation Algorithms Implemented in PyTorch

This repository aims to mirror popular semantic segmentation architectures in PyTorch.

Networks implemented

  • Segnet - With Unpooling using Maxpool indices
  • FCN - All 1 (FCN32s), 2 (FCN16s) and 3 (FCN8s) stream variants
  • U-Net - With optional deconvolution and batchnorm
  • Link-Net
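
As a quick sanity check on any of the networks above, the sketch below builds one model and runs a single forward pass on a dummy batch. It assumes this fork keeps upstream pytorch-semseg's ptsemseg.models.get_model helper and a PyTorch >= 0.4 environment (the deepenv setup above); if the helper differs here, instantiate the model class directly instead.

import torch
from ptsemseg.models import get_model  # assumed upstream-style helper

# Build a 21-class SegNet (e.g. for Pascal VOC)
model = get_model('segnet', n_classes=21)
model.eval()

# One dummy RGB image, 256x256
dummy = torch.randn(1, 3, 256, 256)
with torch.no_grad():
    out = model(dummy)

print(out.shape)  # expected: torch.Size([1, 21, 256, 256])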

Upcoming

DataLoaders implemented

Upcoming

Requirements

  • pytorch >=0.1.12
  • torchvision ==0.1.7
  • visdom >=1.0.1 (for loss and results visualization)
  • scipy
  • tqdm

One-line installation

pip install -r requirements.txt

Data

  • Download data for the desired dataset(s) from the list of URLs here.
  • Extract the zip / tar and modify the path appropriately in config.json (see the sketch below).
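
At run time the data loaders look up the dataset root from config.json. The helper below is a minimal sketch of that lookup, assuming config.json maps each dataset name to a data_path entry (as in upstream pytorch-semseg); adjust the keys if this fork's config layout differs.

import json

def get_data_path(name, config_file="config.json"):
    # Assumed layout: { "camvid": { "data_path": "/path/to/CamVid" }, ... }
    with open(config_file) as f:
        config = json.load(f)
    return config[name]["data_path"]

print(get_data_path("camvid"))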

Usage

Launch visdom by running the following (in a separate terminal window):

python -m visdom.server

To train the model:

python train.py [-h] [--arch [ARCH]] [--dataset [DATASET]]
                [--img_rows [IMG_ROWS]] [--img_cols [IMG_COLS]]
                [--n_epoch [N_EPOCH]] [--batch_size [BATCH_SIZE]]
                [--l_rate [L_RATE]] [--feature_scale [FEATURE_SCALE]]

  --arch           Architecture to use ['fcn8s, unet, segnet etc']
  --dataset        Dataset to use ['pascal, camvid, ade20k etc']
  --img_rows       Height of the input image
  --img_cols       Width of the input image
  --n_epoch        # of the epochs
  --batch_size     Batch Size
  --l_rate         Learning Rate
  --feature_scale  Divider for # of features to use
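
For example, a typical run with SegNet on Pascal VOC might look like the following (dataset choice and hyperparameters are illustrative):

python train.py --arch segnet --dataset pascal --n_epoch 100 --batch_size 4 --l_rate 1e-5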

To validate the model:

python validate.py [-h] [--model_path [MODEL_PATH]] [--dataset [DATASET]]
                   [--img_rows [IMG_ROWS]] [--img_cols [IMG_COLS]]
                   [--batch_size [BATCH_SIZE]] [--split [SPLIT]]

  --model_path   Path to the saved model
  --dataset      Dataset to use ['pascal, camvid, ade20k etc']
  --img_rows     Height of the input image
  --img_cols     Width of the input image
  --batch_size   Batch Size
  --split        Split of dataset to validate on
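
For example, to evaluate a trained checkpoint on the validation split (the model path is a placeholder):

python validate.py --model_path path/to/saved_model.pkl --dataset pascal --batch_size 4 --split val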

To test the model w.r.t. a dataset on custom image(s):

python test.py [-h] [--model_path [MODEL_PATH]] [--dataset [DATASET]]
               [--img_path [IMG_PATH]] [--out_path [OUT_PATH]]
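
For example, to segment a custom image with a trained Pascal VOC model (all paths are placeholders):

python test.py --model_path path/to/saved_model.pkl --dataset pascal --img_path path/to/input.jpg --out_path path/to/prediction.png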

Contributors