Pixel-wise segmentation on the VOC2012 dataset using PyTorch.
For a more complete implementation of segmentation networks, check out semseg.
Note:
- FCN differs from the original implementation; see this issue
- SegNet does not match the performance reported in the original paper; see here
- PSPNet is missing "atrous" (dilated) convolution: the conv layers of ResNet101 should be amended to preserve the image size (a sketch of this change follows below)

Keeping this in mind, feel free to open a PR. Thank you!
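For the PSPNet note, one way to add the missing dilation is to replace the strides of the last two ResNet101 stages with dilated convolutions so the backbone keeps 1/8 of the input resolution instead of 1/32. A minimal sketch, assuming a recent torchvision (this is not code from this repo):

```python
import torchvision

# Dilate layer3 and layer4 instead of striding them, so the feature map keeps
# 1/8 of the input resolution ("atrous convolution" as used by PSPNet).
backbone = torchvision.models.resnet101(
    replace_stride_with_dilation=[False, True, True]
)
```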
See dataset examples here.
Download the image archive, extract it, and run:
mkdir data
mv VOCdevkit/VOC2012/JPEGImages data/images
mv VOCdevkit/VOC2012/SegmentationClass data/classes
rm -rf VOCdevkit
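The layout above (`data/images` for the JPEG inputs, `data/classes` for the PNG class maps) is what the training script reads. Purely to illustrate that pairing (the repo ships its own dataset class, so the name and details below are assumptions), a minimal `Dataset` could look like this:

```python
import os

from PIL import Image
from torch.utils.data import Dataset


class VOC12(Dataset):
    """Pairs data/images/<name>.jpg with data/classes/<name>.png."""

    def __init__(self, root, transform=None, target_transform=None):
        self.image_dir = os.path.join(root, 'images')
        self.class_dir = os.path.join(root, 'classes')
        # only keep images that actually have a segmentation mask
        self.names = sorted(
            os.path.splitext(f)[0]
            for f in os.listdir(self.class_dir)
            if f.endswith('.png')
        )
        self.transform = transform
        self.target_transform = target_transform

    def __len__(self):
        return len(self.names)

    def __getitem__(self, index):
        name = self.names[index]
        image = Image.open(os.path.join(self.image_dir, name + '.jpg')).convert('RGB')
        target = Image.open(os.path.join(self.class_dir, name + '.png'))  # palette PNG, pixel = class id
        if self.transform is not None:
            image = self.transform(image)
        if self.target_transform is not None:
            target = self.target_transform(target)
        return image, target
```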
We recommend using pyenv:
pyenv virtualenv 3.6.0 piwise
pyenv activate piwise
Then install the requirements with `pip install -r requirements.txt`.
For the latest documentation, use:
python main.py --help
Supported model parameters are `fcn8`, `fcn16`, `fcn32`, `unet`, `segnet1`, `segnet2`, and `pspnet`.
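All of these models share the same interface: they take a batch of images and return per-pixel class scores at the input resolution. As a toy illustration of the FCN-32s idea behind `fcn32` (a sketch, not this repo's implementation):

```python
import torch.nn as nn
import torch.nn.functional as F
import torchvision


class TinyFCN32(nn.Module):
    """Toy FCN-32s: VGG16 features -> 1x1 class scores -> upsample to input size."""

    def __init__(self, num_classes=21):  # 20 VOC object classes + background
        super().__init__()
        self.features = torchvision.models.vgg16().features  # downsamples by 32
        self.classify = nn.Conv2d(512, num_classes, kernel_size=1)

    def forward(self, x):
        size = x.shape[2:]
        x = self.features(x)   # [N, 512, H/32, W/32]
        x = self.classify(x)   # [N, num_classes, H/32, W/32]
        return F.interpolate(x, size=size, mode='bilinear', align_corners=False)
```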
If you want visualization, open an extra terminal tab and run:
python -m visdom.server -port 5000
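The training script pushes its plots and images to this server. A quick way to check that the dashboard is reachable (illustrative only, not part of the repo):

```python
import torch
import visdom

# port must match the -port argument of visdom.server above
vis = visdom.Visdom(port=5000)

# send a random 3x256x256 image just to verify the dashboard updates
vis.image(torch.rand(3, 256, 256), opts=dict(title='connection test'))
```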
Train the SegNet model for 30 epochs with CUDA support, visualization every 50 steps, and checkpoints every 100 steps:
python main.py --cuda --model segnet2 train --datadir data \
--num-epochs 30 --num-workers 4 --batch-size 4 \
--steps-plot 50 --steps-save 100
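Under the hood, pixel-wise training is just cross-entropy over every pixel: the network outputs `[N, C, H, W]` class scores and the target is an `[N, H, W]` map of class indices. A minimal sketch of such a loop (names and details are placeholders, not the repo's code):

```python
import torch.nn as nn


def train_epoch(model, loader, optimizer, device):
    # loader yields (images [N, 3, H, W], targets [N, H, W] of class indices)
    criterion = nn.CrossEntropyLoss(ignore_index=255)  # 255 = VOC "void" pixels
    model.train()
    for images, targets in loader:
        images = images.to(device)
        targets = targets.to(device).long()

        optimizer.zero_grad()
        outputs = model(images)             # [N, num_classes, H, W]
        loss = criterion(outputs, targets)  # averaged over all labelled pixels
        loss.backward()
        optimizer.step()
```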
Then we can run semantic segmentation on `foo.jpg`:
python main.py --model segnet2 --state segnet2-30-0 eval foo.jpg foo.png
The segmented class image can now be found at `foo.png`.
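The eval step boils down to loading the checkpoint, running a forward pass, and taking the per-pixel argmax over the class dimension. A rough sketch of that flow (paths, the model object, and the output handling are assumptions, not the repo's exact code):

```python
import torch
from PIL import Image
from torchvision import transforms


def segment(model, state_path, image_path, output_path, device='cpu'):
    model.to(device)
    model.load_state_dict(torch.load(state_path, map_location=device))
    model.eval()

    image = Image.open(image_path).convert('RGB')
    x = transforms.ToTensor()(image).unsqueeze(0).to(device)  # [1, 3, H, W]

    with torch.no_grad():
        scores = model(x)                                      # [1, C, H, W]
    classes = scores.argmax(dim=1).squeeze(0).byte().cpu()     # [H, W] class ids

    # save the raw class-index map; apply the VOC palette for nicer colours
    Image.fromarray(classes.numpy(), mode='L').save(output_path)
```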
These are some results based on SegNet after 40 epochs. Set `loss_weights[0] = 1 / 1` to deal gracefully with the class-imbalance problem (a sketch of this weighting follows the table below).
Input | Output | Ground Truth |
---|---|---|
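The `loss_weights` mentioned above are per-class weights passed to the loss function, so the very frequent background class does not dominate training. A minimal sketch of the idea (the exact values and the loss used by the repo may differ):

```python
import torch
import torch.nn as nn

NUM_CLASSES = 21                 # 20 VOC object classes + background (class 0)

loss_weights = torch.ones(NUM_CLASSES)
loss_weights[0] = 1 / 1          # background weight, as noted in the results above

criterion = nn.CrossEntropyLoss(weight=loss_weights)
```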