A TensorFlow Implementation of:
[CVPR 2015] Long et al. Fully Convolutional Networks for Semantic Segmentation
NOTE: In this repository, we only implement the VGG16 version.
Requirements:
- TensorFlow r0.10 (r0.11 should also work; compatibility with later versions is untested)
- OpenCV 2 and its Python bindings
- ipdb: the IPython-enabled debugger
- (Optional) pathos; check the other branch for further details
In this implementation, we use the VOC2011 dataset. Set it up as follows:
- Run `mkdir data` to create the dataset directory.
- Download the train/val dataset and Development Kit tar files and put them under the `data` folder. Unzip the Development Kit tar file, then unzip the train/val tar file and rename the resulting folder `VOC2011`.
- It should have this basic structure (under the `data` directory):
$ VOCdevkit/ # development kit
$ VOCdevkit/VOCcode # VOC utility code
$ VOCdevkit/VOC2011 # image sets, annotations, etc.
# ... and several other directories ...
You may also download the test set if you want to evaluate your prediction results on this dataset.
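As a quick sanity check (an illustrative snippet, not part of the repository), you can verify that the directories above ended up in the expected place:

import os

# Illustrative sanity check: the directories listed above should exist
# under data/ after the setup steps.
for path in ['data/VOCdevkit/VOCcode', 'data/VOCdevkit/VOC2011']:
    if not os.path.isdir(path):
        raise IOError('missing directory: %s' % path)
print('VOC2011 dataset layout looks good')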
Run `mkdir model` to create the model directory. We use an ImageNet pre-trained model to initialize the network; please download the npy file here and put it under the `model` folder.
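If you want to peek inside the weight file, a minimal sketch with NumPy looks like this (the file name below is a placeholder for whatever you downloaded, and we assume the usual VGG16 npy layout of a pickled dict mapping layer names to parameter arrays):

import numpy as np

# 'model/VGG_imagenet.npy' is a placeholder name; use the file you downloaded.
# On newer NumPy you may need np.load(path, allow_pickle=True).
params = np.load('model/VGG_imagenet.npy').item()  # dict: layer name -> arrays
for name in sorted(params):
    print(name, [p.shape for p in params[name]])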
Since input images have different sizes, we use two different strategies to batch them: 1) padding to a large size, or 2) resizing to a small size (256, 256).
cd src
python train.py # padding
python train_small.py # resize
You can choose either one to run, and you can also change the `config` dictionary to use custom settings.
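A minimal sketch of the two strategies (the sizes and function names here are illustrative, not the exact code in train.py / train_small.py):

import cv2
import numpy as np

def pad_to(img, size=640):
    # Strategy 1: zero-pad the image into the top-left corner of a
    # fixed-size canvas so every example in a minibatch has one shape.
    canvas = np.zeros((size, size, 3), dtype=img.dtype)
    h, w = img.shape[:2]
    canvas[:h, :w] = img
    return canvas

def resize_to(img, size=256):
    # Strategy 2: resize to a fixed (size, size), ignoring aspect ratio.
    return cv2.resize(img, (size, size))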
cd src
python demo.py
You can change the `config` dictionary to use custom settings.
First, run the following:
cd src
python test.py
You might want to change which model is used; check the code for further details.
After that, you should find the following structure in the `result` folder:
$ FCN8_adam_iter_10000/ # folder name depends on the model you used
$ FCN8_adam_iter_10000/gray/ # gray-scale segmentation result
$ FCN8_adam_iter_10000/rgb/ # rgb segmentation result
# ... and maybe several other directories ...
Then you can use the evaluation code provided with VOC2011 (see the next section for details). If you want to evaluate your model on the test split, you may submit your prediction results to their evaluation server.
- `cd misc`
- Run `save_colorful_grayscale(in_directory, out_directory)` to convert the results (our generated results are in grayscale PNG format, but the eval code expects indexed PNG format; see the sketch below).
- Run `report.m`.
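For reference, a rough Python sketch of the grayscale-to-indexed conversion (the real implementation is the MATLAB function above; this version uses the standard VOC colormap and is only meant to illustrate the idea):

import numpy as np
from PIL import Image

def voc_palette(n=256):
    # Standard VOC colormap: the bits of each class id are spread across
    # the high bits of R, G and B (bit-reversal scheme).
    palette = []
    for i in range(n):
        r = g = b = 0
        cid = i
        for j in range(8):
            r |= ((cid >> 0) & 1) << (7 - j)
            g |= ((cid >> 1) & 1) << (7 - j)
            b |= ((cid >> 2) & 1) << (7 - j)
            cid >>= 3
        palette.extend([r, g, b])
    return palette

def gray_to_indexed(in_path, out_path):
    # Reinterpret a grayscale label image (pixel value = class id) as a
    # palette ('P' mode) PNG, which is what the VOC eval code expects.
    labels = np.array(Image.open(in_path)).astype(np.uint8)
    out = Image.fromarray(labels, mode='P')
    out.putpalette(voc_palette())
    out.save(out_path)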
Note:
- Make sure `VOCinit.m` is at `/tf_fcn-master/data/VOCdevkit/VOCcode/`.
- Make sure the segmentation results are stored in `/tf_fcn-master/data/VOCdevkit/results/VOC2011/Segmentation/%s_val_cls/`, where the folder is named `%s_val_cls`.
- Make sure the second input of `Evaluation(VOCopts, ~)` is the `%s` string above.
Padding to (640, 640):
Padding to (500, 500):
- FCN32_adam_35000: ckpt (you can extract an npy file from the checkpoint with the `extract` method defined in `Model.py`; a generic sketch is shown below)
- FCN8_adam_30000: ckpt

Note: When you train one of the skip versions (FCN16 or FCN8), you will need the FCN32 model npy file as initialization, instead of the ImageNet pre-trained model npy file.
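If you would rather not go through Model.py, a generic sketch of dumping a checkpoint to an npy dict looks like this (assuming a TensorFlow version that provides tf.train.NewCheckpointReader; the file names are illustrative):

import numpy as np
import tensorflow as tf

def ckpt_to_npy(ckpt_path, out_path):
    # Read every variable stored in the checkpoint and save them all as
    # one npy dict, similar in spirit to the extract method in Model.py.
    reader = tf.train.NewCheckpointReader(ckpt_path)
    names = reader.get_variable_to_shape_map().keys()
    params = dict((name, reader.get_tensor(name)) for name in names)
    np.save(out_path, params)

ckpt_to_npy('FCN32_adam_35000.ckpt', 'FCN32_adam_35000.npy')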