This repository holds the experiments related to my M.Tech thesis. The project aims to address the road extraction problem in an end-to-end fashion. Our dataset is a set of aerial images captured by UAVs (drones) over local areas within the NIT Rourkela campus. Throughout the experiments, we benchmark different state-of-the-art models and take advantage of their techniques to tackle our problem. The main objectives of this project are to build an effective CNN model that can distinguish roads from occlusions and background and that generalizes to later extensions, as well as to build our own dataset.
Please note: development is ongoing and details will gradually be provided below.
A few important dependencies need to be installed before making any change to the pre-processing or experimenting with the models.
Note that the code has been tested with python==3.6.7.

```bash
pip install -r requirements.txt
```
A few parameters (environment variables) have to be set according to your needs.
- In `train.sh` and `predict.sh`:

```bash
export PRE_TRAINED=True                          # set to False if not loading the pre-trained weights
export WEIGHTS_IN_PATH=path/to/the/weights.h5    # full path of the weights file
```
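A minimal sketch of how these variables might be consumed inside the training/prediction code, assuming they are read via `os.environ`; `build_model()` is a hypothetical placeholder for whichever architecture is selected:

```python
import os

# Read the variables exported by train.sh / predict.sh (names as defined there)
pre_trained = os.environ.get("PRE_TRAINED", "False") == "True"
weights_in_path = os.environ.get("WEIGHTS_IN_PATH", "")

model = build_model()  # hypothetical: returns the selected Keras model
if pre_trained and weights_in_path:
    # Restore previously saved weights from the given .h5 file
    model.load_weights(weights_in_path)
```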
Create a new conda environment:

```bash
conda create --name ENV_NAME python=3.6.7
source activate ENV_NAME
```
`/input` contains the images/frames and their respective masks for each set (training/validation/testing). Within `mask`, each ground-truth sample is a tensor of dimension (height x width), where each pixel's value encodes the spatial distribution of the respective label; see the loading sketch after the directory tree below.
```
/input
├── testing        # Testing data
│   ├── images
│   └── mask
├── training       # Training data
│   ├── images
│   └── mask
└── validation     # Validation data
    ├── images
    └── mask
```
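For illustration, a minimal sketch of reading one frame and its ground-truth mask from this layout; the file name and the use of OpenCV are assumptions, not part of the repository:

```python
import os
import cv2  # any image library would do; OpenCV is assumed here

def load_pair(split_dir, name):
    """Load one frame and its (height x width) label mask from an /input split."""
    image = cv2.imread(os.path.join(split_dir, "images", name))   # (H, W, 3) frame
    mask = cv2.imread(os.path.join(split_dir, "mask", name),
                      cv2.IMREAD_GRAYSCALE)                        # (H, W) label per pixel
    return image, mask

# Example: one training sample (file name is hypothetical)
image, mask = load_pair("input/training", "frame_0001.png")
```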
`/output` contains the predicted masks inferred from `input/prediction`, along with their respective frames, within the respective subfolders, named the same way to avoid confusion. The `mask` and `prediction` sets are (3D) tensors of dimension (height x width x 3), colored with respect to the spatial distribution of each class/label; a colorization sketch follows the directory tree below. An example can be found here.
```
/output
└── prediction     # Prediction data
    ├── images
    ├── mask
    └── prediction
```
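A minimal sketch of how a (height x width) label mask could be turned into such a colored (height x width x 3) tensor; the two-class palette below (background, road) is an assumption for illustration:

```python
import numpy as np

# Assumed palette: class index -> RGB color (illustrative only)
PALETTE = {
    0: (0, 0, 0),        # background -> black
    1: (255, 255, 255),  # road -> white
}

def colorize(mask):
    """Map a (H, W) mask of class indices to a (H, W, 3) colored mask."""
    colored = np.zeros((*mask.shape, 3), dtype=np.uint8)
    for label, color in PALETTE.items():
        colored[mask == label] = color
    return colored
```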
`/models` is the place to store and load the model weights.
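For example, a Keras `ModelCheckpoint` callback could be pointed at this folder during training; the import path (`tensorflow.keras`) and the file name are assumptions:

```python
from tensorflow.keras.callbacks import ModelCheckpoint

# Save the best weights seen so far under /models (file name is hypothetical)
checkpoint = ModelCheckpoint("models/unet_best.h5",
                             save_best_only=True,
                             save_weights_only=True)
# Pass it to model.fit(..., callbacks=[checkpoint]) during training.
```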
- For training and prediction, refer to `train.sh` and `predict.sh` respectively to define the necessary environment variables.
```bash
# How to train?
sh train.sh

# How to predict?
sh predict.sh
```
Timeline | Comments | Source | Reference |
---|---|---|---|
18-12-2019 | U-Net model added | source | paper |
22-02-2020 | BCDU-Net model added | source | paper |
29-02-2020 | FC-DenseNet model added | source | paper |
08-03-2020 | DeepLab-v3+ model added | source | paper |
08-03-2020 | FCN model added | source | paper |
18-04-2020 | SegNet model added | source | paper |
18-04-2020 | Dense UNet model added | source | proposed model |
21-05-2020 | Depth-wise Separable UNet model added | source | proposed model |
21-05-2020 | Depth-wise Separable Dense UNet model added | source | proposed model |