This work proposes an approach for learning semantic segmentation from event-based information alone (event-based cameras).
For more details, see the paper.
[This repository contains the core implementation and the data from the paper. It will be updated with more detail over time.]
- Python 2.7+
- TensorFlow 1.11
- OpenCV
- Keras
- imgaug
- scikit-learn
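As a reference, a minimal environment setup might look like the line below (the exact pip package names and the pinned version are assumptions, adjust for your environment):

pip install tensorflow-gpu==1.11.0 opencv-python keras imgaug scikit-learn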
If you find EV-SegNet useful in your research, please consider citing:
@inproceedings{alonso2019EvSegNet,
  title={EV-SegNet: Semantic Segmentation for Event-based Cameras},
  author={Alonso, I{\~n}igo and Murillo, Ana C},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)},
  year={2019}
}
Our dataset is a subset of DDD17: the DAVIS Driving Dataset. The original dataset does not provide any semantic segmentation labels; we provide them, along with some modifications of the event images.
The semantic segmentation classes of the data are: flat (0), construction+sky (1), object (2), nature (3), human (4), vehicle (5), and ignore_labels (255).
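For reference, here is a minimal sketch of how the ignore label is typically handled when computing metrics (the file name and the placeholder prediction below are assumptions for illustration, not code from this repository):

```python
import numpy as np
import cv2

# Load a ground-truth label image (single-channel PNG holding the class IDs above).
labels = cv2.imread('label.png', cv2.IMREAD_GRAYSCALE)

# Pixels marked 255 carry no annotation and should be excluded from metrics.
valid = labels != 255

# Example: per-pixel accuracy of some prediction array of the same shape.
predictions = np.zeros_like(labels)  # placeholder prediction
accuracy = np.mean(predictions[valid] == labels[valid])
print('Pixel accuracy over annotated pixels: %.3f' % accuracy)
```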
To evaluate the pre-trained model, just execute:
python train_eager.py --epochs 0
To train the model, execute:
python train_eager.py --epochs 500 --dataset path_to_dataset --model_path path_to_model --batch_size 8
where [path_to_dataset] is the path to the downloaded (uncompressed) dataset and [path_to_model] is the path where the weights will be saved.
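For example, a run on an uncompressed copy of the dataset might look like this (both paths below are placeholders):

python train_eager.py --epochs 500 --dataset ./DDD17_events --model_path ./weights/ev-segnet --batch_size 8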
First, download this folder and copy it into the weights folder of this repository (so that you have a weights/cityscapes_grayscale folder).
Then run this script, specifying the grayscale image to obtain the labels from:
python get_segmentation.py --image_path ./image.png --model_path weights/cityscapes_grayscale
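If you want to inspect the result visually, a quick colorization sketch like the one below can help (the palette, the file names, and the assumption that the script outputs a single-channel image of class IDs are all illustrative choices, not guarantees about this repository):

```python
import numpy as np
import cv2

# Assumed palette: one BGR color per class ID (0-5); all other IDs stay black.
palette = np.zeros((256, 3), dtype=np.uint8)
palette[:6] = [(128, 64, 128),   # flat
               (70, 70, 70),     # construction+sky
               (153, 153, 153),  # object
               (35, 142, 107),   # nature
               (60, 20, 220),    # human
               (142, 0, 0)]      # vehicle

# Map each class ID to its color and save the colorized segmentation.
labels = cv2.imread('segmentation.png', cv2.IMREAD_GRAYSCALE)
cv2.imwrite('segmentation_color.png', palette[labels])
```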