DOTIE

Implementation of DOTIE - Detecting Objects through Temporal Isolation of Events using a Spiking Architecture

Primary language: Python. License: Creative Commons Zero v1.0 Universal (CC0-1.0).

This repository contains the source code associated with "DOTIE - Detecting Objects through Temporal Isolation of Events using a Spiking Architecture", ICRA 2023. The code was most recently tested with Python 3.7 and PyTorch 1.1.0.

Introduction

Vision-based autonomous navigation systems rely on fast and accurate object detection algorithms to avoid obstacles. Algorithms and sensors designed for such systems need to be computationally efficient, due to the limited energy of the hardware used for deployment. Biologically inspired event cameras are a good candidate as a vision sensor for such systems due to their speed, energy efficiency, and robustness to varying lighting conditions. However, traditional computer vision algorithms fail to work on event-based outputs, as they lack photometric features such as light intensity and texture. In this work, we propose a novel technique that utilizes the temporal information inherently present in the events to efficiently detect moving objects. Our technique consists of a lightweight spiking neural architecture that is able to separate events based on the speed of the corresponding objects. These separated events are then further grouped spatially to determine object boundaries. This method of object detection is both asynchronous and robust to camera noise. In addition, it shows good performance in scenarios with events generated by static objects in the background, where existing event-based algorithms fail. We show that by utilizing our architecture, autonomous navigation systems can have minimal latency and energy overheads for performing object detection.
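As a rough illustration of the speed-separation idea described above, the sketch below uses one leaky integrate-and-fire accumulator per pixel: dense event bursts from fast objects push the membrane potential over threshold, while sparse events from slow or static scenery leak away. This is a simplified stand-in for the paper's spiking architecture, and the `leak` and `threshold` values are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

def lif_speed_filter(events, height, width, leak=0.8, threshold=3.0):
    """Keep only events likely caused by fast-moving objects.

    Fast objects trigger dense bursts of events at nearby pixels, so a
    leaky per-pixel accumulator crosses threshold only for them.
    `leak` and `threshold` are illustrative, not the paper's values.

    events: array of (t, x, y) rows sorted by timestamp t.
    Returns the subset of events whose pixel's neuron spiked.
    """
    potential = np.zeros((height, width))
    last_t = 0.0
    kept = []
    for t, x, y in events:
        # Exponentially decay all potentials by the elapsed time.
        potential *= leak ** (t - last_t)
        last_t = t
        potential[int(y), int(x)] += 1.0
        if potential[int(y), int(x)] >= threshold:
            # Neuron spikes: attribute this event to a fast object.
            kept.append((t, x, y))
            potential[int(y), int(x)] = 0.0  # reset after spiking
    return np.array(kept)
```

With these parameters, a pixel receiving events every 0.1 s spikes after a few events, while one receiving events every 5 s never accumulates enough potential, which is the essence of separating events by object speed.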

Installation

Clone this repository using: git clone https://github.com/manishnagaraj/DOTIE.git

Create a conda environment using the environment.yml file: conda env create -f environment.yml

Activate the conda environment: conda activate DOTIE

Download the YOLO model weights and config folders from here into the models folder

Dataset

The experiments shown in the paper use the MVSEC dataset outdoor_day2 sequence (found here).

Download the outdoor_day2_data.hdf5 into the datasets folder.

Convert the HDF5 file into the preferred encoding by running python MVSEC_encoding.py

To test the algorithm on a small portion of the encoded dataset, you can create and save a smaller version as a second step by running python Quickloading.py
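The encoding step can be pictured as binning the raw event stream into frames. The sketch below builds per-polarity event-count frames over fixed time bins; this is one common event encoding, and the actual encoding produced by MVSEC_encoding.py may differ, so treat the layout and function name as assumptions for illustration.

```python
import numpy as np

def events_to_frames(ts, xs, ys, ps, height, width, n_bins):
    """Bin an event stream into n_bins per-polarity count frames.

    A common event encoding (event-count images per time bin); the
    encoding used by MVSEC_encoding.py may differ in detail.
    Returns an array of shape (n_bins, 2, height, width), where the
    second axis indexes negative/positive polarity.
    """
    frames = np.zeros((n_bins, 2, height, width))
    t0, t1 = ts.min(), ts.max()
    # Assign each event to a time bin (clamp the last event into the final bin).
    bins = np.minimum(((ts - t0) / (t1 - t0 + 1e-9) * n_bins).astype(int),
                      n_bins - 1)
    for b, x, y, p in zip(bins, xs, ys, ps):
        frames[b, int(p > 0), int(y), int(x)] += 1
    return frames
```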

Note: For YOLO, we used the minimalistic implementation described at https://github.com/eriklindernoren/PyTorch-YOLOv3

Usage

  1. To run the code that demonstrates the spiking architecture, run python speed_separating_spiking_arch.py
  2. To run the code that demonstrates the entire DOTIE framework (spiking architecture + clustering), run python DOTIE_complete_framework.py
  3. To run the comparisons of DOTIE with existing works, run python Comparisons.py
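After the spiking stage, the remaining events are grouped spatially to form object boundaries. The sketch below uses a simple flood-fill, single-linkage grouping with a Chebyshev radius; it is a stand-in for the clustering actually performed in DOTIE_complete_framework.py, and `radius`/`min_events` are made-up illustrative parameters.

```python
import numpy as np

def cluster_events(xs, ys, radius=3, min_events=5):
    """Group filtered events into spatial clusters; return bounding boxes.

    Greedy flood-fill grouping: two events belong to the same cluster if
    they are within `radius` pixels (Chebyshev distance) of each other,
    directly or through a chain of events. Clusters with fewer than
    `min_events` events are discarded as noise.
    """
    pts = np.stack([xs, ys], axis=1).astype(float)
    labels = -np.ones(len(pts), dtype=int)
    cluster = 0
    for i in range(len(pts)):
        if labels[i] != -1:
            continue
        labels[i] = cluster
        frontier = [i]
        while frontier:  # flood-fill over points within `radius`
            j = frontier.pop()
            near = np.where((labels == -1) &
                            (np.abs(pts - pts[j]).max(axis=1) <= radius))[0]
            labels[near] = cluster
            frontier.extend(near.tolist())
        cluster += 1
    boxes = []
    for c in range(cluster):
        members = pts[labels == c]
        if len(members) >= min_events:  # drop sparse noise clusters
            boxes.append((members[:, 0].min(), members[:, 1].min(),
                          members[:, 0].max(), members[:, 1].max()))
    return boxes
```

Each returned box is (x_min, y_min, x_max, y_max), i.e. an object boundary around one group of fast-object events.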

This code is not limited to MVSEC: users are encouraged to experiment with additional datasets and fine-tune the hyperparameters accordingly.

Citations

If you find this code useful in your research, please consider citing our main paper: Nagaraj, Manish, Chamika Mihiranga Liyanagedera, and Kaushik Roy. "DOTIE-Detecting Objects through Temporal Isolation of Events using a Spiking Architecture." 2023 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2023.

@inproceedings{nagaraj2023dotie,
  title={DOTIE-Detecting Objects through Temporal Isolation of Events using a Spiking Architecture},
  author={Nagaraj, Manish and Liyanagedera, Chamika Mihiranga and Roy, Kaushik},
  booktitle={2023 IEEE International Conference on Robotics and Automation (ICRA)},
  pages={4858--4864},
  year={2023},
  organization={IEEE}
}

Our demonstration at the CVPR 2023 Workshop on Event-based Vision is also available: Roy, Arjun, Manish Nagaraj, Chamika Mihiranga Liyanagedera, and Kaushik Roy. "Live Demonstration: Real-time Event-based Speed Detection using Spiking Neural Networks."

@inproceedings{roy2023live,
  title={Live Demonstration: Real-time Event-based Speed Detection using Spiking Neural Networks},
  author={Roy, Arjun and Nagaraj, Manish and Liyanagedera, Chamika Mihiranga and Roy, Kaushik},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={4080--4081},
  year={2023}
}

Authors

Manish Nagaraj, Arjun Roy, Chamika Mihiranga Liyanagedera, Kaushik Roy

All authors are with Purdue University, West Lafayette, IN, USA

Acknowledgement

This work was supported in part by the Center for Brain-inspired Computing (C-BRIC), a DARPA-sponsored JUMP center, the Semiconductor Research Corporation (SRC), the National Science Foundation, the DoD Vannevar Bush Fellowship, and IARPA MicroE4AI.

Parts of this code were derived from chan8972/Spike-FlowNet and mondalanindya/ICCVW2021_GSCEventMOD