SNN_CV_Applications_Resources

Paper list for SNN or event camera based computer vision tasks.

Related Resources:

  • Event-based Vision, Event Cameras, Event Camera SLAM [ETH page]

  • The Event-Camera Dataset and Simulator: Event-based Data for Pose Estimation, Visual Odometry, and SLAM [ETH page]

  • Event-based Vision Resources [Github]

  • DVS Benchmark Datasets for Object Tracking, Action Recognition, and Object Recognition [Project] [Paper]

Survey && Reviews:

  • A Survey of the Research Progress and Applications of Neuromorphic Vision Sensors (神经形态视觉传感器的研究进展及应用综述), Chinese Journal of Computers (计算机学报), Jianing Li, Yonghong Tian [Paper]

  • Spiking Neural Networks and Online Learning: An Overview and Perspectives, Neural Networks 121 (2020): 88-100. Jesus L. Lobo, Javier Del Ser, Albert Bifet, Nikola Kasabov [Paper]

  • Supervised learning in spiking neural networks: A review of algorithms and evaluations. Neural Networks (2020). Wang, Xiangwen, Xianghong Lin, and Xiaochao Dang. [Paper]

Datasets:

Deep Feature Learning for Event Cameras:

  • Gehrig, Daniel, et al. "End-to-end learning of representations for asynchronous event-based data." Proceedings of the IEEE International Conference on Computer Vision. 2019. [Paper] [Code]

Tools && Packages:

  • SpikingJelly: an open-source deep learning framework for Spiking Neural Networks (SNNs) built on PyTorch (a minimal usage sketch appears after this list). [Document] [Github]

  • SNN-toolbox: [Document] [Github]

  • Norse: [Document] [Github] [Home]

  • V2E Simulator (from video frames to realistic DVS event camera streams; the contrast-threshold principle behind such simulators is sketched after this list): [Home] [Github] [Paper]

  • ESIM: an Open Event Camera Simulator [Github]

  • SLAYER PyTorch [Documents]

  • BindsNET also builds on PyTorch and is explicitly targeted at machine learning tasks. It implements a Network abstraction with the typical 'node' and 'connection' notions common in spiking neural network simulators like NEST.

  • cuSNN is a C++ GPU-accelerated simulator for large-scale networks. The library focuses on CUDA and includes spike-timing-dependent plasticity (STDP) learning rules.

  • decolle implements an online learning algorithm described in the paper "Synaptic Plasticity Dynamics for Deep Continuous Local Learning (DECOLLE)" by J. Kaiser, H. Mostafa and E. Neftci.

  • Long short-term memory Spiking Neural Networks (LSNN) is a tool from the University of Graz for modelling LSNN cells in TensorFlow. The library focuses on a single neuron and gradient model.

  • Nengo is a neuron simulator, and Nengo-DL is a deep learning network simulator that optimises spike-based neural networks based on an approximation method suggested by Hunsberger and Eliasmith (2016). This approach maps to, but does not build on, the deep learning framework TensorFlow, which is fundamentally different from incorporating the spiking constructs into the framework itself. In turn, this requires manual translations into each individual backend, which limits portability.

  • Neuron Simulation Toolkit (NEST) constructs and evaluates highly detailed simulations of spiking neural networks. This is useful in a medical/biological sense but maps poorly to large datasets and deep learning.

  • PyNN is a Python interface that allows you to define and simulate spiking neural network models on different backends (both software simulators and neuromorphic hardware). It does not currently provide mechanisms for optimisation or arbitrary synaptic plasticity.

  • PySNN is a PyTorch extension similar to Norse. Its approach to model building differs slightly from Norse's in that the neurons are stateful.

  • Rockpool is a Python package developed by SynSense for training, simulating and deploying spiking neural networks. It offers both JAX and PyTorch primitives.

  • SlayerPyTorch is a Spike LAYer Error Reassignment library that focuses on solutions to the temporal credit-assignment problem of spiking neurons and a probabilistic approach to backpropagating errors. It includes support for the Loihi chip.

  • SNN toolbox automates the conversion of pre-trained analog neural networks into spiking neural networks. The tool is solely for already-trained networks and omits the (possibly platform-specific) training.

  • SpyTorch presents a set of tutorials for training SNNs with the surrogate gradient approach SuperSpike by F. Zenke and S. Ganguli (2017). Norse implements SuperSpike, but allows for other surrogate gradients and training approaches.

  • s2net is based on the implementation presented in SpyTorch, but implements convolutional layers as well. It also contains a demonstration of how to use those primitives to train a model on the Google Speech Commands dataset.
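
A minimal usage sketch for the PyTorch-based frameworks above, written against SpikingJelly (see the SpikingJelly entry). It assumes a recent release where the modules live under spikingjelly.activation_based; older releases expose the same classes under spikingjelly.clock_driven. This is an illustrative sketch under those assumptions, not code from the SpikingJelly documentation.

```python
import torch
import torch.nn as nn
from spikingjelly.activation_based import neuron, functional  # older releases: spikingjelly.clock_driven

# Two fully connected layers followed by leaky integrate-and-fire (LIF) neurons.
# LIFNode applies a surrogate gradient on the backward pass, so the network
# can be trained with ordinary PyTorch optimizers.
net = nn.Sequential(
    nn.Linear(28 * 28, 100),
    neuron.LIFNode(tau=2.0),
    nn.Linear(100, 10),
    neuron.LIFNode(tau=2.0),
)

x = torch.rand(16, 28 * 28)     # a batch of inputs in [0, 1]
T = 8                           # number of simulation time steps
out_spikes = 0.0
for _ in range(T):
    # Rate-code the input by sampling Bernoulli spikes at each step.
    out_spikes = out_spikes + net((torch.rand_like(x) < x).float())

firing_rate = out_spikes / T    # per-class firing rates, usable as logits in a loss
functional.reset_net(net)       # clear membrane potentials before the next batch
```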
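The event camera simulators in the list (v2e, ESIM) rest on the same contrast-threshold model: a pixel emits an event whenever its log intensity has changed by more than a threshold C since the last event at that pixel. The toy NumPy sketch below illustrates only this principle; it is not the v2e or ESIM API, and real simulators additionally model noise, refractory periods, multiple events per large change, and temporal interpolation between frames.

```python
import numpy as np

def frames_to_events(frames, timestamps, C=0.2, eps=1e-3):
    """Toy DVS model: emit (x, y, t, polarity) when the per-pixel log intensity
    drifts by more than the contrast threshold C from its reference value.
    Simplification: at most one event per pixel per frame, and the reference is
    snapped to the current intensity (real simulators step it by +/- C)."""
    ref = np.log(frames[0].astype(np.float64) + eps)   # reference log intensity per pixel
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_i = np.log(frame.astype(np.float64) + eps)
        diff = log_i - ref
        for polarity, mask in ((+1, diff >= C), (-1, diff <= -C)):
            ys, xs = np.nonzero(mask)
            events.extend((int(x), int(y), float(t), polarity) for x, y in zip(xs, ys))
            ref[mask] = log_i[mask]                     # reset the reference where events fired
    return events
```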

Hardware:

Neuromorphic processors such as IBM TrueNorth [Paper] and Intel Loihi [Paper] run spiking neural networks directly in dedicated low-power hardware.

SNN papers:

  • Deep Spiking Neural Network: Energy Efficiency Through Time based Coding, Bing Han and Kaushik Roy, [ECCV-2020]

  • Inherent Adversarial Robustness of Deep Spiking Neural Networks: Effects of Discrete Input Encoding and Non-Linear Activations, Saima Sharmin, Nitin Rathi, Priyadarshini Panda, and Kaushik Roy [ECCV2020] [Code]

  • Spike-FlowNet: Event-based Optical Flow Estimation with Energy-Efficient Hybrid Neural Networks, Chankyu Lee, Adarsh Kumar Kosta, Alex Zihao Zhu, Kenneth Chaney, Kostas Daniilidis, and Kaushik Roy [ECCV2020] [Code]

  • Surrogate gradient learning in spiking neural networks. Neftci, Emre O., Hesham Mostafa, and Friedemann Zenke. IEEE Signal Processing Magazine 36 (2019): 61-63. [Paper] (a minimal PyTorch sketch of the surrogate-gradient idea follows this list)

  • Long short-term memory and learning-to-learn in networks of spiking neurons. Bellec, Guillaume, et al. Advances in Neural Information Processing Systems. 2018. [Paper] [Code]

  • Slayer: Spike layer error reassignment in time. Shrestha, Sumit Bam, and Garrick Orchard. Advances in Neural Information Processing Systems. 2018. [Paper] [Official Code] [PyTorch-version] [Video]

  • RMP-SNN: Residual Membrane Potential Neuron for Enabling Deeper High-Accuracy and Low-Latency Spiking Neural Network, [CVPR-2020]

  • Retina-Like Visual Image Reconstruction via Spiking Neural Model, Lin Zhu, Siwei Dong, Jianing Li, Tiejun Huang, Yonghong Tian [CVPR-2020]

  • Biologically inspired alternatives to backpropagation through time for learning in recurrent neural nets. Bellec, G., Scherr, F., Hajek, E., Salaj, D., Legenstein, R., & Maass, W. (2019). arXiv preprint arXiv:1901.09049. [Paper]

  • Unsupervised Learning of a Hierarchical Spiking Neural Network for Optical Flow Estimation: From Events to Global Motion Perception, T-PAMI, Paredes-Vallés, Federico, Kirk Yannick Willehm Scheper, and Guido Cornelis Henricus Eugene De Croon. [Paper]

  • Deep neural networks with weighted spikes. Kim, Jaehyun, et al. Neurocomputing 311 (2018): 373-386. [Paper]

  • Spiking deep residual network. Hu, Yangfan, et al. arXiv preprint arXiv:1805.01352 (2018). [Paper]

  • Towards artificial general intelligence with hybrid Tianjic chip architecture. Nature, 572(7767), 106-111. Pei, J., Deng, L., Song, S., Zhao, M., Zhang, Y., Wu, S., ... & Chen, F. (2019). [Paper]

  • Training Spiking Deep Networks for Neuromorphic Hardware, [Paper]

  • Direct Training for Spiking Neural Networks: Faster, Larger, Better, Wu, Yujie, et al. AAAI-2019. [Paper]
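
The surrogate gradient papers above (Neftci, Mostafa and Zenke; SuperSpike) share one trick: keep the hard spike on the forward pass but substitute a smooth pseudo-derivative on the backward pass so backpropagation has a usable gradient. Below is a minimal PyTorch sketch of that idea with an illustrative fast-sigmoid surrogate; the function lif_step and its parameters are chosen here for illustration and are not taken from any of the papers' code.

```python
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike on the forward pass, fast-sigmoid surrogate derivative on the backward pass."""
    scale = 10.0  # controls how sharply the surrogate is peaked around the threshold

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()                       # binary spikes, non-differentiable

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        surrogate = 1.0 / (SpikeFn.scale * v.abs() + 1.0) ** 2
        return grad_output * surrogate               # smooth pseudo-derivative

spike = SpikeFn.apply

def lif_step(x, v, w, beta=0.9, v_th=1.0):
    """One time step of a leaky integrate-and-fire layer (illustrative)."""
    v = beta * v + x @ w          # leaky membrane integration
    s = spike(v - v_th)           # spike where the membrane crosses the threshold
    v = v - s * v_th              # soft reset by subtraction
    return s, v
```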

Optical-Flow Estimation and Motion Segmentation:

  • Spike-FlowNet: Event-based Optical Flow Estimation with Energy-Efficient Hybrid Neural Networks, Lee, Chankyu and Kosta, Adarsh and Zhu, Alex Zihao and Chaney, Kenneth and Daniilidis, Kostas and Roy, Kaushik [ECCV-2020] [Code]

  • EV-FlowNet: Self-Supervised Optical Flow Estimation for Event-based Cameras. Zhu, Alex Zihao, et al. arXiv preprint arXiv:1802.06898 (2018). [Paper] [Code]

  • Stoffregen, Timo, et al. "Event-based motion segmentation by motion compensation." Proceedings of the IEEE International Conference on Computer Vision. 2019. [Paper]

  • Bisulco A, Ojeda F C, Isler V, et al. Fast Motion Understanding with Spatiotemporal Neural Networks and Dynamic Vision Sensors. arXiv preprint arXiv:2011.09427, 2020. [Paper]

Object Recognition:

  • TactileSGNet: A Spiking Graph Neural Network for Event-based Tactile Object Recognition, Fuqiang Gu, Weicong Sng, Tasbolat Taunyazov, and Harold Soh [Paper] [Code]

Object Detection:

  • "Spiking-yolo: Spiking neural network for real-time object detection." Kim, Seijoon, et al. AAAI-2020 [Paper]

  • "A large scale event-based detection dataset for automotive." de Tournemire, Pierre, et al. arXiv (2020): arXiv-2001. [Paper] [Dataset]

  • "Event-based Asynchronous Sparse Convolutional Networks." Messikommer, Nico, et al. arXiv preprint arXiv:2003.09148 (2020). [Paper] [Youtube] [Code]

  • Structure-Aware Network for Lane Marker Extraction with Dynamic Vision Sensor, Wensheng Cheng*, Hao Luo*, Wen Yang, Lei Yu, and Wei Li, CVPR-workshop [Paper] [Dataset]

Visual Tracking:

  • Jiang, Rui, et al. "Object tracking on event cameras with offline–online learning." CAAI Transactions on Intelligence Technology (2020) [Paper]

  • Retinal Slip Estimation and Object Tracking with an Active Event Camera [AICAS-2020]

  • Zhang, Y. (2019). Real‑time object tracking for event cameras. Master's thesis, Nanyang Technological University, Singapore. [Thesis]

  • Yang, Zheyu, et al. "DashNet: A hybrid artificial and spiking neural network for high-speed object tracking." arXiv preprint arXiv:1909.12942 (2019). [Paper]

  • End-to-end Learning of Object Motion Estimation from Retinal Events for Event-based Object Tracking, AAAI-2020 [Paper]

  • HASTE: multi-Hypothesis Asynchronous Speeded-up Tracking of Events, BMVC-2020 [Paper]

  • High-speed event camera tracking, BMVC-2020 [Paper]

  • A Hybrid Neuromorphic Object Tracking and Classification Framework for Real-time Systems, [Paper] [Code] [Demo]

  • Long-term object tracking with a moving event camera. Ramesh, Bharath, et al. BMVC 2018. [Paper]

  • e-TLD: Event-based Framework for Dynamic Object Tracking, [Paper] [Dataset]

  • Spiking neural network-based target tracking control for autonomous mobile robots. Cao, Zhiqiang, et al. Neural Computing and Applications 26.8 (2015): 1839-1847. [Paper]

  • Asynchronous Tracking-by-Detection on Adaptive Time Surfaces for Event-based Object Tracking, Chen, Haosheng, et al. Proceedings of the 27th ACM International Conference on Multimedia. 2019. [Paper]

  • High-Speed Object Tracking with Dynamic Vision Sensor. Wu, J., Zhang, K., Zhang, Y., Xie, X., & Shi, G. (2018, October). In China High Resolution Earth Observation Conference (pp. 164-174). Springer, Singapore. [Paper]

  • High-speed object tracking with its application in golf playing. Lyu, C., Liu, Y., Jiang, X., Li, P., & Chen, H. (2017). International Journal of Social Robotics, 9(3), 449-461. [Paper]

  • A Spiking Neural Network Architecture for Object Tracking. Luo, Yihao, et al. International Conference on Image and Graphics. Springer, Cham, 2019. [Paper]

  • SiamSNN: Spike-based Siamese Network for Energy-Efficient and Real-time Object Tracking, Yihao Luo, Min Xu, Caihong Yuan, Xiang Cao, Liangqi Zhang, Yan Xu, Tianjiang Wang and Qi Feng [Paper]

  • Event-guided structured output tracking of fast-moving objects using a CeleX sensor. Huang, Jing, et al. IEEE Transactions on Circuits and Systems for Video Technology 28.9 (2018): 2413-2417. [Paper]

  • EKLT: Asynchronous photometric feature tracking using events and frames. Gehrig, Daniel, et al. International Journal of Computer Vision 128.3 (2020): 601-618. [Paper] [Code] [Video]

  • Spatiotemporal Multiple Persons Tracking Using Dynamic Vision Sensor, Piątkowska, Ewa, et al. IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops. IEEE, 2012. [Paper]

  • Event-Driven Stereo Visual Tracking Algorithm to Solve Object Occlusion, IEEE TNNLS [Paper]

  • Ni, Zhenjiang, et al. "Asynchronous event‐based high speed vision for microparticle tracking." Journal of microscopy 245.3 (2012): 236-244. [Paper]

High-Quality Image Recovery:

  • Event Enhanced High-Quality Image Recovery, Bishan Wang, Jingwei He, Lei Yu, Gui-Song Xia, and Wen Yang [ECCV2020] [Code]

Binocular Vision:

  • U2Eyes: a binocular dataset for eye tracking and gaze estimation, ICCV-2019 Workshop [Paper]

  • Robust object tracking via multi-cue fusion. Hu, Mengjie, et al. Signal Processing 139 (2017): 86-95. [Paper]