ESDA: A Composable Dynamic Sparse Dataflow Architecture for Efficient Event-based Vision Processing on FPGA
This repo contains the implementation for:
ESDA: A Composable Dynamic Sparse Dataflow Architecture for Efficient Event-based Vision Processing on FPGA
Yizhao Gao, Baoheng Zhang, Yuhao Ding, Hayden So
(FPGA 2024)
ESDA is a framework for building customized DNN accelerators for event-based vision tasks. It exploits the spatial sparsity of event-based input through a novel dynamic sparse dataflow architecture, achieved by formulating the computation of each dataflow module as a unified token-feature computation scheme. To enhance spatial sparsity, ESDA also incorporates submanifold sparse convolution when building its DNN models.
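For intuition, below is a minimal sketch (not taken from the ESDA codebase; all sizes, channel counts, and coordinate ranges are illustrative assumptions) of how sparse event input maps onto a sparse convolution with MinkowskiEngine, where only active sites produce computation:

```python
import torch
import MinkowskiEngine as ME

# Illustrative event input: active pixel coordinates plus a small per-site
# feature vector (e.g. polarity counts). All sizes here are made up.
in_ch, out_ch = 2, 16
yx = torch.randint(0, 128, (1000, 2), dtype=torch.int32)
batch = torch.zeros(1000, 1, dtype=torch.int32)       # single-sample batch
coords = torch.unique(torch.cat([batch, yx], dim=1), dim=0)
feats = torch.rand(coords.shape[0], in_ch)

x = ME.SparseTensor(features=feats, coordinates=coords)

# A 3x3 sparse convolution over the 2D event grid; with stride 1 the output
# stays on the same active coordinates, so no work is spent on empty space.
conv = ME.MinkowskiConvolution(in_ch, out_ch, kernel_size=3, dimension=2)
y = conv(x)
print(y.F.shape)  # feature matrix of the (still sparse) output
```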
The project mainly consists of three parts:
- Software model training on event-based datasets with sparsity and quantization
- Hardware design optimization (constrained optimization to search for an optimal hardware mapping; see the sketch after this list)
- Hardware synthesis, implementation and evaluation
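To give a flavor of the optimization step, here is a toy PySCIPOpt model in the same spirit; the layer names, per-unit resource costs, and DSP budget are hypothetical, and ESDA's actual formulation is considerably richer:

```python
from pyscipopt import Model

# Toy mapping search: pick an integer parallelism factor for each layer so
# that total (made-up) DSP usage fits the budget, maximizing parallelism.
layers = ["conv1", "conv2", "conv3"]
dsp_per_unit = {"conv1": 12, "conv2": 20, "conv3": 28}  # hypothetical costs
dsp_budget = 900

m = Model("toy_mapping")
p = {l: m.addVar(name=f"p_{l}", vtype="I", lb=1, ub=64) for l in layers}

# Resource constraint: the chosen mapping must fit on the device.
m.addCons(sum(dsp_per_unit[l] * p[l] for l in layers) <= dsp_budget)

# Stand-in objective: maximize total parallelism across layers.
m.setObjective(sum(p[l] for l in layers), sense="maximize")
m.optimize()

for l in layers:
    print(l, m.getVal(p[l]))
```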
If you are going to reproduce our FPGA'24 artifact, please refer to evaluation.md.
- [2024/06] We also participated in the Eye-tracking challenge.
The source code for the challenge is in the `eye_tracking` branch:

```
git checkout eye_tracking
```
This project depends on Vivado 2020.2. Please download it and follow the installation guide from Xilinx. If you use a newer version, you may need to modify some of the project Tcl scripts yourself.
To install the SCIP Optimization Suite, please refer to its installation guide. Note that SCIP must be compiled with `-DTPI=tny` to support the concurrent solver.

After installation, please export `$SCIPOPTDIR`:

```
export SCIPOPTDIR=/path/to/scipoptdir
```
Assuming you have installed Anaconda, you can create a new environment by:
```
conda create -n ESDA python=3.8
conda activate ESDA
```
Then install the required packages by:
```
pip3 install -r requirements.txt
```
Important: make sure `$SCIPOPTDIR` is set before you install PySCIPOpt.
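As a quick sanity check (our suggestion, not part of the original instructions), you can confirm that PySCIPOpt links against your SCIP installation:

```python
from pyscipopt import Model

# Solving an empty model fails if PySCIPOpt cannot locate the SCIP libraries.
m = Model()
m.optimize()
print(m.getStatus())  # expected: "optimal"
```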
For PyTorch installation, make sure you have installed the NVIDIA driver successfully. Then refer to the PyTorch website to download the correct PyTorch 1.8 version.
For example, if your CUDA version is 11.x, you can use:
```
pip3 install torch==1.8.2 torchvision==0.9.2 torchaudio==0.8.2 --extra-index-url https://download.pytorch.org/whl/lts/1.8/cu111
```
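Afterwards, a quick check (again our suggestion) that the install succeeded and PyTorch can see your GPU:

```python
import torch

# Verify the installed build and that the CUDA driver is usable.
print(torch.__version__)          # expected: 1.8.2
print(torch.cuda.is_available())  # expected: True
```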
Finally, you need to install the Minkowski Engine:

```
cd software
conda install -c conda-forge cudatoolkit-dev
python setup.py install
```
(This is a temporary solution, as we modified the source code of the Minkowski Engine for quantization; we will clean this up later.)
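A minimal import check (our suggestion, assuming the modified engine keeps the standard package name) to confirm the build succeeded:

```python
import MinkowskiEngine as ME

# If the modified engine compiled and installed correctly, this prints a version.
print(ME.__version__)
```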
We use five datasets for the project: DvsGesture, RoShamBo17, ASL-DVS, N-MNIST, and N-Caltech101.
For dataset preparation, please refer to the software README.
The model training source code lies in the `software` folder. After obtaining a trained model, use the toolflow in the `optimization` folder to generate the hardware configuration. Finally, use the hardware templates and Makefile in the `hardware` folder to generate the Vitis HLS and Vivado projects and synthesize your bitstream.
Apart from the FPGA'24 artifact, ESDA is inspired by and relies on many existing open-source libraries, including Asynet, MinkowskiEngine, HAWQ, AGNA, DPCAS, etc.