
Primary language: Verilog. License: BSD 2-Clause "Simplified" (BSD-2-Clause).

LUTNet

Repo organisation

The repo contains two versions of LUTNet.

  • Unrolled LUTNet: Operators in convolutional layers are mapped to FPGA resources with one-to-one LUT binding. No BRAM is consumed for weight storage, as weights are hardened into the LUT configuration masks. Details can be found in our paper LUTNet: Rethinking Inference in FPGA Soft Logic.
  • Tiled LUTNet: Operators are tiled and reused, trading off area efficiency for resource savings. BRAMs are consumed for weight storage. Details can be found in our article LUTNet: Learning FPGA Configurations for Highly Efficient Neural Network Inference.
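The unrolled variant's core idea, weights hardened into LUT configuration masks, can be illustrated with a toy model. This is a sketch for intuition only, not code from this repo: each K-input LUT is a learned 2^K-entry truth table (the analogue of an FPGA LUT's INIT mask), and inference simply indexes that table with the binary inputs, so no separate weight memory is needed.

```python
# Toy model of a hardened K-input LUT (illustration only, not from this repo).
# The configuration mask plays the role of the FPGA LUT's INIT value: the
# learned "weights" live entirely inside the truth table.

def make_lut(mask_bits):
    """mask_bits: list of 2**K output bits (the LUT configuration mask)."""
    def lut(inputs):
        # Pack the K binary inputs into an index into the truth table.
        idx = 0
        for bit in inputs:
            idx = (idx << 1) | (bit & 1)
        return mask_bits[idx]
    return lut

# A 2-input LUT configured as XNOR -- the binarised-network multiply that
# LUTNet generalises beyond:
xnor_lut = make_lut([1, 0, 0, 1])  # outputs for input patterns 00, 01, 10, 11
print(xnor_lut([0, 0]), xnor_lut([0, 1]))  # 1 0
```

Because the function is fixed at configuration time, a fully unrolled network binds one physical LUT per operator, which is why the unrolled variant consumes no BRAM for weights.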

Prerequisites

For training LUTNet, you should have the following packages installed:

  • Keras (v2)
  • TensorFlow

For hardware synthesis, we developed and tested the project with Vivado (+ HLS) 2016.3. Newer versions of Vivado HLS do not work with our project: they limit loop unrolling factors, which reduces LUTNet's area-efficiency advantage.

Results

| Dataset  | Tiling                            | Top-1 Accuracy (%) | LUT    | FPS                       |
|----------|-----------------------------------|--------------------|--------|---------------------------|
| MNIST    | Fully unrolled                    | 98.01              | 58192  | 200M                      |
| SVHN     | Largest conv layer fully unrolled | 96.20              | 154814 | 200M (target layer only)  |
| SVHN     | Tiled                             | 96.42              | 361528 | 10.2k                     |
| CIFAR-10 | Largest conv layer fully unrolled | 84.21              | 246044 | 200M (target layer only)  |
| CIFAR-10 | Tiled                             | 84.80              | 106776 | 10.2k                     |
| ImageNet | Largest conv layer fully unrolled | 41.45              | 496160 | 5.56M (target layer only) |
| ImageNet | Tiled                             | 41.28              | 482903 | 260                       |
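The unrolled/tiled trade-off can be read directly off the table; for example, on CIFAR-10, tiling shrinks the LUT footprint at the cost of throughput. A quick check using the figures above (note the FPS columns are not directly comparable, since the unrolled figure covers the target layer only while the tiled figure covers the whole network):

```python
# CIFAR-10 figures from the results table above.
unrolled_luts = 246044  # largest conv layer fully unrolled
tiled_luts = 106776     # whole network tiled

lut_saving = unrolled_luts / tiled_luts
print(f"Tiling cuts LUT usage by {lut_saving:.2f}x")  # Tiling cuts LUT usage by 2.30x
```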

Citation

If you make use of this code, please acknowledge us by citing our conference paper and/or journal article:

@inproceedings{lutnet_fccm,
	author={Wang, Erwei and Davis, James J. and Cheung, Peter Y. K. and Constantinides, George A.},
	title={{LUTNet}: Rethinking Inference in {FPGA} Soft Logic},
	booktitle={IEEE International Symposium on Field-Programmable Custom Computing Machines},
	year={2019}
}

@article{lutnet_tc,
	author={Wang, Erwei and Davis, James J. and Cheung, Peter Y. K. and Constantinides, George A.},
	title={{LUTNet}: Learning {FPGA} Configurations for Highly Efficient Neural Network Inference},
	journal={IEEE Transactions on Computers},
	year={2020},
	note={to appear}
}