This repository is the official code release of the paper FracBNN: Accurate and FPGA-Efficient Binary Neural Networks with Fractional Activations (published at FPGA 2021).
FracBNN is a binary neural network that achieves MobileNetV2-level accuracy by leveraging fractional activations. In addition, its input layer is binarized using a novel thermometer encoding with minimal accuracy degradation, which improves hardware resource efficiency.
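For intuition, a thermometer encoding turns each real-valued input pixel into a vector of binary indicator bits, so the first convolution can consume binary inputs directly. Below is a minimal sketch, assuming inputs normalized to [0, 1] and a hypothetical `n_bits` resolution; the exact encoding used by the training scripts may differ.

```python
import torch

def thermometer_encode(x, n_bits=32):
    """Thermometer-encode intensities in [0, 1] into n_bits binary channels.
    Illustrative sketch only; n_bits and threshold placement are assumptions."""
    # x: (N, C, H, W) float in [0, 1]  ->  (N, C * n_bits, H, W) binary
    thresholds = (torch.arange(n_bits, dtype=x.dtype, device=x.device) + 0.5) / n_bits
    # A pixel p turns on every bit whose threshold lies below p.
    bits = (x.unsqueeze(2) > thresholds.view(1, 1, n_bits, 1, 1)).to(x.dtype)
    return bits.flatten(1, 2)
```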
If FracBNN helps your research, please consider citing:
```bibtex
@article{Zhang2021fracbnn,
  title   = "{FracBNN: Accurate and FPGA-Efficient Binary Neural Networks with Fractional Activations}",
  author  = {Zhang, Yichi and Pan, Junhao and Liu, Xinheng and Chen, Hongzheng and Chen, Deming and Zhang, Zhiru},
  journal = {The 2021 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays},
  year    = {2021}
}
```
```
.
├── cifar10.py              # training script
├── imagenet.py             # training script
├── models/
│   ├── fracbnn_cifar10.py
│   └── fracbnn_imagenet.py
├── utils/
│   ├── quantization.py
│   └── utils.py
├── xcel-cifar10/           # high-level synthesis code for the FracBNN CIFAR-10 accelerator
└── xcel-imagenet/          # high-level synthesis code for the FracBNN ImageNet accelerator
```
```
Python 3.6.8
torch 1.6.0
torchvision 0.7.0
numpy 1.16.4
```
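A quick way to confirm your environment matches these versions (a convenience snippet, not part of the repo):

```python
import numpy
import torch
import torchvision

# Print installed versions to compare against the list above.
print("torch      ", torch.__version__)
print("torchvision", torchvision.__version__)
print("numpy      ", numpy.__version__)
```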
- For CIFAR-10, run
  ```
  python cifar10.py -gpu 0 -t -r /path/to/pretrained-cifar10-model.pt -d /path/to/cifar10-data
  ```
- For ImageNet, run
  ```
  python imagenet.py -gpu 0,1,2,3 -t -r /path/to/pretrained-imagenet-model.pt -d /path/to/imagenet-data
  ```
Please refer to the paper for details such as hyperparameters.
- Step 1: Binary activations, floating-point weights.
  - In `utils/quantization.py`, use `self.binarize = nn.Sequential()` in `BinaryConv2d()`, or modify `self.binarize(self.weight)` to `self.weight` in `PGBinaryConv2d()`.
  - Run `python cifar10.py -gpu 0 -s`
- Step 2: Binary activations, binary weights.
  - In `utils/quantization.py`, use `self.binarize = FastSign()` in `BinaryConv2d()`, or `self.binarize(self.weight)` in `PGBinaryConv2d()`.
  - Run `python cifar10.py -gpu 0 -f -r /path/to/model_checkpoint.pt -s`
  - Use `-g` to set the gating target if training with `PGBinaryConv2d` (see the sketch after this list for where `self.binarize` sits).
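For reference, the sketch below shows where `self.binarize` sits in a binary convolution layer and how swapping it between an identity (`nn.Sequential()`) and a sign function toggles Step 1 vs. Step 2. It is a minimal illustration, assuming a straight-through gradient estimator for the sign; the actual `BinaryConv2d()` and `PGBinaryConv2d()` in `utils/quantization.py` may differ in details.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FastSign(nn.Module):
    """Sign binarization with a straight-through estimator (illustrative)."""
    def forward(self, x):
        # Forward: binarize to {-1, +1}; backward: pass gradients through unchanged.
        binary = torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))
        return x + (binary - x).detach()

class BinaryConv2d(nn.Conv2d):
    """Conv2d whose weights pass through self.binarize before the convolution.
    Step 1: self.binarize = nn.Sequential()  -> identity, floating-point weights.
    Step 2: self.binarize = FastSign()       -> binary weights."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.binarize = FastSign()

    def forward(self, x):
        return F.conv2d(x, self.binarize(self.weight), self.bias,
                        self.stride, self.padding, self.dilation, self.groups)
```

In `PGBinaryConv2d()`, the analogous switch is whether the weight is used directly (`self.weight`) or passed through `self.binarize(self.weight)`, as described in the steps above.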
| Dataset | Precision (W/A) | 1-bit Input Layer | Top-1 (%) |
|---|---|---|---|
| CIFAR-10 | 1/1.4 (PG) | Yes | 89.1 |
| ImageNet | 1/1.4 (PG) | Yes | 71.7 |
```
cd ./xcel-cifar10/source/
make hls
```
This step should be done after the HLS compilation finishes. Assuming you are still in `./xcel-cifar10/source/`, run:
```
make vivado
```
To test the bitstream on the board, the following files (sample images and labels) are needed:
To deploy the bitstream:
- Step 1: Download the files and move them to `/xcel-cifar10/deploy/`.
- Step 2: Move the generated bitstream and hardware definition files to `/xcel-cifar10/deploy/`.
- Step 3: Upload the entire `/xcel-cifar10/deploy/` directory to the board.
- Step 4: On the board, go to `deploy/` and run `sudo python3 FracNet-CIFAR10.py`.
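On a PYNQ-enabled board, a deployment script of this kind typically loads the bitstream as an overlay and exchanges data through physically contiguous buffers. The outline below is a hypothetical sketch of that flow, assuming PYNQ; the bitstream file name, buffer shapes, and IP interface are placeholders, and the repo's actual `FracNet-CIFAR10.py` is the authoritative driver.

```python
# Hypothetical outline of a PYNQ deployment flow; names and shapes are
# placeholders, not the repo's actual FracNet-CIFAR10.py.
import numpy as np
from pynq import Overlay, allocate

overlay = Overlay("FracNet-CIFAR10.bit")              # assumed bitstream file name
in_buf = allocate(shape=(3, 32, 32), dtype=np.uint8)  # DMA-visible input buffer
out_buf = allocate(shape=(10,), dtype=np.int32)       # per-class scores

# ...copy a sample image into in_buf, start the accelerator IP through the
# overlay's handles, wait for completion, then read predictions from out_buf.
pred = int(np.argmax(out_buf))
```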
Please run the compilation flow using Vivado HLS. The top function is `FracNet()` in `xcel-imagenet/source/bnn.cc`.
To test the bitstream on the board, the following files (sample images, weights) are needed:
- sample image
- conv1x1 weights
- conv3x3 weights
- classifier bias
- classifier weights
- batchnorm/BPReLU weights
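These binary files come from the trained PyTorch checkpoint. As a rough illustration of how such flat weight files could be produced (the checkpoint layout, key names, and file naming here are hypothetical, not the repo's actual export flow):

```python
# Hypothetical export of checkpoint tensors to raw binary files for the
# accelerator; key handling and file naming are illustrative only.
import numpy as np
import torch

ckpt = torch.load("/path/to/pretrained-imagenet-model.pt", map_location="cpu")
state = ckpt.get("state_dict", ckpt)  # handle either a raw or wrapped state dict
for name, tensor in state.items():
    # One flat little-endian float32 file per parameter tensor.
    tensor.numpy().astype(np.float32).tofile(name.replace(".", "_") + ".bin")
```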
To deploy the bitstream:
- Step 1: Download the files and move them to `/xcel-imagenet/deploy/`.
- Step 2: Move the generated bitstream and hardware definition files to `/xcel-imagenet/deploy/`.
- Step 3: Upload the entire `/xcel-imagenet/deploy/` directory to the board.
- Step 4: On the board, go to `deploy/` and run `sudo python3 FracNet-ImageNet.py`.