Xilinx/BNN-PYNQ

To rebuild the hardware design

Changyiyu opened this issue · 7 comments

Hello, I am trying to rebuild the hardware design with the parameters I trained on resized 128x128 images, using the command ./make-hw.sh cnvW1A1 Z1-Z2 a, and I get the error:
/BNN-PYNQ-master/bnn/src/library/finn-hlslib/mvau.hpp:154:27: error: no matching function for call to object of type 'Recast'
auto const act = TSrcI()(inElem, mmv);
and BNN-PYNQ-master/bnn/src/library/finn-hlslib/bnn-library.h:62:
/home/carol/桌面/BNN-PYNQ-master/bnn/src/library/finn-hlslib/fclayer.h:107:3: error: no matching function for call to 'Matrix_Vector_Activate_Batch'.

I have no idea how to fix this error, or where I should make changes if I want a larger input size. Does the config.h file have an effect? I guess the problem is related to the activation, but I am not sure; the build also ends with "recipe for target 'obj/top.o' failed". If config.h matters, can you tell me what the variables PE, SIMD, WMEM, and TMEM mean? I am confused about this, thank you.

Hello, I am trying to rebuild the hardware design with the parameters I trained on resized 128x128 images, using the command ./make-hw.sh cnvW1A1 Z1-Z2 a, and I get the warning:

In file included from ../../../../../../../library/host/rawhls-offload.cpp:44:
In file included from /home/carol/BNN-PYNQ/bnn/src/library/host/foldedmv-offload.h:136:
/home/carol/BNN-PYNQ/bnn/src/library/finn-hlslib/bnn-library.h:47:9: warning: 'AP_INT_MAX_W' macro redefined [-Wmacro-redefined]
#define AP_INT_MAX_W 4096
^
/opt/Xilinx/Vivado/2017.4/include/etc/ap_private.h:96:9: note: previous definition is here
#define AP_INT_MAX_W 1024

And
WARNING: Hls::stream 'DoCompute.inter8' is read while empty, which may result in RTL simulation hanging.
WARNING: Hls::stream 'DoCompute.inter0_2' contains leftover data, which may result in RTL simulation hanging.

I have no idea how to fix it. Do any other files need to change if I want a larger input size instead of 32x32? And does the config.h file in the hw folder have an effect? If it does, can you tell me what the variables PE, SIMD, WMEM, and TMEM mean? Do I have to change INPUT_BUF_ENTRIES and OUTPUT_BUF_ENTRIES in foldedmv-offload.h? Thank you!

Hi Changyiyu, do you still have both of these problems (the "no matching function" call to Recast and the hls::stream warnings), or just the latter one?

Hi, just the latter one. Thank you!

As already explained in the other thread #124 (by the way, please close one or the other), you need to change the dimensions of the feature maps in the script that generates the config.h file. Additionally, you will need to change the value of the input bits in lines 163, 168, and 169 of the top.cpp file.
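For context, a generated config.h typically contains per-layer dimension and folding defines along these lines (an illustrative excerpt with made-up values, not the asker's actual file):

```cpp
// Illustrative config.h excerpt for a first convolutional layer (values invented):
#define L0_K       3    // kernel size
#define L0_IFM_CH  3    // input feature-map channels
#define L0_IFM_DIM 32   // input feature-map dimension; this is what changes
                        // when moving from 32x32 to 128x128 input
#define L0_OFM_CH  64   // output feature-map channels
#define L0_OFM_DIM 30   // output feature-map dimension (32 - K + 1)
#define L0_SIMD    3    // input parallelism
#define L0_PE      16   // output parallelism
#define L0_WMEM    36   // weight memory depth = (K*K*IFM_CH*OFM_CH)/(SIMD*PE)
#define L0_TMEM    4    // threshold memory depth = OFM_CH/PE
```

The generation script recomputes all of these, which is why changing the feature-map dimensions there (rather than editing config.h by hand) keeps the folding parameters consistent.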

Hi, thank you for the explanation. One more question: in the w1a1 folder I got after training, there are the weight .bin files and the config.h in the hw folder, but top.cpp is not generated. Is top.cpp an original file in /network/W1A1/hw? So I just need to copy all of the .bin files and config.h from /training/binpara-cnvW1A1-pynq to /network/W1A1, is that right?

The top.cpp file is not yet automatically generated. It has to be hand-written and must reflect the network topology you have trained. Have you changed the number or type of layers?

No, I just changed the pixel size of the input image. Should I change the number of layers in the model for the bigger input?