dgschwend/zynqnet

Wrong results from the last layer of a customized model

Opened this issue · 1 comment

Hi,
We are trying to run an object detection model with your project.
To do this, we designed a customized model. There are two differences between our model and your original network: 1) the input size is changed from 256 to 224, and 2) we added a convolutional layer to your original network; the details of this layer are shown in the image below.
[Image: definition of the added convolutional layer]
The model is retrained, so its parameters are completely different from those of your original model. Our model and our test images were converted to XX.bin files with the tools provided in your repository and tested with classify.py. So far, everything went smoothly.
When we run our model on the 7045, something strange happens. The result of our model (i.e., the output of the extra layer we added) is wrong, and it shows a strange pattern (see the image below): all values within a channel are identical, while the values differ across channels. Even stranger, the result of the second-to-last layer (i.e., the input of the extra layer we added) is correct; we verified it against the result generated with classify.py.
[Image: dump of the last layer's output, showing identical values within each channel]
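For reference, this is roughly how we inspect the dumped result and confirm the per-channel pattern. This is only a minimal sketch: the file name, the float32 CHW layout, and the shape below are placeholders, not the actual values from our model.

```python
import numpy as np

# Assumptions (placeholders): the FPGA result of the last layer was dumped to
# "last_layer_output.bin" as raw float32 values in CHW order, with the shape
# given below. Adjust these to match your own dump.
CHANNELS, HEIGHT, WIDTH = 24, 7, 7  # example shape, not the real one

data = np.fromfile("last_layer_output.bin", dtype=np.float32)
data = data.reshape(CHANNELS, HEIGHT, WIDTH)

# Check the pattern we observe: every value inside a channel is identical,
# while the constant differs from channel to channel.
for c in range(CHANNELS):
    channel = data[c]
    print(f"channel {c:3d}: min={channel.min():.6f} max={channel.max():.6f} "
          f"uniform={np.allclose(channel, channel.flat[0])}")
```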
We checked this process multiple times and got the same result every time: the output of the last layer is wrong, while the output of the second-to-last layer is correct. This doesn't make sense to us. Does anyone have a clue why this happens, and how to fix it?
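In case it helps anyone reproduce the comparison, the check of the second-to-last layer against classify.py is essentially the following. Again just a rough sketch: the file names and the assumption that both dumps are raw float32 in the same order are placeholders on our side.

```python
import numpy as np

# Assumptions (placeholders): "fpga_layer.bin" is the layer output dumped from
# the board and "reference_layer.bin" is the same layer computed by
# classify.py, both stored as raw float32 in the same (CHW) order.
fpga = np.fromfile("fpga_layer.bin", dtype=np.float32)
ref = np.fromfile("reference_layer.bin", dtype=np.float32)

assert fpga.shape == ref.shape, "dumps have different sizes"

abs_err = np.abs(fpga - ref)
print("max abs error :", abs_err.max())
print("mean abs error:", abs_err.mean())
print("match (1e-3)  :", np.allclose(fpga, ref, atol=1e-3))
```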

Hello, have you solved this problem? I'm also adding other layers to ZynqNet, and the result of the first layer is incorrect. Can you help me? Maybe we can communicate by e-mail?