Xilinx/finn-examples

Problem in "Tidy Up" of .onnx file

Aakasha01Agarwal opened this issue · 1 comment

I took part in the ITU-ML5G-PS-007 "Lightning-Fast Modulation Classification with Hardware-Efficient Neural Networks" challenge.

I want to implement my solution in hardware, so I searched online and found the FINN repository. I am now facing some issues while working with it.

  1. In the Xilinx/finn-examples/build/vgg10-radioml/README it is mentioned that "The quantized VGG10 is based on the baseline topology for our problem statement in the ITU AI/ML in 5G Challenge. You can find it in our sandbox repository. In addition, the ONNX model has been tidied up by removing the input quantization, which we do in software for this example, and by adding a top-k (k=1) node at the output. Thus, the accelerator returns the top-1 class index instead of logits."

  2. My query is: how exactly is the ONNX model tidied up? I have searched online and found no fruitful results.

  3. The ONNX file we get from the said sandbox repository and the ONNX file used in vgg10-radioml in the FINN repository are not the same.

  4. The vgg10-radioml example in finn-examples uses a tidied-up .onnx file. How do we convert the .onnx file generated from the sandbox repository into the .onnx file that can be used directly for hardware implementation?
    I have been stuck on this problem for almost 3 weeks and do not see any other way to proceed. Kindly help me out. Looking forward to your response. Thank you.

Hi,
thanks for reaching out. I provided our code for this "tidy up" in this answer: Xilinx/finn#420 (comment)

It is not generally applicable to any model, but it should work with models of the same style as the sandbox baseline. I agree that we should place this code more prominently alongside the sandbox and finn-examples repositories.

If you have further questions about the network surgery, please comment on the discussion linked above :)