Issue with the data types of the imported model
Closed · 1 comment
I can see from these lines of code that fixed operand precisions are assumed during import:

- zigzag/zigzag/classes/io/onnx/conv.py, line 119 (commit 54cf7c2)
- zigzag/zigzag/classes/io/onnx/conv.py, lines 164 to 167 (commit 54cf7c2)
Hi, yes, you are correct that these precisions are assumed. Traditional accelerators use 8 bits for the operands and a higher precision for the partial output sums; here 16 bits is assumed, though 24 bits is also common.

If your accelerator uses larger precisions, this will affect the cost estimation: data fetches to and from the memories become more expensive, and fewer operands fit in the lower memory levels. You can modify the `operand_precision` accordingly.
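As a rough illustration of why precision matters for the cost model, the sketch below defines a hypothetical layer description with explicit per-operand precisions (the field name `operand_precision` comes from this thread; the surrounding structure and the `memory_traffic_bits` helper are illustrative assumptions, not ZigZag's actual API) and shows how widening the partial-sum precision directly scales the estimated memory traffic:

```python
# Hypothetical layer description in the spirit of ZigZag's workload format.
# Only "operand_precision" is mentioned in the thread; everything else here
# is an illustrative assumption.
layer = {
    "operator_type": "Conv",
    "operand_precision": {
        "W": 8,        # weights: 8-bit operands, as the importer assumes
        "I": 8,        # input activations: 8-bit operands
        "O": 24,       # partial output sums: raised from the default 16 to 24 bits
        "O_final": 8,  # final outputs written back after quantization
    },
}

def memory_traffic_bits(precision: dict[str, int], num_accesses: int) -> dict[str, int]:
    """Toy per-operand traffic estimate: each access moves `bits` bits,
    so wider operands cost proportionally more memory bandwidth."""
    return {op: bits * num_accesses for op, bits in precision.items()}

# With 1000 accesses per operand, 24-bit partial sums generate 3x the
# traffic of the 8-bit operands.
traffic = memory_traffic_bits(layer["operand_precision"], num_accesses=1000)
print(traffic)
```

This also captures the second effect mentioned above: for a fixed-size buffer, a 24-bit partial sum occupies three times the space of an 8-bit operand, so fewer of them fit in the lower memory levels.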