Code speedup
There's a lot of low-hanging fruit in optimizing the code. Most of the simulation time is spent on dict-related operations, in particular `__hash__` and `__eq__` calls. Rewriting some of the frequently called, dictionary-heavy functions would greatly improve the runtime.
See #60 for WIP.
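As a toy illustration (not ZigZag's actual classes) of where that time goes: every dict lookup on a user-defined key calls `__hash__`, and on a hash-bucket collision also `__eq__`. Caching the hash at construction is one common mitigation:

```python
class Operand:
    """Operand whose __hash__ recomputes the hash on every dict access."""
    def __init__(self, name: str):
        self.name = name

    def __hash__(self):
        return hash(self.name)  # recomputed on every lookup

    def __eq__(self, other):
        return isinstance(other, Operand) and self.name == other.name


class CachedOperand(Operand):
    """Same semantics, but the hash is computed once at construction."""
    def __init__(self, name: str):
        super().__init__(name)
        self._hash = hash(name)

    def __hash__(self):
        return self._hash  # constant-time, no recomputation
```

The payoff grows with the cost of `__hash__`; for classes whose hash walks several attributes, a `timeit` on repeated dict lookups shows the cached variant clearly ahead.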
Currently, ZigZag has the concept of `MemoryOperands` (e.g. `I1`, `I2`) and `LayerOperands` (e.g. `I`, `W`). The idea behind this distinction is that users can create layers with different names for the layer operands. To map the layer operands to the memory operands, each layer has a `MemoryOperandLinks` instance.
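For concreteness, the indirection could be sketched roughly like this (class and method names are guesses; the real `MemoryOperandLinks` may differ):

```python
# Hypothetical sketch of the layer-to-memory-operand indirection;
# the default mapping mirrors the operand names mentioned above.
DEFAULT_LINKS = {"I": "I1", "W": "I2"}

class MemoryOperandLinks:
    def __init__(self, links=None):
        # Map layer operand names (e.g. "I", "W") to memory operand
        # names (e.g. "I1", "I2"); falls back to the default mapping.
        self.links = dict(links) if links else dict(DEFAULT_LINKS)

    def mem_op_of(self, layer_op: str) -> str:
        # Every call site that needs a memory operand goes through
        # this dict lookup, which is where the overhead accumulates.
        return self.links[layer_op]
```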
In practice, only the same two layer and memory operands occur, and the `MemoryOperandLinks` is always exactly the same, namely the default. I don't know of any use case for a different layer-to-memory-operand mapping.
Dealing with this variable mapping from layer to memory operand costs a lot of execution time and adds complexity; the translation occurs in many, many places. The code could be sped up significantly by removing `MemoryOperandLinks`, merging the concepts of memory and layer operands into one, and hardcoding the operands (e.g. `layer.operand_1` or `layer.weight_op`).
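The proposed hardcoding might look something like this sketch (the attribute names `weight_op`/`input_op` follow the suggestion above; the rest is assumed, not ZigZag's actual `Layer`):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Layer:
    """Merged operand concept: fixed attributes replace the per-layer
    MemoryOperandLinks lookup (a sketch, not ZigZag's actual Layer)."""
    name: str
    input_op: str = "I1"
    weight_op: str = "I2"

# Call sites become plain attribute accesses instead of dict lookups:
# layer.weight_op rather than links.mem_op_of("W").
```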
Are there any drawbacks to this approach? Am I missing use cases where a non-default `MemoryOperandLinks` is needed?