Issues

- Do batch as the last (contiguous) dimension (#56, opened by certik, 0 comments; see the layout sketch after this list)
- HLIR: add add_32_f16 (#18, opened by certik, 0 comments)
- Operations like "+" and matmul (#9, opened by certik, 0 comments)
- Speedup the PyTorch training (#73, opened by certik, 5 comments)
- Checks and warnings in LLIR (#58, opened by certik, 0 comments)
- Implement the other MNIST network with batchnorm (#65, opened by certik, 0 comments; see the network sketch after this list)
- Finish f16 implementation (#55, opened by certik, 2 comments)
- Binary format choice (#34, opened by certik, 3 comments)
- Crash generating mnist-tests.gguf (#36, opened by rebcabin, 0 comments)
- Design CPU LLIR (#23, opened by certik, 0 comments)
- Fixed vs free dimensions (#21, opened by certik, 0 comments)
- Design LLIR (#19, opened by certik, 0 comments)
- NNIR: Dropout node for training (#12, opened by certik, 1 comment)
- Consider not putting previous dimension in NNIR (#11, opened by certik, 0 comments)
- TensorFlow and PyTorch backend (#13, opened by certik, 0 comments)
- conv2d should only work for 4D arrays in HLIR (#8, opened by certik; see the rank-check sketch after this list)
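
Issue #56 is listed here only by its title, so the following is a generic NumPy illustration of the layout question, not code from this repository: with a row-major layout, making the batch axis the last dimension means that, for a fixed feature index, the values across the batch sit next to each other in memory. A minimal sketch, assuming float32 activations:

```python
import numpy as np

batch, features = 64, 128

# Batch first: consecutive samples are `features * 4` bytes apart (row-major).
x_batch_first = np.zeros((batch, features), dtype=np.float32)
print(x_batch_first.strides)  # (512, 4)

# Batch last: consecutive samples are 4 bytes apart, so one feature can be
# read across the whole batch as a contiguous run.
x_batch_last = np.zeros((features, batch), dtype=np.float32)
print(x_batch_last.strides)   # (256, 4)
```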
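Issue #65 mentions "the other MNIST network with batchnorm" without showing its architecture. The following is a hypothetical PyTorch sketch of a small MNIST convnet with batch normalization, purely to illustrate what such a network typically looks like, not the specific model the issue refers to:

```python
import torch
from torch import nn

# Hypothetical MNIST convnet with BatchNorm2d after each convolution.
model = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1),
    nn.BatchNorm2d(32),
    nn.ReLU(),
    nn.MaxPool2d(2),                  # 28x28 -> 14x14
    nn.Conv2d(32, 64, kernel_size=3, padding=1),
    nn.BatchNorm2d(64),
    nn.ReLU(),
    nn.MaxPool2d(2),                  # 14x14 -> 7x7
    nn.Flatten(),
    nn.Linear(64 * 7 * 7, 10),
)

x = torch.randn(8, 1, 28, 28)         # batch of 8 grayscale MNIST images
print(model(x).shape)                 # torch.Size([8, 10])
```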
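Issue #8 asks that conv2d accept only 4D arrays in HLIR. The HLIR API itself is not shown here, so the following is a hypothetical NumPy reference implementation that illustrates the rank check on (N, C, H, W) data and (O, C, KH, KW) weights:

```python
import numpy as np

def conv2d(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Naive valid (no padding, stride 1) convolution that rejects non-4D arrays.
    Illustrative only; not the project's HLIR node."""
    if x.ndim != 4 or w.ndim != 4:
        raise ValueError(
            f"conv2d expects 4D (N, C, H, W) data and (O, C, KH, KW) weights, "
            f"got {x.ndim}D and {w.ndim}D arrays"
        )
    n, c, h, width = x.shape
    o, c2, kh, kw = w.shape
    assert c == c2, "channel mismatch"
    out = np.zeros((n, o, h - kh + 1, width - kw + 1), dtype=x.dtype)
    for i in range(out.shape[2]):
        for j in range(out.shape[3]):
            patch = x[:, :, i:i + kh, j:j + kw]  # (N, C, KH, KW)
            out[:, :, i, j] = np.tensordot(patch, w, axes=([1, 2, 3], [1, 2, 3]))
    return out

# A 3D input (missing the batch axis) is rejected:
# conv2d(np.zeros((1, 28, 28), np.float32), np.zeros((8, 1, 3, 3), np.float32))
# -> ValueError: conv2d expects 4D ... got 3D and 4D arrays
```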