Training without full precision
Closed this issue · 1 comments
Sneakr commented
Is it possible to train in quantized mode without full precision? Could a 2-bit quantization maybe be used to hold the ternary values? Thanks
GiacomoLeoneMaria commented
During training, when the torch training methods are called, the BitLinear class is invoked as a subclass of `nn.Linear`.
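A minimal sketch of that pattern is below (not the repository's actual implementation; the scaling scheme and class internals are assumptions). It illustrates why training still needs full precision: the master weights stay in fp32 and only the forward pass sees ternary values, with a straight-through estimator routing gradients back to the full-precision weights.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BitLinear(nn.Linear):
    """Sketch of a BitLinear-style layer: ternary weights in the forward
    pass, full-precision master weights for the backward pass."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.weight
        # Per-tensor scale from the mean absolute value (an assumed
        # scheme, in the spirit of BitNet b1.58-style quantization).
        scale = w.abs().mean().clamp(min=1e-5)
        # Quantize to {-1, 0, +1} times the scale.
        w_q = (w / scale).round().clamp(-1, 1) * scale
        # Straight-through estimator: forward uses w_q, but gradients
        # flow to the full-precision w as if no rounding happened.
        w_ste = w + (w_q - w).detach()
        return F.linear(x, w_ste, self.bias)

layer = BitLinear(8, 4)
out = layer(torch.randn(2, 8))
```

Because optimizer updates are tiny relative to the quantization step, rounding would erase them; keeping the fp32 master weights is what lets those small updates accumulate, which is why purely 2-bit training does not work out of the box.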