microsoft/microxcaling
PyTorch emulation library for Microscaling (MX)-compatible data formats
Python · MIT License
Issues
Can this library support flash attention?
#34 opened by ryusaeba - 0
Incompatible requirements?
#31 opened by awf - 1
mx.matmul overhead
#29 opened by tsengalb99 - 1
examples ffn_mx.py - Module not found error - no module named mx (needs path.append('..'))
#9 opened by lessw2020 - 1
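The parenthetical fix noted in the title of #9 amounts to putting the repository root on the import path before importing mx when a script is launched from the examples directory. A minimal sketch of that workaround, assuming the script is run from examples/:

```python
# Workaround for "ModuleNotFoundError: No module named 'mx'" when running
# an example from the examples/ directory: the mx package lives one level up.
import sys
sys.path.append('..')

import mx  # resolves once the repository root is on sys.path
```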
How is the matmul for MX format implemented?
#23 opened by xijiu9 - 1
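On #23: since this is an emulation library, the broad MX pattern is to quantize each operand block-wise (a shared power-of-two scale per block plus a narrow element encoding) and then run the actual matmul in ordinary floating point. The sketch below only illustrates that pattern and is not the library's internal code; quantize_mx, mx_matmul_emulated, the 32-element blocks, and the 2-bit mantissa grid are all illustrative assumptions.

```python
import torch

def quantize_mx(x: torch.Tensor, block_size: int = 32, mantissa_bits: int = 2) -> torch.Tensor:
    """Hypothetical block-wise quantizer illustrating the MX idea:
    a shared power-of-two scale per block plus coarsely rounded elements."""
    orig_shape = x.shape
    # Blocks are taken along the flattened tensor for simplicity;
    # assumes numel is a multiple of block_size.
    x = x.reshape(-1, block_size)
    # Shared scale: power of two derived from each block's max magnitude.
    shared_exp = torch.floor(torch.log2(x.abs().amax(dim=-1, keepdim=True).clamp_min(1e-30)))
    scale = torch.exp2(shared_exp)
    # Normalize by the shared scale, round to a coarse grid, rescale.
    step = 2.0 ** -mantissa_bits
    q = torch.round(x / scale / step) * step
    return (q * scale).reshape(orig_shape)

def mx_matmul_emulated(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # Quantize both operands to the MX-like grid, then do a normal fp matmul.
    return torch.matmul(quantize_mx(a), quantize_mx(b))

out = mx_matmul_emulated(torch.randn(64, 128), torch.randn(128, 32))
```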
BFloat16 compatibility contributions
#21 opened by hmellor - 0
Large accuracy loss on MobileVit model
#27 opened by zks878 - 1
Consider a PyPI package?
#22 opened by awf - 0
Inference Error with OPT models
#18 opened by ruisizhang123 - 4
Support for LSTMCell
#12 opened by rjfnobre - 1
Custom CUDA code vs. Pytorch CPU/GPU
#11 opened by rjfnobre - 23
example or docs for getting started and converting an existing model to MX dtypes ala FP6?
#5 opened by lessw2020 - 1
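On #5: a rough getting-started sketch for converting an existing model to MX FP6, following the spec-dict pattern shown in the repository's README. The spec keys and the finalize_mx_specs / mx_mapping.inject_pyt_ops names are recalled from that README and may differ across versions, so treat the exact API as an assumption.

```python
import torch
from mx import finalize_mx_specs, mx_mapping

# MX configuration: FP6 (e3m2) elements for weights and activations,
# 32-element blocks, bfloat16 for unquantized vector ops.
mx_specs = finalize_mx_specs({
    'w_elem_format': 'fp6_e3m2',
    'a_elem_format': 'fp6_e3m2',
    'block_size': 32,
    'bfloat': 16,
    'custom_cuda': False,
})

# Swap PyTorch ops (linear, matmul, ...) for their MX-emulating versions,
# then build and run the model as usual.
mx_mapping.inject_pyt_ops(mx_specs)

model = torch.nn.Linear(128, 64)
out = model(torch.randn(8, 128))
```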
957 is quantized as 896
#4 opened by zhuango - 1
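On #4: a gap like this is expected once the element format keeps only two explicit mantissa bits (e.g. fp6_e3m2 or fp8_e5m2); whether that matches the configuration in the issue is an assumption. With two mantissa bits the representable values between 512 and 1024 are 512, 640, 768 and 896, and the next value up is 1024, so 957 rounds to 896 (|957 - 896| = 61 versus |1024 - 957| = 67). A small check of that arithmetic:

```python
import math

def round_to_2bit_mantissa(x: float) -> float:
    """Round-to-nearest onto values of the form (1 + m/4) * 2**e, m in 0..3."""
    e = math.floor(math.log2(abs(x)))  # exponent of the leading bit
    step = 2.0 ** (e - 2)              # grid spacing with 2 explicit mantissa bits
    return round(x / step) * step

print(round_to_2bit_mantissa(957))  # 896.0
```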
Action required: migrate or opt-out of migration to GitHub inside Microsoft
#2 opened by microsoft-github-policy-service