huggingface/optimum
🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy-to-use hardware optimization tools
Python · Apache-2.0
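Most of the issues below concern Optimum's ONNX export path. For context, a typical export goes through the `optimum-cli` entry point; a minimal sketch (the model ID is illustrative, substitute any Hub checkpoint with a supported architecture):

```shell
# Install Optimum with the ONNX Runtime extras, then export a Hub model
# to an ONNX graph in a local directory.
pip install "optimum[onnxruntime]"
optimum-cli export onnx --model distilbert-base-uncased distilbert_onnx/
```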
Pinned issues
- Community contribution - `BetterTransformer` integration for more models! (#488, opened by younesbelkada, Open, 25 comments)
- Community contribution - `optimum.exporters.onnx` support for new models! (#555, opened by michaelbenayoun, Closed, 43 comments)
Issues
- Support for GliNER (#2182, opened by polodealvarado, 3 comments)
- Support onnx conversion for wav2vec2-bert (#2082, opened by fawazahmed0, 1 comment)
- Pruning & Knowledge Distillation For Onnx Format (#2178, opened by harshakhmk, 1 comment)
- Feature Request: ONNX Opset 21 Support (#2180, opened by aendk, 1 comment)
- Add ONNX export optimization support for ModernBERT (#2177, opened by amas0, 1 comment)
- PEFT to ONNX conversion (#2189, opened by morteza89, 1 comment)
- Failed to export Llama with past-key-values to ONNX (#2204, opened by ahmedehabessa, 1 comment)
- Bug exporting Whisper? (#2200, opened by AlArgente, 3 comments)
- Support for RT-DETR model export to onnx (#2176, opened by fahadishaq1, 1 comment)
- optimum cant use custom pipelines (#2170, opened by xiaoyao9184, 1 comment)
- Issue when converting Exaone 3.0 7.8B model (#2202, opened by Zhaeong, 2 comments)
- Fix optimum-cli export executorch (#2172, opened by guangy10, 1 comment)
- Support for `jina-embeddings-v3` (#2166, opened by arianomidi, 2 comments)
- Revisit PR 1874 (#2160, opened by jambonne, 1 comment)
- ORTModel.device setter raises unexpected error (#2197, opened by mdambski, 4 comments)
- Export-to-ExecuTorch via Optimum integration (#2128, opened by guangy10, 0 comments)
- no __version__ attribute (#2188, opened by umbilnm, 1 comment)
- RuntimeError: Unable to find data type for weight_name='/model/layers.0/self_attn/k_proj/MatMul_output_0' (#2186, opened by dongwonmoon, 1 comment)
- Support for ONNX export of UMT5 (#2142, opened by cyanic-selkie, 2 comments)
- Support for ONNX export of SeamlessM4TModel (#2174, opened by AlArgente, 1 comment)
- An installed package with a different distribution name is not properly detected by Optimum (#2163, opened by kazssym, 0 comments)
- Support for exaone models (#2167, opened by Zhaeong, 1 comment)
- OnnxRuntime Support for Text2Video and Img2Video Pipelines (#2168, opened by jdp8, 0 comments)
- Allow access restricted models in the CI (#2127, opened by guangy10, 1 comment)
- --dtype fp16 does not decrease the model size (#2156, opened by chansonzhang, 1 comment)
- Doesnt recognize model type 'modernbert' (#2149, opened by kguruswamy, 0 comments)
- Slim pypi packages (#2143, opened by twoertwein, 2 comments)
- KeyError: 'swinv2 model type is not supported yet in NormalizedConfig. (#2140, opened by Billybeast2003, 6 comments)
- High CUDA Memory Usage in ONNX Runtime with Inconsistent Memory Release (#2069, opened by niyathimariya, 1 comment)
- install instructions result is pip version conflicts. (#2125, opened by hpcpony, 1 comment)
- GPTQ kernel inference not compatible with some models (#2120, opened by Qubitium, 2 comments)
- Add support for RemBERT in the ONNX export (#2092, opened by mlynatom, 5 comments)
- Flux Pipeline doesn't work (#2103, opened by clintg6, 4 comments)
- Stable Diffusion 3 ONNX support (#2093, opened by gmarcosf, 2 comments)
- Add support for Musicgen Melody in the ONNX export (#2095, opened by rubeniskov, 0 comments)
- TFJS support model.json to ONNX conversion (#2097, opened by JohnRSim, 0 comments)
- "ValueError: Trying to export a codesage model" while trying to export codesage/codesage-large (#2080, opened by TurboEncabulator9000, 0 comments)
- LLama 3.2 vision - unable to convert (#2079, opened by pdufour, 0 comments)
- Problem converting DeBERTaV3 to ONNX using optimum-cli (#2075, opened by marcovzla, 0 comments)
- Conversion innaccuracy specific Opus-MT model (#2068, opened by FricoRico)