Issues
- ORTModelUnet config file not found after initial run (#1526, opened by willkara, 4 comments)
- BERT has not final model (#1439, opened by dangokuson, 2 comments)
- onnx GPU optimization (#1514, opened by heman-CL, 2 comments)
- `olive shared-cache` throws exception (#1509, opened by cecheta, 0 comments)
- Using a local model in input_model causes SameFileError in MergeAdapters Pass (#1442, opened by samuel100, 0 comments)
- olive auto-opt does not generate genai_config.json when --provider option is given (#1471, opened by natke, 1 comment)
- Stable Diffiusion example does not exist at all (#1475, opened by BrickDesignerNL, 0 comments)
- ORTStableDiffusionXLPipeline received config, but do not accept those arguments. (#1470, opened by jesenzhang, 0 comments)
- olive auto-opt with --model_builder --provider DmlExecutionProvider and --adapter errors out (#1464, opened by natke, 1 comment)
- Convert-adapters method output file name error when given name has dot inside (#1459, opened by liuyunms, 0 comments)
- olive auto-opt with --model_builder seems to run the onnxruntime optimizer then crashes (#1460, opened by natke, 3 comments)
- --use_model_builder does not work with olive auto-opt (#1454, opened by natke, 3 comments)
- --provider does not work with olive auto-opt (#1449, opened by natke, 4 comments)
- Shape error with .onnx_adapter after convert-adapter (#1451, opened by natke, 1 comment)
- Phi3 example error evaluators -> common_evaluator gqa_transformer_prompt_dummy_data not found in {} (type=value_error) (#1447, opened by MikeYeager, 4 comments)
- Use model builder does not work with capture-onnx-graph (#1450, opened by natke, 0 comments)
- Unable to get dummy inputs for the model (#1397, opened by dangokuson, 0 comments)
- [Bug]: ImportError: cannot import name 'load_model' (#1406, opened by Pilaf4567, 6 comments)
- ONNX quantization MatMul4BitsQuantizer model failed (#1411, opened by DimQ1, 1 comment)
- ONNX model optimization failed. (#1405, opened by prashant-saxena, 2 comments)
- phi3 inference RuntimeError (#1391, opened by khmyznikov, 6 comments)
- getting error while running llama2/bert on GPU (#1279, opened by himanshushukla12, 1 comment)
- Whisper olive setup error (#1312, opened by liuyulvv, 4 comments)
- capture-onnx-graph CLI Bug: list append() (#1348, opened by samuel100, 1 comment)
- Mistral int4 error (#1330, opened by eddan168, 7 comments)
- Mistral optimization(GPU) for a locally saved model, Failed to run Olive on gpu-cuda. (#1341, opened by tjinjin95, 0 comments)
- KeyError: 'unet_dataloader' occurs when optimizing unet in stable_diffusion_xl.py (#1327, opened by giocafe, 0 comments)
- error while inferencing the mistral LLM (#1309, opened by himanshushukla12, 3 comments)
- [FR]: Could not find a version that satisfies the requirement ort-nightly-directml==1.18.0 (from version: none) (#1280, opened by purejomo, 1 comment)
- Missing implementation error for CoreML (#1299, opened by thewh1teagle, 2 comments)
- whisper transcriptions is empty (#1291, opened by thewh1teagle, 0 comments)
- Very slow inference of optimized whisper gpu (#1300, opened by thewh1teagle, 0 comments)
- Optimize whisper medium gpu failed (#1298, opened by thewh1teagle, 3 comments)
- LLM Optimization with DirectML reply only displays "O"s (#1282, opened by yichunx1, 1 comment)
- Getting KeyError: 'input_model' when trying to optimize whisper-tiny.en model (#1283, opened by mram0509, 2 comments)
- Whisper optimization using ORT toolchain (#1264, opened by reeselevine, 2 comments)
- [FR]: Gather per-pass output logs (#1269, opened by skywall, 1 comment)