openvinotoolkit/openvino_notebooks
📚 Jupyter notebook tutorials for OpenVINO™
Jupyter Notebook · Apache-2.0
Issues
Unable to run qwen2-vl inference on Intel integrated GPU. Works fine with CPU
#2740 opened by paks100 - 8
Unable to use GPU in Automatic speech recognition using Whisper and OpenVINO with Generate API.
#2689 opened by VaibMittal7 - 2
DeepSeek-R1-Distill-Qwen-1.5B got an error
#2729 opened by etchosts - 0
Request to Add DeepSeek R1 Qwen 32B
#2718 opened by ekurniaw - 0
Add mistral-7b-instruct v0.3 to the LLM config?
#2720 opened by JamieVC - 3
How to use Whisper on the NPU?
#2691 opened by hrshy0629 - 2
Issue with Intel GPU Memory on Qwen2 Inference Code - Error Code -5
#2632 opened by Logesh-Babu-ZS0169 - 3
Unable to run Text to Image and Image to Text models on NPU. [NPU Excluded]
#2680 opened by Harsha0056 - 1
llm-agent-rag-llamaindex error if meta-llama/Meta-Llama-3.1-8B-Instruct model is selected
#2670 opened by js333031 - 2
mllama-3.2 INT8 Quantized Model on GPU
#2639 opened by ekurniaw - 3
Flux.1 image generation does not yet support iGPU and compile mode, but PRs are ready
#2646 opened by JamieVC - 2
Dynamic speculative decoding is significantly slower than auto-regressive and than speculative decoding generation
#2621 opened by shira-g - 3
[Feature Request] CogAgent-9B OpenVINO support
#2624 opened by sanbuphy - 5
Instant-ID returns a black image and reports "invalid value encountered in cast"
#2622 opened by dannyweng88122 - 1
Typo in qwen2-vl.ipynb
#2620 opened by jkjung-avt - 3
Have Intel's engineers tested this themselves?
#2612 opened by ranzsz - 5
Notebook "llm rag llama-index" fails to initialize
#2587 opened by JamieVC - 5
Generic NPU optimizing notebook
#2531 opened by SRai22 - 7
How Can I Run the LLM-Chatbot on NPU?
#2573 opened by tim102187S - 5
Notebook "llm-rag-llamaindex" crashes when running on GPU
#2591 opened by JamieVC - 5
NPU support on yolov8
#2398 opened by weberwcwei - 4
How to use NPU while compiling hello-world
#2575 opened by bonihaniboni - 2
(Optimization of LLM inference) Does Intel OpenVINO support offloading LLM models, allowing some layers to remain on the SSD while loading the main layers into RAM during inference computation?
#2533 opened by hsulin0806 - 2
Visual-language assistant with Pixtral and OpenVINO
#2489 opened by matrix1233 - 7
llava-multimodal-chatbot-genai fails to run
#2484 opened by Johere - 0
Possibly something outdated when quantizing YOLO11
#2493 opened by weiyusheng - 6
Memory Leak in "Human 3D Pose Estimation" https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/3D-pose-estimation-webcam/3D-pose-estimation.ipynb
#2420 opened by TheArbitraryConstant - 4
Loading the embedding model with NPU does not work
#2364 opened by Nicogs43 - 2
IP-Adapter Plus is not working with full face
#2400 opened by circuluspibo - 9
Qwen2-vl unable to run
#2393 opened by afreedizDB - 0
OpenVoice inference doesn't work on Intel Arc A770
#2346 opened by maxkim-kr - 3
iGPU memory usage problem
#2241 opened by NNsauce - 1
RuntimeError: probability tensor contains either `inf`, `nan` or element < 0 when running parler-tts-text-to-speech.ipynb
#2301 opened by eugeooi - 2
Why not Python sample code instead of Colab?
#2215 opened by wb666greene - 3
llm-chatbot-generate-api notebook fails during pip / git install routine
#2273 opened by RyanMetcalfeInt8 - 2
supplementary_materials/qwen2 has an API
#2249 opened by show1abc - 2
Cannot create StringTensorUnpack layer StringTensorUnpack_220719 id:5 from unsupported opset: extension
#2263 opened by GaryLin-04