Error handling model response
Opened this issue · 1 comment
mrjjq commented
Before submitting your bug report
- I've tried using the "Ask AI" feature on the Continue docs site to see if the docs have an answer
- I believe this is a bug. I'll try to join the Continue Discord for questions
- I'm not able to find an open issue that reports the same bug
- I've seen the troubleshooting guide on the Continue Docs
Relevant environment info
- OS:
- Continue version:
- IDE version:
- Model:
- config:
  OR link to agent in Continue hub:
Description
models:
  - name: qwen3-coder-30b
    provider: ollama
    model: qwen3-coder:30b
    apiBase: http://192.168.2.157:11434
There was an error handling the response from qwen3-coder-30b.
Please try to submit your message again, and if the error persists, let us know by reporting the issue using the buttons below.
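For reference, a minimal sketch of where a model block like the one above would normally sit in a full config.yaml. The top-level name/version/schema fields and the roles list are assumptions added for illustration, not taken from this report:

# Hypothetical full config.yaml (assumed layout, not from the original report)
name: Local Assistant
version: 1.0.0
schema: v1
models:
  - name: qwen3-coder-30b
    provider: ollama
    model: qwen3-coder:30b
    # apiBase points at the remote Ollama server from the report
    apiBase: http://192.168.2.157:11434
    # roles are an assumption; adjust to however the model is actually used
    roles:
      - chat
      - edit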
To reproduce
No response
Log output
web2bruno commented
I am also seeing this error, but with Qwen3 Coder 480B via Nano-GPT. The error usually occurs at the end, after it has finished working on my prompt. Any idea why this happens?