ahyatt/llm

Error callback not called if URL request fails

Closed this issue · 1 comment

Hi @ahyatt

Minimal code to reproduce:

;; Setup 1: an Ollama provider pointed at an unused port.
(require 'llm-ollama)
(setq provider
      (make-llm-ollama
       :chat-model "1" :embedding-model "2" :port 3333))

;; Setup 2: an OpenAI-compatible provider pointed at the same unused port.
;; Either setup reproduces the problem; this one overwrites the first.
(require 'llm-openai)
(setq llm-warn-on-nonfree nil)
(setq provider
      (make-llm-openai-compatible
       :key "0"
       :chat-model "1" :embedding-model "2" :url "http://localhost:3333"))

(llm-chat-streaming provider (llm-make-simple-chat-prompt "test")
                    #'ignore #'ignore
                    ;; The error callback receives the error type and message.
                    (lambda (_type _msg)
                      (message "error callback called")))

No process is listening on port 3333, so the request fails immediately. The issue reproduces with both providers.

The "error callback called" message is never printed.

This is fixed, but keep in mind that you can still get errors thrown from the initial (synchronous) part of llm calls. For example, badly configured providers may signal errors, such as when OpenAI providers aren't initialized with a key. It's just that all asynchronous failures should go to the error callback.
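
To make the distinction concrete, here is a minimal sketch of handling the two failure modes separately. It is not from the library's documentation; the helper name my-safe-chat is hypothetical, and the error callback signature (error type plus message) is my assumption about the llm API:

(require 'llm)

;; Hypothetical helper: synchronous errors, such as a provider
;; signaling because no :key was supplied, must be caught around the
;; call itself with `condition-case'; asynchronous errors, such as a
;; refused connection, arrive via the error callback.
(defun my-safe-chat (provider prompt)
  (condition-case err
      (llm-chat-streaming provider prompt
                          #'ignore
                          (lambda (response)
                            (message "response: %s" response))
                          (lambda (type msg)
                            (message "async error (%s): %s" type msg)))
    (error (message "sync error: %s" (error-message-string err)))))

;; Usage: (my-safe-chat provider (llm-make-simple-chat-prompt "test"))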