ngxson/wllama

404-ed model still ended up in wllama_cache?

Closed this issue · 3 comments

Because I made a typo in the URL of a local model file, I noticed something strange: the invalid URL ended up in wllama_cache anyway.

I checked whether my own code was perhaps adding the URL to the cache manually, but I couldn't find anything that does that.

The result is that my project thinks the model is cached, but then Wllama crashes when trying to load it.

Perhaps it cached the 404 page?

(Screenshot: 2024-05-24 at 13:08:37)

(Currently my code tries to detect an event object and treats that as an indication that the download process failed, as opposed to the loading of the model itself. See also: #56.)

Oh yeah! I've faced this problem too!

FYI, for the downloading function, I'm planning to replace XMLHttpRequest with the modern fetch API. This should fix the problem.
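For context, the underlying pitfall is that neither XMLHttpRequest's load event nor fetch's resolved promise implies a successful download: an HTTP 404 still produces a completed response, so a naive handler will happily cache the error page's body. A minimal sketch of a fetch-based downloader that guards the cache (the function name and cache shape here are illustrative, not wllama's actual API):

```typescript
// Hypothetical sketch: only store the downloaded model in the cache
// when the HTTP response actually succeeded.
async function downloadToCache(
  url: string,
  cache: Map<string, ArrayBuffer>
): Promise<ArrayBuffer> {
  const res = await fetch(url);
  // fetch() only rejects on network errors; an HTTP 404 still resolves,
  // so the status must be checked explicitly before caching.
  if (!res.ok) {
    throw new Error(`Failed to download ${url}: HTTP ${res.status}`);
  }
  const buf = await res.arrayBuffer();
  cache.set(url, buf); // cache only after a successful (2xx) response
  return buf;
}
```

With this guard, a mistyped URL throws instead of silently populating the cache with an error page.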

Resolved by #59. Any invalid URL (anything that does not return status code 200) is now rejected by the fetch-based downloader.