Support for locally hosted LLMs
Schallabajzr opened this issue · 4 comments
I was following a blog post and self-hosting both:
The requests to these servers work via curl:

```
$ curl http://localhost:8080/v1/models
{"object":"list","data":[{"id":"gpt-3.5-turbo","object":"model"}]}
```
But when I try to communicate with the self-hosted instance (running inside Docker at localhost:8080) through the Text Generator plugin, with the model renamed to gpt-3.5-turbo, I get the following error:
```
[!failure]- Failure

TypeError: Failed to fetch
    eval                   plugin:obsidian-textgenerator-plugin:57307:9
    new Promise
    TextGenerator.eval     plugin:obsidian-textgenerator-plugin:57291:14
    Generator.next
    eval                   plugin:obsidian-textgenerator-plugin:78:61
    new Promise
    __async                plugin:obsidian-textgenerator-plugin:62:10
    TextGenerator.request  plugin:obsidian-textgenerator-plugin:57290:12
    TextGenerator.eval     plugin:obsidian-textgenerator-plugin:57275:86
    Generator.next
```
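For completeness, the chat endpoint itself can be checked from a terminal to rule out the server; a sketch, assuming the self-hosted instance exposes the standard OpenAI-compatible `/v1/chat/completions` route (the request body below is an assumption, not taken from the blog post):

```shell
# Sketch: confirm the self-hosted server answers chat requests outside Obsidian.
# Assumes an OpenAI-compatible /v1/chat/completions route at localhost:8080.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "ping"}]
  }'
```

If this returns a completion while the plugin still fails with `TypeError: Failed to fetch`, the failure is likely on the request side inside Obsidian (e.g. CORS) rather than in the server.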
Support for locally hosted LLMs is coming soon. It is in the beta release and will be out shortly.
+1 on this as well. LMStudio support with this will be awesome.
Any related tasks on this @haouarihk, or are you doing all this in private?
I forgot to mention it there: LM Studio support is out.
If you want up-to-date tasks, you can join our Discord community.
Relevant Discord link here; below is a copy from the thread, c/o haitam (526577079509057537). https://discord.com/channels/1083485983879741572/1159894948636799126
- step 1: Download LM Studio and install a model of your choice.
- step 2: Start the server with CORS enabled, and don't forget to copy the endpoint as shown in the image.
- step 3: Go to the plugin settings, select OpenAI (either chat or instruct), and paste in the new base path.

Note that it will not work without the API key being set to something; we recommend setting it to random text like "asdf".

If generation is slow and you have a decent GPU, make sure to set these.
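The steps above can be sanity-checked from a terminal before touching the plugin; a minimal sketch, assuming LM Studio's default endpoint `http://localhost:1234/v1` (copy the real one from the server tab) and a throwaway API key, since the key only needs to be non-empty:

```shell
# Sketch: test the LM Studio server before pointing the plugin at it.
# The endpoint and model name are assumptions; use the values LM Studio shows you.
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer asdf" \
  -d '{
    "model": "local-model",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```

If this works but the plugin does not, re-check that CORS is enabled in the LM Studio server settings.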