Errors using the chat
I just installed the plugin, but I can't get it to work. I have several Ollama models installed, which work fine, but I'm getting this error whenever I try using the chat:
*An error occurred. See console logs for details.*
And this is in the console:
```
plugin:smart-connections:2397  POST https://openrouter.ai/api/v1/chat/completions 429 (Too Many Requests)
plugin:smart-connections:3095  CustomEvent {isTrusted: false, data: '{"error":{"message":"Rate limit exceeded: free-models-per-day","code":429}}', source: SmartStreamer, detail: null, type: 'error', …}
plugin:smart-connections:2989  CustomEvent {isTrusted: false, data: '{"error":{"message":"Rate limit exceeded: free-models-per-day","code":429}}', source: SmartStreamer, detail: null, type: 'error', …}
```
I'm also not seeing a way to choose the chat model, just the embed_model; is that correct? I've tried several of those, but no change.
Any ideas?
This is in a test vault with only one note, so there's not much to index. I wanted to test the chat before I started "connecting".
Thanks
I just realized why it doesn't work: the settings for API keys and models have moved to a different settings page (whose icon was initially hidden for me).
That's where you can find them. The error is the result of no API key being configured.
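If you want to confirm it's the missing key rather than the plugin, you can replay the same request outside Obsidian. A minimal sketch, assuming Node 18+ (for built-in `fetch`); the model id is just an example, swap in whichever one you use:

```ts
// Replays the request the plugin sends to OpenRouter. A 401/429 without a
// key, and a 200 with a valid OPENROUTER_API_KEY, points at key setup.
const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OPENROUTER_API_KEY ?? ""}`,
  },
  body: JSON.stringify({
    model: "openrouter/auto", // example model id
    messages: [{ role: "user", content: "ping" }],
  }),
});
console.log(res.status, await res.text());
```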
Oh, cool, I hadn't seen that button, it was being covered by the dev console lol.
Which Model Platform do I choose for Ollama? "Custom Local (OpenAI format)"?
And what do I need to put into hostname, protocol, path, and port?
Exactly my question too!
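In case it helps: going by Ollama's docs (it serves on port 11434 and exposes an OpenAI-compatible endpoint under /v1), my best guess at the field mapping for "Custom Local (OpenAI format)" would be:

```ts
// My guess at the settings, based on Ollama's documented defaults;
// the field names mirror the plugin's settings UI, not an official mapping.
const ollamaSettings = {
  protocol: "http",             // Ollama serves plain HTTP locally
  hostname: "localhost",
  port: 11434,                  // Ollama's default port
  path: "/v1/chat/completions", // Ollama's OpenAI-compatible chat endpoint
};
```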