twinnydotdev/twinny

Can't connect to my Ollama

Closed this issue · 7 comments

Describe the bug
Can't detect Ollama models.

To Reproduce
Steps to reproduce the behavior:

  1. brew services start ollama
  2. open VSCode
  3. Click twinny
  4. Select active model
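
(As a quick sanity check after step 1, the Ollama root endpoint answers with a plain-text banner when the server is listening; 11434 is Ollama's default port.)

# prints "Ollama is running" when the server is up
curl http://localhost:11434/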

Expected behavior
A pull-up menu for selecting the model appears.

Screenshots

[screenshot]

Desktop (please complete the following information):

  • OS: macOS Sonoma
  • Browser: Chrome
  • Version: Chrome 122, VSCode 1.87

Additional context
None.

Could you please provide more information about your problem? Many thanks.

Sure.

  1. I installed Ollama and twinny, and it worked at the time: I saw a pull-up menu for selecting the model.
  2. However, after I updated Ollama this week, twinny somehow couldn't detect my Ollama running in the background.
  3. I am sure Ollama itself works, because the following succeeds:
curl http://localhost:11434/api/tags
{"models":[{"name":"gemma:7b","model":"gemma:7b","modified_at":"2024-03-03T11:21:28.356683666+08:00","size":5202317200,"digest":"430ed3535049f562814f612a653457ba9fd390cccc77a94ab88964b8d8fd18a8","details":{"parent_model":"","format":"gguf","family":"gemma","families":["gemma"],"parameter_size":"9B","quantization_level":"Q4_0"}}]}% 
  4. The environment variable is set by
launchctl setenv OLLAMA_ORIGINS "http://localhost:*"
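
To double-check what the service actually sees, the variable can be read back with launchctl, and as a temporary experiment (not a permanent setting) allowing all origins rules CORS in or out:

# read back the value that launchd will pass to the service
launchctl getenv OLLAMA_ORIGINS

# temporarily allow any origin, then restart so it takes effect
launchctl setenv OLLAMA_ORIGINS "*"
brew services restart ollama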

So, I guess the requests from twinny are being rejected. Is there any way to trace twinny's activity with a detailed log to check what happened?
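
One way to test the rejection theory from the shell (a sketch, assuming the rejection is CORS-based; the Origin value below is made up) is to replay the tags request with and without an Origin header that OLLAMA_ORIGINS does not cover:

# no Origin header: should return 200 and the model list, as above
curl -i http://localhost:11434/api/tags

# disallowed Origin: a CORS-enforcing server would reject this (e.g. 403)
curl -i http://localhost:11434/api/tags -H "Origin: vscode-webview://twinny"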

Plus, my VSCode settings for twinny are:

{
  "twinny.enableCompletionCache": true,
  "twinny.fimTemplateFormat": "deepseek",
  "twinny.useFileContext": true,
  "twinny.useMultiLineCompletions": true,
  "twinny.useTls": true,
}

You are using useTls but connecting to http://localhost:11434/? What happens when you set useTls to false?
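
In other words, for a plain-HTTP local Ollama the flag in settings.json should match the endpoint:

{
  "twinny.useTls": false
}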

I set it to false and restarted VSCode, and it works now! Thank you so much @AntonKrug! You can close this issue as complete!

Thanks @AntonKrug for helping.

I'm not using TLS and seem to be having a similar issue, and I'm not sure how to debug it. I just get a spinning wheel forever and no memory allocation. I have no idea what is going on; everything is set up right, and even the following works:

curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-coder:1.3b",
  "prompt": "Why is the sky blue?"
}'
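
Note that /api/generate streams newline-delimited JSON chunks by default, which can look like a hang in a client expecting a single response. Adding "stream": false (a standard Ollama request option) makes curl print one complete JSON object instead:

curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-coder:1.3b",
  "prompt": "Why is the sky blue?",
  "stream": false
}'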

Please try updating Ollama and twinny to the latest versions, @nonetrix.
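
(Assuming Ollama is managed through Homebrew as in the original report, that would be:

brew upgrade ollama
brew services restart ollama

twinny itself updates through the VSCode Extensions view.)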