stackblitz-labs/bolt.diy

API Request Fails with LM Studio's LLM (CORS Enabled) and Unexpected Ollama Errors


Describe the bug

I encountered the following issues while using LM Studio and would like some assistance. I am running bolt.diy directly on WSL without using Docker.

Enabling CORS
As shown in the attached image, I enabled the "CORS" option in LM Studio's server settings. After doing so, I was able to select the LLM from the dropdown menu.

API request error
When I selected LM Studio's LLM and attempted to generate code, I received the following error message:
`There was an error processing your request: An error occurred.`

Ollama-related warnings
Despite not using the Ollama server, I see these warning messages in the terminal:

WARN Constants Failed to get Ollama models: fetch failed
WARN Constants Failed to get Ollama models: fetch failed
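
My guess is that a model-list refresh runs at startup for every configured provider, including Ollama, and just logs a warning when the fetch fails. A minimal sketch of that kind of probe, purely my assumption and not bolt.diy's actual code (the /api/tags route and default port are standard Ollama; getOllamaModels here is hypothetical):

```typescript
// Hypothetical sketch: probe the default Ollama endpoint for its model list.
// If no Ollama server is running, fetch() rejects and only a warning is logged,
// which would match the "Failed to get Ollama models: fetch failed" messages above.
const OLLAMA_API_BASE_URL = process.env.OLLAMA_API_BASE_URL ?? 'http://localhost:11434';

interface OllamaModel {
  name: string;
}

async function getOllamaModels(): Promise<OllamaModel[]> {
  try {
    const response = await fetch(`${OLLAMA_API_BASE_URL}/api/tags`);
    const data = (await response.json()) as { models: OllamaModel[] };
    return data.models;
  } catch (error) {
    console.warn('Failed to get Ollama models:', error);
    return []; // the provider simply contributes no models when unreachable
  }
}
```

If that is what is happening, the warning would be harmless noise rather than the cause of the chat error.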


Environment Details:
WSL (Windows Subsystem for Linux): Running bolt.diy directly (without Docker).
I would appreciate any guidance on:

  • Why the API request to LM Studio's server fails.
  • Why Ollama warnings appear even though I am not using it.
Thank you for your support!

Link to the Bolt URL that caused the error

http://localhost:5173/chat/10

Steps to reproduce

I entered a prompt, but I got an error.

Expected behavior

In the debug-info screenshot, the URL bolt.diy uses for LM Studio's chat API appears to be different from the one LM Studio reports.

Screen Recording / Screenshot

Screenshot 2024-12-18 215035
Screenshot 2024-12-18 215051
Screenshot 2024-12-18 215143
image

Platform

  • OS: Linux
  • Browser: Chrome
  • Version: eb6d435 (v0.0.3) - stable

Provider Used

No response

Model Used

No response

Additional context

I am getting the following error on the LMStudio server.
2024-12-18 21:50:37 [ERROR] Unexpected endpoint or method. (GET /api/health). Returning 200 anyway

This makes me think that the URL bolt.diy requests on the LM Studio API is different from what LM Studio actually serves.

2024-12-18 20:52:33 [INFO] [LM STUDIO SERVER] Success! HTTP server listening on port 1234
2024-12-18 20:52:33 [INFO]
2024-12-18 20:52:33 [INFO] [LM STUDIO SERVER] Supported endpoints:
2024-12-18 20:52:33 [INFO] [LM STUDIO SERVER] -> GET http://localhost:1234/v1/models
2024-12-18 20:52:33 [INFO] [LM STUDIO SERVER] -> POST http://localhost:1234/v1/chat/completions
2024-12-18 20:52:33 [INFO] [LM STUDIO SERVER] -> POST http://localhost:1234/v1/completions
2024-12-18 20:52:33 [INFO] [LM STUDIO SERVER] -> POST http://localhost:1234/v1/embeddings
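
To check the server independently of bolt.diy, the POST /v1/chat/completions route listed above can be called directly. A small sketch of such a check (the "local-model" name is just a placeholder for whatever model is loaded in LM Studio):

```typescript
// Call LM Studio's OpenAI-compatible chat endpoint directly to confirm it responds.
// "local-model" is a placeholder; LM Studio answers with whichever model is loaded.
async function testLmStudioChat(baseUrl = 'http://localhost:1234') {
  const response = await fetch(`${baseUrl}/v1/chat/completions`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'local-model',
      messages: [{ role: 'user', content: 'Say hello in one word.' }],
    }),
  });

  if (!response.ok) {
    throw new Error(`LM Studio returned ${response.status} ${response.statusText}`);
  }

  const data = await response.json();
  console.log(data.choices?.[0]?.message?.content);
}

testLmStudioChat().catch(console.error);
```

If this works but bolt.diy still fails, the problem is more likely the base URL bolt.diy is configured to use than LM Studio itself.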

In LM Studio, try enabling "Serve on Local Network".
If that does not work, also try the local IP address of the computer that LM Studio is running on.
In this video I used Docker for bolt.diy and Ollama, but it might help: https://youtu.be/TMvA10zwTbI

Thank you very much.
I have turned on "Serve on Local Network" on the LM Studio server, but the error did not improve.

With the Docker method, I was able to use LM Studio's model.

Thank you very much for teaching me how to do this.
Screenshot 2024-12-18 235018

Sorry for the confusion; here is an update.
I re-cloned the repository this morning and also turned on serving the LM Studio URL on the local network.
I set this URL in .env.local and started it up, and it worked on WSL.
I was able to use it on WSL without using Docker.
Screenshot 2024-12-19 104005
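
For anyone hitting the same thing on WSL: a quick way to confirm that the base URL you put in .env.local is reachable is to hit the GET /v1/models endpoint from LM Studio's list above. This is only a sketch; LMSTUDIO_API_BASE_URL is the variable name I assume here, so check .env.example for the exact key:

```typescript
// Confirm that the LM Studio base URL configured in .env.local is reachable
// by listing its models. LMSTUDIO_API_BASE_URL is an assumed variable name.
const baseUrl = process.env.LMSTUDIO_API_BASE_URL ?? 'http://localhost:1234';

fetch(`${baseUrl}/v1/models`)
  .then((res) => res.json())
  .then((data) => console.log('Models reachable at', baseUrl, ':', data))
  .catch((err) => console.error('LM Studio is not reachable at', baseUrl, err));
```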