This demo showcases how to run an offline GPT model with LM Studio inside a .NET 8 Blazor app.
- Download and install LM Studio.
- Once LM Studio is installed, download the Phi-3 model (use the 2 GB version if your machine does not have at least 8 GB of VRAM).
- Go to the "Local Server" tab.
- Select model.
- Enable "Cross-Origin-Resource-Sharing (CORS)".
- Disable "Verbose Server Logging".
- (Optional) Under "Advanced Settings" on the right, set the GPU memory limit to max in "GPU Settings".
- Stop the server and apply changes if required.
- Start the server.
- Ensure you have the latest .NET 8 SDK installed (Windows: `winget install dotnet-sdk-8`).
- Run the application in Visual Studio, or run `dotnet run` in the `src/HomeAutomationGpt` folder.
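Once the server is running, you can sanity-check it from any HTTP client before wiring up the Blazor app. Below is a minimal Python sketch; it assumes LM Studio's default local server address (`http://localhost:1234`) and its OpenAI-compatible `/v1/chat/completions` endpoint, and the model name `"phi-3"` is a placeholder (LM Studio serves whichever model you loaded):

```python
import json
import urllib.request

# LM Studio's default local server address (assumption: default port 1234).
LM_STUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "phi-3") -> dict:
    """Build an OpenAI-style chat-completion payload for the local server."""
    return {
        "model": model,  # placeholder; LM Studio uses the model loaded in the UI
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def send_chat_request(prompt: str) -> str:
    """POST the prompt to the local server and return the reply text."""
    body = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        LM_STUDIO_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]
```

Calling `send_chat_request("Turn on the lights in the living room.")` requires the LM Studio server to be running; if you get a connection error, re-check the "Local Server" tab.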
Alternatively, to run with Docker, execute the following commands from the root of the repo:
```shell
docker build -t home-automation-gpt .\src\HomeAutomationGpt\
docker run -d -p 8080:80 home-automation-gpt
```
Then open http://localhost:8080/ in your browser.
Try various prompts to see the home automation in action:
- "Turn on the lights in the living room."
- "Turn off the lights in the kitchen."
- "Set the temperature to 23 degrees."
- "It's really cold" (This will set the A/C to 23, turn on the TV, and turn on the light in the living room).
- "Turn off all devices but keep the A/C on."
- "I need more light."
- "I'm feeling cold."