Simple LLM Search (in 200 lines)

A simple conversation-based search demo, based on Lepton AI's work.

Supported LLM Providers

  • OpenAI
  • Azure
  • ZhipuAI

Setup

  1. Install the Python dependencies
pip install fastapi loguru toml zhipuai openai
  2. Configure your LLM providers' API keys in secrets.toml (see the configuration sketch after this list)
  3. Run the backend
uvicorn backend:app
  4. Build the frontend
cd web && npm install && npm run build
  5. Run the frontend
export SERVER_URL=http://localhost:8000 && npm run dev
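The repo does not document the exact layout of secrets.toml, so the following is a minimal sketch of how the backend could read it, assuming one table per provider; the table and key names (openai, zhipuai, api_key) are illustrative assumptions, and only packages from the pip install step above are used.

    # config_sketch.py -- hypothetical example; the table/key names are assumptions.
    # Check backend.py and secrets.toml in this repo for the real schema.
    import toml
    from openai import OpenAI  # openai>=1.0 client

    # Assumed secrets.toml layout:
    #   [openai]
    #   api_key = "sk-..."
    #
    #   [zhipuai]
    #   api_key = "..."
    secrets = toml.load("secrets.toml")

    client = OpenAI(api_key=secrets["openai"]["api_key"])
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Say hello."}],
    )
    print(resp.choices[0].message.content)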

Search with Lepton

Build your own conversational search engine using less than 500 lines of code.
Live Demo

Features

  • Built-in LLM support
  • Built-in search engine support
  • Customizable, pretty UI
  • Shareable, cached search results

Setup Search Engine API

Note

Get your Bing Web Search v7 subscription key from the Azure portal.
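
If you want to verify your key before wiring it into the server, here is a minimal sketch of calling the Bing Web Search v7 API directly. It assumes the requests package (not in this repo's dependency list); search_with_lepton.py makes an equivalent call for you when BACKEND=BING.

    # bing_check.py -- minimal Bing Web Search v7 call to test a subscription key.
    # Assumes `pip install requests`; not part of this repo's code.
    import os
    import requests

    BING_ENDPOINT = "https://api.bing.microsoft.com/v7.0/search"

    def bing_search(query, count=8):
        resp = requests.get(
            BING_ENDPOINT,
            headers={"Ocp-Apim-Subscription-Key": os.environ["BING_SEARCH_V7_SUBSCRIPTION_KEY"]},
            params={"q": query, "count": count, "mkt": "en-US"},
            timeout=10,
        )
        resp.raise_for_status()
        # Each hit carries "name", "url", and "snippet" fields.
        return resp.json().get("webPages", {}).get("value", [])

    if __name__ == "__main__":
        for page in bing_search("lepton ai"):
            print(page["name"], "-", page["url"])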

Setup LLM and KV

Note

We recommend using Lepton's built-in LLM and KV functions. Run the following command to set them up automatically.

pip install -U leptonai && lep login

Build

  1. Set the Bing subscription key
export BING_SEARCH_V7_SUBSCRIPTION_KEY=YOUR_BING_SUBSCRIPTION_KEY
  2. Build the web frontend
cd web && npm install && npm run build
  3. Run the server (see the example request below)
BACKEND=BING python search_with_lepton.py
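
Once the server is running you can send it a query. The endpoint path, port, and field names below are assumptions based on the upstream Lepton demo (a POST /query accepting the search query and a caching UUID); check search_with_lepton.py for the actual interface and the port printed at startup.

    # query_sketch.py -- hypothetical client for the local server.
    # Endpoint, port, and field names are assumptions; verify against search_with_lepton.py.
    import uuid
    import requests  # assumed extra dependency: pip install requests

    SERVER = "http://localhost:8080"  # adjust to the port shown when the server starts

    resp = requests.post(
        f"{SERVER}/query",
        json={
            "query": "what is lepton ai?",
            "search_uuid": str(uuid.uuid4()),  # used to cache and share results
        },
        stream=True,
        timeout=60,
    )
    resp.raise_for_status()

    # The answer is streamed back; print chunks as they arrive.
    for chunk in resp.iter_content(chunk_size=None, decode_unicode=True):
        print(chunk, end="")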

Deploy

You can deploy this to Lepton AI with one click:

Deploy with Lepton AI

You can also deploy your own version via

lep photon run -n search-with-lepton-modified -m search_with_lepton.py --env BACKEND=BING --env BING_SEARCH_V7_SUBSCRIPTION_KEY=YOUR_BING_SUBSCRIPTION_KEY

Learn more about lep photon in the Lepton AI documentation.