Fireside Chat

(prev. "Candle Chat")

An LLM interface implemented in pure Rust using HuggingFace/Candle over Axum Websockets, an SQLite Database, and a Leptos (Wasm) frontend packaged with Tauri!

Watch the introduction video.

Goals

This project is designed for single- and multi-user chat with many Large Language Models (LLMs).

Features

  • Local or Remote Inference Backend
  • Local or Remote SQLite Database

Setup / Operation

You can configure your model and default inference settings by putting files in your Config Directory. This is configured automatically when you choose a model in the frontend, but you can also add models manually.

Examples:

# config_model.yaml
repo_id: DanielClough/Candle_Puffin-Phi-v2
q_lvl: q2k
revision: main
tokenizer_file: null
weight_file: null
quantized: true
cpu: false
use_flash_attn: false
template: ShareGPT

# config_inference.yaml
temperature: 
top_p: 
seed: 299792458
sample_len: 150
repeat_penalty: 1.3
repeat_last_n: 150
load_context: false
role: 
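
As a sketch, the two example files above can be saved into the Config Directory (see the Directories section below); the file names follow the comments in the example, and the copy step assumes you have written the files locally first.

# e.g. for Linux; file contents are the examples above
mkdir -p $HOME/.config/fireside-chat
cp config_model.yaml config_inference.yaml $HOME/.config/fireside-chat/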

If load_context: true, then you can add (small) context files in <Config Directory>/fireside-chat/context/. Large files may cause Out Of Memory errors.
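
As a rough sketch, assuming the context path resolves to $HOME/.config/fireside-chat/context/; the file name and contents below are hypothetical examples.

# e.g. for Linux; notes.txt and its contents are hypothetical examples
mkdir -p $HOME/.config/fireside-chat/context
echo "Some short background notes for the chat." \
  > $HOME/.config/fireside-chat/context/notes.txt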

Directories

Config Directory is $HOME/.config/fireside-chat

Cache Directory is $HOME/.cache/hugging-face

Development

You can compile with the environment variables FIRESIDE_BACKEND_URL and FIRESIDE_DATABASE_URL to call a server other than localhost.

These can be configured in tauri.conf.json, or in your system environment.

# e.g. for Linux
export FIRESIDE_BACKEND_URL=192.168.1.6 && trunk serve
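
As a sketch, both variables can be exported before serving; the addresses below are hypothetical examples, and the expected value format of FIRESIDE_DATABASE_URL is an assumption here.

# e.g. for Linux; both addresses are hypothetical examples
export FIRESIDE_BACKEND_URL=192.168.1.6
export FIRESIDE_DATABASE_URL=192.168.1.7
trunk serve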

Limitations

  • I am not testing in Windows environments.