
Farfalle

Open-source AI-powered search engine.

Run your local LLM (llama3, gemma, mistral) or use cloud models (Groq/Llama3, OpenAI/gpt-4o).

Demo answering questions with llama3 on my M1 MacBook Pro:

local-demo.mp4

💻 Live Demo

farfalle.dev (Cloud models only)

📖 Overview

  • 🛣️ Roadmap
  • 🛠️ Tech Stack
  • 🏃🏿‍♂️ Getting Started
  • 🚀 Deploy

🛣️ Roadmap

  • Add support for local LLMs through Ollama
  • Docker deployment setup
  • Integrate with LiteLLM
  • Add support for SearXNG, eliminating the need for an external search API

🛠️ Tech Stack

  • Frontend: Next.js (TypeScript)
  • Backend: FastAPI
  • Search API: Tavily
  • LLMs: Ollama (local), Groq and OpenAI (cloud)

🏃🏿‍♂️ Getting Started

Prerequisites

  • Docker
  • Ollama (only needed for local models)
    • Download one of the supported models: llama3, mistral, gemma
    • Start the Ollama server with ollama serve, as shown below
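
For example, to download llama3 and start the server:

ollama pull llama3
ollama serve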

Get API Keys

  • Tavily (required)
  • OpenAI (optional, for cloud models)
  • Groq (optional, for cloud models)

1. Clone the Repo

git clone git@github.com:rashadphz/farfalle.git
cd farfalle

2. Add Environment Variables

touch .env

Add the following variables to the .env file:

Required

TAVILY_API_KEY=...

Optional

# Cloud Models
OPENAI_API_KEY=...
GROQ_API_KEY=...
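
For reference, a .env set up for both local and cloud models might look like this (the key values are placeholders):

TAVILY_API_KEY=your-tavily-key
OPENAI_API_KEY=your-openai-key
GROQ_API_KEY=your-groq-key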

3. Run Containers

This requires Docker Compose version 2.22.0 or later.
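
You can check your installed version with:

docker-compose --version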

docker-compose -f docker-compose.dev.yaml up -d

Visit http://localhost:3000 to view the app.
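
To confirm the containers are running, list them with:

docker-compose -f docker-compose.dev.yaml ps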

🚀 Deploy

Backend

Deploy to Render

After the backend is deployed, copy the web service URL to your clipboard. It should look something like: https://some-service-name.onrender.com.

Frontend

Use the copied backend URL in the NEXT_PUBLIC_API_URL environment variable when deploying with Vercel.
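
For example, with the backend URL from the previous step:

NEXT_PUBLIC_API_URL=https://some-service-name.onrender.com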

Deploy with Vercel

And you're done! 🥳