
🐦 larry.ai: A Batteries Included ChatGPT Frontend Framework & HTTP Proxy



💪 Motivation

  • There are already some good options for exposing your ChatGPT chatbot with a nice user interface, such as Gradio and Streamlit. However, customizing the look and feel of the UI in these frameworks is not straightforward.
  • larry.ai was created with two main principles:
    • Ease of use: just install it, configure your OpenAI token, and voilà - you have a sleek chatbot frontend
    • Flexibility: want to use larry.ai as a simple (internal) proxy and communicate with it via your own frontend? You're welcome to do so via the exposed REST API endpoints.

🐣 Getting Started

Installation

pip install larry-ai

πŸƒβ€β™‚οΈ Running

  • Make sure the environment variable containing your OpenAI token (OPENAI_API_TOKEN) is properly set, and then:
larry

INFO:root:Starting Larry server...
INFO:     Started server process [2856]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
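If you prefer to launch the server from a Python script rather than a shell, a minimal sketch is below. It assumes only the `larry` console script and the `OPENAI_API_TOKEN` variable described above; loading the token itself is left to you (e.g. a secret store).

```python
import os
import subprocess

def build_env(token: str) -> dict:
    """Copy the current environment and add the OpenAI token larry reads."""
    return dict(os.environ, OPENAI_API_TOKEN=token)

def launch_larry(token: str) -> subprocess.Popen:
    """Start the `larry` server as a child process with the token injected."""
    return subprocess.Popen(["larry"], env=build_env(token))
```

Injecting the token into the child process environment keeps it out of your shell history and profile files.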

πŸ•ΈοΈ Accessing Web UI & REST API

  • Just fire up your browser and head to http://localhost:8000 (the server runs plain HTTP by default, as shown in the startup log above).
  • The default/root endpoint (/) shows the ReactJS frontend, but other endpoints are also available:
    • /generate: REST API endpoint that communicates with OpenAI.
    • /docs: FastAPI Swagger UI documentation.
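If you are using larry.ai as a proxy behind your own frontend, the /generate endpoint can be called from any HTTP client. The sketch below assumes a JSON body with a single `prompt` field - that schema is an assumption, so verify the real request/response shape against the Swagger UI at /docs before relying on it:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"  # default larry bind address

def build_request(prompt: str, base_url: str = BASE_URL) -> urllib.request.Request:
    """Build a POST request for /generate (body shape is an assumption)."""
    data = json.dumps({"prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/generate",
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def generate(prompt: str) -> dict:
    """Send the prompt to a running larry server and return the parsed JSON."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        return json.load(resp)
```

Only the standard library is used here, so the snippet works without installing an HTTP client package.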

📷 Screenshots

πŸ›£οΈ Roadmap

We have some exciting features on the roadmap, namely:

  • Ability to easily change color themes
  • Prompt Injection protection
  • Caching GPT API calls
  • Rate limiting
  • Authentication & Authorization
  • API Key Management

🤝 Contributing

  • Have a cool idea? Feel free to create an issue and submit a PR!
  • You can have a look at the current issues here