LLMonitor helps AI developers monitor their apps in production, with features such as:
- Cost, token & latency analytics
- User tracking
- Traces for easy debugging
- Full request inspection
- User feedback collection (soon)
- Unit tests for your agents (soon)
- Labeling and fine-tuning dataset creation (soon)
It is also designed to be:
- Usable with any model, not just OpenAI
- Easy to integrate (2 minutes)
- Simple to self-host (deploy to Vercel & Supabase)
LLMonitor natively supports:
- LangChain (JS & Python)
- OpenAI
- LiteLLM
Additionally, you can use it with any framework by wrapping the relevant methods.
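To illustrate the wrapping approach, here is a minimal, hypothetical sketch (not the actual LLMonitor SDK API): a decorator that captures latency and token usage from any LLM call and hands the event to a recording callback, which in a real integration would forward it to the monitoring backend.

```python
import functools
import time

def track_llm_call(record):
    """Decorator that records latency and token usage of a wrapped LLM call.

    `record` is any callable that accepts an event dict, e.g. a function
    that sends the event to your monitoring backend.
    """
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            record({
                "name": fn.__name__,
                "latency_s": time.perf_counter() - start,
                # Assumes an OpenAI-style response dict with a usage block.
                "tokens": result.get("usage", {}).get("total_tokens", 0),
            })
            return result
        return wrapper
    return decorator

events = []

@track_llm_call(events.append)
def fake_completion(prompt):
    # Stand-in for a real model call.
    return {"text": "hi", "usage": {"total_tokens": 7}}

fake_completion("hello")
print(events[0]["name"], events[0]["tokens"])  # fake_completion 7
```

The real SDKs do this wrapping for you; the sketch only shows the general shape of instrumenting an arbitrary framework.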
Full documentation is available on the website.
We offer a hosted version with a free plan of up to 1,000 requests / day.
With the hosted version:
- No DevOps or update management to worry about
- Priority 1:1 support from our team
- Your data is stored safely in Europe
Chat with us on Discord or email one of the founders: vince [at] llmonitor.com.
Made by @vincelwt and Hugh.
This project is licensed under the Apache 2.0 License.