# llmonitor
Open-source monitoring & observability for AI apps and agents
## Features
LLMonitor helps AI devs monitor their apps in production, with features such as:
- Cost, token & latency analytics
- Log & edit prompts
- Trace agents/chains to debug easily
- Track users
- Label and export fine-tuning datasets
- Collect feedback from users
- Unit tests & prompt evaluations (coming soon)
It is also designed to be:
- Usable with any model, not just OpenAI
- Easy to integrate (2 minutes)
- Simple to self-host (deploy to Vercel & Supabase)
## Demo
See `demo720.mp4` in the repository for a short demo.
## Integration
LLMonitor natively supports:
- LangChain (JS & Python)
- OpenAI module
- LiteLLM
Additionally, you can use it with any other framework by wrapping the relevant methods.
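As a rough illustration of what "wrapping the relevant methods" can look like, here is a minimal, self-contained Python sketch. It is not the LLMonitor SDK: `track_event`, `monitored`, and `generate` are hypothetical names, and the in-memory `events` list stands in for sending data to a monitoring backend.

```python
import time
from functools import wraps

# Hypothetical event sink: a real integration would send each event to
# the monitoring backend instead of appending it to a local list.
events = []

def track_event(event):
    events.append(event)

def monitored(model_name):
    """Wrap any text-generation function so each call is recorded
    with its input, output, and latency."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(prompt, **kwargs):
            start = time.perf_counter()
            output = fn(prompt, **kwargs)
            track_event({
                "model": model_name,
                "input": prompt,
                "output": output,
                "latency_ms": round((time.perf_counter() - start) * 1000),
            })
            return output
        return wrapper
    return decorator

@monitored("my-custom-model")
def generate(prompt):
    # Stand-in for a call to any LLM backend.
    return prompt.upper()

print(generate("hello"))  # records one event, prints "HELLO"
```

The same decorator pattern works for any framework that exposes a callable entry point: wrap the call site once, and every request is logged without changing the calling code.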
## Documentation
Full documentation is available on the website.
## Hosted version
We offer a hosted version with a free plan of up to 1k requests per day.
With the hosted version:
- No need to worry about devops or managing updates
- Priority 1:1 support from our team
- Your data is stored safely in Europe
## Support
Need help or have questions? Chat with us on Discord or email one of the founders: vince [at] llmonitor.com. We're here to support you every step of the way.
## Team
## License
This project is licensed under the Apache 2.0 License.