
AgentOps 🕵️

AI agents suck. We're fixing that.

Build your next agent with benchmarks, observability, and replay analytics. AgentOps is the toolkit for evaluating and developing robust and reliable AI agents.

AgentOps is still in closed alpha. You can sign up for an API key here.


Quick Start ⌨️

pip install agentops

Session replays in 3 lines of code

Initialize the AgentOps client, and automatically get analytics on every LLM call.

import agentops

# Beginning of program's code (e.g. main.py, __init__.py)
ao_client = agentops.Client(<INSERT YOUR API KEY HERE>)

...
# (optional: record specific functions)
@ao_client.record_action('sample function being recorded')
def sample_function(...):
    ...

# End of program
ao_client.end_session('Success')
# Woohoo! You're done 🎉

Refer to our API documentation for detailed instructions.
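In practice you'll also want failed runs to show up in your analytics rather than as dangling sessions. A minimal sketch of that pattern, assuming end_session also accepts a 'Fail' status (run_agent below is a hypothetical entry point for your own code, not part of the SDK):

import agentops

ao_client = agentops.Client(<INSERT YOUR API KEY HERE>)

try:
    run_agent()  # hypothetical: your agent's entry point
    ao_client.end_session('Success')
except Exception:
    # Close the session as failed before re-raising, so the run is still recorded
    ao_client.end_session('Fail')
    raise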

Time travel debugging 🔮

(coming soon!)

Agent Arena 🥊

(coming soon!)

Evaluations Roadmap 🧭

| Platform | Dashboard | Evals |
| --- | --- | --- |
| ✅ Python SDK | ✅ Multi-session and cross-session metrics | ✅ Custom eval metrics |
| 🚧 Evaluation builder API | ✅ Custom event tag tracking (see the sketch below) | 🔜 Agent scorecards |
| 🔜 Javascript/Typescript SDK | ✅ Session replays | 🔜 Evaluation playground + leaderboard |
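Custom event tag tracking pairs with the record API. A minimal sketch, assuming the alpha client exposes a record method and an Event type with a tags parameter (check the API documentation for the exact signature):

from agentops import Event

# Tag an event so it can be filtered and aggregated in the dashboard
ao_client.record(Event(event_type='search', tags=['web', 'tool-call']))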

Debugging Roadmap 🧭

| Performance testing | Environments | LLM Testing | Reasoning and execution testing |
| --- | --- | --- | --- |
| ✅ Event latency analysis | 🔜 Non-stationary environment testing | 🔜 LLM non-deterministic function detection | 🚧 Infinite loops and recursive thought detection |
| ✅ Agent workflow execution pricing | 🔜 Multi-modal environments | 🚧 Token limit overflow flags | 🔜 Faulty reasoning detection |
| 🚧 Success validators (external) | 🔜 Execution containers | 🔜 Context limit overflow flags | 🔜 Generative code validators |
| 🔜 Agent controllers/skill tests | 🔜 Honeypot and prompt injection evaluation | 🔜 API bill tracking | 🔜 Error breakpoint analysis |
| 🔜 Information context constraint testing | 🔜 Anti-agent roadblocks (i.e. Captchas) | | |
| 🔜 Regression testing | 🔜 Multi-agent framework visualization | | |

Why AgentOps? 🤔

Our mission is to bring your agent from prototype to production.

Agent developers often work with little to no visibility into agent testing performance. This means their agents never leave the lab. We're changing that.

AgentOps is the easiest way to evaluate, grade, and test agents. Is there a feature you'd like to see AgentOps cover? Just raise it in the issues tab, and we'll work on adding it to the roadmap.