
Find the Best LLM for Your Needs through E2E Testing

Primary Language: TypeScript | License: Apache-2.0

Relia

Try Relia

Chinese Documentation (中文文档)

Relia is an E2E testing framework for LLMs, designed to help you build AI benchmarks tailored to your specific use cases.

It helps you identify the most suitable LLM for your needs and, through continuous testing, ensures that model upgrades do not cause performance regressions.

It is built specifically for function calling (or "tool use") scenarios, which are at the core of agent-based AI applications.
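To make the function-calling focus concrete, here is a minimal sketch of the kind of check such a test runs: comparing the tool call a model emits against the expected one. This is illustrative only; the `ToolCall` shape and `scoreToolCall` scorer are assumptions for the example, not Relia's actual API.

```typescript
// Illustrative sketch (assumed types, not Relia's API): score a model's
// tool call against the expected call, the core check in a
// function-calling E2E test.

interface ToolCall {
  name: string;
  args: Record<string, unknown>;
}

// Hypothetical scorer: one check for the tool name, one per expected
// argument; returns the fraction of checks that pass (0..1).
function scoreToolCall(expected: ToolCall, actual: ToolCall): number {
  if (expected.name !== actual.name) return 0;
  let passed = 1; // the tool name matched
  for (const [key, value] of Object.entries(expected.args)) {
    if (JSON.stringify(actual.args[key]) === JSON.stringify(value)) passed++;
  }
  return passed / (1 + Object.keys(expected.args).length);
}

// Example: the model picked the right tool but got one argument wrong,
// so 2 of 3 checks pass.
const expected: ToolCall = { name: "get_weather", args: { city: "Paris", unit: "celsius" } };
const actual: ToolCall = { name: "get_weather", args: { city: "Paris", unit: "fahrenheit" } };
console.log(scoreToolCall(expected, actual)); // 2/3
```

Averaging such per-case scores across a suite is what lets different models and prompt variants be ranked against each other.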

Documents

Usage Guide

Self-Deployment

How to Contribute

Use Cases

Selecting the Most Suitable LLM

When selecting models, benchmark candidates against your own use case to identify the LLM that offers the best balance of performance and cost.

Optimizing Prompts

While developing an application, compare how multiple prompt variants perform on the same model to understand each prompt's impact and guide optimization.

Continuous Testing to Prevent Regressions

After the application is released, continuously test different versions of the same model to avoid regressions during upgrades.
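The regression check above boils down to comparing per-suite scores between the model version you run in production and a candidate upgrade. A minimal sketch, assuming a simple suite-name-to-score map (the names `SuiteScores` and `findRegressions` are hypothetical, not part of Relia):

```typescript
// Illustrative sketch (assumed names, not Relia's API): flag suites
// where a candidate model version scores noticeably worse than the
// current baseline.

type SuiteScores = Record<string, number>; // suite name -> average score (0..1)

// A suite counts as regressed when the candidate's score falls more
// than `tolerance` below the baseline; the tolerance absorbs run-to-run noise.
function findRegressions(
  baseline: SuiteScores,
  candidate: SuiteScores,
  tolerance = 0.05
): string[] {
  return Object.keys(baseline).filter(
    (suite) => (candidate[suite] ?? 0) < baseline[suite] - tolerance
  );
}

// Example: "search" drops sharply after a model upgrade, "booking" holds.
const currentVersion: SuiteScores = { search: 0.92, booking: 0.88 };
const upgradedVersion: SuiteScores = { search: 0.71, booking: 0.9 };
console.log(findRegressions(currentVersion, upgradedVersion)); // only "search" is flagged
```

Running such a comparison on a schedule, or whenever the provider ships a new model version, is what turns a one-off benchmark into continuous regression testing.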

Roadmap

  • Enable customization of provider titles and suite titles in test reports for better organization and clarity.
  • Improve the efficiency and reliability of executing large-scale test plans.
  • Expand support to include more LLM providers.
  • Develop a form UI for editing test plans, making it easier and more intuitive to create and manage tests.
  • Implement persistent storage for test plans and reports.
  • Allow custom scoring for different suites to better evaluate and compare the performance of test cases.

Feel free to follow our project on GitHub, X, and Bilibili.