Pinned Repositories
-ARCHIVED--router-evals
Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.
adapters
Package for calling different models with the same interface (see the sketch after this list)
demo-evals
Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.
leaderboard-backend
Open-source backend for Martian's LLM Inference Provider Leaderboard
martian-python
martian-python-v1
The official Python library for the OpenAI API
refac-apps-llm
routerbench
The code for the paper ROUTERBENCH: A Benchmark for Multi-LLM Routing System
rsi
trlx
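
The adapters entry above ("calling different models with the same interface") names a common pattern. A minimal sketch of that pattern, with every class and function name below hypothetical rather than taken from the withmartian/adapters codebase:

    # Hypothetical sketch; the real withmartian/adapters API may differ.
    from abc import ABC, abstractmethod

    class ModelAdapter(ABC):
        """One interface, many providers: callers depend only on this class."""

        @abstractmethod
        def complete(self, prompt: str) -> str:
            ...

    class OpenAIAdapter(ModelAdapter):
        def complete(self, prompt: str) -> str:
            # A real adapter would call the provider's SDK here.
            return f"[openai] {prompt}"

    class AnthropicAdapter(ModelAdapter):
        def complete(self, prompt: str) -> str:
            return f"[anthropic] {prompt}"

    def run(adapter: ModelAdapter, prompt: str) -> str:
        # Caller code stays identical no matter which model is behind it.
        return adapter.complete(prompt)

    for adapter in (OpenAIAdapter(), AnthropicAdapter()):
        print(run(adapter, "ping"))

The design point: callers depend only on the shared interface, so supporting a new provider means adding one adapter subclass without touching caller code.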
withmartian's Repositories
withmartian/routerbench
The code for the paper ROUTERBENCH: A Benchmark for Multi-LLM Routing System
withmartian/leaderboard-backend
Open-source backend for Martian's LLM Inference Provider Leaderboard
withmartian/martian-python
withmartian/adapters
Package for calling different models with the same interface
withmartian/martian-python-v1
The official Python library for the OpenAI API (see the usage sketch after this list)
withmartian/-ARCHIVED--router-evals
Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.
withmartian/demo-evals
Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.
withmartian/martian-evals
Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.
withmartian/martian-node
withmartian/refac-apps-llm
withmartian/rsi
withmartian/trlx
withmartian/martian-node-v1
The official Node.js / TypeScript library for the OpenAI API
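
The three evals repositories (-ARCHIVED--router-evals, demo-evals, martian-evals) all carry the description of OpenAI's Evals framework, which suggests they are forks of it. A minimal sketch of the core loop such a framework runs, with all names invented for illustration rather than taken from any of these codebases:

    # Purely illustrative: score a model over benchmark samples by exact match.
    def run_eval(model, samples):
        correct = sum(
            model(s["input"]).strip() == s["ideal"] for s in samples
        )
        return correct / len(samples)

    samples = [{"input": "2+2=", "ideal": "4"}]
    print(run_eval(lambda prompt: "4", samples))  # dummy model scores 1.0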
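
Similarly, martian-python-v1 and martian-node-v1 carry the descriptions of the official OpenAI client libraries, which suggests they are forks of openai-python and openai-node. A usage sketch assuming the Python fork keeps the upstream openai-python v1 client interface; the model name and prompt are placeholders:

    # Assumes the fork mirrors the upstream openai-python v1 interface.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Say hello."}],
    )
    print(response.choices[0].message.content)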