Request: Add openrouter.ai endpoint support
Opened this issue · 5 comments
Hi,
I have tried adding openrouter.ai to ChainForge as a custom provider, but it always responds with "Error encountered while calling custom provider function: 400 Client Error: Bad Request for url: https://openrouter.ai/api/v1/chat/completions".
I'm not a developer; I tried my best to debug this issue with GPT-4 Turbo for multiple hours, and it's still not working :(
Could you please consider adding this endpoint natively, like together.ai, when you have time?
https://openrouter.ai/docs#quick-start
This endpoint is really popular in the market.
I'd really appreciate it if you could consider this request!
Hi, for this and all other requests to add a provider, I cannot personally devote time to it. The best solution is to submit a PR for it. Someone did this recently to add together.ai support to ChainForge.
The error you are getting suggests something in your call is incorrect. The custom provider mechanism shouldn't be the problem here. I would check your Python code and make sure the call works outside of ChainForge first.
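Something like this minimal standalone script (with your own key in place of the placeholder) should let you verify the raw call works before involving ChainForge:

```python
# Minimal standalone check of the OpenRouter chat completions endpoint,
# outside ChainForge. Replace the placeholder API key with your own.
import requests

API_KEY = "YOUR-API-KEY"  # placeholder

response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json={
        "model": "cohere/command-r-plus",
        "messages": [{"role": "user", "content": "Say hello."}],
    },
)
print(response.status_code)
print(response.json())
```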
Thank you for your quick feedback! Since I'm not a developer (I can read some simple code, but I'm not a professional), I don't think I'm able to submit a PR for it...
But I can provide more background information for you or someone else who has the ability to write the code in the future:
Actually, the reason I want to use openrouter.ai is basically for two models:
Google Gemini flash/pro 1.5
Cohere Command R Plus
The weird thing is that if I specify some models from OpenAI, like 「gpt-3.5-turbo」 or 「gpt-4-turbo」, in the Openrouter.ai custom provider settings, it works: ChainForge successfully gets responses from openrouter.ai.
However, if I specify a model name like 「google/gemini-1.5-pro」 or 「cohere/command-r-plus」 in the Openrouter.ai custom provider settings (.py), it doesn't work and returns a 400 error.
Below is the Python code I wrote with GPT-4's assistance:
```python
# -*- coding: utf-8 -*-
from chainforge.providers import provider
import requests

# JSON schemas to pass to react-jsonschema-form: one for this provider's
# settings and one to describe the settings UI.
THIRD_PARTY_GPT_SETTINGS_SCHEMA = {
    "settings": {
        "temperature": {
            "type": "number",
            "title": "temperature",
            "description": "Controls the 'creativity' or randomness of the response.",
            "default": 0.7,
            "minimum": 0,
            "maximum": 1.0,
            "multipleOf": 0.01,
        },
        "max_tokens": {
            "type": "integer",
            "title": "max_tokens",
            "description": "Maximum number of tokens to generate in the response.",
            "default": 4096,
            "minimum": 1,
            "maximum": 4096,
        },
    },
    "ui": {
        "temperature": {
            "ui:help": "Defaults to 0.75.",
            "ui:widget": "range"
        },
        "max_tokens": {
            "ui:help": "Defaults to 100.",
            "ui:widget": "range"
        },
    }
}

# Custom model provider for the third-party OpenAI GPT service
@provider(name="Openrouter",
          emoji="\U0001F680",
          models=["openai/gpt-3.5-turbo-16k", "cohere/command-r"],
          rate_limit="sequential",
          settings_schema=THIRD_PARTY_GPT_SETTINGS_SCHEMA)
def third_party_gpt_v2_completion(prompt: str, model: str, temperature: float = 0.75, max_tokens: int = 100, repetition_penalty: float = 1.0, **kwargs) -> str:
    url = "https://openrouter.ai/api/v1/chat/completions"
    headers = {
        "Content-Type": "application/json",
        "Authorization": "Bearer API-KEY"
    }
    data = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "max_tokens": max_tokens,
        "repetition_penalty": repetition_penalty
    }
    response = requests.post(url, headers=headers, json=data)
    response.raise_for_status()
    result = response.json()
    return result["choices"][0]["message"]["content"]
```
Hmm, this sounds like it has to do with the "/" in the path. It's certainly a workaround, but you can try changing all slashes to | or something else, then converting them back to slashes in your Python code.
It's probably something on CF's end with how custom providers handle the slash notation; it seems to be cutting off the prefix before the slash.
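A rough sketch of that workaround (the `PIPE_MODELS` list and `to_openrouter_id` helper below are hypothetical names, just to illustrate the idea):

```python
# Sketch of the suggested workaround: register model names with "|" in place
# of "/", then restore the slashes inside the provider function before the
# API call. PIPE_MODELS and to_openrouter_id are illustrative names only.
PIPE_MODELS = ["google|gemini-1.5-pro", "cohere|command-r-plus"]

def to_openrouter_id(model: str) -> str:
    """Convert a pipe-delimited model name back to OpenRouter's slash notation."""
    return model.replace("|", "/")

assert to_openrouter_id("cohere|command-r-plus") == "cohere/command-r-plus"
```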
Let me clarify:
Openrouter.ai also serves the GPT models, and as I mentioned, if I request the model ID 「openai/gpt-3.5-turbo-16k」 through openrouter.ai inside ChainForge, it works, but when I request the model ID 「cohere/command-r」, it fails.
So I personally assume it might not be the "/" issue?
Anyhow, since I don't have the ability to write the code, I will let it go and wait for some great talent to push it forward someday.
Still, I really appreciate your time helping me debug this issue, and I hope ChainForge keeps getting better :)
Here's a version that uses the workaround above: map simplified model names (without slashes) to the full OpenRouter identifiers, and convert them back before the API call, like this:
```python
from chainforge.providers import provider
import requests
import json
import logging

# Set up logging
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)

API_URL = "https://openrouter.ai/api/v1/chat/completions"
API_KEY = "XXXXXXX"

# Define a mapping of simplified model names to full OpenRouter model identifiers
MODEL_MAPPING = {
    "claude-3.5-sonnet": "anthropic/claude-3.5-sonnet",
    "claude-3-opus": "anthropic/claude-3-opus",
    "claude-3-haiku": "anthropic/claude-3-haiku",
    "gemini-pro-1.5": "google/gemini-pro-1.5",
    "gemini-flash-1.5": "google/gemini-flash-1.5",
    "gpt-4o-mini": "openai/gpt-4o-mini-2024-07-18"
}

# Define settings schema for the provider
OPENROUTER_SETTINGS_SCHEMA = {
    "settings": {
        "temperature": {
            "type": "number",
            "title": "temperature",
            "description": "Controls the randomness of the output.",
            "default": 0.7,
            "minimum": 0,
            "maximum": 2.0,
            "multipleOf": 0.1,
        },
        "max_tokens": {
            "type": "integer",
            "title": "max_tokens",
            "description": "Maximum number of tokens to generate.",
            "default": 100,
            "minimum": 1,
            "maximum": 22937,
        },
    },
    "ui": {
        "temperature": {
            "ui:help": "Higher values make the output more random.",
            "ui:widget": "range"
        },
        "max_tokens": {
            "ui:help": "The maximum length of the generated text.",
            "ui:widget": "range"
        },
    }
}

@provider(
    name="OpenRouter",
    emoji="🌐",
    models=list(MODEL_MAPPING.keys()),
    rate_limit="sequential",
    settings_schema=OPENROUTER_SETTINGS_SCHEMA
)
def openrouter_completion(prompt: str, model: str, temperature: float = 0.7, max_tokens: int = 1000, **kwargs):
    logger.debug(f"Function called with prompt: {prompt}, model: {model}, temperature: {temperature}, max_tokens: {max_tokens}")

    # Map the simplified model name to the full OpenRouter model identifier
    full_model_name = MODEL_MAPPING.get(model, model)
    logger.debug(f"Mapped model name: {full_model_name}")

    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "HTTP-Referer": "XXXXXX",
        "Content-Type": "application/json"
    }
    data = {
        "model": full_model_name,
        "messages": [
            {"role": "user", "content": prompt}
        ],
        "temperature": temperature,
        "max_tokens": max_tokens
    }
    logger.debug(f"Sending request to {API_URL} with headers: {headers} and data: {json.dumps(data, indent=2)}")

    try:
        response = requests.post(url=API_URL, headers=headers, json=data)
        logger.debug(f"Received response status code: {response.status_code}")
        logger.debug(f"Received response headers: {response.headers}")
        logger.debug(f"Received response text: {response.text}")
        response.raise_for_status()
        return response.json()['choices'][0]['message']['content']
    except requests.exceptions.RequestException as e:
        logger.error(f"API request error: {str(e)}")
        if e.response is not None and hasattr(e.response, 'text'):
            logger.error(f"Error response text: {e.response.text}")
        return "Error: An unexpected error occurred. Please check the logs for more details."

# Example usage (not typically needed in ChainForge, as it will call the function directly)
if __name__ == "__main__":
    print(f"Available models: {list(MODEL_MAPPING.keys())}")
    result = openrouter_completion("What is the capital of France?", model="claude-3.5-sonnet")
    if result:
        print(result)
```