SciPhi-AI/R2R

Support external LiteLLM proxy


Is your feature request related to a problem? Please describe.
I use R2R Docker and want to connect to an external LiteLLM proxy on the same local network (e.g., http://localhost:4000). This would enable seamless integration with providers like Voyage, Cohere, AWS Bedrock, or any other service that exposes an OpenAI-like API. Typically, this is done by setting the environment variable OPENAI_BASE_URL='http://localhost:4000'. However, neither the OPENAI_BASE_URL environment variable nor the api_base configuration parameter is recognized when the "openai" provider is set in the R2R config file.
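
For reference, here is a minimal sketch (not R2R code) of how OPENAI_BASE_URL typically redirects an OpenAI-compatible client to a LiteLLM proxy; the model name and key below are placeholders:

```python
# Minimal sketch, not R2R code: how OPENAI_BASE_URL normally points an
# OpenAI-compatible client at a LiteLLM proxy.
import os

from openai import OpenAI

# The v1+ OpenAI SDK reads OPENAI_BASE_URL automatically; passing base_url
# explicitly is equivalent.
client = OpenAI(
    base_url=os.environ.get("OPENAI_BASE_URL", "http://localhost:4000"),
    api_key=os.environ.get("OPENAI_API_KEY", "sk-anything"),  # whatever key the proxy expects
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any model name the proxy is configured to route
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```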

Describe the solution you'd like
Enable the r2r Docker service to accept the OPENAI_BASE_URL variable from the environment.

Describe alternatives you've considered

  1. Modify the code in providers/{llm, embeddings}/openai.py to accept an api_base configuration parameter (see the sketch after this list).
  2. Create a new provider (perhaps named "litellm") that requires an api_base configuration parameter.
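
For illustration, a rough sketch of alternative 1, with an OPENAI_BASE_URL fallback; the class and attribute names here are hypothetical placeholders, not R2R's actual provider interface:

```python
# Hypothetical sketch of alternative 1: let the OpenAI provider accept an
# api_base setting, falling back to OPENAI_BASE_URL. Names are illustrative,
# not R2R's actual provider classes.
import os
from dataclasses import dataclass
from typing import Optional

from openai import OpenAI


@dataclass
class OpenAIProviderConfig:
    api_key: Optional[str] = None
    api_base: Optional[str] = None  # e.g. "http://localhost:4000" for a LiteLLM proxy


class OpenAILLMProvider:
    def __init__(self, config: OpenAIProviderConfig) -> None:
        base_url = config.api_base or os.environ.get("OPENAI_BASE_URL")
        self.client = OpenAI(
            api_key=config.api_key or os.environ.get("OPENAI_API_KEY"),
            base_url=base_url,  # None keeps the default https://api.openai.com/v1
        )

    def complete(self, model: str, prompt: str) -> str:
        response = self.client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content
```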

This is a fine proposal; would you mind opening a PR, or would you like us to?

Also, as a follow-up: LiteLLM is already the default provider in R2R; read more here - https://r2r-docs.sciphi.ai/cookbooks/basic-configuration
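
For context, the LiteLLM SDK itself accepts an api_base per call, so an external proxy can also be reached through that default provider once the parameter is plumbed through; a hedged sketch, not R2R's internal code:

```python
# Hedged sketch, not R2R internals: the LiteLLM SDK accepts api_base per call,
# so an "openai/..." model can be routed at an external LiteLLM proxy.
import litellm

response = litellm.completion(
    model="openai/gpt-4o-mini",        # "openai/" prefix = OpenAI-compatible endpoint
    api_base="http://localhost:4000",  # external LiteLLM proxy
    api_key="sk-anything",             # whatever key the proxy expects
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```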

Merged your PR. We are also adding more environment variables for those who would like to communicate directly with their target LLM - #816