uptrain-ai/uptrain

Using a custom LLM


I have my LLM hosted on an EC2 instance and served through vLLM. I'd like to use that LLM when evaluating with UpTrain. I already use it in my RAG service by providing its model name and `base_url`, but I can't do the same with UpTrain: despite setting `api_base`, requests are not routed to the LLM on my EC2 instance, and UpTrain keeps asking for an OpenAI API key instead.
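For context, vLLM's OpenAI-compatible server exposes chat completions under `<api_base>/chat/completions`. A quick way to sanity-check that the base URL is well-formed, independently of any client library, is a small helper like this (the helper name and localhost address are illustrative, not part of UpTrain or vLLM):

```python
# Hypothetical helper: normalize an OpenAI-style base URL the way a
# client would before appending the chat-completions route.
def chat_completions_url(api_base: str) -> str:
    # Strip any trailing slash so we never produce a double "//".
    return api_base.rstrip("/") + "/chat/completions"

# Placeholder host, not a real address.
print(chat_completions_url("http://localhost:8000/v1"))
# → http://localhost:8000/v1/chat/completions
```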

My code is below:

```python
from uptrain import EvalLLM, Evals, Settings as uptrain_settings

settings = uptrain_settings(
    model="TheBloke/Mistral-7B-Instruct-v0.1-AWQ",
    api_base="http://13.2142.169.431:8000/v1",
)

eval_llm = EvalLLM(settings=settings)

results = eval_llm.evaluate(
    data=responses,
    checks=[Evals.CONTEXT_RELEVANCE, Evals.FACTUAL_ACCURACY, Evals.RESPONSE_COMPLETENESS],
)
```

**PS**
The `api_base` above is tweaked slightly before posting here (for security reasons).
I'm providing the exact same `api_base` and model to the llama_index RAG service and it works perfectly, but the UpTrain eval does not.
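For anyone hitting the same thing: I can't verify UpTrain's internals here, but a common workaround with OpenAI-compatible servers such as vLLM (which does not validate the API key by default) is to export a placeholder `OPENAI_API_KEY` so the client-side key check passes, while requests still go to `api_base`. A sketch under that assumption; the helper function is purely illustrative and not part of UpTrain's API:

```python
import os

# Placeholder key: a vLLM OpenAI-compatible server ignores it by default,
# but some clients refuse to run unless *some* key is set.
os.environ.setdefault("OPENAI_API_KEY", "sk-placeholder")

def make_settings_kwargs(model: str, api_base: str) -> dict:
    """Hypothetical helper: build kwargs for UpTrain's Settings,
    normalizing the base URL so it ends in '/v1' exactly once."""
    base = api_base.rstrip("/")
    if not base.endswith("/v1"):
        base += "/v1"
    return {"model": model, "api_base": base}

kwargs = make_settings_kwargs(
    "TheBloke/Mistral-7B-Instruct-v0.1-AWQ",
    "http://localhost:8000/v1",  # placeholder host, as in the issue
)
# kwargs would then be passed as uptrain_settings(**kwargs)
```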