truera/trulens

Facing Issue: Using a Custom OpenAI Gateway with TruLens


So in TruLens we have an OpenAI provider object, which requires an OpenAI API key and related settings:
from trulens_eval.feedback.provider.openai import OpenAI
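
For example (a minimal sketch; the key value is a placeholder):

import os
from trulens_eval.feedback.provider.openai import OpenAI

os.environ["OPENAI_API_KEY"] = "sk-..."  # placeholder OpenAI key
provider = OpenAI()  # talks to api.openai.com by default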

But if I want to send requests to my custom AI_Gateway, which uses a separate AI model, how do I use that with TruLens?

Hey @krishmurarka

Can you do this with our langchain provider? https://www.trulens.org/trulens_eval/api/langchain_provider/

Basically you just need to set up your AI_Gateway as a chain, and then pass it to the TruLens LangChain provider.
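
For example (a minimal sketch, assuming your AI_Gateway exposes an OpenAI-compatible endpoint; the URL, key, and model name are placeholders):

# Older LangChain versions: from langchain.chat_models import ChatOpenAI
from langchain_openai import ChatOpenAI
from trulens_eval.feedback.provider.langchain import Langchain

# Point a LangChain chat model at the custom gateway instead of api.openai.com
gateway_llm = ChatOpenAI(
    base_url="https://my-ai-gateway.example.com/v1",  # hypothetical gateway URL
    api_key="MY_GATEWAY_KEY",  # hypothetical gateway key
    model="my-gateway-model",  # hypothetical model name
)

# Hand the model to the TruLens LangChain feedback provider
provider = Langchain(chain=gateway_llm)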

Hey @joshreini1, the link you provided doesn't exist. Do you have an updated link?

Also, if I am using my custom model, how would the cost and token calculation work?

For now I did some monkey patching to make my model run through, but I am not getting cost and token counts.

Hey @krish-murarka - here's the updated link: https://www.trulens.org/trulens_eval/api/provider/langchain/

We also made some fixes to our cost tracking in trulens-eval==0.24.1; we'd recommend upgrading if you're still on an older version.

Feel free to share along your monkey-patching as well :)

@joshreini1 Thank you! Ohh, we were using version 0.21 of trulens-eval; let me bump the package.

As for the monkey patching: we were using the LlamaIndex OpenAI object, but it has pydantic checks on the model name.

So we monkey patched this method of OpenAI in LlamaIndex: def _get_model_kwargs(self, **kwargs: Any) -> Dict[str, Any]:

OpenAI._get_model_kwargs = Custom_model_kwargs_custom
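
For context, here's a minimal sketch of what such a patch might look like (the body of Custom_model_kwargs_custom is hypothetical, and the model name is a placeholder):

from typing import Any, Dict

# Import path varies by llama-index version
from llama_index.llms import OpenAI

def Custom_model_kwargs_custom(self, **kwargs: Any) -> Dict[str, Any]:
    # Return the usual model kwargs, but force our custom model name
    # past LlamaIndex's pydantic model-name validation.
    base_kwargs = {"model": "my-gateway-model", **self.additional_kwargs}
    return {**base_kwargs, **kwargs}

OpenAI._get_model_kwargs = Custom_model_kwargs_custom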