truera/trulens

Add Gemini Support in the provider list

vikas351 opened this issue · 2 comments

Feature Description
Add Gemini Support

To add Gemini support to the provider list, you would follow a pattern similar to how the LiteLLM class handles different language models. Here's a general approach based on extending the existing architecture.

  1. Create a Gemini Provider Class: You would need to create a new class similar to LiteLLM but for Gemini. This class should implement any specific methods required to interact with the Gemini API.

  2. Handle Gemini Specifics: If Gemini has specific requirements (like Azure models needing an api_base), ensure these are handled either in the constructor or in a dedicated method within your new class.

  3. Integration with Endpoint: The LiteLLM class uses an Endpoint (specifically LiteLLMEndpoint) to interact with the model. You might need to create a similar endpoint class for Gemini if it has different interaction patterns or requirements.

  4. Optional Imports and Requirements: If Gemini requires specific libraries, handle these with OptionalImports or a similar mechanism so that missing dependencies fail gracefully (see the sketch just after this list).
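For step 4, the dependency guard might look something like the following. This mirrors the OptionalImports pattern the LiteLLM provider uses; REQUIREMENT_GEMINI and the google-generativeai package name are assumptions here, not existing trulens_eval names.

from trulens_eval.utils.imports import OptionalImports, format_import_errors

# Hypothetical requirement message for a Gemini provider; the package name
# is an assumption based on Google's Python SDK.
REQUIREMENT_GEMINI = format_import_errors(
    "google-generativeai", purpose="using the Gemini provider"
)

# Defer the import so users without the package get a helpful message.
with OptionalImports(messages=REQUIREMENT_GEMINI):
    import google.generativeai as genai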

Here's a simplified example of what the structure might look like, assuming you have a GeminiEndpoint and the necessary Gemini interaction code:

from typing import Any, ClassVar, Dict, Optional, Sequence

import pydantic

from trulens_eval.feedback.provider.base import LLMProvider
from trulens_eval.feedback.provider.endpoint.base import Endpoint
# Assuming GeminiEndpoint is created similar to LiteLLMEndpoint
from trulens_eval.feedback.provider.endpoint import GeminiEndpoint

class GeminiProvider(LLMProvider):
    DEFAULT_MODEL_ENGINE: ClassVar[str] = "gemini-default-model"

    model_engine: str
    # Declared here so pydantic accepts the value passed to super().__init__.
    gemini_provider: Any
    completion_args: Dict[str, str] = pydantic.Field(default_factory=dict)
    endpoint: Endpoint

    def __init__(
        self,
        model_engine: Optional[str] = None,
        completion_kwargs: Optional[Dict] = None,
        endpoint: Optional[Endpoint] = None,
        **kwargs: dict
    ):
        if model_engine is None:
            model_engine = self.DEFAULT_MODEL_ENGINE

        # Initialize Gemini specific settings and provider
        # This is where you would handle any Gemini-specific logic
        gemini_provider = "Initialize your Gemini provider here"

        if completion_kwargs is None:
            completion_kwargs = {}

        self_kwargs = dict()
        self_kwargs.update(**kwargs)
        self_kwargs['model_engine'] = model_engine
        self_kwargs['gemini_provider'] = gemini_provider
        self_kwargs['completion_args'] = completion_kwargs
        self_kwargs['endpoint'] = GeminiEndpoint(
            gemini_provider=gemini_provider, **kwargs
        )

        super().__init__(**self_kwargs)

    def _create_chat_completion(
        self,
        prompt: Optional[str] = None,
        messages: Optional[Sequence[Dict]] = None,
        **kwargs
    ) -> str:
        # Implement Gemini's completion logic here; the base class expects
        # the completion text to be returned as a string.
        raise NotImplementedError("Gemini completion logic goes here.")

This example is quite generic and assumes you have or will create a GeminiEndpoint class and have access to the necessary Gemini API interaction code. You'll need to adjust the implementation details based on Gemini's API documentation and any specific requirements or features of the Gemini model you wish to use.
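If you go this route, usage would mirror any other trulens_eval provider. A minimal sketch, assuming the GeminiProvider above is completed (the model name is a placeholder):

from trulens_eval import Feedback

# Placeholder model name; substitute whichever Gemini model you target.
provider = GeminiProvider(model_engine="gemini-pro")

# LLMProvider supplies generic feedback functions such as relevance.
f_relevance = Feedback(provider.relevance).on_input_output()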

For more details, you can refer to the LiteLLM class implementation in the trulens_eval/trulens_eval/feedback/provider/litellm.py file.

About Dosu

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

Hey @vikas351 - Gemini is already supported through the LiteLLM provider. Here's an example notebook: https://github.com/truera/trulens/blob/main/trulens_eval/examples/expositional/models/gemini_multi_modal.ipynb
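For reference, wiring it up looks roughly like this (the model string and API-key setup are assumptions; see the notebook for the exact steps):

import os
from trulens_eval.feedback.provider.litellm import LiteLLM

os.environ["GEMINI_API_KEY"] = "..."  # key from Google AI Studio

# "gemini/gemini-pro" uses litellm's Google AI Studio route; adjust as needed.
provider = LiteLLM(model_engine="gemini/gemini-pro")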

Let me know if you have issues with this.