Add base LLM-generated evals to Huggingface Provider
Opened this issue · 0 comments
joshreini1 commented
The Huggingface provider currently only contains feedback functions that use specific model endpoints on Huggingface. Add the option to use the base feedback functions (defined in LLMProvider) through LLMs available on Huggingface.
It should work with feedback functions both with and without `cot_reasons`.
User flow should be similar to:

```python
from trulens_eval import Huggingface

hugs_provider = Huggingface('mistralai/Mixtral-8x7B-Instruct-v0.1')
hugs_provider.relevance_with_cot_reasons('What is the capital of India?', 'New Delhi')
```
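One possible shape for this, sketched below as an offline illustration rather than a proposed implementation: the base provider's generic feedback functions call a single completion hook, and the Huggingface provider implements that hook against a HF-hosted LLM. The names `LLMProvider` and `_create_chat_completion` follow the pattern used by other trulens_eval providers but are assumptions here; the stand-in base class, the injected `client`, and `FakeClient` are hypothetical and exist only to keep the sketch self-contained.

```python
class LLMProvider:
    """Stand-in for trulens_eval's base provider (assumption): generic
    feedback functions delegate to `_create_chat_completion`."""

    def relevance(self, prompt: str, response: str) -> float:
        # A base feedback function: ask the underlying LLM for a 0-10
        # relevance score, then normalize to 0-1.
        out = self._create_chat_completion(
            prompt=(
                f"Rate from 0 to 10 how relevant the response "
                f"'{response}' is to the question '{prompt}'. "
                f"Answer with a single number."
            )
        )
        return float(out.strip()) / 10.0

    def _create_chat_completion(self, prompt: str) -> str:
        raise NotImplementedError


class Huggingface(LLMProvider):
    def __init__(self, model_engine: str, client=None):
        # `client` would be something like huggingface_hub.InferenceClient;
        # injected here so the sketch runs without network access.
        self.model_engine = model_engine
        self.client = client

    def _create_chat_completion(self, prompt: str) -> str:
        # A real implementation would call the HF Inference API here.
        return self.client.text_generation(prompt, model=self.model_engine)


class FakeClient:
    """Hypothetical offline stand-in for the HF Inference API client."""

    def text_generation(self, prompt, model):
        return "9"


hugs = Huggingface("mistralai/Mixtral-8x7B-Instruct-v0.1", client=FakeClient())
print(hugs.relevance("What is the capital of India?", "New Delhi"))  # 0.9
```

With this split, every base feedback function in LLMProvider (including the `_with_cot_reasons` variants) would work unchanged, since they only depend on the completion hook.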