Website · Product Hunt · Documentation
YiVal is a GenAI-Ops framework that lets you iteratively tune your generative AI model metadata, parameters, prompts, and retrieval configs all at once, using your preferred choice of test-dataset generation, evaluation algorithms, and improvement strategies.
Check out our quickstart guide!
- Add ROUGE and BERTScore evaluators
- Add support for MidJourney
- Add support for LLaMA2-70B, LLaMA2-7B, and Falcon-40B
- Support LoRA fine-tuning of open-source models
We support 100+ LLMs (e.g. gpt-4, gpt-3.5-turbo, llama).
The supported model sources are listed below:
Model | LLM Evaluate | Human Evaluate | Variation Generation | Custom Func |
---|---|---|---|---|
OpenAI | ✅ | ✅ | ✅ | ✅ |
Azure | ✅ | ✅ | ✅ | ✅ |
TogetherAI | ✅ | ✅ | ✅ | ✅ |
Cohere | ✅ | ✅ | ✅ | ✅ |
Huggingface | ✅ | ✅ | ✅ | ✅ |
Anthropic | ✅ | ✅ | ✅ | ✅ |
MidJourney | ✅ | ✅ | | |
To support different models in a custom function (e.g. for model comparison), follow our example.
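As a rough illustration of the model-comparison idea, here is a minimal sketch of a custom function that sends the same prompt to several models and collects their outputs. The function name, the callable-per-model interface, and the stub clients are all hypothetical, not YiVal's actual API.

```python
# Hypothetical sketch: compare several models on one prompt.
# `compare_models` and the client callables are illustrative only.

def compare_models(prompt, model_clients):
    """Run the same prompt through each model callable and collect outputs by name."""
    return {name: client(prompt) for name, client in model_clients.items()}

# Stub clients standing in for real model calls (e.g. OpenAI, TogetherAI):
clients = {
    "gpt-4": lambda p: f"gpt-4 answer to: {p}",
    "llama": lambda p: f"llama answer to: {p}",
}
results = compare_models("What is GenAI-Ops?", clients)
```

In a real setup, each callable would wrap an actual API client; the dictionary of outputs can then be passed to an evaluator for side-by-side scoring.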
To support different models in evaluators and generators, check our config.
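To make the idea concrete, here is a minimal sketch (assumed shape, not YiVal's exact schema) of how a config might select the model used by an evaluator and a variation generator; the component names and keys are illustrative.

```python
# Assumed config shape, shown as a Python dict; YiVal configs are typically
# written in YAML, and the exact keys may differ from the real schema.
config = {
    "evaluators": [
        # Evaluator scored by one model...
        {"name": "prompt_based_evaluator", "model_name": "gpt-4"},
    ],
    "variation_generators": [
        # ...while variations are generated by a cheaper model.
        {"name": "prompt_based_variation_generator", "model_name": "gpt-3.5-turbo"},
    ],
}
```

Swapping the `model_name` values is how you would point each component at a different provider.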
pip install yival
YiVal has multimodal capabilities and handles AI-generated images (AIGC) well.
Find more information in the Animal Story demo we provide.
yival run demo/configs/animal_story.yml
To get started with a demo of YiVal's basic interactive mode, run the following command:
yival demo --basic_interactive
Once started, navigate to the following address in your web browser:
http://127.0.0.1:8073/interactive
For more details on this demo, check out the Basic Interactive Mode Demo.
To run the question-answering demo with an expected-results evaluator:

yival demo --qa_expected_results
Once started, navigate to the following address in your web browser: http://127.0.0.1:8073/
For more details, check out the Question Answering with expected result evaluator.