starling

Starling-7B is an open LLM developed through Reinforcement Learning from AI Feedback (RLAIF), scoring 8.09 on MT-Bench with GPT-4 as the judge.

Overview

The provided Python script demonstrates interactions with the Starling-7B language model for various language generation tasks. It highlights three scenarios:

  • Single-turn Conversation: Asking a question and obtaining a response.
  • Multi-turn Conversation: Continuing a conversation with multiple exchanges.
  • Coding Prompt: Generating code based on a provided prompt.

Ensure the transformers library and its dependencies are installed. Modify the prompts in the script or create your own prompts to interact with the model.
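Once the dependencies are in place, loading the model and running a single-turn prompt looks roughly like the sketch below. The helper name generate and the generation settings are illustrative rather than taken from the notebook; the prompt format ("GPT4 Correct User: ... GPT4 Correct Assistant:") follows the berkeley-nest/Starling-LM-7B-alpha model card.

    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "berkeley-nest/Starling-LM-7B-alpha"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # device_map="auto" requires the accelerate package; drop it to load on CPU.
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    def generate(formatted_prompt, max_new_tokens=256):
        inputs = tokenizer(formatted_prompt, return_tensors="pt").to(model.device)
        output_ids = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            pad_token_id=tokenizer.eos_token_id,
        )
        # Decode only the newly generated tokens, not the echoed prompt.
        return tokenizer.decode(
            output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
        )

    # Single-turn conversation
    prompt = "What is AI?"
    single_turn = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:"
    print(generate(single_turn))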

Usage

Example Prompts

Modify the prompt variables in the code to experiment with different inputs; a sketch of how these prompts plug into the chat format follows the list. For instance:

  • For a question: prompt = "What is AI?"
  • For a coding prompt: code_prompt = "Write a FastAPI code to develop a REST API."
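Fed into the chat format, these prompts might be wired up as shown below. This is a rough sketch that reuses the generate helper from the Overview sketch; the separate Code User / Code Assistant roles for coding prompts follow the model card.

    # Multi-turn conversation: earlier turns are concatenated into one prompt.
    first_question = "What is AI?"
    first_answer = generate(
        f"GPT4 Correct User: {first_question}<|end_of_turn|>GPT4 Correct Assistant:"
    )
    follow_up = "Give one everyday example of it."
    multi_turn = (
        f"GPT4 Correct User: {first_question}<|end_of_turn|>"
        f"GPT4 Correct Assistant: {first_answer}<|end_of_turn|>"
        f"GPT4 Correct User: {follow_up}<|end_of_turn|>"
        f"GPT4 Correct Assistant:"
    )
    print(generate(multi_turn))

    # Coding prompt: the model card uses dedicated roles for code generation.
    code_prompt = "Write a FastAPI code to develop a REST API."
    coding = f"Code User: {code_prompt}<|end_of_turn|>Code Assistant:"
    print(generate(coding, max_new_tokens=512))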

Acknowledgments

This code uses the Hugging Face transformers library. The model (berkeley-nest/Starling-LM-7B-alpha) was developed by the Berkeley NEST team.