huggingface/chat-ui

Configuration for Llama 2

aisensiy opened this issue · 3 comments

I am trying to self-host Llama 2 with https://github.com/huggingface/text-generation-inference and https://github.com/huggingface/chat-ui . If I give chat-ui a configuration like this:

  {
    "name": "llama2-7b-chat",
    "datasetName": "llama2-7b-chat",
    "description": "A good alternative to ChatGPT",
    "endpoints": [{"url": "http://127.0.0.1:8081/generate_stream"}],
    "userMessageToken": "<|prompter|>",
    "assistantMessageToken": "<|assistant|>",
    "messageEndToken": "</s>",
    "preprompt": "Below are a series of dialogues between various people and an AI assistant. The AI tries to be helpful, polite, honest, sophisticated, emotionally aware, and humble-but-knowledgeable. The assistant is happy to help with almost anything, and will do its best to understand exactly what is needed. It also tries to avoid giving false or misleading information, and it caveats when it isn't entirely sure about the right answer. That said, the assistant is practical and really does its best, and doesn't let caution get too much in the way of being useful.\n-----\n",
    "promptExamples": [
      {
        "title": "Write an email from bullet list",
        "prompt": "As a restaurant owner, write a professional email to the supplier to get these products every week: \n\n- Wine (x10)\n- Eggs (x24)\n- Bread (x12)"
      }, {
        "title": "Code a snake game",
        "prompt": "Code a basic snake game in python, give explanations for each step."
      }, {
        "title": "Assist in a task",
        "prompt": "How do I make a delicious lemon cheesecake?"
      }
    ],
    "parameters": {
      "temperature": 0.8,
      "top_p": 0.95,
      "repetition_penalty": 1.8,
      "top_k": 10,
      "truncate": 1000,
      "max_new_tokens": 1024
    }
  }

It does not return good responses like https://huggingface.co/chat does.

[Screenshot: chat-ui with llama2-7b returning poor responses]
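As a sanity check, the TGI server itself can be queried directly (assuming the same server as in "endpoints" above; `/generate` is the non-streaming counterpart of `/generate_stream`):

    curl http://127.0.0.1:8081/generate \
        -X POST \
        -H 'Content-Type: application/json' \
        -d '{"inputs": "What is deep learning?", "parameters": {"max_new_tokens": 64}}'

If that returns reasonable text, the problem is in the prompt formatting rather than the server.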

Hey! You might need to tweak a few model settings to make it work. Here's how it looks for us (although we're still tweaking things):

    {
      "name": "meta-llama/Llama-2-70b-chat-hf",
      "datasetName": "meta-llama/Llama-2-70b-chat-hf",
      "description": "llamas!",
      "websiteUrl": "https://ai.meta.com/llama/",
      "userMessageToken": "[INST]",
      "assistantMessageToken": "[/INST]",
      "messageEndToken": "</s>",
      "preprompt": "[INST]<<SYS>>\nYou are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\n\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.<</SYS>>\n\n[/INST]",
      "promptExamples": [
        {
          "title": "Write an email from bullet list",
          "prompt": "As a restaurant owner, write a professional email to the supplier to get these products every week: \n\n- Wine (x10)\n- Eggs (x24)\n- Bread (x12)"
        }, {
          "title": "Code a snake game",
          "prompt": "Code a basic snake game in python, give explanations for each step."
        }, {
          "title": "Assist in a task",
          "prompt": "How do I make a delicious lemon cheesecake?"
        }
      ],
      "parameters": {
        "temperature": 0.3,
        "top_p": 0.95,
        "repetition_penalty": 1.2,
        "top_k": 50,
        "truncate": 1000,
        "max_new_tokens": 1024
      }
    }

The prompt & tokens are different, that's why. Let me know if that helped!
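To make the difference concrete: with the tokens above, the prompt sent for a first user turn looks roughly like this (a sketch, assuming chat-ui simply concatenates the preprompt, then userMessageToken + message + messageEndToken, then assistantMessageToken):

    [INST]<<SYS>>
    You are a helpful, respectful and honest assistant. ...
    <</SYS>>

    [/INST][INST]How do I make a delicious lemon cheesecake?</s>[/INST]

The OpenAssistant-style <|prompter|>/<|assistant|> tokens in the original config never appear in Llama 2's chat training format, which is likely why the responses look off.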

Yes, this is quite helpful. Thanks so much.

Building on this:

  1. When I alter the preprompt for a Llama 2-type model, it appears to have no impact; it's as though the preprompt isn't there. Sample config for .env.local:
MODELS=`[
  {
    "name": "Trelis/Llama-2-7b-chat-hf-function-calling",
    "datasetName": "Trelis/function_calling_extended",
    "description": "function calling Llama-7B-chat",
    "websiteUrl": "https://research.Trelis.com",
    "preprompt": "Respond in French to all questions",
    "userMessageToken": "[INST]",
    "assistantMessageToken": "[/INST]",
    "parameters": {
      "temperature": 0.01,
      "top_p": 0.95,
      "repetition_penalty": 1.2,
      "top_k": 50,
      "truncate": 1000,
      "max_new_tokens": 1024
    },
    "endpoints": [{
      "url": "http://127.0.0.1:8080"
    }]
  }
]`
  2. Llama 2 chat has unusual templating whereby the system message and the first user message must be wrapped together in a single [INST]...[/INST] block (reference format shown below). The best that can be done with chat-ui's default templating is to wrap the system message and each user input separately in [INST] and [/INST]. That said, I wouldn't expect that deviation to be significant enough for the preprompt to be ignored entirely... but maybe it is, OR maybe I'm making some other mistake?
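For reference, Meta's intended Llama 2 chat format looks like this (`<s>`/`</s>` are the tokenizer's BOS/EOS tokens; the `{...}` placeholders are mine):

    <s>[INST] <<SYS>>
    {system_prompt}
    <</SYS>>

    {user_msg_1} [/INST] {model_answer_1} </s><s>[INST] {user_msg_2} [/INST]

Note how the system block lives inside the same [INST] wrapper as the first user message, which chat-ui's token-pair templating can't express exactly.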