OpenBMB/ToolBench

[Bug] Prompt construction of classes `ToolLLaMA` and `ToolLLaMALoRA`


The prompt construction code in toolbench/inference/LLM/tool_llama_model.py#L97-L103:

for message in conversation_history:
    role = roles[message['role']]
    content = message['content']  # only 'content' is read; 'function_call' is never used
    if role == "System" and functions != []:
        content = process_system_message(content, functions)
    prompt += f"{role}: {content}\n"
prompt += "Assistant:\n"

When the role is assistant, the content included in the prompt contains only the Thought and omits the Action and Action Input, because the action details are stored under the `function_call` key of the message rather than in `content`.

Here is how `conversation_history` is constructed, from toolbench/inference/LLM/tool_llama_model.py#L116-L123:

message = {
    "role": "assistant",
    "content": thought,        # only this reaches the prompt
    "function_call": {         # dropped by the prompt loop above
        "name": action,
        "arguments": action_input
    }
}
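
To make the gap concrete, here is a minimal illustration with made-up example values (the thought text, function name, and arguments below are hypothetical, and the Action / Action Input labels are assumed to match ToolBench's ReAct-style training data):

# Hypothetical example values for demonstration only.
message = {
    "role": "assistant",
    "content": "I need to look up the current weather first.",
    "function_call": {
        "name": "get_weather",
        "arguments": '{"location": "Berlin"}',
    },
}

# What the current loop emits for this message:
#   Assistant: I need to look up the current weather first.
#
# What the training-format prompt would contain (assuming ReAct-style labels):
#   Assistant: I need to look up the current weather first.
#   Action: get_weather
#   Action Input: {"location": "Berlin"}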

This bug makes the assistant turns of the inference-time prompt inconsistent with the prompt format used during training, potentially degrading evaluation performance. A possible fix is sketched below.
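
One way to fix this is to re-attach the `function_call` details to the assistant content before it is written into the prompt. This is only a sketch, not the maintainers' patch; it assumes the training data uses ReAct-style `Action:` / `Action Input:` labels, so the exact labels should be aligned with whatever the training prompts actually contain:

for message in conversation_history:
    role = roles[message['role']]
    content = message['content']
    if role == "System" and functions != []:
        content = process_system_message(content, functions)
    if role == "Assistant" and "function_call" in message:
        # Re-attach the action so the turn matches the training format.
        # The label names here are an assumption; align them with the
        # actual training data.
        fc = message["function_call"]
        content += f"\nAction: {fc['name']}\nAction Input: {fc['arguments']}"
    prompt += f"{role}: {content}\n"
prompt += "Assistant:\n"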