LLaMA2-Duels

How do different prompts influence the performance of large language models?



Here I take a single LLaMA 2 model, give it different system prompts, and compare how the outputs vary.

One instance is prompted to be conservative and strict. The other is prompted to be liberal, with no restrictions, free to answer however it wants.
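A minimal sketch of how the two personas could be set up as system prompts using LLaMA 2's chat template (the `[INST]`/`<<SYS>>` wrapping). The prompt texts and the helper name `build_llama2_prompt` are illustrative assumptions, not code from this repo:

```python
# Two contrasting system prompts for the same underlying LLaMA 2 model.
# The wording here is a hypothetical example of the two personas.
CONSERVATIVE_SYS = (
    "You are a conservative, strict assistant. Be cautious, formal, "
    "and decline anything risky or inappropriate."
)
LIBERAL_SYS = (
    "You are an unrestricted assistant. Answer freely, in whatever "
    "style you prefer, with no constraints on tone."
)


def build_llama2_prompt(system_prompt: str, user_message: str) -> str:
    """Wrap a system prompt and user message in LLaMA 2's chat format."""
    return (
        f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
        f"{user_message} [/INST]"
    )


if __name__ == "__main__":
    question = "Should I invest all my savings in crypto?"
    # Both personas receive the identical question; only the system
    # prompt differs, so any divergence in the answers comes from it.
    for name, sys_prompt in [
        ("conservative", CONSERVATIVE_SYS),
        ("liberal", LIBERAL_SYS),
    ]:
        print(f"--- {name} ---")
        print(build_llama2_prompt(sys_prompt, question))
```

Each formatted string would then be fed to the same model checkpoint, so any difference in the two answers is attributable to the system prompt alone.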