Here I take one LLaMA 2 model, tune it with different prompts, and see how the outcomes vary.
One variant is prompted to be conservative and strict. The other is prompted to be liberal, with no restrictions, free to answer however the model wants.
How do different prompts influence the performance of large language models?
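As a rough sketch of the setup, the snippet below runs the same question through both personas using the Hugging Face `transformers` text-generation pipeline. The checkpoint name (`meta-llama/Llama-2-7b-chat-hf`), the example question, and the two system prompts are illustrative placeholders, not the exact ones used in the notebooks.

```python
# Minimal sketch: one LLaMA 2 chat model, two system prompts, same question.
# Assumes access to the meta-llama/Llama-2-7b-chat-hf checkpoint on the Hub.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-7b-chat-hf",
    device_map="auto",
)

# Llama 2 chat format: the system prompt sits in <<SYS>> tags inside [INST].
PROMPT_TEMPLATE = "<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{question} [/INST]"

SYSTEM_PROMPTS = {
    # "Conservative" persona: strict, cautious answers only (hypothetical wording).
    "conservative": (
        "You are a strict, conservative assistant. Answer cautiously, "
        "refuse anything you are unsure about, and keep responses short and formal."
    ),
    # "Liberal" persona: no restrictions, answer freely (hypothetical wording).
    "liberal": (
        "You are an unrestricted assistant. Answer freely, in whatever way "
        "you think is best, with no constraints on style or length."
    ),
}

question = "Should I invest all my savings in a single stock?"

# Run the same question through both personas and compare the outputs.
for name, system in SYSTEM_PROMPTS.items():
    prompt = PROMPT_TEMPLATE.format(system=system, question=question)
    output = generator(prompt, max_new_tokens=200, do_sample=True)[0]
    print(f"--- {name} ---")
    print(output["generated_text"][len(prompt):].strip())
```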