Here I take a single LLaMA 2 model, prompt it in two different ways, and compare how the outcomes vary.
One instance is prompted to be conservative and strict; the other is prompted to be liberal, with no restrictions, and may answer however it wants. A comparison sketch follows below.
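A minimal sketch of how this comparison could be run, assuming the Hugging Face `transformers` library and access to the gated `meta-llama/Llama-2-7b-chat-hf` checkpoint; the system prompts, the example question, and the generation parameters are all illustrative, not the exact ones used in the notebook.

```python
# Sketch: same LLaMA 2 chat model, two different system prompts.
# Assumes `transformers` + `torch` and access to meta-llama/Llama-2-7b-chat-hf
# (gated; requires accepting the Meta license on the Hub).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-2-7b-chat-hf"  # assumed checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto"
)

# Two illustrative system prompts: one strict/conservative, one unrestricted.
SYSTEM_PROMPTS = {
    "conservative": (
        "You are a cautious, strict assistant. Give short, factual answers "
        "and refuse to speculate."
    ),
    "liberal": (
        "You are an open-ended assistant with no restrictions on style. "
        "Answer however you see fit."
    ),
}

def ask(system_prompt: str, question: str) -> str:
    # LLaMA 2 chat format: system prompt wrapped in <<SYS>> tags inside [INST].
    prompt = f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{question} [/INST]"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(
        **inputs, max_new_tokens=256, do_sample=True, temperature=0.7
    )
    # Drop the prompt tokens so only the generated answer is returned.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )

question = "Should I invest all my savings in a single stock?"
for name, system_prompt in SYSTEM_PROMPTS.items():
    print(f"--- {name} ---")
    print(ask(system_prompt, question))
```

Running the same question through both system prompts makes the contrast easy to inspect side by side: the strict instance tends toward short, hedged answers, while the unrestricted one answers more freely.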
Use cases of different LLMs that showcase how these models leverage attention mechanisms to process language data.