Question: how does template work with dataset in examples: llama3_lora_sft.yaml
Closed this issue · 2 comments
chuangzhidan commented
Reminder
- I have read the README and searched the existing issues.
Reproduction
**When I run the command line shown in the examples:
CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_lora_sft.yaml
I changed the model to Qwen1.5, kept the dataset the same (identity, alpaca_gpt4_en), and changed "template" to qwen.**
Is this OK? If so, can I do the same for other models?
Thank you!
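The change described above amounts to a small edit of `llama3_lora_sft.yaml`. A minimal sketch of the relevant keys, assuming the key names from LLaMA-Factory's example config (the exact Qwen1.5 checkpoint path is an assumption; substitute whichever checkpoint you actually use):

```yaml
# Sketch of the edited fields in llama3_lora_sft.yaml.
# Model path below is hypothetical -- use your own Qwen1.5 checkpoint.
model_name_or_path: Qwen/Qwen1.5-7B-Chat   # was: a Llama-3 checkpoint

dataset: identity,alpaca_gpt4_en           # unchanged
template: qwen                             # was: llama3
```

The general rule is that `template` must match the chat format the model was trained with, so when swapping models you look up the matching template name in the supported-models table of the README.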
Expected behavior
No response
System Info
No response
Others
No response
hiyouga commented
It should be ok. For more information, read: https://github.com/hiyouga/LLaMA-Factory?tab=readme-ov-file#supported-models
chuangzhidan commented
> It should be ok. For more information, read: https://github.com/hiyouga/LLaMA-Factory?tab=readme-ov-file#supported-models
Thank you!