hiyouga/LLaMA-Factory

Question: how does template work with dataset in examples: llama3_lora_sft.yaml

Closed this issue · 2 comments

Reminder

  • I have read the README and searched the existing issues.

Reproduction

**When I run the command line shown in the examples:
CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_lora_sft.yaml

I changed the model to Qwen1.5, kept the dataset the same (identity, alpaca_gpt4_en), and set "template" to qwen.**

Is this OK? If so, can I do the same for other models as well?
Thank you!
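For reference, the change described above amounts to editing a few fields of the example config. This is only a sketch: the exact Hugging Face model ID (`Qwen/Qwen1.5-7B-Chat`) and the remaining fields are assumptions based on the shipped `llama3_lora_sft.yaml`, not a verified working file.

```yaml
### model — switched from Llama 3 to Qwen1.5 (model ID is an assumption)
model_name_or_path: Qwen/Qwen1.5-7B-Chat

### dataset — unchanged from the example
dataset: identity,alpaca_gpt4_en
# template must match the model family so the chat prompt is formatted correctly
template: qwen
```

The `template` field tells LLaMA-Factory how to wrap each dataset sample in the model's chat format (system/user/assistant markers), so it must always be paired with the matching model family.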

Expected behavior

No response

System Info

No response

Others

No response