OptimalScale/LMFlow

[BUG] Finetuning ChatGLM with LoRA

wuhongyan123 opened this issue · 2 comments

Describe the bug
Sorry to trouble you. The doc demonstrates that the framework supports ChatGLM2. However, running the scripts "run_finetune_with_lora.sh" presents the error——
```
Traceback (most recent call last):
  File "/root/data/LMFlow_raw/examples/finetune.py", line 62, in <module>
    main()
  File "/root/data/LMFlow_raw/examples/finetune.py", line 55, in main
    model = AutoModel.get_model(model_args)
  File "/root/data/LMFlow/src/lmflow/models/auto_model.py", line 16, in get_model
    return HFDecoderModel(model_args, *args, **kwargs)
  File "/root/data/LMFlow/src/lmflow/models/hf_decoder_model.py", line 152, in __init__
    tokenizer = AutoTokenizer.from_pretrained(model_args.model_name_or_path, **tokenizer_kwargs)
  File "/opt/conda/envs/LMFLOW/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py", line 733, in from_pretrained
    raise ValueError(
ValueError: Tokenizer class ChatGLMTokenizer does not exist or is not currently imported.
```

Hope you can help with this problem, thank you!
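For context, this `ValueError` is raised by `transformers`' tokenizer registry: `ChatGLMTokenizer` is custom code that lives in the model repository, not inside the `transformers` library itself, so `AutoTokenizer` cannot resolve the class name from `tokenizer_config.json`. (For ChatGLM models, passing `trust_remote_code=True` to `AutoTokenizer.from_pretrained` is the usual way to allow that custom class to load, assuming a recent `transformers` version.) The sketch below only reproduces the error path offline, without downloading any model, to show where the exception comes from:

```python
import json
import os
import tempfile

from transformers import AutoTokenizer

# Reproduce the failure mode offline (no model download needed): a
# tokenizer_config.json that names a tokenizer class transformers does
# not ship triggers the same ValueError, because AutoTokenizer cannot
# import "ChatGLMTokenizer" from the library's own registry.
with tempfile.TemporaryDirectory() as tmpdir:
    with open(os.path.join(tmpdir, "tokenizer_config.json"), "w") as f:
        json.dump({"tokenizer_class": "ChatGLMTokenizer"}, f)
    try:
        AutoTokenizer.from_pretrained(tmpdir)
        err_msg = None
    except ValueError as exc:
        err_msg = str(exc)

print(err_msg)
```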

Thanks for your interest in LMFlow! LoRA support for ChatGLM is not available in our latest versions. You may try an older version such as v0.0.3 to see if it works. This problem is mainly caused by frequent dependency updates in transformers. We will schedule an update in the future and let you know once it is done. Sorry for the inconvenience 🙏
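A minimal sketch of the suggested workaround, assuming LMFlow tags its releases in git and that v0.0.3 pins a compatible `transformers` version (repository URL and editable-install step are assumptions, not confirmed by this thread):

```shell
# Hypothetical setup sketch: check out the v0.0.3 tag suggested above
# and reinstall LMFlow so its pinned dependencies take effect.
git clone https://github.com/OptimalScale/LMFlow.git
cd LMFlow
git checkout v0.0.3
pip install -e .
```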

Please try the fix in #619 (comment).