hiyouga/LLaMA-Factory

Error when training on a custom dataset

zhangkuo-zk opened this issue · 2 comments

Reminder

  • I have read the README and searched the existing issues.

Reproduction

[INFO|trainer.py:3305] 2024-05-16 14:03:34,400 >> Saving model checkpoint to saves/ChatGLM3-6B-Chat/lora/train_hinge2024
/root/miniconda3/envs/llama_factory/lib/python3.10/site-packages/peft/utils/save_and_load.py:154: UserWarning: Could not find a config file in /root/autodl-tmp/models/chatglm3-6b - will assume that the vocabulary was not modified.
  warnings.warn(
[INFO|tokenization_utils_base.py:2488] 2024-05-16 14:03:34,429 >> tokenizer config file saved in saves/ChatGLM3-6B-Chat/lora/train_hinge2024/tokenizer_config.json
[INFO|tokenization_utils_base.py:2497] 2024-05-16 14:03:34,430 >> Special tokens file saved in saves/ChatGLM3-6B-Chat/lora/train_hinge2024/special_tokens_map.json
***** train metrics *****
  epoch                    =     2.6667
  total_flos               =  1586103GF
  train_loss               =     2.6703
  train_runtime            = 0:02:08.47
  train_samples_per_second =      0.397
  train_steps_per_second   =      0.023
[INFO|modelcard.py:450] 2024-05-16 14:03:34,433 >> Dropping the following result as it does not have all the necessary fields:
{'task': {'name': 'Causal Language Modeling', 'type': 'text-generation'}}
Traceback (most recent call last):
  File "/root/miniconda3/envs/llama_factory/lib/python3.10/site-packages/gradio/queueing.py", line 566, in process_events
    response = await route_utils.call_process_api(
  File "/root/miniconda3/envs/llama_factory/lib/python3.10/site-packages/gradio/route_utils.py", line 270, in call_process_api
    output = await app.get_blocks().process_api(
  File "/root/miniconda3/envs/llama_factory/lib/python3.10/site-packages/gradio/blocks.py", line 1847, in process_api
    result = await self.call_function(
  File "/root/miniconda3/envs/llama_factory/lib/python3.10/site-packages/gradio/blocks.py", line 1445, in call_function
    prediction = await utils.async_iteration(iterator)
  File "/root/miniconda3/envs/llama_factory/lib/python3.10/site-packages/gradio/utils.py", line 629, in async_iteration
    return await iterator.__anext__()
  File "/root/miniconda3/envs/llama_factory/lib/python3.10/site-packages/gradio/utils.py", line 622, in __anext__
    return await anyio.to_thread.run_sync(
  File "/root/miniconda3/envs/llama_factory/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/root/miniconda3/envs/llama_factory/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2144, in run_sync_in_worker_thread
    return await future
  File "/root/miniconda3/envs/llama_factory/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "/root/miniconda3/envs/llama_factory/lib/python3.10/site-packages/gradio/utils.py", line 605, in run_sync_iterator_async
    return next(iterator)
  File "/root/miniconda3/envs/llama_factory/lib/python3.10/site-packages/gradio/utils.py", line 788, in gen_wrapper
    response = next(iterator)
  File "/root/autodl-tmp/LLaMA-Factory/src/llmtuner/webui/runner.py", line 275, in run_train
    yield from self._launch(data, do_train=True)
  File "/root/autodl-tmp/LLaMA-Factory/src/llmtuner/webui/runner.py", line 266, in _launch
    yield from self.monitor()
  File "/root/autodl-tmp/LLaMA-Factory/src/llmtuner/webui/runner.py", line 302, in monitor
    running_log, running_progress, running_loss = get_trainer_info(output_path, self.do_train)
  File "/root/autodl-tmp/LLaMA-Factory/src/llmtuner/webui/utils.py", line 95, in get_trainer_info
    running_loss = gr.Plot(gen_loss_plot(trainer_log))
  File "/root/autodl-tmp/LLaMA-Factory/src/llmtuner/extras/ploting.py", line 46, in gen_loss_plot
    ax.plot(steps, smooth(losses), color="#1f77b4", label="smoothed")
  File "/root/autodl-tmp/LLaMA-Factory/src/llmtuner/extras/ploting.py", line 24, in smooth
    last = scalars[0]
IndexError: list index out of range
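
The traceback shows smooth() seeding its moving average with scalars[0], so an empty loss list (a trainer log with no loss entries when the plot is drawn) raises IndexError. A minimal sketch of the pattern with an empty-input guard, assuming a plain exponential moving average; the actual ploting.py implementation and the upstream fix may differ:

    from typing import List

    def smooth(scalars: List[float], weight: float = 0.9) -> List[float]:
        # Hypothetical simplification of llmtuner/extras/ploting.py::smooth.
        if not scalars:  # guard: no loss entries have been logged yet
            return []
        last = scalars[0]  # the line that raised IndexError on an empty list
        smoothed = []
        for point in scalars:
            # exponential moving average of the raw loss values
            last = weight * last + (1 - weight) * point
            smoothed.append(last)
        return smoothed

Equivalently, gen_loss_plot() could skip plotting the smoothed curve when the log carries no loss values.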

Expected behavior

No response

System Info

No response

Others

No response

Update the code.
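
(That is, pull the latest LLaMA-Factory sources, for example with git pull, and rerun the training; newer code presumably guards against the empty loss log.)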

Duplicate of #3728