OpenMOSS/CoLLiE

Running examples/alpaca/train.py on a V100 fails with "No module named 'petrel_client'" — does anyone know how to fix this?

Closed this issue · 2 comments

Command used:
CUDA_VISIBLE_DEVICES=4,5,6,7 torchrun --rdzv_backend=c10d --rdzv_endpoint=localhost:29402 --nnodes=1 --nproc_per_node=4 train.py
Error message:
[INFO] [comm.py:643:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
Traceback (most recent call last):
  File "train.py", line 67, in <module>
    state_dict = LlamaForCausalLM.load_parallel_state_dict(
  File "/home/collie/collie/examples/alpaca/../../collie/models/llama/model.py", line 365, in load_parallel_state_dict
    if not io_driver.exists(path):
  File "/home/collie/collie/examples/alpaca/../../collie/driver/io/petrel.py", line 76, in exists
    from petrel_client.client import Client
ModuleNotFoundError: No module named 'petrel_client'
[... the same traceback is repeated by the remaining three of the four worker processes, partially interleaved ...]
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 4188) of binary: /usr/bin/python
Traceback (most recent call last):
  File "/usr/local/bin/torchrun", line 33, in <module>
    sys.exit(load_entry_point('torch==2.1.0a0+fe05266', 'console_scripts', 'torchrun')())
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/run.py", line 794, in main
    run(args)
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/run.py", line 785, in run
    elastic_launch(
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launcher/api.py", line 134, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
train.py FAILED

The example's load_parallel_state_dict call uses protocol="petrel", which routes checkpoint IO through petrel_client, an internal S3/Ceph client that is typically not installed outside the authors' cluster — hence the ModuleNotFoundError. Point path at a local copy of the weights instead. Change

state_dict = LlamaForCausalLM.load_parallel_state_dict(
    path="hdd:s3://opennlplab_hdd/models/llama/llama-7b-hf",
    config=config,
    protocol="petrel",
    format="hf"
)

to

state_dict = LlamaForCausalLM.load_parallel_state_dict(
    path=pretrained_path,
    config=config,
)

and it will work. Alternatively, you can use from_pretrained directly:

model = LlamaForCausalLM.from_pretrained(pretrained_path, config=config)
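
For context, here is a minimal sketch of how the corrected calls fit together. The imports, the CollieConfig.from_pretrained usage, and the pretrained_path value are assumptions based on the alpaca example, not taken verbatim from train.py:

# Sketch only: adapt the imports and paths to your setup.
from collie import CollieConfig
from collie.models import LlamaForCausalLM

pretrained_path = "/path/to/llama-7b-hf"  # local copy of the HF-format weights
config = CollieConfig.from_pretrained(pretrained_path)

# Either load the sharded state dict yourself (local path, default IO driver)...
state_dict = LlamaForCausalLM.load_parallel_state_dict(
    path=pretrained_path,
    config=config,
)

# ...or let from_pretrained handle the loading in one step.
model = LlamaForCausalLM.from_pretrained(pretrained_path, config=config)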
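If you actually do need to load from the S3 path with protocol="petrel", the petrel_client package has to be importable before the IO driver lazily imports it (as the traceback shows, the import happens inside the driver's exists method). A small preflight check, using only the standard library and assuming nothing about CoLLiE internals:

# Fail fast with a clear message instead of a mid-startup ModuleNotFoundError.
import importlib.util

if importlib.util.find_spec("petrel_client") is None:
    raise RuntimeError(
        "protocol='petrel' requires the petrel_client package (Petrel OSS SDK); "
        "install it on your cluster or switch to a local path."
    )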

Thanks a lot!