Single-machine, single-GPU run fails. 3090 + CUDA 11.3 + torch 1.10
Closed · 1 comment
I changed the configuration to:
export CUDA_VISIBLE_DEVICES=0
GPUS_PER_NODE=1
NNODES=1
But then I got the following error. What could be causing it?
Traceback (most recent call last):
  File "tune_cpm_ant.py", line 47, in <module>
    tune.run(data)
  File "/home/shanhoo3/fkb/remote_project/cpm/cpm-live/examples/tune.py", line 222, in run
    self.forward(train_dataloader, eval_dataloader, cls_num=self.cls_num)
  File "/home/shanhoo3/fkb/remote_project/cpm/cpm-live/examples/tune.py", line 122, in forward
    loss = self._forward(train_data, cls_num=cls_num)
  File "/home/shanhoo3/fkb/remote_project/cpm/cpm-live/examples/tune.py", line 350, in _forward
    loss = self.loss_function(logits, targets.view(-1))
  File "/home/shanhoo3/anaconda3/envs/cpm/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/shanhoo3/anaconda3/envs/cpm/lib/python3.8/site-packages/bmtrain/loss/cross_entropy.py", line 200, in forward
    w = (target != self.ignore_index).int()
RuntimeError: CUDA error: no kernel image is available for execution on the device
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 16863) of binary: /home/shanhoo3/anaconda3/envs/cpm/bin/python
Traceback (most recent call last):
  File "/home/shanhoo3/anaconda3/envs/cpm/bin/torchrun", line 33, in <module>
    sys.exit(load_entry_point('torch==1.10.1', 'console_scripts', 'torchrun')())
  File "/home/shanhoo3/anaconda3/envs/cpm/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 345, in wrapper
    return f(*args, **kwargs)
  File "/home/shanhoo3/anaconda3/envs/cpm/lib/python3.8/site-packages/torch/distributed/run.py", line 719, in main
    run(args)
  File "/home/shanhoo3/anaconda3/envs/cpm/lib/python3.8/site-packages/torch/distributed/run.py", line 710, in run
    elastic_launch(
  File "/home/shanhoo3/anaconda3/envs/cpm/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/shanhoo3/anaconda3/envs/cpm/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 259, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
This is BMTrain-related; see OpenBMB/BMTrain#81.
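For background, "no kernel image is available for execution on the device" usually means the compiled CUDA code being invoked (here, a BMTrain kernel reached via cross_entropy.py) ships neither a cubin nor JIT-able PTX for the GPU's compute capability — an RTX 3090 is sm_86. A minimal sketch of that compatibility rule, using the arch-string format that `torch.cuda.get_arch_list()` returns (the helper name `kernel_image_available` is my own, not a real API):

```python
def kernel_image_available(arch_list, capability):
    """Check whether a binary built for `arch_list` (e.g. ['sm_70', 'compute_70'])
    can run on a GPU with compute `capability` (major, minor)."""
    major, minor = capability
    for arch in arch_list:
        kind, _, cc = arch.partition("_")
        built = (int(cc[0]), int(cc[1:]))
        # cubin ('sm_XY') is binary-compatible with devices of the same
        # major architecture and an equal or higher minor revision
        if kind == "sm" and built[0] == major and built[1] <= minor:
            return True
        # PTX ('compute_XY') can be JIT-compiled for any newer device
        if kind == "compute" and built <= (major, minor):
            return True
    return False


try:
    # On the failing machine this prints what the installed binaries target;
    # guarded because torch/a GPU may be absent where this snippet runs.
    import torch
    archs = torch.cuda.get_arch_list()
    cap = torch.cuda.get_device_capability(0)
    print("built for:", archs, "device:", cap,
          "covered:", kernel_image_available(archs, cap))
except Exception:
    pass
```

If `sm_86` (or at least a `compute_80`-class PTX entry) is missing from the arch list of the extension that crashes, rebuilding that extension for the 3090 is the usual fix.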