divelab/GraphBP

RuntimeError: CUDA error: the provided PTX was compiled with an unsupported toolchain.

Closed this issue · 2 comments

Hi, I ran into the following error while running: CUDA_VISIBLE_DEVICES=0 python main_gen.py

Traceback (most recent call last):
  File "main_gen.py", line 6, in <module>
    runner = Runner(conf)
  File "/home/yipyewmun/GitHub/GraphBP/GraphBP/runner.py", line 25, in __init__
    self.model = GraphBP(**conf['model'])
  File "/home/yipyewmun/GitHub/GraphBP/GraphBP/model/graphbp.py", line 40, in __init__
    self.feat_net = self.feat_net.to('cuda')
  File "/home/yipyewmun/anaconda3/envs/gen/lib/python3.8/site-packages/torch/nn/modules/module.py", line 852, in to
    return self._apply(convert)
  File "/home/yipyewmun/anaconda3/envs/gen/lib/python3.8/site-packages/torch/nn/modules/module.py", line 530, in _apply
    module._apply(fn)
  File "/home/yipyewmun/anaconda3/envs/gen/lib/python3.8/site-packages/torch/nn/modules/module.py", line 552, in _apply
    param_applied = fn(param)
  File "/home/yipyewmun/anaconda3/envs/gen/lib/python3.8/site-packages/torch/nn/modules/module.py", line 850, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
RuntimeError: CUDA error: the provided PTX was compiled with an unsupported toolchain.
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

Any ideas on how I can resolve this? :)

Hi @yipy0005,

Thank you for your interest. Does torch.cuda.is_available() return True?
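
A quick way to check (a minimal sketch, assuming a standard CUDA build of PyTorch) is to print what PyTorch was built against alongside the availability check, since this PTX error usually indicates that the installed NVIDIA driver is older than the CUDA toolkit used to build the PyTorch binaries:

    import torch

    # Report the CUDA toolkit version PyTorch was built with and whether a
    # usable GPU/driver is visible. A "PTX compiled with an unsupported
    # toolchain" error typically points to a driver that is too old for the
    # toolkit the PyTorch wheels were built with.
    print("PyTorch version: ", torch.__version__)
    print("Built with CUDA: ", torch.version.cuda)
    print("CUDA available:  ", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("Detected GPU:    ", torch.cuda.get_device_name(0))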

Hi! I resolved the error by upgrading the NVIDIA driver to the latest version. It works perfectly now! Thanks! :)
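
For anyone hitting the same error: a minimal sanity check after upgrading the driver (a sketch, assuming nvidia-smi is on the PATH and a CUDA build of PyTorch is installed) is to compare the reported driver version against the CUDA toolkit PyTorch was built with, then force a small allocation on the GPU:

    import subprocess
    import torch

    # Driver version as reported by nvidia-smi; the driver must be new enough
    # for the CUDA toolkit that the PyTorch wheel was built with.
    driver = subprocess.run(
        ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
        capture_output=True, text=True,
    ).stdout.strip()
    print("NVIDIA driver:           ", driver)
    print("PyTorch built with CUDA: ", torch.version.cuda)

    # A tiny allocation forces CUDA context initialization; if the driver and
    # toolkit are compatible this succeeds instead of raising the PTX error.
    x = torch.zeros(1, device="cuda")
    print("CUDA sanity check passed on", x.device)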