ashawkey/RAD-NeRF

Inference on CPU?

OnceJune opened this issue · 0 comments

Hi, I tried to run the test script on a MacBook (CPU only) but got this error:

```
/Users/me/anaconda3/envs/rad-nerf/lib/python3.6/site-packages/torch/autocast_mode.py:141: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling
  warnings.warn('User provided device_type of 'cuda', but CUDA is not available. Disabling')
Traceback (most recent call last):
  File "/Users/me/Work/data/agi/infer/RAD-NeRF-main/raymarching/raymarching.py", line 10, in <module>
    import _raymarching_face as _backend
ModuleNotFoundError: No module named '_raymarching_face'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "test.py", line 131, in <module>
    from nerf.network import NeRFNetwork
  File "/Users/me/Work/data/agi/infer/RAD-NeRF-main/nerf/network.py", line 7, in <module>
    from .renderer import NeRFRenderer
  File "/Users/me/Work/data/agi/infer/RAD-NeRF-main/nerf/renderer.py", line 10, in <module>
    import raymarching
  File "/Users/me/Work/data/agi/infer/RAD-NeRF-main/raymarching/__init__.py", line 1, in <module>
    from .raymarching import *
  File "/Users/me/Work/data/agi/infer/RAD-NeRF-main/raymarching/raymarching.py", line 12, in <module>
    from .backend import _backend
  File "/Users/me/Work/data/agi/infer/RAD-NeRF-main/raymarching/backend.py", line 36, in <module>
    'bindings.cpp',
  File "/Users/me/anaconda3/envs/rad-nerf/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1136, in load
    keep_intermediates=keep_intermediates)
  File "/Users/me/anaconda3/envs/rad-nerf/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1347, in _jit_compile
    is_standalone=is_standalone)
  File "/Users/me/anaconda3/envs/rad-nerf/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1430, in _write_ninja_file_and_build_library
    is_standalone)
  File "/Users/me/anaconda3/envs/rad-nerf/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1534, in _prepare_ldflags
    extra_ldflags.append(f'-L{_join_cuda_home("lib64")}')
  File "/Users/me/anaconda3/envs/rad-nerf/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 2035, in _join_cuda_home
    raise EnvironmentError('CUDA_HOME environment variable is not set. '
OSError: CUDA_HOME environment variable is not set. Please set it to your CUDA install root.
```
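
If I read the traceback right, the prebuilt `_raymarching_face` module isn't installed, so `raymarching/backend.py` falls back to JIT-compiling the CUDA extension with `torch.utils.cpp_extension.load`, roughly like the sketch below (source names and flags are illustrative, not the repo's exact values). That build path goes through nvcc, hence the `CUDA_HOME` error:

```python
# Rough sketch of the JIT-build pattern used by raymarching/backend.py
# (source file names and flags here are illustrative, not the repo's exact values).
import os
from torch.utils.cpp_extension import load

_src_path = os.path.dirname(os.path.abspath(__file__))

_backend = load(
    name='_raymarching_face',
    sources=[os.path.join(_src_path, 'src', f) for f in ['raymarching.cu', 'bindings.cpp']],
    extra_cuda_cflags=['-O3'],
    verbose=True,
)
# Compiling the .cu source requires nvcc, so torch needs CUDA_HOME to point
# at a CUDA toolkit, which doesn't exist on a CPU-only MacBook.
```

So on a machine without the CUDA toolkit the import can never succeed as-is.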

Can anyone help with this? Thanks in advance.
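
For reference, torch itself confirms there is no CUDA in this environment, which matches the autocast warning at the top:

```python
import torch

print(torch.cuda.is_available())  # False on this MacBook
print(torch.version.cuda)         # None for a CPU-only torch build
```

So the question is really whether the CUDA extensions can be skipped or replaced for CPU-only inference.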