jonkhler/s2cnn

module 's2cnn.ops.gpu.lib_cufft' has no attribute 'destroy'

Closed this issue · 7 comments

While trying to run the shrec17 example in a conda3 Docker image with Python 3.6 (CUDA 9.0, NVIDIA drivers 384.111), I ran into the following error:

{'num_workers': 1, 'batch_size': 32, 'dataset': 'train', 'augmentation': 4, 'model_path': 'model.py', 'log_dir': 'my_run', 'learning_rate': 0.5}
Downloading http://3dvision.princeton.edu/ms/shrec17-data/train_perturbed.zip
Unzip data/train_perturbed.zip
Fix obj files
Downloading http://3dvision.princeton.edu/ms/shrec17-data/train.csv
Done!
402955 paramerters in total
5555 paramerters in the last layer
learning rate = 1 and batch size = 32
transform data/train_perturbed/046114.obj...
/root/miniconda3/lib/python3.6/site-packages/trimesh/triangles.py:188: RuntimeWarning: divide by zero encountered in true_divide
  center_mass = integrated[1:4] / volume
/root/miniconda3/lib/python3.6/site-packages/trimesh/triangles.py:188: RuntimeWarning: invalid value encountered in true_divide
  center_mass = integrated[1:4] / volume
transform data/train_perturbed/005351.obj...
transform data/train_perturbed/019736.obj...
transform data/train_perturbed/018758.obj...
transform data/train_perturbed/029336.obj...
transform data/train_perturbed/012867.obj...
transform data/train_perturbed/045223.obj...
transform data/train_perturbed/025009.obj...
transform data/train_perturbed/048329.obj...
transform data/train_perturbed/038370.obj...
transform data/train_perturbed/037326.obj...
transform data/train_perturbed/025172.obj...
transform data/train_perturbed/015628.obj...
transform data/train_perturbed/038990.obj...
transform data/train_perturbed/040417.obj...
transform data/train_perturbed/044571.obj...
transform data/train_perturbed/038458.obj...
transform data/train_perturbed/048180.obj...
transform data/train_perturbed/033437.obj...
transform data/train_perturbed/030847.obj...
transform data/train_perturbed/050627.obj...
transform data/train_perturbed/005628.obj...
transform data/train_perturbed/045656.obj...
transform data/train_perturbed/008172.obj...
transform data/train_perturbed/010100.obj...
transform data/train_perturbed/024292.obj...
transform data/train_perturbed/038671.obj...
transform data/train_perturbed/025215.obj...
transform data/train_perturbed/032604.obj...
transform data/train_perturbed/048823.obj...
transform data/train_perturbed/018781.obj...
transform data/train_perturbed/040830.obj...
transform data/train_perturbed/007740.obj...
Traceback (most recent call last):
  File "train.py", line 135, in <module>
    main(**args.__dict__)
  File "train.py", line 105, in main
    loss, correct = train_step(data, target)
  File "train.py", line 74, in train_step
    prediction = model(data)
  File "/root/miniconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 357, in __call__
    result = self.forward(*input, **kwargs)
  File "my_run/model.py", line 47, in forward
    x = self.sequential(x)  # [batch, feature, beta, alpha, gamma]
  File "/root/miniconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 357, in __call__
    result = self.forward(*input, **kwargs)
  File "/root/miniconda3/lib/python3.6/site-packages/torch/nn/modules/container.py", line 67, in forward
    input = module(input)
  File "/root/miniconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 357, in __call__
    result = self.forward(*input, **kwargs)
  File "/root/miniconda3/lib/python3.6/site-packages/s2cnn-1.0.0-py3.6-linux-x86_64.egg/s2cnn/nn/soft/s2_conv.py", line 40, in forward
    x = S2_fft_real(b_out=self.b_out)(x) # [l * m, batch, feature_in, complex]
  File "/root/miniconda3/lib/python3.6/site-packages/s2cnn-1.0.0-py3.6-linux-x86_64.egg/s2cnn/nn/soft/gpu/s2_fft.py", line 231, in forward
    return s2_fft(as_complex(x), b_out=self.b_out)
  File "/root/miniconda3/lib/python3.6/site-packages/s2cnn-1.0.0-py3.6-linux-x86_64.egg/s2cnn/nn/soft/gpu/s2_fft.py", line 27, in s2_fft
    output = _s2_fft(x, for_grad=for_grad, b_in=b_in, b_out=b_out) # [l * m, batch, complex]
  File "/root/miniconda3/lib/python3.6/site-packages/s2cnn-1.0.0-py3.6-linux-x86_64.egg/s2cnn/nn/soft/gpu/s2_fft.py", line 43, in _s2_fft
    plan = _setup_fft_plan(b_in, nbatch)
  File "/root/miniconda3/lib/python3.6/site-packages/s2cnn-1.0.0-py3.6-linux-x86_64.egg/s2cnn/nn/soft/gpu/s2_fft.py", line 146, in _setup_fft_plan
    plan = Plan1d_c2c(N=2 * b, batch=nbatch * 2 * b)
  File "/root/miniconda3/lib/python3.6/site-packages/s2cnn-1.0.0-py3.6-linux-x86_64.egg/s2cnn/ops/gpu/torchcufft.py", line 12, in __init__
    self.handler = cufft.plan1d_c2c(N, istride, idist, ostride, odist, batch)
AttributeError: module 's2cnn.ops.gpu.lib_cufft' has no attribute 'plan1d_c2c'
Exception ignored in: <bound method Plan1d_c2c.__del__ of <s2cnn.ops.gpu.torchcufft.Plan1d_c2c object at 0x7efc0f667438>>
Traceback (most recent call last):
  File "/root/miniconda3/lib/python3.6/site-packages/s2cnn-1.0.0-py3.6-linux-x86_64.egg/s2cnn/ops/gpu/torchcufft.py", line 18, in __del__
    cufft.destroy(self.handler)
AttributeError: module 's2cnn.ops.gpu.lib_cufft' has no attribute 'destroy'

Did the installation process (python setup.py install) work correctly?

[Against good practice] I have updated the error report above. The installation of the required libraries and of s2cnn was successful, except for an apparently harmless warning while installing lie_learn.

plan1d_c2c and destroy are both defined in /s2cnn/ops/gpu/plan_cufft.c.
This is actually the only file that needs to be compiled during the installation process (see /build.py).
I still think the problem is in the installation. Could you do a fresh, clean install and post the gcc commands, please?
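
If a clean reinstall still fails the same way, a quick sanity check (just a debugging sketch, not part of the repo) is to import the compiled extension directly and see whether it exposes the symbols the AttributeError complains about:

import importlib

# Import the compiled cuFFT extension from the traceback and check whether
# the two symbols named in the AttributeError are actually present.
lib_cufft = importlib.import_module("s2cnn.ops.gpu.lib_cufft")
for name in ("plan1d_c2c", "destroy"):
    print(name, "found" if hasattr(lib_cufft, name) else "MISSING")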

Solved. I had been hitting the following encoding issue:

UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 1264: ordinal not in range(128)

which I fixed by adding , encoding='utf-8' (rather than 'r', encoding='utf8') to the open() call in setup.py.
My bad, but you may want to double-check that the setup actually completes, as it reported success (and may have been masking the encoding issue). Closing, thank you.

I have added , encoding='utf-8', so the open() call in setup.py now looks like this:
long_description=open(os.path.join(os.path.dirname(__file__), "README.md"), encoding='utf-8').read(),

Then, after running python setup.py install, the encoding issue
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 1264: ordinal not in range(128)
is gone.
But when I run the shrec17 example, the following problem still remains:
module 's2cnn.ops.gpu.lib_cufft' has no attribute 'destroy'

Can you help me?
Thanks a lot!

@zhixuanli try using long_description=open(os.path.join(os.path.dirname(__file__), "README.md"), 'r', encoding='utf8').read()
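
For context, a small illustration (not part of setup.py): without an explicit encoding argument, open() falls back to the locale's preferred encoding, which is often plain ASCII inside a minimal Docker image, so reading the non-ASCII bytes in README.md raises the UnicodeDecodeError. Passing encoding='utf8' makes the read locale-independent:

import locale

# In a bare container the preferred encoding is often ASCII ('ANSI_X3.4-1968'),
# which is why reading README.md without an explicit codec fails.
print(locale.getpreferredencoding(False))

# Explicit codec: works regardless of the container's locale settings.
with open("README.md", 'r', encoding='utf8') as f:
    long_description = f.read()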

@blancaag
After using long_description=open(os.path.join(os.path.dirname(__file__), "README.md"), 'r', encoding='utf8').read(), when I run the equivariance_error example, a new problem occurred that looks like a bug in numpy:

main.py:38: UserWarning: volatile was removed and now has no effect. Use with torch.no_grad(): instead.
x = torch.autograd.Variable(torch.randn(1, 12, 128, 128), volatile=True).cuda() # [batch, feature, beta, alpha]
Traceback (most recent call last):
  File "main.py", line 40, in <module>
    y = phi(x)
  File "main.py", line 27, in phi
    x = s2_conv(x)
  File "/home/lizhixuan/anaconda3/envs/s2cnn_test/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/lizhixuan/anaconda3/envs/s2cnn_test/lib/python3.6/site-packages/s2cnn-1.0.0-py3.6-linux-x86_64.egg/s2cnn/nn/soft/s2_conv.py", line 40, in forward
    x = S2_fft_real(b_out=self.b_out)(x) # [l * m, batch, feature_in, complex]
  File "/home/lizhixuan/anaconda3/envs/s2cnn_test/lib/python3.6/site-packages/s2cnn-1.0.0-py3.6-linux-x86_64.egg/s2cnn/nn/soft/gpu/s2_fft.py", line 231, in forward
    return s2_fft(as_complex(x), b_out=self.b_out)
  File "/home/lizhixuan/anaconda3/envs/s2cnn_test/lib/python3.6/site-packages/s2cnn-1.0.0-py3.6-linux-x86_64.egg/s2cnn/nn/soft/gpu/s2_fft.py", line 27, in s2_fft
    output = _s2_fft(x, for_grad=for_grad, b_in=b_in, b_out=b_out) # [l * m, batch, complex]
  File "/home/lizhixuan/anaconda3/envs/s2cnn_test/lib/python3.6/site-packages/s2cnn-1.0.0-py3.6-linux-x86_64.egg/s2cnn/nn/soft/gpu/s2_fft.py", line 44, in _s2_fft
    wigner = _setup_wigner(b_in, nl=b_out, weighted=not for_grad, device=device)
  File "/home/lizhixuan/anaconda3/envs/s2cnn_test/lib/python3.6/site-packages/s2cnn-1.0.0-py3.6-linux-x86_64.egg/s2cnn/nn/soft/gpu/s2_fft.py", line 107, in _setup_wigner
    dss = __setup_wigner(b, nl, weighted)
  File "/home/lizhixuan/anaconda3/envs/s2cnn_test/lib/python3.6/site-packages/s2cnn-1.0.0-py3.6-linux-x86_64.egg/s2cnn/nn/soft/gpu/s2_fft.py", line 113, in __setup_wigner
    from lie_learn.representations.SO3.wigner_d import wigner_d_matrix
  File "/home/lizhixuan/anaconda3/envs/s2cnn_test/lib/python3.6/site-packages/lie_learn/representations/SO3/wigner_d.py", line 4, in <module>
    from lie_learn.representations.SO3.pinchon_hoggan.pinchon_hoggan_dense import Jd, rot_mat
  File "/home/lizhixuan/anaconda3/envs/s2cnn_test/lib/python3.6/site-packages/lie_learn/representations/SO3/pinchon_hoggan/pinchon_hoggan_dense.py", line 11, in <module>
    Jd = np.load(os.path.join(os.path.dirname(__file__), 'J_dense_0-278.npy'), encoding='latin1')
  File "/home/lizhixuan/anaconda3/envs/s2cnn_test/lib/python3.6/site-packages/numpy/lib/npyio.py", line 421, in load
    pickle_kwargs=pickle_kwargs)
  File "/home/lizhixuan/anaconda3/envs/s2cnn_test/lib/python3.6/site-packages/numpy/lib/format.py", line 650, in read_array
    array = pickle.load(fp, **pickle_kwargs)
_pickle.UnpicklingError: pickle data was truncated
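
The traceback ends while unpickling lie_learn's J_dense_0-278.npy, so one way to narrow this down is to try loading that file on its own, outside s2cnn. This is only a diagnostic sketch; the path is derived from the installed lie_learn package and may differ on other machines:

import os
import numpy as np
import lie_learn

# Build the path to the same .npy file that the traceback points at.
path = os.path.join(os.path.dirname(lie_learn.__file__),
                    'representations', 'SO3', 'pinchon_hoggan', 'J_dense_0-278.npy')

# A truncated or incomplete file would show up as an unexpectedly small size here.
print(os.path.getsize(path), "bytes")

# Same load call as pinchon_hoggan_dense.py; allow_pickle is needed on newer NumPy.
Jd = np.load(path, encoding='latin1', allow_pickle=True)
print(len(Jd), "matrices loaded")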