RuntimeError: could not create a primitive
Stradichenko opened this issue · 0 comments
Stradichenko commented
I wanted to share the following error I got after trying to run the `inference_script.py` from the README, updating the `query`, `input`, and `output` file paths.
```
$ python3 inference_script.py
/$USER/miniconda3/envs/AudioSep/lib/python3.10/site-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /opt/conda/conda-bld/pytorch_1670525552843/work/aten/src/ATen/native/TensorShape.cpp:3190.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
Some weights of the model checkpoint at roberta-base were not used when initializing RobertaModel: ['lm_head.layer_norm.bias', 'lm_head.decoder.weight', 'lm_head.bias', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.dense.weight']
- This IS expected if you are initializing RobertaModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing RobertaModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of the model checkpoint at roberta-base were not used when initializing RobertaModel: ['lm_head.layer_norm.bias', 'lm_head.decoder.weight', 'lm_head.bias', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.dense.weight']
- This IS expected if you are initializing RobertaModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing RobertaModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Load AudioSep model from [checkpoint/audiosep_base_4M_steps.ckpt]
Separate audio from [/my/file/path/file.wav] with textual query [my_textual_query_to_separate]
Traceback (most recent call last):
  File "/file/to/local/audio-agi/AudioSep/inference_script.py", line 16, in <module>
    inference(model, audio_file, text, output_file, device)
  File "/file/to/local/audio-agi/AudioSep/pipeline.py", line 47, in inference
    sep_segment = model.ss_model(input_dict)["waveform"]
  File "/$USER/miniconda3/envs/AudioSep/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/path/to/local/AudioSep/models/resunet.py", line 648, in forward
    output_dict = self.base(
  File "/$USER/miniconda3/envs/AudioSep/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/file/to/local/AudioSep/models/resunet.py", line 555, in forward
    x = self.pre_conv(x)
  File "/$USER/miniconda3/envs/AudioSep/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/$USER/miniconda3/envs/AudioSep/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 463, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/$USER/miniconda3/envs/AudioSep/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 459, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: could not create a primitive
```
This error occurred on the latest commit (2150ca8). In the snippet above I changed the paths for readability.

On another note, I had previously hit this error:
```
  File "/$USER/miniconda3/lib/python3.11/ctypes/__init__.py", line 376, in __init__
    self._handle = _dlopen(self._name, mode)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: libcudart.so.12: cannot open shared object file: No such file or directory
```
That one got solved by adding the following to my `.bashrc`:

```
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
```