Compilation Problem with Pip Install
etibuteau opened this issue · 2 comments
I am unable to install the package using the provided command because the build fails:
```
pip install "git+https://github.com/patrick-kidger/generalised_shapelets/#egg=torchshapelets&subdirectory=torchshapelets"
```
The problem seems to be with OpenMP.
I am using:
- Ubuntu 20.04
- Python 3.8.10
- PyTorch 1.10
- x86_64-linux-gnu-gcc as my compiler
The error log is:
```
ERROR: Command errored out with exit status 1:
command: /usr/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-_1t8npic/torchshapelets/torchshapelets/setup.py'"'"'; __file__='"'"'/tmp/pip-install-_1t8npic/torchshapelets/torchshapelets/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-68btjbfr
cwd: /tmp/pip-install-_1t8npic/torchshapelets/torchshapelets
Complete output (39 lines):
running bdist_wheel
/home/etienne/.local/lib/python3.8/site-packages/torch/utils/cpp_extension.py:381: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.
warnings.warn(msg.format('we could not find ninja.'))
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.8
creating build/lib.linux-x86_64-3.8/torchshapelets
copying src/torchshapelets/regularisation.py -> build/lib.linux-x86_64-3.8/torchshapelets
copying src/torchshapelets/discrepancies.py -> build/lib.linux-x86_64-3.8/torchshapelets
copying src/torchshapelets/__init__.py -> build/lib.linux-x86_64-3.8/torchshapelets
copying src/torchshapelets/shapelet_transform.py -> build/lib.linux-x86_64-3.8/torchshapelets
running build_ext
building '_impl' extension
creating build/temp.linux-x86_64-3.8
creating build/temp.linux-x86_64-3.8/src
x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/etienne/.local/lib/python3.8/site-packages/torch/include -I/home/etienne/.local/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/etienne/.local/lib/python3.8/site-packages/torch/include/TH -I/home/etienne/.local/lib/python3.8/site-packages/torch/include/THC -I/usr/include/python3.8 -c src/discrepancies.cpp -o build/temp.linux-x86_64-3.8/src/discrepancies.o -fvisibility=hidden -fopenmp -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=_impl -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/etienne/.local/lib/python3.8/site-packages/torch/include -I/home/etienne/.local/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/etienne/.local/lib/python3.8/site-packages/torch/include/TH -I/home/etienne/.local/lib/python3.8/site-packages/torch/include/THC -I/usr/include/python3.8 -c src/pytorchbind.cpp -o build/temp.linux-x86_64-3.8/src/pytorchbind.o -fvisibility=hidden -fopenmp -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=_impl -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/etienne/.local/lib/python3.8/site-packages/torch/include -I/home/etienne/.local/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/etienne/.local/lib/python3.8/site-packages/torch/include/TH -I/home/etienne/.local/lib/python3.8/site-packages/torch/include/THC -I/usr/include/python3.8 -c src/shapelet_transform.cpp -o build/temp.linux-x86_64-3.8/src/shapelet_transform.o -fvisibility=hidden -fopenmp -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=_impl -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
src/shapelet_transform.cpp: In function ‘std::tuple<at::Tensor, at::Tensor> torchshapelets::shapelet_transform(at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, int64_t, const std::function<at::Tensor(at::Tensor, at::Tensor, at::Tensor, at::Tensor)>&, at::Tensor)’:
src/shapelet_transform.cpp:313:9: error: ‘num_shapelets’ not specified in enclosing ‘parallel’
313 | for (int64_t shapelet_index = 0; shapelet_index < num_shapelets; ++shapelet_index) {
| ^~~
src/shapelet_transform.cpp:310:17: error: enclosing ‘parallel’
310 | #pragma omp parallel for default(none) \
| ^~~
src/shapelet_transform.cpp:316:116: error: ‘num_shapelet_samples’ not specified in enclosing ‘parallel’
316 | auto shapelet_times = torch::linspace(0, length.detach().item(), num_shapelet_samples, length.options());
| ^
src/shapelet_transform.cpp:310:17: error: enclosing ‘parallel’
310 | #pragma omp parallel for default(none) \
| ^~~
src/shapelet_transform.cpp:360:66: error: ‘num_samples’ not specified in enclosing ‘parallel’
360 | std::tie(discrepancy, index) = detail::continuous_min(times[0], times[-1] - length, min_fn, num_samples);
| ~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
src/shapelet_transform.cpp:310:17: error: enclosing ‘parallel’
310 | #pragma omp parallel for default(none) \
| ^~~
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
----------------------------------------
ERROR: Failed building wheel for torchshapelets
```
Hmm. You might be using a different version of OpenMP than the one we used? FWIW, OpenMP can be pretty flaky; in edge cases it can be implemented in slightly different ways on different OSes, compilers, and so on.
(Incidentally, I do notice that you're certainly not using the right versions of the other libraries, although that might not affect this.)
What might be simplest in your case is just to add the missing variables to the shared(...) specifiers, as in the sketch below.
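For reference, the rule at play: with `default(none)`, every variable referenced inside the parallel region must appear in an explicit data-sharing clause. GCC 9 and later follow OpenMP 5.0 semantics, under which const-qualified variables are no longer predetermined shared, so code that compiled on older GCCs can start failing with exactly this error. A minimal, self-contained sketch that reproduces it; the variable name is borrowed from the error log, and everything else (including the assumption that the variable is a const local) is hypothetical:

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    // A const-qualified local; whether the library's variables are const is
    // an assumption here, but it is the usual trigger for this error.
    const int64_t num_shapelets = 4;

    // Compiled with `g++ -fopenmp`, GCC 8 accepts this (const variables were
    // predetermined shared under OpenMP 4.5), while GCC 9+ rejects it with
    // "'num_shapelets' not specified in enclosing 'parallel'".
    #pragma omp parallel for default(none)
    for (int64_t i = 0; i < num_shapelets; ++i) {
        std::printf("%ld\n", static_cast<long>(i));
    }
}
```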
Thank you for the quick response.
To fix the issue, I added num_shapelets, num_shapelet_samples, and num_samples to the shared(...) clause of the #pragma omp parallel for default(none) at line 310 of shapelet_transform.cpp.