[Bug]: python setup.py install (_dlib_pybind11: not found error) / pip install dlib (CUDA disabled error)
eddiehe99 opened this issue · 11 comments
What Operating System(s) are you seeing this problem on?
Windows
dlib version
19.24.0, 19.24.1, 19.24.2, 19.24.3, 19.24.4
Python version
3.7, 3.8, 3.9, 3.10, 3.11, 3.12
Compiler
MSVC 19
Expected Behavior
`import dlib` succeeds without errors, and `dlib.DLIB_USE_CUDA` returns `True`.
Current Behavior
Environment
Windows 11
CMake 3.29.6
CUDA 12.5
cuDNN 9.2
I have tested my CUDA installation in different ways (including testing in PyTorch). The CUDA works fine.
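For reference, what I mean by "testing in PyTorch" is roughly the following sanity check (a sketch; it assumes a CUDA-enabled PyTorch build is installed):
# Rough sanity check of the CUDA/cuDNN runtime via PyTorch (assumes a CUDA build).
import torch

print(torch.__version__)               # PyTorch version
print(torch.version.cuda)              # CUDA version PyTorch was built against
print(torch.cuda.is_available())       # True if a CUDA device is usable
print(torch.backends.cudnn.version())  # cuDNN version PyTorch sees
x = torch.rand(1024, 1024, device="cuda")
print((x @ x).sum().item())            # launch a simple kernel to confirm it really runs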
`python setup.py install` error
If I run `python setup.py install`, the terminal says that dlib is installed successfully and logs:
-- Looking for cuDNN install...
-- Found cuDNN: C:/Program Files/NVIDIA/CUDNN/v9.2/lib/12.5/x64/cudnn.lib
-- Enabling CUDA support for dlib. DLIB WILL USE CUDA, compute capabilities: 50
-- Configuring done (15.7s)
-- Generating done (0.5s)
However, dlib is NOT actually installed successfully.
When I use `import dlib` in a .py file, the terminal shows the error `ImportError: DLL load failed while importing _dlib_pybind11: not found`, as described in #2977.
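One way to narrow this down (a diagnostic sketch of mine, not an official dlib step; the paths are from my install) is to register the CUDA and cuDNN bin directories as DLL directories before importing, since Python 3.8+ on Windows no longer uses PATH to resolve an extension module's dependent DLLs:
# Diagnostic sketch: if the import succeeds after this, the failure is a DLL
# search-path problem rather than a broken build. Paths below are from my install.
import os

for d in (r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.5\bin",
          r"C:\Program Files\NVIDIA\CUDNN\v9.2\bin\12.5"):
    if os.path.isdir(d):
        os.add_dll_directory(d)

import dlib
print(dlib.DLIB_USE_CUDA)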
`pip install dlib --verbose` error
If I run `pip install dlib --verbose`, the terminal says "CUDA was found but your compiler failed to compile a simple CUDA program so dlib isn't going to use CUDA."
However, dlib is indeed installed: I can use `import dlib` in a .py file without error, but `dlib.DLIB_USE_CUDA` returns `False`.
Steps to Reproduce
Run `pip install dlib --verbose`, or
run `python setup.py install` following http://dlib.net/compile.html
Anything else?
I have tested Python 3.7/3.8/3.9/3.10/3.11/3.12 with dlib 19.24.0/19.24.1/19.24.2/19.24.3/19.24.4, and every combination shows the same problem.
Hard to say, but something about your compiler or CUDA install is broken, which is outside dlib's control.
Do this to find out more:
cd dlib/cmake_utils/test_for_cuda/
mkdir build
cd build
cmake ..
cmake --build .
and see why that test program fails to build. It's just a trivial CUDA program, so you have to figure out why your computer is not capable of building it. All dlib's installer does is run that test build; if it fails, it prints that message about not being able to use CUDA.
Nothing seems to go wrong.
PS C:\Users\Eddie\Downloads\dlib-19.24.3> cd dlib/cmake_utils/test_for_cuda/
PS C:\Users\Eddie\Downloads\dlib-19.24.3\dlib\cmake_utils\test_for_cuda> mkdir build
Directory: C:\Users\Eddie\Downloads\dlib-19.24.3\dlib\cmake_utils\test_for_cuda
Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 2024/8/3 14:59 build
PS C:\Users\Eddie\Downloads\dlib-19.24.3\dlib\cmake_utils\test_for_cuda> cd build
PS C:\Users\Eddie\Downloads\dlib-19.24.3\dlib\cmake_utils\test_for_cuda\build> cmake ..
-- Building for: Visual Studio 17 2022
-- The C compiler identification is MSVC 19.40.33812.0
-- The CXX compiler identification is MSVC 19.40.33812.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.40.33807/bin/Hostx64/x64/cl.exe - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.40.33807/bin/Hostx64/x64/cl.exe - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
CMake Warning (dev) at CMakeLists.txt:10 (find_package):
Policy CMP0146 is not set: The FindCUDA module is removed. Run "cmake
--help-policy CMP0146" for policy details. Use the cmake_policy command to
set the policy and suppress this warning.
This warning is for project developers. Use -Wno-dev to suppress it.
-- Found CUDA: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.5 (found suitable version "12.5", minimum required is "7.5")
-- Configuring done (14.5s)
-- Generating done (0.1s)
-- Build files have been written to: C:/Users/Eddie/Downloads/dlib-19.24.3/dlib/cmake_utils/test_for_cuda/build
PS C:\Users\Eddie\Downloads\dlib-19.24.3\dlib\cmake_utils\test_for_cuda\build> cmake --build .
MSBuild version 17.10.4+10fbfbf2e for .NET Framework
1>Checking Build System
Building NVCC (Device) object CMakeFiles/cuda_test.dir/Debug/cuda_test_generated_cuda_test.cu.obj
cuda_test.cu
cuda_test.cu
tmpxft_00004358_00000000-10_cuda_test.cudafe1.cpp
Building Custom Rule C:/Users/Eddie/Downloads/dlib-19.24.3/dlib/cmake_utils/test_for_cuda/CMakeLists.txt
CMake is re-running because C:/Users/Eddie/Downloads/dlib-19.24.3/dlib/cmake_utils/test_for_cuda/build/CMakeFiles/generate.stamp is out-of-date.
the file 'C:/Users/Eddie/Downloads/dlib-19.24.3/dlib/cmake_utils/test_for_cuda/build/CMakeFiles/cuda_test.dir/cuda_test_generated_cuda_test.cu.obj.depend'
is newer than 'C:/Users/Eddie/Downloads/dlib-19.24.3/dlib/cmake_utils/test_for_cuda/build/CMakeFiles/generate.stamp.depend'
result='-1'
CMake Warning (dev) at CMakeLists.txt:10 (find_package):
Policy CMP0146 is not set: The FindCUDA module is removed. Run "cmake
--help-policy CMP0146" for policy details. Use the cmake_policy command to
set the policy and suppress this warning.
This warning is for project developers. Use -Wno-dev to suppress it.
-- Configuring done (0.3s)
-- Generating done (0.3s)
-- Build files have been written to: C:/Users/Eddie/Downloads/dlib-19.24.3/dlib/cmake_utils/test_for_cuda/build
cuda_test.vcxproj -> C:\Users\Eddie\Downloads\dlib-19.24.3\dlib\cmake_utils\test_for_cuda\build\Debug\cuda_test.lib
Building Custom Rule C:/Users/Eddie/Downloads/dlib-19.24.3/dlib/cmake_utils/test_for_cuda/CMakeLists.txt
PS C:\Users\Eddie\Downloads\dlib-19.24.3\dlib\cmake_utils\test_for_cuda\build>
You didn't run cmake --build .
The output of `cmake --build .` is:
PS C:\Users\Eddie\Downloads\dlib-19.24.3\dlib\cmake_utils\test_for_cuda\build> cmake --build .
MSBuild version 17.10.4+10fbfbf2e for .NET Framework
1>Checking Build System
Building NVCC (Device) object CMakeFiles/cuda_test.dir/Debug/cuda_test_generated_cuda_test.cu.obj
cuda_test.cu
cuda_test.cu
tmpxft_00001e98_00000000-10_cuda_test.cudafe1.cpp
Building Custom Rule C:/Users/Eddie/Downloads/dlib-19.24.3/dlib/cmake_utils/test_for_cuda/CMakeLists.txt
cuda_test.vcxproj -> C:\Users\Eddie\Downloads\dlib-19.24.3\dlib\cmake_utils\test_for_cuda\build\Debug\cuda_test.lib
Building Custom Rule C:/Users/Eddie/Downloads/dlib-19.24.3/dlib/cmake_utils/test_for_cuda/CMakeLists.txt
Hard to say what's wrong with your system. But do `pip uninstall dlib` and ensure it is really uninstalled. Then do `python setup.py install` and see if it works now.
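A quick way to confirm nothing stale is left behind (a sketch on top of the suggestion above, not part of dlib's own instructions):
# Hedged check that dlib is really gone before rebuilding: if the first line prints a
# path, a stale install (e.g. an old .egg in site-packages) is still being picked up.
import importlib.util, site

spec = importlib.util.find_spec("dlib")
print("dlib still importable from:", spec.origin if spec else None)
print("site-packages searched:", site.getsitepackages())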
Unfortunately, the same error mentioned above occurs.
ImportError: DLL load failed while importing _dlib_pybind11
I commented on a closed issue about the same thing. I happened to find the fix.
I found the fix for this, at least on Windows for me: https://stackoverflow.com/questions/62255730/dlib-importerror-in-windows-10-on-line-dlib-pybind11-import-dll-load-failed
This can be solved by copying the cudnn64_7.dll (available here: https://developer.nvidia.com/cudnn) into the %CUDA_PATH%/bin directory (probably something like this: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\bin)
I am using the latest versions of the following:
Visual Studio 2022
CMake
TensorRT 10.4 GA for Windows 10, 11, Server 2019, Server 2022 and CUDA 12.0 to 12.6 ZIP Package
cuDNN 9.4.0
CUDA Toolkit 12.6 Update 1
Install the CUDA toolkit first and reboot. After that, install cuDNN.
PATH should already contain these entries from the install:
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.6\bin
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.6\libnvvp
C:\Program Files\NVIDIA\CUDNN\v9.4\bin\
C:\Program Files\NVIDIA\CUDNN\v9.4\
With system variables like these:
CUDA_PATH C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.6
CUDA_PATH_V12_6 C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.6
Now, in the directory C:\Program Files\NVIDIA\CUDNN\v9.4:
Move all the 12.6 files from C:\Program Files\NVIDIA\CUDNN\v9.4\bin\12.6 to C:\Program Files\NVIDIA\CUDNN\v9.4\bin
Move all the 12.6 files from C:\Program Files\NVIDIA\CUDNN\v9.4\include\12.6 to C:\Program Files\NVIDIA\CUDNN\v9.4\include
Move all the 12.6 files from C:\Program Files\NVIDIA\CUDNN\v9.4\lib\12.6 to C:\Program Files\NVIDIA\CUDNN\v9.4\lib
(A scripted sketch of these moves follows below.)
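If you prefer to script those moves, a rough sketch (the paths and the 12.6 subfolder name are from this particular install and are assumptions on your machine; run it from an elevated prompt since Program Files is write-protected):
# Rough sketch: flatten the cuDNN 9.x versioned subfolders (bin\12.6, include\12.6,
# lib\12.6) into their parent directories. It copies instead of moving so the
# original layout stays intact. Paths below are assumptions for this install.
import shutil
from pathlib import Path

CUDNN_ROOT = Path(r"C:\Program Files\NVIDIA\CUDNN\v9.4")

for sub in ("bin", "include", "lib"):
    versioned = CUDNN_ROOT / sub / "12.6"
    if not versioned.is_dir():
        continue
    for item in versioned.iterdir():
        dest = CUDNN_ROOT / sub / item.name
        if item.is_dir():
            shutil.copytree(item, dest, dirs_exist_ok=True)
        else:
            shutil.copy2(item, dest)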
If you're using TensorRT as well, then you want to extract the zip and move all files and folders to C:\Program Files\NVIDIA\CUDNN\v9.4
Then you want to copy the file like the Stack Overflow answer says (for me it was this):
This can be solved by copying cudnn64_9.dll (from C:\Program Files\NVIDIA\CUDNN\v9.4\bin) into the %CUDA_PATH%/bin directory (probably something like this: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.6\bin)
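The same copy step as a small sketch (again, the concrete filename and paths are assumptions for this particular install):
# Sketch: copy cudnn64_9.dll into %CUDA_PATH%\bin so the CUDA bin directory also
# contains the cuDNN DLL. Relies on the CUDA_PATH environment variable being set.
import os, shutil

src = r"C:\Program Files\NVIDIA\CUDNN\v9.4\bin\cudnn64_9.dll"
dst = os.path.join(os.environ["CUDA_PATH"], "bin")
shutil.copy2(src, dst)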
Then
git clone https://github.com/davisking/dlib.git
cd dlib
mkdir build
cd build
cmake .. -DDLIB_USE_CUDA=1 -DUSE_AVX_INSTRUCTIONS=1 -DCMAKE_PREFIX_PATH="C:/Program Files/NVIDIA/CUDNN/v9.4"
You should see
-- Looking for cuDNN install...
-- Found cuDNN: C:/Program Files/NVIDIA/CUDNN/v9.4/lib/cudnn.lib
-- Building a CUDA test project to see if your compiler is compatible with CUDA...
CMake Warning (dev) at C:/Users/user/Downloads/dlib-master/dlib-master/dlib/cmake_utils/test_for_cuda/CMakeLists.txt:10 (find_package):
Policy CMP0146 is not set: The FindCUDA module is removed. Run "cmake
--help-policy CMP0146" for policy details. Use the cmake_policy command to
set the policy and suppress this warning. This warning is for project developers. Use -Wno-dev to suppress it.
-- Building a cuDNN test project to check if you have the right version of cuDNN installed...
CMake Warning (dev) at C:/Users/user/Downloads/dlib-master/dlib-master/dlib/cmake_utils/test_for_cudnn/CMakeLists.txt:7 (find_package):
Policy CMP0146 is not set: The FindCUDA module is removed. Run "cmake
--help-policy CMP0146" for policy details. Use the cmake_policy command to
set the policy and suppress this warning. This warning is for project developers. Use -Wno-dev to suppress it.
-- Enabling CUDA support for dlib. DLIB WILL USE CUDA, compute capabilities: 50
-- Configuring done (12.2s)
-- Generating done (0.1s)
-- Build files have been written to: C:/Users/user/Downloads/dlib-master/dlib-master/dlib/build
Then
cmake --build . --config Release
and then you should see
-- Looking for cuDNN install...
-- Found cuDNN: C:/Program Files/NVIDIA/CUDNN/v9.4/lib/cudnn.lib
-- Enabling CUDA support for dlib. DLIB WILL USE CUDA, compute capabilities: 50
-- Configuring done (0.6s)
-- Generating done (0.2s)
Run the following command from the source directory
python setup.py install
and then you should see more output like:
-- Found CUDA: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6 (found suitable version "12.6", minimum required is "7.5")
-- Looking for cuDNN install...
-- Found cuDNN: C:/Program Files/NVIDIA/CUDNN/v9.4/lib/cudnn.lib
-- Building a CUDA test project to see if your compiler is compatible with CUDA...
CMake Warning (dev) at C:/Users/user/Downloads/dlib-master/dlib-master/dlib/cmake_utils/test_for_cuda/CMakeLists.txt:10 (find_package):
Policy CMP0146 is not set: The FindCUDA module is removed. Run "cmake
--help-policy CMP0146" for policy details. Use the cmake_policy command to
set the policy and suppress this warning. This warning is for project developers. Use -Wno-dev to suppress it.
-- Building a cuDNN test project to check if you have the right version of cuDNN installed...
CMake Warning (dev) at C:/Users/user/Downloads/dlib-master/dlib-master/dlib/cmake_utils/test_for_cudnn/CMakeLists.txt:7 (find_package):
Policy CMP0146 is not set: The FindCUDA module is removed. Run "cmake
--help-policy CMP0146" for policy details. Use the cmake_policy command to
set the policy and suppress this warning. This warning is for project developers. Use -Wno-dev to suppress it.
-- Enabling CUDA support for dlib. DLIB WILL USE CUDA, compute capabilities: 50
-- Configuring done (23.3s)
-- Generating done (0.1s)
-- Build files have been written to: C:/Users/user/Downloads/dlib-master/dlib-master/build/temp.win-amd64-cpython-311/Release
Invoking CMake build: 'cmake --build . --config Release -- /m'
MSBuild version 17.10.4+10fbfbf2e for .NET Framework
A successful build then ends with
Installed c:\users\user\appdata\local\programs\python\python311\lib\site-packages\dlib-19.24.99-py3.11-win-amd64.egg
Processing dependencies for dlib==19.24.99
Finished processing dependencies for dlib==19.24.99
After all is said and done
Python 3.11.9 (tags/v3.11.9:de54cf5, Apr 2 2024, 10:12:12) [MSC v.1938 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import dlib
>>> print(dlib.DLIB_USE_CUDA)
True
I highly appreciate your detailed steps! It works!
Based on the different combinations of configurations I tried, these steps are crucial after installing CUDA and cuDNN:
- Move files from the versioned subfolders to their parent folders:
Move all the 12.6 files from C:\Program Files\NVIDIA\CUDNN\v9.4\bin\12.6 to C:\Program Files\NVIDIA\CUDNN\v9.4\bin
Move all the 12.6 files from C:\Program Files\NVIDIA\CUDNN\v9.4\include\12.6 to C:\Program Files\NVIDIA\CUDNN\v9.4\include
Move all the 12.6 files from C:\Program Files\NVIDIA\CUDNN\v9.4\lib\12.6 to C:\Program Files\NVIDIA\CUDNN\v9.4\lib
- Specify the CMAKE_PREFIX_PATH:
Run the commands below:
git clone https://github.com/davisking/dlib.git
cd dlib
mkdir build
cd build
cmake .. -DDLIB_USE_CUDA=1 -DUSE_AVX_INSTRUCTIONS=1 -DCMAKE_PREFIX_PATH="C:/Program Files/NVIDIA/CUDNN/v9.4"
cmake --build . --config Release
python setup.py install
But actually, I did not do this. I did the following instead:
First, I added the system variable CMAKE_PREFIX_PATH with the value "C:/Program Files/NVIDIA/CUDNN/v9.4".
Second, I rebooted.
Third, I ran `python setup.py install` from the source directory.
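If you would rather not set a global system variable, an equivalent per-process approach should also work (a sketch, under the assumption that CMake picks up CMAKE_PREFIX_PATH from the environment of the build process, which it does for find_package/find_library):
# Sketch: set CMAKE_PREFIX_PATH only for this build instead of as a system variable.
# Run this from the dlib source directory.
import os, subprocess, sys

env = os.environ.copy()
env["CMAKE_PREFIX_PATH"] = r"C:\Program Files\NVIDIA\CUDNN\v9.4"
subprocess.check_call([sys.executable, "setup.py", "install"], env=env)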
I searched the internet and found that this bizarre error may be related to changes in NVIDIA's cuDNN packaging. For cuDNN 8.x and earlier, which are provided as zip files, the tutorials on the internet tell users to move the files in the bin, include, and lib folders of cuDNN into the corresponding bin, include, and lib folders of CUDA. However, cuDNN 9.0.0 and later (graphical installation) create a version subfolder such as 12.6, and the automatically generated system variables do not point to the right file path. I tried to configure the system variables myself; it failed.
Consequently, I think the Tarball installation guide and your suggestions are correct.
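A small check that illustrates the point (the directory and DLL names are from this install) is to see whether the cuDNN DLL is actually resolvable from where the installer's variables point:
# Sketch: check whether cudnn64_9.dll can be found and loaded. With the cuDNN 9.x
# layout the DLL sits under ...\CUDNN\v9.4\bin\12.6, which is not the directory the
# installer puts on PATH, and on Python 3.8+ PATH is not searched for DLLs anyway.
import ctypes, os

name = "cudnn64_9.dll"
hits = [d for d in os.environ["PATH"].split(os.pathsep)
        if d and os.path.exists(os.path.join(d, name))]
print("PATH directories containing", name, ":", hits)

try:
    # ctypes on Python 3.8+ uses the same restricted DLL search rules as extension
    # modules (system dirs + add_dll_directory dirs), so this mirrors the dlib import.
    ctypes.WinDLL(name)
    print(name, "loads")
except OSError as e:
    print(name, "does not load:", e)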
Yeah, I am new to all this, and I found out that if you're making the Python wheel you don't need to do the CMake step. But I guess it does not hurt, lol.
Warning: this issue has been inactive for 35 days and will be automatically closed on 2024-11-19 if there is no further activity.
If you are waiting for a response but haven't received one it's possible your question is somehow inappropriate. E.g. it is off topic, you didn't follow the issue submission instructions, or your question is easily answerable by reading the FAQ, dlib's official compilation instructions, dlib's API documentation, or a Google search.
I had similar problems and couldn't fix them just by moving the files to parent directories. In the end, what worked for me was importing os and adding the CUDA and cuDNN paths to BOTH PATH and the DLL directories:
# inside dlib's __init__.py; 'ON' is the value CMake substituted at build time
if 'ON' == 'ON':
    import os
    # append the CUDA and cuDNN bin directories to PATH
    os.environ["PATH"] += ";" + r"C:\Program Files\NVIDIA\CUDNN\v9.4\bin\12.6" + ";" + r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.6\bin"
    # on Python 3.8+ this is the part that actually matters for dependent DLLs
    os.add_dll_directory(r"C:\Program Files\NVIDIA\CUDNN\v9.4\bin\12.6")
    os.add_dll_directory(r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.6\bin")
And for anyone struggling with this, you can edit this file in site-packages, for example: .venv\Lib\site-packages\dlib\__init__.py,
or open the .whl with 7-Zip and edit it there, for example: dlib-19.24.6-cp312-cp312-win_amd64.whl\dlib\__init__.py
(just don't forget to force-reinstall the wheel if you do the latter!)
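After patching the file (and, if you edited the wheel itself, force-reinstalling it, e.g. with pip install --force-reinstall on the .whl), a quick verification sketch:
# Quick check after editing __init__.py: run in a fresh interpreter so the edited
# file is actually used; both lines should now succeed.
import dlib
print(dlib.__version__)
print(dlib.DLIB_USE_CUDA)  # expect True for a CUDA-enabled build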