[Bug]:
nys87 opened this issue · 4 comments
What Operating System(s) are you seeing this problem on?
Ubuntu 22.04.2 LTS
dlib version
19.24.2
Python version
3.10.12
Compiler
cmake version 3.22.1
Expected Behavior
Hello,
and sorry for my bad English.
I compiled dlib 19.24.2 specifically to use my GPU (a GTX 1050) to improve performance.
CPU:
Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz
RAM:
16 GB
Test:
python3 -c "import dlib; print(dlib.DLIB_USE_CUDA); print (dlib.cuda.get_num_devices())"
True
1
I use the Python 3 module 'face_recognition 1.3.0' for face recognition in JPG/JPEG photos.
I expected it to run faster (since I compiled dlib with GPU/CUDA support), but the speed remained almost the same as before (when dlib was compiled without GPU/CUDA support).
I also noticed in nvidia-smi, where I watch my GPU, that Python never uses more than 390 MiB of memory, even though the adapter has 4 GB.
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.86.05 Driver Version: 535.86.05 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA GeForce GTX 1050 Off | 00000000:01:00.0 Off | N/A |
| N/A 64C P0 N/A / ERR! | 862MiB / 4096MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| 0 N/A N/A 1851 G /usr/lib/xorg/Xorg 4MiB |
| 0 N/A N/A 828135 C python3 378MiB |
| 0 N/A N/A 3833450 C python3 86MiB |
| 0 N/A N/A 3834429 C python3 390MiB |
+---------------------------------------------------------------------------------------+
What can I do?
Please share your advice; I'll post any command output as quickly as possible.
Current Behavior
When using the GPU, the speed remains the same as when using the CPU.
Steps to Reproduce
#!/usr/bin/env python3
import face_recognition
import time
start_time = time.time()
img = face_recognition.load_image_file('./IMG.jpeg')
faces = face_recognition.face_encodings(img)
print("--- %s seconds ---" % (time.time() - start_time))
An average-size image is processed in about 2 seconds.
Anything else?
No response
The CUDA runtime has a long startup time, so run it a few times before calling time.time(),
so that the startup of CUDA isn't being included in the timing.
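A minimal sketch of that advice, applied to the script above (`timed_after_warmup` is a hypothetical helper for illustration, not part of face_recognition or dlib): run the encoding a few times to absorb the one-time CUDA startup cost, then time only a steady-state call.

```python
import time

def timed_after_warmup(fn, warmup=3):
    """Call fn a few times to absorb one-time startup cost (e.g. CUDA
    initialization), then time a single steady-state call.

    Returns (result_of_final_call, elapsed_seconds)."""
    for _ in range(warmup):
        fn()  # warm-up runs; the first call pays the startup cost
    start = time.time()
    result = fn()
    return result, time.time() - start
```

Used with the reproduction script, this would look something like `faces, secs = timed_after_warmup(lambda: face_recognition.face_encodings(img))`, so `secs` reflects only the per-image encoding time.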
Warning: this issue has been inactive for 35 days and will be automatically closed on 2023-10-04 if there is no further activity.
If you are waiting for a response but haven't received one it's possible your question is somehow inappropriate. E.g. it is off topic, you didn't follow the issue submission instructions, or your question is easily answerable by reading the FAQ, dlib's official compilation instructions, dlib's API documentation, or a Google search.
Notice: this issue has been closed because it has been inactive for 45 days. You may reopen this issue if it has been closed in error.