Video2x Docker image not recognising GPU on Ubuntu 22.04
Hi, this is an amazing piece of software! I'm struggling to find a way to get it to recognise my GPU, though. At least I think that's what's happening: 480p => 2x upscaling on an RTX 4090 is taking multiple seconds per frame.
nvidia-smi shows the GPU is available inside a vanilla image, so the link between Docker and NVIDIA works (and I've used it on other projects). But running Video2X with the command below, execution is glacial (> 1 s per frame) and I get this output:
-v $(pwd):/host \
ghcr.io/k4yt3x/video2x:latest \
-i joe_30s_deinterlaced.mp4 \
-o joe_30s_upscaled.mp4 \
-f realesrgan \
-r 2 \
-m realesr-animevideov3
Video processing started; press SPACE to pause/resume, 'q' to abort.
[2024-11-01 10:53:07.580] [info] Output video dimensions: 1280x960
[libx264 @ 0x7086f8074740] using SAR=1/1
[libx264 @ 0x7086f8074740] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0x7086f8074740] profile High, level 4.2, 4:2:0, 8-bit
[libx264 @ 0x7086f8074740] 264 - core 164 r3108 31e19f9 - H.264/MPEG-4 AVC codec - Copyleft 2003-2023 - http://www.videolan.org/x264.html - options: cabac=1 ref=5 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=8 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=2 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=30 lookahead_threads=5 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=3 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=50 rc=crf mbtree=1 crf=20.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
[0 llvmpipe (LLVM 18.1.8, 256 bits)] queueC=0[1] queueG=0[1] queueT=0[1]
[0 llvmpipe (LLVM 18.1.8, 256 bits)] bugsbn1=0 bugbilz=0 bugcopc=0 bugihfa=0
[0 llvmpipe (LLVM 18.1.8, 256 bits)] fp16-p/s/u/a=1/1/1/1 int8-p/s/u/a=1/1/1/1
[0 llvmpipe (LLVM 18.1.8, 256 bits)] subgroup=8 basic/vote/ballot/shuffle=1/1/1/1
[0 llvmpipe (LLVM 18.1.8, 256 bits)] fp16-8x8x16/16x8x8/16x8x16/16x16x16=0/0/0/0
Processing frame 0/1805 (0.00%); time elapsed: 5s^C
I even tried creating a custom image that adds vulkan-tools, but I hit the same issue. Here's the Dockerfile I used:
# Use the existing Video2X image as the base
FROM ghcr.io/k4yt3x/video2x:latest
# Install git and vulkan-tools
RUN pacman -Sy --noconfirm git vulkan-tools && \
    rm -rf /var/cache/pacman/pkg/*
Running the custom image gives the same result:
docker run --gpus all --runtime=nvidia -it --rm \
-v $(pwd):/host \
video2x-with-vulkaninfo \
-i examples/joe_30s_deinterlaced.mp4 \
-o examples/joe_30s_upscaled.mp4 \
-f realesrgan \
-r 2 \
-m realesr-animevideov3
[libx264 @ 0x7c5d68074740] using SAR=1/1
[libx264 @ 0x7c5d68074740] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
Video processing started; press SPACE to pause/resume, 'q' to abort.
[2024-11-01 11:19:22.108] [info] Output video dimensions: 1280x960
[libx264 @ 0x7c5d68074740] profile High, level 4.2, 4:2:0, 8-bit
[libx264 @ 0x7c5d68074740] 264 - core 164 r3108 31e19f9 - H.264/MPEG-4 AVC codec - Copyleft 2003-2023 - http://www.videolan.org/x264.html - options: cabac=1 ref=5 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=8 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=2 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=30 lookahead_threads=5 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=3 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=50 rc=crf mbtree=1 crf=20.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
[0 llvmpipe (LLVM 18.1.8, 256 bits)] queueC=0[1] queueG=0[1] queueT=0[1]
[0 llvmpipe (LLVM 18.1.8, 256 bits)] bugsbn1=0 bugbilz=0 bugcopc=0 bugihfa=0
[0 llvmpipe (LLVM 18.1.8, 256 bits)] fp16-p/s/u/a=1/1/1/1 int8-p/s/u/a=1/1/1/1
[0 llvmpipe (LLVM 18.1.8, 256 bits)] subgroup=8 basic/vote/ballot/shuffle=1/1/1/1
[0 llvmpipe (LLVM 18.1.8, 256 bits)] fp16-8x8x16/16x8x8/16x8x16/16x16x16=0/0/0/0
Processing frame 0/1805 (0.00%); time elapsed: 12s
Since you said it's already working for other projects, I assume you've already installed nvidia-docker2? If you open a terminal inside the video2x container, can you see your GPU?
Honestly, I'm guessing it might be related to the NVIDIA drivers in the container being too new. If so, it might be a bad idea to build the image on Arch.
Also see if anything on this page helps your situation: https://github.com/K4YT3X/video2x/wiki/Container
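A minimal sketch of that check, assuming the image's entrypoint can be overridden and that the NVIDIA container runtime injects nvidia-smi into the container:
docker run --gpus all --rm -it --entrypoint /bin/bash ghcr.io/k4yt3x/video2x:latest
# inside the container: if this prints the RTX 4090, the GPU is passed through at the driver level
nvidia-smi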
I believe I'm encountering a similar issue with Video2X not leveraging the GPU during processing. Here are the details of my setup and the observed behavior:
Environment:
Docker OS: Ubuntu 24.04
Video2X Version: 6.1.0
NVIDIA Container Toolkit: Installed
NVIDIA Driver Version: 550.107.02
CUDA Version: 12.1
Command Executed:
video2x -i test.mkv -o test.mkv -f realesrgan -r 4 -m realesrgan-plus
Output:
[2024-11-05 09:52:56.457] [info] Video2X version 6.1.0
[2024-11-05 09:52:56.457] [info] Processing file: Lafabrica.mkv
[2024-11-05 09:52:56.457] [info] Press SPACE to pause/resume, 'q' to abort.
[2024-11-05 09:52:56.461] [info] Output video dimensions: 2880x2304
[libx264 @ 0x75df1834b840] using SAR=12/11
[libx264 @ 0x75df1834b840] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0x75df1834b840] profile High, level 5.1, 4:2:0, 8-bit
[libx264 @ 0x75df1834b840] 264 - core 164 r3108 31e19f9 - H.264/MPEG-4 AVC codec - Copyleft 2003-2023 - http://www.videolan.org/x264.html - options: cabac=1 ref=5 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=8 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=2 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=24 lookahead_threads=4 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=3 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=50 rc=crf mbtree=1 crf=20.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
[0 llvmpipe (LLVM 17.0.6, 256 bits)] queueC=0[1] queueG=0[1] queueT=0[1]
[0 llvmpipe (LLVM 17.0.6, 256 bits)] bugsbn1=0 bugbilz=0 bugcopc=0 bugihfa=0
[0 llvmpipe (LLVM 17.0.6, 256 bits)] fp16-p/s/a=1/1/1 int8-p/s/a=1/1/1
[0 llvmpipe (LLVM 17.0.6, 256 bits)] subgroup=8 basic=1 vote=1 ballot=1 shuffle=1
[0 llvmpipe (LLVM 17.0.6, 256 bits)] queueC=0[1] queueG=0[1] queueT=0[1]
[0 llvmpipe (LLVM 17.0.6, 256 bits)] bugsbn1=0 bugbilz=0 bugcopc=0 bugihfa=0
[0 llvmpipe (LLVM 17.0.6, 256 bits)] fp16-p/s/u/a=1/1/1/1 int8-p/s/u/a=1/1/1/1
[0 llvmpipe (LLVM 17.0.6, 256 bits)] subgroup=8 basic/vote/ballot/shuffle=1/1/1/1
[0 llvmpipe (LLVM 17.0.6, 256 bits)] fp16-8x8x16/16x8x8/16x8x16/16x16x16=0/0/0/0
[2024-11-05 09:52:58.124] [warning] Estimating the total number of frames from duration * fps
Processing frame 0/852 (0%); time elapsed: 138s
GPU Usage: 0%
CPU Usage: 800%
I think it might still be related to the driver version. You can enter the shell and try running video2x with the new --listgpus argument to see what Vulkan devices it can find. If it finds the right GPU, you can manually specify its ID.
Also, since you're using Ubuntu 24.04, you can try the new deb package built for 24.04. That'll save you from Docker.
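A rough sketch of that check, assuming bash is available in the image so the entrypoint can be overridden:
docker run --gpus all --rm -it --entrypoint /bin/bash ghcr.io/k4yt3x/video2x:latest
# inside the container, list the Vulkan devices Video2X can see;
# with working GPU passthrough the NVIDIA card should be listed instead of llvmpipe
video2x --listgpus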
Hi!
Here’s the output from running video2x --listgpus:
video2x --listgpus
0. llvmpipe (LLVM 17.0.6, 256 bits)
Type: CPU
Vulkan API Version: 1.3.274
Driver Version: 0.0.1
Any ideas on what might be missing to resolve this?
since you're using Ubuntu 24.04, you can try the new deb package built for 24.04. That'll save you from Docker
That’s true! However, the main goal here is to process videos that would likely take 5-6 hours on my personal laptop.
To optimize, I plan to use a cloud service like RunPod, where I can rent an RTX 4090 for 44 cents per hour.
Since this requires Docker to run on their infrastructure, it’s essential to get Docker working smoothly with Video2X and the GPU.
Docker shouldn’t, in theory, cause issues with GPU detection. Perhaps there’s an additional configuration or dependency needed to fully enable GPU support for Video2X.
For instance, dependencies like ffmpeg and libboost-program-options-dev are required for Video2X.
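On a Debian/Ubuntu base those can be installed roughly like this (a sketch; the full package set Video2X needs may be larger):
apt-get update && apt-get install -y ffmpeg libboost-program-options-dev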
@maauso I think you didn't pass the GPU through correctly; it's not finding the GPU's Vulkan device at all. In your docker command you may need to add --gpus all and other arguments. Take a look at the wiki page.
Edit: I got confused about who sent what between the two of you. I don't actually see the commands you're using to start the Docker container. Still, try the same thing: add --gpus all and --privileged and see if it works.
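Put together, a sketch of the suggested invocation, with flags taken from earlier in this thread (adjust the paths, filter, and model to your case):
docker run --gpus all --privileged --rm -it \
    -v "$PWD":/host \
    ghcr.io/k4yt3x/video2x:latest \
    -i input.mp4 -o output.mp4 \
    -f realesrgan -r 2 -m realesr-animevideov3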
@boxabirds I don't think vulkan-tools is needed to run Vulkan programs. It just provides some utilities like vulkaninfo?
Yeah it is useful for debugging, but I didn't want to add it since it increases the size of the Docker image.
Hi,
--gpus all is included when starting the pod; without it, nvidia-smi wouldn't display GPU info.
I’m wondering if the CUDA version might be causing the issue. Which version are you using, @k4yt3x?
@maauso The current version of v2x doesn't use CUDA at all, only Vulkan. It shouldn't be the issue. Let me see if there's a trial or something for runpod...
Hi
Thank you! I can run additional tests if needed. Currently, I'm using the ubuntu:latest image on RunPod.
@maauso If your setup permits, can I get temporary access to your environment so I don't have to go through the whole sign-up and credit card setup?
My SSH pubkey is at https://github.com/k4yt3x.keys
For the record, @maauso's issue has been solved. The issue is that RunPod's Ubuntu container does not export the path to the Vulkan ICD file via the VK_ICD_FILENAMES environment variable, so Vulkan applications cannot find the Vulkan loader config. This fixes the issue:
apt-get install -y nvidia-driver-550
export VK_ICD_FILENAMES=/etc/vulkan/icd.d/nvidia_icd.json
The driver version (550) should match whatever you see in nvidia-smi if a driver is already on the system. The install command might fail due to file conflicts, but it'll still put down the files Vulkan apps need to work.
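A sketch of automating that version match inside the pod (assumes an Ubuntu-based image where the NVIDIA runtime already mounts nvidia-smi, and that packages follow the usual nvidia-driver-<branch> naming):
# read the driver branch (e.g. 550) from the running driver
DRIVER_BRANCH=$(nvidia-smi --query-gpu=driver_version --format=csv,noheader | head -n1 | cut -d. -f1)
# may complain about file conflicts, but still installs the userspace Vulkan bits
apt-get update && apt-get install -y "nvidia-driver-${DRIVER_BRANCH}"
# point the Vulkan loader at the NVIDIA ICD manifest
export VK_ICD_FILENAMES=/etc/vulkan/icd.d/nvidia_icd.json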
@boxabirds perhaps you can check where your nvidia_icd.json is as well. It comes as part of the driver and is required for Vulkan to work.
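A quick way to look for it (the manifest usually sits in one of these directories, though the exact path depends on the distro and driver packaging):
ls /etc/vulkan/icd.d/ /usr/share/vulkan/icd.d/ 2>/dev/null
# once found, export its full path, for example:
export VK_ICD_FILENAMES=/usr/share/vulkan/icd.d/nvidia_icd.json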
@boxabirds Have you had a chance to take a look yet?
I'm somewhat surprised to hear that, especially if the video is anime/cartoon. I'd encourage you to try different models. Anyhow, I'll leave this open.
Has there been a resolution to this issue? Running the following does not use the RTX 4090 on the system. Testing Docker with other images designed to show nvidia-smi output or run GPU TensorFlow examples works without issue. Throughput with the video2x Docker image is around 0.98 FPS on my system.
For the following command, the output clearly shows only the CPU is being used:
COMMAND:
docker run --gpus all --device=/dev/nvidia0 --device=/dev/nvidiactl --runtime nvidia -it --rm -v $PWD:/host ghcr.io/k4yt3x/video2x:6.1.1 -i standard-test.mp4 -o output.mp4 -f realesrgan -r 4 -m realesr-animevideov3
RELEVANT OUTPUT:
using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2 AVX512
@nigelparsad Have you installed the NVIDIA container toolkit?
Also take a look at https://docs.video2x.org/running/container.html if you haven't.
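For reference, installing and wiring up the toolkit on Ubuntu usually looks like this (a sketch; it assumes NVIDIA's apt repository for the container toolkit is already configured):
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
# register the NVIDIA runtime with Docker and restart the daemon
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker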
Just adding that I installed the NVIDIA container toolkit and it makes no difference. Is Vulkan a hard requirement, i.e. is video2x never going to use the GPU if Vulkan isn't installed? The RTX 3090 should be supported: https://vulkan.gpuinfo.org/listreports.php?devicename=GeForce+RTX+3090&platform=linux
I see it stated above that video2x doesn't use CUDA at all? I must have missed that in the docs; I just assumed it was using CUDA.
Yes Vulkan is a hard requirement. You're also right that Video2X doesn't use CUDA, just Vulkan for now. Your 3090 should work. I'm not yet sure what's missing in your setup.
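One way to rule out the host is to confirm Vulkan itself sees the card before involving Docker (a sketch; vulkaninfo comes from the vulkan-tools package, and --summary needs a reasonably recent version):
sudo apt-get install -y vulkan-tools
vulkaninfo --summary
# the 3090 should show up as a deviceName entry; if only llvmpipe appears,
# the NVIDIA Vulkan ICD isn't installed or isn't being found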
Same issue here. nvidia-docker-container and nvidia-docker2 are installed. I can confirm the GPU is detected by Docker using an NVIDIA-provided sample container (see https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/sample-workload.html). The output from the sample follows:
Wed Dec 4 23:14:33 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 560.35.03 Driver Version: 560.35.03 CUDA Version: 12.6 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce GTX 1660 Ti Off | 00000000:06:00.0 On | N/A |
| 39% 38C P5 10W / 120W | 304MiB / 6144MiB | 1% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
+-----------------------------------------------------------------------------------------+
I'm running the docker run command provided in the video2x docs, with the KEY var defined as 6.1.1 (the latest version provided around here). My machine is running Pop!_OS 22.04. It seems video2x does not detect my GPU, and CPU usage goes high when I start the container.
I also tried running video2x directly on the host machine, but had no success: libavcodec was not found no matter what I tried (I think that's an issue with the base 22.04, as mentioned in another GitHub issue).
Can you try installing vulkan-sdk?
apt install -y vulkan-sdk
I feel it's a bit tough to tackle the Docker image compatibility issues. The way forward will be AppImage and Flatpak. Please give the new AppImage a try and see how well it works; it has been working pretty well in our tests so far. This thread has been stale for a long time and I'm not making much progress, so I'll close it for now. Let's go with the AppImage instead.
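Roughly, using the AppImage looks like this (a sketch; the file name is hypothetical, download the actual asset from the releases page, and this assumes the AppImage exposes the same CLI flags used earlier in this thread):
chmod +x Video2X-x86_64.AppImage
./Video2X-x86_64.AppImage -i input.mp4 -o output.mp4 -f realesrgan -r 2 -m realesr-animevideov3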
Upgrading to Ubuntu 24.04 can solve this problem.