NVIDIA/NvPipe

Docker build & failed to create encoder

AndreFrelicot opened this issue · 5 comments

Hi,

I've installed NvPipe in an NVIDIA-enabled Docker image, but the encoder cannot be created.
CUDA version: 10
Driver: 410.57

NvPipe example application: Comparison of using host/device memory.

Resolution: 3840 x 2160
Codec: H.264
Bitrate: 32 Mbps @ 90 Hz

--- Encode from host memory / Decode to host memory ---
Frame | Encode (ms) | Decode (ms) | Size (KB)
Failed to create encoder: Failed to create encoder (LoadNvEncApi : NvEncodeAPIGetMaxSupportedVersion(&version) returned error -315456918 at /root/src/NvPipe/src/NvCodec/NvEncoder/NvEncoder.cpp:86
)
Failed to create decoder: Failed to create decoder (NvDecoder : cuvidCreateVideoParser(&m_hParser, &videoParserParameters) returned error -45206992 at /root/src/NvPipe/src/NvCodec/NvDecoder/NvDecoder.cpp:542
)
Segmentation fault (core dumped)
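
Note for anyone debugging the same symptoms: these two errors typically mean that the driver's NVENC/NVDEC libraries are not visible inside the container. A quick diagnostic (not part of the original report) is to check, from inside the container, which encode/decode libraries the dynamic linker can actually see:

# list the encode/decode driver libraries known to the dynamic linker
ldconfig -p | grep -E 'libnvcuvid|libnvidia-encode'

If nothing shows up, or only the stubs under /usr/local/cuda/lib64/stubs, encoder and decoder creation will fail exactly like this.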

I also installed the NvCodec (Video Codec SDK 8.2) headers and stub libraries with these Dockerfile instructions:

RUN apt-get install -y --no-install-recommends unzip curl && \
    VIDEOSDK_DOWNLOAD_SUM=389d5e73b36881b06ca00ea86f0e9c0c312c1646166b96669e8b51324943e213 && \
    curl -fsSL https://developer.download.nvidia.com/compute/redist/VideoCodec/v8.2/NvCodec.zip -O && \
    echo "$VIDEOSDK_DOWNLOAD_SUM  NvCodec.zip" | sha256sum -c - && \
    unzip -j NvCodec.zip \
          NvCodec/NvDecoder/cuviddec.h \
          NvCodec/NvDecoder/nvcuvid.h \
          NvCodec/NvEncoder/nvEncodeAPI.h \
          -d /usr/local/cuda/include && \
    unzip -j NvCodec.zip \
          NvCodec/Lib/linux/stubs/x86_64/libnvcuvid.so \
          NvCodec/Lib/linux/stubs/x86_64/libnvidia-encode.so \
          -d /usr/local/cuda/lib64/stubs && \
    rm NvCodec.zip  
RUN ln -s /usr/local/cuda/lib64/stubs/libnvidia-encode.so /usr/local/cuda/lib64/stubs/libnvidia-encode.so.1
RUN ln -s /usr/local/cuda/lib64/stubs/libnvcuvid.so /usr/local/cuda/lib64/stubs/libnvcuvid.so.1
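
For context, the error paths above (/root/src/NvPipe/...) suggest NvPipe was built from source inside the image. A build step along these lines presumably follows the snippet above; the clone location, CMake options, and use of the stub directory here are assumptions for illustration, not taken from the original Dockerfile:

# Hypothetical build step: compile NvPipe against the headers and driver stubs installed above
# (assumes git, cmake, and a compiler toolchain are already present in the image)
RUN git clone https://github.com/NVIDIA/NvPipe.git /root/src/NvPipe && \
    mkdir -p /root/src/NvPipe/build && \
    cd /root/src/NvPipe/build && \
    cmake -DCMAKE_LIBRARY_PATH=/usr/local/cuda/lib64/stubs .. && \
    make -j"$(nproc)" && \
    make install

The stubs and the .so.1 symlinks only satisfy the linker at build time; at run time the real libnvidia-encode.so.1 and libnvcuvid.so.1 have to be provided by the host driver through the container runtime.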

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.57                 Driver Version: 410.57                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 107...  Off  | 00000000:01:00.0 Off |                  N/A |
| N/A   41C    P8     5W /  N/A |      0MiB /  8119MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

Thank you.

Hi!

Is the nvidia-smi output from within the container or from your host system?

Which base image did you choose?

Please list the commands you used to start the container.

Thanks!

The nvidia-smi output is from within the container; the NVIDIA device/driver is enabled.

I use nvidia-docker run -dit <imageID> && nvidia-docker attach <instanceID> to run the container.

The base image is:
nvidia/cudagl:10.0-devel-ubuntu18.04

Thanks! Did you enable the video driver capability?

Thank you, I missed these parameters:
nvidia-docker run -e NVIDIA_VISIBLE_DEVICES=all -e NVIDIA_DRIVER_CAPABILITIES=compute,video,utility -dit <imageID>
It works now.
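
For anyone landing here later: the same capabilities can also be baked into the image so they don't have to be passed on every run. A minimal sketch using the standard nvidia-container-runtime environment variables:

ENV NVIDIA_VISIBLE_DEVICES=all
ENV NVIDIA_DRIVER_CAPABILITIES=compute,video,utility

The cudagl base image typically sets compute, utility, and graphics already; the piece that matters for NVENC/NVDEC is adding video.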

@AndreFrelicot Would you be willing to share your working Dockerfile? I am putting together a Docker / docker-compose project for NvPipe.
Thank you.