Black band in certain videos
🐛 Bug
There is a memory-alignment issue before the SWS scale step which causes an error in about 4% of the frames in my sample of random videos from various datasets, and in about 1% of the frames in the torchvision test videos.
The issue most likely lies in the memory alignment of our usage of av_image_fill_arrays, as decoding works correctly when a hardcoded av_frame buffer is allocated instead. I'm investigating a fix in the meantime.
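The mechanism can be illustrated with a toy sketch (pure Python, hypothetical sizes; the real code path is FFmpeg's sws_scale writing into a buffer laid out by av_image_fill_arrays): when each row of the destination is padded to an aligned linesize but the buffer is later read with a different stride, the padding bytes leak into the image and show up as dark bands.

```python
# Toy illustration of how a linesize/stride mismatch corrupts a frame.
# All names and sizes here are hypothetical, not the actual decoder code.

def pack_frame(rows, linesize):
    """Write each row into a flat buffer padded to `linesize` bytes,
    the way an alignment-aware decoder buffer is laid out."""
    buf = []
    for row in rows:
        buf.extend(row)
        buf.extend([0] * (linesize - len(row)))  # alignment padding
    return buf

def unpack_frame(buf, width, height, linesize):
    """Read the frame back assuming row stride `linesize`."""
    return [buf[r * linesize : r * linesize + width] for r in range(height)]

width, height = 6, 4
rows = [[255] * width for _ in range(height)]  # an all-white frame
buf = pack_frame(rows, linesize=8)             # rows padded to 8 bytes

good = unpack_frame(buf, width, height, linesize=8)      # matching stride
bad = unpack_frame(buf, width, height, linesize=width)   # wrong stride

# With the wrong stride, padding bytes leak into later rows and
# appear as black pixels (a "band") in the reconstructed image.
print(all(px == 255 for row in good for px in row))  # True
print(any(px == 0 for row in bad for px in row))     # True
```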
To Reproduce
Open and try to visualize the first frame of vision/test/assets/videos/WUzgd7C1pWA.mp4
using any method that relies on the video_reader backend.
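A minimal check for the artifact in the decoded output could look like the following sketch (pure Python on a nested list of grayscale pixel values; `black_band_rows` and the threshold are hypothetical helpers, and a real check would operate on the tensor returned by the video_reader backend):

```python
# Hypothetical helper that flags the black-band artifact in a frame,
# represented here as a nested list of grayscale pixel values.

def black_band_rows(frame, threshold=8):
    """Return indices of rows whose pixels are all near zero."""
    return [i for i, row in enumerate(frame)
            if all(px <= threshold for px in row)]

clean = [[200, 210, 190]] * 4
banded = [[200, 210, 190]] * 3 + [[0, 0, 0]]  # black band at the bottom

print(black_band_rows(clean))   # []
print(black_band_rows(banded))  # [3]
```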
Expected behavior
A full frame is decoded with no artifacts.
Environment
PyTorch version: 1.8.0a0+d555768
Is debug build: False
CUDA used to build PyTorch: 11.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.5 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.19.4
Python version: 3.8 (64-bit runtime)
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: Quadro RTX 8000
GPU 1: Quadro RTX 8000
Nvidia driver version: 460.27.04
cuDNN version: Probably one of the following:
/usr/local/cuda-10.1/targets/x86_64-linux/lib/libcudnn.so.7
/usr/local/cuda-10.2.89/targets/x86_64-linux/lib/libcudnn.so.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] numpy==1.20.0
[pip3] torch==1.8.0a0+unknown
[pip3] torchvision==0.9.0a0+8ee9092
[conda] blas 1.0 mkl
[conda] magma-cuda112 2.5.2 1 pytorch
[conda] mkl 2020.2 256
[conda] mkl-include 2020.2 256
[conda] mkl-service 2.3.0 py38he904b0f_0
[conda] mkl_fft 1.2.0 py38h23d657b_0
[conda] mkl_random 1.1.1 py38h0573a6f_0
[conda] numpy 1.20.0 pypi_0 pypi
[conda] torch 1.8.0a0+unknown pypi_0 pypi
[conda] torchvision 0.9.0a0+8ee9092 dev_0 <develop>
Additional context
This issue arose in #2916; I'm opening a new one so that PR can land with the code change it needs (using torch.max
instead of torch.mean
for the video stream tests in tests/test_video.py).
I'm hoping to merge that PR and continue to track the issue here.
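As a sketch of why the max-based comparison is the stricter test (pure Python on flattened pixel lists; the actual tests compare tensors with torch.max/torch.mean): a narrow black band barely moves the mean difference against a reference frame, but produces a large per-pixel maximum.

```python
# A small black band changes only a few pixels, so the mean difference
# stays small, while the max per-pixel difference clearly flags it.
# Hypothetical flattened frames; the real tests operate on tensors.

reference = [200] * 100            # flattened reference frame
decoded = [200] * 95 + [0] * 5     # same frame with a small black band

diffs = [abs(a - b) for a, b in zip(reference, decoded)]
mean_diff = sum(diffs) / len(diffs)
max_diff = max(diffs)

print(mean_diff)  # 10.0 -- small relative to the 0-255 pixel range
print(max_diff)   # 200  -- unambiguously flags the corrupted pixels
```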
cc @bjuncek