rigaya/QSVEnc

Segmentation fault (core dumped) with vpp-denoise for some videos

janb14 opened this issue · 1 comment

I've got the following problem with some of my media when encoding to AV1: with vpp-denoise in mode auto, it sometimes crashes with a core dump. When I remove the vpp-denoise option, it runs through. Any ideas?

[janb14@ffmpeg ~]$ qsvencc -i <INPUT VIDEO> -c av1 --icq 23 -u best --vpp-denoise mode=auto -o /mnt/temp/video-0-optimised.mkv
--------------------------------------------------------------------------------
/mnt/temp/video-0-optimised.mkv
--------------------------------------------------------------------------------
PG is not supported on this platform, switched to FF mode.
cop.AUDelimiter value changed off -> auto by driver
cop.PicTimingSEI value changed off -> auto by driver
cop.SingleSeiNalUnit value changed off -> auto by driver
QSVEncC (x64) 7.61 (r3278) by rigaya, Mar  3 2024 23:12:09 (gcc 13.2.1/Linux)
OS             Arch Linux (6.5.13-5-pve) x64
CPU Info       Intel Xeon(R) D-1521 @ 2.40GHz (4C/7T) <DG2>
GPU Info       Intel Graphics / Driver :
Media SDK      QuickSyncVideo (hardware encoder) FF, 1st GPU(d), API v2.10
Async Depth    3 frames
Hyper Mode     off
Buffer Memory  va, 46 work buffer
Input Info     avqsv: H.264/AVC, 1920x1080, 25/1 fps
VPP            Denoise auto, strength 20
AVSync         cfr
Output         AV1(yuv420) main @ Level 4
               1920x1080p 1:1 25.000fps (25/1fps)
               avwriter: av1 => matroska
Target usage   1 - best
Encode Mode    ICQ (Intelligent Const. Quality)
ICQ Quality    23
QP Limit       min: 1, max: 255
Ref frames     4 frames
GopRefDist     8, B-pyramid: on
Max GOP Length 250 frames
Segmentation fault (core dumped)1684 kb/s, remain 0:03:18, est out size 489.0MB
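If a backtrace would help, one should be retrievable from the core dump. A rough session, assuming systemd-coredump is active (the Arch Linux default) so coredumpctl can find the dump:

[janb14@ffmpeg ~]$ coredumpctl list qsvencc
[janb14@ffmpeg ~]$ coredumpctl gdb qsvencc
(gdb) bt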

Edit:
Testing showed that only "auto_adjust" and "auto_bdrate" lead to segfaults, and therefore plain "auto" mode as well. All other vpp-denoise options go through.
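So as a workaround until this is fixed, an explicit non-auto mode should go through. A minimal sketch of the same command, assuming the mode=pre / strength=<int> parameters as described in the QSVEnc option docs (not re-verified here); only the --vpp-denoise argument changes from the failing invocation:

[janb14@ffmpeg ~]$ qsvencc -i <INPUT VIDEO> -c av1 --icq 23 -u best --vpp-denoise mode=pre,strength=20 -o /mnt/temp/video-0-optimised.mkv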

Off-topic, IMHO:
I hope the Intel GPU developers take note of this and don't abandon it. Their deinterlacing and interpolation are better than NVIDIA's and AMD's.