Should unsigned char be used for bitstreams whose internal bit depth is 8?
xrayleigh2000 opened this issue · 9 comments
Hello,
At present, many videos in practical applications are 8-bit, but vvdec uses short to store pixel values even for bitstreams whose internal bit depth is 8. Based on experience with HEVC decoders, using unsigned char to store pixel values can save 20-30% memory and 10-20% decoding time. Does vvdec have a plan to optimize pixel storage with unsigned char for bitstreams whose internal bit depth is 8?
Hey.
No, it would be a lot of work and would basically require rewriting, or at least templating, all of our SIMD code, and adding a type-independent stride to all of the buffer operations. So essentially you would need to template all of the code below a certain logic level on the Pel type parameter. It would make the code very confusing (more than it already is). It's just an effort that we cannot afford right now.
I think VVC is a standard designed for high-quality, high-resolution video, and I believe a lot of the content consumed with VVC will actually be 10-bit.
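To illustrate what templating the buffer operations on the sample type would look like, here is a minimal, hypothetical sketch (the function name and signature are illustrative, not vvdec's actual API). The key point is that every such operation, including all SIMD kernels, would need a type parameter, with strides counted in samples rather than bytes:

```cpp
#include <cstdint>
#include <cstring>
#include <cstddef>

// Hypothetical sketch: a plane copy templated on the stored sample type.
// vvdec currently fixes the sample type to a 16-bit Pel; 8-bit storage
// would require parameterizing operations like this one on T throughout
// the decoder, below some chosen logic level.
template<typename T>
void copyPlane( const T* src, std::ptrdiff_t srcStride,
                T* dst, std::ptrdiff_t dstStride,
                int width, int height )
{
  for( int y = 0; y < height; ++y )
  {
    // Width is in samples; the byte count depends on the template type.
    std::memcpy( dst, src, width * sizeof( T ) );
    src += srcStride;
    dst += dstStride;
  }
}

// Explicit instantiations for 8-bit and higher internal bit depths:
template void copyPlane<uint8_t>( const uint8_t*, std::ptrdiff_t, uint8_t*, std::ptrdiff_t, int, int );
template void copyPlane<int16_t>( const int16_t*, std::ptrdiff_t, int16_t*, std::ptrdiff_t, int, int );
```

Multiplying this pattern across every buffer operation and hand-written SIMD kernel is what makes the change so invasive.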
Hello,
I have done some optimization on a previous vvdec version, using unsigned char to store pixel values for bitstreams whose internal bit depth is 8. Do you mind if I upload the code to my fork of the vvdec library?
Of course not. VVdeC is BSD-style open source. As long as you reference the original authors you can do whatever.
If you wanted to push the code to our upstream repo, we would need to have a more lengthy discussion.
I am actually interested to see what you did and what impact it has. Feel free to elaborate a bit.
Hello,
I only changed the C code paths, which saves 20-30% memory on 1920x1080 bitstreams. Since the SIMD code has not been modified, the speed improvement is unclear.
Due to the many modifications, I used the team's account to upload the code; it is at https://github.com/LilBoatTech/vvdec/tree/ADAPTIVE_BIT_DEPTH.
You can enable this feature by modifying ADAPTIVE_BIT_DEPTH in TypeDef.h, and search for ADAPTIVE_BIT_DEPTH to see the modified code.
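As a rough illustration of how such a compile-time switch can work, here is a hypothetical sketch (the actual macro lives in TypeDef.h in the linked fork; the type and function names below are illustrative, not taken from that code):

```cpp
#include <cstdint>
#include <cstddef>

// Hypothetical sketch of an ADAPTIVE_BIT_DEPTH-style switch. When
// enabled, buffers for 8-bit streams store one byte per sample and
// other streams keep 16-bit storage, so allocation sizes and strides
// must be computed from the stream's internal bit depth.
#define ADAPTIVE_BIT_DEPTH 1

#if ADAPTIVE_BIT_DEPTH
using PelStorage8  = uint8_t;   // storage for 8-bit internal bit depth
using PelStorage16 = int16_t;   // storage for everything else

// Bytes per sample, chosen per bitstream at runtime.
inline std::size_t sampleSize( int internalBitDepth )
{
  return internalBitDepth == 8 ? sizeof( PelStorage8 )
                               : sizeof( PelStorage16 );
}
#else
using Pel = int16_t;  // original behavior: always 16-bit storage
#endif
```

With this scheme an 8-bit 1080p luma plane drops from roughly 4 MB to 2 MB per buffer, which is consistent with the 20-30% overall savings reported above.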
Interesting.
I see it is based on a rather old version (around 0.2.0.0, right?). In 1.3.0 we moved some buffers from per-frame allocation to per-picture-decoder allocation (2x), saving around 2/3 of the memory. Prior to that, vvdec really had very high memory requirements. It would be interesting to know what the impact is on the optimized version.
Other than that, I see you worked on the code a lot, because it has many unrelated changes, making it a bit hard to read.
Hi @xrayleigh2000 let me comment on that also:
I see you completely reformatted the code. If you don't want to reapply your actual modifications on top of our original formatting, I would suggest you split your changes into at least two commits: one with your reformatting and one with the actual changes. This should be quite easy, assuming you used an automatic formatting utility.
Then we can at least have a look at what you actually did.
Hi,
I uploaded the code again as suggested by K-os.
@adamjw24 We optimized based on vvdec 0.2.1.0. We performed memory optimization similar to vvdec 1.3.0, and memory usage dropped from 450 MB to 155 MB. With ADAPTIVE_BIT_DEPTH turned on, it drops further to 100 MB. (The tests were run with threads=0, and there will be some variation between bitstreams.)
This is a nice improvement.
At present, many videos do not use 10-bit encoding, and 10-bit videos are not expected to be in the majority for the next 2 to 3 years. During this transition period, targeted optimization for 8-bit encoded video would reduce memory usage and encoding/decoding time, which would help promote the adoption of H.266 technology.
Does vvdec plan to support 8-bit-targeted optimizations?
Closing for now. We will not be looking into this anytime soon, and since VVC does not have an 8-bit-only profile, it doesn't seem feasible for VVdeC.