phoboslab/qoa

Possible lossy encoder optimization

Opened this issue · 1 comment

wareya commented

I discovered this while working on a codec with similar design principles. (Hi!) Specifically, I discovered this after learning about QOA's dynamic slice quantization, and the optimization it performs to the process of brute forcing them.

If you make an assumption about the "shape" of the quantization error as you loop over scaling factors, you can skip large chunks of scaling factors entirely. The assumption is that the error mostly decreases as you approach the ideal scaling factor and then increases as you move away from it. So if the error starts increasing, you can skip straight to the zeroth scaling factor; and if, after skipping to the zeroth scaling factor, it starts increasing rapidly, you can break out of the loop entirely.
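Roughly, the skip logic looks like the sketch below. This is a simplified illustration of the idea, not the actual patch; slice_error_for_scalefactor() is a hypothetical stand-in for the encoder's inner per-sample error loop.

#include <stdint.h>

/* Hypothetical stand-in for qoa_encode_frame()'s inner per-sample loop:
   returns the total squared error of encoding the slice with this
   scalefactor. */
uint64_t slice_error_for_scalefactor(const short *slice, int slice_len, int scalefactor);

int pick_scalefactor(const short *slice, int slice_len, int prev_scalefactor) {
    uint64_t best_error = UINT64_MAX;
    int best_scalefactor = 0;
    uint64_t prev_error = UINT64_MAX;
    int jumped = 0;

    /* The reference encoder starts the search at the previous slice's best
       scalefactor and wraps around. This is the iteration index at which
       the wrapped scalefactor reaches 0. */
    int sfi_of_zero = (16 - prev_scalefactor) % 16;

    for (int sfi = 0; sfi < 16; sfi++) {
        int sf = (sfi + prev_scalefactor) % 16;
        uint64_t err = slice_error_for_scalefactor(slice, slice_len, sf);

        if (err < best_error) {
            best_error = err;
            best_scalefactor = sf;
        }

        /* Assume the error is roughly unimodal around the ideal scalefactor. */
        if (prev_error != UINT64_MAX && err > prev_error) {
            if (!jumped && sfi_of_zero > sfi) {
                /* The error started rising, so the ideal factor was likely
                   just passed: jump ahead so the next iteration lands on
                   scalefactor 0, then search upward from there. */
                sfi = sfi_of_zero - 1;
                jumped = 1;
                prev_error = UINT64_MAX; /* don't compare across the jump */
                continue;
            }
            /* Rising again after the jump: stop searching. The actual patch
               only does this when the error is rising rapidly, which is
               where the fudge factor described below comes in. */
            break;
        }
        prev_error = err;
    }
    return best_scalefactor;
}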

Doing this results in a small loss of quality (around 0.05 dB), but the loss can be controlled by adjusting a fudge factor.

wareya@10858af

$ time ./qoaconv.exe "test ultra new.wav" out_old.qoa
test ultra new.wav: channels: 2, samplerate: 44100 hz, samples per channel: 3155783, duration: 71 sec
out_old.qoa: size: 2489 kb (2549328 bytes) = 278.32 kbit/s, psnr: 46.87 db

real    0m0.384s
user    0m0.343s
sys     0m0.046s
$ time ./qoaconv.exe "test ultra new.wav" out_new.qoa
test ultra new.wav: channels: 2, samplerate: 44100 hz, samples per channel: 3155783, duration: 71 sec
out_new.qoa: size: 2489 kb (2549328 bytes) = 278.32 kbit/s, psnr: 46.82 db

real    0m0.332s
user    0m0.311s
sys     0m0.030s


The fudge factor I chose was n < (slice_len >> 2): if the inner encoding loop made it less than 25% of the way through the slice before breaking out, the error for that scaling factor is assumed to be excessively large.
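Since the reference encoder's per-sample loop already bails out of a scalefactor once its running error exceeds the best error found so far, the check only needs to look at how far that loop got. A sketch of this (again not the exact diff; qoa_sample_error() is a hypothetical stand-in for the per-sample predict/quantize/dequantize step):

#include <stdint.h>

/* Hypothetical stand-in for the per-sample work in qoa_encode_frame():
   predict, quantize, dequantize and return the reconstruction error for
   one sample of the slice at the given scalefactor. */
int64_t qoa_sample_error(const short *slice, int index, int scalefactor);

/* Returns 1 if this scalefactor's error blew past best_error within the
   first 25% of the slice, i.e. the error is "excessively large". */
int scalefactor_error_is_excessive(const short *slice, int slice_len,
                                   int scalefactor, uint64_t best_error) {
    uint64_t current_error = 0;
    int n = 0;
    for (; n < slice_len; n++) {
        int64_t e = qoa_sample_error(slice, n, scalefactor);
        current_error += (uint64_t)(e * e);
        /* The reference encoder already aborts a scalefactor early once its
           running error exceeds the best error found so far. */
        if (current_error > best_error)
            break;
    }
    /* Fudge factor: giving up before covering 25% of the slice marks the
       error as excessively large for the skip/break logic above. */
    return n < (slice_len >> 2);
}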

Test audio attached: test ultra new.zip
