akgunter/crt-royale-reshade

Advanced Deinterlacing

akgunter opened this issue · 0 comments

The original crt-royale had no deinterlacing whatsoever, so it had combing artifacts and image retention on many monitors at 60 fps. My two deinterlacing algorithms mitigate the image retention issues at the cost of guaranteed combing artifacts.

This shader would benefit massively from more deinterlacing options. The simplest addition would be weighted bobbing, which would convert some degree of combing into ghosting, though I suspect that won't look much better in practice.
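For reference, weighted bobbing is only a few lines of work. This is an illustrative NumPy mock-up, not ReShade code: the function name, the field layout, and the `weight` parameter are all assumptions for the sketch. The missing rows get a bob estimate (vertical average of in-field neighbors) blended against the previous field's real lines, which is the combing-into-ghosting trade described above.

```python
import numpy as np

def weighted_bob(curr_field, prev_field, parity, weight=0.5):
    """Illustrative weighted-bob deinterlacer (NumPy sketch, not ReShade).

    curr_field: (H/2, W) array of the current frame's in-field scanlines.
    prev_field: (H/2, W) array of the previous frame's in-field scanlines,
                which occupy the rows missing from the current frame.
    parity:     0 if curr_field occupies even output rows, 1 if odd.
    weight:     blend factor for the previous field; raising it trades
                combing for ghosting.
    """
    h, w = curr_field.shape
    out = np.zeros((h * 2, w), dtype=curr_field.dtype)
    out[parity::2] = curr_field  # in-field lines pass through untouched

    # Bob estimate for each missing line: average its two vertical
    # neighbors from the current field (clamped at the image edges).
    if parity == 0:
        neighbor = np.vstack([curr_field[1:], curr_field[-1:]])
    else:
        neighbor = np.vstack([curr_field[:1], curr_field[:-1]])
    bob = 0.5 * (curr_field + neighbor)

    # Weighted bobbing: blend the bob estimate with the previous
    # frame's real scanlines at those rows.
    out[1 - parity::2] = (1.0 - weight) * bob + weight * prev_field
    return out
```

At `weight=0`, this degenerates to plain bobbing; at `weight=1`, it degenerates to weaving.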

I'd really like to have some kind of smart interpolation between two in-field scanlines. Incorporating the previous frame's in-field scanline would be nice too. This would differ from my weaving algorithms, which don't account for motion between frames; that lack of motion handling is exactly what produces the combing artifacts.
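One classical shape for that kind of smart interpolation is motion-adaptive switching: weave the previous frame's line through wherever it still agrees with its current vertical neighbors, and bob wherever it doesn't. A minimal NumPy sketch, assuming normalized 0..1 pixel values and a hypothetical per-pixel threshold:

```python
import numpy as np

def motion_adaptive_line(above, below, prev_line, threshold=0.1):
    """Estimate one missing scanline (illustrative sketch, not ReShade).

    above, below: the two adjacent in-field scanlines from the current
                  field, as 1-D arrays.
    prev_line:    the same scanline from the previous frame's field.
    threshold:    per-pixel motion threshold, assuming 0..1 pixel values.
    """
    bob = 0.5 * (above + below)  # spatial estimate from the current field
    motion = np.abs(prev_line - bob) > threshold
    # Static pixels weave the previous frame's real data through;
    # moving pixels fall back to the bob estimate.
    return np.where(motion, bob, prev_line)
```

Static regions keep weave's full vertical resolution, while moving edges avoid combing at the cost of local softness.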

In practice, we see combing by focusing on the edges of objects in the viewport; so a smart algorithm might combine edge detection with dynamic time warping (DTW) to define a kind of warped average. The challenges with this would be the tracking of edges through a multi-channel time domain and the quadratic cost of naive DTW. I only have 1-2 ms to work with, so this approach will be challenging to say the least.
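To make the cost concern concrete, here's what naive DTW between two scanlines looks like, plus a crude "warped average" built on its alignment path. Everything here (the function names, the absolute-difference cost, the averaging scheme) is an assumption for illustration; a real version would need to handle multi-channel pixels and use a banded or pruned search to escape the quadratic cost.

```python
import numpy as np

def dtw_path(a, b):
    """Naive dynamic time warping between two 1-D signals. Cost is
    O(len(a) * len(b)) in time and memory -- the quadratic cost noted
    above. Returns the alignment path as a list of (i, j) pairs."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],
                                 cost[i, j - 1],
                                 cost[i - 1, j - 1])
    # Backtrack from (n, m) to recover the warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def warped_average(a, b):
    """Average two scanlines along their DTW alignment instead of
    column-by-column -- a crude stand-in for the warped-average idea."""
    out = np.zeros(len(a))
    counts = np.zeros(len(a))
    for i, j in dtw_path(a, b):
        out[i] += 0.5 * (a[i] + b[j])
        counts[i] += 1
    return out / np.maximum(counts, 1)
```

Even this toy version makes the budget problem obvious: a 1920-pixel scanline pair costs roughly 1920 x 1920 cell updates per missing line, which is far outside a 1-2 ms frame budget without heavy pruning.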

It's tempting to consider machine learning to predict the interpolated line. It'd be possible to build it as a supervised regression task, since any progressive video can be interlaced to generate ground-truth training pairs. I could handle arbitrarily large viewports by segmenting them into smaller ones, thereby reducing the required size of the model. However, I'm not aware of a good way to run a GPU-accelerated neural net in ReShade, and I really don't want to have to build a training set large enough to reliably train away the edge-case artifacts that ML models tend to produce. I could probably mitigate the training requirements and artifacts by constructing it as a parametric solution, but I think a parametric representation would imply the existence of a classical algorithm that'd perform better.

I'll also need to refresh myself on the physical interlacing/phosphor process. As I recall, only a few scanlines' worth of phosphors emit light at any given moment, meaning the gap between two in-field scanlines could actually be imperceptible on some CRTs. Reducing the apparent gap between scanlines would have an effect very similar to bobbing, but possibly without combing, ghosting, or image retention. It'd be important that the center-most out-of-field pixels were consistently illuminated by the neighboring scanlines. Otherwise those centers would alternate between maximal and minimal brightness, and they'd still introduce combing and image retention.

Unfortunately, this kind of config would take the form of a bloom or scanline config setting rather than a deinterlacing config setting. So using it as a workaround for deinterlacing would be more of a sanctioned hack than a user-friendly solution. And I'm sure this hack would preclude the recreation of a ton of real-world CRTs.
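The gap-illumination condition can be put in rough numbers with a toy model. Assuming a Gaussian beam profile normalized to 1.0 at each scanline center, and lit in-field scanlines spaced two output rows apart (both assumptions for the sketch, ignoring all but the two nearest scanlines):

```python
import math

def gap_center_brightness(sigma):
    """Relative brightness at the midpoint of the gap between two lit
    in-field scanlines, in a toy model: Gaussian beam profile normalized
    to 1.0 at each scanline center, lit scanlines spaced 2 output rows
    apart, so each is 1 row from the midpoint. Only the two nearest
    scanlines contribute."""
    return 2.0 * math.exp(-1.0 / (2.0 * sigma * sigma))
```

In this model the gap center matches full scanline brightness at sigma of roughly 0.85 output rows; narrower beams leave a dark gap whose pixels alternate between fields, which is exactly the residual combing and image-retention risk described above.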

I'll start with the weighted bobbing and wide-scanlines methods, since they're both simple to code. Then I'll focus on edge detection and DTW.