HolyWu/vs-femasr

Strange highlights and TensorRT not working.

Closed this issue · 5 comments

Selur commented

When using:

# Imports
import vapoursynth as vs
# getting Vapoursynth core
core = vs.core
import site
import os
import ctypes
# Adding torch dependencies to PATH
path = site.getsitepackages()[0]+'/torch_dependencies/'
ctypes.windll.kernel32.SetDllDirectoryW(path)
path = path.replace('\\', '/')
os.environ["PATH"] = path + os.pathsep + os.environ["PATH"]
# Loading Plugins
core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/Support/fmtconv.dll")
core.std.LoadPlugin(path="i:/Hybrid/64bit/vsfilters/SourceFilter/LSmashSource/vslsmashsource.dll")
# source: 'C:\Users\Selur\Desktop\howToSharpenThis.mp4'
# current color space: YUV420P10, bit depth: 10, resolution: 3840x2160, fps: 50, color matrix: 709, yuv luminance scale: limited, scanorder: progressive
# Loading C:\Users\Selur\Desktop\howToSharpenThis.mp4 using LibavSMASHSource
clip = core.lsmas.LibavSMASHSource(source="C:/Users/Selur/Desktop/howToSharpenThis.mp4")
# Setting color matrix to 709.
clip = core.std.SetFrameProps(clip, _Matrix=1)
clip = clip if not core.text.FrameProps(clip,'_Transfer') else core.std.SetFrameProps(clip, _Transfer=1)
clip = clip if not core.text.FrameProps(clip,'_Primaries') else core.std.SetFrameProps(clip, _Primaries=9)
# Setting color range to TV (limited) range.
clip = core.std.SetFrameProp(clip=clip, prop="_ColorRange", intval=1)
# making sure frame rate is set to 50
clip = core.std.AssumeFPS(clip=clip, fpsnum=50, fpsden=1)
clip = core.std.SetFrameProp(clip=clip, prop="_FieldBased", intval=0)
# cropping the video to 820x820
clip = core.std.CropRel(clip=clip, left=1020, right=2000, top=540, bottom=800)

clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, matrix_in_s="470bg", range_s="limited")
org = core.resize.Bicubic(clip=clip, width=1640, height=1640)

from vsfemasr import femasr
clip = femasr(clip)

# adjusting output color from: RGBS to YUV420P10 for x265Model
clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P10, matrix_s="470bg", range_s="limited", dither_type="error_diffusion")
org = core.resize.Bicubic(clip=org, format=vs.YUV420P10, matrix_s="470bg", range_s="limited", dither_type="error_diffusion")
clip = core.std.StackHorizontal([org.text.Text("Original"), clip.text.Text("Filtered")])

# Output
clip.set_output()

[screenshot: original vs. filtered output]

The result looks impressive, but I see some strange highlights (which I also see with other sources).
I also checked: enabling or disabling nvfuser and cuda_graphs does not change these highlights.
Are these to be expected, or is this a bug?
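
For reference, the nvfuser/cuda_graphs toggling mentioned above would look roughly like this; the parameter names follow the ones quoted in this report and may not match the current vsfemasr release exactly:

from vsfemasr import femasr

# same RGBS clip as in the script above; both switches toggled for comparison
clip_a = femasr(clip, nvfuser=False, cuda_graphs=False)
clip_b = femasr(clip, nvfuser=True, cuda_graphs=True)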

Using TensorRT:

clip = core.resize.Bicubic(clip=clip, format=vs.RGBH, matrix_in_s="470bg", range_s="limited")

from vsfemasr import femasr

clip = femasr(clip, trt=True, trt_cache_path=r"G:\Temp")

I also tried with:

clip = femasr(clip, trt=True, trt_cache_path="G:/Temp")

Both failed with:

Python exception: 

Traceback (most recent call last):
  File "src\cython\vapoursynth.pyx", line 2866, in vapoursynth._vpy_evaluate
  File "src\cython\vapoursynth.pyx", line 2867, in vapoursynth._vpy_evaluate
  File "C:\Users\Selur\Desktop\test_2.vpy", line 38, in <module>
    clip = femasr(clip, trt=True, trt_cache_path="G:/Temp")
  File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\vsfemasr\__init__.py", line 196, in femasr
    module = lowerer(
  File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\lower.py", line 323, in __call__
    return do_lower(module, inputs)
  File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\passes\pass_utils.py", line 117, in pass_with_validation
    processed_module = pass_(module, input, *args, **kwargs)
  File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\lower.py", line 320, in do_lower
    lower_result = pm(module)
  File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch\fx\passes\pass_manager.py", line 240, in __call__
    out = _pass(out)
  File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch\fx\passes\pass_manager.py", line 240, in __call__
    out = _pass(out)
  File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\passes\lower_pass_manager_builder.py", line 167, in lower_func
    lowered_module = self._lower_func(
  File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\lower.py", line 180, in lower_pass
    interp_res: TRTInterpreterResult = interpreter(mod, input, module_name)
  File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\lower.py", line 132, in __call__
    interp_result: TRTInterpreterResult = interpreter.run(
  File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\fx2trt.py", line 252, in run
    assert engine
AssertionError

Using TensorRT without specifying trt_cache_path throws the same traceback, with line 38 being

clip = femasr(clip, trt=True)

and again ending in:

  File "I:\Hybrid\64bit\Vapoursynth\Lib\site-packages\torch_tensorrt\fx\fx2trt.py", line 252, in run
    assert engine
AssertionError

I'm using CUDA-11.7_cuDNN-8.6.0_TensorRT-8.5.2.2_win64.7z from vs-animesr and NVIDIA Studio driver 527.56 on a GeForce RTX 4080 under Windows 11.
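
For reference, a quick way to check which CUDA/cuDNN/TensorRT builds the VapourSynth Python environment actually picks up (a generic sketch, not part of the original report):

import torch
import tensorrt

# Versions as seen from inside the VapourSynth Python environment;
# these can differ from what is installed system-wide.
print("torch:", torch.__version__)
print("CUDA (torch build):", torch.version.cuda)
print("cuDNN:", torch.backends.cudnn.version())
print("TensorRT:", tensorrt.__version__)
print("GPU:", torch.cuda.get_device_name(0))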

HolyWu commented

Looks like the strange highlight is a defect of the pretrained model or the network itself. Clamping the tensor to the 0-1 range doesn't help, so it's not caused by out-of-range pixel values.
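
Conceptually, that clamping check amounts to something like the following; a sketch with a dummy tensor, not vsfemasr's actual code:

import torch

def clamp_output(sr: torch.Tensor) -> torch.Tensor:
    # Clamp the super-resolved output to the valid [0, 1] range
    # before it would be converted back into a video frame.
    return sr.clamp(0.0, 1.0)

# dummy tensor standing in for the network output (N, C, H, W)
sr = torch.randn(1, 3, 64, 64) * 0.6 + 0.5
print(sr.min().item(), sr.max().item())  # typically falls outside [0, 1] here
print(clamp_output(sr).min().item(), clamp_output(sr).max().item())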

The TensorRT error is caused by an engine creation failure due to OOM. Currently torch_tensorrt doesn't support all the ops used in the network and has to partition the model into several subgraphs, which is the main culprit. You need to use a smaller tile for it to successfully build the engine, but the performance won't be optimal, unfortunately.
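
For example, something along these lines; the tile size is only illustrative and the exact value that builds successfully depends on the GPU and source resolution:

from vsfemasr import femasr

# 410 divides the 820-px crop from the script above evenly, avoiding a tiny
# leftover tile; smaller values reduce VRAM further at the cost of speed
clip = femasr(clip, trt=True, trt_cache_path="G:/Temp", tile_w=410, tile_h=410)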

Selur commented

You need to use a smaller tile for it to successfully build the engine, but the performance won't be optimal unfortunately.

To what, for example?
Since tile_w: int = 0, tile_h: int = 0 are the defaults and '0 denotes for do not use tile', I assumed no tiling was used, or that, if tiling was needed, the plugin would pick values that work.

Using tile_w=420, tile_h=420, I get an output, but it's kind of broken.
[screenshot: output with tile_w=420, tile_h=420]

Selur commented

Side note: tiling does lower the VRAM usage.
Using tile_w=200 and tile_h=200 results in: Padding size should be less than the corresponding input dimension, but got: padding (0, 220) at dimension 3 of input [1, 3, 216, 36]
Using tile_w=220 and tile_h=220 results in:
[screenshot: output with tile_w=220, tile_h=220]

Seems like either I'm just picking 'unlucky' values, or tiling is somehow broken at the moment.
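
A rough sanity check on the 200x200 case, assuming tiles are simply laid out edge to edge over the 820x820 crop (the numbers are illustrative, not taken from vsfemasr's actual tiling code):

# The rightmost column of tiles is only 820 % 200 = 20 px wide, and reflection
# padding in PyTorch must be smaller than the dimension being padded, which is
# consistent with the reported "padding (0, 220) ... input [1, 3, 216, 36]" error.
crop_w, tile_w = 820, 200
leftover = crop_w % tile_w
print("leftover tile width:", leftover)  # 20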

HolyWu commented

This model/network doesn't seem suitable for tiling, as it's quite sensitive to the content it sees and hence produces very inconsistent results between tiled and non-tiled processing.

Selur commented

Okay, so it's best I don't use TensorRT with this for now.
TensorRT requires tiling to work, and tiling doesn't work with this model. ;)
Thanks for looking into it. :)