run.py forward() takes 2 positional arguments but 3 were given
pgodwin opened this issue · 4 comments
I'm attempting to use run.py, but I'm getting the error below. Any ideas?
Loading TOFlow Net... Done.
Processing...
Traceback (most recent call last):
File "run.py", line 131, in <module>
predict = Estimate(net, Firstfilename=frameFirstName, Secondfilename=frameSecondName, cuda_flag=CUDA)
File "run.py", line 88, in Estimate
input=net(tensorPreprocessedFirst, tensorPreprocessedSecond),
File "C:\Users\pete\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
TypeError: forward() takes 2 positional arguments but 3 were given
Thank you for finding this bug.
Actually, I didn't check the code under ./unstable; you'll have to debug it yourself if you want to use it.
As for the error you reported, you should revise that line to:
input = net(torch.stack([tensorPreprocessedFirst, tensorPreprocessedSecond], dim=1))
Hope this works. If it does, please let me know and I will update the code in this repository.
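For anyone hitting the same error: `forward()` here evidently takes a single stacked tensor rather than two separate frame tensors, so the two frames must be combined along a new axis before the call. A minimal sketch of what `torch.stack([...], dim=1)` does to the shapes, using NumPy's equivalent `np.stack` for illustration (the `(batch, channels, height, width)` frame shape below is an assumption for demonstration, not TOFlow's actual dimensions):

```python
import numpy as np

# Two preprocessed frames, each shaped (batch, channels, height, width).
first = np.zeros((1, 3, 4, 4))
second = np.ones((1, 3, 4, 4))

# torch.stack([...], dim=1) behaves like np.stack(..., axis=1):
# it inserts a new "frame" axis at position 1, producing
# (batch, 2, channels, height, width), so the network receives
# one positional argument instead of two.
stacked = np.stack([first, second], axis=1)
print(stacked.shape)  # (1, 2, 3, 4, 4)
```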
Thanks @Coldog2333. I've been using this to help denoise VHS captures, and I think I've successfully trained a model.
I'm having some issues saving images during evaluation, though:
plt.imsave(os.path.join(out_img_dir, video, sep, 'out.png'),normalize(predicted_img.permute(1, 2, 0).cpu().detach().numpy()))
Traceback (most recent call last):
File "evaluate2.py", line 172, in <module>
vimeo_evaluate(dataset_dir, './test-result', pathlistfile, task=task, cuda_flag=cuda_flag)
File "evaluate2.py", line 164, in vimeo_evaluate
plt.imsave(os.path.join(out_img_dir, str_format % count),predicted_img.permute(1, 2, 0).cpu().detach().numpy())
File "/home/pete/.local/lib/python3.6/site-packages/matplotlib/pyplot.py", line 2133, in imsave
return matplotlib.image.imsave(fname, arr, **kwargs)
File "/home/pete/.local/lib/python3.6/site-packages/matplotlib/image.py", line 1496, in imsave
rgba = sm.to_rgba(arr, bytes=True)
File "/home/pete/.local/lib/python3.6/site-packages/matplotlib/cm.py", line 271, in to_rgba
raise ValueError("Floating point image RGB values "
ValueError: Floating point image RGB values must be in the 0..1 range.
I'm guessing there's a way to scale the result, but I'm unsure of the best method yet.
I used:
import numpy as np

def normalize(x):
    """
    Normalize sample image data to the range 0 to 1
    : x: Image data, e.g. of shape (32, 32, 3)
    : return: Numpy array of normalized data
    """
    return np.array((x - np.min(x)) / (np.max(x) - np.min(x)))
But this seemed to return variations of brightness levels between frames.
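That brightness variation is expected with per-frame min-max normalization: each frame is rescaled by its own min and max, so the same pixel value can land at a different brightness depending on what else is in the frame. A small illustration (toy values, not actual model output):

```python
import numpy as np

def normalize(x):
    # Per-frame min-max rescaling, as in the snippet above.
    return (x - np.min(x)) / (np.max(x) - np.min(x))

# Two frames sharing the value 0.5, but with different dynamic
# ranges (the second has one out-of-range bright pixel).
frame_a = np.array([0.2, 0.5, 0.8])
frame_b = np.array([0.2, 0.5, 1.4])

# The identical input value 0.5 maps to different output levels,
# which shows up as brightness flicker between frames.
print(normalize(frame_a)[1])  # 0.5
print(normalize(frame_b)[1])  # 0.25
```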
Normalizing wasn't the right approach; I've just clipped the values that fall outside the bounds:
plt.imsave(os.path.join(out_img_dir, video, sep, 'out.png'),np.clip(predicted_img.permute(1, 2, 0).cpu().detach().numpy(),0,1))
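In other words, `np.clip` clamps only the stray out-of-range values while leaving in-range pixels untouched, so brightness stays consistent from frame to frame and matplotlib's 0..1 requirement for float RGB images is satisfied. A quick sketch with toy values:

```python
import numpy as np

# A frame whose values slightly overshoot the valid [0, 1] range.
frame = np.array([-0.1, 0.0, 0.5, 1.0, 1.2])

# Out-of-range values are clamped to the bounds; everything
# already inside [0, 1] passes through unchanged.
clipped = np.clip(frame, 0, 1)
print(clipped)  # [0.  0.  0.5 1.  1. ]
```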
Please see issue #6 for the image RGB value range problem if needed.