Getting closer to running but errors out on AttributeError: 'Tensor' object has no attribute 'T'
gateway opened this issue · 5 comments
... output..
processed optical flow: 00723 <---> 00724
processed optical flow: 00724 <---> 00725
processed optical flow: 00725 <---> 00726
processed optical flow: 00726 <---> 00727
processed optical flow: 00727 <---> 00001
Current size 256px
/home/gateway/anaconda3/envs/maua/lib/python3.7/site-packages/torch/nn/functional.py:2539: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
"See the documentation of nn.Upsample for details.".format(mode))
Optimizing... size: 256, pass: 1, frame: 00002
Traceback (most recent call last):
File "canvas.py", line 337, in <module>
vid_img(opt)
File "canvas.py", line 238, in vid_img
content_frames = [match_histogram(f + noise, style_images_big[0]) for f in content_frames]
File "canvas.py", line 238, in <listcomp>
content_frames = [match_histogram(f + noise, style_images_big[0]) for f in content_frames]
File "canvas.py", line 49, in match_histogram
_, t, Ct = get_histogram(frame, eps)
File "canvas.py", line 22, in get_histogram
Ch = th.mm(h, h.T) / h.shape[1] + eps * th.eye(h.shape[0])
AttributeError: 'Tensor' object has no attribute 'T'
Line 22 in eb265bc
current command line:
Btw, some things I noticed: your dirs are hardcoded in the config files, which is an easy fix.
If you try to run the script again on an already created folder it won't run. It would probably be good to do two things if possible: skip recomputing flow if all the files are already there, or add an overwrite flag. For now the fix is to delete the dir and start the process all over again.
Also, I wasn't sure what I need to change this to:
(maua) gateway@gateway-media:~/work/video/maua-style$ grep -r '/home/' *.py
style-similarity.py:dataset_folder = f"/home/hans/datasets/{dataset_name}"
Cheers! We can get through this :)
What version of pytorch are you using? I think from 1.2 onwards .T should be available.
tensor.T is shorthand for the transpose, i.e. tensor.transpose(0, 1) or torch.t(tensor) for a 2-D tensor. You could try just substituting one of those in, but that line works fine on my version, so I think an older PyTorch might be the cause of more problems.
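For anyone hitting the same error, the substitution could look like this (a minimal sketch of the covariance line from canvas.py's get_histogram, rewritten with th.t; the function name and default eps here are illustrative, not the repo's actual code):

```python
import torch as th

def channel_covariance(h, eps=1e-2):
    # h is the flattened (channels x pixels) feature matrix. On PyTorch < 1.2
    # Tensor.T doesn't exist, but th.t(h) (or h.transpose(0, 1)) computes the
    # same 2-D transpose on every PyTorch version, so line 22's
    #   th.mm(h, h.T) / h.shape[1] + eps * th.eye(h.shape[0])
    # becomes:
    return th.mm(h, th.t(h)) / h.shape[1] + eps * th.eye(h.shape[0])
```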
I've written it so that it doesn't recalculate flow or rendered frames if they're already in the directory. What's the error you get when you run with an already created folder?
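The skip-if-exists behavior amounts to a check like this (an illustrative sketch only, not the actual canvas.py code; maybe_compute_flow is a made-up name):

```python
import os

def maybe_compute_flow(out_path, compute):
    # Reuse the cached result if the output file already exists;
    # otherwise run the (expensive) flow computation.
    if not os.path.exists(out_path):
        compute(out_path)
    return out_path
```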
style-similarity.py tries to sort images by their color similarity. You don't need it to do style transfer. I use it to find style images that are similar in my huge directory. dataset_folder is just a file path to a folder full of images.
BTW: you might want to change the ffmpeg options in the .yaml file. I was using hevc_nvenc which is GPU accelerated, but that's not enabled unless you compile ffmpeg from source. This is probably a safer bet (although you should adjust the frame rate to whatever your frame rate is).
ffmpeg:
  framerate: 30
  pix_fmt: yuv420p
  crf: 20
Also, you can get away with fewer image_sizes and lower num_iterations for each size; that'll speed up the style transfer part quite a bit.
Yeah, I have CUDA 9.0 installed by default since a lot of the older code I'm using needs it and breaks otherwise, so I had to install CUDA 10 as well without breaking my system :) That all works and let me run PyTorch 1.2 (CUDA 9 caps me at PyTorch 1.1.x).
Another issue I ran into: CuPy 8.0.x doesn't have cupy.util anymore, so
@cupy.util.memoize(for_each_device=True)
had to be changed to
@cupy.memoize(for_each_device=True)
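One way to support both CuPy versions at once is a small shim (get_memoize is my own illustrative helper, not part of CuPy or this repo):

```python
def get_memoize(cupy_module):
    # Older CuPy exposes cupy.util.memoize; CuPy >= 8.x removed cupy.util
    # and exposes cupy.memoize at the top level. Return whichever exists.
    util = getattr(cupy_module, "util", None)
    if util is not None and hasattr(util, "memoize"):
        return util.memoize   # older CuPy
    return cupy_module.memoize  # CuPy >= 8.x
```

Then `memoize = get_memoize(cupy)` and decorate with `@memoize(for_each_device=True)` regardless of the installed version.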
I compiled my own ffmpeg a while ago with GPU support, so I'm good.
It's currently optimizing at size 256, so we'll see what happens soon.. lol.
Also, is there a way to keep the original size of the video, or settings for 720p/1080p?
As in the aspect ratio? It should just scale the short side, although it looks like there are a couple of hardcoded resizes left in load.py. You can try disabling those.
If you just mean the size of the image, you can change the image_sizes option in the yaml. The reason it starts out so small is to improve the style transfer: at each scale the algorithm optimizes a different scale of detail in the image, leading to more information from the style getting transferred.
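The coarse-to-fine idea behind image_sizes can be sketched like this (names and structure are illustrative; the real loop in canvas.py is more involved):

```python
def coarse_to_fine(image_sizes, upscale, optimize, init):
    # Run the optimization once per scale, smallest first. Each pass starts
    # from the previous scale's result upsampled, so coarse style structure
    # is locked in before fine details are optimized.
    result = init
    for size in image_sizes:
        result = optimize(upscale(result, size), size)
    return result
```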
Any chance to get more performance out of the GPU while it's in the "Optimizing... size X, pass X, frame X" stage?
I'm not sure how to improve utilization, I think it's mainly a problem at the smallest scales. Theoretically things could be batched together, but that's a pretty big change from how it works now, especially because previous frames' results influence the next frames.
Going to go ahead and close this issue for now. Performance is definitely something I want to take a look at in the future, and the dependency-related issues should be fixed by using the up-to-date requirements.txt.