Produce Larger Output Image
kyung645 opened this issue · 6 comments
Hi,
Is it possible to produce larger output images? Currently the outputs come out around 450x300. I tried adding a --load_size 1024 option, but it fails with "TypeError: can't multiply sequence by non-int of type 'float'". Would you happen to know how to generate larger images, around 1024x1024? Thanks.
@kyung645
Hi,
I'm not a contributor, but I succeeded in producing photos as large as 2048x2048.
My environment is:
GPU: NVIDIA P100 with 16GB of VRAM x2
PyTorch version: 1.0
The original PyTorch code is fine, but since the volatile option is deprecated in recent PyTorch, test.py needs a fix or you will run into memory exhaustion. First, the inference part should run under torch.no_grad(). Second, the cache should be cleared after every image. The last part of test.py should then look like this, and you can feed in large photos by adjusting the --load_size option.
Good luck!
if opt.gpu > -1:
    with torch.no_grad():
        # Variable is a no-op in PyTorch >= 0.4, kept from the original script
        input_image = Variable(input_image).cuda()
        # forward
        output_image = model(input_image)
else:
    with torch.no_grad():
        input_image = Variable(input_image).float()
        output_image = model(input_image)
output_image = output_image[0]
# BGR -> RGB
output_image = output_image[[2, 1, 0], :, :]
# deprocess, (0, 1)
output_image = output_image.data.cpu().float() * 0.5 + 0.5
# save
print('Saving...%s' % (files[:-4] + '_' + opt.style + '.jpg'))
vutils.save_image(output_image, os.path.join(opt.output_dir, files[:-4] + '_' + opt.style + '.jpg'))
# release cached GPU memory before the next image
torch.cuda.empty_cache()
print('Done!')
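In case it helps to see why torch.no_grad() matters at large sizes, here is a minimal standalone sketch (not from this repo; the single Conv2d just stands in for the generator). With gradients enabled, autograd keeps intermediate activations alive for a potential backward pass, and at high resolutions that is what exhausts memory:

import torch

x = torch.randn(1, 3, 1024, 1024)
conv = torch.nn.Conv2d(3, 64, 3, padding=1)

# default mode: the output carries a grad graph, so intermediate
# activations stay alive for a potential backward pass
y = conv(x)
print(y.requires_grad)  # True

# inference mode: nothing is recorded, so peak memory stays low
with torch.no_grad():
    y = conv(x)
print(y.requires_grad)  # False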
@kyung645 The --load_size argument is missing a type declaration in test.py on line 13. It should read like this:
parser.add_argument('--load_size', type=int, default=450)
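For anyone curious why the missing type=int produces exactly that error: argparse stores command-line values as strings by default, and multiplying a string by a float raises that TypeError. A quick repro (the ratio variable is just a stand-in for an aspect ratio computed during resizing):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--load_size', default=450)  # no type=int
opt = parser.parse_args(['--load_size', '1024'])

ratio = 0.75  # stand-in for an aspect ratio used while resizing
# opt.load_size is the string '1024' here, not the int 1024:
new_size = opt.load_size * ratio  # TypeError: can't multiply sequence by non-int of type 'float'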
Thank you all @enigmanx20 @DrazHD for making the code better!
Thank you @enigmanx20 @DrazHD.
The type declaration for the argument was a quick fix, and it let me produce images of at least 1024x1024.
I then tried generating at 2048x2048 but ran into a memory problem, so I will be trying @enigmanx20's suggestion. By the way, an NVIDIA P100 sounds nice! Is that a local setup?
Thanks again!
I tested a 1024x1024 image and it takes 13GB of memory to run the model. Is that okay?
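For a rough sanity check (assuming float32 activations and batch size 1; the 256-channel figure is purely illustrative), activation memory scales with H*W, so 13GB at 1024x1024 is not implausible, and 2048x2048 would need roughly 4x that:

# one 256-channel float32 feature map at 1024x1024
print(256 * 1024 * 1024 * 4 / 2**30)   # 1.0 GiB
# activations scale with H*W: going 1024 -> 2048 quadruples them
print((2048 * 2048) / (1024 * 1024))   # 4.0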
@enigmanx20 I don't have an NVIDIA GPU, so I run it on the CPU.
I have 32GB of RAM, but it doesn't seem to help. I tried the code you typed and set load_size = 1000, but the OOM problem still exists.