Questions about transfer time
FantasyJXF opened this issue · 5 comments
FantasyJXF commented
I use the command python WCT.py
to generate the images, and each image takes more than 40 seconds. I run the model on my Mac (Retina, 15-inch, Mid 2015).
Your 12GB TITAN X takes less than 1 second? I read your paper, and it says so.
Elapsed time is: 44.224898
Transferring in3.jpg
Elapsed time is: 43.248152
Transferring in1.jpg
Elapsed time is: 41.960227
Transferring in4.jpg
Elapsed time is: 51.534320
Processed 4 images. Averaged time is 45.241899
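For reference, per-image timings like the ones above are usually taken by bracketing the transfer call with wall-clock timestamps; a minimal sketch (the transfer call itself is elided):

```python
import time

start = time.time()
# ... run the style transfer for one image here ...
elapsed = time.time() - start
print('Elapsed time is: %f' % elapsed)
```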
liwei92 commented
Using the GPU.
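A minimal device check, assuming a CUDA-enabled PyTorch build, would look like this (the model/tensor names in the comments are illustrative, not the script's actual variables):

```python
import torch

# Use the GPU if a CUDA device is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device.type)

# The model and every input tensor must then be moved onto that device, e.g.:
#   wct = WCT(args).to(device)
#   content_img = content_img.to(device)
```

On a Mac without an NVIDIA GPU this falls back to the CPU, which would explain the 40+ second timings.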
FantasyJXF commented
I tried PyTorch 0.3.0; there is no warning, and the result is the same as with my PyTorch 0.4.0.
Hoping for your reply.
sunshineatnoon commented
The results in the paper were produced using 4 auto-encoders; I guess you are using 5 auto-encoders.
FantasyJXF commented
I checked the code; maybe that's the problem. I'm going to try it.
Thanks
import torch.nn as nn
from torch.utils.serialization import load_lua  # available in PyTorch <= 0.4

class WCT(nn.Module):
    def __init__(self, args):
        super(WCT, self).__init__()
        # Load the pre-trained VGG encoders and their matching decoders
        # (Torch7 .t7 checkpoints), one pair per level.
        vgg1 = load_lua(args.vgg1)
        decoder1_torch = load_lua(args.decoder1)
        vgg2 = load_lua(args.vgg2)
        decoder2_torch = load_lua(args.decoder2)
        vgg3 = load_lua(args.vgg3)
        decoder3_torch = load_lua(args.decoder3)
        vgg4 = load_lua(args.vgg4)
        decoder4_torch = load_lua(args.decoder4)
        vgg5 = load_lua(args.vgg5)
        decoder5_torch = load_lua(args.decoder5)

        # Wrap the loaded weights in the repo's encoder/decoder modules
        # (encoder1 ... decoder5 come from the repo's model definitions).
        # e5/d5 is the extra fifth level; the paper's results use only 1-4.
        self.e1 = encoder1(vgg1)
        self.d1 = decoder1(decoder1_torch)
        self.e2 = encoder2(vgg2)
        self.d2 = decoder2(decoder2_torch)
        self.e3 = encoder3(vgg3)
        self.d3 = decoder3(decoder3_torch)
        self.e4 = encoder4(vgg4)
        self.d4 = decoder4(decoder4_torch)
        self.e5 = encoder5(vgg5)
        self.d5 = decoder5(decoder5_torch)
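Dropping to 4 auto-encoders just means never loading or running the e5/d5 pair. A minimal sketch of the coarse-to-fine loop, with nn.Identity stand-ins for the real encoder/decoder pairs (which load_lua would normally build from the .t7 files), is:

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the real per-level encoder/decoder pairs.
encoders = [nn.Identity() for _ in range(4)]  # e1..e4, no e5
decoders = [nn.Identity() for _ in range(4)]  # d1..d4, no d5

def transfer(img):
    # Coarse-to-fine: the 4-level pipeline runs levels 4..1 and simply
    # never touches e5/d5, saving one full encode/transform/decode pass.
    for enc, dec in zip(reversed(encoders), reversed(decoders)):
        feat = enc(img)
        # ... whitening-and-coloring of `feat` would happen here ...
        img = dec(feat)
    return img

out = transfer(torch.randn(1, 3, 64, 64))
print(out.shape)
```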