AttributeError: 'WarmupMultiStepLR' object has no attribute 'verbose'
amruz opened this issue · 2 comments
Thank you for the repository.
However, I am getting the following error:
File "train_alae.py", line 347, in
world_size=gpu_count)
File "/home/Documents/ALAE-master/launcher.py", line 131, in run
_run(0, world_size, fn, defaults, write_log, no_cuda, args)
File "/home/Documents/ALAE-master/launcher.py", line 96, in _run
fn(**matching_args)
File "train_alae.py", line 185, in train
reference_batch_size=32, base_lr=cfg.TRAIN.LEARNING_RATES)
File "/home/Documents/ALAE-master/scheduler.py", line 91, in init
self.schedulers[name] = WarmupMultiStepLR(opt, lr=base_lr, **kwargs)
File "/home/Documents/ALAE-master/scheduler.py", line 52, in init
self.step(last_epoch)
File "/home/.conda/envs/alae/lib/python3.6/site-packages/torch/optim/lr_scheduler.py", line 166, in step
self.print_lr(self.verbose, i, lr, epoch)
AttributeError: 'WarmupMultiStepLR' object has no attribute 'verbose'
This happens while training on the MNIST dataset with the config file mnist.yaml.
PyTorch - 1.7.0
DareBlopy - 0.0.5
UPDATE
If I add self.verbose = True in the "WarmupMultiStepLR" class in scheduler.py, the above error goes away. Then the following error happens after one iteration:
[1/60] - ptime: 19.65, loss_d: 1.9179583, loss_g: 0.8066629, lae: 0.6642829, blend: 1.000, lr: 0.001500000000, 0.001500000000, max mem: 666.889648
Traceback (most recent call last):
File "train_alae.py", line 347, in
world_size=gpu_count)
File "/home/Documents/ALAE-master/launcher.py", line 131, in run
_run(0, world_size, fn, defaults, write_log, no_cuda, args)
File "/home/Documents/ALAE-master/launcher.py", line 96, in _run
fn(**matching_args)
File "train_alae.py", line 337, in train
model.module if hasattr(model, "module") else model, cfg, encoder_optimizer, decoder_optimizer)
File "train_alae.py", line 64, in save_sample
Z, _ = model.encode(sample_in, lod2batch.lod, blend_factor)
File "/home/Documents/ALAE-master/model.py", line 109, in encode
Z = self.encoder(x, lod, blend_factor)
File "/home/.conda/envs/alae/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/Documents/ALAE-master/net.py", line 345, in forward
return self.encode(x, lod)
File "/home/Documents/ALAE-master/net.py", line 311, in encode
x = self.from_rgb[self.layer_count - lod - 1](x)
File "/home/.conda/envs/alae/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/Documents/ALAE-master/net.py", line 257, in forward
x = self.from_rgb(x)
File "/home/.conda/envs/alae/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/Documents/ALAE-master/lreq.py", line 169, in forward
dilation=self.dilation, groups=self.groups)
RuntimeError: Given groups=1, weight of size [256, 1, 1, 1], expected input[32, 3, 4, 4] to have 1 channels, but got 3 channels instead
Any suggestions here?
If I add self.verbose = True in the "WarmupMultiStepLR" class in scheduler.py, the above error goes away. Then the following error happens after one iteration:
This should work. Apparently, they changed something in PyTorch version 1.7.
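For reference, a minimal sketch of that workaround. The class below is a simplified stand-in for the one in scheduler.py (argument names and the schedule logic are illustrative); the only relevant change is setting self.verbose before the first call to self.step():

```python
import torch
from torch.optim.lr_scheduler import _LRScheduler


class WarmupMultiStepLR(_LRScheduler):
    """Simplified stand-in: like the repository's class, it fills in the base
    attributes by hand and calls self.step() itself instead of super().__init__()."""

    def __init__(self, optimizer, lr, last_epoch=-1):
        self.optimizer = optimizer
        self.base_lrs = [lr] * len(optimizer.param_groups)
        self.last_epoch = last_epoch
        self._step_count = 0
        # PyTorch >= 1.7: _LRScheduler.step() calls self.print_lr(self.verbose, ...),
        # so the attribute must exist before the first step().
        # False keeps the old, silent behaviour.
        self.verbose = False
        self.step(last_epoch)

    def get_lr(self):
        # Constant LR here; the real warmup/multi-step schedule lives in scheduler.py.
        return self.base_lrs


opt = torch.optim.SGD([torch.nn.Parameter(torch.zeros(1))], lr=1.5e-3)
sched = WarmupMultiStepLR(opt, lr=1.5e-3)  # no AttributeError under PyTorch 1.7
```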
RuntimeError: Given groups=1, weight of size [256, 1, 1, 1], expected input[32, 3, 4, 4] to have 1 channels, but got 3 channels instead
Are you using custom images? The config for MNIST expects one-channel images, but your input apparently has 3 channels.
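To illustrate the mismatch, here is a quick sketch where a plain nn.Conv2d stands in for the equalized convolution in lreq.py; in_channels=1 mirrors the weight of size [256, 1, 1, 1] from the error:

```python
import torch
import torch.nn as nn

# 1x1 "from_rgb"-style conv built for single-channel input.
from_rgb = nn.Conv2d(in_channels=1, out_channels=256, kernel_size=1)

from_rgb(torch.randn(32, 1, 4, 4))      # 1-channel MNIST-style batch: works

try:
    from_rgb(torch.randn(32, 3, 4, 4))  # 3-channel (face-style) batch
except RuntimeError as e:
    print(e)  # ... expected input[32, 3, 4, 4] to have 1 channels, but got 3 channels instead
```

The data pipeline has to deliver tensors whose channel count matches the one the model was configured for.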
Yes, I thought the input dataset path was set correctly. However, it was still pointing to the faces dataset in one part of my code.
Thanks for the quick reply. Closing the issue since it was my mistake!