facebookresearch/madgrad

Optimizers from 1.1 incompatible with 1.2

AngledLuffa opened this issue

If I save a MADGRAD optimizer from v1.1 as part of a PyTorch model checkpoint, it is not compatible with v1.2: calling step() fails with the following error (trainer.py is in our code):

  File "/sailhome/horatio/stanza/stanza/models/constituency/trainer.py", line 780, in train_model_one_epoch
    optimizer.step()
  File "/u/nlp/anaconda/main/anaconda3/envs/stanza-1.2/lib/python3.7/site-packages/torch/optim/lr_scheduler.py", line 65, in wrapper
    return wrapped(*args, **kwargs)
  File "/u/nlp/anaconda/main/anaconda3/envs/stanza-1.2/lib/python3.7/site-packages/torch/optim/optimizer.py", line 113, in wrapper
    return func(*args, **kwargs)
  File "/u/nlp/anaconda/main/anaconda3/envs/stanza-1.2/lib/python3.7/site-packages/madgrad/madgrad.py", line 102, in step
    decouple_decay = group["decouple_decay"]
KeyError: 'decouple_decay'
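
On our side, a workaround sketch that backfills the missing key after loading an old checkpoint (the checkpoint path and dict key below are hypothetical, and `False` is assumed to match v1.2's default for the new option):

```python
import torch
from madgrad import MADGRAD

model = torch.nn.Linear(10, 2)
optimizer = MADGRAD(model.parameters(), lr=1e-3)

# Hypothetical checkpoint layout; adapt the path and key to your format.
checkpoint = torch.load("model_trained_with_madgrad_1.1.pt")
optimizer.load_state_dict(checkpoint["optimizer"])

# load_state_dict copies the saved param_groups verbatim, so groups written
# by v1.1 lack the "decouple_decay" key that v1.2's step() indexes directly.
for group in optimizer.param_groups:
    group.setdefault("decouple_decay", False)  # assumed v1.2 default
```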

Perhaps `group.get("decouple_decay", reasonable_default)` would make old models from 1.1 compatible with 1.2.
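
For illustration, a hedged sketch of that guard inside MADGRAD.step() (the surrounding loop is paraphrased from the traceback, not copied from the source, and `False` is an assumed default):

```python
for group in self.param_groups:
    # Param groups restored from a v1.1 state dict never stored this key,
    # so read it defensively instead of raising KeyError.
    decouple_decay = group.get("decouple_decay", False)  # assumed default
```

PyTorch's built-in optimizers handle the same problem by overriding __setstate__ and calling setdefault for options added after a release (e.g., Adam does this for amsgrad), which repairs old param groups once at load time rather than on every step:

```python
def __setstate__(self, state):
    super().__setstate__(state)
    for group in self.param_groups:
        group.setdefault("decouple_decay", False)  # assumed default
```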