Convergence issue with newer ASE
Closed this issue · 2 comments
Workaround: ML-NEB is still stable and compatible with ASE 3.17.0.
The two latest stable ASE releases, however, break ML-NEB: each iteration slows down dramatically, and the calculation may fail to converge.
Help is wanted in identifying the bug.
I found that changing line 387 in optimize/mlneb.py:
neb_opt.steps = 0
to:
neb_opt.nsteps = 0
solves the problem. There is no "steps" attribute in the Dynamics class of ase/optimize/optimize.py, but there is "nsteps", which keeps track of the number of steps taken in an optimization. Because Python silently creates a new attribute on assignment, `neb_opt.steps = 0` raises no error; it just never resets the real counter.
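The mechanics of the bug can be sketched without ASE. The class below is a hypothetical stand-in for `ase.optimize.optimize.Dynamics` (not the real implementation); it shows how assigning to a misspelled attribute silently creates a new one instead of resetting the step counter:

```python
class Dynamics:
    """Minimal stand-in for ase.optimize.optimize.Dynamics (hypothetical sketch)."""

    def __init__(self):
        self.nsteps = 0  # the real attribute that counts optimizer steps

    def run(self, steps=10):
        # Continues from the current nsteps count; if nsteps was never
        # reset between calls, the step budget is effectively exhausted.
        while self.nsteps < steps:
            self.nsteps += 1


opt = Dynamics()
opt.run(steps=5)

# Buggy "reset": Python silently creates a brand-new attribute "steps"
# rather than touching nsteps, so no error is raised and nothing resets.
opt.steps = 0
assert opt.nsteps == 5  # counter was NOT reset

# Correct reset, as in the proposed fix:
opt.nsteps = 0
assert opt.nsteps == 0
```

This is why the failure mode is a silent slowdown rather than a crash: each inner NEB optimization inherits the stale step count instead of starting from zero.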
Thank you @exenGT . This also fixes it for me on latest ase master.
Pull request #88 opened to fix it. @jagarridotorres could you please double check this?