[BUG] Simulation Crashes Due To Periodic Box Shrinking
Describe the bug
Occasional simulation crashes seem to be caused by what the engine is reporting as a periodic box shrinking. This crash tends to happen between λ values of 0.6-1.0 for the system provided (GLU --> GLY mutation on a tripeptide with co-alc water), after around 1ns
of equilibration time. I have also seen this crash happen on two other systems (same system without co-alc water and a large 770 residue system), with variety of box sizes. I don't believe that the issue is necessarily with barostat itself, but rather the system blowing up and triggering this error somehow (suggested previously here). Currently, I am unable to reproduce this crash in sire, which is interesting since I tried to keep the simulation parameters as similar as possible.
The exact error reported is:
Minimisation/dynamics at λ = 1.0 failed with the following exception The periodic box size has decreased to less than twice the nonbonded cutoff.
To reproduce
SOMD2
Extract the provided tar.gz file and run the SOMD2 input file with:
somd2 e2g_box_30.bss --log-level debug --timestep 4fs --log-file output.log --cutoff-type rf --runtime 500ps --num-lambda 33 --cutoff 10A --equilibration-time 10000ps --equilibration-timestep 2fs --energy-frequency 1ps --frame-frequency 200ps --checkpoint-frequency 5ns
An output.log file is also provided, which contains the logged errors from a previous run.
Sire
import sire as sr
from sire import morph
import numpy as np
system = sr.stream.load("e2g_box_30.bss")
system = morph.link_to_reference(system)
system = morph.repartition_hydrogen_masses(system, 1.5)
# run params
timestep = "2fs"
equil_time = "10000ps"
run_time = "500ps"
cutoff_type = "rf"
energy_frequency = "1ps"
lambda_value = 0.65625 # problematic lambda value
cutoff = "10A"
constraint = "h-bonds"
perturbable_constraint = "h-bonds-not-perturbed"
# minimisation
min_system = (
    system.minimisation(
        lambda_value=lambda_value,
        constraint=constraint,
        perturbable_constraint="none",
    )
    .run()
    .commit()
)
lambda_values = list(np.linspace(0.0, 1.0, 33))
d = min_system.dynamics(
    timestep=timestep,
    temperature="25oC",
    cutoff=cutoff,
    cutoff_type=cutoff_type,
    lambda_value=lambda_value,
    shift_delta="2A",
    pressure="1atm",
    constraint=constraint,
    perturbable_constraint=perturbable_constraint,
)
# generate random velocities
d.randomise_velocities()
# equilibrate, not saving anything
d.run(equil_time, save_frequency=0)
lambda_index = lambda_values.index(lambda_value)
lambda_windows = lambda_values[max(lambda_index - 1, 0) : min(len(lambda_values), lambda_index + 2)]
# run the dynamics, saving the energy every 1ps
d.run(
run_time,
energy_frequency=energy_frequency,
frame_frequency=0,
lambda_windows=lambda_windows,
)
Input files
Environment information:
SOMD2 version: 0.1.dev261+g6e0da22.d20240307
Sire version: 2024.1.0.dev+d971cfd
- compiled with OpenMM performance enhancements enabled
The increased stability within sire is probably explained by the fact that SOMD2 currently does not use constraints during equilibration. Given that I have been running equilibration with a 2fs timestep, this is likely causing the system to explode in SOMD2, which somehow triggers the periodic box shrinking error (a quick way to test this is sketched below).
somd2/src/somd2/runner/_dynamics.py, lines 176 to 179 in ddef28f
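To test this directly, the equilibration stage of the sire script above can be rerun with and without constraints at the 2fs timestep; the expectation is that only the unconstrained run (which mimics SOMD2's current equilibration behaviour) blows up and hits the box-size error. This is a minimal sketch that reuses only the API calls from the script above; the 1000ps equilibration length and the try/except reporting are assumptions for illustration:

import sire as sr
from sire import morph

system = sr.stream.load("e2g_box_30.bss")
system = morph.link_to_reference(system)
system = morph.repartition_hydrogen_masses(system, 1.5)

# minimise once, as in the repro script above
min_system = (
    system.minimisation(
        lambda_value=0.65625,
        constraint="h-bonds",
        perturbable_constraint="none",
    )
    .run()
    .commit()
)

# equilibrate at a 2fs timestep with and without h-bond constraints; the
# unconstrained run mimics SOMD2's current equilibration behaviour
for constraint in ["h-bonds", "none"]:
    d = min_system.dynamics(
        timestep="2fs",
        temperature="25oC",
        pressure="1atm",
        cutoff="10A",
        cutoff_type="rf",
        lambda_value=0.65625,
        shift_delta="2A",
        constraint=constraint,
        perturbable_constraint="none",
    )
    d.randomise_velocities()
    try:
        d.run("1000ps", save_frequency=0)
        print(f"constraint={constraint}: equilibration completed")
    except Exception as e:
        print(f"constraint={constraint}: equilibration failed ({e})")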
Thanks. I think we should probably equilibrate with constraints if it's unstable without, or perhaps turn them on and restart if it crashes. We could also expose constraint options for the equilibration too, but it might get a bit much.
I think some sort of option to enable constraints during equilibration would be good. At the moment the --equilibration-timestep flag can only be used stably with a 1fs timestep, so the flag by itself isn't really doing much. The option to automatically enable constraints after a crash is nice, but in this case the simulation only crashes after around 1ns of runtime, so you would have to wait that long just for the simulation to restart with constraints. The final option of exposing constraints to the CLI, perhaps as a simple --equilibration-constraints flag (enabled by default), which would set the equilibration constraints to those of the production simulation, and a --no-equilibration-constraints flag, which would disable them entirely, would do the trick?
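The selection logic for such a flag could be very small. This is just a hypothetical sketch (the option and attribute names are placeholders, not existing SOMD2 options):

# hypothetical sketch: pick the equilibration constraints from a single
# boolean option, reusing the production constraints when it is enabled
def equilibration_constraints(config):
    """Return (constraint, perturbable_constraint) for the equilibration stage."""
    if config.equilibration_constraints:  # e.g. --equilibration-constraints (default)
        return config.constraint, config.perturbable_constraint
    return "none", "none"  # e.g. --no-equilibration-constraints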
That would work and avoid the need to expose a bunch more constraint options. That way we could also match what is chosen for the minimisation performed prior to equilibration and production. We can also raise a warning if the user chooses no constraint but specifies a time step larger than 1fs.
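For the warning, something along these lines should be enough; a hypothetical sketch with placeholder names, not the actual SOMD2 config handling:

import warnings

# hypothetical validation step: warn if equilibration runs unconstrained
# with a timestep above 1fs, which is likely to be unstable
def check_equilibration_settings(equilibration_timestep_fs, equilibration_constraints):
    if not equilibration_constraints and equilibration_timestep_fs > 1.0:
        warnings.warn(
            f"Equilibration timestep of {equilibration_timestep_fs}fs with no "
            "constraints may be unstable; consider 1fs or enabling constraints."
        )

check_equilibration_settings(2.0, False)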