help with understanding how the coordinates are updated
Hi @lh3 ,
I am having a little trouble understanding how the coordinates are updated in the hickit code.
I know that in nuc_dynamics, the change in position is velocity*time + 0.5*acceleration*time^2, and the acceleration is calculated as force/mass with an extra term.
Is hickit using a similar equation? Also, while going over the code I did not understand the parameters max_f and coef_moment.
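In code form, what I mean is roughly this (just a sketch of my understanding, not nuc_dynamics's actual code):

```c
/* one Newtonian-style update for a single coordinate */
void newton_step(double *pos, double *vel, double force, double mass, double dt)
{
    double acc = force / mass;               /* plus an extra term in nuc_dynamics */
    *pos += *vel * dt + 0.5 * acc * dt * dt; /* change in position */
    *vel += acc * dt;                        /* the velocity is updated as well */
}
```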
Any help will be great!
Hi @tarak77,
As far as I understand (@lh3, please correct me if I got it wrong), hickit does not directly model Newtonian dynamics (e.g. a 2nd-order differential equation) as in nuc_dynamics; however, hickit's use of momentum (controlled by coef_moment) makes the two similar. Please see the code in fdg.c for details.
More specifically, in the main function hk_fdg1(), the "force" matrix f[][] stores the repulsive and attractive forces computed by update_force(), as in nuc_dynamics. However, this "force" is not used in exactly the same way as acceleration is used to update particle coordinates in Newtonian dynamics. In particular:
Hickit first calculates the root-mean-square (RMS) "force" t from f[][] with fv3_L2() and sqrtf(). The RMS value is printed as RMS_force on standard error, and will be normalized down to max_f (together with the entire f[][]) if it exceeds max_f. Note that hickit does not add random/thermal fluctuations (or calculate an RMS velocity to determine temperature) as in nuc_dynamics.
Hickit then updates the particle coordinates x[][] using f[][] as a "velocity", by incrementing x[i][j] with f[i][j] * step. However, hickit also retains a memory of the previous update in x[][] - x0[][], and uses it to adjust the current update as a "momentum" (opt->coef_moment * (t - x0[i][j])), making it similar to Newtonian dynamics. The parameter coef_moment controls the contribution of this momentum term (again, @lh3 please correct me if I got it wrong); see the sketch below.
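To make this concrete, here is a minimal sketch of the two steps above in my own words (this is a reconstruction for illustration, not hickit's actual code; n, step, max_f and coef_moment stand in for the corresponding variables and options in fdg.c):

```c
#include <math.h>

/* f[][], x[][] and x0[][] are n-by-3 arrays holding the per-bead "forces",
 * the current coordinates, and the coordinates from the previous iteration. */
static void update_coords(int n, float f[][3], float x[][3], float x0[][3],
                          float step, float max_f, float coef_moment)
{
    int i, j;
    double sum = 0.0;
    float rms;

    /* step 1: root-mean-square "force" over all beads and dimensions */
    for (i = 0; i < n; ++i)
        for (j = 0; j < 3; ++j)
            sum += (double)f[i][j] * f[i][j];
    rms = sqrtf((float)(sum / (3.0 * n)));

    /* if the RMS exceeds max_f, scale the entire f[][] down accordingly */
    if (rms > max_f) {
        float s = max_f / rms;
        for (i = 0; i < n; ++i)
            for (j = 0; j < 3; ++j)
                f[i][j] *= s;
    }

    /* step 2: move each coordinate by f*step plus a momentum term that
     * remembers the previous displacement x[][] - x0[][] */
    for (i = 0; i < n; ++i) {
        for (j = 0; j < 3; ++j) {
            float t = x[i][j];
            x[i][j] += f[i][j] * step + coef_moment * (t - x0[i][j]);
            x0[i][j] = t; /* the current position becomes the "previous" one */
        }
    }
}
```

Saving the old position into x0[][] before overwriting it is what lets the next iteration reuse the displacement as momentum.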
Edit: the above has been updated regarding momentum.
Best,
Tan
hickit poses this as an optimization problem. The force is proportional to the gradient, so we can use gradient descent to find an optimum. In the code, hickit uses gradient descent with a momentum term, as is often done in deep learning.
Different formulations lead to largely the same result. They differ in convergence time and in the robustness to local optima.
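For intuition, here is a tiny, self-contained example of gradient descent with momentum on a toy 1D objective (nothing hickit-specific; the objective and the parameter values are made up):

```c
#include <stdio.h>

/* toy objective f(x) = (x - 3)^2; its gradient is 2*(x - 3) */
static double grad(double x) { return 2.0 * (x - 3.0); }

int main(void)
{
    double x = 0.0, v = 0.0;          /* current estimate and momentum buffer */
    double step = 0.1, moment = 0.9;  /* analogous in spirit to step and coef_moment */
    for (int iter = 0; iter < 200; ++iter) {
        v = moment * v - step * grad(x); /* keep a fraction of the previous update */
        x += v;                          /* the momentum buffer acts like a velocity */
    }
    printf("x = %g (minimum is at x = 3)\n", x);
    return 0;
}
```

Here v plays the role of the velocity in the Newtonian picture, which is why the two formulations end up so similar.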
Thanks a lot @lh3 for the explanation! Under a gradient descent framework, everything makes much more sense now.
@tarak77, we built the whole of fdg.c (short for "force-directed graph") to see how much we could simplify nuc_dynamics's energy minimization (i.e. optimization) procedure.
From hickit's results, random/thermal fluctuations turn out not to be essential; however, hierarchical optimization from coarse to fine bin sizes turns out to be useful.
Edit: hickit does use gradient descent with momentum (where the momentum plays a role similar to velocity in Newtonian dynamics). I've also edited my previous reply with more details.
Best,
Tan
I understand now. Thank you!