Mass continuity cost function not minimizing all the way
I am running PyDDA on data from two radars and comparing its output to that of another program that I know works well. The u and v wind output is great, but the magnitude of w is maybe 3 or 4 times too low, and its shape is inaccurate in some areas.
When I initialize w as a constant field with a value other than 0, the minimization does not move w very far from that starting point (so if I initialize at 5, w stays fairly close to 5 everywhere). Initializing u or v to something other than 0 does not produce this behavior; they still converge back to the correct values.
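For reference, this is roughly how I am building the constant initial field (a sketch; the `make_constant_wind_field` helper and its return values are what my PyDDA version provides, and the file and field names are placeholders):

```python
import pyart
import pydda

# Read the two gridded radar files (file names are placeholders).
grid1 = pyart.io.read_grid("radar1_grid.nc")
grid2 = pyart.io.read_grid("radar2_grid.nc")

# Constant initial guess with w = 5 everywhere (u = v = 0).
u_init, v_init, w_init = pydda.initialization.make_constant_wind_field(
    grid1, wind=(0.0, 0.0, 5.0), vel_field="corrected_velocity")
```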
I am only using the radial velocity and mass continuity cost functions, and I have narrowed this down to an issue with the mass continuity cost function. That would explain why changing the initialization of u or v does not affect the end state (those components are dominated by the radial velocity cost), while changing the initialization of w does (w is dominated by mass continuity).
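This is roughly how I am calling the retrieval (a sketch; the keyword names and defaults are my assumptions about the current API, and the weight values are just the ones I happen to be using):

```python
# Only the radial velocity (Co) and mass continuity (Cm) costs are
# enabled; the smoothness (Cx, Cy, Cz) and background (Cb) weights
# are set to zero.
grids_out = pydda.retrieval.get_dd_wind_field(
    [grid1, grid2], u_init, v_init, w_init,
    Co=1.0, Cm=1500.0,
    Cx=0.0, Cy=0.0, Cz=0.0, Cb=0.0,
    vel_name="corrected_velocity")

# Retrieved components come back as fields on the output grids.
u = grids_out[0].fields["u"]["data"]
v = grids_out[0].fields["v"]["data"]
w = grids_out[0].fields["w"]["data"]
```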
I have further confirmed this by optimizing only u and v with only the radial velocity cost: the resulting u and v are very accurate. I then held u and v constant and optimized only w with only the mass continuity cost, and the w output was about the same as when I run the program normally.
The L-BFGS-B routine occasionally returns an “ABNORMAL_TERMINATION_IN_LNSRCH” warning, depending on factors such as the parameters passed to L-BFGS-B, the cost function weights, and the wind field initialization. It is worth noting that the erroneous w output (with the field initialized at 0, 0, 0) is erroneous regardless of whether the minimization converges or returns that warning. As far as I recall, the warning did not appear when the mass continuity weight was zero (not 100% sure on that, though).
I will continue looking into this for a little while longer, but does anyone have thoughts as to what I could do to fix this?
We have found an issue with how PyDDA calculates gradients at the boundaries. By next month, we will be making a new release of PyDDA using automatic differentiation that will hopefully resolve this issue.
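To illustrate the idea (a standalone sketch, not PyDDA's actual cost function): with automatic differentiation, the gradient of the discretized mass continuity cost, including the one-sided differences at the boundaries, is derived by the library instead of being hand-coded, which is where the current error lives. Here the grid is assumed to be ordered (z, y, x) and the constraint is simplified to a density-free divergence penalty:

```python
import jax
import jax.numpy as jnp

def mass_continuity_cost(w, u, v, dx, dy, dz):
    # Penalize the squared divergence of the wind field
    # (simplified, density-free form of the constraint).
    dudx = jnp.gradient(u, dx, axis=2)
    dvdy = jnp.gradient(v, dy, axis=1)
    dwdz = jnp.gradient(w, dz, axis=0)
    return jnp.sum((dudx + dvdy + dwdz) ** 2)

# Exact gradient with respect to w, boundary points included,
# without writing the adjoint by hand.
grad_w = jax.grad(mass_continuity_cost, argnums=0)
```

Calling `grad_w(w, u, v, dx, dy, dz)` returns an array the same shape as w.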
We have released PyDDA 1.0! There are now three options for the optimization engine: SciPy, Jax, and TensorFlow. The latter two use automatic differentiation to calculate the gradients. We have observed that the TensorFlow engine in particular typically converges in far fewer iterations than the original implementation.
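Switching engines is intended to be a small change to the retrieval call; roughly (a sketch reusing the call from earlier in this thread, and assuming the rest of the signature is unchanged apart from the new `engine` keyword):

```python
# Same retrieval as before, but with gradients supplied by
# TensorFlow's automatic differentiation instead of the
# hand-coded SciPy path.
grids_tf = pydda.retrieval.get_dd_wind_field(
    [grid1, grid2], u_init, v_init, w_init,
    Co=1.0, Cm=1500.0,
    engine="tensorflow")
```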
I have tested the TensorFlow engine; it works very well, and the w field looks better than before.