Model evaluation post-analysis requires almost as much runtime as the actual analysis in multi-pathway models
luisfabib opened this issue · 2 comments
For a 5-pulse DEER dataset analyzed with 4 dipolar pathways, the least-squares routine takes 45.8% of the runtime, whereas the model evaluation (particularly the uncertainty propagation) takes a whopping 34.2%. This is based on profiling DeerLab's ex_fitting_5pdeer_pathways.py example.
The cause of this poor performance is the repeated calls to the dipolarkernel function during the Jacobian construction used to propagate the uncertainty from the model parameters to the model's response.
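To illustrate why this is so costly: a finite-difference Jacobian requires one full model evaluation per parameter, so a model that rebuilds the dipolar kernel on every call gets re-evaluated N+1 times just for the propagation step. A minimal sketch (not DeerLab's actual implementation; `model` stands in for the kernel-based model evaluation):

```python
import numpy as np

def numerical_jacobian(model, p, h=1e-6):
    """Forward-difference Jacobian of model(p) w.r.t. the parameters p.

    Each column costs one full model evaluation, so an expensive model
    (e.g. one that reconstructs a dipolar kernel internally) is called
    len(p) + 1 times in total.
    """
    p = np.asarray(p, dtype=float)
    f0 = np.asarray(model(p))
    J = np.empty((f0.size, p.size))
    for j in range(p.size):
        pj = p.copy()
        pj[j] += h
        J[:, j] = (np.asarray(model(pj)) - f0) / h
    return J

def propagate_covariance(model, p, cov_p):
    """First-order propagation of the parameter covariance to the
    model response: cov_V = J @ cov_p @ J.T"""
    J = numerical_jacobian(model, p)
    return J @ cov_p @ J.T
```

For a linear model V = A p the computed Jacobian is just A, but for the multi-pathway dipolar models each of those column evaluations triggers a fresh kernel construction, which is where the 34.2% goes.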
I have already observed this when analyzing data with large numbers of parameters and slow models: the model evaluation can take up an enormous fraction of the total analysis time.
One point is important: model parameter uncertainty must always be quantified automatically.
I do not have a strong opinion on whether the parameter uncertainties should be propagated to the model response V(t) automatically or not. If this propagation is very slow, it would be acceptable to have the user trigger it after the fact if wanted.
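A user-triggered propagation could look like the following sketch: the fit stores the parameter covariance, and the expensive propagation to the response is only computed when requested, then cached. This is a hypothetical design, not DeerLab's API; `FitResult` and `response_covariance` are illustrative names.

```python
import numpy as np

class FitResult:
    """Hypothetical result object: parameter uncertainty is stored at
    fit time, but propagation to the model response is deferred until
    the user explicitly asks for it."""

    def __init__(self, model, pfit, cov_p):
        self.model = model
        self.pfit = np.asarray(pfit, dtype=float)
        self.cov_p = np.asarray(cov_p, dtype=float)
        self._cov_V = None  # computed lazily, then cached

    def response_covariance(self, h=1e-6):
        """Propagate the parameter covariance to the model response
        via a forward-difference Jacobian. The expensive model
        evaluations only happen on the first call."""
        if self._cov_V is None:
            f0 = np.asarray(self.model(self.pfit))
            J = np.empty((f0.size, self.pfit.size))
            for j in range(self.pfit.size):
                pj = self.pfit.copy()
                pj[j] += h
                J[:, j] = (np.asarray(self.model(pj)) - f0) / h
            self._cov_V = J @ self.cov_p @ J.T
        return self._cov_V
```

With this layout the least-squares fit itself stays fast, users who never inspect the response uncertainty never pay for the repeated kernel constructions, and repeated queries hit the cache.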