adjoint sensitivities
martinjrobins commented
add ability to calculate the gradient of some output function wrt parameters using adjoint sensitivities (faster than forward sensitivities for models with 100s of parameters)
- #92
- #94
- refactor solver state to be a solver-specific struct and implement a new trait `SolverState`, so each solver can save or load its state and resume
- write a generic checkpointing struct that can (a) save a particular solve as a sequence of states defining a list of `n` segments of the solution trajectory, (b) activate segment `i`, and (c) interpolate the solution at any point in segment `i` (a minimal sketch follows this list)
- #100
- #98
- new solver trait functions:
  - `solve_integrate_adjoint(t_max)`: solves the adjoint problem with a functional given by `f = \int_0^{t_max} out(t) dt`. Returns `dfdp`, where `p` is the parameter vector and `dfdp` is a dense matrix. Steps are (a) do the forward solve, (b) save the forward solve via checkpointing and create the adjoint equations, (c) solve the adjoint equations in reverse time, (d) return the solution of the adjoint equations at the final time as the result (see the equations and flow sketch after this list)
  - `solve_sum_squared_adjoint(t_discrete, data)`: solves the adjoint problem with a functional given by `f = \sum_i (out(t_i) - data_i)^2`. Returns the same as above. Steps are (a) do the forward solve, (b) save the forward solve via checkpointing and create the adjoint equations, (c) solve the adjoint equations in reverse time, using event pullbacks to adjust the adjoint state at each data point (see below), (d) return the solution of the adjoint equations at the final time as the result
  - add `solve_integrate` and `solve_sum_squared`, same as above but return the value of `f` instead
  - add `solve_integrate_fwd` and `solve_sum_squared_fwd`, same as above but use forward sensitivities instead of adjoints
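As a rough illustration of the `SolverState`/checkpointing items above, here is a minimal sketch of what the trait and struct could look like. Every name and signature is an assumption for discussion rather than an existing API, and the linear interpolation is a stand-in for a solver's own dense output:

```rust
/// Hypothetical trait: a snapshot of a solver's internal state
/// (time, state vector, and whatever else it needs to resume).
pub trait SolverState<V> {
    /// time at which this state was saved
    fn time(&self) -> f64;
    /// state vector at `time`
    fn state(&self) -> &V;
}

/// Generic checkpointing over a solve: `n + 1` saved boundary states
/// defining `n` segments of the solution trajectory.
pub struct Checkpointing<S> {
    checkpoints: Vec<S>,
    /// index of the currently active segment, if any
    active: Option<usize>,
}

impl<S: SolverState<Vec<f64>> + Clone> Checkpointing<S> {
    /// (a) a solve is saved as a sequence of states
    pub fn new(checkpoints: Vec<S>) -> Self {
        Self { checkpoints, active: None }
    }

    pub fn n_segments(&self) -> usize {
        self.checkpoints.len().saturating_sub(1)
    }

    /// (b) activate segment `i`: hand back the solver state at its left
    /// edge so the segment can be re-integrated if needed
    pub fn activate_segment(&mut self, i: usize) -> S {
        assert!(i < self.n_segments(), "segment index out of range");
        self.active = Some(i);
        self.checkpoints[i].clone()
    }

    /// (c) interpolate the solution at time `t` within the active segment
    /// (placeholder linear interpolation between the two bounding
    /// checkpoints; a real implementation would use the solver's own
    /// interpolant / dense output)
    pub fn interpolate(&self, t: f64) -> Vec<f64> {
        let i = self.active.expect("no active segment");
        let (s0, s1) = (&self.checkpoints[i], &self.checkpoints[i + 1]);
        let theta = (t - s0.time()) / (s1.time() - s0.time());
        s0.state()
            .iter()
            .zip(s1.state().iter())
            .map(|(a, b)| a + theta * (b - a))
            .collect()
    }
}
```

Storing `n + 1` boundary states for `n` segments means activating a segment restores the solver at its left edge, which is exactly what the reverse adjoint pass needs when it asks for the forward solution at arbitrary times.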
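For reference, these are the standard continuous adjoint equations behind `solve_integrate_adjoint` (a known result, not taken from this issue), for `dy/dt = rhs(y, p, t)` with `y(0) = y_0(p)` and the integral functional above:

```latex
% adjoint ODE, integrated backwards from t_max to 0:
\frac{d\lambda}{dt} = -\left(\frac{\partial rhs}{\partial y}\right)^{T}\lambda
  - \left(\frac{\partial out}{\partial y}\right)^{T},
\qquad \lambda(t_{max}) = 0

% gradient assembled from quadratures along the reverse solve:
\frac{df}{dp} = \int_0^{t_{max}} \left(\frac{\partial out}{\partial p}
  + \lambda^{T}\frac{\partial rhs}{\partial p}\right) dt
  + \lambda(0)^{T}\frac{\partial y_0}{\partial p}
```

When `out` is vector-valued there is one adjoint system per output component, which is why `dfdp` comes back as a dense matrix rather than a vector.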
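For the sum-of-squares functional the same adjoint ODE holds between data points (with no `out` forcing term), and each data time contributes a jump in the adjoint as it is crossed in reverse; this is the event pullback referred to above. Written for a scalar `out`, with the usual sign conventions (again for reference only):

```latex
% between data points (reverse time), no forcing term:
\frac{d\lambda}{dt} = -\left(\frac{\partial rhs}{\partial y}\right)^{T}\lambda,
\qquad \lambda(t_{max}) = 0

% jump ("event pullback") applied at each data time t_i:
\lambda(t_i^{-}) = \lambda(t_i^{+})
  + 2\,\bigl(out(t_i) - data_i\bigr)\left(\frac{\partial out}{\partial y}\right)^{T}
```

The `\int \lambda^{T} (\partial rhs / \partial p)\,dt` quadrature and the `\lambda(0)` initial-condition term then give `dfdp` as before, plus direct `\partial out / \partial p` contributions collected at each `t_i`.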
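Finally, a sketch of how the adjoint entry point could be expressed as a trait with a default method covering steps (a)-(d). All names, signatures, and the split into helper methods are assumptions for illustration, not the crate's actual API:

```rust
/// Illustrative only: a solver that can run the adjoint workflow.
pub trait AdjointSolve {
    /// dense matrix type for `dfdp` (n_out x n_params)
    type M;

    // building blocks each solver is assumed to provide:
    fn forward_solve_with_checkpointing(&mut self, t_max: f64);
    fn create_adjoint_equations(&mut self);
    fn solve_adjoint_reverse(&mut self, t_max: f64);
    fn adjoint_gradient(&self) -> Self::M;

    /// f = \int_0^{t_max} out(t) dt; returns dfdp
    fn solve_integrate_adjoint(&mut self, t_max: f64) -> Self::M {
        // (a) forward solve over [0, t_max] ...
        self.forward_solve_with_checkpointing(t_max);
        // (b) ... saving checkpoints, then build the adjoint equations
        self.create_adjoint_equations();
        // (c) integrate the adjoint equations in reverse time, using the
        //     checkpoints to interpolate the forward solution; the
        //     sum-squared variant would differ only here, applying the
        //     event pullback at each data time during this pass
        self.solve_adjoint_reverse(t_max);
        // (d) the adjoint solution and quadratures at the end of the
        //     reverse pass (t = 0) give dfdp
        self.adjoint_gradient()
    }
}
```

Under the same assumptions, `solve_integrate`/`solve_sum_squared` would return the scalar value of `f` accumulated as a quadrature during the forward solve, and the `_fwd` variants would produce the same gradients by propagating forward sensitivity equations alongside the state instead of running a reverse pass.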