Should controllers be allowed to use `U_GOAL` or only `U_EQ`?
adamhall opened this issue · 6 comments
Carrying on from #93.
Many controllers have been using `env.U_GOAL` either as a linearization point or in the cost. `U_GOAL` is computed with the true system mass, not the prior mass, which is what `symbolic.U_EQ` is for. Should controllers have access to `U_GOAL`, or should they use `U_EQ` exclusively? @Justin-Yuan What are your thoughts on this for the RL cost and normalization?
- For the RL cost, I think it should use the true parameters, since the cost is the only source of information for learning; if it is not based on the true parameters, there's no way an RL agent can learn well when tested in the true environment.
- For RL normalization, I think either would work, but the true parameters are probably better, because the action-space normalization is treated as part of the environment in our current design (if we change the normalization, the task itself also changes). So if we treat priors as part of the algorithmic design, they shouldn't affect the task (including normalization), right? (See the action-wrapper sketch at the end of this comment.)

But for the control methods this can be tricky, since the cost function is part of both the control algorithm and the environment. The ideal case is a clear boundary between what is given as the environment/task (which is used in evaluation) and what is part of the control algorithm. I'd say the cost itself (nonlinear quadratic) is still on the task side (since we need it in evaluation anyway), but anything that involves linearization (needed in the algorithm's optimization) can use the prior.
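To illustrate the normalization point from the list above, here is a minimal sketch (names like `u_eq_true` and `u_range` are illustrative, not the repo's API) of an action wrapper that lives on the environment side and denormalizes actions around the true equilibrium input, so swapping the prior never changes the task:

```python
import numpy as np

class NormalizedActionWrapper:
    """Maps a normalized action in [-1, 1] to a physical input centered at the
    environment's true equilibrium input (environment/task side)."""

    def __init__(self, env, u_eq_true, u_range):
        self.env = env
        self.u_eq_true = np.asarray(u_eq_true)  # true-parameter equilibrium input
        self.u_range = np.asarray(u_range)      # half-width of the admissible input box

    def step(self, normalized_action):
        # Denormalize around the true equilibrium, then step the raw environment.
        u = self.u_eq_true + np.clip(normalized_action, -1.0, 1.0) * self.u_range
        return self.env.step(u)
```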
@adamhall Is there anything in the code that currently needs to be fixed regarding this issue?
@adamhall @Justin-Yuan status?
I am leaning towards using `symbolic.U_EQ` for linearization and `env.U_EQ` for the cost function or reward. The current/updated symbolic model should already be able to expose `U_EQ`, but I'm not sure whether the MPC controllers have been updated to use it as well? @adamhall
Closing issue due to staleness