alexliniger/MPCC

C++ implementation question

LiJiangnanBit opened this issue · 4 comments

@alexliniger Hi, the C++ version is an implementation of this paper, "AMZ Driverless: The Full Autonomous Racing System", but I noticed that the solver used is HPIPM, which is a QP solver. So the C++ version does not solve the non-linear problem directly (as described in the paper), but uses the same linearization method as the Matlab version ("Optimization-based autonomous racing of 1:43 scale RC cars"). Am I right?
Also, I wonder whether it is efficient enough to solve the NLP directly.
Thanks!

There are two standard ways to solve NLPs locally: nonlinear interior point (NIP) methods and sequential quadratic programming (SQP). Both linearize the optimization problem at some point; NIP methods linearize the KKT system at every step, whereas SQP methods approximate the NLP as a QP at every outer (SQP) iteration.
In the AMZ paper we use FORCES PRO which is a fast NIP method (basically IPOPT tailored for MPC), and here I use a simple SQP method.
What I did in "Optimization‐based autonomous racing of 1:43 scale RC cars" is often called real time iteration (RTI), where you do SQP but you only solve one QP instead of solving the sequence of QPs until convergence.
The advantage of RTI is that it is really quick. So in this repo (both the Matlab and C++ versions) I use an SQP method with 2 or 3 iterations, which gives better results than RTI but is still about twice as fast as FORCES PRO.
Best,
Alex

Thanks for your reply! I'll take a look at the code.

Hi, @alexliniger, could you elaborate on how you applied SQP?
I also think about applying MPC with parameterized reference (theta) in my project.

My question comes from the gradients of the objective function.

I refer to followings to understand SQP

  1. Sequential quadratic programming - optimization
  2. From linear to nonlinear MPC: bridging the gap via the real-time iteration

First, without regularization, your formulation of the nonlinear MPC problem is the following.

[screenshot: NMPC formulation without regularization]

M and P are constant matrices expressing the constraints.

And applying the idea of Section 3.1 of [2]:

[screenshot: problem linearized following Section 3.1 of [2]]

Section 3.1 of [2] applies Newton's method to compute \Delta x and \Delta u, but in your case you solve directly for x, since x = x^{guess} + \Delta x; here the normalization of the problem makes the Newton step easier.
The constraints then come back to the simple original form:

[screenshot: constraints in the original form]

Because f() does not have a simple form, I don't think the Hessian can just be approximated by 'Q' (I am not sure), and the other parts also become somewhat complex. However, your C++ and Matlab code handles a much simpler objective function by treating theta^{ref} as a constant.

This part confused me. I'm sorry if this is just a basic calculus problem on my part.

[Edit]
I figured out the Hessian and the gradient. Now I'm trying to understand the \Delta x -> x substitution in the objective function.

[screenshot: working for the \Delta x -> x substitution]
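For readers following along, the substitution in question can be written out as a plain change of variables: since x = x^{guess} + \Delta x with x^{guess} fixed, a QP stated in \Delta x can be restated in x (a sketch, using generic H and g for the QP Hessian and gradient):

```latex
\min_{\Delta x}\ \tfrac{1}{2}\,\Delta x^\top H\,\Delta x + g^\top \Delta x
\quad\Longleftrightarrow\quad
\min_{x}\ \tfrac{1}{2}\,x^\top H\,x
  + \left(g - H\,x^{\mathrm{guess}}\right)^\top x + \text{const},
\qquad \Delta x = x - x^{\mathrm{guess}}.
```

Expanding \tfrac{1}{2}(x - x^{\mathrm{guess}})^\top H (x - x^{\mathrm{guess}}) + g^\top (x - x^{\mathrm{guess}}) and dropping the constant terms recovers the right-hand side.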

Sorry for not replying; I am happy that you found the solution yourself.
I assume some of the confusion comes from how I formulated the cost here.
You can also think about it as j(x,u) = q_l e_lag(x)^2 + q_c e_cont(x)^2 - q_theta theta + cost(u) + cost(\Delta u).
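One standard way to see why this cost leads to a simple QP (a sketch, assuming the usual Gauss-Newton treatment of the squared error terms rather than an exact Hessian) is to linearize each error about the current guess \bar{x}:

```latex
e(x) \approx e(\bar{x}) + J(\bar{x})\,(x - \bar{x}),
\qquad J(\bar{x}) = \nabla e(\bar{x})^\top,
```

so that each squared term becomes quadratic in \Delta x = x - \bar{x}:

```latex
q\,e(x)^2 \approx
\Delta x^\top \underbrace{\left(q\,J^\top J\right)}_{\text{Hessian approx.}} \Delta x
+ 2\,q\,e(\bar{x})\,J\,\Delta x + \text{const}.
```

Under this approximation the Hessian q\,J^\top J is positive semidefinite by construction, which is what allows each subproblem to be handed to a convex QP solver such as HPIPM.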

Best,
Alex