fdevinc/ungar

Incorrect solution in the first iteration


I was very glad to try Ungar to learn NMPC; it is an excellent NMPC solver for researchers and learners. However, I have encountered an issue: the solution in the first iteration consistently fails to meet the expected criteria. I suspect this is related to some initial value settings, but despite my attempts I have been unable to resolve it. Could you help me identify a solution to this problem? I would greatly appreciate it. Comparing the solution from the first iteration with those from subsequent iterations, it is evident that the first iteration does not meet the desired expectations.
[image: solver output comparing the first iteration with subsequent iterations]

If this issue cannot be resolved, it might be necessary to run the solver twice, which would slow down computation; that is not our desired outcome.

Hi @passer-by-Wang, thank you for your inquiry and for giving Ungar a try! From the output you shared, I assume you are tweaking the quadruped MPC example. As stated in another post, the provided examples are meant to illustrate the capabilities of the library and are not supposed to be used in production code. Specifically, the way I treat reference footholds in the quadrupedal locomotion implementation is very naive and easily creates conflicts with base tracking objectives. Also, there might be minor errors in the initialization of the various trajectories -- I have just found one.

Usually, nonlinear MPC controllers adopt a real-time iteration (RTI) scheme whereby each iteration is warm-started with the solution from the previous iteration shifted by one time step (a minimal sketch of this shift step is given after the list below). If the previous solution was optimal, then we can expect its shifted version to be close to optimal for the current iteration, and a nonlinear solver should not take many iterations to converge (ideally, only one). Thus, to guarantee that the RTI approach is effective, you should make sure that the solver is sufficiently "warm". To do so, you have multiple options:

  1. Running the solver for a large number of iterations when the controller starts:
...
// Define OCP optimizer.
SoftSQPOptimizer initialOptimizer{false, variables_.Get(step_size), 124_idx, 24.0, 1e-1};
SoftSQPOptimizer optimizer{false, variables_.Get(step_size), 4_idx, 24.0, 1e-1};
...
        // Solve OCP and log optimization results.
        if (!time) {
            variables_.Get(decision_variables) = initialOptimizer.Optimize(ocp, variables_.Get());
        } else {
            variables_.Get(decision_variables) = optimizer.Optimize(ocp, variables_.Get());
        }
...
  2. Setting the initial gait of the robot to stance: from your output, it looks like you switch to a different gait after 0.025 seconds, which might not be ideal for the solver. You may consider keeping a stance gait for some time before starting more dynamic gaits.
  3. Tweaking the objective function and the constraints: for example, linearizing the friction constraints greatly reduces the nonlinearities without noticeably affecting the capabilities of the robot (a sketch of this linearization is given after this list). You may also consider adding cost weights to the Ungar variable Rho.
  4. Changing the solver parameters, especially the stiffness and the epsilon: usually, the larger the stiffness and/or the epsilon, the more difficult convergence becomes.
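To make the warm-starting idea above more concrete, here is a minimal, library-agnostic sketch of the RTI shift step. The `StageVariables` struct and the `ShiftSolution` helper below are hypothetical placeholders, not part of Ungar's API:

```cpp
#include <cassert>
#include <vector>

// Hypothetical container for the decision variables of one stage of the horizon
// (e.g., the robot state and the contact forces at a single time step).
struct StageVariables {
    std::vector<double> values;
};

// Build the warm start for the next MPC iteration by shifting the previous
// solution forward by one time step and duplicating the terminal stage.
std::vector<StageVariables> ShiftSolution(const std::vector<StageVariables>& previousSolution) {
    assert(!previousSolution.empty());
    std::vector<StageVariables> warmStart(previousSolution.begin() + 1, previousSolution.end());
    // Repeating the last stage is the simplest choice; rolling out the model for
    // one extra step with the terminal input is a common alternative.
    warmStart.push_back(previousSolution.back());
    return warmStart;
}
```

In the snippet from option 1, such a shifted guess would play the role of the decision variables handed to the next `Optimize` call.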
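As for the friction constraints mentioned in option 3, a typical linearization replaces the second-order friction cone with an inscribed pyramid. Assuming the contact force (f_x, f_y, f_z) is expressed in a frame whose z-axis is the contact normal and μ is the friction coefficient, the idea is roughly:

```latex
% Nonlinear friction cone (second-order cone constraint):
\sqrt{f_x^2 + f_y^2} \le \mu f_z, \qquad f_z \ge 0.
% Inscribed friction pyramid (linear inequalities that imply the cone above):
|f_x| \le \frac{\mu}{\sqrt{2}} f_z, \qquad |f_y| \le \frac{\mu}{\sqrt{2}} f_z, \qquad f_z \ge 0.
```

The pyramid is slightly conservative since it is inscribed in the cone, but it turns the nonlinear constraint into plain linear inequalities, which the SQP iterations handle much more easily.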

Once again, I would not be surprised if there were other issues hindering proper convergence in the code. Ideally, the provided examples should be treated as starting points rather than complete controllers. I hope this helps, and feel free to reach out anytime if you have additional concerns!