stack-of-tasks/tsid

TSID position-controlled robot: how to include the real robot state?


Hi, I am trying to implement a walking controller for a position-controlled robot in Gazebo.

It seems to work as long as I don't feed the real robot states (computed by Gazebo) into TSID and instead just keep integrating the virtual/commanded states.
If I instead compute the task errors from the real robot states and then integrate the accelerations into desired positions that are sent to the robot, the system quickly "explodes".

My question is: how would you take the floating-base estimate of the real robot into account if the formulation only works with the virtual state?

If I look at how mc_rtc implements its QPSolver::runClosedLoop() function, I can see that it uses the real robot state. It seems to:

  • store the previous control robot state (lines 471-472)
  • set the state of the robot model used to compute the dynamics to the observed state (lines 475-479)
  • compute the desired values for each QP task (lines 482-487)
  • solve the QP in solveNoMbcUpdate with the observed state
  • reset the robot model back to its control values (lines 495-496)
  • set the QP reference acceleration to the one obtained by the solver (line 499)
  • use Euler integration to update the control robot state (lines 500-503)

I tried to replicate the same in tsid and it looks something like this:

VectorX q_real(q_tsid_.size()), v_real(v_tsid_.size());

// get the measured state:
// q = [x, y, z, Qx, Qy, Qz, Qw, q_joints], v = [vx, vy, vz, wx, wy, wz, dq_joints]
q_real.head(7)       = robot_state.floatingBase().pos();
q_real.tail(nq_ - 7) = robot_state.joints().pos();
v_real.head(6)       = robot_state.floatingBase().vel();
v_real.tail(nv_ - 6) = robot_state.joints().vel();

// initialize the virtual state from the measured state on the first iteration
if (n == 0) {
  q_tsid_ = q_real;
  v_tsid_ = v_real;
}

// build the QP from the measured state
const auto& data_real = tsid_->computeProblemData(t_, q_real, v_real);

// solve and read back the desired accelerations
const auto& sol = solver_->solve(data_real);
a_sol_ = tsid_->getAccelerations(sol);

// integrate into the virtual state (integrate(q_in, v_in, a, dt, q_out, v_out))
robot_.integrate(q_tsid_, v_tsid_, a_sol_, dt, q_tsid_, v_tsid_);

// recompute the problem data with the new virtual state
tsid_->computeProblemData(t_, q_tsid_, v_tsid_);

// send q_tsid_ to the robot's position controller

Is there some obvious error in doing this? For the floating-base state, I am currently using the ground truth from Gazebo.
I tried the same in Python with PyBullet and observed instabilities with q_real, v_real. Again, it is fine when using the virtual states only.

If the real/estimated states can't enter the QP in this way, how would you include e.g. a floating-base estimate? By adapting the task references for the CoM and torso based on IMU readings instead, and keeping the QP state virtual?

Thanks a lot.

Hi @simon-armleder, sorry I am late on this. Applying TSID to a position-controlled robot is not trivial, as you have experienced.

The straightforward way to do it is not to feed back on the real state, but on the virtual state that you get by integrating the desired accelerations computed by the QP. This works, but has the major drawback that TSID itself does not account for any state feedback: the feedback only happens in the joint position controller, and it is limited to the joints (no floating base).
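In terms of the snippet in the question (reusing its member names tsid_, solver_, robot_, q_tsid_, v_tsid_, a_sol_, which are assumptions about the surrounding controller class), this open-loop scheme would look roughly like the sketch below; it is not the exact implementation:

// build the QP from the *virtual* state only
const auto& data = tsid_->computeProblemData(t_, q_tsid_, v_tsid_);

// solve and read back the desired accelerations
const auto& sol = solver_->solve(data);
a_sol_ = tsid_->getAccelerations(sol);

// integrate the virtual state with the desired accelerations
robot_.integrate(q_tsid_, v_tsid_, a_sol_, dt, q_tsid_, v_tsid_);

// send only the actuated part to the joint position controller;
// the measured state never enters the QP, feedback happens only in the low-level PID
Eigen::VectorXd q_cmd = q_tsid_.tail(nq_ - 7);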

If you want to plug the real state into TSID, you start having issues, because you are feeding back the state twice: once in TSID and once in the joint position controller. Things get really messy here: for instance, you are computing a desired acceleration a_sol_, which is associated with the real state, but then you use it to integrate starting from the virtual state q_tsid_. I am not aware of any clean solution to this problem. The only clean way to do things is to use joint torque control, but not all robots can do that. However, you can take a look at how people have worked around similar issues in the past, for instance by looking at how team IHMC did it at the DRC.

Hi

We did a lot of experiments with this problem, both with mc_rtc and TSID. In a few words, like Andrea said, you cannot feed back on the actual joint positions because your position controller (the PID boards of the robot) is already doing that. However, we managed to close the loop on the floating base alone (keeping the rest of the state at the "theoretical" values), and we can close the loop on the joint states when using torque control (and then we need to).
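In terms of the snippet from the question, "closing the loop on the floating base alone" could look roughly like the sketch below; the base pose/twist would come from an estimator on the real robot, while the joint part stays at the integrated values (the identifiers are taken from the question and are assumptions, not the code actually used):

// mix the estimated base state with the virtual joint state
Eigen::VectorXd q_qp = q_tsid_;                   // joints: integrated (virtual) values
Eigen::VectorXd v_qp = v_tsid_;
q_qp.head(7) = robot_state.floatingBase().pos();  // estimated base pose (x,y,z,Qx,Qy,Qz,Qw)
v_qp.head(6) = robot_state.floatingBase().vel();  // estimated base twist

const auto& data = tsid_->computeProblemData(t_, q_qp, v_qp);
const auto& sol  = solver_->solve(data);
a_sol_ = tsid_->getAccelerations(sol);

// the command is still obtained by integrating the virtual state
robot_.integrate(q_tsid_, v_tsid_, a_sol_, dt, q_tsid_, v_tsid_);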

Nevertheless, this is not how most position-controlled humanoid robots "close the loop": most robots (including ours: iCub and Talos) use an "open-loop" TSID controller (computing everything from the model values) and a stabilizer based on the force/torque sensors in the feet and/or the IMU. The stabilizer outputs corrections to the TSID controller (for example, changing the CoM reference; we have several strategies implemented).
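As an illustration only (this is not the iCub/Talos stabilizer, just one common flavour of such a correction), a CoM-admittance style update that shifts the CoM reference sent to TSID according to the ZMP error measured from the feet force/torque sensors could look like:

#include <Eigen/Dense>

// Shift the CoM reference according to the measured vs. desired ZMP.
// Gains and the sign convention are illustrative and depend on the
// stabilizer formulation; they have to be tuned on the robot.
Eigen::Vector3d stabilizeComReference(const Eigen::Vector3d& com_pattern,
                                      const Eigen::Vector2d& zmp_ref,
                                      const Eigen::Vector2d& zmp_meas,
                                      double dt)
{
  static Eigen::Vector2d offset = Eigen::Vector2d::Zero();  // persists across cycles
  const double k_zmp  = 0.05;  // admittance gain
  const double k_leak = 0.5;   // slowly drives the offset back to zero
  offset += dt * (k_zmp * (zmp_meas - zmp_ref) - k_leak * offset);

  Eigen::Vector3d com_ref = com_pattern;  // reference from the pattern generator
  com_ref.head<2>() += offset;            // corrected reference sent to TSID
  return com_ref;
}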

Hi @jbmouret and @andreadelprete

Thanks a lot for your helpful answers. I realized that feedback at the joint level doesn't make sense for a position-controlled robot.

Do you give feedback on the floating-base state in your "open-loop" TSID controller? At the moment I keep integrating even the virtual floating-base accelerations. Trying to feed in ground-truth information from Gazebo, or the output of a floating-base estimator, leads to instability.

Instead, what helped the real robot stick to the planned trajectories was to recompute the end-effector poses from the measured joint state, X_real = fk(q_real), and then integrate the error, X_ref = X_pattern + Kp ∫ (X_pattern − X_real) dt, to adapt the TSID references.
Or would you not put this tracker in Cartesian space but rather in joint space (after the QP), to deal with weak actuators/bad tracking?
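For reference, the position part of that integral correction could be sketched as follows (the frame index, the gain and the pinocchio data object for the measured configuration are assumptions; the orientation part would be handled analogously with log/exp maps):

#include <Eigen/Dense>
#include <pinocchio/algorithm/frames.hpp>

// x_err_int must persist across control cycles (one per tracked end-effector).
Eigen::Vector3d correctedReference(const pinocchio::Model& model,
                                   pinocchio::Data& data_meas,
                                   const Eigen::VectorXd& q_real,
                                   pinocchio::FrameIndex foot_id,
                                   const Eigen::Vector3d& x_pattern,
                                   Eigen::Vector3d& x_err_int,
                                   double dt)
{
  const double Kp = 1.0;  // integral gain, to be tuned
  // X_real = fk(q_real): forward kinematics on the measured configuration
  pinocchio::framesForwardKinematics(model, data_meas, q_real);
  const Eigen::Vector3d x_real = data_meas.oMf[foot_id].translation();
  // X_ref = X_pattern + Kp * \int (X_pattern - X_real) dt
  x_err_int += dt * (x_pattern - x_real);
  return x_pattern + Kp * x_err_int;
}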

I also implemented the three strategies mentioned here. Each of them again changes the references produced by the pattern generators. However, I don't think I fully understand their effect yet.

For example, during the single-support phase the support foot has an active 6-DoF contact constraint on the ground, but to implement the foot-admittance and foot-force-difference strategies, the orientation and height of the support foot need to change. So the constraint is updated every iteration by calling setReference on the contact-6d (a sketch of this update follows after the plot). If I don't do this, the robot falls to the inside. But now the virtual TSID support foot moves around instead of sticking to the ground (blue and green curves):
[figure: plots_right — foot tracking plots]
The offsets in the last plot come from the Cartesian tracker, which tries to pull the foot up onto the planned foot trajectory (orange curve, plot 1), and from the foot-force-difference controller during double support.
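For clarity, the per-iteration update mentioned above is essentially the following (contact_rf and H_support_ref are placeholders for the Contact6d of the support foot and the admittance-corrected pose):

#include <pinocchio/spatial/se3.hpp>
#include <tsid/contacts/contact-6d.hpp>

// Update the support-foot contact reference with the admittance-corrected pose.
// Moving this reference lets the admittance / force-difference strategies act,
// but it is also what makes the "virtual" support foot drift in the plots above.
void updateSupportContact(tsid::contacts::Contact6d& contact_rf,
                          const pinocchio::SE3& H_support_ref)
{
  contact_rf.setReference(H_support_ref);
}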

Another thing that seems to be very critical is the motion-task and contact-task gain Kp of the swing foot right after it lands on the ground. If I just keep the last reference after impact, which is slightly above the ground because of early impacts/rotation errors, the foot bounces off the ground. If I instead set the command slightly inside the ground (e.g. z = 0), it does not bounce after impact.
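A sketch of that touchdown handling (foot_in_contact from a contact detector and H_swing_ref, the SE3 reference fed to the swing-foot task, are illustrative names):

#include <pinocchio/spatial/se3.hpp>

// Once contact is detected, clamp the swing-foot reference to the ground plane
// instead of freezing the last (slightly too high) trajectory sample.
void clampSwingReferenceOnImpact(bool foot_in_contact,
                                 pinocchio::SE3& H_swing_ref,
                                 double ground_height = 0.0)
{
  if (foot_in_contact)
    H_swing_ref.translation().z() = ground_height;
}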

z_ref = 0 after impact
https://user-images.githubusercontent.com/33764700/160884561-bfdb3635-7827-472d-b341-2b2a50034881.mp4

z_ref = last height from foot trajectory
https://user-images.githubusercontent.com/33764700/160884793-88a7ad58-0f46-4311-bbd9-03ddfff9ea6c.mp4

Hi, a while ago I got it working in simulation and also on the real robot. I ended up not using the real floating-base state.

Two hacks were important for the real system:

  1. Adapting the end-effector references using the measured joint state, as described above.
  2. Applying small offsets to the groin and ankle joints after the QP computation (see the sketch below). This helped to reduce the large tracking errors that occur in those joints when shifting the weight from one leg to the other.
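A sketch of hack (2); the joint indices and offset magnitudes below are purely illustrative, they are robot-specific and have to be tuned:

#include <Eigen/Dense>
#include <utility>
#include <vector>

// Add small constant offsets to a few joints after the QP,
// just before sending the position command.
void applyJointOffsets(Eigen::VectorXd& q_cmd)
{
  const std::vector<std::pair<int, double>> offsets = {
      {2,  0.010},   // e.g. a groin/hip joint
      {5, -0.005}    // e.g. an ankle joint
  };
  for (const auto& [idx, offset] : offsets)
    q_cmd[idx] += offset;
}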