StanfordASL/neural-network-lyapunov

Example training px4 Lyapunov


Hi, great repo :) I'm starting to work with some of the examples, in particular the px4 one. I have one question: in train_forward_model, is the "_sp" notation related to the setpoint or the speed?

Hello! That's right, _sp refers to the setpoint.

Thanks! I'm also new to PX4 and the drone world. Where can I read up to clearly understand how, and from where, to collect the data for this training?

I believe we never actually got around to collecting data for this specific example (it's not mentioned in the paper). If I recall correctly, though, the idea was to train a high-level position controller on top of the existing PX4 attitude controller. The attitude controller takes an attitude setpoint (roll, pitch, yaw) and a thrust setpoint, and uses its internal state estimate (attitude + rates) to produce the motor commands (with a cascaded PID). In theory you could try to learn a position controller that drives a position error (dx, dy and dz) to zero over time. I actually think this would work, we just didn't get around to doing it... The hint to this is the comment we wrote:

    # The forward model maps (roll[n], pitch[n], yaw[n],
    # roll_sp[n], pitch_sp[n], yaw_sp[n], thrust_sp[n]) to
    # (dx[n+1] - dx[n], dy[n+1] - dy[n], dz[n+1] - dz[n], roll[n+1] - roll[n],
    # pitch[n+1] - pitch[n], yaw[n+1] - yaw[n])

@hongkai-dai might remember this better than me though.
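As a rough illustration of that input/output mapping, here is a minimal sketch of what such a forward model could look like in PyTorch (the layer sizes, leaky-ReLU activation, and helper names below are assumptions for illustration, not the repo's actual model):

    import torch
    import torch.nn as nn

    # Sketch: a small feedforward network mapping the 7 inputs described in
    # the comment above (roll, pitch, yaw, roll_sp, pitch_sp, yaw_sp,
    # thrust_sp at step n) to the 6 per-step deltas
    # (dx, dy, dz, roll, pitch, yaw differences between step n+1 and step n).
    forward_model = nn.Sequential(
        nn.Linear(7, 32),
        nn.LeakyReLU(0.1),
        nn.Linear(32, 32),
        nn.LeakyReLU(0.1),
        nn.Linear(32, 6),
    )

    def predict_next_state(x_n, u_n):
        """
        x_n: tensor (dx, dy, dz, roll, pitch, yaw) at step n.
        u_n: tensor (roll_sp, pitch_sp, yaw_sp, thrust_sp) at step n.
        Returns the predicted state at step n+1 by adding the network's
        predicted delta to the current state.
        """
        # The network only sees the attitude part of the state plus the setpoints.
        net_input = torch.cat((x_n[3:], u_n))
        delta = forward_model(net_input)
        return x_n + delta

    # Example usage with dummy data.
    x0 = torch.zeros(6)
    u0 = torch.tensor([0.0, 0.0, 0.0, 0.5])
    x1 = predict_next_state(x0, u0)

Training would then amount to fitting this network to logged (state, setpoint, next state) samples, which is the data the earlier question asks about.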

As you said, we never really trained the dynamics model or the Lyapunov function for PX4.

Okay, thank you for all the replies :)

Hi, I'm re-opening this issue to ask a quick question: how can I test the quadrotor 3D simulation in pybullet? I'm not able to find the relevant code in the repository.
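For reference, and not as code from this repository, loading and stepping a quadrotor URDF in pybullet generically looks like the sketch below; the URDF path and the applied force are placeholders I have not verified against this repo:

    import pybullet as p
    import pybullet_data

    # Connect headless; use p.GUI instead of p.DIRECT for a visual window.
    physics_client = p.connect(p.DIRECT)
    p.setAdditionalSearchPath(pybullet_data.getDataPath())
    p.setGravity(0, 0, -9.81)
    p.setTimeStep(1.0 / 240.0)

    # Placeholder URDF path; swap in whichever quadrotor model you are using.
    quad = p.loadURDF("quadrotor.urdf", basePosition=[0, 0, 1])

    for _ in range(240):
        # Apply an external force along +z as a crude stand-in for rotor thrust.
        p.applyExternalForce(quad, -1, forceObj=[0, 0, 9.81],
                             posObj=[0, 0, 0], flags=p.LINK_FRAME)
        p.stepSimulation()

    position, orientation = p.getBasePositionAndOrientation(quad)
    p.disconnect()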