
CAJun: Continuous Adaptive Jumping using a Learned Centroidal Controller

This repository contains the code for the paper "CAJun: Continuous Adaptive Jumping using a Learned Centroidal Controller".

The main contents of this repository include:

  • The simulation environment and training code to reproduce the paper results.
  • The real-robot interface to deploy the trained policy to a real-world Go1 quadrupedal robot.
  • An IsaacGym implementation of the Centroidal QP Controller, which can be executed efficiently in parallel on the GPU.

Reproducing Paper Results

Set up the environment

First, make sure the environment is set up by following the steps in the Setup section.

Evaluating Policies

We provide both the pronking and bounding policies, as well as their end-to-end counterparts, in example_checkpoints. You can check them out by running:

python -m src.agents.ppo.eval --logdir=example_checkpoints/bound_cajun/ --num_envs=1 --use_gpu=False --show_gui=True --use_real_robot=False --save_traj=False

You can evaluate other policies by pointing --logdir to a different folder in example_checkpoints. Please check the Python file for all available command-line flags.

By default, eval.py alternates the jumping command between long and short jumps. This is specified on line 64 of src/agents/ppo/eval.py, where config.environment.jumping_distance_schedule is set to [1., 0.3] so that the desired jumping distance cycles between 1m and 0.3m. You can modify this list to create an arbitrary schedule.
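
For example, to cycle through three jump distances instead of two, you could change that line as follows (a hypothetical edit; the rest of eval.py stays unchanged):

# Hypothetical edit in src/agents/ppo/eval.py: cycle between
# 1m, 0.5m, and 0.3m jumps instead of the default two distances.
config.environment.jumping_distance_schedule = [1., 0.5, 0.3]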

Usage

Train Policies:

The environment and training configurations are stored in src/envs/configs and src/agents/ppo/configs respectively. To train CAJun and baseline policies for jumping, run the following:

  1. CAJun Policies

    Pronking:

    python -m src.agents.ppo.train --config=src/agents/ppo/configs/pronk.py --logdir=logs/

    Bounding:

    python -m src.agents.ppo.train --config=src/agents/ppo/configs/bound.py --logdir=logs/
  2. E2E Policies

    Pronking:

    python -m src.agents.ppo.train --config=src/agents/ppo/configs/pronk_e2e.py --logdir=logs/

    Bounding:

    python -m src.agents.ppo.train --config=src/agents/ppo/configs/bound_e2e.py --logdir=logs/
  3. CAJun-QP Policies

    To train CAJun policies that utilize the constraint-enabled QP solver (instead of the clipped version), override the CAJun config with the following flag:

    Pronking:

    python -m src.agents.ppo.train --config=src/agents/ppo/configs/pronk.py --logdir=logs/pronk_qp \
    --config.environment.use_full_qp=True

    Bounding:

    python -m src.agents.ppo.train --config=src/agents/ppo/configs/bound.py --logdir=logs/bound_qp \
    --config.environment.use_full_qp=True

Running Centroidal Controller with GPU acceleration

To directly run the low-level centroidal controller with GPU acceleration, run:

python -m src.controllers.centroidal_body_controller_example --show_gui=True --use_gpu=True

which executes a trotting gait in the simulator by following a heuristically designed centroidal trajectory.

  • You can choose the number of robots to simulate (--num_envs=X), switch between CPU and GPU (--use_gpu=True/False), and enable or disable the GUI (--show_gui=True/False) via command-line flags (see the example after this list).
  • Additionally, you can switch between simulation and the real robot (--use_real_robot=True/False). When running on the real robot, make sure to set --use_gpu=False, --show_gui=False and --num_envs=1 for best performance.
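
For example, to simulate 10 robots on the GPU with the GUI disabled:

python -m src.controllers.centroidal_body_controller_example --num_envs=10 --use_gpu=True --show_gui=False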

Dog Tracer

We provide a simple tool to visualize logged robot trajectories. When evaluating policies with src.agents.ppo.eval and setting --save_traj=True, the logged trajectory can be visualized in the dog_tracer web GUI.

To start dog_tracer, run:

python -m src.dog_tracer.dog_tracer

and load the trajectories from the UI.

Setup

Software

  1. Create a new virtual environment under Python 3.6, 3.7, or 3.8 (3.8 recommended).

  2. Install dependencies:

    pip install -r requirements.txt

    Note that the numpy version must be no later than 1.19.5 (already specified in requirements.txt) to avoid conflict with the Isaac Gym utility files.

  3. Download and install IsaacGym Preview 4:

    • Download IsaacGym from https://developer.nvidia.com/isaac-gym. Extract the downloaded file to the root folder.
    • cd isaacgym/python && pip install -e .
    • Try running an example: cd examples && python 1080_balls_of_solitude.py. The code is set to run on CPU, so don't worry if you see an error about the GPU not being utilized.
  4. Install rsl_rl (adapted PPO implementation)

    cd rsl_rl && pip install -e .
  5. Lastly, build and install the interface to Unitree's Go1 SDK. The Unitree repo has been releasing new SDK versions. For convenience, we have included the version that we used in third_party/unitree_legged_sdk.

    • First, make sure the required packages are installed, following Unitree's guide. Most notably, please make sure to install Boost and LCM:
    sudo apt install libboost-all-dev liblcm-dev
    • Then, go to third_party/go1_sdk and create a build folder:
    cd third_party/go1_sdk
    mkdir build && cd build

    Now, build the libraries and move them to the main directory by running:

    cmake ..
    make
    mv go1_interface* ../../..

Robot Setup

Follow these steps if you want to run policies on the real robot.

  1. Disable Unitree's default controller

    • By default, the Go1 robot enters sport mode and executes the default controller program at start-up. To avoid interference, make sure to disable Unitree's default controller before running any custom control code on the real robot.
    • You can disable the default controller temporarily by pressing L2+B on the remote controller once the robot stands up, or permanently (recommended) by renaming the controller executable on the robot computer with IP 192.168.123.161.
    • After disabling the default controller, the robot should not stand up and should stay in joint-damping mode.
  2. Setup correct permissions for non-sudo user

    Since the Unitree SDK requires memory locking and high process priority, root privileges (sudo) are usually required to execute commands. To run the SDK without sudo, write the following to /etc/security/limits.d/90-unitree.conf:

    <username> soft memlock unlimited
    <username> hard memlock unlimited
    <username> soft nice eip
    <username> hard nice eip

    Log out and log back in for the above changes to take effect.
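
    After logging back in, you can verify that the memlock limit took effect:

    ulimit -l   # should print "unlimited"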

  3. Connect to the real robot

    Connect your computer to the real robot using an Ethernet cable, and set the computer's IP address to 192.168.123.24 (or anything in the 192.168.123.X range that does not collide with the robot's existing IPs). Make sure you can ping/SSH into the robot's onboard computer (by default, unitree@192.168.123.12).
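
    For example, on Linux you can assign a static IP with the command below (the interface name eth0 is an assumption; substitute your own):

    sudo ip addr add 192.168.123.24/24 dev eth0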

  4. Test connection

    Start up the robot and make sure the robot is in joint-damping mode. Then, run the following:

    python -m src.robots.go1_robot_exercise_example --use_real_robot=True --use_gpu=False --num_envs=1

    The robot should move its body up and down following a preset trajectory. Terminate the script at any time to bring the robot back to joint-damping mode.

Code Structure

Simulation

The simulation infrastructure is mostly a lightweight wrapper around IsaacGym that supports parallel simulation of multiple robot instances:

  • src/robots/robot.py contains the general robot API.
  • src/robots/go1.py contains Go1-specific configurations.
  • src/robots/motors.py contains motor configurations.

Real Robot Interface

The real-robot infrastructure is mostly implemented in src/robots/go1_robot.py, which invokes the C++ interface via pybind11 to communicate with Unitree's SDK. In addition:

  • src/robots/go1_robot_state_estimator.py provides a simple Kalman-filter-based implementation to estimate the robot's speed (see the sketch below).
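
For intuition, here is a minimal, self-contained 1-D sketch of such a filter: it predicts the body velocity by integrating IMU acceleration, then corrects it with a velocity measurement derived from leg odometry. All names and noise values are illustrative assumptions, not the repository's actual implementation:

class SimpleVelocityKF:
    """Minimal 1-D Kalman filter for body-velocity estimation."""

    def __init__(self, accel_var=0.1, odom_var=0.05):
        self.v = 0.0                # estimated body velocity (m/s)
        self.p = 1.0                # variance of the estimate
        self.accel_var = accel_var  # process noise from the IMU
        self.odom_var = odom_var    # measurement noise of leg odometry

    def predict(self, accel, dt):
        # Propagate the velocity estimate with the measured acceleration.
        self.v += accel * dt
        self.p += self.accel_var * dt

    def update(self, v_odom):
        # Fuse the leg-odometry velocity measurement.
        k = self.p / (self.p + self.odom_var)  # Kalman gain
        self.v += k * (v_odom - self.v)
        self.p *= 1.0 - k

kf = SimpleVelocityKF()
kf.predict(accel=0.2, dt=0.002)  # IMU sample at 500 Hz
kf.update(v_odom=0.15)           # velocity from foot-contact kinematics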

Centroidal QP Controller

The Centroidal QP Controller is implemented in src/controllers:

  • src/controllers/phase_gait_generator.py implements the gait modulation for each leg.
  • src/controllers/qp_torque_optimizer.py implements the torque controller for stance legs (see the sketch after this list).
  • src/controllers/raibert_swing_leg_controller.py implements the position controller for swing legs.
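
To illustrate the idea behind the clipped variant of the stance-force solver (the default when --config.environment.use_full_qp is not set), here is a minimal batched PyTorch sketch: it maps a desired centroidal wrench to per-leg contact forces with an unconstrained least-squares solve, then clamps the result to respect unilateral contact and a box friction limit. All shapes, names, and limits below are illustrative assumptions, not the actual code in qp_torque_optimizer.py:

import torch

num_envs, num_legs = 4, 4
# Placeholder foot positions relative to the center of mass.
foot_pos = torch.randn(num_envs, num_legs, 3) * 0.2

def skew(r):
    # Batched skew-symmetric matrix so that skew(r) @ f == cross(r, f).
    zero = torch.zeros_like(r[..., 0])
    return torch.stack([
        torch.stack([zero, -r[..., 2], r[..., 1]], dim=-1),
        torch.stack([r[..., 2], zero, -r[..., 0]], dim=-1),
        torch.stack([-r[..., 1], r[..., 0], zero], dim=-1),
    ], dim=-2)

# A maps stacked contact forces to the net wrench: A @ f = [force; torque].
A = torch.zeros(num_envs, 6, 3 * num_legs)
for leg in range(num_legs):
    A[:, 0:3, 3 * leg:3 * leg + 3] = torch.eye(3)
    A[:, 3:6, 3 * leg:3 * leg + 3] = skew(foot_pos[:, leg])

# Desired centroidal wrench (e.g. from a PD law on the body pose plus
# gravity compensation); random placeholder here.
b = torch.randn(num_envs, 6)

# Unconstrained minimum-norm least-squares solve, batched over all envs.
f = (torch.linalg.pinv(A) @ b.unsqueeze(-1)).squeeze(-1)
f = f.view(num_envs, num_legs, 3)

# "Clipped" handling of constraints: enforce unilateral contact and a
# box friction limit by clamping instead of solving a constrained QP.
mu, fz_max = 0.6, 130.0
fz = f[..., 2].clamp(min=0.0, max=fz_max)       # normal force >= 0
f[..., 0] = f[..., 0].clamp(-mu * fz, mu * fz)  # tangential limits
f[..., 1] = f[..., 1].clamp(-mu * fz, mu * fz)
f[..., 2] = fz

The constraint-enabled path selected by --config.environment.use_full_qp=True would replace the clamping step with a proper constrained QP solve.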

Environments

The environment is implemented in src/envs/jump_env.py, and its configs can be found in src/envs/configs.

Acknowledgments

This repository is inspired by, and refactored from, the legged_gym repository. In addition, the PPO implementation is modified from rsl_rl. We thank the authors of these repos for their efforts.