Welcome to the NeuTouch Summer School force control tutorials with TIAGo. This README contains installation instructions and task descriptions for the two tasks in this tutorial. Note that for the first task, you can choose between using Docker or installing the TIAGo simulation manually. We recommend the manual installation, as it is the only way to run the second exercise.
In this first task, the goal is to implement a controller that reaches and maintains a given (but variable) target force. As the robot is either position or velocity controlled, you need to think about how to transform force deltas into position deltas. A code skeleton can be found in `force_control.py`, where the TIAGo gripper performs a closing movement.
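A minimal sketch of such a mapping is a proportional controller that scales the force error into a small position delta. The gain, clamp value, and function name below are illustrative assumptions and are not part of the provided skeleton:

```python
# Minimal sketch of a proportional force controller. The gain, clamp value and
# function name are illustrative, not taken from force_control.py: the force
# error is scaled by a small gain to obtain a per-step position delta.

K_P = 1e-4          # assumed gain mapping force error [N] to position delta [m]; tune in simulation
MAX_DELTA = 5e-4    # clamp the per-step position change to keep the motion stable

def force_to_position_delta(f_target: float, f_measured: float) -> float:
    """Convert a force error into a clamped position delta."""
    delta = K_P * (f_target - f_measured)
    return max(-MAX_DELTA, min(MAX_DELTA, delta))

# Example: measured force below the target -> positive correction. How the sign
# maps onto opening/closing the fingers depends on the simulation's joint convention.
print(force_to_position_delta(f_target=5.0, f_measured=3.0))
```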
So far, using Docker has only been tested successfully on Linux (Ubuntu 18.04). Setting up X-forwarding under macOS can be tricky, so we recommend the manual installation in that case.
- Clone the repository and change into the directory:
git clone https://github.com/llach/tiago_summer_school && cd tiago_summer_school
- Build the Docker image using:
docker build -t force_control .
- Run `force_control.py` inside the Docker container like so:
docker run --device /dev/dri -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix -v $PWD/force_control.py:/main.py force_control
The script will launch two different GUIs (a pyBullet visualization and live plots of interesting variables), which is why we need to set the `DISPLAY` environment variable, forward `/dev/dri`, and forward the X11 socket. Additionally, we mount `force_control.py` as a volume. This allows us to edit the Python file locally and execute it with `docker run` without rebuilding the container.
In case your IDE does not auto-complete `tiago_rl`-related code, you need to install the package manually as well.
The manual installation is fairly simple, too. We recommend using pyenv and setting up an environment with Python 3.9.6.
- Follow the installation instructions of `tiago_rl`
- Clone the repository and change into the directory:
git clone https://github.com/llach/tiago_summer_school && cd tiago_summer_school
- Run the script:
python force_control.py
You can also use the TIAGo simulation environments to train reinforcement learning algorithms to perform force control. We recommend using established and tested RL repositories for this, but you are not restricted to a particular project, as the environments follow the widely adopted OpenAI gym conventions. In this example we use `stable-baselines3`, but feel free to experiment with other projects as well; a minimal training sketch is shown after the steps below.
- Follow the installation instructions of `tiago_rl`
- Install `stable-baselines3`:
pip install -U stable-baselines3
- Clone the repository and change into the directory:
git clone https://github.com/llach/tiago_summer_school && cd tiago_summer_school
- Run the script:
python learn_control.py
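Because the environments follow the gym API, any gym-compatible RL library should work. The snippet below is a minimal sketch using `stable-baselines3`'s PPO; the environment id is only a placeholder, as the actual TIAGo environment comes from `tiago_rl` (see `learn_control.py` for the one used in the tutorial):

```python
# Minimal RL training sketch with stable-baselines3 (PPO).
# NOTE: "Pendulum-v1" is a placeholder -- swap in the gym environment provided
# by tiago_rl. Depending on your stable-baselines3 version, you may need the
# `gymnasium` package instead of `gym`.
import gym
from stable_baselines3 import PPO

env = gym.make("Pendulum-v1")               # placeholder gym environment
model = PPO("MlpPolicy", env, verbose=1)    # simple MLP policy
model.learn(total_timesteps=100_000)        # train for a fixed number of steps
model.save("ppo_force_control")             # store the trained policy
```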
Q: The fingers are sliding into the object. Is that normal?
A: Yes. We have changed the object's stiffness to be lower than usual to simulate a deformable object. As a result, the fingers are allowed to penetrate the object.
Q: How are TIAGo's fingers controlled?
A: Each finger is controlled individually, both on the real robot and in simulation. Their maximum position when fully open is 0.045 (the finger's distance from the gripper's center in meters, i.e. 4.5 cm), and their fully closed position is 0.0.
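As a small illustration, commanded finger positions should stay within this range; the helper below uses our own naming and is not part of the provided code:

```python
# Illustrative helper (not from force_control.py): clamp commanded finger
# positions to the gripper's joint limits [0.0, 0.045] (fully closed / fully open).
FINGER_CLOSED, FINGER_OPEN = 0.0, 0.045

def clamp_finger_position(q_des: float) -> float:
    return min(FINGER_OPEN, max(FINGER_CLOSED, q_des))

print(clamp_finger_position(0.06))  # -> 0.045
```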