Install MuJoCo if you have not already:
- Obtain a license on the MuJoCo website.
- Download the MuJoCo binaries.
- Unzip the downloaded archive into ~/.mujoco/mujoco200 and place your license key file mjkey.txt at ~/.mujoco.
- Use the env variables MUJOCO_PY_MJKEY_PATH and MUJOCO_PY_MUJOCO_PATH to specify the MuJoCo license key path and the MuJoCo directory path, respectively.
- Append the MuJoCo bin subdirectory path to the env variable LD_LIBRARY_PATH.
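The environment variables above can be set in your shell startup file; a minimal sketch, assuming the default ~/.mujoco layout described above:

```shell
# Point mujoco-py at the license key and the MuJoCo 2.0 directory.
export MUJOCO_PY_MJKEY_PATH="$HOME/.mujoco/mjkey.txt"
export MUJOCO_PY_MUJOCO_PATH="$HOME/.mujoco/mujoco200"
# Make the MuJoCo shared libraries visible to the dynamic linker.
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$HOME/.mujoco/mujoco200/bin"
```

Add these lines to ~/.bashrc (or your shell's equivalent) so they take effect in every session.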
Install the following libraries:
sudo apt update
sudo apt install libosmesa6-dev libgl1-mesa-glx libglfw3
Install dependencies:
conda env create -f conda_env.yml
conda activate arl116
Install EARL benchmark (optional, but necessary if you want to use the codebase directly):
git clone https://github.com/architsharma97/earl_benchmark.git
cd earl_benchmark/
pip install -e .
Download and unzip the demos folder locally. The folder 'vision_demos' should now contain all the vision demos necessary to run experiments. Now, train an autonomous RL agent using MEDAL:
python3 medalplusplus.py
To run on an actual robotic platform, refer to the README within the iris_robots folder.
Monitor results:
tensorboard --logdir exp_local
To reproduce the robot experiments, run franka/medal_franka.py and follow the instructions under the iris_robots submodule. To fetch the latest version of our robot environment and the corresponding instructions, run:
git clone --recurse-submodules https://github.com/ahmeda14960/iris_robots.git
You will need to follow the demo-collection instructions tailored to your scene setup, then move the resulting forward/backward demo files into the appropriate task subfolder under the franka_demos folder.
The codebase is built on top of the PyTorch implementation of DrQ-v2; the original codebase is linked here. We thank the authors for a codebase that is easy to work with!