# SEER - Combining Analytic Control with Learning to Create a Stabilizing Controller that Works in Reality

Built by Damjan Denic, Martin Graf, Nav Leelarathna, and Sepand Dyanatkar
- Clone the repository including its submodules:
  ```shell
  git clone --recurse-submodules git@github.com:Paralelopipet/SEER.git
  ```
- Pretrained weights are available here. If you want to use them, place them in the folder `pybullet_multigoal_implementation/drl_implementation/examples`.
- To set up natively:
  - Create the conda environment (note: this will take a while, 3 to 4 coffees):
    ```shell
    conda env create -f environment.yml
    ```
  - Activate the environment:
    ```shell
    conda activate l32_seer
    ```
  - Install our gym package:
    ```shell
    pip install --editable pybullet_multigoal_gym
    ```
  - Install our RL package:
    ```shell
    pip install --editable pybullet_multigoal_implementation
    ```
  - Install the seer package:
    ```shell
    pip install --editable .
    ```
- To set up with Docker:
  - Run
    ```shell
    docker build -f evaluate.Dockerfile -t seer-evaluate .
    ```
  - This needs to be repeated whenever any files have changed.
- To run the tests, run
  ```shell
  pytest
  ```
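Pytest discovers functions whose names start with `test_` in files named `test_*.py`. A minimal, purely illustrative example of the kind of check it picks up (a toy helper, not part of the actual SEER test suite):

```python
# test_example.py -- illustrative only; not an actual SEER test.

def stabilized(error: float, tolerance: float = 0.05) -> bool:
    """Toy helper: treat a controller as stabilized when the
    tracking error is within tolerance."""
    return abs(error) <= tolerance

def test_stabilized_within_tolerance():
    assert stabilized(0.01)

def test_not_stabilized_outside_tolerance():
    assert not stabilized(0.5)
```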
- To train natively, run
  ```shell
  python seer/train_and_eval_configs/config_runner.py --config seer.train_and_eval_configs.rl_training.<scenario to train>
  ```
- To train using Docker, run
  ```shell
  docker run --memory=6g --cpus=4 --mount "type=bind,source=$PWD/pybullet_multigoal_implementation/drl_implementation/examples,target=/root/pybullet_multigoal_implementation/drl_implementation/examples" -it seer-evaluate --config seer.train_and_eval_configs.rl_training.<scenario to train>
  ```
  - In CMD, replace `$PWD` with the absolute path of this directory.
- In both cases, replace `<scenario to train>` with the name of the Python file containing the scenario you want to train (e.g., `rl_config_train_basic`).
- Weights are saved to the `pybullet_multigoal_implementation/drl_implementation/examples` folder.
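The `--config` argument above is a dotted Python module path. As an illustration of the mechanism (a sketch only; the actual internals of `config_runner.py` are not shown here), such a path is typically resolved with `importlib`:

```python
import importlib
from types import ModuleType

def load_config_module(dotted_path: str) -> ModuleType:
    """Import a scenario config module by its dotted path, e.g.
    'seer.train_and_eval_configs.rl_training.rl_config_train_basic'.
    (Hypothetical sketch; the real config_runner.py may differ.)"""
    return importlib.import_module(dotted_path)

# Demonstrated with a standard-library module, since the seer package
# is only importable after the setup steps above:
mod = load_config_module("json")
print(mod.__name__)  # -> json
```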
- To evaluate natively, run
  ```shell
  python seer/train_and_eval_configs/config_runner.py --config seer.train_and_eval_configs.<scenario to evaluate>
  ```
- To evaluate using Docker, run
  ```shell
  docker run --memory=4g --cpus=2 --mount "type=bind,source=$PWD/pybullet_multigoal_implementation/drl_implementation/examples,target=/root/pybullet_multigoal_implementation/drl_implementation/examples" -it seer-evaluate --config seer.train_and_eval_configs.<scenario to evaluate>
  ```
  - In CMD, replace `$PWD` with the absolute path of this directory.
- In both cases, replace `<scenario to evaluate>` with the partial package name of the Python file containing the scenario you want to evaluate (e.g., `rl_eval.basic.rl_config_eval_basic` or `baseline.baseline_config_noisy_slope`).
- If evaluating the reinforcement learning solution, the weights need to be present in the `pybullet_multigoal_implementation/drl_implementation/examples` folder.
- Press `F5` (launch configuration `RL Trainer`), or try out the other launch configurations in `.vscode/launch.json`.
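For orientation, a VS Code launch configuration for the native training command above could look like the following (a hypothetical sketch; the actual entries are in `.vscode/launch.json`):

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            // Hypothetical entry; see .vscode/launch.json for the real ones.
            "name": "RL Trainer",
            "type": "python",
            "request": "launch",
            "program": "${workspaceFolder}/seer/train_and_eval_configs/config_runner.py",
            "args": [
                "--config",
                "seer.train_and_eval_configs.rl_training.rl_config_train_basic"
            ]
        }
    ]
}
```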