This repository contains code for learning for task and motion planning in a 2D kitchen. It accompanies the paper Active model learning and diverse action sampling for task and motion planning. Our project page is here. If you have any questions, please create an issue here and remind me via email (wangzi@google.com) if there is no reply within a week.
We developed our code building upon several existing packages:
- pybox2d, a 2D physics engine for Python based on the Box2D library;
- GPy, a Gaussian process framework in Python;
- motion-planners, which provides basic motion planning algorithms such as rapidly-exploring random trees;
- pddlstream, a lightweight implementation of STRIPStream, which builds upon the Fast Downward planner;
- numpy, version 1.13.3 or higher;
- scipy, version 0.19.1 or higher;
- sklearn, version 0.18.1 or higher.
In particular, motion-planners and pddlstream are included as submodules in this repository.
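pybox2d provides the physics layer on which the 2D kitchen is simulated. For orientation only, the sketch below is a minimal adaptation of pybox2d's standard hello-world example (independent of Kitchen2D's own wrappers): it drops a box onto the ground and steps the simulation.

```python
# Minimal standalone pybox2d sketch (adapted from pybox2d's hello-world example,
# not Kitchen2D code): a box drops onto the ground in a 2D world.
from Box2D import b2World, b2PolygonShape

world = b2World(gravity=(0, -10), doSleep=True)
ground = world.CreateStaticBody(position=(0, -1),
                                shapes=b2PolygonShape(box=(50, 1)))   # static ground slab
box = world.CreateDynamicBody(position=(0, 4))                        # a falling box
box.CreatePolygonFixture(box=(0.5, 0.5), density=1.0, friction=0.3)

for _ in range(60):                   # simulate one second at 60 Hz
    world.Step(1.0 / 60, 6, 2)        # time step, velocity iterations, position iterations
print(box.position)                   # the box should have come to rest on the ground
```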
We tested our code with Python 2.7.6 on Ubuntu 14.04 LTS (64-bit) and Mac OS X. To install pybox2d, GPy and Fast Downward, follow the steps below.
- Install numpy, scipy and sklearn, following the instructions here and here.
- To run the learning examples, follow the instructions here to install GPy.
- To run the planning examples, follow the instructions here to obtain Fast Downward.
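Once everything is installed, a quick sanity check like the sketch below (not part of the repository) confirms that the Python dependencies are importable and meet the minimum versions listed above:

```python
# Quick dependency check; the version numbers refer to the minimums listed above.
import numpy
import scipy
import sklearn
import GPy   # required for the learning examples

print('numpy   ' + numpy.__version__)     # expect >= 1.13.3
print('scipy   ' + scipy.__version__)     # expect >= 0.19.1
print('sklearn ' + sklearn.__version__)   # expect >= 0.18.1
print('GPy     ' + GPy.__version__)
```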
Once you confirm the system requirements are satisfied, make a copy of this repository with your favorite method, e.g.
git clone git@github.com:YOUR_USERNAME/Kitchen2D.git
cd Kitchen2D
Initialize and update the submodules by
git submodule init
git submodule update
Now you should be able to run the examples below.
An example of using the primitives is in primitive_example.py. Try
python primitive_example.py
We show an example of both learning and sampling the scooping action in learn_example.py. We take an active learning approach to learning the feasible regions of the pre-conditions of the primitives. In order to plan with the learned pre-conditions, we need to be able to sample from their feasible regions. The detailed algorithm and setup we used can be found in Section IV.A of the accompanying paper. Try
python learn_example.py
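The sketch below is only a schematic illustration of this idea on a toy one-dimensional problem, not the implementation in learn_example.py. It fits a GPy regression model to noisy scores of an action parameter and chooses the next query with a straddle-style acquisition (a common choice for level-set estimation) that prefers points near the feasibility boundary with high posterior uncertainty.

```python
# Schematic sketch only (toy 1D problem, not the repository's implementation):
# actively learn the region where a noisy score is positive, i.e. where the action succeeds.
import numpy as np
import GPy

def score(x):
    # Toy ground-truth score of an action parameter; positive means feasible.
    return np.sin(3 * x) - 0.2 + 0.05 * np.random.randn(*x.shape)

def straddle(mean, var, kappa=1.96):
    # Large for points that are uncertain and close to the zero level set.
    return kappa * np.sqrt(var) - np.abs(mean)

X = np.random.uniform(-1, 1, (5, 1))                 # initial random queries
Y = score(X)
for _ in range(20):                                  # active-learning loop
    model = GPy.models.GPRegression(X, Y, GPy.kern.RBF(input_dim=1))
    model.optimize()
    candidates = np.random.uniform(-1, 1, (200, 1))
    mean, var = model.predict(candidates)
    x_next = candidates[np.argmax(straddle(mean, var))].reshape(1, 1)
    X, Y = np.vstack([X, x_next]), np.vstack([Y, score(x_next)])

mean, _ = model.predict(candidates)
print('%d of %d candidates predicted feasible' % ((mean > 0).sum(), len(candidates)))
```

The full procedure in the paper additionally samples diverse parameters from the learned region (Section IV.A); learn_example.py shows this for scooping.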
plan_example.py is an example of planning with learned pouring and scooping actions. We use STRIPStream as the backend planner. The goal of the task in plan_example.py is to “serve” a cup of coffee with cream and sugar by placing it on the green coaster near the edge of the table. Click here for videos of plans. Try
python plan_example.py
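Conceptually, a learned pre-condition model is exposed to the planner as a stream: a generator that lazily yields action parameters predicted to be feasible. The sketch below illustrates only this idea with hypothetical names and a toy stand-in model; it is not pddlstream's actual API, and the paper's diverse sampling strategy (Section IV.A) replaces the naive rejection step shown here.

```python
# Conceptual illustration with hypothetical names (not pddlstream's actual API):
# wrap a learned feasibility model as a generator of promising pouring parameters.
import numpy as np

def make_pour_stream(is_feasible, low, high, max_attempts=1000):
    """Lazily yield pouring parameters the learned model predicts to be feasible."""
    def stream():
        for _ in range(max_attempts):
            params = np.random.uniform(low, high)    # propose parameters
            if is_feasible(params):                  # learned pre-condition check
                yield tuple(params)
    return stream

# Toy stand-in for a learned model: accept parameters inside a narrow band.
toy_model = lambda p: abs(p[0] - 0.5) < 0.1
for params in make_pour_stream(toy_model, low=[0.0, 0.0], high=[1.0, 1.0])():
    print('candidate pour parameters: %s' % (params,))
    break
```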
Please cite our work if you would like to use the code.
@inproceedings{wangIROS2018,
author={Zi Wang and Caelan Reed Garrett and Leslie Pack Kaelbling and Tomas Lozano-Perez},
title={Active model learning and diverse action sampling for task and motion planning},
booktitle={International Conference on Intelligent Robots and Systems (IROS)},
year={2018},
url={http://lis.csail.mit.edu/pubs/wang-iros18.pdf}
}
- Active model learning and diverse action sampling for task and motion planning (Zi Wang, Caelan Reed Garrett, Leslie Pack Kaelbling and Tomas Lozano-Perez), In International Conference on Intelligent Robots and Systems (IROS), 2018.