Use tagged versions for reliable reproduction of the results.
- NEWS Initial release: Published after AAAI18.
- NEWS Updates on version 2: Mainly refactoring. The AAAI18 experiments still work.
- NEWS Updates on version 2.1: Version 2 was not really working; I finally had time to fix it. Added ZSAE experiments.
- NEWS Updates on version 2.2: Backported more minor improvements from the private repository.
- NEWS Updates on version 3.0: Updates for the DSAMA system. More refactoring was done on the learner code. We introduced `ama3-planner`, which takes a PDDL domain file.
- NEWS Updates on version 4.0: Updates for Cube-Space AE in IJCAI2020 paper.
- This is a version that can exploit the full potential of search heuristics (lower bounds in Branch-and-Bound), such as LM-cut or Bisimulation Merge-and-Shrink in Fast Downward.
- We added the tuned hyperparameters to the repository so that reproducing the experiments will be easy(ish).
- We included a script that generates 15-puzzle instances randomly sampled from states 14 or 21 steps away from the goal.
- We also included a script for Korf's 100 15-puzzle instances, but the accuracy was not sufficient on those problems, whose shortest path lengths are typically around 50. Problems that require deeper searches also require a more accurate model, because errors accumulate with each state transition.
- NEWS Updates on version 4.1:
  - Improved the installation procedure. Now I recommend a `conda`-based installation, which specifies the correct Keras + TF versions.
  - The repository is now reorganized so that the library code goes to the `latplan` directory and all other scripts remain in the root directory. During this migration I used the git-filter-repo utility, which rewrites the history. This may have broken the older tags; I will inspect the breakage and fix them soon.
- NEWS Updates on versions 4.1.1, 4.1.2: The project is loadable with Anaconda and is pip-installable (somewhat).
- NEWS Updates on version 4.1.3: Minor refactoring. We also released the trained weights. See Releases.
- NEWS Updates on version 5: Updates for Bidirectional Cube-Space AE in JAIR paper.
- AMA3+ Cube-Space AE: a revised version of the AMA3 Cube-Space AE (IJCAI20 version), now modeled as a sound generative model.
- AMA4+ Bidirectional Cube-Space AE: an extension of the Cube-Space AE which can learn both effects and preconditions.
This repository contains the source code of LatPlan, described in the following papers:
- Asai, Kajino, Fukunaga, Muise: 2021. Classical Planning in Deep Latent Space.
- Under review in JAIR https://arxiv.org/abs/XXX
- Asai, M; Muise, C.: 2020. Learning Neural-Symbolic Descriptive Planning Models via Cube-Space Priors: The Voyage Home (to STRIPS).
- Accepted in IJCAI-2020 (Accept ratio 12.6%). https://arxiv.org/abs/2004.12850
- Asai, M.: 2019. Neural-Symbolic Descriptive Action Model from Images: The Search for STRIPS.
- Asai, M.: 2019. Unsupervised Grounding of Plannable First-Order Logic Representation from Images (code available from https://github.com/guicho271828/latplan-fosae)
- Accepted in ICAPS-2019, Learning and Planning Track. https://arxiv.org/abs/1902.08093
- Asai, M.; Kajino, F: 2019. Towards Stable Symbol Grounding with Zero-Suppressed State AutoEncoder
- Accepted in ICAPS-2019, Learning and Planning Track. https://arxiv.org/abs/1903.11277
- Asai, M.; Fukunaga, A: 2018. Classical Planning in Deep Latent Space: Breaking the Subsymbolic-Symbolic Boundary.
- Accepted in AAAI-2018. https://arxiv.org/abs/1705.00154
- Asai, M.; Fukunaga, A: 2017. Classical Planning in Deep Latent Space: From Unlabeled Images to PDDL (and back).
- In Knowledge Engineering for Planning and Scheduling (KEPS) Workshop (ICAPS2017).
- In Cognitum Workshop at IJCAI-2017.
- In Neural-Symbolic Workshop 2017.
The system is built on Tensorflow 1.15. Since the official Tensorflow by Google (both the source code and the package) no longer supports 1.15, machines with recent GPUs require a port maintained by NVIDIA. Our installation script installs this port. The port only supports Linux, therefore we do not support OSX and Windows.
Also, the NVIDIA port requires LIBC 2.29 or later. If your system does not come with LIBC 2.29, obtain the source code from https://ftp.gnu.org/gnu/libc/, build it, and install it somewhere locally. Do not install it system-wide: libc is widely incompatible between versions, and a system-wide replacement causes a nightmare for inexperienced Linux users, who can no longer run normal commands such as `ls`, `rm`, or `cat`.
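To see whether a local glibc build is needed at all, you can first check the version your system ships. This is an illustrative check on a glibc-based Linux system, not part of the install scripts:

```shell
# Print the system glibc version; the NVIDIA port needs 2.29 or later.
ldd --version | head -n1
```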
The only shared object required by the NVIDIA port is `libm.so.6`. To run the program later, run `export LD_PRELOAD=/path/to/glibc-2.29/lib/libm.so.6` on the command line before launching python.
`anaconda` / `miniconda` (https://docs.conda.io/en/latest/miniconda.html) is a dependency management system that can install both python and non-python dependencies into a local environment. It is conceptually similar to docker, but it does not use virtualization or container infrastructure. We recommend `miniconda`, as it is smaller.
After the installation, run the following code:

```shell
conda config --add channels conda-forge
conda config --set channel_priority strict
conda env create -f environment.yml # This takes about 15-30 min. Conda does not show an informative progress indicator, so be patient
conda activate latplan
./install.sh
```
On Ubuntu, the prerequisites can be installed by running `./install-non-anaconda.sh`. It is not well-tested, so beware of hiccups. Note that this installs several libraries system-wide with `sudo`. Users of other distributions should inspect the file and find the equivalent packages in their respective systems.

```shell
./install-non-anaconda.sh
./install.sh
```

- `g++ cmake make python flex bison g++-multilib`: required for compiling Fast Downward.
- `git build-essential automake libcurl4-openssl-dev`: required for compiling [Roswell](http://roswell.github.io/). OSX users should use `brew install roswell`.
- `gnuplot`: for plotting.
- `parallel`: for running some scripts.
- `libmagic-dev`: for filetype detection used by the file processor. NOTE: you need the header source code in CPATH; installing the shared object library is not sufficient.
- `sqlite3`: version 3.35 or later is required. Used only for generating tables and figures.
- `libc` version 2.29 or later, because the NVIDIA port of tensorflow is compiled against it.
- python 3.8 (not later or older), because the NVIDIA port of tensorflow is compiled only against it.
- Common Lisp library dependencies (lines starting with `ros` in `./install.sh`): `ros dynamic-space-size=8000 install numcl arrival eazy-gnuplot magicffi dataloader`
- `python3-pip` for pip.
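As a quick, illustrative way to verify the two most version-sensitive prerequisites (not part of the official install scripts; depending on your system the interpreter may be named `python` rather than `python3`):

```shell
# Check python (the environment pins 3.8) and sqlite3 (needs >= 3.35).
python3 --version
sqlite3 --version 2>/dev/null || echo "sqlite3 not installed"
```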
Installing the latest version of Latplan via pip creates a runnable `latplan` script in `~/.local/bin`. The script is not usable for running the experiments (see the next section) because it has an empty hyperparameter. However, it has the same command line API as `train_common.py`, `train_kltune.py`, and so on, so it may help you understand the command line API of those scripts.
Next, customize the following files for your job scheduler before running. The job submission command is stored in a variable `$common`, which by default has a value like `jbsub -mem 32g -cores 1+1 -queue x86_24h`: the jobs are submitted to a queue with a 24-hour runtime limit, requesting 1 CPU, 1 GPU (`1+1`), and 32g of memory. You also need to uncomment the commands you want to run; by default, everything is commented out and nothing runs.
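For example, on a SLURM cluster the same resource request could be expressed by redefining `$common` roughly as follows. This is a hypothetical sketch; the exact flags and partition names depend on your site configuration:

```shell
# Hypothetical SLURM equivalent of the default jbsub line:
# 24-hour limit, 1 CPU, 1 GPU, and 32 GB of memory per job.
common="sbatch --time=24:00:00 --cpus-per-task=1 --gres=gpu:1 --mem=32G"
```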
```shell
# If you installed glibc locally
export LD_PRELOAD=/path/to/glibc-2.29/lib/libm.so.6

# Submit the jobs for training AMA3+ (Cube-Space AEs) and AMA4+ (Bidirectional Cube-Space AEs)
./train_propositional.sh

# Submit the jobs for converting the training results into PDDL files
./pddl-ama3.sh

# Copy the problem instances into a target directory.
problem-generators/copy propositional problem-instances-10min-0.0-1

# Edit run_ama3_all.sh to specify the appropriate target directory, then submit the jobs for planning.
# To reproduce the exact same experiments as in the paper,
# approximately 400 jobs are submitted. Each job requires 8 cores, no GPUs, and takes 6 hours maximum.
# Details can be customized for your compute environment.
./run_ama3_all.sh

# After the experiments, run this to generate the tables and figures.
# For details, read the source code.
make -C tables
```
- Library code
  - `latplan/main/*.py`: each file contains the code for loading the dataset and launching the training.
  - `latplan/model.py`: network definitions.
  - `latplan/mixins/*.py`: various mixin classes used to build a complex neural network.
  - `latplan/util/`: general-purpose utility functions for python code.
  - `latplan/puzzles/`: code for domain generators/validators.
    - `puzzles/*.py`: each file represents a domain.
    - `puzzles/model/*.py`: the core model (successor rules etc.) of each domain, disentangled from the images.
- Scripts
  - `strips.py`: (bad name!) the program that trains an SAE and writes the propositional encoding of states/transitions to a CSV file.
  - `ama1-planner.py`: Latplan using AMA1 (obsolete).
  - `ama2-planner.py`: Latplan using AMA2 (obsolete).
  - `ama3-planner.py`: Latplan using visual inputs (init, goal) and a PDDL domain file.
  - `run_ama{1,2,3}_all.sh`: runs all experiments.
- `helper/`: helper scripts for AMA1.
- `problem-generators/`: scripts for generating problem instances.
- `tests/`: test files, mostly unit tests for the domain generators/validators.
- `samples/`: where the learned results should go. The results of each SAE training run are stored in a subdirectory.
- `tables/`: code for storing experimental results into SQLITE and generating tables and figures.
- (git submodule) `planner-scripts/`: my personal scripts for invoking domain-independent planners, not just Fast Downward.