Neural-Probabilistic Answer Set Programming
Arseny Skryagin, Wolfgang Stammer, Daniel Ochs, Devendra Singh Dhami, Kristian Kersting
Abstract: The goal of combining the robustness of neural networks and the expressivity of symbolic methods has rekindled the interest in Neuro-Symbolic AI. One specifically interesting branch of research is deep probabilistic programming languages (DPPLs), which carry out probabilistic logical programming via the probability estimations of deep neural networks. However, recent state-of-the-art (SOTA) DPPL approaches allow only for limited conditional probabilistic queries and do not offer the power of true joint probability estimation. In our work, we propose an easy integration of tractable probabilistic inference within a DPPL. To this end, we introduce SLASH, a novel DPPL that consists of Neural-Probabilistic Predicates (NPPs) and a logical program, united via answer set programming. NPPs are a novel design principle allowing all deep model types, and combinations thereof, to be represented as a single probabilistic predicate. In this context, we introduce a novel
$+/-$ notation for answering various types of probabilistic queries by adjusting the atom notations of a predicate. We evaluate SLASH on the benchmark task of MNIST addition as well as novel tasks for DPPLs such as missing data prediction, generative learning and set prediction with state-of-the-art performance, thereby showing the effectiveness and generality of our method.
This is the repository for SLASH, the deep declarative probabilistic programming language introduced within Neural-Probabilistic Answer Set Programming. Please see SLASH_Supplementary_Materials.pdf for additional information including detailed proofs and experimental details.
To clone SLASH including all submodules use:
git clone --recurse-submodules -j8 https://github.com/askrix/SLASH
The environment.yml file provides all packages needed to run a SLASH program using the Anaconda package manager. To create a new conda environment, install Anaconda (installation page) and then run the following commands:
conda env create -n slash -f environment.yml
conda activate slash
pip install git+https://github.com/ildoonet/pytorch-gradual-warmup-lr.git
The following packages are installed:
- pytorch
- clingo #version 5.5.1
- scikit-image
- scikit-learn
- seaborn
- tqdm
- rtpt
- tensorboard #only needed for standalone slot attention
- torchsummary
- GradualWarmupScheduler
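As a quick sanity check after installation, you can verify that the core packages resolve in the activated environment. This is a hedged sketch, not part of the repository; the module names below are taken from the package list above:

```python
# Sanity check: verify that the key packages from the environment
# can be found before running a SLASH program. Module names follow
# the package list in this README (e.g. scikit-image imports as
# "skimage", scikit-learn as "sklearn").
import importlib.util

def is_installed(module_name):
    """Return True if `module_name` can be imported in this environment."""
    return importlib.util.find_spec(module_name) is not None

for module in ["torch", "clingo", "skimage", "sklearn", "seaborn", "tqdm"]:
    status = "ok" if is_installed(module) else "MISSING"
    print(f"{module}: {status}")
```

If any line prints `MISSING`, the corresponding package was not installed into the active environment.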
To use virtualenv, run the following commands; they create a new virtual environment and install all required packages. Tested with Python 3.6. To use CUDA 10.x with PyTorch, remove the +cu113 version suffix in the requirements.txt file; otherwise CUDA version 11.3 will be used.
python3 -m venv slash_env
source slash_env/bin/activate
pip install --upgrade pip
python3 -m pip install -r requirements.txt -f https://download.pytorch.org/whl/torch_stable.html
Alternatively, you can run SLASH using Docker. For this, first build an image named slash from the provided Dockerfile, then run a container using that image. In the SLASH base folder, execute:
docker build . -t slash:0.01
docker run --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=<GPU-ID> --ipc=host -it --rm -v /$(pwd):/splpmln slash:0.01
- data: contains the datasets
- results: contains all exported results for TensorBoard
- src: source files, with its own README
- asp_playground.ipynb: notebook explaining how we use ASP for training slot attention
Visit the src/experiments/mnist_addition folder to get familiar with SLASH.
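The MNIST addition benchmark asks a model to predict the sum of two digit images. As a rough, repository-independent illustration of the kind of joint probabilistic reasoning involved (plain Python, not SLASH's actual API), the distribution over the sum can be derived from the two per-digit distributions a neural predicate would output:

```python
# Illustrative sketch only: given predicted probability distributions
# over two digits (e.g. from a neural predicate), compute the implied
# distribution over their sum, as in the MNIST-addition benchmark.
# This is plain Python, not SLASH syntax.

def sum_distribution(p_digit1, p_digit2):
    """p_digit1, p_digit2: lists of 10 probabilities for digits 0..9.
    Returns a list of 19 probabilities for sums 0..18."""
    p_sum = [0.0] * 19
    for d1, p1 in enumerate(p_digit1):
        for d2, p2 in enumerate(p_digit2):
            # Every digit pair (d1, d2) contributes to the sum d1 + d2.
            p_sum[d1 + d2] += p1 * p2
    return p_sum

# Example: both networks are certain the image shows a 3,
# so the sum is 6 with probability 1.
certain_three = [0.0] * 10
certain_three[3] = 1.0
print(sum_distribution(certain_three, certain_three)[6])  # 1.0
```

A DPPL expresses this enumeration declaratively in the logic program instead of hand-coding it, with the digit distributions supplied by neural networks.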
If you want to know more about stable model generation, take a look at asp_playground.ipynb.
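For intuition on what a stable model is, independent of the notebook, here is a tiny brute-force sketch of the Gelfond-Lifschitz construction for a normal logic program. This is purely illustrative; a real ASP solver like clingo is vastly more sophisticated:

```python
# Brute-force stable-model enumeration for a tiny normal logic program,
# purely for intuition; real ASP solvers such as clingo are far smarter.
# Each rule is a triple (head, positive_body, negative_body) of an atom
# and two sets of atoms.
from itertools import combinations

def least_model(definite_rules):
    """Least model of a definite (negation-free) program via fixpoint."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, pos, _ in definite_rules:
            if pos <= model and head not in model:
                model.add(head)
                changed = True
    return model

def stable_models(rules):
    atoms = {r[0] for r in rules} | set().union(*(r[1] | r[2] for r in rules))
    models = []
    # Try every candidate set of atoms.
    for k in range(len(atoms) + 1):
        for candidate in map(set, combinations(sorted(atoms), k)):
            # Gelfond-Lifschitz reduct: delete rules whose negative body
            # intersects the candidate; drop negation from the rest.
            reduct = [r for r in rules if not (r[2] & candidate)]
            # A candidate is stable iff it is the least model of its reduct.
            if least_model(reduct) == candidate:
                models.append(candidate)
    return models

# The classic two-rule program:  a :- not b.   b :- not a.
rules = [("a", set(), {"b"}), ("b", set(), {"a"})]
print(stable_models(rules))  # [{'a'}, {'b'}]
```

The program above has two stable models, {a} and {b}: each is exactly what remains derivable once its negated assumptions are fixed.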
If you use this code for your research, please cite the following:
@inproceedings{skryagin2022KR,
title={Neural-Probabilistic Answer Set Programming},
author={Arseny Skryagin and Wolfgang Stammer and Daniel Ochs and Devendra Singh Dhami and Kristian Kersting},
booktitle={Proceedings of the 19th International Conference on Principles of Knowledge Representation and Reasoning (KR)},
year={2022}
}