
Awesome Neuro-Symbolic AI


A curated list of awesome Neuro-Symbolic AI (NSAI or NeSy) software frameworks.

If you want to contribute to this list (please do), send me a pull request or contact me @mattfaltyn.

Table of Contents

NSAI Basics and Resources

NSAI in Two Sentences

"NSAI aims to build rich computational AI models, systems and applications by combining neural and symbolic learning and reasoning. It hopes to create synergies among the strengths of neural and symbolic AI while overcoming their complementary weaknesses." - Sixteenth International Workshop on Neural-Symbolic Learning and Reasoning

Textbooks

Overview Articles

NSAI Categories

Henry Kautz's taxonomy from his 2020 Robert S. Engelmore Memorial Lecture at the Thirty-Fourth AAAI Conference on Artificial Intelligence (slides here) is the informal standard for categorizing neuro-symbolic architectures. Hamilton et al. (2022) reframed Kautz's taxonomy into four categories to make it more intuitive. We omit Kautz's Type VI, as no architectures currently exist under that category.

Category 1: Sequential

Type I

A Type I (symbolic Neuro symbolic) system is standard deep learning. This class is included in the taxonomy as the input and output of a neural network can be symbols (such as words in language translation) that are vectorized within the model. Some Type I architectures include:
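
A minimal sketch of the Type I pattern, assuming PyTorch; the vocabulary, model, and task below are illustrative toys, not drawn from any particular architecture:

```python
# Type I: symbols in, symbols out, with all intermediate computation vectorized.
import torch
import torch.nn as nn

vocab = ["the", "cat", "sat", "<pad>"]
tok2id = {t: i for i, t in enumerate(vocab)}

class TinyTagger(nn.Module):
    def __init__(self, vocab_size, dim=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)   # symbol -> vector
        self.out = nn.Linear(dim, vocab_size)        # vector -> symbol scores

    def forward(self, token_ids):
        return self.out(self.embed(token_ids))

model = TinyTagger(len(vocab))
ids = torch.tensor([tok2id[t] for t in ["the", "cat", "sat"]])
pred_ids = model(ids).argmax(dim=-1).tolist()        # back to symbol space
print([vocab[i] for i in pred_ids])
```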

Category 2: Nested

Type II

A Type II (Symbolic[Neuro]) system is a hybrid system in which a symbolic solver uses neural networks as subroutines to solve one or more tasks; a toy sketch of this pattern follows the list below. Some Type II frameworks include:

  • AlphaGo
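
The following sketch is loosely inspired by AlphaGo's use of a learned evaluator inside symbolic search; the game, the untrained network, and the one-ply search are illustrative stand-ins, assuming PyTorch:

```python
# Type II: a symbolic game-tree search calls a neural network as a subroutine.
import torch
import torch.nn as nn

value_net = nn.Sequential(nn.Linear(9, 32), nn.ReLU(), nn.Linear(32, 1), nn.Tanh())

def legal_moves(board):                  # symbolic rules of a toy 3x3 game
    return [i for i, v in enumerate(board) if v == 0]

def evaluate(board):                     # neural subroutine: position value
    with torch.no_grad():
        return value_net(torch.tensor(board, dtype=torch.float32)).item()

def best_move(board, player=1):          # symbolic one-ply search
    scores = {}
    for m in legal_moves(board):
        child = list(board); child[m] = player
        scores[m] = evaluate(child)
    return max(scores, key=scores.get)

print(best_move([0] * 9))
```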

Category 3: Cooperative

Type III

A Type III (Neuro; Symbolic) system is a hybrid system in which a neural network solves one task and interacts, via its input and output, with a symbolic system that solves a different task; a toy sketch of this pattern follows the list below. Some Type III frameworks include:

  • Neuro-Symbolic Concept Learner
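
The following sketch loosely echoes the perception-then-reasoning split of systems like the Neuro-Symbolic Concept Learner; the networks, names, and fake "scene" are illustrative, assuming PyTorch:

```python
# Type III: a neural module solves one task (attribute recognition) and passes
# its symbolic output to a symbolic module that solves another (query answering).
import torch
import torch.nn as nn

COLORS = ["red", "green", "blue"]
color_net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, len(COLORS)))

def perceive(object_vec):
    """Neural task: map raw features to a discrete symbol."""
    with torch.no_grad():
        return COLORS[color_net(object_vec).argmax().item()]

def count_color(symbols, color):
    """Symbolic task: reason over the symbols the network produced."""
    return sum(1 for s in symbols if s == color)

scene = [torch.randn(4) for _ in range(5)]       # five fake "objects"
symbols = [perceive(obj) for obj in scene]
print(symbols, "->", count_color(symbols, "red"), "red objects")
```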

Category 4: Compiled

Type IV

A Type IV (Neuro: Symbolic → Neuro) system is one in which symbolic knowledge is compiled into the training set of a neural network. Some Type IV frameworks include:
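
A toy sketch of the compiled pattern, assuming PyTorch; the "knowledge" here is a single illustrative XOR-like rule that is exhaustively grounded into training examples:

```python
# Type IV: a symbolic rule is "compiled" into labeled training data for an
# ordinary neural network, which then approximates the rule.
import torch
import torch.nn as nn

def rule(a, b):                           # symbolic knowledge: XOR-like rule
    return int(a != b)

# Compile the rule into a training set by exhaustively grounding it.
X = torch.tensor([[a, b] for a in (0, 1) for b in (0, 1)], dtype=torch.float32)
y = torch.tensor([[rule(a, b)] for a, b in X.tolist()]).float()

net = nn.Sequential(nn.Linear(2, 8), nn.Tanh(), nn.Linear(8, 1), nn.Sigmoid())
opt = torch.optim.Adam(net.parameters(), lr=0.05)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.binary_cross_entropy(net(X), y)
    loss.backward()
    opt.step()
print(net(X).round().squeeze())           # network should now reproduce the rule
```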

Type V

A Type V (Neuro_Symbolic) system is a tightly coupled but distributed neuro-symbolic system in which a symbolic logic rule is mapped onto an embedding that acts as a soft constraint on the network's loss function. These systems are often tensorized in some manner. Some Type V frameworks include:
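
A toy sketch of the soft-constraint pattern, assuming PyTorch; the networks, data, rule encoding, and penalty weight are all illustrative:

```python
# Type V: a fuzzy encoding of the rule p(x) -> q(x) is added to the loss
# as a soft constraint, rather than being compiled into the data.
import torch
import torch.nn as nn

p_net = nn.Sequential(nn.Linear(3, 8), nn.ReLU(), nn.Linear(8, 1), nn.Sigmoid())
q_net = nn.Sequential(nn.Linear(3, 8), nn.ReLU(), nn.Linear(8, 1), nn.Sigmoid())

x = torch.randn(32, 3)
q_labels = torch.randint(0, 2, (32, 1)).float()

task_loss = nn.functional.binary_cross_entropy(q_net(x), q_labels)

# Soft constraint: the implication is violated to the degree that p's
# truth value exceeds q's, under a fuzzy reading of p -> q.
violation = torch.relu(p_net(x) - q_net(x)).mean()

loss = task_loss + 0.5 * violation        # rule shapes the loss, not the data
loss.backward()
```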

Frameworks

In this section, we aim to provide the most comprehensive list of NSAI frameworks to date.

Logical Neural Network

A Neural = Symbolic framework for sound and complete weighted real-valued logic, created by IBM Research.
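
The LNN papers describe neurons that implement weighted real-valued logical connectives. Below is a rough sketch of a weighted Łukasiewicz-style conjunction in that spirit; it is not the lnn package's API, and the weights and truth values are illustrative:

```python
# One LNN-style neuron: a clamped, weighted Lukasiewicz AND over truth
# values in [0, 1].
import torch

def weighted_and(truths, weights, beta=1.0):
    """AND(x) = clamp(beta - sum_i w_i * (1 - x_i), 0, 1)."""
    return torch.clamp(beta - (weights * (1.0 - truths)).sum(), 0.0, 1.0)

truths = torch.tensor([0.9, 0.8])         # truth values of the operands
weights = torch.tensor([1.0, 1.0])
print(weighted_and(truths, weights))      # ~0.7
```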

Software

Media

Academic Papers

Blogs

Logic Tensor Networks

Sony's Logic Tensor Networks (LTN) is a neurosymbolic framework that supports querying, learning, and reasoning with both rich data and abstract knowledge about the world. LTN introduces a fully differentiable logical language, called Real Logic, whereby the elements of a first-order logic signature are grounded onto data using neural computational graphs and first-order fuzzy logic semantics.
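
A rough sketch of the Real Logic grounding idea, not the LTN library's API: a predicate symbol is grounded as a neural network with outputs in [0, 1], and a universal quantifier is approximated by aggregating truth values over the data. The Friends predicate, embeddings, and mean aggregator below are illustrative, assuming PyTorch:

```python
# Real Logic sketch: predicates are differentiable truth functions, and
# formulas are trained toward truth value 1.
import torch
import torch.nn as nn

Friends = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

people = torch.randn(10, 4)               # illustrative object embeddings

def forall_pairs(pred, xs):
    """Approximate 'forall x,y: pred(x,y)' by mean truth over all pairs."""
    pairs = torch.cartesian_prod(torch.arange(len(xs)), torch.arange(len(xs)))
    inputs = torch.cat([xs[pairs[:, 0]], xs[pairs[:, 1]]], dim=1)
    return pred(inputs).mean()

truth = forall_pairs(Friends, people)     # differentiable truth degree
(1.0 - truth).backward()                  # train to satisfy the formula
```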

Neural Logic Machines

Google's Neural Logic Machine (NLM) is a neural-symbolic architecture for both inductive learning and logic reasoning. NLMs use tensors to represent logic predicates.
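
A rough sketch of the tensor representation NLMs build on; the predicates and objects are illustrative, and real NLMs learn rule application with neural modules rather than hard-coding it as below:

```python
# NLM-style encoding: an n-ary predicate over m objects is a {0,1}-valued
# tensor with n object dimensions, so simple deduction becomes tensor algebra.
import torch

m = 4                                      # number of objects
parent = torch.zeros(m, m)                 # parent[i, j] = 1 iff parent(i, j)
parent[0, 1] = parent[1, 2] = parent[2, 3] = 1.0

# grandparent(x, y) <- exists z: parent(x, z) and parent(z, y)
# "and" becomes multiplication; "exists z" becomes a max over z.
grandparent = (parent.unsqueeze(2) * parent.unsqueeze(0)).max(dim=1).values
print(grandparent)
```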

Hinge-Loss Markov Random Fields and Probabilistic Soft Logic

Bach et al.'s Hinge-Loss Markov Random Fields (HL-MRFs) are a class of probabilistic graphical models that generalizes different approaches to convex inference. They unify three approaches from the randomized algorithms, probabilistic graphical models, and fuzzy logic communities, showing that all three lead to the same inference objective; HL-MRFs are then defined by generalizing this unified objective. A second formalism, probabilistic soft logic (PSL), is a probabilistic programming language that makes HL-MRFs easy to define using a syntax based on first-order logic.
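
As a toy illustration of a single hinge-loss potential, here is one PSL-style rule grounded with Łukasiewicz semantics; the rule, truth values, and weight are illustrative:

```python
# One PSL-style potential: the rule Smokes(a) & Friends(a,b) -> Smokes(b),
# penalized by its distance to satisfaction.
def hinge_potential(smokes_a, friends_ab, smokes_b, weight=1.0):
    # Lukasiewicz conjunction: truth(p & q) = max(0, p + q - 1). The rule's
    # distance to satisfaction is how far the body's truth exceeds the head's.
    body = max(0.0, smokes_a + friends_ab - 1.0)
    return weight * max(0.0, body - smokes_b)

print(hinge_potential(0.9, 0.8, 0.2))     # strongly violated grounding -> 0.5
```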

TensorLog

Cohen's TensorLog is a probabilistic deductive database in which reasoning uses a differentiable process. In TensorLog, each clause in a logical theory is first converted into a certain type of factor graph. Then, for each type of query to the factor graph, the message-passing steps required to perform belief propagation (BP) are "unrolled" into a differentiable function.
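
A toy sketch of the differentiable-reasoning idea, not TensorLog's implementation: with one-hot entity vectors and relation matrices, answering a query against a chain clause such as uncle(X, Y) :- brother(X, Z), parent(Z, Y) unrolls into matrix products. The tiny knowledge base below is illustrative, assuming NumPy:

```python
# Entities as one-hot vectors, relations as matrices, inference as products.
import numpy as np

entities = ["alice", "bob", "carol"]
def one_hot(name):
    v = np.zeros(len(entities)); v[entities.index(name)] = 1.0; return v

brother = np.zeros((3, 3))   # brother[x, z] = 1 iff brother(x, z)
brother[entities.index("bob"), entities.index("alice")] = 1.0
parent = np.zeros((3, 3))    # parent[z, y] = 1 iff parent(z, y)
parent[entities.index("alice"), entities.index("carol")] = 1.0

# Query uncle(bob, Y): propagate bob through brother, then parent.
scores = one_hot("bob") @ brother @ parent
print({e: s for e, s in zip(entities, scores)})   # carol scores 1.0
```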

Markov Logic Networks

Matthew Richardson and Pedro Domingos' Markov Logic Networks (MLNs) combine a first-order knowledge base with a weight attached to each formula (or clause). Together with a set of constants representing objects in the domain, an MLN specifies a ground Markov network containing one feature for each possible grounding of a first-order formula in the KB, with the corresponding weight.
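
A toy sketch of the MLN scoring rule in plain Python; the world, the single formula, and its weight are illustrative:

```python
# MLN scoring: a possible world's unnormalized probability is exp of the
# weighted count of true formula groundings.
import math
from itertools import product

people = ["anna", "bob"]
world = {("Smokes", "anna"): True, ("Smokes", "bob"): False,
         ("Friends", "anna", "bob"): True, ("Friends", "bob", "anna"): True,
         ("Friends", "anna", "anna"): False, ("Friends", "bob", "bob"): False}

def implies(p, q):
    return (not p) or q

def n_true_groundings(w):
    # Formula: Friends(x, y) -> (Smokes(x) <-> Smokes(y))
    return sum(implies(w[("Friends", x, y)],
                       w[("Smokes", x)] == w[("Smokes", y)])
               for x, y in product(people, people))

weight = 1.5
print(math.exp(weight * n_true_groundings(world)))   # unnormalized score
```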