A curated list of papers on Neural Symbolic and Probabilistic Logic. Papers are sorted by upload date in descending order, and each paper comes with a short description. Contributions are welcome!
[Taxonomy] We divide papers into several sub-areas, including
- Surveys on Neural Symbolic and Probabilistic Logic
- Logic-Enhanced Neural Networks (Neural Symbolic)
- Neural-Enhanced Symbolic Logic (Neural Symbolic)
- Probabilistic Logic
- Theoretical Papers
- Miscellaneous
Year | Title | Venue | Paper | Description |
---|---|---|---|---|
2022 | Neuro-Symbolic Approaches in Artificial Intelligence | National Science Review | Paper | A perspective paper that provides a rough guide to key research directions and literature pointers for anybody interested in learning more about neural-symbolic learning. |
2022 | A review of some techniques for inclusion of domain-knowledge into deep neural networks | Nature Scientific Reports | Paper | Presents a survey of techniques for constructing deep networks from data and domain-knowledge. It categorises these techniques into 3 major categories: (1) changes to input representation, (2) changes to loss function, (3a) changes to model structure and (3b) changes to model parameters. |
2021 | Neural, Symbolic and Neural-Symbolic Reasoning on Knowledge Graphs | AI Open | Paper | Takes a thorough look at the development of symbolic, neural, and hybrid reasoning on knowledge graphs. |
2021 | Modular design patterns for hybrid learning and reasoning systems | arXiv | Paper | Analyses a large body of recent literature and proposes a set of modular design patterns for hybrid, neuro-symbolic systems. |
2021 | How to Tell Deep Neural Networks What We Know | arXiv | Paper | This paper examines the inclusion of domain-knowledge by means of changes to: the input, the loss-function, and the architecture of deep networks. |
2020 | From Statistical Relational to Neuro-Symbolic Artificial Intelligence | IJCAI | Paper | Identifies parallels between statistical relational AI and neuro-symbolic AI across seven dimensions. |
2020 | Symbolic Logic meets Machine Learning: A Brief Survey in Infinite Domains | SUM | Paper | Survey work that provides further evidence for the connections between logic and learning. |
2020 | Graph Neural Networks Meet Neural-Symbolic Computing: A Survey and Perspective | IJCAI | Paper | A survey on neural-symbolic computing with graph neural networks. |
2020 | Symbolic, Distributed and Distributional Representations for Natural Language Processing in the Era of Deep Learning: a Survey | Frontiers in Robotics and AI | Paper | A survey that aims to renew the link between symbolic representations and distributed/distributional representations. |
2020 | On the Binding Problem in Artificial Neural Networks | arXiv | Paper | Argues that a key limitation of current neural networks is their inability to dynamically and flexibly bind information that is distributed throughout the network. |
2019 | Neural-symbolic computing: An effective methodology for principled integration of machine learning and reasoning | Journal of Applied Logic | Paper | Surveys recent accomplishments of neural-symbolic computing as a principled methodology for integrating machine learning and reasoning. |
2017 | Neural-Symbolic Learning and Reasoning: A Survey and Interpretation | arXiv | Paper | Reviews personal ideas and views of several researchers on neural-symbolic learning and reasoning. |
2011 | Statistical Relational AI: Logic, Probability and Computation | ICLP | Paper | An overview of the foundations of statistical relational AI (StarAI). |
Year | Title | Venue | Paper | Code | Description |
---|---|---|---|---|---|
2021 | Meta Module Network for Compositional Visual Reasoning | WACV | Paper | Code | N2NMN application |
2020 | Neural Module Networks for Reasoning over Text | ICLR | Paper | Code | TMN, parser-NMN application |
2020 | Learning to Discretely Compose Reasoning Module Networks for Video Captioning | arXiv | Paper | Code | RMN, N2NMN application |
2020 | LRTA: A Transparent Neural-symbolic Reasoning Framework with Modular Supervision for VQA | arXiv | Paper | | N2NMN application |
2019 | Self-Assembling Modular Networks for Interpretable Multi-hop Reasoning | arXiv | Paper | Code | N2NMN application |
2019 | Probabilistic Neural-Symbolic Models for Interpretable Visual Question Answering | ICML | Paper | Code | Proposes ProbNMN, which uses a variational method to generate the reasoning graph. |
2019 | Explainable and Explicit Visual Reasoning over Scene Graphs | CVPR | Paper | Code | XNM, N2NMN + scene graph |
2019 | Learning to Assemble Neural Module Tree Networks for Visual Grounding | ICCV | Paper | Code | NMTree, parser-NMN application |
2019 | Structure Learning for Neural Module Networks | EACL | Paper | | LNMN, follows Stack-NMN to add learnable (soft) modules |
2018 | Explainable Neural Computation via Stack Neural Module Networks | ECCV | Paper | Code | Stack-NMN, N2NMN + differentiable memory stack + soft program execution |
2018 | Neural-Symbolic VQA: Disentangling Reasoning from Vision and Language Understanding | arXiv | Paper | Code | NS-VQA, N2NMN + scene graph |
2018 | Compositional Models for VQA: Can Neural Module Networks Really Count? | BICA | Paper | | Interesting (negative) result for N2NMN |
2018 | Transparency by Design: Closing the Gap between Performance and Interpretability in Visual Reasoning | CVPR | Paper | Code | TbD, soft modules / structures |
2018 | Visual Question Reasoning on General Dependency Tree | CVPR | Paper | Code | ACMN, parser-NMN (DPT -> structure) |
2017 | Learning to Reason: End-To-End Module Networks for Visual Question Answering | ICCV | Paper | Code | N2NMN |
2017 | Inferring and Executing Programs for Visual Reasoning | ICCV | Paper | Code | Essentially N2NMN; refers to N2NMN as "concurrent work". |
2016 | Learning to Compose Neural Networks for Question Answering | NAACL | Paper | Code | Compared with the original NMN, adds a layout selector that chooses a layout from several proposed candidates. |
2016 | Neural Module Networks | CVPR | Paper | Code | The original Neural Module Networks paper (see the sketch after this table). |
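To make the module-composition idea behind this family of papers concrete, here is a minimal sketch. The `Find` and `Answer` modules and their interfaces are hypothetical simplifications, not any paper's actual implementation; the point is only that a layout such as `Answer(Find(image, "dog"))` is assembled from small reusable networks and executed end to end.

```python
# Minimal sketch of the neural-module-network idea (hypothetical modules,
# not the original implementation).
import torch
import torch.nn as nn

class Find(nn.Module):
    """Produces an attention map over image features conditioned on a word embedding."""
    def __init__(self, feat_dim, word_dim):
        super().__init__()
        self.proj = nn.Linear(word_dim, feat_dim)

    def forward(self, image_feats, word_emb):       # image_feats: (H*W, feat_dim)
        query = self.proj(word_emb)                  # (feat_dim,)
        scores = image_feats @ query                 # (H*W,)
        return torch.softmax(scores, dim=0)          # attention over locations

class Answer(nn.Module):
    """Maps attended image features to answer logits."""
    def __init__(self, feat_dim, num_answers):
        super().__init__()
        self.out = nn.Linear(feat_dim, num_answers)

    def forward(self, image_feats, attention):
        attended = attention @ image_feats           # (feat_dim,)
        return self.out(attended)                    # answer logits

# Executing the layout Answer(Find(image, "dog")):
feat_dim, word_dim, num_answers = 64, 32, 10
find, answer = Find(feat_dim, word_dim), Answer(feat_dim, num_answers)
image_feats = torch.randn(14 * 14, feat_dim)
word_emb = torch.randn(word_dim)
logits = answer(image_feats, find(image_feats, word_emb))
```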
Year | Title | Venue | Paper | Code | Description |
---|---|---|---|---|---|
2021 | Calibrating Concepts and Operations: Towards Symbolic Reasoning on Real Images | ICCV | Paper | Code | Introduces an executor with learnable concept-embedding magnitudes to handle distribution imbalance, and an operation calibrator that highlights important operations and suppresses redundant ones. |
2019 | The Neuro-Symbolic Concept Learner: Interpreting Scenes, Words, and Sentences From Natural Supervision | ICLR | Paper | Code | Neuro-Symbolic Concept Learner in VQA |
2017 | β-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework | ICLR | Paper | | Automated discovery of interpretable factorised latent representations from raw images (see the sketch after this table). |
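For reference, the β-VAE objective is the usual VAE evidence lower bound with the KL term scaled by β > 1, which pressures the latent code toward factorised, interpretable dimensions. A schematic loss in PyTorch (not the authors' code):

```python
# Schematic beta-VAE loss: the standard VAE objective with the KL term
# weighted by beta > 1 to encourage factorised latents.
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    recon = F.binary_cross_entropy(x_recon, x, reduction="sum")    # -E_q[log p(x|z)]
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())   # KL(q(z|x) || N(0, I))
    return recon + beta * kl
```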
Year | Title | Venue | Paper | Code | Description |
---|---|---|---|---|---|
2020 | Neuro-Symbolic Visual Reasoning: Disentangling "Visual" from "Reasoning" | PMLR | Paper | Code | a Differentiable First-Order Logic formalism for VQA |
2019 | Learning by Abstraction: The Neural State Machine | NeurIPS | Paper | | Given an image, first predicts a probabilistic scene graph, then performs sequential reasoning over that graph. |
Year | Title | Venue | Paper | Code | Description |
---|---|---|---|---|---|
2020 | A Constraint-Based Approach to Learning and Explanation | AAAI | Paper | Code | Learning First Order Constraints |
2018 | A Semantic Loss Function for Deep Learning with Symbolic Knowledge | ICML | Paper | Code | Semantic loss, a continuous regularizer derived from a logical prior. |
2017 | Logic tensor networks for semantic image interpretation | IJCAI | Paper | Code | Logic Tensor Networks (LTNs) are a statistical relational learning framework integrating neural networks with first-order fuzzy logic. |
2017 | Semantic-based regularization for learning and inference | Artificial Intelligence | Paper | | A regularizer based on fuzzy logic. |
2016 | Harnessing Deep Neural Networks with Logic Rules | ACL | Paper | | A general framework for enhancing various types of neural networks (e.g., CNNs and RNNs) with declarative first-order logic rules (see the sketch after this table). |
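The rows above share one idea: relax a logic rule into a differentiable truth degree and add its violation to the training loss. A minimal sketch under assumed simplifications (a product-logic relaxation of a hypothetical rule `cat(x) -> animal(x)`; none of the papers above uses exactly this form):

```python
# Generic sketch of a fuzzy-relaxed logic rule used as a regularizer
# (product t-norm; hypothetical rule "cat(x) -> animal(x)").
import torch

def implies(a, b):
    # Product-logic (Reichenbach) relaxation of a -> b, values in [0, 1]
    return 1.0 - a + a * b

def rule_loss(p_cat, p_animal):
    """Penalty that is 0 when the rule cat(x) -> animal(x) is fully satisfied."""
    truth = implies(p_cat, p_animal)          # per-example truth degree in [0, 1]
    return (1.0 - truth).mean()

# Typical use: total_loss = task_loss + lambda_rule * rule_loss(p_cat, p_animal),
# where p_cat and p_animal are sigmoid outputs of the classifier being trained.
p_cat = torch.sigmoid(torch.randn(8, requires_grad=True))
p_animal = torch.sigmoid(torch.randn(8, requires_grad=True))
loss = rule_loss(p_cat, p_animal)
loss.backward()
```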
Year | Title | Venue | Paper | Code | Description |
---|---|---|---|---|---|
2021 | Acquisition of Chess Knowledge in AlphaZero | arXiv | Paper | | Provides evidence that human chess knowledge is acquired by the AlphaZero neural network as it trains on the game of chess. |
2021 | Knowledge Neurons in Pretrained Transformers | arXiv | Paper | | Explores how implicit knowledge is stored in pretrained Transformers by introducing the concept of knowledge neurons. |
2019 | Logical Explanations for Deep Relational Machines Using Relevance Information | JMLR | Paper | | Provides a methodology for generating symbolic explanations for predictions made by a Deep Relational Machine (DRM), a deep neural network constructed from relational data, and investigates a Bayes-like approach to identify logical proxies for local DRM predictions. |
Year | Title | Venue | Paper | Code | Description |
---|---|---|---|---|---|
2022 | Composition of Relational Features with an Application to Explaining Black-Box Predictors | arXiv | Paper | Code | Complex (deep) neural networks can be constructed from a relational description of the data using relational features: the input layer of the DNN consists of simple relational features (clauses) and further layers are formed by composing these features. The resulting DNN, called a Compositional Relational Machine (CRM), is inherently explainable. |
2021 | Inclusion of domain-knowledge into GNNs using mode-directed inverse entailment | Machine Learning Journal | Paper | Code | Constructing GNNs from relational data and symbolic domain-knowledge, via construction of "Bottom-Graphs" |
2021 | Incorporating symbolic domain knowledge into graph neural networks | Machine Learning Journal | Paper | Code | Constructing GNNs from relational data and symbolic domain-knowledge, via "Vertex Enrichment" |
2020 | Logical Neural Networks | NeurIPS | Paper | | Transforms a logic formula into a neural-network-like structure; relaxes Boolean truth values to [0,1]. |
2019 | Synthesizing Datalog programs using numerical relaxation | IJCAI | Paper | Code | Differentiable Datalog synthesis. |
2019 | SATNet: Bridging deep learning and logical reasoning using a differentiable satisfiability solver | ICML | Paper | Code | A differentiable SAT solver integrated into deep learning. |
2018 | Large-Scale Assessment of Deep Relational Machines | ILP | Paper | | Constructs MLPs from relational data and symbolic domain-knowledge using "Propositionalisation". |
2018 | Lifted Relational Neural Networks: Efficient Learning of Latent Relational Structures | JAIR | Paper | Code | Creating deep neural networks from "templates" constructed from first-order logic rules. |
2018 | Learning Explanatory Rules from Noisy Data | JAIR | Paper | Code | Differentiable ILP |
2017 | TensorLog: Deep Learning Meets Probabilistic Databases | arXiv | Paper | Code | Relaxes Boolean truth values to [0,1]. |
2017 | Differentiable Learning of Logical Rules for Knowledge Base Reasoning | NeurIPS | Paper | Code | Neural Logic Programming: learns probabilistic first-order logic rules for knowledge base reasoning in an end-to-end differentiable model (see the sketch after this table). |
2017 | End-to-end Differentiable Proving | NeurIPS | Paper | Code | Replaces symbolic unification with a differentiable computation on vector representations of symbols using a radial basis function kernel. |
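A rough sketch of the mechanism shared by TensorLog and Neural LP: each binary relation becomes an adjacency matrix with entries in [0,1], so applying a chain rule such as `grandparent(X,Z) :- parent(X,Y), parent(Y,Z)` is a matrix product, and rule confidences can be learned by gradient descent. The entities, relation, and weight below are made up for illustration; this is not either system's actual code.

```python
# Sketch of the "relations as matrices, rule chains as matrix products" idea
# shared by TensorLog and Neural LP (illustrative only).
import torch

entities = ["alice", "bob", "carol"]           # alice -> bob -> carol
parent = torch.tensor([[0., 1., 0.],
                       [0., 0., 1.],
                       [0., 0., 0.]])          # parent[i, j] = 1 if i is a parent of j

# Rule: grandparent(X, Z) :- parent(X, Y), parent(Y, Z)
# Soft application: compose the relation with itself by matrix multiplication.
grandparent_scores = parent @ parent
print(grandparent_scores)                       # entry (alice, carol) is 1.0

# In Neural LP-style learning, a learnable weight per rule mixes such compositions:
w = torch.tensor(0.7, requires_grad=True)
soft_grandparent = w * (parent @ parent)        # gradients flow into w
```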
Year | Title | Venue | Paper | Code | Description |
---|---|---|---|---|---|
2021 | Neural Markov Logic Networks | UAI | Paper | Code | NMLNs are an exponential-family model for modelling distributions over possible worlds without explicit logic rules. |
2020 | NeurASP: Embracing Neural Networks into Answer Set Programming | IJCAI | Paper | Code | NeurASP, a simple extension of answer set programs by embracing neural networks. |
2019 | Neural Logic Machines | ICLR | Paper | Code | Logic predicates as tensors, logic rules as neural operators. |
2019 | DeepLogic: Towards End-to-End Differentiable Logical Reasoning | AAAI-MAKE | Paper | Code | Feeds logic rules into an RNN as strings. |
2018 | DeepProbLog: Neural Probabilistic Logic Programming | NeurIPS | Paper | Code | Adds "neural predicates" to ProbLog, a probabilistic logic programming language (see the sketch after this table). |
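The "neural predicate" idea underlying DeepProbLog (and, similarly, NeurASP) can be illustrated with the classic MNIST-addition example: a classifier supplies distributions over ground facts, and the probability of a query is obtained by probabilistic inference over them. The sketch below is plain PyTorch marginalisation, not the DeepProbLog language or API.

```python
# Sketch of the "neural predicate" idea: a classifier gives P(digit(img) = d);
# the logic side defines addition(img1, img2, S) :- digit(img1, D1), digit(img2, D2),
# S is D1 + D2, so P(sum = S) sums over all digit pairs that add to S.
import torch

def prob_of_sum(p_digits1, p_digits2, target_sum):
    """p_digits*: length-10 probability vectors from a (hypothetical) digit classifier."""
    total = torch.tensor(0.0)
    for d1 in range(10):
        for d2 in range(10):
            if d1 + d2 == target_sum:
                total = total + p_digits1[d1] * p_digits2[d2]
    return total

p1 = torch.softmax(torch.randn(10, requires_grad=True), dim=0)
p2 = torch.softmax(torch.randn(10, requires_grad=True), dim=0)
loss = -torch.log(prob_of_sum(p1, p2, target_sum=7))   # gradients reach the classifier
loss.backward()
```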
Year | Title | Venue | Paper | Code | Description |
---|---|---|---|---|---|
2021 | Neural-Symbolic Integration: A Compositional Perspective | AAAI | Paper | | Treats the neural and symbolic components as black boxes to be integrated, without making assumptions about their internal structure and semantics. |
2020 | Relational Neural Machines | ECAI | Paper | | Relational Neural Machines, a framework for jointly training the parameters of the learners and of a first-order-logic-based reasoner. |
2020 | Closed Loop Neural-Symbolic Learning via Integrating Neural Perception, Grammar Parsing, and Symbolic Reasoning | ICML | Paper | Code | NGS: (1) introduces a grammar model as a symbolic prior, and (2) proposes a back-search algorithm to propagate errors through the symbolic reasoning module efficiently. |
2019 | NLProlog: Reasoning with Weak Unification for Question Answering in Natural Language | ACL | Paper | | A Prolog prover extended to use a similarity function over pretrained sentence encoders. |
2018 | Lifted relational neural networks: Efficient learning of latent relational structures | JAIR | Paper | | Combines the interpretability and expressive power of first-order logic with the effectiveness of neural network learning. |
Year | Title | Venue | Paper | Code | Description |
---|---|---|---|---|---|
2007 | ProbLog: A Probabilistic Prolog and its Application in Link Discovery | IJCAI | Paper | Code | ProbLog, a probabilistic logic programming language (see the sketch after this table). |
2005 | Learning the structure of Markov logic networks | ICML | Paper | | An algorithm for learning the structure of MLNs from relational databases. |
2001 | Bayesian Logic Programs | | Paper | | Bayesian networks combined with logic programs. |
2001 | Parameter Learning of Logic Programs for Symbolic-statistical Modeling | JAIR | Paper | | Proposes a logical/mathematical framework for statistical parameter learning of parameterized logic programs, i.e. definite clause programs containing probabilistic facts with a parameterized distribution. |
1996 | Stochastic Logic Programs | Advances in ILP | Paper | | A formulation of stochastic logic programs. |
1992 | Probabilistic logic programming | Information and Computation | Paper | | A formulation of probabilistic logic programming. |
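Under ProbLog's distribution semantics, every probabilistic fact is an independent Bernoulli choice and the probability of a query is the total probability of the possible worlds in which it is provable. A toy enumeration in plain Python (hypothetical facts and rules; the real system uses knowledge compilation rather than brute-force enumeration):

```python
# Toy illustration of ProbLog's distribution semantics: probabilistic edges,
# query path(a, c).
from itertools import product

# Independent probabilistic facts: 0.8::edge(a,b). 0.6::edge(b,c). 0.3::edge(a,c).
facts = {("a", "b"): 0.8, ("b", "c"): 0.6, ("a", "c"): 0.3}

def query_holds(world):
    # path(a,c) :- edge(a,c).  path(a,c) :- edge(a,b), edge(b,c).
    return world[("a", "c")] or (world[("a", "b")] and world[("b", "c")])

prob = 0.0
keys = list(facts)
for choices in product([True, False], repeat=len(keys)):
    world = dict(zip(keys, choices))
    w_prob = 1.0
    for k, included in world.items():
        w_prob *= facts[k] if included else 1.0 - facts[k]
    if query_holds(world):
        prob += w_prob
print(prob)   # = 1 - (1 - 0.3) * (1 - 0.8 * 0.6) = 0.636
```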
Year | Title | Venue | Paper | Code | Description |
---|---|---|---|---|---|
2008 | Event Modeling and Recognition Using Markov Logic Networks | ECCV | Paper | | An application of MLNs. |
2008 | Hybrid Markov Logic Networks | AAAI | Paper | | Extends Markov Logic Networks to continuous domains. |
2007 | Efficient Weight Learning for Markov Logic Networks | PKDD | Paper | | Weight learning for MLNs. |
2005 | Discriminative Training of Markov Logic Networks | AAAI | Paper | | A discriminative approach to training MLNs. |
2005 | Markov Logic Networks | Machine Learning | Paper | | Combines first-order logic with Markov networks; a classic paper (see the sketch after this table). |
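A Markov Logic Network attaches a weight to each first-order formula and assigns a possible world probability proportional to exp of the weighted number of satisfied ground formulas. A toy sketch with one hypothetical rule over two constants (brute-force enumeration, unlike real MLN inference):

```python
# Toy Markov Logic Network: one weighted rule
#   1.5  smokes(x) -> cancer(x)
# over two people; P(world) is proportional to exp(w * #satisfied groundings).
import math
from itertools import product

people = ["anna", "bob"]
w = 1.5

def n_satisfied(world):
    # world maps ("smokes"/"cancer", person) -> bool
    return sum((not world[("smokes", p)]) or world[("cancer", p)] for p in people)

atoms = [(pred, p) for pred in ("smokes", "cancer") for p in people]
worlds = [dict(zip(atoms, values)) for values in product([True, False], repeat=len(atoms))]

Z = sum(math.exp(w * n_satisfied(wld)) for wld in worlds)           # partition function
query = sum(math.exp(w * n_satisfied(wld)) for wld in worlds
            if wld[("smokes", "anna")] and not wld[("cancer", "anna")])
print(query / Z)   # P(smokes(anna) AND NOT cancer(anna)) under the MLN
```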
Year | Title | Venue | Paper | Description |
---|---|---|---|---|
2022 | DeepLogic: Joint Learning of Neural Perception and Logical Reasoning | IEEE TPAMI | Paper | Neural-symbolic learning aims to combine the perceiving power of neural networks with the reasoning power of symbolic logic. Existing works simply cascade the two components and optimize them in isolation, failing to exploit the mutually enhancing information between them; this paper proposes learning the two jointly. |
2022 | Composition of Relational Features with an Application to Explaining Black-Box Predictors | arXiv | Paper | Complex (deep) neural networks can be constructed from a relational description of the data using relational features: the input layer of the DNN consists of simple relational features (clauses) and further layers are formed by composing these features. The resulting DNN, called a Compositional Relational Machine (CRM), is inherently explainable. |
2021 | Inclusion of domain-knowledge into GNNs using mode-directed inverse entailment | Machine Learning Journal | Paper | Constructing GNNs from relational data and symbolic domain-knowledge, via construction of "Bottom-Graphs" |
2019 | Logical Explanations for Deep Relational Machines Using Relevance Information | JMLR | Paper | Constructs symbolic explanations for predictions made by Deep Relational Machines (DRMs). |
2018 | Exact Learning of Lightweight Description Logic Ontologies | JMLR | Paper | We study the problem of learning description logic (DL) ontologies in Angluin et al.’s framework of exact learning via queries. |
2017 | Hinge-Loss Markov Random Fields and Probabilistic Soft Logic | JMLR | Paper | Introduces two formalisms for modeling structured data (hinge-loss Markov random fields and probabilistic soft logic) that capture rich structure while scaling to big data (see the sketch after this table). |
2017 | Answering FAQs in CSPs, Probabilistic Graphical Models, Databases, Logic and Matrix Operations (Invited Talk) | STOC | Paper | An invited talk on a general framework spanning CSPs, probabilistic graphical models, databases, logic, and matrix operations. |
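As a pointer to how probabilistic soft logic (PSL) relaxes rules: under the Łukasiewicz relaxation, a ground rule `body -> head` with truth values in [0,1] has distance to satisfaction `max(0, body - head)`, and MAP inference minimises the weighted sum of these hinge losses. A small illustrative computation (values chosen arbitrarily):

```python
# Sketch of PSL's hinge-loss relaxation (illustrative, plain Python).
def distance_to_satisfaction(body, head):
    return max(0.0, body - head)

# Rule 2.0 : friends(A, B) & smokes(A) -> smokes(B), with Lukasiewicz AND for the body:
def body_truth(friends_ab, smokes_a):
    return max(0.0, friends_ab + smokes_a - 1.0)

w = 2.0
penalty = w * distance_to_satisfaction(body_truth(0.9, 0.8), head=0.3)
print(penalty)   # 2.0 * max(0, 0.7 - 0.3) = 0.8
```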
Year | Title | Venue | Paper | Code | Description |
---|---|---|---|---|---|
2020 | Integrating Logical Rules Into Neural Multi-Hop Reasoning for Drug Repurposing | ICML | Paper | | Logic rules + GNN + RL. |
2020 | What Can Neural Networks Reason About? | ICLR | Paper | | Studies how neural network structure correlates with performance on different reasoning tasks. |
2019 | Bridging Machine Learning and Logical Reasoning by Abductive Learning | NeurIPS | Paper | | A machine learning model learns to perceive primitive logic facts from data, while logical reasoning exploits symbolic domain knowledge to correct wrongly perceived facts and thereby improve the machine learning model (see the sketch after this table). |
2013 | Deep relational machines | NeurIPS | Paper | | A DRM learns the first layer of representation by inducing first-order Horn clauses; successive layers are generated using restricted Boltzmann machines. |
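The abductive-learning loop described above can be sketched as: perception proposes symbolic facts, abduction revises as few of them as possible so the knowledge base is satisfied, and the revised facts become pseudo-labels for retraining the perception model. The toy knowledge base (a single arithmetic equation) and the brute-force revision search below are hypothetical simplifications, not the paper's algorithm.

```python
# Sketch of the abductive-learning loop: revise a minimal number of perceived
# symbols so that a toy knowledge base (an equation must hold) is satisfied.
from itertools import combinations

def consistent(facts):
    # Toy knowledge base: an equation like [1, '+', 1, '=', 2] must hold.
    a, _, b, _, c = facts
    return a + b == c

def _assignments(n, domain=range(10)):
    if n == 0:
        yield ()
        return
    for v in domain:
        for rest in _assignments(n - 1, domain):
            yield (v,) + rest

def abduce(perceived, max_changes=2):
    digits = [i for i, f in enumerate(perceived) if isinstance(f, int)]
    for k in range(max_changes + 1):
        for idxs in combinations(digits, k):
            for values in _assignments(len(idxs)):
                candidate = list(perceived)
                for i, v in zip(idxs, values):
                    candidate[i] = v
                if consistent(candidate):
                    return candidate           # minimally revised, KB-consistent facts
    return None

print(abduce([1, '+', 1, '=', 3]))   # revises one digit so the equation holds
```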
Year | Title | Venue | Paper | Code | Description |
---|---|---|---|---|---|
2021 | Off-Policy Differentiable Logic Reinforcement Learning | ECML PKDD | Paper | | Proposes an Off-Policy Differentiable Logic Reinforcement Learning (OPDLRL) framework that inherits the interpretability and generalization ability of Differentiable Inductive Logic Programming. |
2020 | Exploring Logic Optimizations with Reinforcement Learning and Graph Convolutional Network | MLCAD | Paper | Code | Proposes a Markov decision process (MDP) formulation of the logic synthesis problem and a reinforcement learning algorithm incorporating a graph convolutional network to explore the solution search space. |
2020 | Reinforcement Learning with External Knowledge by using Logical Neural Networks | IJCAI Workshop | Paper | | Proposes an integrated method enabling model-free reinforcement learning from external knowledge sources within an LNN-based logically constrained framework, e.g. for action shielding and guidance. |
2019 | Transfer of Temporal Logic Formulas in Reinforcement Learning | IJCAI | Paper | | Proposes an inference technique to extract metric interval temporal logic (MITL) formulas in sequential disjunctive normal form from labeled trajectories collected during RL on the two tasks. |
2019 | Neural Logic Reinforcement Learning | ICML | Paper | Code | Proposes Neural Logic Reinforcement Learning (NLRL), which represents reinforcement learning policies with first-order logic. |
Year | Title | Venue | Paper | Code | Description |
---|---|---|---|---|---|
2020 | Measuring Compositional Generalization: A Comprehensive Method on Realistic Data | ICLR | Paper | Code | CFQ, a large natural-language question answering dataset for measuring compositional generalization. |
Year | Title | Venue | Paper | Code | Description |
---|---|---|---|---|---|
2021 | DomiKnowS: A Library for Integration of Symbolic Domain Knowledge in Deep Learning | arXiv | Homepage | Code | A library providing a language interface to integrate domain knowledge into deep learning. |
2019 | LYRICS: A General Interface Layer to Integrate Logic Inference and Deep Learning | ECML | Paper | | TensorFlow-based; appears to exist only as a design, without a released implementation. |
2007 | ProbLog: A Probabilistic Prolog and its Application in Link Discovery | IJCAI | Paper | Code | ProbLog, a library for probabilistic logic programming. |