NACE

Non-Axiomatic Causal Explorer

Aim

This project builds on an implementation of Berick Cook's AIRIS, extended with support for partial observability. The aim is to enhance its capabilities to handle non-deterministic and non-stationary environments, as well as changes external to the agent. Initially, this will be achieved by incorporating relevant components of Non-Axiomatic Logic (NAL).

Background

Several AI systems, referenced under Related works below, employ a form of Cognitive Schematics: they learn and use empirically-causal temporal relations, typically of the form (precondition, operation) => consequence. This lets the AI build a goal-independent understanding of its environment, derived primarily from correlations with its own actions. Although these hypotheses are not necessarily causal, they are more than passively obtained correlations: the agent can actively re-test them and seek out opportunities to do so, improving its predictive power. This is a significant advantage over the axiomatic causal relations proposed by Judea Pearl, whose approach cannot learn causal structure from correlation alone; it can only derive new probability distributions from a graph of already-given causal relations. The cognitive schematic approach has no such limitation, which makes it a more general adaptive learning model, better suited for autonomous agents.

Additionally, using NAL frequency and confidence values to represent a hypothesis's truth value enables efficient real-time revision of the agent's knowledge. Unlike the probabilistic approach, this method functions effectively even with small sample sizes, can handle novel events (unknown unknowns), and has a low computational cost, since only local memory updates are necessary.
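
To make the schematic form and its NAL truth value concrete, here is a minimal sketch in Python. The class name, fields, and string-valued conditions are illustrative assumptions, not NACE's actual data structures; only the (precondition, operation) => consequence shape and the standard NAL definitions f = w+/w and c = w/(w + k) are taken from the text and from NAL itself.

```python
# Illustrative sketch only -- not NACE's implementation.
from dataclasses import dataclass

K = 1.0  # NAL evidential horizon parameter (k = 1 is the common default)

@dataclass
class Hypothesis:
    precondition: str    # state features observed before acting
    operation: str       # the agent's action
    consequence: str     # predicted resulting state features
    w_plus: float = 0.0  # positive evidence: times the prediction held
    w: float = 0.0       # total evidence: times the prediction was tested

    @property
    def frequency(self) -> float:
        """f = w+ / w, the hypothesis's observed success rate."""
        return self.w_plus / self.w if self.w > 0 else 0.5

    @property
    def confidence(self) -> float:
        """c = w / (w + k), how much evidence backs the frequency."""
        return self.w / (self.w + K)

    def revise(self, confirmed: bool) -> None:
        """Constant-time local update after each new observation."""
        self.w += 1.0
        if confirmed:
            self.w_plus += 1.0

# Hypothetical usage: revise a schematic as its prediction is re-tested.
h = Hypothesis("agent left of cup", "move right", "agent holds cup")
h.revise(confirmed=True)
h.revise(confirmed=True)
h.revise(confirmed=False)
print(f"f={h.frequency:.2f}, c={h.confidence:.2f}")  # f=0.67, c=0.75
```

Because revision only touches the evidence counters of the affected hypothesis, knowledge updates stay cheap and meaningful even after a handful of observations, unlike estimates that assume a large sample.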

Architecture

[Architecture diagram]

Demonstration scenarios

  • Learning to collect salad from scratch: World1
  • Learning how to put the cup on the table, in this case the goal is known to the agent: World2
  • Learning to collect batteries and to pick up keys in order to make it through doors: World3
  • Learning to collect salad with a moving cat as a disturbance: World4
  • Learning to play Pong in the grid world: World5
  • Learning to bring eggs to the chicken: World6
  • Learning to play soccer: World7
  • Learning to collect salad while avoiding getting shocked by electric fences: World8

Related works

Autonomous Intelligent Reinforcement Interpreted Symbolism (AIRIS)

OpenNARS for Applications (ONA)

Rational OpenCog Controlled Agent (ROCCA)