For AI researchers seeking clean semantic models that can represent the context-specific causal dependencies essential for causal induction, a recent DeepMind paper suggests taking a fresh look at good old-fashioned probability trees.
A probability tree diagram represents a probability space: it illustrates a series of events, with each branch carrying the conditional probability of an event given the events that precede it.
Each node in a probability tree represents an event together with its probability. The root node represents the sure event, with probability equal to one. A set of sibling nodes represents an exhaustive and exclusive partition of the parent event.
The probability that the series of events leading to a particular node occurs is computed recursively: P(path to node) = P(node | parent) × P(path to parent node).
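The two rules above can be sketched in a few lines of Python. This is a minimal illustration with hypothetical names (`Node`, `path_probability`), not DeepMind's implementation: each node stores its transition probability given its parent, sibling probabilities must sum to one, and a path's probability is the product of the transition probabilities from the root down.

```python
# Minimal sketch of a probability tree node (hypothetical names,
# not DeepMind's implementation).

class Node:
    def __init__(self, label, prob, children=()):
        self.label = label              # event this node represents
        self.prob = prob                # P(node | parent)
        self.children = list(children)
        # Sibling nodes must form an exhaustive, exclusive partition
        # of the parent event, so their probabilities sum to one:
        assert not self.children or abs(sum(c.prob for c in self.children) - 1.0) < 1e-9

def path_probability(path):
    """P(series of events from root to node) = product of P(node | parent)."""
    p = 1.0
    for node in path:
        p *= node.prob
    return p

# Example: root (prob 1) -> "rain" (0.3) -> "traffic" (0.8)
traffic = Node("traffic", 0.8)
no_traffic = Node("no traffic", 0.2)
rain = Node("rain", 0.3, [traffic, no_traffic])
no_rain = Node("no rain", 0.7)
root = Node("root", 1.0, [rain, no_rain])

print(round(path_probability([root, rain, traffic]), 6))  # → 0.24
```

Note that the recursion in the formula collapses into a simple product over the path, since the parent's path probability is itself a product of the transitions above it.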
Probability trees have been around for decades, but they have received little attention from the ML and AI communities. As the new DeepMind paper Algorithms for Causal Reasoning in Probability Trees puts it, "Probability trees are among the simplest models of causal generative processes." According to the authors, their work is the first to propose concrete algorithms for causal reasoning in discrete probability trees.
Humans learn to reason largely by inducing causal relationships from observations, and according to cognitive scientists, we do it quite well. Even with sparse and limited data (such as observed co-occurrence frequencies between causes and effects, or interactions between physical objects), humans can quickly learn causal structures.
Causal induction is a classic problem in ML and statistics. Models such as causal Bayesian networks (CBNs) can describe the causal dependencies needed for causal induction, but CBNs cannot represent context-specific independencies. According to the DeepMind team, their algorithms cover the entire causal hierarchy and operate on arbitrary propositional and causal events, extending causal reasoning to "a very general class of discrete stochastic processes."
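To make the context-specific independence point concrete, here is an assumed toy scenario (mine, not the paper's): whether the grass is wet depends on the sprinkler only in the dry-season branch; in the rainy branch the grass is wet regardless, and the sprinkler never even appears. A probability tree expresses this directly because each branch can unfold its own sequence of events, whereas a CBN's graph must commit to one fixed set of edges for all contexts.

```python
# Illustrative sketch (assumed scenario, not from the DeepMind paper).
# Each entry maps an event label to (transition probability, subtree).
tree = {
    "rainy": (0.4, {
        "wet": (1.0, {}),   # rain guarantees wet grass: no sprinkler dependence
    }),
    "dry": (0.6, {
        "sprinkler_on":  (0.5, {"wet": (0.9, {}), "not_wet": (0.1, {})}),
        "sprinkler_off": (0.5, {"wet": (0.0, {}), "not_wet": (1.0, {})}),
    }),
}

def prob_of(tree, predicate, prefix=(), p=1.0):
    """Sum the path probabilities of all leaves whose path satisfies `predicate`."""
    total = 0.0
    for label, (q, children) in tree.items():
        path = prefix + (label,)
        if children:
            total += prob_of(children, predicate, path, p * q)
        elif predicate(path):
            total += p * q
    return total

print(round(prob_of(tree, lambda path: "wet" in path), 6))  # marginal P(wet)
```

The "wet" node in the rainy branch has no sprinkler ancestor at all, which is exactly the kind of context-specific structure a single fixed CBN graph cannot capture compactly.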
Focusing their research on finite probability trees, the DeepMind team produced concrete algorithms for:
- Computing minimal representations of arbitrary events formed through:
- Propositional calculus
- Causal precedences
- Computing the following three fundamental operations of the causal hierarchy: