Inductive Biases May Bridge The Gap Between Current Deep Learning And Human Cognitive Abilities

Machine learning (ML) models are now applied across many fields, and many systems achieve remarkable accuracy. In many cases, however, their answers vary with the training dataset and the task at hand, calling into question whether a model's reasoning or judgment is sound. Understanding human intelligence can help in building intelligent machines, but as in physics, principles alone are not sufficient to predict the behavior of complex systems like brains; substantial computation is also needed to simulate human-like intelligence.

Anirudh Goyal and Yoshua Bengio of Mila, University of Montreal, suggest that deep learning can be extended qualitatively rather than simply by scaling up data and computing resources. In their new paper, Inductive Biases for Deep Learning of Higher-Level Cognition, they explore how inductive biases could bridge the gap between current deep learning and human cognitive abilities, bringing deep learning closer to human-level AI.

Deep learning (DL) already incorporates several fundamental inductive biases found in humans and other animals. The team proposes that augmenting this set of biases can advance deep learning. In particular, focusing on biases involved in higher-level, sequential conscious processing could take DL from its current successes at in-distribution generalization in highly supervised learning tasks toward more robust, human-like out-of-distribution generalization and transfer-learning abilities.
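To make the in-distribution versus out-of-distribution contrast concrete, here is a toy sketch (not from the paper; the task, model, and ranges are illustrative assumptions): a flexible model fit on one input range often degrades sharply when evaluated outside that range.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: y = sin(x); training inputs confined to [-2, 2].
x_train = rng.uniform(-2.0, 2.0, size=200)
y_train = np.sin(x_train)

# Fit a degree-5 polynomial as a stand-in for any flexible model.
coeffs = np.polyfit(x_train, y_train, deg=5)

def mse_on(x):
    """Mean squared error of the fitted model against the true function."""
    return float(np.mean((np.polyval(coeffs, x) - np.sin(x)) ** 2))

x_in = rng.uniform(-2.0, 2.0, size=1000)   # same range as training
x_out = rng.uniform(4.0, 6.0, size=1000)   # outside the training range

print(f"in-distribution MSE:     {mse_on(x_in):.4f}")   # small
print(f"out-of-distribution MSE: {mse_on(x_out):.4f}")  # typically far larger
```

The fit looks excellent on held-out points drawn from the training range, yet fails badly just outside it; the inductive biases the paper discusses are aimed at exactly this kind of gap.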

The researchers discuss inductive biases based on higher-level cognition, declarative knowledge of causal dependencies, biological inspiration, and the characterization of higher-level cognition. They leverage the System 1 and System 2 dichotomy presented in the book Thinking, Fast and Slow by Daniel Kahneman. Here, System 1 roughly corresponds to what current deep learning already does well: intuitive, fast, automatic processing anchored in sensory perception. System 2, by contrast, denotes processing that is rational, sequential, slow, logical, conscious, and expressible in language.

The team suggests that DL models that tackle System 2 tasks while using System 1 abilities as their computational workhorse will handle dynamic conditions efficiently. In other words, such models are more likely to learn to reason and act the way humans do in changing situations.
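As a minimal sketch of this idea, in the spirit of the modular, attention-based architectures this line of work discusses (such as Recurrent Independent Mechanisms), one can route a fast encoder's output through a small bank of specialist modules via sparse attention. All names and sizes below are illustrative assumptions, not the authors' code:

```python
import torch
import torch.nn as nn

class ModularRouter(nn.Module):
    """Toy System-1/System-2 split: a fast encoder ("System 1") produces a
    representation, and sparse attention routes it to one of several
    specialist modules (a crude stand-in for deliberate "System 2" steps)."""

    def __init__(self, in_dim: int = 16, hidden: int = 32, n_modules: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.specialists = nn.ModuleList(
            nn.Linear(hidden, hidden) for _ in range(n_modules)
        )
        self.scorer = nn.Linear(hidden, n_modules)  # module-relevance scores

    def forward(self, x: torch.Tensor):
        h = self.encoder(x)                      # fast, automatic perception
        chosen = self.scorer(h).argmax(dim=-1)   # hard, sparse module choice
        out = torch.stack(
            [self.specialists[int(i)](h[b]) for b, i in enumerate(chosen)]
        )
        return out, chosen

model = ModularRouter()
x = torch.randn(8, 16)
out, chosen = model(x)
print(out.shape, chosen.tolist())  # torch.Size([8, 32]) and chosen module ids
```

Note that the hard argmax here is not differentiable through the selection; actual proposals in this research direction rely on soft or top-k attention so that module selection can be trained end to end.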

They have identified several open questions and paths for future DL research, including:

  • Inductive biases in new planning techniques.
  • Jointly learning a large-scale encoder and a large-scale generic model with high-level variables.
  • Computation organized over modules and data points, which may call for rethinking neural architectures down to low-level programming and hardware-design requirements.
  • Unifying the form of declarative knowledge and the inference mechanism with modularity in a single architecture.

The researchers note that the proposed ideas on inductive biases are still at an early stage of maturation. Further study is required to deepen understanding of these biases and to determine suitable ways of incorporating such priors into neural architectures and training frameworks.

Paper: https://arxiv.org/pdf/2011.15091.pdf

Tanushree Shenwai is a consulting intern at MarktechPost. She is currently pursuing her B.Tech at the Indian Institute of Technology (IIT), Bhubaneswar. She is a data science enthusiast with a keen interest in the applications of artificial intelligence across various fields. She is passionate about exploring new advancements in technology and their real-life applications.
