DeepMind Researchers Introduce Epistemic Neural Networks (ENNs) For Uncertainty Modeling In Deep Learning


Deep learning algorithms are widely used in numerous AI applications because their flexibility and computational scalability make them suitable for complex problems. However, most deep learning methods today neglect epistemic uncertainty, the uncertainty that stems from limited knowledge, which is crucial for safe and fair AI.

A new DeepMind study provides a way to quantify epistemic uncertainty, along with new perspectives on existing methods, all to improve our statistical understanding of deep learning.

The team proposes Epistemic Neural Networks (ENNs) as an interface for uncertainty modelling in deep learning. Their study suggests that ENNs can improve performance in terms of both statistical quality and computational cost. In addition, they use the KL divergence from a target distribution as a well-defined metric to evaluate ENNs.

Their research suggests that all existing approaches to uncertainty modelling in deep learning can be expressed as ENNs. Rewriting an existing method as an ENN will not by itself improve its performance. However, the team argues that it provides a new perspective on neural networks' potential as computational tools for approximate posterior inference. They further explain that a common vocabulary makes it easier to clearly understand and compare different but related approaches to posterior approximation. Constructing a better ENN then becomes a matter of designing better loss functions, optimization algorithms, or network architectures.

The proposed ENNs produce predictions from an input, parameters, and an epistemic index. They are intended to address the hard problem of posterior inference in Bayesian neural networks (BNNs) by having the network learn a posterior distribution through the epistemic index input.
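To make the interface concrete, here is a minimal sketch of the idea, not DeepMind's implementation: a toy network whose output depends on an input, fixed parameters, and a sampled epistemic index `z`, so that different draws of `z` yield different plausible predictions. All names and architecture choices here are hypothetical.

```python
import numpy as np

def enn_predict(x, params, z):
    """Hypothetical ENN forward pass: the output is a function of the
    input x, the learned parameters, and an epistemic index z."""
    W1, b1, W2, b2 = params
    # Concatenating the epistemic index to the input lets different draws
    # of z produce different plausible functions, expressing uncertainty.
    h = np.maximum(0.0, np.concatenate([x, z]) @ W1 + b1)  # ReLU hidden layer
    return h @ W2 + b2

rng = np.random.default_rng(0)
d_in, d_z, d_hid = 3, 2, 16
params = (rng.normal(size=(d_in + d_z, d_hid)), np.zeros(d_hid),
          rng.normal(size=(d_hid, 1)), np.zeros(1))

x = rng.normal(size=d_in)
# Sampling many epistemic indices induces a distribution over predictions;
# the spread of that distribution reflects epistemic uncertainty.
preds = np.array([enn_predict(x, params, rng.normal(size=d_z))
                  for _ in range(100)])
print(preds.mean(), preds.std())
```

A conventional point-estimate network corresponds to the special case where the output ignores `z` entirely, which is one way to see how existing methods fit the same interface.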

To analyze ENNs, the team accounted for computational constraints during both training and evaluation. They then evaluate ENN performance by the quality of its posterior approximation, using the KL divergence from a target distribution as the metric.
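The sample-based flavor of that metric can be sketched as follows. This is an illustrative simplification, not the paper's procedure: we fit a univariate Gaussian to samples from a hypothetical target posterior and to samples of ENN predictions, then evaluate the KL divergence between the two fits in closed form.

```python
import numpy as np

def gaussian_kl(mu_p, var_p, mu_q, var_q):
    """Closed-form KL(p || q) between two univariate Gaussians."""
    return 0.5 * (np.log(var_q / var_p)
                  + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

rng = np.random.default_rng(1)
# Samples from a hypothetical target posterior and from an ENN's predictions.
target_samples = rng.normal(loc=0.0, scale=1.0, size=10_000)
enn_samples = rng.normal(loc=0.1, scale=1.2, size=10_000)

# Sample-based approximation: fit a Gaussian to each set of samples,
# then evaluate the divergence analytically. A small KL means the ENN's
# predictive distribution is close to the target posterior.
kl = gaussian_kl(target_samples.mean(), target_samples.var(),
                 enn_samples.mean(), enn_samples.var())
print(kl)
```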


Further, they introduce a practical testbed for assessing ENN quality in a computationally efficient way. The researchers define a target posterior in terms of the neural network Gaussian process (NNGP), compute the target NNGP posterior using the neural tangents library, and quantify the KL divergence of ENN predictions using a sample-based approximation.
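As a rough illustration of what an NNGP target posterior looks like, the sketch below uses the known closed-form NNGP kernel of a one-hidden-layer ReLU network (the order-1 arc-cosine kernel) in place of the neural tangents library, and computes an exact Gaussian-process posterior under it. The data and noise level are made up for illustration.

```python
import numpy as np

def nngp_relu_kernel(X1, X2):
    """NNGP kernel of a one-hidden-layer ReLU network (arc-cosine kernel
    of order 1); stands in here for the neural tangents computation."""
    norms1 = np.linalg.norm(X1, axis=1)
    norms2 = np.linalg.norm(X2, axis=1)
    cos = np.clip((X1 @ X2.T) / np.outer(norms1, norms2), -1.0, 1.0)
    theta = np.arccos(cos)
    return (np.outer(norms1, norms2)
            * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / (2 * np.pi))

rng = np.random.default_rng(2)
X_train, y_train = rng.normal(size=(20, 3)), rng.normal(size=20)
X_test = rng.normal(size=(5, 3))
noise = 0.1

# Exact GP posterior under the NNGP kernel: this plays the role of the
# "target posterior" that sampled ENN predictions are compared against.
K = nngp_relu_kernel(X_train, X_train) + noise * np.eye(20)
K_star = nngp_relu_kernel(X_test, X_train)
post_mean = K_star @ np.linalg.solve(K, y_train)
post_cov = (nngp_relu_kernel(X_test, X_test)
            - K_star @ np.linalg.solve(K, K_star.T))
print(post_mean.shape, post_cov.shape)
```

Because this posterior is available in closed form, comparing an ENN's sampled predictions against it gives a well-defined score without expensive ground-truth inference.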

This computational testbed is based on a generative model for the underlying posterior inference problem, which protects it from some of the risks of overfitting to a specific dataset: even if researchers fine-tune parameters for each challenge, the generative model can continually produce fresh streams of test data. The testbed poses a small, sanitized problem, making it possible to objectively assess the "right" answer while maintaining complete control over the problem definition.

By evaluating the proposed approach against multiple benchmarks, the team demonstrates that the proposed metrics are robust to the choice of kernel and that ENNs can offer orders-of-magnitude savings in computation. Furthermore, a Gaussian-process bandit experiment shows that testbed performance is highly correlated with performance on sequential decision problems.


Combined with the practical testbed, ENNs open wide scope for further research and mark a step toward realistic uncertainty estimation in large, complex deep learning systems.


