Researchers Develop Data‑Driven Discovery of Green’s Functions With Human‑Understandable Deep Learning

This research summary is based on the paper 'Data‑driven discovery of Green’s functions with human‑understandable deep learning'


Mathematicians take data from natural systems and then use trained neural networks to try to deduce the underlying mathematical equations in the new and rapidly growing subject of partial differential equation (PDE) learning. Researchers from the University of Oxford advance PDE learning with a novel “rational” neural network that reveals its findings in a form mathematicians can understand: through Green’s functions, the kernel of the inverse of a linear differential operator.

Deep learning has the potential to change science and technology by exposing its findings in a way that humans can understand. The researchers created a data-driven approach that forms a human–machine collaboration to speed up scientific discovery, a step toward the day when deep learning can help scientists understand natural phenomena such as weather, climate change, fluid dynamics, and genomics. Rational neural networks were trained to learn the Green’s functions of hidden linear partial differential equations by collecting physical system responses to excitations drawn from a Gaussian process. The learned Green’s functions show features that humans can readily interpret, such as linear conservation laws and symmetries, shock and singularity locations, boundary effects, and dominant modes.

Deep learning (DL) has the potential to be a scientific tool for identifying elusive patterns in the natural and technological worlds. These patterns hint at previously unknown partial differential equations (PDEs) that govern biological, fluid-dynamical, and physical phenomena. PDE discovery, PDE learning, and symbolic regression have recently converged as promising ways of applying machine learning to scientific questions. These methods aim either to learn the operator that maps excitations to system responses or to determine the coefficients of a PDE model.
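The data-collection setup behind this kind of PDE learning can be sketched in a few lines: draw smooth random excitations from a Gaussian process and record the responses of a hidden linear operator. The sketch below is an illustrative reconstruction, not the paper's code; the 1D Poisson operator, the grid sizes, and all names are assumptions made for the example.

```python
import numpy as np

# Illustrative PDE-learning training data: excitations f_j drawn from a
# Gaussian process, responses u_j produced by a "hidden" linear operator.
# Here the hidden operator is -u'' = f on [0, 1] with u(0) = u(1) = 0,
# discretized by finite differences (all choices are assumptions).

n = 100                              # interior grid points
x = np.linspace(0, 1, n + 2)[1:-1]   # interior nodes
h = x[1] - x[0]

# Squared-exponential GP kernel gives smooth random forcings.
ell = 0.1
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * ell**2))
L = np.linalg.cholesky(K + 1e-8 * np.eye(n))  # jitter for stability

# Second-difference matrix representing -u'' with Dirichlet conditions.
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

rng = np.random.default_rng(0)
forcings = L @ rng.standard_normal((n, 20))   # 20 GP-sampled excitations
responses = np.linalg.solve(A, forcings)      # corresponding responses u_j

# (f_j, u_j) pairs like these are the data from which a network can
# infer the Green's function of the hidden operator.
```

In the paper's setting the operator is unknown; here it is simulated only so that the excitation–response pairs have a concrete form.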

According to the research team, neural networks, a subtype of machine learning, are inspired by the simple mechanism of neurons and synapses in animal brains: inputs and outputs. In artificial neural networks, the role of neurons is played by “activation functions,” which collect information from other neurons. Synapses, known as weights, connect the neurons and convey signals to the next neuron.
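The neuron-and-synapse picture above reduces to a single line of arithmetic: weighted inputs are summed and passed through an activation function. This is a generic sketch of that idea; the function name and numerical values are invented for illustration.

```python
import numpy as np

def neuron(inputs, weights, bias, activation=np.tanh):
    """One artificial neuron (illustrative): the weights play the role of
    synapses, and the activation function is the neuron's response."""
    return activation(np.dot(weights, inputs) + bias)

# Example: three inputs feeding a single neuron.
out = neuron(np.array([0.5, -1.0, 2.0]),
             np.array([0.1, 0.2, 0.3]),
             bias=0.0)
```

A network stacks many such neurons in layers; training adjusts the weights and, in rational networks, the activation functions themselves.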

Deep Learning of Green’s Functions

According to the researchers, Green’s functions have been studied by mathematicians for about 200 years and are frequently applied to solve a differential equation quickly. Earls proposed reversing that use: employing Green’s functions to understand, rather than solve, a differential equation.
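To make the classical “solve” direction concrete, here is a standard textbook Green's function (not one from the paper): for -u'' = f on [0, 1] with zero boundary values, G(x, s) = x(1 - s) when x ≤ s and s(1 - x) when x ≥ s, and the solution is recovered by integrating G against the forcing.

```python
import numpy as np

def G(x, s):
    """Green's function of -u'' = f on [0, 1] with u(0) = u(1) = 0
    (a standard textbook example, not specific to the paper)."""
    return np.where(x <= s, x * (1 - s), s * (1 - x))

def solve_with_green(f, x, n=4000):
    """u(x) = integral of G(x, s) f(s) ds, via the midpoint rule."""
    s = (np.arange(n) + 0.5) / n
    return (G(x[:, None], s[None, :]) * f(s)).sum(axis=1) / n

# For f = 1 the exact solution is u(x) = x(1 - x) / 2.
x = np.linspace(0.0, 1.0, 11)
u = solve_with_green(lambda s: np.ones_like(s), x)
```

Once G is known, any forcing f can be solved by a single integral; the paper runs this logic in reverse, learning G from many (f, u) pairs.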

To do this, the researchers developed a tailored rational neural network with more sophisticated activation functions that can capture the extreme physical behavior of Green’s functions. One of the researchers introduced rational neural networks in a separate study published in 2021. “Just as there are different types of neurons in different sections of the brain, they aren’t all the same,” the researchers said. “In a neural network, that choice corresponds to picking the activation function.”
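A rational activation function is a ratio of two polynomials whose coefficients are trained along with the network weights, unlike fixed activations such as ReLU or tanh. The sketch below assumes a low-degree rational of type (3, 2); the particular coefficients are arbitrary placeholders, not values from the study.

```python
import numpy as np

def rational_activation(x, p, q):
    """Rational activation r(x) = P(x) / Q(x): both polynomials have
    trainable coefficients (the values below are only illustrative)."""
    return np.polyval(p, x) / np.polyval(q, x)

# Cubic numerator over quadratic denominator (type (3, 2)).
p = np.array([1.0, 0.5, 1.0, 0.0])   # x^3 + 0.5 x^2 + x
q = np.array([1.0, 0.0, 1.0])        # x^2 + 1, no real roots, so no poles
x = np.linspace(-3.0, 3.0, 7)
y = rational_activation(x, p, q)
```

Because the coefficients of P and Q are learned, the network can adapt the shape of its nonlinearity to the singular or boundary behavior a Green's function exhibits.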

Because researchers can choose different activation functions, rational neural networks have the potential to be more versatile than traditional neural networks. “The activation function can be altered to genuinely capture what is wanted from a Green’s function,” the researchers explained. “For a natural system, the machine learns the Green’s function, but it has no idea what it signifies and is unable to comprehend it.”
