Machine learning systems rely on neural networks that are often too large and complex to yield the simplest explanation for an event or observation, falling short of the principle known as parsimony.
In the physical sciences, most machine learning models do not capture the underlying physics of the system in question, such as its constraints or symmetries, which limits their ability to generalize. Moreover, most machine learning models are hard to interpret: they can neither learn physics nor justify their predictions. In many fields these shortcomings are offset by large amounts of data, but that is not always practical in areas like materials science, where data acquisition is costly and time-consuming.
Recent research by Purdue University scientists demonstrates a technique for uncovering physical principles from data with machine learning. They found that enforcing parsimony on artificial neural networks through stochastic optimization lets the networks balance simplicity against accuracy and extract meaningful physics from data.
Learning new physics and justifying predictions remain challenges for machine learning models. With the approach created by the Purdue researchers, machine learning has now been used to recover Newton's second law of motion and Lindemann's law for predicting the melting temperature of materials.
To this end, the researchers propose parsimonious neural networks (PNNs), which aim to strike a balance between parsimony and accuracy when describing training data. The PNN technique uses neural networks to represent complex function compositions and genetic algorithms to enforce parsimony.
The idea behind this approach is that imposing parsimony (e.g., limiting the number of adjustable parameters and favoring linear relationships between variables) will push the resulting model to be easily interpretable and to capture the problem's symmetries.
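The trade-off described above can be sketched with a toy genetic algorithm. This is not the Purdue implementation: the model (a simple line fit), the penalty weight, and all names below are invented for illustration. The key idea is a fitness score that charges the model both for prediction error and for every parameter it uses.

```python
import random

# Toy training data generated from y = 2x + 1
data = [(x, 2 * x + 1) for x in range(10)]

def fitness(params):
    """Lower is better: squared prediction error plus a parsimony
    penalty that charges for every non-zero parameter (weight 0.1
    is an arbitrary choice for this sketch)."""
    a, b = params
    error = sum((a * x + b - y) ** 2 for x, y in data)
    parsimony_penalty = sum(1.0 for p in params if abs(p) > 1e-6)
    return error + 0.1 * parsimony_penalty

def evolve(generations=200, pop_size=30, seed=0):
    rng = random.Random(seed)
    pop = [(rng.uniform(-5, 5), rng.uniform(-5, 5)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]  # keep the fittest half
        children = [
            (a + rng.gauss(0, 0.1), b + rng.gauss(0, 0.1))  # mutate
            for a, b in rng.choices(survivors, k=pop_size - len(survivors))
        ]
        pop = survivors + children
    return min(pop, key=fitness)

a, b = evolve()
```

Because the fittest individuals survive unchanged each generation, the best candidate steadily drifts toward the accurate, low-parameter solution near a = 2, b = 1.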
The researchers used data from published articles on Newton's second law of motion and the Lindemann melting law to train the parsimonious neural networks.
Compared to a flexible feed-forward neural network, the resulting PNN lends itself to interpretation (like Newton's laws) and, when applied iteratively, provides a substantially more accurate description of the particle dynamics. The resulting PNNs are energy-conserving and time-reversible, meaning they learn non-trivial symmetries that are implicit in the data but never explicitly presented.
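Time reversibility, the kind of implicit symmetry mentioned above, can be checked numerically. The sketch below uses the classic velocity Verlet integrator on a harmonic oscillator as a stand-in (it is a well-known time-reversible scheme, not the update rule the PNNs learned): integrating forward and then backward with a negated time step recovers the initial state.

```python
def accel(x):
    """Harmonic oscillator with unit mass and spring constant."""
    return -x

def verlet_step(x, v, dt):
    """One velocity Verlet step: an exactly time-reversible update."""
    a = accel(x)
    x_new = x + v * dt + 0.5 * a * dt * dt
    v_new = v + 0.5 * (a + accel(x_new)) * dt
    return x_new, v_new

x, v, dt = 1.0, 0.0, 0.01
for _ in range(1000):          # integrate forward in time
    x, v = verlet_step(x, v, dt)
for _ in range(1000):          # integrate backward (negated time step)
    x, v = verlet_step(x, v, -dt)
# time reversibility: x and v return to 1.0 and 0.0 (up to rounding)
```

A generic learned update rule would drift under this forward-backward round trip; a time-reversible one, like those the PNNs reportedly discovered, retraces its trajectory exactly.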
This approach is highly versatile and can be used even in circumstances where no underlying differential equation is known. The team first used PNNs to learn the equations of motion governing the Hamiltonian dynamics of a particle in a highly nonlinear external potential, both with and without friction.
Building on these findings, the team also developed a tool that other researchers can use to obtain simpler, more interpretable machine learning models.