Researchers at Lawrence Livermore National Laboratory (LLNL) Develop a Novel Deep Learning Framework for Symbolic Regression

Scientists at Lawrence Livermore National Laboratory (LLNL) have developed a novel framework, along with an accompanying visualization tool, that uses deep reinforcement learning to solve symbolic regression problems, outperforming baseline methods on benchmark problems.

Their paper was recently accepted as an oral presentation at the International Conference on Learning Representations (ICLR 2021). In it, the researchers describe applying deep reinforcement learning to discrete optimization: problems whose discrete “building blocks” must be combined in a particular order or configuration to optimize a desired property. They focus on one such problem, symbolic regression, which searches for short mathematical expressions that fit data gathered from an experiment, with the aim of discovering the underlying equations or dynamics of a physical process.
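To make the task concrete, here is a toy symbolic regression search, sketched as brute-force enumeration rather than the paper's reinforcement learning approach: small candidate expressions over a fixed token set are scored against data sampled from a hidden ground-truth function. The token set and data are illustrative assumptions, not taken from the paper.

```python
import itertools
import numpy as np

# Toy symbolic regression by exhaustive search (illustrative only; the
# LLNL framework searches with deep reinforcement learning rather than
# brute force). Small expressions are enumerated over a fixed token set
# and scored against data sampled from a hidden ground-truth function.

rng = np.random.default_rng(0)
x = rng.uniform(-2.0, 2.0, size=100)
y = x**2 + x  # hidden ground truth the search should recover

# Leaves are x itself or a unary function of x; a binary operator joins two leaves.
leaves = {"x": lambda v: v, "sin(x)": np.sin, "cos(x)": np.cos, "square(x)": np.square}
binary = {"add": np.add, "sub": np.subtract, "mul": np.multiply}

best_err, best_expr = float("inf"), None
for fname, f in binary.items():
    for (gname, g), (hname, h) in itertools.product(leaves.items(), repeat=2):
        err = float(np.mean((f(g(x), h(x)) - y) ** 2))
        if err < best_err:
            best_err, best_expr = err, f"{fname}({gname}, {hname})"

print(best_expr, best_err)  # add(x, square(x)) recovers the ground truth exactly
```

Even this tiny token set yields dozens of candidates; realistic vocabularies make the space combinatorially huge, which is why a guided search is needed.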

According to the paper’s lead author, Brenden Petersen, discrete optimization is challenging because it lacks gradients. The team has shown that deep reinforcement learning is a powerful way to search spaces of discrete objects efficiently; in effect, they propose using large models to explore the space of small models.
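The core idea of getting gradients into a gradient-free discrete search can be sketched with a minimal policy-gradient (REINFORCE) loop. This is an illustrative simplification, not LLNL's architecture, which samples expression token sequences with a recurrent network and uses a risk-seeking policy gradient; here the "policy" just picks among a handful of hand-listed candidate expressions, and the reward is a bounded fitness score.

```python
import numpy as np

# Toy policy-gradient search over a discrete space (illustrative sketch,
# not the paper's method). A categorical policy samples one of several
# candidate expressions; REINFORCE shifts probability mass toward
# candidates that fit the data, so gradient information guides the
# search even though the objects themselves are discrete.

rng = np.random.default_rng(0)
x = rng.uniform(-2.0, 2.0, 200)
y = x**2 + x  # hidden ground truth

candidates = [lambda v: v, lambda v: v**2, lambda v: v**2 + v, lambda v: np.sin(v)]
logits = np.zeros(len(candidates))  # the policy's parameters

def reward(i):
    # Bounded fitness: 1 / (1 + MSE), a common choice in symbolic regression.
    return 1.0 / (1.0 + float(np.mean((candidates[i](x) - y) ** 2)))

lr = 0.5
for _ in range(300):
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    i = rng.choice(len(candidates), p=probs)
    # REINFORCE: grad of log pi(i) w.r.t. logits is onehot(i) - probs.
    grad = -probs
    grad[i] += 1.0
    logits += lr * reward(i) * grad

best = int(np.argmax(logits))
print(best)  # the policy concentrates on v**2 + v, the true expression
```

The policy is updated with ordinary gradients on its parameters, while the sampled objects stay discrete; that separation is what makes the approach applicable to other discrete design spaces as well.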

Symbolic regression is usually approached in machine learning and artificial intelligence with evolutionary algorithms. The primary issue with evolutionary approaches is that they aren’t principled and don’t scale well. LLNL’s deep learning approach is unique in that it is theory-backed and driven by gradient information, making it more understandable and useful for scientists.

At the core of the approach is a neural network that learns the landscape of discrete objects. It does so by holding a memory of the search process and learning how these objects are distributed across a massive space, which lets it decide on a good direction to follow. This combination of memory and direction is usually missing from conventional methods.

According to co-author Santiago, the deep symbolic regression (DSR) framework allows a wide range of constraints to be considered, considerably reducing the size of the search space. The team tested the algorithm on a set of symbolic regression benchmark problems, where it outperformed several established baselines, including gold-standard commercial software.
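A small counting sketch shows why constraints matter. The rule below (no trig function nested directly inside another trig function) is a hypothetical example in the spirit of the paper's constraints, applied here to chains of unary operators f1(f2(...(x))); the operator set and depth are assumptions for illustration.

```python
import itertools

# Illustrative count of how a constraint prunes a discrete search space.
# Each tuple represents a chain of unary operators applied to x, e.g.
# ("sin", "exp") stands for sin(exp(x)). The (hypothetical) constraint
# forbids a trig function directly inside another trig function.

ops = ["sin", "cos", "exp", "sqrt", "square"]
trig = {"sin", "cos"}
max_depth = 5

def violates(chain):
    # Adjacent entries mean direct nesting: chain[k] is applied to chain[k+1](...).
    return any(a in trig and b in trig for a, b in zip(chain, chain[1:]))

total = allowed = 0
for d in range(1, max_depth + 1):
    for chain in itertools.product(ops, repeat=d):
        total += 1
        if not violates(chain):
            allowed += 1

print(total, allowed)  # 3905 candidates shrink to 2297 under this one rule
```

One local rule removes over 40% of the candidates at depth 5, and the savings compound as more constraints are stacked and the depth grows.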

The team has tested the framework on real-world physics problems such as thin-film compression, with promising results. According to the researchers, the algorithm is widely applicable and is not limited to symbolic regression: they have even begun applying it to the design of novel amino acid sequences with improved binding to pathogens for vaccine design. The team has also created an interactive visualization app for the algorithm that physicists can use to solve real-world problems.


