Researchers at MIT Solve a Differential Equation Behind the Interaction of Two Neurons Through Synapses to Unlock a New Type of Fast and Efficient Artificial Intelligence (AI) Algorithm

Continuous-time neural networks are a subset of machine learning systems capable of representation learning for spatiotemporal decision-making tasks. These models are frequently described by continuous differential equations (DEs). When run on computers, however, numerical DE solvers limit their expressive power. This restriction has severely hampered the scaling and understanding of many natural physical processes, such as the dynamics of neural systems.
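To see where the solver cost comes from, here is a minimal sketch of simulating a toy continuous-time neuron model with explicit Euler steps. The dynamics, parameter names, and step counts are all illustrative assumptions, not the researchers' actual model; the point is the inner per-step loop that a numerical solver must run for every input.

```python
import numpy as np

def euler_neural_ode(x0, inputs, W, b, tau=1.0, dt=0.05, steps_per_input=20):
    """Integrate a toy continuous-time neuron model with explicit Euler steps.

    Hypothetical dynamics (for illustration only):
        dx/dt = -x / tau + tanh(W @ [x; I] + b)
    Each observed input I is held constant while the solver takes
    `steps_per_input` small steps of size `dt` -- this inner loop is
    the per-step computational cost that a closed-form model avoids.
    """
    x = np.asarray(x0, dtype=float)
    trajectory = []
    for I in inputs:
        for _ in range(steps_per_input):
            z = np.concatenate([x, I])
            dx = -x / tau + np.tanh(W @ z + b)
            x = x + dt * dx  # one explicit Euler step
        trajectory.append(x.copy())
    return np.stack(trajectory)
```

Shrinking `dt` makes the approximation more faithful but multiplies the number of steps, which is exactly the accuracy/speed trade-off that motivates a closed-form solution.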

Inspired by the brains of microscopic creatures, MIT researchers have developed “liquid” neural networks: fluid, robust machine learning models that can learn and adapt to changing conditions. These methods can be used in safety-critical tasks such as driving and flying.

However, as the number of neurons and synapses in the model grows, the underlying mathematics becomes more difficult to solve, and the processing cost of the model rises. 

Neural network systems based on differential equations are difficult to solve and to scale to large numbers of parameters. Complex neural networks can be built from a physical description of cell interactions together with their activation thresholds. This framework can serve as a foundation for future embedded intelligence systems, since improved representation learning helps such systems tackle more complex machine learning tasks.

The same group of researchers has now found a way past this roadblock by solving the differential equation governing the interaction of two neurons through synapses. Building on this, they introduce new machine learning models called “Closed-form Continuous-time” (CfC) networks, which retain the attractive qualities of liquid networks while eliminating the need for numerical integration.
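The CfC paper describes the state at time t as a blend of small networks combined by a time-dependent sigmoid gate, computed in one shot rather than by stepping a solver. The sketch below follows that gating structure, but the layer shapes and parameter names (`Wf`, `Wg`, `Wh`, and their biases) are simplified assumptions, not the authors' exact architecture.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cfc_cell(x, I, t, params):
    """One simplified closed-form continuous-time (CfC) update.

    Three small networks f, g, h (here single tanh layers, an
    illustrative simplification) are blended by a time-dependent
    sigmoid gate, so the state at time t is computed directly:

        x(t) = sigma(-f * t) * g + (1 - sigma(-f * t)) * h

    No ODE solver steps are needed between observations.
    """
    z = np.concatenate([x, I])
    f = np.tanh(params["Wf"] @ z + params["bf"])
    g = np.tanh(params["Wg"] @ z + params["bg"])
    h = np.tanh(params["Wh"] @ z + params["bh"])
    gate = sigmoid(-f * t)  # gate decays as elapsed time t grows
    return gate * g + (1.0 - gate) * h
```

Because each update is a fixed, closed-form expression, the cost per observation is constant regardless of how much time elapses between inputs, which is what makes these models fast on irregularly sampled time series.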

These models are as rapid and scalable as liquid neural networks and share their causal, robust, and explainable features. Because these networks are compact and continue to adapt after training, they can be used for any task that requires understanding data over time, where many conventional models are rigid.

This paves the way for reliable machine learning in mission-critical settings. Not only is there no longer any need to solve the differential equation step by step, but the calculation time is also greatly reduced.

The models significantly outperformed their state-of-the-art counterparts in various tasks, including recognizing human activities from motion sensors, modeling the physical dynamics of a simulated walker robot, and processing events in sequential images. For instance, the models were 220 times faster than those counterparts on a task that involved predicting mortality for a group of 8,000 patients.


The closed-form solution closely approximates the actual system dynamics, so substituting it into the network yields the same behavior. The network can therefore be solved with even fewer neurons, making it computationally cheaper and faster.

Time series (sequences of events over time) serve as inputs to these models, which can then be used for classification, vehicle control, humanoid robot motion, or even financial and medical forecasting. Many settings can be tuned to improve the system’s accuracy, resilience, performance, and, most crucially, calculation speed, though these improvements do not always come without a cost.

Finding a solution to this equation will have far-reaching consequences for studying both natural and artificial intelligence. The findings of this research demonstrate how boosting the efficiency of computing for this category of neural networks can open up new possibilities for use in fields like safety-critical commercial and defense systems.

Check out the paper, code, dataset, and reference article. All credit for this research goes to the researchers on this project.

Tanushree Shenwai is a consulting intern at MarktechPost. She is currently pursuing her B.Tech from the Indian Institute of Technology (IIT), Bhubaneswar. She is a Data Science enthusiast and has a keen interest in the scope of application of artificial intelligence in various fields. She is passionate about exploring new advancements in technologies and their real-life applications.
