Using Algorithms Derived From Neuroscience Research, Numenta Demonstrates 50x Speed Improvements on Deep Learning Networks

Source: https://numenta.com/assets/pdf/research-publications/papers/Sparsity-Enables-50x-Performance-Acceleration-Deep-Learning-Networks.pdf

Numenta is a machine intelligence company focused on developing a cohesive theory, core software, technology, and applications based on the principles of the neocortex. Its scientists and engineers work on one of the most significant scientific challenges there is: understanding how the brain works. Numenta recently announced that it had achieved dramatic performance improvements on inference tasks in deep learning networks without any loss in accuracy.

Numenta achieved these gains by applying a principle of the brain called sparsity. It compared sparse and dense networks by running its algorithms on Xilinx FPGAs (Field-Programmable Gate Arrays) on a speech-recognition task using the Google Speech Commands (GSC) dataset, measuring throughput as the number of words processed per second. The results show that sparse networks yield more than 50x acceleration over dense networks on a Xilinx Alveo board.


Numenta also demonstrated the GSC network running on a Xilinx Zynq chip (a smaller chip on which dense networks are too inefficient to run), enabling a new set of applications based on low-cost, low-power solutions. Measured in words per second per watt, the sparse networks use remarkably less power than the most efficient dense network.

Fig: Numenta’s sparse network makes two modifications to a standard deep learning layer, utilizing both sparse weights and sparse activations. The end result is a sparse network that more closely mimics the brain. (Graphic: Business Wire)
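To make the two modifications concrete, here is a minimal NumPy sketch of one layer with sparse weights (most entries pruned to zero) and sparse activations (only the top-k units stay active). This is an illustration, not Numenta's actual implementation: the magnitude-based pruning, the 10% weight density, and the k-winners count are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_layer(x, weights, weight_density=0.1, k=8):
    """Forward pass through one layer with both sparsity modifications:
    sparse weights and sparse ("k-winners") activations."""
    # Sparse weights: keep only the largest-magnitude fraction, zero the rest.
    n_keep = max(1, int(weight_density * weights.size))
    threshold = np.sort(np.abs(weights), axis=None)[-n_keep]
    w_sparse = np.where(np.abs(weights) >= threshold, weights, 0.0)

    # Standard dense-layer computation on the pruned weights.
    y = x @ w_sparse

    # Sparse activations: only the k largest outputs survive (with ReLU),
    # so downstream layers see mostly zeros.
    winners = np.argsort(y)[-k:]
    out = np.zeros_like(y)
    out[winners] = np.maximum(y[winners], 0.0)
    return out, w_sparse

x = rng.standard_normal(64)
w = rng.standard_normal((64, 128))
out, w_sparse = sparse_layer(x, w)
print(np.count_nonzero(out))                  # at most k units active
print(np.count_nonzero(w_sparse) / w.size)    # ~10% of weights remain
```

In hardware terms, the zeros are what the FPGA exploits: multiplications by zero weights or zero activations can be skipped entirely, which is the source of the reported speedups.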

The demonstration shows that sparsity can achieve better acceleration and power efficiency across a range of deep learning platforms and network configurations while maintaining competitive accuracy.

This approach may help in:

  • Implementing larger and more complex networks with the same resources.
  • Running more copies of a network on given resources.
  • Deploying deep learning networks on resource-constrained edge platforms.
  • Large energy savings and cost reductions through scaling efficiencies.

“The brain offers the best guide for achieving these advances in the future. The results announced by Numenta demonstrate great promise by applying its cortical theory to achieve significant performance improvements,” says Priyadarshini Panda, Assistant Professor of Electrical Engineering at Yale University.

Source: https://www.businesswire.com/news/home/20201110005393/en/Numenta-Demonstrates-50x-Speed-Improvements-on-Deep-Learning-Networks-Using-Brain-Derived-Algorithms

White Paper: https://numenta.com/assets/pdf/research-publications/papers/Sparsity-Enables-50x-Performance-Acceleration-Deep-Learning-Networks.pdf

Website: https://numenta.com/press/2020/11/10/Numenta-Demonstrates-50x-Performance-Acceleration-Deep-Learning-Networks
