Google Introduces RigL Algorithm For Training Sparse Neural Networks

Source: Paper "Rigging the Lottery: Making All Tickets Winners" (https://arxiv.org/pdf/1911.11134.pdf)

Most AI models these days are based on artificial neural networks: systems of artificial neurons linked by weighted connections. These connections map inputs to outputs, passing data through mathematical operations to produce a result. A network contains many such pathways, but in many AI models only a fraction of them contribute meaningfully; the rest go unused while still consuming memory and compute, which can slow the model down.

To overcome this problem, Google recently released RigL, an algorithm that can make neural-network-based AI models more efficient. It achieves this by eliminating unhelpful connections and growing more useful ones, making strategic tweaks to the neural network's structure during the model's training phase.
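RigL's drop-and-grow update can be sketched roughly as: periodically deactivate the active weights with the smallest magnitudes, then activate the same number of new connections where the dense gradient magnitude is largest. The NumPy sketch below is a simplified illustration, not the paper's implementation; the function name and the `drop_frac` parameter are assumptions, and it omits the cosine drop-fraction schedule and per-layer sparsity distribution described in the paper.

```python
import numpy as np

def rigl_update(weights, grads, mask, drop_frac=0.3):
    """One simplified RigL connectivity update for a single layer.

    Drops the lowest-magnitude active weights, then regrows the same
    number of connections at the previously inactive positions with
    the largest (dense) gradient magnitudes.
    """
    # Flat views into the arrays (in-place edits propagate).
    w, g, m = weights.ravel(), grads.ravel(), mask.ravel()
    active = np.flatnonzero(m)
    inactive = np.flatnonzero(m == 0)   # snapshot taken before dropping
    n_drop = min(int(drop_frac * active.size), inactive.size)

    # Drop: deactivate the smallest-magnitude active weights.
    drop_idx = active[np.argsort(np.abs(w[active]))[:n_drop]]
    m[drop_idx] = 0
    w[drop_idx] = 0.0

    # Grow: activate inactive positions with the largest gradients.
    grow_idx = inactive[np.argsort(-np.abs(g[inactive]))[:n_drop]]
    m[grow_idx] = 1
    w[grow_idx] = 0.0                   # new connections start at zero
    return weights, mask
```

Because the number of grown connections equals the number dropped, the layer's overall sparsity stays constant throughout training, which is what lets RigL train a sparse network end to end instead of pruning a dense one afterwards.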


To test RigL, researchers used an image-processing model to analyze images of different characters. During the model's training phase, RigL observed that only the foreground pixels needed to be processed, while the background could be skipped. It therefore removed the connections used for processing background pixels and grew new, more useful ones in their place.

Google claims that even though RigL removes connections, it does not compromise the model's accuracy. In one test, Google researchers used RigL to remove 80% of the ResNet-50 model's connections, and the resulting sparse network achieved accuracy comparable to that of the original. In another experiment, the researchers removed 99% of ResNet-50's connections and still achieved a top-1 accuracy of 70.55%.


Google’s RigL is not the only attempt to compress neural networks to make them more efficient. Many methods exist for this, but they often sacrifice model accuracy. Google claims RigL achieves higher accuracy while requiring fewer FLOPs than three of the most successful alternative techniques to date. RigL therefore delivers accuracy and efficiency simultaneously.

Source:https://ai.googleblog.com/2020/09/improving-sparse-training-with-rigl.html

Paper: https://arxiv.org/pdf/1911.11134.pdf

Github: https://github.com/google-research/rigl
