While today’s deep neural networks (DNNs) drive the deep-learning revolution in AI, choosing an appropriate complexity for a DNN remains challenging. If a network is too shallow, its predictive performance suffers; if it is too deep, it tends to overfit, and its size incurs prohibitively high compute costs. In the paper Variational Inference for Infinitely Deep Neural Networks, researchers propose the unbounded depth neural network (UDN), an infinitely deep probabilistic model whose depth adapts to the training data without an upper limit, sparing deep learning researchers and practitioners the difficult decision of fixing an architecture’s complexity in advance.
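To make the idea concrete, here is a minimal sketch, in plain NumPy, of a network with no fixed upper bound on depth. The class name, weight initialization, and per-layer output heads are illustrative assumptions for this sketch, not the paper’s actual implementation; the key idea shown is that layers are instantiated lazily, so a prediction can be read from any depth and deeper layers exist only once they are needed.

```python
import numpy as np

rng = np.random.default_rng(0)

class UnboundedDepthNet:
    """Illustrative sketch (not the paper's code): layers are created
    on demand, so depth has no fixed upper bound. Each hidden layer
    carries its own output head, so the network can produce a
    prediction from any depth."""

    def __init__(self, dim_in, dim_hidden, dim_out):
        self.dim_in, self.dim_hidden, self.dim_out = dim_in, dim_hidden, dim_out
        self.layers = []  # hidden-layer weights, grown lazily
        self.heads = []   # one output head per hidden layer

    def _ensure_depth(self, depth):
        # Lazily instantiate hidden layers up to the requested depth.
        while len(self.layers) < depth:
            fan_in = self.dim_in if not self.layers else self.dim_hidden
            self.layers.append(rng.normal(0.0, 0.1, (fan_in, self.dim_hidden)))
            self.heads.append(rng.normal(0.0, 0.1, (self.dim_hidden, self.dim_out)))

    def forward(self, x, depth):
        """Return the prediction read from the output head at `depth`."""
        self._ensure_depth(depth)
        h = x
        for W in self.layers[:depth]:
            h = np.maximum(h @ W, 0.0)  # ReLU hidden layer
        return h @ self.heads[depth - 1]

net = UnboundedDepthNet(dim_in=4, dim_hidden=8, dim_out=3)
x = rng.normal(size=(2, 4))
y_shallow = net.forward(x, depth=2)   # output after only 2 layers
y_deep = net.forward(x, depth=50)     # deeper layers created on demand
print(y_shallow.shape, len(net.layers))
```

In this sketch, reading an output from depth 2 and depth 50 uses the same network; only as many layers as the largest requested depth ever exist in memory.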
In their empirical study, the UDN outperformed existing finite and infinite models in predictive performance, reaching 99 percent accuracy on the easiest label pairs using only a few layers and 94 percent accuracy on the full dataset. On image classification, the UDN also adapted its depth to the complexity of the data, ranging from a few layers to roughly one hundred. Meanwhile, the proposed dynamic variational inference method successfully explored the unbounded parameter space with a finite truncation.
According to the researchers, the work opens up several intriguing research directions: the UDN could be applied to transformer architectures, and the unbounded variational family could be used for variational inference in other infinite model families.
In summary, the authors present the unbounded depth neural network, an infinitely deep network that can generate data from any of its hidden layers. Its posterior adapts its truncation to fit the observations. To infer it, they introduce a novel variational family and a matching variational inference algorithm, which maintain a finite but evolving set of variational parameters in order to explore the unbounded posterior space of the UDN parameters. Testing the UDN on both real and synthetic data, they find that it adapts its complexity well to the data at hand and outperforms other finite and infinite models in prediction.
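The "finite but evolving set of variational parameters" can be sketched as a variational distribution over depth whose truncation grows when the posterior pushes mass toward the deepest tracked layer. The class name, threshold, and extension rule below are hypothetical simplifications for illustration, not the paper’s algorithm:

```python
import numpy as np

class DepthPosterior:
    """Illustrative sketch (hypothetical names, not the paper's code):
    a variational distribution q(depth) with one logit per tracked
    depth. The truncation extends whenever the deepest tracked layer
    carries noticeable probability mass, so the parameter set stays
    finite but grows as the data demands."""

    def __init__(self, init_truncation=3, tail_threshold=0.05):
        self.logits = np.zeros(init_truncation)  # one parameter per depth
        self.tail_threshold = tail_threshold

    def probs(self):
        # Softmax over the currently tracked depths.
        e = np.exp(self.logits - self.logits.max())
        return e / e.sum()

    def maybe_extend(self):
        # If the deepest tracked layer holds too much mass, the
        # truncation is too tight: track one more depth.
        if self.probs()[-1] > self.tail_threshold:
            self.logits = np.append(self.logits, self.logits[-1])

q = DepthPosterior()
q.logits[:] = [0.0, 1.0, 3.0]  # pretend a gradient step favored deeper layers
before = len(q.logits)
q.maybe_extend()
print(before, len(q.logits))
```

Here the deepest tracked depth holds most of the mass after the pretend update, so the truncation grows from three to four parameters; in the actual method, the variational parameters for each layer’s weights would grow alongside the depth distribution.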
This article is written as a research summary by Marktechpost Staff based on the research paper 'Variational Inference for Infinitely Deep Neural Networks'. All credit for this research goes to the researchers on this project. Check out the paper.
Prathvik is an ML/AI research content intern at Marktechpost and a third-year undergraduate at IIT Kharagpur. He has a keen interest in machine learning and data science and is enthusiastic about learning how machine learning is applied across different fields of study.