Learning from visual observations is a fundamental yet challenging problem in Reinforcement Learning (RL). Despite algorithmic advances combined with convolutional neural networks, current methods for learning from visual observations still fall short on two fronts: (a) sample efficiency of learning, and (b) generalization to new environments.
To address this problem, a group of researchers at the University of California, Berkeley has open-sourced Reinforcement Learning with Augmented Data (‘RAD’). In a paper published on arXiv, the research team explains how ‘RAD’, a simple plug-and-play module built on data augmentation, can enhance any Reinforcement Learning (RL) algorithm.
Data augmentation techniques increase diversity in training datasets without collecting new data. ‘RAD’ achieves state-of-the-art data efficiency and performance across 15 environments from the DeepMind Control Suite. According to the research team, ‘RAD’ can improve any existing reinforcement learning algorithm, and it achieves better compute and data efficiency than Google AI’s PlaNet.
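To make the idea concrete, here is a minimal sketch of one common image augmentation for pixel-based RL: randomly cropping each observation in a batch before it is passed to the RL algorithm. This is an illustrative example in NumPy, not the authors' actual code, and the function name and sizes are assumptions chosen for the example.

```python
import numpy as np

def random_crop(imgs, out_h, out_w):
    """Randomly crop a batch of images of shape (N, H, W, C) down to
    (N, out_h, out_w, C), with an independent crop position per image.

    Illustrative sketch of the kind of augmentation applied to visual
    observations; not the RAD authors' implementation.
    """
    n, h, w, c = imgs.shape
    tops = np.random.randint(0, h - out_h + 1, size=n)
    lefts = np.random.randint(0, w - out_w + 1, size=n)
    out = np.empty((n, out_h, out_w, c), dtype=imgs.dtype)
    for i in range(n):
        out[i] = imgs[i, tops[i]:tops[i] + out_h,
                      lefts[i]:lefts[i] + out_w, :]
    return out

# Example: crop a batch of 8 RGB observations from 100x100 to 84x84
batch = np.zeros((8, 100, 100, 3), dtype=np.uint8)
cropped = random_crop(batch, 84, 84)
print(cropped.shape)  # (8, 84, 84, 3)
```

Because the augmentation happens purely on the observation side, the RL algorithm itself needs no modification, which is what makes such a module plug-and-play.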
Release platform: https://mishalaskin.github.io/rad/
Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence media platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among readers.