Learning from visual observations is a fundamental yet challenging problem in Reinforcement Learning (RL). Despite algorithmic advances combined with convolutional neural networks, current methods for learning from visual observations still fall short on two fronts: (a) sample efficiency of learning, and (b) generalization to new environments.
To address this problem, researchers at the University of California, Berkeley have open-sourced Reinforcement Learning with Augmented Data (‘RAD’). In the latest release of ‘RAD’, published on arXiv, the research team explains how this simple plug-and-play module based on augmented data can enhance any RL algorithm.
Data augmentation techniques increase the diversity of a training dataset without collecting new data. ‘RAD’ achieves state-of-the-art data efficiency and performance across 15 environments from the DeepMind Control Suite. According to the research team, ‘RAD’ can improve any existing reinforcement learning algorithm, and it achieves better compute and data efficiency than Google AI’s PlaNet.
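To illustrate the idea, the sketch below applies a random-crop augmentation (one of the transformations studied in the RAD paper) to a batch of image observations before they are fed to an RL agent. This is a minimal NumPy illustration, not RAD's actual API; the function name, shapes, and crop size are assumptions chosen for the example.

```python
import numpy as np

def random_crop(imgs, out_size):
    """Randomly crop a batch of image observations.

    imgs: array of shape (B, C, H, W) — batch of stacked frames.
    out_size: side length of the square crop.
    Each image in the batch gets an independently sampled crop window,
    which is the source of the added diversity.
    """
    b, c, h, w = imgs.shape
    assert h >= out_size and w >= out_size, "crop must fit inside the image"
    cropped = np.empty((b, c, out_size, out_size), dtype=imgs.dtype)
    for i in range(b):
        top = np.random.randint(0, h - out_size + 1)
        left = np.random.randint(0, w - out_size + 1)
        cropped[i] = imgs[i, :, top:top + out_size, left:left + out_size]
    return cropped

# Example: crop a batch of 100x100 RGB observations down to 84x84
batch = np.random.rand(8, 3, 100, 100).astype(np.float32)
augmented = random_crop(batch, 84)
print(augmented.shape)  # (8, 3, 84, 84)
```

Because the augmentation operates only on the observations, it can be dropped in front of any existing RL algorithm's encoder without changing the algorithm itself — which is what makes the module "plug-and-play."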
Release platform: https://mishalaskin.github.io/rad/