UC Berkeley Researchers Open-Source ‘RAD’ To Improve Any Reinforcement Learning Algorithm


Learning from visual observations is a fundamental yet long-standing challenge in Reinforcement Learning (RL). Despite algorithmic advances combined with convolutional neural networks, current methods for learning from visual observations still fall short on two fronts: (a) sample efficiency of learning, and (b) generalization to new environments.

To address this problem, a group of researchers at the University of California, Berkeley has open-sourced Reinforcement Learning with Augmented Data (‘RAD’). In the accompanying paper, published on arXiv, the research team explains how this simple, data-augmentation-based plug-and-play module can enhance any Reinforcement Learning (RL) algorithm.

Data augmentation techniques increase the diversity of a training data set without collecting new data. ‘RAD’ achieves state-of-the-art results in terms of data efficiency and performance across 15 environments from the DeepMind Control Suite. According to the research team, ‘RAD’ can improve any existing reinforcement learning algorithm, and it achieves better compute and data efficiency than Google AI’s PlaNet.
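To illustrate the idea, here is a minimal sketch of one common image augmentation used in this line of work: random cropping of pixel observations before they are fed to the RL agent. The function name and shapes below are illustrative assumptions, not the exact API from the RAD repository.

```python
import numpy as np

def random_crop(obs_batch, out_size=64):
    """Randomly crop each image observation in a batch.

    obs_batch: array of shape (batch, height, width, channels).
    out_size: side length of the square crop.
    Illustrative sketch only; not the official RAD implementation.
    """
    n, h, w, c = obs_batch.shape
    assert h >= out_size and w >= out_size, "crop must fit inside the image"
    cropped = np.empty((n, out_size, out_size, c), dtype=obs_batch.dtype)
    for i in range(n):
        # Sample an independent crop location for each observation,
        # so the same batch yields diverse training views.
        top = np.random.randint(0, h - out_size + 1)
        left = np.random.randint(0, w - out_size + 1)
        cropped[i] = obs_batch[i, top:top + out_size, left:left + out_size, :]
    return cropped

# Example: augment a batch of 84x84 RGB observations into 64x64 crops
batch = np.random.randint(0, 255, size=(32, 84, 84, 3), dtype=np.uint8)
augmented = random_crop(batch)
print(augmented.shape)  # (32, 64, 64, 3)
```

Because the augmentation operates only on the observations, it can be dropped in front of an existing RL pipeline without modifying the learning algorithm itself, which is the "plug-and-play" property described above.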


Release platform: https://mishalaskin.github.io/rad/

Github: https://github.com/MishaLaskin/rad

Paper: https://arxiv.org/abs/2004.14990



