In collaboration with Sorbonne University, Facebook AI has introduced a new benchmark for continual learning (CL). CL offers a promising way to improve on traditional machine learning (ML) methods by training AI models to mimic the way humans learn new tasks. The work addresses one of the most important bottlenecks in modern AI systems: their reliance on massive human-labeled datasets.
Humans can build on past experience, but applying the same idea to AI is not easy. ML systems are generally trained in isolation and therefore require a specific model for each new task. Moreover, these models need massive datasets in supervised settings, or numerous interactions in reinforcement-learning environments, to achieve high accuracy.
However, if models can instead continuously learn from one task to the next, the amount of labeled data needed to train them can be significantly reduced, along with the time and resources needed to develop a new model. It also makes it possible to build personalized systems, such as a speech recognition system that continuously adapts to new expressions and changing situations.
Training with Continual Learning (CL)
Under continual learning, instead of starting training from scratch, the AI model applies knowledge from previous tasks to solve new problems. The team asserts that CL models require less supervision and rely far less on human-labeled datasets.
However, developing a useful CL model is quite challenging. As the researchers change how models are trained, they also have to change how models are evaluated and compared. Traditional ML models are generally measured by their accuracy after training on a given task, while CL models have to be evaluated along multiple dimensions:
- How well they transfer knowledge between tasks.
- Their ability to retain previously acquired skills.
- How they scale to a massive number of tasks.
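These dimensions are often summarized with simple statistics over an accuracy matrix, where entry (i, j) is the accuracy on task j after training on task i. The sketch below uses metric definitions (average accuracy and forgetting) that are common in the CL literature; the numbers are made up for illustration and are not from this work:

```python
# Illustrative continual-learning metrics over an accuracy matrix.
# acc[i][j] = accuracy on task j after finishing training on task i.
# (Hypothetical numbers, for illustration only.)
acc = [
    [0.90, 0.10, 0.10],  # after task 0
    [0.80, 0.85, 0.10],  # after task 1
    [0.75, 0.80, 0.88],  # after task 2
]
T = len(acc)

# Average accuracy: mean accuracy on all tasks after the final one.
avg_accuracy = sum(acc[T - 1]) / T

# Forgetting: how far accuracy on each earlier task dropped from its
# best value during training to its final value.
forgetting = sum(
    max(acc[i][j] for i in range(j, T)) - acc[T - 1][j]
    for j in range(T - 1)
) / (T - 1)

print(f"avg accuracy: {avg_accuracy:.2f}")  # 0.81
print(f"forgetting:   {forgetting:.2f}")    # 0.10
```

A model that transfers well pushes the average accuracy up, while a model that retains skills keeps the forgetting score near zero.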
Until now, there have been no practical standard benchmarks to evaluate CL systems across these areas.
The new benchmark: CTrL
The research introduces two key components for developing an effective CL system. The first is a set of general properties that an effective CL learner should have; for example, the learner must make more accurate predictions when observing new examples related to tasks it tackled earlier. The second is a standard benchmark for evaluating CL systems, called CTrL.
The researchers built CTrL on these components to benchmark how efficiently CL models transfer knowledge between tasks and how well they scale. The principal idea behind CTrL is to compare a model's performance on the same task when learned in isolation versus when learned after observing potentially related tasks.
CTrL examines the model in both settings and measures how much knowledge is transferred from the sequence of observed tasks, directly assessing the model's ability to transfer to similar tasks. The benchmark includes varied task streams to probe multiple dimensions of transfer, plus a long sequence of tasks to evaluate how well CL models scale.
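The comparison behind CTrL can be expressed as a simple transfer score. In this sketch, `acc_after_stream` and `acc_isolation` are hypothetical accuracy measurements on the same final task, learned after a sequence of potentially related tasks versus learned from scratch; the function name and numbers are illustrative, not from the paper:

```python
def transfer_score(acc_after_stream: float, acc_isolation: float) -> float:
    """Difference in accuracy on the same task.

    Positive => the preceding task stream helped (useful transfer);
    negative => it hurt (interference or forgetting).
    """
    return acc_after_stream - acc_isolation

# Hypothetical probe: the task stream lifted accuracy from 0.85 to 0.92.
print(round(transfer_score(0.92, 0.85), 2))  # 0.07
```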
The researchers also propose a model, Modular Networks with Task-Driven Priors (MNTDP), that performs well on both standard CL benchmarks and the newly introduced CTrL. When MNTDP is assigned a new task, it decides which previously learned modules can be reused and which new modules are needed to solve it. It leverages a task-driven prior to limit the search space over the viable ways to combine modules, which reduces computation while yielding more reliable transfer.
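The module-reuse idea can be illustrated with a toy search over layer-wise module choices. This is a simplified sketch, not the paper's actual algorithm: the task names, module names, and similarity scores are all hypothetical, and the "prior" here is reduced to starting from the most similar past task's path and allowing at most one fresh module:

```python
import itertools

# Toy sketch of modular path search with a task-driven prior.
# Each past task recorded which module it used at each layer.
past_paths = {                      # task -> chosen module per layer
    "digits":  ["conv_a", "fc_a"],
    "letters": ["conv_a", "fc_b"],
}
similarity = {"digits": 0.9, "letters": 0.4}  # to the new task (hypothetical)

# Unrestricted search: every combination of known or freshly
# initialized ("new") modules at each layer.
pools = [["conv_a", "new"], ["fc_a", "fc_b", "new"]]
full_space = list(itertools.product(*pools))

# Task-driven prior: start from the closest past task's path and only
# consider swapping a single layer for a new module.
closest = max(similarity, key=similarity.get)
base = past_paths[closest]
prior_space = [
    tuple(("new" if i == k else m) for i, m in enumerate(base))
    for k in range(-1, len(base))   # k = -1 keeps the path unchanged
]

print(len(full_space), len(prior_space))  # 6 3
```

Even in this tiny example the prior halves the number of candidate paths to evaluate; with many past tasks and deeper networks, the unrestricted combinatorial space grows far faster than the prior-restricted one, which is where the computational savings come from.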