Introduction To Continual Learning
With the world changing at a rapid pace, data is now available at unprecedented rates and is continuously evolving over time. The need for agents that can continuously adapt to this ever-changing environment is therefore growing. To illustrate why an agent must be a lifelong learner, consider autonomous driving. The perception models deployed in the car must not only adapt to different weather, lighting, and road conditions, but also learn new sets of object instances. For instance, a model trained only on the Netherlands' traffic signs will have to adapt to accurately predict traffic signs that differ in appearance, while also learning new and unseen ones. Training the models from scratch each time a new object must be learned is expensive in both resources and time. A more efficient and sustainable approach is to develop models that can learn new objects without forgetting the previously learned ones. Further, as the models interact with the environment and make decisions, they should also be able to utilize feedback from the car and driver to remodel their behavior and continually evolve into efficient lifelong learners.
Challenges with the traditional Deep Neural Network approach
However, traditional Deep Neural Networks (DNNs) are typically trained on a large quantity of data for a specific task, and cannot adapt dynamically to new tasks without restarting the training process each time new data becomes available. Catastrophic forgetting is one of the key issues preventing models from dynamically adapting to new information: it refers to the tendency of artificial neural networks to abruptly and completely forget previously learned information upon learning new information. Consider again the traffic sign detector discussed above. If the model trained on the Netherlands' traffic signs needs to perform well in Germany as well, we need to train it on new data (Germany's traffic signs). If we naively fine-tune the model on these new signs, its parameters will be modified to perform well on the German traffic sign data, but changing the parameters will degrade the model's performance on the old data. DNNs thus face a dilemma: how to learn new knowledge without interfering with previously learned information.
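The forgetting effect can be sketched with a deliberately tiny model: a single-parameter linear regressor trained on one "task", then naively fine-tuned on a conflicting one. The two tasks and all numbers below are illustrative stand-ins, not real traffic-sign data:

```python
import numpy as np

def train(w, xs, ys, lr=0.1, epochs=100):
    # plain gradient descent on mean squared error for the model y = w * x
    for _ in range(epochs):
        grad = np.mean(2 * (w * xs - ys) * xs)
        w -= lr * grad
    return w

def mse(w, xs, ys):
    return float(np.mean((w * xs - ys) ** 2))

xs = np.linspace(-1, 1, 50)
ys_a = 2.0 * xs      # "task A" (think: Dutch signs), best fit w = +2
ys_b = -2.0 * xs     # "task B" (think: German signs), best fit w = -2

w = train(0.0, xs, ys_a)
loss_a_before = mse(w, xs, ys_a)   # near zero: task A is learned

w = train(w, xs, ys_b)             # naive fine-tuning on task B only
loss_a_after = mse(w, xs, ys_a)    # large: task A has been overwritten
```

Because the single parameter must move all the way from +2 to -2 to fit task B, nothing of task A survives — the same tension that plays out, dimension by dimension, in the shared weights of a deep network.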
Lifelong learning in the brain
On the other hand, lifelong learning in the brain is supported by a rich set of neurophysiological principles that allow incremental learning by acquiring, fine-tuning, and transferring knowledge and skills throughout the lifespan. Continual learning in the brain is mediated by twin objectives: learning and memorization. The former is characterized by the extraction of the statistical structure of perceived events, with the aim of generalizing to novel situations.
The latter, conversely, requires the collection of separate episodic-like events. While it is true that we tend to gradually forget previously learned information, only rarely does the learning of novel information catastrophically interfere with consolidated knowledge. This ability to continually learn from a dynamic environment without catastrophic forgetting is a hallmark of human intelligence, currently missing in artificial systems.
Combining human and machine intelligence
Bridging the gap between human and machine intelligence, and developing agents that can continuously adapt and be efficient continual learners, is becoming imperative. Enabling networks to adapt flexibly and continuously to the evolving world opens up attractive avenues for improving machine intelligence systems. Initial attempts to this end involved restraining updates to information already stored in the agents to prevent interference, a mechanism also employed to stabilize memories in humans.
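The idea of restraining updates can be sketched as a quadratic penalty that anchors parameters to their values after the previous task, a radically simplified, one-parameter version of regularization-based methods (EWC-style); all names, tasks, and numbers here are illustrative:

```python
import numpy as np

def mse(w, xs, ys):
    return float(np.mean((w * xs - ys) ** 2))

def train(w, xs, ys, w_anchor=None, lam=0.0, lr=0.1, epochs=200):
    # gradient descent on MSE; when w_anchor is given, an extra term
    # lam * (w - w_anchor)^2 penalizes drifting away from the
    # parameters learned on the previous task
    for _ in range(epochs):
        grad = np.mean(2 * (w * xs - ys) * xs)
        if w_anchor is not None:
            grad += 2 * lam * (w - w_anchor)
        w -= lr * grad
    return w

xs = np.linspace(-1, 1, 50)
ys_a, ys_b = 2.0 * xs, -2.0 * xs   # two conflicting toy tasks

w_a = train(0.0, xs, ys_a)                               # learn task A
w_naive = train(w_a, xs, ys_b)                           # plain fine-tuning
w_anchored = train(w_a, xs, ys_b, w_anchor=w_a, lam=1.0) # restrained update
```

The anchored model ends up between the two task optima: it gives up some accuracy on the new task in exchange for retaining much of the old one, which is exactly the stability-plasticity trade-off these methods navigate.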
This was soon followed by the introduction of task specialization in the agents, inspired by the modular and specialized units of the brain. Later works further incorporated the brain's replay mechanism, the reactivation of and relearning on information from past experiences, into the agents. Finally, recent methods combine these approaches and extend them to multiple memory systems that separately learn general-purpose or global information (e.g., the shapes of roads and cars, which are similar everywhere) and task-specific or local information (e.g., signboard content, which can differ between countries). These methods have achieved impressive performance in continual learning, making them candidates for deployment in critical applications such as self-driving cars.
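A minimal replay mechanism can be sketched as a small, fixed-size memory of past examples that gets mixed into every new training batch. The class below is an illustrative sketch (using reservoir sampling to keep the memory unbiased), not a specific published method:

```python
import random

class ReplayBuffer:
    # fixed-capacity memory of past examples, filled by reservoir
    # sampling so that every example seen so far has an equal
    # probability of being retained
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, example):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = example

    def sample(self, k):
        # old examples to interleave with the current task's batch,
        # so gradients keep "rehearsing" past tasks
        return random.sample(self.data, min(k, len(self.data)))

buffer = ReplayBuffer(capacity=10)
for example in range(100):   # stand-in for a stream of training examples
    buffer.add(example)
replayed = buffer.sample(5)  # mixed into the next fine-tuning batch
```

During fine-tuning on a new task, each update would then be computed on the union of the fresh batch and `buffer.sample(k)`, so the loss never sees only the new task's data.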
Continual learning agents exposed to an ever-evolving environment are expected not to forget. Such agents are part of applications like home robots, self-driving cars, smart home appliances, and AR/VR gadgets, where the existing and desired infrastructure must be light, portable, and highly resource-efficient in terms of both energy and memory. Drawing inspiration from the most resource-efficient learning system we know, the brain, could help us adapt existing continual learning strategies accordingly. Stay tuned for a closer look at these different strategies and how we improve them.
References
Parisi, German I., et al. "Continual lifelong learning with neural networks: A review." Neural Networks 113 (2019): 54–71.
Lewkowicz, David J. "Early experience and multisensory perceptual narrowing." Developmental Psychobiology 56.2 (2014): 292–315.
Kudithipudi, D., et al. "Biological underpinnings for lifelong learning machines." Nature Machine Intelligence 4 (2022): 196–210. https://doi.org/10.1038/s42256-022-00452-0