Introduction To Continual Learning

Challenges with the traditional Deep Neural Network approach

However, traditional Deep Neural Networks (DNNs) are typically trained on a large, fixed dataset for a single, specific task. They cannot adapt dynamically to new tasks without restarting the training process each time new data becomes available. A key reason is catastrophic forgetting: the tendency of artificial neural networks to abruptly and completely forget previously learned information when they learn something new. Consider the traffic sign detector discussed above. If a model trained on the Netherlands’ traffic signs must also perform well in Germany, it has to be trained on the new data (Germany’s traffic signs). Naively fine-tuning the model on these new signs modifies its parameters so that it performs well on the German data, but those parameter changes degrade its performance on the old Dutch data. DNNs thus face a dilemma: how to learn new knowledge without interfering with what they have already learned.
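To make this concrete, here is a minimal sketch (not code from this article) of naive sequential fine-tuning in PyTorch. The two synthetic tasks are stand-ins for the “Netherlands signs” and “Germany signs” data; after fine-tuning on task B alone, accuracy on task A typically collapses, which is exactly the forgetting described above.

```python
# Minimal sketch: naive sequential fine-tuning on two tasks, illustrating
# catastrophic forgetting. The synthetic datasets are stand-ins for the
# "Netherlands signs" (task A) and "Germany signs" (task B) example.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(shift):
    # Synthetic 2-class task; `shift` moves the input distribution per task.
    x = torch.randn(512, 20) + shift
    y = (x.sum(dim=1) > shift * 20).long()
    return x, y

def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

def train(model, x, y, epochs=200):
    opt = torch.optim.SGD(model.parameters(), lr=0.05)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))

xa, ya = make_task(shift=0.0)   # task A: "old" data
xb, yb = make_task(shift=3.0)   # task B: "new" data

train(model, xa, ya)
print("Task A accuracy after training on A:", accuracy(model, xa, ya))

train(model, xb, yb)            # naive fine-tuning on task B only
print("Task A accuracy after fine-tuning on B:", accuracy(model, xa, ya))
print("Task B accuracy after fine-tuning on B:", accuracy(model, xb, yb))
```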

Combining human and machine intelligence

Bridging the gap between human and machine intelligence by developing agents that continuously adapt, i.e., efficient continual learners, is an increasingly pressing need. Enabling networks to adapt flexibly and continuously to an evolving world opens up attractive avenues for improving machine intelligence systems. Initial attempts in this direction restrain updates to the parameters that encode existing knowledge so that new learning does not interfere with it, a mechanism similar to the one the human brain employs to stabilize memories.
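The idea of restraining updates can be made concrete with a quadratic penalty on parameter changes, in the spirit of regularization-based methods such as Elastic Weight Consolidation. The sketch below is illustrative only; the importance estimates and the `lam` hyperparameter are placeholder assumptions, not values or code from this article.

```python
# Minimal sketch of a regularization-style continual learning penalty
# (in the spirit of methods such as Elastic Weight Consolidation).
# The `importance` weights and `lam` are illustrative placeholders.
import torch
import torch.nn as nn

def consolidation_penalty(model, old_params, importance, lam=100.0):
    """Quadratic penalty that discourages changes to parameters deemed
    important for previously learned tasks."""
    penalty = 0.0
    for name, p in model.named_parameters():
        penalty = penalty + (importance[name] * (p - old_params[name]) ** 2).sum()
    return lam * penalty

def train_new_task(model, x, y, old_params, importance, epochs=100):
    opt = torch.optim.SGD(model.parameters(), lr=0.05)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y) + consolidation_penalty(
            model, old_params, importance)
        loss.backward()
        opt.step()

# Usage (continuing the previous sketch): after training on task A, snapshot
# the parameters and estimate their importance, then train on task B with the
# penalty so updates that would overwrite task A knowledge are restrained.
# old_params = {n: p.detach().clone() for n, p in model.named_parameters()}
# importance = {n: torch.ones_like(p) for n, p in model.named_parameters()}  # crude placeholder
# train_new_task(model, xb, yb, old_params, importance)
```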

Conclusion

Continual learning agents exposed to an ever-evolving environment are expected not to forget. Such agents appear in applications like home robots, self-driving cars, smart home appliances, and AR/VR gadgets, where the infrastructure must be light, portable, and highly efficient in energy and memory. Drawing inspiration from the most resource-efficient learning system we know, the brain [3], can guide how we adapt existing continual learning strategies. Stay tuned for a closer look at these different strategies and how we improve them.
