🤖 AI Summary
Continual image restoration suffers from catastrophic forgetting of previous tasks, while existing solutions often require backbone modifications or incur substantial computational overhead. Method: This paper proposes a lightweight convolutional layer enhancement method that operates without altering the backbone network. Its core innovation is a dynamic filter parameter generation mechanism based on a shared knowledge base, which decomposes convolutional weights into task-invariant bases and task-specific increments, updated efficiently via a lightweight adaptation module. Contribution/Results: The method enables dynamic injection of new-task parameters without significantly increasing inference latency, preserving performance on historical tasks while adapting effectively to new ones. Experiments on multiple continual image restoration benchmarks demonstrate substantial mitigation of catastrophic forgetting: average PSNR improvements of 1.2–2.3 dB on new tasks, with inference speed nearly identical to that of the original model.
📝 Abstract
Continual learning is an emerging topic in the field of deep learning, where a model is expected to learn continuously from new upcoming tasks without forgetting previous experiences. This field has witnessed numerous advancements, but few attempts have been made in the direction of image restoration. Handling large image sizes and the divergent nature of various degradations poses a unique challenge in the restoration domain. Moreover, existing works require heavily engineered architectural modifications for new-task adaptation, resulting in significant computational overhead. Regularization-based methods are unsuitable for restoration, as different restoration challenges require different kinds of feature processing. In this direction, we propose a simple modification of the convolution layer that adapts knowledge from previous restoration tasks without touching the main backbone architecture. It can therefore be seamlessly applied to any deep architecture without structural modifications. Unlike other approaches, we demonstrate that our model can increase the number of trainable parameters without significantly increasing computational overhead or inference time. Experimental validation demonstrates that new restoration tasks can be introduced without compromising the performance of existing tasks. We also show that performance on new restoration tasks improves by adapting knowledge from the knowledge base created by previous restoration tasks. The code is available at https://github.com/aupendu/continual-restore.
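The weight decomposition described above can be sketched in a few lines: per-task convolution kernels are generated on the fly as a mix of task-invariant basis filters from a shared knowledge base, plus a small task-specific increment. This is a minimal NumPy illustration under stated assumptions; all names, shapes, and the linear mixing scheme are hypothetical and do not reproduce the released code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed) dimensions, not the paper's actual configuration.
C_OUT, C_IN, K = 8, 4, 3   # conv filter shape: out channels, in channels, kernel
N_BASES = 6                # number of filters in the shared knowledge base

# Task-invariant knowledge base: a bank of basis filters shared by all tasks.
# These stay frozen once learned, so old-task behavior is preserved.
bases = rng.standard_normal((N_BASES, C_OUT, C_IN, K, K))

def make_task_weights(coeffs, increment):
    """Compose one task's conv weights from the shared bases plus a
    task-specific increment -- only these small tensors are trained per task."""
    mixed = np.tensordot(coeffs, bases, axes=([0], [0]))  # weighted sum of bases
    return mixed + increment

# Per-task parameters: a few mixing coefficients and a small additive delta.
coeffs_t = rng.standard_normal(N_BASES)
delta_t = 0.01 * rng.standard_normal((C_OUT, C_IN, K, K))

w_t = make_task_weights(coeffs_t, delta_t)
print(w_t.shape)  # same shape as a standard conv kernel, so it drops into
                  # any backbone's convolution without architectural changes
```

Because the generated weights have the ordinary kernel shape, the backbone's convolution is unchanged at inference time; only the cheap weight-composition step is added per task, which is consistent with the near-identical inference speed reported above.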