π€ AI Summary
Supervised learning exhibits poor stability and neglects task ordering under dynamically evolving task sequencesβe.g., where task similarity increases progressively.
Method: We propose the first unified theoretical framework integrating multi-task learning and continual learning. Grounded in generalization error bounds, task similarity modeling, and sample complexity theory, our approach derives computable and tight performance guarantees for evolving task sequences and analytically characterizes how the effective sample size grows with task evolution.
Results: Evaluated on multiple benchmark datasets, our method significantly improves classification accuracy. Crucially, the theoretically derived guarantees align closely with empirical results, achieving a rigorous balance between theoretical soundness and practical efficacy.
π Abstract
Multiple supervised learning scenarios are composed by a sequence of classification tasks. For instance, multi-task learning and continual learning aim to learn a sequence of tasks that is either fixed or grows over time. Existing techniques for learning tasks that are in a sequence are tailored to specific scenarios, lacking adaptability to others. In addition, most of existing techniques consider situations in which the order of the tasks in the sequence is not relevant. However, it is common that tasks in a sequence are evolving in the sense that consecutive tasks often have a higher similarity. This paper presents a learning methodology that is applicable to multiple supervised learning scenarios and adapts to evolving tasks. Differently from existing techniques, we provide computable tight performance guarantees and analytically characterize the increase in the effective sample size. Experiments on benchmark datasets show the performance improvement of the proposed methodology in multiple scenarios and the reliability of the presented performance guarantees.