🤖 AI Summary
To address the challenge of simultaneously achieving high prediction accuracy while preserving inter-source heterogeneity in multi-source learning, this paper proposes Clustered Transfer Residual Learning (CTRL), a meta-learning framework. CTRL integrates cross-domain residual learning with an adaptive pooling/clustering mechanism, retaining each source's distributional characteristics within a unified predictive architecture. By adaptively grouping heterogeneous sources and applying residual corrections on top of a shared base learner, it mitigates the biases induced by distributional shift and imbalanced sample sizes across sources. Experiments on five large-scale real-world datasets—including data from Switzerland's national asylum program, where algorithmic geographic assignment of asylum seekers is currently being piloted—show that CTRL consistently outperforms state-of-the-art benchmarks in both overall prediction accuracy and source-level reliability, and does so across a range of different base learners.
📝 Abstract
Machine learning (ML) tasks often draw on large-scale data from several distinct sources, such as different locations, treatment arms, or groups. In such settings, practitioners often want predictions that not only exhibit good overall accuracy but also remain reliable within each source and preserve the differences that matter across sources. For instance, several asylum and refugee resettlement programs now use ML-based employment predictions to guide where newly arriving families are placed within a host country, which requires generating informative and differentiated predictions for many, often small, source locations. However, this task is made challenging by several common characteristics of the data in these settings: the presence of numerous distinct data sources, distributional shifts between them, and substantial variation in sample sizes across sources. This paper introduces Clustered Transfer Residual Learning (CTRL), a meta-learning method that combines the strengths of cross-domain residual learning and adaptive pooling/clustering in order to simultaneously improve overall accuracy and preserve source-level heterogeneity. We provide theoretical results that clarify how our objective navigates the trade-off between data quantity and data quality. We evaluate CTRL alongside state-of-the-art benchmarks on five large-scale datasets, including one from the national asylum program in Switzerland, where the algorithmic geographic assignment of asylum seekers is currently being piloted. CTRL consistently outperforms the benchmarks across several key metrics and when using a range of different base learners.
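The abstract describes CTRL only at a high level, so the following is a minimal toy sketch (not the authors' method) of the two ingredients it names: a shared base learner whose per-source residuals are used to cluster similar sources, with one residual correction fitted per cluster. All names (`make_source`, `cluster_sources`, the synthetic data, and the gap-based 1-D clustering rule) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical multi-source data: each source shares the slope but has its
# own intercept shift, and sample sizes are deliberately imbalanced.
def make_source(n, shift):
    X = rng.normal(size=(n, 2))
    y = X @ np.array([1.5, -0.5]) + shift + rng.normal(scale=0.1, size=n)
    return X, y

sources = {f"s{i}": make_source(n, shift)
           for i, (n, shift) in enumerate(
               [(200, 0.0), (30, 2.0), (25, 2.1), (40, -1.5)])}

# Step 1: pooled "global" base learner (ordinary least squares with intercept).
def ols_fit(X, y):
    A = np.hstack([X, np.ones((len(X), 1))])
    return np.linalg.lstsq(A, y, rcond=None)[0]

def ols_predict(w, X):
    return np.hstack([X, np.ones((len(X), 1))]) @ w

X_all = np.vstack([X for X, _ in sources.values()])
y_all = np.concatenate([y for _, y in sources.values()])
w_global = ols_fit(X_all, y_all)

# Step 2: each source's mean residual under the global model is a scalar
# "signature" of how far that source deviates from the pooled fit.
signatures = {k: float(np.mean(y - ols_predict(w_global, X)))
              for k, (X, y) in sources.items()}

# Step 3: naive 1-D clustering: walk sources in signature order and start a
# new cluster whenever the gap to the previous source exceeds `tol`.
def cluster_sources(sig, tol=0.5):
    clusters = []
    for k, v in sorted(sig.items(), key=lambda kv: kv[1]):
        if clusters and v - sig[clusters[-1][-1]] < tol:
            clusters[-1].append(k)
        else:
            clusters.append([k])
    return clusters

clusters = cluster_sources(signatures)

# Step 4: fit one residual correction per cluster, pooling the (small)
# sources within it; final prediction = global prediction + correction.
corrections = {}
for group in clusters:
    resid = np.concatenate([sources[k][1] - ols_predict(w_global, sources[k][0])
                            for k in group])
    for k in group:
        corrections[k] = float(np.mean(resid))

# Evaluate on fresh data drawn like source "s1" (shift 2.0).
X_test, y_test = make_source(100, 2.0)
pred = ols_predict(w_global, X_test) + corrections["s1"]
mse = float(np.mean((pred - y_test) ** 2))
print(clusters, round(mse, 3))
```

The point of the toy is the trade-off mentioned in the abstract: the two small shifted sources (shifts 2.0 and 2.1) land in the same cluster and share a residual estimate, borrowing strength from each other without being averaged into the large, differently distributed source.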