🤖 AI Summary
Multi-task learning (MTL) confronts the dual challenge of distributional and posterior heterogeneity, which existing methods struggle to model jointly. To address this, we propose a dual-encoder framework: a shared encoder captures cross-task commonalities, while task-specific encoders preserve individual task characteristics, and task similarity is further modeled via adaptive latent factor coefficients. This work is the first to jointly handle both types of heterogeneity within a unified framework. We design an alternating optimization scheme that integrates the dual-encoder architecture with structured coefficient regularization. Leveraging local Rademacher complexity, we derive a theoretical upper bound on the excess risk. Empirical evaluation demonstrates that our method consistently outperforms state-of-the-art MTL approaches on synthetic benchmarks and significantly improves prediction of time to tumor doubling across five distinct cancer types in patient-derived xenograft (PDX) data.
📝 Abstract
Multi-task learning (MTL) has become an essential machine learning tool for addressing multiple learning tasks simultaneously and has been applied effectively across fields such as healthcare, marketing, and biomedical research. Efficient information sharing across tasks, however, requires leveraging both shared and task-heterogeneous information. Despite extensive research on MTL, various forms of heterogeneity, including distributional and posterior heterogeneity, remain significant challenges, and existing methods often fail to address them within a unified framework. In this paper, we propose a dual-encoder framework that constructs a heterogeneous latent factor space for each task, incorporating a task-shared encoder to capture common information across tasks and a task-specific encoder to preserve unique task characteristics. Additionally, we exploit the intrinsic similarity structure of the coefficients on the learned latent factors, allowing adaptive integration across tasks to manage posterior heterogeneity. We introduce a unified algorithm that alternately learns the task-specific and task-shared encoders and the coefficients. In theory, we establish an excess risk bound for the proposed MTL method using local Rademacher complexity and extend it to a new but related task. Simulation studies demonstrate that the proposed method outperforms existing data integration methods across various settings, and the method achieves superior predictive performance for time to tumor doubling across five distinct cancer types in patient-derived xenograft (PDX) data.
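To make the dual-encoder idea and the alternating scheme concrete, here is a minimal NumPy sketch. It is an illustration only, not the paper's implementation: the encoders are assumed linear, the data and dimensions are synthetic, and ridge regression stands in for the structured coefficient regularization. Each task's prediction uses its coefficients on the concatenation of shared and task-specific latent factors, and optimization alternates between solving for coefficients and gradient steps on the encoders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: T=3 regression tasks sharing d=5 input features,
# with k_sh shared and k_sp task-specific latent factors (all illustrative).
T, d, k_sh, k_sp = 3, 5, 2, 2
Xs = [rng.normal(size=(50, d)) for _ in range(T)]
ys = [rng.normal(size=50) for _ in range(T)]

W_sh = 0.1 * rng.normal(size=(d, k_sh))                      # task-shared encoder
W_sp = [0.1 * rng.normal(size=(d, k_sp)) for _ in range(T)]  # task-specific encoders
betas = [np.zeros(k_sh + k_sp) for _ in range(T)]            # latent-factor coefficients

def factors(t):
    """Concatenate shared and task-specific latent factors for task t."""
    return np.hstack([Xs[t] @ W_sh, Xs[t] @ W_sp[t]])

lam, lr = 0.1, 1e-2
for it in range(200):
    # Step 1: with encoders fixed, update each task's coefficients by
    # ridge regression (a stand-in for the structured regularization).
    for t in range(T):
        Z = factors(t)
        betas[t] = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ ys[t])
    # Step 2: with coefficients fixed, take gradient steps on the encoders
    # for the squared-error loss of each task.
    grad_sh = np.zeros_like(W_sh)
    for t in range(T):
        r = factors(t) @ betas[t] - ys[t]                    # residuals for task t
        b_sh, b_sp = betas[t][:k_sh], betas[t][k_sh:]
        grad_sh += Xs[t].T @ np.outer(r, b_sh) / len(r)      # shared grads accumulate
        W_sp[t] -= lr * Xs[t].T @ np.outer(r, b_sp) / len(r)  # specific grads do not
    W_sh -= lr * grad_sh

mse = [np.mean((factors(t) @ betas[t] - ys[t]) ** 2) for t in range(T)]
```

Note the asymmetry that mirrors the framework: the shared encoder's gradient accumulates over all tasks (so it is shaped by cross-task commonalities), while each task-specific encoder is updated only by its own task's residuals.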