🤖 AI Summary
Conventional transfer learning methods rely on strong assumptions, such as label-space overlap between source and target domains, access to source data, and architectural consistency, all of which are frequently violated in practice.

**Method:** This paper establishes the first theoretical framework for transfer learning tailored to conditional generative models (e.g., cGANs and cVAEs), proposing a parameter-efficient decoupled fine-tuning mechanism and a conditional embedding alignment strategy. The approach transfers knowledge from a pre-trained source model under realistic constraints: no access to source data, disjoint label sets, and heterogeneous network architectures. It also supports joint adaptation for few-shot cross-domain generation and discriminative tasks.

**Results:** Evaluated on five cross-domain image generation and classification benchmarks, the method improves accuracy or generation-quality metrics by an average of 12.3% while reducing downstream training cost by 90%, significantly advancing the practical deployment of generative models via transfer learning.
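The summary does not spell out the mechanics, but the combination of "decoupled fine-tuning" and "conditional embedding alignment" suggests keeping the pre-trained generator backbone frozen and training only a small, newly initialized embedding table for the disjoint target labels. Below is a minimal, hypothetical sketch of such an alignment step: each new target class embedding is warm-started from its nearest source class embedding by cosine similarity. All function names, shapes, and the nearest-neighbor heuristic are illustrative assumptions, not the paper's actual API.

```python
import numpy as np

def align_embeddings(source_emb: np.ndarray, target_feats: np.ndarray) -> np.ndarray:
    """Initialize target class embeddings from the closest source class embeddings.

    source_emb:   (n_source_classes, d) frozen embedding table of the source model.
    target_feats: (n_target_classes, d) rough features for the new classes,
                  e.g., averaged encoder outputs over a few shots per class.
    Returns an (n_target_classes, d) warm-start for the new embedding table;
    in a decoupled fine-tuning setup, only this table would be trained while
    the generator backbone stays frozen.
    """
    # Normalize rows so a plain dot product equals cosine similarity.
    s = source_emb / np.linalg.norm(source_emb, axis=1, keepdims=True)
    t = target_feats / np.linalg.norm(target_feats, axis=1, keepdims=True)
    sim = t @ s.T                      # (n_target, n_source) cosine scores
    nearest = sim.argmax(axis=1)       # index of closest source class per target class
    return source_emb[nearest].copy()  # copy so fine-tuning does not alias the source table

# Toy usage: two "new" classes that are slight perturbations of source classes 3 and 7.
rng = np.random.default_rng(0)
source = rng.normal(size=(10, 16))                          # 10 source classes, 16-dim
target = source[[3, 7]] + 0.01 * rng.normal(size=(2, 16))   # 2 target classes
init = align_embeddings(source, target)
print(init.shape)  # → (2, 16)
```

This kind of warm start matters when label sets are disjoint: with no shared classes, a randomly initialized conditional embedding would discard the semantic structure the frozen backbone already encodes.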