🤖 AI Summary
This work addresses the lack of theoretical foundations and efficient adaptation methods for transfer learning in variational quantum circuits (VQCs). We first formally define *transferability* and *adaptability*, two key properties governing knowledge transfer in VQCs, and propose a pretraining-finetuning framework based on single-parameter unitary subgroups. The method enables optimal task-specific finetuning via analytical parameter transfer, bypassing gradient-based optimization. We further derive a theoretical loss bound that, for the first time, quantifies the knowledge-transfer capability of VQCs. Experiments show that the approach significantly reduces training cost in the target domain, accelerates convergence, and improves generalization on quantum classification tasks. Overall, this work establishes an interpretable, analytically tractable paradigm for transfer learning in quantum machine learning, bridging theory and practice.
📝 Abstract
This work analyzes transfer learning of variational quantum circuits (VQCs). Our framework begins with a VQC pretrained in one domain and calculates the transition of its 1-parameter unitary subgroups required for a new domain. We establish a formalism to investigate the adaptability and capability of a VQC through an analysis of loss bounds. Our theory characterizes knowledge transfer in VQCs and provides a heuristic interpretation of the mechanism. We also derive an analytical fine-tuning method that attains the optimal transition for adaptation between similar domains.
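As a toy illustration of gradient-free analytical fine-tuning along a 1-parameter unitary subgroup, the sketch below is our own assumption, not the paper's construction: the single-qubit states, the infidelity loss, and the Rotosolve-style closed-form update are all illustrative stand-ins. It uses the known fact that a loss depending on a single Pauli-generated angle is sinusoidal in that angle, so the optimal transition follows in closed form from three loss evaluations.

```python
import numpy as np

# 1-parameter unitary subgroup generated by Pauli-Y: U(theta) = exp(-i * theta * Y / 2).
Y = np.array([[0, -1j], [1j, 0]])

def U(theta):
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * Y

# Source-domain state prepared by the pretrained circuit, and a hypothetical
# target-domain state; by construction the optimal transition angle is 0.6.
psi_src = np.array([1.0, 0.0], dtype=complex)
psi_tgt = U(0.6) @ psi_src

def loss(theta):
    """Infidelity between the adapted state U(theta)|psi_src> and the target."""
    return 1.0 - abs(psi_tgt.conj() @ (U(theta) @ psi_src)) ** 2

def analytic_finetune(theta0):
    """Closed-form minimizer of a sinusoidal loss (Rotosolve-style update)."""
    m0 = loss(theta0)
    mp = loss(theta0 + np.pi / 2)
    mm = loss(theta0 - np.pi / 2)
    return theta0 - np.pi / 2 - np.arctan2(2 * m0 - mp - mm, mp - mm)

theta_star = analytic_finetune(0.0)  # start from the pretrained angle
```

Here `analytic_finetune` recovers the optimal angle 0.6 (up to a multiple of 2*pi) without any gradient steps, which mirrors, in the simplest possible setting, the idea of computing the optimal transition analytically rather than retraining in the target domain.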