Fisher Information, Training and Bias in Fourier Regression Models

📅 2025-10-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates how the effective dimension of the Fisher information matrix (FIM) and model bias jointly govern training dynamics and generalization in quantum neural networks (QNNs). We introduce a controllable construction for Fourier regression models—enabling independent tuning of effective dimension and bias—and derive closed-form expressions for their FIM. Leveraging tensor network representations, we characterize the interplay between model geometry and training evolution. Our analysis reveals a fundamental trade-off: high FIM effective dimension accelerates convergence for unbiased models, whereas low effective dimension enhances predictive accuracy for biased ones. Crucially, this study establishes the first quantitative link among FIM geometry, task alignment, and model bias—yielding an interpretable theoretical framework and practical analytical tools for QNN architecture design, performance prediction, and general machine learning model assessment.

📝 Abstract
Motivated by the growing interest in quantum machine learning, in particular quantum neural networks (QNNs), we study how effective recently introduced evaluation metrics based on the Fisher information matrix (FIM) are at predicting training and prediction performance. We exploit the equivalence between a broad class of QNNs and Fourier models, and study the interplay between the effective dimension and the bias of a model towards a given task, investigating how these affect the model's training and performance. We show that for a model that is completely agnostic, or unbiased, towards the function to be learned, a higher effective dimension likely results in better trainability and performance. On the other hand, for models that are biased towards the function to be learned, a lower effective dimension is likely beneficial during training. To obtain these results, we derive an analytical expression of the FIM for Fourier models and identify the features controlling a model's effective dimension. This allows us to construct models with tunable effective dimension and bias, and to compare their training. We furthermore introduce a tensor network representation of the considered Fourier models, which could be a tool of independent interest for the analysis of QNN models. Overall, these findings provide an explicit example of the interplay between geometrical properties, model-task alignment and training, which are relevant for the broader machine learning community.
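To make the central quantities concrete, here is a minimal sketch of how an FIM and its effective dimension can be computed for a simple Fourier regression model. This is an illustrative simplification, not the paper's exact construction: we assume a model that is linear in its parameters, f(x; θ) = θ · φ(x) with a truncated Fourier feature map φ, Gaussian observation noise of unit variance (so the FIM reduces to E[φφᵀ] and is parameter-independent), and an effective-dimension formula in the style of FIM-based definitions, specialized to a constant FIM. The frequency cutoff `K` and scale `gamma_n` are hypothetical choices for illustration.

```python
import numpy as np

def fourier_features(x, K):
    # Feature map phi(x) = [1, cos(x), sin(x), ..., cos(Kx), sin(Kx)].
    cols = [np.ones_like(x)]
    for k in range(1, K + 1):
        cols.append(np.cos(k * x))
        cols.append(np.sin(k * x))
    return np.stack(cols, axis=-1)

def fisher_information(x, K):
    # For f(x; theta) = theta . phi(x) with unit Gaussian noise,
    # grad_theta f = phi(x), so the FIM is E_x[phi phi^T] and does
    # not depend on theta (a simplification of the linear model).
    phi = fourier_features(x, K)
    return phi.T @ phi / len(x)

def effective_dimension(F, gamma_n):
    # FIM-based effective dimension for a constant FIM: normalize F so
    # its average eigenvalue is 1, then count how many directions remain
    # "visible" at scale gamma_n via log det(I + gamma_n * F_hat).
    d = F.shape[0]
    F_hat = d * F / np.trace(F)
    _, logdet = np.linalg.slogdet(np.eye(d) + gamma_n * F_hat)
    return logdet / np.log(gamma_n)

rng = np.random.default_rng(0)
x = rng.uniform(-np.pi, np.pi, size=5000)
F = fisher_information(x, K=3)                 # 7x7 FIM (d = 2K + 1)
d_eff = effective_dimension(F, gamma_n=100.0)  # approaches d for a full-rank FIM
```

For uniform inputs on [-π, π] the feature second moments are E[1] = 1 and E[cos²(kx)] = E[sin²(kx)] = 1/2, so the FIM is approximately diagonal and full rank, and d_eff comes out close to the full parameter count of 7; restricting or correlating the frequency spectrum would lower it, which is the kind of knob the paper's tunable construction turns.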
Problem

Research questions and friction points this paper is trying to address.

Evaluating Fisher information metrics for quantum neural networks
Analyzing effective dimension and bias interplay in Fourier models
Establishing relationship between model geometry and trainability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fisher information matrix predicts quantum neural network performance
Analytical FIM expression derived for Fourier models
Tensor network representation introduced for QNN analysis