On the Rate of Gaussian Approximation for Linear Regression Problems

📅 2025-09-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper investigates the rate of Gaussian approximation for the distribution of iterates in online linear regression, focusing on convergence behavior under a constant learning rate. Methodologically, it combines probabilistic approximation theory with statistical learning analysis, incorporating spectral properties of the design matrix such as eigenvalue decay and condition number. The main contribution is a precise high-dimensional characterization of the joint dependence of the approximation error on the dimension $d$, the iteration count $n$, and the structure of the design matrix. Theoretically, it establishes that, given a sufficiently large sample size, the Gaussian approximation error achieves the rate $\sqrt{(\log n)/n}$, explicitly accounting for both dimensionality and design matrix characteristics, thereby tightening existing bounds derived under unstructured assumptions. These results provide sharp theoretical guarantees for online Bayesian approximation and variational inference.

📝 Abstract
In this paper, we consider the problem of Gaussian approximation for the online linear regression task. We derive the corresponding rates for the setting of a constant learning rate and study the explicit dependence of the convergence rate upon the problem dimension $d$ and quantities related to the design matrix. When the number of iterations $n$ is known in advance, our results yield the rate of normal approximation of order $\sqrt{(\log n)/n}$, provided that the sample size $n$ is large enough.
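To make the setting concrete, the recursion studied here can be sketched as constant-step-size stochastic gradient descent (LMS updates) for linear regression. This is a minimal simulation of the iterates only, not the paper's analysis; the dimension, step size, noise model, and `theta_star` below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 20000                    # dimension and number of iterations (assumed)
theta_star = rng.normal(size=d)    # hypothetical ground-truth parameter
eta = 0.01                         # constant learning rate

theta = np.zeros(d)
for _ in range(n):
    x = rng.normal(size=d)               # random design vector
    y = x @ theta_star + rng.normal()    # noisy response
    theta -= eta * (x @ theta - y) * x   # constant-step LMS / SGD update

err = np.linalg.norm(theta - theta_star)
```

With a constant step size the iterates do not converge to `theta_star` but fluctuate around it at stationarity; the paper quantifies how close the distribution of such (rescaled) iterates is to a Gaussian, with explicit dependence on $d$, $n$, and the design matrix.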
Problem

Research questions and friction points this paper is trying to address.

Analyzing Gaussian approximation rates for online linear regression
Deriving convergence rates with constant learning rate settings
Studying dimensional dependence and design matrix effects
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gaussian approximation for online linear regression
Constant learning rate convergence rate analysis
Explicit dependence on dimension and design matrix