🤖 AI Summary
This work investigates the generalization error of group-invariant neural networks within the Barron function framework, focusing on how symmetry structures enhance statistical efficiency when learning target functions with intrinsic group symmetries. We introduce a group-dependent factor δ_{G,Γ,σ} ≤ 1 to quantify the impact of symmetry on approximation capacity. Our theoretical analysis shows that imposing group invariance does not increase Rademacher complexity, so the estimation error is unchanged, while the approximation error improves by the factor δ_{G,Γ,σ}, which is roughly |G|⁻¹ in favorable cases. This constitutes the first quantitative characterization, under Barron-norm constraints, of the generalization benefit conferred by group invariance, and it provides a rigorous statistical learning-theoretic foundation for symmetry-aware neural network design.
📝 Abstract
We investigate the generalization error of group-invariant neural networks within the Barron framework. Our analysis shows that incorporating group-invariant structures introduces a group-dependent factor $\delta_{G,\Gamma,\sigma} \le 1$ into the approximation rate. When this factor is small, group invariance yields substantial improvements in approximation accuracy. On the estimation side, we establish that the Rademacher complexity of the group-invariant class is no larger than that of the non-invariant counterpart, implying that the estimation error remains unaffected by the incorporation of symmetry. Consequently, the generalization error can improve significantly when learning functions with inherent group symmetries. We further provide illustrative examples demonstrating both favorable cases, where $\delta_{G,\Gamma,\sigma} \approx |G|^{-1}$, and unfavorable ones, where $\delta_{G,\Gamma,\sigma} \approx 1$. Overall, our results offer a rigorous theoretical foundation showing that encoding group-invariant structures in neural networks leads to clear statistical advantages for symmetric target functions.
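As a schematic illustration of the kind of bounds described above (not the paper's exact statements), one can combine the classical Barron approximation rate with the claims in the abstract. Here $m$ denotes the network width, $\|f\|_{\mathcal{B}}$ the Barron norm of the target $f$, $\mathcal{F}^G_m$ (resp. $\mathcal{F}_m$) the group-invariant (resp. unconstrained) network class, and $\mathrm{Rad}_n$ the empirical Rademacher complexity on $n$ samples; the precise constants, norms, and placement of the factor $\delta_{G,\Gamma,\sigma}$ are assumptions made for illustration:

$$\inf_{f_m \in \mathcal{F}^G_m} \big\| f - f_m \big\|_{L^2}^2 \;\lesssim\; \delta_{G,\Gamma,\sigma}\,\frac{\|f\|_{\mathcal{B}}^2}{m}, \qquad \mathrm{Rad}_n\big(\mathcal{F}^G_m\big) \;\le\; \mathrm{Rad}_n\big(\mathcal{F}_m\big).$$

In the favorable regime $\delta_{G,\Gamma,\sigma} \approx |G|^{-1}$, the approximation term shrinks roughly in proportion to the group size while the estimation term is no worse, which is the source of the stated generalization gain.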