A Minimum Description Length Approach to Regularization in Neural Networks

📅 2025-05-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Neural networks often fail to converge to exact solutions on formal language tasks, and conventional regularizers (e.g., $L_1$/$L_2$) can even destabilize perfect initializations. To address this, the authors propose a regularization method grounded in the Minimum Description Length (MDL) principle. The approach balances model complexity against data fit in information-theoretic terms, encoding an inductive bias that prioritizes exact solutions within the hypothesis space, independently of the optimization algorithm. Presented as the first systematic application of MDL-based regularization to neural networks, the method improves convergence to exact solutions on formal language modeling benchmarks, mitigates overfitting, and generalizes better than both unregularized baselines and standard $L_1$/$L_2$ regularizers across all evaluated metrics.

📝 Abstract
State-of-the-art neural networks can be trained to become remarkable solutions to many problems. But while these architectures can express symbolic, perfect solutions, trained models often arrive at approximations instead. We show that the choice of regularization method plays a crucial role: when trained on formal languages with standard regularization ($L_1$, $L_2$, or none), expressive architectures not only fail to converge to correct solutions but are actively pushed away from perfect initializations. In contrast, applying the Minimum Description Length (MDL) principle to balance model complexity with data fit provides a theoretically grounded regularization method. Using MDL, perfect solutions are selected over approximations, independently of the optimization algorithm. We propose that unlike existing regularization techniques, MDL introduces the appropriate inductive bias to effectively counteract overfitting and promote generalization.
Problem

Research questions and friction points this paper is trying to address.

Networks trained with standard regularization ($L_1$, $L_2$, or none) fail to converge to correct solutions
MDL balances model complexity and data fit effectively
MDL promotes generalization by counteracting overfitting
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Minimum Description Length for regularization
Balances model complexity with data fit
Promotes generalization by counteracting overfitting
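The trade-off the paper relies on is the classic two-part MDL code: the total description length is the bits needed to encode the model plus the bits needed to encode the data given the model. A minimal sketch of that scoring rule is below; all numbers (corpus size, model sizes, probabilities) are illustrative assumptions, not figures from the paper.

```python
import math

def data_codelength_bits(probs):
    """Bits to encode the data under the model: -sum(log2 p(symbol))."""
    return -sum(math.log2(p) for p in probs)

def mdl_score(model_bits, probs):
    """Two-part MDL code: |model| + |data given model|, in bits."""
    return model_bits + data_codelength_bits(probs)

# Illustrative comparison (hypothetical numbers): an exact solution
# assigns probability 1 to every correct symbol, so its data cost is 0;
# a slightly larger-seeming "cheaper" approximation assigns 0.99.
n = 10_000  # symbols in the training corpus (assumed)
exact = mdl_score(model_bits=500, probs=[1.0] * n)   # 500 + 0 bits
approx = mdl_score(model_bits=400, probs=[0.99] * n) # 400 + ~145 bits

print(exact < approx)  # the exact solution wins under MDL
```

This illustrates why MDL selects exact solutions independently of the optimizer: as the corpus grows, the per-symbol data cost of an approximation accumulates and eventually outweighs any savings in model size.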