🤖 AI Summary
To address the limited nonlinear expressive capacity of deep neural networks, this paper proposes a lightweight Quadratic Enhancement Module (QEM) that models pairwise (second-order) feature interactions at every layer via a differentiable quadratic transformation. QEM keeps its parameter and compute overhead minimal through low-rank decomposition, weight sharing, and structured sparsification. The module is plug-and-play and compatible with mainstream architectures, including CNNs and Transformers. Experiments show consistent accuracy gains of +0.8–2.3% across diverse tasks: ImageNet image classification, GLUE text classification, and LLaMA-2 fine-tuning, while preserving training and inference efficiency. Ablation studies confirm that QEM substantially strengthens nonlinear modeling capability at negligible cost, making it an effective, general, and practical way to increase representational power in modern deep learning systems.
📝 Abstract
The combination of linear transformations and nonlinear activation functions forms the foundation of most modern deep neural networks, enabling them to approximate highly complex functions. This paper explores introducing quadratic transformations to further increase the nonlinearity of neural networks, with the aim of enhancing the performance of existing architectures. To limit parameter and computational overhead, we propose a lightweight quadratic enhancer that combines low-rank structure, weight sharing, and sparsification. For a fixed architecture, the proposed approach introduces quadratic interactions between features at every layer while adding only a negligible number of additional parameters and forward computations. We conduct proof-of-concept experiments for the proposed method across three tasks: image classification, text classification, and fine-tuning large language models. In all tasks, the proposed approach demonstrates clear and substantial performance gains.
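To make the idea concrete, here is a minimal sketch of one common low-rank parameterization of such a quadratic enhancement: each output unit augments its usual linear response with a rank-1 second-order term, `(u_i · x)(v_i · x)`, so the full quadratic form `x^T A_i x` is factored as `A_i ≈ u_i v_i^T` and costs only `O(d)` extra parameters per unit instead of `O(d^2)`. The function name, the mixing coefficient `alpha`, and the rank-1 factorization are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
def quadratic_enhance(x, W, b, U, V, alpha=0.1):
    """Linear layer augmented with a low-rank (rank-1 per unit) quadratic term.

    Output unit i computes:  W[i]·x + b[i] + alpha * (U[i]·x) * (V[i]·x)
    The second-order term captures pairwise feature interactions; with
    alpha = 0 the layer reduces to a plain linear transformation.
    """
    def dot(row, vec):
        return sum(r * v for r, v in zip(row, vec))

    return [
        dot(W[i], x) + b[i] + alpha * dot(U[i], x) * dot(V[i], x)
        for i in range(len(W))
    ]


# Toy example: 2-D input, identity linear map, crossed quadratic factors.
x = [1.0, 2.0]
W = [[1.0, 0.0], [0.0, 1.0]]
b = [0.0, 0.0]
U = [[1.0, 0.0], [0.0, 1.0]]
V = [[0.0, 1.0], [1.0, 0.0]]

y = quadratic_enhance(x, W, b, U, V, alpha=0.5)
# linear part = [1, 2]; quadratic part = [(1)(2), (2)(1)] = [2, 2]
# y = [1 + 0.5*2, 2 + 0.5*2] = [2.0, 3.0]
```

Weight sharing (reusing `U`, `V` across layers) and sparsifying their rows would further shrink the overhead, which is how the enhancer stays lightweight at network scale.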