🤖 AI Summary
This work addresses the challenge that modern neural networks struggle with selective forgetting and long-range extrapolation in tasks exhibiting algebraic structure, such as modular arithmetic, cyclic reasoning, and Lie group dynamics. To overcome this limitation, the authors propose the Bilinear Multilayer Perceptron (Bilinear MLP), which explicitly incorporates multiplicative interactions as an inductive bias to encourage structurally disentangled internal representations. Theoretical analysis reveals that this architecture possesses a "non-mixing" property under gradient flow, causing functional components to separate into orthogonal subspaces, a property that facilitates precise model editing. Empirical results demonstrate that, compared to conventional pointwise nonlinear networks, the Bilinear MLP recovers operators aligned with the underlying algebraic structure, significantly improving performance on targeted forgetting and generalization tasks.
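To make the core architectural idea concrete, here is a minimal sketch of the multiplicative interaction the summary describes: a bilinear layer computes the elementwise product of two linear maps of the input, rather than a pointwise nonlinearity of a single map. The function name, shapes, and initialization are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def bilinear_layer(x, W1, W2):
    """Bilinear MLP layer: (W1 x) * (W2 x), elementwise.

    Unlike a pointwise nonlinearity such as relu(W x), the output is
    quadratic in x via an explicit multiplicative interaction, which is
    the inductive bias this work studies. Illustrative sketch only.
    """
    return (W1 @ x) * (W2 @ x)

# Tiny demonstration with random weights (shapes are arbitrary choices).
rng = np.random.default_rng(0)
d_in, d_out = 4, 3
W1 = rng.normal(size=(d_out, d_in))
W2 = rng.normal(size=(d_out, d_in))
x = rng.normal(size=d_in)
y = bilinear_layer(x, W1, W2)
```

Because the layer is a product of two linear maps, it is homogeneous of degree two: scaling the input by `c` scales the output by `c**2`, a simple signature of the multiplicative (rather than pointwise) structure.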
📝 Abstract
Selective unlearning and long-horizon extrapolation remain fragile in modern neural networks, even when tasks have underlying algebraic structure. In this work, we argue that these failures arise not solely from optimization or unlearning algorithms, but from how models structure their internal representations during training. We investigate whether explicit multiplicative interactions, introduced as an architectural inductive bias through Bilinear MLPs, promote structural disentanglement. We show analytically that bilinear parameterizations possess a "non-mixing" property under gradient flow, whereby functional components separate into orthogonal subspaces. This provides a mathematical foundation for surgical model modification. We validate this hypothesis through a series of controlled experiments spanning modular arithmetic, cyclic reasoning, Lie group dynamics, and targeted unlearning benchmarks. Unlike pointwise nonlinear networks, multiplicative architectures recover operators aligned with the underlying algebraic structure. Our results suggest that model editability and generalization are constrained by representational structure, and that architectural inductive bias plays a central role in enabling reliable unlearning.
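As a toy illustration of why multiplicative interactions suit the modular-arithmetic tasks mentioned above (this is a standard Fourier-feature construction, not the paper's specific operator), modular addition becomes exact angle addition on the unit circle, and the angle-addition formula is itself a bilinear form of the two embeddings. The modulus and helper names below are hypothetical.

```python
import numpy as np

P = 7  # illustrative modulus

def fourier_embed(a, P):
    # Embed residue a as a point on the unit circle.
    theta = 2 * np.pi * a / P
    return np.array([np.cos(theta), np.sin(theta)])

def mod_add_via_rotation(a, b, P):
    # Complex multiplication of unit phases adds angles; its real and
    # imaginary parts are bilinear in the two embeddings, i.e. exactly
    # the kind of multiplicative interaction a Bilinear MLP can express.
    za, zb = fourier_embed(a, P), fourier_embed(b, P)
    re = za[0] * zb[0] - za[1] * zb[1]
    im = za[0] * zb[1] + za[1] * zb[0]
    angle = np.arctan2(im, re) % (2 * np.pi)
    return int(round(angle * P / (2 * np.pi))) % P
```

No pointwise nonlinearity is needed here: the true operator for the cyclic group is recovered by a product of linear features, matching the abstract's claim that multiplicative architectures align with the underlying algebraic structure.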