RewriteNets: End-to-End Trainable String-Rewriting for Generative Sequence Modeling

📅 2026-01-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes RewriteNets, a neural architecture grounded in explicit parallel string rewriting, addressing two limitations of conventional sequence models such as Transformers: the quadratic computational complexity of dense attention and poor systematic generalization. Each RewriteNet layer contains learnable rewriting rules and models sequences through fuzzy pattern matching, conflict resolution, and non-overlapping rule selection. A straight-through Gumbel-Sinkhorn estimator makes the discrete rule assignment trainable end to end, effectively casting sequence modeling as a differentiable symbolic rewriting process with an explicit structural inductive bias. On the SCAN length-split task, RewriteNets achieve 98.7% accuracy, substantially outperforming LSTM and Transformer baselines, while also demonstrating superior computational efficiency.
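The Gumbel-Sinkhorn estimator mentioned in the summary can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the function names, the number of Sinkhorn iterations, and the row-wise argmax rounding are all illustrative assumptions; an autodiff framework would be needed for the actual straight-through gradient.

```python
import numpy as np

def logsumexp(a, axis, keepdims=True):
    # Numerically stable log-sum-exp along one axis.
    m = a.max(axis=axis, keepdims=True)
    s = m + np.log(np.exp(a - m).sum(axis=axis, keepdims=True))
    return s if keepdims else np.squeeze(s, axis=axis)

def log_sinkhorn(log_alpha, n_iters=30):
    # Alternately normalize rows and columns in log space; the result
    # approaches a doubly stochastic matrix as n_iters grows.
    for _ in range(n_iters):
        log_alpha = log_alpha - logsumexp(log_alpha, axis=1)
        log_alpha = log_alpha - logsumexp(log_alpha, axis=0)
    return np.exp(log_alpha)

def gumbel_sinkhorn(logits, tau=1.0, seed=0):
    # Perturb assignment logits with Gumbel(0, 1) noise, then relax the
    # discrete assignment with Sinkhorn normalization.
    rng = np.random.default_rng(seed)
    u = rng.uniform(1e-9, 1.0 - 1e-9, size=logits.shape)
    gumbel = -np.log(-np.log(u))
    soft = log_sinkhorn((logits + gumbel) / tau)
    # Hard assignment: one-hot row-wise argmax. In an autodiff framework the
    # straight-through trick uses `hard` in the forward pass but routes
    # gradients through `soft`: hard + (soft - stop_gradient(soft)).
    hard = np.zeros_like(soft)
    hard[np.arange(soft.shape[0]), soft.argmax(axis=1)] = 1.0
    return soft, hard
```

As the temperature `tau` decreases, the soft matrix itself approaches a permutation; the hard rounding shown here does not guarantee a valid permutation when rows collide, so a Hungarian-style rounding could be substituted.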

📝 Abstract
Dominant sequence models like the Transformer represent structure implicitly through dense attention weights, incurring quadratic complexity. We propose RewriteNets, a novel neural architecture built on an alternative paradigm: explicit, parallel string rewriting. Each layer in a RewriteNet contains a set of learnable rules. For each position in an input sequence, the layer performs four operations: (1) fuzzy matching of rule patterns, (2) conflict resolution via a differentiable assignment operator to select non-overlapping rewrites, (3) application of the chosen rules to replace input segments with output segments of potentially different lengths, and (4) propagation of untouched tokens. While the discrete assignment of rules is non-differentiable, we employ a straight-through Gumbel-Sinkhorn estimator, enabling stable end-to-end training. We evaluate RewriteNets on algorithmic, compositional, and string manipulation tasks, comparing them against strong LSTM and Transformer baselines. Results show that RewriteNets excel at tasks requiring systematic generalization (achieving 98.7% accuracy on the SCAN benchmark's length split) and are computationally more efficient than Transformers. We also provide an analysis of learned rules and an extensive ablation study, demonstrating that this architecture presents a promising direction for sequence modeling with explicit structural inductive biases.
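The four per-layer operations described in the abstract can be sketched as a single inference-time forward pass. Everything here is an illustrative assumption: the name `layer_forward`, the dot-product match score, the fixed threshold, and, in particular, the greedy non-overlapping selection, which stands in for the paper's differentiable assignment operator.

```python
import numpy as np

def layer_forward(x, rules, threshold=0.5):
    """One hypothetical RewriteNet layer.

    x: (T, d) array of token embeddings.
    rules: list of (pattern, output) pairs; pattern is (p, d), output is (q, d),
           and p may differ from q (rewrites can change sequence length).
    """
    T, d = x.shape
    # (1) Fuzzy matching: score every rule pattern at every start position.
    candidates = []
    for pat, out in rules:
        p = pat.shape[0]
        for t in range(T - p + 1):
            score = float(np.mean(np.sum(x[t:t + p] * pat, axis=1)))
            if score > threshold:
                candidates.append((score, t, p, out))
    # (2) Conflict resolution: keep the best-scoring non-overlapping matches
    # (a greedy stand-in for the paper's differentiable assignment).
    candidates.sort(key=lambda c: -c[0])
    chosen, covered = [], set()
    for score, t, p, out in candidates:
        span = set(range(t, t + p))
        if not span & covered:
            chosen.append((t, p, out))
            covered |= span
    # (3) Apply chosen rules and (4) propagate untouched tokens.
    chosen.sort(key=lambda c: c[0])
    pieces, cursor = [], 0
    for t, p, out in chosen:
        pieces.append(x[cursor:t])  # untouched prefix
        pieces.append(out)          # output segment, possibly a new length
        cursor = t + p
    pieces.append(x[cursor:])       # untouched suffix
    return np.concatenate(pieces, axis=0)
```

Stacking several such layers, each with its own learned rule set, would give the overall architecture; training requires replacing the greedy step with the differentiable assignment described in the abstract.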
Problem

Research questions and friction points this paper is trying to address.

sequence modeling
systematic generalization
string rewriting
computational efficiency
structural inductive biases
Innovation

Methods, ideas, or system contributions that make the work stand out.

string rewriting
explicit structural bias
differentiable assignment
systematic generalization
Gumbel-Sinkhorn estimator