🤖 AI Summary
This work addresses interpretable modeling of genotype-to-phenotype maps for biological sequences (DNA, RNA, proteins), specifically the gauge ambiguity (the non-uniqueness of subsequence weights) that arises in overparameterized weighted-subsequence decompositions. The authors develop a unified framework coupling regularized regression in overparameterized weight space with Gaussian process (GP) modeling in function space, establishing an exact correspondence between weight-space regularizers and GP function-space priors. The framework constructs regularizers that realize arbitrary gauge choices together with explicit GP priors, derives analytic posterior distributions for gauge-fixed weights, enables efficient computation even for long sequences via a kernel trick for product-kernel priors, and characterizes the implicit function-space priors underlying the most common weight-space regularizers. Together, these results provide a unified, statistically rigorous, computationally tractable, and interpretable approach to inferring and decomposing sequence-function relationships.
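To make the claimed correspondence concrete (in illustrative notation of our own choosing, not taken from the paper): write each sequence-function map as $f = Xw$, where the columns of the embedding $X$ index subsequences and $w$ collects their weights, so that many different $w$ represent the same $f$. Then, assuming a positive-definite penalty matrix $\Lambda$,

$$
\hat{w} \;=\; \operatorname*{arg\,min}_{w \,:\, Xw = f} \; w^{\top} \Lambda\, w
\;=\; \Lambda^{-1} X^{\top} \big( X \Lambda^{-1} X^{\top} \big)^{-1} f,
\qquad
K \;=\; X \Lambda^{-1} X^{\top},
$$

so the regularizer $w^{\top}\Lambda\, w$ plays two roles at once: it selects a unique gauge-fixed representative $\hat{w}$ for each map, and through $K$ it induces the function-space prior under which $L_2$-penalized regression in weight space is equivalent to GP regression in function space.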
📝 Abstract
Mappings from biological sequences (DNA, RNA, protein) to quantitative measures of sequence functionality play an important role in contemporary biology. We are interested in the related tasks of (i) inferring predictive sequence-to-function maps and (ii) decomposing sequence-function maps to elucidate the contributions of individual subsequences. Because each sequence-function map can be written as a weighted sum over subsequences in multiple ways, meaningfully interpreting these weights requires "gauge-fixing," i.e., defining a unique representation for each map. Recent work has established that most existing gauge-fixed representations arise as the unique solutions to $L_2$-regularized regression in an overparameterized "weight space" where the choice of regularizer defines the gauge. Here, we establish the relationship between regularized regression in overparameterized weight space and Gaussian process approaches that operate in "function space," i.e., the space of all real-valued functions on a finite set of sequences. We disentangle how weight space regularizers both impose an implicit prior on the learned function and restrict the optimal weights to a particular gauge. We also show how to construct regularizers that correspond to arbitrary explicit Gaussian process priors combined with a wide variety of gauges. Next, we derive the distribution of gauge-fixed weights implied by the Gaussian process posterior and demonstrate that even for long sequences this distribution can be efficiently computed for product-kernel priors using a kernel trick. Finally, we characterize the implicit function space priors associated with the most common weight space regularizers. Overall, our framework unifies and extends our ability to infer and interpret sequence-function relationships.
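The "kernel trick" for product-kernel priors mentioned above can be sketched generically: if the prior covariance factorizes across positions, $K(s, s') = \prod_{p=1}^{L} k_p(s_p, s'_p)$, then each kernel entry costs only $O(L)$ to evaluate, with no explicit sum over the exponentially many subsequence features. Below is a minimal, hypothetical sketch in Python; the function names, integer encoding, and per-site kernel matrices are our assumptions, and it shows only standard GP regression under a product kernel, not the paper's derivation of the posterior over gauge-fixed weights.

```python
import numpy as np

def product_kernel(s, t, site_kernels):
    """Evaluate K(s, t) = prod_p k_p(s_p, t_p) for integer-encoded sequences.

    s, t         : length-L integer arrays (letter index at each position)
    site_kernels : list of L matrices; site_kernels[p][a, b] = k_p(a, b)
    """
    return np.prod([site_kernels[p][a, b] for p, (a, b) in enumerate(zip(s, t))])

def gp_posterior_mean(X_train, y_train, X_test, site_kernels, noise_var=0.1):
    """Posterior mean of GP regression under a product-kernel prior."""
    # Gram matrix on training sequences and cross-covariance to test sequences;
    # each entry costs O(L), independent of the number of subsequence features.
    K = np.array([[product_kernel(s, t, site_kernels) for t in X_train] for s in X_train])
    K_star = np.array([[product_kernel(s, t, site_kernels) for t in X_train] for s in X_test])
    alpha = np.linalg.solve(K + noise_var * np.eye(len(X_train)), y_train)
    return K_star @ alpha

# Hypothetical usage: length-8 sequences over a 4-letter (DNA-like) alphabet,
# with the same positive-definite kernel at every site.
rng = np.random.default_rng(0)
L, A = 8, 4
k_site = np.eye(A) + 0.5   # covariance 1.5 if letters match, 0.5 otherwise
site_kernels = [k_site] * L
X_train = rng.integers(0, A, size=(30, L))
y_train = rng.normal(size=30)
X_test = rng.integers(0, A, size=(5, L))
print(gp_posterior_mean(X_train, y_train, X_test, site_kernels))
```

This sketch stops at the function-space posterior; the paper's additional step is to push that posterior through a gauge-fixing map to obtain the distribution of gauge-fixed weights.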