🤖 AI Summary
Traditional protein engineering approaches struggle to make controllable, non-trivial sequence edits to template proteins while preserving their native-like properties. This work proposes a variable-length sequence-to-sequence modeling framework based on edit flows, which integrates discrete edit operations with flow matching to jointly predict both the location and the type of each mutation, learning from evolutionary trajectories among related proteins. By combining evolutionary information with a controllable editing mechanism, the method departs from conventional autoregressive and masked language models. Evaluated on the UniRef and OAS datasets, it models sequence distributions with quality comparable to state-of-the-art masked language models while substantially improving the generation of natural yet structurally novel protein variants.
📝 Abstract
We introduce EvoFlows, a variable-length sequence-to-sequence protein modeling approach uniquely suited to protein engineering. Unlike autoregressive and masked language models, EvoFlows perform a limited, controllable number of insertions, deletions, and substitutions on a template protein sequence. In other words, EvoFlows predict not only _which_ mutation to perform, but also _where_ it should occur. Our approach leverages edit flows to learn mutational trajectories between evolutionarily related protein sequences, simultaneously modeling distributions of related natural proteins and the mutational paths connecting them. Through extensive _in silico_ evaluation on diverse protein communities from UniRef and OAS, we demonstrate that EvoFlows capture protein sequence distributions with a quality comparable to leading masked language models commonly used in protein engineering, while showing improved ability to generate non-trivial yet natural-like mutants from a given template protein.
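To make the edit-based view concrete, the sketch below shows how a set of typed, positioned edits (substitutions, insertions, deletions) can be applied to a template sequence. This is a hypothetical simplification for illustration only, not the paper's implementation; the `(op, pos, residue)` edit format and the `apply_edits` helper are assumptions.

```python
# Illustrative sketch only: applying typed, positioned edits to a template
# protein sequence. The (op, pos, residue) format is a hypothetical
# simplification of the insertion/deletion/substitution operations described
# in the abstract, not the actual EvoFlows representation.

def apply_edits(template, edits):
    """Apply (op, pos, residue) edits to a protein sequence string.

    Edits are applied right-to-left so that positions of yet-unapplied
    edits remain valid as the sequence length changes.
    """
    seq = list(template)
    for op, pos, res in sorted(edits, key=lambda e: e[1], reverse=True):
        if op == "sub":    # substitution: replace the residue at pos
            seq[pos] = res
        elif op == "ins":  # insertion: insert a residue before pos
            seq.insert(pos, res)
        elif op == "del":  # deletion: remove the residue at pos
            del seq[pos]
        else:
            raise ValueError(f"unknown edit op: {op!r}")
    return "".join(seq)

# Example: three edits on a short (made-up) template sequence
variant = apply_edits(
    "MKTAYIAK",
    [("sub", 2, "S"), ("ins", 4, "G"), ("del", 6, None)],
)
print(variant)  # → MKSAGYIK
```

Note how a variant can differ from its template in length, which is exactly the variable-length behavior that fixed-position substitution models cannot express.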