🤖 AI Summary
DPO faces scalability bottlenecks in aligning protein language models with experimental design objectives: the number of training preference pairs grows quadratically with sequence count, rendering training prohibitively expensive even on moderate-scale datasets. To address this, we propose Cluster-DPO—the first efficient preference optimization framework tailored to protein sequence space. It reduces redundancy by clustering sequence embeddings to prune non-informative preference pairs and introduces intra-cluster likelihood amortization to substantially lower computational overhead while preserving gradient consistency. Evaluated on three protein engineering tasks—enzyme activity, thermostability, and binding affinity—Cluster-DPO achieves in vitro and in vivo performance comparable to standard DPO, with 1.8–3.7× faster training. Crucially, speedup scales favorably with dataset size. Cluster-DPO thus establishes a scalable new paradigm for aligning large-scale protein language models with experimental objectives.
📝 Abstract
Direct Preference Optimization (DPO) is an effective approach for aligning protein language models with experimental design goals. However, DPO faces a scalability bottleneck: the number of possible training pairs grows quadratically with the number of labeled sequences, leading to prohibitive training times even for modestly sized datasets. We introduce g-DPO, a framework that (i) uses sequence space clustering to prune redundant pairs while preserving training signal, and (ii) amortizes likelihood computations with group-based approximations. Across three protein engineering tasks, g-DPO maintains in-silico and in-vitro performance that is statistically indistinguishable from standard DPO, while converging 1.8 to 3.7 times faster, with greater gains expected as the size of the dataset increases.