🤖 AI Summary
This paper identifies a critical reliability flaw in machine learning oracle benchmarks for biological sequence design: over 70% of methods exhibit rank reversals when re-evaluated under different oracle models, including oracles that differ only in architecture or training seed, pointing to poor out-of-distribution generalization as a root cause.
Method: The authors systematically characterize how oracle inconsistency undermines benchmark validity and propose a hybrid evaluation framework that integrates biophysically grounded metrics (including stability, foldability, and solubility) and constrains the oracle's scoring domain to sequences it can plausibly evaluate, improving the robustness of the design procedure.
Contribution/Results: Through large-scale reproduction of 12 sequence design methods, cross-oracle consistency analysis, and out-of-distribution generalization diagnostics, the work shows that current benchmark rankings are unstable and that the proposed evaluation suite makes the design procedure substantially more robust. It lays the groundwork for trustworthy benchmarks for protein and nucleic acid design, grounded in empirical feasibility and physical realism.
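A minimal sketch of such a cross-oracle consistency check, assuming hypothetical oracle callables that map sequences to scores (the names and data layout are illustrative, not from the paper): rank the methods under two oracles that differ only in training seed and compare the rankings with Kendall's tau.

```python
# Illustrative sketch of a cross-oracle consistency check; all names are
# hypothetical. Each oracle is a callable mapping a list of sequences to
# an array of fitness scores.
import numpy as np
from scipy.stats import kendalltau

def rank_methods(oracle, designs_per_method):
    """Rank design methods by the mean oracle score of their generated sequences."""
    mean_scores = {m: float(np.mean(oracle(seqs)))
                   for m, seqs in designs_per_method.items()}
    ordered = sorted(mean_scores, key=mean_scores.get, reverse=True)
    return {m: rank for rank, m in enumerate(ordered)}  # rank 0 = best

def cross_oracle_consistency(oracle_a, oracle_b, designs_per_method):
    """Kendall's tau between the method rankings induced by two oracles.

    Tau near 1 means the oracles agree on relative method performance;
    a low or negative tau is the rank-reversal symptom described above.
    """
    ranks_a = rank_methods(oracle_a, designs_per_method)
    ranks_b = rank_methods(oracle_b, designs_per_method)
    methods = sorted(designs_per_method)
    tau, _ = kendalltau([ranks_a[m] for m in methods],
                        [ranks_b[m] for m in methods])
    return tau
```

Applied to oracles retrained with different seeds on the same data, a tau well below 1 would reproduce the instability the paper reports.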
📝 Abstract
Machine learning methods can automate the in silico design of biological sequences, aiming to reduce costs and accelerate medical research. Given the limited access to wet labs, in silico design methods commonly use an oracle model to evaluate de novo generated sequences. However, the use of different oracle models across methods makes it challenging to compare them reliably, motivating the question: are in silico sequence design benchmarks reliable? In this work, we examine 12 sequence design methods that utilise ML oracles common in the literature and find significant challenges with their cross-consistency and reproducibility. Indeed, oracles differing by architecture, or even just training seed, are shown to yield conflicting relative performance, with our analysis suggesting poor out-of-distribution generalisation as a key issue. To address these challenges, we propose supplementing the evaluation with a suite of biophysical measures to assess the viability of generated sequences and limiting the out-of-distribution sequences the oracle is required to score, thereby improving the robustness of the design procedure. Our work aims to highlight potential pitfalls in the current evaluation process and contribute to the development of robust benchmarks, ultimately driving the improvement of in silico design methods.
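To make the proposed mitigation concrete, here is a minimal sketch that supplements an oracle with biophysical viability checks and a simple nearest-neighbour in-distribution filter. Every predictor, threshold, and helper name here is an assumption for illustration; the paper does not prescribe this implementation.

```python
# Hypothetical evaluation pipeline: score a sequence with the oracle only if
# it passes all biophysical viability checks (e.g. stability, foldability,
# solubility) and lies near the oracle's training distribution.
import numpy as np

def in_distribution(seq, train_embeddings, embed, radius):
    """Nearest-neighbour proxy for being in-distribution: the sequence's
    embedding must lie within `radius` of some training embedding."""
    dists = np.linalg.norm(train_embeddings - embed(seq), axis=1)
    return dists.min() <= radius

def evaluate_designs(seqs, oracle, biophys_checks, train_embeddings, embed, radius):
    """Return (status, score) per sequence; sequences failing viability or the
    out-of-distribution filter are flagged instead of trusted to the oracle."""
    results = {}
    for seq in seqs:
        if not all(check(seq) for check in biophys_checks):
            results[seq] = ("infeasible", None)
        elif not in_distribution(seq, train_embeddings, embed, radius):
            results[seq] = ("out_of_distribution", None)
        else:
            results[seq] = ("scored", float(oracle(seq)))
    return results
```

Constraining what the oracle is asked to score in this way limits reliance on its out-of-distribution behaviour, which the analysis above identifies as the main failure mode.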