🤖 AI Summary
Conventional drug–target affinity prediction models suffer from biased evaluation: random test-set splitting artificially enriches high molecular similarity samples, thereby masking model failure in low-similarity generalization scenarios. Method: We propose a similarity-aware evaluation framework featuring a novel, controllable similarity-distribution data splitting strategy, formulated as a differentiable optimization problem and solved efficiently via gradient descent. Using fingerprint-based molecular similarity metrics, we conduct multi-model benchmarking across four standard datasets. Results: Our framework reveals severe performance degradation—averaging 30–50% drops—in low-similarity regimes, exposing critical limitations of existing methods. It significantly enhances evaluation fidelity and model interpretability, establishing a new paradigm for reliable deployment of affinity prediction models.
📝 Abstract
Drug-target binding affinity prediction is a fundamental task for drug discovery. It has been extensively explored in literature and promising results are reported. However, in this paper, we demonstrate that the results may be misleading and cannot be well generalized to real practice. The core observation is that the canonical randomized split of a test set in conventional evaluation leaves the test set dominated by samples with high similarity to the training set. The performance of models is severely degraded on samples with lower similarity to the training set but the drawback is highly overlooked in current evaluation. As a result, the performance can hardly be trusted when the model meets low-similarity samples in real practice. To address this problem, we propose a framework of similarity aware evaluation in which a novel split methodology is proposed to adapt to any desired distribution. This is achieved by a formulation of optimization problems which are approximately and efficiently solved by gradient descent. We perform extensive experiments across five representative methods in four datasets for two typical target evaluations and compare them with various counterpart methods. Results demonstrate that the proposed split methodology can significantly better fit desired distributions and guide the development of models. Code is released at https://github.com/Amshoreline/SAE/tree/main.