🤖 AI Summary
This study addresses the limited capability of 1D-sequence-only models to capture RNA structural stability and intermolecular interactions for property prediction, a limitation exacerbated by real-world challenges including data scarcity, incomplete labeling, sequencing noise, and computational inefficiency. Methodologically, we first provide a systematic evaluation demonstrating the value of RNA 2D/3D geometric context for property prediction; second, we curate RNA benchmark datasets with enhanced 2D/3D structural annotations; third, we evaluate geometry-aware graph neural networks that jointly encode 3D atomic coordinates and multi-scale structural representations. Experimental results show that geometry-aware models achieve an average 12% reduction in RMSE over sequence-based baselines. Notably, they significantly outperform pure sequence-based models in low-data regimes (<1k samples) and partial-labeling settings (≤30% labeled instances), where sequence-based models require roughly 2–5× more training data to match their performance.
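To make the "jointly encodes 3D atomic coordinates" idea concrete, here is a minimal sketch of one E(3)-equivariant message-passing step in the style of EGNN-like geometric GNNs. This is an illustrative assumption, not the paper's actual architecture: the toy linear maps `W_e`, `W_h`, `w_x` stand in for the learned MLPs, and the graph is given as a plain edge list. Squared pairwise distances enter the messages (rotation-invariant), while coordinate updates follow relative position vectors (rotation-equivariant).

```python
import numpy as np

def egnn_layer(h, x, edges, W_e, W_h, w_x):
    """One E(3)-equivariant message-passing step (illustrative sketch).

    h: (n, d) node features; x: (n, 3) atomic coordinates;
    edges: iterable of directed (i, j) pairs; W_e, W_h, w_x: toy
    linear weights standing in for the edge/node/coordinate MLPs.
    """
    n, _ = h.shape
    msg = np.zeros((n, W_e.shape[1]))
    dx = np.zeros_like(x)
    for i, j in edges:
        # Squared distance is invariant under rotations/translations of x.
        dist2 = np.sum((x[i] - x[j]) ** 2)
        m = np.tanh(np.concatenate([h[i], h[j], [dist2]]) @ W_e)
        msg[i] += m
        # Relative-vector update keeps coordinates rotation-equivariant.
        dx[i] += (x[i] - x[j]) * float(m @ w_x)
    h_new = np.tanh(np.concatenate([h, msg], axis=1) @ W_h)
    return h_new, x + dx
```

Because features depend on geometry only through distances, rotating the input coordinates leaves `h_new` unchanged and rotates the updated coordinates accordingly, which is the property that lets such models exploit 3D structure without data augmentation.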
📝 Abstract
Accurate prediction of RNA properties, such as stability and interactions, is crucial for advancing our understanding of biological processes and developing RNA-based therapeutics. RNA structures can be represented as 1D sequences, 2D topological graphs, or 3D all-atom models, each offering different insights into their function. Existing works predominantly focus on 1D sequence-based models, which overlook the geometric context provided by 2D and 3D structures. This study presents the first systematic evaluation of incorporating explicit 2D and 3D geometric information into RNA property prediction, considering not only performance but also real-world challenges such as limited data availability, partial labeling, sequencing noise, and computational efficiency. To this end, we introduce a newly curated set of RNA datasets with enhanced 2D and 3D structural annotations, providing a resource for model evaluation on RNA data. Our findings reveal that models with explicit geometry encoding generally outperform sequence-based models, with an average prediction RMSE reduction of around 12% across various RNA tasks, and excel in low-data and partial-labeling regimes, underscoring the value of explicitly incorporating geometric context. On the other hand, geometry-unaware sequence-based models are more robust under sequencing noise but often require around $2$–$5\times$ more training data to match the performance of geometry-aware models. Our study offers further insights into the trade-offs between different RNA representations in practical applications and addresses a significant gap in evaluating deep learning models for RNA tasks.