🤖 AI Summary
This study systematically reviews machine learning, particularly deep learning, for gene regulatory network (GRN) inference, addressing the challenge of modeling nonlinear and dynamic regulatory interactions from high-throughput transcriptomic data (including bulk and single-cell RNA-seq). We propose, for the first time, a unified methodological taxonomy encompassing supervised, unsupervised, semi-supervised, and contrastive learning paradigms. We standardize benchmark datasets (e.g., DREAM, SINCERITIES) and evaluation metrics (AUPR, AUROC, F-score), and empirically delineate the performance limits of current models. Key contributions include: (1) synthesizing advances in deep neural architectures to elucidate their superior capacity for capturing complex, context-dependent regulatory logic; (2) establishing a reproducible, standardized benchmarking framework with practical implementation guidelines; and (3) laying a methodological foundation for both next-generation GRN algorithm development and mechanism-driven biological discovery.
📝 Abstract
Gene Regulatory Networks (GRNs) are intricate biological systems that control gene expression in response to environmental and developmental cues. Advances in computational biology, coupled with high-throughput sequencing technologies, have significantly improved the accuracy of GRN inference and modeling. Modern approaches increasingly leverage artificial intelligence (AI), particularly machine learning techniques, including supervised, unsupervised, semi-supervised, and contrastive learning, to analyze large-scale omics data and uncover regulatory interactions among genes. To support both the application of GRN inference in studying gene regulation and the development of novel machine learning methods, we present a comprehensive review of machine learning-based GRN inference methodologies, along with the datasets and evaluation metrics commonly used. Special emphasis is placed on the emerging role of cutting-edge deep learning techniques in enhancing inference performance. Potential future directions for improving GRN inference are also discussed.
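The evaluation metrics named above (AUPR, AUROC, F-score) are typically computed by scoring each candidate regulatory edge against a gold-standard edge list. The sketch below illustrates this with scikit-learn on a toy, entirely fabricated edge list; real benchmarks such as DREAM supply curated gold standards, and the 0.5 threshold used for the F-score is an arbitrary choice for demonstration.

```python
# Toy illustration of GRN edge evaluation; the labels and scores below are
# fabricated for demonstration, not drawn from any real benchmark.
from sklearn.metrics import average_precision_score, roc_auc_score, f1_score

# Gold standard: 1 = true regulatory edge, 0 = no edge (flattened candidate list)
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
# Confidence scores an inference method assigned to the same candidate edges
y_score = [0.9, 0.2, 0.8, 0.4, 0.3, 0.1, 0.7, 0.6]

aupr = average_precision_score(y_true, y_score)   # area under precision-recall curve
auroc = roc_auc_score(y_true, y_score)            # area under ROC curve

# F-score needs binary predictions; here we threshold scores at an arbitrary 0.5
y_pred = [1 if s >= 0.5 else 0 for s in y_score]
fscore = f1_score(y_true, y_pred)

print(aupr, auroc, fscore)  # 0.95 0.9375 0.75 for this toy data
```

AUPR is generally preferred over AUROC for GRN benchmarks because true edges are a small minority of all gene pairs, and precision-recall curves are more informative under such class imbalance.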