🤖 AI Summary
Current transcriptomic foundation models (TFMs) suffer from poor reproducibility and a lack of consensus on best practices due to highly fragmented training objectives and architectures. To address this, the authors propose a unified, open-source, modular TFM framework centered on the Whole-Cell Expression Decoder (WCED), a self-supervised pretraining objective that leverages the [CLS] token to model global gene expression patterns across the entire cell. WCED can be jointly optimized with masked language modeling (MLM) in a multi-task setting. The framework supports diverse input representations, including log-normalized counts and BERT-style tokenization, and integrates seamlessly with CELLxGENE data. Evaluated across more than a dozen single-cell datasets, WCED-based models match or surpass state-of-the-art models (e.g., scGPT) in both zero-shot and fine-tuning settings on three core tasks: cell type annotation, batch correction, and perturbation prediction.
📝 Abstract
Transcriptomic foundation models (TFMs) have recently emerged as powerful tools for analyzing gene expression in cells and tissues, supporting key tasks such as cell-type annotation, batch correction, and perturbation prediction. However, the diversity of model implementations and training strategies across recent TFMs, though promising, makes it challenging to isolate the contribution of individual design choices or evaluate their potential synergies. This hinders the field's ability to converge on best practices and limits the reproducibility of insights across studies. We present BMFM-RNA, an open-source, modular software package that unifies diverse TFM pretraining and fine-tuning objectives within a single framework. Leveraging this capability, we introduce a novel training objective, whole-cell expression decoder (WCED), which captures global expression patterns using an autoencoder-like CLS bottleneck representation. In this paper, we describe the framework, supported input representations, and training objectives. We evaluated four model checkpoints pretrained on CELLxGENE using combinations of masked language modeling (MLM), WCED, and multitask learning. Using the benchmarking capabilities of BMFM-RNA, we show that WCED-based models achieve performance that matches or exceeds state-of-the-art approaches like scGPT across more than a dozen datasets in both zero-shot and fine-tuning tasks. BMFM-RNA, available as part of the biomed-multi-omics project (https://github.com/BiomedSciAI/biomed-multi-omic), offers a reproducible foundation for systematic benchmarking and community-driven exploration of optimal TFM training strategies, enabling the development of more effective tools to leverage the latest advances in AI for understanding cell biology.
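To make the WCED idea concrete, here is a minimal, hypothetical NumPy sketch of the objective as described above: the [CLS] token embedding acts as an autoencoder-like bottleneck, and a decoder head reconstructs the cell's full (e.g., log-normalized) expression profile from it, with the reconstruction loss optionally combined with an MLM term in a multi-task setting. The shapes, the linear decoder, and the MSE loss are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_genes, d_model = 2000, 64

# Hypothetical encoder output: the [CLS] embedding summarizing the whole cell.
cls_embedding = rng.normal(size=d_model)

# Illustrative WCED head: a linear decoder mapping the CLS bottleneck back to
# a prediction of the full per-gene expression vector.
W = rng.normal(scale=d_model ** -0.5, size=(d_model, n_genes))
b = np.zeros(n_genes)
predicted_expression = cls_embedding @ W + b  # shape: (n_genes,)

# Target: the cell's log-normalized counts (simulated here).
target_expression = np.log1p(rng.poisson(1.0, size=n_genes).astype(float))

# Whole-cell reconstruction loss (MSE is one plausible choice).
wced_loss = np.mean((predicted_expression - target_expression) ** 2)

# In the multi-task setting, WCED is combined with a masked-token loss, e.g.:
mlm_loss = 0.7  # placeholder for a masked-gene cross-entropy term
total_loss = wced_loss + mlm_loss
```

The key design point the abstract highlights is the bottleneck: because the decoder sees only the single [CLS] vector, pretraining forces that representation to encode global expression patterns of the entire cell rather than local, per-token context alone.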