🤖 AI Summary
Existing single-cell RNA sequencing (scRNA-seq) annotation models typically label cells independently, ignoring batch-level contextual dependencies and the mutual-exclusivity constraint inherent in cell type labeling, constraints that human experts routinely exploit through cross-cluster collaborative reasoning. Method: We formalize CellPuzzles, a novel task requiring context-aware, constraint-consistent cell type annotation at the batch level, and construct the accompanying CellPuzzles benchmark, the first to support reasoning-trajectory distillation and evaluation across multiple tissues, diseases, and donors. To address the task, we introduce the first reinforcement learning framework with batch-level rewards, combining supervised fine-tuning with Proximal Policy Optimization (PPO) guided by batch-wise accuracy. Results: Our Cell-o1 (7B) model achieves 32.9% batch-level accuracy on CellPuzzles, outperforming OpenAI's o1 by over 73%, and demonstrates expert-like stepwise reasoning and strong generalization across diverse biological contexts.
📝 Abstract
Cell type annotation is a key task in analyzing the heterogeneity of single-cell RNA sequencing data. Although recent foundation models automate this process, they typically annotate cells independently, without considering batch-level cellular context or providing explanatory reasoning. In contrast, human experts often annotate distinct cell types for different cell clusters based on their domain knowledge. To mimic this workflow, we introduce the CellPuzzles task, where the objective is to assign unique cell types to a batch of cells. This benchmark spans diverse tissues, diseases, and donor conditions, and requires reasoning across the batch-level cellular context to ensure label uniqueness. We find that off-the-shelf large language models (LLMs) struggle on CellPuzzles, with the best baseline (OpenAI's o1) achieving only 19.0% batch-level accuracy. To fill this gap, we propose Cell-o1, a 7B LLM trained via supervised fine-tuning on distilled reasoning traces, followed by reinforcement learning with batch-level rewards. Cell-o1 achieves state-of-the-art performance, outperforming o1 by over 73% and generalizing well across contexts. Further analysis of training dynamics and reasoning behaviors provides insights into batch-level annotation performance and emergent expert-like reasoning. Code and data are available at https://github.com/ncbi-nlp/cell-o1.
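The batch-level objective described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names, the binary reward shape, and the example labels are all assumptions; the only grounded facts are that a batch must assign unique cell types to its cells and that batch-level accuracy counts a batch as correct only when the whole assignment is right.

```python
def batch_correct(predicted: list[str], gold: list[str]) -> bool:
    """A batch counts as correct only if the assignment respects mutual
    exclusivity (no cell type reused) and every cell's label matches."""
    # Uniqueness: each candidate cell type may be used at most once per batch.
    if len(set(predicted)) != len(predicted):
        return False
    # Exact match on every cell in the batch.
    return predicted == gold

def batch_level_accuracy(batches) -> float:
    """Fraction of batches solved exactly (the headline metric above)."""
    results = [batch_correct(pred, gold) for pred, gold in batches]
    return sum(results) / len(results)

def batch_reward(predicted: list[str], gold: list[str]) -> float:
    """Assumed RL reward: 1.0 for a fully correct, constraint-consistent
    batch, 0.0 otherwise (the summary only says rewards are batch-wise)."""
    return 1.0 if batch_correct(predicted, gold) else 0.0

# Tiny illustration with hypothetical labels: 1 of 3 batches is correct.
batches = [
    (["T cell", "B cell"], ["T cell", "B cell"]),  # correct
    (["T cell", "T cell"], ["T cell", "B cell"]),  # violates uniqueness
    (["B cell", "T cell"], ["T cell", "B cell"]),  # wrong assignment
]
print(batch_level_accuracy(batches))
```

An all-or-nothing batch reward like this is stricter than per-cell accuracy: a single mislabeled cell, or any duplicate label, zeroes out the whole batch, which is what forces the model to reason jointly across the batch rather than annotate each cell in isolation.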