🤖 AI Summary
This study addresses the underutilization of the rich clinical insight embedded in pathology reports, which stems from the absence of effective retrieval and reasoning mechanisms for real-time clinical decision support. To bridge this gap, the authors propose a unified large language model framework based on Retrieval-Augmented Generation (RAG) that transforms static pathology archives into a dynamic, semantically searchable knowledge base capable of multi-task reasoning. The framework enables fully automated, high-quality cohort construction from free-text criteria and integrates natural language querying, clinical question answering, immunohistochemistry (IHC) protocol recommendation, and report restructuring. Evaluated on 70,000 multi-institutional pathology reports, the system achieves Recall@10 = 1.0, constructs cohorts in an average of 9.2 minutes with 91.3% agreement with manual review and no eligible cases missed, and receives an expert rating of 4.56 out of 5.
📝 Abstract
Pathology underpins modern diagnosis and cancer care, yet its most valuable asset, the accumulated experience encoded in millions of narrative reports, remains largely inaccessible. Although institutions are rapidly digitizing pathology workflows, storing data without effective mechanisms for retrieval and reasoning risks turning archives into passive data repositories, where institutional knowledge exists but cannot meaningfully inform patient care. True progress requires not only digitization but also the ability for pathologists to interrogate similar prior cases in real time while evaluating a new diagnostic dilemma. We present PathoScribe, a unified retrieval-augmented large language model (LLM) framework designed to transform static pathology archives into a searchable, reasoning-enabled living library. PathoScribe enables natural language case exploration, automated cohort construction, clinical question answering, immunohistochemistry (IHC) panel recommendation, and prompt-controlled report transformation within a single architecture. Evaluated on 70,000 multi-institutional surgical pathology reports, PathoScribe achieved perfect Recall@10 for natural language case retrieval and demonstrated high-quality retrieval-grounded reasoning (mean reviewer score 4.56/5). Critically, the system operationalized automated cohort construction from free-text eligibility criteria, assembling research-ready cohorts in minutes (mean 9.2 minutes) with 91.3% agreement with human reviewers and no eligible cases incorrectly excluded, an orders-of-magnitude reduction in time and cost compared with traditional manual chart review. This work establishes a scalable foundation for converting digital pathology archives from passive storage systems into active clinical intelligence platforms.
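The paper does not include PathoScribe's implementation, but the retrieval stage it describes (embed the reports, rank them against a free-text query, score the ranking with Recall@k) can be sketched in a few lines. The example below is a toy illustration only: it substitutes a bag-of-words "embedding" for the dense sentence-embedding model a real RAG system would use, and the report snippets, corpus, and function names (`embed`, `retrieve`, `recall_at_k`) are all assumptions, not the authors' code.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a lowercase bag-of-words term-count vector.
    A real system would use a dense sentence-embedding model."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, reports: list[str], k: int = 10) -> list[int]:
    """Return indices of the k reports most similar to a free-text query."""
    q = embed(query)
    ranked = sorted(range(len(reports)),
                    key=lambda i: cosine(q, embed(reports[i])),
                    reverse=True)
    return ranked[:k]

def recall_at_k(retrieved: list[int], relevant: set[int]) -> float:
    """Fraction of the relevant reports that appear among those retrieved."""
    return len(relevant & set(retrieved)) / len(relevant)

# Tiny illustrative corpus (synthetic report snippets, not real data).
reports = [
    "Invasive ductal carcinoma of the breast, ER positive, HER2 negative.",
    "Benign colonic mucosa with no evidence of dysplasia.",
    "Lobular carcinoma in situ of the breast, ER positive.",
    "Chronic gastritis; Helicobacter pylori not identified.",
]
top = retrieve("ER positive breast carcinoma", reports, k=2)
print(top, recall_at_k(top, relevant={0, 2}))  # both breast cases retrieved
```

In the generation step of a RAG pipeline, the retrieved reports would then be passed as context to the LLM; the Recall@k metric quoted in the paper measures only this retrieval step.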