🤖 AI Summary
This study addresses the lack of structured temporal sequences in sepsis-related clinical text by proposing the first end-to-end large language model (LLM) pipeline to automatically extract and temporally anchor Sepsis-3–defined events from unstructured, non-temporal discharge summaries and case reports. The method integrates prompt engineering with O1-preview and Llama 3.3 70B Instruct, joint entity–temporal span extraction, rule-augmented post-processing, and a cross-source validation framework (I2B2/MIMIC-IV). It presents the first systematic evaluation of LLMs' temporal grounding capability in clinical narratives, exposing inherent limitations of pure-text sequential reasoning and identifying multimodal augmentation as a critical future direction. On 2,139 case reports from the PubMed Open Access (PMOA) subset, the pipeline achieves an event match rate of 0.755 and a temporal ordering concordance of 0.932. We release the first publicly available Sepsis-3 textual temporal sequence corpus, enabling dynamic clinical modeling and interpretable AI research.
📝 Abstract
Clinical case reports and discharge summaries may be the most complete and accurate summarization of patient encounters, yet they are finalized, i.e., timestamped, only after the encounter. Complementary structured data streams become available sooner but suffer from incompleteness. To train models and algorithms on more complete and temporally fine-grained data, we construct a pipeline to phenotype, extract, and annotate time-localized findings within case reports using large language models. We apply our pipeline to generate an open-access textual time series corpus for Sepsis-3 comprising 2,139 case reports from the PubMed Open Access (PMOA) subset. To validate our system, we apply it to PMOA and to timeline annotations from I2B2/MIMIC-IV and compare the results to physician-expert annotations. We show high recovery rates of clinical findings (event match rates: 0.755 for O1-preview, 0.753 for Llama 3.3 70B Instruct) and strong temporal ordering (concordance: 0.932 for both O1-preview and Llama 3.3 70B Instruct). Our work characterizes the ability of LLMs to time-localize clinical findings in text, illustrating the limitations of LLM use for temporal reconstruction and suggesting several avenues of improvement via multimodal integration.
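The abstract does not spell out how the two headline metrics are computed. A plausible reading, sketched below under assumed definitions (the function names and the pairwise-agreement formulation are illustrative, not taken from the paper): event match rate as the fraction of expert-annotated events recovered by the LLM, and temporal ordering concordance as the fraction of matched event pairs whose relative order agrees between the LLM timeline and the expert timeline.

```python
from itertools import combinations

def event_match_rate(gold_events, pred_events):
    # Fraction of gold-standard (expert-annotated) events that the
    # extraction pipeline recovered; assumes events are comparable strings.
    gold, pred = set(gold_events), set(pred_events)
    return len(gold & pred) / len(gold) if gold else 0.0

def ordering_concordance(gold_times, pred_times):
    # gold_times / pred_times: dicts mapping an event name to a time
    # offset (e.g., hours from admission). Over events present in both,
    # count the pairs whose relative temporal order agrees.
    shared = sorted(set(gold_times) & set(pred_times))
    pairs = list(combinations(shared, 2))
    if not pairs:
        return 0.0

    def sign(x):
        return (x > 0) - (x < 0)

    agree = sum(
        sign(gold_times[a] - gold_times[b]) == sign(pred_times[a] - pred_times[b])
        for a, b in pairs
    )
    return agree / len(pairs)

# Toy example (hypothetical events, not from the corpus):
gold = {"fever": 0, "antibiotics": 6, "vasopressors": 12}
pred = {"fever": 0, "antibiotics": 4, "vasopressors": 10, "discharge": 48}
print(event_match_rate(gold, pred))       # 1.0: all 3 gold events recovered
print(ordering_concordance(gold, pred))   # 1.0: all pairwise orders agree
```

Under these definitions, concordance rewards correct relative ordering even when absolute offsets differ, which is consistent with the paper reporting high ordering concordance (0.932) alongside a lower event match rate (0.755).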