🤖 AI Summary
This study addresses a central obstacle to data-driven materials research: most historical experimental knowledge is locked in the unstructured text and rasterized figures of the materials science literature. To overcome this limitation, the authors propose a skill-based autonomous agent framework that couples large language models with multimodal information extraction. The framework automatically aligns textual and graphical data through semantic filtering, chart digitization, and physics-informed consistency validation, enabling the construction of a high-fidelity creep-property database without manual intervention. Evaluated on 243 scientific papers, the approach achieves a success rate exceeding 90% in extracting graphical data and shows close agreement between text- and figure-derived parameters, with an R² exceeding 0.99. The work represents the first fully automated, cross-modal, and physically self-consistent integration of material creep data from heterogeneous literature sources.
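As a rough illustration of the architecture the summary describes, the sketch below shows one way a skill registry and a fixed orchestration order might be wired together in Python. All names (`semantic_filter`, `digitize_chart`, `validate_physics`, `run_pipeline`) are hypothetical and the skill bodies are placeholders; this is a minimal sketch of the pattern, not the authors' implementation.

```python
# Hypothetical skill-based agent skeleton; skill names, ordering, and
# placeholder bodies are illustrative, not the paper's actual code.
from typing import Any, Callable, Dict

SKILLS: Dict[str, Callable[[Dict[str, Any]], Dict[str, Any]]] = {}

def skill(name: str):
    """Register a function as a named, reusable agent skill."""
    def register(fn):
        SKILLS[name] = fn
        return fn
    return register

@skill("semantic_filter")
def semantic_filter(ctx):
    # An LLM call would decide whether the paper reports creep experiments.
    ctx["relevant"] = True  # placeholder for the model's judgment
    return ctx

@skill("digitize_chart")
def digitize_chart(ctx):
    # A vision/plot-digitizer step would recover (time, strain) points.
    ctx["curve_points"] = []  # placeholder
    return ctx

@skill("validate_physics")
def validate_physics(ctx):
    # Cross-modal check: compare digitized points with text-derived parameters.
    ctx["consistent"] = True  # placeholder
    return ctx

def run_pipeline(pdf_path: str) -> Dict[str, Any]:
    """Run the skills in a fixed order, bailing out on irrelevant papers."""
    ctx: Dict[str, Any] = {"pdf": pdf_path}
    for name in ("semantic_filter", "digitize_chart", "validate_physics"):
        ctx = SKILLS[name](ctx)
        if name == "semantic_filter" and not ctx["relevant"]:
            break  # skip extraction for papers filtered out semantically
    return ctx
```

Registering skills behind a uniform interface is one plausible reading of the "modular" design: new extraction capabilities can be added without touching the orchestration loop.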
📝 Abstract
The advancement of data-driven materials science is currently constrained by a fundamental bottleneck: the vast majority of historical experimental data remains locked within the unstructured text and rasterized figures of legacy scientific literature. Manual curation of this knowledge is prohibitively labor-intensive and prone to human error. To address this challenge, we introduce an autonomous, agent-based framework powered by Large Language Models (LLMs) designed to excavate high-fidelity datasets from scientific PDFs without human intervention. By deploying a modular "skill-based" architecture, the agent orchestrates complex cognitive tasks, including semantic filtering, multi-modal information extraction, and physics-informed validation. We demonstrate the efficacy of this framework by constructing a physically self-consistent database for material creep mechanics, a domain characterized by complex graphical trajectories and heterogeneous constitutive models. Applying the pipeline to 243 publications, the agent achieved a verified extraction success rate exceeding 90% for graphical data digitization. Crucially, we introduce a cross-modal verification protocol, demonstrating that the agent can autonomously align visually extracted data points with textually extracted constitutive parameters ($R^2 > 0.99$), ensuring the physical self-consistency of the database. This work not only provides a critical resource for investigating time-dependent deformation across diverse material systems but also establishes a scalable paradigm for autonomous knowledge acquisition, paving the way for the next generation of self-driving laboratories.
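The cross-modal verification protocol can be pictured as a goodness-of-fit gate: constitutive parameters read from the text must reproduce the curve digitized from the figure. The sketch below assumes an illustrative power-law creep form $\varepsilon(t) = A\,t^m$; the paper covers heterogeneous constitutive models, so the functional form would vary per record, and the 0.99 threshold merely mirrors the reported $R^2$ figure as an example acceptance criterion.

```python
# Illustrative cross-modal consistency check. The power-law creep form
# eps(t) = A * t**m and the 0.99 acceptance threshold are assumptions for
# this sketch; the paper's records use varied constitutive models.
import numpy as np

def r_squared(y_obs: np.ndarray, y_pred: np.ndarray) -> float:
    """Coefficient of determination between observed and predicted values."""
    ss_res = np.sum((y_obs - y_pred) ** 2)
    ss_tot = np.sum((y_obs - np.mean(y_obs)) ** 2)
    return 1.0 - ss_res / ss_tot

def is_self_consistent(t, strain_from_figure, A, m, threshold=0.99):
    """Accept a record only if text-derived parameters (A, m) reproduce
    the figure-digitized creep curve with R^2 at or above the threshold."""
    strain_predicted = A * np.asarray(t, dtype=float) ** m
    return r_squared(np.asarray(strain_from_figure, dtype=float),
                     strain_predicted) >= threshold
```

Under this reading, a record failing the gate would be flagged for re-extraction rather than admitted to the database, which is one plausible way to enforce the physical self-consistency the abstract claims.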