🤖 AI Summary
Materials science faces persistent bottlenecks to AI adoption, including multi-source heterogeneous data, poor generalizability, limited interpretability, class imbalance, and difficulties in multimodal fusion. To address these, this paper presents a systematic survey of foundation models (FMs), large language model (LLM)-based agents, standardized datasets, and open-source tools tailored to materials science (MatSci). It introduces a task-driven taxonomy spanning six application areas, such as materials discovery, structural design, and property prediction, along with a classification of FM tasks. It further proposes an integrated roadmap that combines multimodal fusion, continual learning, and trustworthy AI, synergizing LLM agents with autonomous experimental platforms to improve cross-scale materials modeling accuracy and automation. The result is a comprehensive, technically grounded roadmap for AI-powered materials research and development.
📝 Abstract
Foundation models (FMs) are catalyzing a transformative shift in materials science (MatSci) by enabling scalable, general-purpose, and multimodal AI systems for scientific discovery. Unlike traditional machine learning models, which are typically narrow in scope and require task-specific engineering, FMs offer cross-domain generalization and exhibit emergent capabilities. Their versatility is especially well-suited to materials science, where research challenges span diverse data types and scales. This survey provides a comprehensive overview of foundation models, agentic systems, datasets, and computational tools supporting this growing field. We introduce a task-driven taxonomy encompassing six broad application areas: data extraction, interpretation, and Q&A; atomistic simulation; property prediction; materials structure, design, and discovery; process planning, discovery, and optimization; and multiscale modeling. We discuss recent advances in both unimodal and multimodal FMs, as well as emerging large language model (LLM) agents. Furthermore, we review standardized datasets, open-source tools, and autonomous experimental platforms that collectively fuel the development and integration of FMs into research workflows. We assess the early successes of foundation models and identify persistent limitations, including challenges in generalizability, interpretability, data imbalance, safety concerns, and limited multimodal fusion. Finally, we articulate future research directions centered on scalable pretraining, continual learning, data governance, and trustworthiness.