xChemAgents: Agentic AI for Explainable Quantum Chemistry

📅 2025-05-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
In quantum chemical property prediction, naively concatenating multimodal descriptors (geometric structures and textual representations) degrades performance on symmetry-sensitive tasks and compromises interpretability. To address this, we propose a physics-aware dual-agent collaborative framework: a Selector agent employs a sparse weighting mechanism to adaptively identify salient chemical descriptors and generate natural-language reasoning, while a Validator agent iteratively enforces physical constraints, such as unit consistency and scaling laws, through dialogue-based refinement. The method integrates large language model (LLM)-driven agent collaboration, multimodal graph neural networks, and dynamic descriptor selection. On standard benchmarks, it achieves up to a 22% reduction in mean absolute error over strong baselines. Crucially, it produces high-fidelity, physically verifiable chemical explanations, embedding interpretability directly in the prediction pipeline and advancing both predictive accuracy and human-understandable reasoning.

📝 Abstract
Recent progress in multimodal graph neural networks has demonstrated that augmenting atomic XYZ geometries with textual chemical descriptors can enhance predictive accuracy across a range of electronic and thermodynamic properties. However, naively appending large sets of heterogeneous descriptors often degrades performance on tasks sensitive to molecular shape or symmetry, and undermines interpretability. xChemAgents proposes a cooperative agent framework that injects physics-aware reasoning into multimodal property prediction. xChemAgents comprises two language-model-based agents: a Selector, which adaptively identifies a sparse, weighted subset of descriptors relevant to each target, and provides a natural language rationale; and a Validator, which enforces physical constraints such as unit consistency and scaling laws through iterative dialogue. On standard benchmark datasets, xChemAgents achieves up to a 22% reduction in mean absolute error over strong baselines, while producing faithful, human-interpretable explanations. Experimental results highlight the potential of cooperative, self-verifying agents to enhance both accuracy and transparency in foundation-model-driven materials science. The implementation and accompanying dataset are available anonymously at https://github.com/KurbanIntelligenceLab/xChemAgents.
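The Selector–Validator dialogue described in the abstract can be sketched in a few lines. The sketch below is a hypothetical illustration, not the paper's implementation: the sparse selection rule (top-k by weight magnitude), the unit-consistency check, and all descriptor names and weights are assumptions made for demonstration.

```python
# Hypothetical sketch of a Selector/Validator loop: the Selector picks a
# sparse, weighted descriptor subset with a rationale; the Validator rejects
# unit-inconsistent picks, and the dialogue iterates until the subset passes.

def selector(descriptors, k=2):
    """Pick a sparse subset of descriptors (top-k by |weight|) with a rationale."""
    ranked = sorted(descriptors, key=lambda d: abs(d["weight"]), reverse=True)
    chosen = ranked[:k]
    names = ", ".join(f"{d['name']} (w={d['weight']:+.2f})" for d in chosen)
    return chosen, f"Selected by |weight|: {names}"

def validator(chosen, target_unit):
    """Keep only descriptors whose units match the target property's unit."""
    ok = [d for d in chosen if d["unit"] == target_unit]
    return len(ok) == len(chosen), ok

def select_with_validation(descriptors, target_unit, k=2, max_rounds=3):
    """Iterative Selector-Validator dialogue: drop invalid picks and re-select."""
    pool = list(descriptors)
    for _ in range(max_rounds):
        chosen, rationale = selector(pool, k)
        valid, ok = validator(chosen, target_unit)
        if valid:
            return chosen, rationale
        # Remove unit-inconsistent descriptors from the pool and try again.
        bad = {d["name"] for d in chosen} - {d["name"] for d in ok}
        pool = [d for d in pool if d["name"] not in bad]
    return ok, rationale  # best valid subset found within the round budget

# Illustrative descriptors (values are made up for the sketch).
descriptors = [
    {"name": "HOMO-LUMO gap", "unit": "eV", "weight": 0.9},
    {"name": "dipole moment", "unit": "D", "weight": 0.7},
    {"name": "ionization energy", "unit": "eV", "weight": 0.4},
]
chosen, why = select_with_validation(descriptors, target_unit="eV", k=2)
print([d["name"] for d in chosen])
print(why)
```

In this toy run the dipole moment is initially selected but rejected for unit inconsistency with an eV-valued target, so the second round settles on the two eV-consistent descriptors together with a natural-language rationale.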
Problem

Research questions and friction points this paper is trying to address.

Enhancing quantum chemistry predictions with multimodal descriptors
Addressing performance degradation from heterogeneous descriptor usage
Improving interpretability in physics-aware property prediction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal graph neural networks with textual descriptors
Physics-aware cooperative agent framework
Selector and Validator agents for interpretability
Can Polat
Texas A&M University
AI for Materials Science · Computational Materials · AI for Science
Mehmet Tuncel
Artificial Intelligence and Data Science Research Center, Istanbul Technical University, Istanbul, Türkiye; Department of Electrical & Computer Engineering, Texas A&M University at Qatar, Doha, Qatar
Hasan Kurban
Hamad Bin Khalifa University
Artificial Intelligence · Software Engineering · AI for Science
Erchin Serpedin
Electrical & Computer Engineering, Texas A&M University, College Station, TX, USA
Mustafa Kurban
Ankara University
Nanomaterials · Sensors · Energy Storage · Machine Learning · Drug Delivery