🤖 AI Summary
In quantum chemical property prediction, naive concatenation of multimodal descriptors (geometric structures and textual representations) degrades performance on symmetry-sensitive tasks and compromises interpretability. To address this, we propose a physics-aware dual-agent collaborative framework: the Selector agent employs a sparse weighting mechanism to adaptively identify salient chemical descriptors and generate natural-language reasoning, while the Validator agent iteratively enforces physical constraints—such as unit consistency and scaling laws—through dialogue-based refinement. The method integrates large language model (LLM)-driven agent collaboration, multimodal graph neural networks, and dynamic descriptor selection. On standard benchmarks, it achieves up to a 22% reduction in mean absolute error over strong baselines. Crucially, it produces high-fidelity, physically verifiable chemical explanations—embedding interpretability intrinsically within the prediction pipeline—and thereby advances both predictive accuracy and human-understandable reasoning.
📝 Abstract
Recent progress in multimodal graph neural networks has demonstrated that augmenting atomic XYZ geometries with textual chemical descriptors can enhance predictive accuracy across a range of electronic and thermodynamic properties. However, naively appending large sets of heterogeneous descriptors often degrades performance on tasks sensitive to molecular shape or symmetry, and undermines interpretability. xChemAgents proposes a cooperative agent framework that injects physics-aware reasoning into multimodal property prediction. xChemAgents comprises two language-model-based agents: a Selector, which adaptively identifies a sparse, weighted subset of descriptors relevant to each target and provides a natural-language rationale, and a Validator, which enforces physical constraints such as unit consistency and scaling laws through iterative dialogue. On standard benchmark datasets, xChemAgents achieves up to a 22% reduction in mean absolute error over strong baselines, while producing faithful, human-interpretable explanations. Experimental results highlight the potential of cooperative, self-verifying agents to enhance both accuracy and transparency in foundation-model-driven materials science. The implementation and accompanying dataset are available anonymously at https://github.com/KurbanIntelligenceLab/xChemAgents.
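The Selector–Validator interaction described in the abstract can be sketched as a simple refinement loop: select a sparse, weighted descriptor subset, check it against physical constraints, and reselect after discarding rejected descriptors. The sketch below is purely illustrative; the descriptor fields, function names, and the unit-compatibility check are assumptions for demonstration, not the released xChemAgents implementation (which uses LLM agents rather than hand-coded scores).

```python
# Illustrative sketch of a Selector -> Validator refinement loop.
# All names and the toy relevance scores are hypothetical, not from
# the xChemAgents codebase.

# Toy descriptor pool: name, physical unit, and a mock relevance score.
DESCRIPTORS = [
    {"name": "HOMO-LUMO gap", "unit": "eV",    "relevance": 0.9},
    {"name": "dipole moment", "unit": "D",     "relevance": 0.4},
    {"name": "molar mass",    "unit": "g/mol", "relevance": 0.2},
]

def select_descriptors(descriptors, k=2):
    """Selector: keep a sparse top-k subset with normalized weights."""
    top = sorted(descriptors, key=lambda d: d["relevance"], reverse=True)[:k]
    total = sum(d["relevance"] for d in top) or 1.0
    return [(d, d["relevance"] / total) for d in top]

def validate(selection, allowed_units):
    """Validator: flag descriptors whose units are incompatible with the
    target property (a stand-in for richer physics checks such as
    scaling-law consistency)."""
    issues = [d for d, _ in selection if d["unit"] not in allowed_units]
    return len(issues) == 0, issues

def refine(descriptors, allowed_units, max_rounds=3):
    """Iterate Selector -> Validator, dropping rejected descriptors."""
    pool = list(descriptors)
    selection = []
    for _ in range(max_rounds):
        selection = select_descriptors(pool)
        ok, issues = validate(selection, allowed_units)
        if ok:
            return selection
        pool = [d for d in pool if d not in issues]
    return selection

# Example: an electronic-structure target where only eV- and Debye-valued
# descriptors are dimensionally admissible.
sel = refine(DESCRIPTORS, allowed_units={"eV", "D"})
print([(d["name"], round(w, 2)) for d, w in sel])
# → [('HOMO-LUMO gap', 0.69), ('dipole moment', 0.31)]
```

In the real system, both the relevance scoring and the constraint checking are carried out by language-model agents in natural-language dialogue; the loop structure above only mirrors the iterative select–validate–revise pattern the paper describes.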