MLLM-SR: Conversational Symbolic Regression based on Multi-Modal Large Language Models

📅 2024-06-08
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
📄 PDF
🤖 AI Summary
Existing symbolic regression methods struggle to effectively incorporate domain-specific prior knowledge and lack the capability to interpret natural language instructions. This paper introduces the first conversational, multimodal large language model–based framework for symbolic regression, enabling interactive, natural-language-guided generation of mathematical expressions (e.g., “containing sin functions” or “satisfying symmetry constraints”). Our approach integrates instruction tuning, program synthesis, and symbolic regression to achieve controllable, interpretable, and natural-language-driven formula discovery. On the Nguyen benchmark, it significantly outperforms state-of-the-art methods. Empirical evaluation demonstrates its ability to accurately parse diverse semantic constraints, substantially improving expression correctness and physical consistency—thereby overcoming a key limitation of conventional approaches in leveraging prior knowledge.

📝 Abstract
Formulas are the language of communication between humans and nature. Finding expressions from observed data that capture the relationships between the variables in that data, known as the symbolic regression problem, is an important research topic in artificial intelligence. Existing symbolic regression methods generate expressions directly from the given observation data; we cannot ask the algorithm to generate expressions that satisfy specific requirements derived from known prior knowledge, for example, that the expression should contain $\sin$ or be symmetric. Even when this is possible, it often requires very complex operations, which is inconvenient. In this paper, we propose MLLM-SR, a conversational symbolic regression method based on multi-modal large language models that can generate expressions meeting given requirements simply by describing those requirements in natural language instructions. Experiments on the Nguyen dataset demonstrate that MLLM-SR outperforms state-of-the-art baselines in fitting performance. More notably, we show experimentally that MLLM-SR understands the prior knowledge added to the natural language instructions well, and that adding such prior knowledge effectively guides MLLM-SR toward generating correct expressions.
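To make the abstract's core idea concrete, here is a minimal toy sketch of constraint-guided symbolic regression. This is not the paper's MLLM-SR method (which uses a multi-modal LLM to interpret natural language instructions); it only illustrates the underlying notion of filtering candidate expressions by a prior-knowledge constraint such as "must contain sin" before fitting them to data. The candidate pool and the substring-based constraint check are hypothetical simplifications.

```python
# Toy illustration of constraint-guided symbolic regression (NOT the
# paper's MLLM-SR method): brute-force search over a small candidate
# pool of expression templates, where a prior-knowledge constraint
# ("must contain sin") filters candidates before scoring them by MSE.
import math

# Synthetic observations from a hidden target y = sin(x) + x.
xs = [i / 10 for i in range(-30, 31)]
ys = [math.sin(x) + x for x in xs]

# Hypothetical candidate pool: (expression string, callable).
candidates = [
    ("x",          lambda x: x),
    ("x**2",       lambda x: x ** 2),
    ("sin(x)",     lambda x: math.sin(x)),
    ("sin(x) + x", lambda x: math.sin(x) + x),
    ("cos(x) + x", lambda x: math.cos(x) + x),
]

def mse(f):
    """Mean squared error of candidate f against the observations."""
    return sum((f(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# The prior-knowledge constraint, here reduced from natural language
# ("the expression should contain sin") to a simple substring check.
def satisfies_constraint(expr):
    return "sin" in expr

# Keep only constraint-satisfying candidates, then pick the best fit.
best_expr, best_err = min(
    ((expr, mse(f)) for expr, f in candidates if satisfies_constraint(expr)),
    key=lambda pair: pair[1],
)
print(best_expr)  # → sin(x) + x
```

Without the constraint filter the search would still find the right answer here, but on harder targets the constraint prunes the hypothesis space, which is the advantage the abstract attributes to incorporating prior knowledge.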
Problem

Research questions and friction points this paper is trying to address.

Discovering scientific formulas from observational data using AI
Integrating natural language prior knowledge into formula generation
Improving symbolic regression with multimodal large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses multimodal large language models
Incorporates natural language prior knowledge
Achieves state-of-the-art symbolic regression
Yanjie Li
AnnLab, Institute of Semiconductors, Chinese Academy of Sciences, Beijing, China
Weijun Li
AnnLab, Institute of Semiconductors, Chinese Academy of Sciences, Beijing, China; School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing, China
Lina Yu
Shenzhen Technology University
Humanitarian logistics, Resource allocation, Dynamic programming, Reinforcement learning
Min Wu
Professor, IEEE Fellow, China University of Geosciences
Process control, Robust control, Intelligent systems
Jingyi Liu
AnnLab, Institute of Semiconductors, Chinese Academy of Sciences, Beijing, China; School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing, China
Wenqiang Li
The Ohio State University
5G security, embedded systems, vulnerability discovery, AI
Shu Wei
AnnLab, Institute of Semiconductors, Chinese Academy of Sciences, Beijing, China; School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing, China
Yusong Deng
AnnLab, Institute of Semiconductors, Chinese Academy of Sciences, Beijing, China; School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing, China