Combining Large Language Models and Gradient-Free Optimization for Automatic Control Policy Synthesis

📅 2025-09-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) generate symbolic control policies whose structural and parametric components are tightly coupled, leading to inefficient search in policy optimization. Method: We propose a structure-parameter decoupling framework: first, an LLM generates an interpretable symbolic functional structure (i.e., a program skeleton) of the control policy; subsequently, continuous parameters within this fixed structure are optimized independently via gradient-free numerical optimization. Contribution/Results: This approach avoids the high computational cost of end-to-end joint search while preserving policy interpretability and significantly improving optimization efficiency. Evaluated on multiple classical control benchmarks, our method achieves higher cumulative rewards and superior sample efficiency compared to baseline approaches—including neural and symbolic baselines—demonstrating its effectiveness and practicality for interpretable reinforcement learning.

📝 Abstract
Large language models (LLMs) have shown promise as generators of symbolic control policies, producing interpretable program-like representations through iterative search. However, these models are not capable of separating the functional structure of a policy from the numerical values it is parametrized by, making the search process slow and inefficient. We propose a hybrid approach that decouples structural synthesis from parameter optimization by introducing an additional optimization layer for local parameter search. In our method, the numerical parameters of LLM-generated programs are extracted and optimized numerically to maximize task performance. With this integration, an LLM iterates over the functional structure of programs, while a separate optimization loop finds a locally optimal set of parameters for each candidate program. We evaluate our method on a set of control tasks, showing that it achieves higher returns and improved sample efficiency compared to purely LLM-guided search. We show that combining symbolic program synthesis with numerical optimization yields interpretable yet high-performing policies, bridging the gap between language-model-guided design and classical control tuning. Our code is available at https://sites.google.com/berkeley.edu/colmo.
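The decoupling the abstract describes can be illustrated with a minimal sketch (a toy example, not the paper's implementation; the double-integrator task, the policy skeleton, and all function names here are illustrative assumptions): a fixed symbolic policy structure, as an LLM might propose it, whose free numerical constants are then tuned by a simple gradient-free local search.

```python
import random

def simulate(k1, k2, steps=200, dt=0.05):
    # Roll out the fixed symbolic policy u = -k1*x - k2*v on a toy
    # 1D double integrator; only the constants k1, k2 are free.
    x, v, cost = 1.0, 0.0, 0.0
    for _ in range(steps):
        u = -k1 * x - k2 * v           # LLM-proposed structure, fixed
        v += u * dt
        x += v * dt
        cost += x * x + 0.1 * u * u    # quadratic state/effort penalty
    return -cost                       # return = negative accumulated cost

def gradient_free_search(n_iters=300, sigma=0.5, seed=0):
    # Local gradient-free search (Gaussian perturbations) over the
    # parameters of the fixed structure, keeping only improvements.
    rng = random.Random(seed)
    best_params = [1.0, 1.0]
    best_return = simulate(*best_params)
    for _ in range(n_iters):
        cand = [p + rng.gauss(0.0, sigma) for p in best_params]
        r = simulate(*cand)
        if r > best_return:
            best_params, best_return = cand, r
    return best_params, best_return
```

In the paper's framework the outer loop would additionally let an LLM revise the symbolic structure itself, with a search like the one above scoring each candidate structure at its locally optimal parameters.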
Problem

Research questions and friction points this paper is trying to address.

Decoupling policy structure synthesis from parameter optimization in control policies
Improving search efficiency by combining symbolic program generation with numerical optimization
Bridging interpretable LLM-generated policies with classical control performance tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decouples structural synthesis from parameter optimization
Uses numerical optimization for LLM-generated program parameters
Combines symbolic program synthesis with numerical optimization
Carlo Bosio
UC Berkeley, Department of Mechanical Engineering
Matteo Guarrera
UC Berkeley, Department of Electrical Engineering and Computer Sciences
Alberto Sangiovanni-Vincentelli
UC Berkeley, Department of Electrical Engineering and Computer Sciences
Mark W. Mueller
Mechanical Engineering, UC Berkeley
Control · Robotics · Flying vehicles