From Intent to Execution: Multimodal Chain-of-Thought Reinforcement Learning for Precise CAD Code Generation

📅 2025-08-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing CAD modeling relies heavily on manual intervention and domain expertise, and translating natural-language specifications into executable CAD code faces challenges including weak logical reasoning, frequent syntactic errors, and low geometric fidelity. This paper proposes a multimodal chain-of-thought-guided reinforcement learning framework. To address training instability under sparse rewards, it introduces trust-region stretching, precision-aware loss weighting, and excessive-length filtering. The method integrates large language models, multimodal reasoning, and domain-specific optimization. Evaluated on the newly constructed ExeCAD dataset (16,540 instances), the approach achieves significant improvements over state-of-the-art vision-language models in executable-code rate, geometric error control, and reasoning coherence, narrowing the gap toward practical, natural-language-driven CAD automation.

📝 Abstract
Computer-Aided Design (CAD) plays a vital role in engineering and manufacturing, yet current CAD workflows require extensive domain expertise and manual modeling effort. Recent advances in large language models (LLMs) have made it possible to generate code from natural language, opening new opportunities for automating parametric 3D modeling. However, directly translating human design intent into executable CAD code remains highly challenging, due to the need for logical reasoning, syntactic correctness, and numerical precision. In this work, we propose CAD-RL, a multimodal Chain-of-Thought (CoT) guided reinforcement learning post-training framework for CAD modeling code generation. Our method combines CoT-based cold start with goal-driven reinforcement learning post-training using three task-specific rewards: an executability reward, a geometric accuracy reward, and an external evaluation reward. To ensure stable policy learning under sparse and high-variance reward conditions, we introduce three targeted optimization strategies: Trust Region Stretch for improved exploration, Precision Token Loss for enhanced dimensional parameter accuracy, and Overlong Filtering to reduce noisy supervision. To support training and benchmarking, we release ExeCAD, a novel dataset comprising 16,540 real-world CAD examples with paired natural language and structured design language descriptions, executable CADQuery scripts, and rendered 3D models. Experiments demonstrate that CAD-RL achieves significant improvements in reasoning quality, output precision, and code executability over existing VLMs.
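The abstract names three task-specific rewards but not how they are combined. A minimal sketch of one plausible composition is below; the weights, value ranges, and scoring functions are illustrative assumptions, not the paper's actual formulation.

```python
# Toy composite reward over the three reward terms the abstract names:
# executability, geometric accuracy, and an external evaluation score.
# Weights and score normalization are assumptions for illustration only.

def composite_reward(executable: bool, geom_error: float, eval_score: float,
                     w_exec: float = 1.0, w_geo: float = 1.0,
                     w_eval: float = 0.5) -> float:
    """Collapse the three task-specific rewards into one scalar for RL."""
    r_exec = 1.0 if executable else 0.0      # executability: did the script run?
    r_geo = max(0.0, 1.0 - geom_error)       # geometric accuracy, error assumed in [0, 1]
    r_eval = eval_score                      # external evaluator score, assumed in [0, 1]
    return w_exec * r_exec + w_geo * r_geo + w_eval * r_eval
```

For example, a script that executes, matches the target geometry exactly, and gets a perfect evaluator score would receive the maximum reward under these weights, while a non-executing script with maximal geometric error receives zero.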
Problem

Research questions and friction points this paper is trying to address.

Automating CAD code generation from natural language
Ensuring logical reasoning and numerical precision in CAD
Improving CAD code executability and geometric accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal Chain-of-Thought guided reinforcement learning
Three task-specific rewards for CAD modeling
Targeted optimization strategies for stable learning
Ke Niu
Fudan University, Shanghai, China
Haiyang Yu
Fudan University, Shanghai, China
Zhuofan Chen
Fudan University, Shanghai, China
Mengyang Zhao
The College of Computer Science and Artificial Intelligence, Fudan University
Computer Vision · Anomaly Detection
Teng Fu
Fudan University
Deep Learning
Bin Li
Fudan University, Shanghai, China
Xiangyang Xue
Professor of Computer Science, Fudan University
Computer Vision · Pattern Recognition · Machine Learning