🤖 AI Summary
To address performance saturation in supervised fine-tuning (SFT) for chart-to-code generation, this paper proposes a multimodal structured reinforcement learning framework. Methodologically, it introduces three key contributions: (1) a large-scale, real-world chart–code pair dataset comprising 3 million samples, the largest to date; (2) a dual-channel, fine-grained reward mechanism integrating rule-based textual evaluation with vision-based assessment via rendered-image comparison and learned evaluator scoring; and (3) a two-stage curriculum learning strategy to improve training stability. Evaluated on the ChartMimic and ReachQA benchmarks, the approach improves high-level metrics by +6.2% and +9.9%, respectively, significantly outperforming existing open-source models and achieving performance competitive with state-of-the-art closed-source systems.
📝 Abstract
While reinforcement learning (RL) has proven highly effective for general reasoning in vision-language models, its application to tasks requiring in-depth understanding of information-rich images and generation of structured outputs remains underexplored. Chart-to-code generation exemplifies this challenge, demanding complex reasoning over visual charts to produce structured code. Supervised fine-tuning (SFT) alone is often insufficient, highlighting the need for effective RL strategies that appropriately reward structured outputs. We systematically investigate the performance plateau of SFT through large-scale experiments and propose Multimodal Structured Reinforcement Learning (MSRL) for chart-to-code generation, which substantially breaks through this plateau. We construct the largest training corpus to date, containing 3 million chart–code pairs derived from real-world arXiv tables, to mitigate the simplistic patterns of prior synthetic data. Although SFT on this corpus reaches state-of-the-art performance, our experiments show that scaling SFT data eventually hits a plateau where further increases yield negligible improvements. Our MSRL method leverages a multi-granularity structured reward system built on multimodal textual and visual feedback. At the textual level, rule-based rewards validate fine-grained code details. At the visual level, model-based rewards assess structural similarity by rendering the generated code into images and employing an evaluator model. We implement this within a two-stage curriculum for training stability. Results demonstrate that MSRL significantly breaks the SFT plateau, improving high-level metrics by 6.2% and 9.9% on the ChartMimic and ReachQA benchmarks respectively, achieving performance competitive with advanced closed-source models.
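The dual-channel reward described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names, the regex-based textual checks, and the pixel-agreement proxy (standing in for the paper's learned evaluator model) are all assumptions for clarity.

```python
# Hypothetical sketch of a dual-channel reward for chart-to-code RL.
# Textual channel: rule-based checks on the generated code string.
# Visual channel: similarity between the rendered image and the reference
# chart, stubbed here as mean pixel agreement in place of a learned evaluator.
import re
import numpy as np

def textual_reward(code: str, required_calls: list[str]) -> float:
    """Rule-based reward: fraction of expected API calls present in the code."""
    if not required_calls:
        return 0.0
    hits = sum(1 for call in required_calls if re.search(re.escape(call), code))
    return hits / len(required_calls)

def visual_reward(rendered: np.ndarray, reference: np.ndarray) -> float:
    """Metric-based reward: simple pixel-agreement proxy for the evaluator."""
    if rendered.shape != reference.shape:
        return 0.0
    return float((rendered == reference).mean())

def combined_reward(code, rendered, reference, required_calls,
                    w_text=0.5, w_vis=0.5):
    """Blend both channels into one scalar reward for the RL objective."""
    return (w_text * textual_reward(code, required_calls)
            + w_vis * visual_reward(rendered, reference))

# Toy usage: a generated snippet and two identical 4x4 "renders".
code = "plt.bar(x, y)\nplt.title('Sales')"
img_a = np.zeros((4, 4), dtype=np.uint8)
img_b = np.zeros((4, 4), dtype=np.uint8)
print(combined_reward(code, img_a, img_b, ["plt.bar", "plt.title"]))  # 1.0
```

In practice the weights and the per-channel scoring would be tuned, and the visual channel would render the code (e.g. via matplotlib) and score it with the trained evaluator rather than comparing raw pixels.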