🤖 AI Summary
This work proposes PhysicsSolutionAgent, the first autonomous agent to systematically explore large language model (LLM)-driven generation of multimodal explanatory videos for physics education. Addressing the challenge that current LLMs struggle to produce high-quality, long-form visual explanations of physics problems, the system integrates GPT-5-mini, the Manim animation engine, and a vision-language model (VLM) feedback mechanism to automatically generate videos of up to six minutes that combine numerical and theoretical analyses. The authors develop an automated evaluation pipeline comprising 15 metrics and employ VLM-based refinement for multidimensional quality optimization. Evaluated on 32 physics problems, the approach achieves a 100% generation success rate with an average automated score of 3.8/5. Human assessment, however, reveals persistent issues such as layout inconsistencies and visual misinterpretations, highlighting critical bottlenecks in multimodal reasoning and verification.
📝 Abstract
Explaining numerical physics problems often requires more than text-based solutions; clear visual reasoning can substantially improve conceptual understanding. While large language models (LLMs) demonstrate strong performance on many physics questions in textual form, their ability to generate long, high-quality visual explanations remains insufficiently explored. In this work, we introduce PhysicsSolutionAgent (PSA), an autonomous agent that generates physics-problem explanation videos of up to six minutes using Manim animations. To evaluate the generated videos, we design an assessment pipeline that performs automated checks across 15 quantitative parameters and incorporates feedback from a vision-language model (VLM) to iteratively improve video quality. We evaluate PSA on 32 videos spanning numerical and theoretical physics problems. Our results reveal systematic differences in video quality depending on problem difficulty and on whether the task is numerical or theoretical. Using GPT-5-mini, PSA achieves a 100% video-completion rate with an average automated score of 3.8/5. However, qualitative analysis and human inspection uncover both minor and major issues, including visual layout inconsistencies and errors in how visual content is interpreted during feedback. These findings expose key limitations in reliable Manim code generation and highlight broader challenges in multimodal reasoning and evaluation for visual explanations of numerical physics problems. Our work underscores the need for improved visual understanding, verification, and evaluation frameworks in future multimodal educational systems.
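The generate-render-refine loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: every function name is invented, the LLM and VLM calls are stubbed out, and the scoring logic is faked to stand in for the 15 automated metrics.

```python
# Hypothetical sketch of PSA's generation-and-refinement loop.
# All names and the fake scoring below are assumptions for illustration.

def generate_manim_code(problem, feedback=None):
    """Stand-in for an LLM call (e.g., GPT-5-mini) that emits Manim scene code,
    optionally revised according to VLM feedback from the previous round."""
    suffix = "  # revised per VLM feedback" if feedback else ""
    return f"class Solution(Scene): ...  # explains: {problem}{suffix}"

def render_and_score(code):
    """Stand-in for rendering with Manim and scoring the frames with a VLM;
    here the automated score is simulated rather than computed."""
    score = min(3.0 + 0.9 * code.count("revised"), 5.0)
    feedback = "tighten layout" if score < 3.8 else ""
    return True, score, feedback

def refine(problem, max_rounds=3, target=3.8):
    """Iterate code generation until the score reaches the target
    or the round budget is exhausted."""
    code, score, feedback = "", 0.0, None
    for _ in range(max_rounds):
        code = generate_manim_code(problem, feedback)
        rendered, score, feedback = render_and_score(code)
        if rendered and score >= target:
            break
    return code, score
```

The loop structure (generate, render, score with a VLM, feed the critique back into the next generation) mirrors the pipeline the abstract describes; the real system would replace the stubs with actual model calls and Manim rendering.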