🤖 AI Summary
This work addresses the limited procedural expertise of large language models (LLMs) in autonomous workflows, a gap that hinders their practical deployment. To close it, the authors propose the first large-scale framework for automatically extracting procedural knowledge from multi-agent repositories. By analyzing open-source agent projects on platforms such as GitHub, the approach combines repository structure parsing with dense retrieval to identify high-value skills (such as visualization and pedagogy) and leverages the Manim engine to generate instructional content, uniformly packaged as standardized SKILL.md files. The framework supports skill expansion without model retraining and incorporates safety governance alongside a multidimensional evaluation mechanism. Experimental results show that the generated instructional materials achieve a 40% improvement in knowledge-transfer efficiency while matching the quality of human-authored tutorials.
📝 Abstract
The transition from monolithic large language models (LLMs) to modular, skill-equipped agents represents a fundamental architectural shift in artificial intelligence deployment. While general-purpose models demonstrate remarkable breadth of declarative knowledge, their utility in autonomous workflows is frequently constrained by insufficient specialized procedural expertise. This report investigates a systematic framework for the automated acquisition of high-quality agent skills by mining open-source repositories on platforms such as GitHub. We focus on extracting visualization and educational capabilities from state-of-the-art systems, including TheoremExplainAgent and Code2Video, both of which build on the Manim mathematical animation engine. The framework encompasses repository structural analysis, semantic skill identification through dense retrieval, and translation into the standardized SKILL.md format. We demonstrate that systematic extraction from agentic repositories, combined with rigorous security governance and multi-dimensional evaluation metrics, enables scalable acquisition of procedural knowledge that augments LLM capabilities without requiring model retraining. Our analysis reveals that agent-generated educational content can achieve 40% gains in knowledge-transfer efficiency while maintaining pedagogical quality comparable to human-crafted tutorials.
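The pipeline the abstract describes (repository structural analysis → dense-retrieval skill identification → SKILL.md emission) can be sketched in miniature. This is an illustrative sketch, not the paper's implementation: the hashing "encoder" below is a toy stand-in for a real dense embedding model, and the function names, SKILL.md fields, and example repository contents are all assumptions.

```python
# Hypothetical sketch of the described pipeline: rank repository files
# against a skill query with (toy) dense retrieval, then emit a SKILL.md stub.
import hashlib
import math

DIM = 256  # embedding dimensionality for the toy encoder

def embed(text: str) -> list[float]:
    """Toy dense embedding: hash each token into a fixed-size unit vector.
    A real system would use a learned sentence encoder instead."""
    vec = [0.0] * DIM
    for tok in text.lower().split():
        h = int(hashlib.md5(tok.encode()).hexdigest(), 16)
        vec[h % DIM] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

def rank_files(files: dict[str, str], query: str, top_k: int = 2):
    """Return the top_k (path, score) pairs most similar to the skill query."""
    q = embed(query)
    scored = [(path, cosine(embed(body), q)) for path, body in files.items()]
    return sorted(scored, key=lambda p: p[1], reverse=True)[:top_k]

def to_skill_md(name: str, description: str, sources: list[str]) -> str:
    """Render a minimal SKILL.md stub (fields here are assumptions,
    not the standardized schema the paper refers to)."""
    lines = ["---", f"name: {name}", f"description: {description}", "---",
             "", "## Sources"]
    lines += [f"- {s}" for s in sources]
    return "\n".join(lines)

# Example: mine a tiny fake "repository" for a Manim visualization skill.
repo = {
    "scenes/graph_scene.py": "manim scene animate graph plot axes visualization",
    "utils/io.py": "read write json file path helper",
    "docs/teaching.md": "explain theorem step by step animation pedagogy",
}
hits = rank_files(repo, "manim animation visualization scene")
print(to_skill_md("manim-visualization",
                  "Create mathematical animations with Manim",
                  [path for path, _ in hits]))
```

Swapping `embed` for a learned encoder and pointing `repo` at parsed repository files yields the shape of the extraction loop; the standardized SKILL.md output is what lets skills be added to an agent without retraining the underlying model.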