🤖 AI Summary
Problem: Existing LLM-based Virtual Student Agents (LVSAs) lack systematic personality modeling, scalable evaluation of behavioral consistency, and empirical validation in authentic pedagogical settings.
Method: We propose the SOEI framework (Scene–Object–Evaluation–Interaction) to develop an education-oriented Chinese LVSA. It introduces a dual-anchor generative paradigm grounded in educational theory and psychology, integrating LoRA fine-tuning with expert-crafted prompting. We design a multidimensional behavioral annotation protocol and a human-AI collaborative evaluation mechanism, validated through controlled experiments with pre-service teachers.
Contribution/Results: The framework successfully generates five Big Five personality-aligned agents, achieving 92.3% personality consistency across multi-turn dialogues. It significantly enhances teachers' differentiated questioning and feedback behaviors. This work empirically substantiates LVSAs' dual role as both an AI-for-Education (AI4Edu) tool and an Education-for-AI (Edu4AI) testbed, bridging pedagogical practice and AI agent development.
📝 Abstract
Recent advances in large language models (LLMs) have enabled intelligent tutoring systems, yet the development of LLM-based Virtual Student Agents (LVSAs) remains underexplored. Such agents are essential for teacher-facing applications, where simulating diverse learner traits can support adaptive instruction and pedagogical skill development. However, current methods lack principled personality modeling, scalable evaluation of behavioral consistency, and empirical validation in interactive teaching settings. We propose the SOEI framework, a structured pipeline comprising Scene, Object, Evaluation, and Interaction, for constructing and evaluating personality-aligned LVSAs in classroom scenarios. Leveraging Chinese language instruction as a cognitively and emotionally rich testbed, we generate five LVSAs based on Big Five traits through LoRA fine-tuning and expert-informed prompt design. Their behavioral realism and personality coherence are assessed using a hybrid human + GPT-4 evaluation and a multi-dimensional annotation protocol. Through controlled experiments with real pre-service teachers, we demonstrate that LVSAs can elicit adaptive teaching strategies and maintain trait-consistent behavior across multi-turn dialogues. Our results provide: (1) an educationally and psychologically grounded generation pipeline for LLM-based student agents; (2) a hybrid, scalable evaluation framework for behavioral realism; and (3) empirical insights into the pedagogical utility of LVSAs in shaping instructional adaptation. By embedding LVSAs into both generative modeling and human-in-the-loop teaching, SOEI bridges AI for Education (AI4Edu) and Education for AI (Edu4AI), positioning classroom interaction as a rigorous testbed for controllability, personality alignment, and human-likeness in large language models.
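The abstract describes generating five Big Five trait-aligned student agents via expert-informed prompt design. The sketch below illustrates what such trait-conditioned persona prompting might look like; the class name, trait cue wording, and prompt template are hypothetical illustrations, not the paper's actual prompt design.

```python
from dataclasses import dataclass

# The five Big Five (OCEAN) personality dimensions.
BIG_FIVE = ("openness", "conscientiousness", "extraversion",
            "agreeableness", "neuroticism")

@dataclass
class StudentPersona:
    """One LVSA persona aligned to a single dominant Big Five trait."""
    name: str
    dominant_trait: str

    def system_prompt(self) -> str:
        """Compose a trait-conditioned system prompt (illustrative wording)."""
        if self.dominant_trait not in BIG_FIVE:
            raise ValueError(f"unknown trait: {self.dominant_trait}")
        # Behavioral cues per trait -- placeholder descriptions, not the
        # paper's annotation protocol.
        cues = {
            "openness": "asks exploratory, imaginative questions",
            "conscientiousness": "answers carefully and checks your work",
            "extraversion": "responds eagerly and volunteers answers",
            "agreeableness": "cooperates and builds on classmates' ideas",
            "neuroticism": "hesitates and seeks reassurance when unsure",
        }
        return (f"You are {self.name}, a student in a Chinese language "
                f"class. Your dominant personality trait is "
                f"{self.dominant_trait}: you typically "
                f"{cues[self.dominant_trait]}. Stay in character across "
                f"every turn of the dialogue.")

# Build one persona per trait, mirroring the paper's five agents.
personas = [StudentPersona(f"Student-{t[:3].upper()}", t) for t in BIG_FIVE]
```

In a full pipeline, each prompt would seed a LoRA-fine-tuned model so that trait consistency is reinforced at both the prompt and parameter level, as the dual-anchor paradigm suggests.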