🤖 AI Summary
Existing educational recommender systems predominantly rely on click-through or rating-based relevance metrics, which inadequately reflect actual pedagogical effectiveness. To address this, we propose OBER—the first outcome-based framework that deeply integrates learning objectives and assessment items into the recommendation architecture. OBER employs a log-driven, unified mastery computation model to enable direct, dynamic assessment of knowledge acquisition. Its minimalist entity-relation modeling and plug-in architecture support seamless integration of diverse algorithms—including collaborative filtering, knowledge graph–based recommendation, and expert-curated learning paths—enabling fair, scalable cross-algorithm evaluation. Empirical validation across 5,700+ learners demonstrates that expert paths significantly enhance immediate knowledge mastery, while collaborative filtering better supports long-term retention. Crucially, OBER quantifies relevance, engagement, and learning outcomes simultaneously—without requiring additional assessments—ensuring both method-agnosticism and practical deployability.
📝 Abstract
Most educational recommender systems are tuned and judged on click- or rating-based relevance, leaving their true pedagogical impact unclear. We introduce OBER-an Outcome-Based Educational Recommender that embeds learning outcomes and assessment items directly into the data schema, so any algorithm can be evaluated on the mastery it fosters. OBER uses a minimalist entity-relation model, a log-driven mastery formula, and a plug-in architecture. Integrated into an e-learning system in non-formal domain, it was evaluated trough a two-week randomized split test with over 5 700 learners across three methods: fixed expert trajectory, collaborative filtering (CF), and knowledge-based (KB) filtering. CF maximized retention, but the fixed path achieved the highest mastery. Because OBER derives business, relevance, and learning metrics from the same logs, it lets practitioners weigh relevance and engagement against outcome mastery with no extra testing overhead. The framework is method-agnostic and readily extensible to future adaptive or context-aware recommenders.