🤖 AI Summary
This study addresses the challenge of assessing scientific modeling in Next Generation Science Standards (NGSS) classrooms while simultaneously accommodating cognitive diversity and ensuring linguistic–cultural fairness. We propose a learning progression (LP)-guided multimodal AI assessment framework. Methodologically, we pioneer the integration of LP theory with multimodal machine learning to automatically analyze students’ hand-drawn electrostatic models alongside their textual explanations, enabling fine-grained scoring of conceptual understanding of electric interactions and identification of individual cognitive pathways. Our contributions are threefold: (1) a fairness-aware assessment paradigm that supports diverse cognitive representations; (2) LP-informed, personalized feedback that enhances instructional responsiveness; and (3) empirical validation in high school physics instruction, demonstrating the framework’s validity, reliability, and efficiency for multimodal science understanding assessment—thereby offering a scalable, equitable approach to scientific literacy evaluation.
📝 Abstract
Learning Progressions (LPs) can help adjust instruction to individual learners' needs if the LPs reflect diverse ways of thinking about the construct being measured, and if the LP-aligned assessments meaningfully measure this diversity. The process of doing science is inherently multi-modal, with scientists utilizing drawings, writing, and other modalities to explain phenomena. Thus, fostering deep science understanding requires supporting students in using multiple modalities when explaining phenomena. We build on a validated NGSS-aligned multi-modal LP, reflecting diverse ways of modeling and explaining electrostatic phenomena, and its associated assessments. We focus on students' modeling, an essential practice for building deep science understanding. Supporting culturally and linguistically diverse students in building modeling skills provides them with an alternative mode of communicating their understanding, essential for equitable science assessment. Machine learning (ML) has been used to score open-ended modeling tasks (e.g., drawings) and short text-based constructed scientific explanations, both of which are time-consuming to score by hand. We use ML to evaluate LP-aligned scientific models and the accompanying short text-based explanations reflecting multi-modal understanding of electrical interactions in high school Physical Science. We show how the LP guides the design of personalized ML-driven feedback grounded in the diversity of student thinking across both assessment modes.