Learning Progression-Guided AI Evaluation of Scientific Models To Support Diverse Multi-Modal Understanding in NGSS Classroom

📅 2025-09-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge of assessing scientific modeling in Next Generation Science Standards (NGSS) classrooms while simultaneously accommodating cognitive diversity and ensuring linguistic–cultural fairness. We propose a learning progression (LP)-guided multimodal AI assessment framework. Methodologically, we pioneer the integration of LP theory with multimodal machine learning to automatically analyze students’ hand-drawn electrostatic models alongside their textual explanations, enabling fine-grained scoring of conceptual understanding of electric interactions and identification of individual cognitive pathways. Our contributions are threefold: (1) a fairness-aware assessment paradigm that supports diverse cognitive representations; (2) LP-informed, personalized feedback that enhances instructional responsiveness; and (3) empirical validation in high school physics instruction, demonstrating the framework’s validity, reliability, and efficiency for multimodal science understanding assessment—thereby offering a scalable, equitable approach to scientific literacy evaluation.

📝 Abstract
Learning Progressions (LPs) can help adjust instruction to individual learners' needs if the LPs reflect diverse ways of thinking about the construct being measured, and if LP-aligned assessments meaningfully measure this diversity. The process of doing science is inherently multi-modal, with scientists using drawings, writing, and other modalities to explain phenomena. Thus, fostering deep science understanding requires supporting students in using multiple modalities when explaining phenomena. We build on a validated, NGSS-aligned multi-modal LP reflecting diverse ways of modeling and explaining electrostatic phenomena, along with its associated assessments. We focus on students' modeling, an essential practice for building deep science understanding. Supporting culturally and linguistically diverse students in building modeling skills provides them with an alternative mode of communicating their understanding, which is essential for equitable science assessment. Machine learning (ML) has been used to score open-ended modeling tasks (e.g., drawings) and short text-based constructed scientific explanations, both of which are time-consuming to score by hand. We use ML to evaluate LP-aligned scientific models and the accompanying short text-based explanations reflecting multi-modal understanding of electrical interactions in high school physical science. We show how the LP guides the design of personalized, ML-driven feedback grounded in the diversity of student thinking across both assessment modes.
Problem

Research questions and friction points this paper is trying to address.

Evaluating multi-modal scientific models and explanations using machine learning
Supporting diverse student thinking in NGSS-aligned science assessments
Providing personalized feedback for electrostatic phenomena understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learning progression-guided AI evaluation system
Multi-modal assessment combining drawings and text
Personalized feedback based on student thinking diversity
Leonora Kaldaras — CREATE for STEM at Michigan State University
Tingting Li — Texas Tech University College of Education
Prudence Djagba — CREATE for STEM at Michigan State University
Kevin Haudek — CREATE for STEM at Michigan State University
Joseph Krajcik — Michigan State University
science education · project-based learning · educative curriculum · curriculum development · learning sciences