🤖 AI Summary
AI/ML ethics curricula suffer from inadequate coverage of fairness and justice, monolithic pedagogical perspectives, and assessment frameworks lacking normative theoretical grounding. Method: We propose a justice-oriented, multi-stakeholder simulation framework that integrates Rawlsian and other normative justice theories into a rubric for automated textual analysis; leveraging large language models (LLMs), we simulate diverse evaluator perspectives (instructor, departmental chair, institutional reviewer, and external evaluator) to score and thematically analyze 24 computing ethics syllabi. Contribution/Results: This work is the first to embed normative justice theory into LLM-driven curriculum assessment, revealing systemic omissions of structural justice issues and significant divergences in role-specific concerns. It yields an actionable diagnostic tool and concrete pedagogical pathways to advance ethics education from formal inclusion toward substantive justice.
📝 Abstract
Course syllabi set the tone and expectations for a course, shaping the learning experience for both students and instructors. In computing courses, especially those addressing fairness and ethics in artificial intelligence (AI), machine learning (ML), and algorithmic design, it is imperative to understand how barriers to fair outcomes are being navigated. Course expectations should be inclusive, transparent, and grounded in promoting critical thinking. Syllabus analysis offers a way to evaluate the coverage, depth, practices, and expectations within a course; manual syllabus evaluation, however, is time-consuming and prone to inconsistency. To address this, we developed a justice-oriented scoring rubric and prompted a large language model (LLM) to review syllabi through a multi-perspective role simulation. Using this rubric, we evaluated 24 syllabi from four perspectives: instructor, departmental chair, institutional reviewer, and external evaluator. We also prompted the LLM to identify thematic trends across the courses. Findings show that multi-perspective evaluation surfaces nuanced, role-specific priorities, which can be leveraged to fill hidden gaps in the curriculum design of AI/ML and related computing courses focused on fairness and ethics. These insights offer concrete directions for improving the design and delivery of fairness, ethics, and justice content in such courses.
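The multi-perspective rubric evaluation described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the rubric criteria are loosely paraphrased from the abstract, the 1–5 scoring scale and prompt wording are assumptions, and `call_llm` stands in for whatever LLM interface is actually used.

```python
from statistics import mean

# Four evaluator roles named in the abstract.
ROLES = ["instructor", "departmental chair",
         "institutional reviewer", "external evaluator"]

# Illustrative rubric criteria (paraphrased; the real rubric is more detailed).
CRITERIA = ["coverage of fairness and justice topics",
            "depth of ethical analysis",
            "inclusive practices",
            "transparent expectations"]

def build_prompt(role: str, syllabus_text: str) -> str:
    """Compose a role-conditioned evaluation prompt for the LLM."""
    rubric = "\n".join(f"- {c}: score 1-5 with a brief justification"
                       for c in CRITERIA)
    return (f"You are evaluating a course syllabus as a {role}.\n"
            f"Apply this justice-oriented rubric:\n{rubric}\n\n"
            f"Syllabus:\n{syllabus_text}")

def aggregate(scores_by_role: dict[str, dict[str, float]]) -> dict[str, float]:
    """Average each criterion across roles; per-role gaps from these means
    are what surface role-specific priorities."""
    return {c: mean(scores[c] for scores in scores_by_role.values())
            for c in CRITERIA}
```

In use, one would send `build_prompt(role, syllabus)` to the model for each of the four roles per syllabus (via a hypothetical `call_llm` helper), parse the returned scores into the nested dictionary, and then compare each role's scores against the aggregated means to locate where, say, the external evaluator flags gaps the instructor perspective misses.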