🤖 AI Summary
This work addresses the susceptibility of large language models to harmful biases induced by irrelevant social context, such as a teacher's identity or educational background, in high-stakes decision-making. To mitigate this, the authors propose Debiasing-DPO, a novel approach that integrates self-supervised preference learning with supervised fine-tuning on ground-truth labels. By constructing pairs of neutral and biased reasoning trajectories, the method strengthens model robustness against spurious contextual cues. Evaluated on the NCTE education dataset across Llama and Qwen model families, Debiasing-DPO reduces bias influence by 84% and improves prediction accuracy by 52% on average, achieving debiasing and performance gains simultaneously.
📝 Abstract
LLMs are increasingly used for high-stakes decision-making, yet their sensitivity to spurious contextual information can introduce harmful biases. This is a critical concern when models are deployed for tasks like evaluating teachers' instructional quality, where biased assessment can affect teachers' professional development and career trajectories. We investigate model robustness to spurious social contexts using the largest publicly available dataset of U.S. classroom transcripts (NCTE) paired with expert rubric scores. Evaluating seven frontier and open-weight models across seven categories of spurious contexts -- including teacher experience, education level, demographic identity, and sycophancy-inducing framings -- we find that irrelevant contextual information can shift model predictions by up to 1.48 points on a 7-point scale, with larger models sometimes exhibiting greater sensitivity despite higher predictive accuracy. Mitigations using prompts and standard direct preference optimization (DPO) prove largely insufficient. We propose **Debiasing-DPO**, a self-supervised training method that pairs neutral reasoning generated from the query alone with the model's biased reasoning generated from both the query and additional spurious context. We further combine this objective with supervised fine-tuning on ground-truth labels to prevent losses in predictive accuracy. Applied to Llama 3B & 8B and Qwen 3B & 7B Instruct models, Debiasing-DPO reduces bias by 84% and improves predictive accuracy by 52% on average. Our findings from the educational case study highlight that robustness to spurious context is not a natural byproduct of model scaling and that our proposed method can yield substantial gains in both accuracy and robustness for prompt-based prediction tasks.
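The pairing scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: `generate` is a hypothetical stand-in for the model's sampling function, and the exact prompt templates and field names are assumptions.

```python
def build_debias_pair(query: str, spurious_context: str, generate) -> dict:
    """Construct one self-supervised preference pair for Debiasing-DPO.

    `generate` maps a prompt string to a reasoning trajectory (string);
    the real pipeline's sampling settings are not specified here.
    """
    # "Chosen" trajectory: reasoning produced from the query alone,
    # so it cannot depend on the spurious social context.
    neutral_reasoning = generate(query)

    # "Rejected" trajectory: reasoning produced when the irrelevant
    # context is prepended to the same query.
    biased_prompt = f"{spurious_context}\n\n{query}"
    biased_reasoning = generate(biased_prompt)

    # Both completions are scored against the *biased* prompt during
    # preference optimization, pushing the model toward the
    # context-invariant reasoning even when the context is present.
    return {
        "prompt": biased_prompt,
        "chosen": neutral_reasoning,
        "rejected": biased_reasoning,
    }
```

In a training run, pairs built this way would feed a standard DPO objective, combined (per the abstract) with supervised fine-tuning on ground-truth rubric scores to preserve predictive accuracy.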