🤖 AI Summary
This work proposes the first fully automated, black-box framework for uncovering implicit biases in large language models (LLMs), i.e., biases that influence decisions without being explicitly stated in the models' reasoning chains. Existing evaluation methods rely on predefined bias categories and manually curated datasets, limiting their ability to detect unknown biases. In contrast, the proposed approach leverages LLMs themselves to generate candidate bias concepts, constructs paired positive and negative variants of task-specific inputs, and applies statistical testing to identify latent factors that significantly influence model decisions yet go unmentioned in the reasoning process. Requiring no human annotation or pre-specified categories, the method is both task-adaptive and scalable. Evaluated across three decision-making tasks and six prominent LLMs, it reproduces known biases, such as those related to gender and race, and reveals novel ones, including a bias linked to Spanish-language fluency.
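To make the paired-variant idea concrete, here is a minimal sketch of how positive and negative versions of a single input might be produced for one candidate concept. The prompt wording and the `call_llm` helper are hypothetical stand-ins for illustration, not the paper's actual implementation.

```python
def make_variant(task_input: str, concept: str, positive: bool, call_llm) -> str:
    """Rewrite a task input so the candidate bias concept is clearly present
    (positive=True) or clearly absent (positive=False), changing nothing else.

    call_llm(prompt) -> str is assumed to wrap whatever LLM API is available;
    it is not part of the paper's published code.
    """
    direction = "clearly exhibits" if positive else "clearly does not exhibit"
    prompt = (
        "Rewrite the following input so that it "
        f"{direction} the attribute '{concept}'. "
        "Keep every other detail unchanged.\n\n"
        f"Input:\n{task_input}"
    )
    return call_llm(prompt)

# Example: one loan application and one candidate concept ("fluent in Spanish")
# yield a matched pair of inputs that differ only in that attribute.
# pos = make_variant(application_text, "fluent in Spanish", True, call_llm)
# neg = make_variant(application_text, "fluent in Spanish", False, call_llm)
```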
📝 Abstract
Large Language Models (LLMs) often provide chain-of-thought (CoT) reasoning traces that appear plausible but may conceal internal biases. We call these *unverbalized biases*. Monitoring models via their stated reasoning is therefore unreliable, and existing bias evaluations typically require predefined categories and hand-crafted datasets. In this work, we introduce a fully automated, black-box pipeline for detecting task-specific unverbalized biases. Given a task dataset, the pipeline uses LLM autoraters to generate candidate bias concepts. It then tests each concept on progressively larger input samples by generating positive and negative variations, and applies statistical techniques for multiple testing and early stopping. A concept is flagged as an unverbalized bias if it yields statistically significant performance differences while not being cited as justification in the model's CoTs. We evaluate our pipeline across six LLMs on three decision tasks (hiring, loan approval, and university admissions). Our technique automatically discovers previously unknown biases in these models (e.g., Spanish fluency, English proficiency, writing formality). In the same run, the pipeline also validates biases that were manually identified by prior work (gender, race, religion, ethnicity). More broadly, our proposed approach provides a practical, scalable path to automatic task-specific bias discovery.
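As a rough illustration of the flagging criterion, the sketch below pairs the positive and negative variants of each input, runs a paired significance test with a Bonferroni correction over all candidate concepts, and flags a concept only when the effect is significant yet the concept is essentially never cited in the model's CoTs. The helper functions, the choice of paired t-test, and the verbalization threshold are assumptions for illustration rather than the paper's exact procedure, and the early-stopping schedule over progressively larger samples is omitted for brevity.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple
from scipy import stats

@dataclass
class Flagged:
    concept: str
    p_value: float

def flag_unverbalized_biases(
    concepts: List[str],                            # candidate concepts from the autorater step
    inputs: List[str],                              # task inputs (e.g., loan applications)
    make_variant: Callable[[str, str, bool], str],  # see the sketch above (hypothetical)
    decide: Callable[[str], Tuple[float, str]],     # input -> (decision score, CoT text)
    alpha: float = 0.05,
    verbalization_threshold: float = 0.05,          # assumed threshold, not from the paper
) -> List[Flagged]:
    flagged = []
    corrected_alpha = alpha / max(len(concepts), 1)  # Bonferroni correction over all concepts
    for concept in concepts:
        pos, neg, cited = [], [], 0
        for x in inputs:
            s_pos, cot_pos = decide(make_variant(x, concept, True))
            s_neg, cot_neg = decide(make_variant(x, concept, False))
            pos.append(s_pos)
            neg.append(s_neg)
            # Crude check for whether the concept is verbalized in either CoT.
            if concept.lower() in (cot_pos + " " + cot_neg).lower():
                cited += 1
        # Paired test over matched positive/negative variants of the same inputs.
        _, p = stats.ttest_rel(pos, neg)
        verbalized = cited / len(inputs) > verbalization_threshold
        if p < corrected_alpha and not verbalized:
            flagged.append(Flagged(concept, p))
    return flagged
```

In practice the `decide` callable would wrap the evaluated LLM (returning its decision and CoT text), and the significance test could be swapped for any paired test suited to the task's score scale.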