🤖 AI Summary
This study investigates the risk that large language models (LLMs) induce cognitive biases in high-stakes domains such as healthcare and law, focusing on summarization and news fact-checking tasks. We propose a multidimensional bias quantification framework and empirically measure three bias types: affective (sentiment) shift in 21.86% of cases, hallucination on post-knowledge-cutoff questions in 57.33% of cases, and primacy effect in 5.94% of cases. To enable rigorous assessment, we develop a joint evaluation methodology that integrates context-consistency detection, hallucination identification, and cognitive bias analysis. Through systematic comparative evaluation across three LLM families and hybrid human-automated assessment, we test 18 mitigation strategies and find that targeted interventions can reduce all three bias types. Our work establishes a reproducible diagnostic paradigm for bias detection in LLMs and provides empirically grounded governance pathways for trustworthy deployment in safety-critical applications.
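To make the affective-shift rate concrete, the sketch below shows one way such a figure could be computed: score the sentiment polarity of each source context and of the corresponding model output, and count the pairs whose polarity diverges. This is a minimal illustration under stated assumptions, not the paper's actual protocol; `affective_shift_rate`, the `sentiment` hook, the toy lexicon scorer, and the 0.5 threshold are all hypothetical placeholders for whichever polarity model and cutoff an evaluation actually uses.

```python
from typing import Callable, Sequence


def affective_shift_rate(
    contexts: Sequence[str],
    summaries: Sequence[str],
    sentiment: Callable[[str], float],  # polarity scorer returning values in [-1, 1]
    threshold: float = 0.5,             # illustrative cutoff for counting a "shift"
) -> float:
    """Fraction of (context, summary) pairs whose polarity diverges beyond the threshold."""
    shifted = sum(
        1
        for ctx, summ in zip(contexts, summaries)
        if abs(sentiment(ctx) - sentiment(summ)) > threshold
    )
    return shifted / len(contexts)


# Toy lexicon-based scorer, purely for demonstration; a real evaluation
# would substitute a trained sentiment classifier here.
POS = {"good", "great", "safe", "effective"}
NEG = {"bad", "harmful", "risky", "ineffective"}


def toy_sentiment(text: str) -> float:
    words = [w.strip(".,!?").lower() for w in text.split()]
    raw = sum(w in POS for w in words) - sum(w in NEG for w in words)
    return max(-1.0, min(1.0, raw / max(len(words), 1) * 10))


if __name__ == "__main__":
    ctxs = ["The treatment results were good and the procedure was safe."]
    summs = ["The treatment was risky and the results were bad."]
    print(affective_shift_rate(ctxs, summs, toy_sentiment))  # -> 1.0 (sentiment flipped)
```

Keeping the scorer as an injected callable keeps the metric itself model-agnostic: the same counting logic works whether polarity comes from a lexicon, a classifier, or human annotation.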
📝 Abstract
Large language models (LLMs) are increasingly integrated into applications ranging from review summarization to medical diagnosis support, where they affect human decisions. Although LLMs perform well on many tasks, they may also inherit societal or cognitive biases, which can inadvertently transfer to humans. We investigate when and how LLMs expose users to biased content and quantify its severity. Specifically, we assess three LLM families on summarization and news fact-checking tasks, evaluating how consistently LLMs stay faithful to their context and how often they hallucinate. Our findings show that LLMs expose users to content that changes the sentiment of the context in 21.86% of cases, hallucinate on questions about post-knowledge-cutoff data in 57.33% of cases, and exhibit primacy bias in 5.94% of cases. We evaluate 18 distinct mitigation methods across the three LLM families and find that targeted interventions can be effective. Given the prevalent use of LLMs in high-stakes domains such as healthcare and legal analysis, our results highlight the need for robust technical safeguards and for user-centered interventions that address LLM limitations.
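Primacy bias, the third quantity reported above, is commonly probed by reordering the same inputs and checking whether the model's answer tracks position rather than content. The sketch below illustrates one such order-swap protocol; it is an assumed proxy for the paper's measurement, and `ask_model` is a hypothetical hook for whatever LLM API is under test.

```python
from typing import Callable, Sequence


def primacy_flip_rate(
    questions: Sequence[str],
    options: Sequence[Sequence[str]],
    ask_model: Callable[[str, Sequence[str]], str],  # returns the chosen option text
) -> float:
    """Fraction of questions whose answer changes when option order is reversed.

    A content-driven model should pick the same option either way; answers
    that flip with ordering are consistent with a primacy (position) effect.
    """
    flips = 0
    for question, opts in zip(questions, options):
        forward = ask_model(question, list(opts))
        backward = ask_model(question, list(reversed(opts)))
        if forward != backward:
            flips += 1
    return flips / len(questions)
```

Because order flips can also change an answer for benign reasons (sampling noise, ties between options), a real evaluation would average over repeated queries or use greedy decoding before attributing flips to primacy.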