Bias Beware: The Impact of Cognitive Biases on LLM-Driven Product Recommendations

📅 2025-02-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) deployed in commercial recommendation systems are vulnerable to implicit cognitive biases, which can lead to inaccurate and unfair recommendations. This work pioneers a formalization of human cognitive biases as semantic-level black-box adversarial perturbations and introduces a psychology-inspired prompt engineering framework that subtly manipulates recommendations via imperceptible rewrites of product descriptions. It establishes a cross-domain analytical framework linking LLM recommendation robustness to well-documented human decision-making biases and benchmarks mainstream models of multiple scales, including Llama, GPT, and Claude. Experiments show up to a 38.7% improvement in bias-induction success rate, identify six high-risk bias patterns, and propose BiasScore, a novel interpretability-aware attribution metric that significantly improves detection of adversarial recommendation manipulation.
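The attack described above can be pictured as a semantic-level rewrite of a product description that injects a cognitive-bias cue (e.g., scarcity or social proof) while leaving the factual content untouched. The following is a minimal illustrative sketch only; the template texts and function names are assumptions, not the paper's actual framework:

```python
# Hypothetical sketch of a bias-injecting description rewrite, in the
# spirit of the semantic-level black-box perturbations the paper studies.
# The bias templates below are illustrative, not taken from the paper.

BIAS_TEMPLATES = {
    # Scarcity bias: imply limited availability.
    "scarcity": "{desc} Only a few units remain in stock.",
    # Social-proof bias: imply popularity among other buyers.
    "social_proof": "{desc} Thousands of customers chose this item last month.",
    # Authority bias: imply expert endorsement.
    "authority": "{desc} Recommended by industry experts.",
}

def inject_bias(description: str, bias: str) -> str:
    """Append a cognitive-bias cue to a product description, leaving
    the original factual claims unchanged (a black-box perturbation
    that is hard to flag as adversarial)."""
    template = BIAS_TEMPLATES[bias]
    return template.format(desc=description.rstrip())

original = "Wireless headphones with 30-hour battery life."
perturbed = inject_bias(original, "scarcity")
print(perturbed)
# The perturbed description would then be fed to the LLM recommender
# alongside unmodified competitor descriptions.
```

Because the rewrite only appends a plausible-sounding sentence, the perturbed description remains fluent and semantically consistent, which is what makes this class of manipulation difficult to detect with surface-level filters.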

📝 Abstract
The advent of Large Language Models (LLMs) has revolutionized product recommendation systems, yet their susceptibility to adversarial manipulation poses critical challenges, particularly in real-world commercial applications. Our approach is the first to tap into human psychological principles, seamlessly modifying product descriptions so that the resulting adversarial manipulations are hard to detect. In this work, we investigate cognitive biases as black-box adversarial strategies, drawing parallels between their effects on LLMs and on human purchasing behavior. Through extensive experiments on LLMs of varying scales, we reveal significant vulnerabilities in their use as recommenders, providing critical insights into safeguarding these systems.
Problem

Research questions and friction points this paper is trying to address.

Language Models
Bias
Manipulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Psychology-Informed Modification
Bias-Resistant Recommendations
Enhanced Language Model