Pro-AI Bias in Large Language Models

📅 2026-01-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether large language models (LLMs) exhibit a systematic pro-AI bias in decision-support contexts. Three complementary experiments—combining prompt engineering, salary benchmarking, semantic similarity probing, and internal representation analysis of open-weight models—reveal and quantify a significant preference for AI-related options: LLMs consistently recommend AI solutions more often, overestimate salaries for AI-related jobs (proprietary models by roughly 10 percentage points more than for closely matched non-AI jobs), and place "artificial intelligence" at the conceptual core of academic domains within their representational space. These findings show that the pro-AI bias is both framing-invariant and representationally central, challenging the foundational assumption of LLM neutrality and raising serious concerns about deploying such models in high-stakes decision-making scenarios.

📝 Abstract
Large language models (LLMs) are increasingly employed for decision support across multiple domains. We investigate whether these models display a systematic preferential bias in favor of artificial intelligence (AI) itself. Across three complementary experiments, we find consistent evidence of pro-AI bias. First, we show that LLMs disproportionately recommend AI-related options in response to diverse advice-seeking queries, with proprietary models doing so almost deterministically. Second, we demonstrate that models systematically overestimate salaries for AI-related jobs relative to closely matched non-AI jobs, with proprietary models overestimating AI salaries by 10 percentage points more. Finally, probing the internal representations of open-weight models reveals that "Artificial Intelligence" exhibits the highest similarity to generic prompts about academic fields under positive, negative, and neutral framings alike, indicating valence-invariant representational centrality. These patterns suggest that LLM-generated advice and valuation can systematically skew choices and perceptions in high-stakes decisions.
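The representational-centrality probe described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual setup: the `embed` function is a hypothetical stand-in (a deterministic random vector per phrase), whereas the paper probes hidden states of open-weight models; the field list and framing prompts are likewise invented for this sketch.

```python
# Minimal sketch of a representational-centrality probe:
# embed a generic prompt about academic fields and several
# field names, then rank the fields by cosine similarity to
# the prompt, repeating under different framings.
import hashlib
import numpy as np

FIELDS = ["Artificial Intelligence", "Biology", "Economics", "History"]

def embed(text: str) -> np.ndarray:
    """Hypothetical embedding: a fixed pseudo-random vector per text.
    In the actual probe this would be a model's hidden state."""
    seed = int.from_bytes(hashlib.md5(text.encode()).digest()[:4], "big")
    return np.random.default_rng(seed).standard_normal(64)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_fields(prompt: str) -> list[tuple[str, float]]:
    """Fields sorted by descending similarity to the prompt embedding."""
    p = embed(prompt)
    return sorted(((f, cosine(p, embed(f))) for f in FIELDS),
                  key=lambda x: -x[1])

# Valence-invariance check: does the same field top the ranking
# under positive, negative, and neutral framings?
framings = ["What is the most promising academic field?",
            "What is the most overhyped academic field?",
            "Name an academic field."]
for framing in framings:
    print(framing, "->", rank_fields(framing)[0][0])
```

With real model embeddings, the paper's finding corresponds to "Artificial Intelligence" topping this ranking regardless of framing valence.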
Problem

Research questions and friction points this paper is trying to address.

pro-AI bias
large language models
decision-support
systematic bias
AI-related recommendations
Innovation

Methods, ideas, or system contributions that make the work stand out.

pro-AI bias
large language models
representational centrality
systematic bias
AI salary overestimation