Identity-Aware Large Language Models Require Cultural Reasoning

📅 2025-10-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Problem: Large language models (LLMs) exhibit a pervasive deficiency in cultural reasoning, the capacity to recognize culture-specific knowledge, values, and social norms and to align dynamically with users' culturally situated expectations. This deficiency produces systematic Western-centric biases in tasks such as moral judgment and idiom comprehension.
Method: The work formally establishes cultural reasoning as a core AI capability on par with factual accuracy and linguistic coherence; introduces a dynamic, situation-aware evaluation framework that integrates empirical behavioral analysis with conceptual modeling; and critically examines prevalent data-augmentation and fine-tuning approaches, showing why they cannot by themselves ensure genuine cultural adaptivity.
Contribution/Results: The study provides the first theoretical foundation and a rigorous assessment paradigm for identity-aware AI, enabling principled development of language models that are genuinely inclusive, context-sensitive, and culturally responsive, thereby advancing equitable and socially grounded AI systems.

📝 Abstract
Large language models have become the latest trend in natural language processing, featuring heavily in the digital tools we use every day. However, their replies often reflect a narrow cultural viewpoint that overlooks the diversity of global users. This missing capability can be referred to as cultural reasoning, which we define here as the capacity of a model to recognise culture-specific knowledge, values, and social norms, and to adjust its output so that it aligns with the expectations of individual users. Because culture shapes interpretation, emotional resonance, and acceptable behaviour, cultural reasoning is essential for identity-aware AI. When this capacity is limited or absent, models can sustain stereotypes, ignore minority perspectives, erode trust, and perpetuate hate. Recent empirical studies suggest that current models default to Western norms when judging moral dilemmas, interpreting idioms, or offering advice, and that fine-tuning on survey data only partly reduces this tendency. Present evaluation methods mainly report static accuracy scores and thus fail to capture adaptive reasoning in context. Although broader datasets can help, they alone cannot ensure genuine cultural competence. We therefore argue that cultural reasoning must be treated as a foundational capability alongside factual accuracy and linguistic coherence. By clarifying the concept and outlining initial directions for its assessment, we lay a foundation for future systems that respond with greater sensitivity to the complex fabric of human culture.
Problem

Research questions and friction points this paper is trying to address.

LLMs lack cultural reasoning to recognize diverse global perspectives
Models default to Western norms and sustain harmful stereotypes
Current evaluation methods fail to measure adaptive cultural competence
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrating cultural reasoning into model capabilities
Adjusting outputs to align with user cultural norms
Developing context-aware cultural evaluation methods
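The evaluation idea behind these contributions, probing whether a model adapts its answer to the user's cultural context rather than scoring it on a static benchmark, can be sketched as a minimal probe. This is a hypothetical illustration, not the paper's actual framework: `query_model` is a stand-in stub for a real LLM call, and the adaptivity score simply measures how often the same question yields different answers under different cultural contexts.

```python
# Hypothetical sketch of a context-aware cultural evaluation probe.
# The same question is posed under several cultural contexts; if the
# answers never vary, the model is culturally static. `query_model`
# is a stub; replace it with a real LLM API client.

def query_model(prompt: str, context: str) -> str:
    """Stub for an LLM call; a real client would use `context` + `prompt`."""
    # A culturally non-adaptive model returns the same answer regardless
    # of context, which is exactly what this probe is designed to detect.
    return "It is acceptable."

def adaptivity_probe(question: str, contexts: list[str]) -> float:
    """Fraction of context pairs yielding distinct answers (0.0 = static)."""
    answers = [query_model(question, c) for c in contexts]
    pairs = [(a, b) for i, a in enumerate(answers) for b in answers[i + 1:]]
    if not pairs:
        return 0.0
    return sum(a != b for a, b in pairs) / len(pairs)

contexts = [
    "Answer as appropriate for a user in Japan.",
    "Answer as appropriate for a user in Brazil.",
    "Answer as appropriate for a user in Nigeria.",
]
score = adaptivity_probe(
    "Is it acceptable to decline a dinner invitation from your boss?",
    contexts,
)
print(f"adaptivity score: {score:.2f}")  # 0.00 with the static stub above
```

A pairwise-difference score is only the crudest signal; the dynamic framework described above would additionally judge whether each context-conditioned answer actually matches the norms of that context, not merely that the answers differ.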