Why That Robot? A Qualitative Analysis of Justification Strategies for Robot Color Selection Across Occupational Contexts

📅 2026-03-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates how individuals select robots varying in skin tone and level of anthropomorphism across occupational contexts, uncovering underlying social biases and the mechanisms through which they are rationalized. Analyzing 4,146 open-ended justifications from 1,038 participants, the research combines stereotype-priming experiments with a human–AI consensus-driven qualitative content analysis (κ = 0.73) to develop and validate a multidimensional coding framework. Findings reveal that 52% of justifications employ functional reasoning that often implicitly embeds occupational–racial stereotypes. While stereotype priming significantly influences color preferences, this effect is absent from participants' verbal explanations. Higher anthropomorphism prompts users to adopt de-racialized, "machine-centric" strategies, and demographic factors substantially moderate rationalization patterns, highlighting the intricate social cognition involved in robot design and deployment.
📝 Abstract
As robots increasingly enter the workforce, human-robot interaction (HRI) research must address how implicit social biases influence user preferences. This paper investigates how users rationalize their selections of robots varying in skin tone and anthropomorphic features across different occupations. By qualitatively analyzing 4,146 open-ended justifications from 1,038 participants, we map the reasoning frameworks driving robot color selection across four professional contexts. We developed and validated a comprehensive, multidimensional coding scheme via human–AI consensus (κ = 0.73). Our results demonstrate that while utilitarian Functionalism is the dominant justification strategy (52%), participants systematically adapted these practical rationales to align with established racial and occupational stereotypes. Furthermore, we reveal that bias frequently operates beneath conscious rationalization: exposure to racial stereotype primes significantly shifted participants' color choices, yet their stated justifications remained masked by standard affective or task-related reasoning. We also found that demographic backgrounds significantly shape justification strategies, and that robot shape strongly modulates color interpretation. Specifically, as robots become highly anthropomorphic, users increasingly retreat from functional reasoning toward Machine-Centric de-racialization. Through these empirical results, we provide actionable design implications to help reduce the perpetuation of societal biases in future workforce robots.
Problem

Research questions and friction points this paper is trying to address.

human-robot interaction
social bias
robot appearance
racial stereotypes
occupational context
Innovation

Methods, ideas, or system contributions that make the work stand out.

human-robot interaction
implicit bias
anthropomorphism
qualitative coding
racial stereotypes
Jiangen He
Assistant Professor, School of Information Sciences, University of Tennessee
Visual Analytics · Information Visualization · Bibliometrics · Science Mapping
Wanqi Zhang
The University of Tennessee, Knoxville, Tennessee, USA
Jessica K. Barfield
University of Kentucky, Lexington, Kentucky, USA