Social Robots for People with Dementia: A Literature Review on Deception from Design to Perception

📅 2025-07-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Social robots in dementia care risk inducing deceptive perceptions by obscuring their artificial nature, raising ethical concerns about epistemic integrity and cognitive respect. Method: Grounded in dual-process theory, this study conducts a scoping review and thematic analysis of 26 empirical studies to derive an empirically grounded definition of "robot deception": a systematic misattribution of a robot's nature that arises when Type 1 (intuitive) processing dominates over Type 2 (deliberative) reasoning. Contribution/Results: It identifies four critical categories of design cues (physiological signal simulation, social intention signaling, familiarity-based embodiment, and artificiality cues) and reveals dynamic fluctuations in how people with dementia attribute biological, social, and mental properties to robots. Moving beyond traditional philosophical and behaviorist frameworks, the study integrates ontological cognitive instability into human-robot interaction analysis. This reframes design imperatives toward a balanced paradigm that sustains engagement while preserving users' epistemic agency and cognitive dignity.

📝 Abstract
As social robots increasingly enter dementia care, concerns about deception, intentional or not, are gaining attention. Yet, how robotic design cues might elicit misleading perceptions in people with dementia, and how these perceptions arise, remains insufficiently understood. In this scoping review, we examined 26 empirical studies on interactions between people with dementia and physical social robots. We identify four key design cue categories that may influence deceptive impressions: cues resembling physiological signs (e.g., simulated breathing), social intentions (e.g., playful movement), familiar beings (e.g., animal-like form and sound), and, to a lesser extent, cues that reveal artificiality. Thematic analysis of user responses reveals that people with dementia often attribute biological, social, and mental capacities to robots, dynamically shifting between awareness and illusion. These findings underscore the fluctuating nature of ontological perception in dementia contexts. Existing definitions of robotic deception often rest on philosophical or behaviorist premises, but rarely engage with the cognitive mechanisms involved. We propose an empirically grounded definition: robotic deception occurs when Type 1 (automatic, heuristic) processing dominates over Type 2 (deliberative, analytic) reasoning, leading to misinterpretation of a robot's artificial nature. This dual-process perspective highlights the ethical complexity of social robots in dementia care and calls for design approaches that are not only engaging, but also epistemically respectful.
Problem

Research questions and friction points this paper is trying to address.

How robotic design cues create deceptive perceptions in dementia patients
Understanding cognitive mechanisms behind dementia patients' robot misinterpretations
Ethical implications of social robot deception in dementia care
Innovation

Methods, ideas, or system contributions that make the work stand out.

Identifies key design cues causing deceptive impressions
Proposes dual-process theory for robotic deception definition
Advocates epistemically respectful robot design approaches
Fan Wang
Department of Industrial Engineering and Innovation Science, Eindhoven University of Technology, the Netherlands
Giulia Perugia
Assistant Professor, Eindhoven University of Technology
HRI, Social Robotics, Gendered Robots, Dementia, Ethical and Inclusive HRI
Yuan Feng
Department of Industrial Design, Northwestern Polytechnical University, China
Wijnand IJsselsteijn
Department of Industrial Engineering and Innovation Science, Eindhoven University of Technology, the Netherlands