🤖 AI Summary
Social robots in dementia care risk inducing deceptive perceptions by obscuring their artificial nature, raising ethical concerns about epistemic integrity and cognitive respect. Method: Grounded in dual-process theory, this study conducts a scoping review and thematic analysis of 26 empirical studies to derive the first empirically grounded definition of "robotic deception": a systematic misattribution of biological essence under Type 1 intuitive cognition. Contribution/Results: It identifies four critical categories of design cues (physiological signal simulation, social intention signaling, familiarity-based embodiment, and artificiality cues) and reveals dynamic fluctuations in how people with dementia attribute biological, social, and mental properties to robots. Moving beyond traditional philosophical and behaviorist frameworks, the study is the first to integrate ontological cognitive instability into human-robot interaction analysis. This reframes design imperatives toward a balanced paradigm that sustains engagement while preserving users' epistemic agency and cognitive dignity.
📝 Abstract
As social robots increasingly enter dementia care, concerns about deception, whether intentional or not, are gaining attention. Yet how robotic design cues might elicit misleading perceptions in people with dementia, and how these perceptions arise, remains insufficiently understood. In this scoping review, we examined 26 empirical studies of interactions between people with dementia and physical social robots. We identify four key categories of design cues that may foster deceptive impressions: cues resembling physiological signs (e.g., simulated breathing), social intentions (e.g., playful movement), and familiar beings (e.g., animal-like form and sound), as well as, to a lesser extent, cues that reveal artificiality. Thematic analysis of user responses reveals that people with dementia often attribute biological, social, and mental capacities to robots, shifting dynamically between awareness and illusion. These findings underscore the fluctuating nature of ontological perception in dementia care contexts. Existing definitions of robotic deception often rest on philosophical or behaviorist premises but rarely engage with the underlying cognitive mechanisms. We propose an empirically grounded definition: robotic deception occurs when Type 1 (automatic, heuristic) processing dominates over Type 2 (deliberative, analytic) reasoning, leading to misinterpretation of a robot's artificial nature. This dual-process perspective highlights the ethical complexity of social robots in dementia care and calls for design approaches that are not only engaging but also epistemically respectful.