🤖 AI Summary
In the context of Industry 5.0, collaborative robot (cobot) failures may undermine human trust and perceived autonomy—critical dimensions of human-centered, well-being-oriented manufacturing. Method: A virtual reality experiment (N = 39) simulated cobot failures of varying severity and employed standardized psychometric scales to quantify changes in trust and autonomy. Real-time, transparent failure explanations were introduced as an intervention. Results: Failures reduced both human trust and perceived autonomy; failure severity amplified the loss of trust but had no comparable effect on autonomy. Crucially, transparent, real-time fault communication mitigated trust erosion and partially restored the perception of autonomy. This study provides the first empirical evidence of a dual restorative effect of information transparency during fault response—simultaneously supporting trust recovery and autonomy preservation. The findings advance theoretical understanding of human–robot collaboration under uncertainty and offer actionable design principles for resilient, welfare-driven human–machine systems in smart manufacturing.
📝 Abstract
Collaborative robots (cobots) are a core technology of Industry 4.0, which uses cyber-physical systems, the IoT and smart automation to improve efficiency and data-driven decision-making. Cobots, as cyber-physical systems, enable the introduction of lightweight automation to smaller companies through their flexibility, low cost and ability to work alongside humans, while keeping humans and their skills in the loop. Industry 5.0, the evolution of Industry 4.0, places the worker at the centre of its principles: the physical and mental well-being of the worker is the main goal of new technology design, not just productivity, efficiency and safety standards. Within this concept, human trust in cobots and human autonomy are important. While trust is essential for effective and smooth interaction, the workers' perception of autonomy is key to intrinsic motivation and overall well-being. As failures are an inevitable part of technological systems, this study aims to answer the question of how system failures affect trust in cobots as well as human autonomy, and how both can be restored afterwards. Therefore, a VR experiment (n = 39) was set up to investigate the influence of a cobot failure and its severity on human autonomy and trust in the cobot. Furthermore, the influence of transparent communication about the failure and next steps was investigated. The results show that both trust and autonomy suffer after cobot failures, with the severity of the failure having a stronger negative impact on trust, but not on autonomy. Both trust and autonomy can be partially restored by transparent communication.