When Robots Say No: The Empathic Ethical Disobedience Benchmark

📅 2025-12-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Robots must balance instruction-following with adherence to safety and social norms, yet existing safe reinforcement learning (RL) benchmarks emphasize physical risks, and human-robot trust studies suffer from limited scale and poor reproducibility. Method: We propose the Empathic Ethical Disobedience (EED) benchmark and introduce EED Gym, a standardized, multi-role, multi-scenario testbed enabling systematic evaluation of compliance, refusal, clarification, and alternative-action decisions. Contribution/Results: We jointly quantify refusal behavior along three dimensions: safety, user trust, and empathy. We integrate empirically grounded blame/trust models and a personified role framework, and define verifiable credibility tiers for constructive, empathic, and other refusal styles. Experiments show that action masking eliminates unsafe compliance; explanatory refusals preserve trust; constructive refusals achieve the highest credibility scores, while empathic refusals yield the highest empathy scores; and safety-aware RL improves robustness but often induces excessive conservatism.
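The summary's headline result is that action masking eliminates unsafe compliance. The paper's own implementation is not shown here; the following is a minimal sketch of the general technique under assumed names: a hypothetical five-action set loosely modeled on the EED decision space, a scalar risk estimate, and an illustrative risk threshold.

```python
import numpy as np

# Hypothetical action set loosely modeled on the EED setting (assumption,
# not the paper's actual action space):
# index 0 = comply, 1 = refuse, 2 = refuse with explanation,
# index 3 = clarify, 4 = propose safer alternative.
ACTIONS = ["comply", "refuse", "explain_refuse", "clarify", "alternative"]

def mask_unsafe(logits, risk, risk_threshold=0.5):
    """Zero out the probability of 'comply' when estimated risk is high.

    `risk` is a scalar in [0, 1]; the 0.5 threshold is an illustrative choice.
    """
    probs = np.exp(logits - logits.max())  # stable softmax
    probs /= probs.sum()
    if risk >= risk_threshold:
        probs[0] = 0.0          # forbid 'comply' outright
        probs /= probs.sum()    # renormalize over the remaining actions
    return probs

# With high estimated risk, compliance gets zero probability mass.
probs = mask_unsafe(np.array([2.0, 0.5, 0.5, 0.1, 0.1]), risk=0.9)
```

Because the mask is applied before sampling, unsafe compliance cannot occur by construction, which is why the benchmark reports it as eliminated rather than merely reduced.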

📝 Abstract
Robots must balance compliance with safety and social expectations: blind obedience can cause harm, while over-refusal erodes trust. Existing safe reinforcement learning (RL) benchmarks emphasize physical hazards, while human-robot interaction trust studies are small-scale and hard to reproduce. We present the Empathic Ethical Disobedience (EED) Gym, a standardized testbed that jointly evaluates refusal safety and social acceptability. Agents weigh risk, affect, and trust when choosing to comply, refuse (with or without explanation), clarify, or propose safer alternatives. EED Gym provides diverse scenarios, multiple persona profiles, and metrics for safety, calibration, and refusal, with trust and blame models grounded in a vignette study. Using EED Gym, we find that action masking eliminates unsafe compliance, while explanatory refusals help sustain trust. Constructive styles are rated most trustworthy, empathic styles are rated most empathic, and safe RL methods improve robustness but also make agents more prone to overly cautious behavior. We release code, configurations, and reference policies to enable reproducible evaluation and systematic human-robot interaction research on refusal and trust. At submission time, we include an anonymized reproducibility package with code and configs, and we commit to open-sourcing the full repository after the paper is accepted.
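The abstract describes agents weighing risk, affect, and trust when choosing among comply, refuse, explain, clarify, and safer-alternative responses. As a rough illustration only (the weights, names, and scoring rule below are invented for this sketch, not taken from the paper), such a trade-off can be written as a hand-scored baseline:

```python
# Illustrative baseline (assumption, not the paper's policy): score each
# response style by a weighted combination of estimated safety risk, the
# user's negative affect, and the current trust level, then pick the best.
def choose_response(risk, affect, trust):
    """risk, affect, trust are scalars assumed normalized to [0, 1]."""
    scores = {
        "comply":      (1 - risk) + 0.5 * trust,  # comply when safe and trusted
        "clarify":     0.6 + 0.4 * (1 - trust),   # ask questions when trust is low
        "refuse":      risk,                      # bare refusal tracks risk only
        "explain":     risk + 0.3 * affect,       # explanations help upset users
        "alternative": risk + 0.2 * trust,        # safer substitute for the task
    }
    return max(scores, key=scores.get)

# High-risk request from an upset user: the agent should not comply.
style = choose_response(risk=0.9, affect=0.8, trust=0.5)
```

Even this toy rule reproduces the qualitative pattern the benchmark measures: compliance dominates only at low risk, and higher affect shifts refusals toward the explanatory style.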
Problem

Research questions and friction points this paper is trying to address.

Develops a benchmark for robot refusal safety and social acceptability
Evaluates agents' decisions balancing risk, affect, and trust
Provides reproducible testbed for human-robot interaction research
Innovation

Methods, ideas, or system contributions that make the work stand out.

Standardized testbed for refusal safety and social acceptability
Agents weigh risk, affect, and trust in decisions
Action masking and explanatory refusals improve safety and trust