🤖 AI Summary
Home AI agents face ethical conflicts, imbalanced autonomy, and insufficient inclusivity in multi-user households—particularly disadvantaging children, older adults, and neurodivergent users. To address this, we propose the Plural Voices Model (PVM), the first framework enabling dynamic value negotiation and real-time alignment within a single-agent architecture. PVM innovatively integrates adaptive safety scaffolding, a family-centered coordination hub, and an autonomy-adjustment slider, augmented by video-guided and personalized interaction design. Methodologically, it synthesizes heterogeneous public datasets, incorporates human-AI co-designed curricula, and employs fairness-aware contextual modeling to jointly optimize value identification, conflict detection, and accessibility. Experiments demonstrate that PVM significantly outperforms multi-agent baselines in compliance (76% vs. 70%) and fairness (90% vs. 85%), while achieving zero safety violations and lower latency. The code and model are publicly released.
📝 Abstract
Domestic AI agents face ethical, autonomy, and inclusion challenges, particularly for overlooked groups such as children, older adults, and neurodivergent users. We present the Plural Voices Model (PVM), a novel single-agent framework that dynamically negotiates multi-user needs through real-time value alignment, leveraging diverse public datasets on mental health, eldercare, education, and moral reasoning. Using a human- and synthetic-data curriculum design with fairness-aware scenarios and ethical enhancements, PVM identifies core values, conflicts, and accessibility requirements to inform inclusive principles. Our privacy-focused prototype features adaptive safety scaffolds, tailored interactions (e.g., step-by-step guidance for neurodivergent users, simple wording for children), and equitable conflict resolution. In preliminary evaluations, PVM outperforms multi-agent baselines in compliance (76% vs. 70%), fairness (90% vs. 85%), safety-violation rate (0% vs. 7%), and latency. Design innovations, including video guidance, autonomy sliders, family hubs, and adaptive safety dashboards, demonstrate new directions for ethical, inclusive domestic AI and for building user-centered agentic systems in plural domestic contexts. Our code and model are open-sourced for reproduction: https://github.com/zade90/Agora