Plural Voices, Single Agent: Towards Inclusive AI in Multi-User Domestic Spaces

📅 2025-10-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Home AI agents face ethical conflicts, imbalanced autonomy, and insufficient inclusivity in multi-user households—particularly disadvantaging children, older adults, and neurodiverse users. To address this, we propose the Plural Voices Model (PVM), the first framework enabling dynamic value negotiation and real-time alignment within a single-agent architecture. PVM innovatively integrates adaptive safety scaffolding, a family-centered coordination hub, and an autonomy-adjustment slider, augmented by video-guided and personalized interaction design. Methodologically, it synthesizes heterogeneous public datasets, incorporates human-AI co-designed curricula, and employs fairness-aware contextual modeling to jointly optimize value identification, conflict detection, and accessibility. Experiments demonstrate that PVM significantly outperforms multi-agent baselines in compliance (76% vs. 70%) and fairness (90% vs. 85%), with zero safety violations and low latency. The code and model are publicly released.

📝 Abstract
Domestic AI agents face ethical, autonomy, and inclusion challenges, particularly for overlooked groups such as children, older adults, and neurodivergent users. We present the Plural Voices Model (PVM), a novel single-agent framework that dynamically negotiates multi-user needs through real-time value alignment, leveraging diverse public datasets on mental health, eldercare, education, and moral reasoning. Using human+synthetic curriculum design with fairness-aware scenarios and ethical enhancements, PVM identifies core values, conflicts, and accessibility requirements to inform inclusive principles. Our privacy-focused prototype features adaptive safety scaffolds, tailored interactions (e.g., step-by-step guidance for neurodivergent users, simple wording for children), and equitable conflict resolution. In preliminary evaluations, PVM outperforms multi-agent baselines in compliance (76% vs. 70%), fairness (90% vs. 85%), safety-violation rate (0% vs. 7%), and latency. Design innovations, including video guidance, autonomy sliders, family hubs, and adaptive safety dashboards, point to new directions for ethical, inclusive, user-centered agentic systems in plural domestic contexts. Our code and model are open-sourced and available for reproduction: https://github.com/zade90/Agora
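The tailored interactions the abstract describes (step-by-step guidance for neurodivergent users, simple wording for children) could be sketched as a per-group rendering layer. This is a minimal illustration only; `UserProfile`, `tailor_response`, and the group labels are assumed names, not the paper's released API.

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    user_id: str
    group: str  # hypothetical labels: "child", "older_adult", "neurodivergent", "adult"

def tailor_response(profile: UserProfile, steps: list[str]) -> str:
    """Render the same instructions differently depending on the user's group."""
    if profile.group == "neurodivergent":
        # Explicit step-by-step guidance, one numbered action per line.
        return "\n".join(f"Step {i}: {s}" for i, s in enumerate(steps, 1))
    if profile.group == "child":
        # Simple, friendly wording with one action at a time.
        return "Let's do it together! " + " Then ".join(steps) + "."
    # Default: compact prose for adult users.
    return "; ".join(steps) + "."
```

For example, `tailor_response(UserProfile("u1", "neurodivergent"), ["open the app", "press start"])` yields numbered lines, while the same steps render as a single sentence for an adult profile.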
Problem

Research questions and friction points this paper is trying to address.

Addressing ethical and inclusion challenges for overlooked domestic user groups
Developing a single-agent framework for dynamic multi-user needs negotiation
Creating privacy-focused adaptive interactions with equitable conflict resolution
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic multi-user negotiation through real-time value alignment
Privacy-focused prototype with adaptive safety scaffolds
Fairness-aware curriculum design using human and synthetic data
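The combination of fairness-aware conflict resolution and an autonomy slider could be sketched as below. This is an assumed illustration of the general idea, not the released implementation: the function name, the group-weighting scheme (boosting overlooked users), and the deferral threshold are all hypothetical.

```python
def resolve_conflict(prefs: dict[str, dict[str, float]],
                     group_weight: dict[str, float],
                     autonomy: float,
                     threshold: float = 0.6) -> tuple[str, str]:
    """Pick the action with the highest fairness-weighted household support.

    prefs:        user -> {action: preference in [0, 1]}
    group_weight: user -> weight; values > 1 boost overlooked groups
                  (children, older adults, neurodivergent users)
    autonomy:     slider in [0, 1]; below `threshold` the agent proposes
                  the action to the household instead of acting on its own
    """
    actions = {a for user_prefs in prefs.values() for a in user_prefs}

    def score(action: str) -> float:
        # Weighted sum of each user's preference for this action.
        return sum(group_weight.get(u, 1.0) * p.get(action, 0.0)
                   for u, p in prefs.items())

    best = max(actions, key=score)
    if autonomy < threshold:
        return ("ask_household", best)  # defer: propose, don't act
    return ("act", best)
```

With a boosted weight for a child user, the child's strong preference can outweigh an adult's mild objection, and a low autonomy setting turns the decision into a proposal rather than an autonomous action.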
Joydeep Chandra
Indian Institute of Technology, Patna
Satyam Kumar Navneet
Independent Researcher, India