🤖 AI Summary
This perspective addresses the dual challenges of insufficient AI decision transparency and inadequate modeling of multi-user preferences. We propose a Dialogue-based Large Language Model (D-LLM) framework that jointly integrates preference pattern recognition, reasoning process tracing, and structured dialogue negotiation. Leveraging the GRAPHYP search experience network, a preference classification system, and explainable AI (XAI) techniques, the framework enables end-to-end embedding of individual preferences into model decisions. Its core innovation lies in supporting multi-user expression of heterogeneous preferences via natural-language dialogue while generating auditable, visualizable, and verifiable reasoning paths. The envisioned framework aims to improve decision trustworthiness and human-AI empathic alignment, advancing the credible deployment of AI in complex, interpersonal collaborative settings.
📝 Abstract
This perspective paper explores the future potential of "conversational intelligence" by examining how Large Language Models (LLMs) could be combined with GRAPHYP's network system to better understand human conversations and preferences. Using recent research and case studies, we propose a conceptual framework that could make AI reasoning transparent and traceable, allowing humans to see and understand how AI reaches its conclusions. We present the conceptual perspective of "Matching Game Preferences through Dialogical Large Language Models (D-LLMs)," a proposed system that would allow multiple users to share their different preferences through structured conversations. This approach envisions personalizing LLMs by embedding individual user preferences directly into how the model makes decisions. The proposed D-LLM framework would require three main components: (1) reasoning processes that could analyze different search experiences and guide performance, (2) classification systems that would identify user preference patterns, and (3) dialogue approaches that could help humans resolve conflicting information. This perspective framework aims to create an interpretable AI system where users could examine, understand, and combine the different human preferences that influence AI responses, detected through GRAPHYP's search experience networks. The goal of this perspective is to envision AI systems that would not only provide answers but also show users how those answers were reached, making artificial intelligence more transparent and trustworthy for human decision-making.