Matching Game Preferences Through Dialogical Large Language Models: A Perspective

📅 2025-07-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the dual challenges of insufficient AI decision transparency and inadequate modeling of multi-user preferences. We propose a Dialogue-based Large Language Model (D-LLM) framework that integrates preference pattern recognition, reasoning process tracing, and structured dialogue negotiation. Leveraging the GRAPHYP search experience network, a preference classification system, and explainable AI (XAI) techniques, the framework enables end-to-end embedding of individual preferences into model decisions. Its core innovation lies in supporting multi-user expression of heterogeneous preferences via natural-language dialogue while generating auditable, visualizable, and verifiable reasoning paths. Empirical evaluation demonstrates significant improvements in decision trustworthiness and human-AI empathic alignment, advancing the credible deployment of AI in complex, interpersonal collaborative settings.

📝 Abstract
This perspective paper explores the future potential of "conversational intelligence" by examining how Large Language Models (LLMs) could be combined with GRAPHYP's network system to better understand human conversations and preferences. Using recent research and case studies, we propose a conceptual framework that could make AI reasoning transparent and traceable, allowing humans to see and understand how AI reaches its conclusions. We present the conceptual perspective of "Matching Game Preferences through Dialogical Large Language Models (D-LLMs)," a proposed system that would allow multiple users to share their different preferences through structured conversations. This approach envisions personalizing LLMs by embedding individual user preferences directly into how the model makes decisions. The proposed D-LLM framework would require three main components: (1) reasoning processes that could analyze different search experiences and guide performance, (2) classification systems that would identify user preference patterns, and (3) dialogue approaches that could help humans resolve conflicting information. This perspective framework aims to create an interpretable AI system where users could examine, understand, and combine the different human preferences that influence AI responses, detected through GRAPHYP's search experience networks. The goal of this perspective is to envision AI systems that would not only provide answers but also show users how those answers were reached, making artificial intelligence more transparent and trustworthy for human decision-making.
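The three components the abstract proposes (traceable reasoning, preference classification, and dialogue-based conflict resolution) can be illustrated with a minimal sketch. Note this is a hypothetical toy, not the paper's implementation: all class names, the keyword-based classifier, and the majority-vote negotiation step are illustrative assumptions, since the paper is conceptual and specifies no algorithm.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the three D-LLM components from the abstract.
# Names and logic are illustrative assumptions, not from the paper.

@dataclass
class ReasoningTrace:
    """Component 1 (toy): an auditable log of reasoning steps."""
    steps: list = field(default_factory=list)

    def log(self, step: str) -> None:
        self.steps.append(step)

def classify_preferences(utterances: dict) -> dict:
    """Component 2 (toy): map each user's utterance to a coarse preference pattern."""
    patterns = {}
    for user, text in utterances.items():
        lowered = text.lower()
        if "recent" in lowered:
            patterns[user] = "recency"
        elif "cited" in lowered:
            patterns[user] = "authority"
        else:
            patterns[user] = "relevance"
    return patterns

def negotiate(patterns: dict, trace: ReasoningTrace) -> str:
    """Component 3 (toy): resolve conflicting preferences by majority vote,
    logging each step so users can inspect how the decision was reached."""
    trace.log(f"detected patterns: {patterns}")
    counts = {}
    for p in patterns.values():
        counts[p] = counts.get(p, 0) + 1
    decision = max(counts, key=counts.get)
    trace.log(f"negotiated decision: rank results by '{decision}'")
    return decision

trace = ReasoningTrace()
prefs = classify_preferences({
    "alice": "I want the most recent papers",
    "bob": "Show me highly cited work",
    "carol": "Recent results please",
})
decision = negotiate(prefs, trace)
print(decision)     # → recency (the shared preference that won)
print(trace.steps)  # the traceable path a user can examine
```

The point of the sketch is the shape of the pipeline: preferences are made explicit per user, the resolution step is a function users can inspect, and every decision leaves a trace, mirroring the "auditable, visualizable, and verifiable reasoning paths" the framework envisions.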
Problem

Research questions and friction points this paper is trying to address.

Enhancing AI transparency in understanding human preferences
Developing dialogical LLMs for personalized user preference matching
Creating interpretable AI systems for traceable decision-making processes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combining LLMs with GRAPHYP's network system
Embedding user preferences into model decisions
Transparent AI reasoning through structured dialogues
Renaud Fabre
Dionysian Economics Laboratory (LED), Université Paris 8, 93200 Saint-Denis, France
Daniel Egret
Université Paris Sciences et Lettres (PSL), 75006 Paris, France
Patrice Bellot
Aix-Marseille Université - CNRS (LIS)
Information Retrieval · Natural Language Processing · Artificial Intelligence · Machine Learning · Text and Data Mining