🤖 AI Summary
Existing multimodal task-oriented dialogue (TOD) systems suffer from two key limitations: (1) neglect of unstructured user review knowledge, and (2) underutilization of large language models (LLMs). To address these, we propose a dual-knowledge-enhanced two-stage reasoning framework. In Stage I, an LLM-driven knowledge probe jointly models structured attributes and unstructured reviews to dynamically assess knowledge utility and distill intent cues. In Stage II, response generation is explicitly decoupled from knowledge utilization, enabling focused, high-fidelity text generation. Our approach is the first to incorporate unstructured reviews into multimodal TOD knowledge fusion and introduces an interpretable knowledge-selection mechanism. Extensive experiments on a public benchmark demonstrate significant improvements over state-of-the-art methods. Code and model parameters are publicly released.
📝 Abstract
Textual response generation is pivotal for multimodal task-oriented dialog systems, which aim to generate proper textual responses based on the multimodal context. While existing efforts have demonstrated remarkable progress, they still face the following limitations: 1) *neglect of unstructured review knowledge* and 2) *underutilization of large language models (LLMs)*. Inspired by this, we aim to fully utilize dual knowledge (*i.e.,* structured attribute and unstructured review knowledge) with LLMs to promote textual response generation in multimodal task-oriented dialog systems. However, this task is non-trivial due to two key challenges: 1) *dynamic knowledge type selection* and 2) *intention-response decoupling*. To address these challenges, we propose a novel dual knowledge-enhanced two-stage reasoner that adapts LLMs for multimodal dialog systems (named DK2R). To be specific, DK2R first extracts both structured attribute and unstructured review knowledge from an external knowledge base given the dialog context. Thereafter, DK2R uses an LLM to evaluate each knowledge type's utility by analyzing LLM-generated provisional probe responses. Moreover, DK2R separately summarizes the intention-oriented key clues via dedicated reasoning, which are further used as auxiliary signals to enhance LLM-based textual response generation. Extensive experiments conducted on a public dataset verify the superiority of DK2R. We have released our code and model parameters.
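The two-stage flow described above (retrieve dual knowledge, probe each knowledge type's utility with provisional LLM responses, then generate conditioned on the selected knowledge and intent clues) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: all function names, the knowledge-base layout, and the toy overlap-based utility score are assumptions introduced here, and the `llm` callable stands in for any LLM interface.

```python
# Illustrative sketch of a DK2R-style two-stage pipeline.
# All names and the scoring heuristic are hypothetical.

def retrieve_dual_knowledge(context, kb):
    """Stage I, step 1: fetch structured attributes and unstructured
    reviews for the entity under discussion from an external KB."""
    record = kb.get(context["entity"], {})
    return record.get("attributes", {}), record.get("reviews", [])

def probe_knowledge_utility(llm, context, attributes, reviews):
    """Stage I, step 2: generate a provisional probe response per
    knowledge type and score its utility (toy heuristic: word overlap
    between the probe response and the dialog history)."""
    scores = {}
    for name, knowledge in (("attribute", attributes), ("review", reviews)):
        probe = llm(
            f"Context: {context['history']}\n"
            f"Knowledge: {knowledge}\n"
            f"Draft a reply."
        )
        history_words = set(context["history"].lower().split())
        scores[name] = len(set(probe.lower().split()) & history_words)
    return max(scores, key=scores.get)  # knowledge type judged most useful

def generate_response(llm, context, selected_knowledge, intent_clues):
    """Stage II: response generation decoupled from knowledge selection,
    conditioned on the chosen knowledge and summarized intent clues."""
    return llm(
        f"Intent: {intent_clues}\n"
        f"Knowledge: {selected_knowledge}\n"
        f"Reply to: {context['history']}"
    )
```

Separating the utility probe from final generation mirrors the paper's intention-response decoupling: the generator no longer has to arbitrate between knowledge types while producing text.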