🤖 AI Summary
This study addresses multimodal dialogue response retrieval—jointly retrieving text and image responses for dialogue contexts. To overcome limitations of existing approaches in cross-modal alignment and subtask coordination, the work proposes integration methods built on two paradigms: a two-stage pipeline and an end-to-end joint modeling framework, and conducts the first systematic comparison of their performance. It introduces parameter-sharing mechanisms across modalities and across subtasks, integrating contrastive learning and joint optimization within both dual-encoder and end-to-end architectures. Experimental results demonstrate that the end-to-end approach achieves retrieval accuracy comparable to the two-stage method while being architecturally simpler. Moreover, the parameter-sharing strategy improves retrieval accuracy by 3.2% and reduces model parameters by 37%. These findings establish an efficient, lightweight paradigm for multimodal dialogue response retrieval.
📝 Abstract
Multimodal chatbots have become one of the major topics for dialogue systems in both the research community and industry. Recently, researchers have shed light on the multimodality of responses as well as of dialogue contexts. This work explores how a dialogue system can produce responses in various modalities such as text and image. To this end, we first formulate a multimodal dialogue response retrieval task for retrieval-based systems as the combination of three subtasks. We then propose three integration methods based on a two-step approach and an end-to-end approach, and compare the merits and demerits of each method. Experimental results on two datasets demonstrate that the end-to-end approach achieves comparable performance without the intermediate step required by the two-step approach. In addition, a parameter sharing strategy not only reduces the number of parameters but also boosts performance by transferring knowledge across the subtasks and the modalities.
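The dual-encoder retrieval with contrastive learning and cross-modal parameter sharing described above can be illustrated with a minimal sketch. This is a toy illustration, not the paper's implementation: all dimensions, feature inputs, and function names are hypothetical, and a single shared projection stands in for the parameter-sharing idea.

```python
import numpy as np

rng = np.random.default_rng(0)
EMB_DIM = 4  # joint embedding size (illustrative)

# Hypothetical stand-in for parameter sharing across modalities:
# one projection matrix used by both the text- and image-response encoders.
W_shared = rng.normal(size=(8, EMB_DIM))

def encode(features, W=W_shared):
    """Project pre-extracted features into the joint space, L2-normalized."""
    z = features @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

def info_nce_loss(ctx_feats, resp_feats, temperature=0.1):
    """In-batch contrastive loss: matched context/response pairs sit on
    the diagonal of the similarity matrix and act as positives."""
    logits = (encode(ctx_feats) @ encode(resp_feats).T) / temperature
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))

def retrieve(ctx_feat, candidate_feats):
    """Return the index of the highest-scoring candidate response,
    whether its features came from a text or an image encoder."""
    scores = encode(candidate_feats) @ encode(ctx_feat)
    return int(np.argmax(scores))

# Toy usage: one dialogue context against three mixed-modality candidates.
context = rng.normal(size=(8,))
candidates = rng.normal(size=(3, 8))
best = retrieve(context, candidates)
loss = info_nce_loss(rng.normal(size=(4, 8)), rng.normal(size=(4, 8)))
print(best, round(loss, 3))
```

Because both modalities pass through the same projection, gradients from text and image pairs would update the same weights, which is one plausible reading of how sharing transfers knowledge across modalities.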