Multi-Hop Question Generation via Dual-Perspective Keyword Guidance

📅 2025-05-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Multi-hop question generation (MQG) suffers from insufficient keyword guidance and role ambiguity: existing approaches fail to distinguish between intent-indicative keywords (e.g., “compare”, “infer”) and document-content keywords (e.g., entity names, facts), limiting cross-paragraph reasoning. This work proposes a dual-perspective keyword co-modeling framework that explicitly differentiates and jointly models these two keyword types for structured guidance and hard constraints during question generation. Methodologically, we design an extended Transformer encoder and a dual answer-aware decoder—one branch generates intent and content keywords, the other generates the final question—enabling explicit keyword injection and position-aware fusion. Evaluated on multiple MQG benchmarks, our approach significantly outperforms state-of-the-art methods, achieving consistent improvements in question relevance, multi-hop logical coherence, and answer traceability.

Technology Category

Application Category

📝 Abstract
Multi-hop question generation (MQG) aims to generate questions that require synthesizing multiple information snippets from documents to derive target answers. The primary challenge lies in effectively pinpointing crucial information snippets related to question-answer (QA) pairs, typically relying on keywords. However, existing works fail to fully utilize the guiding potential of keywords and neglect to differentiate the distinct roles of question-specific and document-specific keywords. To address this, we define dual-perspective keywords (i.e., question and document keywords) and propose a Dual-Perspective Keyword-Guided (DPKG) framework, which seamlessly integrates keywords into the multi-hop question generation process. We argue that question keywords capture the questioner's intent, whereas document keywords reflect the content related to the QA pair. Functionally, question and document keywords work together to pinpoint essential information snippets in the document, with question keywords required to appear in the generated question. The DPKG framework consists of an expanded transformer encoder and two answer-aware transformer decoders for keyword and question generation, respectively. Extensive experiments demonstrate the effectiveness of our work, showcasing its promising performance and underscoring its significant value in the MQG task.
Problem

Research questions and friction points this paper is trying to address.

Effectively pinpointing crucial information snippets for multi-hop question generation
Differentiating question-specific and document-specific keywords' distinct roles
Integrating dual-perspective keywords into the question generation process
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-perspective keywords guide question generation
Expanded transformer encoder enhances information pinpointing
Answer-aware decoders for keyword and question generation
🔎 Similar Papers
No similar papers found.
M
Maodong Li
School of Computer Science and Technology, Soochow University, China
L
Longyin Zhang
Institute for Infocomm Research, A*STAR, Singapore
Fang Kong
Fang Kong
Southern University of Science and Technology, Assistant Professor
multi-armed banditsonline learningreinforcement learning