🤖 AI Summary
Existing RAG approaches for fine-grained question answering over academic PDFs suffer from a disconnect between neural and symbolic retrieval, and their single-view, layout-agnostic text chunking fails to exploit document structure.
Method: We propose a collaborative multi-view RAG framework featuring: (1) a neural-symbolic dual-path retrieval mechanism enabling dynamic complementarity between semantic and exact matching; (2) a schema-driven, multi-view PDF parsing pipeline that extracts structured content across chapters, tables, and equations to jointly populate a relational database and vector indices; and (3) an LLM agent-guided iterative context collection strategy (sketched below).
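To make the interaction concrete, here is a minimal Python sketch of such an agent loop, under assumed interfaces: `llm.propose_action`, `vector_store.search`, and `db.execute` are hypothetical placeholders, not the actual NeuSym-RAG action space or API (see the linked repository for the real implementation).

```python
# Hypothetical sketch of agent-guided dual-path retrieval.
# All interfaces (Action, llm, vector_store, db) are illustrative
# assumptions, not the released NeuSym-RAG API.
from dataclasses import dataclass, field

@dataclass
class Action:
    kind: str      # "vector_search", "sql_query", or "answer"
    payload: str   # query text, SQL string, or the final answer

@dataclass
class AgentState:
    question: str
    context: list = field(default_factory=list)

def answer_question(question, llm, vector_store, db, max_steps=8):
    """Iteratively gather context from both retrieval paths until
    the LLM judges it sufficient to answer."""
    state = AgentState(question=question)
    for _ in range(max_steps):
        action = llm.propose_action(state)  # LLM picks the next action
        if action.kind == "vector_search":
            # Neural path: semantic matching over multi-view chunks.
            state.context.extend(vector_store.search(action.payload, k=5))
        elif action.kind == "sql_query":
            # Symbolic path: exact matching over the parsed schema
            # (sections, tables, equations stored as relations).
            state.context.extend(db.execute(action.payload))
        elif action.kind == "answer":
            return action.payload           # context deemed sufficient
    # Step budget exhausted: answer with whatever was collected.
    return llm.answer(state)
```

Keeping both retrieval actions inside one loop is what allows the agent to fall back from fuzzy semantic hits to exact schema lookups (and vice versa) while answering a single question.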
Results: Evaluated on three full-PDF QA benchmarks, including AIRQA-REAL, our method significantly outperforms purely vector-based RAG and a range of structured baselines, achieving a +12.6% absolute improvement in answer accuracy and a +37.4% gain in structural awareness.
📝 Abstract
The rapidly growing number of academic papers makes it hard for researchers to efficiently acquire key details. While retrieval-augmented generation (RAG) shows great promise for large language model (LLM) based automated question answering, previous works often isolate neural and symbolic retrieval despite their complementary strengths. Moreover, conventional single-view chunking neglects the rich structure and layout of PDFs, e.g., sections and tables. In this work, we propose NeuSym-RAG, a hybrid neural-symbolic retrieval framework that combines both paradigms in an interactive process. Leveraging multi-view chunking and schema-based parsing, NeuSym-RAG organizes semi-structured PDF content into both a relational database and a vectorstore, enabling LLM agents to iteratively gather context until it suffices to generate answers. Experiments on three full-PDF QA datasets, including a self-annotated one, AIRQA-REAL, show that NeuSym-RAG consistently outperforms both vector-based RAG and various structured baselines, highlighting its capacity to unify both retrieval schemes and utilize multiple views. Code and data are publicly available at https://github.com/X-LANCE/NeuSym-RAG.
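The dual indexing described above can be pictured as one parsed element feeding both stores. The sketch below shows this idea under stated assumptions: `parse_pdf`, `embedder`, and `vector_store` are illustrative placeholders, and the `elements` table is a made-up schema, not the pipeline released with the paper.

```python
# Illustrative multi-view indexing step: each parsed PDF element
# populates both the symbolic store (a SQL row) and the neural store
# (an embedding). parse_pdf / embedder / vector_store are hypothetical.
import sqlite3

def index_paper(pdf_path, embedder, vector_store):
    views = parse_pdf(pdf_path)  # hypothetical: yields typed elements
    con = sqlite3.connect("papers.db")
    con.execute(
        "CREATE TABLE IF NOT EXISTS elements ("
        "paper TEXT, view TEXT, page INTEGER, content TEXT)"
    )
    for elem in views:  # e.g. elem.view in {"section", "table", "equation"}
        # Symbolic side: one relational row per structured element,
        # so exact SQL queries over structure remain possible.
        con.execute(
            "INSERT INTO elements VALUES (?, ?, ?, ?)",
            (pdf_path, elem.view, elem.page, elem.text),
        )
        # Neural side: embed the same element for semantic search.
        vector_store.add(
            embedder.embed(elem.text),
            metadata={"paper": pdf_path, "view": elem.view},
        )
    con.commit()
```

Because both stores are derived from the same schema-driven parse, a retrieved SQL row and a retrieved embedding can refer back to the same underlying element, which is what lets the two retrieval paths complement rather than duplicate each other.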