🤖 AI Summary
Traditional RAG systems struggle to retrieve information from technical documentation containing structured data (e.g., tables and images). To address this, we propose a novel RAG architecture that integrates vector retrieval with a fine-tuned re-ranker. Methodologically, we build a lightweight re-ranker on Gemma-2-9b-it and fine-tune it on a domain-specific technical document corpus using the RAFT (Retrieval-Augmented Fine-Tuning) strategy, substantially improving the handling of structured content and out-of-context queries. Evaluation with the RAGAS and DeepEval frameworks shows that our system achieves 94%/96% faithfulness and 87%/93% answer relevancy, outperforming a baseline RAG pipeline across all metrics. Our core contribution is the first application of a RAFT-trained re-ranking mechanism to RAG over structured technical documentation, enabling high-precision, robust domain-specific question answering.
📝 Abstract
Large Language Models (LLMs) excel at natural language understanding and generation, but they suffer from hallucination and outdated knowledge. Fine-tuning is one possible remedy, but it is resource-intensive and must be repeated with every data update. Retrieval-Augmented Generation (RAG) offers an efficient alternative by allowing LLMs to access external knowledge sources at inference time. However, traditional RAG pipelines struggle to retrieve information from complex technical documents containing structured data such as tables and images. In this work, we propose a RAG pipeline for technical documents that handles tables and images and supports both scanned and searchable formats. Its retrieval process combines vector similarity search with a fine-tuned reranker based on Gemma-2-9b-it. The reranker is trained with RAFT (Retrieval-Augmented Fine-Tuning) on a custom dataset designed to improve context identification for question answering. Our evaluation demonstrates that the proposed pipeline achieves a high faithfulness score of 94% (RAGAS) and 96% (DeepEval), and an answer relevancy score of 87% (RAGAS) and 93% (DeepEval). Comparative analysis demonstrates that the proposed architecture outperforms general RAG pipelines on table-based questions and on questions that fall outside the retrieved context.
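The two-stage retrieval described above can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the cosine similarity over 2-d vectors stands in for a real embedding index, and the lexical-overlap scorer (`overlap_scorer`, a hypothetical name) stands in for the fine-tuned Gemma-2-9b-it reranker.

```python
# Sketch of retrieve-then-rerank: stage 1 narrows candidates by vector
# similarity; stage 2 re-scores the shortlist with a reranker.
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def vector_search(query_vec, corpus, top_k=3):
    # Stage 1: rank all chunks by embedding similarity, keep the top_k.
    ranked = sorted(corpus, key=lambda c: cosine(query_vec, c["vec"]), reverse=True)
    return ranked[:top_k]

def rerank(query, candidates, scorer):
    # Stage 2: a reranker re-scores the shortlist against the raw query text.
    return sorted(candidates, key=lambda c: scorer(query, c["text"]), reverse=True)

# Toy corpus: 2-d "embeddings" stand in for real document-chunk vectors.
corpus = [
    {"text": "Table 3 lists motor torque limits", "vec": [0.9, 0.1]},
    {"text": "Installation requires a 12 V supply", "vec": [0.2, 0.8]},
    {"text": "Torque values are given in Nm", "vec": [0.8, 0.3]},
]

def overlap_scorer(query, text):
    # Hypothetical stand-in for the fine-tuned reranker: word overlap count.
    return len(set(query.lower().split()) & set(text.lower().split()))

candidates = vector_search([1.0, 0.0], corpus, top_k=2)
best = rerank("torque limits table", candidates, overlap_scorer)[0]
print(best["text"])  # → Table 3 lists motor torque limits
```

The design point the sketch mirrors is that the cheap vector stage bounds the work the expensive reranker must do: only the shortlist, not the whole corpus, is scored by the second model.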