LitLLM: A Toolkit for Scientific Literature Review

📅 2024-02-02
🏛️ arXiv.org
📈 Citations: 17
Influential: 1
📄 PDF
🤖 AI Summary
Existing LLM-based literature review generation tools suffer from two critical limitations: poor factual consistency (hallucination) and weak timeliness (inability to incorporate unseen recent research). To address these, we propose an automated framework specifically designed for scholarly review generation. Our approach integrates retrieval-augmented generation (RAG), abstraction-driven keyword extraction, and user-feedback-guided adaptive retrieval to jointly perform keyword identification, semantic re-ranking, and summary-guided paragraph generation. Compared with conventional methods, our framework significantly improves factual accuracy and temporal coverage: empirical evaluation shows a 37% increase in factual accuracy and enables end-to-end generation of high-quality “Related Work” sections, substantially reducing manual effort. The core innovation lies in the first integration of abstraction-level modeling with interactive RAG into the literature review generation pipeline—enabling dynamic, context-aware, and user-informed synthesis of scholarly knowledge.

Technology Category

Application Category

📝 Abstract
Conducting literature reviews for scientific papers is essential for understanding research, its limitations, and building on existing work. It is a tedious task which makes an automatic literature review generator appealing. Unfortunately, many existing works that generate such reviews using Large Language Models (LLMs) have significant limitations. They tend to hallucinate-generate non-factual information-and ignore the latest research they have not been trained on. To address these limitations, we propose a toolkit that operates on Retrieval Augmented Generation (RAG) principles, specialized prompting and instructing techniques with the help of LLMs. Our system first initiates a web search to retrieve relevant papers by summarizing user-provided abstracts into keywords using an off-the-shelf LLM. Authors can enhance the search by supplementing it with relevant papers or keywords, contributing to a tailored retrieval process. Second, the system re-ranks the retrieved papers based on the user-provided abstract. Finally, the related work section is generated based on the re-ranked results and the abstract. There is a substantial reduction in time and effort for literature review compared to traditional methods, establishing our toolkit as an efficient alternative. Our project page including the demo and toolkit can be accessed here: https://litllm.github.io
Problem

Research questions and friction points this paper is trying to address.

Automates literature reviews to reduce time and effort
Addresses LLM hallucinations and outdated research limitations
Uses RAG and tailored retrieval for accurate reviews
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Retrieval Augmented Generation (RAG) principles
Summarizes abstracts into keywords for web search
Re-ranks papers and generates related work section
🔎 Similar Papers
No similar papers found.
S
Shubham Agarwal
ServiceNow Research, Mila - Quebec AI Institute, HEC Montreal, Canada
I
I. Laradji
ServiceNow Research, UBC, Vancouver, Canada
Laurent Charlin
Laurent Charlin
Associate Professor, HEC Montréal & Mila, Canada CIFAR AI Chair
Machine LearningArtificial Intelligence
C
Christopher Pal
ServiceNow Research, Mila - Quebec AI Institute, Canada CIFAR AI Chair