Sequential-NIAH: A Needle-In-A-Haystack Benchmark for Extracting Sequential Needles from Long Contexts

📅 2025-04-07
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses a critical limitation of large language models (LLMs): accurately extracting and ordering multiple embedded key facts ("needles") within long contexts. To this end, the authors introduce a long-context benchmark specifically designed for sequential information extraction, supporting context lengths from 8K to 128K tokens. Methodologically, the benchmark combines three needle-generation pipelines (synthetic, real, and open-domain QA) with verifiable chronological and logical ordering constraints, multi-scale sampling, and noise-robustness testing. Experiments across six state-of-the-art LLMs reveal a dual degradation effect: performance declines significantly as both context length and needle count increase. The best-performing model achieves only 63.15% accuracy, whereas a synthetic-data-trained evaluation model attains 99.49% on the synthetic test set, indicating that the automated evaluation is reliable and the benchmark is strongly discriminative.

📝 Abstract
Evaluating the ability of large language models (LLMs) to handle extended contexts is critical, particularly for retrieving information relevant to specific queries embedded within lengthy inputs. We introduce Sequential-NIAH, a benchmark specifically designed to evaluate the capability of LLMs to extract sequential information items (known as needles) from long contexts. The benchmark comprises three types of needle generation pipelines: synthetic, real, and open-domain QA. It includes contexts ranging from 8K to 128K tokens in length, with a dataset of 14,000 samples (2,000 reserved for testing). To facilitate evaluation on this benchmark, we trained a synthetic data-driven evaluation model capable of evaluating answer correctness based on chronological or logical order, achieving an accuracy of 99.49% on synthetic test data. We conducted experiments on six well-known LLMs, revealing that even the best-performing model achieved a maximum accuracy of only 63.15%. Further analysis highlights the growing challenges posed by increasing context lengths and the number of needles, underscoring substantial room for improvement. Additionally, noise robustness experiments validate the reliability of the benchmark, making Sequential-NIAH an important reference for advancing research on long text extraction capabilities of LLMs.
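The sample-construction idea described in the abstract, embedding ordered needles into a long distractor context so that their chronological or logical order is preserved, can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's actual pipeline; the `build_sample` helper and the filler-paragraph strategy are assumptions.

```python
import random

def build_sample(needles, filler_paragraphs, seed=0):
    """Embed ordered 'needles' into filler text at random positions,
    preserving their original (e.g. chronological) order in the haystack."""
    rng = random.Random(seed)
    context = list(filler_paragraphs)
    # Choose ascending insertion points so the needles' relative order survives.
    positions = sorted(rng.sample(range(len(context) + 1), len(needles)))
    for offset, (pos, needle) in enumerate(zip(positions, needles)):
        # Each earlier insertion shifts later indices by one, hence the offset.
        context.insert(pos + offset, needle)
    return "\n\n".join(context)

needles = [
    "Step 1: The server receives the request.",
    "Step 2: The request is validated.",
    "Step 3: A response is returned.",
]
filler = [f"Irrelevant paragraph {i}." for i in range(20)]
sample = build_sample(needles, filler)
```

A real pipeline would additionally control total token length (8K to 128K) and draw needles from the synthetic, real, or open-domain QA sources the paper describes.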
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' ability to retrieve sequential information from long contexts.
Assessing performance of LLMs on extracting needles from 8K-128K token texts.
Benchmarking noise robustness and accuracy in long-context information extraction.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Synthetic data-driven evaluation model
Sequential needle extraction benchmark
Handles contexts up to 128K tokens
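An order-sensitive correctness criterion of the kind the trained evaluation model approximates could be sketched as below. This is a simplified illustrative scorer under the assumption that needles can be matched exactly; the paper's actual evaluator is a trained model, not a string matcher.

```python
def order_correct(extracted, gold):
    """Return True only if every gold needle was extracted and the
    extracted needles appear in the gold (chronological/logical) order."""
    # Keep only gold items, in the order the model produced them;
    # a correct answer must be both complete and correctly ordered.
    filtered = [item for item in extracted if item in gold]
    return filtered == gold
```

For example, `order_correct(["a", "x", "b", "c"], ["a", "b", "c"])` tolerates extra noise, while a reordered or incomplete extraction fails. A trained evaluator generalizes this judgment to paraphrased needles, which exact matching cannot handle.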
Yifei Yu
Tencent YouTu Lab, Shanghai, China
Qian-Wen Zhang
Tencent Technology
Lingfeng Qiao
Tencent YouTu Lab, Shanghai, China
Di Yin
Tencent
Fang Li
Tencent YouTu Lab, Beijing, China
Jie Wang
Tencent YouTu Lab, Beijing, China
Zengxi Chen
Tencent, Shanghai, China
Suncong Zheng
Tencent, Beijing, China
Xiaolong Liang
Tencent, Shanghai, China
Xing Sun
Tencent YouTu Lab