AI Summary
This study systematically evaluates the gap between the practical comprehension capabilities and theoretical context capacity of large language models under extremely long contexts (up to 70K tokens). By constructing a multitask evaluation framework encompassing depression detection from 20K social media posts, recipe retrieval, and mathematical reasoning, the authors conduct comparative experiments on Grok-4, GPT-4, Gemini 2.5, and GPT-5. The findings reveal that when processing more than 5K posts, all models except GPT-5 experience a sharp drop in accuracy to 50–53%, whereas GPT-5 maintains high precision at approximately 95%. Moreover, the "lost in the middle" phenomenon is largely mitigated in newer models. These results underscore the necessity of employing multidimensional metrics to assess the true performance of long-context models in high-information-density scenarios.
Abstract
With the significant expansion of the context window in Large Language Models (LLMs), these models are theoretically capable of processing millions of tokens in a single pass. However, research indicates a significant gap between this theoretical capacity and the practical ability of models to robustly utilize information within long contexts, especially in tasks that require a comprehensive understanding of numerous details. This paper evaluates the performance of four state-of-the-art models (Grok-4, GPT-4, Gemini 2.5, and GPT-5) on long-context tasks. For this purpose, three datasets were used: two supplementary datasets for retrieving culinary recipes and math problems, and a primary dataset of 20K social media posts for depression detection. The results show that as the input volume on the social media dataset exceeds 5K posts (70K tokens), the performance of all models degrades significantly, with accuracy dropping to around 50–53% for 20K posts. Notably, for the GPT-5 model, despite the sharp decline in accuracy, precision remained high at approximately 95%, a property that could be highly valuable for sensitive applications like depression detection. This research also indicates that the "lost in the middle" problem has been largely resolved in newer models. This study emphasizes the gap between the theoretical capacity and the actual performance of models on complex, high-volume data tasks and highlights the importance of metrics beyond simple accuracy for practical applications.
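The abstract's point that accuracy and precision can diverge sharply can be illustrated with a small sketch. The confusion-matrix counts below are hypothetical (chosen only to reproduce the reported pattern of ~53% accuracy alongside ~95% precision, not taken from the paper's data): a model that predicts the positive class rarely but almost always correctly keeps precision high even when overall accuracy falls toward chance.

```python
# Illustrative only: hypothetical confusion-matrix counts, NOT the paper's data.
# tp = true positives, fp = false positives, tn = true negatives, fn = false negatives.

def accuracy(tp: int, fp: int, tn: int, fn: int) -> float:
    """Fraction of all predictions that are correct."""
    return (tp + tn) / (tp + fp + tn + fn)

def precision(tp: int, fp: int) -> float:
    """Fraction of positive predictions that are correct."""
    return tp / (tp + fp)

# Hypothetical scenario over 1000 cases: the model flags only 100 as positive,
# and 95 of those flags are correct, while it misses 465 true positives.
tp, fp, tn, fn = 95, 5, 435, 465

print(f"accuracy:  {accuracy(tp, fp, tn, fn):.2f}")   # near chance
print(f"precision: {precision(tp, fp):.2f}")          # still high
```

For a screening task like depression detection, this asymmetry matters: a high-precision model produces few false alarms among the cases it does flag, even if it misses many others.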