🤖 AI Summary
Existing temporal question answering (TQA) datasets suffer from limited scale and insufficient coverage of temporal reasoning capabilities, hindering evaluation of complex tasks such as cross-time comparison, temporal aggregation, and multi-hop temporal reasoning. To address this, the authors propose ComplexTempQA, the first large-scale TQA benchmark comprising over 100 million questions, spanning more than two decades and multiple domains of events and entities, and covering attribute, comparison, and counting question types. They introduce a fine-grained taxonomy of temporal reasoning questions and attach structured metadata with precise time-scope annotations, significantly improving evaluability and interpretability. Built from Wikipedia and Wikidata, the dataset construction integrates event extraction, temporal alignment, multi-hop path generation, and metadata annotation. With a scale, diversity, and reasoning complexity far exceeding benchmarks such as HotpotQA, ComplexTempQA establishes a high-quality, large-scale, reasoning-oriented resource for evaluating and training foundation models on temporal QA.
📝 Abstract
We introduce ComplexTempQA, a large-scale dataset of over 100 million question-answer pairs designed to tackle the challenges of temporal question answering. ComplexTempQA significantly surpasses existing benchmarks such as HotpotQA, TORQUE, and TEQUILA in scale and scope. Built from Wikipedia and Wikidata, the dataset covers questions spanning more than two decades and offers an unmatched breadth of topics. We introduce a unique taxonomy that categorizes questions as attribute, comparison, and counting questions, each revolving around events, entities, and time periods. A standout feature of ComplexTempQA is the high complexity of its questions, which demand capabilities such as cross-time comparison, temporal aggregation, and multi-hop reasoning involving temporal event ordering and entity recognition. Additionally, each question is accompanied by detailed metadata, including its specific time scope, enabling comprehensive evaluation and enhancement of the temporal reasoning abilities of large language models. ComplexTempQA serves both as a testing ground for developing sophisticated AI models and as a foundation for advancing research in question answering, information retrieval, and language understanding.
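To make the taxonomy concrete, the sketch below shows what records for the three question categories (attribute, comparison, counting) with per-question time-scope metadata might look like. The field names (`qtype`, `question`, `answer`, `time_scope`) are illustrative assumptions for this sketch, not the dataset's actual schema:

```python
# Hypothetical ComplexTempQA-style records: one per question category.
# Field names and layout are assumptions, not the real dataset schema.
records = [
    {
        # Attribute: asks for a property of an event or entity at a time.
        "qtype": "attribute",
        "question": "Which country won the FIFA World Cup in 1998?",
        "answer": "France",
        "time_scope": ("1998-06-10", "1998-07-12"),
    },
    {
        # Comparison: relates two events/entities across time.
        "qtype": "comparison",
        "question": "Which Summer Olympics took place earlier, "
                    "Athens 2004 or Beijing 2008?",
        "answer": "Athens 2004",
        "time_scope": ("2004-08-13", "2008-08-24"),
    },
    {
        # Counting: aggregates events within a time period.
        "qtype": "counting",
        "question": "How many Summer Olympic Games were held "
                    "between 2000 and 2010?",
        "answer": 3,  # Sydney 2000, Athens 2004, Beijing 2008
        "time_scope": ("2000-01-01", "2010-12-31"),
    },
]

def questions_by_type(recs, qtype):
    """Filter records by question category."""
    return [r for r in recs if r["qtype"] == qtype]
```

A time scope stored per question lets an evaluator restrict a model (or a retrieval corpus) to the relevant period, which is what makes fine-grained temporal evaluation possible.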