DSBench: How Far Are Data Science Agents to Becoming Data Science Experts?

📅 2024-09-12
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
Existing data science benchmarks rely on simplified settings and therefore fail to assess the practical capabilities of large language models (LLMs), large vision-language models (LVLMs), and data science agents. Method: We introduce DSBench, a benchmark of 466 data analysis tasks and 74 end-to-end data modeling tasks sourced from Eloquence and Kaggle competitions, incorporating realistic conditions such as long contexts, multimodal task backgrounds, and reasoning over large data files and multi-table structures. Contribution/Results: Evaluations of state-of-the-art LLMs, LVLMs, and agents show they struggle with most tasks: the best agent solves only 34.12% of data analysis tasks and achieves a 34.74% Relative Performance Gap (RPG), quantifying the substantial gap between current data science agents and real-world requirements. DSBench establishes a realistic, reproducible evaluation standard to guide future research.

📝 Abstract
Large Language Models (LLMs) and Large Vision-Language Models (LVLMs) have demonstrated impressive language/vision reasoning abilities, igniting the recent trend of building agents for targeted applications such as shopping assistants or AI software engineers. Recently, many data science benchmarks have been proposed to investigate their performance in the data science domain. However, existing data science benchmarks still fall short when compared to real-world data science applications due to their simplified settings. To bridge this gap, we introduce DSBench, a comprehensive benchmark designed to evaluate data science agents with realistic tasks. This benchmark includes 466 data analysis tasks and 74 data modeling tasks, sourced from Eloquence and Kaggle competitions. DSBench offers a realistic setting by encompassing long contexts, multimodal task backgrounds, reasoning with large data files and multi-table structures, and performing end-to-end data modeling tasks. Our evaluation of state-of-the-art LLMs, LVLMs, and agents shows that they struggle with most tasks, with the best agent solving only 34.12% of data analysis tasks and achieving a 34.74% Relative Performance Gap (RPG). These findings underscore the need for further advancements in developing more practical, intelligent, and autonomous data science agents.
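The abstract reports a 34.74% Relative Performance Gap (RPG) without spelling out the metric. A minimal sketch of one plausible formulation, the normalized shortfall of an agent's score against a reference (e.g. top human) score; the paper's exact definition may differ:

```python
def relative_performance_gap(agent_score: float, baseline_score: float) -> float:
    """Fraction of the reference (baseline) score that the agent fails to reach.

    Hypothetical formulation: RPG = (baseline - agent) / baseline.
    """
    if baseline_score == 0:
        raise ValueError("baseline_score must be nonzero")
    return (baseline_score - agent_score) / baseline_score

# Example with made-up scores: an agent at 0.6526 of a 1.0 baseline
gap = relative_performance_gap(0.6526, 1.0)
print(f"{gap:.2%}")  # prints "34.74%"
```

Under this reading, a 34.74% RPG means the best agent recovers roughly two-thirds of the reference performance on average.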
Problem

Research questions and friction points this paper is trying to address.

Evaluate data science agents on realistic tasks
Bridge the gap between existing benchmarks and real-world data science applications
Assess performance on complex, end-to-end data science tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

DSBench, a comprehensive benchmark for evaluating data science agents
466 realistic data analysis tasks and 74 end-to-end data modeling tasks
Evaluation involving long contexts, multimodal backgrounds, large data files, and multi-table structures