CRABS: A syntactic-semantic pincer strategy for bounding LLM interpretation of Python notebooks

📅 2025-07-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Data science notebooks pose challenges for accurate static modeling of information flow and execution dependencies without actual code execution. Method: This paper proposes a syntax–semantics co-parsing framework that combines lightweight structural analysis of abstract syntax trees (ASTs) with zero-shot large language model (LLM) capabilities to identify inter-cell input/output relationships. It introduces a "syntax–semantics pincer strategy" that uses syntactic upper and lower bounds to tightly constrain LLM hallucination and mitigate long-context modeling difficulties. Results: Evaluated on 50 real-world Kaggle notebooks, the method achieves 98% F1 for information flow identification, 99% F1 for execution dependency inference, and 98% accuracy in resolving syntactic ambiguities. To the authors' knowledge, this is the first approach enabling high-precision, interpretable, execution-free static semantic understanding of notebooks.

📝 Abstract
Recognizing the information flows and operations comprising data science and machine learning Python notebooks is critical for evaluating, reusing, and adapting notebooks for new tasks. Investigating a notebook via re-execution often is impractical due to the challenges of resolving data and software dependencies. While Large Language Models (LLMs) pre-trained on large codebases have demonstrated effectiveness in understanding code without running it, we observe that they fail to understand some realistic notebooks due to hallucinations and long-context challenges. To address these issues, we propose a notebook understanding task yielding an information flow graph and corresponding cell execution dependency graph for a notebook, and demonstrate the effectiveness of a pincer strategy that uses limited syntactic analysis to assist full comprehension of the notebook using an LLM. Our Capture and Resolve Assisted Bounding Strategy (CRABS) employs shallow syntactic parsing and analysis of the abstract syntax tree (AST) to capture the correct interpretation of a notebook between lower and upper estimates of the inter-cell I/O sets, then uses an LLM to resolve remaining ambiguities via cell-by-cell zero-shot learning, thereby identifying the true data inputs and outputs of each cell. We evaluate and demonstrate the effectiveness of our approach using an annotated dataset of 50 representative, highly up-voted Kaggle notebooks that together represent 3454 actual cell inputs and outputs. The LLM correctly resolves 1397 of 1425 (98%) ambiguities left by analyzing the syntactic structure of these notebooks. Across 50 notebooks, CRABS achieves average F1 scores of 98% identifying cell-to-cell information flows and 99% identifying transitive cell execution dependencies.
Problem

Research questions and friction points this paper is trying to address.

Understanding data flows in Python notebooks without execution
Addressing LLM hallucinations in notebook interpretation
Resolving ambiguities in cell dependencies via syntactic-semantic analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses a syntactic–semantic pincer strategy to bound LLM interpretation
Employs shallow syntactic parsing and AST analysis
Resolves ambiguities via cell-by-cell zero-shot learning
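The shallow syntactic capture step described above can be illustrated with Python's `ast` module. This is a hypothetical sketch, not the paper's implementation: names a cell loads but never assigns are certain inputs (a lower bound), names it assigns plus receivers of method calls (which may or may not mutate their object) form an upper bound on its outputs, and the method-call receivers are the ambiguities left for the LLM to resolve cell by cell.

```python
import ast

def bound_cell_io(cell_source: str):
    """Shallow AST pass bounding one notebook cell's I/O sets (sketch).

    Lower bound on inputs: names loaded but never assigned in the cell,
    so they must flow in from earlier cells. Upper bound on outputs:
    names assigned, plus receivers of method calls, since obj.method(...)
    might mutate obj. The mutator set is the syntactic ambiguity a
    zero-shot LLM query would be asked to resolve.
    """
    loads, stores, mutators = set(), set(), set()
    for node in ast.walk(ast.parse(cell_source)):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Load):
                loads.add(node.id)
            elif isinstance(node.ctx, ast.Store):
                stores.add(node.id)
        elif isinstance(node, ast.Call):
            f = node.func
            if isinstance(f, ast.Attribute) and isinstance(f.value, ast.Name):
                mutators.add(f.value.id)  # obj.method(...): obj may be written
    certain_inputs = loads - stores        # lower-bound inputs
    candidate_outputs = stores | mutators  # upper-bound outputs
    return certain_inputs, candidate_outputs, mutators
```

For a cell containing `df = df.dropna()` and `model.fit(df)`, this sketch reports `model` as a certain input and flags both `df` and `model` as possible outputs; whether `model.fit` actually modifies `model` is exactly the kind of question deferred to the LLM.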
Meng Li
School of Information Sciences, University of Illinois Urbana-Champaign
Timothy M. McPhillips
School of Information Sciences, University of Illinois Urbana-Champaign
Dingmin Wang
Applied Scientist @ AWS AI Lab
Natural Language Processing · LLMs for Code
Shin-Rong Tsai
School of Information Sciences, University of Illinois Urbana-Champaign
Bertram Ludäscher
Professor, School of Information Sciences, University of Illinois at Urbana-Champaign
Scientific data management · workflows · data curation · provenance · knowledge representation