🤖 AI Summary
This work addresses the lack of effective methods for proactively detecting and diagnosing the root causes of crashes in machine learning notebooks. The authors propose CRANE-LLM, a novel approach that augments large language models (Gemini, Qwen, and GPT-5) with structured runtime information from the notebook kernel state, such as object types and tensor shapes, alongside static code context, to predict and explain cell-level crashes before a target cell is executed. Evaluated on the JunoBench benchmark, adding runtime information improves crash detection and diagnosis by 7-10 percentage points in accuracy and 8-11 points in F1-score, with particularly strong gains on the diagnosis task.
📝 Abstract
Jupyter notebooks are widely used for machine learning (ML) development due to their support for interactive and iterative experimentation. However, ML notebooks are highly prone to bugs, with crashes being among the most disruptive. Despite their practical importance, systematic methods for crash detection and diagnosis in ML notebooks remain largely unexplored. We present CRANE-LLM, a novel approach that augments large language models (LLMs) with structured runtime information extracted from the notebook kernel state to detect and diagnose crashes before executing a target cell. Given previously executed cells and a target cell, CRANE-LLM combines static code context with runtime information, including object types, tensor shapes, and data attributes, to predict whether the target cell will crash (detection) and explain the underlying cause (diagnosis). We evaluate CRANE-LLM on JunoBench, a benchmark of 222 ML notebooks comprising 111 pairs of crashing and corresponding non-crashing notebooks across multiple ML libraries and crash root causes. Across three state-of-the-art LLMs (Gemini, Qwen, and GPT-5), runtime information improves crash detection and diagnosis by 7-10 percentage points in accuracy and 8-11 points in F1-score, with larger gains for diagnosis. Improvements vary across ML libraries, crash causes, and LLMs, and depend on the integration of complementary categories of runtime information.
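To make the idea concrete, the kind of runtime information the abstract describes (object types, tensor shapes, data attributes) could be harvested from a live kernel namespace and serialized into an LLM prompt roughly as follows. This is a hypothetical sketch, not CRANE-LLM's actual implementation; the function names and prompt wording are assumptions for illustration.

```python
# Hypothetical sketch (not CRANE-LLM's implementation): summarize a kernel
# namespace into structured runtime facts and combine them with code context.

def summarize_kernel_state(namespace):
    """Emit one 'name: type, shape, dtype, columns' line per user variable."""
    facts = []
    for name, obj in namespace.items():
        if name.startswith("_"):
            continue  # skip IPython/internal variables
        parts = [f"{name}: {type(obj).__name__}"]
        shape = getattr(obj, "shape", None)  # tensors, arrays, DataFrames
        if shape is not None:
            parts.append(f"shape={tuple(shape)}")
        dtype = getattr(obj, "dtype", None)
        if dtype is not None:
            parts.append(f"dtype={dtype}")
        if hasattr(obj, "columns"):  # DataFrame-like data attributes
            parts.append(f"columns={list(obj.columns)}")
        facts.append(", ".join(parts))
    return "\n".join(facts)


def build_prompt(executed_cells, target_cell, namespace):
    """Combine static code context with runtime facts, as the abstract describes."""
    return (
        "Previously executed cells:\n" + "\n\n".join(executed_cells)
        + "\n\nRuntime state:\n" + summarize_kernel_state(namespace)
        + "\n\nWill the following cell crash? If so, explain the root cause.\n"
        + target_cell
    )
```

In a real notebook kernel, `namespace` would be the interpreter's user namespace (e.g. `get_ipython().user_ns`), so the summary reflects the actual objects present just before the target cell runs.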