Leveraging Test Driven Development with Large Language Models for Reliable and Verifiable Spreadsheet Code Generation: A Research Framework

📅 2025-10-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) frequently exhibit hallucinations, logical inconsistencies, and syntactic errors when generating spreadsheet formulas or code for high-stakes domains such as financial modelling and scientific computing, severely undermining their reliability and trustworthiness. Method: We propose the first framework that systematically integrates test-driven development (TDD) into LLM-based code generation, enforcing a "test-first" paradigm to provide formal constraints and cognitive guidance. Our approach combines structured prompt engineering, multi-language support (Excel formulas, Python, Rust), and a quantitative evaluation framework. Contribution/Results: Experiments demonstrate significant improvements over baselines in correctness, robustness, and user engagement, particularly for non-programmers. The framework establishes a novel paradigm for high-reliability AI-assisted computation and advances responsible AI deployment in education and professional practice.

📝 Abstract
Large Language Models (LLMs), such as ChatGPT, are increasingly leveraged to generate both traditional software code and spreadsheet logic. Despite their impressive generative capabilities, these models frequently exhibit critical issues such as hallucinations, subtle logical inconsistencies, and syntactic errors; these risks are particularly acute in high-stakes domains like financial modelling and scientific computation, where accuracy and reliability are paramount. This position paper proposes a structured research framework that integrates the proven software engineering practice of Test-Driven Development (TDD) with LLM-driven generation to enhance the correctness and reliability of, and user confidence in, generated outputs. We hypothesise that a "test-first" methodology provides both technical constraints and cognitive scaffolding, guiding LLM outputs towards more accurate, verifiable, and comprehensible solutions. Our framework, applicable across diverse programming contexts, from spreadsheet formula generation to scripting languages such as Python and strongly typed languages like Rust, includes an explicitly outlined experimental design with clearly defined participant groups, evaluation metrics, and illustrative TDD-based prompting examples. By emphasising test-driven thinking, we aim to improve computational thinking, prompt engineering skills, and user engagement, particularly benefiting spreadsheet users who often lack formal programming training yet face serious consequences from logical errors. We invite collaboration to refine and empirically evaluate this approach, ultimately aiming to establish responsible and reliable LLM integration in both educational and professional development practices.
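The test-first workflow described above can be sketched in Python. This is a minimal illustration, not the paper's implementation: the `npv` function stands in for LLM-generated output, and the pre-written tests double as both a specification for the prompt and a verification harness. The function name and test values are assumptions made for this sketch.

```python
def make_tests():
    """Tests authored BEFORE generation; in a TDD-based prompt these
    would be supplied to the LLM as the formal specification."""
    return [
        # (rate, cash_flows, expected_npv)
        (0.0, [100.0, 100.0], 200.0),   # zero rate: plain sum
        (0.1, [110.0], 100.0),          # one period of discounting
        (0.1, [0.0, 121.0], 100.0),     # two periods of discounting
    ]

def npv(rate, cash_flows):
    """Candidate implementation (stand-in for an LLM's output).
    Discounts cash_flows[i] as if received at the end of period i+1."""
    return sum(cf / (1 + rate) ** (i + 1) for i, cf in enumerate(cash_flows))

def passes_all(candidate, tests, tol=1e-9):
    """Accept the candidate only if every pre-written test holds."""
    return all(abs(candidate(rate, cfs) - want) <= tol
               for rate, cfs, want in tests)
```

A generated candidate that fails `passes_all` would be rejected or regenerated, which is the technical constraint the "test-first" paradigm imposes on the model's output.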
Problem

Research questions and friction points this paper is trying to address.

Addressing LLM hallucinations and errors in spreadsheet code generation
Improving reliability of LLM outputs through Test-Driven Development
Enhancing accuracy in financial and scientific computational models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates Test-Driven Development with LLMs
Uses test-first methodology for accurate outputs
Applies framework across diverse programming contexts
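The quantitative evaluation hinted at above could be sketched as a simple pass-rate metric over generated candidates. This is an illustrative assumption about how such a metric might be computed, not the paper's actual evaluation code; `pass_rate` and the toy candidates are hypothetical.

```python
def pass_rate(candidates, tests, tol=1e-9):
    """Fraction of generated candidates that satisfy every pre-written
    test; a simple correctness metric for comparing prompting strategies."""
    if not candidates:
        return 0.0
    passed = 0
    for fn in candidates:
        try:
            ok = all(abs(fn(*args) - want) <= tol for *args, want in tests)
        except Exception:
            ok = False  # a crashing candidate counts as a failure
        passed += ok
    return passed / len(candidates)

# Toy demonstration: one correct and one buggy "generated" doubling function.
tests = [(1.0, 2.0), (3.0, 6.0)]
candidates = [lambda x: x * 2, lambda x: x + 2]
```

Comparing `pass_rate` for baseline prompting against test-first prompting would yield the kind of correctness improvement the framework aims to measure.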