🤖 AI Summary
Large language models (LLMs) still struggle to meet the stringent demands of high-stakes applications such as payroll systems, particularly regarding high-precision numerical computation and auditable outputs. This work constructs a synthetic payroll system to systematically evaluate multiple LLMs—including GPT, Claude, Perplexity, Grok, and Gemini—on their ability to interpret payroll rules, respect execution order, and generate cent-level accurate results. The evaluation employs a hierarchical test dataset and diverse prompting strategies, such as schema-guided and reasoning-oriented prompts. The study introduces a compact, reproducible framework that clearly delineates scenarios where LLMs can operate reliably using prompts alone from those necessitating external computation, thereby offering practical guidance for deploying high-assurance LLM-based systems.
📝 Abstract
Large language models are now used daily for writing, search, and analysis, and their natural language understanding continues to improve. However, they remain unreliable at exact numerical calculation and at producing outputs that are straightforward to audit. We study a synthetic payroll system as a focused, high-stakes example and evaluate whether models can understand a payroll schema, apply rules in the right order, and deliver cent-accurate results. Our experiments span a tiered dataset from basic to complex cases, a spectrum of prompts from minimal baselines to schema-guided and reasoning variants, and multiple model families including GPT, Claude, Perplexity, Grok, and Gemini. Results indicate clear regimes where careful prompting is sufficient and regimes where explicit computation is required. The work offers a compact, reproducible framework and practical guidance for deploying LLMs in settings that demand both accuracy and assurance.
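To make the abstract's notion of "applying rules in the right order" and "cent-accurate results" concrete, here is a minimal sketch of what a deterministic payroll reference computation might look like. The rule names, rates, and ordering (gross → tax → deductions) are illustrative assumptions, not the paper's actual schema; the point is that `Decimal` arithmetic with explicit per-step rounding yields the auditable, cent-exact ground truth that an LLM's free-form output would be checked against.

```python
from decimal import Decimal, ROUND_HALF_UP

CENT = Decimal("0.01")

def to_cents(x: Decimal) -> Decimal:
    # Round to the cent; half-up is a common payroll convention (an assumption here).
    return x.quantize(CENT, rounding=ROUND_HALF_UP)

def compute_net_pay(hours: str, rate: str, tax_rate: str, deduction: str):
    # Hypothetical rule order: (1) gross with overtime, (2) tax, (3) flat deduction.
    # Applying these out of order (e.g. deducting before taxing) changes the result,
    # which is exactly the kind of error an execution-order evaluation would catch.
    h, r = Decimal(hours), Decimal(rate)
    regular = min(h, Decimal(40)) * r
    overtime = max(h - Decimal(40), Decimal(0)) * r * Decimal("1.5")
    gross = to_cents(regular + overtime)
    tax = to_cents(gross * Decimal(tax_rate))
    net = to_cents(gross - tax - Decimal(deduction))
    return gross, tax, net

gross, tax, net = compute_net_pay("45", "20.00", "0.12", "50.00")
print(gross, tax, net)  # 950.00 114.00 786.00
```

A reference implementation like this is what lets the evaluation score model outputs at cent-level granularity rather than accepting approximately correct totals.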