🤖 AI Summary
This work addresses a critical limitation of large language models (LLMs) in formal reasoning: when confronted with locally redefined semantics, such as custom logic gates or operators, they overrely on pretraining priors and exhibit "semantic override" hallucinations, disregarding the temporary definitions provided in the prompt. The study formally defines and quantifies two error types, semantic override and assumption injection, and introduces a micro-benchmark of 30 logical and digital-circuit reasoning tasks. Using verifier-style trap tasks spanning Boolean algebra and operator overloading, the authors evaluate model adherence to local semantic specifications. Experimental results reveal that mainstream LLMs consistently ignore contextual definitions, introduce undeclared assumptions, and omit critical constraints even in simple tasks, highlighting fundamental shortcomings in their capacity for rigorous formal reasoning.
📝 Abstract
Large language models (LLMs) demonstrate strong performance on standard digital logic and Boolean reasoning tasks, yet their reliability under locally redefined semantics remains poorly understood. In many formal settings, such as circuit specifications, examinations, and hardware documentation, operators and components are explicitly redefined within a narrow scope. Correct reasoning in these contexts requires models to temporarily suppress globally learned conventions in favor of prompt-local definitions. In this work, we study a systematic failure mode we term semantic override, in which an LLM reverts to its pretrained default interpretation of operators or gate behavior despite explicit redefinition in the prompt. We also identify a related class of errors, assumption injection, where models commit to unstated hardware semantics when critical details are underspecified, rather than requesting clarification. We introduce a compact micro-benchmark of 30 logic and digital-circuit reasoning tasks designed as verifier-style traps, spanning Boolean algebra, operator overloading, redefined gates, and circuit-level semantics. Evaluating three frontier LLMs, we observe persistent noncompliance with local specifications, confident but incompatible assumptions, and dropped constraints even in elementary settings. Our findings highlight a gap between surface-level correctness and specification-faithful reasoning, motivating evaluation protocols that explicitly test local unlearning and semantic compliance in formal domains.
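To make the verifier-style trap idea concrete, the following minimal sketch shows what such a task might look like. It is an illustration under stated assumptions, not code from the paper: the prompt locally redefines AND to behave as NAND, and a model's answer is classified as a semantic override when it matches the conventional (pretrained) semantics instead of the prompt-local definition. All function names and the scoring scheme are hypothetical.

```python
# Hypothetical verifier-style trap task: the prompt redefines AND so that
# AND(a, b) = 1 unless both inputs are 1 (i.e., the conventional NAND).
# A compliant model must use this local definition; falling back to the
# pretrained meaning of AND is scored as a semantic override.

def local_and(a: int, b: int) -> int:
    """Prompt-local redefinition: behaves as NAND."""
    return 0 if (a and b) else 1

def global_and(a: int, b: int) -> int:
    """Conventional semantics a model may revert to."""
    return 1 if (a and b) else 0

def classify(answer: int, a: int, b: int) -> str:
    """Score a model's answer on input (a, b)."""
    if answer == local_and(a, b):
        return "compliant"            # followed the local definition
    if answer == global_and(a, b):
        return "semantic_override"    # reverted to pretrained semantics
    return "other_error"

# (1, 1) is the trap input: the two semantics disagree there,
# so the answer alone reveals which definition the model used.
print(classify(0, 1, 1))  # → compliant
print(classify(1, 1, 1))  # → semantic_override
```

The design point is that only inputs where the local and global semantics disagree are diagnostic; on all other inputs a correct answer is ambiguous about which definition the model applied.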