A Note on Non-Composability of Layerwise Approximate Verification for Neural Inference

📅 2026-02-17
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This note investigates whether layer-wise approximation errors in neural network inference compose into a reliable guarantee on the overall output. Through theoretical analysis and explicit counterexamples, the authors show that even when every layer's approximation error stays within its prescribed tolerance bound, the cumulative effect can drive the final output arbitrarily far from the correct result. The construction produces functionally equivalent networks that are highly sensitive to layer-wise perturbations, so that bounded per-layer errors can be adversarially exploited to steer the output. This result refutes the sufficiency of layer-wise verification approaches and challenges the widely held assumption of their general validity in approximate inference.

πŸ“ Abstract
A natural and informal approach to verifiable (or zero-knowledge) ML inference over floating-point data is: "prove that each layer was computed correctly up to tolerance $\delta$; therefore the final output is a reasonable inference result". This short note gives a simple counterexample showing that this inference is false in general: for any neural network, we can construct a functionally equivalent network for which adversarially chosen approximation errors of tolerated magnitude in individual layer computations suffice to steer the final output arbitrarily (within a prescribed bounded range).
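The flavor of the counterexample can be sketched in a few lines. The snippet below is an illustrative construction, not the paper's actual one: it rewrites the identity network as two layers, x → x/c followed by h → c·h, which is functionally equivalent for any c. A per-layer error of magnitude at most delta after the first layer is then amplified by the factor c in the second layer, so the output deviation c·delta can be made arbitrarily large while each layer individually stays "within tolerance". All names here are hypothetical.

```python
def exact_net(x):
    # Original one-layer network: the identity on a scalar input.
    return x

def equivalent_net_with_per_layer_error(x, c, delta):
    # Functionally equivalent two-layer rewrite: x -> x/c -> c*(x/c) = x.
    h = x / c
    # Adversarial per-layer error, within the prescribed tolerance |e| <= delta.
    h_approx = h + delta
    # The second layer scales by c, amplifying the bounded error to c * delta.
    return c * h_approx

x = 1.0
delta = 1e-6  # per-layer tolerance
for c in (1e3, 1e6, 1e9):
    out = equivalent_net_with_per_layer_error(x, c, delta)
    # Output deviation grows like c * delta, unbounded as c increases.
    print(c, out - exact_net(x))
```

Choosing c large enough makes the final deviation exceed any target, which is exactly why per-layer tolerance checks do not compose into an output guarantee.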
Problem

Research questions and friction points this paper is trying to address.

layerwise verification
neural network inference
approximate computation
error propagation
non-composability
Innovation

Methods, ideas, or system contributions that make the work stand out.

layerwise verification
non-composability
approximate inference
adversarial error
neural network verification