🤖 AI Summary
AI-driven autonomous decision-making faces two problems: its reasoning processes are hard to verify, and non-determinism undermines the robustness of verification. Method: This paper proposes a lightweight verifiable computing protocol based on statistical sampling. It formally defines "execution provability" and replaces traditional cryptographic proofs with efficient statistical sampling, preserving verification reliability while drastically reducing overhead. The approach integrates uncertainty modeling, adaptation strategies for non-deterministic environments, and a lightweight verification mechanism to systematically address verification failures caused by stochasticity and environmental perturbations. Contribution/Results: Experiments demonstrate a 10×–100× reduction in verification overhead, enabling high-throughput, low-latency real-time inference verification. The protocol is empirically validated in autonomous-driving and edge-AI decision-making scenarios, confirming its practical efficacy and scalability.
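The core idea of sampling-based verification can be sketched as a spot-check: rather than producing a cryptographic proof for every inference, a verifier re-executes a randomly sampled fraction of decision records and compares results. The sketch below is illustrative only; all names (`sample_verify`, `recompute`, `sample_rate`) are assumptions, not the paper's API, and the paper's actual protocol and sampling analysis are more involved.

```python
import random

def sample_verify(records, recompute, sample_rate=0.05, seed=0):
    """Spot-check a stream of (input, claimed_output) records by
    re-executing a random fraction and comparing outputs.

    Illustrative sketch of statistical-sampling verification; not the
    paper's protocol. Returns (number checked, number of mismatches).
    """
    rng = random.Random(seed)  # fixed seed for a reproducible audit
    checked = mismatches = 0
    for x, claimed in records:
        if rng.random() < sample_rate:  # audit ~sample_rate of records
            checked += 1
            if recompute(x) != claimed:
                mismatches += 1
    return checked, mismatches

# Example: verify claimed outputs of f(x) = x*x, with one corrupted record.
records = [(i, i * i) for i in range(100)]
records[7] = (7, 0)  # a faulty claim the sampler may catch
checked, mismatches = sample_verify(records, lambda x: x * x)
```

Because each record is audited independently with probability `sample_rate`, the expected verification cost is a small constant fraction of full re-execution, which is where the claimed overhead reduction comes from; a persistent cheater is still caught with probability approaching 1 over many records.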
📝 Abstract
Many real-world applications increasingly incorporate automated decision-making, driven by the widespread adoption of ML/AI inference for planning and guidance. This study examines the growing need for verifiable computing in autonomous decision-making. We formalize the verifiable computing problem and introduce a sampling-based protocol that is significantly faster, cheaper, and simpler than existing methods. We further address the challenges posed by non-determinism, proposing a set of strategies for handling common non-deterministic scenarios.
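One common way to cope with non-determinism during re-execution is to replace exact equality with a tolerance-based comparison of the numeric outputs. The snippet below is a minimal sketch of that strategy, assuming vector-valued floating-point outputs; the function name and tolerance are illustrative and not taken from the paper.

```python
def outputs_agree(claimed, recomputed, atol=1e-5):
    """Tolerance-based agreement check for non-deterministic
    floating-point inference outputs (illustrative sketch).

    Two output vectors "agree" if they have the same length and every
    component differs by at most `atol` in absolute value.
    """
    if len(claimed) != len(recomputed):
        return False
    return all(abs(a - b) <= atol for a, b in zip(claimed, recomputed))

# Small floating-point jitter passes; a genuinely different output fails.
outputs_agree([0.90, 0.10], [0.90 + 1e-7, 0.10])  # jitter within atol
outputs_agree([0.90, 0.10], [0.10, 0.90])         # real disagreement
```

Choosing `atol` is a policy decision: it must be loose enough to absorb benign run-to-run variation (e.g., GPU reduction-order noise) yet tight enough that a wrong decision cannot hide inside the tolerance band.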