VeriLLM: A Lightweight Framework for Publicly Verifiable Decentralized Inference

📅 2025-09-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
In decentralized LLM inference, permissionless participation and the absence of prior trust leave outputs unverifiable. Method: We propose the first lightweight decentralized inference verification framework with game-theoretic security guarantees. Our approach features a homogeneous inference-verification network architecture that enables GPU resource reuse; introduces an LLM-optimized lightweight verification algorithm and a public verification protocol under the one-honest-verifier assumption; and designs a peer-prediction-based incentive mechanism that ensures task indistinguishability and makes honest behavior a Nash equilibrium. Contributions/Results: Experiments show verification overhead reduced to ~1% of inference cost, significant end-to-end throughput improvement, and effective expansion of the verifiable node pool. We formally prove the system's security and stability against rational adversaries.

📝 Abstract
Decentralized inference is an appealing paradigm for serving large language models (LLMs), offering strong security, high efficiency, and lower operating costs. Yet the permissionless setting admits no a priori trust in participating nodes, making output verifiability a prerequisite for secure deployment. We present VeriLLM, a publicly verifiable protocol for decentralized LLM inference that (i) achieves security under a one-honest-verifier assumption, (ii) attains near-negligible verification cost (about 1% of the underlying inference) via a lightweight verification algorithm designed explicitly for LLMs, and (iii) enforces honest checking through a peer-prediction mechanism that mitigates lazy verification in naive voting. We further introduce an isomorphic inference-verification network that multiplexes both roles on the same set of GPU workers. This architecture (i) increases GPU utilization and thereby improves end-to-end throughput for both inference and verification, (ii) expands the effective pool of available validators, strengthening robustness and security, and (iii) enforces task indistinguishability at the worker boundary to prevent job-type-conditioned behavior. Finally, we provide a formal game-theoretic analysis and prove that, under our incentives, honest inference and verification constitute a Nash equilibrium, ensuring incentive compatibility against rational adversaries. To our knowledge, this is the first decentralized inference verification protocol with an end-to-end game-theoretic security proof.
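The abstract's central idea, verifying a claimed LLM output at a small fraction of full inference cost, can be illustrated with a toy spot-checking sketch. Everything here is an illustrative assumption, not VeriLLM's actual algorithm: `commit` stands in for whatever commitment the protocol uses, `lightweight_verify` re-checks only `k` sampled token positions, and `recompute_at` stands in for the verifier's own forward pass.

```python
import hashlib
import random

def commit(tokens):
    """Hash-commit to a claimed output (placeholder for the protocol's commitment step)."""
    return hashlib.sha256(",".join(map(str, tokens)).encode()).hexdigest()

def lightweight_verify(claimed, recompute_at, k=4, seed=0):
    """Spot-check k randomly sampled token positions instead of re-running full inference."""
    rng = random.Random(seed)
    positions = rng.sample(range(len(claimed)), k)
    return all(recompute_at(p) == claimed[p] for p in positions)

honest = [3, 1, 4, 1, 5, 9, 2, 6]
oracle = lambda p: honest[p]          # stands in for the verifier recomputing position p
assert lightweight_verify(honest, oracle)

tampered = honest[:]
tampered[5] = 0
assert not lightweight_verify(tampered, oracle, k=len(honest))  # full check catches it
```

With `k < len(claimed)` a single tampered position is caught only when it falls in the sample, so a real protocol would tune the sample size (and add commitments) to make cheating unprofitable in expectation; the point of the sketch is just why verification can cost far less than inference.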
Problem

Research questions and friction points this paper is trying to address.

Ensuring verifiable outputs in permissionless decentralized LLM inference
Reducing verification costs through lightweight algorithms for LLMs
Establishing game-theoretic security against rational adversaries in decentralized systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight verification algorithm for LLMs
Isomorphic inference-verification network architecture
Game-theoretic Nash equilibrium for security
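The Nash-equilibrium claim above can be made concrete with a toy peer-prediction payoff model: a verifier is paid when its report agrees with an independent peer's report, so honest verification beats lazy guessing in expectation. The reward `R`, verification cost `c`, and lazy-guess accuracy `p_guess` are hypothetical numbers for illustration, not values from the paper.

```python
# Toy peer-prediction payoff model (hypothetical parameters, not from the paper).
R = 1.0        # reward paid when two verifiers' reports agree
c = 0.1        # cost of performing honest verification
p_guess = 0.5  # probability a lazy verifier guesses the correct binary report

def expected_payoff(my_honest, peer_honest):
    """Expected payoff of one verifier given both players' strategies."""
    p_me = 1.0 if my_honest else p_guess
    p_peer = 1.0 if peer_honest else p_guess
    # binary reports agree when both are correct or both are wrong
    p_agree = p_me * p_peer + (1 - p_me) * (1 - p_peer)
    return R * p_agree - (c if my_honest else 0.0)

# (honest, honest) is a Nash equilibrium: deviating to lazy guessing pays less
assert expected_payoff(True, True) > expected_payoff(False, True)
```

Under these numbers an honest verifier earns 0.9 in expectation while a lazy deviator earns 0.5, so neither side gains by deviating; the paper's formal analysis establishes the analogous property for its actual incentive mechanism.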
👥 Authors
Ke Wang, Gradient Network
Felix Qu, National University of Singapore
Libin Xia, Peking University (Applied cryptography, Blockchain)
Zishuo Zhao, University of Illinois
Chris Tong, Gradient Network
Lynn Ai, Gradient Network
Eric Yang, AI Scientist, Verily Life Sciences