Adaptive and Robust Cost-Aware Proof of Quality for Decentralized LLM Inference Networks

📅 2026-01-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of reliably and efficiently rewarding high-quality outputs in decentralized large language model (LLM) inference networks, where heterogeneous latency and cost conditions complicate incentive alignment and where evaluator heterogeneity or malicious scoring can undermine trust. The authors propose an adversarially robust consensus mechanism that strengthens a cost-aware Proof of Quality framework. By combining robust statistical aggregation (median and trimmed mean) with adaptive trust weighting, the method adjusts evaluator weights dynamically based on observed reliability. It further incorporates adversarial strategy simulation and offline evaluation against a proxy ground truth to harden the mechanism. Experiments show that the approach significantly improves consensus alignment with ground truth, reduces sensitivity to noise and strategic attacks, reveals the task-dependent nature of evaluator reliability, and quantifies the trade-off between evaluator sample size and reward stability.
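The aggregation ideas in the summary can be sketched in a few lines. This is an illustrative reconstruction, not the paper's code: the trim ratio, the learning-rate parameter `lr`, and the multiplicative deviation-based update in `update_trust` are assumptions chosen for clarity.

```python
def trimmed_mean(scores, trim_ratio=0.2):
    """Drop the lowest and highest trim_ratio fraction of scores, then average the rest."""
    s = sorted(scores)
    k = int(len(s) * trim_ratio)
    core = s[k:len(s) - k] if len(s) > 2 * k else s
    return sum(core) / len(core)

def update_trust(weights, scores, consensus, lr=0.5):
    """Shrink an evaluator's weight in proportion to its deviation from consensus.
    Illustrative update rule; the paper's exact rule may differ."""
    updated = {}
    for ev, score in scores.items():
        deviation = abs(score - consensus)
        updated[ev] = max(1e-3, weights.get(ev, 1.0) * (1.0 - lr * deviation))
    total = sum(updated.values())          # renormalize so weights sum to 1
    return {ev: w / total for ev, w in updated.items()}

def trust_weighted_consensus(scores, weights):
    """Consensus score as a trust-weighted average of evaluator scores."""
    return sum(weights[ev] * s for ev, s in scores.items())
```

An evaluator that reports 0.1 when the consensus is near 0.8 loses relative weight after one update, while close-to-consensus evaluators keep theirs, which is the basic adaptivity the summary describes.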

📝 Abstract
Decentralized large language model inference networks require lightweight mechanisms to reward high quality outputs under heterogeneous latency and cost. Proof of Quality provides scalable verification by sampling evaluator nodes that score candidate outputs, then aggregating their scores into a consensus signal that determines rewards. However, evaluator heterogeneity and malicious score manipulation can distort consensus and inflate payouts, which weakens incentive alignment in open participation settings. This paper extends a cost-aware Proof of Quality mechanism by adding adversary-resilient consensus formation. We study robust aggregation rules, including median and trimmed mean, and an adaptive trust-weighted consensus that updates evaluator weights from deviation signals. Using question answering and summarization workloads with a ground truth proxy for offline analysis, we quantify evaluator reliability and show strong variance across evaluators, including task-dependent misalignment that can invert correlations. We then evaluate robustness under four adversarial strategies, including noise injection, boosting, sabotage, and intermittent manipulation, across a sweep of malicious ratios and evaluator sample sizes. Our results show that robust aggregation improves consensus alignment with the ground truth proxy and reduces sensitivity to noisy and strategic attacks compared with simple averaging. We further characterize the operational trade-off introduced by evaluator sampling, where larger evaluator sets reduce evaluator rewards and increase payoff variance while inference rewards remain relatively stable in our configuration. These findings motivate robust consensus as a default component for cost-aware Proof of Quality and provide practical guidance for selecting evaluator sampling parameters under adversarial risk and resource constraints.
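The abstract's four adversarial strategies and the averaging-vs-robust-aggregation comparison can be sketched as a small Monte Carlo experiment. This is a hedged toy model, not the paper's evaluation harness: the perturbation magnitudes, the every-third-round intermittent schedule, and the noise levels are all assumptions.

```python
import random
import statistics

def adversarial_score(true_score, strategy, step, rng):
    """Return a manipulated score for one of four attack strategies
    (perturbation magnitudes are illustrative, not taken from the paper)."""
    clamp = lambda x: min(1.0, max(0.0, x))
    if strategy == "noise":         # noise injection: random jitter around the truth
        return clamp(true_score + rng.gauss(0, 0.3))
    if strategy == "boost":         # boosting: inflate quality to raise payouts
        return clamp(true_score + 0.5)
    if strategy == "sabotage":      # sabotage: suppress honest outputs' quality
        return clamp(true_score - 0.5)
    if strategy == "intermittent":  # intermittent manipulation: boost only some rounds
        return clamp(true_score + 0.5) if step % 3 == 0 else true_score
    return true_score

def simulate(strategy, malicious_ratio, n_eval=15, rounds=50, seed=0):
    """Mean absolute consensus error of simple averaging vs. the median,
    averaged over rounds, under a given attack and malicious ratio."""
    rng = random.Random(seed)
    n_bad = int(n_eval * malicious_ratio)
    mean_err = median_err = 0.0
    for t in range(rounds):
        truth = rng.uniform(0.3, 0.9)
        honest = [min(1.0, max(0.0, truth + rng.gauss(0, 0.05)))
                  for _ in range(n_eval - n_bad)]
        attacked = [adversarial_score(truth, strategy, t, rng) for _ in range(n_bad)]
        scores = honest + attacked
        mean_err += abs(sum(scores) / len(scores) - truth)
        median_err += abs(statistics.median(scores) - truth)
    return mean_err / rounds, median_err / rounds
```

With an honest majority (e.g. a 0.3 malicious ratio), the median tracks the honest scores while the simple mean is pulled toward the attackers, which is the qualitative effect the abstract reports for robust aggregation.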
Problem

Research questions and friction points this paper is trying to address.

Proof of Quality
decentralized LLM inference
evaluator heterogeneity
adversarial manipulation
consensus mechanism
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proof of Quality
robust consensus
trust-weighted aggregation
decentralized LLM inference
adversarial resilience
Arther Tian (DGrid AI)
Alex Ding (DGrid AI)
Frank Chen (DGrid AI)
Simon Wu (DGrid AI)
Aaron Chan (Sahara AI)
Machine Learning · Large Language Models · AI Agents · Decentralized AI