Robust Probabilistic Model Checking with Continuous Reward Domains

๐Ÿ“… 2025-02-06
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Traditional probabilistic model checking verifies only expected reward values, failing to capture the full quality-of-service profile under heavy-tailed or multimodal reward distributions, and it suffers from limited accuracy and scalability in continuous reward spaces. To address this, we propose a novel DTMC reward distribution approximation method that jointly employs Erlang mixture distributions and moment-generating functions (MGFs), enabling analytical derivation of the complete reward distribution function for both continuous and discrete reward domains—the first integration of these two formalisms. Our approach provides rigorous theoretical error bounds, overcoming the expressiveness and scalability limitations inherent in histogram-based methods. Experimental evaluation demonstrates that, under guaranteed bounded approximation error, the method significantly improves modeling accuracy and verification efficiency for continuous-reward scenarios, achieving both high precision and strong scalability on real-world model checking tasks.

๐Ÿ“ Abstract
Probabilistic model checking traditionally verifies properties on the expected value of a measure of interest. This restriction may fail to capture the quality of service of a significant proportion of a system's runs, especially when the probability distribution of the measure of interest is poorly represented by its expected value due to heavy-tail behaviors or multiple modalities. Recent works inspired by distributional reinforcement learning use discrete histograms to approximate integer reward distribution, but they struggle with continuous reward space and present challenges in balancing accuracy and scalability. We propose a novel method for handling both continuous and discrete reward distributions in Discrete Time Markov Chains using moment matching with Erlang mixtures. By analytically deriving higher-order moments through Moment Generating Functions, our method approximates the reward distribution with theoretically bounded error while preserving the statistical properties of the true distribution. This detailed distributional insight enables the formulation and robust model checking of quality properties based on the entire reward distribution function, rather than restricting to its expected value. We include a theoretical foundation ensuring bounded approximation errors, along with an experimental evaluation demonstrating our method's accuracy and scalability in practical model-checking problems.
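As a rough illustration of the moment-matching idea in the abstract (a minimal sketch, not the paper's algorithm, which fits Erlang *mixtures* with provable error bounds), a single Erlang component can be matched to the first two moments of a reward distribution. An Erlang(k, λ) variable has mean k/λ and variance k/λ², so the shape k and rate λ follow directly from a target mean and variance:

```python
def erlang_moment_match(mean, var):
    """Fit a single Erlang(k, lam) to a target mean and variance.

    Erlang mean = k/lam and variance = k/lam**2, so k ~= mean**2/var
    (rounded to the nearest positive integer) and lam = k/mean.
    Illustrative only: the paper matches higher-order moments, derived
    analytically from the MGF, against a mixture of Erlang components.
    """
    k = max(1, round(mean ** 2 / var))  # integer shape parameter
    lam = k / mean                      # rate recovering the target mean
    return k, lam

# A target with mean 2.0 and variance 1.0 is matched exactly by
# Erlang(4, 2.0): mean 4/2 = 2.0, variance 4/4 = 1.0.
k, lam = erlang_moment_match(2.0, 1.0)
```

Because the shape parameter is rounded to an integer, a single component generally cannot match both moments exactly; mixing several Erlang components, as the paper does, recovers the lost flexibility.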
Problem

Research questions and friction points this paper is trying to address.

Expected-value verification fails to capture quality of service under heavy-tailed or multimodal reward distributions
Histogram-based approximations target integer rewards and struggle with continuous reward spaces
Existing methods trade accuracy against scalability
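Once an Erlang-mixture approximation of the reward distribution is available, distribution-level quality properties such as "the accumulated reward stays below c with probability at least p" reduce to evaluating the mixture CDF. The sketch below assumes a mixture represented as (weight, shape, rate) triples; the function names and representation are illustrative, not from the paper:

```python
import math

def erlang_cdf(x, k, lam):
    # Closed-form Erlang(k, lam) CDF:
    # P(X <= x) = 1 - exp(-lam*x) * sum_{n=0}^{k-1} (lam*x)^n / n!
    partial = sum((lam * x) ** n / math.factorial(n) for n in range(k))
    return 1.0 - math.exp(-lam * x) * partial

def mixture_cdf(x, components):
    # components: iterable of (weight, k, lam); weights sum to 1
    return sum(w * erlang_cdf(x, k, lam) for w, k, lam in components)

def check_threshold_property(components, c, p):
    # Quality property on the full distribution (not just its mean):
    # does P(reward <= c) meet the required probability p?
    return mixture_cdf(c, components) >= p
```

With k = 1 the Erlang CDF degenerates to the exponential CDF, which gives a quick sanity check: for rate 1, P(X ≤ ln 2) = 0.5.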
Innovation

Methods, ideas, or system contributions that make the work stand out.

Moment matching with Erlang mixtures, with higher-order moments derived analytically via Moment Generating Functions
Unified handling of continuous and discrete reward distributions in DTMCs
Theoretically bounded approximation error
๐Ÿ”Ž Similar Papers
No similar papers found.