Value of Information-based Deceptive Path Planning Under Adversarial Interventions

📅 2025-03-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses deceptive path planning under active adversarial intervention, where an observer deliberately disrupts trajectory execution. Unlike conventional approaches targeting passive observation only, we propose the first Markov Decision Process (MDP) framework explicitly modeling adversarial intervention. Our method introduces a Value-of-Information (VoI)-based deception objective that minimizes information gain to the observer, thereby inducing suboptimal intervention decisions. By integrating VoI theory with linear programming optimization, we synthesize computationally efficient deceptive policies. Experiments in grid-world environments demonstrate significant improvements in deception success rate and robustness over state-of-the-art deceptive path planners and conservative path baselines. The core contributions are: (1) the first deception-aware path planning framework supporting explicit modeling of active adversarial intervention; and (2) a novel, optimization-friendly information-hiding criterion grounded in VoI theory.

📝 Abstract
Existing methods for deceptive path planning (DPP) address the problem of designing paths that conceal their true goal from a passive, external observer. Such methods do not apply to problems where the observer has the ability to perform adversarial interventions to impede the path planning agent. In this paper, we propose a novel Markov decision process (MDP)-based model for the DPP problem under adversarial interventions and develop new value of information (VoI) objectives to guide the design of DPP policies. Using the VoI objectives we propose, path planning agents deceive the adversarial observer into choosing suboptimal interventions by selecting trajectories that are of low informational value to the observer. Leveraging connections to the linear programming theory for MDPs, we derive computationally efficient solution methods for synthesizing policies for performing DPP under adversarial interventions. In our experiments, we illustrate the effectiveness of the proposed solution method in achieving deceptiveness under adversarial interventions and demonstrate the superior performance of our approach to both existing DPP methods and conservative path planning approaches on illustrative gridworld problems.
Problem

Research questions and friction points this paper is trying to address.

Deceptive path planning under adversarial observer interventions
Designing paths to mislead adversarial observers effectively
Computationally efficient MDP-based solutions for deceptive planning
Innovation

Methods, ideas, or system contributions that make the work stand out.

MDP-based model for adversarial DPP
VoI objectives for deceptive policies
Linear programming for efficient solutions
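The LP connection mentioned above can be illustrated with the standard occupancy-measure linear program for discounted MDPs. The sketch below is not the paper's exact formulation: the `info_cost` term is a hypothetical per-state-action penalty standing in for a VoI-style information-hiding objective, and the MDP data are random placeholders.

```python
# Sketch: MDP policy synthesis via the occupancy-measure LP, with a
# hypothetical linear "information cost" added to the objective as a
# stand-in for a VoI-based information-hiding term (assumption, not
# the paper's exact objective).
import numpy as np
from scipy.optimize import linprog

n_states, n_actions, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)

# Random transition kernel P[s, a, s'] (rows sum to 1) and rewards.
P = rng.random((n_states, n_actions, n_states))
P /= P.sum(axis=2, keepdims=True)
reward = rng.random((n_states, n_actions))
info_cost = rng.random((n_states, n_actions))  # hypothetical informativeness penalty
lam = 0.5  # trade-off between task reward and information hiding

mu0 = np.full(n_states, 1.0 / n_states)  # uniform initial distribution

# Flow constraints on the occupancy measure x[s, a]:
#   sum_a x[s', a] - gamma * sum_{s, a} P[s, a, s'] * x[s, a] = mu0[s']
A_eq = np.zeros((n_states, n_states * n_actions))
for sp in range(n_states):
    for s in range(n_states):
        for a in range(n_actions):
            A_eq[sp, s * n_actions + a] = float(s == sp) - gamma * P[s, a, sp]
b_eq = mu0

# linprog minimizes, so negate the reward and add the information penalty.
c = (-reward + lam * info_cost).ravel()
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))

# Recover a stationary stochastic policy from the occupancy measure.
x = res.x.reshape(n_states, n_actions)
policy = x / x.sum(axis=1, keepdims=True)
print(policy)
```

Tuning `lam` trades off task performance against how informative the induced trajectories are to the observer; the optimization stays a plain LP, which is the computational appeal of the occupancy-measure view.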