Securing AI Agents in Cyber-Physical Systems: A Survey of Environmental Interactions, Deepfake Threats, and Defenses

📅 2026-01-28
🤖 AI Summary
This work addresses emerging security threats—such as deepfakes, semantic manipulation, and Model Context Protocol (MCP) exploits—that challenge AI agents in cyber-physical systems (CPS), where traditional defenses fall short. To counter these risks, the authors propose SENTINEL, a holistic security framework spanning the AI agent lifecycle. SENTINEL integrates threat modeling, feasibility assessment, defense selection, and continuous validation, while incorporating physical constraints and data provenance mechanisms. Empirical evaluation in a smart grid case study demonstrates that detection alone is insufficient for securing safety-critical CPS; instead, robust protection requires synergistic enforcement of physical laws and trustworthy data lineage to achieve defense-in-depth. This research provides a systematic design methodology for building trustworthy, AI-enabled cyber-physical systems.

📝 Abstract
The increasing integration of AI agents into cyber-physical systems (CPS) introduces new security risks that extend beyond traditional cyber or physical threat models. Recent advances in generative AI enable deepfake and semantic manipulation attacks that can compromise agent perception, reasoning, and interaction with the physical environment, while emerging protocols such as the Model Context Protocol (MCP) further expand the attack surface through dynamic tool use and cross-domain context sharing. This survey provides a comprehensive review of security threats targeting AI agents in CPS, with a particular focus on environmental interactions, deepfake-driven attacks, and MCP-mediated vulnerabilities. We organize the literature using the SENTINEL framework, a lifecycle-aware methodology that integrates threat characterization, feasibility analysis under CPS constraints, defense selection, and continuous validation. Through an end-to-end case study grounded in a real-world smart grid deployment, we quantitatively illustrate how timing, noise, and false-positive costs constrain deployable defenses, and why detection mechanisms alone are insufficient as decision authorities in safety-critical CPS. The survey highlights the role of provenance- and physics-grounded trust mechanisms and defense-in-depth architectures, and outlines open challenges toward trustworthy AI-enabled CPS.
Problem

Research questions and friction points this paper is trying to address.

AI agents
cyber-physical systems
deepfake threats
Model Context Protocol
security risks
Innovation

Methods, ideas, or system contributions that make the work stand out.

SENTINEL framework
AI agent security
deepfake threats
Model Context Protocol (MCP)
cyber-physical systems
Mohsen Hatami
Adjunct Professor, University of Florida
Van Tuan Pham
Dept. of Electrical & Computer Engineering, Binghamton University, Binghamton, NY 13902, USA
Hozefa Lakadawala
Dept. of Electrical & Computer Engineering, Binghamton University, Binghamton, NY 13902, USA
Yu Chen
Professor at Dept. of Electrical & Computer Engr, Binghamton University, SUNY