🤖 AI Summary
This study addresses the low coordination efficiency and poor robustness of engineered biological agents in tissue repair. We propose a biologically inspired multi-agent reinforcement learning (MARL) framework. Methodologically, we introduce, for the first time, the integration of reaction-diffusion modeling, Hebbian-like neurochemical communication, and a tripartite reward mechanism based on gradient sensing, synchronization, and robustness; we additionally incorporate a molecular-signal-driven curriculum learning strategy. Unlike conventional MARL, our paradigm explicitly models molecular diffusion dynamics and biologically plausible synaptic plasticity, enabling emergent behaviors such as dynamic secretion regulation and spatial coordination. In silico experiments demonstrate a 37% improvement in repair efficiency and significantly enhanced robustness against noise and localized damage. This work establishes the first interpretable and scalable control paradigm for cooperative biological agents, bridging synthetic biology and intelligent healthcare.
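The tripartite reward described above could be sketched as a weighted sum of three terms. The function below is a minimal illustrative example, not the paper's actual formulation: the term definitions (cosine gradient alignment, a Kuramoto-style synchronization order parameter, a perturbation-induced performance drop) and the weights are assumptions chosen for concreteness.

```python
import numpy as np

def tripartite_reward(grad_alignment, phases, baseline, perturbed,
                      w_grad=1.0, w_sync=0.5, w_robust=0.5):
    """Hypothetical tripartite reward: gradient sensing + synchronization
    + robustness penalty. Weights and term definitions are illustrative."""
    # Gradient-sensing term: mean alignment of each agent's motion with
    # the local chemical gradient (cosine similarity, in [-1, 1]).
    r_grad = float(np.mean(grad_alignment))
    # Synchronization term: magnitude of the mean phase vector of the
    # agents' signaling oscillations (Kuramoto order parameter, in [0, 1]).
    r_sync = float(np.abs(np.mean(np.exp(1j * np.asarray(phases)))))
    # Robustness term: penalize the performance drop measured between a
    # nominal rollout and a noise/damage-perturbed rollout.
    r_robust = -max(0.0, baseline - perturbed)
    return w_grad * r_grad + w_sync * r_sync + w_robust * r_robust
```

With perfectly aligned agents, identical phases, and no performance drop under perturbation, the reward reduces to `w_grad + w_sync`.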
📝 Abstract
In this paper, we present a multi-agent reinforcement learning (MARL) framework for optimizing tissue repair processes using engineered biological agents. Our approach integrates: (1) stochastic reaction-diffusion systems modeling molecular signaling, (2) neural-like electrochemical communication with Hebbian plasticity, and (3) a biologically informed reward function combining chemical gradient tracking, neural synchronization, and robustness penalties. A curriculum learning scheme guides agents through progressively complex repair scenarios. In silico experiments demonstrate emergent repair strategies, including dynamic secretion control and spatial coordination.
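The stochastic reaction-diffusion component could be simulated with an explicit-Euler update of a signaling-molecule field: diffusion via a discrete Laplacian, first-order degradation, and additive Gaussian fluctuations. This is a minimal sketch under assumed parameter values and boundary conditions, not the paper's implementation.

```python
import numpy as np

def diffuse_step(C, D=0.1, dt=0.1, decay=0.01, noise=0.0, rng=None):
    """One explicit-Euler step of a stochastic reaction-diffusion field.
    C: 2-D concentration grid; D: diffusion coefficient; decay: first-order
    degradation rate; noise: std of additive Gaussian fluctuations.
    (All parameter values are illustrative, not taken from the paper.)"""
    rng = rng or np.random.default_rng(0)
    # 5-point Laplacian with zero-flux (replicated-edge) boundaries.
    P = np.pad(C, 1, mode="edge")
    lap = P[:-2, 1:-1] + P[2:, 1:-1] + P[1:-1, :-2] + P[1:-1, 2:] - 4 * C
    dC = D * lap - decay * C
    if noise > 0:
        dC = dC + noise * rng.standard_normal(C.shape)
    # Concentrations stay non-negative.
    return np.clip(C + dt * dC, 0.0, None)
```

Agents would read local values and finite-difference gradients of `C` as observations, and write their secretions back into the grid as source terms between steps; stability of the explicit scheme requires `D * dt` to be small relative to the grid spacing.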