🤖 AI Summary
Online matching systems (e.g., kidney exchange, cloud scheduling) suffer from poor adaptability to dynamic environments and from the weak generalization of traditional heuristics. Method: This paper proposes a reinforcement-learning-based framework for coordinating expert policies. It employs an advantage-weighted mechanism to adaptively orchestrate multiple heuristic policies and integrates the Adv2 framework with an actor-critic neural architecture for online policy-weight updates. Contribution/Results: The approach establishes theoretical regret guarantees and, notably, derives the first finite-time bias bound for temporal-difference learning in non-stationary environments. By jointly addressing scalability over large state spaces and decision interpretability, it achieves faster convergence and significantly higher system efficiency than both individual heuristics and standard RL baselines in stochastic matching domains such as kidney exchange.
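The summary mentions a finite-time bias bound for temporal-difference learning under a constant step size. As a point of reference, here is a minimal TD(0) sketch on a hypothetical two-state Markov reward process (the transition matrix `P`, rewards `r`, and step size `alpha` are all illustrative choices, not taken from the paper), showing the constant-step-size update the bound concerns:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2-state Markov reward process, for illustration only.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])   # transition probabilities
r = np.array([1.0, 0.0])     # per-state rewards
gamma = 0.9                  # discount factor
alpha = 0.05                 # constant step size, as in the bias bound

V = np.zeros(2)              # value-function estimate
s = 0
for _ in range(20000):
    s_next = rng.choice(2, p=P[s])
    # TD(0) update with constant step size: with alpha fixed, the
    # iterate fluctuates around the true values with an O(alpha) bias.
    V[s] += alpha * (r[s] + gamma * V[s_next] - V[s])
    s = s_next

# Exact solution of the Bellman equation: V* = (I - gamma * P)^{-1} r.
V_star = np.linalg.solve(np.eye(2) - gamma * P, r)
print(V, V_star)
```

With a constant step size the estimate never converges exactly, which is what makes a finite-time *bias* bound (rather than an asymptotic convergence result) the relevant guarantee.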
📝 Abstract
Online matching problems arise in many complex systems, from cloud services and online marketplaces to organ exchange networks, where timely, principled decisions are critical for maintaining high system performance. Traditional heuristics in these settings are simple and interpretable but typically tailored to specific operating regimes, which can lead to inefficiencies when conditions change. We propose a reinforcement learning (RL) approach that learns to orchestrate a set of such expert policies, leveraging their complementary strengths in a data-driven, adaptive manner. Building on the Adv2 framework (Jonckheere et al., 2024), our method combines expert decisions through advantage-based weight updates and extends naturally to settings where only estimated value functions are available. We establish both expectation and high-probability regret guarantees and derive a novel finite-time bias bound for temporal-difference learning, enabling reliable advantage estimation even under constant step size and non-stationary dynamics. To support scalability, we introduce a neural actor-critic architecture that generalizes across large state spaces while preserving interpretability. Simulations on stochastic matching models, including an organ exchange scenario, show that the orchestrated policy converges faster and yields higher system-level efficiency than both individual experts and conventional RL baselines. Our results highlight how structured, adaptive learning can improve the modeling and management of complex resource allocation and decision-making processes.
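To make the advantage-based orchestration idea concrete, here is a minimal sketch (our own illustration, not the paper's code) of exponential, Hedge-style weight updates over a set of expert policies. The toy experts, the stand-in advantage signal, and the learning rate `eta` are all hypothetical; in the paper the advantages would come from a TD-learned critic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical expert policies: each maps a queue-length state
# to a matching decision (which queue to serve).
experts = [
    lambda s: int(np.argmax(s)),   # greedy: serve the longest queue
    lambda s: 0,                   # static: always serve queue 0
]

def update_weights(weights, advantages, eta=0.5):
    """Advantage-based exponential weight update: experts whose
    actions have higher estimated advantage gain probability mass."""
    w = weights * np.exp(eta * advantages)
    return w / w.sum()

# Toy loop: queue 1 is usually longest, so the greedy expert's
# action tends to carry positive advantage.
weights = np.ones(len(experts)) / len(experts)
for _ in range(50):
    state = rng.poisson([1.0, 3.0])
    actions = [pi(state) for pi in experts]
    # Stand-in advantage estimate: +1 for serving the longest queue,
    # 0 otherwise (a crude proxy for a critic's advantage signal).
    best = int(np.argmax(state))
    advantages = np.array([1.0 if a == best else 0.0 for a in actions])
    weights = update_weights(weights, advantages)

print(weights)  # the greedy expert accumulates most of the weight
```

The multiplicative form is what underlies the regret guarantees for exponential-weights algorithms: experts with persistently higher advantage dominate the mixture at a rate controlled by `eta`.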