Online reinforcement learning via sparse Gaussian mixture model Q-functions

📅 2025-09-17
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
This paper addresses the fundamental trade-off among interpretability, generalization, and computational efficiency in Q-function modeling for online reinforcement learning. Methodologically, it proposes a structured Q-function framework based on Sparse Gaussian Mixture Models (S-GMMs), featuring: (i) Hadamard over-parameterization to enforce controllable sparsity, balancing expressivity and model complexity; (ii) Riemannian gradient updates that respect geometric parameter constraints and enhance generalization; and (iii) streaming-data-driven online policy iteration for real-time adaptation and efficient exploration. Experiments on standard benchmarks demonstrate performance competitive with dense deep RL baselines, while reducing parameter count by over 80%. Crucially, the approach maintains strong generalization and policy interpretability even under stringent parameter budgets. This work establishes a novel paradigm for lightweight, trustworthy online RL.
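The summary's point (i), Hadamard over-parameterization, can be illustrated with a toy regression problem. The idea is to write each weight as an elementwise product w = u ⊙ v; plain L2 weight decay on (u, v) then acts like an L1 penalty on w, driving most entries to zero. This is a minimal sketch of the general technique, not the paper's Q-function setup; the problem sizes, learning rate, and decay strength are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear regression with a sparse ground-truth weight vector
# (hypothetical demo data, not from the paper).
n, d = 200, 20
w_true = np.zeros(d)
w_true[:3] = [2.0, -1.5, 1.0]          # only 3 of 20 weights are active
X = rng.normal(size=(n, d))
y = X @ w_true + 0.01 * rng.normal(size=n)

# Hadamard over-parameterization: w = u * v.  Ordinary L2 decay on (u, v)
# induces an L1-like penalty on w, so irrelevant entries shrink toward 0.
u = rng.normal(scale=0.1, size=d)
v = rng.normal(scale=0.1, size=d)
lr, decay = 0.01, 0.05                  # hypothetical hyperparameters

for _ in range(5000):
    w = u * v
    grad_w = X.T @ (X @ w - y) / n      # gradient of the squared loss w.r.t. w
    grad_u = grad_w * v + decay * u     # chain rule through w = u * v, plus decay
    grad_v = grad_w * u + decay * v
    u -= lr * grad_u
    v -= lr * grad_v

w = u * v
print(np.sum(np.abs(w) > 0.5))          # count of clearly-active weights
```

The same mechanism, applied to the mixture parameters of a GMM Q-function, is what lets the model's effective complexity be regulated without a hard sparsity constraint.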

📝 Abstract
This paper introduces a structured and interpretable online policy-iteration framework for reinforcement learning (RL), built around the novel class of sparse Gaussian mixture model Q-functions (S-GMM-QFs). Extending earlier work that trained GMM-QFs offline, the proposed framework develops an online scheme that leverages streaming data to encourage exploration. Model complexity is regulated through sparsification by Hadamard overparametrization, which mitigates overfitting while preserving expressiveness. The parameter space of S-GMM-QFs is naturally endowed with a Riemannian manifold structure, allowing for principled parameter updates via online gradient descent on a smooth objective. Numerical tests show that S-GMM-QFs match the performance of dense deep RL (DeepRL) methods on standard benchmarks while using significantly fewer parameters, and maintain strong performance even in low-parameter-count regimes where sparsified DeepRL methods fail to generalize.
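To make the central object concrete, a Gaussian-mixture-model Q-function scores a state-action pair by a weighted sum of Gaussian bumps over the joint (s, a) space. The sketch below shows only this generic functional form; the class name, shapes, and the use of signed mixing weights are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

class GMMQFunction:
    """Q(s, a) as a weighted sum of Gaussian components over (s, a).

    Illustrative sketch: shapes and names are assumptions, not the
    paper's parameterization.
    """

    def __init__(self, means, covs, weights):
        self.means = means      # (K, d) component centers over concat(s, a)
        self.covs = covs        # (K, d, d) component covariances
        self.weights = weights  # (K,) signed mixing weights

    def __call__(self, state, action):
        x = np.concatenate([state, action])
        q = 0.0
        for mu, cov, w in zip(self.means, self.covs, self.weights):
            diff = x - mu
            # Gaussian density evaluated at x, centered at mu.
            z = np.exp(-0.5 * diff @ np.linalg.solve(cov, diff))
            norm = np.sqrt(np.linalg.det(2.0 * np.pi * cov))
            q += w * z / norm
        return q

# Example: K=2 components over a 2-D (state, action) space.
means = np.array([[0.0, 0.0], [2.0, 2.0]])
covs = np.stack([np.eye(2), np.eye(2)])
weights = np.array([1.0, 0.5])
qf = GMMQFunction(means, covs, weights)
print(qf(np.array([0.0]), np.array([0.0])))
```

Because each component is a smooth, localized bump, the learned Q-function stays interpretable: each component can be read as a region of the state-action space with an associated value contribution.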
Problem

Research questions and friction points this paper is trying to address.

Online reinforcement learning with sparse Gaussian mixture Q-functions
Mitigating overfitting through structured sparsification techniques
Maintaining performance with fewer parameters than dense methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Online sparse Gaussian mixture Q-functions
Hadamard sparsification regulates model complexity
Riemannian manifold gradient descent updates
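The third innovation, Riemannian gradient updates, can be sketched for the covariance parameters, which must stay symmetric positive definite (SPD). Under the affine-invariant metric on the SPD manifold, the Riemannian gradient is Σ G Σ (with G the Euclidean gradient), and the exponential-map retraction keeps every iterate SPD. The objective below (trace(Σ) + trace(Σ⁻¹A), minimized at Σ = A^{1/2}) is a hypothetical stand-in chosen because its minimizer is known in closed form; it is not the paper's RL objective.

```python
import numpy as np

def sym_expm(M):
    """Matrix exponential of a symmetric matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.exp(w)) @ V.T

def sym_sqrtm(M):
    """Principal square root of a symmetric positive definite matrix."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(w)) @ V.T

# Hypothetical smooth objective f(Sigma) = tr(Sigma) + tr(Sigma^{-1} A),
# minimized at Sigma = A^{1/2}; stands in for the paper's RL objective.
A = np.diag([4.0, 1.0])
Sigma = np.eye(2)
lr = 0.1

for _ in range(300):
    Sigma_inv = np.linalg.inv(Sigma)
    G = np.eye(2) - Sigma_inv @ A @ Sigma_inv   # Euclidean gradient of f
    rgrad = Sigma @ G @ Sigma                   # Riemannian gradient (affine-invariant metric)
    S = sym_sqrtm(Sigma)
    S_inv = np.linalg.inv(S)
    # Exponential-map retraction: the update never leaves the SPD manifold.
    Sigma = S @ sym_expm(-lr * S_inv @ rgrad @ S_inv) @ S

print(Sigma)  # converges toward sqrtm(A) = diag(2, 1)
```

The point of the geometric update is exactly what the retraction guarantees here: no projection or clipping step is needed to keep the covariance valid, and step sizes behave consistently regardless of how ill-conditioned Σ becomes.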