Joint Continual Learning of Local Language Models and Cloud Offloading Decisions with Budget Constraints

📅 2026-01-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges faced by on-device small language models in continual learning, where limited memory and computational resources hinder adaptation to shifting task distributions, often leading to catastrophic forgetting and unstable cloud offloading behavior. To tackle these issues, the authors propose DA-GRPO, a novel approach that integrates a cloud invocation budget constraint directly into a dual-advantage function within a Group Relative Policy Optimization framework. This method jointly optimizes on-device task learning and cloud collaboration decisions without requiring predefined reward functions or external routing modules. Experimental results demonstrate that DA-GRPO significantly improves post-switch accuracy on mathematical reasoning and code generation tasks, effectively mitigates catastrophic forgetting, and achieves stable, efficient edge-cloud collaboration under a fixed budget.

📝 Abstract
Locally deployed Small Language Models (SLMs) must continually support diverse tasks under strict memory and computation constraints, making selective reliance on cloud Large Language Models (LLMs) unavoidable. Regulating cloud assistance during continual learning is challenging, as naive reward-based reinforcement learning often yields unstable offloading behavior and exacerbates catastrophic forgetting as task distributions shift. We propose DA-GRPO, a dual-advantage extension of Group Relative Policy Optimization that incorporates cloud-usage constraints directly into advantage computation, avoiding fixed reward shaping and external routing models. This design enables the local model to jointly learn task competence and collaboration behavior, allowing cloud requests to emerge naturally during post-training while respecting a prescribed assistance budget. Experiments on mathematical reasoning and code generation benchmarks show that DA-GRPO improves post-switch accuracy, substantially reduces forgetting, and maintains stable cloud usage compared to prior collaborative and routing-based approaches.
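The abstract describes folding a cloud-usage budget directly into the advantage computation of Group Relative Policy Optimization rather than shaping a fixed reward. The paper's exact formulation is not given here, so the following is only a hypothetical sketch of how such a "dual advantage" might look: a standard group-normalized task advantage plus a second term that penalizes cloud invocations when the group's empirical call rate exceeds the prescribed budget. The function name, the penalty weight `lam`, and the clipped-overuse form are all assumptions, not the authors' method.

```python
import numpy as np

def dual_advantage(task_rewards, cloud_calls, budget_rate, lam=1.0):
    """Hypothetical dual-advantage sketch (not the paper's exact formula).

    task_rewards: per-rollout task rewards for one GRPO group.
    cloud_calls:  per-rollout indicator (1 if the rollout invoked the cloud LLM).
    budget_rate:  prescribed fraction of rollouts allowed to call the cloud.
    """
    task_rewards = np.asarray(task_rewards, dtype=float)
    cloud_calls = np.asarray(cloud_calls, dtype=float)

    # GRPO-style task advantage: normalize rewards within the rollout group.
    a_task = (task_rewards - task_rewards.mean()) / (task_rewards.std() + 1e-8)

    # Budget advantage: when the group's call rate exceeds the budget,
    # rollouts that called the cloud are pushed down relative to those
    # that did not; under budget, this term vanishes.
    overuse = max(cloud_calls.mean() - budget_rate, 0.0)
    a_budget = -lam * overuse * (cloud_calls - cloud_calls.mean())

    return a_task + a_budget
```

Under this sketch, two rollouts with equal task reward are separated only when the group is over budget, which is one plausible way cloud requests could "emerge naturally" during post-training while respecting the constraint.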
Problem

Research questions and friction points this paper is trying to address.

Continual Learning
Cloud Offloading
Budget Constraints
Catastrophic Forgetting
Language Models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Continual Learning
Cloud Offloading
Small Language Models
Budget Constraints
Policy Optimization
Evan Chen
Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN
Wenzhi Fang
Purdue University
LLM Post-Training, Federated Learning
Shiqiang Wang
IBM T. J. Watson Research Center
Agentic AI, Collaborative & Federated AI, LLMs, Machine Learning, Optimization Algorithms
Christopher Brinton
Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN