🤖 AI Summary
To address real-time performance optimization of configurable systems under dynamic workloads, where outdated historical configurations often degrade tuning effectiveness, this paper proposes a lifelong adaptive planning framework. Methodologically, it integrates reinforcement learning–driven online planning, on-demand knowledge distillation (distilled knowledge seeding), configuration similarity modeling, and incremental experience accumulation to dynamically purify and precisely reuse historically validated knowledge. Its key contribution is the first synergistic integration of lifelong planning and knowledge distillation, which explicitly prevents interference from obsolete or misleading configurations. Experimental results demonstrate that, compared with state-of-the-art approaches, the framework improves system performance by up to 229% and accelerates the generation of promising configurations by up to 2.22×, significantly enhancing both real-time responsiveness and tuning quality.
📝 Abstract
Modern configurable systems provide tremendous opportunities for engineering future intelligent software systems. A key difficulty therein is how to effectively self-adapt the configuration of a running system such that its performance (e.g., runtime and throughput) can be optimized under time-varying workloads. This unfortunately remains unaddressed in existing approaches, as they either overlook the available past knowledge or rely on static exploitation of past knowledge without reasoning about the usefulness of that information when planning for self-adaptation. In this paper, we tackle this challenging problem by proposing DLiSA, a framework that self-adapts configurable systems. DLiSA comes with two properties: first, it supports lifelong planning, so that the planning process runs continuously throughout the lifetime of the system, allowing dynamic exploitation of the accumulated knowledge for rapid adaptation; second, planning for a newly emerged workload is boosted via distilled knowledge seeding, in which knowledge is dynamically purified such that only useful past configurations are seeded when necessary, mitigating misleading information. Extensive experiments suggest that DLiSA significantly outperforms state-of-the-art approaches, demonstrating a performance improvement of up to 229% and a resource acceleration of up to 2.22x in generating promising adaptation configurations. All data and source code can be found at our repository: https://github.com/ideas-labo/dlisa.
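To illustrate the general idea behind knowledge seeding, the sketch below selects past configurations whose workloads are most similar to a newly emerged workload and reuses them to seed the initial population of a new planning round. This is a minimal, hypothetical illustration, not DLiSA's actual algorithm: the cosine similarity metric, the workload feature vectors, and the integer configuration encoding are all assumptions made here for demonstration.

```python
import math
import random

def cosine_similarity(a, b):
    """Cosine similarity between two workload feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def seed_population(new_workload, history, pop_size, top_k=2, rng=None):
    """Build an initial population for planning under a new workload.

    history: list of (workload_features, best_configuration) pairs.
    The top_k past configurations whose workloads are most similar to
    the new workload are reused as seeds; the remaining slots are
    filled with random configurations (here: 3 integer options).
    """
    rng = rng or random.Random(0)
    ranked = sorted(history,
                    key=lambda h: cosine_similarity(new_workload, h[0]),
                    reverse=True)
    seeds = [cfg for _, cfg in ranked[:top_k]]
    while len(seeds) < pop_size:
        seeds.append(tuple(rng.randint(0, 9) for _ in range(3)))
    return seeds

history = [
    ((1.0, 0.0), (4, 4, 1)),   # past workload A and its best-known config
    ((0.0, 1.0), (8, 2, 0)),   # past workload B
    ((0.9, 0.1), (5, 3, 1)),   # past workload C, similar to A
]
population = seed_population((1.0, 0.1), history, pop_size=6)
print(population[:2])  # the two seeded configs, from the most similar workloads
```

The key design point this sketch captures is selective reuse: rather than seeding every historical configuration, only those judged relevant to the current workload enter the search, which is what mitigates interference from obsolete or misleading knowledge.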