🤖 AI Summary
To address the high uncertainty of human guidance and the low exploration efficiency of reinforcement learning (RL) in automatically generating complex model transformation sequences, this paper proposes an RL framework that integrates uncertain human guidance. The method maps user-defined transformations onto learnable RL primitives, models the state–action space, and dynamically balances the confidence of guidance against its timeliness during policy execution. Its key innovation lies in explicitly incorporating confidence-annotated, fuzzy human feedback into both the reward shaping and policy update mechanisms, establishing a closed-loop human-in-the-loop paradigm for collaborative modeling. Experimental results demonstrate that, under identical computational budgets, the proposed framework accelerates policy convergence by 42% and improves the correctness rate of generated transformation sequences by 31%, while significantly reducing manual intervention overhead and model mis-specification errors. This advances model-driven engineering toward a robust, human–machine collaborative paradigm.
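The summary's central mechanism, folding confidence-annotated human feedback into reward shaping, can be sketched as a simple blending rule. This is a minimal illustration under assumed names (`shaped_reward`, `advice_reward`, `confidence` are hypothetical, not taken from the paper): low-confidence advice barely perturbs the environment reward, while near-certain advice dominates the shaping term.

```python
def shaped_reward(env_reward: float, advice_reward: float,
                  confidence: float) -> float:
    """Blend the environment reward with a human advice signal,
    weighted by the annotator's stated confidence in [0, 1].

    confidence = 0.0 ignores the advice entirely;
    confidence = 1.0 applies the advice at full strength.
    """
    if not 0.0 <= confidence <= 1.0:
        raise ValueError("confidence must lie in [0, 1]")
    return env_reward + confidence * advice_reward


# Uncertain advice (confidence 0.3) nudges the reward only slightly,
# whereas confident advice (0.9) shifts it substantially.
low = shaped_reward(1.0, 2.0, 0.3)   # 1.0 + 0.6 = 1.6
high = shaped_reward(1.0, 2.0, 0.9)  # 1.0 + 1.8 = 2.8
```

The additive form keeps the sketch compatible with standard reward-shaping analyses; the paper's actual mechanism (which also touches the policy update) may differ.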
📝 Abstract
Model-driven engineering problems often require complex model transformations (MTs), i.e., MTs chained into extensive sequences. Pertinent examples of such problems include model synchronization, automated model repair, and design space exploration. Manually developing complex MTs is an error-prone and often infeasible process. Reinforcement learning (RL) is an apt way to alleviate these issues. In RL, an autonomous agent explores the state space through trial and error to identify beneficial sequences of actions, such as MTs. However, RL methods exhibit performance issues in complex problems. In these situations, human guidance can be of high utility. In this paper, we present an approach and technical framework for developing complex MT sequences through RL, guided by potentially uncertain human advice. Our framework allows user-defined MTs to be mapped onto RL primitives, and executes them as RL programs to find optimal MT sequences. Our evaluation shows that human guidance, even if uncertain, substantially improves RL performance and results in more efficient development of complex MTs. Through a trade-off between the certainty and timeliness of human advice, our method takes a step toward RL-driven human-in-the-loop engineering methods.
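The abstract's core idea, mapping user-defined MTs onto RL primitives so that an agent discovers beneficial MT sequences by trial and error, can be sketched with tabular Q-learning. All names here (`ACTIONS`, `q_learning_step`, the example MT names) are illustrative assumptions, not the paper's actual API: each MT becomes a discrete action, and a state is any hashable abstraction of the current model.

```python
# Hypothetical MT names standing in for user-defined transformations.
ACTIONS = ["add_class", "merge_classes", "pull_up_attribute"]


def q_learning_step(Q: dict, state: str, action: str, reward: float,
                    next_state: str, alpha: float = 0.1,
                    gamma: float = 0.9) -> dict:
    """One tabular Q-learning update. After training, the greedy
    policy over Q yields an MT sequence from the start model."""
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return Q


# Applying "add_class" in state s0 earned reward 1.0 and led to s1.
Q = {}
q_learning_step(Q, "s0", "add_class", 1.0, "s1")
# Q[("s0", "add_class")] is now 0.1 (alpha * reward on the first visit).
```

This standard formulation is chosen for brevity; the paper's framework additionally injects (possibly uncertain) human advice into the update, which this plain sketch omits.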