Model-Based Reinforcement Learning for Control of Strongly-Disturbed Unsteady Aerodynamic Flows

๐Ÿ“… 2024-08-26
๐Ÿ›๏ธ arXiv.org
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Reinforcement learning (RL) for unsteady aerodynamic flow control under strong disturbances suffers from prohibitively high training costs when the agent must interact directly with full-scale computational fluid dynamics (CFD) environments. Method: This paper proposes a physics-enhanced model-based RL (MBRL) framework. Its core components are: (i) a physics-augmented autoencoder that compresses high-dimensional CFD flow fields into a three-dimensional latent space; and (ii) a latent dynamics model trained for robust long-horizon prediction of latent trajectories in response to action sequences. Contribution/Results: To the authors' knowledge, this is the first successful application of MBRL to control of strongly disturbed unsteady aerodynamic flows. For a pitch-controlled airfoil subject to gust disturbances, the policy learned entirely in the reduced-order surrogate environment significantly suppresses lift fluctuations and transfers effectively to full-scale CFD simulations, reducing the required training samples by one to two orders of magnitude. The framework offers an efficient, physically interpretable, and transferable approach to intelligent control of high-dimensional unsteady flows.

๐Ÿ“ Abstract
The intrinsic high dimension of fluid dynamics is an inherent challenge to control of aerodynamic flows, and this is further complicated by a flow's nonlinear response to strong disturbances. Deep reinforcement learning, which takes advantage of the exploratory aspects of reinforcement learning (RL) and the rich nonlinearity of a deep neural network, provides a promising approach to discover feasible control strategies. However, the typical model-free approach to reinforcement learning requires a significant amount of interaction between the flow environment and the RL agent during training, and this high training cost impedes its development and application. In this work, we propose a model-based reinforcement learning (MBRL) approach by incorporating a novel reduced-order model as a surrogate for the full environment. The model consists of a physics-augmented autoencoder, which compresses high-dimensional CFD flow field snapshots into a three-dimensional latent space, and a latent dynamics model that is trained to accurately predict the long-time dynamics of trajectories in the latent space in response to action sequences. The accuracy and robustness of the model are demonstrated in the scenario of a pitching airfoil within a highly disturbed environment. Additionally, an application to a vertical-axis wind turbine in a disturbance-free environment is discussed in the Appendix. Based on the model trained in the pitching airfoil problem, we realize an MBRL strategy to mitigate lift variation during gust-airfoil encounters. We demonstrate that the policy learned in the reduced-order environment translates to an effective control strategy in the full CFD environment.
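The surrogate pipeline described in the abstract (encode a CFD snapshot into a three-dimensional latent space, roll the latent state forward under a sequence of actions, decode back to flow fields) can be sketched as below. This is a minimal illustration of the data flow only: the dimensions, the linear maps standing in for the paper's physics-augmented autoencoder and learned latent dynamics model, and all function names are hypothetical, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: flattened CFD snapshot -> 3D latent space
# (the abstract specifies a three-dimensional latent space).
SNAPSHOT_DIM, LATENT_DIM, ACTION_DIM = 1000, 3, 1

# Random linear maps as stand-ins for the trained networks, purely to
# illustrate the shapes and the roles of each component.
W_enc = rng.normal(size=(LATENT_DIM, SNAPSHOT_DIM)) / np.sqrt(SNAPSHOT_DIM)
W_dec = rng.normal(size=(SNAPSHOT_DIM, LATENT_DIM))
A_dyn = 0.95 * np.eye(LATENT_DIM)                  # latent state transition
B_dyn = rng.normal(size=(LATENT_DIM, ACTION_DIM))  # effect of the pitch action

def encode(snapshot):
    """Compress a flow-field snapshot into the latent space."""
    return W_enc @ snapshot

def decode(z):
    """Reconstruct an approximate flow-field snapshot from a latent state."""
    return W_dec @ z

def rollout(z0, actions):
    """Predict a long-horizon latent trajectory for an action sequence,
    replacing expensive CFD steps during policy training."""
    z, traj = z0, []
    for a in actions:
        z = A_dyn @ z + B_dyn @ a
        traj.append(z)
    return np.stack(traj)

snapshot = rng.normal(size=SNAPSHOT_DIM)
z0 = encode(snapshot)
actions = [np.array([0.1])] * 50
traj = rollout(z0, actions)
print(traj.shape)  # (50, 3)
```

The point of the structure is that the RL agent interacts only with `rollout` (cheap latent-space steps) during training, and `decode` is needed only when full flow fields are required, which is what makes the MBRL training loop orders of magnitude cheaper than querying the CFD solver directly.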
Problem

Research questions and friction points this paper is trying to address.

Control of strongly-disturbed unsteady aerodynamic flows
High training cost in model-free reinforcement learning
Lack of an accurate, low-cost surrogate environment for RL training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Model-based reinforcement learning for aerodynamic control
Physics-augmented autoencoder reduces CFD dimensionality
Latent dynamics model predicts long-time flow behavior
๐Ÿ”Ž Similar Papers
No similar papers found.
Zhecheng Liu
University of California, Los Angeles, Los Angeles, California 90095
D. Beckers
California Institute of Technology, Pasadena, California 91125
J. Eldredge
University of California, Los Angeles, Los Angeles, California 90095