Time, space and control: deep-learning applications to turbulent flows
Time: Mon 2023-06-12 10.00
Location: F3, Lindstedtsvägen 26 & 28, Stockholm
Language: English
Subject area: Engineering Mechanics
Doctoral student: Luca Guastoni, SeRC - Swedish e-Science Research Centre, Strömningsmekanik och Teknisk Akustik, Linné Flow Center, FLOW
Opponent: Prof. Dr.-Ing. Andrea Beck, Institute of Aerodynamics and Gas Dynamics, University of Stuttgart, Germany
Supervisors: Ricardo Vinuesa, Linné Flow Center, FLOW, SeRC - Swedish e-Science Research Centre, Strömningsmekanik och Teknisk Akustik; Hossein Azizpour, Science for Life Laboratory, SciLifeLab, Robotik, perception och lärande, RPL, SeRC - Swedish e-Science Research Centre; Philipp Schlatter, Linné Flow Center, FLOW, SeRC - Swedish e-Science Research Centre, Strömningsmekanik och Teknisk Akustik, Lehrstuhl für Strömungsmechanik (LSTM), Friedrich-Alexander-Universität (FAU) Erlangen-Nürnberg, Germany
Abstract
In the present thesis, the application of deep learning and deep reinforcement learning to turbulent-flow simulations is investigated. Deep-learning models are trained to perform temporal and spatial predictions, while deep reinforcement learning is applied to a flow-control problem, namely the reduction of drag in an open channel flow.

Long short-term memory (LSTM, Hochreiter & Schmidhuber 1997) networks and Koopman non-linear forcing (KNF) models are optimized to perform temporal predictions in two reduced-order models of turbulence, namely the nine-equation model proposed by Moehlis et al. (2004) and a truncated proper orthogonal decomposition (POD) of a minimal channel flow (Jiménez & Moin 1991). In the first application, both models produce accurate short-term predictions, and the predicted system trajectories are statistically correct. KNF models outperform LSTM networks in short-term predictions, at a much lower training cost. In the second task, only the LSTM networks can be trained successfully, and they predict trajectories that are statistically accurate.

Spatial predictions are performed in two turbulent flows: an open channel flow and a boundary-layer flow. Fully-convolutional networks (FCNs) are used to predict two-dimensional velocity-fluctuation fields at a given wall-normal location from wall measurements (and vice versa). Thanks to their non-linear nature, these models provide better reconstruction performance than optimal linear methods such as extended POD (Borée 2003).

Finally, we show the potential of deep reinforcement learning to discover new control strategies for turbulent flows. By framing the fluid-dynamics problem as a multi-agent reinforcement-learning environment and by training the agents with a location-invariant deep deterministic policy-gradient (DDPG) algorithm, we learn a control strategy that achieves a remarkable 30% drag reduction, improving over existing strategies by about 10 percentage points.
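To illustrate the temporal-prediction task, the sketch below shows a minimal LSTM predictor in PyTorch that maps a window of past mode amplitudes of a nine-mode reduced-order model to the state one step ahead. The layer sizes, window length and random placeholder data are illustrative assumptions, not the configuration or the code used in the thesis.

    import torch
    import torch.nn as nn

    class ROMPredictor(nn.Module):
        """LSTM mapping a window of past mode amplitudes to the next state."""
        def __init__(self, n_modes=9, hidden=64, layers=1):
            super().__init__()
            self.lstm = nn.LSTM(n_modes, hidden, num_layers=layers, batch_first=True)
            self.head = nn.Linear(hidden, n_modes)

        def forward(self, x):               # x: (batch, time, n_modes)
            out, _ = self.lstm(x)
            return self.head(out[:, -1])    # state at the next time step

    # Random placeholder data standing in for time series of the nine-mode model
    x = torch.randn(32, 10, 9)              # 32 windows of 10 past states
    y = torch.randn(32, 9)                  # the corresponding next states
    model = ROMPredictor()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()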
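The wall-based spatial predictions can be pictured with a small fully-convolutional sketch: two-dimensional wall fields (here assumed to be three channels, e.g. the two wall-shear-stress components and the wall pressure) are mapped to a velocity-fluctuation field at one wall-normal plane. The architecture, channel counts and tensor shapes below are illustrative assumptions in PyTorch, not the networks trained in the thesis.

    import torch
    import torch.nn as nn

    def conv_block(c_in, c_out, k=3):
        # convolution + batch norm + ReLU; padding keeps the in-plane resolution
        return nn.Sequential(
            nn.Conv2d(c_in, c_out, k, padding=k // 2),
            nn.BatchNorm2d(c_out),
            nn.ReLU(),
        )

    class WallToFlowFCN(nn.Module):
        """Maps wall measurements to a velocity-fluctuation field at one plane."""
        def __init__(self, c_in=3, c_out=1, width=64):
            super().__init__()
            self.net = nn.Sequential(
                conv_block(c_in, width),
                conv_block(width, width),
                conv_block(width, width),
                nn.Conv2d(width, c_out, kernel_size=3, padding=1),
            )

        def forward(self, wall):            # wall: (batch, 3, nx, nz)
            return self.net(wall)           # fluctuations: (batch, 1, nx, nz)

    wall = torch.randn(4, 3, 192, 192)      # placeholder wall fields
    pred = WallToFlowFCN()(wall)            # same in-plane shape as the input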
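The location-invariant multi-agent idea can be sketched as follows: every actuator (agent) shares the same deterministic policy, so the experience collected at all wall locations can be stacked into a single batch for the policy-gradient update. The observation and action sizes and the single update step below are illustrative assumptions, not the thesis implementation, which in a full DDPG setup would also involve a replay buffer, target networks, a separate critic update and exploration noise.

    import torch
    import torch.nn as nn

    class Actor(nn.Module):
        # deterministic policy: local wall observation -> local actuation
        def __init__(self, obs_dim, act_dim):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                     nn.Linear(128, act_dim), nn.Tanh())
        def forward(self, obs):
            return self.net(obs)

    class Critic(nn.Module):
        # action-value function Q(s, a) for one agent
        def __init__(self, obs_dim, act_dim):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(obs_dim + act_dim, 128), nn.ReLU(),
                                     nn.Linear(128, 1))
        def forward(self, obs, act):
            return self.net(torch.cat([obs, act], dim=-1))

    obs_dim, act_dim, n_agents = 12, 1, 16       # illustrative sizes only
    actor, critic = Actor(obs_dim, act_dim), Critic(obs_dim, act_dim)

    # One deterministic-policy-gradient step: the local observations of all
    # agents are stacked into one batch, so the shared actor is applied
    # identically at every actuator location (location invariance).
    obs = torch.randn(n_agents, obs_dim)         # placeholder local observations
    actor_loss = -critic(obs, actor(obs)).mean() # maximize Q(s, mu(s))
    actor_loss.backward()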