Antoni Plonczak: Explaining turbulence predictions from deep neural networks -- Finding important features with approximate Shapley values

Time: Fri 2022-09-23 14.00

Location: 3721, Lindstedtsvägen 25

Respondent: Antoni Plonczak

Opponent: Sergi Andreu

Supervisor: Anna Karin Tornberg, Ricardo Vinuesa

Abstract:

Deep-learning models have been shown to produce accurate predictions in various scientific and engineering applications, such as turbulence modelling, by efficiently learning complex nonlinear relations from data. However, deep networks are often black boxes: the learned parameters do not reveal which inputs matter most to a prediction. As a result, it is difficult to judge whether a model takes physically relevant information into account, and little theoretical insight into the phenomenon modelled by the network can be gained.

In this work, methods from the field of explainable AI, based on Shapley-value approximation, are applied to compute feature attributions in previously trained fully convolutional deep neural networks that predict velocity fluctuations in an open-channel turbulent flow from wall quantities. The results show that certain regions of the model inputs are more important to a prediction than others. This is verified by computational experiments: measured by prediction error, the models are more sensitive to perturbations of these regions than to perturbations of randomly selected inputs. The important regions correspond to strongly distinguishable features (visible structures) in the model inputs, and the correlation between high-importance regions and these structures is investigated with a linear regression analysis. The results indicate that certain physical characteristics of the structures are highly correlated with the importance of the individual input features within them.
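To make the attribution idea concrete, below is a minimal sketch of one standard way to approximate Shapley values, namely Monte Carlo sampling over feature orderings, followed by the kind of masking experiment that checks whether high-importance inputs affect the prediction more than randomly chosen ones. Everything here is an illustrative assumption: the toy convolutional model, the 8x8 input, the zero baseline, and the permutation-sampling scheme stand in for the thesis' trained turbulence models and for whichever Shapley approximation it actually uses.

```python
import torch
import torch.nn as nn

# Toy stand-in for a trained fully convolutional predictor. The thesis
# models map wall quantities to velocity-fluctuation fields; this small
# net only illustrates the attribution procedure on a manageable input.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(8, 1, kernel_size=3, padding=1),
)
model.eval()


def shapley_attributions(model, x, baseline, n_samples=50):
    """Monte Carlo approximation of per-pixel Shapley values.

    For each sampled feature ordering, a pixel's marginal contribution is
    the change in the (scalarised) output when that pixel is switched from
    the baseline to its true value, given that the pixels preceding it in
    the ordering have already been switched on. Averaging these marginal
    contributions over orderings approximates the Shapley value.
    """
    n_features = x.numel()
    phi = torch.zeros(n_features)
    x_flat, b_flat = x.reshape(-1), baseline.reshape(-1)
    with torch.no_grad():
        for _ in range(n_samples):
            perm = torch.randperm(n_features)
            current = b_flat.clone()
            prev_out = model(current.reshape(x.shape).unsqueeze(0)).sum()
            for idx in perm:
                current[idx] = x_flat[idx]
                out = model(current.reshape(x.shape).unsqueeze(0)).sum()
                phi[idx] += out - prev_out
                prev_out = out
    return (phi / n_samples).reshape(x.shape)


# Sensitivity check in the spirit of the verification experiments:
# masking the k most important pixels should change the prediction more
# than masking k randomly chosen pixels.
x = torch.randn(1, 8, 8)        # illustrative "wall quantity" input
baseline = torch.zeros_like(x)  # zero baseline, an assumption
phi = shapley_attributions(model, x, baseline)

k = 10
top_idx = phi.abs().reshape(-1).topk(k).indices
rand_idx = torch.randperm(x.numel())[:k]


def masked_error(idx):
    """Mean squared change in the prediction after zeroing pixels `idx`."""
    x_masked = x.reshape(-1).clone()
    x_masked[idx] = 0.0
    with torch.no_grad():
        ref = model(x.unsqueeze(0))
        out = model(x_masked.reshape(x.shape).unsqueeze(0))
    return (out - ref).pow(2).mean().item()


print("error after masking top-k pixels:  ", masked_error(top_idx))
print("error after masking random pixels: ", masked_error(rand_idx))
```

Exact Shapley values require evaluating all feature subsets, which is intractable for image-sized inputs; sampling orderings as above trades exactness for a cost of one forward pass per feature per sampled ordering, which is why practical attribution methods on large inputs work with coarse regions or further approximations.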