Numerical methods for Sylvester-type matrix equations and nonlinear eigenvalue problems
Time: Wed 2021-05-12 10:00
Subject area: Applied and Computational Mathematics, Optimization and Systems Theory, Numerical Analysis
Doctoral student: Emil Ringh, Optimization and Systems Theory, SeRC - Swedish e-Science Research Centre
Opponent: Professor Daniel Kressner, École polytechnique fédérale de Lausanne
Supervisor: Associate Professor Elias Jarlebring, Numerical Analysis, NA, SeRC - Swedish e-Science Research Centre; Associate Professor Johan Karlsson, Optimization and Systems Theory, Strategic Centre for Industrial and Applied Mathematics, CIAM; Associate Professor Per Enqvist, Optimization and Systems Theory
Linear matrix equations and nonlinear eigenvalue problems (NEPs) appear in a wide variety of applications in science and engineering. Important special cases of the former are the Lyapunov equation, the Sylvester equation, and their respective generalizations. These appear, e.g., as Gramians of linear and bilinear systems, in computations involving block-triangularization of matrices, and in connection with discretizations of some partial differential equations. NEPs appear, e.g., in the stability analysis of time-delay systems, and as the result of transformations of linear eigenvalue problems.
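For reference, the matrix equations mentioned above are commonly written as follows. These are standard textbook forms; the precise generalized forms treated in the thesis may differ in notation:

```latex
% Lyapunov equation (A, C given; X unknown):
AX + XA^{T} + C = 0
% Sylvester equation:
AX + XB = C
% A common form of the generalized Sylvester equation:
\sum_{i=1}^{k} A_i X B_i = C
% Generalized Lyapunov equation, arising from bilinear systems:
AX + XA^{T} + \sum_{i=1}^{m} N_i X N_i^{T} + BB^{T} = 0
```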
This thesis mainly consists of four papers that treat the above-mentioned computational problems and present both theory and methods. In Paper A we consider a NEP stemming from the discretization of a partial differential equation describing wave propagation in a waveguide. Some NEP methods require, in each iteration, the solution of a linear system with a fixed matrix but different right-hand sides, and with a fine discretization this linear solve becomes the bottleneck. To overcome this we present a Sylvester-based preconditioner, exploiting the Sherman–Morrison–Woodbury formula.
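To illustrate the Sherman–Morrison–Woodbury identity in isolation (this is a generic NumPy sketch of the formula itself, not the actual preconditioner construction of Paper A): solving with a rank-k update of A only requires solves with A plus a small k-by-k system.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 2

A = np.eye(n) + 0.1 * rng.standard_normal((n, n))  # well-conditioned base matrix
U = rng.standard_normal((n, k))                    # rank-k update factors
V = rng.standard_normal((k, n))
b = rng.standard_normal(n)

# Direct solve with the rank-k updated matrix (A + U V)
x_direct = np.linalg.solve(A + U @ V, b)

# Sherman–Morrison–Woodbury:
#   (A + U V)^{-1} b = A^{-1} b - A^{-1} U (I_k + V A^{-1} U)^{-1} V A^{-1} b
Ainv_b = np.linalg.solve(A, b)
Ainv_U = np.linalg.solve(A, U)
small = np.eye(k) + V @ Ainv_U   # k-by-k "capacitance" matrix
x_smw = Ainv_b - Ainv_U @ np.linalg.solve(small, V @ Ainv_b)

assert np.allclose(x_direct, x_smw)
```

The payoff is that when solves with A are cheap (e.g., via a fast Sylvester solver), the updated system costs only a few extra A-solves and one small dense solve.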
Paper B treats the generalized Sylvester equation and presents two main results. First, a characterization that, under certain assumptions, motivates the existence of low-rank solutions. Second, a Krylov method applicable when the matrix coefficients are low-rank commuting, i.e., when the commutator is of low rank.
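For small problems, the (non-generalized) Sylvester equation AX + XB = C can be solved directly by vectorization, using the identity vec(AX + XB) = (I ⊗ A + Bᵀ ⊗ I) vec(X) with column-major vec. This dense Kronecker approach scales as O(n³m³) and is exactly what the Krylov methods in the thesis are designed to avoid at large scale; the sketch below only illustrates the equation:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((m, m))
C = rng.standard_normal((n, m))

# vec(AX + XB) = (I_m ⊗ A + B^T ⊗ I_n) vec(X), column-major vec
K = np.kron(np.eye(m), A) + np.kron(B.T, np.eye(n))
x = np.linalg.solve(K, C.flatten(order="F"))
X = x.reshape((n, m), order="F")

# verify the Sylvester residual
assert np.linalg.norm(A @ X + X @ B - C) < 1e-10
```

A unique solution exists precisely when A and -B have no common eigenvalues, which holds almost surely for the random matrices above.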
In Paper C we study the generalized Lyapunov equation. Specifically, we extend the motivation for applying the alternating linear scheme (ALS) method from the stable Lyapunov equation to the stable generalized Lyapunov equation. Moreover, we show connections to H2-optimal model reduction of associated bilinear systems, and show that ALS can be understood as constructing a rank-1 model-reduction subspace for such a bilinear system related to the residual. We also propose a residual-based generalized rational-Krylov-type subspace as a solver for the generalized Lyapunov equation.
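As with the Sylvester equation, a small generalized Lyapunov equation AX + XAᵀ + NXNᵀ + BBᵀ = 0 (here with a single bilinear term N, for simplicity) can be written in vectorized form and solved densely. This brute-force sketch is only a reference point; iterative methods such as ALS or rational-Krylov subspaces are needed at scale:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = -2.0 * np.eye(n) + 0.3 * rng.standard_normal((n, n))  # stable drift matrix
N = 0.2 * rng.standard_normal((n, n))                     # weak bilinear term
B = rng.standard_normal((n, 1))

# vec form (column-major): (I ⊗ A + A ⊗ I + N ⊗ N) vec(X) = -vec(B B^T)
I = np.eye(n)
L = np.kron(I, A) + np.kron(A, I) + np.kron(N, N)
x = np.linalg.solve(L, -(B @ B.T).flatten(order="F"))
X = x.reshape((n, n), order="F")

# verify the generalized Lyapunov residual
residual = A @ X + X @ A.T + N @ X @ N.T + B @ B.T
assert np.linalg.norm(residual) < 1e-10
```

The solution X plays the role of a (generalized) Gramian of the bilinear system defined by (A, N, B), which is the model-reduction connection exploited in the paper.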
The fourth paper, Paper D, connects the NEP to the two-parameter eigenvalue problem. The latter generalizes the linear eigenvalue problem in the sense that there are two eigenvalue-eigenvector equations, both depending on two scalar variables. If we fix one of the variables, we can use one of the equations, which is then a generalized eigenvalue problem, to solve for the other variable. In that sense, the solved-for variable can be understood as a family of functions of the first variable. Hence, this is a variable-elimination technique in which the second equation can be understood as a family of NEPs. Methods for NEPs can thus be adapted and exploited to solve the original problem. The idea can also be reversed, providing linearizations for certain NEPs.
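The elimination step can be sketched concretely. Take the first equation in the common linear form (A1 + λB1 + μC1)x = 0; fixing λ and assuming C1 is invertible (an assumption of this illustration, not necessarily of the paper), μ is obtained as an ordinary eigenvalue, giving the branches μ₁(λ), …, μₙ(λ) that turn the second equation into a family of NEPs in λ:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
A1 = rng.standard_normal((n, n))
B1 = rng.standard_normal((n, n))
C1 = rng.standard_normal((n, n))

lam = 0.7  # fix the first eigenvalue parameter

# (A1 + lam*B1 + mu*C1) x = 0  <=>  -C1^{-1}(A1 + lam*B1) x = mu x,
# so each fixed lam yields n eigenvalue branches mu_1(lam), ..., mu_n(lam)
M = -np.linalg.solve(C1, A1 + lam * B1)
mus, X = np.linalg.eig(M)

# verify: each (mu, x) pair satisfies the first two-parameter equation
for mu, x in zip(mus, X.T):
    assert np.linalg.norm((A1 + lam * B1 + mu * C1) @ x) < 1e-8
```

Substituting any branch μᵢ(λ) into the second equation (A2 + λB2 + μᵢ(λ)C2)y = 0 leaves a single nonlinear eigenvalue problem in λ, to which NEP methods apply.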