Lectures in Mathematics ETH Zürich, Department of Mathematics Research Institute of Mathematics
CONTENTS NOTE
1. Motivation, problem and notation -- 1.1 Motivation -- 1.2 Problem formulation -- 1.3 Usual tools -- 1.4 Notation for polynomial acceleration -- 1.5 Minimal error and minimal residual -- 1.6 Approximation of the solution operator -- 1.7 Location of zeros -- 1.8 Heuristics -- Comments to Chapter 1 -- 2. Spectrum, resolvent and power boundedness -- 2.1 The spectrum -- 2.2 The resolvent -- 2.3 The spectral mapping theorem -- 2.4 Continuity of the spectrum -- 2.5 Equivalent norms -- 2.6 The Yosida approximation -- 2.7 Power bounded operators -- 2.8 Minimal polynomials and algebraic operators -- 2.9 Quasialgebraic operators -- 2.10 Polynomial numerical hull -- Comments to Chapter 2 -- 3. Linear convergence -- 3.1 Preliminaries -- 3.2 Generating functions and asymptotic convergence factors -- 3.3 Optimal reduction factor -- 3.4 Green's function for G∞ -- 3.5 Optimal polynomials for -- 3.6 Simply connected G∞(L) -- 3.7 Stationary recursions -- 3.8 Simple examples -- Comments to Chapter 3 -- 4. Sublinear convergence -- 4.1 Introduction -- 4.2 Convergence of L^k(L-1) -- 4.3 Splitting into invariant subspaces -- 4.4 Uniform convergence -- 4.5 Nonisolated singularity and successive approximation -- 4.6 Nonisolated singularity and polynomial acceleration -- 4.7 Fractional powers of operators -- 4.8 Convergence of iterates -- 4.9 Convergence with speed -- Comments to Chapter 4 -- 5. Superlinear convergence -- 5.1 What is superlinear -- 5.2 Introductory examples -- 5.3 Order and type -- 5.4 Finite termination -- 5.5 Lower and upper bounds for optimal polynomials -- 5.6 Infinite products -- 5.7 Almost algebraic operators -- 5.8 Estimates using singular values -- 5.9 Multiple clusters -- 5.10 Approximation with algebraic operators -- 5.11 Locally superlinear implies superlinear -- Comments to Chapter 5 -- References -- Definitions.
SUMMARY OR ABSTRACT
Assume that after preconditioning we are given a fixed point problem x = Lx + f (*), where L is a bounded linear operator, not assumed to be symmetric, and f is a given vector. The book discusses the convergence of Krylov subspace methods for solving fixed point problems of the form (*), focusing on the dynamical aspects of the iteration processes. For example, there are many similarities between the evolution of a Krylov subspace process and that of a linear operator semigroup, particularly at the beginning of the iteration. The lifespan of an iteration typically starts with a fast but slowing phase. Such behavior is sublinear in nature and is essentially independent of whether the problem is singular or not. Then, for nonsingular problems, the iteration may run at a linear speed before a possible superlinear phase. All these phases rest on different mathematical mechanisms, which the book outlines. The goal is to understand how to precondition effectively, both in the setting of "numerical linear algebra" (where one usually thinks of first fixing a finite-dimensional problem to be solved) and in function spaces, where the "preconditioning" corresponds to software which approximately solves the original problem.
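A minimal sketch (not taken from the book) of the two iteration styles the summary contrasts, under illustrative assumptions: L is a randomly generated, finite-dimensional, nonsymmetric operator rescaled to have spectral radius 0.9. Plain successive approximation x_{k+1} = L x_k + f converges linearly at a rate governed by the spectral radius of L, while a Krylov method (here SciPy's GMRES applied to the equivalent system (I - L)x = f) implicitly builds near-optimal residual polynomials and typically exhibits the faster phases described above.

```python
import numpy as np
from scipy.sparse.linalg import gmres, LinearOperator

rng = np.random.default_rng(0)
n = 200

# Illustrative nonsymmetric L, rescaled so its spectral radius is 0.9 (< 1),
# so that x = Lx + f has a unique fixed point and successive approximation converges.
A = rng.standard_normal((n, n)) / np.sqrt(n)
L = 0.9 * A / np.max(np.abs(np.linalg.eigvals(A)))
f = rng.standard_normal(n)
x_true = np.linalg.solve(np.eye(n) - L, f)

# Successive approximation x_{k+1} = L x_k + f: the error decays like ||L^k||,
# i.e. linearly at a rate governed by the spectral radius of L.
x = np.zeros(n)
for k in range(1, 51):
    x = L @ x + f
    if k % 10 == 0:
        print(f"successive approximation, k={k:2d}: error = {np.linalg.norm(x - x_true):.2e}")

# Krylov (polynomial) acceleration: GMRES applied to (I - L) x = f chooses, at
# each step, the residual polynomial that is optimal over the Krylov subspace.
errs = []
op = LinearOperator((n, n), matvec=lambda v: v - L @ v, dtype=float)
x_gm, info = gmres(op, f, atol=1e-12,
                   callback=lambda xk: errs.append(np.linalg.norm(xk - x_true)),
                   callback_type="x")
print(f"GMRES: {len(errs)} iterations, final error = {errs[-1]:.2e}")
```

Comparing the printed error histories gives a rough picture of the phases discussed in the book: the early, rapidly slowing (sublinear-looking) stage, the linear stage tied to the spectrum of L, and, for favorable spectra, a superlinear tail of the Krylov iteration.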