PURE AND APPLIED MATHEMATICS
A Wiley-Interscience Series of Texts, Monographs, and Tracts
Founded by RICHARD COURANT
Editor Emeritus: PETER HILTON
Editors: MYRON B. ALLEN III, DAVID A. COX, HARRY HOCHSTADT, PETER LAX, JOHN TOLAND
A complete list of the titles in this series appears at the end of this volume.
TIME DEPENDENT PROBLEMS AND DIFFERENCE METHODS
BERTIL GUSTAFSSON, Department of Scientific Computing, Uppsala University, Sweden
HEINZ-OTTO KREISS, Department of Mathematics, University of California at Los Angeles
JOSEPH OLIGER, Department of Computer Science, Stanford University

A Wiley-Interscience Publication
JOHN WILEY & SONS, INC.
New York • Chichester • Brisbane • Toronto • Singapore
This text is printed on acid-free paper. Copyright © 1995 by John Wiley & Sons, Inc. All rights reserved. Published simultaneously in Canada. Reproduction or translation of any part of this work beyond that permitted by Section 107 or 108 of the 1976 United States Copyright Act without the permission of the copyright owner is unlawful. Requests for permission or further information should be addressed to the Permissions Department, John Wiley & Sons, Inc., 605 Third Avenue, New York, NY 10158-0012.
Library of Congress Cataloging in Publication Data:
Gustafsson, Bertil, 1939-
Time dependent problems and difference methods / Bertil Gustafsson, Heinz-Otto Kreiss, Joseph Oliger.
p. cm. - (Pure and applied mathematics)
Includes bibliographical references and index.
ISBN 0-471-50734-2 (acid-free)
1. Differential equations, Partial-Numerical solutions. I. Kreiss, Heinz-Otto. II. Oliger, Joseph, 1941- . III. Title. IV. Series: Pure and applied mathematics (John Wiley & Sons: Unnumbered)
QA374.G974 1995  515'.353-dc20  94-44176  CIP
Printed in the United States of America
10 9 8 7 6 5 4 3 2 1
CONTENTS

Preface
Acknowledgments

PART I  PROBLEMS WITH PERIODIC SOLUTIONS

1. Fourier Series and Trigonometric Interpolation
   1.1 Some results from the theory of Fourier series
   1.2 Periodic gridfunctions and difference operators
   1.3 Trigonometric interpolation
   1.4 The S operator for differentiation
   1.5 Generalizations
   Bibliographic Notes

2. Model Equations
   2.1 First-order wave equation, convergence, and stability
   2.2 The leap-frog scheme
   2.3 Implicit methods
   2.4 Truncation error
   2.5 Heat equation
   2.6 Convection-diffusion equation
   2.7 Higher order equations
   2.8 Generalization to several space dimensions
   Bibliographic Notes

3. Higher Order Accuracy
   3.1 Efficiency of higher order accurate difference approximations
   3.2 Fourier method
   Bibliographic Notes

4. Well-Posed Problems
   4.1 Well-posedness
   4.2 Scalar differential equations with constant coefficients in one space dimension
   4.3 First-order systems with constant coefficients in one space dimension
   4.4 Parabolic systems with constant coefficients in one space dimension
   4.5 General systems with constant coefficients
   4.6 Semibounded operators with variable coefficients
   4.7 The solution operator and Duhamel's principle
   4.8 Generalized solutions
   4.9 Well-posedness of nonlinear problems
   Bibliographic Notes

5. Stability and Convergence for Numerical Approximations of Linear and Nonlinear Problems
   5.1 Stability and convergence
   5.2 Stability for approximations with constant coefficients
   5.3 Approximations with variable coefficients: The energy method
   5.4 Splitting methods
   5.5 Stability for nonlinear problems
   Bibliographic Notes

6. Hyperbolic Equations and Numerical Methods
   6.1 Systems with constant coefficients in one space dimension
   6.2 Systems with variable coefficients in one space dimension
   6.3 Systems with constant coefficients in several space dimensions
   6.4 Systems with variable coefficients in several space dimensions
   6.5 Approximations with constant coefficients
   6.6 Approximations with variable coefficients
   6.7 The method of lines
   6.8 The finite-volume method
   6.9 The Fourier method
   Bibliographic Notes

7. Parabolic Equations and Numerical Methods
   7.1 General parabolic systems
   7.2 Stability for difference and Fourier methods
   7.3 Difference approximations in several space dimensions
   Bibliographic Notes

8. Problems with Discontinuous Solutions
   8.1 Difference methods for linear hyperbolic equations
   8.2 Method of characteristics
   8.3 Method of characteristics in several space dimensions
   8.4 Method of characteristics on a regular grid
   8.5 Regularization using viscosity
   8.6 The inviscid Burgers' equation
   8.7 The viscous Burgers' equation and traveling waves
   8.8 Numerical methods for scalar equations based on regularization
   8.9 Regularization for systems of equations
   8.10 High-resolution methods
   Bibliographic Notes

PART II  INITIAL-BOUNDARY-VALUE PROBLEMS

9. The Energy Method for Initial-Boundary-Value Problems
   9.1 Characteristics and boundary conditions for hyperbolic systems in one space dimension
   9.2 Energy estimates for hyperbolic systems in one space dimension
   9.3 Energy estimates for parabolic differential equations in one space dimension
   9.4 Well-posed problems
   9.5 Semibounded operators
   9.6 Quarter-space problems in more than one space dimension
   Bibliographic Notes

10. The Laplace Transform Method for Initial-Boundary-Value Problems
   10.1 Solution of hyperbolic systems
   10.2 Solution of parabolic problems
   10.3 Generalized well-posedness
   10.4 Systems with constant coefficients in one space dimension
   10.5 Hyperbolic systems with constant coefficients in several space dimensions
   10.6 Parabolic systems in more than one space dimension
   10.7 Systems with variable coefficients in general domains
   Bibliographic Notes

11. The Energy Method for Difference Approximations
   11.1 Hyperbolic problems
   11.2 Parabolic differential equations
   11.3 Stability, consistency, and order of accuracy
   11.4 Higher order approximations
   11.5 Several space dimensions
   Bibliographic Notes
   Appendix 11

12. The Laplace Transform Method for Difference Approximations
   12.1 Necessary conditions for stability
   12.2 Sufficient conditions for stability
   12.3 A fourth-order accurate approximation for hyperbolic differential equations
   12.4 Stability in the generalized sense for hyperbolic systems
   12.5 An example that does not satisfy the Kreiss condition but is stable in the generalized sense
   12.6 Parabolic equations
   12.7 The convergence rate
   Bibliographic Notes

13. The Laplace Transform Method for Fully Discrete Approximations: Normal Mode Analysis
   13.1 General theory for approximations of hyperbolic systems
   13.2 The method of lines and generalized stability
   13.3 Several space dimensions
   13.4 Domains with irregular boundaries and overlapping grids
   Bibliographic Notes

Appendix A.1 Results from Linear Algebra
Appendix A.2 Laplace Transform
Appendix A.3 Iterative Methods
References
Index
PREFACE
In this preface, we discuss the material to be covered, the point of view we take, and our emphases. Our primary goal is to discuss material relevant to the derivation and analysis of numerical methods for computing approximate solutions to partial differential equations for time-dependent problems arising in the sciences and engineering. It is our intention that this book should be useful for graduate students interested in applied mathematics and scientific computation, as well as for physical scientists and engineers whose primary interests are in carrying out numerical experiments to investigate physical behavior and test designs. We carry out a parallel development of material for differential equations and numerical methods. Our motivation for this approach is twofold: the usual treatment of partial differential equations does not follow the lines that are most useful for the analysis of numerical methods, and the derivation of numerical methods is increasingly utilizing and benefiting from following the detailed development for the differential equations. Most of our development and analysis is for linear equations, whereas most of the calculations done in practice are for nonlinear problems. However, this is not so fruitless as it may sound. If the nonlinear problem of interest has a smooth solution, then it can be linearized about this solution, and the solution of the nonlinear problem will be a solution of the linearized problem with a perturbed forcing function. Errors of numerical approximations for the nonlinear problem can thus be estimated locally and justified in terms of the linearized equations. A problem often arises in this scenario: the mathematical properties required to guarantee that the solution is smooth a priori may not be known or verifiable. So we often perform calculations whose results we cannot justify a priori.
In this situation we can proceed rationally, if not rigorously, by using a method that we could justify for the corresponding linearized problems and that can be justified a posteriori, at least in principle, if the obtained solution satisfies certain smoothness properties. The smoothness of our computed solutions can be observed experimentally to verify the needed smoothness requirements and justify our computed results a posteriori. However, this procedure is not without its limitations. There are many problems that do not have smooth solutions. There are genuinely nonlinear phenomena, such as
shocks, rarefaction waves, and nonlinear instability, that we must study in a nonlinear framework, and we discuss such issues separately. There are a few general results for nonlinear problems, generally justifications of the linearization procedure mentioned above, and we include these when available. The material covered in this book emphasizes our own interests and work. In particular, our development of hyperbolic equations is more complete and detailed than our development of parabolic equations and equations of other types. Similarly, we emphasize the construction and analysis of finite difference methods, although we do discuss Fourier methods. We devote a considerable portion of this book to initial-boundary-value problems and numerical methods for them. This is the first book to contain much of this material, and quite a lot of it has been redone for this presentation. We also tend to emphasize the sufficient results needed to justify methods used in applications rather than necessary results, and to stress error bounds and estimates that are valid for finite values of the discretization parameters rather than statements about limits. We have organized this book in two parts: Part I discusses problems with periodic solutions, and Part II discusses initial-boundary-value problems. It is simpler and clearer to develop the general concepts and to analyze problems and methods for periodic boundary problems, where the boundaries can essentially be ignored and Fourier series or trigonometric interpolants can be used. This same development is often carried out elsewhere for the Cauchy, or pure initial-value, problem. These two treatments are dual to each other, one relying upon Fourier series and the other upon Fourier integrals. We have chosen periodic boundary problems because we are, in this context, dealing with a finite, computable method without any complications arising from the infinite domains of the corresponding Cauchy problems.
Periodic boundary problems do arise naturally in many physical situations, such as flows in toroids or on the surfaces of spheres. The separation of periodic boundary and initial-boundary-value problems is also natural, because the results for initial-boundary-value problems often take the following form: if the problem or method is good for the periodic boundary problem and if some additional conditions are satisfied, then the problem or method is good for a corresponding initial-boundary-value problem. So an analysis and understanding of the corresponding periodic boundary problem is often a necessary condition for results for more general problems. In Part I, we begin with a discussion in Chapter 1 of Fourier series and trigonometric interpolation, which is central to this part of the book. In Chapter 2, we discuss model equations for convection and diffusion. Throughout the book, we often rely upon a model equation approach to our material. Equations typifying various phenomena, such as convection, diffusion, and dispersion, that distinguish the difficulties inherent in approximating equations of different types are central to our analysis and development. Difference methods are first introduced in this chapter and discussed in terms of the model equations. In Chapter 3, we consider the efficiencies of using higher order accurate methods, which, in a natural limit, lead to the Fourier or pseudospectral method. The
concept of a well-posed problem is introduced in Chapter 4 for general linear and nonlinear problems for partial differential equations. The general stability and convergence theory for difference methods is presented in Chapter 5. Sections are devoted to the tools and techniques needed to establish stability for methods for linear problems with constant coefficients, and then for those with variable coefficients. Splitting methods are introduced, and their analysis is carried out. These methods are very useful for problems in several space dimensions and for taking advantage of special solution techniques for particular operators. The chapter closes with a discussion of stability for nonlinear problems. Chapters 6 and 7 are devoted to specific results and methods for hyperbolic and parabolic equations, respectively. Nonlinear problems with discontinuous solutions, in particular hyperbolic conservation laws with shocks, and numerical methods for them are discussed in Chapter 8, which concludes Part I of the book and our basic treatment of partial differential equations and methods in the periodic boundary setting. Part II is devoted to the discussion of the initial-boundary-value problem for partial differential equations and numerical methods for these problems. Chapter 9 discusses the energy method for initial-boundary-value problems for hyperbolic and parabolic equations. Chapter 10 discusses Laplace transform techniques for these problems. Chapter 11 treats stability for difference approximations using the energy method and follows the treatment of the differential equations in Chapter 9. Chapter 12 follows from Chapter 10 in terms of development: here the Laplace transform is used for difference approximations. This treatment is carried out for the semidiscretized problem: only the spatial part of the operator is discretized. Finally, the fully discretized problem is treated in Chapter 13 using the Laplace transform.
The so-called "normal mode analysis" technique is used and developed in these last two chapters. In particular, sufficient stability conditions for the fully discretized problem are obtained in terms of stability results for the semidiscretized problem, which are much easier to obtain.

ACKNOWLEDGMENTS
The authors gratefully acknowledge the assistance we have received from our students and colleagues, who have worked through various versions of this material and have supplied us with questions and suggestions that have been very helpful. We want to give special thanks to Barbro Kreiss and Mary Washburn, who have expertly handled the preparation of our manuscript and have carried out most of the computations we have included. Their patience and good humor through our many versions and revisions are much appreciated. Pelle Olsson has gone over our manuscript carefully with us, and a number of sections have been improved by his suggestions. Finally, we acknowledge the Office of Naval Research and the National Aeronautics and Space Administration for their support of our work.
I PROBLEMS WITH PERIODIC SOLUTIONS
1 FOURIER SERIES AND TRIGONOMETRIC INTERPOLATION
Expansions of functions in Fourier series are particularly useful for both the analysis and construction of numerical methods for partial differential equations. In this chapter, we present the main results of this theory, which are used as the basis for most of the analysis in Part I of this book.
1.1 SOME RESULTS FROM THE THEORY OF FOURIER SERIES
To begin, we consider the representation of continuous complex valued functions by Fourier series. We assume throughout this chapter that functions are 2π-periodic and defined for all real numbers. If a function is only defined on a bounded interval, then we can make it 2π-periodic by means of a change of scale and extend it periodically. However, the number of derivatives that the extended function has will crucially impact the results. The basic result is shown by Theorem 1.1.1.
Theorem 1.1.1. Let f(x) ∈ C¹(−∞, ∞)* be 2π-periodic. Then f(x) has a Fourier series representation

    f(x) = \frac{1}{\sqrt{2\pi}} \sum_{\omega=-\infty}^{\infty} \hat f(\omega) e^{i\omega x},    (1.1.1)

where the Fourier coefficients \hat f(\omega) are given by

    \hat f(\omega) = \frac{1}{\sqrt{2\pi}} \int_0^{2\pi} e^{-i\omega x} f(x)\, dx.    (1.1.2)

The series converges uniformly to f(x).

*C^n(−∞, ∞) [or C^n(a, b)] denotes the class of n times continuously differentiable functions for −∞ < x < ∞ [or −∞ < a ≤ x ≤ b < ∞].

One can weaken the assumptions.

Theorem 1.1.2. Assume that f(x) is 2π-periodic and piecewise in C¹. If f(x) ∈ C¹(a, b) in some interval a < x < b, then the Fourier series Eq. (1.1.1) converges uniformly to f(x) in any subinterval a < α ≤ x ≤ β < b. At a point of discontinuity x, the Fourier series converges to ½(f(x + 0) + f(x − 0)).
As an example we consider the saw-tooth function (see Figure 1.1.1):

    v(x) = \tfrac{1}{2}(\pi - x) \quad \text{for } 0 < x \le 2\pi, \qquad v(x) = v(x + 2\pi).    (1.1.3)

Its Fourier coefficients are given by

    \hat v(\omega) = \frac{1}{\sqrt{2\pi}} \int_0^{2\pi} \tfrac{1}{2}(\pi - x)\, e^{-i\omega x}\, dx =
    \begin{cases} 0 & \text{for } \omega = 0, \\ \dfrac{\sqrt{2\pi}}{2i\omega} & \text{for } \omega \ne 0. \end{cases}

Figure 1.1.1.
Therefore,

    v(x) = \sum_{\omega=1}^{\infty} \frac{\sin \omega x}{\omega}.    (1.1.4)

By Theorem 1.1.2, the series converges uniformly to v(x) in every interval 0 < α ≤ x ≤ β < 2π. In the neighborhood of x = 0, 2π, the so-called Gibbs phenomenon is evident. In Figures 1.1.2 and 1.1.3 we show the graphs of the partial sums

    v_N(x) = \sum_{\omega=1}^{N} \frac{\sin \omega x}{\omega}, \qquad N = 10, 100,

respectively.

Figure 1.1.2.

Figure 1.1.3.

Near the jumps, there are rapid oscillations that narrow, but do not converge, to zero. Analytically, one can show that

    v_N(x) = v(x) - R\big((N + \tfrac{1}{2})x\big) + \mathcal{O}(|x|^2/N), \qquad
    R(y) = \frac{\pi}{2} - \int_0^y \frac{\sin t}{t}\, dt.

We show v_N(x) for N = 10 in Figure 1.1.4.

Figure 1.1.4.

The partial sum v_N(x) is obtained by simply setting the coefficients equal to zero for ω > N. One can also decrease the coefficients smoothly as ω increases. For example, the partial sum can be modified as follows:
    v_N(x) = \sum_{\omega=1}^{N} \frac{\sin(\omega\pi/N)}{\omega\pi/N}\, \frac{\sin \omega x}{\omega}.    (1.1.5)

In Figures 1.1.5 and 1.1.6, we have calculated v_N(x) according to Eq. (1.1.5) for N = 10, 100. There are fewer oscillations, but the jump is not as sharp. One can show that v_N(x) converges to v(x) in every interval 0 < α ≤ x ≤ β < 2π as N → ∞.

Figure 1.1.5.

Figure 1.1.6.
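The two kinds of partial sums are easy to compare numerically. The following NumPy sketch (ours, not part of the book; the function names are our own) evaluates the raw partial sum of Eq. (1.1.4) and the smoothed sum of Eq. (1.1.5). The raw sum overshoots near the jump by roughly 9% of the jump height no matter how large N is, while the smoothed sum stays close to the true maximum π/2:

```python
import numpy as np

def v(x):
    """Saw-tooth function of Eq. (1.1.3): v(x) = (pi - x)/2 for 0 < x <= 2*pi."""
    return 0.5 * (np.pi - np.mod(x, 2 * np.pi))

def v_N(x, N):
    """Raw partial sum of Eq. (1.1.4): sum_{w=1}^{N} sin(w x)/w."""
    w = np.arange(1, N + 1)
    return np.sin(np.outer(x, w)) @ (1.0 / w)

def v_N_smoothed(x, N):
    """Partial sum with smoothly decreased coefficients, Eq. (1.1.5)."""
    w = np.arange(1, N + 1)
    sigma = np.sin(w * np.pi / N) / (w * np.pi / N)  # damping factors; sigma -> 0 as w -> N
    return np.sin(np.outer(x, w)) @ (sigma / w)

x = np.linspace(0.001, 2 * np.pi - 0.001, 4000)
for N in (10, 100):
    # raw maximum stays near 1.85 (Gibbs overshoot); smoothed one stays near pi/2
    print(N, v_N(x, N).max(), v_N_smoothed(x, N).max())
```

Increasing N narrows the overshoot region of the raw sum but does not reduce its height, which is exactly the Gibbs phenomenon described above.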
Figure 1.1.7.
If we integrate Eq. (1.1.3), we obtain the smoother function

    v^{(1)}(x) = \int_0^x \frac{\pi - t}{2}\, dt = \frac{1}{2}\left(\pi x - \frac{x^2}{2}\right)
    = a_0 - \sum_{\omega=1}^{\infty} \frac{\cos \omega x}{\omega^2}, \qquad 0 \le x \le 2\pi,    (1.1.6)

where v^{(1)}(x) is Lipschitz continuous and its first derivative dv^{(1)}/dx = v has a jump at x = 0, ±2π, ±4π, .... The series converges uniformly for all x. The graph of v^{(1)}(x) is shown in Figure 1.1.7. In Figures 1.1.8 and 1.1.9, we have calculated v_N^{(1)} = a_0 − \sum_{\omega=1}^{N} (\cos \omega x / \omega^2) for N = 10, 100, respectively. The rate of convergence is much better.

If we integrate Eq. (1.1.3) p times, choosing the constants of integration judiciously, we obtain a 2π-periodic function v^{(p)} that has p − 1 continuous derivatives and whose pth derivative has a jump. Its Fourier coefficients satisfy the estimate

    |\hat v^{(p)}(\omega)| \le \text{constant}/(|\omega|^{p+1} + 1).    (1.1.7)

We want to show that the decay rate in Eq. (1.1.7) is typical. We say that a 2π-periodic function f(x) is a piecewise C¹ function if we can divide the interval 0 ≤ x ≤ 2π into subintervals α_j ≤ x ≤ α_{j+1}, 0 = α_0 < α_1 < ... < α_n = 2π, so that f(x) is a C¹ function on every open subinterval.
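The decay rates in Eq. (1.1.7) can also be observed numerically. The sketch below is our illustration, not the book's; `fourier_coeffs` is a hypothetical helper that approximates the integral (1.1.2) by the rectangle rule via the FFT. Applied to the saw-tooth function v (p = 0) and to its integral v^(1) from Eq. (1.1.6) (p = 1), the products |f̂(ω)|·ω and |f̂(ω)|·ω² come out roughly constant, confirming decay like 1/ω and 1/ω²:

```python
import numpy as np

def fourier_coeffs(f, M=2**14):
    """Approximate f_hat(w) of Eq. (1.1.2) by the rectangle rule on M samples.
    Returns coefficients for w = 0, 1, ..., M-1; keep w << M to avoid aliasing."""
    x = 2 * np.pi * np.arange(M) / M
    # np.fft.fft computes sum_j f(x_j) exp(-2*pi*i*j*k/M); rescale to match (1.1.2)
    return np.sqrt(2 * np.pi) / M * np.fft.fft(f(x))

def v(x):
    """Saw-tooth function (1.1.3): discontinuous, so p = 0."""
    return 0.5 * (np.pi - np.mod(x, 2 * np.pi))

def v1(x):
    """Integrated saw-tooth (1.1.6): Lipschitz, derivative has jumps, so p = 1."""
    y = np.mod(x, 2 * np.pi)
    return 0.5 * (np.pi * y - y**2 / 2)

w = np.arange(1, 65)
c0 = np.abs(fourier_coeffs(v))[w]
c1 = np.abs(fourier_coeffs(v1))[w]
print(c0 * w)      # roughly constant: |v_hat(w)| ~ 1/w
print(c1 * w**2)   # roughly constant: |v1_hat(w)| ~ 1/w^2
```

For the saw-tooth function the constant is √(2π)/2, in agreement with the exact coefficients computed after Eq. (1.1.3).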
Figure 1.1.8.
Theorem 1.1.3. Let f(x) be a 2π-periodic function and assume that its pth derivative is a piecewise C¹ function. Then

    |\hat f(\omega)| \le \text{constant}/(|\omega|^{p+1} + 1).
Figure 1.1.9.
Figure 1.1.10.
Proof. Integration by parts for ω ≠ 0 yields

    \sqrt{2\pi}\, \hat f(\omega) = \int_0^{2\pi} f(x) e^{-i\omega x}\, dx
    = \sum_{j=0}^{n-1} \int_{\alpha_j}^{\alpha_{j+1}} f(x) e^{-i\omega x}\, dx
    = \sum_{j=0}^{n-1} \left( \left[ -\frac{1}{i\omega}\, f(x) e^{-i\omega x} \right]_{\alpha_j + 0}^{\alpha_{j+1} - 0}
      + \frac{1}{i\omega} \int_{\alpha_j}^{\alpha_{j+1}} \frac{df(x)}{dx}\, e^{-i\omega x}\, dx \right).

If p = 0, we obtain the estimate

    |\hat f(\omega)| \le \frac{\text{constant}}{|\omega|}.

If p > 0, then we obtain

    \sqrt{2\pi}\, \hat f(\omega) = \frac{1}{i\omega} \int_0^{2\pi} \frac{df(x)}{dx}\, e^{-i\omega x}\, dx.

We can apply the process to df/dx. The general result follows by induction.
Let f(x) be a piecewise smooth 2π-periodic function with jumps f(α_j + 0) − f(α_j − 0) at x = α_j, 0 ≤ α_0 < α_1 < ... < α_{n−1} < 2π. Then

    f^{(1)}(x) = f(x) - \sum_{j=0}^{n-1} \big( f(\alpha_j + 0) - f(\alpha_j - 0) \big)\, u(x - \alpha_j)

belongs to C. Here u(x − α_j) is the saw-tooth function shown in Eq. (1.1.3) with jump π at x = α_j. The derivative df^{(1)}/dx can have jumps df^{(1)}(β_j + 0)/dx − df^{(1)}(β_j − 0)/dx at x = β_j, 0 ≤ β_0 < β_1 < ... < β_{m−1} < 2π. Then, subtracting the corresponding saw-tooth corrections from f^{(1)} once more, we obtain a function that belongs to C¹. This process can be continued, and the study of general piecewise smooth functions can be reduced to the study of the saw-tooth function.

It is often more convenient to study convergence in the L² norm rather than pointwise. Let \bar f denote the complex conjugate of f. We define the L² scalar product and norm by
    (f, g) = \int_0^{2\pi} \bar f g\, dx, \qquad
    \|f\| = (f, f)^{1/2} = \left( \int_0^{2\pi} |f|^2\, dx \right)^{1/2},

and say that a sequence f_ν converges to f in the mean, or in L², if

    \lim_{\nu \to \infty} \|f_\nu - f\| = 0.
The scalar product is a bilinear form that satisfies the following equalities:

    (f, g) = \overline{(g, f)}, \qquad (f + g, h) = (f, h) + (g, h),
    (\lambda f, g) = \bar\lambda (f, g), \qquad (f, \lambda g) = \lambda (f, g),    (1.1.8)

where λ is a scalar. The norm satisfies

    \|\lambda f\| = |\lambda|\, \|f\|,    (1.1.9a)
    |(f, g)| \le \|f\|\, \|g\|,    (1.1.9b)

and the triangle inequalities

    \|f + g\| \le \|f\| + \|g\|,    (1.1.9c)
    \big|\, \|f\| - \|g\| \,\big| \le \|f - g\|.    (1.1.9d)
The inequality in Eq. (1.1.9b) is a generalization of the usual Cauchy-Schwarz inequality for finite dimensional vector spaces and is obtained by considering the integrals as limits of Riemann sums. We can now state the fundamental Lemma 1.1.1.

Lemma 1.1.1. The exponential functions (1/\sqrt{2\pi})\, e^{inx}, n = 0, ±1, ±2, ..., are orthonormal with respect to the L² scalar product, that is,

    \left( \frac{1}{\sqrt{2\pi}} e^{inx}, \frac{1}{\sqrt{2\pi}} e^{imx} \right)
    = \begin{cases} 1 & \text{for } n = m, \\ 0 & \text{for } n \ne m. \end{cases}    (1.1.10)

Proof. Equation (1.1.10) follows directly from the definition of the scalar product.

If

    f(x) = \frac{1}{\sqrt{2\pi}} \sum_{\omega=-\infty}^{\infty} \hat f(\omega) e^{i\omega x},

then we formally obtain Eq. (1.1.2) from

    \frac{1}{\sqrt{2\pi}} \int_0^{2\pi} e^{-i\omega x} f(x)\, dx = \frac{1}{\sqrt{2\pi}}\, (e^{i\omega x}, f(x)) = \hat f(\omega).

We will now investigate the convergence of the partial sums
    S_N(x) = \frac{1}{\sqrt{2\pi}} \sum_{\omega=-N}^{N} \hat f(\omega) e^{i\omega x}, \qquad
    \hat f(\omega) = \frac{1}{\sqrt{2\pi}}\, (e^{i\omega x}, f(x)),

to f. From Lemma 1.1.1,

    \|S_N\|^2 = \sum_{\omega=-N}^{N} |\hat f(\omega)|^2.

Also,

    (f, S_N) + (S_N, f) = \sum_{\omega=-N}^{N} \left( \overline{\hat f(\omega)}\, \hat f(\omega) + \hat f(\omega)\, \overline{\hat f(\omega)} \right)
    = 2 \sum_{\omega=-N}^{N} |\hat f(\omega)|^2,

that is,

    \|f - S_N\|^2 = \|f\|^2 - (f, S_N) - (S_N, f) + \|S_N\|^2
    = \|f\|^2 - \sum_{\omega=-N}^{N} |\hat f(\omega)|^2,

and we obtain Theorem 1.1.4.

Theorem 1.1.4 (Bessel's inequality). The inequality

    \sum_{\omega=-N}^{N} |\hat f(\omega)|^2 \le \|f\|^2    (1.1.11)

holds for all N. Furthermore, \lim_{N\to\infty} \|f - S_N\| = 0 if and only if Parseval's relation
    \sum_{\omega=-\infty}^{\infty} |\hat f(\omega)|^2 = \|f\|^2    (1.1.12)

holds.

If f ∈ C¹(−∞, ∞) is 2π-periodic, then, by Theorem 1.1.1, S_N converges uniformly to f, that is,

    \lim_{N\to\infty} \max_{0 \le x \le 2\pi} |f(x) - S_N(x)| = 0,

and, therefore, also

    \lim_{N\to\infty} \|f - S_N\| = 0.

Therefore, Parseval's relation holds for all 2π-periodic f ∈ C¹(−∞, ∞). It also holds for much more general functions. Let f be a piecewise continuous function. It is known that it can be approximated arbitrarily well in the L² norm by 2π-periodic functions in C¹; that is, there exists a sequence {f_ν} such that

    \lim_{\nu\to\infty} \|f - f_\nu\| = 0.

For example, the saw-tooth function can easily be approximated by C¹ functions; see Figure 1.1.11. By Eq. (1.1.9),

    \big|\, \|f\| - \|f_\nu\| \,\big| \le \|f - f_\nu\|.

Therefore,

    \lim_{\nu\to\infty} \|f_\nu\| = \|f\|.

Also, using Eq. (1.1.11) and the Cauchy-Schwarz inequality for absolutely convergent sums, we get
Figure 1.1.11.

    \left| \sum_{\omega=-\infty}^{\infty} |\hat f(\omega)|^2 - \sum_{\omega=-\infty}^{\infty} |\hat f_\nu(\omega)|^2 \right|
    \le \sum_{\omega=-\infty}^{\infty} \left( |\hat f(\omega)| + |\hat f_\nu(\omega)| \right) \big|\, |\hat f(\omega)| - |\hat f_\nu(\omega)| \,\big|
    \le \sum_{\omega=-\infty}^{\infty} \left( |\hat f(\omega)| + |\hat f_\nu(\omega)| \right) |\hat f(\omega) - \hat f_\nu(\omega)|
    \le \left( \|f\| + \|f_\nu\| \right) \|f - f_\nu\| \to 0 \quad \text{as } \nu \to \infty.

Thus,

    \|f\|^2 = \sum_{\omega=-\infty}^{\infty} |\hat f(\omega)|^2,

and we have proved the following theorem.
Theorem 1.1.5. Any piecewise continuous function f can be expanded into a Fourier series

    f(x) = \frac{1}{\sqrt{2\pi}} \sum_{\omega=-\infty}^{\infty} \hat f(\omega) e^{i\omega x},

which converges to f in the L² norm, and Parseval's relation (1.1.12) holds.

We define the L² space to consist of those functions f that can be approximated arbitrarily well in the L² norm by functions in C¹. Integration is now performed in the Lebesgue sense. By the same argument as before, these functions can be expanded into Fourier series that converge to f in the L² norm. Therefore, the L² space can also be characterized as the space of all convergent Fourier series

    f = \frac{1}{\sqrt{2\pi}} \sum_{\omega=-\infty}^{\infty} \hat f(\omega) e^{i\omega x}, \qquad
    \sum_{\omega=-\infty}^{\infty} |\hat f(\omega)|^2 < \infty.
Instead of C¹ functions, we could also use C^∞ functions, because any C¹ function can be approximated arbitrarily well by C^∞ functions. This is a very useful principle: a "bad" function is approximated by a sequence of smooth functions, estimates and other properties are derived for the smooth functions, and then the limit is taken, so that, hopefully, the desired properties are also valid in the limit. A slight generalization of Parseval's relation is shown by Theorem 1.1.6.

Theorem 1.1.6. Let f, g ∈ L². Then

    (f, g) = \sum_{\omega=-\infty}^{\infty} \overline{\hat f(\omega)}\, \hat g(\omega),

where

    f(x) = \frac{1}{\sqrt{2\pi}} \sum_{\omega=-\infty}^{\infty} \hat f(\omega) e^{i\omega x}, \qquad
    g(x) = \frac{1}{\sqrt{2\pi}} \sum_{\omega=-\infty}^{\infty} \hat g(\omega) e^{i\omega x}.

Proof. If f, g ∈ C¹(−∞, ∞), then their Fourier series converge uniformly and, therefore, by Lemma 1.1.1,

    (f, g) = \sum_{\omega=-\infty}^{\infty} \overline{\hat f(\omega)}\, \hat g(\omega).

For general functions in L², the relation follows by taking limits as before.

EXERCISES

1.1.1. Prove Eqs. (1.1.8) and (1.1.9) for the L² scalar product and norm.

1.1.2. Let f be a real function with the Fourier series

    f(x) = \frac{1}{\sqrt{2\pi}} \sum_{\omega=-\infty}^{\infty} \hat f(\omega) e^{i\omega x}.

Prove that

    \frac{1}{\sqrt{2\pi}} \sum_{\omega=-N}^{N} \hat f(\omega) e^{i\omega x}

is real for all N.

1.1.3. Write a program for computing v(x) − v_N(x) as in Figure 1.1.3. Verify the theoretical error estimate numerically.
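Parseval's relation (1.1.12) and the generalization in Theorem 1.1.6 have exact discrete counterparts, which makes a quick numerical check possible. In the sketch below (our illustration, not from the book), the integrals are replaced by the rectangle rule on M equispaced points and the coefficients by the FFT; for these discrete sums the identities hold to rounding error:

```python
import numpy as np

M = 256
x = 2 * np.pi * np.arange(M) / M
f = np.exp(np.sin(x))                  # two smooth 2*pi-periodic test functions
g = np.cos(3 * x) + 0.5 * np.sin(x)

# FFT approximations of the coefficients in Eq. (1.1.2)
fhat = np.sqrt(2 * np.pi) / M * np.fft.fft(f)
ghat = np.sqrt(2 * np.pi) / M * np.fft.fft(g)

inner = np.sum(np.conj(f) * g) * 2 * np.pi / M     # rectangle rule for (f, g)
coeff_sum = np.sum(np.conj(fhat) * ghat)           # sum of conj(f_hat(w)) * g_hat(w)
norm_sq = np.sum(np.abs(f) ** 2) * 2 * np.pi / M   # rectangle rule for ||f||^2

print(abs(inner - coeff_sum))                      # Theorem 1.1.6, to rounding error
print(abs(norm_sq - np.sum(np.abs(fhat) ** 2)))    # Parseval (1.1.12), to rounding error
```

The agreement is exact (up to rounding) because the discrete Fourier transform satisfies its own Parseval identity; for smooth periodic functions the discrete quantities also converge rapidly to the continuous ones.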
1.2. PERIODIC GRIDFUNCTIONS AND DIFFERENCE OPERATORS

Let h = 2π/(N + 1), where N is a natural number, denote a grid interval. A grid on the x axis is defined to be the set of gridpoints

    x_j = jh, \qquad j = 0, ±1, ±2, \ldots.

A discrete, possibly complex valued, function u defined on the grid is called a gridfunction; see Figure 1.2.1. Here we are only interested in 2π-periodic gridfunctions, that is, using the notation u_j = u(x_j),

    u_{j+N+1} = u_j \quad \text{for all } j.

Clearly, the product and sum of gridfunctions are again gridfunctions. Their gridvalues are

    (uv)_j = u_j v_j, \qquad (u + v)_j = u_j + v_j.

Figure 1.2.1.
We denote the set of all 2π-periodic gridfunctions by P_h. If u, v ∈ P_h, then uv, u + v ∈ P_h.

We now introduce difference operators. They play a fundamental role throughout the whole book. We start with the translation operator E, defined by

    (Ev)_j = v_{j+1}.

If v ∈ P_h, then Ev ∈ P_h. Powers of E are defined recursively, and thus

    (E^p v)_j = v_{j+p}.    (1.2.1)

The inverse also exists, and

    (E^{-p} v)_j = v_{j-p}.

If we define E⁰ by E⁰v = v, then Eq. (1.2.1) holds for all integers p. E is a linear operator.

The forward, backward, and central difference operators are defined by

    D_+ = (E - E^0)/h, \qquad D_- = (E^0 - E^{-1})/h, \qquad
    D_0 = (E - E^{-1})/(2h) = \tfrac{1}{2}(D_+ + D_-),    (1.2.2)

respectively. In particular, consider these operators acting on the functions e^{iωx}. Then we have, for all x = x_j,

    h D_+ e^{i\omega x} = (e^{i\omega h} - 1)\, e^{i\omega x} = (i\omega h + \mathcal{O}(\omega^2 h^2))\, e^{i\omega x},
    h D_- e^{i\omega x} = (1 - e^{-i\omega h})\, e^{i\omega x} = (i\omega h + \mathcal{O}(\omega^2 h^2))\, e^{i\omega x},
    h D_0 e^{i\omega x} = i \sin(\omega h)\, e^{i\omega x} = (i\omega h + \mathcal{O}(\omega^3 h^3))\, e^{i\omega x}.    (1.2.3)

Thus,

    \left| \left( D_+ - \frac{\partial}{\partial x} \right) e^{i\omega x} \right| = \mathcal{O}(\omega^2 h), \qquad
    \left| \left( D_- - \frac{\partial}{\partial x} \right) e^{i\omega x} \right| = \mathcal{O}(\omega^2 h), \qquad
    \left| \left( D_0 - \frac{\partial}{\partial x} \right) e^{i\omega x} \right| = \mathcal{O}(\omega^3 h^2).    (1.2.4)

Consequently, one says that D_+, D_- are first-order accurate approximations of ∂/∂x, since the error is proportional to h. D_0 is second-order accurate. Higher derivatives are approximated by products of the above operators. For example,

    (D_- D_+ v)_j = h^{-2} ((E - 2E^0 + E^{-1}) v)_j = h^{-2} (v_{j+1} - 2v_j + v_{j-1}).

In particular,

    h^2 D_+ D_- e^{i\omega x} = (e^{i\omega h} - 2 + e^{-i\omega h})\, e^{i\omega x}
    = -4 \sin^2\!\left( \frac{\omega h}{2} \right) e^{i\omega x}
    = (-\omega^2 h^2 + \mathcal{O}(\omega^4 h^4))\, e^{i\omega x},    (1.2.5)

and D_+ D_- is a second-order accurate approximation of ∂²/∂x². Note that all of the above operators commute, since they are all defined in terms of powers of E.
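As a quick sanity check (our own NumPy sketch, not part of the book), the operators of Eq. (1.2.2) can be applied to a smooth periodic gridfunction, with `np.roll` playing the role of the translation operator E. Roughly halving h should roughly halve the error of D_+ and D_- and quarter that of D_0, as Eq. (1.2.4) predicts:

```python
import numpy as np

def apply_ops(u, h):
    """D+, D-, D0 of Eq. (1.2.2); (Ev)_j = v_{j+1} is np.roll(v, -1) on a periodic grid."""
    Eu, Eiu = np.roll(u, -1), np.roll(u, 1)   # E u and E^{-1} u
    return (Eu - u) / h, (u - Eiu) / h, (Eu - Eiu) / (2 * h)

errors = {}
for N in (32, 64):
    h = 2 * np.pi / (N + 1)
    x = h * np.arange(N + 1)
    ops = apply_ops(np.sin(x), h)
    # compare against the exact derivative d/dx sin(x) = cos(x)
    errors[N] = [np.abs(d - np.cos(x)).max() for d in ops]

# error ratios when the resolution is doubled: ~2 for D+ and D-, ~4 for D0
print([errors[32][k] / errors[64][k] for k in range(3)])
```

The observed ratios are slightly below 2 and 4 because h is reduced by the factor 65/33 rather than exactly 2, but the first- and second-order behavior is clearly visible.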
We need to define norms for finite-dimensional vector spaces and discuss some of their properties. We begin with the usual Euclidean inner product and norm. Consider the m-dimensional vector space consisting of all u = (u^{(1)}, ..., u^{(m)})^T, where the u^{(j)}, j = 1, ..., m, are complex numbers. We denote the conjugate transpose of u by u^* (u^* = u^T if u is real). The scalar product and norm are defined by

    (u, v) = u^* v = \sum_{j=1}^{m} \bar u^{(j)} v^{(j)} \qquad \text{and} \qquad |u| = (u, u)^{1/2},

respectively. The scalar product is a bilinear form that satisfies the following equalities:

    (u, v) = \overline{(v, u)},    (1.2.6a)
    (u + w, v) = (u, v) + (w, v),    (1.2.6b)
    (\lambda u, v) = \bar\lambda (u, v), \qquad (u, \lambda v) = \lambda (u, v), \qquad \lambda \text{ a complex number}.    (1.2.6c)

The following inequalities hold:

    |(u, v)| \le |u|\, |v|,    (1.2.7a)
    |u + v| \le |u| + |v|,    (1.2.7b)
    \big|\, |u| - |v| \,\big| \le |u - v|,    (1.2.7c)
    |(u, v)| \le |u| \cdot |v| \le \delta |u|^2 + \frac{1}{4\delta} |v|^2 \quad \text{for } \delta > 0.    (1.2.7d)

Let A = (a_{ij}) be a complex m × m matrix. Then its transpose is denoted by A^T = (a_{ji}) and its conjugate transpose by A^* = (\bar a_{ji}). The Euclidean norm of the matrix A is defined by

    |A| = \max_{|u| = 1} |Au|,

where the norm on the right-hand side is the vector norm defined above. If A and B are matrices, then

    |Au| \le |A|\, |u|,    (1.2.8a)
    |A + B| \le |A| + |B|,    (1.2.8b)
    |AB| \le |A|\, |B|.    (1.2.8c)

If the scalar λ and the vector u ≠ 0 satisfy Au = λu, then λ is an eigenvalue of A and u is the corresponding eigenvector. The spectral radius ρ(A) of a matrix A is defined by

    \rho(A) = \max_j |\lambda_j|,

where the λ_j are the eigenvalues of A. ρ(A) satisfies the inequality

    \rho(A) \le |A|.    (1.2.8d)
We next define a scalar product and norm for our periodic gridfunctions of length N+ 1. For fixed h and N+ 1 , these functions form a vector space. However, we are interested in these functions as h -7 0 and N(h) + 1 -700• The Euclidean inner product and norm defined above would not necessarily be finite in this limit, so we must use a different definition. We define a discrete scalar product and norm for periodic gridfunctions by N
(u, v)"
L ujvjh
j= O
and
lI u ll �
(u, u)",
respectively. The scalar product is also a bilinear form and satisfies the same equalities as the Euclidean inner product above in ( 1 .2.6):
    (u, v)_h = \overline{(v, u)_h},    (1.2.9a)
    (u + w, v)_h = (u, v)_h + (w, v)_h,    (1.2.9b)
    (λu, v)_h = λ̄(u, v)_h,   (u, λv)_h = λ(u, v)_h,   λ a complex number.    (1.2.9c)
The following inequalities also hold in analogy with (1.2.7):

    |(u, v)_h| ≤ ||u||_h ||v||_h,    (1.2.10a)
    |(u, av)_h| ≤ ||a||_∞ ||u||_h ||v||_h,   ||a||_∞ = max_j |a_j|,    (1.2.10b)
    ||u + v||_h ≤ ||u||_h + ||v||_h,    (1.2.10c)
    | ||u||_h − ||v||_h | ≤ ||u − v||_h.    (1.2.10d)
If u, v are the projections of continuous functions onto the grid, then

    lim_{h→0} (u, v)_h = (u, v),

that is, the discrete scalar product and norm converge to the L2 scalar product and norm. Therefore, the above estimates are also valid for the L2 scalar product and norm applied to C¹ functions. Since any function in L2 can be approximated arbitrarily well by a C¹ function, they are valid for all L2 functions. This provides a proof for Eq. (1.1.9).

The norm of an operator is defined in the usual way,

    ||Q||_h = sup_{u≠0} ||Qu||_h / ||u||_h = sup_{||u||_h=1} ||Qu||_h.

From this definition it follows that

    ||Qu||_h ≤ ||Q||_h ||u||_h.

Thus,

    ||E^p u||²_h = Σ_{j=0}^{N} |u_{j+p}|² h = Σ_{j=0}^{N} |u_j|² h = ||u||²_h,   p = 0, ±1, ±2, ...,

implies

    ||E^p||_h = 1.    (1.2.11)

Also,

    ||D_+ u||_h = (1/h) ||(E − E⁰)u||_h ≤ (1/h)(||Eu||_h + ||E⁰u||_h) = (2/h) ||u||_h,

that is, ||D_+||_h ≤ 2/h. The general inequalities (1.2.12) give us

    ||D_−||_h ≤ 2/h,   ||D_0||_h ≤ 1/h.
Actually, these inequalities for the norms of D_+, D_−, D_0 can be replaced by equalities. For D_+ we define u_j = (−1)^j and obtain

    ||u||²_h = (N + 1)h,
    ||D_+ u||²_h = Σ_{j=0}^{N} ((−1)^{j+1} − (−1)^j)² h^{−1} = 4(N + 1)h^{−1},

which yields

    ||D_+||_h = 2/h.    (1.2.13)

Using the same gridfunction u_j again, we get

    ||D_−||_h = 2/h.    (1.2.14)

For D_0 we choose u_j = i^j (where i = √−1) and obtain

    ||u||²_h = (N + 1)h,
    ||D_0 u||²_h = (1/(4h²)) Σ_{j=0}^{N} |i^{j+1} − i^{j−1}|² h = (N + 1)/h,

so

    ||D_0||_h = 1/h.    (1.2.15)
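These operator norms are easy to confirm numerically. The following short Python sketch (ours, not part of the original text) applies D_+ and D_0 to the extremal gridfunctions u_j = (−1)^j and u_j = i^j and compares the norm ratios with 2/h and 1/h.

```python
import numpy as np

# Discrete norm ||u||_h = (sum |u_j|^2 h)^(1/2) on a periodic grid with
# N + 1 points, h = 2*pi/(N + 1), and the extremal gridfunctions of the text.
N = 15                      # N + 1 = 16 gridpoints (divisible by 4, so i^j is periodic)
h = 2 * np.pi / (N + 1)
j = np.arange(N + 1)

def norm_h(u):
    return np.sqrt(h * np.sum(np.abs(u) ** 2))

def shift(u, p):            # periodic translation operator E^p
    return np.roll(u, -p)

def D_plus(u):
    return (shift(u, 1) - u) / h

def D_zero(u):
    return (shift(u, 1) - shift(u, -1)) / (2 * h)

u1 = (-1.0) ** j            # extremal gridfunction for D+
u2 = 1j ** j                # extremal gridfunction for D0

ratio_plus = norm_h(D_plus(u1)) / norm_h(u1)   # equals 2/h
ratio_zero = norm_h(D_zero(u2)) / norm_h(u2)   # equals 1/h
```

The ratios match (1.2.13) and (1.2.15) to rounding error.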
We now consider systems of partial differential equations and consequently need to define a norm and scalar product for vector-valued gridfunctions u = (u^(1), ..., u^(m))^T. Let u and v be two such vector-valued gridfunctions; then we define

    (u, v)_h = Σ_{j=0}^{N} (u_j, v_j) h    (1.2.16a)

and

    ||u||²_h = (u, u)_h.    (1.2.16b)

The properties shown in Eqs. (1.2.9) and (1.2.10) are still valid. We can also generalize (1.2.10b) when a is replaced by an (m × m) matrix A. If A is a constant matrix we have

    |(Au, v)_h| ≤ |A| ||u||_h ||v||_h.    (1.2.17)

If A = A_j is a matrix-valued gridfunction, then

    |(Au, u)_h| ≤ max_j |A_j| ||u||²_h.    (1.2.18)
EXERCISES

1.2.1. Derive estimates for ||D||_h, where D = D²_+, D_−D²_+, D²_−D_+, D²_0, D_0D_+D_−.
1.2.2. The difference operators D_+, D_0 both approximate ∂/∂x, but they have different norms. Explain why this is not a contradiction.
1.2.3. Compute ||D_+D_−||_h.
1.3. TRIGONOMETRIC INTERPOLATION
Let u ∈ P_h be a 2π-periodic gridfunction and assume that N is even. We want to show that there is a unique trigonometric polynomial

    Int_N u(x) = (1/√(2π)) Σ_{ω=−N/2}^{N/2} ũ(ω) e^{iωx},

which interpolates u, that is,

    Int_N u(x_j) = u_j,   j = 0, 1, 2, ..., N.    (1.3.1)
Equation (1.3.1) represents a system of N + 1 equations for the N + 1 unknowns ũ(ω). We want to show that this system has a unique solution. In fact, we can derive an explicit representation of ũ(ω) in terms of u_j. This representation is a consequence of Lemma 1.3.1.

Lemma 1.3.1. The exponential functions e^{iνx}, ν = 0, ±1, ±2, ..., ±N/2, are orthogonal with respect to the discrete scalar product, that is,

    (e^{iνx}, e^{iμx})_h = { 2π,   if ν = μ,
                             0,    if 0 < |μ − ν| ≤ N.

Proof. If ν = μ we have

    (e^{iνx}, e^{iνx})_h = Σ_{j=0}^{N} h = (N + 1)h = 2π.

If 0 < |μ − ν| ≤ N, then

    (e^{iνx}, e^{iμx})_h = Σ_{j=0}^{N} e^{i(μ−ν)jh} h = ((1 − e^{i(μ−ν)2π})/(1 − e^{i(μ−ν)h})) h = 0.

This proves the lemma.
We can now prove the following theorem.

Theorem 1.3.1. The interpolation problem (1.3.1) has the unique solution

    ũ(ω) = (1/√(2π)) (e^{iωx}, u)_h,   |ω| ≤ N/2.    (1.3.2)

Proof. Assume that Eq. (1.3.1) has a solution. We multiply it by e^{−iνx_j} h and sum over j; by Lemma 1.3.1 this yields Eq. (1.3.2). In particular, if u_j = 0, j = 0, 1, 2, ..., N, then the homogeneous equations only have the trivial solution. Therefore, Eq. (1.3.1) has the unique solution (1.3.2).

The process of representing the sequence {u(x_j)} using the Fourier coefficients ũ(ω) in Eq. (1.3.1) is often called the discrete Fourier transform. This transform became very important with the advent of the so-called fast Fourier transform (FFT). The FFT makes it possible to compute the coefficients ũ(ω) in O(N log N) operations, compared to the O(N²) operations required for their straightforward computation. This transformation can, of course, be defined for any sequence of data. The important fact here is that if the discrete function {u_j} can be considered as the restriction of a smooth function u, then the trigonometric polynomial Int_N u is a very accurate approximation of u. This statement will be made more precise by deriving an error estimate.
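Theorem 1.3.1 can be checked directly: compute the coefficients ũ(ω) from the discrete scalar product and evaluate the interpolant back at the gridpoints. A small Python sketch (ours; the gridfunction u is an arbitrary smooth choice):

```python
import numpy as np

# Coefficients (1.3.2): u~(w) = (1/sqrt(2*pi)) * (e^{iwx}, u)_h, then the
# interpolant Int_N u evaluated at the gridpoints; it must reproduce u exactly.
N = 8                                  # N even; N + 1 gridpoints
h = 2 * np.pi / (N + 1)
x = h * np.arange(N + 1)
u = np.exp(np.sin(x))                  # arbitrary smooth periodic gridfunction

omegas = np.arange(-N // 2, N // 2 + 1)
u_tilde = np.array([h * np.sum(np.exp(-1j * w * x) * u) / np.sqrt(2 * np.pi)
                    for w in omegas])

interp = np.array([np.sum(u_tilde * np.exp(1j * omegas * xj)) / np.sqrt(2 * np.pi)
                   for xj in x])
err = np.max(np.abs(interp - u))       # zero up to rounding error
```

In practice the coefficients would be computed with the FFT in O(N log N) operations; the direct sums above serve only to mirror the formulas.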
REMARK. We assume that N is even and, consequently, that the mesh consists of an odd number of points in the interval [0, 2π). This is a consequence of the assumption that our trigonometric polynomials are symmetric, that is, ω goes from −N/2 to N/2. If N is odd, one can use nonsymmetric polynomials, where −(N + 1)/2 + 1 ≤ ω ≤ (N + 1)/2. This corresponds to an even number of points in the interval [0, 2π) with h = 2π/(N + 1) and x_j = jh, j = 0, 1, ..., N. All the results we present in this section can also be derived for this case by changing the summation limits. We restrict ourselves to symmetric forms for convenience.
We now discuss properties of the interpolant. We obtain a discrete version of Parseval's relation from Lemma 1.3.1, using the same argument used in the proof of Theorem 1.1.6.

Theorem 1.3.2. Let

    Int_N u^(l)(x) = (1/√(2π)) Σ_{ω=−N/2}^{N/2} ũ^(l)(ω) e^{iωx},   l = 1, 2,

interpolate the two gridfunctions. Then

    (u^(1), u^(2))_h = Σ_{ω=−N/2}^{N/2} \overline{ũ^(1)(ω)} ũ^(2)(ω).    (1.3.3)
We now prove that the derivatives of an interpolant can be estimated in terms of the difference quotients of the corresponding gridfunction.

Theorem 1.3.3. Let Int_N u be the interpolant of a gridfunction u. Then

    ||Int_N u||² = Σ_{ω=−N/2}^{N/2} |ũ(ω)|² = ||u||²_h,    (1.3.4)

    ||D^l_+ u||²_h ≤ ||∂^l/∂x^l Int_N u||² ≤ (π/2)^{2l} ||D^l_+ u||²_h,   l = 0, 1, ....    (1.3.5)
Proof. Equation (1.3.4) follows from Parseval's relation (1.3.3). We also have

    D_+ e^{iωx_j} = ((e^{iωh} − 1)/h) e^{iωx_j}

and, therefore,

    ||D^l_+ u||²_h = Σ_ω |ũ(ω)|² |(e^{iωh} − 1)/h|^{2l} = Σ_ω |ũ(ω)|² ((2/h) |sin(ωh/2)|)^{2l}.

Since |sin x| ≤ |x|, we have (2/h)|sin(ωh/2)| ≤ |ω| and, therefore,

    ||D^l_+ u||²_h ≤ Σ_ω ω^{2l} |ũ(ω)|² = ||∂^l/∂x^l Int_N u||².

We obtain the upper bound in Eq. (1.3.5) by using the inequality 2|x|/π ≤ |sin x| for |x| ≤ π/2,
which proves Eq. (1.3.5).

Now consider a 2π-periodic function u and assume that we can represent it as a Fourier series

    u(x) = (1/√(2π)) Σ_{ω=−∞}^{∞} û(ω) e^{iωx}.

Consider the restriction of u to the grid. Then we can interpolate the gridvalues and obtain

    Int_N u(x) = (1/√(2π)) Σ_{ω=−N/2}^{N/2} ũ(ω) e^{iωx}.

The relationship between the Fourier coefficients û(ω) and the coefficients ũ(ω) of the interpolant is computed in Lemma 1.3.2.
Lemma 1.3.2.

    ũ(ω) = Σ_{l=−∞}^{∞} û(ω + l(N + 1)),   |ω| ≤ N/2.    (1.3.6)

In particular, if û(ω) = 0 for |ω| > N/2, then Int_N u = u.

Proof. We can write any integer μ in the form

    μ = ω + l(N + 1),   |ω| ≤ N/2,  where l is an integer.

As in Section 1.2, h = 2π/(N + 1) and, therefore,

    e^{iμx_j} = e^{iωx_j}

implies

    u(x_j) = (1/√(2π)) Σ_{μ=−∞}^{∞} û(μ) e^{iμx_j} = (1/√(2π)) Σ_{ω=−N/2}^{N/2} ( Σ_{l=−∞}^{∞} û(ω + l(N + 1)) ) e^{iωx_j}.
Equation (1.3.6) follows from the uniqueness of the interpolant.

We can now prove the fundamental approximation theorem for trigonometric interpolation. Let

    ||u||_∞ = sup_{0≤x≤2π} |u(x)|

denote the L∞ norm.

Theorem 1.3.4. Let u be a 2π-periodic function and assume that its Fourier coefficients satisfy an estimate

    |û(ω)| ≤ C/|ω|^m,   ω ≠ 0,   m > 1.    (1.3.7)
Then

    ||u(·) − Int_N u(·)||_∞ ≤ (2C/√(2π)) (N/2)^{1−m} ( 1/(m − 1) + (2(N + 1)/N) Σ_{j=1}^{∞} 1/(2j − 1)^m ).    (1.3.8)
Proof. We write u(x) = u_N(x) + u_R(x), where

    u_N(x) = (1/√(2π)) Σ_{|ω|≤N/2} û(ω) e^{iωx},
    u_R(x) = (1/√(2π)) Σ_{|ω|>N/2} û(ω) e^{iωx}.

We also write Int_N u = Int_N u_N + Int_N u_R. Int_N u_N is the trigonometric interpolant of u_N(x), and from Lemma 1.3.2 we have Int_N u_N = u_N. Int_N u_R is the interpolant of u_R(x), so Lemma 1.3.2 implies that

    ũ_R(ω) = Σ_{j=−∞, j≠0}^{∞} û(ω + j(N + 1)),   |ω| ≤ N/2,    (1.3.9)

and Eq. (1.3.7) gives us

    ||u_R||_∞ ≤ (2/√(2π)) Σ_{ω>N/2} C ω^{−m} ≤ (2C/√(2π)) (1/(m − 1)) (N/2)^{1−m}.

From Eqs. (1.3.7) and (1.3.9), we obtain

    |ũ_R(ω)| ≤ Σ_{j≠0} |û(ω + j(N + 1))| ≤ Σ_{j≠0} C/|ω + j(N + 1)|^m,   |ω| ≤ N/2.

Since |ω + j(N + 1)| ≥ |−N/2 + j(N + 1)| > |N(2j − 1)/2| for j > 0 and |ω + j(N + 1)| ≥ |N/2 + j(N + 1)| > |N(2j + 1)/2| for j < 0, we can estimate the last sum to obtain

    |ũ_R(ω)| ≤ 2C (2/N)^m Σ_{j=1}^{∞} 1/(2j − 1)^m.

Therefore,

    ||Int_N u_R||_∞ ≤ (1/√(2π)) Σ_{|ω|≤N/2} |ũ_R(ω)| ≤ (2C/√(2π)) (N/2)^{1−m} (2(N + 1)/N) Σ_{j=1}^{∞} 1/(2j − 1)^m,

and Eq. (1.3.8) follows from

    ||u − Int_N u||_∞ = ||u_R − Int_N u_R||_∞ ≤ ||u_R||_∞ + ||Int_N u_R||_∞.
Theorem 1.3.4 establishes uniform convergence for m > 1 and, by Theorem 1.1.3, applies to continuous functions that are piecewise smooth. The term Int_N u_R is often referred to as the aliasing error. The final estimate of Int_N u_R shows that this error is quite small if the function u is smooth, that is, if m is large. Also, it is of the same order as the error we commit by truncating the Fourier series. The next result follows immediately from the last theorem.

Corollary 1.3.1. There are constants C_l such that

    ||∂^l/∂x^l (u − Int_N u)||_∞ ≤ C_l N^{l+1−m},   l + 1 < m.
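The decay of the interpolation error can be observed numerically. The sketch below (ours; the test function 1/(2 + cos x) is an arbitrary smooth choice whose Fourier coefficients decay geometrically, so the error falls off faster than any fixed power of N) measures ||u − Int_N u||_∞ on a fine grid for two values of N.

```python
import numpy as np

# Interpolation error ||u - Int_N u||_inf for a smooth 2*pi-periodic function,
# using the coefficient formula (1.3.2) evaluated as a direct sum.
def interp_error(N, f, n_fine=1000):
    h = 2 * np.pi / (N + 1)
    x = h * np.arange(N + 1)
    u = f(x)
    omegas = np.arange(-N // 2, N // 2 + 1)
    u_tilde = np.array([h * np.sum(np.exp(-1j * w * x) * u) for w in omegas])
    xf = np.linspace(0, 2 * np.pi, n_fine, endpoint=False)
    vals = (np.exp(1j * np.outer(xf, omegas)) @ u_tilde) / (2 * np.pi)
    return np.max(np.abs(vals.real - f(xf)))

f = lambda x: 1.0 / (2.0 + np.cos(x))   # analytic; geometric coefficient decay
e1 = interp_error(8, f)
e2 = interp_error(16, f)                # much smaller than e1
```

Doubling N here reduces the error by far more than the algebraic factor 2^{1−m} guaranteed by (1.3.8), as expected for an analytic function.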
EXERCISES

1.3.1. Write a program that computes Int_N u(x), where u(x) is the smoothed saw-tooth function as defined in Section 1.1. Verify numerically that the error is O(N^{1−m}) according to Eq. (1.3.8).
1.3.2. Formulate and prove the theorems in Section 1.3 for an even number of points in the interval [0, 2π].

1.4. THE S-OPERATOR FOR DIFFERENTIATION
Theorem 1.3.4 shows that we can obtain very accurate approximations of smooth functions u(x) using trigonometric interpolation. If the function u(x) is well represented in terms of Fourier components that correspond to wave numbers ω less than N/2 in magnitude, then Int_N u is a very accurate approximation. In particular, Int_N u ≡ u(x) if û(ω) = 0 for |ω| > N/2. This is a good reason for using trigonometric polynomials as a basis for approximating derivatives accurately. Denote by w(x) the derivative of the interpolating polynomial Int_N u, that is,

    w(x) = (d/dx) Int_N u = (1/√(2π)) Σ_{|ω|≤N/2} w̃(ω) e^{iωx} = (1/√(2π)) Σ_{|ω|≤N/2} iω ũ(ω) e^{iωx}.    (1.4.1)

Let u(x) be a smooth function and assume that we only know its values at the gridpoints x_j. We want to compute approximations of du/dx at the gridpoints x_j. We proceed by calculating the trigonometric interpolant and then evaluating the derivative of this interpolant at the points x_j. In detail, the {ũ(ω)} are computed by using the FFT, and the {w̃(ω)} are obtained by N + 1 multiplications. Then w can be calculated at the gridpoints using the inverse FFT. If we define the gridvalues of u as a vector u = (u_0, u_1, ..., u_N)^T and, correspondingly, w = (w_0, w_1, ..., w_N)^T, then the relation between these two vectors can be written as

    w = Su,    (1.4.2)

where S is an (N + 1) × (N + 1) matrix. The matrix S is a discrete form of the differential operator ∂/∂x, as were the difference operators D_+ and D_0 defined in Section 1.2. Since it uses information from all gridpoints, it is natural to expect good accuracy. In fact, Corollary 1.3.1 gives us an error estimate. If there are many gridpoints, the computation can be performed very efficiently using the FFT.
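The whole procedure — FFT, multiply by iω, inverse FFT — is a few lines of Python (our own sketch; NumPy's FFT conventions supply exactly the integer wave numbers −N/2, ..., N/2 when N + 1 points are used).

```python
import numpy as np

# Spectral differentiation w = S u as described in the text:
# forward FFT, multiply coefficient w by i*w, inverse FFT.
N = 16                                   # N even, N + 1 gridpoints
h = 2 * np.pi / (N + 1)
x = h * np.arange(N + 1)

u = np.sin(3 * x) + np.cos(2 * x)
du_exact = 3 * np.cos(3 * x) - 2 * np.sin(2 * x)

u_hat = np.fft.fft(u)                            # unnormalized DFT of gridvalues
w_nums = np.fft.fftfreq(N + 1, d=1.0 / (N + 1))  # integer wave numbers -N/2..N/2
w = np.fft.ifft(1j * w_nums * u_hat).real        # derivative of the interpolant

err = np.max(np.abs(w - du_exact))
```

Because u here contains only wave numbers |ω| ≤ N/2, the result is exact to rounding error, in agreement with Int_N u ≡ u for band-limited data.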
We will now analyze some basic properties of the matrix S. Denote by ew the vector
I wl In the previous section we have proved that these vectors, that is, u
Also,
1 � v21l'
�
£...J
[w[$ N/2
u
�
N/2.
( 1 .4.3)
can be expressed in terms of
u(w)ew .
(1 .4.4)
FOURIER SERIES AND TRIGONOMETRIC INTERPOLATION
32
Iwl
$;
N/2,
( 1 .4.5)
because the interpolant corresponding to ew is e iwx• Therefore, ew is an eigen vector of S with the eigenvalue iw . The Euclidean scalar product of two eigen vectors is
{
N + 1 , if w if °
0,
= v,
< Iw
-
vi
$;
N,
( 1 .4.6)
which was proved in Lemma 1 .3 . 1 . This shows that the eigenvectors of S form an orthogonal basis. If V is the matrix whose columns are these eigenvectors, then we can write S = VAV* where A is a diagonal matrix whose entries are the purely imaginary eigenvalues of S. Consequently S· (VAV')* VA'V' = V(-A)V' -S, that is, S is skew-Hermitian. The eigenvalue of maximum modulus is iN/2, hence peS) = N/2, and, since S is skew-Hermitian, =:
=:
lSI
:=:
=:
max ISul
lui =
1
N/2.
We collect these results in the following lemma. Lemma 1.4.1. The matrix S, defined in Eq. (1.4.2), has the following proper
ties:
1. It is skew- Hennitian. 2. The eigenvalues are iw , w = -N/2, -N/2 + 1, . . . , N/2. 3. lSI = N/2.
REMARK. We can also think of S as an operator on Ph (see Section 1.2). Then the above properties become (u,Svh = -(SU,V)h,
N/2.
EXERCISES 1.4.1. Write a program that computes w =: Su according to Eq. ( 1 .4.2) for some function u, where au/ax is analytically known. Find the number of grid points N + 1 such that the error
GENERALIZATIONS
33
I
aU(Xi ) - N/2 5,j5,N/2 ax max
is equal to
max
-5 05,i 5, 50
g
I
--
aU(Xj ) -ax
-
I
w·
J
I
- Do uJ· .
1.5. GENERALIZATIONS
Let f(x) f(X I , X2 ), (x) g(X I ,X2 ) denote functions that are 211"-periodic in both X I and X2 . We define the L2 scalar product and nonn by =
(j, g)
=
f021r f021r
=
fg dXI dX2 ,
Ilfll
(1.5.1)
The trigonometric functions ei(w ,x),
x
=
w (X I ,X2 ),
(w I , W 2 ), wi integers, (w,x) W I X I + W 2X2 , =
(1 .5.2)
are again orthogonal, that is,
(ei (w, x) , ei ( v , x» )
=
{ 0,
(21I")2 , for W
Therefore, we obtain a fonnal Fourier series
f(x)
W l = - OO W2 :::: - OO
for w '"
II , II .
(1 .5.3)
(1 .5.4)
where
If f(xi ,X2 ) is a piecewise CP function, then we can again use integration by
34
FOURIER SERIES AND TRIGONOMETRIC INTERPOLATION
parts to prove for Wj f- 0,
j
1 , 2,
where
Therefore, there are constants
A
i !(w) i
::;
K Kp, { K' and
Kp
such that
J W l JP+J W2 JP '
for w = 0, for w f- O.
( 1 .5.5)
We will now show that the Fourier series ( 1 .5 .4) converges uniformly to ! for smooth functions. If j (X l , X2 ) is a smooth function of Xl , X2 , then it is a smooth function of X 2 for every fixed X l . Therefore, the Fourier series
!(X " X2 ) !2 (X " W 2 )
00
1 L !2 (x , , w 2 )eiW2X2 , ..;2; W 2 = -OO 1 w !(x " x 2 ) e- i 2X2 dX2 , ..;2;
r�o
( 1 .5.6)
converge uniformly to !(X l , X2 ). Every Fourier coefficient is a smooth function of Xl and, therefore, can be expanded into a uniformly convergent series
1 W IXI , !(w l , w 2 )ei L ..;2; UJ I = - 00 with
(1 .5.7)
GENERALIZATIONS
35
Using Eq. ( 1 .5.5), we obtain Eq. ( 1 .5.4) if we substitute Eq. ( 1 .5.7) into Eq. ( 1 .5.6), and the series converges uniformly to f . Again, we can relax the smoothness requirement if we are only interested in convergence in the L2 norm. Let N
N
L L
w } = -N W2 = -N
j (w)ei (w,x) .
As before, IIf[[ 2
N
-
N
L L
w } = -N W2 = -N
[ j(w) [2,
and, therefore,
if, and only if, Parseval's relation, �
[[f[[2 =
�
L L
W I = -OO W2 = -OO
[ j(w) [ 2 ,
( 1 .5.8)
holds. As before, Parseval's relation holds for all smooth functions and all func tions that can be approximated arbitrarily well by smooth functions in the L2 norm. This class of functions is called L2 . The generalization of Parseval's rela tion (j, g)
( 1 .5.9) W j = -OO W2 = -OO
is also valid.
FOURIER SERIES AND TRIGONOMETRIC INTERPOLATION
36
A two-dimensional grid is defined by
h
=
27r/(N + 1),
jp
o ± 1 , ±2 . . . . ( 1 .5 . 1 0)
Gridfunctions are denoted by
We assume that the gridfunctions are 27r-periodic in both directions. Ex ] , EX2 will denote the translation operators in the X I and X2 directions, respectively, that is,
The difference operators D+x] , D+X2 ' . . . are defined in terms of Ex] , EX2 as before. The discrete scalar product and norm are now defined by
(u, vh
=
N
N
LL
j] = 0 h = O
ujuj h2 ,
l I ul l h
(u, u)h1 /2 .
As we can develop a Fourier series in two dimensions using a one-dimen sional Fourier series, we can develop the two-dimensional interpolating poly nomial using one-dimensional ones. Let u(x) = U(X I ,X2 ) be a gridfunction. For every fixed X I , we determine ( 1 .5 . 1 1 ) Then we interpolate
U2(X I , "'2 ) for every fixed "' 2 and obtain ( 1 .5 . 1 2)
Substituting Eq. ( 1 .5. 1 2) into Eq. ( 1 .5 . 1 1 ) gives us the desired two-dimensional interpolating polynomial which, therefore, can be obtained by solving 2(N + 1 ) one-dimensional interpolation problems. We can consider Fourier expansions of vector-valued functions instead of scalar functions. We can also consider vector-valued functions f (f( 1 ) , . . . ,f(m) ) in d space dimensions. We can use a different discretization interval in each space dimension hi = 27r/(NI + 1 ), I 1 , 2, . . . , d. Then Xj := (hI jl , . . . , hd h ) and /j = f(xj ). The Fourier expansion of f is then of =
37
EXERCISES
the fonn
j
j
where j (v) are the Fourier coefficients of j (v) . The discrete scalar product becomes Nd
L
jd = 0 where m
� j---:- (v) (v) £...J } g} . v=[
We will usually assume that h [
=
...=
hd to simplify the notation.
EXERCISES
1.5.1. 1.5.2. 1.5.3.
Fonnulate and prove the generalization of Theorems 1 .3 . 1 , 1 .3.3 in two space dimensions. Compute II D+xj IIh' II D-xj IIh , II Doxj IIh' j = 1 , 2, on a rectangular grid with gridsize hj in the Xj direction, j = 1 , 2. Discuss the generalization of the S operator to two space dimensions.
BIBLIOGRAPHIC NOTES
There is an extensive literature on Fourier series. Two of the books containing most of the basic theory are by Churchill and Brown (1978) and by Zygmund (1977). Some of the theory is also found in Courant-Hilbert (1953). The basic Theorem 1 . 1 . 1 is proven there and the examples in Section 1 . 1 are also dis cussed. Theorem 1 . 1 .2 is proven by Titchmarsh ( 1 937). Although most of the components of the Fast Fourier Transfonn (FFT) were known about 1920, it was the basic paper by Cooley and Tukey ( 1 965), which created the method as we know it today. A general description of the FFT is found in Conte de Boor ( 1 972).
2 MODEL EQUATIONS
In this chapter, we examine several model equations to introduce some basic properties of differential equations and difference approximations by example. Generalizations of these ideas are discussed throughout the remainder of this book. 2.1. FIRST-ORDER WAVE EQUATION, CONVERGENCE, AND STABILITY
The equation Ut = Ux is the simplest hyperbolic equation; the general definition of the class of hyperbolic equations is given in Chapter 6. We consider the initial value problem
U t = ux, u(x, O) = f(x),
- 00 < - 00 <
x x
< 00, < 00,
0�
t,
(2. 1 . 1 )
where f(x) = f(x + 21r) i s a smooth 21r-periodic function. To begin, we assume that
f(x) consists of one wave. We try to find a solution of the same type
u(X, t)
=
1
�
v 21r
eiwx u(W , t) . �
(2. 1 .2)
Substituting Eq. (2. 1 .2) into Eq. (2. 1 . 1 ) yields the ordinary differential equation
du . � dt = lWU,
38
u(W, O)
=
J(w),
39
FIRST-ORDER WAVE EQUATION, CONVERGENCE, AND STABILITY
which is called the Fourier transform of Eq. (2. 1 . 1). Therefore,
It follows that
_1_ V2;
u(x, t)
eiw(X +t)j(w)
= f(x + t)
(2. 1 .3)
is a solution of our problem. Now consider the general case
f(x)
1 V2;
�
L eiwxj(w) .
w = -oo
(2. 1 .4)
By the superposition principle
u(x, t)
1 V2;
�
L eiw(x + t)j(w)
w = -oo
=
f(x + t)
(2. 1 .5)
is a solution to our problem. For every fixed t Parseval's relation yields �
l I u( · , t)11 2
=
L j eiw tj(w) 1 2
w = -oo
w = -oo
(2. 1 .6)
lI u ll2 is often called the energy of u. Therefore, Eq. (2. 1 . 1 ) is said to be energy
conserving-the obvious phrase, norm conserving, is often used in this con text as well. Clearly, any method of approximation must be nearly norm con serving to be useful. We also note that there is a finite speed of propagation associated with this problem. The expression (2. 1 .5) shows that the solution is constant along the lines x + t = constant which are called characteristics (see Figure 2. 1 . 1). Any particular feature of the initial data, such as a wave crest, is propagated along these characteristics. In our case, the speed of propagation (or wave speed) is dx/dt = - 1 . For general hyperbolic systems, there may be many families of characteristics corresponding to different wave speeds of different components. The important thing is that these speeds are always finite. We now solve the problem using a difference approximation. We introduce a space step h = 2-rr/(N + 1), with N a natural number, and a time step k > O. h, k define a grid in x, t space, consisting of the gridpoints (Xj , tn ) := (jh, nk). Gridfunctions will be denoted by uJ = u(Xj, tn )' A simple approximation based on forward differences in time and centered differences in space is
40
MODEL EQUATIONS
Figure 2.1.1.
up + 1 = (I + kDo)uP = : QuP , uJ = /j, j = 0, ±1, ±2,
(2. 1 .7)
If u n is known at time tn nk, then we can use Eq. (2. 1 .7) to calculate up + I for all j . Thus, the initial data determine a unique solution. Also, if u n is 211" periodic, then u n + 1 is too. Therefore, we can restrict the calculation to j = 0, 1, 2, . . . , N and use periodicity conditions to extend the solution and provide the extra needed values for Eq. (2. 1 .7) at j = O, N, that is, U� I = uN, uN + I = uo. We will now calculate the solution analytically. First consider the case where f consists of one single wave, that is, =
/j
=
1
v2;
--
e'WXj f(w), .
A
j
0, 1 , 2, . . . , N.
As in the continuous case we make the ansatz
(2. 1 .8) that is, we assume that the solution can also be expressed in terms of one single Fourier component. Substituting Eq. (2.1 .8) into Eq. (2. 1 .7) yields
where A = kjh. This equation can be rewritten as
41
FIRST-ORDER WAVE EQUATION, CONVERGENCE, AND STABILITY
where � = wh, and we get
Q = 1
+
(2. 1 .9)
if.. sin � .
Q is called the symbol, or amplification factor, of (l + kDo). We refer to it as the Fourier transform of (l + kDo) and Eq. C2.1 .9) as the Fourier transform of Eq. C2. 1 .7). The solution of Eq. C2. 1 .9) is
and it is clear that 1 yI2;
(l
+
)
n . k i h sin (wh) e'WXjf(w) A
solves our problem. Now we consider a sequence of mesh intervals k, h � O. We want to show that up converges to the corresponding solution of the differential equation. We have
(
1
+
i
�
S in
C W h)
r
&'Ck2 w 2 + kh2 w 3») n = (1 + &'CCkw 2 + h2 W 3 )tn » eiwtn . ( eiwk
Therefore,
uJn Thus, for every fixed w, we obtain lim u� k, h -,> O J in any finite interval 0 � t � T.
+
42
MODEL EQUATIONS
Now assume that the initial data are represented by a trigonometric polyno mial
M
1
u(x O)
L
V2; w = - M
,
eiwxj(w).
By the superposition principle, the above result implies that the solution of the difference approximation will converge to the solution of the differential equation as k, h � O. Thus, one might think that the approximation could be useful in practice. However, consider Eq. (2. 1 . 1 ) with initial data/ex) = 0 which has the trivial solution u(x, t) = O. Now consider the problem with perturbed data
jew) -
{ E'
for w = N14, 0, otherwise.
The corresponding solution of the transformed difference approximation is V n (NI4) �
that is,
For tn
=
=
(
1
+
k . i h sm
(
27r
N+
N 1 4
)) E ( n
-
1
+
) E,
k n ih
1 , that is n = 11k
Now consider any sequence k, h � 0 with klh = A > 0 fixed. Then
This "explosion," or growth, can be arbitrarily fast. For example, if we consider A = 10, k = 1 0 5 , then -
43
FIRST·ORDER WAVE EQUATION, CONVERGENCE, AND STABILITY 50
c
.g ;I
'0 IZ)
e
.
� �
::E
40 30 20 10 0
5
10
15
Number o f Time Steps
20
25
30
Figure 2.1.2.
The numerical calculation is therefore worthless. In Figures 2. 1 .2 and 2. 1 .3 we have calculated the maximum of the solutions of Eq. (2. 1 .7) with initial data for 0 for 71'
� �
Xj
Xj
50
c
�
40
.g
30
.
20
IZ)
e
� �
::E
10
Number of Time Steps Figure 2.1.3.
� 71', � 271',
44
MODEL EQUATIONS
and stepsizes h = 10- 2 , k = 10- 2 and h = 10-2 , k 10- [ , respectively. The ana lytic results lead us to expect that the solutions will grow like 2nl2 and 101 n12 , respectively. The numerical results confirm that prediction. In realistic computations one must always expect perturbations, either from measurement errors in the data or from rounding errors due to the finite rep resentation of numbers in the computer. Therefore, we consider only such sequences k, h --7 0, for which =
sup
O "; tn "; T, W, k , h
I on I
�
(2. 1. 10)
K(T),
and we call the method stable for such a sequence. If we choose a sequence with
c > 0 constant, then
and Eq. (2.1 . 10) holds. However, this method is not practical to use for the following reasons:
1. 2.
As we will see, there are stable methods with k = ch. Therefore, the amount of work required to calculate u at t = 1 with this method is unnec essarily large. If we want to calculate u for large t, then the exponential growth factor eet can amplify small perturbations so much that the solution is worthless.
We modify our previous difference equation by adding artificial viscosity, that is, we consider
o
uj
-
fJ. '
Here (J > 0 i s a constant, which we will choose later. We can write Eq. in the form
u� + [ ]
-
k
u� ]
which approximates the differential equation
(2. 1 . 1 1 ) (2. 1 . 1 1 )
(2. 1.12)
FIRST-ORDER WAVE EQUATION, CONVERGENCE, AND STABILITY
45
As h --7 0 we obtain Eq. (2. 1 . 1). Thus, Eq. (2. 1 . 1 1 ) is a consistent difference approximation of Eq. (2. 1 . 1), that is, the difference approximation converges formally to the differential equation as k, h --7 O. We will now choose (1, k, and h so that
(2. 1 . 1 3) In this case, Eq. (2. 1 . 10) is certainly satisfied. From Eqs. ( 1 .2.3) and ( 1 .2.5), Q is of the form Q = 1 + iA sin � - 4(1 A sin2
;,
�
wh, A
kjh.
Therefore,
(
1 - 4(1 A sin2 8(1A sin2
;)
1 2
2
+ A 2 sin2 � ,
+ 1 6(1 2A 2 sin4
1 - (8(1A - 4A2 ) sin2
;
1 2
+ 4A2 sin2
+ ( 1 6(1 2 - 4)A2 sin4
1 2
(
;.
1
� 2 -
- Sill •
2
)
'
There are two ways we can satisfy Eq. (2. 1 . 13):
1. Suppose 2(1 � 1 . If 0 � 8(1 A - 4A2 , that is, o
<
A
� 2(1 �
1
(2. 1 . 1 4)
then I QI � 1 . By letting I � I be small, we see that these conditions are also necessary. 2. Suppose 1 � 2(1. If we replace sin4 (V2) by sin2 (�j2) it follows that I Q I � 1 if
that is,
1 � 2(1, By letting sin (�j2)
=
2(1A � 1 .
(2. 1 . 1 5)
1 we see that these conditions are also necessary.
MODEL EQUATIONS
46
There are two particular schemes of the above type which have been used exten sively:
1. The Lax-Friedrichs Method (11
=
h/2k = 1/(2A)).
V}� + l
(2. 1 . 1 6) In this case, Eq. (2. 1 . 15) is satisfied if k/h :::; 1 , that is, I Q I :::; 1 . It is remarkable that the simple change up � �(uF+ I + up_ I ) has such an effect on the solution.
2. The Lax-Wendroff Method (11
=
k/2h = Aj2). (2. 1 . 1 7)
Now Eq. (2. 1 . 14) is satisfied if k/h :::; 1 . In Figures 2. 1 .4 and 2. 1 .5, we have used the Lax-Friedrichs method and the Lax-Wendroff method to calculate the solution of Eq. (2. 1 . 1 ) with initial data
f(x)
{ X,
21r
-
x,
for 0 :::; for 1r :::;
X
x :::; 1r,
:::; 21r,
Lax-Friedrichs method
1 .6
2
3
4
Figure 2.1.4. The Lax-Friedrichs method.
5
6
FIRST·ORDER WAVE EQUATION, CONVERGENCE, AND STABILITY
47
Lax·Wendroff method
1 .6 1 .4 1 .2
0.8 0.6 0.4
2
4
3
Figure 2.1.5. The Lax-Wendroff method.
5
6
and k/h = 1/2, h = 2'11/ 10, 2'11/ 100, respectively. We show here the absolute maximum error plotted against time. The accuracy is not impressive, but there is no explosion. We now consider a rather general difference approximation of Eq. (2. l .1):
QvP ,
Q
s
L
v = -r
A p (k, h)EP,
Vo }
=
fj .
(2. 1 . 1 8)
Here the Ap are rational functions of k and h, and r, s � ° are integers. Thus, we use the s + r + 1 values vp_ r' . . . , vp+ S to calculate vp + 1 . We again consider simple wave solutions
By observing that Eeiwx
=
ei�eiwx , we obtain Q
that is,
48
MODEL EQUATIONS
(2.1 .19) We assume that the initial data belongs to L2, that is J(x) can be expanded as a Fourier series ,
f(x)
:=
1
� w= -oo
(2.1 .20)
w
For the difference approximation we use the restriction off(x) to the grid. We denote by
Nj2
1
L
� w = - Nj2
j(w)eiwx
(2. 1.21 )
the trigonometric interpolant of the gridfunction. We assume that
Nlim --7= I \ IntN f - f ll
:=
o.
(2. 1 .22)
From Theorem 1 .3.4, this convergence condition is satisfied if f is a smooth function. We now want to prove the following theorem. Theorem 2.1.1. Consider the difference approximation shown in Eq. (2. 1.18)
on a finite interval 0 $ t $ T for a sequence h , k � O. Assume that
1. The initial data satisfy Eqs. (2. 1.20) and (2. 1.22). 2. The approximation is stable, that is, there is a constant Ks such that for all k and h sup
O � tn � T
I(tl
$
Ks .
3. The approximation is consistent, that is, for every fixed w
Then the trigonometric interpolant IntNv of the solution of the difference approximation converges to the solution of the differential equation,
FIRST-ORDER WAVE EQUATION, CONVERGENCE, AND STABILITY
49
Proof. For every fixed tn , we can represent the solution of the difference approximation by its trigonometric interpolant and, therefore, we can think of the solution as being represented in terms of simple waves. From Eq. (2. 1 . 1 9), we obtain
Let 0 < M we obtain
<
N/2 be a fixed integer. From Eq. (2. 1 .5) and Parseval's relation
jl u(· , tn ) - IntN(vJ') j j 2
=
N/2
2:
w = -N/2
+
j eiw 1nj(w) - (t(nj(w ) j 2
2:
Iw [ > N/2
j j(w) j 2 ::; I
+
II
+
III,
where
I
=
II
=
M
2:
j Q n (nj(w) - iWlnj (w ) j 2 ,
w = -M
2
III = 2
2:
j j(w)j 2 ,
2:
j j(w)j 2 j Q n (n j 2 .
Iwl >M Iw [ > M
By Eq. (2. 1 .20), lim II = o. M -'> � From Eq. (2. 1 .22) and the second assumption, lim III ::; M -'> �
� 4K� Mlim -'> � .L.J .
[ w l >M
( j j(w) - j(w )j 2 + jj(w) j 2 )
o.
Finally, for every fixed M, the second and third assumptions together with Eq.
MODEL EQUATIONS
50
(2. 1 .22) imply that lim I S; 2 lim
N ---,> =
N ---,> =
M
L
w = -M
( I Q n (o(j(w) - J(w » 12
Now convergence follows easily. Let e > 0 be a constant. Choose M so large that II+ III < e /2. For sufficiently large N we also have I S; EI2 and, therefore, convergence follows. This proves the theorem. This theorem tells us that the solution of the difference approximation con verges to the solution of the differential equation if the approximation is stable and consistent. In actual calculations one uses fixed values of k and h, and wants to know the size of the error for these particular values of k and h. We will discuss such estimates in Chapter 5 . EXERCISES 2.1.1. The convergence of the solutions in Figures 2. 1 .4 and 2 . 1 .5 is rather slow. Explain why that is so and find which one of the terms I, II, or III is large for this example in the proof of Theorem 2. 1 . 1 . 2.1.2. Modify the scheme (2. 1 . 1 1 ) such that it approximates u/ = - Ux • Prove that the conditions (2. 1 . 14) and (2. 1 . 15) are also necessary for stability in this case. 2.1.3. Choose (J in Eq. (2. 1 . 1 1 ) such that Q uses only two gridpoints. What is the stability condition? 2.2. LEAP-FROG SCHEME
The difference approximations that we discussed in the previous section were all so-called one-step methods, that is, Vjn + \ could be expressed as a linear com bination of neighboring values vp_ " . . . , vJ'+ s at the previous time level. The leap-frog scheme
V}� + \ = vjn - \ + ",,I\(Vjn+ \ - Vjn_ \ ) ,
A
=
klh,
(2.2 . 1 )
i s a two-step method because vp + \ i s determined by values at two previous time levels. To start the calculation, we have to specify vJ and v) . The initial data yields vJ = /j, while v) can be determined by a one-step method. It does not need to be stable, because we use it only once. The simplest one is Eq. (2. 1 .7), that is,
51
LEAP-FROG SCHEME
vJ
= (I +
kDo)vJ
=
(I +
kDo)h·
(2.2.2)
We again seek simple wave solutions
and obtain (2.2.3) To solve Eg. (2.2.3) we make the ansatz (2.2.4) where z is a complex number. Substituting Eg. (2.2.4) into Eg. (2.2.3) gives us
and, therefore, Eg. (2.2.4) is a solution of Eg. (2.2.3) if, and only if, z satisfies the so called characteristic equation Z2 = 1 +
2iAz sin �.
(2.2.5)
For 0 < λ < 1, Eq. (2.2.5) has two distinct solutions with |z_{1,2}| = 1, given by

z_{1,2} = iλ sin ξ ± √(1 - λ² sin² ξ).   (2.2.6)

The general solution of Eq. (2.2.3) is
v̂^n(ω) = σ_1 z_1^n + σ_2 z_2^n.   (2.2.7)

The parameters σ_1 and σ_2 are determined by the initial data. If v̂^0(ω) = f̂(ω), then by Eq. (2.2.2)
and we obtain the linear system of equations

σ_1 + σ_2 = f̂(ω),
σ_1 z_1 + σ_2 z_2 = (1 + iλ sin ξ) f̂(ω).   (2.2.8)

As in the one-step case, we consider the low frequencies with |ωh| ≪ 1. Then, if λ = k/h is constant,

z_1 = 1 + iωk - ½ω²k² + O(ω³k³) = e^{iωk(1 + O(ω²k²))},
z_2 = -(1 - iωk - ½ω²k² + O(ω³k³)) = -e^{-iωk(1 + O(ω²k²))}.
After a simple calculation, Eq. (2.2.8) gives us

σ_1 = f̂(ω)(1 + O(ω²k²)),   σ_2 = O(ω²k²) f̂(ω),

and, therefore,

v̂^n(ω) = f̂(ω)(1 + O(ω²k²)) e^{iωt_n(1 + O(ω²k²))} + O(ω²k²) f̂(ω)(-1)^n e^{-iωt_n(1 + O(ω²k²))}.

Thus, the solution consists of two parts. The first part approximates the corresponding solution û(ω, t_n) = f̂(ω)e^{iωt_n}, and the error is O(ω³k²t_n). The second part oscillates rapidly and is independent of the differential equation. It is often called a parasitic solution. Luckily, the amplitude is small for ω²k² ≪ 1 and does not grow with time.

Since the leap-frog scheme uses three time levels, Theorem 2.1.1 does not apply as formulated. However, using the form of v̂^n(ω) derived above, we can again use trigonometric interpolation to prove convergence to the solution of the differential equation. Smoothness of the initial data and the stability condition
λ = k/h ≤ 1 - δ,   δ > 0,   (2.2.9)

for any sequence k, h → 0 are required for convergence (see Exercise 2.2.1).

We now discuss a property of the leap-frog scheme that can cause practical difficulties. Let a > 0 be a constant, and consider the differential equation

u_t = u_x - au.
The simple wave solutions are now of the form

u = (1/√(2π)) e^{iω(x + t)} e^{-at} f̂(ω),

which clearly decay exponentially with time. We use the approximation

v_j^{n+1} = v_j^{n-1} + 2k(D_0 - a)v_j^n.   (2.2.10)

The simple wave solutions again have the form
v_j^n = e^{iωx_j}(σ_1 z_1^n + σ_2 z_2^n), where now z_{1,2} are the solutions of

z² = 1 + (2iλ sin ξ - 2ka)z,

that is,

z_{1,2} = iλ sin ξ - ka ± √(1 + (iλ sin ξ - ka)²).
Consider the special case ω = 0. For ka ≪ 1, we have

z_1 = e^{-ka(1 + O(k²a²))},   z_2 = -e^{ka(1 + O(k²a²))},

and, as before,

v̂^n(0) = f̂(0)(1 + O(k²a²)) e^{-at_n(1 + O(k²a²))} + O(k²a²) f̂(0)(-1)^n e^{at_n(1 + O(k²a²))}.

Now the parasitic solution grows exponentially and can obliterate the exponentially decaying solution. Therefore, the (unmodified) leap-frog scheme cannot be used for long time intervals. It is easy to modify the scheme and suppress this behavior. Instead of Eq. (2.2.10), we use

(1 + ka)v_j^{n+1} = (1 - ka)v_j^{n-1} + 2kD_0 v_j^n.   (2.2.11)

Now z_{1,2} are the solutions of
(1 + ka)z² = 1 - ka + 2iλz sin ξ,

and

z_{1,2} = (iλ sin ξ ± √(1 - λ² sin² ξ - k²a²)) / (1 + ka).

Therefore,

|z_{1,2}|² = (1 - ka)/(1 + ka),

and both z_{1,2}^n decay like e^{-at_n}; that is, the solutions have the same decay rates as the solution of the differential equation.

We close this section by noting that the condition k ≤ h, found necessary for the explicit schemes of Eqs. (2.1.11) and (2.2.1), is very natural. Recall that the solution u(x, t) of Eq. (2.1.1) at any point (x̄, t̄) is determined by the value of f(x) at the point x̄ + t̄ on the x axis, because u(x, t) is constant along the characteristic x + t = x̄ + t̄ going through (x̄, t̄) and (x̄ + t̄, 0). Now assume that (x̄, t̄) is a gridpoint. Then the solution of the difference approximation at (x̄, t̄) depends on the initial data in the interval x̄ - t̄/λ ≤ x ≤ x̄ + t̄/λ (see Figure 2.2.1). If x̄ + t̄ does not belong to this interval, that is, if λ > 1, then we cannot hope to obtain an accurate approximation. The condition that the domain of dependence of the difference approximation include the domain of dependence of the differential equation is known as the Courant-Friedrichs-Lewy condition (CFL condition).
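The CFL restriction is easy to observe in computation. The following sketch is our own illustration, not from the text; it assumes the 2π-periodic grid x_j = jh with h = 2π/(N + 1) and advances the leap-frog scheme (2.2.1) for u_t = u_x once with λ = 0.9 and once with λ = 1.1:

```python
import numpy as np

def leapfrog(N, lam, steps):
    """Leap-frog (2.2.1) for u_t = u_x with f(x) = sin x:
    v_j^{n+1} = v_j^{n-1} + lam*(v_{j+1}^n - v_{j-1}^n), lam = k/h."""
    h = 2 * np.pi / (N + 1)
    x = h * np.arange(N + 1)
    vm = np.sin(x)                                        # v^0 = f
    # one-step starting procedure (2.2.2): v^1 = (I + k*D_0) v^0
    v = vm + 0.5 * lam * (np.roll(vm, -1) - np.roll(vm, 1))
    for _ in range(steps - 1):
        vp = vm + lam * (np.roll(v, -1) - np.roll(v, 1))  # periodic indexing
        vm, v = v, vp
    return x, v, steps * lam * h                          # grid, v^n, t

x, v, t = leapfrog(N=100, lam=0.9, steps=200)
err_stable = np.max(np.abs(v - np.sin(x + t)))            # exact solution is f(x + t)
x2, v_bad, t2 = leapfrog(N=100, lam=1.1, steps=200)       # lambda > 1: CFL violated
```

With λ = 0.9 the error stays at the truncation-error level, while for λ = 1.1 the round-off components with λ|sin ξ| > 1 grow geometrically and swamp the solution.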
Figure 2.2.1. Domain of dependence for an explicit difference scheme.
The domain of dependence at a certain point (x̄, t̄) for a general hyperbolic differential equation is not a single point on the x axis but rather a set of points or a whole interval.

EXERCISES

2.2.1. Prove that the solution of the leap-frog scheme converges to the solution of the differential equation if λ ≤ 1 - δ, δ > 0.

2.2.2. Derive the explicit form of the leap-frog approximation (2.2.1) for λ = 1. Is the scheme suitable for computation?

2.2.3. Let a = 10. Estimate the time interval [0, T], where the approximation (2.2.10) can be used. Does T depend on ω and/or k?

2.3. IMPLICIT METHODS
There is another way to stabilize the approximation (2.1.7). If we replace the forward difference in time by a backward difference, we get the backward Euler method

(I - kD_0)v_j^{n+1} = v_j^n,   j = 0, 1, . . . , N.   (2.3.1)

If we introduce the vector v = (v_0, . . . , v_N)^T, then we can write Eq. (2.3.1) in matrix form

Av^{n+1} = v^n.   (2.3.2)
This is called an implicit scheme, because it couples the solution values of all points at the new time level. A linear system of N + 1 equations must be solved to advance the scheme at each time step, and it, therefore, may seem to be an inefficient method. However, as we will see later, these schemes are often efficient and, in fact, the only realistic choice. The now familiar way of introducing a Fourier component yields, for Eq. (2.3 . 1 ),
(1 - iλ sin ξ) v̂^{n+1}(ω) = v̂^n(ω),

that is,

v̂^{n+1}(ω) = Q̂ v̂^n(ω),   Q̂ = 1/(1 - iλ sin ξ).

Obviously, |Q̂| ≤ 1, and, again, there is damping of all frequencies except for ω = 0, π/h. Note the important difference between Eq. (2.3.1) and the explicit schemes in Eq. (2.1.11): The stability condition |Q̂| ≤ 1 is satisfied for all values of λ. In other words, the scheme is stable for an arbitrary time step. Such schemes are called unconditionally stable. This is typical for implicit schemes.

This approximation is only first-order accurate, because the time differencing is not centered. Instead, we can use the trapezoidal rule for time differencing, and obtain the Crank-Nicholson method

(I - (k/2)D_0)v_j^{n+1} = (I + (k/2)D_0)v_j^n,   j = 0, 1, . . . , N.   (2.3.3)
The amplification factor is

Q̂ = (2 + iλ sin ξ)/(2 - iλ sin ξ).   (2.3.4)

Thus |Q̂| = 1 for all values of λ. The scheme is also unconditionally stable and, as with the leap-frog scheme, there is no damping. The explicit and implicit approximations can be combined into the so-called
θ scheme

(I - θkD_0)v_j^{n+1} = (I + (1 - θ)kD_0)v_j^n,   j = 0, 1, . . . , N.   (2.3.5)

It is unconditionally stable for θ ≥ 1/2. The parameter θ is usually chosen to be in the interval 1/2 ≤ θ ≤ 1. The reason for introducing such an approximation is that the damping can be controlled by adjusting θ.

The system (2.3.2) is most efficiently solved by a direct method. Let the nonzero elements of the matrix A be denoted by a_{ij}, where i = 0, 1, . . . , N and j = 0, 1, . . . , N. We make a factorization A = LR, where L and R have the form
L =
| X                    |
| X  X                 |
|    X  X              |
|       .  .           |
| X  X  . . .  X  X    |

R =
| X  X              X  |
|    X  X           X  |
|       .  .        .  |
|             X  X     |
|                   X  |

where X denotes a nonzero element. The nonzero elements l_{ij} and r_{ij} of L and R, respectively, are given by the recursive formulas

r_{00} = a_{00},   r_{01} = a_{01},   r_{0N} = a_{0N},
l_{j,j-1} = a_{j,j-1}/r_{j-1,j-1},   r_{jj} = a_{jj} - l_{j,j-1} r_{j-1,j},   j = 1, . . . , N - 1,
r_{j,j+1} = a_{j,j+1},   j = 1, . . . , N - 2,
r_{jN} = -l_{j,j-1} r_{j-1,N},   j = 1, . . . , N - 2,
r_{N-1,N} = a_{N-1,N} - l_{N-1,N-2} r_{N-2,N},
l_{N0} = a_{N0}/r_{00},
l_{Nj} = -l_{N,j-1} r_{j-1,j}/r_{jj},   j = 1, . . . , N - 2,
l_{N,N-1} = (a_{N,N-1} - l_{N,N-2} r_{N-2,N-1})/r_{N-1,N-1},
r_{NN} = a_{NN} - Σ_{j=0}^{N-1} l_{Nj} r_{jN}.   (2.3.6)
The system (2.3.2) is rewritten as

LRv^{n+1} = v^n.   (2.3.7)

The solution is obtained by forward and backward substitution:

Lw = v^n,   Rv^{n+1} = w.   (2.3.8)

The number of arithmetic operations for the procedure in Eqs. (2.3.6) to (2.3.8) is proportional to N. Hence, for problems in one space dimension, the work required for the implicit method is of the same order as that for an explicit method. Note, however, that on parallel or vector computers the simpler algorithmic structure of an explicit scheme may be an advantage.

The nonzero corner elements a_{0N} and a_{N0} in the matrix A are an effect of the periodicity conditions. For other types of boundary conditions, where A is tridiagonal without the corner elements, the formulas (2.3.6) still hold and we get

r_{jN} = 0,   j = 0, 1, . . . , N - 2,
l_{Nj} = 0,   j = 0, 1, . . . , N - 2.
For methods with more than three points on time level t_{n+1} coupled to each other, the bandwidth p becomes larger. The same type of solution procedure can still be applied. The matrices L and R have the same number of nonzero subdiagonals and superdiagonals, respectively, as A has, and it can be shown that O(p²N) arithmetic operations are required for the solution. For problems in two space dimensions on an N × N grid, the bandwidth is p = O(N), and a direct generalization of the method above leads to an operation count of the order of N⁴. In this case iterative methods can be considerably more efficient, and they are the only realistic methods in three space dimensions.

EXERCISES
2.3.1. Prove that Eq. (2.3.5) is unconditionally stable for θ ≥ 1/2.

2.3.2. Calculate the exact number of arithmetic operations required to advance the implicit scheme (2.3.3) by one step. Compare it with the work required to advance the explicit scheme (2.1.11) by one step.

2.3.3. Derive the direct solution algorithm for a system Av = b, where A has p nonzero diagonals. Prove that the operation count is O(p²N).
2.4. TRUNCATION ERROR
In the previous sections, we have derived several difference schemes to calculate the solution u of Eq. (2.1.1). In every case, we could write their solutions v in closed form and, therefore, we could calculate the error u - v explicitly. In this section, we discuss the truncation error, which is a measure of the accuracy of a given scheme. Instead of estimating the error u - v, we calculate how well u satisfies the difference approximation. In Chapter 6, we use the truncation error to estimate u - v. The advantage of this procedure is that it can be used when u and v are not known explicitly. It can also be used for equations with variable coefficients.

Let u be a smooth function. Using a Taylor series expansion around any point (x, t) we obtain
D_0 u(x, t) = (u(x + h, t) - u(x - h, t))/(2h)
 = u_x(x, t) + (h²/3!) u_xxx(x, t) + ψ_1(x, t),
 |ψ_1(x, t)| ≤ (h⁴/5!) max_{x-h ≤ ξ ≤ x+h} |∂⁵u(ξ, t)/∂x⁵|,   (2.4.1)

D_+D_- u(x, t) = (u(x + h, t) - 2u(x, t) + u(x - h, t))/h²
 = u_xx(x, t) + (2h²/4!) u_xxxx(x, t) + ψ_2(x, t),
 |ψ_2(x, t)| ≤ (2h⁴/6!) max_{x-h ≤ ξ ≤ x+h} |∂⁶u(ξ, t)/∂x⁶|,   (2.4.2)

(u(x, t + k) - u(x, t))/k = u_t(x, t) + (k/2) u_tt(x, t) + ψ_3(x, t),
 |ψ_3(x, t)| ≤ (k²/3!) max_{t ≤ τ ≤ t+k} |∂³u(x, τ)/∂τ³|,   (2.4.3)

(u(x, t + k) - u(x, t - k))/(2k) = u_t(x, t) + (k²/3!) u_ttt(x, t) + ψ_4(x, t),
 |ψ_4(x, t)| ≤ (k⁴/5!) max_{t-k ≤ τ ≤ t+k} |∂⁵u(x, τ)/∂τ⁵|,   (2.4.4)

(u(x, t + k) - 2u(x, t) + u(x, t - k))/k² = u_tt(x, t) + (2k²/4!) u_tttt(x, t) + ψ_5(x, t),
 |ψ_5(x, t)| ≤ (2k⁴/6!) max_{t-k ≤ τ ≤ t+k} |∂⁶u(x, τ)/∂τ⁶|.   (2.4.5)
Now assume that u is a smooth solution of Eq. (2.1.1) and substitute it into the difference scheme (2.1.11). Then we obtain from Eqs. (2.4.1) to (2.4.4) and u_t = u_x, u_tt = u_xt = u_xx,

(u_j^{n+1} - u_j^n)/k - D_0 u_j^n - σh D_+D_- u_j^n
 = (k/2) u_tt(x_j, t_n) - σh u_xx(x_j, t_n) + O(h² + k²)
 = (k/2 - σh) u_xx(x_j, t_n) + O(h² + k²) =: τ_j^n.   (2.4.6)

We call τ_j^n the truncation error and say that the method is accurate of order (p, q) if τ = O(h^p + k^q). For σ ≠ k/(2h) the method above is accurate of order (1, 1). For σ = k/(2h), the Lax-Wendroff method, the order of accuracy is (2, 2).

Equation (2.4.6) implies that u satisfies

u_j^{n+1} = (I + kD_0 + σkhD_+D_-)u_j^n + kτ_j^n.   (2.4.7)

Subtracting Eq. (2.1.11) from Eq. (2.4.7), we obtain, for the error e = u - v,

e_j^{n+1} = (I + kD_0 + σkhD_+D_-)e_j^n + kτ_j^n,   e_j^0 = 0.

We will show that e is of the same order as τ if the approximation is stable. One can also easily derive expressions for τ for the other methods. The leap-frog and the Crank-Nicholson methods are accurate of order (2,2), whereas the backward Euler method is accurate of order (2,1). Thus, we expect that the error in time will dominate when using the backward Euler method unless the solution varies much more slowly in time than in space (in the truncation error, the time step k is multiplied by time derivatives).
EXERCISES

2.4.1. When deriving the order of accuracy, Taylor expansion around some point (x*, t*) is used. Prove that (x*, t*) can be chosen arbitrarily and, in particular, that it does not have to be a gridpoint.

2.4.2. Prove that the leap-frog scheme (2.2.1) and the Crank-Nicholson scheme (2.3.3) are accurate of order (2,2). Despite the same order of accuracy, one can expect that one scheme is more accurate than the other. Why is that so?

2.5. HEAT EQUATION
In this section we consider the simplest parabolic model problem for heat conduction,

u_t = u_xx,   -∞ < x < ∞,   0 ≤ t,
u(x, 0) = f(x),   -∞ < x < ∞,   (2.5.1)

with 2π-periodic initial data. We again use the Fourier technique to obtain the solution. The differential operator ∂²/∂x² corresponds to the multiplication operator -ω² in Fourier space, and we obtain

∂û(ω, t)/∂t = -ω² û(ω, t),
û(ω, 0) = f̂(ω).   (2.5.2)
The solution of Eq. (2.5.2) is

û(ω, t) = e^{-ω²t} f̂(ω),   (2.5.3)

which yields

u(x, t) = (1/√(2π)) Σ_{ω=-∞}^{∞} e^{iωx} û(ω, t) = (1/√(2π)) Σ_{ω=-∞}^{∞} e^{iωx - ω²t} f̂(ω).   (2.5.4)

From Parseval's relation we obtain

‖u(·, t)‖² = Σ_{ω=-∞}^{∞} e^{-2ω²t} |f̂(ω)|².   (2.5.5)
Figure 2.5.1.
Equation (2.5.3) illustrates typical parabolic behavior; each Fourier component is damped with time, and the damping is very strong for high frequencies. Even if the initial data are very rough, the solution is an analytic function for t > 0, that is, the Fourier coefficients decay exponentially. In Figures 2.5.1 to 2.5.3 we have plotted the solution of Eq. (2.5.1) with initial data f(x) = 1 + sin x + sin 10x for t = 0, 10⁻², 1.

One can also show that, unlike the hyperbolic case, the speed of propagation
Figure 2.5.2.
Figure 2.5.3.
is infinite.

We now consider simple difference approximations of Eq. (2.5.1) and begin with

v_j^{n+1} = (I + kD_+D_-)v_j^n,   j = 0, 1, . . . , N.   (2.5.6)

The scheme is based on forward differencing in time and is often called the Euler method. We recall that the corresponding approximation (2.1.7) for u_t = u_x was useless, because it was unstable for any sequence k, h → 0 with k/h ≥ c > 0.
To compute the symbol Q̂, we use the basic trigonometric formulas of Section 1.2. From Eq. (1.2.5),

Q̂ = 1 - 4σ sin²(ξ/2),   (2.5.7)

where σ = k/h², ξ = ωh. The transformed difference scheme is then

v̂^{n+1}(ω) = Q̂ v̂^n(ω).   (2.5.8)

The condition |Q̂| ≤ 1 is equivalent to

σ ≤ 1/2.   (2.5.9)
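The restriction (2.5.9) can be checked experimentally. The sketch below (our own illustration; grid and initial data as in the figures of this section) runs the Euler scheme (2.5.6) once with σ = 0.4 and once with σ = 0.6:

```python
import numpy as np

def euler_heat(N, sigma, steps):
    """Forward Euler (2.5.6) for u_t = u_xx on the 2*pi-periodic grid;
    sigma = k/h**2 is held fixed, so the scheme is stable for sigma <= 1/2."""
    h = 2 * np.pi / (N + 1)
    k = sigma * h**2
    x = h * np.arange(N + 1)
    v = 1 + np.sin(x) + np.sin(10 * x)   # the initial data used in the figures
    for _ in range(steps):
        v = v + sigma * (np.roll(v, -1) - 2 * v + np.roll(v, 1))
    return x, v, steps * k

x, v, t = euler_heat(N=100, sigma=0.4, steps=400)
exact = 1 + np.exp(-t) * np.sin(x) + np.exp(-100 * t) * np.sin(10 * x)
err = np.max(np.abs(v - exact))
x2, v_bad, t2 = euler_heat(N=100, sigma=0.6, steps=400)   # sigma > 1/2
```

For σ = 0.6 the highest grid frequencies have |Q̂| > 1; round-off errors excite them and they grow like |Q̂|^n, so the computed solution blows up even though the initial data are smooth.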
Figure 2.5.4.
We have calculated approximations of the same solution we plotted in Figures 2.5.1 to 2.5.3 using Eq. (2.5.6) with N = 100. In Figures 2.5.4 and 2.5.5, we have plotted the error u - v for t = 10⁻², 1. The condition given in Eq. (2.5.9) implies that the time step k must be chosen proportional to h². This is often too restrictive. On the other hand, it is natural for an explicit scheme. As noted above, there is no finite speed of propagation for parabolic problems. This means that the domain of dependence of the difference scheme must cover the whole interval in the limit k → 0, h → 0,
Figure 2.5.5.
even for points (x, t) arbitrarily close to the x axis; otherwise the approximation cannot converge to the true solution. Figure 2.5.6 shows the expanding domain of dependence for fixed t, decreasing h, and k = σh².

The leap-frog scheme approximating Eq. (2.5.1) is

v_j^{n+1} = 2kD_+D_- v_j^n + v_j^{n-1},   j = 0, 1, . . . , N.   (2.5.10)

The solution in Fourier space is of the form shown in Eq. (2.2.7), where z_1, z_2 are the roots of the characteristic equation

z² + 8σ(sin²(ξ/2))z - 1 = 0,   (2.5.11)
that is,

z_{1,2} = -4σ sin²(ξ/2) ± √(1 + 16σ² sin⁴(ξ/2)).

For ξ ≠ 0, one of z_{1,2} is larger than one in magnitude for all values of σ > 0; the scheme is useless, since it is unstable for any sequence k, h → 0 with k/h² ≥ c > 0.
A small modification can be made to stabilize the scheme. Equation (2.5.10)
Figure 2.5.6.
can be written

v_j^{n+1} = 2σ(v_{j+1}^n - 2v_j^n + v_{j-1}^n) + v_j^{n-1},

and if we replace v_j^n by (v_j^{n+1} + v_j^{n-1})/2, then we obtain

v_j^{n+1} = 2σ(v_{j+1}^n - v_j^{n+1} - v_j^{n-1} + v_{j-1}^n) + v_j^{n-1},   j = 0, 1, . . . , N.   (2.5.12)
This is known as the DuFort-Frankel method. It is still explicit, since we can solve for v_j^{n+1} and write it as

v_j^{n+1} = (2σ(v_{j+1}^n + v_{j-1}^n) + (1 - 2σ)v_j^{n-1})/(1 + 2σ).

Now the characteristic equation is

z² - (4σ(cos ξ)/(1 + 2σ))z - (1 - 2σ)/(1 + 2σ) = 0,

that is,

z_{1,2} = (2σ cos ξ ± √A)/(1 + 2σ),   (2.5.13)

where A = 4σ² cos² ξ + 1 - 4σ². If A ≥ 0, then A ≤ 1, and

|z_{1,2}| ≤ 2σ/(1 + 2σ) + 1/(1 + 2σ) = 1.

If A < 0, then we write

z_{1,2} = (1/(1 + 2σ))(2σ cos ξ ± i√(4σ²(1 - cos² ξ) - 1)),

and get

|z_{1,2}|² = (2σ - 1)/(2σ + 1) < 1.   (2.5.14)
The DuFort-Frankel approximation is unconditionally stable, which is somewhat surprising, since it is explicit. The time step can be chosen independently of the space step. This seems to contradict the conclusion above that the domain of dependence must expand as h decreases. However, this apparently contradictory behavior is an illustration of the fact that stability is only a necessary condition for convergence. It does not guarantee that solutions are accurate approximations. It only guarantees that solutions remain bounded. To investigate the order of accuracy we calculate the truncation error. From Eqs. (2.4.1) to (2.4.6),
τ = u_t - u_xx + (k²/h²) u_tt + O(k² + h² + k⁴/h²).   (2.5.15)

Thus, lim_{k,h→0} τ = 0 only if lim_{k,h→0} k/h = 0. Typically, one chooses

k = h^{1+δ},   δ > 0.   (2.5.16)
Then the truncation error is O(h^{2δ}), and the method is only accurate of order (2,2) if δ = 1, that is, k = O(h²), essentially the same restriction as that required for the explicit scheme.

We now examine analogues of the implicit schemes introduced in the previous section. The backward Euler approximation is

(I - kD_+D_-)v_j^{n+1} = v_j^n,   j = 0, 1, . . . , N,   (2.5.17)

with the amplification factor

Q̂ = 1/(1 + 4σ sin²(ξ/2)),   σ = k/h².   (2.5.18)

The magnitude of Q̂ is never greater than one, independent of σ, and all nonzero frequencies are damped. Note that, as for the differential equation, the damping is stronger for larger ω.
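Because the problem is periodic, an implicit step can also be carried out mode by mode in Fourier space, which gives a compact way to experiment with the scheme. The sketch below (our own illustration; it uses NumPy's FFT ordering, so ξ = 2π·fftfreq) advances backward Euler with the unconditionally stable choice k = h:

```python
import numpy as np

def backward_euler_heat(v0, k, h, steps):
    """Backward Euler (2.5.17) for u_t = u_xx with periodic data, advanced in
    Fourier space: each mode is divided by 1 + 4*sigma*sin(xi/2)**2."""
    n = len(v0)
    sigma = k / h**2
    xi = 2 * np.pi * np.fft.fftfreq(n)          # xi = omega*h for grid modes
    Q = 1.0 / (1.0 + 4.0 * sigma * np.sin(xi / 2.0)**2)
    vhat = np.fft.fft(v0)
    for _ in range(steps):
        vhat = vhat * Q
    return np.real(np.fft.ifft(vhat))

N = 100
h = 2 * np.pi / (N + 1)
x = h * np.arange(N + 1)
k = h                                           # k = h is allowed here
steps = int(1.0 / k)
v = backward_euler_heat(1 + np.sin(x) + np.sin(10 * x), k, h, steps)
t = steps * k
exact = 1 + np.exp(-t) * np.sin(x) + np.exp(-100 * t) * np.sin(10 * x)
err = np.max(np.abs(v - exact))
```

The error is O(k) in time, consistent with the order (2,1) noted in Section 2.4, but the computation is stable even though k ≫ h².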
Figure 2.5.7.
In Figures 2.5.7 and 2.5.8, we show the error of backward Euler calculations with k = h and N = 100 for t = 10⁻², 1, respectively. The initial data are the same as in Figure 2.5.1.

Figure 2.5.8.

The Crank-Nicholson scheme

(I - (k/2)D_+D_-)v_j^{n+1} = (I + (k/2)D_+D_-)v_j^n,   j = 0, 1, . . . , N,   (2.5.19)
has the amplification factor

Q̂ = (1 - 2σ sin²(ξ/2))/(1 + 2σ sin²(ξ/2)),   σ = k/h²,   (2.5.20)
and, like the backward Euler method, it is unconditionally stable. However, when σ is large, Q̂ is near -1 for ξ ≠ 0, and there is very little damping. This is a serious drawback, because one would like to use time steps of the same order as the space step. With that choice, we get σ = O(1/h), and Q̂ → -1 as h → 0 for every fixed ξ (i.e., as ω → ∞). We have calculated the approximate solution of Eq. (2.5.1) with the same initial data as before using the Crank-Nicholson method with k = h and N = 100 for t = 10⁻², 1. The error is plotted in Figures 2.5.9 and 2.5.10. There is now an oscillating error (see Exercise 2.5.3).

We can also combine the two implicit schemes for parabolic equations, obtaining the θ scheme

(I - θkD_+D_-)v_j^{n+1} = (I + (1 - θ)kD_+D_-)v_j^n,   j = 0, 1, . . . , N,   0 ≤ θ ≤ 1,   (2.5.21)

which is unconditionally stable for θ ≥ 1/2 (see Exercise 2.5.2). As in the hyperbolic case, the damping increases with θ up to θ = 1 (backward Euler), but the accuracy decreases.
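The weak damping of Crank-Nicholson for large σ is visible directly in the amplification factor (2.5.20); the two test values below are our own:

```python
import numpy as np

def Q_cn(sigma, xi):
    """Crank-Nicholson amplification factor (2.5.20), sigma = k/h**2."""
    s = sigma * np.sin(xi / 2.0) ** 2
    return (1.0 - 2.0 * s) / (1.0 + 2.0 * s)

xi = np.pi                    # the highest frequency on the grid
Q_modest = Q_cn(0.5, xi)      # k ~ h**2: |Q| well below 1, reasonable damping
Q_large = Q_cn(50.0, xi)      # k ~ h: Q is close to -1, almost no damping
```

With σ = 50 the factor is -99/101 ≈ -0.98: the high frequencies are barely damped and change sign every step, which is exactly the oscillating error seen in Figure 2.5.9.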
Figure 2.5.9.

Figure 2.5.10.
EXERCISES

2.5.1. Assume that the initial data for Eq. (2.5.1) is a simple wave f(x) = e^{iωx}. Determine the time t_1 where ‖u(·, t_1)‖ = 10⁻⁶. Apply the Euler method from Eq. (2.5.6), and calculate the corresponding time t_2. Determine the optimal time step for a given h.

2.5.2. Prove that the θ scheme from Eq. (2.5.21) is unconditionally stable for θ ≥ 1/2.

2.5.3. Derive the truncation error for the backward Euler and the Crank-Nicholson methods applied to u_t = u_xx. Prove that it is O(h² + k) and O(h² + k²), respectively. Despite this fact, at certain times the backward Euler method is more accurate for the example computed in this section. Explain this paradox.

2.6. CONVECTION-DIFFUSION EQUATION
In many applications, the differential equations have both first- and second-order derivatives in space. We now consider the model problem for convection-diffusion,

u_t + au_x = ηu_xx,   -∞ < x < ∞,   0 ≤ t,   η constant > 0,
u(x, 0) = f(x),   -∞ < x < ∞,   (2.6.1)
with 21r-periodic initial data. In Fourier space, the corresponding problem is
∂û(ω, t)/∂t + iaω û(ω, t) = -ηω² û(ω, t),
û(ω, 0) = f̂(ω),   (2.6.2)

with solutions

û(ω, t) = e^{-(iaω + ηω²)t} f̂(ω).   (2.6.3)

Consider the difference approximation

v_j^{n+1} = (I - akD_0 + ηkD_+D_-)v_j^n,   j = 0, 1, . . . , N.   (2.6.4)

The amplification factor is

Q̂ = 1 - 2α sin²(ξ/2) - iλ sin ξ,   α = 2ηk/h²,   λ = ak/h.   (2.6.5)

The "parabolic" stability condition for the case a = 0 is

α ≤ 1.   (2.6.6)

For a ≠ 0 we have

|Q̂|² = (1 - 2α sin²(ξ/2))² + λ² sin² ξ = 1 + 4(λ² - α)s + 4(α² - λ²)s²,   (2.6.7)

where s = sin²(ξ/2). Thus, |Q̂| ≤ 1 for all ξ if, and only if,

φ(s) := (λ² - α) + (α² - λ²)s ≤ 0,   0 ≤ s ≤ 1.   (2.6.8)

φ(s) is a linear function of s and, therefore, Eq. (2.6.8) holds if and only if

φ(0) = λ² - α ≤ 0,   φ(1) = α² - α ≤ 0,

that is,

a²k ≤ 2η,   2ηk ≤ h².   (2.6.9)
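The two conditions can be checked directly against the amplification factor. In the sketch below (ours; symbols as in (2.6.5), so λ = ak/h and α = 2ηk/h²), one parameter pair satisfies both conditions and one violates λ² ≤ α:

```python
import numpy as np

def Q_cd(lam, alpha, xi):
    """Amplification factor of (2.6.4): lam = a*k/h, alpha = 2*eta*k/h**2."""
    return 1 - 2 * alpha * np.sin(xi / 2) ** 2 - 1j * lam * np.sin(xi)

xi = np.linspace(0.0, 2.0 * np.pi, 721)
g_ok = np.max(np.abs(Q_cd(0.5, 0.5, xi)))   # lam**2 <= alpha and alpha <= 1
g_bad = np.max(np.abs(Q_cd(0.8, 0.5, xi)))  # lam**2 > alpha: stability lost
```

In the second case |Q̂| exceeds one for small ξ, i.e., the smooth modes are the ones that grow: the diffusion term is too weak to stabilize the hyperbolic part.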
The conditions in Eq. (2.6.9) can be interpreted in this way: The parabolic term makes it possible to stabilize the approximation of the hyperbolic part. However, the coefficient η must be large enough compared to a (or k small enough) in order to provide enough damping. Furthermore, the damping of the method is always less than that of the differential equation. The true parabolic decay rate for Eq. (2.6.1) is not preserved by the approximation. Part of it is required to stabilize the hyperbolic part. As η becomes small, this becomes more severe.

In the previous section, it was noted that, for parabolic problems, the stability restriction on the time step for explicit schemes (except the DuFort-Frankel method) is often too severe, and implicit approximations should be used. Implicit methods can also be used when first-order derivatives are present. For example, the Crank-Nicholson method

(I + (ak/2)D_0 - (ηk/2)D_+D_-)v_j^{n+1} = (I - (ak/2)D_0 + (ηk/2)D_+D_-)v_j^n,   j = 0, 1, . . . , N,   (2.6.10)

is unconditionally stable. In applications, however, the hyperbolic part is often nonlinear, that is, a = a(u), and a nonlinear system of equations must be solved at each step. In this case, it is convenient to use a so-called semi-implicit method. The simplest approximation of this kind for our problem is
(I - kηD_+D_-)v_j^{n+1} = -2kaD_0 v_j^n + (I + kηD_+D_-)v_j^{n-1},   j = 0, 1, . . . , N,   (2.6.11)

which is a combination of the leap-frog approximation and the Crank-Nicholson approximation. v_j^1 must be computed by some other one-step method. In Fourier space, Eq. (2.6.11) yields
(1 + 4σ sin²(ξ/2)) v̂^{n+1}(ω) = -2iλ(sin ξ) v̂^n(ω) + (1 - 4σ sin²(ξ/2)) v̂^{n-1}(ω),
σ = ηk/h²,   λ = ak/h.   (2.6.12)
The corresponding characteristic equation is

z² + (2iλ sin ξ/(1 + β))z - (1 - β)/(1 + β) = 0,   β = 4σ sin²(ξ/2),   (2.6.13)
with solutions

z_{1,2} = (-iλ sin ξ ± √(1 - β² - λ² sin² ξ))/(1 + β).   (2.6.14)
First, assume that the square root is real, that is,

β² + λ² sin² ξ ≤ 1.   (2.6.15)

Then,

|z_{1,2}|² = (λ² sin² ξ + 1 - β² - λ² sin² ξ)/(1 + β)² = (1 - β)/(1 + β) ≤ 1.

Next, assume that

β² + λ² sin² ξ > 1.   (2.6.16)

Then the roots are purely imaginary, and we have, for |λ| < 1,

|z_{1,2}| ≤ (|λ| |sin ξ| + √(β² + λ² sin² ξ - 1))/(1 + β) ≤ (|λ| + β)/(1 + β) < 1.   (2.6.17)
Thus |z_{1,2}| ≤ 1 for |λ| ≤ 1, which is the same stability condition we obtained for the leap-frog approximation in Eq. (2.2.1) of u_t = u_x. Note, however, that z_1 = z_2 for β² + λ² sin² ξ = 1. Then the representation v̂^n(ω) = σ_1 z_1^n + σ_2 z_2^n becomes v̂^n(ω) = (σ_1 + σ_2 n)z_1^n. Because |z_1| ≤ |λ| < 1 in this case, we have n|z_1|^n ≤ constant independent of n. Thus, |v̂^n(ω)| is bounded independent of ω, n, and the scheme is stable. We shall consider the approximation in Eq. (2.6.11) in a more general setting in Chapter 5.

The time step can be chosen to be of the same order as the space step with the semi-implicit scheme (2.6.11), which is a substantial gain in efficiency compared to an explicit scheme. This was achieved without involving the whole difference operator at the new time level.
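The root bound for the semi-implicit scheme can be confirmed numerically. The sketch below (ours; parameter values arbitrary) clears the denominator in (2.6.13) and computes the roots of (1 + β)z² + 2iλ(sin ξ)z - (1 - β) = 0:

```python
import numpy as np

def max_root(lam, sigma):
    """Largest |z| over xi for the characteristic equation of (2.6.11);
    lam = a*k/h, sigma = eta*k/h**2, beta = 4*sigma*sin(xi/2)**2."""
    worst = 0.0
    for xi in np.linspace(0.0, np.pi, 181):
        beta = 4 * sigma * np.sin(xi / 2) ** 2
        z = np.roots([1 + beta, 2j * lam * np.sin(xi), -(1 - beta)])
        worst = max(worst, float(np.max(np.abs(z))))
    return worst

stable = max_root(lam=0.9, sigma=10.0)    # |lam| < 1: stable for any sigma
unstable = max_root(lam=1.2, sigma=0.0)   # sigma = 0 reduces to pure leap-frog
```

For |λ| < 1 the maximum is 1 no matter how large σ (i.e., how large the time step relative to h²) is chosen, while for |λ| > 1 even the σ = 0 limit has growing roots.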
EXERCISES
2.6.1. Write a program that computes the solutions to Eq. (2.6.4) for N = 10, 20, 40, . . . . Choose the time step such that

1. α, defined in (2.6.5), is a constant with α ≤ 1,
2. λ, defined in (2.6.5), is a constant with |λ| ≤ 1.

Compare the solutions and explain the difference in their behavior.

2.6.2. Newton's method for a nonlinear system F(v) = 0 is defined by

v^{(m+1)} = v^{(m)} - [F′(v^{(m)})]⁻¹ F(v^{(m)}),   m = 0, 1, . . . ,
2.7.
HIGHER ORDER EQUATIONS
In this section, we briefly discuss differential equations of the form
au ap u a-at axp , u(x, O) f(x),
p � 1, - 00
=
<
x
- 00
<
<
x
<
00,
00,
t � 0,
(2.7. 1 a) (2.7. 1b)
where a is a complex number and f is 2π-periodic. In Fourier space, Eq. (2.7.1) becomes
∂û(ω, t)/∂t = a(iω)^p û(ω, t),   (2.7.2a)
û(ω, 0) = f̂(ω),   (2.7.2b)

that is,

û(ω, t) = e^{a(iω)^p t} f̂(ω),

with

|û(ω, t)| = e^{Re[a(iω)^p] t} |f̂(ω)|.   (2.7.3)
For the problem to be well posed, it is sufficient that the condition

Re[a(iω)^p] ≤ 0   (2.7.4)

be fulfilled for all ω. This ensures that the solution will satisfy the estimate

‖u(·, t)‖ ≤ ‖f‖.   (2.7.5)

Since ω is real and can be positive or negative, we obtain the condition

sign(Re a) = (-1)^{p/2 + 1},   if Re a ≠ 0 and p is even,   (2.7.6a)
Im a = 0,   if p is odd.   (2.7.6b)
The most natural centered difference approximation is given by

Q_p = a(D_+D_-)^{p/2},   p even,
Q_p = aD_0(D_+D_-)^{(p-1)/2},   p odd,   (2.7.7)

which, in Fourier space, yields

Q̂_p = a(-(4/h²) sin²(ξ/2))^{p/2},   p even,
Q̂_p = a(i sin ξ/h)(-(4/h²) sin²(ξ/2))^{(p-1)/2},   p odd.   (2.7.8)
If a is real, the Euler method can always be used if p is even, and the leap-frog scheme can always be used if p is odd. This follows directly from the calculations made in Sections 2.2 and 2.5. For the Euler method, we have

v̂^{n+1}(ω) = (1 + kQ̂_p) v̂^n(ω),   (2.7.9)

where Q̂_p is defined in Eq. (2.7.8). If a is real, stability requires that the condition

(-1)^{p/2 - 1} · 4^{p/2} ak/h^p ≤ 2,   p even,   (2.7.10)

be satisfied. For p ≥ 4, the time step restriction is so severe that the method cannot be used in any realistic computation. Similarly, for the leap-frog scheme, we obtain a condition of the form

ak/h^p ≤ constant,   p odd.   (2.7.11)

[This is easily seen if we follow the calculations leading to Eq. (2.2.9).] p = 1 is the practical limit in several space dimensions, and we conclude that implicit
methods are necessary for higher order equations. (In one space dimension, one could possibly use explicit methods for p = 2, 3).
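For p = 2 and a > 0 the condition (2.7.10) reduces to 4ak/h² ≤ 2, that is, σ = ak/h² ≤ 1/2, consistent with Section 2.5. A quick numerical check (ours; grid width and step sizes arbitrary):

```python
import numpy as np

def euler_growth(p, a, k, h):
    """max |1 + k*Q_p| over the grid frequencies, Q_p as in (2.7.8), p even."""
    xi = np.linspace(0.0, 2.0 * np.pi, 721)
    Qp = a * (-(4.0 / h**2) * np.sin(xi / 2.0) ** 2) ** (p // 2)
    return np.max(np.abs(1.0 + k * Qp))

h = 0.1
g_ok = euler_growth(2, 1.0, 0.5 * h**2, h)   # 4ak/h**2 = 2: the limit case
g_bad = euler_growth(2, 1.0, 0.6 * h**2, h)  # slightly too large a time step
```

At the limit the growth factor is exactly 1; just beyond it the highest frequency already grows by 40% per step, which illustrates how sharp the restriction is (and how rapidly it worsens as p increases, since k must scale like h^p).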
EXERCISES
2.7.1. What explicit method could be used for the Schrödinger type equation

u_t = iu_xx?   (2.7.12)

Derive the stability condition.

2.7.2. Define the Crank-Nicholson approximation for the Korteweg-de Vries type equation

u_t = au_xxx.   (2.7.13)

Prove unconditional stability.

2.7.3. Define a semi-implicit approximation suitable for the efficient solution of Eq. (2.7.13) with a = a(u). Derive the stability condition (for a = constant).

2.8. GENERALIZATION TO SEVERAL SPACE DIMENSIONS
In two space dimensions the hyperbolic model problem becomes

∂u/∂t = ∂u/∂x₁ + ∂u/∂x₂,   (2.8.1)

with initial data

u(x, 0) = f(x),

which is 2π-periodic in x₁ and x₂. If

f(x) = (1/2π) Σ_ω e^{i(ω, x)} f̂(ω),

we make the ansatz

u = (1/2π) e^{i(ω, x)} û(ω, t),   û(ω, 0) = f̂(ω),

and obtain

dû(ω, t)/dt = i(ω₁ + ω₂) û(ω, t).

Thus,

û(ω, t) = e^{i(ω₁ + ω₂)t} f̂(ω)

is the solution of our problem. For general f, we obtain, by the principle of superposition,

u(x, t) = (1/2π) Σ_ω e^{i(ω, x)} e^{i(ω₁ + ω₂)t} f̂(ω).   (2.8.2)
Thus, we can solve the problem as we did in the one-dimensional case. Also, the solution is constant along the characteristics, which are the lines x₁ + t = constant and x₂ + t = constant.

We now discuss difference approximations. We introduce a time step, k > 0, and a two-dimensional grid by

x_{ν,j} = jh,   j = 0, ±1, ±2, . . . ,   ν = 1, 2,   h = 2π/(N + 1),

and gridfunctions by

v_j^n = v(x_{1,j₁}, x_{2,j₂}, t_n),   t_n = nk.

Corresponding to Eq. (2.1.7), we now have

v_j^{n+1} = (I + k(D_{0x₁} + D_{0x₂}))v_j^n.   (2.8.3)

Its Fourier transform is
v̂^{n+1}(ω) = (1 + iλ(sin ξ₁ + sin ξ₂)) v̂^n(ω).   (2.8.4)
It is of the same form as Eq. (2.1.9). Therefore, the approximation is not useful. We add artificial viscosity, that is, we consider

v_j^{n+1} = (I + k(D_{0x₁} + D_{0x₂}) + σkh(D_{+x₁}D_{-x₁} + D_{+x₂}D_{-x₂}))v_j^n.   (2.8.5)
As before, we can choose λ = k/h and σ > 0 such that |Q̂| ≤ 1; that is, the approximation is stable.
The leap-frog and the Crank-Nicholson approximations are also easily generalized. They are
v_j^{n+1} = v_j^{n-1} + 2k(D_{0x₁} + D_{0x₂})v_j^n,   (2.8.6a)

(I - (k/2)(D_{0x₁} + D_{0x₂}))v_j^{n+1} = (I + (k/2)(D_{0x₁} + D_{0x₂}))v_j^n,   (2.8.6b)
U(x , O) = f(x). Like the one-dimensional problem, its solution
becomes "smoother" with time because the highly oscillatory waves ( I w I » 1 ) are rapidly damped. We can easily construct difference approximations anal ogous to those used for the one-dimensional problem. We need only replace D+D_ in Section 2.5 by D+XI D x I + D+X2 D_X2 • The analysis proceeds as before. The explicit Euler method in Eq. (2.5.6) is stable for a = k/h2 $ i, whereas the implicit Euler, Crank-Nicholson, and DuFort-Frankel methods are uncon ditionally stable. -
EXERCISES
2.8.1. Derive the stability condition for the leap-frog approximation to u_t = u_x + u_y + u_z, where the step sizes Δx, Δy, Δz may be different.

2.8.2. Derive the stability condition for the Euler approximation to u_t = u_xx + u_yy + u_zz. Prove that the DuFort-Frankel method is unconditionally stable for the same equation.
BIBLIOGRAPHIC NOTES

Most of the difference schemes introduced in this chapter were developed very early, in several cases before the electronic computer was invented. The leap-frog scheme was discussed in a classical paper by Courant, Friedrichs, and Lewy (1928). In the same paper, the so-called CFL condition was introduced, that is, the domain of dependence of the difference scheme must include the domain of
dependence of the differential equation. Today one often uses the term "CFL number," which, for the model equation u_t = au_x, means λ = ka/h.

The Lax-Friedrichs scheme was introduced for conservation laws u_t = F(u)_x by Lax (1954), and the Lax-Wendroff method was presented in its original form by Lax and Wendroff (1960). Various versions have been presented later, but for the simple model equations we have been considering so far, they are identical. Any approximation of a hyperbolic equation that is a one-step explicit scheme with a centered second-order accurate approximation, complemented with a damping term of second order, is usually called a Lax-Wendroff type approximation.

The Crank-Nicholson approximation was initially constructed for parabolic heat conduction problems by Crank and Nicholson (1947). The same name has later been used for other types of equations, where centered difference operators are used in space and the trapezoidal rule is used for discretization in time. The DuFort-Frankel method for parabolic problems was introduced by DuFort and Frankel (1953).

Stability analysis based on Fourier modes as presented here goes back to von Neumann, who used it at Los Alamos National Laboratory during World War II. It was first published by Crank and Nicholson (1947) and later by Charney, Fjortoft, and von Neumann (1950), von Neumann and Richtmyer (1950), and O'Brien, Hyman, and Kaplan (1951).

In this book we use Fourier series representations of periodic functions, but one could as well use Fourier integral representations of general L₂ functions:
f(x) = (1/√(2π)) ∫_{-∞}^{∞} f̂(ω) e^{iωx} dω,
f̂(ω) = (1/√(2π)) ∫_{-∞}^{∞} f(x) e^{-iωx} dx;
see, for example, Richtmyer and Morton (1967). The gridfunctions can be extended such that they are defined everywhere if the initial function is defined everywhere. In that way, the Fourier integrals are also defined for the solutions to the difference approximations. The Fourier transformed equations are exactly the same as they are for Fourier series and, accordingly, the stability conditions derived in Fourier space will be identical.

The stability condition (2.1.10) is the one that corresponds to the general stability definition, which will be given in Chapter 5. We notice that the condition |Q̂| ≤ 1 used in all our examples is stronger, since |Q̂| ≤ 1 + O(k) is sufficient for Eq. (2.1.10) to hold. However, if k/h is kept constant, our approximations do not depend explicitly on k for the hyperbolic model equation. The same conclusion holds for the parabolic model problem if k/h² is constant. Therefore, |Q̂| ≤ 1 is the only possibility for a stable scheme. On the other hand, for the equation u_t = u_x + u, we must necessarily have |Q̂| = 1 + O(k), and this motivates the more general stability condition.
3 HIGHER ORDER ACCURACY
3.1. EFFICIENCY OF HIGHER ORDER ACCURATE DIFFERENCE APPROXIMATIONS

In this section, we develop higher order accurate difference approximations and compare the efficiency of different methods. To illustrate the basic ideas, we first consider discretizations in space only. Later, the results for fully discretized methods will be summarized. We begin again with the model problem
u_t + a u_x = 0,   u(x, 0) = e^{iωx}.   (3.1.1)

For simplicity we assume that a > 0. As in Section 2.1, the solution is easily computed, and we have

u(x, t) = e^{iω(x − at)},   (3.1.2)

that is, the wave propagates with speed a to the right. The simplest centered difference approximation is

dv_j(t)/dt + a D_0 v_j(t) = 0,   v_j(0) = e^{iωx_j},   j = 0, 1, ..., N.   (3.1.3)
The method of discretizing only the spatial variables is often called the method of lines. It yields a set of ordinary differential equations (ODEs) for the grid values. In general, one has to solve the ODEs numerically; in the method of lines, usually a standard ODE solver is used. The solution of Eq. (3.1.3) is
v_j(t) = e^{iω(x_j − a_2 t)},   a_2 = a (sin ξ)/ξ,   ξ = ωh.   (3.1.4)

(The subscript 2 is used here to indicate that the difference operator is second order accurate.) Therefore, the error e_2 = ||u − v||_∞ := max_{0≤j≤N} |u(x_j) − v_j| satisfies

e_2 = ||e^{iω(x_j − at)} − e^{iω(x_j − a_2 t)}||_∞ = ||e^{−iωat} − e^{−iωa_2 t}||_∞ = ωta(1 − (sin ξ)/ξ) + 𝒪(ξ⁴),   ξ = ωh,   (3.1.5)
where, for convenience, we have assumed that ω ≥ 0. Using results from Section 1.3, we can represent a general gridfunction by its interpolating polynomial. The largest wave number that can be represented on the grid is ω = N/2. Therefore, we consider simple waves with 0 < ω ≤ N/2. If ω is much less than N/2 in magnitude, then there are many gridpoints per wavelength, and the solution is well represented by the gridfunction v_j(t). In this case, ξ is a small number, and we can neglect 𝒪(ξ⁴) and obtain

e_2 ≈ ωta ξ²/6.   (3.1.6)

If ξ is large, corresponding to fewer gridpoints per wavelength in the solution, then the difference operator D_0 will not yield a good approximation of ∂/∂x, and approximations Q_p of order p > 2 are required. It is convenient to represent Q_p as a perturbation of D_0. We make the ansatz
Q_p = D_0 ∑_{ν=0}^{p/2−1} (−1)^ν α_ν (h² D_+ D_−)^ν.   (3.1.7)
To determine the coefficients, we apply the operator to smooth functions u(x). It is sufficient to apply Q_p to functions e^{iωx}, because general 2π-periodic functions can be written as linear combinations of these functions. We obtain

Q_p e^{iωx} = (i/h) sin ξ ∑_{ν=0}^{p/2−1} α_ν (2 sin(ξ/2))^{2ν} e^{iωx},   ξ = ωh

[cf. Eqs. (1.2.3) and (1.2.4)]. If Eq. (3.1.7) is accurate of order p, then

sin ξ ∑_{ν=0}^{p/2−1} α_ν (2 sin(ξ/2))^{2ν} = ξ + 𝒪(ξ^{p+1}).
The substitution θ = sin(ξ/2) yields

(arcsin θ)/√(1 − θ²) = θ ∑_{ν=0}^{p/2−1} α_ν 2^{2ν} θ^{2ν} + 𝒪(θ^{p+1}).
By expanding the left-hand side in a Taylor series, the α_ν are uniquely determined (see Exercise 3.1.5). Clearly, Q_p is accurate of order p when the α_ν are determined this way. We formulate the result in Lemma 3.1.1.
Lemma 3.1.1. The centered difference operator Q_p, which approximates ∂/∂x with accuracy of order p, has the form (3.1.7) with

α_0 = 1,   α_ν = (ν/(4ν + 2)) α_{ν−1},   ν = 1, 2, ..., p/2 − 1.
The fourth- and sixth-order accurate operators are

Q_4 = D_0 (I − (h²/6) D_+ D_−),   (3.1.8)
Q_6 = D_0 (I − (h²/6) D_+ D_− + (h⁴/30)(D_+ D_−)²).   (3.1.9)
The form of these operators could, of course, be obtained by using Taylor series expansions directly in physical space. For example,

D_0 u = u_x + (h²/6) u_xxx + 𝒪(h⁴).
Thus, if the leading error term is eliminated by using a second-order accurate approximation of u_xxx, then a fourth-order approximation is obtained. The natural centered approximation is

u_xxx ≈ D_+ D_− D_0 u,   (3.1.10)

and a Taylor series expansion shows that

D_+ D_− D_0 u = u_xxx + 𝒪(h²).   (3.1.11)

This procedure leads to the operator Q_4 in Eq. (3.1.8).
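These orders of accuracy are easy to verify numerically. The following sketch (our own check, not from the book; the grid and test function are arbitrary choices) applies Q_4 = D_0(I − (h²/6)D_+D_−) to u(x) = sin x on a sequence of periodic grids and measures the convergence rate:

```python
import numpy as np

def q4(v, h):
    """Q4 = D0 (I - h^2/6 D+D-) on a periodic grid, using np.roll for shifts."""
    dpdm = (np.roll(v, -1) - 2 * v + np.roll(v, 1)) / h**2   # D+D- v
    w = v - (h**2 / 6) * dpdm
    return (np.roll(w, -1) - np.roll(w, 1)) / (2 * h)        # D0 w

errs = []
for N in (16, 32, 64):
    h = 2 * np.pi / N
    x = h * np.arange(N)
    errs.append(np.max(np.abs(q4(np.sin(x), h) - np.cos(x))))

# observed convergence rates between successive grids; they should be near 4
rates = [np.log2(errs[i] / errs[i + 1]) for i in range(2)]
```

The measured rates are close to 4, consistent with a leading error term of order h⁴.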
We now give precise comparisons of the operators Q_2, Q_4, and Q_6. The error for Q_2 was given in Eq. (3.1.5). For Q_4 and Q_6, we obtain, correspondingly,

e_4 = ωta [1 − ((sin ξ)/ξ)(1 + (2/3) sin²(ξ/2))] + 𝒪(ξ⁶),   (3.1.12a)
e_6 = ωta [1 − ((sin ξ)/ξ)(1 + (2/3) sin²(ξ/2) + (8/15) sin⁴(ξ/2))] + 𝒪(ξ⁸).   (3.1.12b)
We assume that |ξ| « 1 and neglect the 𝒪(ξ^{p+2}) term. The time required for a particular feature of the solution to traverse the interval [0, 2π] is 2π/a. Because we are considering wave numbers ω larger than one in magnitude, the same feature occurs ω times during the same period in time, and therefore 2π/(aω) is the time for one period to pass a given point x. [This is seen directly from the form of the solution (3.1.2).] We consider computing the solution over q periods in time, where q ≥ 1, that is, t = 2πq/(ωa). We want to ensure that the error is smaller than a given tolerance ε; that is, we allow

e_p(ω) = ε,   p = 2, 4, 6,

after q periods. The question now is: How many gridpoints N_p are required to achieve this level of accuracy for the pth order accurate method? (For convenience, we use the notation N_p here instead of N_p + 1.) We introduce the number of points per wavelength, M_p, which is defined by

M_p = 2π/ξ = N_p/ω,   p = 2, 4, 6.   (3.1.13)

This is a natural measure, because we can first determine the maximum wave number ω that must be accurately computed and then find out what resolution is required for that wave. All smaller wave numbers will be approximated at least as well. From Eqs. (3.1.5) and (3.1.13), we obtain
e_2 ≈ ωta ξ²/6 = (2πq/6)(2π/M_2)² = ε,   t = 2πq/(ωa),

and M_2 can be obtained from the last equality. A similar computation can be carried out for the fourth- and sixth-order operators, and the results are

M_2 ≈ 2π (π/3)^{1/2} (q/ε)^{1/2},
M_4 ≈ 2π (π/15)^{1/4} (q/ε)^{1/4},
M_6 ≈ 2π (π/70)^{1/6} (q/ε)^{1/6}.   (3.1.14)
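As a quick numerical illustration (our own sketch, using the leading phase-error coefficients 1/6, 1/30, and 1/140 of the centered operators together with t = 2πq/(ωa); the function name is ours), the required points per wavelength can be evaluated directly:

```python
import math

def points_per_wavelength(p, q, eps):
    """Solve 2*pi*q * c_p * xi**p = eps for xi and return M_p = 2*pi/xi,
    with c_2 = 1/6, c_4 = 1/30, c_6 = 1/140 (leading phase-error terms)."""
    c = {2: 1 / 6, 4: 1 / 30, 6: 1 / 140}[p]
    xi = (eps / (2 * math.pi * q * c)) ** (1.0 / p)
    return 2 * math.pi / xi

m2, m4, m6 = (points_per_wavelength(p, q=1, eps=0.1) for p in (2, 4, 6))
```

For ε = 0.1 and q = 1 this gives M_2 ≈ 20, M_4 ≈ 7.6, and M_6 ≈ 5.5; the second-order operator needs roughly three times as many points per wavelength as the fourth-order one, which is the factor referred to below in the discussion of Table 3.1.1.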
For any even order of accuracy, the formula has the form

M_p = C_p (q/ε)^{1/p}.   (3.1.15)

The relationships among M_2, M_4, and M_6 are

M_2 = (√5/(2π)) M_4² ≈ 0.36 M_4²,   M_4 = (14/3)^{1/4} (2π)^{−1/2} M_6^{3/2},   (3.1.16)
and we note that they are independent of ε and q. Table 3.1.1 shows M_p as a function of q for two realistic values of ε.

TABLE 3.1.1. M_p as a function of q
[The tabulated values could not be recovered in this copy; the rows correspond to ε = 0.1 and ε = 0.01.]

How can we use the above information? Assume that we know for a given problem how many Fourier modes are needed to adequately describe the solution. Then we choose the number of points such that the wave with the highest frequency is well resolved according to our analysis above; that is, ξ is sufficiently small. If more detailed information is known about the spectral distribution of the solution v̂(ω, t), then this can be used to obtain sharper comparisons of efficiency by weighting the error function appropriately.

Next we consider the fully discretized case. If leap-frog time differencing is used and the semidiscrete approximation is

dv/dt = Qv,   (3.1.17)

then the full approximation is given by
v^{n+1} = v^{n−1} + 2kQv^n.   (3.1.18)

(Here v is considered as a vector of the gridvalues.) We choose the time step k so small that the error due to the time differencing is small compared with ε. One can show that this is the case if k²ω³/6 « ε. Table 3.1.1 tells us that the number of points per wavelength decreases by a factor three if ε = 10⁻¹, q = 1, and we replace Q_2 by Q_4. The amount of work decreases by a factor 3/2, because it is twice as much work to calculate Q_4 v as Q_2 v. However, in applications we want to solve systems
u_t = A(x, t, u) u_x + F(x, t, u),
and often the evaluation of A and F is much more expensive than the calculation of the derivatives. In this case, we gain almost a factor of 3 in the work of these evaluations. The gain is greater in more space dimensions. In three dimensions, the number of points is reduced by a factor of 27, and the work is decreased by a factor of between 27/8 and 27, depending on the cost of evaluating the coefficients. The gain is even greater for the error level ε = 10⁻². However, for general initial data, experience indicates that ε = 10⁻¹ is appropriate to obtain an overall error of 10⁻². The reason is that most of the energy is contained in the small wave numbers, and for those we have enough points per wavelength. The energy in the large wave numbers is small, and therefore ε = 10⁻¹ is tolerable. In the discussion here, it was assumed that only one period in time was computed. Table 3.1.1 shows that one gains even more by going to higher order accuracy if the computation is carried out over longer time intervals. In Figure 3.1.1, we have calculated the solution of
u_t + u_x = 0,   t ≥ 0,
u(x, 0) = (1/2)(π − x) = ∑_{ω=1}^{∞} (sin ωx)/ω   (the sawtooth function),
u(x + 2π, t) = u(x, t),

[Figure 3.1.1: (a) p = 2. (b) p = 4. (c) p = 6.]
using Eq. (3.1.18) with Q = Q_p, p = 2, 4, 6, N = 128, and k = 10⁻³ up to t = π. The results are not satisfactory. If we use Q_6, Table 3.1.1 shows that we can only compute the first 128/(5q^{1/6}) waves in the Fourier expansion of the solution with an error ≤ 10⁻¹. The large wave numbers are not approximated well enough. In Figure 3.1.2, we have replaced the initial data by
[Figure 3.1.2: (a) p = 2. (b) p = 4. (c) p = 6.]
u(x, 0) = ∑_{ω=1}^{20} (sin ωx)/ω.
Now the results for the fourth- and sixth-order methods are better. We see again the Gibbs phenomenon, as explained in Chapter 1. To suppress it, we replace the initial data in Figure 3.1.3 by
u(x, 0) = ∑_{ω=1}^{20} (1 − (ω/20)²) (sin ωx)/ω,
[Figure 3.1.3: (a) p = 2. (b) p = 4. (c) p = 6.]
and in Figure 3 . 1 .4 by
These series are constructed according to the principle discussed for Eq. (1.1.5): the coefficients are reduced to zero in a smooth way as ω increases. For these smoother solutions, the results for the three different methods show the expected behavior.
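The difference in behavior between the second- and fourth-order operators can be reproduced in a few lines. The sketch below is our own setup (the smooth initial function sin x is used instead of the truncated series, so that the exact solution is available for comparison); it integrates the leap-frog scheme (3.1.18) with Q_2 and Q_4:

```python
import numpy as np

def qp(v, h, p):
    """Q2 = D0 and Q4 = D0(I - h^2/6 D+D-) on a periodic grid."""
    dpdm = (np.roll(v, -1) - 2 * v + np.roll(v, 1)) / h**2
    w = v if p == 2 else v - (h**2 / 6) * dpdm
    return (np.roll(w, -1) - np.roll(w, 1)) / (2 * h)

def leapfrog_error(N, p, a=1.0, k=1e-3, T=np.pi):
    """u_t + a u_x = 0, u(x,0) = sin x; v^{n+1} = v^{n-1} - 2 k a Q_p v^n,
    started with one exact step. Returns the max error at the final time."""
    h = 2 * np.pi / N
    x = h * np.arange(N)
    steps = int(round(T / k))
    vm, v = np.sin(x), np.sin(x - a * k)      # v^0 and v^1 (exact start)
    for _ in range(steps - 1):
        vm, v = v, vm - 2 * k * a * qp(v, h, p)
    return np.max(np.abs(v - np.sin(x - a * steps * k)))

e2 = leapfrog_error(32, p=2)
e4 = leapfrog_error(32, p=4)
```

With N = 32 and a tiny time step (so the spatial error dominates), e2 ≈ 0.02 while e4 is smaller by roughly two orders of magnitude, as the analysis of Section 3.1 predicts.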
[Figure 3.1.4: (a) p = 2. (b) p = 4. (c) p = 6.]
The leap-frog time differencing used here is only second-order accurate, and it seems reasonable to use higher order methods for the time discretization. However, the operators Q_p, derived for space discretization, cannot be used in the time direction, because the resulting schemes are unstable. There are many time differencing methods developed for ordinary differential equations du/dt = Qu. One class of these methods is that of one-step multistage methods, where Qu is evaluated at different stages in order to go from u^n to u^{n+1}. The explicit Runge-Kutta methods are typical representatives of this class, and they will be discussed in Section 6.7. Hyman's method is another popular method of this class; it is also discussed in Section 6.7. Another class of methods is that of linear multistep methods, which have the
form

∑_{σ=0}^{q} α_σ v^{n+1−q+σ} = k ∑_{σ=0}^{q} β_σ Q v^{n+1−q+σ}.   (3.1.19)
(Note, that "linear" refers to the fact that Q occurs linearly in the formula, Q itself does not have to be linear.) Explicit linear multistep methods of high accu racy can be constructed, but they require storage of the solution at many time levels, which is inconvenient for large partial differential equation problems. If the solution of implicit systems at every time step is acceptable, that is, (3q -:/:. 0 in Eq. (3. 1 . 19), then it is possible to construct more compact linear multistep schemes that do not require so much storage. One such method is Simpson 's rule, which we derive next. Consider the semidiscrete problem in Eq. (3. 1 . 17) again and the leap-frog scheme in Eq. (3. 1 . 1 8). For a smooth solution v(t) of Eq. (3. 1 . 17), a Taylor expansion around t = tn yields
(v^{n+1} − v^{n−1})/(2k) = v_t^n + (k²/6) v_ttt^n + 𝒪(k⁴).

We want to eliminate the (k²/6) v_ttt^n term and note that

(1/2)(v_t^{n+1} + v_t^{n−1}) − v_t^n = (k²/2) v_ttt^n + 𝒪(k⁴),
(1/2)(Qv^{n+1} + Qv^{n−1}) − Qv^n = (k²/2) v_ttt^n + 𝒪(k⁴),

that is,

(k²/6) v_ttt^n = (1/3)[(1/2)(Qv^{n+1} + Qv^{n−1}) − Qv^n] + 𝒪(k⁴).

Therefore, if we replace Qv^n by (2/3)Qv^n + (1/6)(Qv^{n+1} + Qv^{n−1}), then we obtain the fourth-order method

(v^{n+1} − v^{n−1})/(2k) = (1/6)(Qv^{n+1} + 4Qv^n + Qv^{n−1}).   (3.1.20)
This implicit method is only conditionally stable (see Exercise 3.1.2). As for all implicit methods, the question of how to solve the resulting system of equations arises. Direct methods are prohibitively expensive for realistic problems in several space dimensions, and iterative methods are more attractive. No matter what iterative method is used, it is important to have a good initial guess v^{n+1}_{[0]} for v^{n+1}. The value v^n on the previous time level is a natural choice. However, a better approximation can be obtained if v^{n+1}_{[0]} is computed using an explicit approximation of the differential equation. This explicit method is called a predictor. If the approximate values are substituted into the right-hand side, we obtain the corrector formula. This can be used as an iteration for v^{n+1}. For Eq. (3.1.20), we obtain

v^{n+1}_{[p+1]} = v^{n−1} + (k/3)(Qv^{n+1}_{[p]} + 4Qv^n + Qv^{n−1}),   p = 0, 1, ....   (3.1.21)
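For a single Fourier mode of u_t = u_x, the operator Q reduces to multiplication by iω, and the corrector (3.1.21) becomes a scalar fixed-point iteration. The following sketch (our own test; it uses v^n as the initial guess, the natural choice mentioned above, rather than a predictor) shows the iteration converging to a value with the expected 𝒪(k⁵) local error:

```python
import numpy as np

lam, k = 1.0, 0.05
Q = 1j * lam                       # symbol of d/dx acting on the mode e^{i*lam*x}
vm = 1.0 + 0j                      # exact v^{n-1} at t = 0
vn = np.exp(1j * lam * k)          # exact v^n at t = k

vp = vn                            # initial guess v^{n+1}_[0] = v^n
for _ in range(8):                 # corrector iteration (3.1.21)
    vp = vm + (k / 3) * (Q * vp + 4 * Q * vn + Q * vm)

# local error after one double step; Simpson's rule gives ~ k^5/90
err = abs(vp - np.exp(2j * lam * k))
```

Since |kQ/3| « 1, the iteration contracts rapidly, and the remaining error is the truncation error of Simpson's rule itself.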
The combined procedure is called a predictor-corrector method and, in practice, only a few iterations are used. Clearly, the stability of the combined method does not follow from the stability of the original implicit method. This would only be the case if, at each time level, the corrector were iterated to convergence. If a fixed number of iterations is used, then the combined scheme is effectively explicit, and it must be analyzed to ensure stability. One possible predictor is

v^{n+1}_{[0]} + 4v^n − 5v^{n−1} = 2k(2Qv^n + Qv^{n−1}),   (3.1.22)

followed by one step of Eq. (3.1.21). It can be shown to be stable with Q = Q_p as defined by Eq. (3.1.7). (The stability limit for k/h depends on p.) We next turn to the parabolic model problem

u_t = a u_xx,   a > 0,   u(x, 0) = e^{iωx},   (3.1.23)

which has the solution

u(x, t) = e^{−ω²at} e^{iωx}.   (3.1.24)

This is a so-called standing wave, whose amplitude decreases with time. The semidiscrete second-order approximation is

dv_j(t)/dt = a D_+ D_− v_j(t),   v_j(0) = e^{iωx_j},   j = 0, 1, ..., N,   (3.1.25)
which has the solution

v_j(t) = e^{−(4a/h²) sin²(ξ/2) t} e^{iωx_j},   ξ = ωh.   (3.1.26)

This approximation has no phase error, but it has an error in amplitude. It is convenient to study the error of the exponent, which is

d_2(ω) = ((4/h²) sin²(ξ/2) − ω²) a t.   (3.1.27)

If we use a Taylor expansion about ξ = 0, we get

d_2(ω) = −(ω²at/12) ξ² (1 + 𝒪(ξ²)),   (3.1.28)

which shows that the approximation has a smaller decay rate than the solution of the differential equation. For the error, we have a first approximation,

e_2 ≈ ω²at (ξ²/12) e^{−ω²at}.

Thus,

e_2 ≤ ξ²/(12e),   with the maximum attained at t = 1/(aω²),

that is, if the time interval [0, T] for the computation is not very small, the maximum error does not depend on T. This is due to the dissipative character of parabolic equations. We again introduce the error level ε and the number of points per wavelength M_p = 2π/ξ, and we obtain

M_2 = 2π/√(12eε).

A corresponding calculation can be carried out for the fourth-order approximation,
TABLE 3.1.2. M_p for parabolic equations

            M_2    M_4
ε = 0.1     3.5    2.5
ε = 0.01    11     4.4
which gives us Table 3.1.2. If the error level ε = 10⁻¹ is tolerable, then there is no reason to use a higher order method, and even for ε = 10⁻² it is questionable. Thus, for diffusion-convection problems where the diffusion dominates, second-order methods are often adequate. Instead of the high order accurate difference operators Q_p used above, one can use compact implicit operators of Padé type. The approximation w_j of u_x(x_j) is obtained by solving a system

Pw = Qv,   (3.1.29)

where v_j approximates u(x_j). Here P and Q are difference operators using a few grid points. For example, a fourth order approximation of u_x is obtained with
P = (1/6)(E⁻¹ + 4I + E),
Q = (1/(2h))(E − E⁻¹).   (3.1.30)
A linear system of equations must be solved in each time step. This extra work may well pay off, since the error constant is very small (see Exercise 3.1.6).
EXERCISES
3.1.1. Derive a sixth-order approximation of ∂²/∂x² and an estimate of the number of gridpoints M_6 in terms of ε. Add the third column for M_6 to Table 3.1.2.

3.1.2. Derive the stability condition for Simpson's rule (3.1.20) when used for the equation u_t = u_x with Q_4.

3.1.3. Derive the order of accuracy in time for the predictor-corrector method [Eqs. (3.1.21) and (3.1.22)] when only one iteration is used in the corrector.

3.1.4. Verify Table 3.1.1 by carrying out numerical experiments for u_t + a(x)u_x = F(x, t) with the leap-frog scheme and a small time step. Hint: One can always find solutions of a partial differential equation in the following way. Let U be any function. Calculate

U_t − P(∂/∂x)U = F(x, t);

then U is the solution of

u_t − P(∂/∂x)u = F,   u(x, 0) = U(x, 0).

3.1.5. Derive the coefficients of Lemma 3.1.1 with the help of the relationship

(d/dθ)((arcsin θ)/√(1 − θ²)) = (1/(1 − θ²))(1 + θ (arcsin θ)/√(1 − θ²)).

3.1.6. Derive the leading error term, of the form ch⁴, in the approximation u_x ≈ P⁻¹Qu, where P and Q are defined in Eq. (3.1.30). Compare this to the leading error term for Q_4 defined in Eq. (3.1.8).

3.1.7. Derive a Padé type fourth order approximation of u_xx, and derive the leading error term.

3.2. FOURIER METHOD
Again, consider the hyperbolic problem

u_t + a u_x = 0,
u(x, 0) = f(x) = (1/√(2π)) ∑_{ω=−∞}^{∞} f̂(ω) e^{iωx},   (3.2.1)

with its solution given by
u(x, t) = (1/√(2π)) ∑_{ω=−∞}^{∞} f̂(ω) e^{iω(x − at)}.   (3.2.2)
In Section 1.4, it was shown how trigonometric interpolation can be used to derive high-order accurate approximations of derivatives using the S operator defined in Eq. (1.4.2). The spatial coordinate x is discretized, and the differential operator ∂/∂x is replaced by S. We use the truncated Fourier series as initial data for the approximation. (In practice, the natural choice is the interpolating polynomial, which is not the same.) Let v be the vector with gridvalues v_j, j = 0, 1, ..., N, as its elements. Then the Fourier method is defined by
dv/dt + aSv = 0,
v(0) = (1/√(2π)) ∑_{|ω|≤N/2} f̂(ω) e_ω.   (3.2.3)
Here, the vector e_ω consists of the Fourier component e^{iωx} evaluated at each of the gridpoints, as defined in Eq. (1.4.3). It was shown that these vectors form an eigensystem for S. v(t) can be expressed as a linear combination of these vectors [see Eq. (1.4.4)]. If this is substituted into Eq. (3.2.3), we obtain
∑_{|ω|≤N/2} [∂v̂(ω, t)/∂t + iωa v̂(ω, t)] e_ω = 0,
∑_{|ω|≤N/2} [v̂(ω, 0) − f̂(ω)] e_ω = 0.   (3.2.4)
Since the vectors e_ω are linearly independent, the expressions in the brackets must vanish for each ω. [The process of isolating each component v̂(ω, t) can of course be considered as a diagonalization of the matrix S in Eq. (3.2.3).] We obtain
v̂(ω, t) = e^{−iωat} f̂(ω),   |ω| ≤ N/2,   (3.2.5)
and the approximate solution is

v_j(t) = (1/√(2π)) ∑_{|ω|≤N/2} e^{iω(x_j − at)} f̂(ω),   j = 0, 1, ..., N.   (3.2.6)
This can be compared to the true solution shown in Eq. (3.2.2). For smooth solutions, |f̂(ω)| is very small for large values of |ω|, and the approximate solution shown in Eq. (3.2.6) is very accurate. In particular, if f̂(ω) = 0 for |ω| > N/2, then the exact solution is obtained (even if the interpolation polynomial is used, as demonstrated in Lemma 1.3.2). One way to define the accuracy requirements is to determine a maximum wave number, ω_max, for which the solution must be determined within a certain error, as described in the previous section for difference methods. With Fourier methods, we obtain the exact solution (disregarding time discretization) for these first ω_max wave numbers. Now we compare with the efficiency of difference methods. If one successively increases the order of accuracy p using centered difference operators, as was done in Section 3.1, and uses periodicity, one can show that the Fourier method represented by the S operator is obtained in the limit as p → ∞. The work per point using the centered difference operators is proportional to p and, therefore, the work per time step is 𝒪(pN). For large values of p, the amount of work required to implement the high-order methods straightforwardly is excessive, and the methods are inefficient. When p is of order log N or larger, it is more efficient to use the Fourier method to implement the pth order approximation using trigonometric interpolation with an appropriately modified operator S. In this case, the operation count is reduced to 𝒪(N log N) per time step. It was noted above that the first N Fourier components are computed without error using the Fourier method. Disregarding the constant term v̂(0, t)/√(2π), it takes N points to determine the corresponding N Fourier coefficients (N/2 sine coefficients and N/2 cosine coefficients in the real case). Thus, exactly two points per wavelength are necessary, which is the lower limit for M_p. The work per wavelength for difference methods is defined as
W_p = pM_p,   p = 2, 4, 6,   (3.2.7)
where M_p, which is defined in Eq. (3.1.13), is a function of q/ε. For the spectral method, W_p is independent of q and ε; it is only a function of the number of gridpoints, N + 1. Figure 3.2.1 is a plot of W_p for the different methods. Again, we must remember that our analysis is for a very simple problem; therefore, the conclusion can only be used as a rough guide. From Figure 3.2.1, we can expect that the Fourier method will perform well over long time intervals, when higher order accuracy is required, that is, when the maximum wave number to be accurately represented is large. The success of the Fourier method in approximating solutions of Eq. (3.2.1) is due to the fact that the functions e^{iωx} are the eigenfunctions of that problem. The Fourier method computes the first N terms of the expansion of the solution of Eq. (3.2.1) in its eigenfunctions. For problems where the eigenfunctions are not close to Fourier modes, the advantage of the Fourier method is not that pronounced.
[Plot of W_p against q/ε (range roughly 100 to 2000) for 4th, 6th, 8th, and 16th order difference methods and for the FFT with N = 8, 16, 32, and 64.]
Figure 3.2.1. W_p as a function of q/ε. [Reprinted from Fornberg, SIAM Journal on Numerical Analysis 12, 521 (1975), with permission. Note that ν = N in the figure.]
The Fourier method cannot be used directly for problems with nonperiodic solutions. A similar method based upon expansions in Chebyshev polynomials can be used in this case, but it will not be discussed in this book. For time discretization, the leap-frog scheme is often used:

v^{n+1} = −2kaSv^n + v^{n−1}.   (3.2.8)

In Fourier space, the scheme has the form

v̂(ω)^{n+1} = −2kaiω v̂(ω)^n + v̂(ω)^{n−1},   |ω| ≤ N/2,   (3.2.9)
with the characteristic equation

z² + 2kaiωz − 1 = 0.   (3.2.10)
The solutions are given by

z_{1,2} = −kaiω ± √(1 − (kaω)²),   |ω| ≤ N/2,   (3.2.11)
showing that the condition for stability is |ka N/2| < 1, or

k|a| < 2/N ≈ h/π.   (3.2.12)
This compares with k|a|/h < 1 for the second order leap-frog difference scheme. The dominating part of the work when using Eq. (3.2.8) is the two FFTs at each time step. The operation count is therefore 𝒪(N log N) per step. Another way of differencing in time is the trapezoidal rule (usually called the Crank-Nicholson method when applied to centered difference approximations):
v^{n+1} + (ka/2) S v^{n+1} = v^n − (ka/2) S v^n.   (3.2.13)

The amplification factor is

Q̂ = (1 − ikaω/2)/(1 + ikaω/2),   (3.2.14)
so it is unconditionally stable. The implementation of Eq. (3.2.13) can be carried out in Fourier space as follows: If the FFT operation is denoted by the operator T, and the multiplications by iω are carried out with the diagonal operator D, then S can be factored as

S = T⁻¹DT,   (3.2.15)

and Eq. (3.2.13) can be rewritten in the form

(I + (ka/2)T⁻¹DT) v^{n+1} = (I − (ka/2)T⁻¹DT) v^n.   (3.2.16)

A multiplication of Eq. (3.2.16) from the left by T gives, with ṽ^n = Tv^n,

(I + (ka/2)D) ṽ^{n+1} = (I − (ka/2)D) ṽ^n.   (3.2.17)

ṽ^{n+1} is obtained from

ṽ^{n+1} = (I + (ka/2)D)⁻¹ (I − (ka/2)D) ṽ^n,   (3.2.18)

and finally v^{n+1} from v^{n+1} = T⁻¹ṽ^{n+1}. Hence, the algorithm, for each step, is:

1. Compute ṽ^n = Tv^n using the FFT.
2. Compute ṽ^{n+1} from Eq. (3.2.18) (D is diagonal, so this takes only 𝒪(N) operations).
3. Compute v^{n+1} = T⁻¹ṽ^{n+1} using the inverse FFT.
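Steps 1-3 can be sketched as follows (our own implementation; because a is constant here, all time steps are taken in Fourier space at once by powering the amplification factor, whereas with variable coefficients one would transform back and forth at every step):

```python
import numpy as np

def trapezoidal_fourier(v0, a, k, steps):
    """Trapezoidal rule (Crank-Nicholson) for u_t + a u_x = 0 via the FFT:
    one FFT, a diagonal update per step, and one inverse FFT."""
    n = len(v0)
    omega = np.fft.fftfreq(n, d=1.0 / n)                   # integer wave numbers
    d = 1j * omega                                          # diagonal of D
    amp = (1 - 0.5 * k * a * d) / (1 + 0.5 * k * a * d)     # amplification factor
    vhat = np.fft.fft(v0)                                   # step 1
    vhat = amp ** steps * vhat                              # step 2, repeated
    return np.real(np.fft.ifft(vhat))                       # step 3

N, a, k, steps = 32, 1.0, 0.01, 100                         # integrate to t = 1
x = 2 * np.pi * np.arange(N) / N
v = trapezoidal_fourier(np.sin(x), a, k, steps)
err = np.max(np.abs(v - np.sin(x - 1.0)))
```

Every |amp| equals 1, reflecting unconditional stability; the only error is a small phase lag proportional to k².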
Therefore, the operation count is approximately the same as for the explicit leap-frog scheme. If the coefficient a depends on x, then the algorithm is more complicated. Only hyperbolic problems have been treated here, but the definition of the Fourier method is obvious for higher order equations. For example, the operator S² approximates the differential operator ∂²/∂x², and it takes only two FFTs to compute it. With the notation above, we have

S² = T⁻¹D₂T,   (3.2.19)

where the matrix D₂ is diagonal with elements −ω², |ω| ≤ N/2. Similarly, ∂²/∂x∂y can be approximated by S_x S_y, where S_x and S_y are the one-dimensional operators corresponding to each space dimension. The Fourier method has been presented here as a way of approximating derivatives with high accuracy, and it can be obtained as the limit of difference approximations on a given grid. Another type of Fourier technique can be derived using Galerkin's method. This method can be defined in a more abstract form without specifying the type of basis functions to be used. With piecewise polynomials as basis functions, it can be used to derive the finite element method. Here, we briefly describe how it can be used with trigonometric basis functions. Consider the general problem
∂u/∂t = P(x, t, ∂/∂x) u,   (3.2.20a)
u(x, 0) = f(x),   (3.2.20b)

where

P = ∑_{ν=0}^{m} A_ν(x, t) ∂^ν/∂x^ν.
The differential equation is multiplied by a test function φ(x), which belongs to an appropriate function space ℬ. The equation is then integrated over the interval [0, 2π] to obtain

(u_t, φ) = (Pu, φ),
(u(·, 0), φ) = (f, φ).   (3.2.21)
Equation (3.2.20) is now reformulated as follows: Find a function u(x, t) that belongs to an appropriate function space 𝒜 for each fixed t such that Eq. (3.2.21) is fulfilled for each φ(x) ∈ ℬ. This is called the weak form of Eq. (3.2.20a). The spaces 𝒜 and ℬ are chosen so that the integrals in Eq. (3.2.21) exist. For example, if P(∂/∂x) = a(x)∂/∂x, then ℬ can be taken to be the set of all functions in C^∞(0, 2π) with 2π-periodic derivatives, and 𝒜 as all periodic functions with derivatives in L₂(0, 2π). An approximation of Eq. (3.2.21) is obtained by replacing the spaces 𝒜 and ℬ by finite dimensional subspaces 𝒜_N and ℬ_N. By choosing these spaces appropriately, we obtain the formulation of an approximation of the problem, which leads to a numerical algorithm. If 𝒜_N = ℬ_N, this is called the Galerkin method.

Choose ℬ_N as the class of trigonometric polynomials and let f̃ ∈ ℬ_N be an approximation of f. The Galerkin approximation is then defined as follows:
Definition 3.2.1. The Galerkin Approximation. Find a function

v(x, t) = (1/√(2π)) ∑_{ω=−N/2}^{N/2} v̂(ω, t) e^{iωx}

such that

(v_t, φ) = (Pv, φ),
(v(·, 0), φ) = (f̃, φ)   for all φ ∈ ℬ_N.   (3.2.22)
Because the functions {e^{iωx}}, ω = −N/2, ..., N/2, span the space ℬ_N, Eq. (3.2.22) is fulfilled for all φ ∈ ℬ_N if and only if it is fulfilled for each basis function e^{iνx}.
By introducing the Fourier series for v and multiplying by √(2π), we obtain

( ∑_{|ω|≤N/2} (∂/∂t − P(x, t, ∂/∂x)) v̂(ω, t) e^{iωx}, e^{iνx} ) = 0,   |ν| ≤ N/2.   (3.2.23)
By using Lemma 1.1.1, this leads to

2π dv̂(ν, t)/dt = ( ∑_{|ω|≤N/2} P(x, t, ∂/∂x) v̂(ω, t) e^{iωx}, e^{iνx} ),   |ν| ≤ N/2.   (3.2.24)
The coefficients of v̂(ω, t) on the right-hand side are integrals of known functions and can be computed. The coefficients in the differential operator are periodic functions of x and can be expanded into a Fourier series with coefficients â(μ, t). Therefore, the coefficients of v̂(ω, t) can be expressed in terms of â(ω, t). Equation (3.2.24) is a system of linear ODEs, which can be solved by numerical methods, usually difference methods. The procedure described here was originally known as the spectral method. We next consider the case where P has constant coefficients. We take, as an example, P(∂/∂x) = a∂/∂x and get, from Eq. (3.2.24), using Lemma 1.1.1,

dv̂(ω, t)/dt = iωa v̂(ω, t),   |ω| ≤ N/2.   (3.2.25)
Equation (3.2.25) also holds for the coefficients v̂(ω, t) of the Fourier method. If the initial data are chosen to be the same, then the solutions are identical. Because the coefficients of the two methods coincide and uniquely determine the trigonometric polynomial, the Galerkin and Fourier methods are identical in this case. For problems with variable coefficients, the methods are not identical. The Fourier method is often called the pseudospectral method, because the time integration is carried out in physical space, not in Fourier space, as it is in the spectral method. Let us consider another way of interpreting the Fourier method. Let δ_j(x) be a Dirac delta function defined such that

(w, δ_j) = w(x_j),   j = 0, 1, ..., N.
If φ in Eq. (3.2.22) is chosen as δ_j(x), we obtain

(v_t − P(x, t, ∂/∂x) v)|_{x = x_j} = 0,   j = 0, 1, ..., N.   (3.2.26)

In other words, we are requiring that the differential equation be satisfied, but only at the gridpoints x_j. By assuming that v belongs to a function space of dimension N + 1 for each t, it is possible to satisfy the equalities in Eq. (3.2.26). This is known as the collocation method, and the points {x_j} are called the collocation points.
If we choose v to be a trigonometric polynomial, we obtain

( ∑_{|ω|≤N/2} (∂/∂t − P(x, t, ∂/∂x)) v̂(ω, t) e^{iωx} )|_{x = x_j} = 0,   j = 0, 1, ..., N,   (3.2.27)
or, equivalently,

∑_{|ω|≤N/2} (dv̂(ω, t)/dt) e^{iωx_j} = ∑_{|ω|≤N/2} ( P(x, t, ∂/∂x) v̂(ω, t) e^{iωx} )|_{x = x_j},   j = 0, 1, ..., N.   (3.2.28)
This is exactly the Fourier method as defined earlier. With P = a(x, t) ∂/∂x, we obtain

dv_j/dt = a(x_j, t) ∑_{|ω|≤N/2} iω v̂(ω, t) e^{iωx_j},   j = 0, 1, ..., N.   (3.2.29)
Let v(t) denote the vector with components v_j(t), j = 0, 1, ..., N. We compute v̂(ω, t) from v(t) using the FFT. Let

A(t) = diag(a(x₀, t), a(x₁, t), ..., a(x_N, t)),   (3.2.30)
then Eq. (3.2.29) can be written as

dv(t)/dt = A(t) S v(t),   (3.2.31)

which is the Fourier method. To summarize, Fourier and pseudospectral are different names for the same procedure. They are equivalent to the collocation method when the class of trigonometric polynomials is chosen to be the solution subspace. The Galerkin spectral method gives the same solution if the differential equation has constant coefficients.
EXERCISES

3.2.1. Write a program for solving u_t + u_x = 0 with the Fourier method. Compute an approximation for each of the four initial functions presented in Section 3.1. Compare the results to Figures 3.1.1 to 3.1.4.
BIBLIOGRAPHIC NOTES

The analysis of the semidiscrete problem leading to the estimates of the necessary number of points per wavelength was introduced by Kreiss and Oliger (1972) (they also discussed the Fourier method). Earlier work on these ideas was done by Økland (1958) and Thompson (1961). In practice, the time discretization also plays an important role. An analysis of fully discretized approximations was presented by Swartz and Wendroff (1974). Their results indicate that higher order approximations, in space and time, are more efficient in most cases for hyperbolic problems. However, one should be aware that, so far, we are only dealing with periodic boundary conditions. When other types of boundaries are introduced, the requirement of stability creates an additional difficulty, which is more problematic with higher order accuracy. We discuss this in Part II of this book. A comprehensive treatment of Fourier and spectral methods can be found in Gottlieb and Orszag (1977) and Canuto et al. (1988). Our discussion of the relation between eigenfunction expansion and the Fourier method is based on Fornberg (1975). The accuracy of the Fourier method has been discussed by Tadmor (1986), who shows that the convergence rate as h → 0 is faster than any power h^p. In Abarbanel, Gottlieb, and Tadmor (1986), it is shown that the same "spectral accuracy" is also obtained for discontinuous solutions. Further references concerning the Fourier method are found in the Notes for Chapter 6.
Higher order discretization methods in time are discussed further in Section 6.7. In particular, stability results are presented for Runge-Kutta, Adams-Bashforth, and Adams-Moulton methods. The time discretization for Fourier methods has been discussed by Gottlieb and Turkel (1980). The predictor (3.1.22) for the Simpson rule corrector (3.1.21) was suggested by Stetter (1965).
4 WELL-POSED PROBLEMS
In Chapter 2, we considered several model equations and discussed their solutions and properties. In this chapter, we formalize the results and develop concepts that apply to general classes of problems. The concept of well-posedness was introduced by Hadamard and, simply stated, it means that a well-posed problem should have a solution, that this solution should be unique, and that it should depend continuously on the problem's data. The first two requirements are certainly obvious minimal requirements for a reasonable problem, and the last ensures that perturbations, such as errors in measurement, should not unduly affect the solution. We refine and quantify this statement and conclude this chapter with a discussion of nonlinear behavior in this context.
4.1. WELL-POSEDNESS

In Chapter 2, we noted that the $L_2$ norm of the solutions of all our model equations could be estimated for all time in terms of the $L_2$ norm of the initial data; that is,

$$\|u(\cdot, t)\| \le \|u(\cdot, 0)\|. \qquad (4.1.1)$$

[See Eqs. (2.1.6), (2.5.5), and (2.7.5).] If we change the initial data

$$u(x, 0) = f(x)$$

to

$$\tilde{u}(x, 0) = f(x) + \delta g(x),$$
where $0 < \delta \ll 1$ is a small constant and $\|g(\cdot)\| = 1$, we obtain another solution $\tilde{u}(x, t)$. The difference $w(x, t) = u(x, t) - \tilde{u}(x, t)$ is also a solution of the same differential equation with initial data

$$w(x, 0) = u(x, 0) - \tilde{u}(x, 0) = \delta g(x).$$
The estimate (4.1.1) gives us

$$\|u(\cdot, t) - \tilde{u}(\cdot, t)\| \le \delta \|g(\cdot)\| = \delta. \qquad (4.1.2)$$
Thus, Eq. (4.1.1) guarantees that small changes of the initial data result in small changes in the solution, and we say that the solution depends continuously on the initial data. The requirement that the estimate (4.1.1) hold for all time is too restrictive. In applications, the differential equations often contain lower order terms. Consider, for example, the equation

$$u_t = u_x + \alpha u, \qquad \alpha \ \text{constant},$$

with $2\pi$-periodic initial data

$$u(x, 0) = f(x).$$
If we introduce a new dependent variable,

$$\tilde{u} = e^{-\alpha t} u,$$

we obtain the model equation (2.1.1),

$$\tilde{u}_t = \tilde{u}_x, \qquad \tilde{u}(x, 0) = f(x).$$

Thus,

$$\|\tilde{u}(\cdot, t)\| = \|\tilde{u}(\cdot, 0)\|;$$

that is,

$$\|u(\cdot, t)\| = e^{\alpha t} \|u(\cdot, 0)\|.$$

In this case, Eq. (4.1.2) becomes
$$\|u(\cdot, t) - \tilde{u}(\cdot, t)\| \le e^{\alpha t} \delta.$$

In any finite time interval $0 \le t \le T$, the difference is still of order $\delta$, although for $\alpha > 0$ and large $\alpha t$ the estimate is useless. However, the solution still depends continuously on the initial data. Now consider the system
$$u_t = \begin{bmatrix} 0 & d \\ d & 0 \end{bmatrix} u_x =: A u_x, \qquad u = \begin{bmatrix} u^{(1)} \\ u^{(2)} \end{bmatrix}, \qquad d = \text{constant}, \qquad (4.1.3)$$

with $2\pi$-periodic initial data

$$u(x, 0) = f.$$

The unitary transformation

$$U = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$$

transforms $A$ to diagonal form, that is,

$$U^* A U = \begin{bmatrix} d & 0 \\ 0 & -d \end{bmatrix}.$$

Substituting $v = U^* u$ as a new variable into the system (4.1.3) gives us

$$v_t = U^* A U v_x;$$

that is,

$$v^{(1)}_t = d\, v^{(1)}_x, \qquad v^{(2)}_t = -d\, v^{(2)}_x. \qquad (4.1.4)$$

Thus, we obtain two scalar model equations whose solutions satisfy the estimates

$$\|v^{(j)}(\cdot, t)\| = \|v^{(j)}(\cdot, 0)\|, \qquad j = 1, 2.$$

Therefore,

$$\|v^{(1)}(\cdot, t)\|^2 + \|v^{(2)}(\cdot, t)\|^2 = \|v^{(1)}(\cdot, 0)\|^2 + \|v^{(2)}(\cdot, 0)\|^2,$$

and we obtain the energy estimate

$$\|u(\cdot, t)\|^2 = \|U v(\cdot, t)\|^2 = \|v(\cdot, t)\|^2 = \|v(\cdot, 0)\|^2 = \|U^* u(\cdot, 0)\|^2 = \|u(\cdot, 0)\|^2. \qquad (4.1.5)$$
The system (4.1.3) is symmetric and hyperbolic. We shall see that such systems always have estimates of the form Eq. (4.1.5). In many applications, the systems are hyperbolic, but not symmetric. In that case, we still obtain an estimate. A typical example is the system

$$u_t = \begin{bmatrix} 0 & 1 \\ d^2 & 0 \end{bmatrix} u_x, \qquad d^2 = \text{constant} > 0.$$

We change the dependent variables by the so-called diagonal scaling

$$\tilde{u} = \begin{bmatrix} 1 & 0 \\ 0 & d^{-1} \end{bmatrix} u =: D^{-1} u, \qquad d > 0,$$

and obtain the system (4.1.3),

$$\tilde{u}_t = \begin{bmatrix} 1 & 0 \\ 0 & d^{-1} \end{bmatrix} \begin{bmatrix} 0 & 1 \\ d^2 & 0 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & d \end{bmatrix} \tilde{u}_x = \begin{bmatrix} 0 & d \\ d & 0 \end{bmatrix} \tilde{u}_x.$$
Therefore, the estimate

$$\|\tilde{u}(\cdot, t)\| = \|\tilde{u}(\cdot, 0)\|$$

holds, and we obtain, in the original variables,

$$\|u(\cdot, t)\| = \|D \tilde{u}(\cdot, t)\| \le |D|\, \|\tilde{u}(\cdot, t)\| = |D|\, \|\tilde{u}(\cdot, 0)\| = |D|\, \|D^{-1} u(\cdot, 0)\| \le |D|\, |D^{-1}|\, \|u(\cdot, 0)\|.$$

Thus, we have the estimate

$$\|u(\cdot, t)\| \le K \|u(\cdot, 0)\|, \qquad K = |D|\, |D^{-1}|. \qquad (4.1.6)$$

In this case, Eq. (4.1.2) becomes
$$\|u(\cdot, t) - \tilde{u}(\cdot, t)\| \le K\delta,$$

and the difference is still of order $\delta$, although the estimate is poor for very large or very small $d$. Instead of starting at $t = 0$, we can also solve the problem with initial data given at any time $t_0$. In that case, our estimates have the form

$$\|u(\cdot, t)\| \le K e^{\alpha (t - t_0)} \|u(\cdot, t_0)\|. \qquad (4.1.7)$$
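The diagonal-scaling idea of this section can be checked numerically (an illustrative sketch, not from the book; Python with numpy is assumed, and the $2\times 2$ matrix entries follow the example above as read from this section):

```python
# Hypothetical illustration (numpy assumed): the diagonal scaling
# D^{-1} A D with D = diag(1, d) symmetrizes A = [[0, 1], [d^2, 0]],
# and the constant in (4.1.6) is K = |D| |D^{-1}|.
import numpy as np

d = 10.0
A = np.array([[0.0, 1.0], [d * d, 0.0]])
D = np.diag([1.0, d])
A_tilde = np.linalg.inv(D) @ A @ D           # = [[0, d], [d, 0]], symmetric
symmetry_defect = np.abs(A_tilde - A_tilde.T).max()
K = np.linalg.norm(D, 2) * np.linalg.norm(np.linalg.inv(D), 2)
```

For $d = 10$ this gives $K = 10$, which quantifies how the estimate degrades when the scaling is far from the identity.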
We now introduce the concept of well-posedness for general systems. Consider a system of partial differential equations

$$u_t = P\left(x, t, \frac{\partial}{\partial x}\right) u, \qquad t \ge t_0, \qquad (4.1.8a)$$

with initial data

$$u(x, t_0) = f(x). \qquad (4.1.8b)$$

Here $u = (u^{(1)}, \ldots, u^{(m)})^T$ is a vector with $m$ components, $x = (x^{(1)}, \ldots, x^{(d)})$, and $P(x, t, \partial/\partial x)$ is a general differential operator

$$P\left(x, t, \frac{\partial}{\partial x}\right) = \sum_{|\nu| \le p} A_\nu(x, t) \left(\frac{\partial}{\partial x^{(1)}}\right)^{\nu_1} \cdots \left(\frac{\partial}{\partial x^{(d)}}\right)^{\nu_d} \qquad (4.1.9)$$

of order $p$. Here $\nu = (\nu_1, \ldots, \nu_d)$ is a multi-index with nonnegative integer elements, and $|\nu| = \sum_i \nu_i$. The coefficients $A_\nu = A_{\nu_1, \ldots, \nu_d}$ are $m \times m$ matrix functions. For simplicity, we assume that $A_\nu(x, t) \in C^\infty(x, t)$. We also assume that the coefficients and data are $2\pi$-periodic in every space dimension. We can now make the fundamental definition.
Definition 4.1.1. The problem (4.1.8) is well posed if, for every $t_0$ and every $f \in C^\infty(x)$:

1. There exists a unique solution $u(x, t) \in C^\infty(x, t)$, which is $2\pi$-periodic in every space dimension, and
2. There are constants $\alpha$ and $K$, independent of $f$ and $t_0$, such that Eq. (4.1.7) holds; that is,

$$\|u(\cdot, t)\| \le K e^{\alpha (t - t_0)} \|f(\cdot)\|. \qquad (4.1.10)$$
This is not the only way well-posedness can be defined. For example, different norms might be used, and the functional form of the growth allowed might be different, or suppressed entirely. As we will see, Definition 4.1.1 is natural for a large class of problems and allows us to simplify the analysis in a natural way. The exponential growth must be tolerated in order to treat equations with variable coefficients and to be able to ignore lower order terms. It follows from the discussion in Chapter 2 that all of the periodic boundary problems presented are well posed. It is important to make the distinction that it is the problem that is well posed and not the differential equation itself. The problem setting is as important as the equation. Not all problems are well posed, as our next example will illustrate. If a problem is not well posed, then we say that it is ill posed. Consider the periodic boundary problem for the backward heat equation
$$u_t = -u_{xx} \qquad (4.1.11a)$$

for $0 \le t$ and $0 \le x \le 2\pi$ and initial data

$$u(x, 0) = e^{i\omega x} \hat{f}(\omega) \qquad (4.1.11b)$$

for $0 \le x \le 2\pi$. The solution is given by

$$u(x, t) = e^{i\omega x + \omega^2 t} \hat{f}(\omega), \qquad (4.1.12)$$

which grows rapidly in time if $\omega$ is large. For example, if $\omega = 10$ and $\hat{f}(\omega) = 10^{-10}$, then $u(0, 1) = e^{100} \cdot 10^{-10} \approx 2.7 \times 10^{33}$. We cannot find an $\alpha$ for which Eq. (4.1.10) holds independent of $\omega$. On the other hand, if $\omega = 1$, then

$$u(x, t) = e^{ix + t} \hat{f}(1),$$

and the solution is well behaved. If one can choose the initial data so that only small wave numbers are present, then a reasonably well-behaved solution will exist, and it may not seem important that the problem is ill posed. However, in applications, large wave numbers are always present, for example, due to errors in measurement, and these severely contaminate the solution. One might think that this problem can be solved by simply filtering the initial data so that only small wave numbers are present. Theoretically, this is possible, but, as we have discussed in Section 2.1, in numerical computations, rounding errors always introduce large wave numbers and ruin the solution. In addition, because most application problems have variable coefficients or are nonlinear, high frequencies are generated spontaneously in the solution at later times, and these frequencies destroy the reasonable behavior of the solutions.
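The growth factors quoted above can be checked directly (a plain-Python illustration, not from the book):

```python
# Illustration (assumption: standard library only): for u_t = -u_xx a single
# mode e^{i w x} f(w) is amplified by e^{w^2 t}, so the growth explodes with w.
import math

def growth_factor(w, t):
    """Amplification factor e^{w^2 t} of wave number w."""
    return math.exp(w * w * t)

g_small = growth_factor(1, 1.0)     # w = 1: harmless growth e^1
g_large = growth_factor(10, 1.0)    # w = 10: e^100, about 2.7e43
amplified = 1e-10 * g_large         # tiny data 1e-10 still reaches ~2.7e33
```

No single constant $\alpha$ in (4.1.10) can dominate `growth_factor(w, t)` for all $w$, which is exactly the ill-posedness of the backward heat problem.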
There are difficulties with our definition of well-posedness. Consider the periodic boundary problem for the equation

$$u_t = u_{xx} + 100u \qquad (4.1.13)$$

with initial data

$$u(x, 0) = e^{i\omega x} \hat{f}(\omega).$$

The solution of Eq. (4.1.13) is given by

$$u(x, t) = e^{i\omega x + (100 - \omega^2) t} \hat{f}(\omega).$$

Thus, for general data

$$u(x, 0) = \frac{1}{\sqrt{2\pi}} \sum_{\omega = -\infty}^{\infty} e^{i\omega x} \hat{f}(\omega),$$

the solution is

$$u(x, t) = \frac{1}{\sqrt{2\pi}} \sum_{\omega = -\infty}^{\infty} e^{i\omega x + (100 - \omega^2) t} \hat{f}(\omega). \qquad (4.1.14)$$

Using Parseval's relation, we obtain

$$\|u(\cdot, t)\|^2 = \sum_{\omega = -\infty}^{\infty} e^{2(100 - \omega^2) t} |\hat{f}(\omega)|^2 \le e^{200 t} \|f(\cdot)\|^2,$$

which shows that the problem is well posed. Equation (4.1.14) shows that large wave numbers are damped but small wave numbers grow rapidly, and one might be tempted to call the problem ill posed. However, there is a fundamental difference between the behavior of the solutions of Eq. (4.1.11) and Eq. (4.1.13). Solutions of Eq. (4.1.11) have no growth limit; arbitrarily rapid growth can be obtained by increasing $\omega$, whereas those of Eq. (4.1.13) do have a growth limit. This shows us that it is not sufficient to only know that the problem is well posed; we also need to have estimates for $\alpha$ and $K$. This is especially true for numerical calculations because these constants control the
growth rate of rounding and other errors. This will be discussed in the later chapters.

EXERCISES
4.1.1. Suppose that one wants to solve the problem (4.1.13) for $0 \le t \le 2$ and will allow 1% relative error in the solution. Give a bound for the permissible rounding errors.

4.2. SCALAR DIFFERENTIAL EQUATIONS WITH CONSTANT COEFFICIENTS IN ONE SPACE DIMENSION
We consider the scalar equation

$$u_t = a u_{xx} + b u_x + c u, \qquad t \ge t_0, \qquad (4.2.1a)$$

with $2\pi$-periodic initial data

$$u(x, t_0) = f(x). \qquad (4.2.1b)$$
The coefficients $a$, $b$, and $c$ are complex numbers. We want to investigate what conditions $a$, $b$, and $c$ must satisfy if the problem is to be well posed. For equations with constant coefficients we can assume that $t_0 = 0$, because the coefficients are invariant under the transformation $t' = t - t_0$. We want to prove the following theorem.

Theorem 4.2.1. The problem (4.2.1) is well posed if, and only if, there is a real constant $\alpha$ such that, for all real $\omega$,

$$\operatorname{Re} \kappa \le \alpha, \qquad \kappa := -a\omega^2 + i\omega b + c. \qquad (4.2.2)$$
Proof. We proceed as we did with the model equations. Let

$$f(x) = \frac{1}{\sqrt{2\pi}} e^{i\omega x} \hat{f}(\omega)$$

be a simple wave. We construct a simple wave solution

$$u(x, t) = \frac{1}{\sqrt{2\pi}} e^{i\omega x} \hat{u}(\omega, t), \qquad \hat{u}(\omega, 0) = \hat{f}(\omega). \qquad (4.2.3)$$
Substituting Eq. (4.2.3) into Eq. (4.2.1a) gives us the ordinary differential equation

$$\frac{\partial}{\partial t} \hat{u}(\omega, t) = \kappa\, \hat{u}(\omega, t); \qquad (4.2.4)$$

that is,

$$\hat{u}(\omega, t) = e^{\kappa t} \hat{f}(\omega).$$

In this case,

$$\|u(\cdot, t)\| = e^{(\operatorname{Re} \kappa) t} \|f(\cdot)\|.$$
Therefore, the inequality (4.1.10) is satisfied if and only if Eq. (4.2.2) holds. Now consider general initial data

$$u(x, 0) = f(x) = \frac{1}{\sqrt{2\pi}} \sum_{\omega = -\infty}^{\infty} e^{i\omega x} \hat{f}(\omega)$$

and assume that Eq. (4.2.2) holds. The solution of our problem exists and is given by

$$u(x, t) = \frac{1}{\sqrt{2\pi}} \sum_{\omega = -\infty}^{\infty} e^{i\omega x} e^{\kappa(\omega) t} \hat{f}(\omega). \qquad (4.2.5)$$

By Parseval's relation,

$$\|u(\cdot, t)\|^2 = \sum_{\omega = -\infty}^{\infty} e^{2(\operatorname{Re} \kappa(\omega)) t} |\hat{f}(\omega)|^2 \le e^{2\alpha t} \|f(\cdot)\|^2.$$
Therefore, inequality (4.1.10) also holds for general data. This proves the theorem.

We now discuss condition (4.2.2).

1. The constant $c$, which is the coefficient of the undifferentiated term, has no influence on whether the problem is well posed or not, because we can always replace $\alpha$ by $\alpha + \operatorname{Re} c$, and Eq. (4.2.2) becomes

$$\operatorname{Re}(\kappa - c) = \operatorname{Re}(-a\omega^2 + ib\omega) \le \alpha.$$
As we shall see, this is typical for general systems. We shall, therefore, assume that $c = 0$.
2. The equation is called parabolic if $a_r = \operatorname{Re} a > 0$. In this case, writing $b = b_r + i b_i$,

$$\operatorname{Re} \kappa = -a_r \omega^2 - b_i \omega \le \frac{b_i^2}{4 a_r} =: \alpha.$$

Thus, the problem is well posed for all values of $b$. This is typical for general parabolic systems: the highest derivative term guarantees, by itself, that the problem is well posed.

3. $\operatorname{Re} a = 0$. Now

$$\operatorname{Re} \kappa = -\omega \operatorname{Im} b.$$
If $\operatorname{Im} b \ne 0$, then the problem is not well posed, because we can choose the sign of $\omega$ such that $\operatorname{Re} \kappa$ becomes arbitrarily large. Thus, with $a = i a_i$, well-posed problems have the form

$$u_t = i a_i u_{xx} + b_r u_x.$$

If $a_i \ne 0$, the equation is called a Schrödinger type equation. If $a_i = 0$, we again have our hyperbolic model equation.

4. $\operatorname{Re} a < 0$. Now

$$\operatorname{Re} \kappa = |\operatorname{Re} a|\, \omega^2 - \omega \operatorname{Im} b;$$

there is no upper bound for $\operatorname{Re} \kappa$, and the problem is not well posed for any $b$.
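The four cases above can be explored numerically (an illustrative sketch, not from the book; plain Python complex arithmetic):

```python
# Illustration (assumption: standard library only): sample Re kappa(w) with
# kappa = -a w^2 + i b w + c over a range of real w; boundedness from above
# on the sample indicates condition (4.2.2) for these coefficients.
def re_kappa(a, b, c, w):
    return (-a * w * w + 1j * b * w + c).real

def sup_re_kappa(a, b, c, wmax=200):
    return max(re_kappa(a, b, c, w) for w in range(-wmax, wmax + 1))

heat = sup_re_kappa(1.0, 0.0, 0.0)          # Re a > 0: bounded above (= 0)
schroedinger = sup_re_kappa(1j, 1.0, 0.0)   # a = i, b real: bounded (= 0)
backward = sup_re_kappa(-1.0, 0.0, 0.0)     # Re a < 0: grows like w^2
```

The backward-heat coefficients already give `backward = 40000` on this sample, and enlarging `wmax` makes the supremum arbitrarily large, in agreement with case 4.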
EXERCISES

4.2.1. Consider the differential equation

$$u_t = a_4 \frac{\partial^4 u}{\partial x^4} + a_3 \frac{\partial^3 u}{\partial x^3} + a_2 \frac{\partial^2 u}{\partial x^2} + a_1 \frac{\partial u}{\partial x} + a_0 u.$$

Derive the condition for well-posedness corresponding to Eq. (4.2.2). Is it true that the problem is always well posed if $\operatorname{Re} a_4 < 0$?
4.3. FIRST-ORDER SYSTEMS WITH CONSTANT COEFFICIENTS IN ONE SPACE DIMENSION
Let $A$ be a constant $m \times m$ matrix and let

$$u(x, t) = \begin{bmatrix} u^{(1)}(x, t) \\ \vdots \\ u^{(m)}(x, t) \end{bmatrix}$$

be a vector function with $m$ components. We consider the initial value problem

$$u_t = A u_x, \qquad u(x, 0) = f(x). \qquad (4.3.1)$$
We then want to prove the following theorem.

Theorem 4.3.1. The initial value problem for the system (4.3.1) is well posed if, and only if, the eigenvalues $\lambda$ of $A$ are real and there is a complete system of eigenvectors.
Proof. To begin, let us prove that the eigenvalues of $A$ must be real for the problem to be well posed. Let $\lambda$ be an eigenvalue and $\phi$ the corresponding eigenvector. Then

$$u(x, t) = e^{i\omega(x + \lambda t)} \phi$$

is a solution of Eq. (4.3.1) with initial data

$$f(x) = e^{i\omega x} \phi.$$

Thus,

$$\|u(\cdot, t)\| = e^{\operatorname{Re}(i\omega\lambda) t} \|f(\cdot)\|.$$

If the problem is well posed, then

$$\operatorname{Re}(i\omega\lambda) \le \alpha, \qquad \alpha \ \text{constant},$$
for all $\omega$. This is only possible if $\lambda$ is real. Now assume that the eigenvalues are real and that there is a complete set of eigenvectors $\phi^{(1)}, \ldots, \phi^{(m)}$. Let

$$S = [\phi^{(1)}, \ldots, \phi^{(m)}].$$

Then $S$ transforms $A$ to diagonal form,

$$S^{-1} A S = \Lambda = \begin{bmatrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_m \end{bmatrix}. \qquad (4.3.2)$$
Introduce a new variable $\tilde{u}$ by

$$u = S \tilde{u}.$$

Then, Eq. (4.3.1) becomes

$$\tilde{u}_t = \Lambda \tilde{u}_x; \qquad (4.3.3)$$

that is, we obtain $m$ scalar equations

$$\tilde{u}^{(j)}_t = \lambda_j \tilde{u}^{(j)}_x.$$

From our previous results,

$$\|\tilde{u}^{(j)}(\cdot, t)\| = \|\tilde{u}^{(j)}(\cdot, 0)\|.$$

Therefore,

$$\|\tilde{u}(\cdot, t)\| = \|\tilde{u}(\cdot, 0)\|$$

and

$$\|u(\cdot, t)\| = \|S \tilde{u}(\cdot, t)\| \le |S|\, \|\tilde{u}(\cdot, t)\| = |S|\, \|\tilde{u}(\cdot, 0)\| = |S|\, \|S^{-1} u(\cdot, 0)\| \le |S|\, |S^{-1}|\, \|u(\cdot, 0)\|.$$

Thus, the problem is well posed if the eigenvalues are real and there is a complete set of eigenvectors. In particular, if $A$ is Hermitian, it has real eigenvalues and a complete set of eigenvectors; therefore, Eq. (4.3.1) is then well posed. Now assume that the eigenvalues are real, but that there is not a complete set of eigenvectors. We start with a typical case,
$$u_t = \begin{bmatrix} \lambda_1 & 1 & & 0 \\ & \lambda_1 & \ddots & \\ & & \ddots & 1 \\ 0 & & & \lambda_1 \end{bmatrix} u_x = (\lambda_1 I + J) u_x; \qquad (4.3.4)$$

that is, $A$ consists of one Jordan block. Let

$$u(x, 0) = \frac{1}{\sqrt{2\pi}} e^{i\omega x} \hat{u}(\omega, 0).$$
Then the solution of our problem is given by

$$u(x, t) = \frac{1}{\sqrt{2\pi}} e^{i\omega x} e^{i\omega(\lambda_1 I + J) t} \hat{u}(\omega, 0),$$

and, therefore,

$$\|u(\cdot, t)\| = |e^{i\omega(\lambda_1 I + J) t} \hat{u}(\omega, 0)|.$$

Thus, the problem is well posed if we can find constants $K$, $\alpha$ such that, for all $\omega$,

$$|e^{i\omega(\lambda_1 I + J) t}| \le K e^{\alpha t}. \qquad (4.3.5)$$

However, because the matrices $\lambda_1 I$ and $J$ commute,

$$e^{i\omega(\lambda_1 I + J) t} = e^{i\omega\lambda_1 t}\, e^{i\omega J t} = e^{i\omega\lambda_1 t} \sum_{j=0}^{m-1} \frac{(i\omega)^j J^j t^j}{j!}.$$

(Observe that $J^j \equiv 0$ for $j \ge m$.) The last expression grows like $|\omega|^{m-1}$. Therefore, we cannot find $K$, $\alpha$ such that Eq. (4.3.5) holds, and the initial value problem is not well posed for Eq. (4.3.4). Now consider the general case. There is a transformation $S$ such that

$$S^{-1} A S$$
has block form and every block is a Jordan block. If all blocks are of dimension one, then the above matrix is diagonal and we have a complete set of eigen vectors. Otherwise, there is at least one Jordan block of dimension two, and the problem cannot be well posed. This proves the theorem. Systems like Eq. (4.3 . 1 ), where the eigenvalues of A are real, are called
hyperbolic. In particular, we have the following classification.
Definition 4.3.1. The system in Eq. (4.3.1) is called symmetric hyperbolic if A is a Hermitian matrix. It is called strictly hyperbolic if the eigenvalues are
real and distinct, it is called strongly hyperbolic if the eigenvalues are real and there exists a complete system of eigenvectors, and, finally, it is called weakly hyperbolic if the eigenvalues are real.
From our previous results, the initial value problem is not well posed for weakly hyperbolic systems, which are not strongly hyperbolic. It is well posed for strongly hyperbolic systems. Also, strictly hyperbolic and symmetric hyper bolic systems are subclasses of strongly hyperbolic systems, see Figure 4.3 . 1 . We now prove that lower order terms do not destroy the well-posedness of the initial value problem for strongly hyperbolic systems. Lemma 4.3.1.
Let y E e l satisfy the differential inequality dy
Then
dt
<
ay,
for t
-
Figure 4.3.1.
� O.
WELL-POSED PROBLEMS
120
Proof.
51 = e-Oi.ly satisfies d51 dt
that is,
� 0,
51(t)
�
y(O)
and the desired estimate follows. Now we can prove Theorem 4.3.2.
Consider a strongly hyperbolic system (4.3.1) and perturb it with an undifferentiated tenn
Theorem 4.3.2.
Ut
u(x, O)
=
=
A ux + Bu, f(x),
(4.3.6)
where B is a constant m x m matrix. The problem (4.3.6) is well posed. Proof. Let
S be the transfonnation (4.3.2). Let u
=
Su
and substitute into Eq. (4.3.6) to get
B ut Aux + Bu, j (x) O) = j (x) (x u , , =
=
=
Let j =
S - I BS, S - lf(x).
eiwxj(w). We construct a simple wave solution. x i w e u(w, t) into Eq. (4.3.7) gives us
(4.3.7) Substituting
u(x, t)
=
Ut = (iwA + B)u, u(w, O) j(w). =
Therefore, (4.3.8) Also,
FIRST-ORDER SYSTEMS WITH CONSTANT COEFFICIENTS
:t l iW
121
(ut, u) + (u , Ut),
(iwAu, U) + (u, iwAu) + (BU, u) + (fI, BU), (BU, u) + (U, BU), :s; 2a l u l 2 , a = IBI.
Therefore,
lu(w, t) ! 2 that is, (4.3.9) Now consider general smooth initial data. They can be expanded into a rapidly convergent Fourier series
f(x)
1
.,J2;
w = -oo
By Eq. (4.3.8), the formal solution is
u(x, t)
1 .,J2;
�
L
eiwx e(iwA + B)t](w).
(4.3. 10)
w = -oo
Using Eq. (4.3.9), we see that the series (4.3 . 10) converges rapidly for t > 0 and is a genuine solution of our problem. By Parseval's relation
w = -oo
w = -oo
Thus, the problem is well posed in the transformed variables. For the original variables we obtain
WELL-POSED PROBLEMS
122
lI u( · , t) 1I
IISu(· , t) 1I � / s l lI u(· , t) 1I � I S J eat ll ioll = I S J eat Il S - IfOIl � Keat ll fOIl , K = I S/ I S- I I . =
Therefore, the problem is also well posed in the original variables.
EXERCISES 4.3.1. For which matrices A , B is the system
ut energy conserving [i.e.,
=
Aux + Bu
lI u( · , t) 1I = lI u( · , O) II ]?
4.4. PARABOLIC SYSTEMS WITH CONSTANT COEFFICIENTS IN ONE SPACE DIMENSION
Let us consider second order systems
ut = Auxx + Bux + Cu - . Pu, u(x, O) = f(x).
(4 . 4. 1 )
The system is called parabolic if the eigenvalues A ofA satisfy
Definition 4.4.1.
Re A
� 0
o = constant > 0.
We find the following notation useful. Suppose A = A' is Hermitian, then we say that A � ° if (Au, u) � ° for all vectors u. We also say A � B if A - B � ° when A and B are Hermitian. We now want to prove the following theorem. Theorem 4.4.1.
tial equations.
Proof Let f(x)
The initial value problem is well posed for parabolic differen =
(1j..J2;) eiwxJ(w ) . We construct a simple wave solution u(x, t) =
Substituting Eq.
1
� V
211"
.
e'WX u(w , t).
(4.4.2) into Eq. (4.4. 1) gives us
(4.4.2)
123
PARABOLIC SYSTEMS WITH CONSTANT COEFFICIENTS
£1/ = (-w 2A
+
iwB
£I(w, O) = J(w);
+
C)£I
= :
P(iw)£I, (4.4.3)
that is, iUw)tJ(w).
£I(w, t)
(4.4.4)
Now assume that A +
A* �
Then P(iw)
+
M,
o > 0.
(4.4.5)
P* (iw) � (-W 2 0 + 2 1 B l w + 2 1 C\ )1, IBI2 -- + 2 1 c \ I = : 2ex/. � 0
(
)
(4.4.6)
Thus,
a ( ) at u, u A
A
(£1/ , £I) + (£I, £It ) = (P(iw)£I, £I)
+
(£I, P(iw) £I),
(£I , (P* (iw) + PUw ))£1) � 2ex(£I, £I),
that is, by Lemma 4.3. 1 ,
Therefore, (4.4.7)
Now consider general smooth initial data. They can be expanded into a rapidly convergent Fourier series
f(x)
=
1
V2;
By Eq. (4.4.4), the formal solution is
w = -oo
WELL-POSED PROBLEMS
124
I
u(x , t)
�
V2;
L eiwx i(iw)tj(w).
w = -oo
(4.4.8)
By Eq. (4.4.7), series (4.4.8) also converges rapidly for t > ° and is, therefore, a genuine solution of our problem. By Parseval's relation
w = - oo �
�
e2 a t
L
w = -oo
I j(w) 1 2
We have proved that there exists a smooth solution which satisfies the desired estimate. We now show that such a solution is unique. Let u be any smooth solution of Eq. (4.4. 1). It can be expanded into a rapidly converging Fourier series
I
u(x, t)
V2;
u(w , t)
�
u(w, O)
I
v 21r =
j(w).
The differential equation gives us
U sing integration by parts, we obtain
Correspondingly,
w = -oo (eiwx , u(x, t )),
PARABOLIC SYSTEMS WITH CONSTANT COEFFICIENTS
125
Therefore,
:t u(w, t)
P(iw )u(w, t),
U(W , O)
=
j(W) ;
that is, we obtain Eq. (4.4.4) again. This proves uniqueness. We now show that inequality (4.4.5) can always be obtained for parabolic systems by a change of the dependent variables. By Schur's Lemma (see Appendix A I , Lemma A.I .3), there is a unitary transformation U such that fq
U'A U =
al2 A2
a Im i1zm
a23
am - I , m Am
0 has upper triangular form. Let
D "
[
0
d
dm - I
be a diagonal matrix. Then
A
:=
D- I U*A UD
1
d > 0,
A + G,
where
A
[
0
A2
0 da l 2 0 0
Am da23
1
dm - I a- I m dm - 2a- 2m
G 0
dam - I ,m 0
WELL-POSED PROBLEMS
126
For sufficiently small d,
A
+
A*
=
A + A* + G + G*
2:
201
+ G + G*
2:
01.
Introduce a new variable by u =
UDu
and substitute into Eq. (4.4. 1 ). The new system has the form
and satisfies Eq. (4.4.5). As for hyperbolic systems, the change of variable does not affect their well-posedness. The theorem is proved. As in the scalar case for parabolic differential equations, the lower order terms do not have any influence on well-posedness. Also, by Eq. (4.4.6), for large w
and, therefore,
implies that the high frequency part of the solution is rapidly damped.
EXERCISES 4.4.1. Prove that there are positive constants 0, K such that the solutions to a parabolic system Ut = Auxx satisfy
(4.4.9) 4.4.2. Is it true that Eq. (4.4.9) holds with the same constants 0, K, if the system is changed to Ut = Au.x.x + Bux + Cu, where B is Hermitian and C is skew Hermitian?
GENERAL SYSTEMS WITH CONSTANT COEFFICIENTS
127
4.5. GENERAL SYSTEMS WITH CONSTANT COEFFICIENTS
We now consider general systems of the type of Eq. (4. 1 .8)
ut
=
p
u(x, O) with constant coefficients. Let vector and assume that
w
=
( a: ) u, f(x) ,
=
(w I , . . . , Wd )
(4.5. 1 ) denote the real wave number
d
L Wjx j .
(w, x)
j= i
( )
As before, we construct simple wave solutions. (4.5.2) Substituting Eq. (4.5.2) into Eq. (4.5 . 1 ) gives us
;t u(w , t)
u(w , O)
=
=
P(iw)u(w, t) j(w).
(4.5.3)
.
P(iw) is called the symbol, or Fourier transform, of the differential operator p(ajax) and is obtained by replacing ajax( j) by iWj Thus, P(iw) is an m X m matrix whose coefficients are polynomials in iWj . Equations (4.5.3) are a system of ordinary differential equations with constant coefficients, and the solution is given by
u(w , t)
=
iUw )rj(w).
(4.5.4)
Theorem 4.5.1. The initial value problem (4.5.1) is well posed if and only if there are constants K, such that, for all w, ex
(4.5.5) Proof. If the problem is well posed, then there are constants K, ex such that, for all solutions,
WELL-POSED PROBLEMS
128
Il u( · , t) 11 � Ke"'l ll u( , O) II . ·
(4.5.6)
Therefore, by Eq. (4.5.4), inequality (4.5.5) is a necessary condition. Now assume that Eq. (4.5.5) is valid and consider Eq. (4.5.1) with general smooth initial data. The data can be expanded into a rapidly convergent Fourier series
w By Eq. (4.5.4), the formal solution of our problem is
u(x, t)
=
(21r) -(d/2)
L ei(w ,x)i(iw )lj(w).
(4.5.7)
w
By Eq. (4.5.5), the series (4.5.7) also converges rapidly for t > 0 and is, there fore, a genuine solution. By Parseval's relation,
w
w w
We have constructed a smooth solution that satisfies the desired estimate. Because uniqueness can be proved in the same way as in the previous section, the theorem follows. We now discuss the algebraic conditions that guarantee that Eq. (4.5.5) holds. Theorem 4.5.2. The Petrovskii condition. A necessary condition for well A of PUw) satisfy the inequal
posedness is that, for all w, the eigenvalues ity Re A �
a.
(4.5.8)
Proof. Let A be an eigenvalue and 1J be the corresponding eigenvector. Then
and the theorem follows.
GENERAL SYSTEMS WITH CONSTANT COEFFICIENTS
129
Assume that the Petrovskii condition is satisfied and that there is a constant K and a transformation S(w) with
Theorem 4.5.3.
(4.5.9) for every w and S - I (w)p(iw)S(w) has diagonal form. Then the initial value problem (4.5.1) is well posed. Proof.
This proves the theorem. Theorem 4.5.4. The initial a such that, for all w,
value problem is well-posed if there is a constant
P(iw) + P*(iw) Proof. By
� 2aI.
(4.5.10)
Eq. (4.5.3), - ( U u) dt '
d
A
A
(Pu, u) + (u, Pu), (u, (p* + p)u)
� 2a(u, u).
Therefore, by Lemma 4.3.1,
that is, by
Eq. (4.5.4),
This proves the theorem. We can also express the above result in another way.
The differential operator ped/dx) is called semibounded if there is a constant a such that, for all smooth functions w(x),
Definition 4.5.1.
130
WELL-POSED PROBLEMS
(w, Pw) + (Pw, w) ::;; 2a(w, w).
(4.5 . 1 1 )
This definition might b e called semibounded from above. Note that it does not imply that the operator is bounded.
REMARK.
Next, we have the following theorem. Theorem 4.5.5. Proof. Expand
P is semibounded if, and only if, Eq. (4.5. 10) holds. w in a Fourier series w
(27r) -(d/2) L ei(w .x)w(w). W
Then,
Pw
(27r) - (d/2) L ei(w , X)P(iw)w(w). W
Therefore, Parseval's relation gives us
(w, Pw) + (Pw, w)
L (w(w), (P*(iw) W
+ P(iw »w(w» .
(4.5. 1 2)
If Eq. (4.5. 10) holds, then Eq. (4.5. 1 1 ) follows. The converse is also true. If Eq. (4.5 . 1 1 ) is satisfied, then it must hold for every simple wave w (27r) -(d/2) ei(w .x) w(w), that is,
=
(w(w), (P* (iw) + P(iw» w(w» ::;; 2a i w(w) i 2 for all vectors W. Therefore, Eq. (4.5. 1 0) follows, and we have proved the the orem. If one knows that an operator is semibounded, then one can prove the energy estimate in Eq. (4. 1 .7) immediately. Theorem 4.5.6. If P is semibounded, then the solutions of Eq. (4.5. 1) satisfy
the estimate
131
GENERAL SYSTEMS WITH CONSTANT COEFFICIENTS
(4.5. 1 3) Proof.
d
dt
(u, u) �
(Pu, u) + (u, Pu)
(ul , u) + (u, Ut ) 2a(u, u),
and Eq. (4.5 . 1 3) follows. EXAMPLE.
As in Section 4.3, we call a first-order system
Aj
symmetric hyperbolic, if the matrices j = 1 , 2, . . . , d. Now
P(iw)
i
are Hermitian, that is, if Aj =
Aj,
d
L Aj wj
j= 1
and
P(iw) + P*(iw)
= O.
Thus, the initial-value problem is well posed and
p(ajax) is semibounded.
The above result can be generalized. Theorem 4.5.7. Assume that there are constants a, K > 0 and, for every w, a positive Hermitian matrix H(w) = H*(w) with
(4.5 . 14)
such that "
"
'" *
"
H(w)P(iw) + P (iw)H(w) Then the initial value problem is well posed.
�
'"
2aH(w).
(4.5. 1 5)
132
WELL-POSED PROBLEMS
Proof. By Eq. (4.5.3)
d � �� dt (u, Hu)
(Pu, Hu) + (u, HPu), (u, (P*H + HP)u)
$
2a(u, Hu).
Therefore, by Lemma 4.3 . 1 ,
(u(w, t), H(w)u(w, t)
$
e2cxt(u(w, O), H(w)u(w, 0) ;
that is, by Eq. (4.5. 14),
Thus,
and the initial value problem is well posed. The theorem now follows from Theorem 4.5 . 1 . One can prove that the conditions of Theorem 4.5.7 characterize well-posed problems. Without proof we state the following theorem.
The initial-value problem (4.5. 1) is well posed if and only if we can construct Hermitian matrices such that conditions (4.5. 14) and (4.5.15) hold.
Theorem 4.5.8.
In applications, H(w) often has a very simple structure. Either H(w) = 1 or matrix which defines a scaling of the dependent variables. We can again express our results another way. Let H(w) satisfy Eq. (4.5. 14) and let
H is a diagonal
w
be two functions. We define a scalar product by
(v, W)H
=
L (v(w), H(w)w(w) . w
The scalar product (v, W)H defines the H nonn,
(v, V)j{2 = II v Il H, which is equiv-
133
GENERAL SYSTEMS WITH CONSTANT COEFFICIENTS
alent to the L2 norm l I u 11 2 , because, by Eq. (4.5. 14),
K- 1 (u, u) S; (u, U)H S; K(u, u).
(4.5. 1 6)
Now assume that Eq. (4.5. 1 5) also holds. Observing that
Pu
=
(27rr(d/2)
w
L ei( , x)PCiw )D( w, t), w
we obtain, from Parseval's relation and Eq. (4.5 . 1 5),
(Pu, U)H + (U, PU)H
L ( PCiw)D(w), H(w)D(w)
=
w
+ (D(w), H(w )PCiw )v(w ) ),
S; (D(w), (P*Ciw)H(w) + H(w)PCiw)v(w) , 2a l l u l I �. S; 2a L (v(w), H(w)D(w) =
w
Thus, the expression above is bounded from above. We generalize Definition 4.5 . 1 to the following definition. Definition 4.5.2. The operator P is called semibounded in the H norm if there
is a constant a such that
(PU, U)H
+
(u, PU)H S; 2a II ulI�
(4.5. 17)
for all smooth functions u. The following theorem corresponds to Theorem 4.5.6. Theorem 4.5.9. If P is semibounded in the H norm, then the solutions of Eq.
(4.5. 1) satisfy the estimate
Proof. The theorem follows from
d dt (U, U)H
=
(PU, U)H
+ (U, PU)H S; 2a(u, u)H.
134
WELL-POSED PROBLEMS
EXERCISES
4.5.1. Consider the first order system ut = Aux . Is it possible that the Petro vskii condition (4.5.8) is satisfied for some constant ex > 0 but not for ex = O? 4.5.2. Derive a matrix H(w) satisfying Eq. (4.5. 14) and (4.5. 1 5) for the system
Ut
=
[
1 0
10 2
] Ux-
4.6. SEMIBOUNDED OPERATORS WITH VARIABLE COEFFICIENTS In the previous section, we discussed semibounded operators with constant coefficients. Here we generalize the concept to differential equations
ut
=
( �)
P x, t,
u(x , to)
=
u,
t � to,
f(x ),
(4.6. 1 )
with variable coefficients. Definition 4.6.1. The differential operator P(x , t, a/ax) is semibounded if, for any interval tp � t � T, there is a constant ex such that, for all sufficiently smooth functions w,
2 Re (w, Pw)
=
( ( � ) ) + (p ( � ) ) w, P
-
, t,
w
-
, t,
w, w
� 2ex I w 11 2 . (4.6.2)
As for systems with constant coefficients, we have the following theorem. Theorem 4.6.1. lf the operator P(x , t, a/ax) is semibounded, then the solutions
of Eq. (4.6. 1) satisfy the estimate (4.6.3) We shall now give a number of examples. First, we need the following lemma.
135
SEMIBOUNDED OPERATORS WITH VARIABLE COEFFICIENTS
Lemma 4.6.1. If u, U
are periodic smooth vector junctions, then j
Proof. Let x_ =
( u,
(x( l ) , . . . , x(j
dU dx(J)
=
1 , 2, . . . , d.
1 ) , x(j + 1 ) , . • • , x(d) . Then
-
) f21" f21" ( f21" /\ u, d J ) dx(J) ) dx_ . dx( ) =
•..
0
0
U
0
.
Integration by parts gives us
f2'l1' \ u, ::IU) ) dx(J) dX o aU
(U, u)
2'l1'
x(j) � O
-
f2'l1' \ aU::1 J) , ) dx(J) , dx( 0
2 aU - f 'l1' \ ::1 U) , ) dx( J) , dX
U
U
0
and the lemma follows. 4.6.1. Symmetric Hyperbolic First Order Systems
We consider
du
at
=
)
d P x, t, at u :
(
=
d
L B/x, t) J�
1
du + C(x, t)u. dXU)
Here C is a general smooth periodic matrix function and Bj periodic Hermitian matrices. P is semibounded because
Therefore,
=
Bj
( ddBX(�) u, u) .
are smooth
136
WELL-POSED PROBLEMS
(u, Pu) + (Pu, u)
d
:::
�
( dB . )
L dX(�) u, u
J= 1
+ (Cu, u) + (u, Cu)
shows that Eq. (4.6.2) is satisfied. Throughout this book, we often use differential equations from fluid dynam ics to illustrate various concepts. The velocity of the flow has the components u, v, w in the x, y, Z directions, respectively. The density is denoted by p and the pressure by p. If viscous and heat conduction effects are neglected, the flow is described by the momentum equations ut + uUx + vUy + wUz + p - I px
Vt + UVx + V Vy + wVz + p - I py W t + U Wx + VWy + wWz + P - I pz
0 0 0
(4.6.4a)
and the continuity equation Pt
+ (up)x + (vp)y + (wp)z ::: o .
This system is called the Euler equations. There are five dependent variables; so we need another equation to close the system. Under certain conditions one can assume that p is only a function of p and that we have an equation of state
p
G(p ),
dG � 0 > o. dp
(4.6.4b)
For reasons that will become clear later, a is called the speed of sound. The system is nonlinear, and all the theory we have presented up to now only deals with linear equations. As we shall see later, the so called linearized equations play a fundamental role for well-posedness. These equations are obtained in the following way. Assume that there is a smooth solution of the original non linear problem. We denote this solution by (U, V, W, R). Assume that a small perturbation is added to the initial data, so that the solution is perturbed by E (U', v', w', p '), where E is small. We want to derive a simplified linear system for this perturbation. Consider the first equation of the Euler equations (4.6.4), and substitute the perturbed solution into it:
137
SEMIBOUNDED OPERATORS WITH VARIABLE COEFFICIENTS
(U
+
EU')t + (W
By assumption
feU;
+
+
+
(U + EU')(U + EU')x fW')(U + EU')z + (R
+
+
(V + W')(U + EU')y fP'r I (G(R + fP'»x O.
+
=
(U, V, W, R) is a solution, so we get Uu: + Vu� + Wu� + R- 1a2 (R)p: UxU' + UyV' + UzW' - R- 2 (G(R» xp ')
+
&(f 2 )
=
O.
By neglecting the nonlinear quadratic terms, we obtain the linearized equation. For convenience, we consider uniform flow in the z direction; that is, W = U_z = V_z = P_z = R_z = 0 in Eqs. (4.6.4). Then the full linearized system is

$$\begin{bmatrix}u'\\v'\\\rho'\end{bmatrix}_t + \begin{bmatrix}U & 0 & a^2(R)/R\\ 0 & U & 0\\ R & 0 & U\end{bmatrix}\begin{bmatrix}u'\\v'\\\rho'\end{bmatrix}_x + \begin{bmatrix}V & 0 & 0\\ 0 & V & a^2(R)/R\\ 0 & R & V\end{bmatrix}\begin{bmatrix}u'\\v'\\\rho'\end{bmatrix}_y + C\begin{bmatrix}u'\\v'\\\rho'\end{bmatrix} = 0,$$

where C is a matrix depending on U, V, R. This system is not symmetric hyperbolic. However, we can make it symmetric by introducing

$$\tilde p = \frac{a(R)}{R}\,\rho'$$

as a new function. The new system is

$$\begin{bmatrix}u'\\v'\\\tilde p\end{bmatrix}_t + \begin{bmatrix}U & 0 & a(R)\\ 0 & U & 0\\ a(R) & 0 & U\end{bmatrix}\begin{bmatrix}u'\\v'\\\tilde p\end{bmatrix}_x + \begin{bmatrix}V & 0 & 0\\ 0 & V & a(R)\\ 0 & a(R) & V\end{bmatrix}\begin{bmatrix}u'\\v'\\\tilde p\end{bmatrix}_y + C\begin{bmatrix}u'\\v'\\\tilde p\end{bmatrix} = 0. \qquad (4.6.5)$$
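The symmetric form can be sanity-checked numerically. Below is a minimal sketch in pure Python (the values U = 2, a = 3 are arbitrary sample numbers, not from the book) verifying that the x-direction coefficient matrix of (4.6.5) is symmetric and has the real eigenvalues U and U ± a; the latter two correspond to the "sound waves" of Exercise 4.6.3.

```python
# Verify symmetry and eigenvalues of A1 = [[U, 0, a], [0, U, 0], [a, 0, U]].
# Sample values, chosen only for illustration:
U, a = 2.0, 3.0

A1 = [[U,   0.0, a],
      [0.0, U,   0.0],
      [a,   0.0, U]]

# Symmetry check: A1 == A1^T entrywise.
sym = all(abs(A1[i][j] - A1[j][i]) < 1e-14 for i in range(3) for j in range(3))

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(3)) for i in range(3)]

# Eigenpairs known in closed form: (U, (0,1,0)), (U+a, (1,0,1)), (U-a, (1,0,-1)).
pairs = [(U, [0.0, 1.0, 0.0]),
         (U + a, [1.0, 0.0, 1.0]),
         (U - a, [1.0, 0.0, -1.0])]
ok = all(max(abs(y - lam * x) for y, x in zip(matvec(A1, v), v)) < 1e-12
         for lam, v in pairs)
print(sym, ok)
```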
WELL-POSED PROBLEMS
4.6.2. Parabolic Systems
We call the system

$$u_t = P_2 u, \qquad P_2 u = \sum_{i,j=1}^{d} \frac{\partial}{\partial x^{(i)}}\left(A_{ij}(x,t)\frac{\partial u}{\partial x^{(j)}}\right),$$

strongly parabolic if, for all smooth w and all t, there is a constant δ > 0 such that

$$\sum_{i,j=1}^{d}\left[\left(\frac{\partial w}{\partial x^{(i)}},\,A_{ij}\frac{\partial w}{\partial x^{(j)}}\right)+\left(A_{ij}\frac{\partial w}{\partial x^{(j)}},\,\frac{\partial w}{\partial x^{(i)}}\right)\right] \ge \delta \sum_{j=1}^{d}\left\|\frac{\partial w}{\partial x^{(j)}}\right\|^2. \qquad (4.6.6)$$
Clearly, if Eq. (4.6.6) holds, then P₂ is semibounded because, after integration by parts,

$$(w, P_2 w) + (P_2 w, w) \le -\delta \sum_{j=1}^{d}\left\|\frac{\partial w}{\partial x^{(j)}}\right\|^2.$$

If A_ij ≡ 0 for i ≠ j, then Eq. (4.6.6) becomes

$$\sum_{i=1}^{d}\left(\frac{\partial w}{\partial x^{(i)}},\,(A_{ii}+A_{ii}^*)\frac{\partial w}{\partial x^{(i)}}\right) \ge \delta \sum_{i=1}^{d}\left\|\frac{\partial w}{\partial x^{(i)}}\right\|^2. \qquad (4.6.7)$$

Equation (4.6.7) holds if, and only if, the eigenvalues κ of A_ii + A_ii^* satisfy κ ≥ δ. A simple example is the heat equation

$$u_t = \Delta u.$$
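In the diagonal case the criterion thus reduces to an eigenvalue check on A_ii + A_ii^*. A small sketch of that check for a real 2×2 matrix (the sample matrix is arbitrary, chosen only for illustration; the symmetric 2×2 eigenvalues are computed in closed form):

```python
# Check the diagonal-case parabolicity criterion: every eigenvalue kappa of
# A_ii + A_ii^T must satisfy kappa >= delta > 0.
import math

A = [[2.0, 1.0],
     [0.0, 1.0]]          # sample coefficient matrix A_ii (illustrative only)

# Symmetric part S = A + A^T.
S = [[2 * A[0][0], A[0][1] + A[1][0]],
     [A[0][1] + A[1][0], 2 * A[1][1]]]

# Eigenvalues of a symmetric 2x2 matrix [[p, q], [q, r]] in closed form.
p, q, r = S[0][0], S[0][1], S[1][1]
mean = (p + r) / 2.0
rad = math.sqrt(((p - r) / 2.0) ** 2 + q * q)
kappa_min, kappa_max = mean - rad, mean + rad

strongly_parabolic = kappa_min > 0.0   # any delta in (0, kappa_min] works
print(round(kappa_min, 6), strongly_parabolic)
```

Here kappa_min = 3 − √2 > 0, so this sample matrix passes the test.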
Let

$$P_1 = \sum_{j=1}^{d} B_j(x,t)\frac{\partial}{\partial x^{(j)}} + C(x,t)$$

be any first-order operator with smooth coefficients and let P₂ satisfy Eq.
(4.6.6). The operator P₂ + P₁ is then semibounded. We have

$$(w,(P_2+P_1)w) + ((P_2+P_1)w, w) \le -\delta\sum_{j=1}^d\Big\|\frac{\partial w}{\partial x^{(j)}}\Big\|^2 + 2\|w\|\,\|P_1 w\|$$
$$\le -\delta\sum_{j=1}^d\Big\|\frac{\partial w}{\partial x^{(j)}}\Big\|^2 + 2K_1\|w\|\sum_{j=1}^d\Big\|\frac{\partial w}{\partial x^{(j)}}\Big\| + 2K_2\|w\|^2$$
$$\le \Big(2K_2 + \frac{2dK_1^2}{\delta}\Big)\|w\|^2 - \frac{\delta}{2}\sum_{j=1}^d\Big\|\frac{\partial w}{\partial x^{(j)}}\Big\|^2, \qquad (4.6.8)$$

where K₁ and K₂ depend only on bounds for B_j and C.
The examples above show that, for symmetric hyperbolic systems and for strongly parabolic systems, zero-order and first-order terms, respectively, do not affect the semiboundedness of the operator.
4.6.3. Mixed Hyperbolic-Parabolic Systems
Consider a strongly parabolic system

$$u_t = P_2 u \qquad (4.6.9)$$

and a symmetric hyperbolic system

$$v_t = P_1 v. \qquad (4.6.10)$$

Here the vectors u and v do not necessarily have the same number of components. Let

$$Q_{lm} = \sum_{j=1}^{d} B_j^{lm}\frac{\partial}{\partial x^{(j)}} + C^{lm}, \qquad l = 1, 2; \; m = 1, 2,$$

denote general first-order operators. Then the coupled system

$$u_t = P_2 u + Q_{11}u + Q_{12}v,$$
$$v_t = P_1 v + Q_{21}u + Q_{22}v \qquad (4.6.11)$$

has a spatial differential operator which is semibounded. Let w⁽¹⁾ and w⁽²⁾ be vector functions with the same number of components as u and v, respectively. We have, using integration by parts,
$$(w^{(1)}, Q_{12}w^{(2)}) + (Q_{12}w^{(2)}, w^{(1)}) \le \text{constant}\left(\sum_{j=1}^d\Big\|\frac{\partial w^{(1)}}{\partial x^{(j)}}\Big\|\,\|w^{(2)}\| + \|w^{(1)}\|\,\|w^{(2)}\|\right)$$
$$\le \delta_1\sum_{j=1}^d\Big\|\frac{\partial w^{(1)}}{\partial x^{(j)}}\Big\|^2 + K_1\|w^{(1)}\|^2 + K_2(\delta_1)\|w^{(2)}\|^2. \qquad (4.6.12)$$
The constant δ₁ can be chosen arbitrarily small. The same type of inequality follows immediately for the remaining coupling terms. For Re(w⁽¹⁾, (P₂ + Q₁₁)w⁽¹⁾) we use an inequality of the form (4.6.8); that is, we have a negative term containing the derivatives of w⁽¹⁾, which cancels the corresponding term in (4.6.12) if δ₁ is chosen sufficiently small. We know that P₁ is semibounded; thus the semiboundedness follows for the whole system (4.6.11).
As an application, we consider the Navier–Stokes equations. These equations describe viscous flow and are obtained by adding extra terms to the momentum equations in (4.6.4). With μ > 0, μ' ≥ 0 denoting constant viscosity coefficients, we have, in two space dimensions,
$$\rho(u_t + uu_x + vu_y) + p_x = \mu\,\Delta u + \mu'\,\frac{\partial}{\partial x}(u_x + v_y),$$
$$\rho(v_t + uv_x + vv_y) + p_y = \mu\,\Delta v + \mu'\,\frac{\partial}{\partial y}(u_x + v_y),$$
$$\rho_t + u\rho_x + v\rho_y + \rho(u_x + v_y) = 0, \qquad p = G(\rho).$$

The linearized equations are obtained in the same way as they were above for the Euler equations. Except for zero-order terms, they have the form

$$u_t + Uu_x + Vu_y + \frac{a^2(R)}{R}\,\rho_x = \frac{\mu}{R}\,\Delta u + \frac{\mu'}{R}\,\frac{\partial}{\partial x}(u_x + v_y),$$
$$v_t + Uv_x + Vv_y + \frac{a^2(R)}{R}\,\rho_y = \frac{\mu}{R}\,\Delta v + \frac{\mu'}{R}\,\frac{\partial}{\partial y}(u_x + v_y), \qquad (4.6.13)$$
$$\rho_t + U\rho_x + V\rho_y + R(u_x + v_y) = 0.$$
We obtain a decoupled system by neglecting all first-order terms in the first two equations and R(u_x + v_y) in the third equation. We can write

$$u_t = \frac{\mu}{R}\,\Delta u + \frac{\mu'}{R}\,\frac{\partial}{\partial x}(u_x + v_y), \qquad (4.6.14a)$$
$$v_t = \frac{\mu}{R}\,\Delta v + \frac{\mu'}{R}\,\frac{\partial}{\partial y}(u_x + v_y), \qquad (4.6.14b)$$
$$\rho_t + U\rho_x + V\rho_y = 0. \qquad (4.6.14c)$$
Integration by parts gives us

$$\frac12\frac{d}{dt}\big(\|\mathbf u\|^2\big) = -\left\{\Big(u_x, \frac{\mu}{R}u_x\Big) + \Big(u_y, \frac{\mu}{R}u_y\Big) + \Big(v_x, \frac{\mu}{R}v_x\Big) + \Big(v_y, \frac{\mu}{R}v_y\Big) + \Big(u_x+v_y, \frac{\mu'}{R}(u_x+v_y)\Big)\right\}$$
$$\le -\delta\big(\|u_x\|^2 + \|u_y\|^2 + \|v_x\|^2 + \|v_y\|^2\big),$$
where we have used the notation 𝐮 = (u, v)^T. Thus, Eqs. (4.6.14a) and (4.6.14b) are strongly parabolic, and Eq. (4.6.14c) is scalar hyperbolic. Therefore, the differential operator in space for the linearized Navier–Stokes equations is semibounded. We have shown that for large classes of initial-value problems the energy estimate (4.1.10) holds. One can show that these problems also have unique solutions, and we obtain Theorem 4.6.2.

Theorem 4.6.2. The initial-value problem for symmetric hyperbolic, strongly parabolic, and mixed symmetric hyperbolic–strongly parabolic systems is well posed.

EXERCISES

4.6.1. Let B(x, t) be a Hermitian matrix and let C(x, t) be a skew-Hermitian matrix. Prove that the system

$$u_t = (Bu)_x + Bu_x + Cu$$

is energy conserving, that is,

$$\|u(\cdot, t)\| = \|u(\cdot, 0)\|.$$
4.6.2. Derive the exact form of the matrix C in the linearized Euler equations (4.6.5).
4.6.3. Consider the linearized one-dimensional Euler equations, where U = 0 and R is a constant. Prove that the system represents two "sound waves" moving with the velocities ±a(R).

4.7. THE SOLUTION OPERATOR AND DUHAMEL'S PRINCIPLE
In applications, the differential equations often contain a forcing function F(x, t) ∈ C^∞, which is 2π-periodic in every space dimension. In this case, the differential equations have the form

$$u_t = P\Big(x, t, \frac{\partial}{\partial x}\Big)u + F(x, t), \qquad t \ge t_0,$$
$$u(x, t_0) = f(x). \qquad (4.7.1)$$
The appropriate generalization of Definition 4.1.1 is as follows.

Definition 4.7.1. The problem (4.7.1) is well posed if, for every t₀ and every f ∈ C^∞(x), F ∈ C^∞(x, t),
1. There exists a unique solution u ∈ C^∞(x, t) which is 2π-periodic in every space dimension,
2. There are constants α and K independent of f, F, and t₀ such that

$$\|u(\cdot, t)\| \le Ke^{\alpha(t - t_0)}\Big(\|u(\cdot, t_0)\| + \max_{t_0\le\tau\le t}\|F(\cdot, \tau)\|\Big).$$

Thus, we require that we can solve Eq. (4.7.1) for every smooth F and that the solutions also depend continuously on F; that is, if ‖F‖ is small, then its effect on the solution is small. In this section, we want to show that Definitions 4.7.1 and 4.1.1 are equivalent. If we can solve the homogeneous equations (4.1.8) and the estimate (4.1.10) holds, then we can also solve the inhomogeneous system (4.7.1) and our estimate is valid. We begin by proving a generalization of Lemma 4.3.1 that will be used frequently throughout the rest of the book.
Lemma 4.7.1. Let α be a constant, β(t) a bounded function, and u(t) ∈ C¹ a function satisfying

$$\frac{du}{dt} \le \alpha u + \beta(t), \qquad t \ge t_0.$$

Then
$$|u(t)| \le e^{\alpha(t-t_0)}|u(t_0)| + \int_{t_0}^{t} e^{\alpha(t-\tau)}|\beta(\tau)|\,d\tau \le e^{\alpha(t-t_0)}|u(t_0)| + \varphi^*(\alpha, t-t_0)\max_{t_0\le\tau\le t}|\beta(\tau)|,$$

where

$$\varphi^*(\alpha, t) = \begin{cases}\dfrac{1}{\alpha}\big(e^{\alpha t} - 1\big), & \text{if } \alpha \ne 0,\\[4pt] t, & \text{if } \alpha = 0.\end{cases} \qquad (4.7.2)$$

Proof. By introducing the new variable
v = e^{-α(t-t₀)}u, we obtain

$$\frac{dv}{dt} = e^{-\alpha(t-t_0)}\Big(\frac{du}{dt} - \alpha u\Big) \le e^{-\alpha(t-t_0)}\beta(t);$$

that is,

$$|v(t)| \le |v(t_0)| + \int_{t_0}^{t} e^{-\alpha(\tau-t_0)}|\beta(\tau)|\,d\tau,$$

and the lemma follows.
We first consider the following system of ordinary differential equations:
$$\frac{du}{dt} = A(t)u + F(t), \qquad u(t_0) = u_0, \qquad (4.7.3)$$

where A(t) and F(t) are continuous matrix and vector functions, respectively. We now construct the solution of Eqs. (4.7.3) using solutions of the corresponding homogeneous problem

$$\frac{dv}{dt} = A(t)v, \qquad v(t_0) = v_0. \qquad (4.7.4)$$
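Lemma 4.7.1 above can be sanity-checked numerically. A sketch (not a proof): integrate du/dt = αu + β(t) with forward Euler and compare |u(t)| against the bound e^{αt}|u(0)| + φ*(α, t)·max|β|. The data α = 1, β(t) = sin t, u(0) = 1 are arbitrary test values.

```python
# Numerical check of the Gronwall-type bound in Lemma 4.7.1 (sample data).
import math

alpha, u0, T, N = 1.0, 1.0, 1.0, 100000
dt = T / N
u, t = u0, 0.0
for _ in range(N):
    u += dt * (alpha * u + math.sin(t))   # forward Euler step
    t += dt

phi_star = (math.exp(alpha * T) - 1.0) / alpha      # alpha != 0 branch of (4.7.2)
bound = math.exp(alpha * T) * abs(u0) + phi_star * 1.0  # max |sin| = 1
print(abs(u) <= bound)
```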
Let t ≥ t₀ be fixed. The problem (4.7.4) has a unique continuously differentiable solution v(t) for every v₀. This gives us a mapping v₀ ↦ v(t) of the initial vectors onto the solution vectors v(t). Because problem (4.7.4) is a linear differential equation, it easily follows that this mapping is linear. Thus, there exists an operator S(t, t₀), which we call the solution operator, such that

$$v(t) = S(t, t_0)v(t_0). \qquad (4.7.5)$$

Lemma 4.7.2. The solution operator has the following properties:

$$S(t_0, t_0) = I, \ \text{the identity}; \qquad S(t_2, t_0) = S(t_2, t_1)S(t_1, t_0), \quad t_0 \le t_1 \le t_2; \qquad (4.7.6)$$

and there are constants K and α such that

$$|S(t, t_0)| \le Ke^{\alpha(t - t_0)}. \qquad (4.7.7)$$
Proof. The equalities in Eqs. (4.7.6) follow immediately from the definition of S. If we consider the solution of Eqs. (4.7.4) at the initial time t₀, it is unchanged; that is, S(t₀, t₀) = I. If we consider solving Eqs. (4.7.4) up to time t₂, then we can alternatively think of beginning at t₀ and solving to t₂, or of beginning at t₀, solving for v(t₁), and using this value to solve on the interval from t₁ to t₂. Because Eqs. (4.7.4) have a unique solution, we must obtain the same value; that is, S(t₂, t₀) = S(t₂, t₁)S(t₁, t₀). Now consider

$$\frac{d}{dt}|v|^2 = \Big(\frac{dv}{dt}, v\Big) + \Big(v, \frac{dv}{dt}\Big) = (Av, v) + (v, Av) = (v, (A + A^*)v) \le 2\alpha|v|^2,$$

where 2α is an upper bound for |A + A^*|; that is, |(A + A^*)v| ≤ 2α|v| for all v. Therefore, by Lemma 4.7.1,

$$|v(t)|^2 \le e^{2\alpha(t - t_0)}|v(t_0)|^2,$$

and Eq. (4.7.7) follows.
In this case, we obtain K = 1 in Eq. (4.7.7), but α may be large. By allowing K > 1 and using a different technique, we can often get better estimates. As an example, consider

$$A = \begin{bmatrix} 0 & 100 \\ 1 & 0 \end{bmatrix},$$

in which case

$$A + A^* = \begin{bmatrix} 0 & 101 \\ 101 & 0 \end{bmatrix}, \qquad \alpha = 50.5;$$

that is,

$$|S(t, t_0)| \le e^{50.5(t - t_0)}. \qquad (4.7.8)$$

With

$$T = \begin{bmatrix} 10 & -10 \\ 1 & 1 \end{bmatrix},$$

we have

$$T^{-1}AT = \begin{bmatrix} 10 & 0 \\ 0 & -10 \end{bmatrix} =: \Lambda, \qquad |T|\cdot|T^{-1}| = 10.$$

For w = T^{-1}v we have

$$\frac{dw}{dt} = \Lambda w$$

and

$$\frac{d}{dt}|w|^2 = 2(w, \Lambda w) \le 20|w|^2.$$

Thus,

$$|w(t)| \le e^{10(t - t_0)}|w(t_0)|,$$

which yields

$$|S(t, t_0)| \le |T|\cdot|T^{-1}|\,e^{10(t - t_0)} = 10\,e^{10(t - t_0)},$$

which is to be compared with Eq. (4.7.8). Another important property of S is proved in the following lemma.
Lemma 4.7.3. Let u₀ be a constant vector. Then S(t₂, t₁)u₀ is a C¹ function of both t₂ and t₁. Also,

$$\frac{\partial}{\partial t}S(t, t_0) = A(t)S(t, t_0), \qquad (4.7.9)$$
$$\frac{\partial}{\partial t_0}S(t, t_0) = -S(t, t_0)A(t_0). \qquad (4.7.10)$$

Proof. We recall that the solutions v(t) of Eqs. (4.7.4) are C¹. S is continuous in its first argument because

$$\lim_{\Delta t\to 0}|(S(t+\Delta t, t_0) - S(t, t_0))v(t_0)| = \lim_{\Delta t\to 0}|v(t+\Delta t) - v(t)| = 0.$$

Continuity in the second argument follows from Eqs. (4.7.7). We have

$$\lim_{\Delta t_0\to 0}|S(t, t_0+\Delta t_0) - S(t, t_0)| = \lim_{\Delta t_0\to 0}|S(t, t_0+\Delta t_0)(I - S(t_0+\Delta t_0, t_0))| \le \lim_{\Delta t_0\to 0} Ke^{\alpha(t-(t_0+\Delta t_0))}|I - S(t_0+\Delta t_0, t_0)| = 0.$$

Let Δt > 0; then

$$\frac{S(t+\Delta t, t_0) - S(t, t_0)}{\Delta t}\,v(t_0) = \frac{v(t+\Delta t) - v(t)}{\Delta t},$$

so

$$\lim_{\Delta t\to 0}\frac{S(t+\Delta t, t_0) - S(t, t_0)}{\Delta t}\,v(t_0) = \frac{dv(t)}{dt} = A(t)v(t) = A(t)S(t, t_0)v(t_0)$$

exists, establishes Eq. (4.7.9), and shows that S is C¹ in its first argument. Now let Δt₁ > 0 and consider

$$v(t) = S(t, t_1 + \Delta t_1)v(t_1 + \Delta t_1) = S(t, t_1)v(t_1).$$

An argument analogous to the preceding one yields

$$\frac{dv(t_1)}{dt_1} = \lim_{\Delta t_1\to 0}\frac{v(t_1+\Delta t_1) - v(t_1)}{\Delta t_1} = A(t_1)v(t_1)$$

and, by continuity,

$$0 = \frac{\partial}{\partial t_1}\big(S(t, t_1)v(t_1)\big) = \frac{\partial S(t, t_1)}{\partial t_1}\,v(t_1) + S(t, t_1)A(t_1)v(t_1).$$

It follows that

$$\frac{\partial}{\partial t_1}S(t, t_1) = -S(t, t_1)A(t_1).$$

Thus, S is also C¹ in its second argument, Eq. (4.7.10) holds, and the lemma is proved.
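Returning to the 2×2 example above, the diagonalization and the constant |T|·|T⁻¹| = 10 can be verified directly. A sketch in pure Python (spectral norms of 2×2 matrices are computed in closed form):

```python
# Verify T^{-1} A T = diag(10, -10) and |T| * |T^{-1}| = 10 for the example
# A = [[0, 100], [1, 0]], T = [[10, -10], [1, 1]].
import math

A = [[0.0, 100.0], [1.0, 0.0]]
T = [[10.0, -10.0], [1.0, 1.0]]
detT = T[0][0] * T[1][1] - T[0][1] * T[1][0]          # = 20
Tinv = [[T[1][1] / detT, -T[0][1] / detT],
        [-T[1][0] / detT, T[0][0] / detT]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

Lam = matmul(Tinv, matmul(A, T))                       # should be diag(10, -10)

def spectral_norm(M):
    # |M| = sqrt(largest eigenvalue of M^T M), closed form for 2x2.
    G = matmul([[M[0][0], M[1][0]], [M[0][1], M[1][1]]], M)
    mean = (G[0][0] + G[1][1]) / 2.0
    rad = math.sqrt(((G[0][0] - G[1][1]) / 2.0) ** 2 + G[0][1] * G[1][0])
    return math.sqrt(mean + rad)

cond = spectral_norm(T) * spectral_norm(Tinv)
print(round(Lam[0][0], 6), round(Lam[1][1], 6), round(cond, 6))
```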
We can now prove the following theorem.

Theorem 4.7.1 (Duhamel's Principle). Let S be the solution operator of the homogeneous problem (4.7.4) introduced in Eq. (4.7.5). The solution of the inhomogeneous problem (4.7.3) can be expressed as

$$u(t) = S(t, t_0)u_0 + \int_{t_0}^{t} S(t, \tau)F(\tau)\,d\tau, \qquad (4.7.11)$$

and Eq. (4.7.7) yields

$$|u(t)| \le K\Big(e^{\alpha(t - t_0)}|u_0| + \varphi^*(\alpha, t - t_0)\max_{t_0\le\tau\le t}|F(\tau)|\Big), \qquad (4.7.12)$$

where φ*(α, t) is defined in Eq. (4.7.2).

Proof. S(t, τ) is a continuous function of τ and is continuously differentiable in t, so Eq. (4.7.11) is well defined and u(t) ∈ C¹. Using Lemma 4.7.3, we have
$$\frac{du}{dt} = A(t)S(t, t_0)u_0 + \int_{t_0}^{t} A(t)S(t, \tau)F(\tau)\,d\tau + F(t) = A(t)u(t) + F(t)$$

and

$$u(t_0) = S(t_0, t_0)u_0 = u_0.$$
The inequality (4.7.12) then follows immediately from Eqs. (4.7.7) and (4.7.11), which concludes the proof.
We now generalize this result to partial differential equations. Consider the system (4.7.1) and suppose that the corresponding homogeneous problem

$$v_t = P\Big(x, t, \frac{\partial}{\partial x}\Big)v, \qquad t \ge t_0,$$
$$v(x, t_0) = f(x) \qquad (4.7.13)$$

is well posed according to Definition 4.1.1. We can define a solution operator for Eqs. (4.7.13) in the same way as we did for the ordinary differential equation. Let t ≥ t₀ be fixed. For every periodic initial function f(x) ∈ C^∞(x) there is a unique solution v(x, t) ∈ C^∞(x), so we obtain a mapping f(x) ↦ v(x, t). This mapping is linear because the differential equation is linear, and it is bounded because we assumed that Eq. (4.1.10) holds. We can, therefore, define a solution operator S(t, t₀) such that

$$v(x, t) = S(t, t_0)f(x), \qquad \|S(t, t_0)\| \le Ke^{\alpha(t - t_0)}. \qquad (4.7.14)$$

We can use the same arguments that we used before for ordinary differential equations to show that the equalities (4.7.6) also hold for this solution operator. We now obtain

$$\frac{\partial}{\partial t}S(t, t_0)f(x) = P\Big(x, t, \frac{\partial}{\partial x}\Big)S(t, t_0)f(x),$$
$$\frac{\partial}{\partial t_0}S(t, t_0)f(x) = -S(t, t_0)P\Big(x, t_0, \frac{\partial}{\partial x}\Big)f(x), \qquad (4.7.15)$$

which corresponds to Lemma 4.7.3. By repeated differentiation, we may conclude that S(t, t₀)f(x) ∈ C^∞(x, t, t₀).
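The ODE representation (4.7.11) can be checked numerically in the scalar case du/dt = au + F(t), where S(t, τ) = e^{a(t−τ)}. A sketch with arbitrary sample values and constant forcing F ≡ c, for which the Duhamel integral has the closed form c(e^{at} − 1)/a:

```python
# Compare a midpoint-rule evaluation of the Duhamel integral with the
# exact solution of du/dt = a*u + c, u(0) = u0. Sample values only.
import math

a, c, u0, T, N = 0.7, 2.0, 1.5, 1.0, 20000
dtau = T / N
integral = sum(math.exp(a * (T - (j + 0.5) * dtau)) * c for j in range(N)) * dtau
u_duhamel = math.exp(a * T) * u0 + integral
u_exact = math.exp(a * T) * u0 + c * (math.exp(a * T) - 1.0) / a
print(abs(u_duhamel - u_exact) < 1e-6)
```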
As before, these results allow us to represent the solution of the inhomogeneous problem in terms of the solution operator for the homogeneous problem,

$$u(x, t) = S(t, t_0)f(x) + \int_{t_0}^{t} S(t, \tau)F(x, \tau)\,d\tau. \qquad (4.7.16)$$

Therefore, u(x, t) ∈ C^∞(x, t), and we obtain the estimate

$$\|u(\cdot, t)\| \le \|S(t, t_0)f(\cdot)\| + \Big(\max_{t_0\le\tau\le t}\|F(\cdot, \tau)\|\Big)\int_{t_0}^{t}\|S(t, \tau)\|\,d\tau$$
$$\le K\Big(e^{\alpha(t - t_0)}\|f(\cdot)\| + \varphi^*(\alpha, t - t_0)\max_{t_0\le\tau\le t}\|F(\cdot, \tau)\|\Big), \qquad (4.7.17)$$

where φ* is defined in Eq. (4.7.2). In summary, we have proved this theorem.

Theorem 4.7.2 (Duhamel's Principle for PDEs). If the solutions of the homogeneous system (4.7.13) satisfy the conditions of Definition 4.1.1, that is, if it is a well-posed problem, then the inhomogeneous problem is also well posed, its solution can be represented in terms of the solution operator for the homogeneous problem using Eq. (4.7.16), and its solution satisfies the estimate (4.7.17).
EXERCISES

4.7.1. Carry out each step in proving that Eq. (4.7.15) implies S(t, t₀)f(x) ∈ C^∞(x, t, t₀).

4.7.2. Derive the explicit form of the solution operator for u_t = au_x. Use this form to write down the solution of u_t = au_x + F(x, t).

4.8. GENERALIZED SOLUTIONS
Until now we have assumed that the data f(x) ∈ C^∞(x) and F(x, t) ∈ C^∞(x, t). In this case, we say that there is a classical solution. However, we can relax this condition by introducing generalized solutions. We again begin with the homogeneous problem (4.1.8) and assume that it is well posed. Let g(x) be a 2π-periodic function that belongs only to L₂. The function g(x) may not be smooth, but it can be approximated by a sequence of smooth functions f_ν(x) ∈ C^∞(x) such that

$$\lim_{\nu\to\infty}\|f_\nu(\cdot) - g(\cdot)\| = 0.$$
We can then solve Eq. (4.1.8) for the initial functions f_ν(x) and obtain a sequence of solutions

$$v_\nu(x, t) = S(t, t_0)f_\nu(x).$$

This sequence converges to a limit function v(x, t), because

$$\|v_\nu(\cdot, t) - v_\mu(\cdot, t)\| = \|S(t, t_0)(f_\nu(\cdot) - f_\mu(\cdot))\| \le Ke^{\alpha(t - t_0)}\|f_\nu(\cdot) - f_\mu(\cdot)\| \to 0 \quad \text{as } \nu, \mu \to \infty.$$

Also, v(x, t) is independent of the sequence {f_ν}; that is, if {f̃_ν} is another sequence such that lim f̃_ν = g and ṽ_ν(x, t) are the corresponding solutions of Eq. (4.1.8), then

$$\|v_\nu(\cdot, t) - \tilde v_\nu(\cdot, t)\| = \|S(t, t_0)(f_\nu(\cdot) - \tilde f_\nu(\cdot))\| \le Ke^{\alpha(t - t_0)}\big(\|f_\nu(\cdot) - g(\cdot)\| + \|\tilde f_\nu(\cdot) - g(\cdot)\|\big),$$

and we see that lim_{ν→∞} v_ν = lim_{ν→∞} ṽ_ν. We call v(x, t) the generalized solution of Eq. (4.1.8). This process is well known in functional analysis. S(t, t₀) is a bounded linear operator in L₂, which is densely defined. Therefore, it can be uniquely extended to all of L₂. We have just described this extension process. We now look at an example. Consider the differential equation
$$v_t + v_x = 0, \qquad 0 \le t, \qquad (4.8.1)$$

with piecewise constant periodic initial data

$$g(x) = \begin{cases} 0, & \text{for } 0 \le x \le \tfrac12\pi,\\ 1, & \text{for } \tfrac12\pi < x < \tfrac32\pi,\\ 0, & \text{for } \tfrac32\pi \le x \le 2\pi. \end{cases} \qquad (4.8.2)$$
We want to show that its generalized solution is

$$v(x, t) = g(x - t);$$

that is, it is a square wave which travels with speed 1 to the right. We approximate g(x) by f_ν(x) ∈ C^∞(x), which are slightly rounded at the corners and converge to g(x) (see Figure 4.8.1). Then

$$v_\nu(x, t) = f_\nu(x - t),$$

and it is clear that

$$\lim_{\nu\to\infty} v_\nu(x, t) = g(x - t) = v(x, t).$$

Figure 4.8.1. Smooth approximations f_ν(x) of the discontinuous initial data g(x).
Using Duhamel's principle, the same process can be applied to the inhomogeneous problem. The only assumption we need to make regarding the forcing function G(x, t) is that we can find a sequence {F_ν(x, t)}, where F_ν(x, t) ∈ C^∞(x, t) for each ν, such that

$$\lim_{\nu\to\infty}\;\sup_{0\le t\le T}\|F_\nu(\cdot, t) - G(\cdot, t)\| = 0$$

in every finite time interval 0 ≤ t ≤ T. This extension process is very powerful, because we only need to prove existence theorems for smooth solutions. Then one applies the closure process above to extend the results.
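The closure process of Figure 4.8.1 can also be illustrated numerically. In the sketch below, the smoothing profile based on tanh is a hypothetical choice (not the book's construction); it rounds the corners of the square wave g(x) of Eq. (4.8.2), and the discrete L₂ error ‖f_ν − g‖ decreases as ν grows.

```python
# Smooth approximations of the square wave (4.8.2) and their L2 errors.
import math

def g(x):
    return 1.0 if math.pi / 2 < x < 3 * math.pi / 2 else 0.0

def f(nu, x):
    # Hypothetical mollified profile: rounded steps of width ~ 1/nu.
    return 0.5 * (math.tanh(nu * (x - math.pi / 2))
                  - math.tanh(nu * (x - 3 * math.pi / 2)))

N = 20000
h = 2 * math.pi / N

def l2_err(nu):
    return math.sqrt(sum((f(nu, j * h) - g(j * h)) ** 2 for j in range(N)) * h)

errs = [l2_err(nu) for nu in (4, 16, 64, 256)]
print(all(e2 < e1 for e1, e2 in zip(errs, errs[1:])))
```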
EXERCISES

4.8.1. Construct explicit functions f_ν(x) such that lim_{ν→∞} f_ν(x) = g(x), where g(x) is defined in Eq. (4.8.2).
4.8.2. Carry out each step in the definition of the generalized solution for

$$u_t = P\Big(\frac{\partial}{\partial x}\Big)u + F, \qquad u(x, 0) = f(x),$$

where F and f are not smooth functions.
4.9. WELL-POSEDNESS OF NONLINEAR PROBLEMS
We discuss nonlinear problems of the form

$$u_t = P\Big(x, t, u, \frac{\partial}{\partial x}\Big)u + F(x, t), \qquad u(x, 0) = f(x),$$
$$P\Big(x, t, u, \frac{\partial}{\partial x}\Big)u = \sum_{i,j=1}^{d}\frac{\partial}{\partial x^{(i)}}\Big(A_{ij}(x, t, u)\frac{\partial u}{\partial x^{(j)}}\Big) + \sum_{i=1}^{d}B_i(x, t, u)\frac{\partial u}{\partial x^{(i)}} + C(x, t, u)u; \qquad (4.9.1)$$

that is, the coefficients are functions of u but not of the derivatives of u. This is no real restriction, however. Consider, for example,

$$u_t = (u_{xx})^2. \qquad (4.9.2)$$

Let v = u_x, w = u_xx. Differentiating Eq. (4.9.2) gives us
$$\begin{bmatrix}u\\v\\w\end{bmatrix}_t = \frac{\partial}{\partial x}\left(\begin{bmatrix}0&0&0\\0&0&0\\0&0&2w\end{bmatrix}\frac{\partial}{\partial x}\begin{bmatrix}u\\v\\w\end{bmatrix}\right) + \begin{bmatrix}0&0&0\\0&0&2w\\0&0&0\end{bmatrix}\frac{\partial}{\partial x}\begin{bmatrix}u\\v\\w\end{bmatrix} + \begin{bmatrix}0&0&w\\0&0&0\\0&0&0\end{bmatrix}\begin{bmatrix}u\\v\\w\end{bmatrix},$$

which is of the form found in Eqs. (4.9.1). In fact, any second-order nonlinear system with smooth coefficients can be transformed into a system of the form found in Eqs. (4.9.1). We also assume that the coefficients and data are real and that we are only interested in real solutions. The following definition corresponds to Section 4.3.
Definition 4.9.1. The system (4.9.1) is called symmetric hyperbolic if all A_ij ≡ 0 and if, for all x, t, and u, the matrices B_i(x, t, u), i = 1, 2, ..., d, are Hermitian.
If the A_ij = A_ij(x, t) do not depend on u, then we can define strongly parabolic systems in the same way as we did in Section 4.6; that is, the inequality (4.6.6) holds. If the A_ij depend on u, then it is generally impossible to define parabolicity without reference to a particular solution. Consider, for example,

$$u_t = u\,u_{xx}, \qquad u(x, 0) = f(x).$$

If f(x) = -1 + εg, 0 < ε ≪ 1, then the solutions will behave badly, because we are close to the backward heat equation. On the other hand, if f = 1 + εg, then the solution will stay close to 1, and the equation is strongly parabolic at the solution.

Definition 4.9.2. Let u(x, t) be a solution of Eqs. (4.9.1). We call the system strongly parabolic at the solution if the inequality (4.6.6) holds. Here A_ij = A_ij(x, t, u).
We can no longer expect that solutions will exist for all time. This difficulty occurs already for ordinary differential equations. The solution of

$$u_t = u^2, \qquad u(0) = u_0,$$

is given by

$$u(t) = \frac{u_0}{1 - u_0 t}.$$

For u₀ > 0, we have lim_{t→1/u₀} u(t) = ∞. If u₀ < 0, then the solution exists for all time and converges to zero as t → ∞. In Chapter 8, we consider
$$u_t + uu_x = 0$$

and show that its solutions stay bounded but that u_x may become arbitrarily large when t approaches some time t₀. In general, only local existence results are known; that is, for given smooth initial data f(x), there exists a finite time interval 0 ≤ t ≤ T such that Eqs. (4.9.1) have a smooth solution. Here T depends on f. These existence results can be obtained by the iteration
$$u_t^{(n+1)} = P\Big(x, t, u^{(n)}(x, t), \frac{\partial}{\partial x}\Big)u^{(n+1)} + F, \qquad n = 0, 1, \ldots,$$
$$u^{(n+1)}(x, 0) = f(x), \qquad u^{(0)}(x, t) = f(x). \qquad (4.9.3)$$

Thus, one solves a sequence of linear problems and proves that the sequence of solutions u⁽ⁿ⁾(x, t) converges to a limit function u(x, t), which is the solution of the nonlinear problem (4.9.1). Bounds on u⁽ⁿ⁾ and its derivatives that do not depend on n are crucial in this process. Therefore, the linear problems must be well posed.
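The iteration (4.9.3) can be illustrated on the ODE analogue u_t = u², where each step solves a *linear* problem in closed form: du⁽ⁿ⁺¹⁾/dt = u⁽ⁿ⁾(t)u⁽ⁿ⁺¹⁾ gives u⁽ⁿ⁺¹⁾(t) = u₀ exp(∫₀ᵗ u⁽ⁿ⁾(s) ds). A sketch (the data u₀ = 1 and the interval 0 ≤ t ≤ 0.4 are arbitrary sample choices) comparing the iterates with the exact solution u₀/(1 − u₀t):

```python
# Picard-type iteration for du/dt = u^2 via a sequence of linear problems.
import math

u0, T, N, iters = 1.0, 0.4, 2000, 40
dt = T / N
u = [u0] * (N + 1)                       # u^(0)(t) = f = u0
for _ in range(iters):
    new, integral = [u0], 0.0
    for j in range(N):
        integral += 0.5 * (u[j] + u[j + 1]) * dt   # trapezoidal rule
        new.append(u0 * math.exp(integral))        # solve the linear step
    u = new

err = max(abs(u[j] - u0 / (1.0 - u0 * j * dt)) for j in range(N + 1))
print(err < 1e-4)
```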
Theorem 4.9.1. If all the linear problems above are symmetric hyperbolic, or strongly parabolic, or mixed symmetric hyperbolic–strongly parabolic, then u⁽ⁿ⁾ and its derivatives can be bounded independently of n in some time interval 0 ≤ t ≤ T, and the nonlinear problem has a unique smooth solution.
For symmetric hyperbolic systems, the last result is satisfactory. We do not need to know u⁽ⁿ⁾(x, t) to be able to decide whether the system is symmetric hyperbolic. This follows from the algebraic structure of the equations. For strongly parabolic systems the same is true if the second-order terms are linear, that is, if the coefficients A_ij do not depend on u. If the A_ij depend on u, then we need to know u⁽ⁿ⁾(x, t) to decide whether the systems (4.9.3) satisfy the required condition. However, the next theorem tells us that we need only know that the condition is satisfied at t = 0.

Theorem 4.9.2. Assume that the system

$$w_t = P\Big(x, t, f(x), \frac{\partial}{\partial x}\Big)w$$

is strongly parabolic. Then there is a time interval 0 ≤ t ≤ T, T > 0, such that the systems (4.9.3) have the same property, and the nonlinear system (4.9.1) is strongly parabolic at the solution.

One often solves partial differential equations numerically, although no analytic existence results are known. In this case, one would like to use the numerical results to infer that the analytic solution exists and that the numerical solution is close to that solution. We can proceed in the following way. We interpolate the numerical solution w and show that the interpolant ū satisfies a perturbed differential equation
$$\bar u_t = P\Big(x, t, \bar u, \frac{\partial}{\partial x}\Big)\bar u + \bar F, \qquad \bar u(x, 0) = \bar f. \qquad (4.9.4)$$

Estimates for F − F̄ and f − f̄ can be obtained from the numerical solution and, hopefully, they are small. This depends on the accuracy of the method and how smooth the numerical solution is, that is, on bounds of w and its divided differences. Thus, by numerical calculation we have established the existence of a solution of Eq. (4.9.4). From this, we want to infer the existence of a solution of Eq. (4.9.1) and show that sup_t ‖u − ū‖_N is small when sup_t ‖F − F̄‖_N and ‖f − f̄‖_N are small. Here ‖·‖_N denotes a suitable norm, which, in general, depends on derivatives; that is,

$$\|f - \bar f\|_N^2 = \sum_{|\nu|\le p}\|D^\nu(f - \bar f)\|^2.$$

Typically p = [d/2] + 1, where d is the number of space dimensions. We can now make the following definition.
Definition 4.9.3. Let ū be a smooth solution of Eqs. (4.9.1). We call the nonlinear initial value problem well posed for a time interval 0 ≤ t ≤ T at the solution ū if
1. there is a neighborhood 𝒩 of (F̄, f̄), defined by a suitable norm ‖·‖_N:

$$\sup_t \|F - \bar F\|_N < \eta, \qquad \|f - \bar f\|_N < \eta,$$

where η > 0 is a sufficiently small constant, such that Eqs. (4.9.1) have a smooth solution for all (F, f) ∈ 𝒩, and
2. there is a constant K such that

$$\sup_{0\le t\le T}\|u - \bar u\|_N \le K\eta.$$
One can therefore prove the following.

Theorem 4.9.3. If the system (4.9.4) is symmetric hyperbolic, or strongly parabolic at ū, or mixed symmetric hyperbolic–strongly parabolic, then it is well posed at ū.

The error w = u − ū satisfies, to first approximation, a linear system

$$w_t = P_1\Big(x, t, \bar u, \frac{\partial}{\partial x}\Big)w + F - \bar F, \qquad w(x, 0) = f - \bar f, \qquad (4.9.5)$$

which we obtain by linearizing Eqs. (4.9.1) at ū. Thus, the bounds on w depend on the growth behavior of the solution operator of Eqs. (4.9.5).

EXERCISES

4.9.1. Consider the nonlinear Euler equations (4.6.4). Prove that the corresponding linearized system is not hyperbolic with the equation of state p = G(ρ) if the initial data are such that

$$\frac{dG}{d\rho} < 0 \quad \text{at } t = 0.$$
BIBLIOGRAPHIC NOTES
Well-posedness of the Cauchy problem for linear partial differential equations was defined by Hadamard (1921). He used a weaker form than the one used here. Petrovskii (1937) gave a general analysis of PDEs with constant coefficients that are well posed in Hadamard's sense. Well-posedness in our sense is discussed in Kreiss (1964) and in Kreiss and Lorenz (1989). We have concentrated upon the estimates of the solutions in terms of the initial data. In the linear case this immediately leads to uniqueness. The existence of a solution is of course the most fundamental part of well-posedness, and we have indicated how this can be assured by using difference approximations, for which the existence of solutions is trivial. In fact, it is often sufficient to discretize the differential equation in space. A system of ODEs of the type (4.7.4) is then obtained, and the classical theory for ODEs can be used to guarantee existence of a solution; see, for example, Coddington and Levinson (1955). The goal of our discussion has been to present the underlying theory for time-dependent PDEs, which is important for their numerical solution. A much more comprehensive treatment can be found in Kreiss and Lorenz (1989). We have formulated general PDEs as u_t = P(∂/∂x)u. Many problems give rise to equations with higher order time derivatives. These equations can always be rewritten in first-order form by introducing v = u_t, w = u_tt, ..., as new dependent variables and then be treated by the theory above. The theory can also be developed for the original higher order form. An example of this is the paper by Friedrichs (1954), which deals with symmetric hyperbolic PDEs of second order.
5 STABILITY AND CONVERGENCE FOR NUMERICAL APPROXIMATIONS OF LINEAR AND NONLINEAR PROBLEMS
In this chapter, we define the basic concepts used for the analysis of approximations of general initial value problems. We focus upon difference methods, but the theory developed can be applied to other methods as well.
5.1. STABILITY AND CONVERGENCE
Consider the periodic initial value problem for a general linear system of partial differential equations

$$u_t = P\Big(x, t, \frac{\partial}{\partial x}\Big)u, \qquad u(x, 0) = f(x). \qquad (5.1.1)$$

Here x and u are vectors with d and m components, respectively. We assume that f and the coefficients are 2π-periodic, that is, 2π-periodic in all space directions, and we are interested in 2π-periodic solutions. As discussed earlier, we introduce gridpoints by

$$x_j = jh, \qquad h = 2\pi/(N+1), \qquad j = 0, \pm 1, \pm 2, \ldots, \qquad N \text{ a natural number},$$
and a time step k > 0. For convenience, we always assume that k = λh^p, where λ is a constant greater than 0 and p is the order of the differential operator in space. A general difference approximation is of the form

$$Q_{-1}v^{n+1} = \sum_{\sigma=0}^{q} Q_\sigma v^{n-\sigma}, \qquad n = q, q+1, \ldots,$$
$$v^\sigma = f^{(\sigma)}, \qquad \sigma = 0, 1, \ldots, q. \qquad (5.1.2)$$

For simplicity, we write the gridfunction v^n without a subscript. It is to be understood that Eqs. (5.1.2) hold in every gridpoint. The Q_σ are difference operators with 2π-periodic matrix coefficients, which may depend on x, t, k, h. The initial data are given 2π-periodic functions, and we are interested in 2π-periodic solutions. We assume that the operator Q_{-1} is uniformly bounded and has a uniformly bounded inverse Q_{-1}^{-1} as h, k → 0, so that we can advance the solution step by step in time. For theoretical purposes, it is convenient to rewrite Eqs. (5.1.2) as a one-step scheme using the companion matrix. By introducing the vectors

$$V^n = (v^{n+q}, v^{n+q-1}, \ldots, v^n)^T, \qquad n = 0, 1, \ldots,$$
$$F = (f^{(q)}, f^{(q-1)}, \ldots, f^{(0)})^T, \qquad (5.1.3)$$

Eqs. (5.1.2) take the form

$$V^{n+1} = QV^n, \qquad V^0 = F, \qquad (5.1.4)$$

where

$$Q = \begin{bmatrix} Q_{-1}^{-1}Q_0 & Q_{-1}^{-1}Q_1 & \cdots & Q_{-1}^{-1}Q_q \\ I & 0 & \cdots & 0 \\ & \ddots & \ddots & \vdots \\ 0 & & I & 0 \end{bmatrix} \qquad (5.1.5)$$

is the companion matrix. For example, the leap-frog scheme in Eq. (2.2.1) can be written as
$$\begin{bmatrix} v^{n+1} \\ v^n \end{bmatrix} = \begin{bmatrix} 2kD_0 & I \\ I & 0 \end{bmatrix}\begin{bmatrix} v^n \\ v^{n-1} \end{bmatrix}. \qquad (5.1.6)$$

Because it corresponds to the solution operator S(t, t₀) for differential equations, we define the discrete solution operator S_h by

$$V^n = S_h(t_n, t_\nu)V^\nu. \qquad (5.1.7)$$

The arguments k, h will be omitted. S_h can be expressed explicitly by

$$S_h(t_n, t_\nu) = \prod_{\mu=1}^{n-\nu} Q(t_{n-\mu}) = Q(t_{n-1})Q(t_{n-2})\cdots Q(t_\nu), \qquad S_h(t_n, t_n) = I.$$

If Q is independent of t, then we have

$$S_h(t_n, t_\nu) = Q^{n-\nu}. \qquad (5.1.8)$$

As a norm we use

$$\|V\|_h = \Big(\sum_j |V_j|^2\, h^d\Big)^{1/2} \qquad (5.1.9)$$

and denote by ‖S_h‖_h the corresponding operator norm. The concept of stability is the discrete analogue of well-posedness. The existence of solutions is easy to verify. We must obtain the needed estimates.
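Before formalizing stability, a concrete sketch: the leap-frog scheme of (5.1.6), applied to u_t = u_x on a periodic grid with λ = k/h = 0.5, keeps the discrete L₂ norm bounded. The grid size, mesh ratio, and initial data below are arbitrary sample choices; the two starting levels are taken from the exact solution u(x, t) = sin(x + t).

```python
# Leap-frog v^{n+1} = v^{n-1} + 2k D0 v^n for u_t = u_x, periodic grid.
import math

N = 100                       # gridpoints: x_j = j*h
h = 2 * math.pi / N
lam = 0.5                     # mesh ratio k/h (stable for lam < 1)
k = lam * h

v_old = [math.sin(j * h) for j in range(N)]          # level t = 0
v = [math.sin(j * h + k) for j in range(N)]          # level t = k

def l2(w):
    return math.sqrt(sum(x * x for x in w) * h)

init_norm = l2(v_old)
n_steps = int(round(1.0 / k))                        # advance to t ~ 1
for _ in range(n_steps):
    # 2k * D0 v = lam * (v_{j+1} - v_{j-1})
    v_new = [v_old[j] + lam * (v[(j + 1) % N] - v[(j - 1) % N])
             for j in range(N)]
    v_old, v = v, v_new

ok = l2(v) <= 1.1 * init_norm
print(ok)
```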
Definition 5.1.1. The difference approximation (5.1.4) is called stable for 0 < h ≤ h₀ if there are constants α_s and K_s such that for all h

$$\|S_h(t_n, t_\nu)\|_h \le K_s e^{\alpha_s(t_n - t_\nu)}. \qquad (5.1.10)$$

This stability definition requires that the estimate

$$\|v^n\|_h \le K_s e^{\alpha_s t_n}\|f\|_h \qquad (5.1.11)$$

hold for all initial functions f. The definition includes an exponential factor, because we must be able to treat approximations to differential equations with exponentially growing solutions such as, for example, u_t = u_x + u. On any finite time interval [0, T], the growth factor is bounded by K_s e^{α_s T}, which is bounded independently of h, k, f. Stability guarantees that the solutions are bounded as h → 0.
Note the difference between the growth factors e^{αt_n} and e^{αn}. The first is accepted, the second is not. For a given time step k₀, one can always write

$$e^{\alpha n} = e^{\beta t_n}, \qquad \beta = \alpha/k_0.$$

However, when the mesh is refined to k = k₀/2, it takes 2n steps to reach the same time value, and the growth is worse. This is the worst type of instability normally encountered. Another typical form of instability is

$$\|S_h(t_n, 0)\|_h \sim n^q, \qquad q > 0, \qquad (5.1.12)$$

which is a weaker instability, but still prohibited by our definition. Next consider the case where the difference approximation includes a forcing function:
$$Q_{-1}v^{n+1} = \sum_{\sigma=0}^{q} Q_\sigma v^{n-\sigma} + kF^n, \qquad n = q, q+1, \ldots,$$
$$v^\sigma = f^{(\sigma)}, \qquad \sigma = 0, 1, \ldots, q. \qquad (5.1.13)$$

We again write Eq. (5.1.13) as a one-step method

$$V^{n+1} = QV^n + kF^n, \qquad F^n = (Q_{-1}^{-1}F^{n+q}, 0, \ldots, 0)^T. \qquad (5.1.14)$$
The discrete version of Duhamel's principle is given in Lemma 5.1.1.

Lemma 5.1.1. The solution of Eqs. (5.1.14) can be written in the form

$$V^n = S_h(t_n, 0)V^0 + k\sum_{p=0}^{n-1} S_h(t_n, t_{p+1})F^p. \qquad (5.1.15)$$

Proof. Let V be defined by Eq. (5.1.15). Then

$$V^{n+1} = S_h(t_{n+1}, 0)V^0 + k\sum_{p=0}^{n} S_h(t_{n+1}, t_{p+1})F^p.$$

Observing that

$$S_h(t_{n+1}, t_\mu) = Q(t_n)S_h(t_n, t_\mu), \qquad \mu < n + 1,$$

we obtain

$$V^{n+1} = Q(t_n)\Big(S_h(t_n, 0)V^0 + k\sum_{p=0}^{n-1} S_h(t_n, t_{p+1})F^p\Big) + kF^n = Q(t_n)V^n + kF^n.$$

Because V⁰ = F, V^n is the solution of Eqs. (5.1.14). This proves the lemma.
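The scalar case of Lemma 5.1.1, with Q = q constant so that S_h(t_n, t_p) = q^{n−p}, can be verified directly. A sketch with arbitrary sample values for q, k, and the forcing:

```python
# Check the discrete Duhamel formula for v^{n+1} = q*v^n + k*F^n:
# v^n = q^n v^0 + k * sum_{p=0}^{n-1} q^{n-1-p} F^p.
import math

q, k, v0, n = 0.97, 0.01, 1.0, 50
F = [math.sin(0.3 * p) for p in range(n)]      # arbitrary forcing values

v = v0
for p in range(n):
    v = q * v + k * F[p]                       # step the recurrence

duhamel = q ** n * v0 + k * sum(q ** (n - 1 - p) * F[p] for p in range(n))
print(abs(v - duhamel) < 1e-12)
```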
We can now estimate the solution of Eqs. (5.1.14) in terms of f and F.

Theorem 5.1.1. Assume that the difference approximation is stable. Then the solution of Eq. (5.1.14) satisfies the estimate

$$\|V^n\|_h \le K_s\Big(e^{\alpha_s t_n}\|V^0\|_h + \varphi^*(\alpha_s, t_n)\max_{0\le\nu\le n-1}\|F^\nu\|_h\Big),$$

where

$$\varphi^*(\alpha, t) = \begin{cases}\dfrac{1}{\alpha}\big(e^{\alpha t} - 1\big), & \alpha \ne 0,\\[4pt] t, & \alpha = 0,\end{cases}$$

as in Eq. (4.7.2).

Proof. Equation (5.1.15) gives us

$$\|V^n\|_h \le \|S_h(t_n, 0)\|_h\|V^0\|_h + \Big(\max_{0\le\nu\le n-1}\|F^\nu\|_h\Big)\sum_{\nu=0}^{n-1}\|S_h(t_n, t_{\nu+1})\|_h\,k,$$

and the theorem follows from Eq. (5.1.10).
This theorem shows that it is sufficient to consider homogeneous approximations. The proper estimates for inhomogeneous equations follow from stability. Note that the factor k multiplying F^ν is lost in the estimate: there is a step-by-step accumulation of F^ν values, so the amplification is of order n ∼ k^{-1}.
We can now discuss the influence of rounding errors, which are committed in each step. They can be interpreted in terms of a slight perturbation of the difference equation; we are actually computing the solution of

$$Q_{-1}\tilde v^{n+1} = \sum_{\sigma=0}^{q} Q_\sigma \tilde v^{n-\sigma} + e^n \qquad (5.1.16)$$

instead of Eqs. (5.1.2). Here e^n is of the order of the machine precision. Subtracting Eqs. (5.1.2) from Eq. (5.1.16), we get, for w^n = ṽ^n − v^n,

$$Q_{-1}w^{n+1} = \sum_{\sigma=0}^{q} Q_\sigma w^{n-\sigma} + e^n. \qquad (5.1.17)$$

Therefore, by Theorem 5.1.1,

$$\|w^n\|_h \le K_s\,\varphi^*(\alpha_s, t_n)\max_{0\le\nu\le n-1}\Big\|\frac{e^\nu}{k}\Big\|_h. \qquad (5.1.18)$$

If the error due to truncation is of the order δ, then we require e/k ≪ δ; otherwise there is danger that the rounding error will be dominant. For modern computers with 64-bit words, rounding errors are not often a problem.
We will next study the effect of perturbing each operator Q_σ by an operator of order 𝒪(k). It is essential to understand the effect of these perturbations. For example, if v^{n+1} = Q₀v^n approximates u_t = u_x, then v^{n+1} = (Q₀ + kI)v^n approximates u_t = u_x + u. It is convenient to know that such perturbations of order k do not cause instability, because we can often simplify the analysis by neglecting terms of order k. Let {R_σ}, σ = -1, 0, ..., q, be operators that are uniformly bounded,

$$\|R_\sigma\|_h \le K_R. \qquad (5.1.19)$$

Instead of Eqs. (5.1.2), we consider

$$(Q_{-1} + kR_{-1})v^{n+1} = \sum_{\sigma=0}^{q}(Q_\sigma + kR_\sigma)v^{n-\sigma}. \qquad (5.1.20)$$
We now prove that Eq. (5.1.20) is stable if the unperturbed approximation (5.1.2) is stable. For the proof, we need the following lemma.

Lemma 5.1.2. Let Q be a bounded operator. For ‖kQ‖_h < 1, the inverse (I + kQ)^{-1} exists; therefore

$$(I + kQ)^{-1} = I - k(I + kQ)^{-1}Q =: I + kQ_1, \qquad \text{where } \|Q_1\|_h \le \frac{\|Q\|_h}{1 - \|kQ\|_h}.$$

Proof. We have

$$\inf_{\|v\|_h = 1}\|(I + kQ)v\|_h \ge 1 - \|kQ\|_h > 0.$$

Therefore the inverse exists with ‖(I + kQ)^{-1}‖_h ≤ (1 − ‖kQ‖_h)^{-1}, and the desired estimate and the representation follow.

Theorem 5.1.2. Assume that the approximation (5.1.2) is stable and that Eq. (5.1.19) holds. Then the perturbed approximation (5.1.20) is stable.

Proof. Using the last lemma, we can write Eq. (5.1.20) in the form
vn+ 1
=
where R i s uniformly bounded. Let becomes
(Q + kR)vn , wn
=
e-f3tn vn , (3
(5. 1 .2 1 ) > O. Then Eq. (5. 1 .2 1 ) (5. 1 .22)
where
Duhamel's principle can be applied to Eq. (5. 1 .22). The solution operator Sh (tn , tv), corresponding to e-f3kQ, is clearly e-f3k(n - v ) Sh (tn , t v ), where Sh is the solution operator for Eq. (5. 1 .4). Thus,
164
STABILITY AND CONVERGENCE FOR NUMERICAL APPROXIMATIONS
Now consider Eq. (5. 1 .22) for 0 S; JI S; n, and let
By Theorem 5 . 1 . 1 ,
The function cp * (a, t ) , defined i n Theorem 5 . 1 . 1 , decreases as a decreases. Hence, by choosing {3 large enough, the factor multiplying II wI' IIh can be made less than � . Therefore, for (3 sufficiently large; that is,
This proves the theorem. This theorem shows that we can neglect terms of order k in the stability analysis. However, as we have seen in Section 2.2 when discussing the leap frog scheme, terms of order k can play an important role. For practical purposes, a more refined stability definition is useful. Definition 5.1.2. Assume that the solution operator S(t, to) of the continuous problem satisfies the estimate
We call the difference approximation strictly stable for 0 < h S; ho, if as
As an example we consider
ut u(x, O)
= Ux =
-
f(x).
u,
S;
a + &(h).
165
STABILITY AND CONVERGENCE
From Section
2.2,
However, the leap-frog approximation
generates spurious solutions such that
The approximation is stable but not strictly stable. The modified scheme
(I + k)u"+ 1
=
2kDou" + (I
-
k)u"-I
is strictly stable. Next we will define the order of accuracy which is a measure of how well the difference scheme (5.1.2) approximates the differential equation (5. 1 . 1). Definition 5.1.3. Let u(x , t) be a smooth solution of Eq. (5. 1. 1). Then the local truncation error is defined by q
Q- I u(xj , tn+ d
-
L ,, = 0
Q"u(Xj, tn- ,, ).
(5. 1 .23)
The difference approximation (5. 1.2) is accurate of order (PI , P2) if, for all suf ficiently smooth solutions u(x, t), there is a function L(tn) such that, for h :s; ho ,
(5. 1 .24) where L(tn ) is bounded on every finite time interval. If P I > 0, P2 > 0, then Eq. (5. 1.2) is called consistent. Since we have assumed a relationship between k and h, the right-hand side of Eq. (5. 1 .24) is only a function of h. If k MP, then we say that Eq. (5. 1.2) has order of accuracy min (PI ,PP2). Several calculations carried out in Chapter 2 illustrate how one determines the order of accuracy. One always uses Taylor series expansions. For example, for the leap-frog scheme applied to u/ = ux, we obtain =
166
STABILITY AND CONVERGENCE FOR NUMERICAL APPROXIMATIONS =
U(Xj, tn+ i ) - 2kDou(xj , tn ) - u(xj , tn- d hj, k2 4 4 h2 7j = 3 uxxx (Xj, tn ) + 3 Uttt (Xj, tn) + (9'(h , k ), -
(S. 1 .2S)
that is, the approximation is accurate of order (2,2). The DuFort-Frankel approximation (2.S. 1 2) for Ut = Uxx has its truncation error (divided by 2) defined in Eq. (2.S. 1S). For consistency, the condition (2.S. 16), k = ch i +o , 0 > 0, is required. Then the truncation error has the form
!..-
2
=
c2 h2o Utt
_
h2
12
Uxxxx
40 )
ff.V(h2+ + (/1
•
(S.1 .26)
If 0 > 0, the scheme is consistent, and the order of accuracy is min (20, 2). (According to our standard assumption, we would choose 0 = 1, since it is a parabolic equation.) For some approximations, it is more convenient to carry out the Taylor expansions about a point that is not in the grid. The Crank-Nicholson scheme (2.S. 19), approximating Ut = Un. has its center point at
Any differentiable function i.p(t) satisfies
Using Eqs.
(2.4. 1) to (2.4.6), we get
U(Xj, tn+ I ) - U(Xj, tn ) k
2 D+ D
1
-
-
[U(Xj, tn+i )
U(Xj , tn)] k2 k2 u (x., t. ) - 4 Uxxt/ (X', t. ) ttt 24
Ut (X ., t. ) - uxAx., t. ) + h2 4 - 12 Uxxxx (X t ) + (9'(h \ '"
+
+
k4 ).
Therefore, the order of accuracy is (2,2). We also need that the initial data of the difference approximation are con sistent with the solution of the differential equation. For one-step schemes, the correct data, f (0), can be used; but, for multistep schemes, the first q steps must
167
STABILITY AND CONVERGENCE
be computed using some other method. One-step schemes are usually used to compute these values. Definition 5.1.4. The initial data have the order of accuracy (P I , P2 ) if, for all
sufficiently smooth solutions u(x, t) of Eq. (5. 1. 1),
0, 1 , . . . , q.
(5. 1 .27)
The initial data are consistent if P I > 0, P 2 > 0. As an example, we use again the leap-frog approximation (2.2. 1 ) for Ut = Ux • By Eq. (5. 1 .25), the order of accuracy is (2,2). The scheme is stable for k/h < 1 . The only remaining question is how to compute V i . B y Taylor expansion
u(Xj, O) + kut(xj, O) + &(k2 ) u(Xj, O) + kuAxj, O) + &(k2 ) (I + kDo)u(xj, O) + &(kh2 + k2 ). Therefore, if we use
vJ vJ
u(Xj, O),
(I + kDo)vJ ,
then the initial conditions for the difference approximation are accurate of order (2,2). Note that the Euler scheme, used here for the first step, is only first-order accurate in time and actually unstable, but it has a local truncation error of order k2 . This is sufficient, because the scheme is applied for only one step. For a multistep scheme, this holds in general: the initial data can be generated by a difference scheme with one order less accuracy. Consistency does not guarantee that the discrete solutions of the approxima tion will converge to the solutions of the differential equation as the mesh size tends to zero. It was shown in Section 2. 1 that there are consistent schemes which have solutions that grow arbitrarily fast, although the differential equa tion has a bounded solution. The approximation must also be stable. Theorem 5.1.3. Assume that the solution u(x , t) of Eq. (5.1.1) is smooth and
that the approximation (5. 1.2) is stable. Assume that the approximation and its initial data are accurate of order (P I , P2 ). Then, on any finite interval [0, T], the error satisfies
168
STABILITY AND CONVERGENCE FOR NUMERICAL APPROXIMATIONS
1Iv" - u( ' , tn ) lIh S; Ks (e"s tn ll vO - u( · , O) lIh &(hP I + kP2 ),
+
II Q:: l llhrp� (a, tn ) 0 5,jmax llrillh ) <;; n 1 �
(5. 1 .28)
=
that is, the solutions of the difference approximation converge as h ---7 0 to the solution of the diffe rential equation. Proof If the solution u(x, t) is substituted into the difference scheme (5 . 1 .2), then we obtain q
L Qcr u(Xj , t � cr ) n
cr = O
Let wJ
=
+
hi-
(5. 1 .29)
u(Xj , tn ) - uJ. Subtracting Eq. (5. 1 .2) from Eq. (5. 1 .29) yields
Q� l wt l wJ
=
q
=
L Qcr wr cr cr O
+
hJ,
=
u(Xj , uk) - vJ,
u =
O, I , . . . , q,
and the estimate follows from Theorem 5 . 1 . 1 .
REMARK. There i s a fundamental difference between the case as < 0 and as >
O. If as < 0, the effect of the error in the initial data disappears, and the total error is of order &(hP I +kP2 ) uniformly in time. Thus, we can solve the equations on long time intervals without deterioration of the error bound. If as > 0, then the error grows exponentially, and if the solution does not grow as fast, then the error will dominate. Therefore, it is important that the approximation is strictly stable.
We end this section with a discussion of the connection between stability and convergence for generalized solutions. Let g E L2 • As in Section 4.8, we consider a sequence of continuous problems for v = 1 , 2, . . . ,
(5. 1 .30)
in a fixed time interval 0 S; t S; T. Here { f [ v ] } is a sequence of smooth functions with limv � � f[ v ] = g. Then u(t) = limv �� u[v](t) is the generalized solution of Eqs. (5. 1 .30) with initial data u(O) = g. Let
169
STABILITY AND CONVERGENCE
be a consistent and stable difference approximation of type (5. 1 .2). We consider the corresponding sequence (5. 1 .3 1 ) Let
For every fixed JI the solution u[v] of Eqs. (5. 1 .30) is a smooth function. There fore, for any £ > 0 and all t n with 0 ::; tn ::; T,
provided h = h(£, JI) is sufficiently small. We now use Fourier interpolation to define v[ v ] everywhere. As in Section 2. 1 , we denote the Fourier interpolant by IntN v[v ] . Also, IntN o[ v ] denotes the Fourier interpolant of o[v ] 's restriction to the grid. Then, by Theorem 1 .3.3, we obtain, for every fixed JI ,
I I Int N v[ v ] - o[v] lI ::; I I Int N v [v] - Int N o[vJ iI + I I Int N o[v] - o[ v] I I , = II v[ v ] - O[ v ] lI h + II Int N o[ v ] - o[ v ] 1I < 2£,
provided
u(tn )l,
h
h(£, JI )
is suffic iently small. Therefore, with
0 =
(uCtn+q ) , " "
provided JI is sufficiently large and h is sufficiently small. This proves conver gence. To prove that convergence implies stability, we assume that the approxima tion is not stable. Then there are sequencies
hv
---7
0,
kv
---7
0, 1,
Kv
---7 00,
JI
JI
---7 00,
JI
---7 00,
---7 00,
such that the solution of Eq. (5. 1 .3 1 ) with initial data f[ v ] fulfills the inequality
170
STABILITY AND CONVERGENCE FOR NUMERICAL APPROXIMATIONS
" >
"0
for some t in the interval [0, TJ. Consider now the solution w[v] of Eq. (5. 1 .3 1 ) with initial data
By linearity it satisfies the inequality
" >
"0 ·
Because g[.] --7 0 as " --7 00, the solution of the underlying well posed contin uous problem is w == O. This contradicts convergence. Thus we have proved the following theorem. Theorem 5.1.4. If the difference approximation (5. 1.2) is stable and consistent,
then we obtain convergence even if the underlying continuous problem only has a generalized solution. If the approximation is convergent, then it is stable. Theorem 5 . 1 .4 is the classical Lax-Richtmyer equivalence theorem, which states that convergence is equivalent to consistency and stability. We have proved that stability plays a very important role for numerical methods. In the next sections, we derive algebraic conditions that allow us to decide whether a given method is stable. EXERCISES 5.1.1. Derive the order of accuracy for the following difference approximations of Ut = A ux - u, where A is a matrix. 1.
2.
u"+ 1 = (1 - kI + kAD+ )u". (I + kI - kAD+ )u"+ 1 = u". ( l + kl)u"+ I = 2kADou" + (I - kl)u"- I .
3. 5.1.2. Find a suitable method to compute V i when the DuFort-Frankel method (2.5 . 1 2) is used to approximate Eq. (2.5 . 1). Derive an error estimate for the solutions. (You may assume that stability holds for all values of k/h2 ) 5.1.3. Derive the order of accuracy of the approximation .
for Ut = Uxxx + aux-
171
STABILITY FOR APPROXIMATIONS WITH CONSTANT COEFFICIENTS
5.2. STABILITY FOR APPROXIMATIONS WITH CONSTANT COEFFICIENTS
For approximations with constant coefficients, one can use Fourier analysis to discuss their stability properties. To simplify the notation, we only treat the one-dimensional case, and assume that the grid has N + 1 points in [0, 211"). We assume that the operators Qu in Eqs. (5. 1 .2) have the form
Qu
p
L
=
ApuE",
(5.2. 1 )
v = -r
where the matrices Apu are smooth functions of h but do not depend on Xj or tn. Let v'j be a solution of Eqs. (5. 1 .2). We can represent it by its interpolating polynomial, that is, by 1
v'j
N/2
L
..J2; w = -N/2
eiWXj ff(w),
j
O, I , . . . , N,
in the grid points. Substituting this expression into Eqs. (5. 1 .2) gives us
N/2
L
w = -N/2
(Q_ 1 eiwxj )ff+ 1 (w )
q
(Qu eiWXj )ff- U (w), L u
j
=o
O, I , . . . , N.
(5.2.2)
As in Section 2. 1 ,
Qu eiwxj
eiwxj Qu(n,
Qu (n
p
L
V=
-r
Apu ei"� ,
�
wh ,
(5.2.3)
denotes the so-called symbol, or Fourier transform, of Qu . Equation (5.2.2) shows that the ff+ 1 (w) are determined by
Q_ l (n ijn+ l (w)
q
=
L Qu(� )ff-U (w), u= o
(5.2.4)
because the vectors ( 1 , eiwh , . . . , eiwNh l , w N/2, . . . , N/2, are linearly inde pendent. Now we can prove the following lemma. =
Lemma 5.2.1. The inverse
-
Q:.: 1 exists and for °
<
h � ho ,
h
211"/(N + 1 ),
STABILITY AND CONVERGENCE FOR NUMERICAL APPROXIMATIONS
172
if and only if Q:: 1 (0 exists and for ° Proof.
Q:::
<
w
h � ha,
0, ±1, . . . , ± N/2.
(5.2.5)
exists if and only if the equation
j
=
O, I, . . . , N,
(5.2.6)
has a unique solution. As above, we can represent Wj , gj by their Fourier inter polants, and Eq. (5.2.6) is equivalent to
Q_ I (�)W(W)
=
g(w).
(5.2.7)
Then Eq. (5.2.5) follows from the discrete version of Parseval's relation
II w lI �
=
N/2
L
I w(w W
w = -N/2
=
N/2
L
w = - N/2
I Q:: :(�)g(w) 1 2
=
II Q:: : g lI �.
We look at some examples and begin with the Crank-Nicholson and Euler backward approximations of the hyperbolic system Ut = Au" [see Eqs. (2.3.3) and (2.3 . 1 )]. The operator Q_ I has the form
Q- I
= I
- OkADa ,
o =
1/2, 1 .
Here A i s a matrix with real eigenvalues av • Then
Q_ I
=
I
-
O"AiA sin �,
"A
k/h,
and the eigenvalues Z p of Q_ I are Zp = 1 - O"Aav i sin
�.
Because I zv I � 1 for all �, h, k, the condition (5.2.5) is fulfilled. As a second example, we consider the box scheme for the same equation
(5.2.8)
Q_ I
is given by
Q- I
I + "AA + (I - "AA)E,
hence
Q_ I
= I
+ "AA + (I - "AA)ei� .
If A has an eigenvalue a p = 0, then
Q_ I
has an eigenvalue
STABILITY FOR APPROXIMATIONS WITH CONSTANT COEFFICIENTS
173
Therefore, Z p = 0 for � = 11", and the condition (5.2.5) is not fulfilled. This shows that one has to be careful when using the box scheme. For many other types of boundary conditions this difficulty does not arise. Now assume that Eq. (5.2.5) holds and let
Then we can write Eq. (5.2.4) as a one-step method
Iw l �
N12,
(5.2.9)
where
(5.2. 10) I
We can now prove the next theorem. Theorem 5.2.1. The approximation (5. 1.4) is stabLe if, and onLy if, Eq. (5.2.5)
hoLds and
(5.2. 1 1 )
for all h with h = 211"/(N + 1 ) � ho and aLL I w I � N12. Proof. The theorem follows directly from Parseval's relation
w
w
For one-step approximations of scalar differential equations, Q is a scalar and is easy to check. When Q is a matrix, the condition is generally difficult to verify. It is convenient to replace it by conditions on the eigenvalues of Q. Theorem 5.2.2. A necessary condition for stability is that the eigenvalues
of Q satisfy
for all h with h
�
I�I �
11"
ho (the von Neumann condition).
Zp
(5.2 . 1 2)
174
STABILITY AND CONVERGENCE FOR NUMERICAL APPROXIMATIONS
Proof. zJ is an eigenvalue of on . Therefore, if the method is stable, we obtain for any n
or
and Eq. (5.2. 1 2) follows because
n
can be arbitrarily large.
The von Neumann condition is often thought to be a sufficient condition for stability. However, the following example shows that this is not true. Consider the trivial system of differential equations
0, and the approximation
U
[ u(u(l2)) ] ,
(5.2. 13)
k
h,
(5.2. 14)
which has order of accuracy ( 1 , 1 ). The symbol is
(5.2. 15) which satisfies the von Neumann condition. The powers of 0 are easily com puted
(5.2. 16) and the norm is of the order n, which cannot be bounded by KeCXin . The condition (5.2. 1 1 ) can be expressed in a slightly different form: Multi plying it by e-CX in , the condition becomes
(5.2. 17)
175
STABILITY FOR APPROXIMATIONS WITH CONSTANT COEFFICIENTS
and the problem is reduced to finding conditions, which guarantee that a family of matrices are power bounded. If the parameters � , h are fixed, then it is well known what conditions the eigenvalues must satisfy to guarantee Eq. (S.2. 17). They must be less than or equal to one in magnitude, and those on the unit circle must be distinct. The difficulty lies in the fact that the power-boundedness must be uniform in t h. In the special case that Q can be uniformly diagonalized, the von Neumann condition is sufficient.
Assume that there is a matrix T T(�, h) with I T I . I T- I I ::;; C, independent of � and h, such that =
Theorem 5.2.3.
for
C
(S.2.18)
Then the von Neumann condition (5.2. 12) is sufficient for stability. Proof. We have, with
I Qn l
peA) denoting the spectral radius of a matrix A,
I Tr ' QT r ' Q T . . . T - ' QTr' l ::;; I T I l diag (z " . . . , Zm(q+ , ) l n · I T - ' I ::;; Cp(Qn ) =
·
::;; CeO/Sin .
Normal matrices are diagonalized by orthogonal matrices
I T , 1 1 . Therefore, we have the following corollary. -
T
with
I TI
:=
:=
Corollary 5.2.1. If Q is a normal matrix, that is, if Q* Q QQ*, then the von Neumann condition is sufficient for stability. In particular, this is true for Hennitian and skew-Hennitian matrices Q.
As noted earlier, this means, in particular, that the von Neumann condition is sufficient for all one-step approximations of scalar differential equations. To find the eigenvalues for multistep schemes, it is easier to work with the original multistep form (S. 1 .2) and the corresponding formula in Fourier space, q
L ,, = 0
Q" Ul-fl (W).
(S.2. 19)
We need the next lemma. Lemma 5.2.2.
solutions of
The eigenvalues Z of the matrix Q, defined by Eq. (5.2. 10), are
176
STABILITY AND CONVERGENCE FOR NUMERICAL APPROXIMATIONS
(5.2.20)
This equation is usually called the characteristic equation for Eq. (5.2. 19), and is formally obtained by the substitution 1.1' � zn.
Proof. The eigenvalue problem for
Qv
= ZV,
An eigenvalue z with eigenvector
v
Q is
= (if, uq- I ,
• • •
, UO{.
v must satisfy
Q:: l Qouq + Q:: l Q l uq- 1 + . . . + Q:: l Q uO q
zuq,
zuq- I , zuq- 2 ,
uq uq- I
All vectors uP can be expressed in terms of uO, and we get, from the first equa tion,
After multiplying by
Q_ I the determinant condition (5.2.20) follows.
EXAMPLES. We begin with the leap-frog scheme (2.2. 1 ) for Ut = UX ' The deter minant condition for the eigenvalues z just derived gives us Eq. (2.2.5) for the solutions Z I , Z2 . If lA- sin � I = 1, there is a double eigenvalue, z = i or z = -i. In
the first case, the amplification matrix
Q is 1 o
Let T I be the matrix which transforms
] [ ] =
2i
1
1
0
.
Q to Jordan canonical form:
STABILITY FOR APPROXIMATIONS WITH CONSTANT COEFFICIENTS A T_I I QT I
=
177
[ i i] . 1
°
Then
(!' A
=
TI
[
I
°
is unbounded. Again, the von Neumann condition is not sufficient. If, on the other hand, A :5; Ao
<
1,
(5.2.2 1 )
then I Z I - z2 1 � 0 > 0, where 0 i s independent of t ho Therefore, Q can be uniformly diagonalized and the von Neumann condition is sufficient. Now consider a system Ut = Aux, where A is a diagonalizable matrix such that T - I A T = D = diag (aj , . . . , am ). Substituting new variables w = T - I v into the leap-frog scheme
(5.2.22) gives us
(5.2.23) which is a set of m scalar equations. We have
I I kap -h-
:5; Ao
<
1,
/I
= 1 , . . . , m,
(5.2.24)
which corresponds to the condition (5.2.2 1), and a necessary and sufficient sta bility condition for Eq. (5.2.22) is
k h p (A) :5; Ao
<
1.
(5.2.25)
General stability conditions are complicated. Without proof we state the
Kreiss matrix theorem:
Theorem 5.2.4. Let F be a family of matrices A offixed order. The following four statements are equivalent. (The constants Cj , C2 , C3 j , C32 , C4 are fixed for a given family F.)
178
STABILITY AND CONVERGENCE FOR NUMERICAL APPROXIMATIONS
1 . The matrices in F are unifonnly power bounded, that is, for all integers
n
� o.
(5.2.26)
F, the resolvent matrix (A - zl)- I exists for all complex numbers z, I z l > 1 , and
2. For all A
E
C I (A - zl) -l l � z 2 (the resolvent condition). I l-1 3. For each A
such that
E
(5.2.27)
F, there is a matrix T with I T I � C3 1 and I T- I I
�
C3 1 ,
where � IzI i � 1,
(5.2.28)
and I bvl-' I � C32 (1 - Izv / ), v
I , 2, .
/.I. = v +
. . , m - 1, 1, v + 2, . . . , m.
(5.2.29)
4. For each A E F there is a positive definite matrix H such that C4 1 1 � H � C41, A ' HA � (1 - o)H,
o
=
� (1
-
z ), I max S vSm I v i
(5 . 2 . 30)
where {zv } are the eigenvalues of A. This theorem is, in general, not easy to apply. The condition 3 is probably the most straightforward, because of its resemblance to Schur's normal form. However, it is difficult to find a matrix T with a bounded inverse that allows one to satisfy the inequalities (5.2.29) for the off-diagonal elements. To obtain simpler stability conditions, one can require additional properties that are naturally built into the approximation. Dissipativity is such a property.
STABILITY FOR APPROXIMATIONS WITH CONSTANT COEFFICIENTS
179
Recall from Sections 2.2 and 2.3 that some of the methods presented there damp the amplitudes of most frequencies. The word dissipation is sometimes used to describe any kind of decrease in norm. We define it more precisely here. Definition 5.2.1. The approximation (5. 1.4) is dissipative of order 2r if all the eigenvalues Zv of Q satisfy
I �I
::;
1r,
(5.2.3 1)
where 0 > 0 is a positive constant independent of �, h. The two important properties enforced by Eq. the following way:
(5.2.31) can be expressed in
1 . There is a damping of order 1 0 for all large wave numbers. 2. The damping factor approaches 1 in magnitude as a polynomial of degree 2r for small wave numbers. -
We analyze the dissipative behavior of a few simple methods. As a first example, consider the parabolic equation u/ Uxx and the Euler approximation (2.5.6). There is only one eigenvalue ==
A
Z
==
Q
==
1
-
� 4a sin2 2 '
This method is stable if a ::; �, but for a because z = - 1 for � = 1r. If a < 1, then I z l
==
<
(5.2.32)
1
the scheme is not dissipative, 1 for 0 < I � I ::; 1r and
(5.2.33) in a neighborhood of � = O. Therefore the scheme is dissipative of order 2. The dissipative property is very natural for parabolic problems, because the differential equation itself is dissipative [see Eq. (2.5.2)]. Actually, the relation (5.2.33), which is restricted to a neighborhood of � = 0, is a consequence of consistency. As we will see in Section 6.5, the dissipative property also plays an important role for hyperbolic problems. Consider the Lax-Friedrichs method (2. 1 . 16) for u/ Ux' Again, there is only one eigenvalue z, and ==
(5.2.34) The scheme exhibits the correct behavior (corresponding to dissipativity of order 2) near � = 0, if A < 1. However, I z l = 1 for � 1r, so the scheme is ==
180
STABILITY AND CONVERGENCE FOR NUMERICAL APPROXIMATIONS
not dissipative. (Some authors define approximations of this type to be dissipa tive and they call those schemes satisfying Definition 5.2.1 strictly dissipative.) The backward Euler method (2.3. 1) suffers from the same deficiency as the Lax-Friedrichs method: the inequality (5.2.3 1 ) fails at � = 'Tr. The Crank-Nicholson method (2.3.3) has no damping at all, and z is on the unit circle for all �. To construct a one-step dissipative method for u/ = ux , consider the approx imation
(5.2.35) where
c is constant, treated in Chapter 2. The amplification factor is (5.2.36)
with
where A =
kjh. In Section 2.1 A
we found that ::;
1,
I QI
1 < 2 -
::;
1 for
c.
(5.2.37)
With strict inequalities in Eq. (5.2.37), the scheme is dissipative of order 2. If c = � and A < 1, the order of dissipativity is 4. This is the Lax-Wendroff method (2. 1 . 1 7), which is accurate of oider (2,2). We end this section by showing that conditions for consistency and order of accuracy can be given in Fourier space. For convenience, we limit ourselves to first-order systems. Theorem 5.2.5.
We make the following assumptions:
The differential equation Ut = Pu has constant coefficients. 2. The difference operators in the approximation (5.1.2) have the form (5.2.1), where the coefficients A v tr are independent of x, t and h, (k = Ah, A constant). Then Eq. (5.1.2) has order of accuracy p if, for some constant �o > 0, 1.
STABILITY FOR APPROXIMATIONS WITH CONSTANT COEFFICIENTS
181
q Q- I (OiUw)k - L Qa (Oe- P(iw)ak a=O
� constant 1 � I P+ I
for I � I = I w h l � �o ·
(5.2.38)
3. For explicit one-step methods, Qo(�) must agree with the Taylor expan
sion of exp (P(iw)k ) through terms of order 1 � I P+ I .
Proof. The solution to Eq. (5. 1 . 1 ) is written in the form
u(x, t)
=
� u(w, t)eiwx , �: w
1 v!2;
A
where
u( w , t) A
eP(iw ) t uA (w , 0 ) .
=
By Definition 5 . 1 .2, the truncation error is
If Eq. (5.2.38) holds and the solution
u(x, t) is smooth, then
00
I i kT( · , tn) lI � � constant =
L
1 � 1 2(P+ 1 ) I u(w , tn) 1 2 ,
", = -00
constant h2(P+ I )
� constant h2(P+1) ,
00
L
I w 1 2(p+ 1) l u(w, tn) 1 2 ,
"' = -00
which shows that the order of accuracy is p. To prove the theorem in the other direction, we let u(w , t,,) = ° for w -:/:. W I -:/:. 0, and U(W I , tn) 1 1 . Then I =
( Q_ I (Oi(iW1)k - i Qa -P(i 1 a )
constant cf1P+I � IlkT( ·, tn) ll"
a
e
=0
W
l
k
eiW 1X u(W I , tn)
h
182
STABILITY AND CONVERGENCE FOR NUMERICAL APPROXIMATIONS
Because u(w 1 , tn ) is arbitrary, except for normalization, we get the necessary inequality
Q_ l (�) iUW I )k
-
q
L a=O
Qa e- Pciw 1 )a k
� constant hP+ ] � constant
I w ] h l p+ 1 •
But W l is arbitrary, so this proves Eq. (5.2.38). For explicit one-step schemes, Eq. (5 .2.38) becomes l ePUw )k
- Qo( O I � constant I � I P+ ] .
This proves the theorem. The theory in this section has been developed for one space dimension, but it can be extended to several space dimensions without difficulty. In fact, all of the definitions, lemmas, and theorems hold as formulated, the only modification that needs to be made is that w is a multiindex. EXERCISES 5.2.1. Formulate and prove Theorem 5.2.1 for approximations in d space dimensions. 5.2.2. Define the Lax-Wendroff approximation for the system Ut = Au,. Is it dissipative if
5.3. APPROXIMATIONS WITH VARIABLE COEFFICIENTS: THE ENERGY METHOD
In this section, we consider approximations whose coefficients depend on Xj and/or tn . If there is only t dependence, the Fourier technique used in the pre vious chapter can still be used. However, the solution operator in Fourier space is a product of operators Q(t v ) in this case. We do not discuss this further. If the coefficients depend on x, then the situation is different. The separation of variables technique, using the ansatz rIj = ( l / �) L.w if'(w)eiwxj , does not work. It is not possible to get a relation of the form
APPROXIMATIONS WITH VARIABLE COEFFICIENTS:
Q_ 1 U"+ I (W ) =
q
L u=o
183
Qu U"-U (W )
for the Fourier coefficients. The Fourier technique can still be used, but it must be embedded in more involved machinery. Stability conditions are derived for related problems with frozen coefficients, that is, the approximation is examined with x = x. and t = t. held fixed in the coefficients for every x. and t. in the domain. With cer tain extra conditions, it is possible to prove stability for the variable coefficient problem if all the "frozen" problems are stable. This has been done for approx imations of hyperbolic and parabolic equations, and the results for hyperbolic equations will be given in Sections 6.5 and 6.6. A more direct way of proving stability is the energy method, and this method will be developed in this section. In this method, no transformations are used, and the analysis is carried out directly in physical space. The idea is simple (but the algebra involved may be difficult): Construct a norm II'II� such that the growth in each step is at most ecxk, that is
(5.3. 1 ) If this norm i s equivalent to the usual discrete i2 -norm 1I ·lIh, then
(5.3.2) which is the type of estimate that we want. We first illustrate the idea using a simple example. The Crank-Nicholson method for Ut = Ux can be written in the form j =
O, l, . . . , N,
(5.3.3)
where Q = Do. Assuming that v is real, we multiply both sides by v't 1 + v'j and sum over all j . The result is
(5.3.4 ) so I lrfHl llh � 1 1 v''l1li if, for all gridfunctions v, the condition
(v, QV)h � 0
(5.3.5)
184
STABILITY AND CONVERGENCE FOR NUMERICAL APPROXIMATIONS
is fulfilled. By periodicity,
N
Vj (Vj+1 - vj- d 2 L j =O
(v, Qvh
0,
and stability i s proved for all k. This was already shown in Section 2.3 using the Fourier technique. However, the energy method can be easily generalized to variable coefficients. Consider Ut = a(x) ux and the same approximation (5.3.3) where Q = Qj = aj Do. We now have N
(aj - aj+ dvjvj+ l . 2 L j= O Assume that a(x) is Lipschitz continuous, that is, I D+ aj I
::;; c.
Then
C N 1 I (v, Qvh l ::;; 2 L "2 (v] + v]+ I )h j= O
(5.3.6)
From Eq. (5.3.3) we get
so 2
II v"+ 1 I lh ::;;
1 + Ck/2 1 Ck/2 _
2
2
I I v" II h ::;; (1 + (X l k) 1 I v" II h'
(5.3.7 )
where (XI is a constant depending on C. Thus, Eq. (5.3. 1 ) is satisfied with (X = (X I /2 and II . II� = I I · IIh . The inequality (5.3.5) is the key to the final stability estimate. We now treat general operators Q = Q(Xj , tn ). As for differential operators, we define serni boundedness: Definition 5.3.1. The discrete operator Q is semibounded if, for all periodic gridfunctions v, the inequality
Re (v, Qvh ::;; (X ll v II � holds, where (X is independent of h, x, t, and v.
(5.3.8)
APPROXIMATIONS WITH VARIABLE COEFFICIENTS:
185
For semidiscrete approximations,
dVj dt Vj (0) the so-called
j
O, I ,
. . . , N, (5.3.9)
method of lines, we obtain the following theorem.
Theorem 5.3.1.
The solutions of Eq. (5.3.9) satisfy the estimate (5.3. 1 0)
if Q is semibounded. Proof
d
dt
I I v lI�
=
2 Re
( v' dtdV )
::; 2a I I v lI� .
2 Re (v, QV)h
h
Therefore, Eq. (5.3 . 1 0) follows. We discretize time using the trapezoidal rule and obtain Theorem 5.3.2.
The approximation (5.3.3) is unconditionally stable if Q is semibounded. The solution satisfies the estimate Theorem 5.3.2.
(5.3. 1 1 ) where {3 = max (0, a) and
a i s defined in Eq.
(5.3.8).
Proof Taking the scalar product of Eq. (5.3.3) with If+ ! + If yields
Re (v"+ 1 + v", vn+1 - v"h
-k2 Re (vn+ l
::; {
=
+
v", Q (v"+ 1
+ vn )h,
k:; II lf+ l + lf ll � ::; ka( lIlf+ l ll � + Il lf l l �), if a > 0,
0,
if a
::;
0.
186
STABILITY AND CONVERGENCE FOR NUMERICAL APPROXIMATIONS
Therefore, if ex � 0, otherwise, (1
-
kex) lllr l ll �
� (1 +
kex) IIu" II �,
and Eq. (5.3. 1 1 ) follows. The estimate (5.3.10) shows that the constant ex determines the growth rate of the solution of the semidiscrete approximation. The fully discretized approxima tion preserves this growth rate with &(k) error only if ex � O. We remark that Eq. (5.3. 1 1 ) can be strengthened for ex < 0 with additional hypotheses on Q. The backwards Euler method (J
-
kQ)v"+ l VO
= =
v", (5.3. 1 2)
j,
preserves the correct growth rate in general.
The backwards Euler method (5.3.12) is unconditionally stable if Q is semibounded. The solution satisfies the estimate
Theorem 5.3.3.
(5.3. 13) Proof. Taking the scalar product of Eq. (5.3. 1 2) with
u"+ 1 gives us
that is,
and Eq. (5.3 . 13) follows. We need the difference version of Lemma 4.6. 1 . Lemma 5.3.1.
Let u, v be complex valued periodic vector gridfunctions, then (u, Evh
=
(E- 1 u, vh,
(5.3 . 1 4a)
APPROXIMATIONS WITH VARIABLE COEFFICIENTS:
(u, Dovh (u, D+ vh
= =
- (Dou, vh, -(D_u, vh.
187
(5.3. 14b) (5.3. 14c)
Proof. By definition
N
N+ I )h (U , V L j j+ l L (Uj_ I , Vj )h, j =O j= l N L (Uj_ I , Vj )h + j=O (E- 1 u , vh .
(u, Evh
We can express 2hDo = E + E- 1 , hD+ = E - I, and E, and the rest of the lemma foIlows.
hD_ = 1 - E - 1
in tenns of
A consequence is
(U, Douh
=
-(Dou, uh
that is,
Re (u, Douh
O.
We also need the foIlowing lemma. Lemma 5.3.2. Let u and v be complex vector gridfunctions, and let A = A(xj ) be a Lipschitz continuous matrix. IfD is any of the difference operators D+ , D_, Do,
then
(u, DAvh where IRI
::;
=
(u, ADvh + R,
constant ll u llh · ll v lJ ".
Proof.
E" (Aj v)
=
A V+j vv+j Bj
=
=
Aj E"Vj + hBj E" Vj , (A v +j - A)jh,
(5.3. 15)
188
STABILITY AND CONVERGENCE FOR NUMERICAL APPROXIMATIONS
that is,
By expressing the difference operators in terms of E, the lemma follows. For convenience, we now only consider examples with real solutions. Con sider a system Ut = Aux , where A = A(x, t) is a symmetric matrix. The semi boundedness condition (5.3.8) for the centered difference operator ADo follows from Lemma 5.3.2. Define R as in Eq. (5.3 . 1 5). Then
(u,ADou)h
=
(u, DoAuh
-
R
=
-(Dou,Auh
-(ADou, U)h
- R
- R,
and it follows that
(u,ADouh
� constant lI u ll �.
(5.3 . 1 6)
This shows that both the Crank-Nicholson and backward Euler methods are stable with Q = Qj = A(xj )Do. Note that the symmetry of A is essential for the energy method to succeed. If A is not symmetric, it must first be symmetrized. Consider, for simplicity, a constant matrix A, and let T be a symmetrizer; that is, T � I A T = A is symmetric (A could be diagonal). The Crank-Nicholson method is
or, equivalently, with
w = T � I V,
Theorem 5.3.2 gives us ({3
0 here ).
Actually, there is equality, since
(w, ADow)
=
(Aw, Dow)
The final estimate is obtained from
-(DoAw, w)
-(w,ADow).
(5.3.17)
189
APPROXIMATIONS WITH VARIABLE COEFFICIENTS:
1I v" ll h
o II Twn l ih � I T I . Ilwn ll h � I T I . Ilw ll h , � I T I . I T -I I · Iluo li h � constant Iluo l ih . =
(5.3. 1 8)
Indeed, the transfonnation T - I u can be considered as the definition of a new nonn for v", (5.3. 1 9) The nonn I I ' II� is used to obtain the step-by-step estimate 11 v"+ I II � � 11 v" 11 � , and the final estimate (5.3. 1 8) then follows from the equivalence of the nonns. This is a special case of the basic principle of the energy method: the construction of a special nonn such that Eq. (5.3. 1 ) is satisfied. Now consider the fourth-order accurate difference operator
(5.3.20) derived in Section 3 . 1 . Assuming that A is symmetric, we get, by using the identities (5.3. 14),
(u, QU)h
(u, ADou)h
o
+
h2
h2
-
6"" (u,ADoD+ D_ uh,
6"" (D_ u, A DoD_ uh
o.
This shows that also the operator (5.3.20) is skew-symmetric and the Crank-Nicholson method is again stable. Scalar parabolic problems are often given in self-adjoint fonn (5.3.2 1 ) where
a = a(x) � 0 > O . We consider the centered operator (5.3.22)
where aj - I /2 = a(Xj - I /2 ). For smooth functions a(x), u(x) we have
190
STABILITY AND CONVERGENCE FOR NUMERICAL APPROXIMATIONS
QU(Xj )
=
1 h
[a(xj+ I /2 )D+ u(x) - a(xj- \ /2 )D_ u(xj )] 1 h2 h a(xj+ I /2 ) [uAxj+ I /2 ) + 24 uxxx (Xj+ I /2 )
(
- a(xj- I /2 ) [uAxj- I /2 )
+
+
&(h4 )]
�: Uxxx(Xj- I /2 ) + &(h4)] )
•
Furthermore,
so
QU(Xj )
D+ a(Xj- I /2 )uAxj-I /2 ) + &(h2 ) [(aux )x ]x= Xj + &(h2 ),
and we see that Eq. (5.3.22) is a second-order accurate approximation in space. We also have
-(D_u, a- I /2D_ u)h � -D II D_u ll � � ° (5.3.23)
Q is semibounded. We now discuss a combination of the leap-frog scheme and the Crank Nicholson method. so
Theorem 5.3.4.
Consider
Assume that for all gridfunctions W Re (W, Q I W)h � Ql ll w ll �, Re (w, QO W)h 0, k ll Qo llh � 1 - 15 , =
Then the method is stable.
15 > 0.
(5.3.25a) (5 .3.25b) (5.3.25c)
ProoJ. We assume that Qo, Q I are only functions of x. The proof of the general case is left as an exercise. We write Eq. (5.3.24) in the form
191
APPROXIMATIONS WITH VARIABLE COEFFICIENTS:
By Eq.
(5.3.25a) II vn+l ll �
_
1I v"- 1 11 �
For any gridfunctions
°
=
Re (u
+
2k Re (v"+ 1 + v" - l , QO v" )h + k Re (v"+1 + v"- 1 , Q l (v"+l + v"- l )h � 2k Re (v"+I , Qov"h + 2k Re (v"- I , Qov" h (5.3.26) + a 1 k ll v"+1 + v"- l ll �. =
u, v, Eq. (5.3.25b) implies v, Qo(u
+
v)h
=
Re «v, Qou)h
+
(u, QO V)h ).
Also,
Therefore, we can write Eq.
(5.3.26) in the form (5.3.27)
where
By Eq.
(5.3.25c), 1 2k(v"+ 1 , Qov" h l � 2k ll Qo lih 1I v"+ l llh ll v" lI h , � ( 1 o )( lI vn ll � + 1I v"+ I Il �). -
Therefore,
(5.3.28) showing that (Ln+ 1 ri- is equivalent to the norm ( 1I v"+ 1 11 � + then & = 0, the norm U does not increase, and we have
1I v" II � ri- . If a l � 0,
STABILITY AND CONVERGENCE FOR NUMERICAL APPROXIMATIONS
192
Thus, the method is stable. If a t > 0, then & = a t , and Eqs. (5.3.27) and (5.3.28) imply that
(5.3.29) This proves stability. Let
u/
=
Pu
be a system of differential equations and assume that is,
(5.3.30)
P is
semibounded, that
(5.3.3 1 ) Now assume that we have succeeded in constructing a difference operator such that Re (w, Qw)j, � a t ll w ll � . Let
Q*
be the adjoint operator, so that
(u, Qvh
=
(Q* u, vh
for all gridfunctions u and v. By Eq. (5.3. 14), Q* is also a difference operator, and it approximates the adjoint differential operator P* . We write
Qo
=
� (Q - Q* ),
Then Re (w, Qowh = 0, and we can use the approximation (5.3.24). The inequality (5.3.3 1 ) implies that the solutions of the differential equation
APPROXIMATIONS WITH VARIABLE COEFFICIENTS:
193
satisfy
that is,
Therefore, the estimate (5.3.29) might not be satisfactory. However, we can introduce new variables u = u and obtain
e-c'I/
u/
(P
=
- a l l)u =: Pu,
and apply the method to the new problem. After we have written down the method, we can revert to the original variables. As an example, we consider the convection-diffusion equation
ur
EUxx
+
a(x, t)ux
+
constant > 0,
E
c(x, t)u =:
Pu, (5.3.32)
a, c real.
We first determine the adjoint operator P*. Integration by parts gives us (u, Pu) = (wxx - (au)x
+
cu, u),
that is,
� (P
+
P*u P*)u
� (P - P*)u
(au)x + cu, EUxx + � (aux - (au)x) EUxx + (c - � ax)u. � ((au)x + aux ).
EUxx
As a difference approximation we choose
By Eq. (5.3.14),
Qou
� (Do(au)
QI u
ED+D_ u + (c
+ aDou),
- � ax)u.
+
cu
194
STABILITY AND CONVERGENCE FOR NUMERICAL APPROXIMATIONS
Re (V, Qovh = 0, Re (V, Q l vh � Also, since
- E IID_v lll,
- � ax ) ll v lll, .
+ max x (c
=
II Do lih l/h, we have
k ll Qov llh
k
�2 �
II Do(av) llh
+ 2
k
� ( � Il av llh
+
Il aDov llh
Il a ll = II Dov llh
By Theorem S.3.4, the method is stable if
)
� Il a ll= < 1 - o .
REMARK. The theorem is also valid if we replace Eq. (S.3.2Sb) by Re (w, Qowh =
R(w),
(S.3.33)
where
I R(w)1
� constant I w ill, ·
In Section 2.2, we have applied the leap-frog scheme to
u/
=
Ux - au,
a
=
constant;
that is, in the framework of Theorem S.3.4, we have used
Qov
=
Dov - av,
In this case Re (w, QO W)h =
-a ll w il l"
and Eq. (S.3.33) is satisfied. The method is stable, but a parasitic solution devel ops and grows exponentially, although the solution of the differential equation decays. Therefore, one must be careful if one replaces Eq. (S.3.2Sb) by Eq. (S.3.33).
EXERCISES 5.3.1. Prove Theorem S .3.4 when
Qo, Q l
depend on t. 5.3.2. Prove Theorem S.3.4 with the condition (S.3.2Sb) replaced by Eq. (S.3.33).
195
SPLITTING METHODS
5.3.3. Use the energy method to prove stability for the difference scheme
5.4. SPLITTING METHODS
Splitting methods are commonly used for time-dependent partial differential equations. They are often used to reduce problems in several space dimensions to a sequence of problems in one space dimension-this can significantly reduce the work required for implicit methods. Consider, for example, the simplest form of the two-dimensional heat equa tion (S.4. 1 ) with periodic boundary conditions, and approximate it b y the standard second order Crank-Nicholson method
( I � (D+xD-x +
+ D+yD_y)
) u".
(S.4.2)
To advance If' one time step, we have to solve a linear system of equations in &'(h - 2 ) unknowns. Now replace Eq. (S.4.2) by
( I � D+xD-x ) (I � D+yD_y ) u"+ l (I � D+xD-x ) (I � D+yD_y ) u". -
-
=
+
+
(S.4.3)
Equation (S.4.3) is still second-order accurate because it differs from Eq. (S.4.2) by the third-order term
Now assume that If' is known. We write Eq. (S.4.3) in the form
196
STABILITY AND CONVERGENCE FOR NUMERICAL APPROXIMATIONS
(I � D+xD-x ) ( � D+xD-x ) ( � D+yD_y) d', (I � D+yD_y) d'+ 1 Z =
-
F
:=
I
F,
+
I
=
-
+
z.
(5.4.4)
The first step is to solve for z. Because the equation contains only difference operators in the x direction, we can solve it for every fixed Y = YV . This is particularly simple, because the resulting linear system is essentially tridiag onal. Direct solution methods for this type of system were discussed in Sec tion 2.3. It was shown that the solution is obtained in C9'(h- l ) arithmetic opera tions. Once one has determined z, one can determine d'+ 1 on every line x = Xj . Thus, instead of solving a linear system in C9'(h-2 ) unknowns, we can deter mine d'+ 1 by solving C9'(h- l ) systems in C9'(h - l ) unknowns. This procedure requires C9'(h-2 ) arithmetic operations, which is generally cheaper. The gain is more pronounced for more general equations with variable coefficients where specially designed methods for the constant coefficient system (5.4.2) do not apply. We next consider general types of splittings for one-step methods. Assume that the differential equation has the form (5.4.5) where PI , P2 are linear differential operators in space. Let mate solvers for each part, that is,
d'+ 1
Q I Vn
QI,
Q2 be approxi (5.4.6)
is an approximation of
VI
P l v,
(5.4.7)
and
wn+ l
Q2 Wn
(5.4.8)
is an approximation of
WI
P2 W.
(5.4.9)
197
SPLITTING METHODS
We assume that Q\ , Q2 are simple in the sense that both Eq. (5.4.6) and Eq. (5.4.8) together are much easier to compute (or to construct) than any direct solver of (5.4.5). One typical case is when Q\ and Q2 are one-dimensional but operate in different coordinate directions. If each of them is at least first-order accurate in time, then the approximation (5.4. 1 0) is also first-order accurate. This follows from the fact that, for smooth functions u,
j
1 , 2,
and, therefore,
If Q \ and Q2 satisfy II Qj llh ::;; 1 + &(k), j = 1 , 2, then obviously I I Q2 Q \ lI h ::;; I + &(k). Under the general stability conditions on Q\ and Q2 in Definition 5.1 . 1 , the stability of Eq. (5.4 . 1 0) does not necessarily follow. We can also construct second-order accurate splittings. Assume that Q\ and Q2 are accurate of order (p, 2) when applied to smooth solutions of Eqs. (5.4.7) and (5.4.9), respectively. Then
j Since
Uti
=
=
1 , 2.
(p\ v)r = p\ vr + p\/ v = (PI + p\/)v, and similarly for Wit, we get j
= 1 , 2. (5.4. 1 1 )
[If Pj has the form A(x, t)d/dX, then Ph denotes the operator (dA/dt) (d/dX).] The second-order splitting is
(5.4.12) By using the relations (5.4. 1 1 ), we get
198
Q
STABILITY AND CONVERGENCE FOR NUMERICAL APPROXIMATIONS
I + k
+
k2 2
[� [
PI (tn ) +
�
PI (tn+lj2 ) + P2 (tn )
1 1 "4 P2I (t n ) + "4 P2I (tn+ lj2 )
]
+ 2" PI (tn+ lj2 )P 1 (tn ) + P2 (tn )PI (tn )
I
+ PI (tn+ lj2 )P2 (tn ) + P� (tn) + + &(k3 + klzP).
� Pl t (tn) + � Plt (tn+lj2) + P2t (tn)]
If we expand PI and PIt around t = tn using Taylor series, we get k2 Q = I + k(P I + P2 ) + 2 (P2I + PIP2 + P2PI + P22 + PIt + P2) + &(k3 + klzP). But this is the unique form of a (p, 2)-order accurate one-step operator applied to a smooth solution of Eq. (5.4.5). We summarize the result in the following theorem.
Assume that Eqs. (5.4.6) and (5.4.8) are approximations of order (p, q), q � I, to Eqs. (5.4. 7) and (5.4.9), respectively. Then Eq. (5.4.10) is an approximation of order (p, I). If q � 2, then Eq. (5.4.12) is an approximation of order (p, 2).
Theorem 5.4.1.
�
In the second-order case, stability follows if II Q I (kI2, t) 1 1 I + &(k), + &(k) for all t . If these relations do not hold, the stability of I I Q2 (k, t) 1 1 Eq. (5.4. 1 2) must be verified directly. When the operator Q in Eq. (5.4 . 1 2) is applied repeatedly, Q I (kI2, tn ) . Q I (kI2, tn- Ij2 ) occurs in each step. By using Taylor expansions of P and Eq. (5.4. 1 1 ), we get
�I
showing that the method
199
SPLITTING METHODS
is an approximation of order (p, 2). Note. however. that each time a printout is required. a half-step with the operator Q\ must be taken. The splitting procedure described here can also be applied to nonlinear prob lems. and the arguments leading to second-order accuracy hold. This is useful. for example. when solving systems of conservation laws Ut
=
FAu) + Gy (u).
where the operators Q\ and Q2 represent one-dimensional solvers. The splitting method described above can he generalized to problems in more than two space dimensions. Assume that the problem
has time-independent coefficients and that
solves j = 1. 2
• . . .
, d.
with at least first-order accuracy in time. Then the approximation
is first-order accurate in time. The second-order version does not generalize in a straightforward way for d > 2. We emphasize that. as always when discussing the formal order of accuracy. it is assumed that the solutions are sufficiently smooth. The accuracy of splitting methods used for problems with discontinuous solutions is not well understood. In the explicit case. the splitting methods are usually as expensive as the original ones when counting the number of arithmetic operations. Still. there may be a gain in the simplification of the stability analysis. For example. it is easy to find the stability limits on k for the one-dimensional Lax-Wendroff operators such that
200
STABILITY AND CONVERGENCE FOR NUMERICAL APPROXIMATIONS
k2 II I + kDox + "2" D+xD-x Ilh :5; 1 , k2 II I + kDoy + "2" D+yD_y Ilh :5; 1 . It is more difficult to find the stability limit for the two-dimensional Lax-Wendroff type approximation for Ut = Ux + uy
(5.4. 1 3) The stability of Lax-Wendroff type methods will be discussed in Section 6.5. Furthermore, the implementation of factored schemes is easier for large-scale problems. For example, if each factor is one-dimensional, then, in each step, we operate on the data in one space dimension only. With
I + kD+yD_y,
( I � D+xD-x) - I ( I � D+xD-x ) , -
+
the approximation (5.4.3) is
This is called an Alternating Direction Implicit (ADI) method, which is a spe cial factored form different from Eq. (5.4. 12). It is also second-order accu rate as shown above, and it is, furthermore, unconditionally stable (Exercise 5.4.2). EXERCISES 5.4.1. Find the stability limit on A = k/h for the approximation (5.4. 1 3) with equal step size hx = hy = h. 5.4.2. Prove that the ADI-scheme (5.4.3) is unconditionally stable.
STABILITY FOR NONLINEAR PROBLEMS
201
5.4.3. Construct a second-order accurate ADI approximation of type (5.4.3) of
Use the energy method to prove that it is unconditionally stable.
5.5. STABILITY FOR NONLINEAR PROBLEMS
The stability theory presented so far holds for linear problems. Because most problems in applications are nonlinear, one might think that the linear theory is of no use for real problems. However, if the solution u of the nonlinear dif ferential equation is sufficiently smooth, then to first approximation the error equation can be linearized about the solution u, and convergence follows if the linearized error equation is stable. We start with a simple example. Consider Burger's equation I'
= constant > 0,
(5.5. 1 )
with 27r-periodic initial data
u(x, O)
=
f(x),
f(x)
One can then show the following.
E
C=.
(5.5.2)
Equations (5.5. 1) and (5.5.2) have a solution that belongs to c = for all t. Also, one can estimate the derivatives: for every i, j there is a constant such that Theorem 5.5.1.
Cij
max x. 1
ju Ci l' -( j) . I oX'oti di+ I j i+ �
< _
(5.5.3)
This estimate cannot be improved. We approximate the above problem by the Crank-Nicholson method
(5.5.4) and assume that
k = "Ah, "A > 0 constant.
STABILITY AND CONVERGENCE FOR NUMERICAL APPROXIMATIONS
202
The error
satisfies
(I - � (Qo(un+ I ) - h2en+ I DO») en+ I (I � (Qo(un) - h2enDo») en =
+
+
kRn+ I /2 (u),
(5.5.5)
where
and
(5.5.6) is the truncation error. Thus, the R2j are expressions in u and its derivatives of order up to 2j + 2. If we neglect the nonlinear terms kh2 en+ I Doen+ I and kh2 en Doen , then Eq. (5.5.5) is a stable linear difference scheme. Therefore, we obtain, in any finite time interval ° � t � T, (5.5.7) Observe, however, that this estimate is only useful if h2 1I Rj+ I /2 1Ih « 1 . If v « 1 , then by Eq. (5.5.3) we can only guarantee this if h « v. We can improve the estimate. Formally, Eq. (5.5.5) converges as h ---7 0, kjh = = constant > 0, to the differential equation
A
Thus,
W I t + UW Ix + Ux W I W I (X, O)
=
=
VW I xx + R(u),
0.
(5.5.8)
203
STABILITY FOR NONLINEAR PROBLEMS
(5.5.9)
0, where
is the truncation error. The S�;1 /2 are expressions in W I and its derivatives of order up to 2j + 2. We expect that e - W I = &(h2 ). Therefore, we make the ansatz (5.5 . 1 0) Substituting Eq. (5.5 . 1 0) into Eq. (5.5.5) and using Eq. (5.5.9) gives us
(I � (Q I (Un+l ) -
+
)
h4 er I Do» er l e?
:=
=
( I � (QI (Un) +
+ k(R ( l ) t+ I /2 , 0,
+
)
h4 e7Do» e7 (5.5 . 1 1)
where
and
If we neglect the terms kh4 e7+ I Doe7+ 1 and kh4 e7Doe'{, then Eq. (5.5 . 1 1 ) is a stable linear difference scheme, and, in every finite time interval 0 ::; t ::; T, we obtain a bound
Thus,
204
STABILITY AND CONVERGENCE FOR NUMERICAL APPROXIMATIONS
provided we could neglect the nonlinear tenns; that is, (5.5 . 1 2) This process can be continued. After p steps,
ep is the solution of
(1 - � (Qp(un+1 ) h2P+2e;+l Do») e;+l ( I � (Qp(un) h2P+2e;Do») e; +
+
=
epO
+
=
k(R ( p) t+ 1 /2 ,
+
(5.5 . 1 3)
O.
Here
where the wp are solutions of linear differential equations of type (5.5.8). Also,
- j=L1 h2jwj - h2p+2ep, p
u
v
(5.5 . 14)
and ep is bounded, provided we can neglect the nonlinear tenns in Eq. (5 .5. 1 3). We shall now use Appendix A.2 to bound the solution of the nonlinear equa tion (5.5 . 1 3). We consider a fixed time interval 0 � t � T = N k and write Eq. (5.5. 13) in the fonn Ae + Ef(e)
Here
b,
205
STABILITY FOR NONLINEAR PROBLEMS
e b f(e)
(epN , epN- I , epI ) ' «R ( p) )N- I /2 , (R ( p) )N - 3/2 , • . . , (R ( p) ) 1 /2 ), , . • •
1 (epNDoepN, epN- 1 DoepN-I , . . . , epI DoepI )
-2
+ 2
1
(epN- 1 DoepN- I , epN-2DoepN-2 ,
. • . ,
ep0Doep0 ),
and A denotes the linear part of the equation. We use the nonn
Jel2
max
lI en ll �.
/ e l�
max
l en (xj ) l .
lI en ll �
L
l en (xj)/ 2 h
n
Let
n, ]
Then
j
implies that
Stability of the linear part of the equation tells us that
For the nonlinear tenn we obtain
STABILITY AND CONVERGENCE FOR NUMERICAL APPROXIMATIONS
206
I f(u) - f(v)1 2
Il un Doun - u" Dou" ll �
� max
n
m:x
L I (un (x}) j
- u" (x) )Doun (x}) + u" (xj )Do(un (xj )
- u" (Xj )) 1 2 h
� constant h-2 (lul: + I vl:) max
n
lI un - u" 11 �
or I f(u) - f(v) I � constant h- 3/2 Vl ul 2 + I v l 2 l u - vi .
(5.5 . 1 5)
Thus, referring to Theorem A.2.2 of Appendix A.2, f is locally Lipschitz con tinuous, and the Lipschitz constant satisfies
that is, I I T := lOlA- l L ::; constant h2p+ /2 • Also, since f(O) = 0 by Eq. (5.5. 1 5), we get
showing that the assumptions of Theorem A.2.2 are fulfilled for h sufficiently small. Thus, Eq. (5.5. 1 3) has a solution e; with max
n
li e; - (e;)o llh
� constant h2p+1 /2 •
Clearly, the solution of Eq. (5.5. 1 3) is locally unique. The initial guess (e;)o is the solution of the stable linear problem, and it can be estimated in terms of maxn I I (R ( p)r l /2 I1h . Therefore, the solution e; of the nonlinear equation is bounded if h is sufficiently small. One can use the asymptotic expansion Eq. (5.5. 14) in two ways to improve the error bound.
STABILITY FOR NONLINEAR PROBLEMS 1.
207
Richardson extrapolation. Let us calculate the solution of Eq. (5.5.4) for 2h and denote the solutions Uh and U2h . By Eq. (5.5 . 1 4),
two stepsizes h and we have
U
Uh
U
U2h
p
+
L h2j wj
+
j= l p
+
L (2h)2j Wj j= 1
&'(h2p +2 ), +
&'(h2p +2 ).
Observe that the Wj are solutions of the linear differential equations and do not depend on h. Therefore, U2h
+
P
L (4 j=2
and we obtain a fourth-order accurate approximation (4Uh - U2h)/3 of u. Of course, one can use the above process to calculate even higher order approx imations. For example, if we calculate Uh, U2h , U3h, then we can form a linear combination that gives us a sixth-order accurate result. The above procedure can also be used to calculate an approximation of the error. We have as a first approximation
2. Deferred correction. In this case we first calculate U from Eq. (5.5.4). To compute an approximation of the error h2 e, we replace u by U in Eq. (5.5.5) and neglect the nonlinear terms:
(I � QO(v"+ l )) en+ 1 ( I � Qo(v")) en +
-
eO
+
kRn+ l /2 (u),
O.
Using Eq. (5.5. 14), we commit an error of order &'(h2 ) in this process, and we get
Again, one can generalize this procedure to obtain higher order approximations.
STABILITY AND CONVERGENCE FOR NUMERICAL APPROXIMATIONS
208
REMARK. Richardson extrapolation and deferred correction are general proce dures that can be used to decrease the error for all kinds of problems whenever an asymptotic expansion exists. We have shown that, to first approximation, the error is given by
where
WI
is the solution of Eq. (5.5.8). This estimate is only useful if 1 . Unfortunately, this often poses a problem. R is the truncation error and it is composed of U and its derivatives. The leading terms are pro portional to Um, Uxxx, P Uxxxx. The best estimate we can obtain using Eq. (5.5.3) is
h2 11 w7 11h
«
Therefore, we need h2 p - 3 « 1 , which can be a severe restriction on the stepsize. If this condition is not satisfied, not only is the accuracy poor, but the approx imation may blow up faster than exponentially. This is often referred to as non linear instability; however, it occurs in linear problems with nonsmooth coef ficients as well. The other difficulty is associated with the solution operator S(t, to) of the differential equation (5.5.8). If II s(t, to) II is exponentially increasing, then one has to make h exponentially small if one wants to solve the problem over long time intervals. In fact, the problem can be worse if the solution operator Sh (t, to) of the linearized difference approximation grows faster than S(t, to). It is not difficult to generalize these results. Consider a quasilinear system Ut =
p( x, t, u, :x ) u,
u(x, O)
=
f,
(5.5 . 1 6)
of order p and approximate it by a one-step method
zf+ I = (l + kQ(x, tn , vn » zf, VO = f·
(5.5. 17)
Assume that kjhP = A. is a constant and that the approximation is accurate of order q. Assume also that Eq. (5.5. 1 6) has a smooth solution U. Substituting U into Eq. (5.5.17) gives us
(5.5 . 1 8)
209
STABILITY FOR NONLINEAR PROBLEMS
We expect that
U v = @(hq), and therefore make the ansatz -
(5.5.19) Substituting Eq. (5.5. 19) into Eq. (5.5. 17) gives us
en+1 eO
(1 + kQ l l (x, tn , Un » en + khq Q 2 (X, tn , Un , en )en + kRn , '
(5.5.20)
0.
QII is the difference operator which we obtain by linearizing Q(x, tn , un)d' about U. Q 12 is the remaining nonlinear part. If we neglect the nonlinear terms, then Eq. (5.5.20) converges formally to a linear differential equation
WIt = P I (x, t, U)w, + R, WI(X, O) = 0.
(5.5.2 1 )
Here PI is the linearization of P(x, t, u)u about U. Assume that Eq. (5.5.2 1 ) has a smooth solution [i.e., Eq. (5.5.2 1 ) is a well-posed problem], and that
w7+ ' w?
=
=
0,
(1 + kQI , )w7 + kR + khq] Sn ,
is accurate of order q , . Then we define
el
by
For e l , we obtain the equation
e7+ ' e?
=
(I + Q21 (x, tn , Un , hqw7»e7 + khQ+q] Q22 (X, tn, un , hqw7 , e7 )e7 + k(R ( I ) r ,
= 0.
Now we can repeat this process. Thus, we can reduce the nonlinearity to higher and higher order in h. We can apply Theorem A.2.2 of Appendix A.2, provided the linearized difference approximation is stable. In particular,
Here
WI
is the solution of Eq. (5.5 .21).
STABILITY AND CONVERGENCE FOR NUMERICAL APPROXIMATIONS
210
EXERCISES 5.5.1. Prove that the L2 norm of the solutions of Eq. (5.5.1) is a nonincreasing function of t. 5.5.2. Consider the approximation (5.5.4) with Q(v)v replaced by
Q(v)v
:=
- "3
1
(vDov
+ Do v2 ) + IID+ D_ .
Use the energy method to prove that the solutions fulfill
5.5.3. Determine the coefficients extrapolation u
p
(X v
in the general formula for Richardson
L (X v V2V h v=o
+
(9'(h2 (P+ l ) .
BIBLIOGRAPIDC NOTES
The Kreiss Matrix Theorem was given by Kreiss (1962) ; the proof is also found in Richtmyer and Morton (1967). There are several constants occurring in the the orem, and it may be useful to know the relations between them. In particular, if the constant C2 in Eq. (5.2.27) is known, one would like to know the constant C 1 in Eq. (5.2.26). It has been proved by Spijker (1991) [see also LeVeque and Tre fethen (1984)] that the best possible value is C 1 = e . m . C2 and that C 1 and C2 grow at most linearly in the dimension of the matrices [see Tadmor (1981)]. For further results concerning these constants, see McCarthy and Schwartz (1965), McCarthy (1994), and van Dorsselaer, Kraaijevanger, and Spijker (1993). The general splitting procedure described in Section 5.4 is due to Strang
(1968).
Alternating direction implicit methods of type (5.4.3) were originally devel oped by Peaceman and Rachford (1955) and Douglas (1955). Later generaliza tions have been given by Douglas (1962) and Douglas and Gunn (1964). See also Beam and Warming (1978). The type of convergence proof technique used for nonlinear problems in Sec tion 5.5 was first used by Strang (1964), who treated quasilinear hyperbolic terms. Richardson extrapolation was introduced by Richardson (1927). Richardson extrapolation and deferred correction are discussed in Fox (1957) and Pereyra
(1968).
Stability difficulties resulting from nonsmooth coefficients and with nonlinear problems have been discussed by Phillips (1959), Kreiss and Oliger (1972), and Fornberg (1975).
6 HYPERBOLIC EQUATIONS AND NUMERICAL METHODS
6.1. SYSTEMS WITH CONSTANT COEFFICIENTS IN ONE SPACE DIMENSION
We have already discussed first-order systems (6. 1 . 1 ) in Section 4.3. Here we only add some complementary results and an example. In Section 4.5, we have proved that for general systems with constant coef ficients we could obtain an energy estimate
d (u, U)H dt
�
2a.(u, U)H
if and only if the initial value problem is well posed. We now explicitly con struct the H norm.
Let A be a matrix with real eigenvalues and a complete set of eigenvectors that are the columns of a matrix T. Let D be a real positive diagonal matrix. Then Lemma 6.1.1.
is positive definite, and B
=
HA
is Hermitian. H is called a symmetrizer of A. 211
212
HYPERBOLIC EQUATIONS AND NUMERICAL METHODS
Proof. The matrix H is Hermitian, and it is positive definite. By assumption, is a real diagonal matrix and so is DT- 1 A T. Therefore,
T - 1 A T = T*A*(T - l )*
which proves the lemma. Now consider a strongly hyperbolic system such as Eq. (6. 1 . 1).
Let H be defined as in Lemma 6.1.1. Then the solutions of the strongly hyperbolic system (6.1.1) satisfy
Theorem 6.1.1.
(u( · , t), Hu( · , t)) Proof.
=
(u( · , 0), Hu( · , 0)).
(6. 1 .2)
HA is Hermitian and, therefore, by Lemma 4.6. 1, d - (u, Hu) dt
(ul , Hu)
(u, Hul),
+
(Aux . Hu) (u, HAux), -(u, A*Hux) (u, HAux), (u, (HA - A*H*)ux), +
+
o.
The theorem shows how the symmetrizer H is used to construct a new norm such that the solution is nonincreasing in that norm. As an example, we consider the Euler equations (4.6.4a) and (4.6.4b). If the linearization is made around a constant state, the lower order term in the linearized system vanishes. Considering perturbations that are independent of y and Z, we arrive at a one-dimensional system with constant coefficients:
[D [! +
o u o o
(6. 1 .3)
where a is the speed of sound. (We have dropped the prime sign here.) Thus, the system reduces to a 2 x 2 system and two scalar equations 0, 0, o.
A
=
[ RU a2U/R ] '
(6. 1 .4a) (6. 1 .4b) (6. 1 .4c)
SYSTEMS WITH CONSTANT COEFFICIENTS
213
Obviously, Eqs. (6. 1 .4b) and (6.1 .4c) are hyperbolic. The eigenvalues of A are K =
U
± a,
showing that the system (6. 1 .4a) is strictly hyperbolic under the natural assump tion a > O. [The system (6. 1 .3) is not strictly hyperbolic, because the coefficient matrix has two eigenvalues U. However, this has no real significance, because the system decouples.] The eigenvectors of A are ( 1 , ±R/a)T corresponding to the eigenvalues U ±a. The matrix T - I in Lemma 6. 1 . 1 is, therefore, T- I = �
2R
R/a 1 ] [ R/a . -1
For any diagonal matrix D = diag(d I , d2 ) we have
and, with d l
=
d2 = 2, we get the symmetrizer
With ii = (u, ap/R)T we get, using Theorem 6. 1 . 1 , (6. 1 .5) that is, the introduction of the new norm (u, Hu) can be interpreted as a single rescaling of the dependent variables u = (u, pl. This transformation can of course be applied directly to the system (6. 1 .4a). Let H I /2 = diag( 1 , a/R). Then
and Eq. (6. 1 .4a) takes the form
214
HYPERBOLIC EQUATIONS AND NUMERICAL METHODS
where
This is a symmetric system and Eq. (6. 1 .5) follows immediately using the energy method. Finally, we note that if U = 0 and p ap/R, then =
-
U/t = -apx r -
(6. 1 .6)
-aUx t
Ptt
that is, both U and p satisfy the wave equation. EXERCISES 6.1.1. Derive the symmetrizer H for the system (6 . 1 .3 ) without using the decou pled form. 6.1.2. Derive the symmetrizer H of
A = What is the condition on
[
1
b o
a 1
b
0
]
a . 1
a, b for H to have the desired properties?
6.2. SYSTEMS WITH VARIABLE COEFFICIENTS IN ONE SPACE DIMENSION
We now consider systems
A(x, t)ux + B(x, t)u u(x, O) = f(x). Ur
=
+
F(x, t), (6.2 . 1 )
The generalization of Definition 4.3 . 1 is as follows.
The variable coefficient system (6.2.1) is called strictly sym metric, and strongly or weakly hyperbolic if the matrix A(x, t) satisfies the corre sponding characterizations in Definition 4.3. 1 at every fixed point x = xo, t = to.
Definition 6.2.1.
215
SYSTEMS WITH VARIABLE COEFFICIENTS
We now assume that A(x, t), B(x, t), F(x, t), and the initial values f (x) are 27r-periodic in x. We start with a uniqueness theorem.
Assume that A e C1 (x, t) is Hennitian and that B e C(x, t). Then Eq. (6.2.1) has at most one 27r-periodic solution u(x, t) e C1 (x, t).
Theorem 6.2.1.
Proof. Let u(x, t), vex, t) be two solutions of Eq. (6. 2 . 1 ) belonging to Then w(x, t) u(x, t) - vex, t) satisfies the homogeneous equation
c 1 (x, t).
=
WI
=
Awx
Bw
+
with homogeneous initial condition
w(x, 0)
o.
As in Section 4.6, Lemma 4.6. 1 gives us
:t II w l1 2
=
=
(w, WI )
+
(w, Bw) -(w, Ax w)
+
where 2ex
(WI' w) = +
+
+
(Bw, w) (w, (B + B*)w) ::;
max ( I A" I + X, I
(w, Awx)
IB
+
(Aw.n w) 2ex ll w ll 2 ,
B* !).
Therefore,
and the theorem follows. Now assume that the system (6.2. 1 ) is only strongly hyperbolic. By Lemma 6. 1 . 1 , we can find a positive definite Hermitian matrix H = H(x, t) such that HA is Hermitian. If H e C1 (x, t), then we can proceed as in Theorem 6. 1 . 1 and obtain the following theorem.
Replace the condition "A is Hennitian " in Theorem 6.2.1 by "there is a positive definite matrix H(x, t) e C1 (x, t) such that HA is Hermi tian. " Then the system (6.2. 1) has at most one 27r-periodic solution u(x, t) e C 1 (x, t).
Theorem 6.2.2.
If the system is strictly hyperbolic, then one can choose the eigenvectors of
A to be as smooth as the coefficients of A. Thus, if A e C1 (x, t), then the same
216
HYPERBOLIC EQUATIONS AND NUMERICAL METHODS
is true for the symmetrizer existence result.
H
=
(T - 1 )*T - 1 •
Without proof we now state an
Theorem 6.2.3. Assume that the system (6.2. 1) is strongly hyperbolic, that A, B, F, andf, are 2-rr-periodic functions of x and that they belong to r(x, t). If there is a 2-rr-periodic symmetrizer H E C�(x, t), then the initial value problem (6.2.1) has a solution E r(x, t). In particular, if the system is strictly hyper bolic, then there is a symmetrizer H E C� (x, t).
If the system is only weakly hyperbolic, then we cannot expect that the prob lem is well posed. This was shown in Section 4.3 for systems with constant coefficients. We show that the introduction of variable coefficients makes the situation worse in the sense that more rapid growth may occur. Consider the system
au at
V' (t)
[ 01 11]
Vet)
au
(6.2.2)
ax '
where
V(t)
=
[ -cosm� tt
sin t cos t
].
The coefficient matrix in Eq. (6.2.2) has real eigenvalues but it cannot be diag onalized, so the system is only weakly hyperbolic. Introduce a new variable u = V v. Then v is the solution of '
av at
av ax
+
Cv,
C
=
-
v(x, O)
, [ 0 1] -1 0 ' Vu(x, 0).
V VI =
=
(6.2.3)
The system (6.2.3) has constant coefficients and we can solve it explicitly. Let the initial data consist of a simple wave
v(x, O) Then, as in Section D(w, t) satisfies
=
eiwxq(w),
w real.
4.3, the solution is of the form vex, t) D(w , t)eiwX, where =
SYSTEMS WITH VARIABLE COEFFICIENTS
dv/d t
A A
=
=
A V,
[ ] 1
lW ° •
1
1
217
v(W , O)
+
C
=
[
=
q(W),
iw + . lW
iW -1
1
]
.
(6.2.4)
The eigenvalues of A are given by
-
K I = lW + V (iw +
1 ),
K2
==
lW - V-(iw +
1 ).
(6.2.5)
Thus, the solution of Eq. (6.2.4) has the form
where gz denotes the eigenvector corresponding to the eigenvalue K[, and (1 1 and (1 2 are determined by the initial values, that is, by
For large values of I w i , one of the eigenvalues K z always satisfies Re K ==
1 "2W 1 1 /2 +
&'
(
1
I w l l /2
)
.
(6.2.6)
Thus, v(w, t) grows, in general, like exp( l w/2 1 1/2 t) and the same is true for u(x, t) and u(x, t). To solve the system (6.2.3) numerically is rather difficult. Even if the initial data for the transformed system (6.2.4) only consists of a linear combination M
u(x, 0)
L
x eiw q(w)
w = �M
of low frequency components, rounding errors introduce high frequencies that can be greatly amplified. For example, if w = 1 00, t = 1 0, then exp( l w/2 1 1/2 t) = 5 . 1 2 x 1 030 . Thus, if V I OO(O) = lO� I O at t = 0, then V I OO(t) is, in general, of the order 1 020 at t = 1 0. If the Euler equations are linearized around a solution that depends on x, y, Z and t, there is a zero-order term left in the system as shown in Section 4.6. In the one-dimensional case, where all y and z derivatives vanish, the sys tem does not decouple as it does for constant coefficients. However, if the zero-order term is disregarded, then we get a system with the same structure
218
as Eq. cients.
HYPERBOLIC EQUATIONS AND NUMERICAL METHODS
(6.1 .3).
Thus, the symmetrization can be done as for constant coeffi
EXERCISES 6.2.1. Derive the explicit form of the zero-order term in the linearized Euler equations and prove that the system does not decouple in the one-dimen sional case. 6.2.2. In Section 6. 1, the second-order wave equation was derived from the linearized Euler equations with constant coefficients. Carry out the cor responding derivation for variable coefficients. 6.3 SYSTEMS WITH CONSTANT COEFFICIENTS IN SEVERAL SPACE DIMENSIONS
In this section, we consider first-order systems
d au au = L at v= I A v ax( v )
(6.3.1)
with constant coefficients. Here u = (u( ! ) , . . . , u(m) )T i s a vector function with m components depending on x = (x( I ) , . . . ,x (d) )T and t. The A v are constant m X m matrices. We are interested in 21r-periodic solutions, that is, P
=
,
1 , 2 . . . , d, t
(6.3.2)
� 0,
where ev denotes the unit vector in the x(v ) direction. Also, at t = 0, u(x, t) must satisfy 21r-periodic initial conditions
(6.3.3)
u(x, O) = f(x).
We now generalize Definition 4.3.1 and define what we mean by hyperbolic. Definition 6.3.1.
Consider all linear combinations
pew)
The system (6.3. 1) is called
wv real,
Iwl
=
(� ) d
w;
1/2 l.
219
SYSTEMS WITH CONSTANT COEFFICIENTS
1. strictly hyperbolic if, for every real vector w (W I . . . . , W d ) , the eigen values of P( w) are distinct and real, 2. symmetric hyperbolic if all matrices A v , v = 1 , 2, . . . , d, are Hermitian, 3. strongly hyperbolic if there is a constant K > and, for every real vector w, a nonsingular transformation T(w) exists with =
0
I
sup ( I T(w) 1 + I T- (W ) \ ) :s; K
Iw l = 1
such that T - 1 (w)P(w)T(w) A
= diag(�' 1 ' . . . , Am ),
A, Aj real,
4. weakly hyperbolic if the eigenvalues of P( w) are real. We, therefore, have the following theorem.
If the system (6.3.1) is strictly or symmetric hyperbolic, then it is also strongly hyperbolic. If the system is strongly hyperbolic, then it is also weakly hyperbolic. Thus, the relations as expressed in Figure 4.3.1 are also valid in several space dimensions. Theorem 6.3.1.
Proof The only nontrivial part is to prove that a strictly hyperbolic system is also strongly hyperbolic. It follows from Lemma A. 1 .9 in Appendix A. l that the eigenvectors can be chosen as smooth functions of w. Observing that I w i = 1 is a bounded set, it follows that I T(w)1 + I T - l (w)1 is uniformly bounded. In view of the example in the last section, weakly hyperbolic systems are sel dom considered in applications. In most cases the systems are either symmetric (or can be transformed to symmetric form) or strictly hyperbolic. As in Section 4.5, we can solve the initial value problem using Fourier expan sions. In particular, Theorem 4.5.3 gives us this theorem.
Theorem 6.3.2. The initial value problem (6.3.1) is well posed for strongly hyperbolic systems.
The generalization of the linearized Euler equations (6. 1 .3) to three space dimensions is
220
HYPERBOLIC EQUATIONS AND NUMERICAL METHODS
] + f [:] + [ � [D U + [ � 1 � ][: ] 0 0
0
U
U
0 o
0 W 0 0
a'
U
0
R
x
P
0
i ][ U
0
V 0 a' R 0 V 0 R 0
(6.3.4)
0,
a' R
p
W
z
where we again assume that a > O. The system is neither strictly nor symmetric hyperbolic. However, as in the one-dimensional case, it can be symmetrized by introducing p = apjR as a new variable. Consequently, the system is strongly hyperbolic. The symbol is given by
pew) A
=
[
0
a
ex
0 0
0
0 0
ex
RW I RW 2 RW3
Its eigenvalues
K\
ex
K are =
K2
= ex,
]
(a' /R)W I (a2jR)w 2 (a2jR)W 3 '
K3
ex
EXERCISES
+
ex =
UW I
a l w l , K4
ex
+ + VW 2
-
WW3 .
(6.3.5)
alw l .
6.3.1. Using the notation
for the system (6.3.4), find the matrix T such that T- I Aj T, j = 1 , 2, 3, are symmetric. Prove that there is no matrix S such that S- I Aj S, j = 1 , 2, 3, are diagonal. 6.3.2. Find a matrix T(w ) such that T- I (w)P(w)T(w) is diagonal and
I T - I (w) 1 . I T(w) 1 where Pew) i s defined i n Eq. function of w?
::;
constant for Iw l
=
1,
(6.3.5). Can one choose T(w) a s a smooth
SYSTEMS WITH VARIABLE COEFFICIENTS
221
6.4. SYSTEMS WITH VARIABLE COEFFICIENTS IN SEVERAL SPACE DIMENSIONS
In this section, we consider the initial value problem for
au at u(x, O)
d
L Av(x, t) v=
=
1 f (x).
au ax(v )
+
B(x, t)u
+
F(x, t), (6.4. 1)
u, f, and F are vector functions with m components and Av and B are m x m matrices. We assume that A, B, F, and f are smooth [i.e., they belong to C� (x, t)], 27r-periodic functions, and we want to find a 27r-periodic solution satisfying Eq. (6.3.2). Here
Hyperbolic is again defined pointwise.
The system (6.4.1) is called strictly, symmetric, strongly, or weakly hyperbolic if the matrix
Definition 6.4.1.
P(x, t, w)
=
d
L Av(x, t)wv,
Wv real,
(6.4.2)
v= 1
satisfies the corresponding characterizations in Definition 6.3. 1 at every point x = xo, t = to. Using Theorem 6.2.3, one can prove the following theorem. 1f the problem (6.4.1) is symmetric hyperbolic, then it has a unique smooth solution.
Theorem 6.4.1.
Except for the zero-order term, the linearized Euler equations have the form
(6.3.4), where U, V, W, a2 , and R now depend on x, y, z, and t . Hence, the system
can be symmetrized and the theorem can be applied. If the system is only strongly hyperbolic, then the existence proof is more complicated and is beyond the scope of this book. In fact, a general existence theorem is not known, and one has to make more assumptions. We present the idea of the proof. Consider the symbol iP(w), defined by Eq. (6.4.2), and the diagonalizing matrix T(x, t, w). By Lemma 6.l . 1 , we can construct a sym metrizer
H(x, t, w)
=
(r 1 (x, t, w» * D(x, t, w)T - 1 (x, t, w)
(6.4.3)
for every fixed x, t, w. Now assume that H(x, t, w) is a smooth function of all variables. We can then construct a positive definite bounded Hermitian operator H as in Section 4.5 such that we can obtain an energy estimate for (u, Hu). We state the result as follows:
HYPERBOLIC EQUATIONS AND NUMERICAL METHODS
222
Theorem 6.4.2. Assume that the symmetrizer (6.4.3) can be chosen as a smooth function of all variables. Then the initial value problem (6.4. 1) is well posed. If the system (6.4. 1) is strictly hyperbolic, then there is a smooth symmetrizer. EXERCISES 6.4.1. Consider the linearized Euler equations (6.3.4) with variable coefficients obtained by linearizing around a nonconstant flow U, V, W, R, a2 Is there any flow such that the system is strictly hyperbolic at some point •
X o, Yo, zo, to?
6.5. APPROXIMATIONS WITH CONSTANT COEFFICIENTS
Throughout the previous chapters, hyperbolic model problems have often been used to demonstrate various features for different types of approximations. In this section, we shall give a more unified treatment for approximations of linear hyperbolic problems with constant coefficients. In Section 5.2, it was shown that necessary and sufficient conditions for sta bility may be difficult to verify for problems with constant coefficients. Further more, for variable coefficient problems, the Fourier transform technique used in Section 5.2 does not apply directly. However, if the differential equation is hyperbolic, it is possible to give simple conditions such that stability follows from the stability of the constant coefficient problem. The key to this is the concept of dissipativity, which was discussed at the end of Section 5.2. [The definition of dissipativity was given for the one-dimensional case there. The only required change in the condition (5.2.31) is that I � I � 7r must be replaced by I �j l � 7r,j = 1 , 2, . . . , d.] First, we consider the hyperbolic system (6.3. 1 ) and the explicit one-step approximation
vn + I Q=
Qv n ,
=
L BIE 1 ,
EI
(6.5.1)
1
where I = (I I . 12 , Id) i s a multiindex. Ev i s the translation operator in the coordinate x(v ) . It is assumed that the coefficient matrices BI do not depend on (Xj, tn ) and that the space and time step are related by k "Ah , where "A is a constant. All the results in Chapter 5 are given for general problems, and, conse quently, they apply here. However, by using the structure of the differential equation, it is possible to derive sufficient stability conditions that are easier to verify. For example, we have the following theorem. . • "
=
APPROXIMATIONS WITH CONSTANT COEFFICIENTS
223
Assume that Eq. (6.5.1) is an approximation of the strongly hyperbolic system (6.3.1). Then, it is stable if it is dissipative of order 2r and the order of accuracy is at least 2r 1 .
Theorem 6.5.1.
-
Proof. The main idea is to use the information about the eigenvalues of Q(�) to bound the norm. Consistency is used to connect the properties of the approx imation to those of the differential equation. We first consider small values of I � I . It was shown in Theorem 5 .2.5 that the order of accuracy can be defined in Fourier space, and, from the accuracy assumption, we have
Q =
i'(iw)k
+
R
Since the system
=
i'i
Lv Av� v
+
R,
(6.3.1) is strongly hyperbolic, there is a matrix T such that
where D is real and diagonal. We now use the matrix T to transform Q to "almost diagonal form" so that the magnitude of the norm is only slightly larger than the magnitude of the eigenvalues. Then the deviation from diagonal form can be compensated for by using dissipativity. Let S(O be a unitary matrix that transforms
(6.5.3) to upper triangular form. We get
(6.5.4) where
Let
R2 = S* T- I RTS, j
=
12 ,
,
(6.5.5)
where Lj , Dj , and Uj denote the strictly lower triangUlar, diagonal, and strictly upper triangular parts, respectively. By construction L I + L2 = 0, hence I L l I = &( 1 � 1 2 r). The matrix RI is unitary, and thus normal. Parlett (1966) has shown
HYPERBOLIC EQUATIONS AND NUMERICAL METHODS
224
that such matrices satisfy the inequality
l U I I � vim(m - 1) I L l I , and we obtain (6.5.6)
Because
I R2 1 &( 1 � 1 2r), it follows that I U2 1 &( 1 � 1 2r) and, therefore, =
=
(6.5.7) for some constant ao. The next step is to transform S* Ql S so that ao is replaced by a sufficiently small constant. In this way, the influence of the off-diagonal elements can be annihilated by the dissipative part of the eigenvalues that form the diagonal matrix D I + D2 . Let the diagonal matrix B be defined by
b
(6.5.8)
> 1.
We have
I BS* Q I SB- 1 1
I B(R I + R2 )B- I I , = I B(D + D2 )B- I + B(Lr + L2 + Ur + U2 )B- I I , I � I Dr + D2 1 + I B(U I + U2 )B- I \ . =
An estimate of the first term is immediately obtained from the dissipativity condition. For the second term, we note that if U is any strictly upper triangular matrix, a direct calculation shows that the transformation B UB- I decreases the norm by a factor b. Hence, from Eq. (6.5.7), we get
01
> 0,
(6.5.9)
if b is chosen sufficiently large. The inequality (6.5.9) can be interpreted as the construction of a new norm for Q, which is bounded by one in the neighborhood of I � I = 0. We have
(S* Q 1 S)*B2S*Q rS = B[BS* QrSB- I nBS* QrSB- I ]B, � ( 1 - 0 1 1 � 1 2r )B2 . Recalling that S is unitary, we get, by Eq.
(6.5.3),
APPROXIMATIONS WITH CONSTANT COEFFICIENTS
225
that is, with (6.5 . 1 0) it follows that (6.5 . 1 1 )
H clearly defines a nonn (u, Hu) 1 /2 in the m-dimensional vector space, because it follows from Eq. (6.5. 10) that H is Hennitian and positive definite with (6.5. 1 2)
CI
for > 0 and C2 > O. So far, we have only considered a neighborhood of I � I O. However, for ::; 1 D , D > 0 and, therefore, by Lemma A. 1 .5 of I � I � �o > 0, we have Appendix A. l , we can find a unifonnly positive definite such that
-
p(QJ
A
A
A
Q*HQ ::;
=
( - ) H. "2 D
1
H(n
A
(6.5 . 1 3)
H
Without restriction, we can assume that satisfies Eq. (6.5. 12) for all �. In the new nonn the difference approximation is a so-called contraction, that IS,
(u, Hu),
lunl�
H
(un, Hun) = (Qun - I , HQun - l) (un- I , Q*HQun- I ), ::; (un - I , Hun - l) = lun - 1 1 1·
1·1
The nonn if can be transfonned into a corresponding nonn in physical space. Any vector gridfunction Uj can be written as
(27rr d/2
L uw ei(w , Xj).
(6.5. 14)
w
The operator H is defined by
w
and we obtain, from Eq. (6.5 . 1 2),
(6.5 . 15)
HYPERBOLIC EQUATIONS AND NUMERICAL METHODS
226
(6.5. 1 6)
Therefore, Parseval's relation and Eq.
IIv n ll �
::;;
b-2 (v n , Hv n h
::;;
b- 2
::;; ::;;
.
..
(6.5. 1 1 )
:=
b- 2
L (fl, HfJ'),
L (fJ' - l , HfJ' - l), w
< _
b- 2
give us
w
v L COv , HAD) w
b2(m - 1 ) IlvO II �.
b-2 (vO, HVO )h , (6.5 . 17)
Thus, the method is stable. This way of constructing a new norm is essential for approximations with variable coefficients. The technique used is a generalized form of the energy method introduced in Section 5.3. It is very similar to the method of proving well-posedness for nonsymmetric continuous problems. The assumption that the order of accuracy must be at least 2r- l is restrictive. For example, the Lax-Wendroff method (2 1 . 1 7) is dissipative of order 4 and accurate of order 2. Therefore, Theorem 6.5 . 1 does not apply. There is a way to handle this difficulty. For general approximations, some extra condition must be used. For example, if we require strict hyperbolicity, the accuracy restriction can be removed. .
Theorem 6.5.2. Assume that Eq. (6.3.1) is strictly hyperbolic and has constant coefficients. Then the approximation (6.5.1) is stable if it is consistent and dis sipative. Proof. We first consider a neighborhood of � 0, I � I ::;; �o. By consistency, it follows from Theorem 5 .2.5 that the symbol has the form :=
By assumption, the eigenvalues of 2. . A .�: are distinct. Thus, there is a matrix T(f) with I T(n l ::;; C, I T - 1 (n l ::;; C, such that T - 1 2. . A.��T is diagonal, and, for I � I sufficiently small, we have (6.5 . 1 8)
APPROXIMATIONS WITH CONSTANT COEFFICIENTS
227
that is, by dissipativity,
(6.5.19) The remaining part of the proof is almost identical to that of Theorem 6.5.1. For I � I > �o, all of the eigenvalues are bounded away from the unit circle. We construct a new norm such that IQl fi < 1 and obtain the result. Dissipativity by itself is not sufficient to guarantee stability. Consider the approximation if + I
=
un +
[ -11 ] h D D
1 a °
2 + - un ,
which is consistent with Ur = 0, U = (u( l ) , U(2) T. Note that this is not a strictly hyperbolic system. The amplification matrix is
�
Q showing that, for a < for small �, we have
=
I
+
� ' 4a sm2 "2
[
-1 °
1 -1
]
'
1/2, the approximation is dissipative of order 2. However,
and obviously the stability condition
(5 . 2. 1 1 ) is not satisfied.
One can also prove the following theorem. Theorem 6.5.3. Assume that Eq. (6.3.1) is symmetric hyperbolic and has con stant coefficients. Then the approximation (6.5.1) is stable if
I. the coefficient matrices Bl are Hermitian 2. it is dissipative of order 2r 3. the order of accuracy is at least 2r - 2. The Lax-Wendroff method is based on the Taylor expansion
HYPERBOLIC EQUATIONS AND NUMERICAL METHODS
228
(6.5.20)
For the differential equation,
(6 . 5 . 2 1 ) we have
(6.5.22) which yields the approximation d1+ 1
=
[/ +
k2 + k(ADox + BDoy) + 2 (A2D+xD_x (AB + BA)Dox Dov + B2 D+yD_y)]u n •
We want to prove that Eq. (6.5.23) is stable if A and can also write the scheme in the form
(6.5.23)
B are Hermitian.
We
.
(6. 5 24) where
Qo = ADox + BDoy, Q l A2 (D+xD_x - D6x ) + B2 (D+yD_y (A2 D�x D:x + B2D�yD:y ).
-�
D6y ),
The amplification factor is
Q Qo Ql
=
X2 A X2 A I + X Qo + 2 Q2o + 2 Q l , i(A sin � l + B sin b), A
-
(
4 A 2 sin4
�
+
B2 sin4
�) .
By Lemma A. 1 A in Appendix A. 1 , we can estimate the eigenvalues with the help of the field of values. We have
z
of
Q
APPROXIMATIONS WITH CONSTANT COEFFICIENTS
229
Also,
(w, Q , w)
c
4
[H
sin'
� ) wi
'
� 8 max ( IA I 2 , IBI 2 ),
I ( �) ! ]
+ B
sin'
I Qowl 4 � [ iA(sin � I )w l + IB(sin � 2 )w I J 4 , � 8 [ IA(sin � I )w I 4 + IB(sin 6 )w I 4 ], <
8 · 1 6 max (IAI ' , IB I ' )
+
I ( �) i ] B sin'
w
'
[H
sin'
w
�) ! w
'
'
.
32 max ( IA I 2 , IB I 2 )(w, Q l w), I Qo w l 2 � 4 max (IA I 2 , IBI 2 ).
=
Therefore, we obtain
Observing that IA I 2
=
p 2 (A), it follows that I z l 2 � 1 for (6.5.25)
230
HYPERBOLIC EQUATIONS AND NUMERICAL METHODS
If A and B are nonsingular, then (w, Q 1 W) � O( I � 1 1 4 + 1� 2 1 4) and, consequently, the scheme is dissipative of order four for A < AO. Therefore, by Theorem 6.5.3, it is stable. If A or B is singular, we cannot use the theorem because the approximation is not dissipative. To prove stability we use the Kreiss Matrix Theorem 5.2.4. We have proved that
1 I (v, Qv) 1
max
Ivl�
�
1,
for A � AO'
Therefore, the solutions of the resolvent equation
(Q - zl )w
=
g
can be estimated. The equality
-(w, Qw) + z l wl 2
-(w, g)
implies, for I z I > 1 , that (izi
- 1) l w 1 2
�
I wl lg l ;
that is,
Therefore, the approximation is stable. We have proved Theorem
6.5.4.
The Lax-Wendroff method (6.5.23) is stable if A � AO, where is defined in Eq. (6.5.25). It is dissipative of order four if A and B are non singular and A < AO.
Theorem 6.5.4.
AO
If A or B is singular, we can modify Eq. (6.5.24) so that it will be dissipative by adding a fourth-order dissipation term; in other words, we use
(6.5.26) where (J
constant >
O.
231
APPROXIMATIONS WITH CONSTANT COEFFICIENTS
As above, we can show that the approximation is dissipative of order four and, therefore, stable for all sufficiently small A. Next we consider linear multistep approximations. Theorem 6.5.5. Assume that Eq. (6.3.1) is strictly hyperbolic with con stant coefficients and that Q is a difference operator that is consistent with Lv Av ajax( v ) . Then the approximation
is stable if it is dissipative and if the only root on the unit circle of (6.5.27)
o
is the simple root Z = 1. (For convenience it is assumed that the coefficients and are independent of h.)
(3a
(Xa
Proof. By Lemma 5.2.2, the eigenvalues of the symbol corresponding to the one-step form are given by
0,
where
(6.5.28)
Q is the symbol of Q. By consistency kQ
=
(�
k i
= iA I � 1
A,w, + fJll w I ' h»
L Av �:
+ (9'( 1 � 1 2 ),
)
=
rio.
In = 1.
Strict hyperbolicity implies that there is a matrix sufficiently small
is diagonal with distinct eigenvalues. Here Thus, Eq. (6.5.28) is equivalent to
� A" , + fJll < 1 2 ),
I T- I I
T = T(f)
such that for I � I
� constant, I T I � constant.
HYPERBOLIC EQUATIONS AND NUMERICAL METHODS
232
(6.5.29) By assumption, for � = 0, this equation has exactly m roots Zj with Zj = 1 . Because D(r) is diagonal, we can separate Eq. (6.5.29) into m different equa tions q
L
,, � - I
(ex " -
,B" I � l iAdj )z- "
o
1 , 2 , . . . , m.
j
Each one of these correspond to the one-step form
for a scalar equation in Fourier space (cf. Lemma 5.2.2). Here, the ij(jl are (q + 1) x (q + 1 ) matrices. Thus, with w = (w( l l , w(2l , . . . , w(m l l , we have transformed the problem in Fourier space to the block-diagonal one-step form
where
By assumption, for � = 0, there is only one eigenvalue of ij(jl with Hence, by the dissipativity assumption, the eigenvalues satisfy
I ZI I I zv I
� �
1 - 01 1 � 1 2r, -
0 2,
1.
[
ijUl can be transformed to the form
]
S-I Q-( j) S = ZOI ij0jl i ,
�
=
01 > 0, V = 2, 3, . . . , q + 1. 02 > 0,
Thus, by Lemma A. 1 .6 in Appendix A. l ,
�1
Z
�
where p(ijijl ) �o, and lsi + I S- I I constant. Recalling the - 02 , I � I technique in the proof of Theorem 6.5.1, there is another norm 1 · lil2 such that
233
APPROXIMATIONS WITH CONSTANT COEFFICIENTS
I Q- 2( j) l ii2
$
1 -
03 >
03 ,
I�I
0,
$
�o.
With
[ � tJ ,
H we now have
IS- I Q( j)Sl ii
I�I
$ 1,
$
�o .
This concludes the case I � I $ �o. For I � I > �o , all eigenvalues Zj are bounded away from the unit circle. There fore, we can complete the proof as in Theorem 6.5.1. Up to this point, we have assumed that there are no lower order terms in the differential equation. However, we recall from Section 5.1 that lower order perturbations and forcing functions do not affect stability. This means that we can treat general systems of the form d au au � - Av v) ax( £... at 1
--
-
V=
+
B(x, t)u
+
F(x, t),
(6.5.30)
where the matrices A v are constant. For most approximations, the additional terms can be approximated in an obvious way. For example, one version of the leap-frog scheme is
2k
d
L AvDovvn V=
I
+
2kBn v n
However, the example given in Section stitute
2.2
+ if - I +
2kP .
(6.5.3 1)
shows that it is often better to sub
In both cases, it is clear that the resulting scheme has the form of Eq. (5. 1 .20). Therefore, the scheme is stable if the principal part is stable. The effect of the forcing function is described in Theorem 5 . 1 . 1 . A straightforward derivation of a scheme of Lax-Wendroff type is obtained
HYPERBOLIC EQUATIONS AND NUMERICAL METHODS
234
by differentiating Eq.
UII
(6.5.30). For the one-dimensional
case, we have
AUxt + B(x, t)Ut + Bt (x, t)u + Ft (x, t), A2 uxx + (AB + BA)ux + (ABx + B2 + Bt ) u + AFx + BF + Ft .
Assume that the derivatives of B and F are available analytically. Then the scheme is v" + 1
=
(I + k
+ kADo +
;
k
A 2D+ D_
)
un
(�
(AB(tn ) + B(tn)A)Do +
(
+
)
+ B(tn)2 + Bt(tn)) + B(tn ) u n + k
r
�
�
(ABAtn )
)
(AF; + B(tn )Fn + Fn ,
(6.5.32)
and consists of three parts of different orders in h. The first term is the principal part, and we assume that (kp(A) 5 h) so that it is stable. The operator kDo occurring in the second part is bounded because
If Bx and Bt are bounded, then stability follows from Theorem 5 . 1 .2 and an estimate in terms of F, Fx . and F/ is obtained from Theorem 5.1.1. Actually, the requirements on B and F can be weakened. If B i s bounded, but Bx and Bt are not, we substitute
2 (B(xj + 1 . tn ) - B(Xj 1 . tn )), A
-
1
2 (B(Xj , tn + d - B(Xj, tn - l )),
which are bounded operators. Since these terms are multiplied by an extra factor k, stability follows. Similarly, we can substitute differences for derivatives of F and obtain an estimate in terms of F as for the differential equation. EXERCISES 6.5.1. Prove stability for the approximation Eq. (6.5.21) is singular.
(6.5.26) for the case that A or B in
APPROXIMATIONS WITH VARIABLE COEFFICIENTS
235
6.5.2. Let
and use "backward differentiation"
for an approximation of Ut
= Aux + Buy- Derive the stability condition for k/h.
6.6. APPROXIMATIONS WITH VARIABLE COEFFICIENTS
In Section 5.3, general approximations with variable coefficients were treated, and the energy method was used as the main tool for stability analysis. For dissipative difference schemes, it is also possible to generalize the theorems discussed in the previous section, which were based on Fourier analysis. The symbol Q and dissipativity are defined pointwise. Note, however, that Q cannot be used to obtain an analytic representation of the solution in general. Consider the symmetric hyperbolic system (6.4.1) with variable coefficients and a one-step explicit difference scheme
Q
(6.6.1)
The symbol i s given by
where
Dissipativity is defined as follows. 6.6.1. The approximation (6.6. 1) is dissipative of order 2r if all the eigenvalues Z," of Q(x, t, 0, n satisfy
Definition
HYPERBOLIC EQUATIONS AND NUMERICAL METHODS
236
I ZI'(x, t, O, OI � (1 D I � 1 2r ), I � v l � 7r, II = 1, 2, . . . , d, J.I. = 1, 2, . . . , m, -
for all x, t , where 15 > ° is a constant independent of x, t, and �. We now generalize Theorem
6.5.3.
6.6.1. Assume that the hyperbolic system (6.4.1) and the approxima tion (6.6.1) have Hermitian coefficient matrices that are Lipschitz continuous in x and t. Then the approximation is stable if it is accurate of at least order 2r - 2 and dissipative of order 2r. Theorem
We shall not give the proof here. The technique used is similar to that used to prove Theorem 6.5 . 1 . A new norm is constructed via the matrix iI defined in Eq. (6.5.10) (where T is now unitary since we have Hermitian coeffi cients). For strictly hyperbolic systems we have the following theorem. 6.6.2. Assume that Eq. (6.4.1) is strictly hyperbolic and that the approximation (6.6.1) is consistent, dissipative, and has coefficients that are Lipschitz continuous in x and t. Then the approximation is stable.
Theorem
As an example, consider the Lax-Wendroff method for the system
ut
=
A(x)ux,
where A(x) is uniformly nonsingular. Using Utt
=
A(x)uxt
=
A(x) [A(x)uxlx
we obtain the second-order accurate approximation
(6 . 6.2) Here Ai can be used in place of Ai 1/2 without losing second-order accuracy. Let k/h = A = constant > 0. If A(x) is at least Lipschitz continuous we can write Eq. (6.6.2) in the form _
.vi.n + I
=
where the operator
R
(I
k2
+ kAjD0 + 2 Aj2D+ D ) Vin + kR·.Vjn , -
(6.6.3)
is uniformly bounded. Therefore, the last term can be
APPROXIMATIONS WITH VARIABLE COEFFICIENTS
237
disregarded and we obtain the symbol
The approximation is consistent and dissipative of order 4 if
kjh
max e(A(x» .x
<
1.
Hence, by Theorem 6.6.2, it is stable if A(x) has distinct eigenvalues. Nonlinear systems can often be written in so called conservation form
a F(x, t, u), ax u(x, O) = f(x). Ut =
(6.6.4)
Consider the second-order accurate two-step procedure
Vj + 11//22 n+
(6.6.5a)
if} + 1
(6.6.5b)
where we have used the notation FJ = F(xj , tn , v'j). Note that the halfstep sub scripts and superscripts are just notation, no extra gridpoints are generated. Equation (6.6.5a) just defines provisional values for Eq. (6.6.5b) and the approx imation is a one-step scheme in the sense that no extra initial data are needed. Assume that Eq. (6.6.4) has a smooth solution U. As in Section 5.5, the stability behavior of the scheme is determined by the linearized error equation. We make the ansatz
Then
where
r }
A(x, t) gives us
a
a;; F(x, t, U),
238
HYPERBOLIC EQUATIONS AND NUMERICAL METHODS
ejn++ 11//22 "2I (ej+n ejn ) "2k D+(Ae)j kGj+ 1/2, eJ � «Ae);:iji - (A e);� iH ) k H'j, 1
O.
+
n
+
+
+
n
+
(6.6.6)
Here the forcing functions and H are functions of U and e. The coefficients of the nonlinear tenns in e are of the order By referring back to Section 5.5, we find that e is bounded if the linear part of Eq. (6.6.6) is stable. For the discussion of the stability, we can assume that A is constant and disregard and H. Then we can easily eliminate the halfstep and obtain
G
h4.
G
e�) + I
e�}
e�}
+ +
h A(
k ej + 11//22 k [I n h A "2 (ej +1 n+
which is equivalent with the original Lax-Wendroff approximation (6.6.3) with R = O. (All methods that reduce to this scheme for constant coefficients are said to be of the Lax-Wendroff type.) Hence, for < 1 , e is bounded and the error v - U =
&(h2).
kp(A)jh
EXERCISES 6.6.1. Prove that the second-order accuracy of Eq. (6.6.2) is retained if is replaced by Aj• 6.6.2. Consider the Lax-Wendroff approximation (6.6.2) of the linearized Euler equations (6.1 .3), where U, R , and a2 depend on x and t. Theorem 6.6. 1 or 6.6.2 cannot be used to prove stability if the flow has a stagnation point U(xo, to) = 0 or a sonic point U(xj, t l ) = a(x I , t I ). Why is that so, and how can the approximation be modified to be stable?
Aj - 1/2
6.7. THE METHOD OF LINES
If one discretizes in space and leaves time continuous, one obtains a system of ordinary differential equations. This system is then solved by a standard method
THE METHOD OF LINES
239
like a Runge-Kutta method or a multistep method. Most methods in use can be constructed in this way, for example, the leap-frog and the Crank-Nicholson methods. In the latter case, the trapezoidal rule is used for time discretization. Exceptions are the Lax-Wendroff method and the combination of the leap-frog and Crank-Nicholson methods. Consider a strongly hyperbolic system
au
at =
p ( axa ) U =
� £....
v
Av
au ax( v )
(6.7.1)
with constant coefficients. We can solve it by Fourier transform; that is, we solve the system of ordinary differential equations
dft dt
P(iw)u,
p
(6.7.2)
We now construct difference approximations. To begin with, we discretize only space and not time; that is, we approximate pea/ax) by a difference operator
(6.7.3) where the matrices B[ do not depend on h. We obtain a system of ordinary differential equations
dw dt
(6.7.4)
where w is the vector containing the gridvalues. We assume that the approxi mation is accurate of order 2r - 1, r � 1 ; thus, for smooth solutions u(x, t), we have
(6.7.5) In general, Eq. (6.7.4) is not useful because it is not stable [which means that Eq. (6.7.4) has solutions that grow like exp (at/h), a > 0). We modify the system and consider
dw dt
Qw,
Q
(J
constant >
0,
(6.7.6)
HYPERBOLIC EQUATIONS AND NUMERICAL METHODS
240
where
Dr Dr (_ I )r- 1 h2r - 1 � ,£..J H(V) -x(V) · Observe that the order of accuracy is not changed. We want to show that, for sufficiently large (J , the solutions of Eq. (6.7.6) decay exponentially. Fourier transfonning Eq. (6.7.6) gives us
l�p l �
11",
�
=
wh,
where
h (Mi�) Q2
-4'
IR2r(i�) 1 � const. 1 � 1 2r ,
(�> � )1
P(i�)
+
R2r (i�), n2 '
The fonn of Q I is derived using the same technique as in Theorem 5.2.5. By assumption, Eq. (6.7. 1 ) is strongly hyperbolic. Therefore, there is a transfor mation S such that S- I PS = iA where A is a real diagonal matrix. Introducing S- I W = tV as a new variable gives us
(6.7.7a) For sufficiently large
(J ,
Q + Q'
-
<
(J
2h
� Q2 ,
(6.7.7b)
and we have proved that the solutions of Eq. (6.7.6) decay exponentially. In analogy to Definition 5.2. 1, we define dissipativity for the method of lines.
The approximation (6. 7.6) is dissipative of order 2r if all the eigenvalues sp of Q satisfy
Definition 6.7.1.
(6.7.8) where as is the exponential growth factor.
241
THE METHOD OF LINES
Now assume that
QI
is accurate of only order 2r - 2. Then
where
Thus,
and Eq. (6.7.7b) holds if S- IR2r - I S is anti-Hermitian. If Eq. (6.7.1) is a sym metric hyperbolic system, then S is a unitary matrix, and we need anti-Hermitian coefficients in R2r - I . This condition is fulfilled for all centered approximations of the type Eq. (3.1 .7). Thus we have proved the following theorem. 6.7.1. Approximate P by any difference operator QI that is accurate of order 2r - 1 . We can add dissipation terms of order 2r that do not change the order of accuracy so that the semidiscrete approximation (6. 7.6) is stable and dissipative of order 2r. If Q I is only accurate of order 2r - 2, then the procedure leads to a stable approximation if the system of differential equations is symmetric hyperbolic and if Q I is a centered approximation of the type of Eq. (3.1.7). Theorem
We now discretize time and discuss some of the standard methods used to solve ordinary differential equations numerically. We start with the Runge-Kutta methods. 6.7.1.
Runge-Kutta Methods
Let y' = f(y, t) be a system of ordinary differential equations. The classical fourth-order accurate Runge-Kutta method is given by
where
k > 0 denotes the time step, and
HYPERBOLIC EQUATIONS AND NUMERICAL METHODS
242
kf(u n , tn ) ,
kl
( (
kl kf U n + 2 , tn +
k2
k 2 , tn + kf U n + 2 kf(u n + k3, tn + k ) .
k3 k4
�) , �) ,
For the system (6.7.6), we obtain kl
k Qun ,
k2
kQ
k3
kQ
k4
kQ
(I � ) (I � (I � ) ) (I (I � (I � ) ) ) Q u n,
+
+
Q
+ kQ
Thus,
n+ l
U
=
+
+
Q
Q
un ,
+
(� ) � }=o
(k Q)) ). f.
Q
un .
Un .
(6.7.9)
General $p$th-order accurate Runge-Kutta methods for Eq. (6.7.6) are of the form

$$u^{n+1} = \left( \sum_{j=0}^{p} \frac{(kQ)^j}{j!} + \sum_{j=p+1}^{m} \alpha_j (kQ)^j \right) u^n. \qquad (6.7.10)$$

Let $Q$ be the modified difference operator in Eq. (6.7.6). We Fourier transform, change variables as in Eq. (6.7.7a), and obtain

$$v^{n+1} = \left( \sum_{j=0}^{p} \frac{(k\hat{Q})^j}{j!} + \sum_{j=p+1}^{m} \alpha_j (k\hat{Q})^j \right) v^n, \qquad (6.7.11a)$$
where

$$k\hat{Q} = i\lambda\hat{A}(\xi) + \lambda\hat{R}_{2r}(i\xi) \quad \text{if } p = 2r - 1, \qquad (6.7.11b)$$

or

$$k\hat{Q} = i\lambda\hat{A}(\xi) + \lambda\hat{R}_{2r-1}(i\xi) + \lambda\hat{R}_{2r}(i\xi) \quad \text{if } p = 2r - 2. \qquad (6.7.11c)$$

In the second case, we assume that

$$\hat{A}(\xi)^{*} = \hat{A}(\xi), \qquad \hat{R}_{2r-1}(i\xi)^{*} = -\hat{R}_{2r-1}(i\xi). \qquad (6.7.12)$$

This is a natural assumption for symmetric hyperbolic systems and centered difference operators. We need the following lemma.

Lemma 6.7.1. Let $A$ be a matrix and $\{\alpha_j\}_{j=p+1}^{m}$ given scalars, where $p$ is an integer with $1 \le p \le m$. For sufficiently small $|A|$ we can find a matrix $B$ that commutes with $A$ such that

$$e^{A+B} = \sum_{j=0}^{p} \frac{A^j}{j!} + \sum_{j=p+1}^{m} \alpha_j A^j. \qquad (6.7.13)$$

The matrix $B$ has a series expansion

$$B = \sum_{j=p+1}^{\infty} \beta_j A^j, \qquad (6.7.14a)$$

where

$$\beta_{p+1} = \alpha_{p+1} - \frac{1}{(p+1)!}, \qquad
\beta_{p+2} = \alpha_{p+2} - \frac{1}{(p+2)!} - \left( \alpha_{p+1} - \frac{1}{(p+1)!} \right). \qquad (6.7.14b)$$
Proof. We make the ansatz (6.7.14a) and write Eq. (6.7.13) as

$$e^{A+B} = e^{A} + S, \qquad S = \sum_{j=p+1}^{m} \left( \alpha_j - \frac{1}{j!} \right) A^j - \sum_{j=m+1}^{\infty} \frac{A^j}{j!}.$$

After multiplying by $e^{-A}$, we get

$$e^{B} = I + e^{-A} S,$$

and after taking the logarithm of both sides,

$$B = \log\,(I + e^{-A} S). \qquad (6.7.15)$$

Because $\log\,(I + e^{-A}S)$ is well defined, the matrix $B$ exists, and the coefficients in Eq. (6.7.14) are defined by the series expansion in Eq. (6.7.15). When calculating the coefficients of the leading terms, we obtain Eq. (6.7.14b). This proves the lemma.
When substituting $k\hat{Q} \to A$, $k\hat{R} \to B$ in the lemma, the solution of Eq. (6.7.11) can be written as

$$v^{n+1} = e^{k(\hat{Q} + \hat{R})} v^n, \qquad (6.7.16)$$

where

$$k\hat{R} = \beta_{p+1}(k\hat{Q})^{p+1} + \beta_{p+2}(k\hat{Q})^{p+2} + \cdots,$$

with $\beta_{p+1}$ and $\beta_{p+2}$ defined in Eq. (6.7.14b). Therefore, the approximation is stable if

$$\mathrm{Re}\; k(\hat{Q} + \hat{R}) = \tfrac{1}{2}\bigl( k(\hat{Q} + \hat{R}) + k(\hat{Q} + \hat{R})^{*} \bigr) \le 0.$$

First, consider the case $p = 2r - 1$. Then, recalling that $\hat{A}(\xi) = O(|\xi|)$, we have

$$k\hat{R} = O\bigl( |\lambda\xi|^{2r} \bigr).$$

Furthermore,

$$\mathrm{Re}\,(k\hat{Q}) = O\bigl( \lambda |\hat{R}_{2r}(i\xi)| \bigr) - 4^{r} \sigma \lambda \sum_{\nu} \sin^{2r} \frac{\xi_{\nu}}{2}. \qquad (6.7.17)$$

Thus, for $\lambda$ sufficiently small and $\sigma$ sufficiently large,

$$\mathrm{Re}\; k(\hat{Q} + \hat{R}) \le 0.$$
Next, consider the case $p = 2r - 2$, and assume that Eq. (6.7.12) is satisfied. Then Eq. (6.7.17) holds and, furthermore,

$$
\mathrm{Re}\,(k\hat{R}) = \beta_{p+1}\,\mathrm{Re}\,(k\hat{Q})^{2r-1} + O\bigl( (k|\hat{Q}|)^{2r} \bigr)
= \beta_{p+1}\,\mathrm{Re}\,\bigl( i\lambda\hat{A}(\xi) \bigr)^{2r-1} + O\bigl( |\lambda\xi|^{2r} \bigr)
= O\bigl( |\lambda\xi|^{2r} \bigr).
$$

Thus, we have the same situation as for $p = 2r - 1$, and we obtain Theorem 6.7.2.

Theorem 6.7.2. If the Runge-Kutta method is accurate of order $p = 2r - 1$, then approximation (6.7.10) is stable, provided $\sigma$ is sufficiently large and $\lambda$ is sufficiently small. If $p = 2r - 2$, this is also true if the system is symmetric and condition (6.7.12) holds.
The last theorem tells us that we can stabilize any Runge-Kutta method by adding dissipative terms. We now construct stable methods by using nondissipative operators in space. We approximate

$$P = \sum_{\nu} A_{\nu} \frac{\partial}{\partial x_{\nu}}$$

by

$$Q = \sum_{\nu} A_{\nu} Q_{q x(\nu)}, \qquad (6.7.18)$$

where the $Q_{q x(\nu)}$ are the centered operators of the type (3.1.7),

$$Q_{q x(\nu)} = D_{0x(\nu)} \sum_{j=0}^{(q/2)-1} (-1)^{j} \alpha_{j} \bigl( h^{2} D_{+x(\nu)} D_{-x(\nu)} \bigr)^{j}.$$

Then

$$h\hat{Q}_{q x(\nu)} = i \sin \xi_{\nu} \sum_{j=0}^{(q/2)-1} 4^{j} \alpha_{j} \sin^{2j} \frac{\xi_{\nu}}{2};$$

that is, the symbol is of the same form as $\hat{P}(i\omega)$. Therefore, there is a transformation that transforms $\hat{Q}$ to diagonal form, and we can assume that

$$k\hat{Q} = i\lambda\hat{\Lambda},$$

where $\hat{\Lambda}$ is a real diagonal matrix; that is, $\mathrm{Re}\,(k\hat{Q}) = 0$. The derivation leading to the form shown in Eq. (6.7.16) for the matrix $\hat{R}$ also applies here, and, for sufficiently small $\lambda$,

$$\mathrm{Re}\,(k\hat{R}) = \begin{cases}
(-1)^{(p+1)/2}\,\beta_{p+1}\,(\lambda\hat{\Lambda})^{p+1} + O\bigl(\lambda^{p+2}\bigr), & \text{if } p \text{ is odd}, \\[1mm]
(-1)^{(p+2)/2}\,\beta_{p+2}\,(\lambda\hat{\Lambda})^{p+2} + O\bigl(\lambda^{p+3}\bigr), & \text{if } p \text{ is even}.
\end{cases}$$
Therefore, we obtain the following theorem.

Theorem 6.7.3. For sufficiently small $\lambda$, the Runge-Kutta methods shown in Eq. (6.7.10), with $Q$ defined by Eq. (6.7.18), are stable if

$$(-1)^{(p+1)/2}\,\beta_{p+1} < 0, \qquad \text{for } p \text{ odd},$$

or

$$(-1)^{(p+2)/2}\,\beta_{p+2} < 0, \qquad \text{for } p \text{ even}.$$

Here $\beta_{p+1}$ and $\beta_{p+2}$ are defined as in Eq. (6.7.14b). If the above expressions are positive, then the methods are unstable.

For the classical Runge-Kutta methods, all $\alpha_j = 0$ in Eq. (6.7.10). In this case, the last theorem and Eq. (6.7.14b) tell us

Theorem 6.7.4. If $\alpha_{p+1} = 0$ for $p$ odd, or $\alpha_{p+1} = \alpha_{p+2} = 0$ for $p$ even, then the Runge-Kutta methods are stable for $p = 3, 7, 11, \ldots$ or $p = 4, 8, 12, \ldots$ and $\lambda$ sufficiently small. For all other $p$, the methods are unstable.
Thus, the classical fourth-order accurate Runge-Kutta method is stable. The same is true for the third-order accurate Runge-Kutta method (usually called the Runge-Kutta-Heun method). We now consider the model equation

$$u_t = u_x \qquad (6.7.19)$$

in more detail. We approximate $\partial/\partial x$ by the fourth-order accurate operator

$$Q = \tfrac{4}{3} D_0(h) - \tfrac{1}{3} D_0(2h)$$

and obtain the system of ordinary differential equations

$$w_t = \bigl( \tfrac{4}{3} D_0(h) - \tfrac{1}{3} D_0(2h) \bigr) w, \qquad (6.7.20)$$

which, after Fourier transformation, becomes

$$\hat{w}_t = i\hat{\alpha}\hat{w}, \qquad h\hat{\alpha} = \tfrac{4}{3} \sin \xi - \tfrac{1}{6} \sin 2\xi. \qquad (6.7.21)$$
We discuss the classical Runge-Kutta methods of order $p = 1, 2, 3, 4$. In this case, all coefficients $\alpha_j = 0$ in Eq. (6.7.10). For $p = 1$, we obtain Euler's explicit method. In Section 2.1, we proved that it is unstable. For $p = 2$, the method is also unstable. This follows from Theorem 6.7.4. For $p = 3$, we obtain the Fourier-transformed Runge-Kutta-Heun method

$$\hat{w}^{n+1} = \left( 1 + iq - \frac{q^2}{2} - \frac{iq^3}{6} \right) \hat{w}^n, \qquad q = \lambda h \hat{\alpha}, \quad \lambda = k/h.$$

The method is stable for $0 < |q| \le \sqrt{3}$, because

$$|z|^2 = \left( 1 - \frac{q^2}{2} \right)^{2} + \left( q - \frac{q^3}{6} \right)^{2} = 1 - \frac{q^4}{36}\,(3 - q^2).$$

A simple calculation shows that

$$\max_{\xi}\, \bigl( h |\hat{\alpha}| \bigr) = 1.372.$$

Therefore, we obtain stability for

$$\lambda \le \frac{\sqrt{3}}{1.372} \approx 1.26.$$

The classical fourth-order Runge-Kutta method becomes

$$\hat{w}^{n+1} = \left( 1 + iq - \frac{q^2}{2} - \frac{iq^3}{6} + \frac{q^4}{24} \right) \hat{w}^n.$$

The method is stable for $0 < |q| \le \sqrt{8}$, because

$$|z|^2 = \left( 1 - \frac{q^2}{2} + \frac{q^4}{24} \right)^{2} + \left( q - \frac{q^3}{6} \right)^{2} = 1 - \frac{q^6}{576}\,(8 - q^2).$$

Thus, we get the stability condition

$$\lambda \le \frac{\sqrt{8}}{1.372} \approx 2.06.$$
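The numbers 1.372, 1.26, and 2.06 can be reproduced by sampling the symbol in Eq. (6.7.21); a minimal check:

```python
import numpy as np

# Symbol of the fourth-order operator (6.7.20): h*alpha(xi) = 4/3 sin(xi) - 1/6 sin(2 xi).
xi = np.linspace(0.0, 2.0 * np.pi, 200001)
h_alpha = 4.0 / 3.0 * np.sin(xi) - np.sin(2.0 * xi) / 6.0
m = np.abs(h_alpha).max()

print(round(m, 3))                   # max h|alpha| = 1.372
print(round(np.sqrt(3.0) / m, 2))    # third-order Runge-Kutta limit:  1.26
print(round(np.sqrt(8.0) / m, 2))    # fourth-order Runge-Kutta limit: 2.06
```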
For any given approximation $Q$ in space, the work required for the fourth-order Runge-Kutta method is $\tfrac{4}{3}$ times the work required for the third-order method. However, the increase in the stability limit more than offsets the increase in work. Therefore, the fourth-order method is preferable. It is also preferable to the Lax-Wendroff method. The amount of work doubles, but the increase in the stability limit again offsets it. Compared with the leap-frog scheme, the fourth-order Runge-Kutta method requires four times as much work. Therefore, the leap-frog scheme is more efficient if we can use the maximal time step without loss of accuracy. In general, this is impossible, because the method is only second-order accurate in time. However, in many problems with different time scales the accuracy is retained. The stability condition for the leap-frog time differencing with the fourth-order approximation (6.7.20) is

$$\lambda < 0.72.$$
However, the storage requirement for the leap-frog scheme is twice that of the Runge-Kutta method. Also, the Runge-Kutta method has no spurious solutions that can grow exponentially. Thus, the fourth-order Runge-Kutta method may be preferable in this case.

There are also higher order Runge-Kutta methods. Here, not all $\alpha_j = 0$. Many of them, like the Runge-Kutta-Fehlberg or the Nystrom methods, are unstable for nondissipative approximations. We again consider the model equation (6.7.19), but now we approximate it by the dissipative approximation

$$w_t = \bigl( \tfrac{4}{3} D_0(h) - \tfrac{1}{3} D_0(2h) \bigr) w + \sigma (-1)^{r-1} h^{2r-1} D_{+}^{r} D_{-}^{r} w, \qquad \sigma > 0. \qquad (6.7.22)$$

Fourier transform gives us, instead of Eq. (6.7.21),

$$\hat{w}_t = -\hat{\gamma}\hat{w}, \qquad h\hat{\gamma} = -ih\hat{\alpha} + 4^{r} \sigma \sin^{2r} \frac{\xi}{2}. \qquad (6.7.23)$$

For the time integration, we use the fourth-order Runge-Kutta method. Theorem 6.7.2 tells us that the approximation is stable for sufficiently small $\lambda = k/h$. In practical applications, one wants to know the largest possible $\lambda$. In the theory of difference approximations for ordinary differential equations, one calculates the region $\Omega$ of absolute stability. If $-k\hat{\gamma} \in \Omega$ for all $\xi$, then the method is stable in our sense. For the fourth-order Runge-Kutta method, $\Omega$ is the shaded region in Figure 6.7.1.
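The region $\Omega$ can be traced numerically from the RK4 amplification factor $\sum_{j=0}^{4} z^{j}/j!$; the sketch below recovers the imaginary-axis extent $\sqrt{8}$ and the approximate real-axis extent $-2.785$ (the tolerance and grid resolution are implementation choices):

```python
import numpy as np

def rk4_amp(z):
    """Amplification factor of the classical RK4 method for y' = mu*y, z = k*mu."""
    return 1 + z + z**2 / 2 + z**3 / 6 + z**4 / 24

# Imaginary-axis part of the stability region Omega: |q| <= sqrt(8).
q = np.linspace(0.0, 4.0, 400001)
stable = np.abs(rk4_amp(1j * q)) <= 1.0 + 1e-12
print(round(q[stable].max(), 3))     # 2.828 = sqrt(8)

# On the negative real axis, Omega extends to about z = -2.785.
x = np.linspace(-4.0, 0.0, 400001)
stable = np.abs(rk4_amp(x)) <= 1.0
print(round(x[stable].min(), 3))     # -2.785
```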
Figure 6.7.1. Stability region for the fourth-order Runge-Kutta method.

6.7.2. Multistep Methods

One can use multistep methods instead of Runge-Kutta methods. In this case, we only need to calculate $Qu$ once at every time step. However, storage requirements increase with the order of accuracy. We have already discussed the leap-frog scheme, which is the simplest one. Now, we discuss the explicit Adams-Bashforth method

$$u^{n+1} = u^n + kQ \sum_{m=0}^{p} \gamma_m (kD_{-t})^m u^n, \qquad \gamma_m = (-1)^m \int_{0}^{1} \binom{-s}{m}\, ds, \qquad (6.7.24)$$
and the implicit Adams-Moulton method

$$u^{n+1} = u^n + kQ \sum_{m=0}^{p} \gamma_m^{*} (kD_{-t})^m u^{n+1}, \qquad \gamma_m^{*} = (-1)^m \int_{-1}^{0} \binom{-s}{m}\, ds. \qquad (6.7.25)$$

They are both accurate of order $p + 1$. Observe, however, that, in the second case, we need to know $u$ only at $\max(p, 1)$ time levels to calculate $u^{n+1}$. Table 6.7.1 shows the first seven coefficients. For $p = 0$, the Adams-Bashforth method is just the explicit Euler method, and the Adams-Moulton method is the Euler backward method. For $p = 1$, the Adams-Moulton method is the trapezoidal rule, which gives us the Crank-Nicholson method. Adams-Moulton methods are often used in the predictor-corrector mode. As an example, we use the third-order Adams-Bashforth method

$$\tilde{u}^{n+1} = u^n + \frac{kQ}{12} \left( 23u^n - 16u^{n-1} + 5u^{n-2} \right) \qquad (6.7.26a)$$

as a predictor and the fourth-order Adams-Moulton method

$$u^{n+1} = u^n + \frac{kQ}{24} \left( 9\tilde{u}^{n+1} + 19u^n - 5u^{n-1} + u^{n-2} \right) \qquad (6.7.26b)$$

as a corrector.
TABLE 6.7.1. Adams coefficients $\gamma_m$ and $\gamma_m^{*}$.

  m               0      1       2       3        4          5           6
  gamma_m         1     1/2     5/12    3/8    251/720    95/288    19087/60480
  gamma_m^*       1    -1/2    -1/12   -1/24   -19/720    -3/160     -863/60480
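The table entries follow from the integral formulas in Eqs. (6.7.24) and (6.7.25); a short exact computation with rational arithmetic (the polynomial representation below is an implementation choice, not from the text):

```python
from fractions import Fraction
from math import factorial

def poly_mul(p, q):
    """Multiply two polynomials given as lists of coefficients (ascending powers)."""
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def binom_neg_s(m):
    """Coefficients of C(-s, m) = prod_{i=0}^{m-1} (-s - i) / m! as a polynomial in s."""
    p = [Fraction(1)]
    for i in range(m):
        p = poly_mul(p, [Fraction(-i), Fraction(-1)])   # factor (-s - i)
    return [c / factorial(m) for c in p]

def integrate(p, a, b):
    """Exact integral of the polynomial p over [a, b]."""
    return sum(c * (Fraction(b)**(n + 1) - Fraction(a)**(n + 1)) / (n + 1)
               for n, c in enumerate(p))

def gamma(m):        # Adams-Bashforth coefficients, Eq. (6.7.24)
    return (-1)**m * integrate(binom_neg_s(m), 0, 1)

def gamma_star(m):   # Adams-Moulton coefficients, Eq. (6.7.25)
    return (-1)**m * integrate(binom_neg_s(m), -1, 0)

print([str(gamma(m)) for m in range(4)])        # ['1', '1/2', '5/12', '3/8']
print([str(gamma_star(m)) for m in range(4)])   # ['1', '-1/2', '-1/12', '-1/24']
```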
For the dissipative operators $Q$ in Eq. (6.7.6), one can prove the following theorem.

Theorem 6.7.5. Theorem 6.7.2 is also valid for the Adams methods.

If we use the nondissipative approximation (6.7.20), then we obtain the following theorem.

Theorem 6.7.6. Use the nondissipative approximation (6.7.20). For sufficiently small $\lambda$, the Adams-Bashforth methods are stable for $p = 3, 4, 7, 8, \ldots$ and the Adams-Moulton methods are stable for $p = 1, 2, 5, 6, \ldots$. Otherwise, they are unstable. The predictor-corrector method (6.7.26) is stable.
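The predictor-corrector pair (6.7.26) is easy to exercise on the scalar model problem $y' = iy$; the sketch below (with exact starting values, an assumption made for simplicity) shows the expected fourth-order error decay:

```python
import numpy as np

def ab3_am4_step(hist, f, k):
    """One PECE step of Eqs. (6.7.26a,b); hist holds (u^n, u^{n-1}, u^{n-2})."""
    un, un1, un2 = hist
    pred = un + k / 12 * (23 * f(un) - 16 * f(un1) + 5 * f(un2))            # (6.7.26a)
    corr = un + k / 24 * (9 * f(pred) + 19 * f(un) - 5 * f(un1) + f(un2))   # (6.7.26b)
    return corr

def solve(nsteps):
    f = lambda y: 1j * y                 # model problem y' = iy, exact solution e^{it}
    k = 2 * np.pi / nsteps
    hist = [np.exp(1j * k * n) for n in (2, 1, 0)]   # exact starting values
    for n in range(2, nsteps):
        hist = [ab3_am4_step(hist, f, k)] + hist[:2]
    return hist[0]                        # approximation to e^{2 pi i} = 1

e1 = abs(solve(500) - 1.0)
e2 = abs(solve(1000) - 1.0)
print(e2 < e1)   # True: halving k reduces the error (by roughly 16, fourth order)
```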
The stability regions $\Omega$ for the fourth-order accurate Adams-Bashforth and Adams-Moulton methods are shown in Figures 6.7.2 and 6.7.3, respectively.

Figure 6.7.2. Stability region for the fourth-order Adams-Bashforth method.

Figure 6.7.3. Stability region for the fourth-order Adams-Moulton method.
For the Adams-Bashforth method, the restriction on the step size is so severe that it cannot compete with the fourth-order Runge-Kutta method. If the Adams-Moulton methods are used in the predictor-corrector mode, then they are competitive if one has enough storage. The Hyman method uses the leap-frog scheme as predictor

$$\tilde{u}^{n+1} = u^{n-1} + 2kQu^{n}, \qquad (6.7.27a)$$

combined with the corrector (6.7.27b).

The stability region in Fourier space of the predictor itself is limited to part of the imaginary axis; this region is extended by the corrector step so that the combined method has a stability region that includes part of the left half-plane and a small part of the right half-plane (see Figure 6.7.4). This indicates that $Q$ may contain dissipative terms and also that the time differencing introduces dissipation itself, even if $Q$ is one of the centered difference operators (6.7.18). The stability limit for the model equation (6.7.20) is

$$\lambda < 1.09.$$

The order of accuracy in time can be shown to be 3. Again, the efficiency is comparable with the classical fourth-order Runge-Kutta method. We have only discussed difference operators $Q$ for the approximation in space in this section. The results can also be applied to Fourier operators $Q = S$. This was done in Section 3.2, where the leap-frog time differencing and the trapezoidal rule were discussed.
Figure 6.7.4. Stability region for the Hyman method.

EXERCISES

6.7.1. Prove that the Hyman predictor-corrector method (6.7.27) is third-order accurate in time. Derive the stability condition.
6.7.2. Consider a strongly hyperbolic system

$$u_t = \sum_{\nu=1}^{d} A_{\nu} u_{x(\nu)}, \qquad (6.7.28)$$

and the space approximation

$$w_t = \sum_{\nu=1}^{d} A_{\nu} Q_{\nu} w,$$

where $Q_{\nu} = D_{0\nu}$ or $Q_{\nu} = \tfrac{4}{3} D_{0\nu}(h) - \tfrac{1}{3} D_{0\nu}(2h)$. Assuming equal step sizes in all directions, derive the stability conditions for

1. the leap-frog scheme,
2. the third- and fourth-order Runge-Kutta methods.

In particular, what is the stability limit expressed in terms of $U$, $V$, $W$, $a$, $\lambda = k/h$ for the three-dimensional Euler equations and the fourth-order Runge-Kutta method?

6.7.3. Replace $Q_{\nu}$ by the Fourier operator $S_{\nu}$ in Exercise 6.7.2 and derive the stability conditions.
6.8 THE FINITE-VOLUME METHOD

The finite-volume method is designed for systems written in conservation form, and an attractive property of the method is that it automatically produces a conservative semidiscrete form. Another advantage is that it gives a convenient way of designing approximations on an irregular grid. The main applications are in several space dimensions, but, to explain the main ideas, we begin by considering the one-dimensional case. The method is based on a conservation law

$$u_t + F_x(x, t, u) = 0, \qquad (6.8.1)$$

which implies

$$\frac{d}{dt} \int_{0}^{2\pi} u(x, t)\, dx = 0. \qquad (6.8.2)$$
In the linear case, $F(x, t, u) = A(x, t)u$. We consider nonuniform grids, since the method is strongly associated with such approximations. The $x$ axis is divided into a sequence of finite volumes $\Delta_j = [x_{j-1/2}, x_{j+1/2}]$ with lengths $h_j$. The vector $u_j$ represents $u$ on $\Delta_j$, and is often thought of as an average over $\Delta_j$. We associate $u_j$ with the value of $u$ at some point $x_j$ in $\Delta_j$ (see Figure 6.8.1). It is first assumed that the points $x_j$ (and $x_{j+1/2}$) are generated by a smooth transformation $x_j = x(\xi_j)$, where the points $\xi_j$ are uniformly distributed in the $\xi$ space. This is most often the case in applications. The conservation law is integrated over the finite volume, yielding

$$\frac{d}{dt} \int_{x_{j-1/2}}^{x_{j+1/2}} u(x, t)\, dx + F_{j+1/2}(t) - F_{j-1/2}(t) = 0,$$

where the notation $F_j(t) = F(x_j, t, u_j)$ has been used. [We also write $F_j$ for $F_j(t)$.] Because we want to express the approximations in terms of $u_j$ and $F_j$, the vector $F_{j+1/2}$ is replaced by the average $(F_{j+1} + F_j)/2$, and similarly for $F_{j-1/2}$. In the integral, $u$ is represented by its value at $x = x_j$. In this way, we obtain the centered finite volume method

$$\frac{du_j}{dt} + \frac{F_{j+1} - F_{j-1}}{2h_j} = 0. \qquad (6.8.3)$$
d dt
L ujhj j
0,
Llj
Llj - 1
Llj + 1
•
xj - 2
xj - 1 Figure
xj - 1 /2
xj
xj + 112
xj + 1
xj + 3/2
6.8.1. Finite volumes !J,.j and gridpoints
Xj.
xj + 2
x
HYPERBOLIC EQUATIONS AND NUMERICAL METHODS
256
which is the discrete analogue of Eq. (6.8.2). Let $h$ be a constant given step size, for example, the uniform spacing in $\xi$-space, and let $a_j = h_j/h$. Then Eq. (6.8.3) takes the form

$$\frac{du_j}{dt} + \frac{1}{a_j}\, D_0 F_j = 0.$$

With $F = Au$, where $A$ is a constant matrix, we get

$$\frac{du_j}{dt} + A_j D_0 u_j = 0, \qquad A_j = \frac{1}{a_j}\, A, \qquad (6.8.4)$$

which can be viewed as a linear approximation with variable coefficients. If the coefficient matrix $A_j$ is Lipschitz continuous, that is,

$$|A_{j+1} - A_j| \le \mathrm{const} \cdot h, \qquad (6.8.5)$$

then the operator $A_j D_0$ is semibounded [cf. Eq. (5.3.16)], which implies that Eq. (6.8.4) is stable. The condition (6.8.5) is equivalent to

$$\left| \frac{1}{a_{j+1}} - \frac{1}{a_j} \right| \le \mathrm{const} \cdot h,$$

or

$$|h_{j+1} - h_j| \le \mathrm{const} \cdot h^2. \qquad (6.8.6)$$

But this inequality follows immediately from the assumption that the gridpoints $x_{j+1/2}$ are generated from a smooth transformation of the $\xi$ space, which has a uniform grid. The second-order accuracy of Eq. (6.8.3) also follows from this assumption. The differential equation (6.8.1) takes the form
$$\tilde{u}_t + b(x)\,\tilde{F}_\xi = 0, \qquad (6.8.7)$$

where $\tilde{u}(\xi, t) = u(x(\xi), t)$. A second-order approximation of Eq. (6.8.7) is obviously

$$\frac{dv_j}{dt} + b_j D_0 F_j = 0, \qquad (6.8.8)$$

where $b(x) = d\xi/dx$, $v_j = v(\xi_j, t)$, $F_j = F(x(\xi_j), t, v(\xi_j, t))$. With $F = Au$, Eq. (6.8.8) can as well be considered as the approximation (6.8.4) in physical space, with $a_j$ replaced by $1/b_j$. Therefore, second-order accuracy is established for Eq. (6.8.4) if

$$a_j = \left( \frac{dx}{d\xi} \right)_{\xi = \xi_j} + O(h^2).$$

This follows immediately from

$$h_j = x_{j+1/2} - x_{j-1/2} = h \left( \frac{dx}{d\xi} \right)_{\xi = \xi_j} + O(h^3).$$

For an actual implementation it is sufficient to store either the sequence $\{x_{j+1/2}\}$ or the sequence $\{x_j\}$. In the latter case, Eq. (6.8.3) is modified as

$$\frac{du_j}{dt} + \frac{F_{j+1} - F_{j-1}}{x_{j+1} - x_{j-1}} = 0. \qquad (6.8.9)$$

Since $x_{j+1} - x_{j-1} = 2h\,(dx/d\xi)_{\xi = \xi_j} + O(h^3)$, this is still second-order accurate. Actually, the latter formulation is preferable because, contrary to Eq. (6.8.3), it is consistent even if the grid $\{x_j\}$ is not generated by a smooth transformation from a uniform grid. Also note that $x_j$ is, in general, not the midpoint of the interval $\Delta_j$. However, after a completed calculation with Eq. (6.8.3), where the grid $\{x_{j+1/2}\}$ has been used, the $u_j$ values can be plotted at the centerpoints $(x_{j+1/2} + x_{j-1/2})/2$. Because

$$\frac{x_{j+1/2} + x_{j-1/2}}{2} = x_j + O(h^2),$$

we have

$$u_j = u \left( \frac{x_{j+1/2} + x_{j-1/2}}{2},\, t \right) + O(h^2),$$

showing that second-order accuracy is preserved with such a procedure. [Of course, this does not hold for a nonsmooth transformation $x(\xi)$.] The method (6.8.3) is sometimes modified as
$$\frac{dv_j}{dt} + \frac{1}{h_j} \left[ F\!\left( x_{j+1/2},\, t,\, \frac{v_{j+1} + v_j}{2} \right) - F\!\left( x_{j-1/2},\, t,\, \frac{v_j + v_{j-1}}{2} \right) \right] = 0,$$

which has essentially the same stability and accuracy properties. Next, we consider the two-dimensional case for a system in conservation law form
$$u_t + F_x(x, y, t, u) + G_y(x, y, t, u) = 0. \qquad (6.8.10)$$

The mesh consists of quadrilaterals, but these are not necessarily rectangles. It is assumed that the mesh has been obtained by a smooth transformation of a uniform rectangular grid such that neighboring quadrilaterals do not differ too much from each other in shape and size. In general, the new coordinate lines are not straight lines, but locally they are considered as such, which is consistent with the second-order accurate methods we consider. Figure 6.8.2 shows the mesh with the finite volume $\Delta$ in the middle with corners $A$, $B$, $C$, $D$. Green's formula yields

$$\iint_{\Delta} u_t\, dx\, dy + \oint_{\partial\Delta} \bigl( F(x, y, t, u)\, dy - G(x, y, t, u)\, dx \bigr) = 0 \qquad (6.8.11)$$

after integrating Eq. (6.8.10) over $\Delta$, where $\partial\Delta$ is the boundary of $\Delta$.

Figure 6.8.2. Finite volume mesh.

The notation $u_O$ can be used for the vector function value at the point $O$ (the image point of the center of the rectangle in the corresponding uniform grid), or for the average of $u$ over $\Delta$. It makes no difference for the method to be discussed here, and we use the first definition. The coordinates at the point $A$ are denoted by $(x_A, y_A)$, and so on, and we use the notation $F_A$ for $F(x_A, y_A, t, u(x_A, y_A, t))$, and so on. The line integrals between $A$ and $B$ are now approximated by the average of the vector values at $O$ and $S$ multiplied by the proper distance, and the other ones analogously. We get
f
at.
G dx
Z
Fo + FN Fo + F w ( YA - YD ) + ( YD - yc ) + ' 2 2 Go + Gs G O + GE (XB - XA ) + (xc - XB ) 2 2 G G O + GN o + Gw (XA - XD ) + (XD - xc ) + 2 2
With vol(Ll) denoting the area of .11, the integral in Eq. (6.8. 1 1 ) is represented by
ff
t.
In this way we get the
duo
2vol(Ll) dt
utdx dy
= vol(Ll)
duo . dt
--
centered finite volume method
+ (YB - YA )(Fo + Fs) + (Yc - YB )(Fo + FE ) + ( YD - yc )(Fo + FN ) + ( YA - YD ) (Fo + Fw ) - (XB - XA ) (G O + Gs ) - (xc - XB ) (GO + GE ) - (XD - xc )( Go + GN ) - (XA - XD )( GO + Gw )
=
O.
(6.8. 1 2) When summed over all volumes .11, all of the F and G terms cancel except for boundary terms, establishing the conservative property in the discrete case. If the mesh is rectangular, we have
YB - YA = YD - Yc By the usual notation
=
Xc - XB
(xo , Yo ) = (Xj , Yj ) ,
o.
HYPERBOLIC EQUATIONS AND NUMERICAL METHODS
260
and the finite volume method reduces to
G j 1 - Gi,j - l F j - Fi - 1 ,j dVij = O. + i+ l, + i, + 2.::l dt 2.::lXi Yj
(6.8. 1 3)
--
When designing finite volume methods with the Vij representing point values, it seems more natural to integrate over the quadrilateral .::l with comers at the gridpoints according to Figure 6.8.3. The line integrals are now approximated by the trapezoidal rule using the vector values at the comers, yielding
+ ( YD - yc ) (Fc
+ FD ) + ( YA - YD )(FD + FA ) ,
= ( Yc - YA ) (FB - FD ) - ( YB - YD ) (Fc
-
FA ) ,
and analogously for fa
Y Yc
YD
YB
A
B
x
Figure 6.8.3. Finite volume A with comers at the gridpoints.
THE FINITE· VOLUME METHOD
261
0.
(6.8.14)
On a rectangular grid this method reduces to
(6.8. 1 5) which is the regular box difference scheme. Regardless of time discretization, Eq. (6.8. 14) leads to an implicit scheme, hence it is not in common use. Fur thermore, dissipation terms are often included in finite-volume methods, and in this case compactness is lost. For discretization in time of the finite-volume methods we use the results of Section 6.7. The finite-volume method is based on a conservation law form of the differential equation, and, as we have demonstrated, nonlinearity is no hindrance. For application to the Euler equations, we need to rewrite Eq. (4.6.4). The conservation law form of the Euler equations is (6.8. 1 6)
0, where
[ pw�� ] ' [P p;uuw�U2 ] ' p ] pu [p pvw:U�U2 , [p �:pw2 ] . pw pu p p pp up w wp (uE up cp
=
F
=
=
H
G
+
These equations are complemented by the equation of state = p(P). For many types of flow there is no such simple relation between pressure and density. Instead one introduces the internal energy e as a new dependent variable and use an equation of state of the type = ( , e). This requires a new differential equation, which is
at aE
+
a x a
+
)
+
a y a
(uE +
)
a
+ az
( E +
) = 0,
(6.8. 1 7)
HYPERBOLIC EQUATIONS AND NUMERICAL METHODS
262
where
With E as the fifth component of the vector cP we have an extended and more general conservation law of the type in Eq. (6.8. 1 6).
EXERCISES 6.8.1. Verify that the conservation law (6.8. 1 6) is equivalent to the the system (4.6.4). 6.8.2. Derive the linearized form CPt + Acpx + Bc/>y + Ccpz
=
0
of Eq. (6.8. 1 6) by introducing a small perturbation around cp = <1> . Prove that the eigenvalues of A, B, and C are identical to those of the corre sponding coefficient matrices for the original linear Euler equations.
6.8.3. Use the results of Exercise 6.8.2 to derive the stability limit for the finite volume method (6.8. 1 3) applied to the Euler equations with a fourth order Runge-Kutta time discretization. 6.9. THE FOURIER METHOD When applying the Fourier method to symmetric systems with constant coeffi cients there is no difficulty to verify stability (Exercise 6.7.3). This is because the S operator is skew-symmetric. However, in general, it cannot be expected that a straightforward implementation of the Fourier method will be stable for sys tems with variable coefficients. The energy method used for difference methods is based on the fact that the difference operator Q approximating a/ax almost com mutes with a variable coefficientA(x); that is, QA(x) = A(x)Q + &(1). This is gen erally not true for the Fourier differentiation operator S, even if A(x) is Lipschitz continuous, and we must modify the method to ensure that it is stable. Corresponding to the concept of dissipativity introduced for difference meth ods, we introduce a smoothing operator H for the Fourier method. The basic idea is to modify the coefficients for higher wave numbers in each step so that the oscillatory components are damped. At the same time, we do not want to adversely affect the accuracy significantly, because high accuracy is one of the essential features of the Fourier method. We only consider problems in one space dimension here.
THE FOURIER METHOD
263
As usual, we work with N + 1 gridpoints in the interval [0, 271"), N even, and for convenience we introduce the notation M = N/2. Let TM denote the space of trigonometric polynomials with basis {eiwx/ ..J2; }l w l sM . We divide U E TM into two parts, Uj and U2, corresponding to low and high wave numbers 1
L
V2;
I w l SMj U - Uj.
u(w)eiwx,
o
<
Mj
<
M, (6.9. 1 )
We want to construct a smoothing operator H = H(l, Mj , d) such that affected, and such that U2 is affected only if u(w) is large. We take u =
where
u(w)
=
{
fi(,,),
1
Hu
V2;
if
d ll ud l . Twr s gn(u(w)), l
�
L
I w i sM
I w l ::; Mj
u(w)eiwX ,
or
otherwise.
l u(w) 1 ::;
Uj
i s not
(6 9.2a) .
d ll uj l l I w ll
,
(6 9.2b) .
Here I is a natural number and d is a constant. Obviously, we have HUj = Uj and II Hull ::; I lul I · The following lemma shows that the smoothing operator does not affect a large class of functions.
Let U E TM and define the smoothing operator H(l, MI , d) by Eq. (6.9.2). Then H(l, Mj , d)u = U if Lemma 6.9.1.
where
I ddluUl l1 ::; 'YIl u lI ,
(6.9.3)
(6.9.4) Proof.
have
The function is split into two parts
Uj
and
U2 as in Eq. (6.9 . 1 ), and we
HYPERBOLIC EQUATIONS AND NUMERICAL METHODS
264
The condition
(6.9.3) implies
and from Eq.
(6.9.4) we obtain
Thus, if w '" 0,
which yields
Therefore by Eq. (6.9.2b), the coefficients tion, and the lemma is proved.
u(w) are unaffected by H by defini
Now consider the differential equation
Ut
=
A(x, t)ux,
(6.9.5)
where A(x, t) is a Hermitian matrix. Integration by parts gives us
d
dt
lI u ll 2 = - Re(u, Axu),
(6.9.6)
and therefore we obtain an energy estimate. Denoting by w the gridfunction written as a vector with N + 1 components, we can write the semidiscrete modified Fourier method for Eq. (6.9.5) as
wt
=
AHSw.
(6.9.7)
THE FOURIER METHOD
265
Here S is the Fourier operator for differentiation defined in Section 1 .4. A is the Nm x Nm matrix with A(xj , t) as blocks on the diagonal. The implementa tion of the operation H S can be done using only two FFTs. The S operator is factorized as S = T- I D T, where T represents the FFT and where D represents the multiplication by iw . If H denotes the operation v(w) = Hu(w), where v(w) is defined by Eq. (6.9.2), then Eq. (6.9.7) becomes WI
=
AT - I HDTw.
(6.9.8)
h
The approximation (6.9.8) can be considered as an evolutionary equation in the space TM. Let w = Tw and AN = TA denote the Fourier interpolant of w and A respectively, and define AN' by
Then the method (6.9.8) can be written in the form (6.9.9) Let AN,w be the Fourier coefficient of AN, and assume that �
C({3) 1 + I w l J3 '
{3
> 1,
(6.9. 10)
where the constants C and {3 are independent of N. w in Eq. (6.9.9) is divided into two parts, W I and W2 , as defined by Eq. (6.9.1). Without proof, we state the following theorem.
Solutions of the approximation (6.9.9) [or equivalently (6.9.8)J satisfy the estimate
Theorem 6.9.1.
:t II w l1 2
�
- ( w, d:: w) Re
*
+
(C I ({3)M2 - J3
+
C2 (d)Mt - I ) ll w I1 2 • (6.9. 1 1 )
Here {3 is defined in Eq. (6.9.10) and I and d are defined in the definition (6.9.2) ofthe smoothing operator H. 1f I > 2, (3 > 2, then the estimate (6.9.11) converges to the estimate (6.9.6) for the differential equation as M �
00
.
HYPERBOLIC EQUATIONS AND NUMERICAL METHODS
266
REMARK. The restriction of w to the grid gives us w with norm i i w iih . Thus , Eq. (6.9. 1 1 ) represents also a stability estimate for the solution of Eq. (6.9.8). We wiII derive an error estimate for the method (6.9.8). As with difference methods, the error estimate is easily obtained for smooth solutions once sta bility is established. This is because the truncation error can be calculated in a straightforward manner, and then Duhamel ' s principle can be applied. We have the following theorem. Theorem 6.9.2. Assume that the coefficients and data of Eq. (6.9.5) are C" smooth. If M I is sufficiently large and I � 2 then, on every finite time interval [0, T], there are constants such that for any p
Cp
sup
O$ t $ T
li u Ct) - w (t) i ih
�
CphP,
h
�
ho.
(6.9 . 1 2)
(This is often called spectral accuracy.) Proof For each
t, the true solution u(x, t) is divided into two parts (6.9. 1 3)
where (6.9.14a) (6.9.14b) We consider now the solution in a fixed time interval 0 � t � T. Since u(x, t) is C'" -smooth and we can derive estimates of u and its derivatives in terms of the coefficients and data it follows that
provided M, is sufficiently large. Also there are constants sup
O$t$ T
II URt - AHuRx ii
�
KpN-P,
We shall now determine the truncation error
for p
=
Kp
such that
1 , 2, . . . .
(6.9. 15)
EXERCISES
267
Ut - Aux = UNt - AUNx + URt - AURx I{) : = URt - AURx ' o =
=
UNt - AHuNx + I{) ,
Therefore we have on the grid
By Eq. (6.9.15) the error estimate follows from the stability estimate. Finally, we want to point out that the smoothing process is not needed for stability reasons if we write the differential equation in the form
(6.9. 1 6) and approximate it by
Sl
:
=
! (AS +
SA).
(6.9. 17)
Since S is anti-Hermitian and A is Hermitian, the operator S l is also anti-Her mitian, and we obtain an energy estimate.
EXERCISES 6.9.1. Write a program for the Fourier method (6.9.7) for the scalar equation Ut a(x, t)ux • Carry out numerical experiments, and compare the accuracy by varying the parameters I , M l and d in the smoothing operator H. 6.9.2. Use the form (6.9. 16) with A = a(x, t), and write a program for the Fourier method of type (6.9.17). Compare the performance with the method in Exercise 6.9. 1 . =
BIBLIOGRAPHIC NOTES Proofs for Theorems 6.2.3, 6.4. 1, and 6.4.2 can be found in Nirenberg (1972). For approximations of the Lax-Wendroff type, separate stability investigations have been performed that do not use the theorems given in Section 6.5. Lax and Wendroff (1964) proved the weaker stability limit
1 1 . ( --(A ) - 2v'2
"' <
mIll
p
1
--
, p (B)
)
(6.N.1)
HYPERBOLIC EQUATIONS AND NUMERICAL METHODS
268
for the approximation
if + ]
[I
(6.5.23). The modified Lax-Wendroff method
+ k(ADox + BDoy) +
+ B2D+yD_y) +
: (A2D+xD_x + (AB + BA)DoxDoy
k
�4 (A2 + B2 )D+XD_xD+yD_y ] un ,
(6.N.2)
was given by Lax and Wendroff (1962). The addition of the last term may well pay off, because the stability condition is relaxed. For Hermitian matrices A and B the stability condition is
(6.N.3) This scheme, as well as (6.5.23) , is dissipative if the matrices A and B are nonsingular, but the latter condition is not required for stability of (6.N . 2). Another method of the Lax-Wendroff type is the MacCormack (1969) method. Several versions of this method have been developed, but the most popular one is the time-split method, (MacCormack and Paully, 1972). For the conservation law U t = Fx + Gy , we define the one-dimensional operators
u* u ** Q] (k)un u+ u++ Q2 (k)un
= =
=
un + u* + 1 (un un + u+ + 1 (un
kD_x Fn , kD+xF* , + u** ), kD_yGn , kD+y G+ , + u++ ).
The Strang type splitting for two-dimensional problems yields
(6.NA) Stability conditions have been given for the Euler equations. These conditions reduce to the one-dimensional ones:
k( I UI + a) � h ] ,
(6.N.5)
where h] and h2 are the stepsizes in the x and y direction, respectively. One-step schemes have many advantages for obvious reasons, but it is more complicated to achieve higher order accuracy. Jeltsch and Smit (1985) obtained
BIBLIOGRAPHIC NOTES
269
general conditions on the number of points required for general one-step meth ods. In particular, they proved that the order of accuracy q for any stable one step "upwind" scheme
v'J + I
r
L cxIE1v'J 1=0
is limited by q � min (r, 2).
(6.N.6)
This shows that any linear upwind method with order of accuracy three or higher is unstable. Theorems 6.5 . 1 , 6.5.2, 6.6. 1 , and 6.6.2 were proved by Kreiss ( 1964) for t independent coefficients. The proof of Theorem 6.5. 1 presented here was given by Parlett (1966). The proofs can also be found in Richtmyer and Morton ( 1967). In Section 6.7, several methods for time discretization were discussed. These methods and several others are found in the literature on ordinary differential equations, for example, in Hairer, N!1lrsett, and Wanner (1980) or Gear ( 197 1). A thorough discussion of Runge-Kutta methods, including a software pack age for selecting optimal parameters for application to PDEs, was given by Otto (1988). The Hyman method was presented by Hyman (1979). The proof of The orem 6.9 . 1 was given by Kreiss and Oliger ( 1 979). Further stability results for the Fourier method are given in Tadmor ( 1986).
7 PARABOLIC EQUATIONS AND NUMERICAL METHODS
7.1 GENERAL PARABOLIC SYSTEMS We have already treated a few parabolic model examples in Chapter 2. In Chap ter 4, we defined strongly parabolic systems of second order and showed that they give rise to semibounded operators that lead to well-posed problems. In this section, we give a brief summary of the parabolicity definitions and main results for general systems. First, consider one-dimensional second order systems (7. 1 . 1) The general definition of parabolicity is now given.
Definition 7.1.1. The system (7.1.1) is called parabolic if, for every fixed x₀ and t₀, the eigenvalues λⱼ of A(x₀, t₀) satisfy

Re λⱼ ≥ δ > 0. (7.1.2)

We then have the following theorem.

Theorem 7.1.1. If the system (7.1.1) is parabolic, then the initial value problem is well posed.
Proof. By Lemma A.1.10 there is a matrix T = T(x₀, t₀) such that

T⁻¹AT + (T⁻¹AT)* ≥ δI, δ > 0. (7.1.3)

Introduce a new variable by ũ = T⁻¹u. Then the coefficient matrix of the principal part of the resulting system is T⁻¹AT, which satisfies Eq. (7.1.3). The theorem then follows as shown in Section 4.6.

We now consider higher order parabolic equations. We begin with a scalar equation
u_t = −(a u_xx)_xx + (b u_x)_xx + c u_xx + d u_x + e u + F. (7.1.4)

Equation (7.1.4) is called parabolic if a = a(x, t) satisfies

Re a ≥ δ > 0. (7.1.5)
The initial-value problem is well-posed. To show this we need the following inequality.
Lemma 7.1.1. For any constant τ > 0,

‖u_x‖² ≤ τ‖u_xx‖² + (1/(4τ))‖u‖². (7.1.6)

Proof. Integration by parts gives us

‖u_x‖² = (u_x, u_x) = −(u, u_xx) ≤ ‖u‖ ‖u_xx‖ ≤ τ‖u_xx‖² + (1/(4τ))‖u‖².

Let u be a smooth solution of Eq. (7.1.4). Then, we obtain
(d/dt)‖u‖² = 2 Re ( −(u, (a u_xx)_xx) + (u, (b u_x)_xx) + (u, c u_xx) + (u, d u_x) + (u, e u) + (u, F) ).
Let τ > 0 and μ > 0 be constants, let b₀ = max_{x,t} |b(x, t)|, and define c₀, d₀, and e₀ correspondingly. Integration by parts gives us
Re (−(u_xx, a u_xx)) ≤ −δ‖u_xx‖²,

|(u_xx, b u_x)| ≤ (1/(4μ))‖u_x‖² + b₀²μ‖u_xx‖²
  ≤ (τ/(4μ) + b₀²μ)‖u_xx‖² + (1/(16μτ))‖u‖²,

|(u, c u_xx)| ≤ c₀ ( μ‖u_xx‖² + (1/(4μ))‖u‖² ),

|(u, d u_x)| ≤ τd₀²‖u_x‖² + (1/(4τ))‖u‖² ≤ τd₀²‖u_xx‖² + (τd₀²/4 + 1/(4τ))‖u‖²,

|(u, e u)| ≤ e₀‖u‖²,

|(u, F)| ≤ (1/2)‖u‖² + (1/2)‖F‖².

Thus, we obtain

(d/dt)‖u‖² ≤ α‖u_xx‖² + β(‖u‖² + ‖F‖²),

where α = 2(−δ + τ/(4μ) + b₀²μ + c₀μ + τd₀²) and β is a constant which does not depend on u. With τ = 2μδ we obtain

α = 2( −δ/2 + μ(b₀² + c₀ + 2δd₀²) )

and, by choosing μ small enough, α becomes negative. Then Lemma 4.3.1 gives us the estimate.

We have here shown that the desired estimate holds. To prove existence, we can construct a difference approximation satisfying the corresponding estimates. The limit process as h → 0 then gives us existence.

Systems of the form

u_t = −(A u_xx)_xx + P₃(x, t, ∂/∂x)u + F,

where P₃ is a general third-order differential operator, are called parabolic if the eigenvalues of A satisfy Eq. (7.1.2). The proof that the initial value problem is well posed follows as before.

Now we consider systems in more than one space dimension. The simplest parabolic problems are those where no mixed derivative terms appear. In Section 4.6, it was shown that the initial value problem is well posed for systems
u_t = A(x, y, t)u_xx + B(x, y, t)u_yy + P₁u + F, (7.1.7)

provided that

A + A* ≥ δI,  B + B* ≥ δI.

Here P₁ is a general first-order operator. If mixed derivative terms appear, the estimates can become more technically difficult. We start with an equation with real constant coefficients

u_t = a u_xx + b u_xy + c u_yy. (7.1.8)

It is called parabolic if there is a constant δ > 0 such that, for all real ω₁ and ω₂,

a ω₁² + b ω₁ω₂ + c ω₂² ≥ δ(ω₁² + ω₂²). (7.1.9)
In this case, the amplitudes of the large wave numbers are damped. Now consider an equation with variable coefficients Ut =
where
PI
a(x, y, t)uxx + b(x, y, t)uxy + c(x, y, t)uyy + P I U + F,
(7. 1 . 10)
is a general first order operator. We have the following definition.
7.1.2. Equation (7.1.10) is called parabolic if Eq. (7.1.9) holds for every xo, Yo, and to ·
Definition
For systems

u_t = A(x, y, t)u_xx + B(x, y, t)u_xy + C(x, y, t)u_yy + P₁u + F (7.1.11)

we have this definition.

Definition 7.1.3. Equation (7.1.11) is called parabolic if, for all real ω₁ and ω₂ and all x₀, y₀, and t₀, there is a constant δ > 0 such that the eigenvalues λⱼ of

A(x₀, y₀, t₀)ω₁² + B(x₀, y₀, t₀)ω₁ω₂ + C(x₀, y₀, t₀)ω₂²

satisfy the inequality

Re λⱼ ≥ δ(ω₁² + ω₂²). (7.1.12)

Finally, we define a general parabolic system of order 2r in d space dimensions.
Definition 7.1.4. Consider a system of order 2r given by

∂u/∂t = Σ_{j=0}^{2r} P_j(x, t, ∂/∂x)u + F, (7.1.13)

where the P_j(x, t, ∂/∂x) are general homogeneous differential operators of order j with smooth 2π-periodic matrix coefficients. The system (7.1.13) is called parabolic if, for all real ω₁, …, ω_d and all x₀, t₀, there is a constant δ > 0 such that the eigenvalues λⱼ of the principal symbol P_{2r}(x₀, t₀, iω), summed over all derivative combinations with ν₁ + ⋯ + ν_d = 2r, satisfy the inequality

Re λⱼ ≤ −δ(ω₁^{2r} + ⋯ + ω_d^{2r}). (7.1.14)
One can prove the following theorem.

Theorem 7.1.2. If the system (7.1.13) is parabolic, then the initial value problem is well posed.
EXERCISES

7.1.1. The "viscous part" of the Navier-Stokes equations for the fluid velocity field (u, v, w) is

∂u/∂t = μΔu + μ′ (∂/∂x)(u_x + v_y + w_z),
∂v/∂t = μΔv + μ′ (∂/∂y)(u_x + v_y + w_z),
∂w/∂t = μΔw + μ′ (∂/∂z)(u_x + v_y + w_z),

where

Δ = ∂²/∂x² + ∂²/∂y² + ∂²/∂z².

Prove that the system is parabolic if the viscosity coefficients fulfill μ > 0, μ′ ≥ 0.
7.2. STABILITY FOR DIFFERENCE AND FOURIER METHODS

As indicated in the previous section, parabolic problems are easier to solve numerically than hyperbolic problems and other types of problems as well. The damping property built into the differential equation carries over to the approximation in a natural way. The stability analysis is also simplified, since the condition of dissipativity, which plays an important role for hyperbolic problems, is almost automatically satisfied if the method is consistent.

Most parabolic equations contain derivatives of different orders in space. The first question to resolve is whether or not it is sufficient to study the principal part; that is, can lower order terms be ignored? The general stability theorem to be given later shows that this is the case, but it does not follow from the general perturbation theorem (Theorem 5.1.2). To illustrate this, we look at the simple equation

u_t = u_xx + u_x

and the approximation

v_j^{n+1} = v_j^n + kD₊D₋v_j^n + kD₀v_j^n. (7.2.1)

This scheme was studied in Section 2.5, and it was shown that the condition

λ = k/h² ≤ 1/2 (7.2.2)

is necessary for stability. If λ is a constant, the last term of Eq. (7.2.1) is

kD₀v^n = √(λk) · hD₀v^n,

which is not of the order k, as required by Theorem 5.1.2. However, the symbol of the difference operator on the right-hand side of Eq. (7.2.1) is

Q̂ = 1 − 4λ sin²(ξ/2) + √(λk) i sin ξ,

with

|Q̂| = ( (1 − 4λ sin²(ξ/2))² + λk sin²ξ )^{1/2} ≤ 1 + O(k),

if λ ≤ 1/2. The perturbation itself is of the order √k, but its effect on the magnitude of the symbol of the operator is of the order k.

For general parabolic equations and general approximations, this is not true. For example, consider the differential equation

u_t = −u_xxxx + αu_xx, (7.2.3)

where α is a constant, and the (admittedly somewhat strange) approximation

v_j^{n+1} = v_j^n − kD₀⁴v_j^n + αkD₊D₋v_j^n. (7.2.4)

The symbol is

Q̂ = 1 − λ sin⁴ξ − 4α√(λk) sin²(ξ/2), λ = k/h⁴. (7.2.5)

The stability condition without the second-order term is λ ≤ 2. The critical points for the principal part are ξ = 0 and {ξ = π/2, λ = 2}, that is, at those points where Q̂ touches the unit circle. For all other values of ξ and λ, the perturbed Q̂ is inside the unit circle if k is small enough. For λ = 2, ξ = π/2 we have

Q̂ = −(1 + 2α√(2k)),

which yields

|Q̂|^{t_n/k} ≥ (1 + 2α√(2k))^{t_n/k} ~ e^{2√2 α t_n/√k} → ∞ as k → 0. (7.2.6)

Thus, the method is unstable.

This example exhibits typical behavior. The approximation behaves well for low wave numbers, that is, for small values of |ξ|, because the properties are then essentially determined by the differential equation. For high wave numbers,
however, the approximation has its own character, and perturbation results for the differential equation cannot be carried over.

The remedy for this example is simple. If the limit point λ = 2 is eliminated, then the symbol for the principal part only touches the unit circle at ξ = 0, and the perturbation is not harmful. In this case, the approximation is dissipative according to Definition 5.2.1. The importance of this concept has been demonstrated for hyperbolic problems. It is also essential for parabolic problems, where it is closely related to the differential equation. This close connection makes it possible to give simple conditions so that the condition (5.2.31) is fulfilled.

For a parabolic problem of order 2r, it is natural to set λ = k/h^{2r}, where λ is a constant. Then the approximation can be written so that the coefficients only depend on h. The general approximation (5.1.2) can be rewritten in the form

Σ_{σ=−1}^{q} (a_σ I + Q_σ) v^{n−σ} = kF^n, (7.2.7a)
Q_σ = Q_σ(x, t, h),  σ = −1, 0, …, q, (7.2.7b)

where the a_σ are constants and a_{−1} = 1. For convenience, it is assumed that the step size is h in all space directions. For F^n ≡ 0, x = x⁎, and t = t⁎ fixed, the Fourier transform of Eq. (7.2.7a) is

Σ_{σ=−1}^{q} (a_σ I + Q̂_σ(x⁎, t⁎, h, ξ)) v̂^{n−σ} = 0, (7.2.8)

where the matrices Q̂_σ are polynomials in h. The principal part is given by

Σ_{σ=−1}^{q} (a_σ I + Q̂_σ(x⁎, t⁎, 0, ξ)) v̂^{n−σ} = 0, (7.2.9)

where Q̂_σ(x⁎, t⁎, 0, 0) = 0, σ = −1, 0, …, q. The symbol for the corresponding one-step form is

Q̂ = [ −(I + Q̂₋₁)⁻¹(a₀I + Q̂₀)   −(I + Q̂₋₁)⁻¹(a₁I + Q̂₁)   ⋯   −(I + Q̂₋₁)⁻¹(a_qI + Q̂_q) ]
    [            I                          0              ⋯              0              ]
    [            ⋮                          ⋱                             ⋮              ]
    [            0                          ⋯              I              0              ]   (7.2.10)

where the arguments x⁎, t⁎ have been left out everywhere and where (0, ξ) has been left out in the right-hand side. The eigenvalues of Q̂(0, 0) are the eigenvalues of the matrix

A = [ −a₀  −a₁  ⋯  −a_q ]
    [  1    0   ⋯   0   ]
    [  ⋮         ⋱   ⋮   ]
    [  0    ⋯   1    0  ]   (7.2.11)

The conditions for stability are given in terms of the eigenvalues of this matrix. Without proof we give sufficient conditions for dissipativity.

Theorem 7.2.1. Assume that Eq. (7.2.7) is a consistent approximation of the parabolic equation (7.1.13). Then, it is dissipative of order 2r if all the eigenvalues of Q̂(0, ξ) are inside the unit circle for ξ ≠ 0, |ξ_ν| ≤ π, ν = 1, 2, …, d, and all the eigenvalues of A except one are inside the unit circle.
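As a concrete illustration of the dissipativity condition in Theorem 7.2.1, take the one-step Euler approximation of u_t = u_xx, whose principal-part symbol is Q̂(ξ) = 1 − 4λ sin²(ξ/2). The sketch below (Python; the sample values of λ are arbitrary test choices) checks that every symbol value with ξ ≠ 0 lies strictly inside the unit circle when λ < 1/2, and that this fails when λ > 1/2:

```python
import numpy as np

def euler_symbol(lam, xi):
    """Symbol of v^{n+1} = v^n + k*D+D- v^n for u_t = u_xx, lam = k/h^2."""
    return 1.0 - 4.0 * lam * np.sin(xi / 2.0) ** 2

xi = np.linspace(1e-3, np.pi, 1000)     # xi != 0, |xi| <= pi
assert np.all(np.abs(euler_symbol(0.4, xi)) < 1.0)   # dissipative for lam < 1/2
assert np.any(np.abs(euler_symbol(0.6, xi)) > 1.0)   # fails for lam > 1/2
```

The worst frequency is ξ = π, where the symbol equals 1 − 4λ; this reproduces the familiar bound λ ≤ 1/2.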
REMARK. If the initial function is a constant, the solution of the equation with only the principal part is constant. The corresponding value of ξ is zero; hence, there must be one eigenvalue of A which is one.

The general stability estimate can be given in the maximum norm defined by

‖v^n‖_∞ = max_j |v_j^n|. (7.2.12)

The divided differences can also be estimated. With τ = (τ₁, …, τ_d), we write

D^τ = D_{+x(1)}^{τ₁} ⋯ D_{+x(d)}^{τ_d},  |τ| = Σ_{ν=1}^{d} τ_ν. (7.2.13)
Theorem 7.2.2. Assume that Eq. (7.2.7) is a consistent and dissipative approximation of the parabolic equation (7.1.13), and that the coefficient matrices of Eq. (7.2.7a) are Lipschitz continuous. Also, assume that k = λh^{2r}, that the eigenvalues of A are inside or on the unit circle, and that the ones on the unit circle are simple. Then the approximation is stable, and there is an estimate

sup_n ‖ t_n^{|τ|/(2r)} D^τ v^n ‖_∞ ≤ K ( ‖f‖_∞ + sup_n ‖F^n‖_∞ ),  |τ| = 0, 1, …, 2r − 1. (7.2.14)
REMARK. Because the conditions on the eigenvalues of A are less restrictive than those of Theorem 7.2.1, the dissipativity condition is needed in this formulation.

This stability result is very general. The only essential limitation is that λ = k/h^{2r} is a constant. This is natural for explicit schemes, where the von Neumann condition usually has the form k/h^{2r} ≤ constant. For implicit methods, however, it is usually not a natural condition. As an example, consider the Crank-Nicholson approximation of u_t = −u_xxxx,

v^{n+1} = v^n − (k/2)(D₊D₋)²(v^{n+1} + v^n). (7.2.15)

The amplification factor is

(1 − 8λ sin⁴(ξ/2)) / (1 + 8λ sin⁴(ξ/2)),

which fulfills all the conditions of Theorem 7.2.2 for any value of λ. The scheme has order of accuracy (2, 2). If the time and space derivatives are of the same order, it is natural to choose k and h to be of the same order. For example, with k = h = 0.01, we get λ = k/h⁴ = 10⁶. Some of the constants involved in the stability proof depend on λ, and K in the final estimate may be very large if λ is very large. In general, since it is desirable to choose k proportional to h^p, p < 2r, implicit methods must be used, and it is usually possible to apply the energy method. In many applications, the differential operator in space is semibounded, and the difference operator can be constructed such that it is also semibounded. Then Theorems 5.3.2 and 5.3.3 can be applied, and stability is obtained with an arbitrary relation between k and h.

As an application of the stability theory, we study a generalized form of the DuFort-Frankel method. Equation (2.5.13) shows that one root is on the unit circle for ξ = π and, accordingly, the method is not dissipative. For u_t = au_xx, a modified scheme is

v_j^{n+1} = v_j^{n−1} + 2kaD₊D₋v_j^n − 2λγa(v_j^{n+1} − 2v_j^n + v_j^{n−1}),  λ = k/h². (7.2.16)
The original method is obtained with γ = 1. The characteristic equation for the Fourier-transformed scheme is

(1 + α)z² − 2(α − β)z − (1 − α) = 0, (7.2.17)

where

α = 2λγa,  β = 4λa sin²(ξ/2).

Now we assume

γ > 1, (7.2.18)

which yields the inequalities

2α > β ≥ 0. (7.2.19)

The properties of the roots of Eq. (7.2.17) are given by the following lemma.
Lemma 7.2.1. Assume that the conditions (7.2.19) hold. Then the roots z₁, z₂ of Eq. (7.2.17) are inside the unit circle, except for β = 0 when z₁ = 1.

Proof. The roots of Eq. (7.2.17) are

z_{1,2} = (1/(1 + α)) ( α − β ± √(1 − β(2α − β)) ). (7.2.20)

If 1 − β(2α − β) ≥ 0 then, by Eq. (7.2.19),

√(1 − β(2α − β)) ≤ 1.

Therefore, because |α − β| ≤ α from Eq. (7.2.19), we get |z_{1,2}| ≤ 1. Equality only holds if β = 0, and in that case

z₁ = 1,  |z₂| = |α − 1|/(α + 1) < 1; (7.2.21a)

for β > 0,

|z_{1,2}| < 1. (7.2.21b)

If 1 − β(2α − β) < 0, then the roots are complex, and

|z_{1,2}| = √( (α − 1)/(α + 1) ) < 1. (7.2.22)

This proves the lemma.
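Lemma 7.2.1 can be checked numerically. The sketch below (Python; the parameter values are arbitrary test choices) forms the quadratic (7.2.17) for many values of β in (0, 2α) and confirms that both roots lie strictly inside the unit circle, with a root on the circle only at β = 0:

```python
import numpy as np

def char_roots(alpha, beta):
    """Roots of (1+alpha) z^2 - 2(alpha-beta) z - (1-alpha) = 0."""
    return np.roots([1.0 + alpha, -2.0 * (alpha - beta), -(1.0 - alpha)])

lam, gamma, a = 10.0, 1.25, 1.0          # gamma > 1, cf. (7.2.18)
alpha = 2.0 * lam * gamma * a

# beta = 4*lam*a*sin^2(xi/2) ranges over (0, 4*lam*a], which lies in (0, 2*alpha).
for beta in np.linspace(1e-6, 4.0 * lam * a, 200):
    assert np.all(np.abs(char_roots(alpha, beta)) < 1.0)

# beta = 0 (xi = 0): one root exactly on the unit circle.
r = np.abs(char_roots(alpha, 0.0))
assert np.isclose(r.max(), 1.0)
```

For β near 0 the root z₁ behaves like 1 − β, so the moduli approach the unit circle smoothly, exactly as the proof indicates.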
Because the scheme is consistent for any value of the constant λ, it follows from this lemma and Theorem 7.2.1 that it is also dissipative under the condition (7.2.18). Because only one root is on the unit circle for ξ = 0, all the conditions of Theorem 7.2.2 are fulfilled for any value of λ.

Let us next consider the system u_t = Au_xx, where A has positive eigenvalues. A straightforward generalization of Eq. (7.2.16) is obtained by replacing a by A at both places where it occurs. However, because the last term is only present for stability reasons, a more convenient scheme can be obtained by substituting ρ(A)I for A, giving

v_j^{n+1} = v_j^{n−1} + 2kAD₊D₋v_j^n − 2λγρ(A)(v_j^{n+1} − 2v_j^n + v_j^{n−1}). (7.2.23)

Denote an eigenvalue of A by a. Then the eigenvalues z of the amplification matrix are given by Eq. (7.2.17), where

α = 2λγρ(A) > 0,  β = 4λa sin²(ξ/2) > 0.

The calculation above for the scalar case also goes through in this case, and Eq. (7.2.18) is still the condition for stability.

The DuFort-Frankel method can also be used for time discretization with the Fourier method. For systems u_t = Au_xx, we get

v^{n+1} = v^{n−1} + 2kAS²v^n − 2λγρ(A)(v^{n+1} − 2v^n + v^{n−1}), (7.2.24)

with its Fourier transform

v̂^{n+1} = v̂^{n−1} − 2kω²Av̂^n − 2λγρ(A)(v̂^{n+1} − 2v̂^n + v̂^{n−1}). (7.2.25)

Again, the characteristic equation is of the form (7.2.17) with

α = 2λγρ(A),  β = λa(hω)².

The condition 2α > β used in Lemma 7.2.1 implies 4γρ(A) > a(hω)², |ωh| ≤ π. The approximation will be unconditionally stable if we choose

γ > π²/4. (7.2.26)
EXERCISES

7.2.1. Prove that the eigenvalues of the amplification matrix for the scheme (7.2.23) are given by Eq. (7.2.17).

7.2.2. If a time-marching procedure is used for computing a steady-state solution, it is desirable that the error be independent of k, particularly if large time steps are used. Does the DuFort-Frankel method have this property?
7.3. DIFFERENCE APPROXIMATIONS IN SEVERAL SPACE DIMENSIONS

We first consider equations of second order, because they are the only parabolic equations for which explicit methods can possibly be used in realistic applications. Even for these problems, there is a severe restriction on the time step but, because explicit methods are so much easier to implement, in particular on vector and parallel computers, they are sometimes used. First, consider a parabolic system written in the form

u_t = Σ_{ν=1}^{d} ( A_νν(x, t) ∂²u/∂x(ν)² + Σ_{μ=ν+1}^{d} A_νμ(x, t) ∂²u/∂x(ν)∂x(μ) + B_ν(x, t) ∂u/∂x(ν) ) + C(x, t)u. (7.3.1)

The Euler method is

v^{n+1} = (I + kQ₂(t_n))v^n, (7.3.2)

where

kQ₂(t_n) = k Σ_{ν=1}^{d} [ A_νν(t_n)D₊x(ν)D₋x(ν) + Σ_{μ=ν+1}^{d} A_νμ(t_n)D₀x(ν)D₀x(μ) + B_ν(t_n)D₀x(ν) ] + kC(t_n). (7.3.3)

The amplification factor for the principal part is Q̂ = I − Â, where

Â = Σ_{ν=1}^{d} ( 4λ_ν A_νν sin²(ξ_ν/2) + Σ_{μ=ν+1}^{d} √(λ_νλ_μ) A_νμ sin ξ_ν sin ξ_μ ),  λ_ν = k/h_ν². (7.3.4)

To derive a convenient stability condition, we assume that all the matrices A_νμ are Hermitian. We then have the following lemma.

Lemma 7.3.1. Assume that the matrices A_νμ are Hermitian and that the system (7.3.1) is parabolic. Then the matrix Â defined by Eq. (7.3.4) is positive semidefinite, and

ρ(Â) ≤ 4 Σ_{ν=1}^{d} λ_ν ρ(A_νν). (7.3.5)
Proof. The symbol (with reversed sign) of the principal part of Eq. (7.3.1) is

P̂ = Σ_{ν=1}^{d} ( A_νν ω_ν² + Σ_{μ=ν+1}^{d} A_νμ ω_ν ω_μ ).

By the parabolicity condition, we have

Re (v, P̂v) ≥ 0. (7.3.6)

The discrete symbol corresponding to P̂ is

Â = Σ_{ν=1}^{d} ( 4λ_ν A_νν sin²(ξ_ν/2) + Σ_{μ=ν+1}^{d} √(λ_νλ_μ) A_νμ sin ξ_ν sin ξ_μ ). (7.3.7)

The condition (7.3.6), applied with ω_ν = √λ_ν sin ξ_ν, implies

Re ( v, Σ_{ν=1}^{d} Σ_{μ=ν+1}^{d} √(λ_νλ_μ) A_νμ sin ξ_ν sin ξ_μ v ) ≥ −Re ( v, Σ_{ν=1}^{d} λ_ν A_νν sin²ξ_ν v )

for any vector v. The inequality

Re (v, Âv) ≥ 0 (7.3.8)

then follows from |sin ξ| ≤ 2|sin(ξ/2)|; that is, Â is positive semi-definite.
Furthermore,

max_{|v|=1} (v, Âv) ≤ max_{|v|=1} Σ_{ν=1}^{d} λ_ν (v, A_νν v) ( 4 sin²(ξ_ν/2) + sin²ξ_ν ) ≤ a₀ Σ_{ν=1}^{d} λ_ν ρ(A_νν),

where

a₀ = max_{|ξ|≤π} ( 4 sin²(ξ/2) + sin²ξ ) = 4

is obtained by elementary calculus. This proves Eq. (7.3.5).
All the eigenvalues of the principal part of Q̂ = I − Â are less than one in modulus if

ρ(Â) < 2.

Thus, a sufficient stability condition is obtained from Lemma 7.3.1 if

Σ_{ν=1}^{d} λ_ν ρ(A_νν) ≤ 1/2. (7.3.9)
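For the scalar heat equation u_t = u_xx + u_yy (so d = 2, ρ(A_νν) = 1, and no mixed terms), condition (7.3.9) reads λ₁ + λ₂ ≤ 1/2. A quick sketch (Python; the grid of frequencies is an arbitrary sampling choice) scans the amplification factor over all frequencies and confirms the bound, then shows it failing when the condition is violated:

```python
import numpy as np

def amp(lam1, lam2, xi1, xi2):
    """Amplification factor 1 - 4*lam1*sin^2(xi1/2) - 4*lam2*sin^2(xi2/2)."""
    return 1.0 - 4.0 * lam1 * np.sin(xi1 / 2) ** 2 - 4.0 * lam2 * np.sin(xi2 / 2) ** 2

xi = np.linspace(-np.pi, np.pi, 101)
X1, X2 = np.meshgrid(xi, xi)

assert np.abs(amp(0.25, 0.25, X1, X2)).max() <= 1.0   # lam1 + lam2 = 1/2: stable
assert np.abs(amp(0.30, 0.30, X1, X2)).max() > 1.0    # lam1 + lam2 = 0.6: unstable
```

With equal step sizes in both directions this is the familiar 2D Euler restriction k/h² ≤ 1/4, twice as strict as in one dimension.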
We remind the reader that a more restrictive condition on k may be required for practical computations when lower order terms are present, as shown for the model problem (2.6.1). In particular, this is true when the matrices A_νμ are small compared to B_ν.

The second-order accurate operators used in the Euler method (7.3.2) can be replaced by fourth-order accurate operators, for example,

D₊x(ν)D₋x(ν) → D₊x(ν)D₋x(ν) ( I − (h²/12) D₊x(ν)D₋x(ν) ),
D₀x(ν) → D₀x(ν) ( I − (h²/6) D₊x(ν)D₋x(ν) ).

The corresponding stability condition is a little more restrictive but, for a given accuracy requirement, a coarser mesh in space can be used, and the scheme on a coarser mesh can be more efficient (cf. the discussion in Section 3.1).

The DuFort-Frankel method requires a small time step to be accurate. It is still sometimes used, since it is convenient to use an unconditionally stable explicit method. In particular, this is true when the time-dependent equations
are used to compute a steady-state solution. In this case, the behavior of the solution as a function of time need not be computed accurately, and the time step can be chosen to optimize the convergence rate as t_n → ∞.

Another advantage of the DuFort-Frankel method is that it can be conveniently combined with leap-frog differencing for the first-order terms. The resulting scheme is

v^{n+1} = v^{n−1} + 2k ( Σ_{ν=1}^{d} [ A_νν D₊x(ν)D₋x(ν) + Σ_{μ=ν+1}^{d} A_νμ D₀x(ν)D₀x(μ) + B_ν D₀x(ν) ] + C ) v^n
          − 2kγ Σ_{ν=1}^{d} ρ(A_νν) h_ν⁻² (v^{n+1} − 2v^n + v^{n−1}), (7.3.10)

where all the matrices are evaluated at t = t_n. It is unconditionally stable for γ > 1. The order of accuracy is 2 if the λ_ν = k/h_ν² are constants. The zero-order term is taken at the central time level in the version given in Eq. (7.3.10). As for the model equation (2.6.1), this may give rise to an increasing parasitic solution if k is not very small. The remedy for this is, again, to replace Cv^n by C(v^{n+1} + v^{n−1})/2, but there is a penalty: a system of equations with m unknowns must be solved at each gridpoint.

The severe restriction on the time step is not present when implicit methods are used. The θ scheme introduced in Section 2.3 for a hyperbolic model problem can be used for parabolic problems as well. With Q₂(t_n) defined by Eq. (7.3.3), the approximation is

(I − θkQ₂(t_{n+1})) v^{n+1} = (I + (1 − θ)kQ₂(t_n)) v^n. (7.3.11)

The Crank-Nicholson method is obtained for θ = 1/2; this is the only case that is second-order accurate in time. Unconditional stability holds for all θ with θ ≥ 1/2 (see Exercise 7.3.7). The θ scheme (7.3.11) can also be used for general parabolic problems. For problems of the type
∂u/∂t = (−1)^{r+1} Σ_{ν=1}^{d} A_ν ∂^{2r}u/∂x(ν)^{2r}, (7.3.12)

we simply define

kQ_{2r}(t_n) = (−1)^{r+1} k Σ_{ν=1}^{d} A_ν (D₊x(ν)D₋x(ν))^r. (7.3.13)
In several space dimensions, direct methods for solving the large algebraic systems at each step are expensive. However, iterative methods usually work well for parabolic problems, because the coefficient matrix is naturally positive definite and fulfills the conditions for fast convergence of most iterative methods.

It is usually more efficient to split the scheme into a sequence of one-dimensional steps. For two-dimensional problems and the Crank-Nicholson method (θ = 1/2), the splitting procedure resulting in the ADI scheme was described in Section 5.4. With Qx and Qy defined by
Qx(t) = A₁(x, t)(D₊xD₋x)^r,
Qy(t) = A₂(x, t)(D₊yD₋y)^r, (7.3.14)

the scheme can be written in the form

(I − (k/2)Qx(t_{n+1/2})) v^{n+1/2} = (I + (k/2)Qy(t_n)) v^n, (7.3.15a)
(I − (k/2)Qy(t_{n+1})) v^{n+1} = (I + (k/2)Qx(t_{n+1/2})) v^{n+1/2}, (7.3.15b)
and it is unconditionally stable if

Re (u, Qxu)_h ≤ 0,
Re (u, Qyu)_h ≤ 0 (7.3.16)
(see Exercise 5.4.2). With a different ordering of the operators, we obtain the fractional step method

(I − (k/2)Qx(t_{n+1/2})) v^{n+1/2} = (I + (k/2)Qx(t_n)) v^n, (7.3.17a)
(I − (k/2)Qy(t_{n+1})) v^{n+1} = (I + (k/2)Qy(t_{n+1/2})) v^{n+1/2}. (7.3.17b)

Stability follows directly from semiboundedness. Assuming real solutions and constant coefficients, we obtain from Eq. (7.3.16)
‖v^{n+1}‖²_h − ‖v^{n+1/2}‖²_h ≤ (k/2)(v^{n+1} + v^{n+1/2}, Qy(v^{n+1} + v^{n+1/2}))_h ≤ 0,
‖v^{n+1/2}‖²_h − ‖v^n‖²_h ≤ (k/2)(v^{n+1/2} + v^n, Qx(v^{n+1/2} + v^n))_h ≤ 0,

which gives us

‖v^{n+1}‖_h ≤ ‖v^n‖_h.

The method (7.3.17) is a splitting method, like those discussed in Section 5.4, because each half-time step is one-dimensional. To obtain second-order accuracy in time for general problems, Eq. (7.3.17) must be followed by two half-steps in reversed order. If Qx and Qy commute (which is a severe restriction), then Eq. (7.3.17) is, actually, equivalent to the ADI scheme and, consequently, second order accurate. Applying the operator (I − (k/2)Qx) to Eq. (7.3.17b) we get
(I − (k/2)Qx)(I − (k/2)Qy) v^{n+1} = (I − (k/2)Qx)(I + (k/2)Qy) v^{n+1/2}
  = (I + (k/2)Qy)(I − (k/2)Qx) v^{n+1/2}
  = (I + (k/2)Qy)(I + (k/2)Qx) v^n
  = (I + (k/2)Qx)(I + (k/2)Qy) v^n.
But this is the ADI scheme, which is seen by applying the operator (I − (k/2)Qx) to Eq. (7.3.15b) and using Eq. (7.3.15a).

Fractional step methods have an advantage over ADI methods. Because each half-step is strictly one-dimensional, only one line of data is needed to define each system to be solved, and this line can be directly overwritten by the new solution. This is convenient for large problems.

The fractional step method is based on one-dimensional Crank-Nicholson steps. Another method is obtained if the backward Euler method is used as a basis:

(I − kQx) v^{n+1/2} = v^n, (7.3.18a)
(I − kQy) v^{n+1} = v^{n+1/2}. (7.3.18b)
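A minimal sketch of the backward Euler fractional step (7.3.18) for the constant-coefficient heat equation u_t = u_xx + u_yy (r = 1) on the unit square with homogeneous Dirichlet data follows (Python; all sizes and step choices are arbitrary test values, not from the text). Each half-step solves a set of independent tridiagonal systems, one per grid line, here by the Thomas algorithm vectorized over lines:

```python
import numpy as np

def solve_tridiag_lines(V, r):
    """Solve (I - r*D+D-) w = v along axis 0 for every column of V
    (zero Dirichlet data): -r*w[i-1] + (1+2r)*w[i] - r*w[i+1] = v[i]."""
    n = V.shape[0]
    a = c = -r
    b = 1.0 + 2.0 * r
    W = V.copy()
    cp = np.empty(n)                     # Thomas algorithm forward sweep
    cp[0] = c / b
    W[0] = W[0] / b
    for i in range(1, n):
        denom = b - a * cp[i - 1]
        cp[i] = c / denom
        W[i] = (W[i] - a * W[i - 1]) / denom
    for i in range(n - 2, -1, -1):       # back substitution
        W[i] -= cp[i] * W[i + 1]
    return W

n = 49; h = 1.0 / (n + 1); k = 0.01; r = k / h**2   # note r >> 1/2: implicit
x = h * np.arange(1, n + 1)
v = np.outer(np.sin(np.pi * x), np.sin(np.pi * x))  # interior values only

for _ in range(10):
    v = solve_tridiag_lines(v, r)        # x sweep: (I - k*Qx) v^{n+1/2} = v^n
    v = solve_tridiag_lines(v.T, r).T    # y sweep: (I - k*Qy) v^{n+1}   = v^{n+1/2}

exact = np.exp(-2 * np.pi**2 * 0.1)      # decay of sin(pi x) sin(pi y) at t = 0.1
assert abs(v.max() / exact - 1.0) < 0.2  # first-order accurate, but stable and damping
```

Because each sweep touches only one line of data at a time, the storage pattern is exactly the one described in the text: a line is read, solved, and overwritten in place.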
If the operators Qx and Qy satisfy Eq. (7.3.16), there is an energy estimate for each half-step, and stability follows. This method is only first-order accurate in time and, because each half-step is only first-order accurate for the one-dimensional problem, the accuracy cannot be raised by the usual reversal procedure. However, the method (7.3.18) is sometimes preferred over second-order accurate ones when strong damping is required. The generalization of fractional step methods to more than two space dimensions is obvious.

Finally, we note that parabolic problems often are given in self-adjoint form, as discussed in Section 7.1. Consider, for example, second-order equations in the form
u_t = Σ_{ν=1}^{d} ∂/∂x(ν) ( A_ν(x, t) ∂u/∂x(ν) ), (7.3.19)

where the matrices A_ν are positive definite. In such cases, the space operator is negative semidefinite, because

( u, Σ_{ν=1}^{d} ∂/∂x(ν) ( A_ν ∂u/∂x(ν) ) ) = − Σ_{ν=1}^{d} ( ∂u/∂x(ν), A_ν ∂u/∂x(ν) ) ≤ 0. (7.3.20)

The same property carries over to the discrete case if the approximation is defined to be

dv_j/dt = Σ_{ν=1}^{d} D₊x(ν) ( A_ν(x̃_{j,ν}, t) D₋x(ν) ) v_j, (7.3.21)

where x̃_{j,ν} denotes the gridpoint at which A_ν is evaluated. For further comments on this method, see the Bibliographic Notes.
EXERCISES

7.3.1. Is it true that the matrix Â given by Eq. (7.3.4) always has eigenvalues with non-negative real part when the corresponding differential equation is parabolic?

7.3.2. Derive the estimate (7.3.5) by proving

max_{|ξ|≤π} ( 4 sin²(ξ/2) + sin²ξ ) = 4.
7.3.3. Derive the stability condition for the Euler method (7.3.2) when applied to the parabolic equation

u_t = (a + ib)u_xx,

where a and b are real numbers.

7.3.4. Find a parabolic differential equation for which the Euler method (7.3.2) is unstable for all values of λ = k/h² (equal step size in all space dimensions).

7.3.5. Define the DuFort-Frankel method (7.2.23) with D₊D₋ replaced by the fourth-order accurate operator. Derive the condition on γ for unconditional stability.

7.3.6. Assume that sufficient accuracy is obtained by using the Crank-Nicholson method on a grid with 100 points in each space direction and k = h for a parabolic problem of order 2r. When compared with the Euler method on the same grid in space, is there any breakpoint d₀ for the number of space dimensions and/or 2r₀ for the order of the equation, when one method becomes more efficient than the other?

7.3.7. Prove unconditional stability for the θ scheme (7.3.11) if θ ≥ 1/2.
BIBLIOGRAPHIC NOTES

The treatment of general parabolic systems in Section 7.1 is very brief. A more thorough discussion is found in Kreiss and Lorenz (1989), where Theorem 7.1.2 is proved for second-order equations.

The general theory for difference approximations in Section 7.2 was originated by Widlund (1966). The proof of Theorem 7.2.1 is easy, but the proof of Theorem 7.2.2 is not. The basic technique is similar to the one used for hyperbolic problems in Sections 6.5 and 6.6.

From a stability point of view, the method (7.3.21) seems to be the best way of approximating the spatial differential operator in Eq. (7.3.19), because no lower order terms perturb the decay rate. However, with regard to accuracy, the situation is not that simple. Dykson and Rice (1985) investigated this for time-independent problems in two space dimensions. Their conclusion is that, unless the solution varies much more rapidly than A₁ and A₂ as functions of x, the form

dv_j/dt = ⋯

is preferred.
8 PROBLEMS WITH DISCONTINUOUS SOLUTIONS
In this chapter, we consider the difficulties that arise when discontinuous solutions of hyperbolic equations are approximated. There is an extensive literature on this subject, including several recent books. We do not survey the many special methods that have been developed. Instead, we discuss the basic phenomena and the numerical techniques necessary to overcome the difficulties.
8.1. DIFFERENCE METHODS FOR LINEAR HYPERBOLIC EQUATIONS

So far we have concentrated on the approximation of smooth solutions. The notion of generalized solutions was introduced in Section 4.8, and these need not be smooth, or even continuous. We carried out some computations in Section 3.1 for our model hyperbolic equation using centered difference methods with the sawtooth function as initial data, and observed that the results were poor. In Figure 3.1.1, we observed that the approximation was obliterated by high-frequency oscillations. This is typical behavior of difference methods when used to approximate a discontinuous solution. In this situation, these difficulties arise solely from the discontinuous initial data. We again return to our model hyperbolic equation

u_t = au_x,  0 ≤ x ≤ 2π,  0 ≤ t, (8.1.1)

with the piecewise constant 2π-periodic initial data considered in Section 4.3,

u(x, 0) = f(x) = { 1 for 0 ≤ x < (2/3)π,
                   0 for (2/3)π < x < (4/3)π,
                   1 for (4/3)π < x ≤ 2π, (8.1.2)
which has a periodic solution

u(x, t) = u(x + 2π, t). (8.1.3)
As we know, the solution of this problem is given by u(x, t) = f(x + at) and is constant along the characteristics x + at = constant. In Figure 8.1.1, we display approximations of the solution of the problem (8.1.1) to (8.1.3) with a = −1, h = 2π/240, k = 2h/3 at t = 40k and t = 360k = 2π, obtained using the leap-frog, Lax-Wendroff, Lax-Friedrichs, and first-order upwind methods. Note that u(x, 2π) = u(x, 0). The first-order upwind method is defined by

v_j^{n+1} = v_j^n + (ak/h)(v_j^n − v_{j−1}^n),  if a < 0, and
v_j^{n+1} = v_j^n + (ak/h)(v_{j+1}^n − v_j^n),  if a ≥ 0. (8.1.4)
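The qualitative difference between these schemes is easy to reproduce. A small sketch (Python; the grid matches the text's h = 2π/240, k = 2h/3, a = −1, and the square-wave data is of the type (8.1.2)) advances the data 40 steps with the first-order upwind scheme (8.1.4) and with leap-frog, then checks that upwind creates no new maxima or minima while leap-frog undershoots:

```python
import numpy as np

n = 240
h = 2 * np.pi / n
k = 2 * h / 3
a = -1.0
lam = a * k / h                      # = -2/3

x = h * np.arange(n)
f = np.where((x > 2 * np.pi / 3) & (x < 4 * np.pi / 3), 0.0, 1.0)  # square wave

# First-order upwind for a < 0: v^{n+1}_j = v^n_j + (ak/h)(v^n_j - v^n_{j-1})
up = f.copy()
for _ in range(40):
    up = up + lam * (up - np.roll(up, 1))

# Leap-frog: v^{n+1}_j = v^{n-1}_j + (ak/h)(v^n_{j+1} - v^n_{j-1}); upwind first step
lf_old = f.copy()
lf = f + lam * (f - np.roll(f, 1))
for _ in range(39):
    lf_new = lf_old + lam * (np.roll(lf, -1) - np.roll(lf, 1))
    lf_old, lf = lf, lf_new

assert up.min() >= -1e-12 and up.max() <= 1.0 + 1e-12   # monotone: no over/undershoots
assert lf.min() < -0.01                                 # nondissipative: oscillations
```

For this λ the upwind update is a convex combination, (1/3)v_j + (2/3)v_{j−1}, which is exactly why no new extrema can appear; the price is the heavy smearing of the jumps.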
The other methods have all been defined and used extensively in earlier chapters. These computations are all unacceptable unless extremely fine grids are used. Our earlier linear convergence proofs do apply; as h → 0 the solutions do converge. The leap-frog scheme has severe oscillations near the discontinuities that spread at a rate proportional to t. This is typical of all nondissipative methods. The Lax-Wendroff method has overshoots and undershoots near the discontinuities, but they do not spread as rapidly. Dissipation is essential when approximating discontinuous solutions. The Lax-Friedrichs and first-order upwind methods have no oscillations or overshoots and undershoots. However, they smear the discontinuity over a large region. They are examples of so-called monotone methods, which introduce no new maxima or minima for sufficiently small λ = k/h when used for scalar equations. Unfortunately, they can be at most first-order accurate.

It has been shown that it is generally better to add a dissipative term to a nondissipative approximation than to use an inherently dissipative method such as the Lax-Wendroff method. Consider centered methods accurate of even order p and dissipative of order 2r. It has been shown that the spread and amplitude of the oscillations slowly decrease as p increases. If 2r < p, as is the case with the upstream method, the viscous effects dominate as t/h increases. If 2r > p, as is the case with the Lax-Wendroff method, the viscous effects decay as t/h increases (see the Bibliographic Notes). We next approximate Eqs. (8.1.1) and (8.1.2) using a fourth-order centered method with a fourth-order dissipative term
[Figure 8.1.1. Approximations of the discontinuous solution: (a) leap-frog, (b) Lax-Wendroff, (c) Lax-Friedrichs, (d) first-order upwind at t = 40k; (e)-(h) the same methods at t = 2π.]
dv_j/dt = aD₀ ( I − (h²/6) D₊D₋ ) v_j − εh³ (D₊D₋)² v_j. (8.1.5)

We use the fourth-order Runge-Kutta method in time with h = 2π/240 and k = 2h/3; ε = 0.1, 0.05, and 0.01. Dissipation introduced by the time discretization is of sixth order, so the fully discretized method is dissipative of order 4. The results are shown in Figure 8.1.2 at t = 40k and t = 2π. It is clear that ε = 0.01 is too small to control the oscillations. The results with ε = 0.05 and 0.1 are much better; they are also better than the results in Figure 8.1.1, but they still leave a lot to be desired. A fine global, or adaptive, grid is still needed to get very accurate results.
[Figure 8.1.2. The fourth-order method (8.1.5): (a)-(c) ε = 0.1, 0.05, 0.01 at t = 40k; (d)-(f) ε = 0.1, 0.05, 0.01 at t = 2π.]
Special techniques for smoothing the initial data to increase the convergence rate and to recover piecewise smooth traveling waves from oscillatory approximations have been developed. Some of these ideas are noted in the Bibliographic Notes.

Of course, we could compute the exact solution of this problem using, for example, the upwind scheme with ak/h = −1. However, this is only possible for a problem with constant coefficients. In this case, the method degenerates to the method of characteristics, which we consider next.
8.2. METHOD OF CHARACTERISTICS

First, we again consider the scalar initial value problem for Eq. (8.1.1) with periodic initial data u(x, 0) = f(x). If a is constant, then we can write down the solution directly:

u(x, t) = f(x + at);   (8.2.1)

that is,

u(x, t) = f(x_0)   for all x, t with x + at = x_0.

Thus, the solution does not change along the lines

x + at = x_0,   (8.2.2)

called characteristics (see Figure 8.2.1). If, for example, f(x) consists of a pulse, then the pulse moves with the speed -a without changing form (see Figure 8.2.2). If we solve this problem with difference methods, as we have seen in Section 8.1, we have a lot of difficulty with spurious oscillations.
Figure 8.2.1. Characteristics for a < 0.
Figure 8.2.2. A moving pulse, shown at t = 0, t = Δt, and t = 2Δt.
Now assume that a = a(x, t) is a smooth real function of x, t. In this case the characteristics are defined as solutions of the ordinary differential equation

dx(t)/dt = -a(x(t), t),   (8.2.3a)

with initial data

x(0) = x_0.   (8.2.3b)
If a is constant, then we recover Eq. (8.2.2). The solution of Eq. (8.1.1) does not change along the characteristic lines, because

d u(x(t), t)/dt = u_t + u_x dx/dt = a u_x - a u_x = 0.

The problem (8.2.3) is an initial-value problem for an ordinary differential equation. Its solution exists for all time, because a(x, t) is, by assumption, a bounded smooth function of x and t. Also, every point (x, t) lies on exactly one characteristic line, because we can solve Eq. (8.2.3a) backwards in time; that is, given a point (x, t) we can solve Eq. (8.2.3a) starting at (x, t) in the negative t direction. This process defines a characteristic as a unique relation between x and t for any given x_0 (see Figure 8.2.3), which we write in the form

ψ(x, t) = x_0.   (8.2.4)

From Eq. (8.2.3), the solution of Eq. (8.1.1) is given by

u(x, t) = f(x_0) = f(ψ(x, t)).   (8.2.5)
Figure 8.2.3. A characteristic line.
This is the method of characteristics.

EXAMPLE 8.2.1. If a = -x, then Eq. (8.2.3a) becomes

dx(t)/dt = x(t),

or

x(t) = e^t x_0.

Thus, the characteristics diverge (see Figure 8.2.4); if one solves the equation along a fixed set of characteristics, then the distance between the computed points increases as t increases. This is adequate, because the solution

u(x, t) = f(x e^{-t})

becomes smoother with time, since the derivative

u_x = f' e^{-t}

decays exponentially. Also, in every finite interval |x| ≤ a,

lim_{t→∞} u(x, t) = f(0).

Finite difference methods on a fixed grid will yield good results as t increases. In fact, one could decrease the number of gridpoints with time.

Figure 8.2.4. Diverging characteristics.
EXAMPLE 8.2.2. If a = x, then Eq. (8.2.3a) becomes

dx(t)/dt = -x(t),

or

x(t) = e^{-t} x_0.

Now the characteristics converge (see Figure 8.2.5) and the resulting solution

u(x, t) = f(x e^t)

becomes rougher with time, because the derivative

u_x = f' e^t

grows exponentially. Thus, finite difference methods will only provide accurate answers if one increases the number of points exponentially with time. None of these problems occur if one uses the method of characteristics; that is, if one calculates the characteristics by an ODE solver and then uses the fact that u is constant along the characteristic. Also, the distance between computed points will decrease exponentially.
Figure 8.2.5. Converging characteristics.
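The two examples above are easily reproduced numerically by tracing the characteristics with an ODE solver. The following sketch (our own illustration, not from the text; all names are ours) integrates dx/dt = -a(x) with the classical fourth-order Runge-Kutta method and uses the fact that u is constant along each characteristic:

```python
import math

def rk4_step(f, x, t, dt):
    # One classical fourth-order Runge-Kutta step for dx/dt = f(x, t).
    k1 = f(x, t)
    k2 = f(x + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = f(x + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = f(x + dt * k3, t + dt)
    return x + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def solve_by_characteristics(a, f0, x0_list, t_end, n_steps):
    """Solve u_t = a(x) u_x by tracing dx/dt = -a(x); u is constant
    along each characteristic, so u(x(t_end), t_end) = f0(x0)."""
    dt = t_end / n_steps
    points = []
    for x0 in x0_list:
        x, t = x0, 0.0
        for _ in range(n_steps):
            x = rk4_step(lambda xx, tt: -a(xx), x, t, dt)
            t += dt
        points.append((x, f0(x0)))   # (position at t_end, solution value)
    return points

# Example 8.2.2: a(x) = x, so dx/dt = -x and x(t) = e^{-t} x0 (converging
# characteristics); the exact solution is u(x, t) = f0(x e^t).
pts = solve_by_characteristics(lambda x: x, math.sin, [0.5, 1.0, 1.5], 1.0, 200)
err = max(abs(u - math.sin(x * math.e)) for x, u in pts)
```

Replacing `lambda x: x` by `lambda x: -x` gives the diverging characteristics of Example 8.2.1 with the same code.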
We can also solve general scalar initial value problems

u_t = a(x, t)u_x + b(x, t)u + F(x, t),
u(x, 0) = f(x),   (8.2.6)

by the same method. On a characteristic, Eq. (8.2.6) becomes a nonlinear system of ordinary differential equations

dx/dt = -a(x, t),             x(0) = x_0,
du/dt = b(x, t)u + F(x, t),   u(0) = f(x_0).   (8.2.7)

This system can be solved by a Runge-Kutta or a multistep method. The method of characteristics seems to be optimal for the homogeneous equation (8.1.1) with regard to accuracy. However, we may have difficulty with an inhomogeneous equation. Consider, for example,

u_t = -x u_x + sin x,   (8.2.8)

which is equivalent to
dx/dt = x,
du/dt = sin x;

that is,

du/dt = sin(x(t)) = sin(e^t x_0).
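The coupled system (8.2.7) for this example can be integrated along a single characteristic; a minimal sketch (our own illustration) using the classical Runge-Kutta method:

```python
import math

def rk4_system_step(x, u, dt):
    # One RK4 step for dx/dt = x, du/dt = sin(x), i.e. Eq. (8.2.8) along a
    # characteristic (a(x, t) = -x, so dx/dt = -a = x).
    def rhs(xx, uu):
        return xx, math.sin(xx)
    k1x, k1u = rhs(x, u)
    k2x, k2u = rhs(x + 0.5 * dt * k1x, u + 0.5 * dt * k1u)
    k3x, k3u = rhs(x + 0.5 * dt * k2x, u + 0.5 * dt * k2u)
    k4x, k4u = rhs(x + dt * k3x, u + dt * k3u)
    x_new = x + dt / 6.0 * (k1x + 2 * k2x + 2 * k3x + k4x)
    u_new = u + dt / 6.0 * (k1u + 2 * k2u + 2 * k3u + k4u)
    return x_new, u_new

# Trace one characteristic from x0 = 0.1 with u(x0, 0) = f(x0) = 0.
x, u = 0.1, 0.0
dt = 1.0e-3
for _ in range(1000):
    x, u = rk4_system_step(x, u, dt)
# At t = 1 the characteristic has reached x = e * x0: the computational
# points diverge exponentially, exactly as discussed below.
```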
It is not difficult to obtain an accurate solution. However, the computational points diverge exponentially, and we may need to know the solution between computational points. In this case, the solution does not become smoother with time, and interpolation may be inaccurate. This difficulty can be overcome by using the method of characteristics on a grid. We consider this later. If we were to use the method of characteristics to solve Eq. (8.1.1) with the initial step function (8.1.2), we would find it easy to obtain any desired accuracy. The strength of the method of characteristics comes from the fact that we are integrating the solution along lines on which it is smooth. We are not differencing across discontinuities. Next we consider systems
u_t = A(x, t)u_x + B(x, t)u + F(x, t),
u(x, 0) = f(x).   (8.2.9)

We assume that the eigenvalues λ_i of A are real and that there is a smooth transformation S = S(x, t) such that

S^{-1}AS = Λ = diag(λ_1, ..., λ_m).

Introducing ũ = S^{-1}u as a new variable gives us

Sũ_t + S_t ũ = ASũ_x + AS_x ũ + BSũ + F;

that is,

ũ_t = Λũ_x + B̃ũ + F̃,   B̃ = S^{-1}(BS + AS_x - S_t),   F̃ = S^{-1}F.   (8.2.10)

If B̃ ≡ 0, then Eq. (8.2.10) reduces to a set of scalar equations
v_t^{(i)} = λ_i(x, t) v_x^{(i)} + F̃^{(i)},   i = 1, 2, ..., m,   (8.2.11)

which can be solved by the method of characteristics. The system (8.2.10) can then be solved using the iteration

v_t^{[j+1]} = Λ v_x^{[j+1]} + B̃ v^{[j]} + F̃,   j = 0, 1, ....   (8.2.12)

Thus, the general case can also be solved using the method of characteristics. Observe that there are now m different sets of characteristics

dx^{(i)}/dt = -λ_i(x, t),   x^{(i)}(0) = x_0^{(i)},   i = 1, 2, ..., m,   (8.2.13)
and that these characteristics do not change from iteration to iteration. As an example, we consider the system

u_t = -x u_x + v,
v_t = x v_x - u.   (8.2.14)

We calculate u and v along the divergent u characteristics of Figure 8.2.4 and along the convergent v characteristics of Figure 8.2.5, respectively. In the first equation, we need to know v along the u characteristics. In practice, one can obtain those values by interpolation from the values on the v characteristics. This poses no difficulty, because the v characteristics are convergent. However, for the second equation, the interpolation of u from values on the v characteristics can be a problem, because those characteristics are divergent.
EXERCISES

8.2.1. Give an estimate of |∂u/∂x| for the solutions of Eq. (8.2.8), and compare it to the solutions of u_t = -x u_x.

8.2.2. Write a program that solves Eq. (8.2.8) by the method of characteristics. Compute the solution v_j(t_n) on a regular grid using linear interpolation, and show that the accuracy deteriorates as t increases.

8.2.3. The solution of a scalar hyperbolic initial-value problem is uniquely defined by the concept of characteristics, even if the initial data is discontinuous. Prove that this solution is equivalent to the generalized solution defined in Section 4.8.

8.2.4. Write a program that solves Eq. (8.2.14) by the method of characteristics as described above. Use initial data with compact support.
8.3. METHOD OF CHARACTERISTICS IN SEVERAL SPACE DIMENSIONS
Scalar equations

u_t = Σ_{ν=1}^{d} a_ν(x, t) ∂u/∂x_ν + b(x, t)u + F(x, t),
u(x, 0) = f(x),   (8.3.1)

can be solved in the same way as the one-dimensional problem. Using the notation a = (a_1, ..., a_d), x = (x_1, ..., x_d), the characteristics are the solution of the system

dx/dt = -a(x, t),   x(0) = x_0,   (8.3.2)

and, on the characteristics, Eq. (8.3.1) becomes

du/dt = b(x, t)u + F(x, t),   u(0) = f(x_0).   (8.3.3)
The behavior of the characteristics can be very complicated. For example, the characteristics of

u_t = -x u_x + y u_y

are

x(t) = e^t x_0,   y(t) = e^{-t} y_0.

They diverge in the x direction and converge in the y direction.

If the matrices A_ν(x, t) in the hyperbolic system (6.4.1) are all diagonal (i.e., if the components of u are only coupled through lower order terms), then the characteristics are well defined for every component of u, and we can proceed as in the one-dimensional case. General systems such as Eq. (6.4.1) cannot easily be solved by the method of characteristics, because, in general, one cannot diagonalize all of the matrices A_ν with a single transformation S(x, t). However, sometimes one splits the differential operator so that partial steps can be taken using characteristics. As an example, we consider the Euler equations for incompressible flow, where the density ρ is assumed to be constant. Formally, we obtain the new reduced equations by letting ρ ≡ 1 in the original Euler equations (4.6.4). In two space dimensions, the new system is
u_t + u u_x + v u_y + p_x = 0,   (8.3.4a)
v_t + u v_x + v v_y + p_y = 0,   (8.3.4b)
u_x + v_y = 0.   (8.3.4c)

A more convenient system for computation is obtained by differentiating Eq. (8.3.4a) with respect to x and Eq. (8.3.4b) with respect to y and adding the two. The new equation replaces Eq. (8.3.4c), and we get

u_t + u u_x + v u_y + p_x = 0,
v_t + u v_x + v v_y + p_y = 0,
p_xx + p_yy = -((u_x)^2 + 2 u_y v_x + (v_y)^2).   (8.3.5)

Locally we can solve Eq. (8.3.5) by the iteration

u_t^{[n+1]} + u^{[n]} u_x^{[n+1]} + v^{[n]} u_y^{[n+1]} + p_x^{[n]} = 0,
v_t^{[n+1]} + u^{[n]} v_x^{[n+1]} + v^{[n]} v_y^{[n+1]} + p_y^{[n]} = 0,
p_xx^{[n]} + p_yy^{[n]} = -((u_x^{[n]})^2 + 2 u_y^{[n]} v_x^{[n]} + (v_y^{[n]})^2).

Thus, we consider p in the first two equations as a given function and, therefore, the first two equations are scalar equations with the same characteristics.
EXERCISES

8.3.1. Consider the solution of u_t = -x u_x + y u_y at t = t_0 in the square 0 ≤ x, y ≤ 1. What is the domain of dependence at t = 0? Draw a picture showing the distribution of the initial data required to define the solution on a uniform grid {x_i, y_j} at t = 10.

8.4. METHOD OF CHARACTERISTICS ON A REGULAR GRID

In the previous sections, we have shown how a solution can be computed by using ODE methods along the characteristics. If this is done, the solution is, in general, obtained at points that have no regular distribution in the computational domain. Recall the examples in Section 8.2. For practical reasons, one often wants to have the solution represented on a regular grid. This can be achieved by interpolation after the computation is completed. The interpolation can also
be done as part of the method at each time step; we now discuss this technique. These methods are called semi-Lagrangian methods.

First consider the model equation u_t = u_x, and let u_j^n denote the computed value at the gridpoint (x_j, t_n). To find a value u_j^{n+1} at the new time level, the intersection (x_*, t_n) of the characteristic with the previous time level is needed. In general, this is not a gridpoint, and an interpolation formula is used to approximate u_*^n. The simplest formula is obtained by using linear interpolation between neighboring points. The characteristic moves a distance k in the t direction during one time step. We assume that it intersects the previous time level in the interval (x_ν, x_{ν+1}) at a distance h_* from x_ν [i.e., (ν - j)h + h_* = k] (see Figure 8.4.1). The interpolation formula is

u_*^n = (h_* u_{ν+1}^n + (h - h_*) u_ν^n)/h,

and u_j^{n+1} = u_*^n yields the simplest version of this modified method of characteristics:

u_j^{n+1} = (1 - α)u_ν^n + α u_{ν+1}^n,   α = h_*/h.   (8.4.1)

This method is a special form of difference method; but, because x_ν is not necessarily a neighboring point (or equal) to x_j, it is not of the usual form. A stability analysis finds the amplification factor

Q = e^{i(ν-j)ξ}((1 - α) + α e^{iξ}),   α = h_*/h,

Figure 8.4.1. The method of characteristics (ν - j = 2).
with

|Q|^2 = 1 - 2α(1 - α)(1 - cos ξ).
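With u_*^n obtained by the linear interpolation above, the scheme reads u_j^{n+1} = (1 - α)u_ν^n + α u_{ν+1}^n, and the identity and the bound |Q| ≤ 1 can be checked numerically; a small sketch of ours (the unimodular phase factor e^{i(ν-j)ξ} is omitted, since it does not affect |Q|):

```python
import cmath
import math

# Verify |Q|^2 = 1 - 2*alpha*(1 - alpha)*(1 - cos(xi)) and |Q| <= 1
# for the linearly interpolated characteristic scheme.
worst = 0.0
for ia in range(1, 11):
    alpha = ia / 10.0                     # 0 < alpha <= 1
    for ix in range(629):
        xi = 0.01 * ix                    # sample 0 <= xi < 2*pi
        Q = (1.0 - alpha) + alpha * cmath.exp(1j * xi)
        lhs = abs(Q) ** 2
        rhs = 1.0 - 2.0 * alpha * (1.0 - alpha) * (1.0 - math.cos(xi))
        worst = max(worst, abs(lhs - rhs))
        assert lhs <= 1.0 + 1e-12         # unconditional stability
```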
By definition, 0 < α ≤ 1; therefore, |Q| ≤ 1, and the method is unconditionally stable. Next we generalize to variable coefficients and consider

u_t + a(x, t)u_x + b(x, t)u = F(x, t),   a(x, t) < 0.   (8.4.2)

The characteristic x(t) is defined by

dx/dt = a(x, t),   (8.4.3)

and, with d/dt denoting differentiation along such a curve, Eq. (8.4.2) can be written in the form

du/dt + b(x, t)u = F(x, t).   (8.4.4)
Assume that the solution is known at t = t_n. Let Γ(x, t) denote the characteristic passing through the point (x, t), and let (x_*, t_n) be the intersection of Γ(x_j, t_{n+1}) with the line t = t_n. The trapezoidal rule for Eq. (8.4.3) is

x_j - x_* = (k/2)(a(x_j, t_{n+1}) + a(x_*, t_n)).   (8.4.5)

This is a nonlinear equation for the unknown x_*. Because a very good initial approximation is available,

x_* ≈ x_j - k a(x_j, t_n),   (8.4.6)

Newton's method can be used to solve it efficiently. When x_* is known, the trapezoidal rule can be used to approximate Eq. (8.4.4):

v_j^{n+1} - v_*^n + (k/2)(b_j^{n+1} v_j^{n+1} + b_*^n v_*^n) = (k/2)(F_j^{n+1} + F_*^n).   (8.4.7)

The point x_* will not generally fall on a gridpoint. The functions b and F may be known for all x, t; in that case only v_*^n needs to be computed. We use quadratic interpolation here. Let ν be the index such that x_ν < x_* ≤ x_{ν+1}, with x_* - x_ν = h_*.
Then the points x_ν, x_{ν+1}, x_{ν+2} are used for interpolation, and the formula is

v_*^n = ½(1 - α)(2 - α)v_ν^n + α(2 - α)v_{ν+1}^n - (α/2)(1 - α)v_{ν+2}^n,
α = h_*/h,   0 < α ≤ 1.   (8.4.8)

The complete algorithm for one time step is as follows:

1. Compute x_* from Eq. (8.4.5) using Newton's method with the initial approximation defined by Eq. (8.4.6).
2. For each j, compute v_j^{n+1} using Eq. (8.4.7), where v_*^n is defined by Eq. (8.4.8).
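For constant a < 0 and b ≡ F ≡ 0, the algorithm reduces to quadratic interpolation at the foot of the characteristic, since x_* = x_j - ka is exact and Newton's method is unnecessary. One step can be sketched as follows (our own illustration, on a periodic grid):

```python
import math

def semi_lagrangian_step(v, a, h, k):
    """One step for u_t + a u_x = 0, a < 0 constant, on a periodic grid.
    The foot of the characteristic is x_* = x_j - k*a > x_j; v at x_*
    is interpolated from the points x_nu, x_nu+1, x_nu+2, where
    x_nu < x_* <= x_nu+1 and alpha = h_*/h lies in (0, 1]."""
    m = len(v)
    shift = -k * a / h                   # grid intervals traveled (> 0)
    off = math.ceil(shift) - 1           # nu - j
    alpha = shift - off                  # 0 < alpha <= 1
    w = [0.0] * m
    for j in range(m):
        nu = (j + off) % m
        w[j] = (0.5 * (1 - alpha) * (2 - alpha) * v[nu]
                + alpha * (2 - alpha) * v[(nu + 1) % m]
                - 0.5 * alpha * (1 - alpha) * v[(nu + 2) % m])
    return w

# Advect a sine wave, u(x, t) = sin(x - a t) with a = -1, and note that
# the mesh ratio k|a|/h = 2.5 exceeds one: the method remains stable.
m, a = 128, -1.0
h = 2.0 * math.pi / m
k = 2.5 * h
v = [math.sin(j * h) for j in range(m)]
for _ in range(20):
    v = semi_lagrangian_step(v, a, h, k)
# After 20 steps the exact solution is sin(x + 50 h).
err = max(abs(v[j] - math.sin(j * h + 50.0 * h)) for j in range(m))
```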
To establish stability, we assume, without loss of generality, that a is a negative constant and b ≡ F ≡ 0. Then it follows from Eq. (8.4.3) that x_* > x_j; that is, ν ≥ j in the formula

v_j^{n+1} = ½(1 - α)(2 - α)v_ν^n + α(2 - α)v_{ν+1}^n - (α/2)(1 - α)v_{ν+2}^n,
0 < α ≤ 1.   (8.4.9)
In this case, we obtain the amplification factor

Q = e^{i(ν-j)ξ}(½(1 - α)(2 - α) + α(2 - α)e^{iξ} - (α/2)(1 - α)e^{2iξ}),   (8.4.10)

and a straightforward calculation shows that

|Q| ≤ 1   for 0 < α ≤ 1.

We note that the method is unconditionally stable; that is, the time step k can be chosen arbitrarily. The method looks like an explicit difference method, and unconditional stability may seem surprising. However, it is not a contradiction of earlier results. There will be a growing number of gridpoints between x_j and x_ν as the mesh ratio k/h increases, and the method cannot be classified as explicit, even if only three points are used at the previous time level. The essential fact is that these three points are chosen so that the domain of dependence (which is just a curve in the x, t plane) is always included in the expanding stencil. If k is chosen such that k|a| ≤ h, then the scheme is a regular explicit difference scheme
v_j^{n+1} = ½(1 - α)(2 - α)v_j^n + α(2 - α)v_{j+1}^n - (α/2)(1 - α)v_{j+2}^n,
α = k|a|/h.   (8.4.11)

To calculate the order of accuracy, we rewrite the scheme as

(v_j^{n+1} - v_j^n)/k = |a| D_+ v_j^n - (h/2)|a| D_+D_+ v_j^n + (k/2)a^2 D_+D_+ v_j^n.

A Taylor expansion about (x_j, t_n) yields a truncation error k u_tt/2 on the left-hand side, which is canceled by the last term on the right-hand side (u_tt = a^2 u_xx). Similarly, the one-sided operator D_+ yields a truncation error h u_xx/2 that is canceled by the second term. Accordingly, the scheme has only truncation error terms of order (h^2 + k^2), which shows that second-order accuracy is automatically attained with the method of characteristics if quadratic interpolation is used for v_*^n.
For the general case, with arbitrary k and variable coefficients a(x, t) and b(x, t), the method is still second-order accurate. This can be proved by first considering Eq. (8.4.5) as a second-order approximation of Eq. (8.4.3), which determines the x_* points. The interpolation formula for v_*^n yields an O(h^3) error if the point x_* is exact. The perturbation of this point introduced by the numerical method computing x_* is locally O(h^3), and the total interpolation error is O(h^3) in each step. The method has a second-order global error.

It should be mentioned that the sign of the coefficient a(x, t) may be different in different parts of the domain. The only modification of the algorithm required, if the solution x_* of Eq. (8.4.5) satisfies the inequality x_{ν-1} ≤ x_* < x_ν ≤ x_j (corresponding to a > 0), is that a quadratic interpolation formula using the points x_{ν-2}, x_{ν-1}, x_ν is substituted for Eq. (8.4.8).

The method described here is based on an interpolation formula that uses only points on the same side of x_j as the incoming characteristic at (x_j, t_{n+1}), even if j = ν. This seems natural, because the whole method is based on tracing the domain of dependence. Furthermore, for nonlinear problems, when shocks may be present, it is advantageous to use information from only one side of the shock (see Section 8.5). However, from a stability point of view, there is nothing that prohibits use of the points x_{ν-1}, x_ν, x_{ν+1} (for a < 0). Actually, if a is constant and k|a| < h, this translation changes the scheme (8.4.11) into the Lax-Wendroff method.

We again discuss the generalization to systems in one space dimension and consider

u_t + A(x, t)u_x = 0.   (8.4.12)
The basic idea is to separate the various characteristics from each other and to integrate the corresponding combinations of variables along these characteristic curves. The vector u has m components and, because the system is hyperbolic, we can find m left eigenvectors φ_i(x, t) such that

φ_i^T(x, t)A(x, t) = λ_i(x, t)φ_i^T(x, t),   i = 1, ..., m,   (8.4.13)

where the λ_i are the eigenvalues of A. After multiplying Eq. (8.4.12) by φ_i^T, we obtain

φ_i^T(u_t + λ_i u_x) = 0,   i = 1, ..., m.   (8.4.14)

The m families of characteristics are defined by

dx/dt = λ_i(x, t),   i = 1, ..., m.   (8.4.15)

For each family, we choose the characteristic Γ_i(x_j, t_{n+1}) which passes through the point (x_j, t_{n+1}), and the trapezoidal rule becomes [as in Eq. (8.4.5)]

x_j - x_{*i} = (k/2)(λ_i(x_j, t_{n+1}) + λ_i(x_{*i}, t_n)),   i = 1, ..., m,   (8.4.16)

where x_{*i} is the intersection of Γ_i(x_j, t_{n+1}) with t = t_n. When the m points x_{*i} are known, the corresponding vectors v_{*i}^n are computed from Eq. (8.4.8), where ν, h_*, and α now depend on i. The vectors v_j^{n+1} are then computed using the trapezoidal rule applied to Eq. (8.4.14) differentiated along each characteristic:

φ_i^T(x_j, t_{n+1}) v_j^{n+1} = φ_i^T(x_{*i}, t_n) v_{*i}^n,   i = 1, ..., m.   (8.4.17)

This is an m × m system for the unknowns v_j^{(i),n+1}, i = 1, ..., m, which can be solved by a direct method. Figure 8.4.2 shows the situation for m = 3. The generalization to systems with lower order terms can be done by iteration, as discussed in Section 8.3. It can also be done by adding the extra terms (premultiplied by φ_i^T) to Eq. (8.4.14). The approximation (8.4.17) is modified, but the system (8.4.16) is unchanged. If the system is nonlinear, then the eigenvalues λ_i also depend on u. Hence, the systems (8.4.16) and (8.4.17) become coupled, and they must be solved simultaneously.

The method of characteristics is generally much more difficult to apply to problems in several space dimensions. However, it is easily generalized for scalar problems, as discussed in Section 8.3. The equation
Figure 8.4.2. Method of characteristics for a 3 × 3 system.
u_t + a(x, y, t)u_x + b(x, y, t)u_y + c(x, y, t)u = F(x, y, t)   (8.4.18)

is rewritten in the form

du/dt + c(x, y, t)u = F(x, y, t)   along Γ(t),   (8.4.19)

where the coordinates x(t), y(t) of the curve Γ(t) are defined by

dx/dt = a(x, y, t),
dy/dt = b(x, y, t).   (8.4.20)
The system (8.4.20) is solved using the trapezoidal rule, and Newton's method is applied at each time step, with (x_j, y_j) known at t = t_{n+1} and with the point (x_*, y_*) as the unknown intersection of the characteristic curve with the plane t = t_n. To find the corresponding value v_*^n, interpolation is required, and this can be done in many ways. As an example, consider negative values of a and b, and let (x_ν, y_μ) be the point such that

x_* - x_ν ≤ h_1,   y_* - y_μ ≤ h_2,

where h_1 and h_2 are the regular step sizes. Interpolated values on the line y = y_* are computed using

v_{i,*}^n = ½(1 - α_2)(2 - α_2)v_{i,μ}^n + α_2(2 - α_2)v_{i,μ+1}^n - (α_2/2)(1 - α_2)v_{i,μ+2}^n,
α_2 = h_{2*}/h_2,   i = ν, ν + 1, ν + 2.   (8.4.21)
These values are then used to define v_*^n by applying the same formula in the x direction:

v_*^n = ½(1 - α_1)(2 - α_1)v_{ν,*}^n + α_1(2 - α_1)v_{ν+1,*}^n - (α_1/2)(1 - α_1)v_{ν+2,*}^n,
α_1 = h_{1*}/h_1.   (8.4.22)
Figure 8.4.3 illustrates the procedure.

As mentioned above, the method of characteristics is not very convenient for general systems in several space dimensions. However, in many applications, as, for example, in fluid dynamics, the system has the form

u_t + a u_x + b u_y + Pu = 0,   (8.4.23)

where a and b are scalar functions and P is a differential operator with matrix coefficients. In Section 8.3, it was shown how the method of characteristics can be used with iteration. Another possibility is to use a Strang-type splitting as described in Section 5.4. Let Q_1(k) be the operator that solves the system

u_t + a u_x + b u_y = 0   (8.4.24)

using the method of characteristics (for m identical components) over one time step, and let

Q_2(k)   (8.4.25)

be a difference approximation to u_t + Pu = 0. Then the system (8.4.23) can be approximated by

v^{n+1} = Q_1(k/2) Q_2(k) Q_1(k/2) v^n,   (8.4.26)

and the results in Section 5.4 concerning stability and accuracy are valid. One case in which this type of splitting may be advantageous is where the part (8.4.24) of the differential equation represents the fast waves in the system. The unconditional stability of Q_1(k) then makes it possible to run the full scheme (8.4.26) with larger time steps than could be used with an explicit method.

Figure 8.4.3. Interpolation scheme in two space dimensions.
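The Strang-type composition of the two solution operators can be sketched abstractly as follows (our own illustration; Q1 and Q2 stand for the characteristics solve and the difference approximation described above):

```python
import math

def strang_step(v, Q1, Q2, k):
    # One split step: v^{n+1} = Q1(k/2) Q2(k) Q1(k/2) v^n.
    v = Q1(v, 0.5 * k)
    v = Q2(v, k)
    v = Q1(v, 0.5 * k)
    return v

# Toy check with commuting scalar "operators": the split step is then
# exact, and one step with k = 0.1 multiplies v by exp(-0.5).
Q1 = lambda v, t: v * math.exp(-2.0 * t)   # stand-in for the characteristics solve
Q2 = lambda v, t: v * math.exp(-3.0 * t)   # stand-in for the difference operator
out = strang_step(1.0, Q1, Q2, 0.1)
```

For non-commuting operators the composition is second-order accurate in k, which is the point of the half-step arrangement.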
EXERCISES

8.4.1. Prove that |Q| ≤ 1, with Q as defined in Eq. (8.4.10).

8.4.2. Define the modified method of characteristics if Eq. (8.4.8) is replaced by quadratic interpolation using the points x_{ν-1}, x_ν, and x_{ν+1}. Prove unconditional stability if the differential equation is u_t + a u_x = 0. Also prove that the method is equivalent to the Lax-Wendroff method in this case, if k|a| < h.
8.5. REGULARIZATION USING VISCOSITY

We have seen that variable coefficients can cause converging characteristics. As t → ∞, discontinuities can occur. As an example, consider the problem

∂u/∂t = sin x ∂u/∂x,   t ≥ 0,
u(x, 0) = sin x.   (8.5.1)

The behavior of the characteristics near x = 0 is determined by the coefficient sin x. Near x = 0, sin x behaves essentially like x. Figure 8.5.1 shows the solution at t = 4π/3, computed with a very fine mesh.

Thus, we have the same situation here that we discussed in Section 8.2. After some time, the solution develops a large gradient at x = 0, and we expect numerical difficulties. We approximate ∂/∂x by the fourth-order accurate operator Q_4 = D_0(I - (h^2/6)D_+D_-) (see Section 3.1) and solve the resulting ordinary differential equations by the classical fourth-order Runge-Kutta method on the
Figure 8.5.1. Solution of Eq. (8.5.1) at t = 4π/3.
interval -π ≤ x ≤ π. Figure 8.5.2 shows the numerical solution at t = 4π/3 with h = 2π/120. Difficulties near x = 0 are apparent. We now alter Eq. (8.5.1) to
∂w/∂t = sin x ∂w/∂x + ε ∂²w/∂x²,
w(x, 0) = sin x.   (8.5.2)
Figure 8.5.2. Numerical solution of Eq. (8.5.1) at t = 4π/3 with h = 2π/120.
Here ε > 0 is a small constant. We expect that w will be close to the solution u of (8.5.1) in those regions where the derivatives of u are small compared to 1/ε. One can prove that the derivatives of w satisfy an estimate

|∂^{p+q} w/∂x^p ∂t^q| ≤ C_{p,q} ε^{-(p+q)}

(recall Theorem 5.5.1). Here C_{p,q} depends only on p, q and not on t. Thus, in contrast to Eq. (8.5.1), there are uniform bounds for the derivatives of the solution. Therefore, difference methods can be used to solve Eq. (8.5.2). The error is of the form h^q D^{q+1} w, where D denotes differentiation. Thus, if h^q ε^{-(q+1)} ≪ 1, then the error is small. We have also solved Eq. (8.5.2) by the method of lines, using the fourth-order Runge-Kutta method in time. In space, we replaced the derivatives by fourth-order accurate difference operators. The truncation error is O(h^4 D^5 w + ε h^4 D^6 w) = O(h^4 ε^{-5}) in space. In Figure 8.5.3, we show w for ε = 0.0025 and for the same h and t as in Figure 8.5.2. It is clear that, away from a small neighborhood of x = 0, w is close to u. There is an error near x = 0, partly because w is not equal to u, and partly because there are not enough gridpoints to approximate w well. An accurate approximation of w would require h ≈ 0.1 ε^{5/4}.
Figure 8.5.3. A regularized solution.
EXERCISES

8.5.1. Solve Eq. (8.5.1) using the Lax-Wendroff scheme with decreasing step sizes h. Explain why the oscillations become more severe as h decreases.
8.6. THE INVISCID BURGERS' EQUATION

In this section, we consider nonlinear problems. In this case, discontinuous solutions can be generated spontaneously in finite time from smooth initial data. We consider Burgers' equation as an example. First, consider the Cauchy problem

u_t + u u_x = 0,   -∞ < x < ∞,   (8.6.1a)
u(x, 0) = f(x).   (8.6.1b)

We assume that f(x) ∈ C^∞ is a strictly monotonic function such that

lim_{x→-∞} f(x) = a,   lim_{x→+∞} f(x) = b,   (8.6.2)

where either a > 0 > b or a < 0 < b (see Figure 8.6.1).
where either a > 0 > b or a < 0 < b (see Figure 8.6.1). We think of the coefficient u in Eq. (8.6. 1) as a given function, and, therefore, we can solve the problem by the method of characteristics. The chacteristics are given by dx dt
u,
x(O)
xo·
(8.6.3)
f
a
x
b
(a)
Figure 8.6.1a. a > 0 > h.
Figure 8.6.1b. a < 0 < b.
Along the characteristics,

u(x, t) = f(x_0) = constant.   (8.6.4)

Therefore, Eq. (8.6.3) becomes very simple. The characteristics are the straight lines

x(t) = f(x_0)t + x_0.   (8.6.5)
Now, assume that a > 0 > b. Then there is a point x̄ such that

f(x_0) > 0 for x_0 < x̄,   f(x_0) < 0 for x_0 > x̄.
Therefore, the characteristics to the left of x̄ and the characteristics to the right of x̄ will intersect (see Figure 8.6.2a). By Eq. (8.6.4), u is constant along the characteristic lines. If two characteristics intersect, then u will not, in general, be a unique function, and the solution will not exist. Also, just before the intersection the solution has a very large gradient, because the solution is converging to different values. We can calculate the blow-up time [i.e., the first time when two different characteristics arrive at the same point (x, t)]. In this case, there are x_0, x̃_0 such that

x = f(x_0)t + x_0 = f(x̃_0)t + x̃_0;

that is,
Figure 8.6.2a. a > 0 > b.

t = -(x_0 - x̃_0)/(f(x_0) - f(x̃_0)) = -1/f'(ξ),

where ξ lies between x_0 and x̃_0. Thus, the blow-up occurs at

T = min_{-∞ < x < ∞} (-1/f'(x)).   (8.6.6)

For t > T, the solution forms a shock wave, which is defined as the limit of viscous solutions in the next section. Now assume that a < 0 < b. In this case, the characteristics diverge (see Figure 8.6.2b) and the solution becomes smoother with time.

Figure 8.6.2b. a < 0 < b.
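The blow-up time (8.6.6) is easy to evaluate numerically for concrete data. A sketch of ours (the data f(x) = -tanh x is our own example; here f'(x) = -1/cosh²x, so T = min cosh²x = 1):

```python
import math

def f(x):
    return -math.tanh(x)        # decreasing data with a = 1 > 0 > b = -1

def fprime(x):
    return -1.0 / math.cosh(x) ** 2

# Blow-up time (8.6.6): T = min over x of -1/f'(x) = min cosh(x)^2 = 1.
xs = [i * 0.001 - 5.0 for i in range(10001)]
T = min(-1.0 / fprime(x) for x in xs)

# Characteristics x(t) = f(x0) t + x0 from Eq. (8.6.5): for t > T the
# map x0 -> x(t) is no longer monotone, i.e., characteristics cross.
def x_of(x0, t):
    return f(x0) * t + x0

crossed = any(x_of(xs[i], 1.5) > x_of(xs[i + 1], 1.5)
              for i in range(len(xs) - 1))
```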
Figure 8.6.3. A rarefaction wave.
Consider a sequence f_j(x) of monotonically increasing initial data with

lim_{j→∞} f_j(x) = -1 for x ≤ 0,   +1 for x > 0.   (8.6.7)

Using the characteristics, a simple calculation shows that the corresponding solutions converge to

u(x, t) = 1 for t ≤ x;   x/t for -t ≤ x ≤ t;   -1 for x ≤ -t.   (8.6.8)
[Observe that x/t is a solution of Eq. (8.6.1a).] Thus, for every fixed t, the solution has the form shown in Figure 8.6.3; u(x, t) is called a rarefaction wave. We have already studied similar effects for linear equations in Section 8.2. The difference in the linear case is that the characteristics never intersect. They can only come arbitrarily close to each other. Therefore, the solution exists for all time, although its derivatives can become arbitrarily large. Now consider initial data with a finite number of maxima and minima (see Figure 8.6.4). As long as the solution exists, maxima and minima remain
Figure 8.6.4.
maxima and minima. On intervals where u is monotonically increasing or monotonically decreasing, the characteristics diverge or converge, respectively. In the monotonically decreasing parts, the characteristics eventually intersect, and the solution does not exist beyond the blow-up.

8.7. THE VISCOUS BURGERS' EQUATION AND TRAVELING WAVES

Corresponding to Section 8.5, we regularize the inviscid equation (8.6.1) by adding dissipation, and consider the Cauchy problem

u_t + u u_x = ε u_xx,   -∞ < x < ∞,   t ≥ 0,
u(x, 0) = f(x),   (8.7.1)
where ε > 0 is a small constant. We also assume that f(x) and its derivatives are uniformly bounded with respect to x. Theorem 5.5.1 shows that for every i, j there exists a constant C_{i,j} such that

|∂^{i+j} u/∂x^i ∂t^j| ≤ C_{i,j} ε^{-(i+j)}.   (8.7.2)

As we will see later, this is the best we can hope for with general initial data. However, for monotonically increasing data, we can do better. One can prove the following theorem.

Theorem 8.7.1. Consider Eq. (8.7.1) with initial data as in Eq. (8.6.2), where a < b (i.e., f is monotonically increasing). For every i, j, there exists a constant C'_{i,j}, which does not depend on ε, such that for all t

|∂^{i+j} u/∂x^i ∂t^j| ≤ C'_{i,j}.

Therefore, the solution of Eq. (8.7.1) converges to the solution of the inviscid equation (8.6.1) as ε → 0.
=
=
=
-
=
=
«
«
=
In
Figure 8.7.1.
In this case the solution converges to a steady state. In general, if a > b, then the solution converges to a traveling wave; that is, there is a constant s such that

u(x, t) = φ(x - st),   lim_{x→-∞} φ = a,   lim_{x→+∞} φ = b,   a > b.   (8.7.3)
We shall prove the next theorem.

Theorem 8.7.2. The viscous Burgers' equation has traveling wave solutions satisfying Eq. (8.7.3).

Proof. We introduce a moving coordinate system

z = x - st,   t' = t.

Neglecting the prime sign and using the same notation u for the dependent variable, Eq. (8.7.1) becomes

u_t - s u_z + u u_z = ε u_zz,

which can be written in the so-called conservation form

u_t = (su - u²/2 + ε u_z)_z.   (8.7.4)
In this new frame, a traveling wave of the form of Eq. (8.7.3) becomes stationary. Thus, φ must satisfy the ordinary differential equation

(-sφ + φ²/2)' = ε φ'',   lim_{z→-∞} φ = a,   lim_{z→+∞} φ = b.   (8.7.5)

We can integrate Eq. (8.7.5) and obtain

ε φ' = -sφ + φ²/2 + d,   d constant.   (8.7.6)

The constants s and d are defined by the boundary conditions: lim_{z→±∞} φ' = 0 implies

-sa + a²/2 + d = 0,   -sb + b²/2 + d = 0.

That is,

s = (b² - a²)/(2(b - a)) = (b + a)/2,
d = (b + a)a/2 - a²/2 = ab/2.

With these values Eq. (8.7.6) becomes

ε φ' = ½(φ - (a + b)/2)² - (a - b)²/8 = ½(φ - a)(φ - b).   (8.7.7)
We now choose a value z_0 such that

φ(z_0) = (a + b)/2.   (8.7.8)

Such a point must exist because of the boundary conditions in Eq. (8.7.5). We then integrate Eq. (8.7.7) in the forward and backward directions of z with z = z_0 as a starting point. Clearly,

φ' < 0 for |φ - (a + b)/2| < |a - b|/2,   and   φ' = 0 for φ = a, b.

If a > b, φ decreases toward φ = b when z → ∞ and increases toward a as z → -∞. Therefore, Eqs. (8.7.7) and (8.7.8) give us the desired solution if a > b. If a < b, we cannot obtain a solution, because the solutions of Eq. (8.7.7) are monotonically decreasing for a < φ < b. The traveling waves obtained for a > b are called viscous shock waves.
The solution of Eq. (8.7.7) is not unique, because we can choose z_0 arbitrarily. We can determine z_0 in the following way. Consider the initial-value problem (8.7.1). For t → ∞, its solution will converge to a traveling wave φ of the above type. Let φ_0 be the traveling wave with z_0 = 0. For s = (b + a)/2, the differential equation gives us

∂/∂t ∫_{-∞}^{∞} (u - φ_0) dz = ∫_{-∞}^{∞} u_t dz = ∫_{-∞}^{∞} ((su - u²/2)_z + ε u_zz) dz = sb - b²/2 - sa + a²/2 = 0.

Therefore,

∫_{-∞}^{∞} (u - φ_0) dz = ∫_{-∞}^{∞} (f - φ_0) dz.

Because u rapidly converges to φ as t → ∞, we obtain that

∫_{-∞}^{∞} (φ - φ_0) dz = ∫_{-∞}^{∞} (f - φ_0) dz
I{) - � (a + b) � (a - b) as a new variable. Equations
(8.7.9)
(8.7.7) and (8.7.8) then become
'i1/;' = 1/;2 1/;(zo) = o.
I,
E
4E a-b'
(8.7.10)
PROBLEMS WITH DISCONTINUOUS SOLUTIONS
324
Equation
(8.7.10) can be solved explicitly. We obtain
e-ll/ff el"/ff . e- / + e / _
(8.7.11)
Equation (8.7.11) shows that the steep gradient of the traveling wave is confined to an interval of width E log E (i.e., there is an internal layer at z = 0). Except in this layer, a change of E has very little effect on the solution. Also, in agreement with Theorem 8.7.1 , differentiating Eq. (8.7. 10) gives us
I b"' l�
1,
I atzl t
=
�,
0, 1, 2, . . . . (8.7.12)
The estimates (8.7.12) are sharper than those of Eq. (8.7.2), because they give us the correct dependence on the so called The stronger the shock, the thinner the internal layer and the larger the gradient. We can now consider the limit process E -7 In this case, 1/; converges to the step function
shock strength a - h. O.
1/;
=
{ +-11,,
for z < for z >
0,
O. The limits of viscous shock waves are called inviscid shock waves. In Figure 8.7.2, we have calculated a viscous rarefaction wave, that is, we solve Eq. (8.7.1) with initial data from Eq. (8.6.7) and 0.005, h 0.02, and We use the same method as in Figure 8.7.1 . The result agrees well kwithO.Eq.lh.(8.6.8). Due to the viscosity effect, the solution is smooth for t > O. We can summarize our results. If the initial data are monotonically increas E =
=
ing, then the solution
=
C'"
u converges to the solution of the inviscid equation as t= 1
x
(a)
Figure 8.7.2. (a) t
=
1.
THE VISCOUS BURGERS' EQUATION AND TRAVELING WAVES
325
t= 2
-L
-L____-L_____L____��---L----�----�----�-
__ __
-4
3
-3
(b)
Figure 8.7.2. (b) t
=
4
x
2.
a
-7 O. Iff(x) is of the form shown in Figure 8.6.1a, then the state u_ = will propagate to the right until it approaches the state u+ = b which has either been moving to the left (b < 0) or more slowly than u_ to the right. These states are then connected by a viscous layer and a traveling wave is formed. The traveling wave moves with speed s = + b). As € -7 0, this traveling wave converges to the step function €
!(a
u
=
{a,b,
for x - sf < for sf > Zo o
zo,
For general initial data as in Figure 8.7.3, the monotonically increasing parts will remain smooth and the monotonically decreasing parts will be transformed into traveling waves that converge to "jump" functions as € -7 O. The traveling waves move with speed ! (u + + u_ ) , where u_ and u+ are the values to the left and right of the viscous layer. We have computed the solution of Eq. (8.7.1) with the initial data displayed i n Figure 8.7.3, € = 0.005, h = 211/250, k = 2h/3. The result is shown in Figure 8.7.4 at f = 1.511'.
-10
15
Figure 8.7.3. Initi al data.
326
PROBLEMS WITH DISCONTINUOUS SOLUTIONS
0.8 0.6 0.4 0.2 -10
-5
-0.2 -0.4 Figure 8.7.4. Solution at t '" 1 .511'.
All the results above can be extended to more general equations -00
Ut + g(u)x = fUn, u(x, O) = f(x).
< X <
00 ,
t
�
0, (8.7.13)
One can prove the following theorem.
Consider Eq. (8. 7.13) with initial data f(x) satisfying Eq. (8.6.2). Assume that g(u) is a convex function with
Theorem 8.7.3.
If a > b, then the solution of Eqs. (8. 7.13) and (8.6.2) converges to a traveling wave with speed s
g(a) - g(b) a-b
One does not need to calculate a traveling wave precisely to determine its speed. Assume that there is a traveling wave whose front at time to is positioned at Xo and at time to+.dt at xo+s.dt (see Figure 8.7.5). Let 0 > s.dt be a constant. We now integrate Eq. (8.7.13) over the interval [xo - o ,xo + 0]. During the time .dt, the integral increases by s.dt(u_ - u+ ). This is just an increase due to the state u_ mov�g into the interval less the state u+ moving out of the interval. Thus, we obtain
THE VISCOUS BURGERS' EQUATION AND TRAVELING WAVES
327
� --------�
to + Llt
-------�-----
u_
u+
-----_
to ---
u+
Figure 8.7.5. -;------�r_--�-+_--r_--��
+ duAxQ + 5) - uAxQ
-
5» ;
that is,
s REMARK. If g(u) = previous result.
� u2 ,
then
s = (u+ + u_)/2 + &(e), that is,
we recover our
The interesting fact is that the speed does not depend on the detailed profile of the wave. This is because the derivative terms in Eq. (8.7. 13) are in conservation form. If we have an equation of the form
ut + a(x, t, u)ux
=
eb(x, t, u, ux)uxx>
we cannot proceed in the same way and the speed of a traveling wave may depend on the profile. In the next section, we discuss numerical methods. A difference approxi mation is said to be in conservation form, or conservative, if it can be written as
328
PROBLEMS WITH DISCONTINUOUS SOLUTIONS
(8.7. 14) where Hj = H (Vj + P ' , Vj q ) is a gridfunction. For example, if Hj = i (vJ + vJ 1 ), we obtain Eq. (8.8.3). If the differential equation is in conservation form, consistent approximations need not be. The discussion above can be generalized to the discrete case. The conclusion is that the shock speed will not necessarily be obtained accurately with such approximations unless the shock profile is resolved using a very fine grid near the shock. This restriction is usually too severe. Thus, one should use conservative approximations. The above construction can also be used for systems of conservation laws. Then u and g are vector functions with m components. We apply the construc tion above, component by component, obtaining m relations • • •
_
_
v
= 1 , 2, . . . , m,
(8.7.15)
which connect the state on both sides of the traveling wave to its speed. For E = 0, the conditions (8.7. 15) are called the Rankine-Hugoniot shock relations. As we have seen, as E -7 0, the solutions of our problem converge to limit ing solutions with smooth sections separated by traveling jump functions. Such solutions are called weak solutions of the inviscid equation. There is a large amount of literature on weak solutions, which are usually defined directly using a so-called weak formulation and not as the limit of viscous solutions. We will not discuss that here and refer to the literature. 8.8. NUMERICAL METHODS FOR SCALAR EQUATIONS BASED ON REGULARIZATION Equations for physical systems often contain dissipative terms to model physical processes such as diffusion or viscosity. If we think of Burgers' equation as a model of fluid flow, then it is natural to consider the viscous equation
u/ +
1 (u 2).x
2
_ -
vuxx•
(8.8.1)
(We have chosen to write v instead of E to emphasize that we are considering the naturally occurring viscosity; E will denote artificial or numerical viscosity.) Unfortunately, v is very small in many applications, for example, v = 1 0-8 . If we solve the problem using a difference approximation, truncation error analysis tells us that we must choose h « v. This is often impractical. So we might increase v and solve
(8 . 8.2)
NUMERICAL METHODS BASED ON REGULARIZATION
329
instead with, say, E ,., 10- 2 • In this case, the requirement h « E is practically feasible. In the previous section, we have seen that the solutions of Eqs. (8.8. 1 ) and (8.8.2) consist of smooth parts between traveling waves. In the smooth parts, the solutions differ by terms of order &(E). Also, the speed of the trave ling waves is, to first approximation, independent of E . The solutions differ in the thickness of the transition layers; &(E) instead of &(1'). For these reasons, methods based on the concepts above often give reasonable and useful answers. We discuss the procedure above in more detail, which will also explain why difficulties may arise. We approximate Eq. ( 8.8 2) by .
( 8.8.3 )
and we want to calculate traveling waves. We only solve for stationary waves, that is, we want to determine the solution of lim
J --') �
Vj
=
-a,
lim
J --') - � .
Vj
a, a > O.
(8.8.4)
We can also write Eq. (8.8.4) in the form
Therefore,
a2 2
'
(8.8.Sa)
that is, (8.8.Sb) We normalize Eq. (8.8.Sb) and write it in the form
- 1 ) : = V-j F(Vj +
+
1
- 7V-j2
+
v
1
+
7
=
V-j + 7V-j2 - 7 - . ha
( 8.8.6)
The functions F(v), G(v) are parabolas with F" == -7, G" == 7. They have their extreme values at v = 1/(27) and v = - 1/(27), respectively, where F = -G = 7 + 1/(47). Also, F(±1) = G(±1) = ±1. There are two cases.
330
PROBLEMS WITH DISCONTINUOUS SOLUTIONS
Figure 8.8.1.
CASE 1 .
1"
�
!.
Figure 8.8. 1 clearly shows that, for any given Va with - 1 < of Eq. (8.8.6) is monotonically decreasing toward - 1 .
va < 1 , the solution Vj
CASE 2. 1" > ! . Figure 8.8 .2b is a blowup of the area around the point of con vergence. Now the convergence, in general, will not be monotone. The solution Vj will oscillate around - 1 as j increases.
(a) Figure 8.8.2a.
NUMERICAL METHODS BASED ON REGULARIZATION
331
(b)
Figure 8.8.2b.
Starting with Vo = 0 we have calculated Vj , j = 0, 1, . . . , for different values of T (see Figure 8.8.3). Again, we see that, for T < ! , the convergence is monotonic as j increases and, fOr T > !, it is oscillatory. This can also be seen by linearizing Eq. (8.8.6) around v = - 1 . Let v = - 1 + v'. If we neglect quadratic terms, we obtain 1 - 2T 1
,
v· + 2T J .
---
(8.8.7)
x
Figure 8.8.3.
332
PROBLEMS WITH DISCONTINUOUS SOLUTIONS
Therefore, sign Vj + I = -sign Vj if T > � . Furthermore, Eq. (8. 8.7) also shows that the convergence is slow as j increases if 0 < T « 1 or T » 1 . If the convergence is slow, then the discontinuity will affect many gridpoints. We can also solve Eq. (8.8.4) for j $ O. The result is completely symmetric. The calculation above and Eq. (8.8.7) show that the discontinuity is partic ulady sharp if we choose the dissipation so that I
T
2
or
E
ha
2'
In this case, Eq. (8.8.3) becomes
dv· J dt
_
2 + lD 2 0 vJ
(8.8.8)
Observing that
we can write Eq. (8.8.8) in two equivalent forms:
(8.8.9a) or
(8.8.9b) For large j, we have Vj =
-a + Wj, IWj l « 1 , and, therefore, j
»
Correspondingly,
for j
«
-1.
1.
NUMERICAL METHODS BASED ON REGULARIZATION
333
Thus, Eqs. (8.8.9a) and (8.8.9b) are closely related to the so-called ference schemes o
for Vj
<
+
0,
2
I D vj -
2:
= o
upwind dif
for Vj >
O.
(8.8. 1 0)
One can use Eq. (8.8.8) as an integration scheme. The choice of E = haj2 was based on an analysis of the stationary solution, so we cannot expect the method to be useful if the shock speed is not small. In that case, one should locally introduce a moving coordinate system as discussed earlier. Instead of using Eq. (8.8. 1 0), where one uses a different method when v changes sign, one can use one formula. This leads to so-called flux-splitting methods. Introduce functions
g+ _
Then
2
v
=
g+ + g_
{V2,, I Vv < , 0
if 'f
> 0, 0
_
g-
=
{O,v2,
if v if v
� <
O.
0,
and, instead of the formulation (8.8.10), we can use (8.8. 1 1 )
There are some drawbacks with these methods. To obtain sharp waves, one must know the shock strength and use a regularization coefficient E = &(h). Thus, in the smooth part of the solution, the method is only first-order accurate. One can avoid these problems by either using a "switch" or a more compli cated dissipation term (or both). Instead of Eq. (8.8.3) we consider (8.8 . 1 2) One way to build in the shock strength is to use
p- \
L
p v=-
h I D+ vj + v l .
(8.8 . 1 3)
Typically, one uses p = 2, because discontinuities are not smeared over more than three grid points when efficient methods are used. Then I{)j a b near the discontinuity, and we can achieve the correctly scaled viscosity by choosing E = hj4. Where the solution is smooth, the dissipation term is of order &(h2 ). Thus, the approximation is second-order accurate in the smooth regions. In Figure ""
-
334
PROBLEMS WITH DISCONTINUOUS SOLUTIONS
8.8.4, we show a calculation obtained using Eq. (8.8. 1 2) with E = h/4. It is free of oscillations. We can also combine Eq. (8.8 . 1 2) with a switch. The idea is to only use dissipation near a discontinuity. To do this, one must construct a monitor M that can locate discontinuities. One possibility is p
Typically,
p=
>
O.
2. Then we can select a threshold M and define
We now consider the more general equation (8.7. 1 3) with initial data (8.6.2), where a > b. One can construct traveling waves as before. In the moving coor dinate system, where z ---7 x - st, Eq. (8.7 . 1 3) becomes (8.8. 14) where
g l (u) For so called
:=
s
g(u) - su,
weak shocks, that is, 0 < a - b « s
""
g'(uo),
Uo
g(a) - g(b)
a-b
(8.8. 1 5)
1,
a+b 2
0.5
-0.5 - 1 P-
--
t = 1 00k
Figure 8.8.4.
NUMERICAL METHODS BASED ON REGULARIZATION
Therefore, since
g� ( uo) ""
""
335
0,
( g, uo) + g� uo) u (
"" g�' ( uo ) « u
(
_
2
(
Uo )2 )
z·
By assumption, gil > O. Therefore, Eq. (8.8. 1 4) behaves essentially like Burg ers' equation. In particular, the optimal amount of dissipation is proportional to the shock strength a - b. If a b » 1 , then it is not possible in general to relate the optimal amount of dissipation to the shock strength. Corresponding to the relation (8.8.Sa) for Burgers' equation, it follows that the profile of the discrete traveling wave is the solution of -
(8.8 . 1 6) using Eq. (8.8 . 1 5) as j ---7 00, V ---7 b. Introducing v = b + as a new variable j W and neglecting quadratic terms gives us the linearized equation
that is,
Wj + !
= Wj
1
j + � l g� (b)1 W .
(8.8 . 1 7)
2€
[Observe that gil > 0, and, therefore, for some number �, b � � � a, we have g� (b) = g'(b) - s = g'(b) - g' (�) < 0.] The optimal dissipation in front of the traveling wave is now
Correspondingly, we should choose €
� I g� (a) 1
PROBLEMS WITH DISCONTINUOUS SOLUTIONS
336
behind the discontinuity. These two values can be vastly different and need not be related to the shock strength. We can incorporate these values into the difference approximation by using (8.8. 1 8)
REMARK. In practice, one replaces I g; (uj )1 by [ l/(2p + 1 )] L� = -p I g; (uj+ p ) I . Again, recall that Eq. (8.8 . 1 8) is defined in the moving coordinate system. In the original coordinate system, the approximation is not useful if the shock speed is large. EXERCISES 8.8.1. Prove that Eq. (8.8. 1 6) has a unique monotone solution provided sufficiently large.
Elh is
8.9. REGULARIZATION FOR SYSTEMS OF EQUATIONS Consider a system of conservation laws « 8.9. 1 ) where U and
g are
m
vectors. Again we are interested in traveling waves
U(x, t) = cp (x - st),
lim x -1o ±OO
cp =
U±.
(8.9.2)
As in the scalar case, we introduce a moving coordinate system
z = x - st,
t' = t,
and we obtain, after dropping the prime sign,
Ut Now
+ (g(u) - sU)z =
fUzz .
(8.9.3)
cp is the solution of the stationary system (g(cp ) - scp )z =
tcpw
lim z � :too
cp (z) =
U±.
Integrating Eq. (8.9.4) gives us the Rankine-Hugoniot relation
(8.9.4)
REGULARIZATION FOR SYSTEMS OF EQUATIONS
337
(8.9.5) Thus, the end states u+ and u_ cannot be chosen arbitrarily. We now discuss the choice of u+ , u_ in more detail . For many systems occurring i n real applications, extensive analysis and numerical computations have been done. Based on those results, certain proper ties of the solutions are well understood. For example, there are solutions of the same form as in the scalar case; outside an internal layer of width &(f I log E I), they converge rapidly to the end states u±. Therefore, for large Izl, we can replace Eq. (8.9.3) by for z
» 1
(8.9.6a)
or for z
«
-1.
(8.9.6b)
Here A == agjau is the Jacobian evaluated at u+ and u_ , respectively. We now assume that the systems (8.9.6) are strictly hyperbolic for f 0, that is, we can order the eigenvalues A± s of A (u± ) - sl in ascending order ==
-
Af - s
<
Ai - s
< ... <
A� - s .
(8.9.7)
We first assume that AJ - S =I- 0 for all }. The case AJ- S == 0 will be discussed for the Euler equations at the end of this section. The corresponding eigenvectors are linearly independent, and, therefore, there are nonsingular matrices S± such that
(8.9.8) Here Af sl > 0 and Ai sl < 0 are positive and negative definite diagonal matrices. We introduce new variables in Eq. (8.9.6) by -
v
==
-
Su, S
==
S+
for z
» 1
and
S
==
S_
for z
«
-1,
and obtain
o + (Af - sI) � f�z' 01 + (Ai - sI)�1 = f��, ==
(8.9.9)
PROBLEMS WITH DISCONTINUOUS SOLUTIONS
338
for
z»1
and
z
«
- 1, respectively. Observe that the dimension of
A� can depend on z where z > 0 or z < O. Now consider the case
AT and
e = O. The components of v are the characteristic variables. They are also called the Riemann invariants. Let z » 1 . Then, the components of V I are constant along those characteristics that move to the right in the new coordinate system (z, t). Therefore, the values v{ at z = +00 do not have any direct influence on the solution in the shock layer z. On the other hand, the components of V II are constant along characteristics that move into the shock layer. Therefore, it is reasonable to describe
(8.9. l Oa) Correspondingly, for z « - 1, we obtain that the components of V I are constant along characteristics that move into the shock, and we describe
v�
=
lim V I .
(8.9. l Ob)
z ---7 -00
Assume that the dimension of V{ I is k and that of v� is p. In the original variables, we obtain k linear relations between the components of u+ and p linear relations between the components of u_ . Also, the m (nonlinear) relations of Eqs. (8.9.5) have to be satisfied. Thus, the 2m + 1 variables u+ and u_ and the shock speed s have to satisfy m + k + p relations. Therefore, we require that
k
+ P =
m
+
1;
that is, the number of characteristics entering the shock shall be m + 1 . This is called the entropy condition. Under reasonable assumptions, one can show that one can solve this system of equations. Assume now that there is a traveling wave I"(x st) . For large values of z, the function I" satisfies to first approximation -
Introducing
1/; = SI" as new variables,
we obtain for every component
v (AJ"!" - s)·t.< YI < ) The general solution of Eq.
=
E·1.Yl z<(v ) '
(8.9. 1 1) is given by
1/; (v ) (8.9. 11)
339
REGULARIZATION FOR SYSTEMS OF EQUATIONS
We are interested in bounded solutions . Therefore, we have to distinguish between the two cases. CASE 1 .
CASE
-
A; s > O. Then necessarily (1 2 = 0 and
-
2. A; s < O. Then (1 2
converges rapidly to
1/;t:,)
as
is undetermined and
z � 00.
U = u+
+ &
Thus, in the original variables we obtain
(
-I
max
ht
s
h+
- s
e�
z
)
.
(8.9.12)
The corresponding result holds for z « The conclusion from this analysis is the following: The Riemann invariants corresponding to characteristics corning out from the shock are constant, and they will cause no numerical difficulties. For the characteristics going into the shock, the corresponding Riemann invariants have a sharp layer. This is natural, since the value given at z = 00 must suddenly adjust to the conditions at the shock after having traveled smoothly 1eftwards across the right half-plane. For the discrete approximation, the sharp layer causes an oscillatory solution if no extra dissipation term is added. We now use a central difference method with an added artificial viscosity term. Our method of lines approximation in the moving coordinate system is .
(8.9.13) where Mj is an m x m positive definite matrix and both Vj and Mj are gridfunc tions on the grid Xj = jh, j = 0, ±I, ±2, . . .. In the scalar case, our linear analysis showed that we could eliminate spu rious oscillations if we choose the coefficient of the viscous term to be pro portional to the local characteristic speed. The situation is more difficult here, because we generally have many different speeds. Suppose we choose M = fl, where E maxI � p � m l Ap I. Spurious oscillations will be suppressed, but com ponents of the solution corresponding to small l Ap I will be smeared out exces sively. This is a severe problem if the eigenvalues differ by orders of mag nitude. We now look at this situation in more detail. Suppose that we have a steady '"
PROBLEMS WITH DISCONTINUOUS SOLUTIONS
340
solution with nearly constant states separated by a shock. We consider this solu tion outside the shock layer and linearize around the states on the left and right of the shock. After diagonalization of the resulting equations, the linearized steady problems are of the fonn
(8.9.14) where J is a constant matrix corresponding to A = g' , which is different on the left and right of the shock. Assuming that J has a complete set of eigenvectors, we can write
(8.9.15) where
A is diagonal and the columns of Q are the right eigenvectors of J. Set
Then Eq.
(8.9.14) can be written as (8.9. 16)
The components of Wj are the characteristic variables for this system. Because Q and Q- l are also constant on each side of the shock. If we detennine M from
J is constant,
(8.9. 17) then Eq. (8.9.16) is an uncoupled system of m scalar equations approximat ing Eq. (8.9.1 1). Consider the right-hand side of the shock. Again, we have to distinguish between the two cases.
1. 'A; - s > O. As in the continuous case, the bounded solutions are constants and therefore no dissipation is needed.
2. 'A; - s < O. Then we obtain a scalar equation of the same fonn as in the previous section. We can suppress oscillations by choosing Ep i 'A; s ih/2. ""
Thus, in the general case
p E =
{�
E,
i 'A p - s i h , for ingoing characteristic variables, otherwise.
REGULARIZATION FOR SYSTEMS OF EQUATIONS
341
Here E is a minimum level of dissipation, which one always should add to control noise. We now consider the equations for gas dynamics as an example. In particular,
+ (pu)x = 0 (pu)r + (pu2 + p)x 0 (pE)r + «PE + p)u)x + p E Pr
=
0
(8.9.18)
for -00 < x < 00 and t � o. The variables p, u, and E are the density, the velocity, and the total energy, respectively. The pressure p is determined from an equation of state
where the constant 'Y is the ratio of specific heats. The eigenvalues of the Jacobian of this system are AI = U, A2 = U + a, and A3 = u - a, where a = V'YP/p is the speed of sound. The so-called Riemann invariants that result from the diagonalization are
e
p'Y - 1
u
r3
where e = equations
=
+
u -
2a
'Y -
1
2a
'Y - 1
E - u2/2 is the internal energy. These quantities satisfy the differential p
= 1, 2, 3.
The values of the rv and Av are determined by the states on either side of the shock. Suppose we want to compute a steady-state solution that consists of a shock separating a supersonic region (u > a) on its left from a subsonic (0 < u < a) region on its right. Then
A2 > Ai > A3 > 0 on the left and
PROBLEMS WITH DISCONTINUOUS SOLUTIONS
342
on the right. All of the characteristics go into the shock from the left and only one, A3, goes into the shock from the right. The coefficients fv can now be chosen according to our rule above, that is, fp = lAp I h/2, p = 1 , 2 , 3 . The resulting matrix M will, in general, be nondiagonal, and we need to compute the eigensystem Q. However, if we have a strong shock with u � a, then l u + a l "" l ui "" l u - a i , and we can use M = EI with E "" lu l h/2 without computing Q. Again, the derivation is only valid for slowly moving shocks. In addition to shocks, the Euler equations have solutions with contact discon tinuities, where the density p is discontinuous. However, the variables u, p , and E are continuous across a contact discontinuity. The wave travels with speed s = u, and, because A l = u is continuous, the characteristics are parallel along the discon tinuity. Thus, the behavior of the solution is similar to those of linear equations. We have seen that discontinuous solutions ofthe linear model equation ur+ux = o are not computed very accurately, even if a high-order method is used. The rea son is that the phase error for high wave numbers is large, which causes oscilla tory solutions. By giving up some of the accuracy for low wave numbers, we can increase the accuracy for high wave numbers and thereby improve the situation. Consider the approximation
(8.9. 19) where
1
Instead of selecting the parameters a = and {3 = 1 such that Q is a sixth-order approximation of a/ax, we minimize the phase error in the least-square sense over a certain interval [- �o, �o] : ou'n Ci , {3
J�O ( 1 -�o
-
� �
(1
+
2 . 2 � (3 8 sm - + 3 2
a-
15
-
.
4
sm
� 2
) ) dt ..
The following table shows a and {3 for three different values of �o. Note that, for �o = 7r/3 and �o = 7r/4, Q is close to a fourth-order approx imation. (In fact, the standard fourth-order approximation gives essentially the same numerical result.) For the very large wave numbers, we still need a damp ing term, and we substitute Q -7 Q
(8.9.20)
343
REGULARIZATION FOR SYSTEMS OF EQUATIONS
TABLE 8.9.1. Coefficients to Minimize over [ -
�o
0.88 0.98 0.99
71"/2 71"/3 71"/4
�o' � o] {3
1 .90 1 .35 1 .84
Note that the extra term is of order h2 , which is smaller than what is required for shocks. Furthermore, it affects only the amplitude and not the phase. We have computed the solutions of the well-known shock-tube problem, also called the Riemann problem. The flow is governed by Eq. (8.9.18), and the initial data are constant on each side of the point Xo = 8.35 (p , PU, p E)1 = 0
_ -
1 1 , 8.92 8), { (0.5, (O, 445,0.0,0.31.4275),
for x < for x �
Xo, Xo.
The solution develops a shock, a rarefaction wave, and a contact discontinuity, all of them originate at x = Xo. For proper treatment of the relatively strong shock, we use the scalar vis cosity coefficient (h/2)D_ l uj + aj l D+ in all equations. However, it is activated only in the neighborhood of the shock and is cut off gradually by using the so-called van Leer limiter (see Section 8. 10). In the implementation, the sign of U + a s is tested to find the location of the shock, where the shock speed s is known a priori, for this example. The fourth-order Runge-Kutta method has been used for time discretization. In Figure 8.9.1 , the variables p + 3.3, p, and U 1.3 are shown for h = 0.1 at t = 1 .8 (the shifting of p and U has been introduced for clarity of the picture). The expansion wave and the shock look fine, but the contact discontinuity in p is smeared out too much. The reason for this is that all these waves are located at the same point initially. Therefore, the first-order viscosity term for the shock acts also on the contact discontinuity in the beginning, and the damping is too strong to keep the sharp profile. Once it is smeared out, there is no mecha nism for sharpening it, because the characteristics do not converge, as they do for the shock. The expansion wave has good accuracy in agreement with the experiment done in Figure 8.7.2. The most straightforward and efficient procedure to overcome the difficulty with the smeared out contact discontinuity is to refine the grid. In this way the first-order viscosity term becomes smaller, and the profile becomes sharper. The refined grid is used only in the beginning. When the two discontinuities are well separated, the computation continues on the coarse grid. In our example, the change was made at t = 0.6, when there were five grid points between the discontinuities. The new and better result is shown in Figure 8.9.2. -
-
PROBLEMS WITH DISCONTINUOUS SOLUTIONS
344
5 ,-------,--,---,
o
2
o o o
o
o o
-1
-2
L-__________�__________-L__________�__________�
o
5
15
10
20
Figure 8.9.1. Shock-tube computation with one grid.
5
�
0
2
4
0
3
b
2
0 0
co
� 0
o
0
-1
-2
0
o
5
10
15
Figure 8.9.2. Shock-tube computation with a refined grid in the beginning.
20
345
HIGH-RESOLUTION METHODS
In the analysis above, we have considered each side of the shock separately. The viscosity term has been designed so that the solution behaves well outside the shock layer and so that the shock speed is accurate. To obtain the cor rect detailed behavior of the solution inside the shock layer, further analysis is required.
8.10. HIGH-RESOLUTION METHODS We have seen that it is necessary to use a viscosity term to control spurious oscillations and avoid too much smearing when discontinuous solutions are approximated. In this section, we discuss several methods of this type that have been used. We consider approximations of the scalar equation
ut + g(u)x = 0,
- 00 <
x
t � 0
< 00,
(8. 10.1)
with periodic initial data
u(x, O) = f(x) = f(x + 1),
- 00 <
x
< 00
(8. 10.2)
and a periodic solution
u(x, t) = u(x + I, t),
t � O.
(8. 10.3)
We first consider so-called flux-limiter methods in conservation form
(8.10.4) where A = k/h and F(v,/) = F(V'/_ b . . . , V'/+ r ) on a grid Xj = jh, tn = nk. Consistency requires that F(u, u, . . . , u) = g(u) (see Exercise 8.10. 1). For example, if we let A(u) = g'(u) (for systems A is the Jacobian matrix), we can write the Lax-Wendroff method in conservation form
:
k2 (g(v/+ I ) - g(v/_ I » + 2 (Aj + l /2 (g(V/+ l ) - g(v/» v/ + I = v/ h 2h (8. 10.5) Aj - 1 /2 (g(VP ) - g(v/_ I ))), -
where Aj ± I /2 is evaluated at (v'/ + v'/± I )/2. Flux limiter methods are based on the idea of using a higher order flux F2 , like the Lax-Wendroff flux, in regions when the solution is smooth and a lower order flux F l when the solution is not smooth. A single flux function is built of these basic flux functions. One can view the higher order flux as the lower
PROBLEMS WITH DISCONTINUOUS SOLUTIONS
346
order flux plus a correction, that is,
(8.10.6) A flux limiter cp (uJ ) can be introduced to smoothly go from F I to F2 and define a new flux function
which can be rewritten as
(8. 10.7) If the data is smooth then cp should be near one; and cp should be near zero near a discontinuity. To discuss several flux limiters in a simple setting and to compare them with the computational results of Section 8.1, we again return to the model equation (8.1.1) with periodic solutions. We consider combining the Lax-Wendroff flux with the lower order upwind method flux. If we assume a > 0 in u/ + a ux = 0, then we can write this method as
ur I = up -
ak U ) h (uP - P- I
�� ( 1 - ahk ) (UP+ I - 2up + UP_ I ).
(8.10.8)
This flux function is
F(un = aup +
�(1
Equation (8.10.9) is of the form of Eq. flux limiter, we modify Eq. (8. 10.9) as
-
ak h
) (uP+
(8. 10.6),
I
with
- un . FI = auF -
(8. 10.9) To obtain a
(8. 10. 10) The cp function needs to be a function of the smoothness of the data, so it is natural to consider cp(OJ), where
(8. 10.11)
HIGH-RESOLUTION METHODS
347
is the ratio of neighboring gradients. If OJ is near 1 , then the solution is smooth and if it is very different from 1 , then the gradient is changing rapidly. This measure will not be accurate near extreme points of the solution. In that case, it is possible that OJ < 0, even if vJ is smooth. Beam and Warming have used the limiter
tp(O) Sweby suggested using
tp(O) =
{
0, 0, 1,
=
O.
0 :5; 0, 0 :5; 0 :5; 1 , 1 < 0,
(8. 1 0. 1 2)
(8. 1 0. 1 3)
and this yields the so-called second-order TVD scheme of Sweby. Roe has sug gested his "superbee" limiter
tp(O) =
max (0, min ( 1 , 20), min (0 , 2)).
(8. 1 0. 1 4)
Van Leer proposed the limiter defined by
tp(O) =
101 + 0 . 1 + 101
(8. 1 0. 1 5)
We repeat the calculations shown in Figure 8 . 1 . 1 using these limiters. Recall that we used step function initial data given by Eq. (8. 1 .2), set a = 1 , h = 2'11/240, and k = 2h/3, and displayed the results at t = 40k and t = 21r. In Figure 8.10.1 we repeat these calculations using the Beam-Warming, Sweby, Roe, and van Leer smoothers, respectively. The flux-limiter methods were developed as nonlinear methods to obtain more accuracy than could be obtained with monotone schemes and still pre vent oscillations. These methods were developed to have nonincreasing total variation. If we define the total variation TV(v/) as
TV(vn
=
L j
Ivp - vp_ d ·
(8 . 1 0. 1 6)
Then TVD schemes are those that satisfy TV(vP + 1 ) :5; TV(vP ). The methods we have described here, excepting the Beam-Warming method, were developed as TVD methods and have accuracy that is second order over most of the domain. However, it is known that TVD schemes must degenerate to first-order accuracy at extreme points. In our discussion above, we assumed a > 0. It is clear that a similar method can be developed for the case a < 0. However, these two cases can be written using a single formula that is valid for any wave speed.
Figure 8.10.1. (a)-(h). Results for the step-function initial data with the Beam-Warming, Sweby, Roe ("superbee"), and van Leer limiters, shown at t = 40k and at t = 2π.
The upwind flux function can be written as

    F₁(v_j^n) = (a/2)(v_{j+1}^n + v_j^n) − (|a|/2)(v_{j+1}^n − v_j^n),        (8.10.17)

and the Lax-Wendroff flux as

    F_LW(v_j^n) = (a/2)(v_{j+1}^n + v_j^n) − (a²k/2h)(v_{j+1}^n − v_j^n).        (8.10.18)

We can then introduce a limiter and write

    F(v_j^n) = F₁(v_j^n) + (φ(θ_j)/2)(sign(ak/h) − ak/h) a (v_{j+1}^n − v_j^n),        (8.10.19)

since |a| = sign(a)a = sign(ak/h)a. We can now define φ as before, but we need to define θ to be the ratio of slopes in the upwind direction. If we write ĵ = j − sign(ak/h), then

    θ_j = (v_{ĵ+1}^n − v_ĵ^n)/(v_{j+1}^n − v_j^n).        (8.10.20)

We now discuss slope-limiter methods. These methods are based on cell averages. If we use a grid of points x_j = jh, h > 0, then we can associate the cell average

    v̄_j(t) = h⁻¹ ∫_{x_{j−1/2}}^{x_{j+1/2}} v(x, t) dx        (8.10.21)
with the grid point x_j. The first method of this type was that of Godunov, which can be described as follows:
1. Given data v_j^n at time t_n, construct a function v^n(x) defined for all x. Godunov's method uses

       v^n(x) = v_j^n        for x_j ≤ x < x_j + h/2,
       v^n(x) = v_{j+1}^n    for x_j + h/2 ≤ x < x_{j+1},

   on x_j ≤ x ≤ x_{j+1}, and so on.

2. Solve the conservation law exactly on this subinterval to obtain v^{n+1}(x) at t_{n+1}.

3. Define v_j^{n+1} using Eq. (8.10.21) with v^{n+1}(x), t = t_{n+1}.
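For the linear flux g(u) = au with a > 0, the exact Riemann solution at each interface is simply the upwind state, so the three steps above collapse to the first-order upwind scheme. A minimal sketch (periodic boundaries and grid sizes are our own choices):

```python
import numpy as np

def godunov_step_linear(v, a, k, h):
    """One Godunov step for u_t + a u_x = 0 (a > 0) on periodic cells.

    The exact Riemann solution at x_{j+1/2} is the upwind (left) state, so
    averaging the exact evolution over each cell reduces to upwind differencing.
    """
    assert a * k / h <= 1.0           # CFL condition: waves stay inside one cell
    flux = a * v                      # g evaluated at the upwind state
    return v - (k / h) * (flux - np.roll(flux, 1))

h = 0.01; k = 0.5 * h; a = 1.0
x = np.arange(0.0, 1.0, h)
v = np.where(x < 0.5, 1.0, 0.0)
w = godunov_step_linear(v, a, k, h)
assert abs(w.sum() - v.sum()) < 1e-10   # conservation form preserves the cell-average sum
```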
This method has been generalized by using more accurate reconstructions in step 1 above. For instance, we could consider a piecewise linear reconstruction

    v^n(x) = v_j^n + s_j(x − x_j),   x_{j−1/2} ≤ x < x_{j+1/2},        (8.10.22)

with slope s_j based on the data v_j^n. If one takes the obvious choice

    s_j = (v_{j+1}^n − v_j^n)/h,

one recovers the Lax-Wendroff method for Eq. (8.1.1). This shows that these methods can be second-order accurate, but may still exhibit oscillatory behavior. Several slope-limiter methods have been constructed that yield TVD schemes. One simple such choice is the min mod limiter with

    s_j = min mod(v_j^n − v_{j−1}^n, v_{j+1}^n − v_j^n)/h,

where

    min mod(a, b) = ½(sign(a) + sign(b)) min(|a|, |b|).
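The min mod slope selection can be sketched directly from the formula above; the example data and helper names are ours, chosen to show that the limited slope vanishes at a discontinuity:

```python
import numpy as np

def minmod(p, q):
    """min mod(a, b) = (1/2)(sign a + sign b) min(|a|, |b|)."""
    return 0.5 * (np.sign(p) + np.sign(q)) * np.minimum(np.abs(p), np.abs(q))

def minmod_slopes(v, h):
    """Limited slopes s_j for the piecewise linear reconstruction (8.10.22)."""
    return minmod(v - np.roll(v, 1), np.roll(v, -1) - v) / h

v = np.array([0.0, 0.0, 1.0, 1.0, 0.5, 0.0])
s = minmod_slopes(v, h=1.0)
# At j = 2 one one-sided difference is 1 and the other is 0, so the limited
# slope is 0 and the reconstruction creates no overshoot there.
assert s[2] == 0.0
```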
We now look at the ENO methods, the so-called essentially nonoscillatory methods. To achieve higher order accuracy, the TVD criterion is replaced by the condition that there exists a constant α such that

    TV(v^{n+1}) ≤ (1 + αk) TV(v^n),        (8.10.23)

which guarantees that

    TV(v^n) ≤ e^{αt_n} TV(v^0),        (8.10.24)

so that these methods will be total variation stable. We describe a second-order ENO method for simplicity and then indicate how the ideas can be generalized. We again consider Eq. (8.10.1)

    u_t + g(u)_x = 0,   t ≥ 0,
with

    u(x, 0) = f(x).

If we integrate this equation over [x_{j−1/2}, x_{j+1/2}] × [t_n, t_{n+1}], we get

    ū_j^{n+1} = ū_j^n − (k/h)(ḡ_{j+1/2} − ḡ_{j−1/2}),        (8.10.25)

where

    ḡ_{j+1/2} = k⁻¹ ∫₀ᵏ g(u(x_{j+1/2}, t_n + τ)) dτ        (8.10.26)

and

    ū_j^n = h⁻¹ ∫_{x_{j−1/2}}^{x_{j+1/2}} u(x, t_n) dx.

The function u in the integral in Eq. (8.10.26) must be reconstructed from the cell averages ū_j^n to define an algorithm. The ENO method uses piecewise polynomials to reconstruct u(x, t_n) and then Taylor expansion in time over [t_n, t_{n+1}]. Assume that v_j^n approximates ū_j^n. We then compute v_j^{n+1} from

    v_j^{n+1} = v_j^n − (k/h)(ḡ_{j+1/2} − ḡ_{j−1/2}).        (8.10.27)
We define ḡ_{j+1/2} in terms of the two interface states        (8.10.28)

where

    vᴸ_{j+1/2} = v_j^n + (h/2)(1 − (k/h)a_j^n) s_j^n,
    vᴿ_{j+1/2} = v_{j+1}^n − (h/2)(1 + (k/h)a_{j+1}^n) s_{j+1}^n,        (8.10.29)

with a_j^n = g′(v_j^n), and s_j^n is the slope of the reconstruction, which is a piecewise smooth function of x around x_j. The problem of solving the initial-value problem for a nonlinear conservation law with piecewise constant data is referred to as the Riemann problem. In Figure 8.10.2, we show the result obtained using a second-order ENO method for the same linear constant-coefficient problem used for Figure 8.10.1. For this constant scalar case with g(u)_x = u_x and periodic boundary conditions, we use ḡ_{j+1/2} = vᴸ_{j+1/2} and compute the slopes s_j^n with the min mod function, which is the same as defined previously, extended to three variables.
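One step of the scheme (8.10.27)-(8.10.29) for the linear case g(u) = u can be sketched as below. This is not the text's exact implementation: for brevity the two-argument min mod replaces the three-argument version mentioned above, and the periodic grid is our choice.

```python
import numpy as np

def minmod(p, q):
    return 0.5 * (np.sign(p) + np.sign(q)) * np.minimum(np.abs(p), np.abs(q))

def eno2_step_linear(v, k, h):
    """Second-order step (8.10.27)-(8.10.29) for u_t + u_x = 0, periodic.

    Here a_j^n = g'(v_j^n) = 1, so the upwind interface value is the left
    state v^L_{j+1/2}; the slopes are limited with the two-argument min mod.
    """
    s = minmod(v - np.roll(v, 1), np.roll(v, -1) - v) / h
    vL = v + 0.5 * h * (1.0 - k / h) * s          # v^L_{j+1/2}, Eq. (8.10.29)
    g = vL                                        # numerical flux at x_{j+1/2}
    return v - (k / h) * (g - np.roll(g, 1))      # Eq. (8.10.27)

h = 2 * np.pi / 240; k = 2 * h / 3
x = np.arange(0.0, 2 * np.pi, h)
v = np.sin(x)
w = eno2_step_linear(v, k, h)
assert abs(w.sum() - v.sum()) < 1e-10             # conservation of the mean
assert np.max(np.abs(w)) <= 1.0 + 1e-9            # no new extrema are generated
```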
Figure 8.10.2. Second-order ENO results: (a) t = 40k; (b) t = 2π.
The ENO results are not as accurate as the results of Roe's "superbee." The ENO methods smear out contact discontinuities, that is, discontinuities that traverse a region where the characteristics are parallel. In this situation, there is no propagation of information into the discontinuity to sharpen it. To further compare the ENO scheme with the other four schemes, we applied each method to a new set of initial conditions. The equation we are approximating is the scalar equation defined in Eq. (8.10.1) with g(u)_x = u_x. The initial conditions are taken from a paper by Harten (1989),

    u₀(x + 0.5) = { −x sin(3πx²/2),        −1 < x < −1/3,
                    |sin(2πx)|,            |x| < 1/3,
                    2x − 1 − sin(3πx)/6,   1/3 < x < 1.        (8.10.30)

There is a shift in the values of the right-hand side for display purposes. Figure 8.10.3 shows the exact solution as a solid line and the approximated solutions
Figure 8.10.3. A comparison of methods: (a) Beam-Warming; (b) Sweby; (c) Roe's "superbee"; (d) van Leer; (e) ENO.
as dots at t = 2. These figures show that the second-order ENO scheme is more accurate for these complex initial conditions than the other second-order schemes described in this section.
EXERCISES

8.10.1. Prove that consistency requires
F(u, u, . . . , u) = g(u) in Eq. (8.10.4).
BIBLIOGRAPHIC NOTES

Much work has been done on the theory and development of numerical methods for nonlinear hyperbolic conservation laws and shock solutions during the last decade. The material in this book does not cover all of it. Recent books devoted to this topic have been written by Godlewski and Raviart (1990) and LeVeque (1990). There is also the review paper by Osher and Tadmor (1988). Applications in fluid dynamics are detailed in the book by Hirsch (1990). The results discussed on the behavior of the oscillations at a discontinuity for centered difference methods are from Hedstrom (1975) and Chin and Hedstrom (1978). Other results and references on the spreading error can be found in Brenner, Thomee, and Wahlbin (1975). Linear monotone schemes are shown to be at most first order accurate in Harten, Hyman, and Lax (1976). The proof of Theorem 8.8.1 can be found in Kreiss and Lorenz (1989). The recovery of piecewise smooth solutions from oscillatory approximations has been discussed by Mock and Lax (1978) and by Gottlieb and Tadmor (1984). Smoothing of the initial data to increase the rate of convergence has been treated in Majda, McDonough, and Osher (1978). Our discussion of the required size of the dissipative coefficient is based on the work of Kreiss and Johansson (1993). Gunilla Johansson also did the shock tube computation presented in Section 8.9. Hybrid methods that combine difference methods with the method of characteristics have been considered (see, e.g., Henshaw, 1985). Difference methods in conservation form were considered by Lax and Wendroff (1960). Osher and Chakravarthy (1984) have shown that TVD methods are first order at extreme points. Tadmor (1984) has shown that all three-point TVD schemes are at most first order accurate, and Goodman and LeVeque (1985) have shown that, except for trivial cases, any TVD scheme in two space dimensions is at most first order accurate.
Engquist, Lotstedt, and Sjogreen (1989) have derived a different type of TVD method by adding a nonlinear filter to standard centered methods. The flux limiters discussed were introduced by Sweby (1984), Roe (1985), and van Leer (1974). The ENO schemes were developed by Harten et al. (1986, 1987), Harten (1989), and Shu and Osher (1988). The second-order example is based on Harten (1989), where ENO methods with "subcell resolution" were also developed to reduce the smearing of contact discontinuities. Tadmor (1984) has shown that the Lax-Friedrichs, Engquist-Osher (1980), Godunov, and Roe methods all have numerical viscosity in decreasing order.
PART II. INITIAL-BOUNDARY-VALUE PROBLEMS

9. THE ENERGY METHOD FOR INITIAL-BOUNDARY-VALUE PROBLEMS
9.1. CHARACTERISTICS AND BOUNDARY CONDITIONS FOR HYPERBOLIC SYSTEMS IN ONE SPACE DIMENSION
We start with the scalar hyperbolic equation

    ∂u/∂t = a ∂u/∂x,   a = constant,        (9.1.1)

in the domain 0 ≤ x ≤ 1, t ≥ 0 (see Figure 9.1.1). At t = 0, we give initial data

    u(x, 0) = f(x).        (9.1.2)

Figure 9.1.1. Characteristics of Eq. (9.1.1) for a > 0, a = 0, a < 0.
As we have seen in Chapter 8, the solutions are constant along the characteristic lines x + at = constant. If a > 0, then the solution of our problem is uniquely determined for

    x ≥ 0,   t ≥ 0,   x + at ≤ 1.

To extend the solution for x + at > 1, we specify a boundary condition at x = 1,

    u(1, t) = g₁(t),   t ≥ 0.        (9.1.3)
For the solution to be smooth in the whole domain, it is necessary that g₁(t) and f(x) are smooth functions. It is also necessary that g₁(t) and f(x) be compatible, that is, satisfy compatibility conditions. The most obvious necessary condition is that

    g₁(0) = f(1).        (9.1.4)

Otherwise, the solution has a jump. (In that case, we only obtain a generalized solution.) If Eq. (9.1.4) is satisfied and g₁ ∈ C¹(t) and f(x) ∈ C¹(x), then u(x, t) is Lipschitz continuous. To obtain solutions belonging to C¹(x, t), we first note that v = u_x satisfies

    v_t = a v_x,   0 ≤ x ≤ 1,   t ≥ 0,
    v(x, 0) = f′(x),
    v(1, t) = a⁻¹ g₁′(t).

To ensure that v is continuous everywhere, the initial and boundary values must match each other at (x = 1, t = 0). This leads to the condition

    a f′(1) = g₁′(0).        (9.1.5)
Also, w = u_t satisfies

    w_t = a w_x,   0 ≤ x ≤ 1,   t ≥ 0,
    w(x, 0) = a u_x(x, 0) = a f′(x),
    w(1, t) = g₁′(t),

and the condition (9.1.5) also ensures that w is continuous everywhere. Thus, u is C¹(x, t). Higher order derivatives satisfy the same differential equation (9.1.1), and we get higher order regularity by adding more restrictions on higher order derivatives of f and g₁ at (x = 1, t = 0). The same technique can be applied to any
problem to ensure that we get smooth solutions. For systems of nonlinear equations, these compatibility conditions may become complicated. The easiest way to satisfy all of them is to require that all initial and boundary data (and forcing functions) vanish near the boundaries at t = 0. We now consider the other cases for a. If a = 0, we do not need any boundary conditions, because ∂u/∂t ≡ 0 implies

    u(x, t) ≡ f(x),   a = 0.        (9.1.6)

If a < 0, then the solution is uniquely determined if we give the boundary condition

    u(0, t) = g₀(t),   a < 0.        (9.1.7)
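The case a > 0 can be made concrete with a short sketch of the method of characteristics (the function names and test data are our own; only the tracing back along x + at = constant comes from the text):

```python
def solve_advection(x, t, a, f, g1):
    """Exact solution of u_t = a u_x on 0 <= x <= 1 for a > 0, cf. (9.1.1)-(9.1.3).

    Solutions are constant along x + a t = const: the value comes from the
    initial data f while x + a t <= 1, and from the boundary data g1 at x = 1
    afterwards.
    """
    assert a > 0
    if x + a * t <= 1.0:
        return f(x + a * t)               # characteristic starts on t = 0
    return g1(t - (1.0 - x) / a)          # characteristic starts on x = 1

f = lambda x: x ** 2
g1 = lambda t: 1.0 + t                    # compatible: g1(0) = f(1) = 1, Eq. (9.1.4)
assert abs(solve_advection(0.2, 0.1, 1.0, f, g1) - f(0.3)) < 1e-12
assert abs(solve_advection(0.9, 0.5, 1.0, f, g1) - g1(0.4)) < 1e-12
```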
Now we consider a strongly hyperbolic system with constant coefficients

    ∂u/∂t = A ∂u/∂x,   0 ≤ x ≤ 1,   t ≥ 0,        (9.1.8)

where u has m components. Let S be composed of the eigenvectors of A such that

    S⁻¹AS = Λ = diag(Λᴵ, Λᴵᴵ, Λᴵᴵᴵ),   Λᴵ > 0,   Λᴵᴵ < 0,   Λᴵᴵᴵ = 0,        (9.1.9)

where Λᴵ, Λᴵᴵ, Λᴵᴵᴵ are diagonal matrices. We introduce a new variable v = S⁻¹u. Then, we obtain the system

    ∂v/∂t = Λ ∂v/∂x,        (9.1.10)
or

    ∂vᴵ/∂t = Λᴵ ∂vᴵ/∂x,   ∂vᴵᴵ/∂t = Λᴵᴵ ∂vᴵᴵ/∂x,   ∂vᴵᴵᴵ/∂t = 0.

Using the previous argument, we obtain a unique solution if we specify the initial condition

    v(x, 0) = f(x),   0 ≤ x ≤ 1,

and the boundary conditions for the ingoing characteristic variables vᴵ(1, t) and vᴵᴵ(0, t). With these conditions the problem decomposes into m scalar problems. We can couple the components by generalizing the boundary conditions to

    vᴵ(1, t) = R₁ᴵᴵ vᴵᴵ(1, t) + R₁ᴵᴵᴵ vᴵᴵᴵ(1, t) + gᴵ(t),
    vᴵᴵ(0, t) = R₀ᴵ vᴵ(0, t) + R₀ᴵᴵᴵ vᴵᴵᴵ(0, t) + gᴵᴵ(t).        (9.1.11)

Here R₁ᴵᴵ, R₁ᴵᴵᴵ, R₀ᴵ, and R₀ᴵᴵᴵ are rectangular matrices that may depend on t. It is easy to describe these conditions in geometrical terms. Λᴵᴵᴵ = 0 implies that vᴵᴵᴵ(x, t) = fᴵᴵᴵ(x). Thus, we need only discuss the influence of the boundary conditions on vᴵ and vᴵᴵ. We write Eq. (9.1.11) as

    vᴵ(1, t) = Rᴵᴵ vᴵᴵ(1, t) + g̃ᴵ(t),   g̃ᴵ := gᴵ + R₁ᴵᴵᴵ fᴵᴵᴵ(1),
    vᴵᴵ(0, t) = Rᴵ vᴵ(0, t) + g̃ᴵᴵ(t),   g̃ᴵᴵ := gᴵᴵ + R₀ᴵᴵᴵ fᴵᴵᴵ(0),        (9.1.12)

where Rᴵ := R₀ᴵ and Rᴵᴵ := R₁ᴵᴵ. Starting with t = 0, the initial values for vᴵ and vᴵᴵ are transported along the characteristics to the boundaries x = 0 and x = 1, respectively. Using the boundary conditions, these values are transformed into values for vᴵᴵ(0, t) and vᴵ(1, t), which are then transported along the characteristics to the boundaries x = 1 and x = 0, respectively. Here the process is repeated (see Figure 9.1.2). Because of these geometrical properties the components of v are called characteristic variables. The number of boundary conditions for x = 0 is equal to the number of negative eigenvalues of A or, equivalently, the number of characteristics entering the region. Correspondingly, at x = 1, the number of boundary conditions is equal to the number of positive eigenvalues of A. No boundary conditions are required, or may be given, for vanishing eigenvalues. In most applications, the differential equations are given in the nondiagonal
Figure 9.1.2.

form (9.1.8), and the boundary conditions are linear relations

    L₀u(0, t) = g₀(t),   L₁u(1, t) = g₁(t).        (9.1.13)

Here L₀ and L₁ are rectangular matrices whose rank is equal to the number of negative and positive eigenvalues of A, respectively (or, better, the number of characteristics that enter the region at the boundary). If we use the transformation (9.1.9), the differential equations are transformed into Eq. (9.1.10), and the boundary conditions become

    L₀Sv(0, t) = g₀(t),   L₁Sv(1, t) = g₁(t).        (9.1.14)

That is, we again obtain linear relations for the characteristic variables. Our initial-boundary-value problem can be solved if Eq. (9.1.14) can be written in the form (9.1.11); then, we can solve the relations (9.1.14) for vᴵᴵ(0, t) and vᴵ(1, t), respectively. We have now proved the following theorem.
Theorem 9.1.1. Consider the system (9.1.8) for 0 ≤ x ≤ 1, t ≥ 0 with initial data at t = 0 and boundary conditions (9.1.13). This problem has a solution if the system is strongly hyperbolic, the number of boundary conditions is equal to the number of characteristics entering the region at the boundary, and we can write the boundary conditions so that the characteristic variables connected with the ingoing characteristics can be expressed in terms of the other variables.

As an example, we consider the normalized wave equation
    ∂²u/∂t² = ∂²u/∂x²,   0 ≤ x ≤ 1,   t ≥ 0,        (9.1.15)

with the initial conditions

    u(x, 0) = f₀(x),   ∂u/∂t (x, 0) = f₁(x),        (9.1.16)

and boundary conditions

    u(0, t) = g₀(t),   u(1, t) = g₁(t).        (9.1.17)

Introducing a new function ũ by

    ũ(x, t) = ∫₀ᵗ ∂u/∂x (x, τ) dτ + ∫₀ˣ f₁(ξ) dξ,

or, in other words,

    ∂ũ/∂t = ∂u/∂x,

we can write Eq. (9.1.15) as a first-order system

    ∂/∂t (u, ũ)ᵀ = A ∂/∂x (u, ũ)ᵀ,   A = [0 1; 1 0],        (9.1.18a)

with initial conditions

    u(x, 0) = f₀(x),   ũ(x, 0) = ∫₀ˣ f₁(ξ) dξ,        (9.1.18b)

and boundary conditions (9.1.17). The matrix A has one positive and one negative eigenvalue and, therefore, there is exactly one characteristic that enters the region on each side. Thus, the number of boundary conditions is correct. We
now transform A to diagonal form. The eigenvalues λⱼ and the corresponding eigenvectors are

    λ₁ = 1,   φ₁ = (1, 1)ᵀ/√2,
    λ₂ = −1,   φ₂ = (1, −1)ᵀ/√2.

Thus,

    S = (1/√2) [1 1; 1 −1],   S⁻¹AS = diag(1, −1).

Introducing

    û = (u + ũ)/√2,   v̂ = (u − ũ)/√2,

as new variables gives us

    ∂û/∂t = ∂û/∂x,   ∂v̂/∂t = −∂v̂/∂x,

with boundary conditions

    û(0, t) + v̂(0, t) = √2 u(0, t) = √2 g₀(t),
    û(1, t) + v̂(1, t) = √2 u(1, t) = √2 g₁(t).
Therefore, the conditions of Theorem 9.1.1 are satisfied, and we can solve the initial-boundary-value problem.

We now consider equations with variable coefficients. We start with the scalar equation
    ∂u/∂t = λ(x, t) ∂u/∂x,   0 ≤ x ≤ 1,   t ≥ 0,        (9.1.19)

with initial values

    u(x, 0) = f(x).

From Section 8.2, we know its solution is constant along the characteristic lines
Figure 9.1.3. Characteristics of Eq. (9.1.19).

    dx/dt = −λ(x, t),   x(0) = x₀.

If λ(x, t) < 0, then we have to give boundary conditions on the boundary x = 0,

    u(0, t) = g₀(t),

and, if λ(x, t) > 0, we give

    u(1, t) = g₁(t)

(see Figure 9.1.3). Thus, we have the same situation as we had for equations with constant coefficients. However, λ(x, t) can change sign in the interior of the region. If λ(0, t) > 0 and λ(1, t) < 0, no boundary conditions need to be specified anywhere, and if λ(0, t) < 0 and λ(1, t) > 0, then we have to specify boundary conditions on both sides. As before, if λ(0, t) ≡ 0, no boundary conditions need to be given at x = 0 (see Figure 9.1.4). As in Section 8.2, we can also solve the

Figure 9.1.4. Characteristics of Eq. (9.1.19) for λ(0, t) > 0, λ(1, t) < 0; λ(0, t) < 0, λ(1, t) > 0; λ(0, t) = λ(1, t) = 0.
inhomogeneous equation

    u_t = λ(x, t)u_x + F(x, t)

by the method of characteristics. We now consider systems

    ∂u/∂t = A(x, t) ∂u/∂x + B(x, t)u + F(x, t),   0 ≤ x ≤ 1,   t ≥ 0,
    u(x, 0) = f(x),   0 ≤ x ≤ 1.        (9.1.20)

We assume that this is a strongly hyperbolic system, that is, that the eigenvalues of A are real and that A can be smoothly transformed into diagonal form. Therefore, we can, without restriction, assume that A = Λ is diagonal. We solve the system by iteration,

    ∂u^[n+1]/∂t = Λ ∂u^[n+1]/∂x + Bu^[n] + F;

that is, we have to solve m scalar equations at every step. It is clear that one should specify boundary conditions of the type of Eq. (9.1.11). We now have the following theorem.

Theorem 9.1.2. Consider the system (9.1.20), where A = Λ is diagonal. Also assume that the eigenvalues λⱼ do not change sign at the boundaries; that is, one of the following relations holds at the boundaries for each j and for all t: λⱼ > 0, λⱼ ≡ 0, or λⱼ < 0. If the boundary conditions are of the form in Eq. (9.1.11), then the initial-boundary-value problem has a unique solution.
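The counting rule above (boundary conditions at x = 0 for negative eigenvalues, at x = 1 for positive ones, none for zero eigenvalues) is easy to check numerically; the sketch below applies it to the wave-equation system (9.1.18a). The helper name is our own.

```python
import numpy as np

def boundary_condition_count(A):
    """Number of boundary conditions at x = 0 and at x = 1 for u_t = A u_x.

    As discussed in this section: the count at x = 0 equals the number of
    negative eigenvalues of A, the count at x = 1 the number of positive ones,
    and vanishing eigenvalues require (and admit) no boundary condition.
    """
    lam = np.linalg.eigvalsh(A)            # A is symmetric here, eigenvalues real
    return int(np.sum(lam < 0)), int(np.sum(lam > 0))

# Wave-equation system (9.1.18a): eigenvalues +1 and -1, so exactly one
# boundary condition on each side, as found in the example above.
A = np.array([[0.0, 1.0], [1.0, 0.0]])
assert boundary_condition_count(A) == (1, 1)
```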
EXERCISES

9.1.1. Determine all boundary conditions of the type of Eq. (9.1.13) such that the initial-boundary-value problem for the differential equation

    u_t = [a b 0; b a b; 0 b a] u_x,   0 ≤ x ≤ 1,   t ≥ 0,

has a unique solution. Here a and b are real constants.
9.2. ENERGY ESTIMATES FOR HYPERBOLIC SYSTEMS IN ONE SPACE DIMENSION
In this section, we consider systems

    ∂u/∂t = A(x, t) ∂u/∂x + B(x, t)u,   0 ≤ x ≤ 1,   t ≥ t₀,        (9.2.1a)
    u(x, t₀) = f(x),        (9.2.1b)

where

    A = [Λᴵ 0; 0 Λᴵᴵ],
    Λᴵ = diag(λ₁, λ₂, . . . , λ_r) > 0,
    Λᴵᴵ = diag(λ_{r+1}, λ_{r+2}, . . . , λ_m) < 0,

is a real diagonal matrix. For simplicity only, we assume that A is nonsingular. As we have seen in the last section, the problem above has a unique solution if we specify the boundary conditions

    uᴵ(1, t) = Rᴵᴵ(t)uᴵᴵ(1, t),   uᴵᴵ(0, t) = Rᴵ(t)uᴵ(0, t).        (9.2.2)
(9.2.2) The aim of this section is to derive energy estimates. We assume that the data are compatible, that is, that at t = 0 the initial data satisfy the boundary conditions. We use the notation
    G(x)|₀¹ = G(1) − G(0),
and prove the following lemma.
Lemma 9.2.1. Let u and v be smooth vector functions and A a smooth matrix function. Then

    (u, Av_x) = −(u_x, Av) − (u, A_x v) + (u, Av)|₀¹,

and

    |(u, Av)| ≤ ‖A‖∞ ‖u‖ ‖v‖,

where ‖A‖∞ = sup_x |A|.

Proof. The first relation follows by integration by parts. The second follows directly from the definition of the scalar product [cf. Eq. (1.2.18)].
Next, we want to prove the following theorem.

Theorem 9.2.1. Let u(x, t) be a smooth solution of the initial-boundary-value problem shown in Eqs. (9.2.1) and (9.2.2). There are constants K and α that do not depend on f such that

    ‖u(·, t)‖ ≤ Ke^{α(t − t₀)} ‖u(·, t₀)‖ = Ke^{α(t − t₀)} ‖f‖.        (9.2.3)

Proof. We have

    d/dt ‖u‖² = (u_t, u) + (u, u_t)
             = (Bu, u) + (u, Bu) + (Au_x, u) + (u, Au_x)
             = (Bu, u) + (u, Bu) − (u, A_x u) + (u, Au)|₀¹
             ≤ 2α‖u‖² + (u, Au)|₀¹,

where

    2α = sup_x (u, (B + B* − A_x)u)/‖u‖².

Using the boundary conditions, we obtain

    (u(1, t), A(1, t)u(1, t)) = (uᴵ(1, t), Λᴵ(1, t)uᴵ(1, t)) + (uᴵᴵ(1, t), Λᴵᴵ(1, t)uᴵᴵ(1, t))
                             = (uᴵᴵ(1, t), C(1, t)uᴵᴵ(1, t)),

where C := (Rᴵᴵ)*ΛᴵRᴵᴵ + Λᴵᴵ. Now assume that |Rᴵᴵ(t)| is small enough to guarantee

    C(1, t) ≤ −½λ₀I,

where λ₀ denotes the minimum of |λⱼ| at the boundaries. Then

    (u(1, t), A(1, t)u(1, t)) ≤ −½λ₀ |uᴵᴵ(1, t)|².

Correspondingly,

    −(u(0, t), A(0, t)u(0, t)) ≤ −½λ₀ |uᴵ(0, t)|²,

if |Rᴵ(t)| is also sufficiently small. In this case we have

    d/dt ‖u‖² ≤ 2α‖u‖²,

and the desired estimate (9.2.3) follows. Because uᴵ(1, t) and uᴵᴵ(0, t) are linear combinations of uᴵᴵ(1, t) and uᴵ(0, t), respectively, we get the sharper estimate

    ‖u(·, t)‖² + ∫_{t₀}^{t} (|u(0, τ)|² + |u(1, τ)|²) dτ ≤ constant · e^{2α(t − t₀)} ‖f‖².

If Rᴵ, Rᴵᴵ are not sufficiently small, then we can proceed in the following way. Introduce a new variable w into Eq. (9.2.1) by wᴵ = d(x)uᴵ, wᴵᴵ = uᴵᴵ, where d is a function of x. Then w is the solution of a system of the same form, where B̃ depends on B and d. Now choose d as a smooth function such that |d⁻¹(0)Rᴵ| and |d(1)Rᴵᴵ| are small enough that the previous estimates are valid for w. Then we also obtain estimates for u. This proves the theorem.
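A quick numerical sanity check of the norm bound in Theorem 9.2.1 (not from the text; the upwind scheme, grid, and initial pulse are our own choices): for the scalar case u_t = u_x with B = 0 and the homogeneous boundary condition u(1, t) = 0, the theorem gives α = 0, so the norm should never grow.

```python
import numpy as np

def step_outflow(u, lam):
    """Upwind step for u_t = u_x on 0 <= x <= 1 with u(1, t) = 0."""
    assert 0 < lam <= 1
    return u + lam * (np.append(u[1:], 0.0) - u)   # boundary value 0 enters at x = 1

h = 0.01; lam = 0.5
x = np.arange(0.0, 1.0, h)
u = np.exp(-100 * (x - 0.5) ** 2)
norms = [np.sqrt(h) * np.linalg.norm(u)]
for _ in range(200):
    u = step_outflow(u, lam)
    norms.append(np.sqrt(h) * np.linalg.norm(u))
# Discrete analogue of the estimate with alpha = 0: the norm is nonincreasing,
# and the pulse eventually leaves the domain through x = 0.
assert all(b <= a + 1e-12 for a, b in zip(norms, norms[1:]))
```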
As before for periodic problems, we define a solution operator as the mapping

    u(x, t) = S(t, t₀)u(x, t₀),

where u(x, t) is the solution of Eqs. (9.2.1a) and (9.2.2) with initial data u(x, t₀) at t = t₀. By Theorem 9.2.1,

    ‖S(t, t₀)‖ ≤ Ke^{α(t − t₀)}.

By Duhamel's principle, the function

    u(x, t) = S(t, t₀)f(x) + ∫_{t₀}^{t} S(t, τ)F(x, τ) dτ

is the solution of the inhomogeneous problem

    u_t = Au_x + Bu + F,   0 ≤ x ≤ 1,   t ≥ t₀,
    u(x, t₀) = f(x),
    uᴵᴵ(0, t) = Rᴵ(t)uᴵ(0, t),   uᴵ(1, t) = Rᴵᴵ(t)uᴵᴵ(1, t).        (9.2.4)

The bound on the solution operator allows us to estimate the solution of problem (9.2.4), precisely as in Eq. (4.7.17) for the initial-value problem. The boundary conditions are also often inhomogeneous:

    uᴵᴵ(0, t) = Rᴵ(t)uᴵ(0, t) + gᴵᴵ(t),   uᴵ(1, t) = Rᴵᴵ(t)uᴵᴵ(1, t) + gᴵ(t).

In this case, we introduce two new variables such that the new problem has homogeneous boundary conditions, but the initial function f and the forcing function F are modified. To derive energy estimates, it is not necessary that the system be in diagonal form. Let us consider systems

    u_t = B(x, t)u_x + C(x, t)u,   0 ≤ x ≤ 1,   t ≥ t₀,        (9.2.6)

with boundary conditions

    L₀u(0, t) = 0,   L₁u(1, t) = 0.        (9.2.7)

Here B = B* is a Hermitian matrix with exactly m − r negative eigenvalues at x = 0 and exactly r positive eigenvalues at x = 1. L₀ and L₁ are (m − r) × m and r × m matrices of maximal rank, respectively. Now, we want to prove the following theorem.
Theorem 9.2.2. If

    (−1)ʲ (wⱼ, B(j, t)wⱼ) ≥ 0,   j = 0, 1,        (9.2.8)

for all vectors wⱼ satisfying

    L₀w₀ = 0,   L₁w₁ = 0,        (9.2.9)

then any smooth solution of Eqs. (9.2.6) and (9.2.7) satisfies an energy estimate of the form of Eq. (9.2.3).

Proof. Integration by parts gives us
    d/dt ‖u‖² = (u, Cu) + (Cu, u) + (u, Bu_x) + (Bu_x, u)
             = (u, Cu) + (Cu, u) − (u, B_x u) + (u, Bu)|₀¹
             ≤ constant · ‖u‖².

This proves the theorem.

As an example we consider the linearized Euler equations (9.2.10) discussed in Section 6.1. Introducing the new variables ũ = u and p̃ = ap/R gives us a symmetric system

    ∂/∂t (ũ, p̃)ᵀ = B ∂/∂x (ũ, p̃)ᵀ,   B = −[U a; a U].        (9.2.11)

The eigenvalues of B are

    λ = −(U ± a),   a > 0.

To discuss the boundary conditions we have to distinguish between three cases:

1. Supersonic Inflow, U(0, t) > a(0, t). There are two characteristics that enter the region, and we have to specify ũ and p̃. The homogeneous boundary conditions are

    ũ = p̃ = 0.

Condition (9.2.8) is satisfied, because (w₀, Bw₀) = 0.

2. Subsonic Flow, |U(0, t)| < a(0, t). There is only one characteristic that enters the region, and we use

    ũ(0, t) = −αp̃(0, t),   α real,

as a boundary condition. For all w₀ = (w₀⁽¹⁾, w₀⁽²⁾) with w₀⁽¹⁾ = −αw₀⁽²⁾ we obtain

    (w₀, B(0, t)w₀) = (2aα − U(α² + 1)) |w₀⁽²⁾|²
                   = (2(a − U)α − U(1 − α)²) |w₀⁽²⁾|²
                   ≥ 0,

provided |1 − α| is sufficiently small. Note that α = 1 corresponds to specifying the ingoing characteristic variable ũ + p̃.

3. Supersonic Outflow, U(0, t) < −a(0, t). Both characteristics are leaving the region, and no boundary condition may be given.

The second boundary at x = 1 is treated in the same way.

REMARK. In Case 2, we can also include sonic inflow U(0, t) = a(0, t), and in Case 3, we can include sonic outflow U(0, t) = −a(0, t).
Until now, we have assumed homogeneous boundary conditions when using the energy method. For some classes of problems it is also possible to obtain an estimate in a direct way with inhomogeneous boundary conditions. As an example, consider the problem

    ∂u/∂t + ∂u/∂x = 0,   0 ≤ x ≤ 1,        (9.2.12a)
    u(x, 0) = f(x),        (9.2.12b)
    u(0, t) = g(t).        (9.2.12c)

Integration by parts gives us

    d/dt ‖u‖² = −(u, u_x) − (u_x, u) = |u(0, t)|² − |u(1, t)|².

Therefore, for any η > 0, v = e^{−ηt}u satisfies

    d/dt ‖v‖² + |v(·, t)|²_Γ ≤ 2|v(0, t)|² ≤ 2|e^{−ηt}g(t)|²,

where

    |v(·, t)|²_Γ = |v(0, t)|² + |v(1, t)|².

Therefore,

    ‖v(·, T)‖² + ∫₀ᵀ |v(·, t)|²_Γ dt ≤ ‖v(·, 0)‖² + 2∫₀ᵀ |e^{−ηt}g(t)|² dt,

or, for η > η₀ ≥ 0,

    ‖e^{−ηT}u(·, T)‖² + ∫₀ᵀ |e^{−ηt}u(·, t)|²_Γ dt ≤ C(‖f‖² + ∫₀ᵀ |e^{−ηt}g(t)|² dt).        (9.2.13)

In this estimate, one can directly read off the dependence of u on the initial data f and the boundary data g. A forcing function F can also be included in Eq. (9.2.12a), and the estimate is obtained by using Duhamel's principle. As demonstrated in Section 9.1, hyperbolic systems in one space dimension with constant coefficients can always be transformed to diagonal form, and, if the ingoing characteristic dependent variables are prescribed at the boundaries, we get the completely decoupled problem

    ∂u/∂t = Λ ∂u/∂x + F,   0 ≤ x ≤ 1,   t ≥ 0,
    u(x, 0) = f(x),
    uᴵᴵ(0, t) = g₀(t),   uᴵ(1, t) = g₁(t).        (9.2.14)

Here uᴵ and uᴵᴵ correspond to the positive and negative eigenvalues of Λ, respectively. By applying the technique used for the scalar problem (9.2.12), we immediately obtain the estimate (9.2.13), where |g(t)|² is interpreted as |g₀(t)|² + |g₁(t)|². In other words, we can always find boundary conditions such that, with an arbitrary forcing function F, initial function f, and boundary functions g₀, g₁, an estimate of the form of Eq. (9.2.13) holds.
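A discrete counterpart of the estimate (9.2.13) can be checked numerically for the scalar problem (9.2.12). The sketch below (our own construction; the upwind scheme and the boundary data are illustrative choices) verifies the bound with η = 0 and C = 1, which follows from the convexity of the upwind update.

```python
import numpy as np

def inflow_step(u, lam, g):
    """Upwind step for u_t + u_x = 0 with inflow data u(0, t) = g, Eq. (9.2.12)."""
    assert 0 < lam <= 1
    left = np.concatenate(([g], u[:-1]))       # u_{j-1}, boundary value at j = 0
    return (1 - lam) * u + lam * left

h = 0.01; k = 0.5 * h; lam = k / h
x = np.arange(0.0, 1.0, h)
u = np.zeros_like(x)                           # f = 0
g = lambda t: np.sin(10 * t)
acc = 0.0                                      # accumulates sum_n k |g(t_n)|^2
for n in range(400):
    gn = g(n * k)
    acc += k * gn ** 2
    u = inflow_step(u, lam, gn)
    # discrete analogue of (9.2.13) with eta = 0, f = 0, and C = 1:
    assert h * np.sum(u ** 2) <= acc + 1e-12
```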
Indeed, we can obtain such an estimate for general boundary conditions of the type of Eq. (9.2.2) in essentially the same direct way. However, when dealing with the discrete case in the next chapter, that will not generally be possible. We have derived the energy estimates under the assumption that a smooth solution exists. This is no restriction, because we have already constructed such a solution in the previous section by the method of characteristics. However, one can also prove the existence of a solution in the following way. We construct difference approximations, which satisfy the corresponding discrete estimates. Then one can interpolate the discrete solution in such a way that the interpolant is smooth and converges as h, k → 0 to the solution of our problem. This is a general principle: Given an initial-boundary-value problem, one derives estimates under the assumption that a smooth solution exists. Then one constructs a difference approximation whose solutions satisfy corresponding discrete estimates. Suitable interpolants converge as h, k → 0 to the solution of the problem.
EXERCISES

9.2.1. Derive boundary conditions at x = 1 for the system (9.2.10) such that an estimate

    ‖u(·, t)‖ ≤ K‖u(·, 0)‖

holds for u = (u, p)ᵀ. What is the smallest possible value of K?

9.2.2. Consider the system (9.2.11) with the inhomogeneous versions of the boundary conditions discussed above. Derive an estimate of the form of Eq. (9.2.13). Prove that such an estimate holds in general.
9.3. ENERGY ESTIMATES FOR PARABOLIC DIFFERENTIAL EQUATIONS IN ONE SPACE DIMENSION

The simplest parabolic initial-boundary-value problem is the normalized heat equation

    u_t = u_xx,   0 ≤ x ≤ 1,   t ≥ 0,        (9.3.1)

with initial conditions

    u(x, 0) = f(x),        (9.3.2)

and the so-called Dirichlet boundary conditions

    u(0, t) = u(1, t) = 0.        (9.3.3)
(See Figure 9.3.1.) Assume that Eqs. (9.3.1) to (9.3.3) have a smooth solution. We want to derive an energy estimate. Lemma 9.2.1 gives us

    d/dt (u, u) = (u, u_t) + (u_t, u) = (u, u_xx) + (u_xx, u)
               = −2‖u_x‖² + (uū_x + u_xū)|₀¹ = −2‖u_x‖² ≤ 0,

or

    ‖u(·, t)‖ ≤ ‖f‖.
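The decay of the norm carries over to standard discretizations; the following sketch (our own illustration, with an explicit Euler scheme and grid sizes chosen by us) confirms it for the heat equation with Dirichlet conditions.

```python
import numpy as np

def heat_step(u, mu):
    """Explicit Euler step for u_t = u_xx with u(0,t) = u(1,t) = 0, Eqs. (9.3.1)-(9.3.3)."""
    assert mu <= 0.5                       # stability restriction for this scheme
    w = np.zeros_like(u)
    w[1:-1] = u[1:-1] + mu * (u[2:] - 2 * u[1:-1] + u[:-2])
    return w                               # boundary values stay zero

h = 0.02; k = 0.4 * h ** 2; mu = k / h ** 2
x = np.arange(0.0, 1.0 + h / 2, h)
u = np.sin(np.pi * x)                      # f(x); compatible with the boundary data
n0 = np.linalg.norm(u)
for _ in range(100):
    u = heat_step(u, mu)
assert np.linalg.norm(u) < n0              # discrete analogue of d/dt ||u||^2 <= 0
```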
Figure 9.3.1. (u = f(x) at t = 0; u = 0 at x = 0 and x = 1.)

This estimate can be generalized to the equation

    u_t = a(x, t)u_xx + b(x, t)u_x + c(x, t)u,        (9.3.4)

with the same initial and boundary conditions. Here a, b, and c are smooth functions with a(x, t) ≥ a₀ > 0. Now we obtain

    d/dt (u, u) = I + II + III,

where, by Lemma 9.2.1,

    I = (u, au_xx) + (au_xx, u)
      = −(u_x, au_x) − (u, a_x u_x) − (au_x, u_x) − (a_x u_x, u) + a(uū_x + u_xū)|₀¹
      ≤ −2(u_x, au_x) + 2‖a_x‖∞ ‖u‖ ‖u_x‖
      ≤ −2a₀‖u_x‖² + 2‖a_x‖∞ ‖u‖ ‖u_x‖
      ≤ −(3/2)a₀‖u_x‖² + (2‖a_x‖²∞/a₀)‖u‖²;        (9.3.5)

    II = (u, bu_x) + (bu_x, u) ≤ 2‖b‖∞ ‖u‖ ‖u_x‖ ≤ (‖b‖²∞/a₀)‖u‖² + a₀‖u_x‖²;

    III = (u, cu) + (cu, u) ≤ 2‖c‖∞ ‖u‖².

Thus,

    d/dt ‖u‖² ≤ −(a₀/2)‖u_x‖² + 2α‖u‖²,   2α = 2‖a_x‖²∞/a₀ + ‖b‖²∞/a₀ + 2‖c‖∞,        (9.3.6)

or

    ‖u(·, t)‖ ≤ e^{αt}‖f‖.

Instead of Dirichlet boundary conditions we can use

    u_x(j, t) + rⱼu(j, t) = 0,   j = 0, 1.        (9.3.7)

To obtain an energy estimate, we need a so-called Sobolev inequality, as in Lemma 9.3.1.

Lemma 9.3.1. Let f ∈ C¹(0, ℓ). For every ε > 0,

    ‖f‖²∞ ≤ ε‖f_x‖² + (ε⁻¹ + ℓ⁻¹)‖f‖².

Proof. Let x₁ and x₂ be points with

    |f(x₁)| = min_x |f(x)|,   |f(x₂)| = max_x |f(x)| = ‖f‖∞.

Without restriction, we can assume that x₁ < x₂. Then
    |f(x₂)|² − |f(x₁)|² = ∫_{x₁}^{x₂} d/dx |f|² dx = ∫_{x₁}^{x₂} (f f̄_x + f_x f̄) dx
                       ≤ 2‖f‖ ‖f_x‖ ≤ ε‖f_x‖² + ε⁻¹‖f‖²;

that is,

    ‖f‖²∞ ≤ |f(x₁)|² + ε‖f_x‖² + ε⁻¹‖f‖².

Observing that

    ℓ|f(x₁)|² ≤ ‖f‖²,

the lemma follows.
REMARK. Lemma 9.3.1 holds for .e = 00 (Exercise 9.3.1). Now consider Eq. (9.3.4) with the boundary conditions (9.3.7). By Eq. (9.3.5), the only new terms in the energy estimate, compared with the Dirichlet
case, come from the boundary terms
By Lemma
9.3.1,
these can be estimated by
E ::;; 2 1I a ll� ( l ro l + I r l D ll u ll :, ::;; 2 I1 a ll�€ ( l ro l + I r l D il ux II 2 + 2 I1 a ll= (€- 1 + 1)( l ro l + i r I Dll u ll 2 . Choosing
€ = (ao/8)( ll a ll� ( l ro l + I r l l )t l , we obtain instead of Eq. (9.3.6)
:t II u l1 2 - � lI uxI1 2 + a* lI u Il 2 , ::;;
a* = a + 2 I1 a ll� (€ - 1 + 1)( 1 ra 1 + i r I D . Thus, we have an energy estimate in this case. We can also generalize the results to systems
at
au
= Auxx + Bux + Cu, U(X, O) = f(x),
o ::;; x ::;; 1, t � 0, (9.3.8)
u is a vector-valued function and A = A(x, t), B = B(x, t), and C = C(x, t) are m X m matrices that depend smoothly on x, t. We assume that
where
ENERGY ESTIMATES FOR PARABOLIC DIFFERENTIAL EQUATIONS
A + A* � 2aoI,
ao
379
(9.3.9)
° constant.
>
The boundary conditions consist of m linearly independent relations at each of the boundaries x = ° and x = 1 :
j
=
0, 1 .
(9.3.10)
To obtain an energy estimate we make the following assumption.
Assumption 9.3.1.
and Wj with
The boundary conditions are such that, for all vectors Vj j
0, 1 ,
the inequalities c � ° constant,
(9.3 . 1 1)
hold. REMARK. In the next chapter, we will use the Laplace transform technique to show that problems can be well-posed under less restrictive conditions than Eq.
(9.3.11).
We can prove the following theorem.
If Eqs. (9.3.9) and (9.3.II) hold, then the smooth solutions of the initial-boundary-value problem shown in Eqs. (9.3.8) and (9.3.10) satisfy an energy estimate.
Theorem 9.3.1.
Proof. As for the scalar equation (9.3.4), integration by parts gives us

$$\frac{d}{dt}\|u\|^2 = I + II + III,$$

where

$$\begin{aligned}
I &= (u, Au_{xx}) + (Au_{xx}, u)\\
  &= -(u_x, Au_x) - (u, A_x u_x) - (Au_x, u_x) - (A_x u_x, u) + \bigl(\langle Au_x, u\rangle + \langle u, Au_x\rangle\bigr)\big|_0^1\\
  &\le -(u_x, (A + A^*)u_x) + 2\|A_x\|_\infty \|u_x\|\,\|u\| + c\bigl(|u(0,t)|^2 + |u(1,t)|^2\bigr)\\
  &\le -2a_0\|u_x\|^2 + 2\|A_x\|_\infty \|u_x\|\,\|u\| + c\bigl(|u(0,t)|^2 + |u(1,t)|^2\bigr).
\end{aligned}$$

By using Lemma 9.3.1 we obtain, as for the scalar case,

$$I \le -c_1\|u_x\|^2 + c_2\|u\|^2, \qquad c_1 > 0, \quad c_2 > 0.$$

The terms II and III have the same structure as the corresponding terms in Eq. (9.3.5) for the scalar case and, therefore, we obtain the estimates

$$II \le c_3\|u\|^2 + \delta\|u_x\|^2, \qquad \delta > 0, \qquad III \le 2\|C\|_\infty\|u\|^2.$$

By choosing $0 < \delta < c_1$, we have an energy estimate.
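A small numerical check of this energy estimate (our own illustration; the matrices below are sample choices with $A + A^* \ge 2a_0 I$, $a_0 \approx 0.92$): discretizing $P = A\,\partial^2/\partial x^2 + B\,\partial/\partial x + C$ with homogeneous Dirichlet conditions, the largest eigenvalue $\alpha$ of the symmetric part of the discrete operator stays bounded under grid refinement, which is the discrete form of $\frac{d}{dt}\|u\|^2 \le 2\alpha\|u\|^2$:

```python
import numpy as np

# Discretize P = A d^2/dx^2 + B d/dx + C on (0,1), u = 0 at x = 0, 1.
# A is symmetric positive definite; B, C are arbitrary lower-order terms.
A = np.array([[2.0, 0.3], [0.3, 1.0]])
B = np.array([[0.0, 1.0], [-2.0, 0.5]])
C = np.array([[0.1, 0.0], [0.4, -0.2]])

def alpha_max(N):
    """Largest eigenvalue of the symmetric part of the discrete operator."""
    h, m = 1.0 / N, 2
    n = m * (N - 1)                      # interior points only (Dirichlet)
    P = np.zeros((n, n))
    for i in range(N - 1):
        P[m*i:m*i+m, m*i:m*i+m] = -2*A/h**2 + C
        if i > 0:
            P[m*i:m*i+m, m*(i-1):m*(i-1)+m] = A/h**2 - B/(2*h)
        if i < N - 2:
            P[m*i:m*i+m, m*(i+1):m*(i+1)+m] = A/h**2 + B/(2*h)
    return np.linalg.eigvalsh((P + P.T) / 2).max()

# alpha stays O(1) under refinement: the diffusion term absorbs the
# O(1/h) contribution of the first-order term, as in the proof above.
print(alpha_max(50), alpha_max(200))
```

The printed values remain of moderate size as $N$ grows, mirroring the bound $\alpha \le \|B\|^2/(2a_0) + \lambda_{\max}(C + C^*)/1$ obtained by absorbing the convection term into the diffusion term.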
We have shown that the smooth solutions of the above initial-boundary-value problem satisfy an energy estimate. This does not guarantee that such solutions exist. However, as for hyperbolic systems, one can prove existence using difference approximations provided that the initial and boundary conditions are compatible. The compatibility conditions are certainly satisfied if $f(x)$ has compact support in $0 \le x \le 1$, that is, if $f(x)$ vanishes in a neighborhood of $x = 0, 1$. If the compatibility conditions are not satisfied, then we approximate $f$ by a sequence $\{f_\nu\}$ with compact support, and we define, in the same way as earlier, a unique generalized solution that still satisfies the energy estimate. One can show that the generalized solution is smooth except in the corners $x = 0, 1$, where $t = 0$.

Theorem 9.3.1 shows that the existence of energy estimates is independent of the form of the lower order terms (first- and zero-order terms). Therefore, $B$ does not need to have real eigenvalues. In many applications, systems are of the form

$$u_t = \nu u_{xx} + Bu_x, \qquad 0 \le x \le 1, \quad t \ge 0, \qquad (9.3.12)$$

where $\nu$ is a constant with $0 < \nu \ll 1$. For example, in fluid flow problems, $\nu$ represents a small viscosity. In this case, we still obtain an energy estimate, even if the eigenvalues of $B$ are complex. However, the exponential growth constant $\alpha$ in the energy estimate is proportional to $\nu^{-1}$. If $B$ is symmetric, we can often do better. As an example, we consider Eq. (9.3.12) with
$$u = \begin{bmatrix} u^{(1)} \\ u^{(2)} \end{bmatrix}, \qquad B = \begin{bmatrix} b_{11} & b \\ b & 0 \end{bmatrix}, \qquad b_{11},\, b \text{ real constants}, \qquad (9.3.13a)$$

and boundary conditions (9.3.13b).
One can also show that, as $\nu \to 0$, the solutions converge to the solution of the "inviscid" problem (9.3.14a) with boundary conditions (9.3.14b) (see Exercise 9.3.3).
EXERCISES

9.3.1. Prove Lemma 9.3.1 for $\ell = \infty$.

9.3.2. Assume that the matrix $A$ in Eq. (9.3.8) is positive definite and diagonal. What are the most general matrices $R_{0j}$, $R_{1j}$ such that Eq. (9.3.11) is satisfied?

9.3.3. Prove that the solutions of Eqs. (9.3.12) and (9.3.13) converge to the solution of Eq. (9.3.14) as $\nu \to 0$.
9.4. WELL-POSED PROBLEMS

In the last two sections, we derived energy estimates for hyperbolic and parabolic systems and discussed possible boundary conditions that lead to well-posed problems. In this section, we define well-posed problems in general. We consider systems of partial differential equations

$$u_t = P\Bigl(x, t, \frac{\partial}{\partial x}\Bigr)u + F, \qquad u(x, t_0) = f(x), \qquad (9.4.1)$$

in the strip $0 \le x \le \ell$, $t \ge t_0$. (If $\ell = \infty$, we obtain a quarter-space problem.) Here $u = (u^{(1)}, \ldots, u^{(m)})^T$ is a vector function with $m$ components and

$$P\Bigl(x, t, \frac{\partial}{\partial x}\Bigr) = \sum_{\nu=0}^{p} A_\nu(x,t)\,\frac{\partial^\nu}{\partial x^\nu}$$

is a differential operator of order $p$ with smooth matrix coefficients. At $x = 0, \ell$ we give boundary conditions

$$L_0\Bigl(t, \frac{\partial}{\partial x}\Bigr)u(0,t) = g_0, \qquad L_1\Bigl(t, \frac{\partial}{\partial x}\Bigr)u(\ell,t) = g_1. \qquad (9.4.2)$$

Here $L_0$ and $L_1$ are differential operators of order $r$. In most applications $r \le p - 1$. However, there are cases where $r \ge p$. In analogy with the corresponding definition for the Cauchy problem, we define well-posedness for homogeneous boundary conditions.
Definition 9.4.1. Consider Eqs. (9.4.1) and (9.4.2) with $F = g_0 = g_1 = 0$. We call the problem well posed if, for every $f \in C^\infty$ that vanishes in a neighborhood of $x = 0, \ell$, it has a unique smooth solution that satisfies the estimate

$$\|u(\cdot,t)\| \le K e^{\alpha(t - t_0)}\|u(\cdot,t_0)\|, \qquad (9.4.3)$$

where $K$ and $\alpha$ do not depend on $f$ and $t_0$.

REMARK. If the coefficients of $P$, $L_0$, and $L_1$ do not depend on $t$, then we can replace Eq. (9.4.3) by

$$\|u(\cdot,t)\| \le K e^{\alpha t}\|u(\cdot,0)\|,$$

because the transformation $t' = t - t_0$ does not change the differential equation or boundary conditions.

As before, we can define a solution operator $S(t, t_0)$. If $F = g_0 = g_1 = 0$, then we can write the solution of the differential equation in the form

$$u(x,t) = S(t, t_0)u(x, t_0),$$

and Eq. (9.4.3) says that

$$\|S(t, t_0)\| \le K e^{\alpha(t - t_0)}.$$
This again shows that we can extend the admissible initial data to all functions in $L_2$. Also, as before, we can use the solution operator to solve the inhomogeneous differential equation with homogeneous boundary conditions ($g_0 = g_1 = 0$). If $F \in C^\infty(x,t)$ vanishes in a neighborhood of $x = 0, \ell$, then, by Duhamel's principle, the solution is

$$u(x,t) = S(t, t_0)f(x) + \int_{t_0}^{t} S(t, \tau)F(x, \tau)\, d\tau.$$

Also, $u \in C^\infty$. Again we can extend the admissible $F$ to all functions $F \in L_2(x,t)$.

We can also solve problems with inhomogeneous boundary conditions provided we can find a smooth function $\varphi$ that satisfies the boundary conditions, that is,

$$L_0\varphi(0,t) = g_0, \qquad L_1\varphi(\ell,t) = g_1.$$

In this case $\tilde u = u - \varphi$ satisfies homogeneous boundary conditions. Also, $\tilde u$ satisfies Eq. (9.4.1) with $f$ and $F$ replaced by $\tilde f = f - \varphi$ and $\tilde F = F - \varphi_t + P(x, t, \partial/\partial x)\varphi$, respectively. Thus, the estimate for $\tilde u$ depends on the derivatives of $\varphi$. In general, this does not cause any difficulty with respect to $x$, because we can choose $\varphi$ as a smooth function of $x$. However, $\varphi_t$ can only be bounded by $\partial g_j/\partial t$. Thus, the boundary data must be differentiable. So it is desirable that the reduction process be avoided. Instead, one would like to estimate $u$ directly in terms of $F$, $f$, and $g$. This leads to the following definition.
Definition 9.4.2. The problem is strongly well posed if it is well posed and, instead of Eq. (9.4.3), the estimate

$$\|u(\cdot,t)\|^2 \le K(t, t_0)\Bigl(\|u(\cdot,t_0)\|^2 + \int_{t_0}^{t}\bigl(\|F(\cdot,\tau)\|^2 + |g_0(\tau)|^2 + |g_1(\tau)|^2\bigr)\, d\tau\Bigr) \qquad (9.4.4)$$

holds. Here $K(t,t_0)$ is a function that is bounded in every finite time interval and does not depend on the data.

One can prove the following theorem.
Theorem 9.4.1. For hyperbolic systems (9.2.1) with general inhomogeneous boundary conditions

$$u^{II}(0,t) = R^{I}(t)u^{I}(0,t) + g_0(t), \qquad u^{I}(1,t) = R^{II}(t)u^{II}(1,t) + g_1(t),$$

and for parabolic systems (9.3.8) with Neumann boundary conditions

$$u_x(j,t) = g_j(t), \qquad j = 0, 1, \qquad (9.4.5)$$

the initial-boundary-value problem is strongly well posed.

Proof. For hyperbolic problems we have indicated a proof in Section 9.2. For parabolic problems, the proof follows by an easy modification of the estimates in Section 9.3.
The definition of strong well-posedness is given for general differential operators. As we demonstrated earlier, we obtain stronger estimates, including the boundary values, for hyperbolic equations (Exercise 9.4.1). Furthermore, if a second-order parabolic problem is strongly well posed, then we can estimate the derivative of the solution as well as the boundary values (Exercise 9.4.2).

It should be pointed out that strongly well posed is, in general, a more stringent requirement than well posed. Parabolic problems with other than Neumann boundary conditions are not generally strongly well posed. In more than one space dimension, hyperbolic problems can be well posed but not strongly well posed. The same is true for higher order systems. However, even for general systems such as (9.4.1), one can always find boundary conditions such that the initial-boundary-value problem is strongly well posed if $P$ is a semibounded operator for the Cauchy problem.
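Strong well-posedness can also be observed numerically. The sketch below is entirely our own illustration: the model $u_t = u_x$, the data, and the constant $K = 10$ are sample choices. It solves the scalar problem with ingoing boundary data $g$ at $x = 1$ by a first-order upwind scheme and checks an estimate of the form (9.4.4):

```python
import numpy as np

# u_t = u_x on [0,1]: characteristics move left, so the boundary
# condition u(1,t) = g(t) is prescribed at the ingoing side x = 1.
N = 200
h, k = 1.0 / N, 0.5 / N                 # CFL number k/h = 0.5 (stable)
x = np.linspace(0.0, 1.0, N + 1)
f = lambda x: np.sin(2*np.pi*x)         # initial data
g = lambda t: np.sin(5*t)               # boundary data

u, t, bint = f(x), 0.0, 0.0
while t < 1.0 - 1e-12:
    u[:-1] = u[:-1] + (k/h) * (u[1:] - u[:-1])   # upwind step
    t += k
    u[-1] = g(t)
    bint += k * g(t)**2                 # accumulates int_0^t |g(tau)|^2 dtau

norm2 = h * np.sum(u**2)                # ||u(., t)||^2
data2 = h * np.sum(f(x)**2) + bint      # ||f||^2 + int |g|^2
print(norm2 <= 10.0 * data2)            # True: estimate of type (9.4.4), K = 10
```

For this problem the exact energy identity is $\frac{d}{dt}\|u\|^2 = |u(1,t)|^2 - |u(0,t)|^2 \le |g(t)|^2$, so $K = 10$ holds with a wide margin; the upwind scheme is dissipative and inherits the bound.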
EXERCISES

9.4.1. Carry out the proof of Theorem 9.4.1 in detail and prove that the estimate

$$\|u(\cdot,t)\|^2 + \int_{t_0}^{t}\bigl(|u(0,\tau)|^2 + |u(1,\tau)|^2\bigr)\, d\tau \le K(t,t_0)\Bigl(\|u(\cdot,t_0)\|^2 + \int_{t_0}^{t}\bigl(\|F(\cdot,\tau)\|^2 + |g_0(\tau)|^2 + |g_1(\tau)|^2\bigr)\, d\tau\Bigr) \qquad (9.4.6)$$

holds for hyperbolic equations.
9.4.2. Prove Theorem 9.4.1 for second order parabolic systems with Neumann boundary conditions and derive the estimate

$$\|u(\cdot,t)\|^2 + \int_{t_0}^{t}\bigl(\|u_x(\cdot,\tau)\|^2 + |u(0,\tau)|^2 + |u(1,\tau)|^2\bigr)\, d\tau \le K(t,t_0)\Bigl(\|u(\cdot,t_0)\|^2 + \int_{t_0}^{t}\bigl(\|F(\cdot,\tau)\|^2 + |g_0(\tau)|^2 + |g_1(\tau)|^2\bigr)\, d\tau\Bigr). \qquad (9.4.7)$$

9.5. SEMIBOUNDED OPERATORS
In the previous sections, we considered differential equations

$$u_t = Pu, \qquad 0 \le x \le \ell, \quad t \ge t_0,$$

with boundary conditions consisting of homogeneous linear combinations of $u$ and its derivatives,

$$L_0 u(0,t) = L_1 u(\ell,t) = 0.$$

If $\ell = \infty$, the boundary condition at $x = \ell$ is replaced by the condition $\|u(\cdot,t)\| < \infty$. For all practical purposes we can assume that $u(x,t)$ and all its derivatives converge to zero as $x \to \infty$. In most cases, the solutions satisfied an energy estimate of the form

$$2\,\mathrm{Re}\,(Pu, u) = (Pu, u) + (u, Pu) \le 2\alpha\|u\|^2.$$

We now formalize these results to some extent. For every fixed $t$, the differential operator $P$ can be considered as an operator $\mathcal{P}$ in the functional analysis sense, if we make its domain of definition $\mathcal{D}$ precise. We define $\mathcal{D}$ to be the set of functions

$$\mathcal{D} := \{w \in C^\infty,\; L_0 w(0) = L_1 w(\ell) = 0\}.$$

[If $\ell = \infty$, the condition $L_1 w(\ell) = 0$ is replaced by $\|w\| < \infty$.]

Definition 9.5.1. We call $\mathcal{P}$ semibounded if there exists a constant $\alpha$ such that, for all $t$ and all $w \in \mathcal{D}$,

$$2\,\mathrm{Re}\,(Pw, w) = (Pw, w) + (w, Pw) \le 2\alpha\|w\|^2.$$

(Later in this book we will also use the notation $P$ for $\mathcal{P}$.)

If $\mathcal{P}$ is semibounded and $u$ is a solution of the differential equation with $u \in \mathcal{D}$ for every fixed $t$, then

$$\frac{d}{dt}\|u\|^2 = (Pu, u) + (u, Pu) \le 2\alpha\|u\|^2,$$
and the basic energy estimate follows. A theorem of the following type would be ideal: If $\mathcal{P}$ is semibounded, then the corresponding initial-boundary-value problem is well posed. However, this is not true. If we change the domain $\mathcal{D}$ to $\mathcal{D}_1 \subset \mathcal{D}$ by adding more boundary conditions, then the operator $\mathcal{P}_1$ is still semibounded, but the corresponding initial-boundary-value problem might not have a solution, because we may have overdetermined the solution at the boundary. Therefore, we define maximally semibounded as follows.

Definition 9.5.2. $\mathcal{P}$ is called maximally semibounded if the number $q$ of linearly independent boundary conditions is minimal, that is, if there exist no boundary conditions such that $\mathcal{P}$ is semibounded and their number is smaller than $q$.

REMARK. One uses the word maximal because, in some sense, $\mathcal{D}$ is as large as possible.
Let us apply the concept to

$$u_t = u_x, \qquad 0 \le x \le 1, \quad t \ge 0.$$

We assume that at $x = 0, 1$ boundary conditions of the form

$$L_0 u(0,t) = 0, \qquad L_1 u(1,t) = 0$$

are to be specified. We want to choose them so that $\mathcal{P}$ will be maximally semibounded. For all $w \in C^\infty$ we have

$$(w_x, w) + (w, w_x) = |w|^2\big|_0^1.$$

Therefore, we must choose the boundary conditions so that

$$|w(1)|^2 - |w(0)|^2 \le 0.$$

Clearly, no boundary conditions are necessary for $x = 0$, and we need to bound $|w(1)|^2$. This is only possible if

$$w(1) = 0.$$

Thus, the minimal number of boundary conditions is one. If we consider systems
then we arrive at our previous conditions, namely, one must express the ingoing characteristic variables in terms of those which are outgoing.

Let us apply the principle to the linearized Korteweg-de Vries equation

$$u_t = u_x + \delta u_{xxx}, \qquad 0 \le x \le 1, \quad t \ge 0, \quad \delta > 0.$$

For all $w \in C^\infty$, we have
$$(Pw, w) + (w, Pw) = \Bigl(|w|^2 + 2\delta\,\mathrm{Re}\,(\bar w w_{xx}) - \delta|w_x|^2\Bigr)\Big|_0^1.$$

Thus, we must choose the boundary conditions to guarantee that this boundary term is nonpositive. For $x = 1$, we need one condition,

$$w_{xx} = a_1 w \quad \text{with} \quad 1 + 2\delta\,\mathrm{Re}(a_1) \le 0, \qquad \text{or} \qquad w = 0.$$

For $x = 0$, we need two conditions, because $|w_x|^2$ appears with a positive multiplier. These conditions are

$$w = w_x = 0.$$
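The boundary-term computation above can be verified by quadrature. In the sketch below (sample values $\delta = 0.5$, $a_1 = -2$, and a polynomial test function are our own choices, assuming the equation $u_t = u_x + \delta u_{xxx}$ as in the example), $2(Pw, w)$ equals the boundary term and is nonpositive:

```python
import numpy as np

# P = d/dx + delta*d^3/dx^3 with w(0) = w_x(0) = 0 and w_xx(1) = a1*w(1),
# where 1 + 2*delta*a1 = -1 <= 0.  w = x^3 - 2x^2 satisfies all three
# conditions: w(0) = w'(0) = 0 and w''(1) = 2 = a1*w(1).
delta, a1 = 0.5, -2.0
w    = lambda x: x**3 - 2*x**2
wp   = lambda x: 3*x**2 - 4*x
wppp = lambda x: 6.0 + 0.0*x

x = np.linspace(0.0, 1.0, 100001)
y = (wp(x) + delta*wppp(x)) * w(x)                           # (Pw)*w pointwise
ip = 2 * (x[1] - x[0]) * (y[1:-1].sum() + 0.5*(y[0] + y[-1]))  # 2*(Pw, w), trapezoid rule

# boundary term (|w|^2 + 2*delta*w*w_xx - delta*|w_x|^2) at x = 1; the
# contribution at x = 0 vanishes since w(0) = w_x(0) = 0:
bdry = (1 + 2*delta*a1) * w(1.0)**2 - delta * wp(1.0)**2
print(round(ip, 6), bdry)    # -1.5 -1.5
```

The integral and the boundary term agree, and both are $\le 0$, confirming semiboundedness (with $\alpha = 0$) for this choice of boundary conditions.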
One can show that the resulting initial-boundary-value problem is well posed. In general, one can prove the following theorem.

Theorem 9.5.1. Consider a general system

$$u_t = \sum_{\nu=0}^{p} A_\nu(x,t)\,\frac{\partial^\nu u}{\partial x^\nu} + F, \qquad 0 \le x \le \ell, \quad t \ge t_0,$$

with boundary conditions

$$L_0 u(0,t) = L_1 u(\ell,t) = 0, \qquad t \ge t_0.$$
Assume that $A_p$ is nonsingular and that the coefficients are smooth. If the associated operator $\mathcal{P}$ is maximally semibounded, then the initial-boundary-value problem is well posed.

We now discuss another general result. Consider a parabolic system

$$u_t^{I} = A u_{xx}^{I}, \qquad A + A^* \ge \delta I > 0,$$

in $n$ unknowns $u^{I} = (u^{(1)}, \ldots, u^{(n)})^T$ and a symmetric hyperbolic system

$$u_t^{II} = B u_x^{II}$$

in $m$ unknowns $u^{II} = (u^{(n+1)}, \ldots, u^{(n+m)})^T$. Assume that there are exactly $r$ eigenvalues $\lambda$ of $B$ with $\lambda < 0$ at $x = 0$. Now we couple the systems to obtain

$$u_t^{I} = A u_{xx}^{I} + B_{12}\, u_x^{II} + \cdots, \qquad u_t^{II} = B u_x^{II} + \cdots, \qquad (9.5.1)$$

and consider the quarter-space problem $0 \le x < \infty$, $t \ge 0$. At $x = 0$, we prescribe as boundary conditions the linear combinations

$$L_{10}\, u_x(0,t) + L_{00}\, u(0,t) = 0. \qquad (9.5.2)$$

We are interested in solutions with $\|u(\cdot,t)\| < \infty$, and assume that $F(x,t)$, $u(x,0)$ are smooth functions with compact support. For this problem, one can prove the following theorem.

Theorem 9.5.2. The operator $\mathcal{P}$ associated with Eqs. (9.5.1) and (9.5.2) is maximally semibounded if the number of boundary conditions is equal to $n + r$ and

$$-\mathrm{Re}\Bigl(\langle w^{I}(0), A w_x^{I}(0)\rangle + \tfrac{1}{2}\langle w^{II}(0), B w^{II}(0)\rangle + \langle w^{I}(0), B_{12} w^{II}(0)\rangle\Bigr) \ge \text{constant}\,|w^{I}(0)|^2 \qquad (9.5.3)$$

for all $t$ and all $w$ that satisfy the boundary conditions. In this case, the quarter-space problem is well posed.

REMARK. We could have treated the problem in Theorem 9.5.2 on a bounded strip instead of on a quarter space. In that case, we would also need boundary conditions on the other boundary; these could have been derived by considering the corresponding left quarter-space problem. This technique of reducing problems to quarter-space problems is an important tool.
As an example, we consider the one-dimensional version of the linearized Navier-Stokes equations (4.6.13) with constant coefficients. By introducing $p = a\rho/R$ as a new variable we get

$$\begin{bmatrix} u \\ p \end{bmatrix}_t = -\begin{bmatrix} U & a \\ a & U \end{bmatrix}\begin{bmatrix} u \\ p \end{bmatrix}_x + \nu\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} u \\ p \end{bmatrix}_{xx}, \qquad \nu = \frac{\mu + \mu'}{R} > 0. \qquad (9.5.4)$$

The uncoupled equations are

$$u_t = -U u_x + \nu u_{xx}, \qquad p_t = -U p_x,$$

and the expression (9.5.3) becomes

$$\frac{U}{2}|p|^2 + aup \ge \text{constant}\,|u|^2, \qquad x = 0. \qquad (9.5.5)$$
Recalling that $U$ is the velocity of the flow we have linearized around, we distinguish among three different cases.

1. Inflow, $U > 0$. In this case two boundary conditions have to be specified. The inequality (9.5.5) is satisfied if, for example,

$$u = p = 0, \qquad \text{or} \qquad u_x = 0, \quad p = 0.$$

2. Rigid Wall, $U = 0$. In this case we have to specify one boundary condition. The inequality (9.5.5) is satisfied if, for example, $u = 0$, which is the natural condition at a rigid wall.

3. Outflow, $U < 0$. One boundary condition is needed, and Eq. (9.5.5) is satisfied if, for example,

$$u = 0, \qquad \text{or} \qquad -\nu u_x + ap = 0.$$

In all three cases, the given conditions are stricter than necessary, since the constant on the right-hand side of Eq. (9.5.5) need not be zero.
EXERCISES

9.5.1. If $\nu \to 0$ in Eq. (9.5.4), the limiting system is the linearized Euler equations. Derive boundary conditions that give well-posed problems for both systems. Is that possible for all values of $U$ and $a$?

9.5.2. Derive boundary conditions such that the initial-boundary-value problem for $0 \le x \le 1$, $t \ge 0$, is well posed.
9.6. QUARTER-SPACE PROBLEMS IN MORE THAN ONE SPACE DIMENSION

As an example, we consider hyperbolic systems

$$\frac{\partial u}{\partial t} = A\frac{\partial u}{\partial x} + B\frac{\partial u}{\partial y} + Cu + F =: P\Bigl(x, t, \frac{\partial}{\partial x}\Bigr)u + F. \qquad (9.6.1)$$

All coefficients are smooth functions of $x = (x, y)$, $t$, and $A = A(x,t)$, $B = B(x,t)$ are Hermitian matrices. $C = C(x,t)$ can be a general matrix. In many applications, it is skew-Hermitian, that is, $C = -C^*$.

9.6.1. Quarter-Space Problems

We consider Eq. (9.6.1) in the quarter space $0 \le x < \infty$, $-\infty < y < \infty$, $t \ge 0$, as shown in Figure 9.6.1.
For $t = 0$, we give initial data

$$u(x, 0) = f(x), \qquad (9.6.2)$$

and at $x = 0$ we prescribe boundary conditions

$$L_0(y,t)\,u(0,y,t) = 0, \qquad -\infty < y < \infty, \qquad (9.6.3)$$

that are like Eq. (9.2.7), that is, they consist of linear relations between the components of $u$. We also assume that $A(0,y,t)$ is nonsingular.

REMARK. It is not necessary to assume that $A(0,y,t)$ be nonsingular. One can use the following condition. Let $\lambda_j(0,y,t)$ be an eigenvalue of $A(0,y,t)$. If $\lambda_j(0,y,t) = 0$ for some $y = y_0$, $t = t_0$, then either $\lambda_j \equiv 0$ for all $x, t$ in a neighborhood of $x = 0$, or $\lambda_j = x^p\,\tilde\lambda_j(x,t)$, $\tilde\lambda_j \ne 0$, $p \ge 1$.

We now consider solutions that are $2\pi$-periodic in $y$. Therefore, we assume that all coefficients and data have this property. Let

$$(u, v) = \int_0^{2\pi}\!\!\int_0^\infty \langle u, v\rangle\, dx\, dy, \qquad \|u\|^2 = (u, u),$$

denote the $L_2$ scalar product and norm. We assume that $\|f\| < \infty$ and that we are interested in smooth solutions, for which

$$\|u(\cdot, t)\| < \infty \qquad (9.6.4)$$

for all $t$. We consider Eq. (9.6.4) as a boundary condition at $x = \infty$. One can again use difference approximations to prove that solutions exist.

Figure 9.6.1.
We derive an energy estimate proceeding as before. Let $u$ be a smooth solution of the homogeneous problem with $F = 0$. Then, we apply integration by parts to the right-hand side of

$$\frac{d}{dt}\|u\|^2 = (u, Pu) + (Pu, u).$$

Observing that there are no boundary terms in the $y$ direction and, because of Eq. (9.6.4), there are no boundary terms at $x = \infty$, we obtain, as in the one-dimensional case,

$$(u, Pu) + (Pu, u) = (u, (-A_x - B_y + C + C^*)u) - \int_0^{2\pi}\langle u(0,y,t), A(0,y,t)\,u(0,y,t)\rangle\, dy.$$

Therefore, for every fixed $y$, the boundary term is of the same form as in the one-dimensional case, and we obtain an energy estimate if the boundary conditions are such that

$$\langle u(0,y,t), A(0,y,t)\,u(0,y,t)\rangle \ge 0$$

for all $y$, $t$. Definitions 9.4.1 and 9.4.2 are generalized to two space dimensions in an obvious way. One can prove the following theorem.
Theorem 9.6.1. Assume that $A(0,y,t)$ is nonsingular and has exactly $m - r$ negative eigenvalues and that Eq. (9.6.3) consists of $m - r$ linearly independent relations. If, for all $w \in \mathcal{D} := \{w : L_0(y,t)w(0) = 0\}$,

$$\langle w(0), A(0,y,t)\,w(0)\rangle \ge \delta|w(0)|^2, \qquad \delta = \text{const} \ge 0,$$

for all $y$, $t$, then the initial-boundary-value problem [Eqs. (9.6.1) to (9.6.4)] is strongly well posed if $\delta > 0$ and well posed if $\delta = 0$.

The concept of semibounded operators can be generalized to the two-dimensional case in an obvious way. In our example, the operator is semibounded if

$$\langle w(0), A(0,y,t)\,w(0)\rangle \ge 0, \qquad w \in \mathcal{D}, \qquad (9.6.5)$$

for all $y$, $t$. If $A(0,y,t)$ has $m - r$ negative eigenvalues, one needs at least $m - r$ boundary conditions for Eq. (9.6.5) to hold. Thus, the operator is maximally semibounded if the number of boundary conditions is equal to the number of negative eigenvalues of $A$.

The results above show that boundary conditions must comply with the one-dimensional theory. This is also true for parabolic and mixed hyperbolic-parabolic systems. As an example, we consider the linearized and symmetrized Euler equations [see Eq. (4.6.5)]
$$u_t = -\begin{bmatrix} U & 0 & a \\ 0 & U & 0 \\ a & 0 & U \end{bmatrix}u_x - \begin{bmatrix} V & 0 & 0 \\ 0 & V & a \\ 0 & a & V \end{bmatrix}u_y, \qquad u = \begin{bmatrix} u \\ v \\ p \end{bmatrix}. \qquad (9.6.6)$$

The eigenvalues $\lambda$ of $A$ are

$$\lambda_1 = -U, \qquad \lambda_{2,3} = -U \pm a,$$

and we have

$$\langle u, Au\rangle = -U\bigl(|u|^2 + |v|^2 + |p|^2\bigr) - 2aup.$$

As in the one-dimensional case discussed in Section 9.2, the boundary conditions depend on $U$ and $a$.
1. Supersonic Inflow, $U > a$. Three eigenvalues are negative, and all variables must be specified at the boundary:

$$u = v = p = 0.$$

2. Subsonic Inflow, $0 < U < a$. Two eigenvalues are negative, and we must specify two conditions. For example,

$$u = -\alpha p, \qquad v = 0,$$

with $-U(\alpha^2 + 1) + 2\alpha a \ge 0$, will make $\langle u, Au\rangle \ge 0$.

3. Rigid Wall, $U = 0$. In this case the boundary is characteristic, $A$ is singular and, according to the remark above, the function $U$ must have special properties near the boundary. We need only specify one boundary condition,

$$u = -\alpha p,$$

with $\alpha \ge 0$. The most natural case is $\alpha = 0$, corresponding to flow that is parallel to the wall.

4. Subsonic Outflow, $-a < U < 0$. One eigenvalue is negative, and we need one boundary condition. For example,

$$u = -\alpha p \quad \text{with} \quad -U(\alpha^2 + 1) + 2\alpha a \ge 0, \qquad \text{or} \qquad p = 0,$$

will make $\langle u, Au\rangle \ge 0$.

5. Supersonic Outflow, $U < -a$. All eigenvalues are positive and no boundary conditions may be given.
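The eigenvalue pattern and the subsonic-inflow conditions can be verified directly. In the sketch below the values $U = 0.4$, $a = 1$, $\alpha = 1$ are our own sample choices:

```python
import numpy as np

# Boundary matrix of the symmetrized Euler system:
# A = -[[U, 0, a], [0, U, 0], [a, 0, U]], unknowns (u, v, p).
U, a = 0.4, 1.0                                   # subsonic inflow: 0 < U < a
A = -np.array([[U, 0.0, a], [0.0, U, 0.0], [a, 0.0, U]])

# eigenvalues are -U and -U +/- a: two of them negative for 0 < U < a
eig = np.sort(np.linalg.eigvalsh(A))
print(np.allclose(eig, np.sort([-U, -U - a, -U + a])))   # True

# subsonic-inflow conditions u = -alpha*p, v = 0; alpha = 1 gives
# -U*(alpha**2 + 1) + 2*alpha*a = 1.2 >= 0, so <w, A w> >= 0 on the boundary
alpha = 1.0
for p in np.linspace(-2.0, 2.0, 9):
    wv = np.array([-alpha * p, 0.0, p])
    assert wv @ A @ wv >= -1e-12
print("ok")
```

Substituting the boundary conditions gives $\langle w, Aw\rangle = \bigl(-U(\alpha^2+1) + 2\alpha a\bigr)p^2$, which is exactly the stated sign condition.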
The energy method is very powerful when it works, that is, when the boundary conditions are such that the boundary terms have the right sign. If this is not the case, then the method does not tell us anything, and we have to use Laplace transform methods instead. These are discussed in the next chapter.
9.6.2. Problems in General Domains

Next we consider the differential equation (9.6.1) in a general domain $\Omega$ in the $x, y$ plane, bounded by a smooth curve $\partial\Omega$ (see Figure 9.6.2). We give initial data (9.6.2) for $x \in \Omega$ and boundary conditions of the type of Eq. (9.6.3) on $\partial\Omega$. We want to show that this problem can be solved in terms of a quarter-space problem and a Cauchy problem. We assume that $A$, $B$, and $C$ are defined in the whole $x$ plane and $F \equiv 0$.

Figure 9.6.2.

Let $d > 0$ be a real number. At every point $x_0 = (x_0, y_0)$ of $\partial\Omega$ we determine the inward normal and on it the point $x_1 = (x_1, y_1)$ with $|x_1 - x_0| = d$. If $d$ is sufficiently small (in relation to the curvature of $\partial\Omega$), then the process defines a subdomain $\Omega_1$, bounded by a smooth curve $\partial\Omega_1$ (see Figure 9.6.3). Now let $\varphi \in C^\infty$ be a function with $\varphi \equiv 1$ in the neighborhood of $\partial\Omega$ and $\varphi \equiv 0$ in $\Omega_1$ and the neighborhood of $\partial\Omega_1$. Multiply Eq. (9.6.1) by $\varphi$ and introduce new variables by $u_1 = \varphi u$, $u_2 = (1 - \varphi)u$, $u = u_1 + u_2$. Then we obtain the system

$$\frac{\partial u_1}{\partial t} = A\frac{\partial u_1}{\partial x} + B\frac{\partial u_1}{\partial y} + Cu_1 - (A\varphi_x + B\varphi_y)(u_1 + u_2). \qquad (9.6.7a)$$

Correspondingly, multiplying Eq. (9.6.1) by $(1 - \varphi)$ gives us

$$\frac{\partial u_2}{\partial t} = A\frac{\partial u_2}{\partial x} + B\frac{\partial u_2}{\partial y} + Cu_2 + (A\varphi_x + B\varphi_y)(u_1 + u_2), \qquad x \in \Omega. \qquad (9.6.7b)$$
By construction, $u_2 \equiv 0$ in the neighborhood of $\partial\Omega$. Therefore, we can extend the definition of $u_2$ to the whole $x$ plane. If $(A\varphi_x + B\varphi_y)u_1$ were a known function, then Eq. (9.6.7b) would be a Cauchy problem for $u_2$.

Now we construct a mapping which transforms the region $\Omega - \Omega_1$ into the rectangle $\tilde\Omega := \{0 \le \tilde x \le 1,\; 0 \le \tilde y \le 2\pi\}$. Here the lines $\tilde x = 0$ and $\tilde x = 1$ correspond to the boundary curves $\partial\Omega$ and $\partial\Omega_1$, respectively. We assume also that the derivatives $\partial/\partial n$, $\partial/\partial s$ in the normal and tangential direction, respectively, of $\partial\Omega$ are transformed into $\partial/\partial\tilde x$ and $\partial/\partial\tilde y$, respectively.
Figure 9.6.3. Partition of a domain with smooth boundary into two subdomains.
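The cutoff function $\varphi$ used in this construction can be written down concretely. The sketch below is our own illustration for the unit disc with $d = 0.3$: a $C^\infty$ function of the distance to the boundary that is $\equiv 1$ near $\partial\Omega$ and $\equiv 0$ in $\Omega_1$ and near $\partial\Omega_1$:

```python
import numpy as np

def smooth_step(t):
    """C-infinity step: 0 for t <= 0, 1 for t >= 1 (built from exp(-1/t))."""
    f = lambda u: np.where(u > 0, np.exp(-1.0 / np.maximum(u, 1e-300)), 0.0)
    return f(t) / (f(t) + f(1.0 - t))

d = 0.3
def phi(r):
    """phi = 1 for r >= 1 - d/3 (near the boundary), 0 for r <= 1 - 2d/3."""
    s = (1.0 - r) / d               # scaled distance to the boundary r = 1
    return 1.0 - smooth_step(3.0 * s - 1.0)

print(phi(np.array([1.0, 0.95, 0.8, 0.5])))   # [1. 1. 0. 0.]
```

With this $\varphi$, the variables $u_1 = \varphi u$ and $u_2 = (1 - \varphi)u$ of (9.6.7) are supported where the sketch prints 1 and 0 plus the transition layer, respectively.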
After the transformation, the system (9.6.7a) becomes a system of the same form,

$$\frac{\partial \tilde u_1}{\partial t} = \tilde A\frac{\partial \tilde u_1}{\partial \tilde x} + \tilde B\frac{\partial \tilde u_1}{\partial \tilde y} + \tilde C\tilde u_1 - (\tilde A\varphi_{\tilde x} + \tilde B\varphi_{\tilde y})(\tilde u_1 + \tilde u_2), \qquad (9.6.7c)$$

where $\tilde u_j(\tilde x, t) = u_j(x, t)$, $j = 1, 2$. For $\tilde x = 0$,

$$\tilde A(0, \tilde y, t) = A(0, \tilde y, t)\cos\alpha + B(0, \tilde y, t)\sin\alpha,$$

with $\alpha$ denoting the angle between the $x$ axis and the inward normal (see Figure 9.6.4). The boundary conditions on $\partial\Omega$ are transformed into boundary conditions on $\tilde x = 0$. Also, $\tilde u_1$ and $\tilde u_2$ are $2\pi$-periodic with respect to $\tilde y$. By construction, $\varphi_{\tilde x}$, $\varphi_{\tilde y}$, and $\tilde u_1$ vanish in a neighborhood of $\tilde x = 1$. Therefore, we can extend the definition of $\tilde u_1$ to the whole quarter space $\tilde x \ge 0$. If we knew the term $(A\varphi_x + B\varphi_y)\tilde u_2$, then $\tilde u_1$ would be the solution of a quarter-space problem.

The systems (9.6.7b) and (9.6.7c) are only coupled by lower order terms. From our previous results, we know that these lower order terms have no influence upon whether or not a problem is well posed. Therefore, the boundary conditions on $\partial\Omega$ must be such that the quarter-space problem (9.6.8) is well posed, and we obtain the following theorem for the original problem.
Theorem 9.6.2. Assume that $\tilde A = A\cos\alpha + B\sin\alpha$ is nowhere singular on $\partial\Omega$ and has exactly $m - r$ negative eigenvalues. Then the initial-boundary-value problem is well posed, if

$$\langle u, \tilde A u\rangle \ge 0 \qquad (9.6.9)$$

for all $x \in \partial\Omega$ and all vectors $u$ that satisfy the boundary conditions.

Proof. Using the construction above we must solve Eqs. (9.6.7b) and (9.6.7c). We solve these by iteration:

$$\frac{\partial u_1^{[n+1]}}{\partial t} = A u_{1x}^{[n+1]} + B u_{1y}^{[n+1]} + C u_1^{[n+1]} - (A\varphi_x + B\varphi_y)\bigl(u_1^{[n+1]} + u_2^{[n]}\bigr), \qquad \tilde x \ge 0,$$

$$\frac{\partial u_2^{[n+1]}}{\partial t} = A u_{2x}^{[n+1]} + B u_{2y}^{[n+1]} + C u_2^{[n+1]} + (A\varphi_x + B\varphi_y)\bigl(u_1^{[n+1]} + u_2^{[n+1]}\bigr), \qquad x \in R^2.$$

At every step of the iteration we solve a Cauchy problem and a quarter-space problem. By construction, both problems are well posed, and one can show that the iteration converges to a solution of the original problem. This construction is easily extended to parabolic and mixed parabolic-hyperbolic systems.

Figure 9.6.4. Mapping of the domain $\Omega - \Omega_1$ into a rectangle.
EXERCISES

9.6.1. Consider the system

$$u_t = A u_x + B u_y, \qquad 0 \le x < \infty, \quad -\infty < y < \infty, \quad t \ge 0,$$

where the entries of the matrices $A$ and $B$ involve the real parameters $a$ and $b$. For which values of $a$ and $b$ can an energy estimate be obtained? Derive the most general well-posed boundary conditions for this system.

9.6.2. Consider the symmetric linearized Euler equations (9.6.6) with $U > a$, $V = 0$ in a circular disc $\Omega$. Define well-posed boundary conditions on $\partial\Omega$.
BIBLIOGRAPHIC NOTES

The results in this chapter are classical. More details and references are given in Kreiss and Lorenz (1989).
10. THE LAPLACE TRANSFORM METHOD FOR INITIAL-BOUNDARY-VALUE PROBLEMS

10.1. SOLUTION OF HYPERBOLIC SYSTEMS

Initial-boundary-value problems for systems with constant coefficients can be solved using the Laplace transform. As we will see, the Laplace transform method is often the only way we can decide on the stability of a given finite difference scheme. We only use elementary properties of the Laplace transform (see Appendix A.2). We consider the quarter-space problem for the system

$$u_t = Au_x + F, \qquad 0 \le x < \infty, \quad t \ge 0, \qquad (10.1.1a)$$

where $A$ is diagonal and

$$A = \begin{bmatrix} A^{I} & 0 \\ 0 & A^{II} \end{bmatrix}, \qquad A^{I} > 0, \quad A^{II} < 0.$$

We prescribe initial data

$$u(x, 0) = f(x), \qquad (10.1.1b)$$

and boundary conditions

$$L^{II} u^{II}(0,t) + L^{I} u^{I}(0,t) = g, \qquad \|u(\cdot,t)\| < \infty \text{ for every fixed } t, \qquad (10.1.1c)$$

where $L^{I}$ and $L^{II}$ are constant matrices. We assume that $F$, $f$, and $g$ are smooth functions with compact support. We know already that the above problem is well posed if, and only if, $L^{II}$ is nonsingular. However, we arrive at the same conclusion using the Laplace transform. We start with the following lemma.
Lemma 10.1.1. Consider the initial-boundary-value problem (10.1.1) with $F \equiv g \equiv 0$. It is not well posed if we can find a complex number $s$ with $\mathrm{Re}\, s > 0$ and initial values $f(x)$ with $0 < \|f\| < \infty$ such that

$$w(x,t) = e^{st}f(x), \qquad \mathrm{Re}\, s > 0, \qquad (10.1.2)$$

is a solution.

Proof. Assume that there is a solution of the above type. Define the sequence $\{f_j(x)\}_{j=1}^{\infty}$ by

$$f_j(x) = \frac{f(jx)}{\|f(jx)\|}, \qquad \text{i.e.,} \quad \|f_j\| = 1.$$

Then

$$w_j(x,t) = e^{jst}f_j(x), \qquad j = 1, 2, \ldots,$$

are also solutions. Therefore, the problem cannot be well posed, because we can construct solutions that grow arbitrarily fast. This proves the lemma.

REMARK. If Eq. (10.1.1) had contained lower order terms, then solutions $w$ with $\mathrm{Re}\, s \le \eta_0$ are permissible.
We will now give conditions guaranteeing that solutions of the type of Eq.
(10. 1 .2) exist.
Theorem 10.1.1. A solution of the type of Eq. (10.1.2) exists, that is, the initial-boundary-value problem is not well posed, if the eigenvalue problem

$$s\varphi = A\frac{d\varphi}{dx}, \qquad 0 \le x < \infty, \qquad (10.1.3a)$$

with boundary conditions

$$L^{II}\varphi^{II}(0) + L^{I}\varphi^{I}(0) = 0, \qquad \|\varphi\| < \infty, \qquad (10.1.3b)$$

has an eigenvalue $s$ with $\mathrm{Re}\, s > 0$. The eigenvalue problem has an eigenvalue $s$ with $\mathrm{Re}\, s > 0$ if, and only if, $L^{II}$ is singular. Therefore, by our previous result, the initial-boundary-value problem is well posed if, and only if, the eigenvalue problem (10.1.3) has no eigenvalue $s$ with $\mathrm{Re}\, s > 0$.

Proof. If there is an eigenvalue $s$ with $\mathrm{Re}\, s > 0$, then

$$w = e^{st}\varphi(x)$$

is a solution of the type of Eq. (10.1.2). Therefore, the problem is not well posed. Let us now derive algebraic conditions such that there are no eigenvalues with $\mathrm{Re}\, s > 0$. These conditions are necessary conditions for the problem to be well posed. The general solution of Eq. (10.1.3a) can be written in the form

$$\varphi(x) = \begin{bmatrix} e^{s(A^{I})^{-1}x}\,\varphi^{I}(0) \\ e^{s(A^{II})^{-1}x}\,\varphi^{II}(0) \end{bmatrix}.$$

For $\|\varphi\| < \infty$, a necessary and sufficient condition is that

$$\varphi^{I}(0) = 0. \qquad (10.1.4)$$

Then the relations (10.1.3b) are satisfied if, and only if,

$$L^{II}\varphi^{II}(0) = 0. \qquad (10.1.5)$$

There are two possibilities.

1. $L^{II}$ is Nonsingular. Then the only solution of Eq. (10.1.5) is $\varphi^{II}(0) = 0$, and there is no eigenvalue $s$ with $\mathrm{Re}\, s > 0$. In this case, we know, from our previous results, that the problem is well posed.

2. $L^{II}$ is Singular. Then Eq. (10.1.5) has a nontrivial solution, and, therefore, there is an eigenvalue $s$ with $\mathrm{Re}\, s > 0$. In fact, all $s$ with $\mathrm{Re}\, s > 0$ are eigenvalues.

This proves the theorem.
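The singular case is easy to exhibit numerically. In the sketch below (the matrices are our own sample choices), $L^{II}$ is singular, and any null vector $\varphi^{II}(0)$ yields, for every $\mathrm{Re}\, s > 0$, a bounded eigenfunction $\varphi(x) = e^{s(A^{II})^{-1}x}\varphi^{II}(0)$, so $w = e^{st}\varphi$ is a solution of type (10.1.2):

```python
import numpy as np

# Ingoing block A^II = diag(-1, -2) and a *singular* L^II.
AII  = np.diag([-1.0, -2.0])
LII  = np.array([[1.0, 1.0], [2.0, 2.0]])   # det = 0: singular
phi0 = np.array([1.0, -1.0])                # null vector: L^II phi0 = 0

s  = 2.0 + 1.0j                             # any s with Re s > 0
aj = np.diag(AII)
for x in [0.0, 0.5, 3.0]:
    phi  = np.exp(s / aj * x) * phi0        # e^{s (A^II)^{-1} x} phi0
    dphi = (s / aj) * phi                   # phi_x
    assert np.allclose(AII @ dphi, s * phi) # satisfies s*phi = A*phi_x
assert np.allclose(LII @ phi0, 0)           # satisfies the boundary condition
# Re(s / a_j) < 0 since a_j < 0, so ||phi|| < infinity on 0 <= x < infinity:
print(all((s / aj).real < 0))               # True
```

Every $s$ in the right half-plane works here, matching the statement that all $s$ with $\mathrm{Re}\, s > 0$ are eigenvalues when $L^{II}$ is singular.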
(10.1.6) that is, they are necessarily of the form we have discussed earlier. We can now solve the problem using the Laplace transform. Without restric tion, we can assume that !(x) == O. Otherwise, we introduce a new variable it = u h(t)!(x), h(O) = 1 , h smooth with compact support. We define the Laplace transform by -
s = i�
+ 1/ ,
t 1/
real,
1/ > O.
(10.1 .7)
REMARK. From Section 9.2, we know that the solution of Eq. (10. 1 .1) satisfies an energy estimate. Therefore, the right-hand side of Eq. every 1/ > O. By Eq.
(10. 1 .7) is
finite for
(10. 1 . 1a)
Therefore,
implies (observe that ! == 0)
(l0.1 .8a) The boundary conditions are
THE LAPLACE TRANSFORM METHOD
402
lI u( · , s)1I < 00.
( l 0 . 1 .8b)
Because the eigenvalue problem ( 1 0 . 1 .3) only has a trivial solution, the solution u of the homogeneous problem ( 1 0 . 1 .8) is identically zero. Thus, the inhomoge neous problem ( 1 0 . 1 .8) has a unique solution. We can write down the solution explicitly:
ul (x, s)
=
uIl(x, s)
-
=
I e s(Ai)- I (X - Y) (A/ )- l pl ( y, s) dy,
f: e s(AII)-I (x - Y)(AIIr l pIl( y, s) dy
+
e s(AII)-IX uIl(O, s),
( 1 0. 1 .9)
where uI l (O, s) is determined by Eq. ( 10. 1 .8b). Using Eq. ( 1 0 . 1 .9), we can estimate u(x, s) in terms of P, gI l . However, it is easy to use energy estimates directly. We take the scalar product of Eq. ( 1 0 . 1 .8a) with ul and uI l , respectively, and obtain
or
Integration by parts gives us
and, therefore,
that is,
l uI (0, s) 1 S; For
ul l , we obtain, correspondingly,
C
1/ 1 /2
II FI II · A
SOLUTION OF HYPERBOLIC SYSTEMS
403
'l] llull1l2 � lIull ll llFlI 1I � (UIl(O, s), AIIUIl(O, S) , � (UIl(O,S), AIIUIl(O, S) , � � l Iull 1l 2 2� II FII 1I 2 -
+
-
or
Using the boundary condition we obtain
Therefore,
IIu l 1 2 � constant ( � II F I 1 2 Ig1l 1 2 ) , lu(O,s) 1 2 � constant ( Igtl1 2 + � II F II 2 ) . +
'I]
(lO.UOa) (lO.1.10b)
Inverting the Laplace transform gives us the solution of our problem
and, by Parseval's relation, Eq.
Correspondingly, Eq.
(A.2.17), we obtain, for any 'I] > 0 and s
(lO. 1.10b) is equivalent to
=
i� + 'I] ,
404
THE LAPLACE TRANSFORM METHOD
fo� e-2'1 t l u(0, t)1 2 dt :s;
constant
J: e-2'1t ( � II F( · , t) 1I 2
+
)
I g(t) 1 2 dt.
( 1 0. 1 . 1 1 b)
Thus, we can estimate the solution in terms of the data. In Section 1 0.3, we use this estimate to define another concept of well-posedness.
EXERCISES
0 in Eq. ( 1 0. l . 1 b). Derive the estimate ( 1 0. 1 . 1 1 b) by applying the Laplace transform technique to Ii = u - h(t)f(x) as described above.
10.1.1. Assume that f(x) :/:-
10.2. SOLUTION OF PARABOLIC PROBLEMS We start with an example. Consider the quarter-space problem for a parabolic system
Ut = Auxx + F, u(x, O) = f(x),
o :s;
x < 00,
t
� 0, ( 1 0.2. 1 )
where
A
=
[ �1 �2 ] ,
Aj
constant > 0,
j
1 , 2,
u
[ uu((2l )) ] ,
with boundary conditions
u( l ) (O, t) U�2) (0, t) lI u( · , t) 1I
=
=
exu(2) (0, t) (3u�l ) (O, t)
< 00 .
+ +
g( l ) (t), g(2) (t), ( 1 0.2.2)
where ex and {3 are constants. Let us first investigate under what conditions on ex and (3 we can obtain an energy estimate. Here we assume that g ( l ) = g(2) = O. Integration by parts gives us
where, by Eq. ( 1 0.2.2),
405
SOLUTION OF PARABOLIC PROBLEMS
Therefore, to obtain an energy estimate, we need A I a + A 2 {3 = O. We now investigate the case A a + A 2 {3 '" O. As in the previous section, l we can prove that the problem is not well posed if the eigenvalue problem corresponding to Eqs. ( 10.2. 1 ) and ( 10.2.2)
o �x <
00, 2
( 1 0.2.3) ( 10.2.4)
has eigenvalues in the right half of the complex plane. In that case, there are solutions that grow arbitrarily fast. For our example, we have the following theorem: Theorem 10.2.1. The problem [Eqs. (10.2.3) and (10.2.4)J has an eigenvalue s with Re s > 0 if, and only if,
"\ - 1/2 1\2
_
"\ -:- 1/2 > 0 1\)
"\ - 1/2 a (3 - 0 , 1\ 1
. ] =
Proof. The general solution of Eq. (10.2.3) with ‖φ‖ < ∞ for Re s > 0 is given by

    φ = ( σ⁽¹⁾ e^{−A₁^{−1/2} s^{1/2} x}, σ⁽²⁾ e^{−A₂^{−1/2} s^{1/2} x} )ᵀ,    σⱼ constant,    Re Aⱼ^{−1/2} s^{1/2} > 0,  j = 1, 2.    (10.2.5)

The boundary conditions are satisfied if, and only if,

    [ 1, −α ; −β A₁^{−1/2}, A₂^{−1/2} ] σ = 0,    σ = ( σ⁽¹⁾, σ⁽²⁾ )ᵀ.    (10.2.6)

Equation (10.2.6) has a nontrivial solution if, and only if, A₂^{−1/2} − A₁^{−1/2} αβ = 0. This proves the theorem.

We now show that if A₂^{−1/2} − A₁^{−1/2} αβ ≠ 0, then the problem has a unique solution that can be estimated in terms of the data. We assume again that f ≡ 0. The Laplace transform û is the solution of
    s û = A û_xx + F̂,    ‖û‖ < ∞,
    û⁽¹⁾(0,s) = α û⁽²⁾(0,s) + ĝ⁽¹⁾(s),    (10.2.7)
    û_x⁽²⁾(0,s) = β û_x⁽¹⁾(0,s) + ĝ⁽²⁾(s).
Because there are no eigenvalues with η = Re s > 0, Eq. (10.2.7) has a unique solution for η > 0. Inverting the Laplace transform gives us the desired solution of the initial-boundary-value problem.

We now estimate the solution in terms of F and g. We introduce into Eq. (10.2.7) the new variable

    y = ( û, s^{−1/2} A^{1/2} û_x )ᵀ,    −π/4 < arg s^{1/2} < π/4,

and obtain the system

    y_x = s^{1/2} H y + (2s)^{−1/2} F₀,    L y(0,s) = g̃,    ‖y‖ < ∞,

where

    H = [ 0, A^{−1/2} ; A^{−1/2}, 0 ],    F₀ = −√2 ( 0, A^{−1/2} F̂ )ᵀ.

The unitary transformation

    T = (1/√2) [ I, I ; −I, I ]

transforms H to diagonal form, T* H T = diag( −A^{−1/2}, A^{−1/2} ). Therefore, we introduce new variables by ỹ = T* y, F̃ = T* F₀ and obtain

    s^{1/2} ỹᴵ = −A^{1/2} ỹᴵ_x + (2s)^{−1/2} F̃ᴵ,
    s^{1/2} ỹᴵᴵ = A^{1/2} ỹᴵᴵ_x + (2s)^{−1/2} F̃ᴵᴵ,    (10.2.8)
    D₁ ỹᴵ(0,s) + D₂ ỹᴵᴵ(0,s) = g̃(s),    ‖ỹ‖ < ∞,

where ‖F̃ᴵ‖ ≤ constant ‖F̂‖, ‖F̃ᴵᴵ‖ ≤ constant ‖F̂‖, and |g̃| ≤ constant ( |ĝ⁽¹⁾(s)| + |s|^{−1/2} |ĝ⁽²⁾(s)| ).
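The diagonalization step can be verified directly. The following sketch (with the illustrative choice A = diag(2, 8), not taken from the book) assembles H in block form and checks that T*HT = diag(−A^{−1/2}, A^{−1/2}):

```python
import numpy as np

# H = [[0, A^{-1/2}], [A^{-1/2}, 0]] in 2x2 block form, with A = diag(A1, A2).
A = np.diag([2.0, 8.0])
S = np.linalg.inv(np.sqrt(A))          # A^{-1/2}
Z = np.zeros((2, 2))
I = np.eye(2)

H = np.block([[Z, S], [S, Z]])
T = np.block([[I, I], [-I, I]]) / np.sqrt(2)   # the unitary transformation T

D = T.conj().T @ H @ T
print(np.round(D, 12))
# The result is block diagonal: D = diag(-A^{-1/2}, A^{-1/2}),
# the two characteristic blocks appearing in Eq. (10.2.8).
```

Because T is unitary, the L₂ norms of y and ỹ = T*y coincide, which is why the estimates below transfer back to û unchanged.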
We need the lemma below.
Lemma 10.2.1. D₁ is nonsingular if, and only if, A₂^{−1/2} − A₁^{−1/2} αβ ≠ 0.

Proof. Consider the homogeneous equation (10.2.8). For Re s > 0, the general solution belonging to L₂ is given by

    ỹᴵ = e^{−A^{−1/2} s^{1/2} x} σ,    ỹᴵᴵ = 0.

Therefore, the homogeneous problem has a nontrivial solution if, and only if,

    D₁ σ = 0

has a nontrivial solution. Inverting the transformations above, any nontrivial solution generates a nontrivial solution e^{st} φ(x) of Eqs. (10.2.3) and (10.2.4), and vice versa. The lemma then follows from Theorem 10.2.1.

Assume that D₁ is nonsingular. We can proceed as in the previous section. We first estimate ỹᴵᴵ. Integration by parts gives us

    Re s^{1/2} ‖ỹᴵᴵ‖² + ½ ⟨ỹᴵᴵ(0,s), A^{1/2} ỹᴵᴵ(0,s)⟩ ≤ |2s|^{−1/2} ‖ỹᴵᴵ‖ ‖F̃ᴵᴵ‖.

Therefore, observing that

    √2 Re s^{1/2} ≥ |s|^{1/2},

we obtain

    ‖ỹᴵᴵ‖² ≤ (constant/|s|²) ‖F̃ᴵᴵ‖²,

that is,

    |ỹᴵᴵ(0,s)|² ≤ (constant/|s|^{3/2}) ‖F̃ᴵᴵ‖².

Because D₁ is nonsingular and D₁ and D₂ are independent of s, the boundary conditions give us

    |ỹᴵ(0,s)|² ≤ constant ( |ỹᴵᴵ(0,s)|² + |g̃(s)|² )
              ≤ constant ( (1/|s|^{3/2}) ‖F̃ᴵᴵ‖² + |g̃(s)|² ).
Using integration by parts on the first equation of (10.2.8), we obtain

    Re s^{1/2} ‖ỹᴵ‖² = ½ ⟨ỹᴵ(0,s), A^{1/2} ỹᴵ(0,s)⟩ + Re ( (2s)^{−1/2} (ỹᴵ, F̃ᴵ) ).

Therefore,

    ‖ỹᴵ‖² ≤ constant ( (1/|s|^{1/2}) |ỹᴵ(0,s)|² + (1/|s|²) ‖F̃ᴵ‖² ).

Inverting all of the transformations, we obtain

    ‖û‖²      ≤ constant ( (1/|s|²) ‖F̂‖² + (1/|s|^{1/2}) |g̃|² ),
    ‖û_x‖²    ≤ constant ( (1/|s|) ‖F̂‖² + |s|^{1/2} |g̃|² ),    (10.2.9)
    |û(0,s)|² ≤ constant ( (1/|s|^{3/2}) ‖F̂‖² + |g̃|² ).

If g ≡ 0, then, from Parseval's relation, we obtain the estimate

    ∫₀^∞ e^{−2ηt} ( ‖u(·,t)‖² + ‖u_x(·,t)‖² ) dt ≤ constant ( 1/η + 1/η² ) ∫₀^∞ e^{−2ηt} ‖F(·,t)‖² dt.    (10.2.10a)

For g ≠ 0, we can only estimate u and obtain

    ∫₀^∞ e^{−2ηt} ‖u(·,t)‖² dt
        ≤ constant ∫₀^∞ e^{−2ηt} ( (1/η²) ‖F(·,t)‖² + (1/η^{1/2}) |g⁽¹⁾(t)|² + (1/η^{3/2}) |g⁽²⁾(t)|² ) dt.    (10.2.10b)

We can also estimate u(0,t):
    ∫₀^∞ e^{−2ηt} |u(0,t)|² dt
        ≤ constant ∫₀^∞ e^{−2ηt} ( (1/η^{3/2}) ‖F(·,t)‖² + |g⁽¹⁾(t)|² + (1/η) |g⁽²⁾(t)|² ) dt.    (10.2.10c)
We use this estimate to define a new concept of well-posedness in the next section. We have shown that good estimates are obtained for all α and β with

    αβ ≠ ( A₁/A₂ )^{1/2},

whereas an energy estimate requires

    A₁ α + A₂ β = 0.
Therefore, this example shows that the conditions for the existence of energy estimates can be too restrictive.

One can generalize the above results considerably. Consider the quarter-space problem for a parabolic system

    u_t = A u_xx + B u_x + C u + F,    0 ≤ x < ∞,  t ≥ 0,    (10.2.11a)
    u(x,0) = 0,    (10.2.11b)

with m linearly independent boundary conditions

    R₁₁ u_x(0,t) + R₁₀ u(0,t) = 0,    R₀₀ u(0,t) = 0.    (10.2.11c)

The principal part of this problem is obtained by neglecting the lower order terms in the differential equation and in the boundary conditions. The corresponding eigenvalue problem is

    s φ = A φ_xx,    0 ≤ x < ∞,    ‖φ‖ < ∞,
    R₁₁ φ_x(0,s) = 0,    R₀₀ φ(0,s) = 0.    (10.2.12)

One can prove the following theorem (Exercise 10.2.2).
Theorem 10.2.2. The problem (10.2.11) has a unique solution satisfying an estimate of type (10.2.10), for η > η₀, if, and only if, Eq. (10.2.12) has no eigenvalue s with Re s > 0.

This theorem is also valid for the much more general boundary conditions (10.2.13) imposed at x = 0.

EXERCISES

10.2.1. Formulate and prove the analogue of Lemma 10.1.1 for parabolic systems.

10.2.2. Prove Theorem 10.2.2.
10.3. GENERALIZED WELL-POSEDNESS

We again consider the system of partial differential equations

    ∂u/∂t = P u + F,    0 ≤ x ≤ 1,  t ≥ 0,    (10.3.1a)

with initial data

    u(x,0) = f(x),    (10.3.1b)

and boundary conditions

    L₀ u(0,t) = g₀(t),    L₁ u(1,t) = g₁(t).    (10.3.1c)
We assume that the system of differential equations is either symmetric hyperbolic, that is,

    P = A ∂/∂x + B,    A = A*,

or second-order parabolic, that is,

    P = A ∂²/∂x² + B ∂/∂x + C,    A + A* ≥ 2a₀ I > 0.

For simplicity, we assume that A, B, and C and the coefficients of L₀ and L₁ are smooth functions of x and t. We also assume that F, f, g₀, and g₁ are smooth bounded functions. To guarantee that the compatibility conditions are satisfied, we assume that F vanishes near the boundary and the initial line and that f(x), g₀(t), and g₁(t) vanish near x = 0, 1 and t = 0, respectively. Of course, less stringent conditions need be met if only solutions with a fixed number of derivatives are of interest. As before, one can also extend the solution concept to include generalized solutions.

In Section 9.4, we discussed two different definitions of well-posedness. They differ with respect to the estimate required. All bounds are natural in the context of energy estimates. Unfortunately, energy estimates are not available in many circumstances, and, therefore, other techniques must be used. A very powerful tool is the Laplace transform, which we used in the last two sections. When using the Laplace transform, it is convenient to assume that f(x) ≡ 0. In Section 10.1, we explained how to transform the problem so that this condition is satisfied.

To begin, we assume that the system (10.3.1) has constant coefficients. As in previous sections, we introduce the Laplace transform

    û(x,s) = ∫₀^∞ e^{−st} u(x,t) dt,    s = iξ + η,    ξ, η real,    η > 0.

It is the solution of the resolvent equation

    (sI − P) û = F̂,    L₀ û(0,s) = ĝ₀(s),    L₁ û(1,s) = ĝ₁(s).    (10.3.2)

Typical estimates for the solutions of Eq. (10.3.2) are listed below [see Eqs. (10.1.10) and (10.2.9)]:
1. Consider Eq. (10.3.2) with homogeneous boundary conditions (ĝⱼ ≡ 0). There is a constant η₀ and a function K(η), with lim_{η→∞} K(η) = 0, such that, for all F̂ and all s with η = Re s > η₀,

    ‖û‖² + δ ‖û_x‖² ≤ K(η) ‖F̂‖².    (10.3.3a)

Here δ = 0 for hyperbolic systems and δ > 0 for parabolic systems.
2. Instead of Eq. (10.3.3a), for inhomogeneous boundary conditions we have

    ‖û‖² + δ ‖û_x‖² ≤ K(η) ( ‖F̂‖² + |ĝ₀|² + |ĝ₁|² ).    (10.3.4a)

(This estimate is stronger than the one derived for the parabolic problem in Section 10.2.)

If the differential operator P is defined on the function space satisfying the homogeneous boundary conditions L₀v(0) = L₁v(1) = 0, the operator (sI − P)⁻¹ is called the resolvent operator, and the resolvent condition is usually formulated as

    |(sI − P)⁻¹| ≤ constant / Re s.

Our conditions (10.3.3a) and (10.3.4a) are generalized forms of the resolvent condition. By Parseval's relation, these inequalities imply the following estimates:
    ∫₀^∞ e^{−2ηt} ( ‖u(·,t)‖² + δ ‖u_x(·,t)‖² ) dt
        ≤ K(η) ∫₀^∞ e^{−2ηt} ‖F(·,t)‖² dt,    η > η₀,    lim_{η→∞} K(η) = 0,    (10.3.3b)

    ∫₀^∞ e^{−2ηt} ( ‖u(·,t)‖² + δ ‖u_x(·,t)‖² ) dt
        ≤ K(η) ∫₀^∞ e^{−2ηt} ( ‖F(·,t)‖² + |g₀(t)|² + |g₁(t)|² ) dt,    η > η₀,    lim_{η→∞} K(η) = 0,    (10.3.4b)
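These time-domain bounds rest on Parseval's relation for the Laplace transform along a shifted line, ∫₀^∞ e^{−2ηt}|u(t)|² dt = (1/2π) ∫_{−∞}^{∞} |û(η+iξ)|² dξ. A small numerical sanity check (the test function u(t) = t e^{−t} and η = 1 are arbitrary choices, not from the book) illustrates it; both sides equal 1/32 exactly:

```python
import numpy as np

def trap(y, x):
    # Explicit trapezoidal rule (kept simple for NumPy-version portability).
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Parseval on the line Re s = eta:
#   int_0^inf e^{-2 eta t} |u(t)|^2 dt = (1/2pi) int |uhat(eta + i xi)|^2 dxi.
# Test case: u(t) = t e^{-t}, so uhat(s) = 1/(s+1)^2.
eta = 1.0

t = np.linspace(0.0, 50.0, 50_001)
u = t * np.exp(-t)
lhs = trap(np.exp(-2.0 * eta * t) * u**2, t)

xi = np.linspace(-200.0, 200.0, 200_001)
uhat = 1.0 / (eta + 1j * xi + 1.0)**2
rhs = trap(np.abs(uhat)**2, xi) / (2.0 * np.pi)

print(lhs, rhs)  # both close to 1/32 = 0.03125
```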
respectively. For the examples treated above we have η₀ = 0. However, if the problem has variable coefficients, lower order terms, or two boundaries, then we have to choose η₀ > 0. The last two of these generalizations will be discussed later.

Because the integrals are taken over an infinite time interval, the constant K(η) necessarily goes to infinity at some point η = η₀. For example, if F = g₀ = g₁ = 0 for t ≥ T, then the integrals on the right-hand side of the estimates are finite for any value of η. However, the solution u is in general nonzero for all t, and the integrals on the left-hand side of the estimates do not exist for η ≤ η₀. Therefore, K(η) → ∞ as η → η₀.
We now use these estimates to introduce a new concept of well-posedness for the systems (10.3.1) with variable coefficients.

Definition 10.3.1. Consider the problem (10.3.1) with f ≡ g₀ ≡ g₁ ≡ 0. We call the problem well posed in the generalized sense if, for any smooth compatible F, there is a smooth solution that satisfies the estimate (10.3.3b) for all η > η₀. Here

    δ = 0 for hyperbolic problems,    δ > 0 for parabolic problems,

and η₀, K(η) are constants that do not depend on F. We call the problem strongly well posed in the generalized sense if the estimate (10.3.4b) holds.

REMARK. The initial and boundary conditions can be made homogeneous by
subtracting a suitable function ψ(x,t) that satisfies Eqs. (10.3.1b) and (10.3.1c). The function v = u − ψ satisfies

    ∂v/∂t = P v + F̃,    0 ≤ x ≤ 1,  t ≥ 0,
    v(x,0) = 0,
    L₀ v(0,t) = 0,    L₁ v(1,t) = 0,

where F̃ = Pψ − ∂ψ/∂t + F. Assuming that the problem is well posed in the generalized sense, we obtain the estimate (10.3.3b) with u → v, F → F̃. Hence, we get an estimate for u = v + ψ, but the bound depends on dg₀/dt, dg₁/dt, df/dx, and also on d²f/dx² in the parabolic case.

There are many other ways to define well-posedness. Any definition of well-posedness must satisfy the following requirements.

1. For smooth compatible data, the problem has a smooth solution.
2. There is an estimate of the solution in terms of the data.

3. It should be stable against perturbations of lower order terms; that is, if we perturb the differential equations by changing the lower order terms, then the solutions of the perturbed problem should also satisfy an estimate of the same type.

4. It should hold for a large class of problems.

5. There should be equivalent algebraic conditions that are easy to verify.

The definitions of Section 9.4 fulfill these requirements. The necessary estimates can be verified for symmetric first-order systems and for parabolic systems by the use of integration by parts. However, there are large classes of problems that cannot be investigated in this way. By using Definition 10.3.1, we can cover a much wider class of problems.

The concept of strong well-posedness in the generalized sense does not play the same role for parabolic equations as for hyperbolic equations. It holds for parabolic equations only if all boundary conditions are derivative conditions, that is, rank(R₁₁) = m in Eq. (10.2.11c). We now derive a number of fundamental properties for our new concepts.
Theorem 10.3.1. Assume that the differential operator P is semibounded with the boundary conditions (10.3.1c), that is,

    Re (u, Pu) ≤ −δ ‖u_x‖² + α ‖u‖²,

where δ > 0 if P is parabolic and δ = 0 if P is hyperbolic. Then, the problem (10.3.1) is well posed in the generalized sense.

Proof. Introduce into Eq. (10.3.1) a new variable w = e^{−ηt} u. Then, we obtain

    d/dt ‖w‖² ≤ −2δ ‖w_x‖² − 2(η − α) ‖w‖² + 2 |(w, e^{−ηt} F)|
             ≤ −2δ ‖w_x‖² − η̃ ‖w‖² + (1/η̃) ‖e^{−ηt} F‖².

Therefore, for η̃ = η − α ≥ η̃₀ > 0,

    ‖w(·,t)‖² ≤ ∫₀^t e^{−η̃(t−τ)} ( G(τ) − H(τ) ) dτ,

where

    G(τ) = (1/η̃) e^{−2ητ} ‖F(·,τ)‖²,    H(τ) = 2δ ‖w_x(·,τ)‖².

Define φ(t) by

    φ(t) = e^{−η̃t}  for t ≥ 0,    φ(t) = 0  for t < 0.

Then, we obtain

    ∫₀^∞ ‖w(·,t)‖² dt ≤ ∫₀^∞ ( ∫_τ^∞ φ(t−τ) dt ) G(τ) dτ − ∫₀^∞ ( ∫_τ^∞ φ(t−τ) dt ) H(τ) dτ
                      ≤ (1/η̃²) ∫₀^∞ e^{−2ηt} ‖F(·,t)‖² dt − (2δ/η̃) ∫₀^∞ ‖w_x(·,t)‖² dt,

that is,

    ∫₀^∞ ( ‖w(·,t)‖² + (2δ/η̃) ‖w_x(·,t)‖² ) dt ≤ (1/η̃²) ∫₀^∞ e^{−2ηt} ‖F(·,t)‖² dt.

Thus, Eq. (10.3.3b) is satisfied with

    K(η) = max( 1/(η − α)², 1/(η − α) )

and with δ replaced by 2δ/η̃₀. The existence of a smooth solution can also be verified and, therefore, the theorem is proved.

Now we will prove that lower order terms do not affect generalized well-posedness.
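The estimate of Theorem 10.3.1 can be checked numerically on the crudest semibounded example, the scalar ODE u′ = αu + F with Pu = αu (so δ = 0); the forcing profile and the constants below are arbitrary illustrative choices, not from the book:

```python
import numpy as np

# Scalar model u' = alpha*u + F, u(0) = 0.  Here Re(u, Pu) = alpha*|u|^2, so
# Theorem 10.3.1 predicts, with K(eta) = max(1/(eta-alpha)^2, 1/(eta-alpha)):
#   int_0^inf e^{-2 eta t} |u|^2 dt <= K(eta) * int_0^inf e^{-2 eta t} |F|^2 dt.
alpha, eta = 0.5, 2.0

t = np.linspace(0.0, 40.0, 400_001)
dt = t[1] - t[0]
F = np.exp(-20.0 * (t - 1.0)**2)      # smooth forcing pulse (arbitrary choice)

# Exact solution via the integrating factor:
#   u(t) = e^{alpha t} * int_0^t e^{-alpha tau} F(tau) dtau
g = np.exp(-alpha * t) * F
integral = np.concatenate(([0.0], np.cumsum(0.5 * dt * (g[1:] + g[:-1]))))
u = np.exp(alpha * t) * integral

w = np.exp(-2.0 * eta * t)
lhs = np.sum(0.5 * dt * (w[1:] * u[1:]**2 + w[:-1] * u[:-1]**2))
rhs = np.sum(0.5 * dt * (w[1:] * F[1:]**2 + w[:-1] * F[:-1]**2))
K = max(1.0 / (eta - alpha)**2, 1.0 / (eta - alpha))
print(lhs <= K * rhs)  # True: the generalized-well-posedness estimate holds
```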
Theorem 10.3.2. Assume that the problem (10.3.1) is well posed in the generalized sense. Then the perturbed problem

    ∂w/∂t = (P + P₀) w + F,    0 ≤ x ≤ 1,  t ≥ 0,
    w(x,0) = 0,
    L₁ w(1,t) = L₀ w(0,t) = 0,

has the same property. For hyperbolic equations P₀w is a zero-order term; for parabolic equations it may also contain derivatives of first order.
Proof. Formally, P₀w can be considered as a forcing function, and by assumption we have

    ∫₀^∞ e^{−2ηt} ( ‖w(·,t)‖² + δ ‖w_x(·,t)‖² ) dt ≤ K(η) ∫₀^∞ e^{−2ηt} ( ‖P₀w(·,t)‖² + ‖F(·,t)‖² ) dt.

By choosing η sufficiently large and, therefore, K(η) sufficiently small and recalling that δ > 0 for parabolic problems, we can move the P₀w term to the left-hand side, giving us

    ∫₀^∞ e^{−2ηt} ( ‖w(·,t)‖² + δ ‖w_x(·,t)‖² ) dt ≤ K₁(η) ∫₀^∞ e^{−2ηt} ‖F(·,t)‖² dt,    η > η₁.

The existence of a solution can also be assured and the theorem is proved.

As we will see later, it is convenient to treat each boundary by itself. For that purpose, we define the two quarter-space problems
    ∂u/∂t = P u + F,    0 ≤ x < ∞,  t ≥ 0,
    u(x,0) = 0,    (10.3.5)
    L₀ u(0,t) = 0,

    ∂u/∂t = P u + F,    −∞ < x ≤ 1,  t ≥ 0,
    u(x,0) = 0,    (10.3.6)
    L₁ u(1,t) = 0,

and the Cauchy problem

    ∂u/∂t = P u + F,    −∞ < x < ∞,  t ≥ 0,
    u(x,0) = 0.    (10.3.7)

The coefficient matrices in P are extended in a smooth way to the whole x axis so that they are constant for large |x|. We assume that the functions F have compact support in all three cases. Definition 10.3.1 of well-posedness in the generalized sense is now the same for each one of these three problems, but with the norms defined by

    ‖u(·,t)‖²_{0,∞}  = ∫₀^∞ |u(x,t)|² dx      for Eq. (10.3.5),
    ‖u(·,t)‖²_{−∞,1} = ∫_{−∞}^1 |u(x,t)|² dx   for Eq. (10.3.6),
    ‖u(·,t)‖²_{−∞,∞} = ∫_{−∞}^∞ |u(x,t)|² dx   for Eq. (10.3.7).
We can now prove the following theorem.

Theorem 10.3.3. The problem (10.3.1) is well posed in the generalized sense if the quarter-space problems (10.3.5) and (10.3.6) and the Cauchy problem (10.3.7) are all well posed in the generalized sense.

Proof. Let φ₁(x) ∈ C^∞(−∞,∞) be a monotone function with

    φ₁(x) = 1 for x ≤ 1/8,    φ₁(x) = 0 for x ≥ 1/4,

and define

    φ₂(x) = φ₁(1 − x),    φ₃(x) = 1 − φ₁(x) − φ₂(x).
Let

    uⱼ(x,t) = φⱼ(x) u(x,t),    Fⱼ(x,t) = φⱼ(x) F(x,t),    j = 1, 2, 3,

and define u₁ ≡ 0 for x ≥ 1, u₂ ≡ 0 for x ≤ 0, and u₃ ≡ 0 for x ≤ 0, x ≥ 1, and, correspondingly, define Fⱼ(x,t). Multiplying Eqs. (10.3.1) by φⱼ, j = 1, 2, 3, the problem with homogeneous initial and boundary conditions can be written in the form

    (u₁)_t = P u₁ + P₁ u + F₁,    0 ≤ x < ∞,  t ≥ 0,
    u₁(x,0) = 0,    (10.3.8a)
    L₀ u₁(0,t) = 0,

    (u₂)_t = P u₂ + P₂ u + F₂,    −∞ < x ≤ 1,  t ≥ 0,
    u₂(x,0) = 0,    (10.3.8b)
    L₁ u₂(1,t) = 0,
    (u₃)_t = P u₃ + P₃ u + F₃,    −∞ < x < ∞,  t ≥ 0,
    u₃(x,0) = 0.    (10.3.8c)

Here {Pⱼ}₁³ are bounded matrices in the hyperbolic case. For parabolic equations, {Pⱼu}₁³ are linear combinations of u and its first derivatives. Now assume that the two quarter-space problems and the Cauchy problem are well posed in the generalized sense. The solutions of Eq. (10.3.8) satisfy

    ∫₀^∞ e^{−2ηt} ( ‖u₁‖²_{0,∞} + δ ‖u₁ₓ‖²_{0,∞} ) dt
        ≤ K₁(η) ∫₀^∞ e^{−2ηt} ( ‖F₁‖²_{0,1} + ‖u‖²_{0,1} + δ ‖u_x‖²_{0,1} ) dt,    η > η₁,    (10.3.9a)

    ∫₀^∞ e^{−2ηt} ( ‖u₂‖²_{−∞,1} + δ ‖u₂ₓ‖²_{−∞,1} ) dt
        ≤ K₂(η) ∫₀^∞ e^{−2ηt} ( ‖F₂‖²_{0,1} + ‖u‖²_{0,1} + δ ‖u_x‖²_{0,1} ) dt,    η > η₂,    (10.3.9b)

    ∫₀^∞ e^{−2ηt} ( ‖u₃‖²_{−∞,∞} + δ ‖u₃ₓ‖²_{−∞,∞} ) dt
        ≤ K₃(η) ∫₀^∞ e^{−2ηt} ( ‖F₃‖²_{0,1} + ‖u‖²_{0,1} + δ ‖u_x‖²_{0,1} ) dt,    η > η₃.    (10.3.9c)

Here we have used the fact that the functions uⱼ were smoothly extended and that the coefficients of Pⱼ vanish outside the intervals 1/8 ≤ x ≤ 1/4 and 3/4 ≤ x ≤ 7/8. The inequalities are added, and η is chosen large enough so that the ‖u‖²_{0,1} and ‖u_x‖²_{0,1} terms can be moved to the left-hand side. Observing that u = u₁ + u₂ + u₃ for 0 ≤ x ≤ 1, we obtain

    ∫₀^∞ e^{−2ηt} ( ‖u‖²_{0,1} + δ ‖u_x‖²_{0,1} ) dt ≤ K₅(η) ∫₀^∞ e^{−2ηt} ‖F‖²_{0,1} dt,

where lim_{η→∞} K₅(η) = 0.
One can also use the above representation to prove the existence of solutions of the original problem. By construction, u = u₁ + u₂ + u₃ is a solution of the original problem for x ∈ [0,1]. This proves the theorem.

There are no difficulties in generalizing Definition 10.3.1 to several space dimensions. As in Section 9.6, we consider the differential equation in some domain Ω bounded by a smooth curve ∂Ω. The norms ‖·‖, |·| in Eq. (10.3.3) now represent the L₂ norm over Ω and ∂Ω, respectively. As we show later in Section 10.6, we can split such a problem into a Cauchy problem and a quarter-space problem.
10.4. SYSTEMS WITH CONSTANT COEFFICIENTS IN ONE SPACE DIMENSION

In this section, we consider systems (10.3.1) with constant coefficients. We derive algebraic conditions such that the initial-boundary-value problem is well posed in the generalized sense. As in Sections 10.1 and 10.2, we can derive a necessary condition for well-posedness.

Theorem 10.4.1. The problem is not well posed if the eigenvalue problem

    (P − sI) φ = 0,    L₀ φ(0) = L₁ φ(1) = 0,    (10.4.1)

has a sequence of eigenvalues sⱼ, j = 1, 2, ..., with

    lim_{j→∞} Re sⱼ = ∞.    (10.4.2)

Proof. Assume that there is such a sequence. Denote the corresponding eigenfunctions by φⱼ(x) with ‖φⱼ(·)‖ = 1. Then

    uⱼ(x,t) = e^{sⱼ t} φⱼ(x)    (10.4.3)

are solutions of Eq. (10.3.1) with g₀ = g₁ = F = 0. Corresponding to Lemma 10.1.1, the relation

    ‖uⱼ(·,t)‖ = e^{Re sⱼ t} ‖φⱼ(·)‖ = e^{Re sⱼ t} → ∞  as j → ∞, for every fixed t > 0,

tells us that the problem cannot be well posed. This proves the theorem.
REMARK. In Lemma 10.1.1, we found that if there is a fixed eigenvalue s with Re s > 0, then the problem (10.1.1) is not well posed. The reason for this weaker condition is that the problem (10.1.1) has no lower order terms in the differential equation or in the boundary conditions. Indeed, if such an eigenvalue exists, then it was demonstrated that there is a sequence of eigenvalues satisfying Eq. (10.4.2). If lower order terms are present, then there might be a fixed eigenvalue in the right half-plane, and yet the problem may be well posed.

The system (10.4.1) has constant coefficients, and, therefore, we can, at least in principle, solve the eigenvalue problem explicitly. However, we shall not do this because, by Section 10.3, we can simplify our task considerably. We can neglect all lower order terms and replace the strip problem by quarter-space problems. Because the substitution x′ = 1 − x transforms the left quarter-space problem into a right quarter-space problem, we now only consider the problem

    ∂u/∂t = P u + F,    0 ≤ x < ∞,  t ≥ 0,    (10.4.4a)
    u(x,0) = f(x),    (10.4.4b)
    L₀ u(0,t) = g₀,    ‖u(·,t)‖ < ∞,    (10.4.4c)

where

    P = A ∂/∂x + B    or    P = A ∂²/∂x² + B ∂/∂x + C.

By Sections 10.1 and 10.2, the eigenvalue condition becomes particularly simple if the boundary conditions are of the form of Eq. (10.1.1c) or Eq. (10.2.11c). We now have the following theorem.

Theorem 10.4.2. Assume that the boundary conditions have the form of Eq. (10.1.1c) if P = A∂/∂x or of Eq. (10.2.11c) if P = A∂²/∂x². Then the quarter-space problem (10.4.4) is not well posed if the eigenvalue problem (10.1.3) or (10.2.12) for the principal part has an eigenvalue s₀ with Re s₀ > 0.

Now we shall derive necessary and sufficient conditions for well-posedness for the general problem (10.4.4). As in the previous sections, we assume that f ≡ 0, and we Laplace transform Eq. (10.4.4) and obtain

    (sI − P) û = F̂,    L₀ û(0,s) = ĝ₀,    ‖û‖ < ∞.    (10.4.5)

We can now prove Theorem 10.4.3.
Theorem 10.4.3. The problem (10.4.4) is well posed in the generalized sense if, and only if, Eq. (10.4.5), with ĝ₀ = 0, has a unique solution for all F̂ and all s = iξ + η, η > η₀, such that Eq. (10.3.3a) holds.

Proof. If the problem is well posed, then the Laplace transform û is well defined and is a solution of Eq. (10.4.5) with ĝ₀ = 0. By Parseval's relation, it follows from Eq. (10.3.3b) that

    ∫_{−∞}^∞ ( ‖û(·, iξ+η)‖² + δ ‖û_x(·, iξ+η)‖² ) dξ ≤ K(η) ∫_{−∞}^∞ ‖F̂(·, iξ+η)‖² dξ.    (10.4.6)

Next we prove that this inequality implies the corresponding "pointwise" inequality (10.3.3a). Assume the contrary; that is, there is some point s = s₁ = iξ₁ + η₁ and a function F₁(x) with ‖F₁(x)‖ < ∞ such that the solution of

    (s₁ I − P) w = F₁,    L₀ w = 0,    ‖w‖ < ∞,

satisfies

    ‖w‖² + δ ‖w_x‖² > 2 K(η₁) ‖F₁‖².    (10.4.7)

Let

    F(x,t) = (1/√T) e^{η₁ t + iξ₁ t} F₁(x)  for t ≤ T,    F(x,t) = 0  for t > T.    (10.4.8)

Then, for s = η₁ + iξ,

    F̂(x,s) = F₁(x) φ(ξ,T),    φ(ξ,T) = ( e^{i(ξ₁−ξ)T} − 1 ) / ( i(ξ₁ − ξ) √T ).
Thus

    ∫_{−∞}^∞ ‖F̂(·, η₁+iξ)‖² dξ = ‖F₁‖² ∫_{−∞}^∞ |φ(ξ,T)|² dξ = 2π ‖F₁‖².

With F defined by Eq. (10.4.8), the system (10.4.5) becomes, for s = η₁ + iξ,

    ( (η₁ + iξ) I − P ) û = F₁(x) φ(ξ,T),    L₀ û = 0,    ‖û‖ < ∞.

Because φ is independent of x, the function v = û / φ(ξ,T) satisfies

    ( (η₁ + iξ) I − P ) v = F₁(x),    L₀ v = 0,    ‖v‖ < ∞.

The solution v is a continuous function of its coefficients, and, by Eq. (10.4.7), there is a constant ε independent of T such that

    ‖v‖² + δ ‖v_x‖² > K(η₁) ‖F₁‖²    for |ξ − ξ₁| ≤ ε.

Thus

    ‖û(·, η₁+iξ)‖² + δ ‖û_x(·, η₁+iξ)‖² > K(η₁) |φ(ξ,T)|² ‖F₁(·)‖²,    (10.4.9)

for |ξ − ξ₁| ≤ ε. For large T, the magnitude of φ(ξ,T) is small outside a small interval around ξ = ξ₁. Thus, for any ε₁ > 0, there is a T = T₀ = T₀(ε₁) such that

    ∫_{|ξ−ξ₁| ≤ ε} |φ(ξ, T₀)|² dξ ≥ 2π − ε₁.

By integrating Eq. (10.4.9), we obtain
    ∫_{−∞}^∞ ( ‖û(·, η₁+iξ)‖² + δ ‖û_x(·, η₁+iξ)‖² ) dξ
        ≥ ∫_{ξ₁−ε}^{ξ₁+ε} ( ‖û(·, η₁+iξ)‖² + δ ‖û_x(·, η₁+iξ)‖² ) dξ
        > K(η₁) ∫_{ξ₁−ε}^{ξ₁+ε} |φ(ξ, T₀)|² ‖F₁(·)‖² dξ
        ≥ K(η₁) (2π − ε₁) ‖F₁(·)‖²
        = K(η₁) ∫_{−∞}^∞ ‖F̂(·, η₁+iξ)‖² dξ − ε₁ K(η₁) ‖F₁(·)‖².
u(x, t) =
1 __. 21rl
f
Yf'
eS1 il(x, s) ds.
( 1 0.4. 10)
Here 2ff denotes the line s = i� + '7 , '7 = constant > '70 , -00 < � < 00. The theory of ordinary differential equations tells us that il(x, s) is an analytic function of s for Re s > '7 o. Also, the smoothness of F implies that there are constants Cp such that
li Fe · , s) 1I
Re s >
'70 ,
and, by Eq. ( 1O.3 .3a) Re s >
'70 .
( 10.4. 1 1 )
Thus, u(x, t) is a well-defined smooth function of x , t and we can choose any line 2ff in Eq. ( 1 0.4. 1 0) with '7 > '70 . Substituting Eq. ( 1 0.4. 1 0) into Eq. ( 1 0.4.4) shows that it solves the differential equation and satisfies the homogeneous boundary conditions. Furthermore, by Parseval's relation, Eq. ( 1 0.3 .3b) follows. It remains to be shown that the initial function is zero. We have, for any
'7 > '70 ,
    u(x,0) = (1/2π) ∫_{−∞}^∞ û(x, iξ + η) dξ,

that is,

    ‖u(·,0)‖ ≤ (1/2π) ∫_{−∞}^∞ ‖û(·, iξ + η)‖ dξ ≤ constant · C_p ∫_{−∞}^∞ (1 + |iξ + η|)^{−p} dξ.

This integral exists for p ≥ 2 and every η > η₀. By letting η → ∞, it follows that u(x,0) ≡ 0. This proves the theorem.
In Sections 10.1 and 10.2, we have shown that the eigenvalue condition is sufficient for well-posedness for hyperbolic and parabolic problems in one space dimension with standard boundary conditions. As we see in the next section, this is not true for hyperbolic problems in more than one space dimension. Also, in the discrete case, the corresponding eigenvalue condition is not sufficient for stability, even in one space dimension. This is the reason for introducing our new concept of well-posedness and the conditions (10.3.3a) and (10.3.4a). In fact, it is even needed for problems in one space dimension, if the boundary conditions are not of the standard type. Consider the system

    u_t + u_x = F,    w_t − w_x = G,    0 ≤ x < ∞,  t ≥ 0,    (10.4.12a)

with boundary conditions

    u_t(0,t) = w(0,t),    (10.4.12b)

or, alternatively,

    u(0,t) = w_t(0,t).    (10.4.12c)
The transformed equations are

    s û + û_x = F̂,    s ŵ − ŵ_x = Ĝ,    0 ≤ x < ∞,    ‖û‖, ‖ŵ‖ < ∞,    (10.4.13a)

with boundary conditions

    s û(0,s) = ŵ(0,s),    (10.4.13b)

or, alternatively,

    û(0,s) = s ŵ(0,s).    (10.4.13c)

The eigenvalue problem is

    s φ + φ_x = 0,    s ψ − ψ_x = 0,    0 ≤ x < ∞,    ‖φ‖, ‖ψ‖ < ∞,

with

    s φ(0) = ψ(0)    or    φ(0) = s ψ(0).

For Re s > 0, we have ψ ≡ 0, and, therefore, φ ≡ 0; that is, there are no eigenvalues with Re s > 0.

We now estimate the solutions of Eq. (10.4.13). With Re s = η, integration by parts gives us

    η ‖û‖² − ½ |û(0,s)|² ≤ ‖û‖ ‖F̂‖,
    η ‖ŵ‖² + ½ |ŵ(0,s)|² ≤ ‖ŵ‖ ‖Ĝ‖,

that is,

    (η/2) ‖û‖² ≤ (1/2η) ‖F̂‖² + ½ |û(0,s)|²,
    (η/2) ‖ŵ‖² + ½ |ŵ(0,s)|² ≤ (1/2η) ‖Ĝ‖².

The boundary condition (10.4.13b) implies

    |û(0,s)|² = (1/|s|²) |ŵ(0,s)|² ≤ (1/(η|s|²)) ‖Ĝ‖²,
and, therefore, the estimate (10.3.3a) [with ‖F̂‖² replaced by ‖F̂‖² + ‖Ĝ‖²] holds and the problem is well posed in the generalized sense. On the other hand, the boundary condition (10.4.13c) gives us

    |û(0,s)|² = |s|² |ŵ(0,s)|² ≤ (|s|²/η) ‖Ĝ‖²,

that is,

    ‖û‖² ≤ constant ( (1/η²) ‖F̂‖² + (|s|²/η²) ‖Ĝ‖² ).    (10.4.14)

One can prove that this estimate is sharp and, therefore, that the estimate (10.3.3a) does not hold because lim_{η→∞} K(η) ≠ 0. One might think that the definition of well-posedness could be changed such that the last case would be included. However, consider the strip problem
    u_t + u_x = F,    w_t − w_x = G,    0 ≤ x ≤ 1,  t ≥ 0,
    u(x,0) = w(x,0) = 0,

with boundary conditions

    u(0,t) = w_t(0,t),    w(1,t) = u(1,t).
The corresponding eigenvalue problem is

    s φ + φ_x = 0,    s ψ − ψ_x = 0,
    φ(0) = s ψ(0),    ψ(1) = φ(1),

that is,

    φ(x) = e^{−sx} φ(0),    ψ(x) = e^{sx} ψ(0),

where

    φ(0) = s ψ(0),    e^{s} ψ(0) = e^{−s} φ(0).
Then s is an eigenvalue if, and only if,

    s = e^{2s}.    (10.4.15)

This equation has solutions with arbitrarily large Re s (see Exercise 10.4.2), and, therefore, the strip problem is not well posed in any computationally suitable meaning.

One can explain geometrically what happens. The characteristics that support w leave the strip at the boundary x = 0. To obtain u(0,t), we have to differentiate w; that is, we lose one derivative. The value of u is transported to the other boundary, and there it is transferred to w through the boundary condition. This value is again transported to the boundary x = 0 and loses another derivative when its value is transferred to u. Thus, we lose more and more derivatives as time increases.

EXERCISES

10.4.1. Prove that the estimate (10.4.14) is sharp, that is, that Eq. (10.3.3a) does not hold.

10.4.2. Prove that Eq. (10.4.15) has solutions s with arbitrarily large Re s.

10.4.3. Prove by direct calculation that the eigenvalues s of

    s φ + φ_x = 0,    s ψ − ψ_x = 0,    0 ≤ x ≤ 1,
    s φ(0) = ψ(0),    ψ(1) = φ(1),

satisfy Re s ≤ η₀ = constant, in agreement with the generalized well-posedness of Eqs. (10.4.12a) and (10.4.12b).
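The growth of Re s along the solution branches of Eq. (10.4.15) can be seen numerically. The sketch below (an illustration; the branch parameterization 2s = Log s + 2πik is an assumption about how to index the roots) uses a contractive fixed-point iteration:

```python
import cmath, math

def root_on_branch(k, iterations=200):
    # Solve s = e^{2s}, i.e. 2s = Log(s) + 2*pi*i*k on branch k, via the
    # contraction s -> (Log(s) + 2*pi*i*k)/2 (contraction factor ~ 1/(2|s|)).
    s = 1.0 + 1j * math.pi * k
    for _ in range(iterations):
        s = 0.5 * (cmath.log(s) + 2j * math.pi * k)
    return s

for k in [1, 10, 100, 1000]:
    s = root_on_branch(k)
    print(k, s.real, abs(s - cmath.exp(2 * s)))
```

The real parts grow like (1/2) ln(2πk) along the branches: slowly, but without bound, which is exactly why no estimate of type (10.3.3b) can hold for the strip problem.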
10.5. HYPERBOLIC SYSTEMS WITH CONSTANT COEFFICIENTS IN SEVERAL SPACE DIMENSIONS

In this section, we consider hyperbolic systems

    u_t = A u_x + B u_y + F =: P( ∂/∂x, ∂/∂y ) u + F    (10.5.1a)

in the quarter space x ≥ 0, −∞ < y < ∞, t ≥ 0. For t = 0, we give initial data

    u(x,0) = f(x),    x = (x, y),    (10.5.1b)

and, at x = 0, we prescribe boundary conditions

    L₀ u(0, y, t) = g(y,t),    −∞ < y < ∞,    ‖u(·,t)‖ < ∞,    (10.5.1c)

which are of the same form as in Eq. (10.1.1c). We assume that all the coefficients are real and A is nonsingular. We also assume that F and g are 2π-periodic in y, and we consider solutions that are 2π-periodic in y. We want to derive algebraic conditions guaranteeing that the above problem is well posed or strongly well posed in the generalized sense. Corresponding to Eqs. (10.3.3) and (10.3.4), the estimates are now defined as integrals over the domain 0 ≤ x < ∞, 0 ≤ y ≤ 2π. Corresponding to Lemma 10.1.1, we now have the following lemma.
Lemma 10.5.1. Consider Eq. (10.5.1) with F ≡ g ≡ 0. The problem is not well posed if we can find a complex number s with Re s > 0, an integer ω, and initial data

    u(x,0) = e^{iωy} φ(x),

such that

    u(x,t) = e^{st + iωy} φ(x)    (10.5.2)

is a solution of Eq. (10.5.1) (the Lopatinsky condition).

Proof. If Eq. (10.5.2) is a solution, so is

    u_n(x,t) = e^{snt + iωny} φ(nx),

for any positive integer n. Therefore, we obtain solutions that grow arbitrarily fast and the problem is not well posed.

We now give conditions such that solutions of the form of Eq. (10.5.2) exist. Substituting Eq. (10.5.2) into Eq. (10.5.1) gives us

    s φ = A φ_x + iω B φ,    0 ≤ x < ∞,    (10.5.3a)
    L₀ φ(0) = 0,    ‖φ‖ < ∞.    (10.5.3b)

As before, we have the lemma.
Lemma 10.5.2. There is a solution of the form of Eq. (10.5.2) if, and only if, for some fixed ω, the eigenvalue problem (10.5.3) has an eigenvalue s with Re s > 0.

REMARK. ω need not be an integer because, if we have a solution for s, ω, then we also have a solution for s/|ω|, ω/|ω| = ±1.

By assumption, A is nonsingular, and we can write Eq. (10.5.3a) in the form

    φ_x = M φ,    M = A⁻¹ ( sI − iωB ).

We need the following lemma.

Lemma 10.5.3. Assume that the system (10.5.1) is strongly hyperbolic. Then there is a constant δ > 0 such that, for Re s > 0, the eigenvalues κ of the matrix M satisfy the estimate

    |Re κ| ≥ δ |Re s|.    (10.5.4)

Proof. Let
β be a real number, and consider

    (M − iβI)⁻¹ = ( A⁻¹(sI − iωB) − iβI )⁻¹ = ( sI − iωB − iβA )⁻¹ A.

By assumption, the system is strongly hyperbolic, and, therefore, there is a transformation T = T(ω, β) with sup_{ω,β} ( |T| |T⁻¹| ) < ∞ such that

    T⁻¹ ( ωB + βA ) T = Λ = diag( λ₁, ..., λ_m ),    λⱼ real.

Thus,

    ( sI − iωB − iβA )⁻¹ A = T ( sI − iΛ )⁻¹ T⁻¹ A,

that is,

    |(M − iβI)⁻¹| ≤ |T| |T⁻¹| |A| · |( sI − iΛ )⁻¹| ≤ δ⁻¹ |Re s|⁻¹,    δ⁻¹ = |A| |T⁻¹| |T|,

which implies |κ − iβ| ≥ δ |Re s| for every eigenvalue κ of M. Because β is arbitrary, we choose β = Im κ, and Eq. (10.5.4) follows.
The last lemma gives us the following lemma.

Lemma 10.5.4. Assume that the system (10.5.1a) is strongly hyperbolic. For Re s > 0, the matrix M has no eigenvalues κ with Re κ = 0. If A has exactly m − r negative eigenvalues, then M has exactly m − r eigenvalues κ with Re κ < 0, for all s with Re s > 0 and all real ω.

Proof. The first statement of the lemma is a weaker statement than Eq. (10.5.4). The eigenvalues κ of M are continuous functions of ω. Therefore, the number of κ with Re κ < 0 does not depend on ω, since Re κ cannot change sign if we vary ω. In particular, for ω = 0, we obtain

    M = s A⁻¹,

and the second statement of the lemma follows.

Assume for a moment that the eigenvalues of M are distinct and denote by κ₁, ..., κ_{m−r} the eigenvalues with Re κ < 0. Then, the general solution of Eq. (10.5.3a), belonging to L₂, can be written in the form

    φ = Σ_{j=1}^{m−r} σⱼ yⱼ e^{κⱼ x}.
Here the yⱼ are eigenvectors satisfying

    M yⱼ = κⱼ yⱼ.

Substituting this expression into the boundary conditions gives us a linear system of equations for σ = ( σ₁, σ₂, ..., σ_{m−r} )ᵀ, which we write in the form

    C(s, ω) σ = 0.    (10.5.6)

There is a solution of the form of Eq. (10.5.2) if Eq. (10.5.6) has a nontrivial solution. If the eigenvalues of M are not distinct, then we can still write the general solution, belonging to L₂, in the form

    φ = Σⱼ φⱼ(x) e^{κⱼ x},    (10.5.7)
where now the φⱼ(x) are polynomials in x with vector coefficients containing altogether m − r parameters σⱼ. Therefore, we also obtain a linear system of type (10.5.6) in this case. We have shown the following theorem to be true.

Theorem 10.5.1. The initial-boundary-value problem (10.5.1) is not well posed if, for some s with Re s > 0 and some ω,

    Det ( C(s, ω) ) = 0.
Now assume that the eigenvalue problem ( 10.5.3) has no eigenvalue s with Re s > O. We want to show that we can solve the initial-boundary-value problem using Fourier and Laplace transforms. As before, we assume that the initial data are zero. By assumption, the data are 21r-periodic in y, that is, we can expand them into Fourier series with respect to y. For example, 00
F(x, t)
L
w = - oo
F(x, w, t)eiwy .
Therefore, we can also expand the solution into a Fourier series 00
u(x, t)
L
w ::= -oo
u(x, w, t)eiwy .
Substituting this expression into Eq. ( 10.5 . 1 ) gives us, for every frequency a one-dimensional problem
A ux + iwBu + F, u(x, w, 0) = 0, Lou(O, w, t) = g(w, t). Ut
w,
=
( 10.5.8)
We can solve Eq. ( 10.5.8) using the Laplace transform. The equation
u(x, w , s)
=
1000 e-stu(x, w ,t)dt
satisfies
su Aux + iwBu + fr, Lou(O, w, s) = g(w, s). =
lI u ll < 00, ( 10.5.9)
THE LAPLACE TRANSFORM METHOD
432
By assumption, the eigenvalue problem ( l0.5.3) has no eigenvalue with Re s > 0, and, therefore, we can solve Eq. ( l0.5.9) for Re s > 0 and every w . Inverting the Laplace and Fourier transforms gives us the desired solution. By Parseval's relation, the estimates ( l0.3.3a) and ( l 0.3.4a) now take the form ( l 0.5. 1 O) and «(l 0.5. 1 1 ) respectively. Here K(q ) does not depend on w . We now consider the case where == 0 and write the differential equation ( l 0.5.9) in the form
F
Ux
=
7A- 1 (s' - iw'B)u -. 7Mu,
Lou(O, w , s)
=
g(w , s),
( l 0.5. 1 2)
where s
s
,
7
w
,
W
7
By Lemma 10.5.4, for every s', w ' with Re s' > 0, the eigenvalues K of M split into two groups. By Schur's lemma, we can find a unitary transformation U = U(w ' , s') such that U* (w' , s')M(w ' , s')U(w ' , s') =
[ �ll
where the eigenvalues K of M I I and M2 2 satisfy Re K 0 and Re K > 0 , respec tively. Substituting a new variable w = U*u into Eq. ( 1 0.5. 1 2), we obtain
<
A A A WxI = 7M I I WI + 7M 1 2Wll,
W�I
=
Lo Uw
7M22 wll ,
=:
C/(w' , S')w/(O , w , s) + CI l(w ', S')wll(O, w , s) = g.
Since we are interested in solutions with have a positive real part, it follows that
Il w ll
( l 0.5. 1 3)
< 00 and the eigenvalues of
M22
HYPERBOLIC SYSTEMS WITH CONSTANT COEFFICIENTS
�p
l1/(X, w, s) = eTM I I X �/(O, w, s), Cl (w', s'),vI w, s) = g.
433 ;:
0,
(0, (10.5.14) There there exist s:, w: with Re s: ° and sequencesare s�,twow�possibilities: with limp Firstly, s� s:, limp w: w: such that -7 �
-7 �
=
�
=
(10.5.15) One that we fore, canEq. prove (10.5 .15) holdscanif,choose and onlysuch if, that it is continuous at w:, s:. There U
If Reand, s: > 0, then the homogeneous equations (10. 5 .14) have a nontrivial solu tion therefore, s S: are eigenvalues of the eigenvalue problem (10. 5 . 3 ) for w : Thus, the problem is not well posed in any sense. If Re s. 0, then wetoobtain a solution ofsome Eq. (10.5. 3a)eigenvalues that satisfiesofLolPM 1 might ° but might not belong 1.:2(0,00) because of the be purely l imaginary [cf. Eq. (10.5.4)]. We make the following definition. 10.5.1. .([Det (Cl(w:, s:)) 0, where s: is purely imaginary, then s. defined by s. S' vls. 1 2 + w 2 is called a generalized eigenvalue of the eigen value problem (10.5.3) if II IP I I rE. L2 (0,00). Evento Lif (s:0, is purely imaginary, the corresponding eigenfunction IP might belong that is, Re r. In such a case s. < 0, 1, . . . , m Kp 2 is an eigenvalue. The theoryaxisforis incomplete. the case withIngeneralized eigenvalues, or eigenvaluesproblem on the imaginary some cases the initial-boundary-value isin well posed inforthedifference generalizedapproximations. sense, in other cases it is not. We discuss this more detai l Secondly,or equivalently, the alternativethatto theEq.determinant (10.5 .15) is condition that (C/ (w',is fulfilled: S')) - 1 is unifonnly bounded, Det (C (w', s')) 0, I w'l 1, I s' l $ 1, Re s' � 0. (10.5 .16) This of thelemma. Lopatinsky condition given in Lemma 10.5 .1.is Wea strengthened now have theversion following =
= T
TW .
=
=
Definition
=
=
REMARK.
00) ,
I
*
II =
$
10.5.5. Consider the initial-boundary-value problem (10.5. 1) with f = F = ° and with g satisfying S; S�1I" I g(y, t)1 2 dy dt < 00. Then there is a constant
Lemma
THE LAPLACE TRANSFORM METHOD
434
0
K > such that its solutions satisfy
f- f & l u(O, y, t) 1 2 dy dt � K f- f& I g( y, t) 1 2 dy dt o
0
0
(10.5.17)
0
if, and only if, Eq. (l0.5.16) holds. First assume that Eq. (10. 5 .16) holds. Then, because (CI t l is uniformly bounded, the solution w of Eq. (10.5 .13) satisfies (l0.5 .ISa) Res > 0, where(10.K5is.9)awith constant independent of s. The vector function u = Uw satisfies Eq. F 0, and, because U is a unitary matrix we have l ui I w l . Therefore,
Proof
=
w,
=
(l0.5 .ISb)
Re s > O.
By Parseval's relation, this inequality implies 1/
>
O.
Because the right-hand side(l0.is5 .17) independent of 1/ ,byEq.Parseval (10.5.17)'s relation, follows. we get Next assume that Eq. holds. Then, the corresponding integral inequality in the Fourier-Laplace space, and,arbias demonstrated above, this leads to the pointwise estimate (l0. 5 .lSb) for trary this estimate, obtain Eq. (l0.5 .ISa), which is equivalent to Eq. (10.g. 5From .16). This proves thewelemma. We now make the fOllowing definition. 10.5.2. Consider the system (l0. 5. 9) for F = O. /fits solutions satisfy Eq. (l O. 5. 18b), we say that it satisfies the Kreiss condition. BecauseRethes constant K is independent of s', one might think that the condition 0 could beisreplaced by Re s O.selects However, the reason for keeping the strict inequality that it automatically the correct general solution u through the condition Il u ll < because the exponentially growing part is annihilated. Definition
' w ,
�
>
00,
HYPERBOLIC SYSTEMS WITH CONSTANT COEFFICIENTS
435
Using the arguments above, we get the following lemma. The Kreiss condition is satisfied if, and only if, the eigenvalue roblem (10.5.3) has no eigenvalue or generalized eigenvalue for s � O. p Lemma 10.5.6.
Re The main result of the theory is presented in the following theorem.
Assume that Eq. (lO.5.la) is a strictly hyperbolic system. If the Kreiss condition is satisfied, then the initial boundary value problem is strongly well posed in the generalized sense.
Theorem 10.5.2.
Wetransformation will not give aprocess proof here. In applications it is not necessary toThego through thesolution leading to the formulation (10. 5 .13). general £I of Eq. (10. 5 . 9 ) with 11 £1 11 < 00 for F = 0 and Re s 0 is obtained just as for the eigenvalue problem, and we arrive at a system C(s, w)O" = (10.5.19) Res > 0, where is thecondition matrix occurring in Eq.to (10.5.6). With the proper normal ization,C(s,thew)Kreiss is equivalent >
g,
(10.5.20) 0, Re s � O. Note that C(s,thew imaginary must always befromdefined for ReWes demonstrate = 0 as a limit when s is approaching axis the right. the procedure in anIf aexample at the end of this section. hyperbolic problem is sense. well posed, then weconjecture have proved thatconverse it is alsois well posed in the generalized One might that the also true, butwe nohavegeneral results aretheorem. known. However, for strictly hyperbolic equations, the following Det (C(s, w»
-:/:-
)
Theorem 10.5.3. Assume that Eq. (1O.5.la) is strictly hyperbolic or symmetric hyperbolic. If the Kreiss condition is satisfied, then the initial-boundary-value problem is strongly well posed.
We willweonlycanprove this that result for symmetric hyperbolic systems. With out restriction, assume
Proof.
436
THE LAPLACE TRANSFORM METHOD
°
Ar
]
>
..
0,
.
°
Am
]
< 0,
is diagonal. We first solve an auxiliary problem ul = Aux + Buy, u(X, O) = f(x), dI (O, y, f) = 0,
lI u( . , t) 1I < 00. (10.5.21)
For its solution, we have the energy estimate dt
d
II u ll 2 +
that is,
f27r0 (u(O, y, f), Au(O, y, f» dy
=
0;
lI u( · , f) 1I 2 � ll u( - , 0) 1l 2 = I If( - ) 1I 2 , 2 2 (u(O, y, f), Au(O, y, f» dy df, u(O, y, f)1 2 dy df < Aj) l o o 1 �J � r 0 0 � Il f( · ) 11 2 . (10.5.22)
( min f T f 11"
Tf f 11"
We assume that u(x, f) is a smooth function of x, f. The difference w = U - u satisfies wt = Awx + Bwy, w(x, O) = 0, Low(O, y, f) = g( y, f),
g = -Lou(O, y, f),
I I w( - , f) 11 < 00.
(10.5.23)
If the Kreiss condition is satisfied, then
�o 211" I W(O, y, f)1 2 dy df f f0 2 � constant � 11" I g( y, f) 1 2 dy df � constant I I f( - ) 1 1 2 .
ff 0
0
Thus, we can estimate solution ofw(Eq.. , f)(1 0.5.23) on the boundary. We can use integration by partstheto estimate II I I and obtain
HYPERBOLIC SYSTEMS WITH CONSTANT COEFFICIENTS
d
dt
J 2" (W(O, y, t), Aw(O, y, t» dy, 2" � constant J I w(O, y, t) 1 2 dy;
II w l1 2 =
-
437
0
0
that is, constant J: J:" I w(0, y, t) 1 2 dy dt � constant l f(· ) 1I 2 • (10.5.24) The estimates (10.5.2hyperbolic 2) for v yieldsystems the finaltheestimate for u = v + w, which showsis that for symmetric initial-boundary-value problem strongly posedweif thenowKreiss As anwell example, discusscondition the systemis satisfied. II w( - , T) 1I 2 �
[ uu(2( l )) ] , (l0.5.25a)
u
with boundary conditions u( l ) (O, y, t)
=
au( 2\0, y, t) +
g,
(10.5.25b)
where a is a complex constant. Integration by parts gives us d J 2" «u 0,y, t), [ -1 01 ] u(0, y, t» dy, Il u l ,' 2 dt
- 0
°
J:" ( l u(l )(O, y, t) 1 2 - l u(2)(0, y, t) 1 2) dy, ( l a l 2 1) J:" I U(2) (0, y, t) 1 2 dy. -
Thus,We wewantobtain an energy estimate foralsol alestimate � 1. to discuss whether we can the solution of a using the Laplace transform. The eigenvalue problem (10.5.for3) hasotherthevalues form
THE LAPLACE TRANSFORM METHOD
438
CPx =
[ -lW-.s iWS ] ",
..,.. =: Mcp ,
I cp ll
For Re s
>
<
00.
0, M has exactly one eigenvalue
K _ Vs2 + w 2, Re K < 0, with negative real part. The corresponding eigenvector is given by =
Therefore, Thus, the problem is not well posed if the relation Res > 0, W real, (10.5 .26) has aonlysolution. A simple calculation shows that Eq. (10.5.26) has a solution if, and if, l a! > 1, 1m a -J- o. In that case, the problem is not well posed. We have already theshowncasethata the> 1,problem is well posed if !a! $; 1. Thus, we need only discuss l l where a is real. The problem (10.5.1 2) has the s + VS2 + w 2 = iaw ,
form
�
Ux
=
[ -lW -s iW ] S .
�
U,
�
U =
[ u� ((l2)) ] , U
u( l)(O, w,s) = au(2)(0, w, s) + g (w, s), l Ju( · ,w,s)!1 < 00, (10.5.27) wheregeneral we havesolution kept theof original variablesequation, s, w instead of the scaled ones s', w'. The the differential belonging to L2, is given by �
U = (J (J
[ s + VSlW. 2 + w2 ] e
is determined by the boundary condition
v - s2 + w2x .
HYPERBOLIC SYSTEMS WITH CONSTANT COEFFICIENTS
439
u(s + VS2 + W 2 - iaw) g; =
that is,
[ S + V.lWS2 + W2 ] e-v'S2 + W2X . g S + VS2 + w 2 - iaw
u
(O, w, s)I/lg(w, s) 1 is unbounded. Thus, the Kreiss condi We now showsatisfied. that luChoose tion is not the sign of w so that wa = I w a l and detenmne �1 from >
1
�1 +
Let s
=
i l w l� 1 + 1'/ , 1'/ « I w l . Then
lim
� = l a l·
( I U(2)(O, w, s)I/lg(w, s) l ) lim liw/(s + Vs2 + w 2 - iaw) 1 I w l �� lim Iw/li lw 1�1 Iw l �� + 1'/ + I w l V1 - �? + 2i�I1'//lw l (1'//w )2 - il aw l l l = lim Iw/ lilwl� 1 1'/ I w l �� ilw l � (l i�I1'//(l w l ( l - �f )) + &«1'/ /w)2)) - il aw I I I = lim Iw/l1'/ + i�I1'// V� &(1'/ 2/w) 1 1 00. I - �l Iwl ��
Iwl ��
+
+
+
+
+
Thus, we cannot obtain the estimate ( l O.5.18b), and the problem is not strongly well posed. One can show that it is also not well posed in the generalized sense. In thethe example above, energy estimates andconditions. Laplace transform techni ques yield same restriction for the boundary Generally, however, Laplace transform techniques give a much wider class of admissible bound ary conditions.
THE LAPLACE TRANSFORM METHOD
440
EXERCISES
Prove that Eq. (10.5.26) has a solution if, and only if, l a l > 1, m a O. 10.5.2. Prove that the problem (10. 5 . 2 5) is not well posed in the generalized sense5.2for5a) and l a l > 1, a real. [Hint: Add a forcing function F to Eq. (10. prove that Eq. (10.3.3a) is not satisfied.] 10.5.1.
I
-::/:.
10.6. PARABOLIC SYSTEMS IN MORE THAN ONE SPACE DIMENSION
Technically,literature we canonproceed as forwehyperbolic equations.toBecause there is an extensive the subject, restrict ourselves a simple example. We consider x (x, y),
Ut = Un + Uyy + F(x, t), u(x, O) = f(x), Lu(O,y, t) = g(y, t),
=
(10.6.1) the quarter space xoperator � 0, < y < t � O. L L(a/at, a/ax, a/ay) represents ainequations, linear difweferential with constant coefficients. Using the differential can eliminate /axp, where � 2. As before, we consider solu tionsProblems that arethat21r-periodic in y. or strongly well posed in the generalized sense, are well posed, are defined before. As adifferential test for well-posedness, construct special solu tions of the ashomogeneous equations of theweform (10.6.2a) U = eiwy +stcp (x) which satisfy (10.6.2b) (0) O. Substituting Eq. (10.6.2) into the differential equations gives us the eigenvalue problem (s w2) cp = CPx:n 0 � X < (10.6.3a) Il cp C ) 11 < (10.6.3b) L(s, a/ax, iw) cp (0) O. The boundary condition is of the form a(iw,s) cpAO) b(iw, s) cp (O) 0, where a and b are polynomials in s and iw. As before, we have the next lemma. - 00
00,
()P
=
p
Up
+
=
00,
00,
+
=
441
PARABOLIC SYSTEMS IN MORE THAN ONE SPACE DIMENSION
The initial-boundary-value problem (10.6.1) is not well posed if there is a sequence of eigenvalues Sj with Sj ---') 00.
Lemma 10.6.1.
Re For Res > 0, the general solution of Eq. (10.6.3a) belonging to L2 is
Therefore, the eigenvalues s are the solution of L(s, -"';;;-:;,;; iw) = -a(iw, s) "';;;-:;;; + b(iw, s)
0. (10.6.4) We nowEq.assume thatusingthereLaplace are noandeigenvalues s with Res > Then we can solve (10. 6 .1) Fourier transforms. Assuming thatf(x) 0, the transformed equations are =
'1/ 0.
)A
A FA <
2 + W U = uxx + , Ilu( - , w, s) 1 00, L(s, a/ax, iw)u(O, w,s) =
(
S
==
g(w,s).
(10.6.5) Now we can proceed as in Section 10.2. We write Eq. (10.6.5) as a first-order system by introducing a new variable v by Ux � v and obtain =
(10.6.6a) a(iw, s) "';;;-:;;; v(O, w, s) + b(iw, s)u(O, w, s) g(w, s).
(l0.6.6b)
The change of variables
] [ 1 = Y = y( l ) y<2)
[ transforms Eq. (10.6.6a) into the diagonal system / ] 1 � [ -1 0] , Yx = s + w 2 v
v'2
y
-
1 v2(s + w 2) and the boundary conditions (10.6.6b) into °
(10.6.7a)
THE LAPLACE TRANSFORM METHOD
442
(10.6.7b) where (-a(iw, s) � + b(iw, s))/ V2, (a(iw, s) � + b(iw, s))/ V2.
rl r2
As infollowing Section theorem. 10.2, we can now estimate the solution of Eq. (10.6.7) and obtain the 10.6.1. Assume that there are constants � 0, 0 0, C � O such that, for all s with Re s Theorem
TJo
w,
>
> TJ 0 ,
1 :� 1
h i � 0,
� c.
(10.6.8)
Then the initial-boundary-value problem is strongly well posed in the general ized sense.
Now we will discuss different boundary conditions. 1. Dirichlet and Neumann Conditions: u g or g, x 0. In this case a 0, b = 1 or a 1, b = 0, respectively. The conditions of Eq. (10.6.8) are satisfied. 2. Oblique Boundary Conditions: + {3uy g, x 0. Now a = 1, b i{3w. Therefore, =
Ux =
=
(10.6.9)
=
Ux
=
(10.6.10)
=
rl
=
-� + i{3w,
r2
=
� + i{3w.
A simple calculation shows that the conditions (10.6.8) are satisfied.
3. Boundary Conditions Containing Time Derivatives:
443
EXERCISES
g,
Ut + Ux
(10.6.11)
o.
x
In this case, r} = -� + s,
Ifnots well = 1 + Vi + w 2 , then r} = 0 and, by Lemma 10. 6 .1, the problem is posed. Alternatively, if we use the boundary conditions (10.6.12) x = 0, = g, we obtain Ut - Ux
r } = � + s,
r2 = � + s, -
and a simple calculation shows that the conditions (10.6.8) are satisfied. One can "stabilize" the first boundary conditions by replacing them by (10.6.13) = g. + Now the conditions (10.6.8) are satisfied. Ut
Ux - EUyy
EXERCISES 10.6.1.
Prove that6.10), the conditions .8) are6.13).satisfied for the boundary condi tions (10. (10.6.12), (10. and6(10.
10.7. SYSTEMS WITH VARIABLE COEFFICIENTS IN GENERAL DOMAINS
As wedomains have seen insmooth Sectionboundaries 9.6, the can initial-boundary-value problemproblem in gen eral with be reduced to the Cauchy and quarter-space problems where the coefficients oftangential the differential equation and the boundary conditions are 21r-periodic in the variables. The quarter-space problems with variable coefficients can, for large classes of hyper bolic,with parabolic, andcoefficients. mixed hyperbolic-parabolic problems, be reduced to sys tems constant As ofexample, considerconditions the system (10.5smooth .1), wherefunctions A and and the coef ficients the boundary are now of all with variables. Assume that the system is strictly hyperbolic. Consider all systems conan
B
THE LAPLACE TRANSFORM METHOD
444
stant coefficientsIf, byfor "freezing" the coeffiwithcientsconstant at everycoeffiboundary pointestimate0, all these systems c ients, the (10.5.16) holdsposed uniformly, then the problem with variable coefficients is also strongly well in the generalized sense. Similar problems results holdcanforalsoparabolic andbymixed hyperbolic-parabolic systems. Nonlinear be solved linearization and iteration. For more details we refer to the literature.
Y = Yo, t to. =
x =
BmLIOGRAPHIC NOTES
InMach-numbers. Gustafsson andTheKreiss (1983), the example Euler equations areLaplace analyzedtransform for low result is another where the technique shows well-posedness formethod. a wider class of boundary conditions than theOther one obtained with the energy examples where analysis ofintheHenshaw, type described inandthisReyna chapter(1994) has been applied to "real" problems are found Kreiss, and Johansson (1991a, 1991b, 1993). For references concerning the general theory, we refer to Kreiss and Lorenz (1989).
11 THE ENERGY METHOD FOR DIFFERENCE APPROXIMATIONS
11.1. HYPERBOLIC PROBLEMS
In 2this.1) and section, we) andwantderive to consider difference approximations of the system (9.these (9. 2 . 2 discrete energy estimates. In the continuous case,in energy estimates were obtained using the integration by parts rules Lemma 9.2.1. Weofneed corresponding summation-by-parts rules for the discrete approximations a/ax. To derive these rules, we divide the interval ° S; x 1 into subintervals of length h = liN, where N is a natural number. Introduce gridpoints Xj = jh, j= O, l, ... ,N, and gridfunctions S;
The simplest scalar product and nonn are defined by (u, v)r, s =
s
L ujvjh,
Il ull�, s = (u, u)r, s .
j= r
or, in the case of vector valued functions, (u, v)r, s
=
s
L (Uj , vj )h. j= r
We now have the following lemma, which corresponds to Lemma 9.2.1. 445
THE ENERGY METHOD FOR DIFFERENCE APPROXIMATIONS
446
Lemma 11.1.1.
+I -(D_ u, v), + I,s + I + uj vj l : , -(D+ u, v)" s - h(D+ u, D+ v)" s (u , Dov)" s Fj + ,, 11
+
UjVj 1: -(Dou, v)" s + 1 (UjVj + I + Uj + I Vj) I : _ I '
+I
,
Fk + " - Ff+ " .
Proof.
s+ 1
s
L UjVj + 1
L
j= , + 1
j=,
j=r+ I -(D- u , v)r + I , s + I
s
- L (Uj + 1 + 2(u , Dov)r, s
j=r -UjVj I Sr + I ,
+
Uj _ I Vj
S+ I u,v' J J I,
- U)Vj -
s
L (Uj + 1 j=r
- U)(Vj + 1 - Vj )
-(D+u, v)" s s s- I s s+ I = _ _ Uj + I Vj, Uj I Vj UjVj 1 UjVj + I j= , + I j= r - I j=r j=r s (Uj + 1 - Uj - dVj + (Us Va I + Ua l Vs) j=,
L
L
L
L
-L
- (u, _ I v, + u,v,_ d, -2(Dou, v)" s + (UjVj + 1
+
Uj + I Vj) I : _ I '
This proves the lemma. nowthederive estimates for simple difference approximations. WeWebegincanwith scalarenergy equation (11.1.1) � 0, o � x � 1, with initial and boundary data (see Figure 11.1.1)
HYPERBOLIC PROBLEMS
447
u(l,t) g(t) =
u(x,O)
=
x
f(x)
Figure 11.1.1.
u(x, O) = I(x),
u(l ,
t) get). =
(11.1.2)
We approximate Eqs. (11.1.1) and (11.1.2) by (11.1.3) j = O, l, . . . , N - 1, with initial and boundary conditions j = O, l, ... ,N - 1, VN(t) = get). (11.1.4) We can(11.1.eliminate Eq. (11.1.an3) using thevalueboundary condition. Therefore, VN(11.1.from4) represent Eqs. 3 ) and initial problem for N ordinary dif ferential equations in N unknowns. Lemma 11.1.1 gives us the energy estimate :t I vI 6, N - l ( �� 'V) O. N _ l (D+v, V)O,N - hIlD+vII6 N- l =
= =
-1
( v, �� ) O. N - l ' (v, D+V)O,N I VN(t) 1 2 - I vo(t) 1 2 ::;; I g(t) 1 2. (11.1.5)
+
+ +
-I,
Therefore, I v(t)1I6.N - ! ::;; 1 / 1I6.N - !
+
t Ig(r) 1 2dr.
(11.1.6)
THE ENERGY METHOD FOR DIFFERENCE APPROXIMATIONS
448
The simplest time discretization is the forward Euler scheme wt (I + kD+)w,] , j = 0, . . . ,N 1, W ? = Ij, (11.1.7) w;z, = g n . Now we obtain, for A = kjh ::; 1, using Lemma 11.1.1, I w n + 1 1 l6 N_ l 1 (1 + kD+)w nI l6 N _ I' = I w nI l6, N_ l + k2 I 1D+w nIl6, N_ l + k ((w n ,D+w n )O, N _ l + (D+wn ,W n )O,N_ j} , IlwnI16, N_ l - (hk - k2)I D+ w nIl6, N_l + klw�1 2 - kl wol2, (11.1.8) ::; I w nl l6 N - I + k l g nl 2 ; that is, - 1 I g nl2k. I w nI l6, N _ 1 ::; 1 1 1 I6,N - l + nL v =O Thus,corresponds we also obtain an energy estimate for the fully discretized approximation that to Eq. We approximate the problem ° ::; x ::; 1, t 0, U(X, O) = I(x), u(O, t) = get), by 1 =
1,
-
]
=
=
( 1 1 . 1 .6).
�
h, j 1,2, . .. ,N, vo(t) = get), and the forward Eulerobtain scheme for time discretization. Instead of Eqs. (11.1.again 5) anduse(11.1.8), we now =
=
HYPERBOLIC PROBLEMS
449
and Ilw n + I IIT,N
11(1 - kD_)w n lli N ' IIw n llT, N - (hk - e)IID _w n ll i,N $; Ilw n lli,N + k l g n l 2 ,
k l w� 1 2
+
k l w� 1 2 ,
01 . 1.9)
respectively. Next consider the system with constant coefficients au 0 $; x $; l , t = Aux. at u(x, O) = f(x), u Il(O, t) R I u I (O, t) + g Il(t), u I O , t) R IlU Il O, t) + g I (t),
� 0,
=
01.1.10)
where is a diagonal those above ismatrix given with by AI 0 and All < O. The approximation analogous to >
j = O, I, . .. , N - 1, j 1,2, . . . , N,
dUll dt Uj (O) = Ii, U�I(t) R I da(t) + g Il(t), u�(t) = R Ilu I,J(t) + g I (t). ]
(11.1.11) Referring back to Section 9.2, we can assume that I RI I and I R Il I are as small as necessary. We introduce the scalar product and obtain, from Eq. 01.1.11) and Lemma 11.1.1,
THE ENERGY METHOD FOR DIFFERENCE APPROXIMATIONS
450
:t Ilul � (A1D+ UI , UI )0,N_ I (ul ,A1D+ ul)0,N_ I =
+
(AIlD_ UI1l, UIl) I, N (ull,AIlD_ UIl) I , N ll,AIlD_ UIl)I , N = -h(D+ u l , A D+ u l )0,N _ I h(D_u 1 (uj,Au)l � � constant ( l g 1 2 I g Il 1 2), +
+
+
+
+
that is, I I and I RIl I are sufficiently small. provided I R If A Cl is a function of x, t, then we obtain E
where maxx results for periodic probloperator ems.) 1 Al x I . (Seel theO.corresponding We defi n e the sernidiscrete solution Assume that g g Sh (t, to) as the mapping a=
==
i
==
The estimate above tells us that 's principle to write down the solution of the Therefore, we can use Duhamel inhomogeneous equation andgetestimate it. Asof the for periodic problems, weForcantime add discretization, lowerdifferential order terms and still estimates same form. we againtheuseeigenvalues the explicitAjEuler methodmaxj and obtain the� same kind of estimates, provided of A satisfy I A j jh I k 1. We also could have used the implicit Euler method or the Crank-Nicholson scheme. With these methods, there isto noprove restriction on A = kofjhsolutions . The scheme above can be used the existence of thefor initial-boundary-value problem in the manner described earlier. However, practical calculations, the scheme has a number of drawbacks. 1. It is only first order accurate. 2. For scalar equations
451
HYPERBOLIC PROBLEMS
Ur
=
aux
the approximation Qv of aux depends on the sign of a; that is, + , if a 0, QV { aD aD_, if a < O. Otherwise, no energy estimates are possible since the approximation is not stable. This poses some difficulties if a = a(x, t) is a function of x, t and changes sign. We can overcome them by using =
>
In applications, hyperboliconesystems are rarelythediagonal. If onethiswants to employ the above method, must diagonalize system first; can be quite expensive withdownregard todifference computerapproximation, time. (Of courseandonethencantransform diagonalize the system, write the the system back to its original form). now derive beandWe inapproximate diagonal form.itanbyWeapproximation start with thethatscalaris second-order problem Eqs.accurate (11.1.1)andandneed (11.1.not2) dvjdt = Dovj , j = 1, 2, ... ,N 1, (11.1.12) Vj (O) = fj . To obtain a unique solution, we need equations for va and VN . We use (11.1.13) VN (t) = get). Thus, we use thein theboundary condition of the continuous problem, a centered approximation interior, and a one-sided approximation at x = o. The modification of boundary the difference operatorthatatis,x we0 use can thealsocentered be expressed as animation addition of an extra condition, approx at x = 0 but supply a boundary condition that determines V_ -
=
I:
dvo/dt = Dovo,
Ifextrawe boundary eliminate condition V_ I , we again obtain the one-sided approximation above. The determines techniques V- I as a linear extrapolation of V I and Va . Linear or higher order extrapolation are often used to supply extra boundary conditions. Formally, we can write the difference approximation as
452
THE ENERGY METHOD FOR DIFFERENCE APPROXIMATIONS
dvjdt
=
Qv,
VN (t)
=
get).
Asa scalar in theproduct previous(., examples, we obtain an energy estimate if we can construct ')h such that that is, Q has the same property as
ajax.
The scalar product we use is
The construction of suitable11.1.1, scalarweproducts tion 11.4. Using Lemma obtain is discussed in more detail in Sec d
dt
I I vllh2 = ! ((D+ vo)vo
voD+ vo)h + ! ((D_VN )VN + vN D_VN )h + (Dov, V) I,N - I + (v, DO V) I,N J , = I VN I 2 - I vo l 2 s; I g(t)1 2 . +
-
Thus,If thewetrapezoidal obtain an energy rule is estimate. used for time discretization, we obtain V)n + 1 - v'!) � Q(v)n + 1 + v)'! ), j O, I , . . . ,N - 1, 2 =
01.1.14) Because Q is semibounded under our scalar product, unconditional stability follows immediately as with the pure initial value problem treated in Section 5.3. Similarly, we get unconditional stability for the backward Euler method (I kQ)v)n + 1 O, I , ... ,N 1, j V 'j/ I g n + , 01.1.15) vJ /j. (cf.Implicit Theoremschemes 5.3.3). are usually inefficientscheme, for solving hyperbolic problems. A more convenient scheme is the leap-frog modified at the boundary to attain stability. We write the approximation [Egs. 01.1.12) and 01.1.13)] with -
=
=
=
l
-
HYPERBOLIC PROBLEMS g
=
453
[-1 -11 0 01 ............ 0 ........ 00 ]
0 in matrix fonn
1
dv dt
h
.� ... �.� . . .� ...1. . �. . . : : ... � o . . . . . .. ... 0 I 0
v,
.
.
v
- 2:
.
[5J
that is, D
where
1 dv = h dt
( C - B) v
,
1[ 0........ 0 ] [ 0 ... 0 ] �... � .. � .. .. . . . .� , �.� . . �. . . :. . .0� ..... ...0� , 0....... 0 [t0 ..�. . . .:. .: .�0 ]. 1
D
C
.
. .
1
B
=
=
o
? ....
I
- 2:
Thus, can be and considered as a system for andThe matrixnite. Therefore, weis antisym metric is symmetric semidefi the results of Theorem 5.3.4 to construct the time discretization can use D- 1 /2 BD- 1 /2
that is
D I /2 V.
D- 1 /2 CD- 1 /2
454
THE ENERGY METHOD FOR DIFFERENCE APPROXIMATIONS
v 0n + 1 v� + 1 vN ]
= =
k V no - I + 2 h (v ni I Vjn - + 2kDovi ,
- !(v o + 1 + Va - I )), j 1,2, ... ,N - 1, =
O. The approximation is stable if =
(11.1.16)
O. Because the grid values vi + I are not coupled to each other, the scheme is explicit. Let us calculate ID- I/2 CD- I/2 . By definition, o >
1
I D- I /2 CD- I/2 1 2 =
max lui
01 0
I (D- l /2 CD- l /2 )u I 2 l ul 2 2
=
max
lui 01 0
-2 + 41 �N £.Jj = 2 1 Uj + �N - l 1 U . 12 £.Jj = O J
I - Uj - 1 2 + 1 UN - I 1 2 1
1
4
Thus,We thecanapproximation kjh < 1. generalize theseis stable resultsforto systems o S; x S; 1, t � O. (11.1.17) Here Aonly,A*weis assume a symmetric matrix, which need not be diagonal. For conve nience that A is constant and that the boundary conditions are homogeneous. Assume that the boundary conditions are of the form =
U I t) u l/(1, t)
(0,
0, 0,
UI u l/
=
=
(u ( I ) , . . . , u ( r)l, (u (r + I ) , , u (m)l. • • •
Then (u,Aux ) + (Aux , u)
(u,Au) l b,
(11.1.18)
455
HYPERBOLIC PROBLEMS
and we obtain an energy estimate if (- l ) j(u(j, t),Au(j, t»
0, forTheall usemi thatdiscrete satisfy approximation the boundary conditions. is given by dvJ· _ = ADov.J ' dt
;:::
j
=
0, 1,
1,2, . . . , N - 1, Vb(t) = 0, V�(t) = O. j
dVbl dt dv� dt
(11.1.19)
=
(11.1.20)
We now prove that it satisfies energy estimate. We use the scalar product an
and obtain, from Lemma 11.1.1 and Eq. (11.1.19), :t IlvlI�
=
=
� « vbl, (AD+ voF) h
+ 2
« v IN , (AD_VN »I
+
« AD+ vO )ll, vb/»
_J « AD_VN )I , I/ N» (ADo v, V) I ,N - I ,
+
+
(v, ADov) I,N - I
+
« vo,AD+ vo) + (AD+ vo, Vo» h 2 « vN ,AD_VN ) + (AD_VN ' VN» (v,ADov) I,N - 1 + (ADoV, V) I,N - I
2
h
+
+
(vj , Av) l� � o.
TheCorresponding energy estimateto Eq. follows. (11.1.16), the completely discretized approximation is
THE ENERGY METHOD FOR DIFFERENCE APPROXIMATIONS
456
j 1,2, ... , N 1 , 1 , 0, 1 , II 2: (A(u7 � (uo + 1 + UO - I )))II, II = 1 0, 1 , 2k (A(UNn _ I I (UnN I + UnN- 1 )))1 . (11.1.2 1) h -
-
I (Uo + )II p (U::/ )II n+ (U N 1 )/
_
0, __
-
(unN- 1 )/
_
-
,
'2
+
It isNowstableassume for k that IA I /hthe< 1.boundary conditions are not given in the form of Eq. (11.1.18), but consist of r and r linearly independent relations Lou(O, t) = ° and LI u(1, t) 0, (11.1.22) respectively, still satisfyThen Eq. (11.1.19). We can unitary assume matrices that the row vec tors of £0, L which are orthogonal. we can construct m
-
=
I
and a unitary matrix U(x)
E
c�
U(x) =
with
{ UaUI ,, forl'or °� �< xx �-< t,1 , l'
3
-
that connects Ua with U I . Substituting new variables = Uu into Eq. (11.1.17 ) and (11.1. 2 2 , the boundaryofconditions for thewilldibefferential of the formequations of Eq. (11.1.18), ) and in the neighborhood the boundaries become u
u
The approximation at x ° has the form II = 1 , 0, 1 , ( VOH) I = 0, 2 � (VO + I + V O - I )) )II ( VO+ I ) II = (VO - I ) II + : ( U A u* ( v 7 In the original variables, we have =
-
_
EXERCISES
457
(UVo + P) / = Lovo + P = Rovo + 1 = RO V O - l +
0,
v =
-1,0, 1,
2: RO ( A ( v7 � (v0 + 1 VO - 1 )) ) . Atfor xk =A 1,/h we< 1 obtain the corresponding relations. The approximation is stable . +
-
l I
EXERCISES 11.1.1.
Consider the approximation j 1,2, ... , V� = v/O) = /j.
0,
Prove that the norm IIvll
(�
I vo l 2
+
IIvll i. �
)
1/2
is independent of t. 11.1.2. Consider the approximation where Re (w, Q l wh :::;; £xllwll� , Re (w, Qowh = kllQo llh :::;; 5 5,
0,
1 0, forstable.all gridfunctions satisfying the boundary conditions. Prove that it is -
>
THE ENERGY METHOD FOR DIFFERENCE APPROXIMATIONS
458
11.2 PARABOLIC DIFFERENTIAL EQUATIONS
We begin with an example. Consider the heat equation o s; x 1, t ;?! 0, S;
u(x, O)
=
I(x),
(11.2.1)
with Dirichlet boundary conditions (11.2.2) The mesh is approximation constructed as isit was for hyperbolic differential equations. The semidiscrete u(O, t)
=
u(I , t) = O.
j 1, 2, ... , N - 1, =
(11.2.3)
with boundary conditions Va = VN = O.
(11.2.4) For 11.1.1,simplicity, we assume that all of these functions are real. Using Lemma � :t Ilvlli. N - I
(v, D+ D_v)I, N_ 1 - IID_vllt N + vjD_vj l � , - liD- viii N + vN D_VN - VaD_ v l = - I ID_vlli, N s; o. =
Thus,FromtheSection approximation iscompletely stable. discretized approximation is stable if we 6. 3 , the use Eulermethod method or the Crank-Nicholson method. One can also use thethe backwards forward Euler wr l = wj + kD+ D_wj, wJ = = w3 w N = O.
/j, j I,2, . . . , N - 1,
We obtain
PARABOLIC DIFFERENTIAL EQUATIONS
459
IIw n llT ,N_ I + 2k(w n , D+D_w n ) I ,N _ I + k2 I1D+D_w n lli N - I ' Ilw n llT,N _ l - 2kIlD _w n llr,N + e IlD+D_ w n llr,N _ l '
Observing that IID+yllr,N - 1
N- I
h- 2
L
� 2h - 2
j= l N- l
IYj + I - Yj l 2h,
L ( IYj + 1 1 2
j= l � 4h- 2 liY llr N '
+
l yY)h,
we obtain Therefore, Thus, the approximation is stable for 2kjh2 � 1 which is the same stability limit deriToveddiscuss in Section 2 for the periodic problem. more inequality analogousgeneral to that inboundary Lemmaconditions 9.3 .1. we need a discrete Sobolev Lemma 11.2.1. For any gridfunction and every f > 0 we have max I f 1 2 � fliD-flli' N C(f )lI f Il6, N ' 05.j5.N Here C(f ) is a constant that depends on f . with The proof proceeds as in the continuous case. Let and be indices I f,, 1 = 05.min j 5. N I h l , and assume, for simplicity, that � By Lemma 1 1. 1. 1 , v
+
Proof.
J1.
J1.
p.
p
THE ENERGY METHOD FOR DIFFERENCE APPROXIMATIONS
460
that is, O��XN I h l 2 :5 omi2N Ih l 2 + I f l l', v(I D+f l l',v - ' + I D_fl l' +" v), :5 I f I l 5, N + 2I 1f l o, NI D_fl h , N, C(l') = 1 + l' - ' . :5 l'I ID_ f l I , N + C(l')l I f I l5,N ' Now we consider Eq. (11.2 .1) with boundary conditions (11.2.5) v = 0, 1. uAv, t) + rvu v, t = 0, We choose the gridpoints according to Figure 11.2 .1 as Xj = -1h + jh, j O, I, ... , N; (N - 1)h 1. byThen, Xo -hj2, XN 1 +hj2, and we approximate Eq. (11.2 .1) and (11.2.5) dv:· (11.2.6a) D+D_vj, d Vj(O) h, j = 1,2, . . . , N - 1, (11.2.6b) D-VN + "21 r,(vN + VN - = 0. (11.2.6c) As(11.before, can derive 2.6c) andweLemma 11.1.1anweenergy get estimate. Using the boundary conditions 1 d (v,D+D_v)" N - l -I ID_vl I� N + vND_VN - v,D_vJ, "2 dt I vl I2" N - , -I D- vl � N - "21 r, vN(vN + vN-d + "21 rov,(v, + vo)· (
)
=
=
=
=
=
=
d
=
Xo
0
Xl
X2
� h
S
S Figure 11.2.1.
XN_2
XN_l
1
XN
PARABOLIC DIFFERENTIAL EQUATIONS
461
Furthennore, the boundary conditions imply that I vo l � constant l VI I , I VN I � constant I VN I i , for h sufficiently small. Thus, by Lemma 1 1.2.1, VOVI � constant I VI 12 � € I D�vl tN + C(€)l Ivl i,N ' VN� IVN � constant I VN� � €IID� vl �, N + C(€ )l IvlII, N' Because I vl i, N � constant I vl i N � I ' we get, by choosing sufficiently small, � :t I vl i,N I � constant I vl i N � I ' andForthetime energydiscretization estimate follows. wegeneral can again use forward Euler, backward Euler, or(9.3.10), the trapezoidal rule. For parabolic systems, such as Eqs. (9.3.8) to we consider the approximation dv· (AjD+D� + BjDo + Cj)Vj, j 1 , 2, . . . , N 1, dt fj, Vj(O) RIOD+vo + tRoo(vo + vr) 0, Rl I D�VN + tROI (VN + VN� I ) = O. �
11
2
€
,
�
1
_
=
-
=
( 1 1 .2.7)
Asestiminatethe(Exercise continuous1 1 .2.1). case, one can show that the solutions satisfy an energy Now we consider the quarter-space problem for the system (9.3.12): o � x < 00, t � 0, p > 0, u(x, O) f(x), ul (0, t) : (u( l) (O, t), U(2) (0, t), . . , u( r) (o, t))T = 0, U�I(O, t) : = (u� + 1 ) (0, t), u� + 2) (0, t), . . . , u�m) (o, t))T lIu (· , t)1I =
=
.
0,
( 1 1 .2.8) < 00. Here B B* is a constant symmetric matrix with r negative eigenvalues and (y, By) � 0 for all vectors y with y I O. We want to show that the solutions of =
=
462
THE ENERGY METHOD FOR DIFFERENCE APPROXIMATIONS
1,2, .. . , vb = 0,
V II- I
=
v II1 '
(1 1.2.9)
satisfy energy estimate. We use the scalar product an
and obtain 1 d (vo, vo)h dtd (v, v)',=, (vbl,BD+vb/)h + p(vbl,D+D_vb/)h + 2(v,BDov)t,= 2p(v,D+D_v)t,= 2" dt
+
I
+
+
II,
where I .-
p(vbl, D+D_vb/)h + 2p(v, D+D_ v)t, =, -2p(vo,D+vo) + p(vbl,D+D_vb/)h - 2p1iD+ v1l6 =, _ .!:... (V II V II h t VII- t ) - 2pI D+vIl6,=, -2p I D+vI l6 = 0; II . - (vo, BD+ vo)h + 2(v, BDov)t,=, (vo,BD+vo)h - (vo,Bvt) -(vo,Bvo) O. Thus, (d/dt)lIvl � 0 and the energy estimate follows. In theforprevious sectionproblems we demonstrated how estimatesboundary could beconditions. obtained directly hyperbolic with inhomogeneous Here we discuss a different andby more generala technique. The boundary condi tions are made homogeneous subtracting suitable function from the solu tion. Consider the problem 0 '
_
:::;
=
:::;
:::;
463
PARABOLIC DIFFERENTIAL EQUATIONS
x � 1 , t � 0,
° � u(O, t) = go(t), uO, t) = g I (t), u( , O) =
x f(x). Let the grid be defined by
Xj = jh, j = O, I, . . . ,N; (N - �)h = l,
0 1 .2.10)
and consider the semidiscrete approximation du: D D_u + Fj , d = + j uo(t) = go(t), D-UN (t) = g I (t), Uj (O) = /j.
,N - 1, o 1 .2. 1 1 a) o 1.2. lIb) o 1 .2. l Ic) 0 1 .2. l I d) The locationForofa the gridpoisolution nts makes the boundary condition 0 1 .2.1 1 c) center correctly. smooth u(x , t) we have j = 1 , 2, . . .
showing second-order accuracy. With
+
Xj (Xj - 1)gNCt),
0 1 .2.12)
we have D+ D_
+
..
gNCt», j = 1 , 2, . , N - 1,
Therefore, W satisfies dw: = D+D_wj + Fj, j = 1, 2, . , N - 1, d wo(t) = 0, D_WN (t) = 0, w/O) = /j, j = O, I , . . . , N, . .
0 1 .2.13)
464
THE ENERGY METHOD FOR DIFFERENCE APPROXIMATIONS
where Fjjj = /jFj - 2go(t) 2gN(t) - (Xj - 1 )2 d�;t) - Xj(Xj - 1) dg;t(t) , 'Pj(O). D+D_ is a semibounded operator, and, as usual, we get +
+
=
which yields where For the solution u of the original problem we get l Iu(t)l � � 1 'P(t)I � etl lj I � + f� et - TI F(r)l � dr, � constant et ( I go(t)1 2 I go(O)1 2 I gN(t) 1 2 + I gN(O)1 2 + I ! I � I: ( IlF(T)l l + I gO(T) I ' + 1 dg;�T) I ' (11.2.14) I gN(T)I ' + 1 dg;,(T) n dT) . +
+
+
+
+
derivatives ofwould the boundary data occur in the right-hand side, thisanyestimate isforSince weaker than we like. However, in general, we cannot expect better Dirichlet boundary condition, because the underlying continuous problem is not strongly well posed.
465
EXERCISES
EXERCISES
Prove that the approximation 0 1 .2.7) satisfies an energy estimate. 11.2.2. Apply the forward Euler method to Eq. 0 1 .2.6) and prove that it is stable for kjh2 :s; � . 11.2.1.
11.3. STABILITY, CONSISTENCY, AND ORDER OF ACCURACY
weway discussas wethedidabovewithconcepts in a general setting. We can proto Inceedthisinsection, the same the periodic problem. Corresponding Section we) wiconsider a general system(9.4.2). of differential equations ofgridfunc the form oftions,Eq. and(9.49.4,a. 1discrete th boundary conditions We introduce a grid, norm, , and approximate the continuous problem by IHlh
d · = d: =
u Q(xj,t,D)uj Fj, j 1 , 2, ... ,N 1 , t � to, u/to) h, Lo(t, D)vo(t) go(t), t � to, 0 1 .3 . 1 ) t � to. operator in the x direction. For convenience, we Dhaveis anusedarbithetrarysamedifference notation for the gridfunctions ,h, and go as used for the Fj corresponding functions in the continuous problem, even though they may be different in the gridpoints. We assume that theby grid and the boundary conditions are such that v is uniquely determined Eq. 0 1 .3 . 1 ). The following definition corresponds to Definition 9.4. 1 . Definition 11.3.1. Consider Eq. (11.3.1) with F = go = gN = O. We call the approximation stable if, for all h :s; ho there are constants K and such that, +
-
=
,
for all to and all u(to),
Q(
0 1 .3.2)
The constants K and are, in general, different from the corresponding ones for the conti n uous problem. The assumption of a unique solution implies that h be finite for every j and every fixed h. However, we may allow I h I � 00 asdefined h � 0 as long as IIf IIh < 00 is independent of h. For example, with the grid by Xj = (j 1 j2)h, we can handle the function h = x- 1/4 , because it is well defined at all gridpoints and Q(
-
466
THE ENERGY METHOD FOR DIFFERENCE APPROXIMATIONS
N- I I fl� L j=I I h l 2h < 00, h O. Actually, wewecandodefine approxithismaterather solutions if theissue singularity falls onIfa there grid point, but not pursue academic any further. are discontinuities in go, gN, ofandsmooth of time,andgeneralized solutions Fj as functions defined by a sequence approximations then taken towith the are can now be defined for the problem limit. The solution operator S(t, to) +!,.:: satisfying } f = O. It operatesandonstability all bounded gri d functions {V gothe boundary gN F conditions, j is equivalent to the condition (11.3 .3) When treating the conditions case F 0,toit iseliminate, convenientfromto rewrite the approximation. We use the boundary the approximation, all vectors Vj, with j � 0, j � N. The resulting system has the form d : = QVj Fj , j = 1, 2, . . . , N - 1, (11.3.4) j 1,2, . . . , N - 1. v/dto) h, A simple example of this kind of modification is d J Dovj Fj , j = 1,2, ... , N - 1 , (11.3 .5a) (11.3.5b) Vodt- 2VI V2 = 0, (11.3 .5c) VN 0, where the modified difference operator Q is defined by 1, j 2,3, .. . , N - 2, , j Dov j (11.3 .6) QVj VN - 2 j N - 1. - ---V;-- ' The of course, neoussolution problemoperator with FS is,0 can now bethewrisame. tten The solution of the inhomoge --7
=
=
=
v
v
=I-
-
-
+
=
=
-
_
=
+
+
=
r>vJ ,
=I-
Vj(t) S(t,to)fj Jtto S(t, r)F/r) dr. =
+
(11.3 .7)
467
STABILITY, CONSISTENCY, AND ORDER OF ACCURACY
By using the inequality (11.3.3), we obtain the estimate (11.3 .8) Eq. (4.any7.2extra ). Thisdifficulty shows that introduction ofis where t) is defidoesnednotin cause astable. forcing'P*(a,function if thetheapproximation Aswhen we have seentheabove, themethod. basis forThestability isdefialways aisemibounded oper ator using energy formal n ition s given by the fol lowing definition. Definition 11.3.2. The discrete operator is semibounded if, for all grid functions v that satisfy the homogeneous boundary conditions Lovo ° and LNvN 0, there is a scalar product and a norm such that the inequality (11.3.9) Re (v, Qvh � allvll� holds. Here a is independent of h, x, t, and v. be treatedassumption. by a change of variables, as Inhomogeneous shown in Sectionboundary 11.2. Weconditions make thecanfollowing Q
=
=
We can find a smooth function 'P(x, t) that satisfies Lo(t, D)'Po(t) go(t), LN(t, D)'PN(t) gN(t), x n;� (I �� I + I �:� I ) d � c m� I go l + I O + I gN I + I d:; I ) , (0, 1, . . . . :t I (11.3.10) Here the are constants that do not depend on h. Thegogridfunction = Vj -'Pj now satisfies the original approximation (11. 3 .1) with = gN = 0 and with a modified but bounded forcing function Fj . By Duhamel's principle, we now obtain
Assumption 11.3.1.
=
=
p
/I
Cp
Wj
=
468
THE ENERGY METHOD FOR DIFFERENCE APPROXIMATIONS
\lw(t)lIh � K ( ea(t-to) I f lih + IP*ey, t to) ��� ( "F(T)lh + I gO(T)I + I dg;;T) I + I gN(T) I I dg;;T) I) ) . (11.3 .11) TheWefinwant al estimate foroutv that then obtained by using Eq. (11.on3the.10).bound = w this + IP isassumption to point imposes restrictions ary beyondcondition the obvious smoothness requirements. As an example, consider the data boundary -
t
t
+
The smoothness of IP(x, t) implies dIP (O,t) &(h ); IPI (t) - IPo(t) = h a; thatTheis, above the boundary data go(t) mustestimates satisfy go(t) &(h). if the approxima construction to obtain is unnecessary tion is strongly stable. Corresponding to Definition 9.4.2, we make this defini tion. Definition 11.3.3. The approximation is strongly stable if it is stable and if, instead of Eq. (11.3.2), the estimate I v(t)I � � K(t, to)(lIv(to)lI� + max I F(T)I � ( 1 1 .3. 1 2) hol is a bounded function in any finite time interval and does s. HereonK(t,thto)e data. notddepend All thearedifference approximations for hyperbolic systems that we(11.have disfor cussed strongly stable. The approximation [Eqs. (11. 2 . 3 ) and 2 . 4 )] the heat equationif wewithrepltheaceDirichlet boundary condition is not strongly sta ble. However, the boundary condition by derivative conditions (11.We2.5now ), thendefine the resulting approximation (11.accuracy 2.6) is strongly stable. consistency and order of of the di f ference approx imation. Definition 11.3.4. Let u(x, t) be a smooth solution of the differential equation. The approximation is accurate of order p, if the restriction to the grid satisfies +
=
tO � T � t
2
STABILITY, CONSISTENCY, AND ORDER OF ACCURACY
469
the perturbed system duJ Q(xj,t,D)uj Fj hPFj , j = 1,2, . . . , N - 1, dt 1,2, . . . , N - 1, Uj(to) = h hPjj, j h go(t), Lo(t, D) = go(t) P 01.3 .13) LN(t,D) gN(t) hPgN(t), are bounded independent of h. /fp � 1, jj, go, gN, dgo/dt,is called and dgN/dt wthenheretheFj,approximation consistent. the solutions of consistent approximations follows immedi atelyConvergence from strongofstability. p, then 11.3.1. If the approximation is strongly stable and accurate of order Ilu(·, t) - v(t)l h constant hP for smooth solutions u(x, t) on any finite time interval (0, T). Proof. The error = u - v satisfies Eq. 01. 3 .1), where the forcing function, the initialEq.data01.and3 .12). the boundary data are of order hP • Thus, the theorem follows from Astrongly similar stable. theoremHowever, can be formulated forprocedure approximations that are astable but not the necessary of subtracting function that treat satisfies Assumption 11.3 .1detail does innotSection always12.give7. optimal error estimates. We this problem in more Now consider fully discrete approximations _
=
+
+
+
=
+
=
+
Theorem
S;
w
q
Q- I V; + = 0L= 0 Qavr o + kF'j, vj = fj, Lo uO = g 3 , LNV'fv = g'fv. 1
j
1, 2, . . . , N - 1,
=
a =
0, 1 , . . . , q,
(11.3.14)
Here Lv vv
q
-
L
a=-I
Sva v � - a , p
O, N,
THE ENERGY METHOD FOR DIFFERENCE APPROXIMATIONS
470
where the SV fJ are the boundary operators on each time level. We assume that u + I can be solved for boundedly in terms of values at the previous + 1 time levels; that is, we require that there is a unique solution of Q- I Uj Gj, j 1, 2, . . , N - 1, SO,- I Uo go, (11.3 .15) SN, -I UN gN , and that it satisfies an estimate (11.3 .16) (The factor h multiplies the boundary data because it is included in the norm IHlh . ) the previous are defindefi ed innition. analogy with the semidiscrete case. ForAllexample, we haveconcepts the following Definition 11.3.5. The approximation (11.3.14) is stable if, for go g'N 0, there are constants and such that, for all f n
q
=
=
=
.
=
q
L I lu n + lJ l I� fJ � O
P =
IJ ,
a
K
K2 e 2 atn
�
q
L IIf fJ l I� .
=
=
(11.3 .17)
fJ � O
EXERCISES 11.3.1.
Prove that R1OD+ uO R I I D- UN
+
+
"2
1
Roo(uo
1 (UN
"2 R O I
+
+
UI )
j
=
=
go,
UN - I )
=
. . - 1,
1 , 2, . , N
gN ,
fj isestimate stronglybreaks stable.down [Rij is defined in Eq. (9. 3 .10). ] Prove that the strong if R IO R O. 11.3.2. Consider the approximation 01. 3 .14) for go g'N 0 and assume that it is stable. Prove that the solution satisfies the estimate u/O)
=
,
=
11 =
=
=
HIGHER ORDER APPROXIMATIONS
471
11.4 HIGHER ORDER APPROXIMATIONS
InenceSections 1 1 . 1 and 1 1 .2, we have seen that we can always construct differ approximations thatis notarediffi first-cultortosecond-order accurate. Formethods. parabolicAsdianf ferenti a l equations, it construct hi g her order example, we consider the heat equation ° x 1 , t � 0, U(X, O)
=
f(x),
::;
::;
( 1 1 .4. 1)
with boundary conditions U(O, t) ux(1, t)
0, 0.
( 1 1 .4.2)
We choose the gridpoints according to Figure 1 1 .4.1 as Xj = jh,
j = - l , O, . . . , N + 1 ,
Nh = 1 +
h 2
and approximate the differential equation and the boundary conditions with fourth-order accuracy by du,J _ dt
=
(D+D_
}
/j,
Uj(O) = uo(t) = 0,
1 , 2, . . . , N - 1 ,
( 1 1 .4.3a) (l 1 .4.3b) ( 1 1 .4.3c) ( l 1 .4.3d)
X=o X_I
x01 h
s
s
Figure 11.4.1.
x=l
472
THE ENERGY METHOD FOR DIFFERENCE APPROXIMATIONS
Afourth-order Taylor expansion about XN showsboundary that the boundary condition is also accurate. To obtain extra conditions that determine V- I , VN + I , we differentiate the boundary conditions (11.4.2) with respect to t and replace the time derivatives by space derivatives using the differential equation. u[ (O, t) uxx (O, t) = 0, (11.4.4) Uxt(1, t) u x (1, t) = 0. The difference approximations (11.4.5a) D�v_ I (t) 0, (1l.4.5b) D� VN + I (t) = ° ofEq. (11.we4derive .4) determine V- I , VN+ I . Now an energy tions are real and obtain estimate. For simplicity, we assume that all func _
1 /2
=
=
x x
=
By(11.4Lemma .5b) imply11.1.1D_VandN the0] ; boundary conditions [note that Eqs. (11.4.3d) and =
(v, D� D:v) I,N _ I
= =
- IID- vl g N + vj D_ vj l�, -liD- vIIi N + vjD_vj l� = - liD- vIIi N ' -(D_v, D+ D: vh, N + VjD+D: vj l � , -(D_v, D+D:v) I,N - I + vjD+ D: vj l � , (D: v, D: vh,N - D_vjD:vj l � + vj D+ D:vjl�
=
I ID:vll� N '
Therefore, and the energy estimate follows. It is notto thediffireader cult to(Exercise extend these results to parabolic systems. We leave the details 11. 4 . 2 ). Theby conditions (11.4.5) areto beonlypresented second-order approximations ofEq. to(11.derive 4.4). Yet, using the technique in Section 12. 7 , it is possible anbility&(h4) errorCrank-Nicholson estimate (Exercisemethod 12.7 .1).follows For theimmediately. totally discreti zed problem, sta of the However, we would have second-order accuracy Section 13.2, we show that the fourth orderonly Runge-Kutta method yieldsinatime. stableInapproximation.
473
HIGHER ORDER APPROXIMATIONS
For hyperbolic differentialforequations, can use the same procedure at an inflow boundary. Consider, example, one the quarter-space problem au au ° � x < 00, t � 0, ax ' ( 11.4.6) u(atx, O) f(x), with boundary condition (11.4.7) u(O, t) g(t). We (11.4choose .6) andthe(11.gri4.7d)points by jh, j -1,0, 1,2, ... , and approximate Eqs. dv -(Do j 1,2, . . , dt Vj(O) fj, (11.4.8) vo(t) g(t). To obtain an) wiextra boundary condition, which determines v_ we differentiate Eq. (11. 4 . 7 t h respect to t and repl a ce the time derivatives by space deriva tives. This gives us uAO,t) -ur(O,t) -gr (t), uxxAO, t) -uru(O, t) gur(t). Observing that =
Xj
1
_
=
=
=
=
.
=
=
I,
=
=
=
=
-
we use the extra boundary condition (11.4.9) To prove stability, we assume that g 0. Again assuming that all functions are real, we obtain ==
1 d Ilvll l .= "2 dt 2
474
THE ENERGY METHOD FOR DIFFERENCE APPROXIMATIONS
By Lemma 11.1.1 and the boundary conditions, (v, Dov)\, = -(Dov, v)\, - VOv\ that is (v, Dov)\,oo = O. Using Lemma 11.1.1 and Eqs. (11.4.8) and (11.4.9), we obtain (v, DoD+D_v)\,oo -(D_v, DoD_vh,oo v\DoD_v\, -(D_v, DoD_v)\,oo voDoD_v\, D_voD_vI , -(D_v,DoD_v)l,oo = (DoD_v,D_v)},co 2 -(D+DoD_v, v)\ , oo - I D_vo I ; that is, 00
00
+
Thus,
-21 -dtd l vl b2 = - h212 I D_vo l 2 � 0' whichleading showstothatthetheCrank-Nicholson approximation ismethod stable. Weor thecouldfourth-order also use theRunge-Kutta trapezoidal rule method to discretize incharacteristics time. leaving the domain, the techniques used For problems with above not work,system. becauseHowever, there area dinotfferent sufficiently manycanboundary conditions forone-sided thewill continuous procedure be developed difference operators. First, we consider the scalar model problemusing au au o �x < t 2: 0, ax ' at (11.4.10) u(x, O) f(x), with real solutions u and the semidiscrete approximation dv-: = QVj, j 0, 1, . . . , t 2: 0, d Vj(O) = /j, (11.4.11) -
, 00
00,
=
=
HIGHER ORDER APPROXIMATIONS
475
where is a difference points. QAssume that thereoperator is a scalardefined producteverywhere and a norm,including definedtheby boundary r-
I lulI�
1
L
(U, w)"
i,j = O
hijUi Wj h
L
+
j=r
vj wj h,
(u, uh.
(11.4 .12) Here of a positive-definite Hermitian r x r matrix H. Now assumethethathi} are Q'andelements H can be constructed such that =
(11.4 .13) for all gridfunctions u with IIvllh < The equality (11.4.13) corresponds to the equality (u, au/ax) of the continuous case. In Section shown that our conditions4 I u(O)are1 2fulfilled with the difference operator11.1, it was { Douj,o, j 0,1,2, ... , (11.4 .14) QUj D+ u j and the scalar product 00 .
= -
(v, W)"
h '2 VO Wo
�
+
L uj wj h; j= 1
(11.4 .15)
thatNow is, rwe1show and Hhow1 /higher 2. order accurate difference operators can be con soe thisthatstrong Eq. 01.condition, 4 .13) is satisfied. Atsemiboundedness first, it may seemwouldtooberestrictive tostructed requi r because enough forwhichstabiliEq.ty.(11.However, we show how to generalize the results to systems, after 4 .13) i s needed. At inner2r, points, we usein Eq.the(3.1. centered difference operator witha wayorderto ofsuccess accu racy as defined 7 ). The problem is to fi n d fullyrelaxed modifybystencils at the points nearThis the boundary. Thefurther order inof Section accuracy12.can7, bebut one near the boundary. is discussed demonstrate in a The directerror wayWfor uthe model problem (11.4.11) with Q aswedefined in Eq. it(11.here4.14). v satisfies =
p
=
=
=
-
476
THE ENERGY METHOD FOR DIFFERENCE APPROXIMATIONS
dWj dt Wj(O) 0,
j 0, 1, ... , =
=
where 0, . 1, 2, With greaterthethanscalar0, product defined by Eq. (11.4.15), we have, for any 01 and 02 . .
d I lwl h2 2(w, QW)h + 2(w, F)h , WO(W I - WO) + jL= 1 Wj(Wj+ I - Wj_ + wo Foh + 2 L wjFjh, j= 1
dt
d
Choosing 0 I 1 and integrating the last inequality gives us =
Thus,Wethewriteapproximation second-orderas accurate. the differenceis operator Q an infinite matrix Q, partitioned as, hQ [ QCTI I =
-
where
Q12
D
]
'
(11.4.16)
HIGHER ORDER APPROXIMATIONS
qO,r -
I
qr- :,r- I
:::] ,
qr - I, m
al
o
-al
al
0
al
where
as as
I
]'
477
: :] s S; r, o as
0
as
0
o ...
D o We now have the following lemma.
The difference operator Q satisfies the relation (11.4.13) if, and QII H-- IIB, Q1 2 H C, ( 1 1 .4. 17) where B is an r X r matrix with ( 1 1 .4. 1 8) B = diag (- t, O, . . . , O) + B2 , BI Lemma 11.4.1.
only if,
Proof. Let
THE ENERGY METHOD FOR DIFFERENCE APPROXIMATIONS
478
where .) as notation for the usual Euclidean scalar product. Since D is anti and use ( symmetric, Eq. (11.4.13) can be written as "
The case VII 0 shows that HQIl = B must have the form of Eq. (11.4special .18). Thus, =
for all vectors VI and vII. This is only possible if HQ12 C which proves the lemma. For a given The 2s order accuratequestion approximation atainner points, theite matri matrixx HC isanddetermined. remaining is whether positive defi n an "almost" symmetricu(x)matrix so that, for smoothantifunctions andB, =satisfying 2s - 1, Eq. (11.4.18), can be found =
T
(11.4.19) forx -some points �j . It is sufficient to consider polynomials of the form u x) = x and assume that h 1. The approximation is accurate of order if (
(
=
r)"
T
(11.4.20) ° 1. For where v = 0, 1, and we have adopted the convention that 0 approximations of systems, we need a special form of H: . . . , T,
=
HIGHER ORDER APPROXIMATIONS
o hl l
[f One can now prove the following theorem. H
=
�
hl' - l : hr- l , r- l
hl, r- l
]
479
>
O.
(11.4.21)
Theorem 11.4.1. For every order of accuracy 2s in the interior, there are boundary approximations accurate of orde r T = 2s - 1 such that Eq. (11.4.13) is satisfied, where Q has the form of Eq. (11.4.16) and the matrix H has the form of Eq. (11.4.21).
Weonecanmust solvechoose Eq. (11.4.20) using a symbolic manipulation system. In gen eral, r � + 2, but the operator is not uniquely determined. For 3, that r 5, there is a three-parameter solution. These parameters can be chosen so the bandwidth of theon operator is minimized.TheThis is convenient forQ hasimplementation, in particular, parallel computers. resulting operator the structure T=
T
=
® x
x
x
x
®
x
x
®
x
hQ
x
x
x
x
x
x
x
x
® x
x
x
x
x
x
x
x
x
x
®
x
®
x
x
(11.4.22) x
where 3, 2sin nonsmooth 4, and H of the type shown by Eq. ( 1 1 .4.21). For systems multidimensional domains, the stability proof can beonalgeneralized if the matri x H, defining the norm, is further restricted to diag In this case, the accuracy near the boundary cannot be as large as orderform. 2s- 1 , except for the case s 1 . However, one can prove the next theorem. T=
=
=
For interior accuracy of order 2s with 1 � s � 4, there are boundary approximations accurate of order s such that Eq. (11.4.13) is satisfied with Q of the form shown in Eq. (11.4.16) and H diagonal in the scalar product ( 11.4. 12).
Theorem 11.4.2.
Assystem will be(11shown later, we may expect an overall accuracy of order s + 1 . The .4.20) can again be computed using a symbolic manipulation program. The case s 1 has already been presented in Section 1 1 . 1 . For s 2, =
=
480
THE ENERGY METHOD FOR DIFFERENCE APPROXIMATIONS
theis solution is uniquely determined. The structure of the difference operator Q ® x x
x
hQ
x
x
x
®
x
x
®
x
x
x
x
x
®
x
®
x
x
x
(11.4.23)
x
where 2, 2s 4, and H diagonal, giving an overall third-order accurate approximation. The tocaseminimize s 3 leads to a one parameter chosen the bandwidth of Q. The solution. structure This is parameter can be T
=
=
=
® x
x x
x
x
®
x
x
hQ
x
® x
x
x
x
x
x
x
®
x
x
x
®
x
x
x
x
x
x
x
x
x
x
®
x
x
x x
x
x
x
®
x
x
x
x
x
(11.4.24) where 3, 2s 6, and H diagonal, giving an overall fourth-order accurate solution. For implementation on computing SIMD computers, theifnumber ofonly nonzero diagonals essentially determines the time, even there are a few nonzerois elements in a particular diagonal. It is interesting to note that the bandwidth nine forbetterbothresults Eq. (11.in practice, 4.22) andbecause Eq. (11.it4.is24),morebut accurate the latterincanthebeinterior. expected(Theto give exact above. values )of the elements of Q are given in Appendix 11.A for the three cases Forbecause the scalarQ model problem, theTherelation (11.problem 4.13) immediately leadstreated, to sta bility, is semibounded. inflow has already been and, inmethods. principle,However, we can nowwe putshallthese two aprocedures togetherway to obtain high order present more convenient of solving systems ables in bythe using system.the outflow operator Q developed above, for all of the vari T
=
=
481
HIGHER ORDER APPROXIMATIONS
To begin, we consider systems au
A au ax ' u(x, O) = f(x), at
=
0 ::;; x < 00, t � 0,
(11.4.25a) (11.4.25b)
with boundary conditions uI (O, t)
O.
(11.4.25c) Here the constant symmetric matrithatx A there has isnegative eigenvalues andforuIEq. (u (l), u(2), . . . We assume an energy estimate (11.4.25); that is, (11.4.26) (w,Aw) � 0, forWeall obtai vectorsn anwapproximation (w I , w I ll with w I O. by replacingoperator, ajax by the difference operator Q described above. Let T be the projection defined by j 1, 2, ... , (11.4.27a) Tv) - { Vj ' (O, vJY, j o. Then the sernidiscrete approximation can be written in the form dVj j 0, 1, ... , dt (11.4.27b) Vj (O) Tk Inall anpoints, actualincluding fully discreteat each implementation, thethewhole solution is advanced at time step, and physical boundary conditions 4.25c)griaredfunctions imposed bybefore the next step. The scalar product is generalized to(11.vector =
q
, U(q))T.
=
=
=
.
=
_
=
=
Xo,
r-
I
L
i ,j = O
hij(Vi , wj )h
+
L (Vj ' wj)h, j=r
(11.4.28)
and the relation (11.4.13) is also well defined for vector gridfunctions.
482
THE ENERGY METHOD FOR DIFFERENCE APPROXIMATIONS
To prove stability we need two lemmas. Lemma 11.4.2.
The property (11.4.13) holds if, and only if, (v, Qwh
=
-(Qv, W)h - (vo, wo),
(11.4.29)
for all real v, W E l2 (0, 00).
Obviously, Furthermore, if Eq.Eq.(11.(11. 4.13)4.13)holds,foIIows then from Eq. (11.4.29) by letting v = w.
Proof.
(v
+
w, Q(v + w)h = - 1 l vo + wo l 2 ,
that is, (v, Qvh + (W, QW)h + (v, QW)h + (W, QV)h = - 1 ( I vo l 2 + I wo l 2 + 2(vo, wo» ,
and Eq. (11.4.29) foIIows from Eq. (11.4.13). Assume that the matrix H, defining the scalar product (11.4.28), has the restricted form (11.4.21). Then the projection operator T is self-adjoint. Lemma 11.4.3.
Proof.
The scalar product can be written as (v, W)h
=
ho(vo, wo)h + R(v, w),
wherewR(v,<w)00, does not depend on vo, woo Thus, for any v, w with IIvllh
00,
II lll!
(v, TW)h
ho(vo, (Tw)o)h + R(v, Tw), ho(vJ /, wJ /)h + R(v, w), ho«(Tv)o, wo)h + R(Tv, w) = (Tv, W)h ,
which proves the lemma. We also need the following lemma.
<
HIGHER ORDER APPROXIMATIONS
483
Lemma 11.4.4. Let A be a constant Hermitian matrix and let (v, wh be defined by Eq. (11.4.28). Then
(Av, wh Proof.
=
(v,Awh.
From the definition of the scalar product, we have (Av, wh
r-
1
L
i,j = O r- 1
L
i,j = O
hij(Avi , wj )h
+
hij(v;,AWj )h
+
L (Avj , wj)h, j=r 00
(v,Awh·
L (Avj , Wj)h, j=r
Now we can prove stability. Assume that the matrix H defining the scalar product ( 11.4.28) has the restricted form of Eq. (11.4.21). If Q satisfies the equality (11.4.13), then the approximation (11.4.27) is stable.
Theorem 11.4.3.
Proof. The solution of Eq. (11.4.27) satisfies v Tv, and, because T is a pro jection operator, we have T2 T. Then using Lemmas 11.4.2 to 11.4.4, =
=
(v, TA Qvh
+
(v, TA QTv)h (v, TA QTv)h (v, TAQTv)h
(TAQv, vh, + +
(TAQTv, V)h , (QTv,A Tv)h , (v, TQA Tvh - «(Tv)o ,A(Tv)o).
BecauseEq.A(11.and4.2Q6).commute, the first two terms cancel, and the theorem follows from Fortechnique more general boundary conditions of thetransformation type of Eq. (11.1. 22),towetransform use the same as in Section 11.1. A unitary is used theTime boundary conditionscanto betheconstructed form of Eq.as(11.in 4Section .25c). 11.1. For higher order discretization accuracy, one canto usetheseRunge-Kutta methods or13.some other ODE approximation. We come back methods in Section 2 . Throughout this section, we haveNo restricted our discussion to constant coeffiof cient problems for convenience. new difficulties arise from the presence boundaries when treating problems with variable coefficients. As demonstrated
THE ENERGY METHOD FOR DIFFERENCE APPROXIMATIONS
484
ininitial Section extra treated terms occur in the5.3. stability estimates,wejustgetas for the pure value11.1,problem in Section For example, d
dt IIvllh
� allvllh ,
because AQ - QA 0 if A A(xj) depends on x. -f-
=
EXERCISES
Derive nonlineara fourth-order problem accurate approximation like Eq. (11.4.3) for the o � x � 1, t � 0, u[ = a (u) uxx + b(u) un u(O, t) = go(t), uA I , t) = gl (t), u(x,O) I(x ), where a(u) � 0 O. 11.4.2. Derive a fourth-order accurate approximation like Eq. (11.4 . 3 ) for the problem (9.3.8) to (9.3 .10). Prove that it is stable. 11.4.3. Explicitly verify that Eq. (11.4.20) is satisfied for Q defined in Eq. (11.4.14), and the scalar product Eq. (11.4.15). 11.4.1.
=
>
11.S. SEVERAL SPACE DIMENSIONS
We consider a two-dimensional PDE of the form au at
PI
( � ) u ( ;y ) u + P2
+
F,
x � 1, < y < t � 0, (11.5 .1) with 211"-periodic solutions in the y direction. The grid is defined on the domain o � x � 1, 0 � y � 211" by - 00
o �
00,
O, I, . . . ,M, j I,2, . . . , N. (11.5.2) The scalar product and norm are defined by 1 =
=
485
SEVERAL SPACE DIMENSIONS
N
L (V'j , W')h\ h2 j= 1 N M
M
L di (Vi" i=O
L L (vij , diwij)h 1 h2 ,
I lvll�
wi · h 2 h l (v, vh.
(11.5.3) We assume thata diagonal di > ° are scalars; that is, in the direction we use a scalar product with matrix H, asis discussed in Section 11.4. The semidiscrete approximation dVij (11.5.4a) (i,j) O h, t � 0, Q 1 Vij + Q2 Vij , dt (11.5 .4b) j 1,2, . . . , N, Lo voj gOj , (11.5.4c) j 1,2, . . . , N, LMVMj = gMj , (11.5 .4d) (i,j) O h , vi) = Vi,j + N , (11.5.4e) (i,j) ° h . ViiO) = iij , denotes the inner points defined Eq.Q (11.are5.semibounded: 4a) is well-defined. Here Assume0 h that the one-dimensional operatorssuchQthat and l 2 01.5.5) for v satisfying Lovo = 0, LMvM = 0, and 01.5.6) for W periodic. We have N M Re Re L + (v'j , Q l v'j h\ h2 L di(Vi· , Q2 Vi· h2 hl , i=O j= 1 j= 1 i=O
x
E
E
E
$;
N
M
di llvi ll�2 h l ' (X I jL= I IIv'jll�\ h2 + (X2 L i=O
concept of semiboundedness is generalized way to several spaceThedimensions. semibounded. As an We have just shown that QI +inQan2 isobvious
THE ENERGY METHOD FOR DIFFERENCE APPROXIMATIONS
486
example, consider the well-posed problem au au + au ° � x � 1, ° � y � 21r, t � 0, at ax Ty' u(1,y,t) = 0, (11.5.7) u(x,y,O) f(x,y), with real solutions that are periodic in y. As an approximation we use dVij O, l, ... ,M 1, dt = Q\ Vij + DOy Vij, ViM = 0, =
-
O, l, . . ,M, .
(11.5.8)
N. forHere j 1,2,Q\ . is. . ,defined Q in Eq. (11.4.14) and the scalar product (·, ·hl is defined by Eq. (11.4 .15).as Since j 1,2, . . . , N, (11.5.9) and (11.5 .10) 0, O, l, . . ,M, we get =
.
-1
N
2 j=L\ ! voj! h2,
proving stability. of periodic solutions in the y direction makes the general The assumption ization to two inspacethe dimensions particularly simple.much. However, the theintroduction ofmation boundaries y direction does not change Consider (11.5.4a) in the domain ° � x � 1 with boundary conditions approxi
SEVERAL SPACE DIMENSIONS
487
LxOUOj
=
gOj ,
LxMUMj = gMj, LyOUiO giO , LyNUiN = giN . =
(11.5 .11 )
The scalar product is changed to L ej (U.j , w .j h 1 h2
(u, wh
j
L di (u;. , Wi· )h2 h l ,
=
L L di ej(uij, wi)h 1 h2 . j
(11.5 .12)
The assumptions (11.5.5)way, and (11. 5.stability 6) of one-dimensional semiboundedness are modified in an obvious and follows as in the one-dimensional case.For systems, we use the same technique as in Section 11.4. The solution is advanced at allvia points including the boundaries, and thediscrete boundary conditions are imposed projections (in each step for the fully case). Stability is proven as in the one-dimensional case. For higher orderformapproximations, thethat sameis,technique is applied. If the scalar product has the of Eq. (11. 5 .12), if the matrix H in Eq. (11. 4 .12) isforward diagonal,manner thentotheseveral one-dimensional stabilityForresults generalizematrices in a straight space dimensions. nondiagonal H, the presence of comers in the computational domain complicates the analysis, and no Time generaldiscretization results are known for this case. can be done asinbefore. However,to thebe solved implicitattrapezoidal and backward Euler methods result large systems each time step. has been shown how this can be avoided for periodic problems by intro ducing ADI-methods like It
(I − (k/2)Q₁)u^{n+1/2} = (I + (k/2)Q₂)u^n,
(I − (k/2)Q₂)u^{n+1} = (I + (k/2)Q₁)u^{n+1/2}.  (11.5.13)

We assume that Q₁ and Q₂ are semibounded, and, for convenience, it is assumed that the growth constants are zero:

Re (u, Q_ν u)_h ≤ 0, ν = 1, 2.  (11.5.14)
THE ENERGY METHOD FOR DIFFERENCE APPROXIMATIONS
Theorem 11.5.1. The ADI scheme (11.5.13) is stable if Q₁ and Q₂ are semibounded and kQ₁ and kQ₂ are bounded.
Proof. If Eq. (11.5.14) is satisfied, then

‖u^{n+1}‖²_h + ‖(k/2)Q₂u^{n+1}‖²_h
≤ ‖u^{n+1}‖²_h + ‖(k/2)Q₂u^{n+1}‖²_h − 2Re (u^{n+1}, (k/2)Q₂u^{n+1})_h
= ‖(I − (k/2)Q₂)u^{n+1}‖²_h = ‖(I + (k/2)Q₁)u^{n+1/2}‖²_h
≤ ‖(I − (k/2)Q₁)u^{n+1/2}‖²_h = ‖(I + (k/2)Q₂)u^n‖²_h
≤ ‖u^n‖²_h + ‖(k/2)Q₂u^n‖²_h.

Thus,

‖u^{n+1}‖²_h ≤ ‖u^{n+1}‖²_h + ‖(k/2)Q₂u^{n+1}‖²_h ≤ ‖u⁰‖²_h + ‖(k/2)Q₂u⁰‖²_h ≤ constant ‖u⁰‖²_h,

which proves stability. [Note that the scheme is almost unconditionally stable. The requirement that kQ₁ and kQ₂ be bounded means that k = O(h^p), where p is the order of the differential operator in space.]

These results can be generalized to any domain that can be transformed by a smooth transformation to a rectangle with a uniform grid. By using overlapping subgrids, we can solve most problems arising in practice. We come back to this technique in Section 13.3. Sometimes it may be convenient to use one single grid, even if the domain cannot be transformed into a rectangle. For example, there may be concave corners as in L-shaped domains. It is also possible to construct semibounded operators in this case. Assume that we want to solve

∂u/∂t = ∂u/∂x + ∂u/∂y  (11.5.15)

in the domain shown in Figure 11.5.1.
Figure 11.5.1 Domain with 90-degree concave corner.
Zero boundary values are given on the right and upper boundaries. At all interior points, we use standard second-order centered difference operators. At all outflow boundary points, except the corner point P, one-sided first-order operators are used. If the uniform grid is such that P is (x_m, y_n) and h₁ = h₂ = h, we use

u_x(P) → (1/3h)(2u_{m+1,n} − u_{mn} − u_{m−1,n}),
u_y(P) → (1/3h)(2u_{m,n+1} − u_{mn} − u_{m,n−1}).  (11.5.16)

One can prove that the resulting operators are semibounded with respect to the weighted scalar product

(v, w)_h = h² Σ_{i,j} d_{ij} ⟨v_{ij}, w_{ij}⟩.  (11.5.17)
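The corner formulas (11.5.16) are first-order consistent: they reproduce the exact derivative for every linear function. This can be checked directly; the short sketch below is an illustration (the corner location, spacing, and the test function are hypothetical values, not from the text).

```python
h = 0.1
xm, yn = 0.5, 0.5   # hypothetical corner point P = (x_m, y_n)

def ux_corner(u):
    # u_x(P) ~ (1/3h)(2 u_{m+1,n} - u_{mn} - u_{m-1,n}),  Eq. (11.5.16)
    return (2 * u(xm + h, yn) - u(xm, yn) - u(xm - h, yn)) / (3 * h)

def uy_corner(u):
    # u_y(P) ~ (1/3h)(2 u_{m,n+1} - u_{mn} - u_{m,n-1}),  Eq. (11.5.16)
    return (2 * u(xm, yn + h) - u(xm, yn) - u(xm, yn - h)) / (3 * h)

u_lin = lambda x, y: 2.0 + 3.0 * x - 5.0 * y   # arbitrary linear function
print(abs(ux_corner(u_lin) - 3.0) < 1e-12)     # exact: u_x = 3
print(abs(uy_corner(u_lin) + 5.0) < 1e-12)     # exact: u_y = -5
```

For u = a + bx + cy the x-formula gives (2bh + bh)/(3h) = b exactly, which is the first-order consistency used in the stability proof.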
Figure 11.5.2 Domain and weighting coefficients for 45-degree corners.
The weights d_{ij} are 1/4 at convex corners, 3/4 at concave corners, and 1/2 at all other outflow boundary points. In fact, this scalar product can be used also for outflow boundaries with 45-degree corners, as shown in Figure 11.5.2. Assume that the convex corner P₁ is located at (x_l, y₀) and the concave corner P₂ at (x_m, y_n). Then the semibounded difference operators are

v_x(P₁) → (1/h)(v_{l+1,0} − v_{l0}),
v_y(P₁) → (1/2h)(4v_{l1} − 2v_{l0} − v_{l−2,2} − v_{l+2,0}),
v_x(P₂) → (1/3h)(2v_{m+1,n} − v_{mn} − v_{m−1,n}),
v_y(P₂) → (1/6h)(4v_{m,n+1} − 2v_{mn} − v_{m−2,n−2} − v_{m+2,n}).  (11.5.18)

For problems in curved domains with smooth boundaries, we use overlapping grids, in close analogy with the principle used in Section 9.6 for proving well-posedness. This technique will be considered in Section 13.3.
EXERCISES

11.5.1. Consider the problem

u_t = Au_x + Bu_y, 0 ≤ x, y ≤ 1, t ≥ 0,
u^I(0, y, t) = 0, u^II(1, y, t) = 0,
u^III(x, 0, t) = 0, u^IV(x, 1, t) = 0,
u(x, y, 0) = f(x, y),
where

(u, Au)|_{u^I=0} ≥ 0, (u, Au)|_{u^II=0} ≤ 0,
(u, Bu)|_{u^III=0} ≥ 0, (u, Bu)|_{u^IV=0} ≤ 0.

Define the approximation corresponding to Eq. (11.4.27), with Q based on operators like Eq. (11.4.23) or Eq. (11.4.13), satisfying Eq. (11.4.24). Prove stability.

11.5.2. Consider the fractional step method with boundary conditions such that Q₁ and Q₂ both are semibounded. Prove unconditional stability.

11.5.3. Prove that the difference approximation discussed above for u_t = u_x + u_y in the domain shown in Figure 11.5.1 is stable by using the scalar product (11.5.17).

11.5.4. Prove stability as in Exercise 11.5.3, but with an outflow boundary with corners as in Figure 11.5.2.

11.5.5. Consider the problem

u_t = …, (x, y) ∈ Ω, t ≥ 0,
u = 0, (x, y) ∈ ∂Ω,
u(x, y, 0) = f(x, y),

where Ω is the unit square. Construct stable and second-order accurate boundary conditions for the approximation.
BIBLIOGRAPHIC NOTES
The general procedure for deriving higher order accurate difference approximations for hyperbolic problems, as described in Section 11.4, was first presented by Kreiss and Scherer (1974, 1977). The computation of the difference operators shown in Appendix 11.A was done by Strand (1994). A general stability proof for systems in multidimensional nonsmooth domains with nonorthogonal grids was given by Olsson (1995a,b). In the same papers, it is also shown that the general form (11.4.12) of H can be used for systems, by letting the difference approximation depend on H near the boundaries.
The construction of the difference approximations (11.5.16) and (11.5.18) at concave and 45-degree corners was done by Engquist (1978). For implicit compact difference approximations of the type (3.1.29), where P is nondiagonal, the restrictions imposed by the boundaries are more severe than for the explicit ones. It is shown in Gottlieb et al. (1993) that the local accuracy near the boundary can be at most first order if the condition (11.4.13) is to be satisfied with a locally modified norm. In Carpenter, Gottlieb, and Abarbanel (1994), this difficulty is avoided by introducing a globally modified norm; in the same paper it is shown how to implement the boundary conditions as a penalty function in the difference approximation.
APPENDIX TO CHAPTER 11
Matrix elements near the boundary for the matrix hQ in Eqs. (11.4.22) to (11.4.24).

Matrix (11.4.23): τ = 2, 2s = 4, H diagonal.

d11 = -24/17   d12 = 59/34    d13 = -4/17    d14 = -3/34   d15 = 0      d16 = 0
d21 = -1/2     d22 = 0        d23 = 1/2      d24 = 0       d25 = 0      d26 = 0
d31 = 4/43     d32 = -59/86   d33 = 0        d34 = 59/86   d35 = -4/43  d36 = 0
d41 = 3/98     d42 = 0        d43 = -59/98   d44 = 0       d45 = 32/49  d46 = -4/49
Inner points: a₁ = −a₋₁ = 2/3, a₂ = −a₋₂ = −1/12.

Matrix (11.4.24): τ = 3, 2s = 6, H diagonal.

d11 = -21600/13649   d12 = 81763/40947    d13 = 131/27298     d14 = -9143/13649
d15 = 20539/81894    d16 = d17 = d18 = d19 = 0
d21 = -81763/180195  d22 = 0              d23 = 7357/36039    d24 = 30637/72078
d25 = -2328/12013    d26 = 6611/360390    d27 = d28 = d29 = 0
d31 = -131/54220     d32 = -7357/16266    d33 = 0             d34 = 645/2711
d35 = 11237/32532    d36 = -3487/27110    d37 = d38 = d39 = 0
d41 = 9143/53590     d42 = -30637/64308   d43 = -645/5359     d44 = 0
d45 = 13733/32154    d46 = -67/4660       d47 = 72/5359       d48 = d49 = 0
d51 = -20539/236310  d52 = 2328/7877      d53 = -11237/47262  d54 = -13733/23631
d55 = 0              d56 = 89387/118155   d57 = -1296/7877    d58 = 144/7877    d59 = 0
d61 = 0              d62 = -6611/262806   d63 = 3487/43801    d64 = 1541/87602
d65 = -89387/131403  d66 = 0              d67 = 32400/43801   d68 = -6480/43801
d69 = 720/43801
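The boundary rows tabulated above can be verified mechanically: every row of hQ must annihilate constants, and each boundary row must differentiate x^k exactly at its node for k up to the boundary accuracy (degree 2 for the fourth-order operator (11.4.23), degree 3 for the sixth-order operator (11.4.24)). The check below is not from the text; it is a small exact-rational verification sketch.

```python
from fractions import Fraction as F

D4 = [  # boundary rows of hQ in Matrix (11.4.23), on 6 points
    [F(-24, 17), F(59, 34), F(-4, 17), F(-3, 34), 0, 0],
    [F(-1, 2), 0, F(1, 2), 0, 0, 0],
    [F(4, 43), F(-59, 86), 0, F(59, 86), F(-4, 43), 0],
    [F(3, 98), 0, F(-59, 98), 0, F(32, 49), F(-4, 49)],
]
D6 = [  # boundary rows of hQ in Matrix (11.4.24), on 9 points
    [F(-21600, 13649), F(81763, 40947), F(131, 27298), F(-9143, 13649), F(20539, 81894), 0, 0, 0, 0],
    [F(-81763, 180195), 0, F(7357, 36039), F(30637, 72078), F(-2328, 12013), F(6611, 360390), 0, 0, 0],
    [F(-131, 54220), F(-7357, 16266), 0, F(645, 2711), F(11237, 32532), F(-3487, 27110), 0, 0, 0],
    [F(9143, 53590), F(-30637, 64308), F(-645, 5359), 0, F(13733, 32154), F(-67, 4660), F(72, 5359), 0, 0],
    [F(-20539, 236310), F(2328, 7877), F(-11237, 47262), F(-13733, 23631), 0, F(89387, 118155), F(-1296, 7877), F(144, 7877), 0],
    [0, F(-6611, 262806), F(3487, 43801), F(1541, 87602), F(-89387, 131403), 0, F(32400, 43801), F(-6480, 43801), F(720, 43801)],
]

def exact_degrees(rows, deg):
    # row i sits at node x_i = i (h = 1); applying it to grid values of x^k
    # must give the exact derivative k * x_i^(k-1) for all k <= deg.
    ok = True
    for i, row in enumerate(rows):
        for k in range(deg + 1):
            val = sum(c * F(j) ** k for j, c in enumerate(row))
            want = k * F(i) ** (k - 1) if k >= 1 else 0
            ok = ok and (val == want)
    return ok

print(exact_degrees(D4, 2), exact_degrees(D6, 3))
```

Both checks succeed with exact rational arithmetic, which also confirms the signs of the entries d36 and d54 that are hard to read in print.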
Inner points: a₁ = −a₋₁ = 3/4, a₂ = −a₋₂ = −3/20, a₃ = −a₋₃ = 1/60.

Matrix (11.4.22): τ = 3, 2s = 4, H of type (11.4.21).
d11 = -11/6, d12 = 3, d13 = -3/2, d14 = 1/3, d15 = d16 = d17 = 0,

f1 d21 = -24(-779042810827742869 + 104535124033147√26116897),
f1 d22 = -(-176530817412806109689 + 29768274816875927√26116897)/6,
f1 d23 = 343(-171079116122226871 + 27975630462649√26116897),
f1 d24 = -3(-7475554291248533227 + 1648464218793925√26116897)/2,
f1 d25 = (-2383792768180030915 + 1179620587812973√26116897)/3,
f1 d26 = -1232(-115724529581315 + 37280576429√26116897),
d27 = 0,

f2 d31 = -12(-380966843 + 86315√26116897),
f2 d32 = (5024933015 + 2010631√26116897)/3,
f2 d33 = -231(-431968921 + 86711√26116897)/2,
f2 d34 = -65931742559 + 12256337√26116897,
f2 d35 = -(-50597298167 + 9716873√26116897)/6,
f2 d36 = -88(-15453061 + 2911√26116897),
d37 = 0,

f1 d41 = 48(-56020909845192541 + 9790180507043√26116897),
f1 d42 = (-9918249049237586011 + 1463702013196501√26116897)/6,
f1 d43 = -13(-4130451756851441723 + 664278707201077√26116897),
f1 d44 = 3(-26937108467782666617 + 5169063172799767√26116897)/2,
f1 d45 = -(6548308508012371315 + 3968886380989379√26116897)/3,
f1 d46 = 88(-91337851897923397 + 19696768305507√26116897),
f3 d47 = 242(-120683 + 15√26116897),

f3 d51 = 264(-120683 + 15√26116897),
f3 d52 = (-43118111 + 23357√26116897)/3,
f3 d53 = -47(-28770085 + 2259√26116897)/2,
f3 d54 = -3(1003619433 + 11777√26116897),
f3 d55 = -11(-384168269 + 65747√26116897)/6,
f3 d56 = 22(87290207 + 10221√26116897),
f3 d57 = -66(3692405 + 419√26116897),

where

f1 = -56764003702447356523 + 8154993476273221√26116897,
f2 = -55804550303 + 9650225√26116897,
f3 = 3262210757 + 271861√26116897.

In floating point format:

d11 = -1.83333333333333333333333333333
d12 = 3
d13 = -1.5
d14 = 0.333333333333333333333333333333
d15 = d16 = d17 = 0
d21 = -0.389422071485311842975177265601
d22 = -0.269537639034869460503559633382
d23 = 0.639037937659262938432677856177
d24 = 0.0943327360845463774750968877542
d25 = -0.0805183715808445133581024825053
d26 = 0.00610740835721650092906463755986
d27 = 0
d31 = 0.111249966676253227197631191910
d32 = -0.786153109432785509340645292043
d33 = 0.198779437635276432052935915731
d34 = 0.508080676928351487908752085978
d35 = -0.0241370624126563706018867104972
d36 = -0.00781990939443926721678719106473
d37 = 0
d41 = 0.0190512060948850190478223587424
d42 = 0.0269311042007326141816664674714
d43 = -0.633860292039252305642283500160
d44 = 0.0517726709186493664626888177642
d45 = 0.592764606048964306931634491846
d46 = -0.0543688142698406758774679261364
d47 = -0.00229048095413832510406070952285
d51 = -0.00249870649542362738624804675220
d52 = 0.00546392445304455008494236684033
d53 = 0.0870248056190193154450416111555
d54 = -0.686097670431383548237962511317
d55 = 0.0189855304809436619879348998897
d56 = 0.659895344563505072850627735852
d57 = -0.0827732281897054247443360556719

Inner points: a₁ = −a₋₁ = 2/3, a₂ = −a₋₂ = −1/12.
12 THE LAPLACE TRANSFORM METHOD FOR DIFFERENCE APPROXIMATIONS
12.1. NECESSARY CONDITIONS FOR STABILITY
Consider a difference approximation for a system of partial differential equations with constant coefficients. For periodic problems, a simple test for stability is to construct simple wave solutions

u(x, t) = e^{i⟨ω,x⟩} û(ω, t)

and to estimate the growth rate of û(ω, t). This leads to the von Neumann condition as a necessary stability condition. As we have seen in Chapter 10, there is a similar procedure for the initial-boundary-value problem. We now adapt this method for difference approximations and begin with a simple example. We approximate

∂u/∂t = ∂u/∂x + F, 0 ≤ x < ∞, t ≥ 0,
u(x, 0) = f(x), ‖u(·, t)‖ < ∞,  (12.1.1)

by

dv_j/dt = D₀v_j + F_j, j = 1, 2, ...,  (12.1.2a)
v_j(0) = f_j,  (12.1.2b)
v₀ − 2v₁ + v₂ = g,  (12.1.2c)
‖v‖_h < ∞.  (12.1.2d)
NECESSARY CONDITIONS FOR STABILITY
introduced the inhomogeneous be(Weusedhavelater). The scalar product and the term normgarein thenowboundary defined bycondition to (12.1.3) (u, wh (u, wh. � , II u llh = (u, U)h . The test is given by the following lemma. Lemma 12.1.1.
Let
F
=
==
g o. If Eq. (12.1.2) has a solution ==
(12.1.4)
Il f llh < 00
Re (See> 0Lemma and some stepsize ho, then the approx 10.1.1.) This necessary condition for stability is called the Godunov-Ryabenkii con dition. Proof. If such a solution exists, for some h ho s = so, Re So > 0, then soh = 1 (fj+ 1 h- d , j 1,2, . . . , So = hoso, 2f fo 0,
for some complex number s with s imation is not stable in any sense.
=
=
-
h -
l
=
+
II f lih < 00 .
,
Therefore, for any h, isfast.alsoWea solution. Thus, asthish in terms 0, we ofcantheconstruct solutions growing arbitrarily can formulate eigenvalue problem S 'Pj hDo 'Pj ' j = 1,2, . . . , S hs, (12.1.5a) (12.1.5b) 'Po 2 'P + 'P 2 0 (12.1.5c) I 'P llh < 00 . We have the following lemma. Lemma 12.1.2. The approximation (12.1.2) is not stable if Eq. (12.1.5) has an eigenvalue S with Res > O. (See Theorem 10.1.1.) Equation 5a) ishasantheordinary cients, and its(12.1. solution form difference equation with constant coeffi ---7
=
-
1
=
=
498 THE LAPLACE TRANSFORM METHOD FOR DIFFERENCE APPROXIMATIONS
where and are the two solutions of the characteristic equation KJ
K2
s
=
(12.1.6a)
sh.
The next lemma discusses properties of and 2 . Lemma 12.1.3. For Re s 0, the characteristic equation has no solutions with I , and there is exactly one solution with < 1, KJ
IK I
K
>
IK I
=
KJ
=
S
-
VI + s2 ,
s
Then Assume that Eq. (12.1.6a) has a solution
Proof.
sh.
= K
=
e i�
for Re (s)
>
(12.1.6b) 0, � real.
whichsolutions is a contradiction. Thus, there are -1, no solutions with that I I 1. Because the two and satisfy it is obvious there is exactly one root, I say, that is inside the unit circle for all s with Re s O. A simple calculation shows that Eq. (12.1.6b) is the desired solution. formLemma 12.1.3 and the condition (12.1.5c) imply that the solution has the (12.1.7) which we substitute into the boundary condition (12.1.5b). This yields (12.1.8) Therefore, there is a nontrivial solution if, andthatonly if,is no eigenvalue 1. By Lemma 12.1. 3 , this is impossible, and we have shown there s with Rethesboundary O. Indeed, the same result is obtained for any order of extrapolation at (12.1.9) q 1,2, . . . . The only difference in the discussion above is that Eq. (12.1.8) becomes K
KJ
K2
K
K J K2 =
K1
>
=
=
>
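The root structure described in Lemma 12.1.3 is easy to confirm numerically. The following sketch is an illustration (not from the text): it samples random s̃ with Re s̃ > 0 and checks that κ₁ = s̃ − √(1 + s̃²) solves Eq. (12.1.6a), that κ₁κ₂ = −1, and that exactly one root lies inside the unit circle.

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(100):
    st = complex(rng.uniform(0.01, 5), rng.uniform(-5, 5))  # Re s~ > 0
    root = np.sqrt(1 + st * st)          # principal branch
    k1 = st - root                       # Eq. (12.1.6b)
    k2 = st + root
    # both solve kappa^2 - 2 s~ kappa - 1 = 0   (12.1.6a)
    assert abs(k1 * k1 - 2 * st * k1 - 1) < 1e-9
    assert abs(k2 * k2 - 2 * st * k2 - 1) < 1e-9
    assert abs(k1 * k2 + 1) < 1e-9       # kappa_1 * kappa_2 = -1
    assert abs(k1) < 1 < abs(k2)         # exactly one root inside the unit circle
print("ok")
```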
σ₁(κ₁ − 1)^{q+1} = 0,

which leads to the same conclusion.

We now discuss difference approximations with constant coefficients for the quarter-space problem (10.1.1):

dv_j(t)/dt = Qv_j(t) + F_j(t), j = 1, 2, ...,  (12.1.10a)
v_j(0) = f_j, j = 1, 2, ...,  (12.1.10b)
L₀v₀ = g(t), sup_t |g(t)| < ∞,  (12.1.10c)

where

Q = h⁻¹ Σ_{ν=−r}^{p} B_ν E^ν,  (12.1.11)

and E is the shift operator. In general, the B_ν = B_ν⁰ + hB_ν¹ are m×m matrices. Here B_ν¹ does not depend on h and has no influence on stability. Therefore, we neglect these terms and assume that B_ν = B_ν⁰. We assume that B_p and B_{−r} are nonsingular. Another assumption is that Eq. (12.1.10a) is stable with periodic boundary conditions. Finally, we assume that the boundary conditions can be written as

Σ_j L_{μj}v_j = g_μ, μ = 0, −1, ..., −r + 1,  (12.1.12)

where the L_{μj} are constant matrices. Again, we assume that the L_{μj} do not depend on h. This is no restriction. As we have seen in Section 11.3, we can use the boundary conditions to eliminate v₀, ..., v_{−r+1}, thus modifying Q. Any O(h) terms in the boundary conditions create bounded operators and have no influence on stability. We could also include time-derivatives in the boundary conditions. However, it is assumed that these have been eliminated by using the differential equations (12.1.10a).

The eigenvalue problem associated with our approximation is

s̃φ_j = hQφ_j, j = 1, 2, ..., Re s̃ = Re (hs) > 0,  (12.1.13a)
L₀φ = 0,  (12.1.13b)
‖φ‖_h < ∞.  (12.1.13c)

Using the same argument as for our example, we can prove the following lemma.
Lemma 12.1.4. Godunov–Ryabenkii Condition. The approximation is not stable if the eigenvalue problem (12.1.13) has an eigenvalue s̃ with Re s̃ > 0.

Proof. Let s̃₀ with Re s̃₀ > 0 be an eigenvalue and φ_j the corresponding eigenfunction. Then

v_j(t) = e^{(s̃₀/h)t}φ_j

form a sequence of solutions whose growth rate becomes arbitrarily large as h → 0. Thus, the approximation is not stable.

We now derive algebraic conditions for the existence of eigenvalues.

Lemma 12.1.5. The eigenvalue problem (12.1.13) has no eigenvalues for sufficiently large |s̃|.

Proof. Equation (12.1.13) implies

|s̃| ‖φ‖_h ≤ constant ‖φ‖_h.

Here the constant depends only on the coefficients |B_ν| and |L_{μj}|. Therefore, ‖φ‖_h = 0 for sufficiently large |s̃|. This proves the lemma.

We now discuss how to solve the eigenvalue problem (12.1.13). We write Eq. (12.1.13a) in the form

φ_{p+j} = Σ_{ν=−r}^{p−1} B̃_ν φ_{ν+j},  (12.1.14)

where the B̃_ν = B̃_ν(s̃) are obtained by multiplying through by B_p⁻¹. This is a system of ordinary difference equations in space. Introducing

y_j = (φ_{p+j−1}ᵀ, ..., φ_{j−r}ᵀ)ᵀ,

we can write it as a one-step method
y_{j+1} = My_j, j = 1, 2, ...,  (12.1.15a)

where M is the companion matrix

M =
[ B̃_{p−1}  B̃_{p−2}  ···  B̃_{−r+1}  B̃_{−r} ]
[    I        0     ···      0        0    ]
[    0        I     ···      0        0    ]
[    ⋮               ⋱                ⋮    ]
[    0        0     ···      I        0    ].

The boundary conditions can be written in the form

Hy₁ = 0, obtained from L₀φ = 0,  (12.1.15b)

because we can use Eq. (12.1.14) to eliminate all φ_ν with ν > p. Here M = M(s̃), H = H(s̃) are polynomials in s̃. The eigenvalues κ and eigenvectors y of M are the solutions of

(M − κI)y = 0  (12.1.16)

(cf. Lemma 5.2.2). We shall use the solutions of Eq. (12.1.16) to solve the eigenvalue problem (12.1.13). To demonstrate the procedure, we first derive some properties of M.

We want to show that κ = 0 is not an eigenvalue of M. Assume that κ = 0 is an eigenvalue. Then Eq. (12.1.16) becomes y₁ = y₂ = ··· = y_{p+r−1} = 0, B̃_{−r}y_{p+r} = 0. Because, by assumption, B_{−r} is nonsingular, y_{p+r} = 0. Thus, we have arrived at a contradiction, and κ = 0 cannot be an eigenvalue. Using the relation

y_{j+1} = κy_j, j = 1, 2, ..., p + r − 1,

we can eliminate y₂, ..., y_{p+r} and obtain, after a simple computation,

(Σ_{ν=−r}^{p} B_ν κ^{ν+r} − s̃κ^r I)φ = 0.  (12.1.17)

Thus, the eigenvalues of M are the solutions of the characteristic equation

Det (Σ_{ν=−r}^{p} B_ν κ^{ν+r} − s̃κ^r I) = 0.  (12.1.18)
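For the scalar example (12.1.5a), the reduction to a one-step method can be made completely explicit, which may help to fix the idea. The check below is an illustration (not from the text): the recursion φ_{j+1} = 2s̃φ_j + φ_{j−1} gives a 2×2 companion matrix whose eigenvalues solve the characteristic equation (12.1.6a), and κ = 0 is never among them.

```python
import numpy as np

# s~ phi_j = (phi_{j+1} - phi_{j-1})/2  <=>  phi_{j+1} = 2 s~ phi_j + phi_{j-1}.
# With y_j = (phi_j, phi_{j-1})^T this is the one-step method y_{j+1} = M y_j.
st = 0.3 + 0.2j                       # an arbitrary s~ with Re s~ > 0
M = np.array([[2 * st, 1.0],
              [1.0,    0.0]])
eigs = np.linalg.eigvals(M)
for k in eigs:
    # eigenvalues of M solve kappa^2 - 2 s~ kappa - 1 = 0, Eq. (12.1.6a)
    assert abs(k * k - 2 * st * k - 1) < 1e-12
# kappa = 0 is not an eigenvalue (det M = -1 here, nonsingular)
assert all(abs(k) > 1e-12 for k in eigs)
print("ok")
```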
We now need the following lemma.

Lemma 12.1.6. For s̃ with Re s̃ > 0, there is no solution of Eq. (12.1.18) with |κ| = 1, and there are exactly rm solutions, counted according to their multiplicity, with |κ| < 1.

Proof. Assume that there is a root κ = e^{iξ}. Then Eq. (12.1.18) implies that s̃ is an eigenvalue of Σ_{ν=−r}^{p} B_ν e^{iνξ}. Because we have assumed that the approximation is stable for the periodic case, we necessarily have Re s̃ ≤ 0. This is a contradiction to the hypothesis Re s̃ > 0; that is, there are no solutions κ with |κ| = 1. The solutions κ are continuous functions of s̃ and cannot cross the unit circle. Therefore, the number of solutions with |κ| < 1 is constant for Re s̃ > 0, and we can determine their number from the limit Re s̃ → ∞. In this case, the solutions with |κ| < 1 converge to zero and are, to first approximation, determined by

Det (B_{−r} − s̃κ^r I) = 0.  (12.1.19)

Because B_{−r} is nonsingular, Eq. (12.1.19) has exactly mr solutions κ = O(|s̃|^{−1/r}). This proves the lemma.

By Schur's lemma (see Appendix A.1), we can find a unitary transformation U = U(s̃) such that

U*MU = [ M₁₁  M₁₂ ]
        [  0   M₂₂ ].

Here the eigenvalues κ of M₁₁ and M₂₂ satisfy |κ| < 1 and |κ| > 1 for Re s̃ > 0, respectively. Thus, M₁₁ is an rm × rm matrix. Introducing a new variable

ψ_j = U*y_j
into Eq. (12.1.15) gives us

ψ_{j+1} = [ M₁₁  M₁₂ ] ψ_j, j = 1, 2, ...,  (12.1.20a)
          [  0   M₂₂ ]

with boundary conditions

H^I ψ₁^I + H^II ψ₁^II = 0.  (12.1.20b)

Here H^I is again an rm × rm matrix. We can now prove the following lemma.

Lemma 12.1.7. The Godunov–Ryabenkii condition is satisfied if, and only if, H^I is nonsingular for Re s̃ > 0, that is,

Det H^I(s̃) ≠ 0, for Re s̃ > 0.  (12.1.21)

Proof. By Eq. (12.1.20a), ψ_j^II = M₂₂^{j−1}ψ₁^II. Therefore, ‖ψ‖_h < ∞ implies ψ₁^II = 0; that is, ψ^II ≡ 0, and the solutions of Eq. (12.1.20) satisfy

ψ_j^I = M₁₁^{j−1}ψ₁^I, H^I ψ₁^I = 0.

Thus, there is a nontrivial solution if, and only if, H^I is singular. This proves the lemma.

We have, therefore, derived an algebraic condition for the Godunov–Ryabenkii condition. To verify the condition, we need not go through the reduction of Eq. (12.1.13) to Eq. (12.1.15) and the construction of U. The construction above tells us that the general solution of Eq. (12.1.13a) with ‖φ‖_h < ∞ is of the form

φ_j = Σ_{|κ_ν|<1} P_ν(j)κ_νʲ  (12.1.22)

and depends on rm free parameters σ = (σ₁, ..., σ_{rm})ᵀ. Here P_ν(j) is a polynomial in j with vector coefficients, of order at most m_ν − 1, where m_ν is the multiplicity of κ_ν. Substituting Eq. (12.1.22) into the boundary conditions (12.1.13b) yields a system of equations

C(s̃)σ = 0,  (12.1.23)

and we can rephrase Lemma 12.1.7 in the following form.
Lemma 12.1.8.
The Godunov-Ryabenkii condition is satisfied if, and only if,
Det I C(s) 1 0, for Re s 0. Sincethatthethenumber ofoffreeboundary parameters is cannot it is a consequence of Lemma 12.1.8 number conditions be less than for a stable approximation. As an example, we approximate au au t � 0, a a 0, ° � x < at = ax ' by the fourth-order difference scheme -f-
REMARK.
>
rm,
rm
00,
-f-
�:
=
a
( : Do(h) - � D (2h)) o
j
Vj ,
1,2, ....
(12.1.24)
If a 0, we use as boundary conditions >
< 0, we use the boundary condition u(O, t) toIf aobtain
=
(12.1.25) ° of the differential equation
As boundary conditions for the difference approximation we use 0, (12.1.26) [It Equation will be shown later that approximation has a global &(h4) error.] (12.1.13a) has thethisform Vo
=
(12.1.27) and the characteristic equation is given by (12.1.28)
We have the following lemma.

Lemma 12.1.9.

1. The characteristic equation (12.1.28) has exactly two roots κ_ν, ν = 1, 2, with |κ_ν| < 1 for Re s̃ > 0.
2. In a neighborhood of s̃ = 0, these roots are of the form

κ₁ = −1 + (3/(5a))s̃ + O(|s̃|²), κ₂ = 4 − √15 + O(|s̃|), if a > 0,
κ₁ = 1 + s̃/a + O(|s̃|²), κ₂ = 4 − √15 + O(|s̃|), if a < 0.

Proof. The first statement follows from Lemma 12.1.6. For s̃ = 0, the solutions of the characteristic equation are

κ^{(1,2)} = ±1, κ^{(3,4)} = 4 ± √15.

By perturbation arguments, we determine which of these four roots are κ₁ and κ₂. For small |s̃| we have

κ^{(1)} = −1 + (3/(5a))s̃ + O(|s̃|²), κ^{(2)} = 1 + s̃/a + O(|s̃|²).

By selecting the roots satisfying |κ| < 1 for Re s̃ > 0, the second statement follows. This proves the lemma.

Lemma 12.1.9 tells us that there are two roots κ_ν, ν = 1, 2, with |κ_ν| < 1, and the general solution of Eq. (12.1.27) with ‖φ‖_h < ∞ has the form

φ_j = σ₁κ₁ʲ + σ₂κ₂ʲ.

If κ₁ = κ₂ is a double root, it becomes

φ_j = (σ₁ + σ₂j)κ₁ʲ.

We substitute this expression into the boundary conditions (12.1.25) used for a > 0 and obtain
σ₁(κ₁ − 1)^q + σ₂(κ₂ − 1)^q = 0,
σ₁κ₁⁻¹(κ₁ − 1)^q + σ₂κ₂⁻¹(κ₂ − 1)^q = 0.  (12.1.29)

The determinant of this system is

Det = (κ₁ − 1)^q(κ₂ − 1)^q (κ₁ − κ₂)/(κ₁κ₂).

By Lemma 12.1.9, κ₁ ≠ 1, κ₂ ≠ 1, and κ₁ ≠ κ₂ for Re s̃ > 0; that is, there is no eigenvalue with Re s̃ > 0.

For the boundary conditions (12.1.26) used for a < 0, we obtain

[      1              1      ] [σ₁]
[ (κ₁−1)²/κ₁   (κ₂−1)²/κ₂ ] [σ₂] = 0,  (12.1.30)

which, because |κ₁|, |κ₂| < 1, has only the trivial solution σ₁ = σ₂ = 0; that is, the Godunov–Ryabenkii condition is satisfied also in this case.
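The root count and the limit values in Lemma 12.1.9 can be confirmed numerically. The sketch below is an illustration (not from the text): multiplying Eq. (12.1.28) by 12κ²/a turns it into the quartic κ⁴ − 8κ³ + (12s̃/a)κ² + 8κ − 1 = 0, whose roots are then sorted by modulus.

```python
import numpy as np

def kappas_inside(st, a):
    # characteristic equation (12.1.28) times 12 kappa^2 / a:
    #   kappa^4 - 8 kappa^3 + (12 st / a) kappa^2 + 8 kappa - 1 = 0
    roots = np.roots([1, -8, 12 * st / a, 8, -1])
    return sorted((r for r in roots if abs(r) < 1), key=abs)

for a in (1.0, -1.0):
    k_in = kappas_inside(0.05, a)     # a sample s~ with Re s~ > 0
    assert len(k_in) == 2             # exactly two roots inside |kappa| < 1
    # near s~ = 0: one inside root is near 4 - sqrt(15),
    # the other is near -1 (a > 0) or +1 (a < 0)
    assert abs(k_in[0] - (4 - np.sqrt(15))) < 0.1
    assert abs(abs(k_in[1]) - 1) < 0.1
print("ok")
```

At s̃ = 0 the quartic factors as (κ² − 1)(κ² − 8κ + 1), recovering the four limit roots ±1 and 4 ± √15 quoted in the lemma.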
EXERCISES

12.1.1. Prove that the Godunov–Ryabenkii condition is satisfied for the fourth-order approximation (12.1.24) with the boundary conditions v₋₁ = v₀ = 0 for any value of a.
12.2. SUFFICIENT CONDITIONS FOR STABILITY
In this section, we use the Laplace transform to derive sufficient algebraic conditions for stability. We proceed as in the continuous case in Section 10.5, where symmetric hyperbolic systems in several space dimensions were considered. We use the Laplace transform to estimate the solution on the boundary. Energy estimates then give us the desired stability result.

We begin with the example (12.1.2) and assume that F = f = 0. We call the solution of this particular problem y. The Laplace transformed equations have
the fonn j 1,2, ...
(12.2.1a) (12.2.1b) We need space. to use the inverse Laplace transfonn toforobtain the stability estimates in physical Thus, our estimates must hold all s with Re s > where iswhole a constant. Because h is arbitrarily small, it will be necessary to consider the half-plane Res > O. By Eq. (12.1.7), the general solution ofEq. (12.2.1a) is =
,
11 0 ,
11 °
Substituting this into the boundary condition (12.2.1b) gives us Bydetennined. Lemma We 12.1.can3 , be more1 precise. 0 for Res > 0 and, therefore, is uniquely Lemma 12.2.1. There is a constant 0 > 0 such that 1 1 2:: 0 > 0, for all s with Re s O. (12.2.2) I Proof. By Eq.large(12.1.s i. 6Also, b), is0aascontinuous lsi Therefore, Eq.s, and, (12.therefore, 2.2) holds Eq.for sufficiently function of l (12.2.1;2)then, holdsbyifEq.we (12.1. can show that 1 for all s with ReS 2:: O.-1,Assume that 6 a), s 0 and, by Eq. (12.1.6b), which is a contradiction. This proves the lemma. We can now invert the Laplace transfonn K1
K1
-
4-
U1
2::
-
K1 �
� 00.
K1
K1 =
=
K1
4-
K1 =
(12.2.3) For we can take the contour Re s O. By Parseval's relation, we obtain se
=
j 0, 1, . . . .
508 THE LAPLACE TRANSFORM METHOD FOR DIFFERENCE APPROXIMATIONS
We also obtain, for every T � 0,
fTo I Yj(t) 1 2 dt 1 fT I g(t) 1 2 dt, &4
(12.2.4) because the solution forassume 0 t that T does not depend on values of get) with t > T, and, therefore, we can get) 0 for t > T. [For more details, see Kreiss and Lorenz (1989). ] we canus derive a standard energy estimate. Lemma 11.1.1 and Eq. (12.Now 2.4) give =:;
=:;
=:;
0
=
:t I YI I � (y, DoYh (DoY, yh +
that is, (12.2.5) Thus,We weshallcanprove estimate theapproximation solution in terms(12.1.of2the) is data. that the stronglythe stable sense of Section 11.3 . We assume that F g 0 and first solve auxiliaryin theproblem dw-J (12.2.6a) Dow ' j 1,2, . . , dt (12.2.6b) w/O) /j, O. (12.2.6c) Wo - W I Lemma 11.1.1 gives us =
__
=
J'
=
=
.
=
=
that is, Ilw(T)IIf.
+
J T I wo l 2 dt o
=:;
1111 1 f, .
(12.2.7)
SUFFICIENT CONDITIONS FOR STABILITY
509
Thus, (12.2.8a) We also want to show that (12.2.8b) The Laplace transfonned equations (12.2.6) are SWj = Dowj + /j, j = 1,2, ... , (12.2.9a) (12.2.9b) Wo - W I 0, I l w l h < 00. For I sh l � 2, we obtain I l w l � $ I s�2 I Dowl l � + 1�2 I f l � , l 2h + 1�2 I f l � , = W 1 j j I w _1 + 2 1 sh l 2 f I J= $ s� I l wl l � + )h I wo l 2 + � I f l � ; 12 I l I 12 that is, (12.2.10) I W2 1 2 $ 21 I Wo 1 2 + h l4s l 2 I f l 1 2h ' I sh l � 2. For I sh l $ 2, we write Eq. (12.2.9a) in the fonn =
I
A
A
that is, and, therefore, by Eq. (12.2.10), we obtain, for s = it
510 THE LAPLACE TRANSFORM METHOD FOR DIFFERENCE APPROXIMATIONS
Parseval's relation gives us Eq. (12.2.8b). Now we substitute Y v-w into Eq. (12.1.2) as a new variable and obtain dY (12.2. 11 a) D Yj, j 1,2, ... , dt (12.2.11b) Yj(O) = 0, (12.2.11c) Yo - 2Yl + Y2 = g(t) + g l (t), l (t) = - (wo - 2Wl + W2 ). (For convenience, we have assumed F 0.) BywhereEq.g(12. 2.8), =
J
=
O
=
=
(12.2.12) i.e.For , byEq.Eq.(12.(12.22.11), .7): we can use the estimate (12.2.5) with g replaced by g + gl 2(1 y(T) II � + I I w(T) II �) � constant ( II ! II � + f: 1 g(t)1 2 dt) . (12.2.13) ThisWeproves strong stability. now generalize the stabilityfor result to systems (12.1.19). Ascallforthetheresulting exam ple, we fi r st solve the problem the case that! F 0 and solution The Laplace transformed equations have the form (12.2.14a) SYj = QYj ' j 1,2, . .. , (12.2.l4b) LoYo g, WeForwantlargeto derive estimates for IYj I . sh , we have the following lemma. II v(T) I I � �
=
y.
=
I l
=
==
SUFFICIENT CONDITIONS FOR STABILITY
Sl1
There are constants Co and Ko such that, for all
Lemma 12.2.2.
Co,
l s i = I sh l � (12.2.15)
Proof.
Equations (12.2. 14) and (12. 1 . 12) give us
For constant/l s l 2 � �, the desired estimate follows. Corresponding to Eq. (12. 1 . 1 5), we introduce Yj =
and we write Eq. (12.2. 14) as
( Yp +j � l , · · · Yj � r )T , A
A
j
1 , 2, . . . ,
(12.2. 16a)
with boundary conditions (12.2. 16b)
Here = (s ) , = H(s) are polynomials in s. Therefore, we can, on any compact set := ( s i � Co, ReS � 0) choose the unitary transformation U(s) l upper S that transforms to as a continuous function of s. Cor responding to Eq. (12.1 .20),triangular the changeformof variables w = U'y gives us M
H
M
M
[ ]
WI . W II + 1 J
=
[
Ml l
0
j
1, 2, . . . ,
(12.2.17a)
with boundary conditions (12.2. 17b)
Here
M ij , H I , H II
are continuous functions of s on any compact set S.
512 THE LAPLACE TRANSFORM METHOD FOR DIFFERENCE APPROXIMATIONS
As in the last section, WiI 0, and the solution satisfies WII 0, g- I =
WjI
= Ml l
WII '
(12.2.17c) By Lemma 12.1.7 the Godunov-Ryabenkii condition is equivalent to for Re s > O. We strengthen it to (12.2.18) Det (H I(s)) 0, for Res � O. This condition is called the detenninant condition. As we will see, it is equiva lent with a uniform estimate of the boundary values in terms of boundary data. We make the following definition. HIw1 = g.
-J
Assume that there is a constant K that is independent of s and g such that the solutions of Eq. (12.2. 14) satisfy
Definition 12.2.1.
p
L
v = -r+
I
\9. 1 2 �
Kl gl2
,
(12.2.19)
Re s > O.
Then we say that the Kreiss condition is satisfied.
We will now prove the following lemma. The Kreiss condition (12.2.19) is equivalent to the detenninant condition (12.2.18).
Lemma 12.2.3.
By2.18)Lemma 12.Because 2.2 we need only to consider lsi � Assume that the Eq. (12. holds. HI(S) is a continuous function of s, there is a constant 0 > 0 such that (12.2.20) Therefore, ! (HI(s))- 1 1 is uniformly bounded, and Eg. (12. 2 .19) follows from Eq.If(12.Eq.2.(12. 17c). 2 .19) holds, then (HI(s))- 1 must be uniformly bounded and Eq. (12.2.18) holds. This proves the lemma.
Proof.
Co .
513
SUFFICIENT CONDITIONS FOR STABILITY
ThetheKreiss condition gives an estimate ofxedthepoint solution nearhavethetheboundary. In fact solution can be estimated at any fi X . We following j lemma. Lemma 12.2.4. If the Kreiss condition holds, then there are constants Kj such that for every fixed j the solutions of Eq. (12.2.14) satisfy (12.2.2 1) Res > O. Proof. By Eg. (l2. 2 .17c) we get and the lemma follows. Wes > can the Kreiss condition asop.anNow eigenvalue condition. For Resingular 0,foritalsoiss the_express Godunov-;:R yabenkii conditi assume that H I (so) is i�o , where �o is real. Let s i�o + �, � > O. As � 0, Eg. (12.2.20) converges to 0,j =
=
w lI
-7
=
·t O )WII , WjI = M I -l I (IC;; H I (i�o)w{ g, =
and there is a nontrivial solution for O. In general, there may be one or more <eigenvalues of (i�o) with I K (i�o) 1 1 and in such a case the condition Ilylih 00 is violated. We make the next definition. Definition 12.2.2. If Det (H (so» 0, where So is purely imaginary, then So is called a generalized eigenvalue of the eigenvalue problem (12. 1.13) if g =
Ml l
I
=
=
IIlPlih = 00.
REMARK. The condition I I IP Il h may be fulfilled even if So is on the imagi nary axis. In such a case, So is an eigenvalue. We now have the following lemma. Lemma 12.2.5. The Kreiss condition is satisfied if, and only if, there are no eigenvalues or generalized eigenvalues for Re s � o. cases the(12.easiest is Eg.2.16a). (12.2Instead, .19). Oneweneed writeIn most the system 2.14) incondition the formtoofcheck Eg. (12. use not the < 00
514 THE LAPLACE TRANSFORM METHOD FOR DIFFERENCE APPROXIMATIONS
representation ( 12.1 .22) for the general solution, which we substitute into the boundary conditions to obtain C(s)o-
=
g.
ThenWewesummarize can detennine whether or not Eq. the ( 12.2. 19) holds. our results by connecting Kreiss condition to an estimate in physical space. Theorem 12.2.1. Consider the system (12.1. 10) with f F = O. Its solutions y satisfy, for any fixed j, the estimate =
( 12.2.22)
if, and only if, the Kreiss condition is satisfied.
assume that the Kreiss condition holds. Then, by Eq. ( 12.2.2 1 ) and 's relation, ParsevalFirst
Proof.
fo� e- 2'1 t l y/t)1 2 dt S; Kj J: e-2'1 t l g(t)1 2 dt S; Kj
J: I g(t)1 2 dt,
1/ > o.
Sincenotthedepend right-hand side is independent of 1/ and the solution y/t), 0 t T, does on g(t) for t > T, Eq. ( 12.2.22) follows. Next assume holds and let T ---7 00. Then Eq. ( 12.2.2 1 ) follows, as well that as Eq.Eq.( 1(2.2.12.2.22) 19). This pro,Ves the theorem. now consider the example 1 2. 1 .24) to (12. 1 .26) with inhomoge neousWe boundary conditions. We needEqs.to(sharpen Lemma 1 2. 1 .9. Lemma 12.2.6. There is a constant (j > 0 such that, on any compact set l s i C, ReS � 0, the roots K and K 2 of the characteristic equation (12.1.28) satisfy the inequalities S; S;
S;
1
- � (j , � , K 1 2 I (j 11 - _ I Kj
11
IK
j
1 , 2,
if a
> 0,
j
1 , 2,
if a
<
(In fact, the second inequality holds for all a.)
O.
515
SUFFICIENT CONDITIONS FOR STABILITY
rootsifareforcontinuous functions of s. Therefore, the inequalities can onlyThebeThe violated some S Kj = 1 , when a > 0, or K I K2 1 , when a < O. first statement of Lemma 12.1.9 tells us that this cannot happen for Restatement s > O. Let a > 0 and Kj 1 , then necessarily s O. However, by the second Lemma 1 2. 1 . 9, we obtain a contradiction because K 1 = - 1 . Let a
Proof.
=
=
=
1 _ 12
s
a
( T _ _1T ) K
K
( K � _ _1 ) s a K� Thus,lemma. s 0, and by Lemma 12. 1 .9 this leads to a contradiction. This proves the Now we substitute the general solution +
1 _ 12
=
=
into the inhomogeneous boundary conditions and obtain the inhomogeneous equations ( 12.1 .29) and ( 12. 1 .30) with nonzero right hand sides, respectively. Lemma(12.2.12.2.619) tells usandthatthe(1 1Kreiss and (12condition are uniformly bounded, that is, the esti mate holds is satisfied. We have proved the next lemma. Lemma 12.2.7.
tion.
The probLem (12.1.24) to (12.1.26) satisfies the Kreiss condi
Clearly, the results can becomponent. generalized to systems (10. 1 . 1 ) if we use the above approximation for every assume thatmatrices the operator Q is sernibounded for the Cauchy problem; thatWeis,now the coefficient are Hermitian and Re (w, Qw)_ =, =
�
(12.2.23)
0,
for all w with Il w ll-=,= < The semiboundedness will be destroyed by bound ary terms lemma when working the scalareffect. product (u, V)h 'Lf= 1 (Uj, vj)h. The following shows thewithboundary Lemma 12.2.8. Assume that Q defined in Eq. (12. 1. 11) satisfies Eq. (12.2.23). Then for every grid vector function {Wj Jj= 1 with II w llh < 00 .
=
-r+
00
516 THE LAPLACE TRANSFORM METHOD FOR DIFFERENCE APPROXIMATIONS
r j- 1
Proof.
Re (w, Qwh � Re � � (Wj- v , B_jw- v ). j= 1 v = O Define the difference operators
(12.2.24)
1 -I � BBj h j = -r } '
Q-
-
1 p � BBj h j= O } '
Q+
1
Q:
-
-I
� Bj E-j .
h j= - r
We have -I
j= - r
� � (wv, Bj Wv +j)
j=-r v = 1
-I
��
v = 1 j= - r
j= - r v =j + 1 j= - r v =j + 1
j= - r V = I �
=
(Bj E-j wv, Wv ) +
(Q:W, wh +
r j- I
r
0
� �
j= 1 v = l -j
(Wv + j , B_j w v ),
� � (Wj - v, B_j w_v ). j=1 v=O
Thus Re (w, Qwh
Re (w, Q+wh + Re (w, Q- wh Re (w, Q+wh + Re (w, Q:wh r j- I + � � (Wj - v , B_j w- v ). j= v = O
Re Define a new and extended grid vector function by I
(12.2.25)
SUFFICIENT CONDITIONS FOR STABILITY
517
for j for j � O.
� 1,
Then
that is, Re (w, Q+ w)_ �. � + Re (Q_w, w)_ �. � , Re (w, (Q+ + Q_ )w)_ �. � Re (w , Qw)_�. � o.
�
The lemma then follows by Eq. (12.2.25). We can now prove the following theorem. 0,
Consider the system (12. 1. 10) with F f and assume that Q is semibounded for the Cauchy problem and that the Kreiss condition holds. Then we obtain, for the solution y at every fixed T, =
Theorem 12.2.2.
=
(12.2.26) � constant f: I (t) 1 2 dt. Proof. By Lemma 12.2.8, we have r 2 Re ( y, QY)h � constant L I Yv I 2 , v -r + 1 and after integration the estimate follows using Theorem 12.2.1. II y(T) l Ih
g
=
518 THE LAPLACE TRANSFORM METHOD FOR DIFFERENCE APPROXIMATIONS
The(12.1.10). last theorem guarantees that weofh obtainto an estimate for the solutions of We extend the definition < j < and solve a Cauchy problem. Subtracting its solution fromInstead, u, we obtain a problem with F f = O. We shall not pursue this approach. we give conditions such that Eq. (12.1.1 0) is strongly stable. As in the example, we now solve an auxiliary problem d Wj (12.2.27a) j = 1,2, . .. , QWj, dt (12.2.27b) w/O) = h, (12.2.27c) io wo (t) = 0, (12.2.27d) II w llh < Totheobtain the desired result,be weestimated need toinconstruct boundary. Weconditions such that boundary values can terms of want to prove the II f i l h next lemma. Eq.
-00
00
=
_
-
00 .
Assume that Eq. (12.2.23) holds. Then, we can find boundary conditions (12.2.27c) such that, for all grid functions Wj, with II w llh < 00, that satisfy these boundary conditions,
Lemma 12.2.9.
Re (w, QW)h Proof.
� -0
r
L Iwvl 2 ,
v = -r+ 1
o > O.
(12.2.28)
By Lemma 12.2.8, r j 1
) Re L L (Wj _ . . B_j w_v), v=O r= Re L (Wr - v , B_r w- v ) v =o r- 1 j- 1 (12.2.29) + Re L L (Wj _ .. B_j W- v ). j= v = O Let be a constant with 0 < � 1 and choose the boundary conditions by (12.2.30a) that is, Re (w, QW h �
j= 1
I
1
T
T
SUFFICIENT CONDITIONS FOR STABILITY
p- =
519
1,2, . . .
(12.2.30b)
, r.
Then we get, by Eq. (12.2.29), r
-I -Re L TV jB_ r w_ v j 2 v=O r- I j(12.2.3 1) Re j=Ll L (Wj - v , B_j w_ v )' v=o Because B-r is nonsingular, we have 1, jB_ r vj � constant j vj � constant jB_j vj , j 1, forThus, any vector v. by using Eq. (12.2.3 1) and the boundary conditions (12.2.30) once more Re (w, QW)h
I
+
...,r
Re (w, Qwh
+
-
r- l
L TV jB_vw_v j 2 v=o r- I j- I
�
constant j=L L jWj _ v l jB_rw - v j , v =o
�
constant L L Tr + v -j I B_ rwj _ v _ r j jB_ rw_ v j , j= v =o
�
constant L L T(r -j)/2 (T(r -j + v)/2 jB_r Wj _ v _ r l ) (Tr/2 jB_rw_v l ), j=1 v=o
l r- I j- I
1 r- I j - I r- 1 j
I
constant L L T(r -jJj2 (Tr -j + v I B_rWj _ v _ r I 2 TV I B_ rw_ v I 2 ). j= I v= O Because 0 < T � 1 and j � 1, we have T(r - jJj2 � T I /2 . Furthermore, we have o � 2 and 1 j � 1 in the double sum; that is, r- l j- l r- l 2 V L L T jB_rw_ v j � constant L TV jB_ r w_v j 2 v=o j= I v = o and -
�
r
::; 1/
r
-
::; r
+
-
-
+ 1/
r
-
520 THE LAPLACE TRANSFORM METHOD FOR DIFFERENCE APPROXIMATIONS
r- l j- I
��
j = o v= o
rr - j + p I B_ r Wj _ v _ r I 2
r- l
constant � rV I B_ rw_ v I 2 . v=o
::;
Thus r 1
l /2 �- rV I B_rw_v I 2 . constantr v=o v= o Bynonsingular, choosing wer sufficiently get small but positive and using the fact that B-r is Re (w, Qwh +
r- I
� rV I B_rw_ v I 2
::;
r-
1
�
I B_rw_ v I 2 .
(12.2.32) Itboundary remainsconditions to include once W I , . . . , Wr in the estimate. This is achieved by using the more. We have Re (w, Qwh
r
� 1' = 1
I WI' 1 2
::;
r
�
1'= 1
::;
-0 1
v=o
I B_ r wl' _ r I 2
r- I
� v=O
I B_rw _v I 2
and r is nonsingular we can split the right-hand side of Eq. (12. 2 . 3 2) in twosincepartsB-and obtain Re (w, QW)h
::;
-
;
0
I B::� I - I
r-
I
� •
=0
I W _v I 2 -
r
; �
0
1' = 1
1 WI' 1 2 .
This proves the lemma. Thisoriginal lemmaproblem providesmayestimates for the boundary values Wj, j -r + 1 , . . . , r. The have boundary conditions including Uj , j r, there fore, we also need estimates for j r. We now have the following lemma. =
Wj,
>
>
Assume that r � p and that the boundary conditions (12.2.27c) are such that Eq. (12.2.28) holds. Then we obtain for every jixedj Lemma 12.2.10.
(12.2.33)
SUFFICIENT CONDITIONS FOR STABILITY
Proof.
521
By Lemma 12.2.9, we have r 0 2 Re (w, QW)h $ - 2 L I Wv I 2 , I v
d
dt Il w llh2
= -r+
o >
0,
and after integration 20 ro v =:i-r+ I wv (r)1 2 dr $ Il fll �. Therefore, Eq.the(12.estimate 2.33) follows for we+Laplace 1 $j $ transform Eq. (12.2.27) and To derive for j obtain (12.2.34a) SWj QWj fj , j 1,2, . . . , Lo wo 0, (12.2.34b) (12.2.34c) Ilw l l h < 00. We consider the two cases l s i I sh l 2! Co and I sh l < Co where Co is a large number. We have Il w(t) ll � +
I
-r
>
=
r.
r,
=
+
=
=
II w l I �
$ I s�1 2 II hQw ll� I s�2 IIf ll�, � COll" "'"
(�
+
II w il l 1' 12
+
1'�1 2
..t,
1 ;;', 1 2 +
1 1,12
)
II f ll l .
that is, for large I sh l It then follows that Next, we consider the case I sh l $ Co, and write Eq. (12.2.34) in the one-step
522 THE LAPLACE TRANSFORM METHOD FOR DIFFERENCE APPROXIMATIONS
fonn [see Eq. (12.2.16a)] Ifj l
For any j, we have
:s;
constant I h ! .
j- I j I j I h M Y Yj I L p =I M - Pfp, =
+
that is,
constant ( IY,[' h' % If, I' ) constant ( ! Y I 1 2 h l Jl �). Becau fixed jse I Y I I 2 L� = -r + 1 I wp 1 2 , and by assumption p :S; we obtain for every l y) I '
s
:s;
+
+
r,
=
I sh l Co . (12.2.36) Let s i� and integrate. We get, from Eqs. (12.2.35) and (12.2.36), :s;
=
Byfor the definition Parseval's relation, the fact l :S;j :S; ofweYj'obtain Eq. (12. 2.33) and for any fixedthatj . Eq. (12.2.33) holds Now we can prove our main theorem. -r +
r,
SUFFICIENT CONDITIONS FOR STABILITY
523
Assume that r � p, that the difference operator Q is semi bounded for the Cauchy problem, and that the Kreiss condition is satisfied. Then the approximation (12.1.10) is strongly stable.
Theorem 12.2.3.
Proof. In the (12. general we have toa nonzero forcing function Fj(t). The auxil iary problem 2.27)case,is modified dWj Qw. + Fj, j 1,2, ... , dt Wj(O) fj, (12.2.37) Lo o(t) o. The Lemma 12.2.10 can still be used. The transformed equation (12.2technique .34a) nowinbecomes =
]
=
=
w
=
j 1,2, ... ,
leading to the estimate
(12.2.38) instead Eq. with (12.2the.33)energy (assuming that the integral of I FI � exists). We also have, asofusual method,
(12.2.39) I w(t) l I l � con,tant ( I tl l l J: I F(T) l l dT) . The difference y v - W satisfies dYj j 1,2, . . . , dt Yj(O) 0, Lo Yo g(t) g l (t), where l (t) 1 2 dt 12.2.2, foand� I gfrom Eqs. (12.constant 2.28) andII f ll(12.�. The 2.29).theorem follows from Theorem REMARK. Because, in practical applications, we deal mainly with problems in finite intervals 0 x L, t � 0, the only interesting case is r p. This is typical +
=
=
=
+
::;
::;
::;
=
524 THE LAPLACE TRANSFORM METHOD FOR DIFFERENCE APPROXIMATIONS
for 12.4, dissipative approximations are nondissipative considered andapproximations. the assumption InwillSection be removed. EXERCISES
Carry Lemma out the details in the derivation of the estimate (12.2.38), that is, prove 12.2.10 for F -:/:- O. 12.2.2. Prove that the fourth-order approximation (12.1 .24) is strongly stable with the boundary conditions 12.2.1.
for any value of
Q.
12.3. A FOURTH-ORDER ACCURATE APPROXIMATION FOR HYPERBOLIC DIFFERENTIAL EQUATIONS
We consider the quarter-space problem for a hyperbolic system U(x, 0)
=
f(x),
o�x <
00,
t ;::: 0, (12.3.1)
and approximate it by the fourth-order accurate approximation j 1 , 2, . . . =
,
(12.3.2a)
where Q A(1 Do(h) � Do(2h)). Because A can be diagonalized, the approximation can be decoupled into a setwithof scalar equations. Therefore, we can assume that A already is diagonal =
A
-
A FOURTH-ORDER ACCURATE APPROXIMATION
525
In this case, the boundary conditions can be written as (12.3.3) We obtaindifferentiate the boundary condition (12.3 .3) twice with respect to t and (12.3 .4) As boundary conditions for the ingoing characteristic variables ull, we use (12.3.2b) For the outgoing characteristic variables, we use extrapolation conditions (12.3.2c) We want to prove the following theorem. Theorem 12.3.1. The approximation (12.3.2) is strongly stable and fourth order accurate if � 4. Proof The approximation for the outgoing characteristic variables V I is decou pled from that of the ingoing characteristic variables V II , and we have already discussed it in the previous sections an example. the approximation for V I is two strongly stableas and that, for Our everyresults fixed tellj, us that q
I Thus, for V IIwein can the think form of V as a given function and write the boundary conditions
v{/ ct)
=
go(t),
Nowwewealso canhave thinkdiscussed of the approximation for two v II as consisting of scalar equations that in the previous Theorem 12.2.3, they are strongly stable. sections. By Lemma 12.2.7 and
526 THE LAPLACE TRANSFORM METHOD FOR DIFFERENCE APPROXIMATIONS
Sectionthat12.the7,error we shall comment on the accuracy of this method andInshow is &further (h4). REMARK. It is, of course, not necessary that A be in diagonal form. However, the extrapolation conditions should be applied to the outgoing characteristic variables. EXERCISES 12.3.1.
Consider the linearized Euler equations 0, u(x, O) u(O, t)
==
° �x <
00,
t �
0,
I(x),
0. Formulate conditionsapproximation (12.3 .2b), expressed variables theu forboundary the fourth-order (12.3.2a).in the original 12.3.2. Consider the hyperbolic problem °�x < Aux + F, t � 0, ==
p,
Ut
00,
==
u(x, O) I(x), u ll (O, t) R / u / (O, t) + g ll (t), ==
==
Formulate the boundary conditions to Eq. (12.3 .2b) for the fourth-order approximation (12.3.corresponding 2a). 12.4. STABILITY IN THE GENERALIZED SENSE FOR HYPERBOLIC SYSTEMS
We again consider the approximation (12.1.10).forIntheSection 10.3, wecase,introduced the definition of generalized well-posedness continuous and theto same concept will be used for semi discrete approximations. Corresponding the analytic case, we make the following definition.
527
STABILITY IN THE GENERALIZED SENSE FOR HYPERBOLIC SYSTEMS
0.
Consider the approximation (12.1.10) with f g We call the problem stable in the generalized sense if, for all sufficiently small h, there is a unique solution that satisfies the estimate ==
Definition 12.4.1.
==
(12.4.1a) for all 1) > 1) o. Here 1) 0 and K(1) are constants that do not depend on furthermore,
F
and,
(12.4.2)
0.
Ifif, only f 0, we call the approximation strongly stable in the generalized sense instead of Eq. (I2.4.1a), ==
holds.
The inrestricti on to10.homogeneous initial andproblem boundary conditions was dis cussed Section 3 for the continuous and the same discussion applies here. If a smooth function cansubtracted be constructed thatoriginal satisfiessolution, the inhomo geneous conditions, then it can be from the and theThedifference will satisfy a problem of the required form. leading terms of the operator are of the order h- i , and this part of the inoperator the nextis called theorem.the principal part. There may also be lower order terms, as Q
Assume that the approximation (12.1.10) is stable in the gen eralized sense and that Qo is a lower order operator that is bounded indepen dent of h. Then the perturbed problem
Theorem 12.4.1.
dv} dt v}(O) Lo vo (t) = =
has the same property.
0,
j 0,
1 , 2, . . . ,
(12.4.3)
528 THE LAPLACE TRANSFORM METHOD FOR DIFFERENCE APPROXIMATIONS
Proof.
By considering QOVj as a forcing function, we get, by assumption,
f: e-2'1t llv(t) lI � dt 2K(1/) f: e-2'1 t (ljQov(t)lI �
+ II F( t II � dt
) )
�
K1 (1/ ) 10= e-2'1 t (ll v(t)ll� + I I F(t) I I �) dt K1(1/ ) -7 0, 1/ -7 00. By choosing 1/ sufficiently large, we obtain �
1/ 1/ 1, >
which proves the theorem. Therefore, we can again assume that the coefficients Bv and L j do not depend on h, that is, that Bv = BV Q ' We canL vNalso consider the strip problem. Then there are also boundary con ditions gN . As in the continuous case, one can prove the following N theorem. l'
=
The approximation is stable in the generalized sense for the strip problem if it is stable for the periodic problem and stable in the generalized sense for the right and left quarter-space problems.
Theorem 12.4.2.
Therefore, we needproblem only consider Laplace transformed (12.1.10)theisquarter-space problem. Letf O. The SVj QVj + Fj , j = 1, 2, ... (12.4.4) < Lava g, I\vllh 00, which called the case, resolvent equation. As inis thealsocontinuous we can prove the next theorem. Theorem 12.4.3. Let f = g O. The approximation (12.1. 10) is stable in the generalized sense if, and only if, Eq. (12.4.4) has a unique solution that satisfies =
=
,
=
=
(12.4.5a) 1/ , 1/ 1/ 0
lim'1 K(1/) O.
0,
for all s = i� + > where = If only f then it is strongly stable in the generalized sense if, instead of Eq. (12.4.5a), �=
=
STABILITY IN THE GENERALIZED SENSE FOR HYPERBOLIC SYSTEMS
529
holds.
If we only consider thecaseprinciple partorder of Q,terms theninwethecandifference choose O. However, in the general with lower equation, or when there are two boundaries, then we may have O. The main result of this section is Theorem 12.4.4.
REMARK. 'YI O =
'YI 0 >
Assume that the differential equations of the underlying prob lem (10. 1.1) are strictly hyperbolic (i.e., the eigenvalues of A are distinct) and that the approximation (12.1.10) is dissipative and consistent. Then the approx imation is strongly stable in the generalized sense if the Kreiss condition is satisfied. Theorem 12.4.4.
proof ofrequione-step res a number theThesolutions differenceof lemmas. equations.The first one gives estimates for Let D be an m x m matrix, where m is fixed, Gj is a discrete vector function with ilG llh < 00 and consider the system
Lemma 12.4.1.
yj + I = DYj + hGj , il y ll h < 00.
j
=
. .,
1, 2, .
( 12.4.6)
If ID I < 1, then the solutions of Eq. (12.4.6) satisfy the estimate I yllh �
1
h _
IDI I I Gi l h
+
(
h 1
_
IDI 2
) 1/2 I Y I · I
(12.4.7)
If ID- I I < 1, then Eq. (12.4. 6) has a unique solution that satisfies the estimates I D- I lh G l jD- I I I I i l!
( 12.4.8)
h 1 /2 11 GII " . (1 I D- 1 1 2 )1 /2
( 1 2.4.9)
i l y i l l! � 1
_
and I YI I �
-
is a sum of theLetparticular solutionForwithI DYII < 01 ,andt thethesolution solutionofofEq.the (12.4.6) homogeneous equation. Y I 0,
Proof.
=
=
530 THE LAPLACE TRANSFORM METHOD FOR DIFFERENCE APPROXIMATIONS
then implies that is, If G = 0, then j
= 1 , 2, . . . .
Therefore, for any solution y with II Y llh < 00
andIfEq.\D-(11 \2.4.7) follows. < 1, then we write Eq. ( 12.4.6) in the fonn j 1 , 2, . . . and we obtain j Therefore, for any solution y with II yllh < 00 ==
(12.4. 1 0)
,
==
1 , 2, . . . .
that is, and Eq. ( 12.4.8) follows. In particular, we have Yj --7 0 as
j --7 00.
STABILITY IN THE GENERALIZED SENSE FOR HYPERBOLIC SYSTEMS
531
The homogeneous equation (12.4.10) has the fonn j 1,2, ... ; Uj D- l Uj + 1 > that is, k > j. Therefore, 0 as j Duhamel's principle gives us, for Eq. Yj (12.4.10) thesinceunique solution, ==
==
----,1
----,1 00,
�
Yv
h
D-j Gj, L j=v
and therefore, for j 1 ==
This proves the lemma. We nowtransfonned consider equations Eq. (12.1.10) withf O. To esti mriateght thehalfsolutions of the Laplace (12. 4 . 4 ), we divide the of the complex s plane into three parts: I. l s i � Co, Re s � 0, where the constant Co is large. II. 0 < C l � s i � Co, Res � 0, where Cl is a small constant. III. l s i � C l , Res � O. ==
l
� Co, Re s � 0 Lemma 12.4.2. There are constants Co and Ko such that, for the solutions of Eq. (12.4.4) satisfy the estimate
I. l si
l si
==
I sh l � co,
532 THE LAPLACE TRANSFORM METHOD FOR DIFFERENCE APPROXIMATIONS
Proof. We have
�
� Q
"ull � $; s 2 II h ul l� + 2 I I F II �, 1 I 1 constant A 2 l A 2 A $; ( li v II h + h I g I 2 ) + 11 F I1 h ' 2 I sh l W By choosing
I sh l � Co, Co " ull � $;
large enough, we get
constant
$; constant
( I h i I l s i Ig l 2 + Is 1 2 " FII � ) ( co l sl I glA 2 + W II FA li h2) , S
1
.
1
1
the desired estimate follows. II. 0 < C}
$; lsi $; Co
Lemma 12.4.3. For any constant c l > 0, there is a constant T > ° such that, for C l $; l s i $; co , where Re s � 0, the solutions K of the characteristic equation (12. 1.18) satisfy
IKI
$;
1
-
T
or
IK I >
1
I
+
T.
( 12.4. 1 1)
Proof. We know that I K * 1 for Re S > 0. Assume now that there is a sequence {S( v) }, where Re S( v ) > 0, with solutions {K ( v ) } and I K ( v ) l less than 1 such that v sty) ----7 sQ' Re So 0, and K ( ) ----7 ei� , � reaL By Eq. ( 12. 1 . 1 8), So is an eigen value of = L v Bv e' v� . By dissipativity, Re So < ° if � * 0, which is impossible under our assumption. By consistency, L v Bv ° (see Lemma 12.4.5), imply ing that So ° for � 0, which is also excluded from the s-domain under consideration. This proves the lemma. =
Q
=
=
=
We now write Eq. ( 12.4.4) as the one-step method ( 12.2. 16),
j
=
and use the difference equation to eliminate ary condition which leads to
Here,
g.
1 , 2, . . ,
( 1 2.4. 12a)
.
up
+
1 , Up + 2 , . . , Vq from the bound .
( 1 2.4. 1 2b)
STABILITY IN THE GENERALIZED SENSE FOR HYPERBOLIC SYSTEMS
Fj I gl
=
$; $;
(B
_ I FAj ,
p
0, ... ,0)T
constant
(l � h
q-
p
,
I Fj l
constant (h /2 I1 F llh
By Eq. ( 1 2.4. 1 1), there is a transfonnation s, for C I $; l s i $; Co, and Re s ;::: such that
0,
533
+
+
Ig l
I g \ )·
)
( 1 2.4. 1 2c)
S(S) that is a smooth function of
where
(MI)*MI $; 1 - 7/2, (M II)*MII ;::: 1 + 7/2.
( 1 2.4. 1 3a) ( 1 2.4. 1 3b)
Now introduce a new variable y = Sw into Eq. ( 12.4. 1 2). We obtain
M IIW !I ]
+
-
hFjI I,
( 1 2.4. 14)
with boundary conditions
By Lemma 1 2.2.3, the Kreiss condition is satisfied if, and only if, HI (S) is nonsingular for Re s ;::: We can now prove the following lemma.
0.
Lemma 12.4.4. Assume that the Kreiss condition is satisfied and consider s for < CI $; l s i $; co. For every CI > there exists a constant K2 such that the solutions of Eq. (12.4.4) satisfy the estimates
0
0,
IIUlih $; K2 (h 1l F lih + h I /2 Igl ), I Uj l $; K2 (h 1/2 I1 F llh + Ig \ ),
j fixed.
534
THE LAPLACE TRANSFORM METHOD FOR DIFFERENCE APPROXIMATIONS
Proof. The inequality ( l 2.4. l 3b) implies that I (M II)- I I inequalities ( 1 2.4.8) and ( 12.4.9) then give us
I I w II IIh for the solutions
� w
� ( 1 + T/2)- 1 /2 < 1 . The
II
constant h llF IIh, of Eq. ( 12.4. 14). The Kreiss condition implies that
Therefore, by Eqs. ( 12.4.7) and ( 12.4. 1 2c), II w 1 1l h
�
constant (hll F ll h + h l /2 LW,
and the lemma follows. III. l s i
� Cl « 1, Re s � 0
Now we consider Eq. ( 1 2.4. 1 2) for ls i vectors of M are the solutions of
� CI «
1 . The eigenvalues and eigen
K
[ �l J, IPp+ r
which, by Eq. ( 1 2 . 1 . 17), is equivalent to
(Sf - ,t, )
B, K ' ,, '
o.
( 1 2.4. 15)
We need the following lemma.
12.4.5. Let Eq. (l2.1.10a) be a consistent approximation of Eq. (10.1.1). Then
Lemma
p :::: -r
v = -r
Proof. Apply Eq. ( 1 2. 1 . 1 1) to a smooth function 1/;. Taylor expansion gives us
v =-r
V = -r
STABILITY IN THE GENERALIZED SENSE FOR HYPERBOLIC SYSTEMS
Let F = 0 in Eq. ( 10. 1 . 1 a). Then consistency implies lim h � o Q1/; the lemma follows. By using the identity K " ( 1 + (K - 1 »" write Eq. ( 1 2.4. 1 5) in the form =
(ll
-
- 1)
-
+ &( I K
A1/; ' , and
1 + V(K - 1 ) + & ( I K - 1 1 2), we can
=
t, B' K } ' (SI ,i:" B, ,t, ,B,(K (Sf - A(K - 1 )
=
535
- 1 1 2 » \0 \
+ &( I K =
)
- 1 1 2) � I
O.
Consider this equation for small l s i . Because, by assumption, A has distinct eigenvalues there are exactly m distinct eigenvalues and corresponding eigen vectors of the full problem, and they have the form
Here Aj and aj are the eigenvalues and corresponding eigenvectors of A. The dissipativity condition tells us that Re s < 0 for K ei�, � real, � -j:. O. For K = 1 we have s 0, and we have just demonstrated that there are exactly m eigenvalues Kj = 1 for s = O. Therefore, all the other eigenvalues satisfy =
=
(j > for Re s � O. Let Aj > eigenvalues satisfy
O.
If we choose � > 0, we see that the corresponding for all
s
Therefore,
Kj (i�) + implies that
0,
- -
i�
-
1/
Aj
+
1/ ,
7J
� O.
+ &(I �I � )
536 THE LAPLACE TRANSFORM METHOD FOR DIFFERENCE APPROXIMATIONS
1 21\�j , -
I Kj ( i�
+
�)I �
+
where Aj > 0, for c ] sufficiently small. Correspondingly,
1 - 2 1�j l where Aj < 0, for sufficiently small. Order these eigenvalues such that IKj l > 1 for j 1,2, . . . , r and I Kj l < 1 for j r + 1, � > 0. Now we can find a smooth transformation S that transforms Eq. (12.4.12) into Eq. (12.4 .14), where now -
I Kj ( i� + � ) I �
'
=
c]
m
=
[; [
K' '
. . . , m,
°
�J �J (12.4.13). Km
K'
°
Kr
Here M{ and M{ I satisfy the inequalities Now we can again apply Lemma and use the Kreiss condition to obtain
12.4.1
(12.4.5b). 12.4.2 12.12.44..44.
The last estimates and Lemmas and show that the solutions of Eq. satisfy Eq. Thus, the approximation is strongly stable in the generalized sense. This proves Theorem
(12.4.4)
REMARK. For the usual difference approximations, the coefficients Bv of Q are polynomials in A. In that case, we need only assume that the underlying partial differential equations are strongly hyperbolic, because we can reduce the system to scalar equations.
(12.1.1 0)
As we have proved in the previous section, the Kreiss condition is satisfied if, and only if, there are no eigenvalues or generalized eigenvalues for Re S � 0.
STABILITY IN THE GENERALIZED SENSE FOR HYPERBOLIC SYSTEMS
537
By Lemma 1 2.4.3, any generalized eigenvalue s = i�o, �o -:/:- 0, is a genuine eigenvalue for dissipative approximations. Therefore, for s -:/:- 0, we only need to test for genuine eigenvalues in this case. In the neighborhood of s = 0, we can simplify the analysis. For Re s > 0, l s i « 1 , the general solution of Eq. (1 2.2. 14a) with Ilyli h < 00 can be written as ( 1 2.4. 1 6) Here Zj converges, as S problem, that is,
---7
0, to the corresponding solution of the continuous
ZJII (see Section 10. 1). By Eq. ( 12.1 .22),
Zj
L
IK" I � ] - o
Pv ( J)K(
represents the part of the solution with I K p I strictly smaller than 1 . The boundary conditions consist of two sets: 1.
m - r boundary conditions that formally converge to boundary conditions of the continuous problem. By consistency, they are of the form
2. Extra boundary conditions of the form
(E - J)Q2YO
=
g (2 )
•
Here Qj = L Cjp E" , j = 1 , 2 are bounded difference operators. Substitut ing Eq. ( 1 2.4. 1 6) into the boundary conditions gives us
Z(/ + Z&I R 1zJ + (E - J) Q] ZO + &( l s l )ZJ1 + g ( l ) , (E - J)Q2Z0 + &( I S I )ZJ1 = g (2). =
(1 2.4.1 7a) ( 1 2.4. 17b)
Now we can prove the following theorem. Theorem 12.4.5. The solutions of Eq. (12.2. 14) satisfy the Kreiss condition in the neighborhood of s = ° if, and only if, the reduced eigenvalue problem at
s 0, =
538 THE LAPLACE TRANSFORM METHOD FOR DIFFERENCE APPROXIMATIONS
(E
-
I)Q2Y'0
0,
Y'j =
L
Pp ( j)K �,
( 12.4. 1 8)
[<. 1 < 1 - 0
only has the trivial solution. Proof. If Eq. ( 1 2.4. 1 8) has a nontrivial solution, then s = ° is a generalized eigenvalue for the full eigenvalue problem (12. 1 . 1 3) and the Kreiss condition is not satisfied. If, on the other hand, Eq. ( 1 2.4. 1 8) only has the trivial solution, then Eq. ( 12.4.17b) has a unique solution zo in the neighborhood of s = 0, and we can estimate it in terms of g (2) and &( I SJ)ZJi. This solution is introduced in Eq. ( 1 2.4. 17a), and since the coefficient of ZJ I is of the form 1 + &(ls J ), there is a unique solution that can be estimated in terms of g ( 1 ) and g (2) . This proves the theorem. EXERCISES 12.4.1. Prove Theorem 1 2.4.2 using the same technique as in Theorem 10.3.3. 12.4.2. Consider the approximation o > 0,
j = 1 , 2, . . . .
Formulate boundary conditions and prove that the Kreiss condition is satisfied by using Theorem 1 2.4.5. 12.5. AN EXAMPLE THAT DOES NOT SATISFY THE KREISS CONDITION BUT IS STABLE IN THE GENERALIZED SENSE
All examples we have treated so far have satisfied the Kreiss condition. We now consider an example that does not satisfy the Kreiss condition. For certain parameter values, it is still stable in the generalized sense; for certain other values, it is not. It shows how complicated the situation is when the Kreiss condition is violated. Consider the equation au/at + au/ax = ° and the approximation
dv-J _ + Do v-J dt
Vj(O)
=
/j,
=
avo V I = 0, II v l I " < 00, -
0,
j
=
1 , 2, . . , .
( 12.S . 1 a) ( 1 2.S. 1b) (12.5 . 1 c) ( 1 2.5 . 1 d)
AN EXAMPLE THAT DOES NOT SATISFY THE KREISS CONDITION
539
where a is a complex constant. For a 1 , this boundary condition is also some times used when the characteristic is entering the domain, as it does in our example. For other values of a, the example is somewhat artificial, but it is used to illustrate typical phenomena arising with generalized eigenvalues. The connected eigenvalue problem for Eq. ( 12.5 . 1 ) is given by =
1 -2 1
S'Pj
('Pj + I - 'Pj - d ,
=
1 , 2, . . . ,
s
=
hs,
II 'P II I , = < 00 ,
a 'P I ,
'Po
j
(1 2.5.2)
Thus,
where I K I I < 1 , for Re 5 > 0, and K I is the appropriate solution of the charac teristic equation
that is, KI
=
- � ::;; arg � ::;; � . 2 2
- 5 + �,
Substituting 'P into the boundary conditions shows that 5 is an eigenvalue if KI
=
a.
( 12.5.3)
Straightforward calculations give us the following lemma. Lemma 12.5.1.
KI
=
The function < � -� < arg � 2 2'
- 5 + �,
fior
> 0, Re 5 -
maps the right half-plane Re 5 � 0 one-to-one onto the right half disc n {K I , I K I I ::;; 1 , Re K I � O}, see Figure 12.5.1. In particular, 1. IK I I
=
1 , I arg K I I ::;; � ,
corresponds to 5 it
K I (O)
=
=
1,
K I (± i)
=
-1
+ i.
::;; � ::;; 1 with
.
540 THE LAPLACE TRANSFORM METHOD FOR DIFFERENCE APPROXIMATIONS
--i
Figure 12.5.1.
2. K I = iT, - 1 < T < 1 , T -J 0, corresponds to s = i t I � I > 1 . 3 . The interior points of n correspond to s with Re s > O. 4. K I = 0 corresponds to s = 00. 5. There is a constant 0 > 0 such that I K I I � 1 - 0 Re s when Re s This gives us the following theorem.
is small.
Theorem 12.5.1. If a belongs to the interior of n , then there is an eigenvalue s with Re s > 0, and the approximation is unstable. If a does not belong to n , then there is no eigenvalue or generalized eigenvalue with Re s 0, and the
�
approximation is stable. If a = iT, - 1 < T < 1 , T -J 0, then there is an eigenvalue s with Re s = o. If I a I = 1, Re a 0, then there is a generalized eigenvalue with Re s = O.
�
We now carefully investigate the properties of the solution when a is on the boundary an of n ; that is, when there is an eigenvalue or a generalized eigenvalue s on the imaginary axis. We apply the technique used in Section 1 2.2 to derive an estimate of v in physical space. Corresponding to Eq. ( 1 2.2.6), we first solve the auxiliary prob lem
dw·1 dt
__
+
Dow1'
Wj (O) = fJ, Wo + W I = O.
0,
We apply the energy method and obtain
j
1 , 2, . . . , ( 1 2.5.4)
AN EXAMPLE THAT DOES NOT SATISFY THE KREISS CONDITION
In particular, as
541
T --7 00, ( 12.5.5)
The difference Y = v - w satisfies
dYj dt
-DoYj ,
1 , 2, . . . ,
j
y/O) = 0,
Yl (t) - aYo(t)
g(t),
( 1 2.5.6)
where, by Eq. ( 1 2.5.5),
J:
I g(t) 1 2 dt � constant II fll � .
We solve ( 12.5.6) by Laplace transformation. The transformed equations are
SYj = -DoYj , Yl - ayO = g,
Re s > 0,
lIyll h < 00.
Therefore,
Kl - a
--
g,
and, by Parseval's relation, we obtain, for any j ,
( 1 2.5.7) If there is no eigenvalue or generalized eigenvalue, then
I (K I for Re s � O. We can choose 1/
-
=
a) I I -
� constant,
0 in Eq. ( 1 2.5.7) and obtain
542 THE LAPLACE TRANSFORM METHOD FOR DIFFERENCE APPROXIMATIONS
We used the last estimate earlier to prove stability. If a belongs to_the boundary of n , then there is an eigenvalue or generalized eigenvalue So = i�o such that lim (K \ - ar
s � so
\
00,
and we have to investigate the right-hand side of Eq. ( 1 2.5.7) more carefully. We have to distinguish between three different cases.
G.ASE 1 . a = iT, - 1 < l' < 1 . In this case, there is an eigenvalue So
I �o l
> 1, and, in the neighborhood of So, we have
Therefore, we obtain, for the variable
s = s/h, So
Thus, for sufficiently small
i�o/h.
� t I hel l s - so l ·
( 1 2.5.9)
1 be a constant. We use Eq. ( 1 2.5.7) to estimate Yj in a finite
T.
I where
( 1 2.5.8)
h i s - so l ,
IK\ - al Let 0 < 0 < interval 0 :5; t :5;
-
i�o,
+
II,
( 12.5. 10)
AN EXAMPLE THAT DOES NOT SATISFY THE KREISS CONDITION
543
Here Kl is a constant that, by Eq. ( 12.5.8), depends only on 0 . By Eqs. ( 12.5.8) and ( 12.5.9), we have, for sufficiently small 0,
Therefore, Eq. ( 12.5. 10) gives us, for 1/
=
liT,
By taking the square root of both sides, we find that the stability constant is proportional to h- 1 • However, because \a\ < 1, we have, for sufficiently small 0,
a =
constant > O.
Thus, the deterioration of the stability constant is only present in a narrow boundary layer. For
that is,
544 THE LAPLACE TRANSFORM METHOD FOR DIFFERENCE APPROXIMATIONS
j � I log h ili log ( 1 - a ) l , the bad effect disappears. The above estimate is sharp for general functions g E has some smoothness, that is, if
L2 (t) . However, if get)
where p � 1 for large l s i , then we can do better. The major contribution comes from the term
But
-
-
-
� o = � olh, where � o is a fixed number. Hence, for 0 < I �o l we get
R
f (�O +O )/h - h2 (�O- -o )/h 1
d <:;t < cons2tant h2p - I < constant h2p - 3 . h I 1· <:;t + 1'/ 1 2p -
Thus, if P � �, then the "bad term" in Eq. ( 1 2.5 . 1 1) is bounded. Therefore, this example indicates that one can accept genuine eigenvalues S with Re s = O.
GASE 2. l a l _= 1 , Re a > O. In this case, there is a generalized eigenvalue So =
i �o, where I � o 1 < 1 . We can proceed in the same way as for the previous case and obtain an estimate of the_same type as in Eq. (12.5 . 1 1 ) (see Exercise 12.5 . 1 ). However, in this case, I K l ( i�o )1 = 1 and, therefore,
for 1'/ = liT. (Recall that I K I I < 1 for 1'/ > 0 by definition.) Thus the "bad behavior" is not restricted to a boundary layer but spreads to the whole domain o � x < 00. Still, for smooth g, the effect is unimportant and one might be tempted to accept the presence of generalized eigenvalues. However, the insta bility can be amplified if we consider, instead of the quarter-space problem, the problem in the strip 0 � x � 1 , t � O. Now we also must describe boundary con ditions at x = 1 . Let hN = 1 . We consider Eq. ( 1 2.5. 1 ) with lIuli h < 00 replaced by the boundary condition UN =
O.
(12.5. 1 2)
AN EXAMPLE THAT DOES NOT SATISFY THE KREISS CONDITION
545
This is not a very natural condition at an outflow boundary, but the left quarter-space problem is stable, and the example serves as an illustration of the principles. We now estimate the eigenvalues of the operator corresponding to the half-strip problem. The general solution of the eigenvalue problem is of the form
φⱼ = σ₁κ₁ʲ + σ₂κ₂ʲ,

where κ₁ and κ₂ are solutions of the characteristic equation. Thus,

κ₁κ₂ = −1,

that is, κ₂ = −1/κ₁. Substituting φ into the boundary conditions gives us a linear system for σ₁ and σ₂. Therefore, s is an eigenvalue if the determinant of that system vanishes, that is,

(κ₁ − a)/(1 + aκ₁) = (−1)^{N−1} κ₁^{2N−1}.    (12.5.13)
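The roots κ₁, κ₂ and the regime in which the generalized eigenvalue occurs can be checked numerically. The sketch below is an added illustration, not part of the original text; it uses the quadratic κ² + 2s̃κ − 1 = 0, which is the form consistent with κ₁κ₂ = −1 and with the root κ₁ = −s̃ + √(1 + s̃²) used in the discussion that follows.

```python
import numpy as np

def kappa1(s):
    # inside root of kappa^2 + 2*s*kappa - 1 = 0
    return -s + np.sqrt(1.0 + s * s)

def kappa2(s):
    return -s - np.sqrt(1.0 + s * s)

# kappa1 * kappa2 = -1, i.e., kappa2 = -1/kappa1
for s in [0.5 + 1j, 2.0, 0.1 + 3j]:
    assert abs(kappa1(s) * kappa2(s) + 1.0) < 1e-12

# |kappa1| < 1 strictly for Re s > 0
for s in [0.01 + 1j, 1.0, 0.5 - 2j]:
    assert abs(kappa1(s)) < 1.0

# on s = i*xi0 with |xi0| < 1, kappa1 sits on the unit circle:
# this is the regime in which the generalized eigenvalue s0 occurs
for xi in [0.0, 0.3, -0.9]:
    assert abs(abs(kappa1(1j * xi)) - 1.0) < 1e-12
```

The principal branch of the complex square root (nonnegative real part) selects the root inside the unit circle for Re s > 0.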
We want to estimate the corresponding eigenvalues s. To do that, we will show that there is an eigenvalue near So , that is, that pq. {12.5 . 1 3) has a solution s z So , Re s > O. In a neighborhood of So = i�o, I �o l < 1 , we have, with K = K I = -s + v'1 + s2 ,
κ(s) = −iξ − η + √(1 − ξ² + 2iξη + η²)
     = −iξ − η + √(1 − ξ²) (1 + iξη/(1 − ξ²)) + O(η²)
     = −iξ + √(1 − ξ²) − η(1 − iξ/√(1 − ξ²)) + O(η²),    (12.5.14)

that is, |κ(s₀)| = 1. Therefore, we can write
κ(s) = e^{i(τ₀ + c₁(ξ − ξ₀)) − c₂η},

where

e^{iτ₀} = κ(s₀) = a,
c₁ = c₁₀ + O(|ξ − ξ₀|) + O(η),    c₁₀ ≠ 0 real,
c₂ = c₂₀ + O(|ξ − ξ₀|) + O(η),    c₂₀ > 0 real.
To first approximation, Eq. (12.5.13) becomes

e^{iπ(N−1)} e^{(i(τ₀ + c₁₀(ξ − ξ₀)) − c₂₀η)(2N−1)} = D(ic₁₀(ξ − ξ₀) − c₂₀η),    (12.5.15)

where D = a/(1 + a²). Observe that, by assumption, a² ≠ −1. Equation (12.5.15) has a solution if, for some integer m,
e^{−c₂₀η(2N−1)} = |D| |ic₁₀(ξ − ξ₀) − c₂₀η|,
π(N − 1) − 2πm + (τ₀ + c₁₀(ξ − ξ₀))(2N − 1) = arg D + arg(ic₁₀(ξ − ξ₀) − c₂₀η).    (12.5.16)

Choose m such that

σ := π(N − 1) + (2N − 1)τ₀ − 2πm

satisfies
|σ| ≤ π.
One can show that the equation

e^{−c₂₀η(2N−1)} = |D| |ic₁₀(ξ − ξ₀) − c₂₀η|

has a solution

c₂₀η = log(2N − 1)/(2N − 1) + O(log log(2N − 1)/(2N − 1))
(Exercise 12.5.2). Furthermore, observing that

c₁₀(ξ − ξ₀) = O(1/(2N − 1)),

it follows that Eq. (12.5.16) and, therefore, also Eq. (12.5.13) have a solution
c₂₀η = log(2N − 1)/(2N − 1) + O(log log(2N − 1)/(2N − 1)),
c₁₀(ξ − ξ₀) = (arg D − σ)/(2N − 1) + O(1/((2N − 1) log(2N − 1))).    (12.5.17)

This eigensolution leads to a solution of the strip problem

vⱼ(t) = e^{st}φⱼ,    s = (iξ + η)/h,
where, to first approximation,

η/h = α log(1/h),    α = constant > 0.

The last equation shows that the instability is of the order (1/h)^{αt}, which grows with time. If αt = 10 and N = 100, then the solution has increased by a factor 10²⁰, which is not acceptable. Geometrically, we can think of this phenomenon in the following way. The solution of the strip problem consists of waves that are reflected back and forth between the boundaries. There are waves that are amplified by a factor h⁻¹ every time they are reflected at the boundary x = 0. As time increases, more and more reflections can take place. This argument explains why, in the quarter-space case, the stability constant is only increased by h⁻¹. The above waves are only reflected once at the boundary x = 0 and, then, escape to infinity.

The bad behavior obtained here does not happen in the first case, when s₀ = iξ₀, |ξ₀| > 1. The deterioration of the stability constant is only felt in a boundary layer; that is, the amplitude of the reflected wave decreases exponentially away from the boundary x = 0. In this case, we obtain from Eq. (12.5.13)
In the neighborhood of s₀, we have |κ₁| ≤ 1 − σ, σ > 0. Thus, |κ₁ − a| and, hence, also |s − s₀|, must be exponentially small. This shows that the order of the instability remains essentially O(1/h). The above results seem to indicate that we can accept eigenvalues s with Re s = 0 but not generalized eigenvalues. However, the picture is even more complicated. This will become clear during the discussion of the third case.
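Before turning to the third case, the amplification figure quoted above for the strip problem can be checked by direct arithmetic (an added illustration, not part of the original text):

```python
N = 100                  # grid points across the strip, so h = 1/N
alpha_t = 10             # the value of alpha*t quoted in the text
growth = N ** alpha_t    # (1/h)^(alpha*t), computed with exact integers
assert growth == 10**20  # the factor 10^20 quoted above: (10^2)^10
```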
CASE 3. a = ±i. Now s₀ = ∓i, κ₁ = ±i, and we have a generalized eigenvalue. We have to calculate κ(s) in the neighborhood of s₀. The root κ(s₀) is now a double root of the characteristic polynomial, and we have

κ(s) = κ(s₀)(1 + O(|τ|^{1/2})),    τ = (s − s₀)/s₀.

Therefore, we obtain, for all sufficiently small |τ|,

|κ(s) − a| ≥ constant |s − s₀|^{1/2}.    (12.5.18)
This inequality tells us that s → s₀ at a faster rate than κ(s) → a. This implies improved stability properties. Substituting Eq. (12.5.18) into the estimates (12.5.10) and (12.5.11) gives us, for η = 1/T,
that is,

∫₀^∞ e^{−2ηt} |yⱼ|² dt ≤ constant (1 + T/h) ‖f‖²_h.
Thus, the instability is of the order √(T/h), which is weaker than in the previous case. A simple but tedious computation shows that there is a constant c₀ > 0 such that, for all sufficiently small τ,
|κ(s)| ≤ 1 − c₀η/√(|ξ − ξ₀| + η).
Now, since |s − s₀| ≥ (1/(√2 h))(|ξ − ξ₀| + η), we get from Eq. (12.5.18)

|κ(s) − a| ≥ constant ((|ξ − ξ₀| + η)/h)^{1/2}.

Therefore, in the last case, instead of Eq. (12.5.11), we obtain, with η = 1/T, a corresponding estimate for yⱼ, j = 1, 2, . . . .
Again, the deterioration of the stability constant is restricted to a boundary layer. This layer is wider than in case 1, but one can still show that no further amplification occurs for the strip problem. Our example shows that eigenvalues s on the imaginary axis can be accepted, but that generalized eigenvalues can lead to bad nonlocal behavior if the corresponding root κ with |κ| = 1 is simple. If κ is a multiple root, then the singularity becomes weaker, as shown by Eq. (12.5.18), and this case can be accepted. We now show that our generalized stability concept can distinguish between the different cases. First consider case 1, where there is an eigenvalue s₀ = iξ₀, where |ξ₀| > 1. For |s − s₀| ≥ δ > 0, the desired estimate poses no difficulty and
can be obtained by our previous methods. Therefore, we need only consider a neighborhood of s₀. We let f = 0, F ≠ 0, and write the transformed equations, j = 1, 2, . . . , in the form

(ṽⱼ₊₁, ṽⱼ)ᵀ = C(ṽⱼ, ṽⱼ₋₁)ᵀ + Fⱼ,    j = 1, 2, . . . ,    (12.5.19)

where

C = [−2s̃ 1; 1 0],    Fⱼ = (F̃ⱼ, 0)ᵀ,    B = [a −1].
The eigenvalues of the matrix C are the solutions κ₁ and κ₂ of the characteristic equation. Now assume that |ξ₀| > 1. Then we know that |κ₁| < 1 and |κ₂| > 1 in a neighborhood of s₀. Therefore, there is an analytic transformation T = T₀ + (s − s₀)T₁(s) such that

TCT⁻¹ = [κ₁ 0; 0 κ₂].
Introducing new variables wⱼ = T(ṽⱼ, ṽⱼ₋₁)ᵀ gives us

wⱼ₊₁ = [κ₁ 0; 0 κ₂]wⱼ + TFⱼ,    j = 1, 2, . . . .    (12.5.20)

Because s₀ is an eigenvalue, a(s₀) = 0, and a simple calculation shows that    (12.5.21)
Furthermore, |κ₁| = r, |κ₂| = 1/r, 0 < r < 1, and b(s) is bounded. Thus, by Lemma 12.4.1, we obtain the estimates

‖w⁽²⁾‖_h ≤ constant h ‖F⁽²⁾‖_h,
|w₁⁽²⁾| ≤ constant h^{1/2} ‖F⁽²⁾‖_h,
‖w⁽¹⁾‖_h ≤ constant (h‖F⁽¹⁾‖_h + h^{1/2}|w₁⁽¹⁾|)
        ≤ constant (h‖F⁽¹⁾‖_h + (h^{1/2}/|s − s₀|)|w₁⁽²⁾|)
        ≤ constant (h‖F⁽¹⁾‖_h + (h/|s − s₀|)‖F⁽²⁾‖_h)
        ≤ constant (h + h/|s − s₀|) ‖F‖_h.

Thus, the problem is stable in the generalized sense in case 1, which was shown above to be well-behaved. In case 2, an estimate leading to generalized stability cannot be derived. The reason is that κ₁ approaches the unit circle too fast as s → s₀: |κ₁| = 1 − O(|s − s₀|), |κ₂| = 1 + O(|s − s₀|). In case 3, the situation is more favorable. The root κ₁ = κ₂ is now a double root at s₀, which implies |κ₁| = 1 − O(|s − s₀|^{1/2}). This larger distance from the unit circle as s → s₀ is enough to secure the necessary estimates for stability in the generalized sense.
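The square-root behavior (12.5.18) that distinguishes case 3 can be checked numerically. The sketch below is an added illustration; it uses κ(s) = −s + √(1 + s²) from the text, and the pairing a = −i, s₀ = i is one admissible sign choice (an assumption made here for concreteness).

```python
import numpy as np

def kappa(s):
    # inside root kappa1 = -s + sqrt(1 + s^2) of the characteristic equation
    return -s + np.sqrt(1.0 + s * s)

# Case 3 (double root): a = -i with s0 = i (one sign choice)
a, s0 = -1j, 1j
for eps in [1e-2, 1e-4, 1e-6]:
    s = s0 + eps                     # Re s > 0, approaching s0
    ratio = abs(kappa(s) - a) / abs(s - s0) ** 0.5
    # |kappa(s) - a| ~ sqrt(2)*|s - s0|^(1/2): square-root rate, Eq. (12.5.18)
    assert 1.2 < ratio < 1.6
```

The bounded ratio confirms that κ(s) leaves a at only the square-root rate, which is what weakens the singularity in case 3.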
EXERCISES

12.5.1. Consider the problem (12.5.1) with a = 1. Find the generalized eigenvalue s₀ = iξ₀, and prove that the instability of order 1/h is confined to a boundary layer (case 2 in our discussion above).

12.5.2. Consider the equation for large M, and prove that it has a solution.
[This result is used in the derivation of solutions to Eq. (12.5.13).]

12.5.3. Consider the approximation

dvⱼ/dt = A D₀vⱼ,    j = 1, 2, . . . ,

where A is a constant 2 × 2 matrix, together with the boundary condition for v₀⁽²⁾. Prove that it has generalized eigenvalues. In Exercise 11.1.1, it was shown that the approximation satisfies an energy estimate. At first sight, this seems to be a contradiction, because we know that generalized eigenvalues lead to growth of order h^{−1/2}. Explain this paradox.

12.6. PARABOLIC EQUATIONS
We consider parabolic systems

u_t = Au_xx + Bu_x + Cu + F =: Pu + F,    0 ≤ x < ∞,    t ≥ 0,
u(x, 0) = f(x),    (12.6.1a)

in the quarter space x ≥ 0, t ≥ 0. Here u, F, and f are vector functions with m components, and A, B, and C are constant matrices. We also assume that A = A* is Hermitian and positive definite. At x = 0 and t ≥ 0, we describe m linearly independent boundary conditions

R₁₁u_x(0, t) + R₁₀u(0, t) = g⁽¹⁾(t),
R₀₀u(0, t) = g⁽⁰⁾(t).    (12.6.1b)

Here R₁₁ and R₁₀ are r × m matrices with rank(R₁₁) = r. Correspondingly, R₀₀ is an (m − r) × m matrix with rank(R₀₀) = m − r. We approximate Eq. (12.6.1) by a difference approximation of type (12.1.10). The only difference is that, instead of Eq. (12.1.11),
Q = Σ from ν = −r to p of A_ν E^ν    (12.6.2)

is of order h⁻². We modify the stability Definition 12.4.1 by replacing the estimate (12.4.1a) by
Equation (12.4.1b) is modified correspondingly. In analogy with the continuous case, we note that the concept "strongly stable in the generalized sense" does not play the same role as for hyperbolic equations. Only if all boundary conditions are derivative conditions, that is, if rank(R₁₁) = m, can we find approximations that are strongly stable in the generalized sense. As we know from the continuous case, well-posedness is independent of the lower order terms; that is, we can assume that B ≡ C ≡ 0 and R₁₀ ≡ 0. Correspondingly, as in the hyperbolic case, we can assume that the coefficients of the difference operator Q and the boundary conditions do not depend on h. By assumption, A = A*, and, therefore, we can make a change of variables such that A becomes diagonal. Therefore, we need only discuss scalar differential equations. The different components of the solution can be coupled through the boundary conditions. However, to present the basic technique in a simple way, we only discuss the uncoupled case. No new essential difficulties occur with coupling through the boundary conditions. We consider, therefore, only the model problem

u_t = u_xx + F,    0 ≤ x < ∞,    t ≥ 0,
u(x, 0) = f(x),    (12.6.3a)
with boundary conditions

u_x(0, t) = g⁽¹⁾(t),    (12.6.3b)

or

u(0, t) = g⁽⁰⁾(t),    (12.6.3c)

and approximate it by the fourth-order approximation
dvⱼ/dt = D₊D₋(I − (h²/12)D₊D₋)vⱼ + Fⱼ =: Qvⱼ + Fⱼ,    j = 1, 2, . . . ,    (12.6.4a)

vⱼ(0) = fⱼ,    j = 1, 2, . . . .    (12.6.4b)
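With the operator in the explicit form written above (a reconstruction from the surrounding derivation), its fourth-order accuracy can be confirmed numerically. The sketch below is an added illustration: it applies Q to sin x on interior points and checks the observed convergence rate against the exact second derivative.

```python
import numpy as np

def apply_Q(v, h):
    # Q v = D+D- (I - (h^2/12) D+D-) v on interior points (two layers used)
    d2 = (v[2:] - 2 * v[1:-1] + v[:-2]) / h**2       # D+D- v
    d4 = (d2[2:] - 2 * d2[1:-1] + d2[:-2]) / h**2    # (D+D-)^2 v
    return d2[1:-1] - (h**2 / 12.0) * d4

errs = []
for n in [20, 40]:
    h = 2 * np.pi / n
    x = h * np.arange(n + 5)
    err = np.max(np.abs(apply_Q(np.sin(x), h) - (-np.sin(x[2:-2]))))
    errs.append(err)

rate = np.log2(errs[0] / errs[1])
assert 3.7 < rate < 4.3   # fourth-order accuracy of Q
```

The leading truncation error works out to −(h⁴/90)u⁽⁶⁾, so halving h reduces the error by roughly a factor of 16.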
In this case, we need two boundary conditions. The first boundary condition formally converges to the boundary condition for the continuous problem. In the case of the Neumann boundary condition (12.6.3b), it is of the form

Σⱼ aⱼ D₊vⱼ = ḡ⁽¹⁾.    (12.6.4c)

For Dirichlet conditions (12.6.3c), we have, instead,

Σⱼ aⱼ vⱼ = ḡ⁽⁰⁾.    (12.6.4d)

The conditions on the aⱼ are imposed by consistency. The second boundary condition can be expressed in terms of second- or higher order differences; that is,

Σ from j = −1 to p−1 of bⱼ D₊²vⱼ = ḡ⁽²⁾.    (12.6.4e)
Typically, the extra boundary condition is either an extrapolation condition of the form

D₊^q v₋₁ = 0,    q ≥ 2,

or is obtained by differentiating the boundary conditions (12.6.3b) or (12.6.3c) with respect to t and replacing the t derivatives by space derivatives using the differential equation. This second procedure was demonstrated in Section 11.4. For example, Eq. (12.6.3c) implies

u_xx(0, t) = dg⁽⁰⁾/dt − F(0, t),

and we use

D₊D₋v₀ = dg⁽⁰⁾/dt − F₀
as an extra boundary condition. In every case, we assume that we can solve the boundary conditions for v₀ and v₋₁ and write them in the form

v₀ − Σ from j = 1 to p of l₀ⱼvⱼ =: L₀v = g₀,
v₋₁ − Σ from j = 1 to p of l₁ⱼvⱼ =: L₁v = g₋₁.    (12.6.4f)

We assume that the coefficients lᵢⱼ are constants and do not depend on h. Here we are only interested in stability in the generalized sense; we must estimate the solutions of the resolvent equation
svⱼ = Qvⱼ + Fⱼ,    j = 1, 2, . . . ,    Re s > 0,
Lᵢv = 0,    i = 0, 1,
‖v(·, s)‖_h < ∞.    (12.6.5)
We now estimate the solution vⱼ. In the parabolic case it is natural to define s̃ = sh², and divide the analysis into three parts corresponding to three parts of the complex s̃ plane:

I. |s̃| ≥ c₀, Re s̃ ≥ 0, where the constant c₀ is large.
II. 0 < c₁ ≤ |s̃| ≤ c₀, Re s̃ ≥ 0, where c₁ is a small constant.
III. |s̃| ≤ c₁, Re s̃ ≥ 0.

I. |s̃| ≥ c₀, Re s̃ ≥ 0. In this case, the estimates are obtained easily. We have
Thus, for constant/(|s|²h⁴) ≤ 1/2, we obtain
(12.6.6a)

Also,

(12.6.6b)

Therefore, the desired estimate holds. For cases II and III we need two lemmas.

Lemma 12.6.1. For Re s̃ > 0, the characteristic equation

s̃ = (κ − 1)²/κ − (1/12)((κ − 1)²/κ)²

has no roots with |κ| = 1, and it has exactly two roots κ₁ and κ₂, counted according to their multiplicity, with |κⱼ| < 1. For every c₁ > 0, there is a τ > 0 such that

|κⱼ| ≤ 1 − τ,    j = 1, 2,
|κⱼ| ≥ 1 + τ,    j = 3, 4,    for |s̃| ≥ c₁, Re s̃ ≥ 0.    (12.6.7)

In a neighborhood of s̃ = 0, we have

κ₁ = 1 − s̃^{1/2} + O(|s̃|),
κ₂ = κ₂(0) + O(|s̃|),    κ₂(0) = 7 − √48 ≈ 1/14,
κ₃ = 1 + s̃^{1/2} + O(|s̃|),
κ₄ = κ₄(0) + O(|s̃|),    κ₄(0) = 7 + √48 ≈ 14.    (12.6.8)

κ₁ = κ₂ is a double root if κ₁ = 4 − √15 ≈ 0.13, s̃ ≈ 3.00.
Proof. The proof follows the same lines as for hyperbolic equations. Assume that there is a root κ = e^{iϕ}, ϕ real. Then (κ − 1)²/κ = −4 sin²(ϕ/2), which implies

s̃ = −4 sin²(ϕ/2) − (4/3) sin⁴(ϕ/2),    (12.6.9)

that is, Re s̃ ≤ 0. Thus, there are no roots κ with |κ| = 1 for Re s̃ > 0. Furthermore, the solutions κ with |κ| < 1 satisfy

lim as s̃ → ∞ of κ = 0.
Thus, to first approximation, the characteristic equation reduces to

12κ² = −1/s̃;

that is, there are exactly two roots with |κ| < 1. Equation (12.6.9) shows that the combination |κ| = 1, Re s̃ = 0 is possible only if κ = 1, s̃ = 0. Thus, Eq. (12.6.7) follows. For s̃ = 0, we have

κ₁(0) = κ₃(0) = 1,    κ₂(0) = 7 − √48,    κ₄(0) = 7 + √48,

and Eq. (12.6.8) follows by perturbing s̃. If κ = κ₁ = κ₂ is a double root, then

(κ − 1)(12κ² − 6κ(κ − 1) − 2κ(κ − 1)² + (κ − 1)³) = −(κ − 1)(κ + 1)(κ² − 8κ + 1) = 0.

The roots κ₁ = κ₂ = ±1 are ruled out by Eqs. (12.6.7) and (12.6.8). This proves the lemma.
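Clearing denominators, the characteristic equation becomes the quartic κ⁴ − 16κ³ + (30 + 12s̃)κ² − 16κ + 1 = 0. This explicit form is a reconstruction, but it reproduces all the root values quoted in Lemma 12.6.1, which the added sketch below verifies numerically:

```python
import numpy as np

def char_roots(s):
    # kappa^4 - 16 kappa^3 + (30 + 12 s) kappa^2 - 16 kappa + 1 = 0
    return np.roots([1.0, -16.0, 30.0 + 12.0 * s, -16.0, 1.0])

# s = 0: double root 1 and the pair 7 -/+ sqrt(48)
r = np.sort(np.abs(char_roots(0.0)))
assert abs(r[0] - (7 - np.sqrt(48))) < 1e-6
assert abs(r[1] - 1) < 1e-6 and abs(r[2] - 1) < 1e-6
assert abs(r[3] - (7 + np.sqrt(48))) < 1e-6

# Re s > 0: exactly two roots inside the unit circle, none on it
for s in [1.0, 0.5 + 2j, 3.0 - 1j, 10.0]:
    m = np.abs(char_roots(s))
    assert np.sum(m < 1) == 2 and np.all(np.abs(m - 1) > 1e-8)

# s = 3: kappa1 = kappa2 = 4 - sqrt(15) is a double root
m = np.sort(np.abs(char_roots(3.0)))
assert abs(m[0] - (4 - np.sqrt(15))) < 1e-5
assert abs(m[1] - (4 - np.sqrt(15))) < 1e-5
```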
The corresponding eigenvalue problem reads

sφⱼ = Qφⱼ,    j = 1, 2, . . . ,    (12.6.10a)
Lᵢφ = 0,    i = 0, 1,    (12.6.10b)
‖φ‖_h < ∞,    (12.6.10c)

and we have the next lemma.
Lemma 12.6.2. The quarter-space problem is not stable in the generalized sense if Eq. (12.6.5) has an eigenvalue with Re s > 0 for some h = h₀ > 0.
Proof. The general solution of the difference equation is

φⱼ = σ₁κ₁ʲ + σ₂κ₂ʲ,

and s is an eigenvalue if
L₀φ = L₁φ = 0.

This condition is a linear system for the coefficients σ₁ and σ₂, and its determinant is a function of s̃ = sh². Therefore, any eigenvalue s₀ with Re s₀ > 0 for some h = h₀ generates eigenvalues s = s₀h₀²/h², whose real part becomes arbitrarily large as h → 0. This proves the lemma.

From now on we assume that there is no eigenvalue s with Re s > 0. Then the system (12.6.5) has a unique solution. To estimate it we first consider the second case.
II. c₁ ≤ |s̃| ≤ c₀, Re s̃ ≥ 0.
We can write the difference equation (12.6.5) in the form

vⱼ₊₂ = α₁vⱼ₊₁ + α₀vⱼ + α₋₁vⱼ₋₁ + α₋₂vⱼ₋₂ + h²F̃ⱼ,

where the coefficients αⱼ are polynomials in s̃. As before, we introduce new variables by

yⱼ = (vⱼ₊₁, vⱼ, vⱼ₋₁, vⱼ₋₂)ᵀ,

and write Eq. (12.6.5) in the form

yⱼ₊₁ = Myⱼ + h²F̃ⱼ,    j = 1, 2, . . . ,    (12.6.11a)

where

M = [α₁ α₀ α₋₁ α₋₂; 1 0 0 0; 0 1 0 0; 0 0 1 0].
Using the difference equation, we can eliminate vⱼ, j ≥ 3, from the boundary conditions. These now contain terms of order |h²Fⱼ| ≤ h^{3/2}‖F‖_h, and we write them in the form

H(s̃)y₁ = g̃,    (12.6.11b)

where H(s̃) is a 2 × 4 matrix whose coefficients are polynomials in s̃. The eigenvalues of M are the solutions of the characteristic equation. Thus,
by Lemma 12.6.1, for |s̃| ≥ c₁, there are two eigenvalues κⱼ, j = 1, 2, with |κⱼ| ≤ 1 − τ, and two eigenvalues κⱼ, j = 3, 4, with |κⱼ| ≥ 1 + τ, τ > 0. Because the two sets of eigenvalues are well separated, there is a smooth transformation S = S(s̃) such that

S⁻¹MS = [M^I 0; 0 M^II],

where the eigenvalues of M^I are κ₁, κ₂ and those of M^II are κ₃, κ₄. Now substitute into Eq. (12.6.11) new variables w = Sy, F̃ = SF. Then, we obtain

w^Iⱼ₊₁ = M^I w^Iⱼ + h²F^Iⱼ,
w^IIⱼ₊₁ = M^II w^IIⱼ + h²F^IIⱼ,    j = 1, 2, . . . ,    (12.6.12a)

with boundary conditions

H^I w^I₁ + H^II w^II₁ = g̃.    (12.6.12b)
The system (12.6.12) is of the same form as Eq. (12.4.14). The only difference is that hFⱼ is replaced by h²Fⱼ. Therefore, we obtain the corresponding estimates from Lemma 12.4.1 and Eq. (12.6.11b):

‖w^II‖_h ≤ constant h² ‖F^II‖_h,
|w^II₁| ≤ constant h^{3/2} ‖F^II‖_h,
‖w^I‖_h ≤ constant (h²‖F^I‖_h + h^{1/2}|w^I₁|)    (12.6.13)
        ≤ constant (h²‖F^I‖_h + |(H^I)⁻¹| h² ‖F‖_h).

We have assumed that the eigenvalue problem (12.6.10) has no eigenvalue for Re s̃ > 0. Thus, H^I is nonsingular for Re s̃ > 0. There are two possibilities.

1. H^I is nonsingular for c₁ ≤ |s̃| ≤ c₀, Re s̃ ≥ 0. Then |(H^I)⁻¹| is uniformly bounded, and, for the solution v of the original problem, the estimate (12.6.13) becomes
‖v‖_h ≤ constant h² ‖F‖_h ≤ (constant/|s|) ‖F‖_h.    (12.6.14a)

Therefore,

‖D₊v‖_h ≤ constant h ‖F‖_h ≤ (constant/|s|^{1/2}) ‖F‖_h.    (12.6.14b)

Here we have used the fact that h² ≤ c₀/|s|.

2. The eigenvalue problem has an eigenvalue s₀ = iξ₀, ξ₀ real. Then (H^I)⁻¹ has a pole at s̃ = s₀, and

|(H^I)⁻¹| ≥ constant/|s̃ − s₀|,    constant > 0.

One can show that the estimate is sharp (Exercise 12.6.1). We have, therefore, the following lemma.
Lemma 12.6.3. The estimates (12.6.14) hold for c₁ ≤ |s̃| ≤ c₀, Re s̃ ≥ 0 if, and only if, the eigenvalue problem (12.6.10) has no eigenvalue for Re s̃ ≥ 0, c₁ ≤ |s̃| ≤ c₀.
REMARK. If there is an eigenvalue on the imaginary axis, only the estimate of II D+vll h breaks down, but we still have an estimate for II vli h . However, the introduction of lower order terms requires that there is an estimate of II D+ v ll h for the original problem. Otherwise the perturbed problem will not be stable in general. Now we consider the last case. III.
III. |s̃| ≤ c₁ ≪ 1.
Corresponding to the continuous problem, we write the resolvent equation (12.6.5) in the form

zⱼ₊₁ = β₀zⱼ + β₋₁zⱼ₋₁ + β₋₂zⱼ₋₂ + s̃^{1/2}β₁uⱼ + 12h²s̃^{−1/2}Fⱼ.    (12.6.15)

We introduce new variables, and then Eq. (12.6.15) becomes

yⱼ₊₁ = (M₀ + s̃^{1/2}M₁)yⱼ + h²s̃^{−1/2}F̃ⱼ,    j = 1, 2, . . . ,    (12.6.16a)
for certain matrices M₀ and M₁ that are independent of s̃ and h. It is important to write the boundary conditions in terms of u and z. The extra boundary condition (12.6.4e) can always be written as a relation for z alone, whereas, for the other boundary conditions, this is only true for Neumann conditions (12.6.4c). (Recall that, at the moment, we are only considering homogeneous boundary conditions.) Using the same procedure as we did when deriving Eq. (12.6.11b), we can now write the boundary condition in the form (12.6.16b), where H(s̃^{1/2}) is an analytic function of s̃^{1/2}. The eigenvalues of M₀ + s̃^{1/2}M₁ are given in Lemma 12.6.1 (see Exercise 12.6.2). Also, M₀ has the form (12.6.17). Therefore, there is a transformation
S = S(s̃^{1/2}), analytic in s̃^{1/2}, such that
where the κⱼ are defined in Lemma 12.6.1. By the same lemma, it follows that
Now we can proceed as before. We substitute new variables w = Sy, F̃ = SF into Eq. (12.6.16) and obtain Eq. (12.6.12) with F̃ replaced by s̃^{−1/2}F̃. We can again use Lemma 12.4.1 to estimate w, and we obtain

‖w^II‖_h ≤ constant h²|s̃|^{−1/2} · |s̃|^{−1/2} ‖F̃^II‖_h = constant (h²/|s̃|) ‖F̃^II‖_h,
|w^II₁| ≤ constant (h^{3/2}/|s̃|^{3/4}) ‖F̃^II‖_h,
‖w^I‖_h ≤ constant ((h²/|s̃|) ‖F̃^I‖_h + h^{1/2}|s̃|^{−1/4}|w^I₁|)
        ≤ constant (1 + |(H^I)⁻¹|)(h²/|s̃|) ‖F̃‖_h.    (12.6.18)
We obtain, therefore, the next lemma.
Lemma 12.6.4. If H^I is nonsingular at s̃ = 0, then for |s̃| ≤ c₁, c₁ sufficiently small, the solutions of the resolvent equation (12.6.5) satisfy the estimates

‖v‖_h ≤ (constant/|s|) ‖F‖_h,
‖D₊v‖_h = |s|^{1/2} ‖z‖_h ≤ (constant/|s|^{1/2}) ‖F‖_h.    (12.6.19)
Again, the estimates in Eq. (12.6.18) are sharp (Exercise 12.6.3) and, therefore, there is no acceptable estimate if H^I(0) is singular. The estimates (12.6.6), (12.6.14), and (12.6.19) give us the following theorem.

Theorem 12.6.1. The problem is stable in the generalized sense if, and only if, H^I is nonsingular for Re s̃ ≥ 0.
We can also express this result as an eigenvalue condition. Consider the eigenvalue problem ( 1 2.6. 1 0), and write it in the form
( 12.6.20)
Change the boundary conditions to boundary conditions for ψ when possible. Then we obtain the following lemma.

Lemma 12.6.5. H^I is nonsingular for s̃ = 0 if, and only if, s̃ = 0 is not an eigenvalue or generalized eigenvalue of (12.6.20).
We summarize our results in the next theorem.
Theorem 12.6.2. The approximation (12.6.4) is stable in the generalized sense if, and only if, the eigenvalue problem (12.6.20), with boundary conditions written as conditions for ψ whenever possible, has no eigenvalue or generalized eigenvalue with Re s̃ ≥ 0.
REMARK. For s̃ ≠ 0, Re s̃ ≥ 0, there are no generalized eigenvalues because, for the solutions κⱼ of the characteristic equation, we have |κⱼ| ≠ 1. In this case, the eigenvalues of Eq. (12.6.20) and Eq. (12.6.10) are identical. For s̃ = 0, we have κ₁ = 1. Therefore, for Neumann boundary conditions, s̃ = 0 is always a generalized eigenvalue of Eq. (12.6.10) (vⱼ = constant is a solution independent of time) but not of Eq. (12.6.20) if H^I is nonsingular. This is the reason why we have written the eigenvalue problem in the form of Eq. (12.6.20). If s̃ = 0 is not an eigenvalue or generalized eigenvalue of Eq. (12.6.10), then it is also not an eigenvalue or generalized eigenvalue of Eq. (12.6.20). Therefore, we need only switch to Eq. (12.6.20) if Eq. (12.6.10) has an eigenvalue or generalized eigenvalue at s̃ = 0.
Now we consider two examples.
EXAMPLE 1. Consider the boundary condition u(0, t) = 0. This example was analyzed by the energy method in Section 11.2. The differential equation gives us u_t(0, t) = u_xx(0, t) = 0. Therefore, we use

v₀ = 0,    D₊D₋v₀ = 0,

as numerical boundary conditions. The truncation error for the second condition consists of only even-order space derivatives. Because all these are zero at x = 0, the condition has arbitrarily high-order accuracy. For s̃ ≠ 0, we need not rewrite the eigenvalue problem. If κ₁ ≠ κ₂, then the general solution of Eq. (12.6.10a) is

φⱼ = σ₁κ₁ʲ + σ₂κ₂ʲ.    (12.6.21)

Introducing this expression into the boundary conditions gives us
σ₁ + σ₂ = 0,
((κ₁ − 1)²/κ₁)σ₁ + ((κ₂ − 1)²/κ₂)σ₂ = 0.

This system of linear equations has a nontrivial solution if

(κ₁ − 1)²/κ₁ = (κ₂ − 1)²/κ₂,

that is,

κ₁ + 1/κ₁ = κ₂ + 1/κ₂,

or

κ₁κ₂ = 1.
We know that, for all s̃ with Re s̃ ≥ 0, the product |κ₁κ₂| < 1. Therefore, there is no eigenvalue or generalized eigenvalue if κ₁ ≠ κ₂. If κ₁ = κ₂, then the general solution has the form

φⱼ = (σ₁ + σ₂ j)κ₁ʲ.    (12.6.22)

In this case, we obtain from the boundary conditions,

σ₁ = 0,    σ₂(κ₁ − 1/κ₁) = 0.
Thus, the system has a nontrivial solution if κ₁ = ±1. If κ₁ = 1, then s̃ = 0 and, by Lemma 12.6.1, κ₂ ≠ κ₁, which is a contradiction. The other possible solution κ₁ = −1 implies Re s̃ < 0, which is of no interest. Thus, by Theorem 12.6.2 and the remark following it, the approximation is stable in the generalized sense.
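For Example 1, the key fact |κ₁κ₂| < 1 can also be checked numerically with the quartic characteristic equation used earlier (a reconstruction): the product of the two inside roots stays strictly inside the unit circle, so the determinant condition κ₁κ₂ = 1 can never be met. This added sketch samples a few points with Re s̃ ≥ 0:

```python
import numpy as np

def inside_product(s):
    # product of the two roots with smallest modulus of the reconstructed
    # characteristic quartic kappa^4 - 16k^3 + (30+12s)k^2 - 16k + 1 = 0
    r = sorted(np.roots([1.0, -16.0, 30.0 + 12.0 * s, -16.0, 1.0]), key=abs)
    return r[0] * r[1]

for s in [0.5, 2.0 + 1j, 0.7j + 1e-8, 5j + 1e-8]:
    assert abs(inside_product(s)) < 1.0   # kappa1*kappa2 = 1 is impossible
```

Since the product of all four roots equals 1 and the two outer roots lie outside the unit circle, the inside product is automatically of modulus less than one.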
EXAMPLE 2. Consider the boundary condition u_x(0, t) = 0. The differential equation gives us u_xt(0, t) = u_xxx(0, t) = 0. Therefore, we use

D₀v₀ = 0,    D₀D₊D₋v₀ = 0.    (12.6.23)

For κ₁ ≠ κ₂, we now obtain
σ₁(κ₁ − 1/κ₁) + σ₂(κ₂ − 1/κ₂) = 0,
σ₁(κ₁ − 1)³(κ₁ + 1)/κ₁² + σ₂(κ₂ − 1)³(κ₂ + 1)/κ₂² = 0.

This system has a nontrivial solution if, and only if,

(κ₁ − 1/κ₁)(κ₂ − 1)³(κ₂ + 1)/κ₂² = (κ₂ − 1/κ₂)(κ₁ − 1)³(κ₁ + 1)/κ₁²,

that is, for κᵢ ≠ ±1,

(κ₁ − 1)²/κ₁ = (κ₂ − 1)²/κ₂,

or κ₁ = κ₂ or κ₁κ₂ = 1. Since |κ₁κ₂| < 1 for Re s̃ ≥ 0, the only possibility of a nontrivial solution is κ₁ = κ₂. If κ₁ = κ₂, then φ is of the form of Eq. (12.6.22), and the boundary conditions again give a linear system for σ₁ and σ₂. Thus, there is a nontrivial solution if, and only if, its determinant vanishes. By Lemma 12.6.1, the only possible κ₁ is κ₁ = 4 − √15 ≈ 0.13. On inspection,
it follows that the above relation cannot hold. Thus, there is no eigenvalue for Re s̃ ≥ 0, s̃ ≠ 0. For the eigenvalue problem in its original form, s̃ = 0 is a generalized eigenvalue corresponding to κ₁ = 1. Therefore, we have to modify it to the form shown in Eq. (12.6.20). The boundary conditions become

ψ₀ + ψ₋₁ = 0,    D₊²ψ₋₁ = 0.    (12.6.24)
Both φⱼ and ψⱼ are linear combinations of κ₁ʲ and κ₂ʲ if κ₁ ≠ κ₂. Thus, in the neighborhood of s̃ = 0, the general solution of Eq. (12.6.20) can be written as

φⱼ = σ₁(1 − s̃^{1/2})ʲ + σ₂ (s̃^{1/2}/(κ₂(0) − 1)) κ₂ʲ(0).

Therefore, ψⱼ can be computed explicitly. (The choice of coefficients in the representation of φⱼ was made to get a simple normalized form for ψⱼ.) Substituting this expression into the boundary conditions (12.6.24) shows that σ₁ = σ₂ = 0. Thus, s̃ = 0 is not a generalized eigenvalue and the approximation is stable in the generalized sense. The above results can be extended to very general parabolic systems. We can neglect all lower order terms and we can prove the following theorem.
Theorem 12.6.3. The approximation for general parabolic systems is stable in the generalized sense if it is dissipative for the Cauchy problem and if the modified eigenvalue problem has no eigenvalues or generalized eigenvalues for Re s̃ ≥ 0.
EXERCISES

12.6.1. Prove that the estimates (12.6.14) are sharp.

12.6.2. Consider the matrix M₀ + s̃^{1/2}M₁ in Eq. (12.6.16a). Prove that its eigenvalues are the roots κ given by the characteristic equation in Lemma 12.6.1.
12.6.3. Prove that the estimates (12.6.18) are sharp.

12.6.4. Introduce nonzero data in the boundary conditions of Example 2 above and prove that the approximation is strongly stable in the generalized sense.
12.7. THE CONVERGENCE RATE
In Section 11.3, the basic principles for obtaining error estimates were discussed in connection with the energy method for stability analysis. In this section, we present a more detailed discussion, which also includes the more general stability analysis based on the Laplace transform. The stability estimate obtained for a certain approximation is the key to the proper error estimates. This was demonstrated in Part I for the pure initial value problem, and the same principles also hold when boundary conditions are involved. However, to obtain optimal estimates, one has to be a little more careful. The basic idea is to insert the true solution u(x, t) into the approximation. This generally introduces truncation errors as inhomogeneous terms in the difference approximation, the initial conditions, and the boundary conditions. The stability estimate then gives the desired error estimate, because small data can only give small solutions. The error eⱼ(t) = vⱼ(t) − u(xⱼ, t) satisfies the equations
deⱼ/dt = Qeⱼ + h^{q₁}Fⱼ,    j = 1, 2, . . . ,    (12.7.1a)
eⱼ(0) = h^{q₂}fⱼ,    (12.7.1b)
L₀e₀ = h^{q₃}g,    (12.7.1c)
where F, f, and g are smooth functions. (For convenience, we use the same notation for these functions as in the original approximation for v(t). Furthermore, we consider only the quarter-space problem even when the energy method is used.) The integers q₁, q₂, and q₃ are not necessarily equal. Equations (12.7.1) are based on the fact that there is a smooth solution u(x, t) to the continuous problem, and we recall that this requires certain compatibility conditions on the initial and boundary data and on the forcing function if there is one. Let us first consider the case that Q is a semibounded operator, so that the energy method can be applied. This requires homogeneous boundary conditions, and our recipe above was to subtract a certain smooth function φⱼ(t) that satisfies the boundary condition. Assuming that this is possible and that φⱼ(t) = h^{q₃}ψⱼ(t), where ψⱼ(t) is smooth, we have
dφⱼ/dt = Qφⱼ + O(h^{q₃}),    (12.7.2a)
φⱼ(0) = O(h^{q₃}),    (12.7.2b)
which yields, for ẽⱼ(t) = eⱼ(t) − φⱼ(t),

dẽⱼ/dt = Qẽⱼ + (h^{q₁} + h^{q₃})Fⱼ,    j = 1, 2, . . . ,
ẽⱼ(0) = (h^{q₂} + h^{q₃})fⱼ,    (12.7.3)
L₀ẽ₀ = 0.
The energy estimate yields a bound on any finite time interval [0, T],
and by construction

‖e(t)‖_h ≤ ‖ẽ(t)‖_h + ‖φ(t)‖_h = O(h^{q₁} + h^{q₂} + h^{q₃}).    (12.7.4)

[This estimate corresponds to Theorem 11.3.1.] We often have q₂ = ∞, and q₁ is given by the order of the approximation at inner points. It is, therefore, natural to choose boundary conditions such that q₃ = q₁. The crucial issue is the construction of φⱼ(t). Let us first assume that all the boundary conditions in Eq. (12.7.1c) are approximations of the boundary conditions for the differential equation. For example, for a scalar parabolic equation we could approximate au_x(0, t) + bu(0, t) = 0 by
a(v₁ − v₀)/h + b(v₁ + v₀)/2 = 0.    (12.7.5)
If the grid is located such that x₀ = −h/2, then we have q₃ = 2 in Eq. (12.7.1c) with g being a combination of u derivatives. Hence, φ can be constructed such that it satisfies Eq. (12.7.2). For the equation u_t + u_x = 0, u(0, t) = 0, the same conclusion holds with a = 0 and b = 1 in Eq. (12.7.5). This is a general principle: As long as Eq. (12.7.1c) does not contain any extra boundary conditions, the error estimate follows immediately when there is an energy estimate. (One can also show that q₃ ≥ q₁ is a necessary condition for an h^{q₁} estimate for this type of boundary conditions.) Let us next consider the case that there are extra boundary conditions. We use the familiar example u_t = u_x with extrapolation at the boundary
The error satisfies the corresponding equations with an O(h²) inhomogeneous term in the extrapolation condition, and we need a function φⱼ(t) = h²ψⱼ(t), where

ψ₀(t) − 2ψ₁(t) + ψ₂(t) = g(t).    (12.7.6)
But if ψ is smooth, we have

ψ₀ − 2ψ₁ + ψ₂ = O(h²),

and since g is, in general, not small, Eq. (12.7.6) is impossible. [If the solution happens to satisfy u_xx(h, t) = 0, the construction would be possible.] Now consider the usual centered second-order approximation at inner points. As noted earlier, the linear extrapolation condition at j = 0 is equivalent to

dv₀/dt = D₊v₀,    (12.7.7)

where the grid function index has been shifted one step, j → j − 1. By defining
Q̃vⱼ = D₀vⱼ for j = 1, 2, . . . , and Q̃v₀ = D₊v₀ for j = 0, we can write the approximation as

dvⱼ/dt = Q̃vⱼ,    j = 0, 1, . . . ,    (12.7.8)

without any boundary condition. Because Q̃ is only first-order accurate at x = 0, the error equation is

deⱼ/dt = Q̃eⱼ + Fⱼ,    j = 0, 1, . . . ,    (12.7.9)

where
Fⱼ = O(h²),    j = 1, 2, . . . ,    F₀ = O(h).
li e(t)116
.
�
:5 constant 11F(t) 1 i6
.
�
= &'(h3 ),
o :5 t :5 T.
However, this estimate is not optimal. Since the approximation satisfies the Kreiss condition, we can do better. Returning to the original formulation with an explicit boundary condition, the error equation is

deⱼ/dt = D₀eⱼ + h²Fⱼ,    j = 1, 2, . . . ,
eⱼ(0) = h²fⱼ,
e₀ − 2e₁ + e₂ = h²g.    (12.7.10)
We split the error into two parts e = e⁽¹⁾ + e⁽²⁾, where

de⁽¹⁾ⱼ/dt = D₀e⁽¹⁾ⱼ + h²Fⱼ,    j = 1, 2, . . . ,    e⁽¹⁾ⱼ(0) = h²fⱼ,    e⁽¹⁾₀ − 2e⁽¹⁾₁ + e⁽¹⁾₂ = 0,    (12.7.11)

de⁽²⁾ⱼ/dt = D₀e⁽²⁾ⱼ,    j = 1, 2, . . . ,    e⁽²⁾ⱼ(0) = 0,    e⁽²⁾₀ − 2e⁽²⁾₁ + e⁽²⁾₂ = h²g.    (12.7.12)
This approximation was shown to be stable in Section 1 1 . 1 , and we immediately get
where the norm is based on the scalar product
(u, w)_h for real grid functions u and w. To estimate e⁽²⁾, we Laplace transform Eq. (12.7.12), use the fact that the Kreiss condition is satisfied, and transform back again. Because D₀ is semibounded for the Cauchy problem, we use the same procedure as in Section 12.2 to obtain the estimate
(12.7. 1 3) (see Theorem 1 2.2.2). This implies the final estimate
o
:::;;
t
:::;;
T.
( 1 2.7. 1 4)
The technique we have used for deriving the optimal estimate ( 1 2.7.1 4) is similar to the one used to derive strong stability in Section 1 2.2. In fact, recall ing that the approximation in our example is strongly stable, the result follows immediately from the error equation (12.7. 10). Considering the method in the form ( 12.7.8), the approximation of a/ax is only first order accurate at x = o. Still we have shown that there is an overall h2 accuracy. This is possible because the lower order approximation is applied only at one point. This is the background for the expression "one order less accuracy at the boundary is allowed." This statement is only valid if it refers to "extra" boundary conditions. The "physical" boundary conditions must always be approximated to the same order as the differential operator at inner points. Let us next consider the problem
o u(x, O)
=
u II (O, t)
:::;;
x
<
t � 0,
00,
I(x), =
( 1 2.7. 1 5a) ( l 2.7. 1 5b)
R 1u I (0, t) + get),
( 12.7 . 15c)
and the fourth-order approximation dUj - = dt Uj (O) =
A
(-
/j,
)
1 4 Do(h) - - Do (2h) UJ· 3 3
j
1 , 2, . . . ,
(1 2.7. 1 6a) (12.7. l 6b)
572
THE LAPLACE TRANSFORM METHOD FOR DIFFERENCE APPROXIMATIONS
where
This is a generalization of the example in Section 12.3. By differentiating the boundary conditions ( 12.7 . 1 5c) twice with respect to t and using the differential equation ( 12.7 . 15a) we get
where (12.7. 17)
As boundary conditions for the approximation, we use VJ/(t)
=
R l vJ(t)
( 12.7. 1 6c)
=
( l 2.7 . 1 6d)
D+D_vJ/(t) D!vJ(t) D!V� l (t)
=
+ get), I S D+D-vJ(t) + get) ,
( 12.7. 1 6e)
0,
o.
=
( 12.7. 1 6f)
Let u(x, t) be a smooth solution of Eq. (1 2.7. 1 5), and consider the truncation error for the extra boundary conditions ( 12.7. 1 6d,e,f). We have for e = u v -
D+D_eJI
=
S l D+D-eJ + (9' (h2),
= (9' ( 1 ), D!e� l = (9'(1 ) . D!eJ
However, the normalized form corresponding to ( 1 2. 1 . 1 2) is h2D+D_eJ/(t) = S l h2D+D_ eJ(t) + (9'(h4 ),
= (9' (h4), (hD+)4 e� 1 (t) = (9'(h4),
(hD+)4eJ(t)
(12.7 . 1 8)
THE CONVERGENCE RATE
573
which yields the right order of truncation error. The other error equations are
dej (t) -----;Jt =
A
(
4
'3
Do(h)
ej(O) = 0, eJ I(t) R 1 eJ(t).
-
1
'3
)
Do(2h) ej (t)
+
4
& (h ) ,
j
1 , 2, . . . , ( 12.7. 1 9)
=
The approximations (12.7 . 1 8) and ( 1 2.7. 1 9) were analyzed in Section 1 2.3 and shown to be strongly stable. Thus, we get the error estimate ( 1 2.7.20) Note that Eq. ( 1 2.7. 1 6d) is only a second-order approximation of Eq. ( 1 2.7. 17), and we still have an h4 error in the solution. This is possible, since it is an extra numerical boundary condition. The condition ( 1 2.7. 17) is also "extra" in the sense that is is not required for defining a unique solution of the differential equation. In summary, when the extra boundary conditions are written in the normal ized form as in Eq. ( 1 2. 1 . 1 2), the error term must be of the same order as the truncation error at inner points. Now consider the case where the approximation is strongly stable in the generalized sense. If there is no error in the initial data, then the error estimate follows immediately from Eq. ( 1 2.4. 1 b). For Eq. ( 1 2.7. 1 ), we get
( 1 2.7.2 1 ) Assume that there i s an initial error
where fj is smooth; that is,
II = 0, 1 , . . . .
574 THE LAPLACE TRANSFORM METHOD FOR DIFFERENCE APPROXIMATIONS
j
=
1 , 2, . . . .
( 12.7.22)
We obtain j
1 , 2, . . . , (1 2.7.23)
which yields the estimate
( 1 2.7.24) Note that we don't have the same difficulty when making the initial condition homogeneous as we have in some cases when making the boundary conditions homogeneous. The only requirement is that fj be smooth. Next assume that the approximation is stable and that the Kreiss condition is not satisfied. Then the error estimate does not follow directly from the sta bility estimate because it does not permit nonzero boundary data. The proce dure of splitting the error into e = e( 1 ) + e( 2), as demonstrated above, cannot be used either, because we need the Kreiss condition to estimate e( 2) . One alternative is to subtract a suitable function that satisfies the inhomogeneous boundary condition. Another alternative is to eliminate the boundary values V-r+ [ , V-r + 2, . . . , Va and modify the difference operator Q near the boundary. However, as we have already seen above, we may lose accuracy in this pro cess. Finally, we consider the case where the approximation is stable in the gener alized sense and the Kreiss condition is not satisfied. Now we must construct a function that satisfies both the inhomogeneous initial and boundary conditions. This may be tricky, because we cannot in general expect compatibility at the comer x = 0, t = 0, even if the solution u(x, t) is smooth. The reason for this is that the truncation error in the boundary conditions is different from the trunca tion error in the initial condition, the latter one typically being zero. And even if such a function exists, we may lose accuracy in the subtraction process, just as in the previous case. The most general theory based on the Laplace transform method for obtain ing optimal error estimates in this case is given in Gustafsson (198 1). The essen tial condition is that there be no eigenvalue or generalized eigenvalue at s = o. (The theory is given for the fully discrete case where s = 0 corresponds to Z = 1 .)
EXERCISES
575
EXERCISES 12.7.1. Prove that the error II vj (t) u(Xj , t)llh of the approximation ( 1 1 .4.3) and ( 1 1 .4.5) is &(h4). 12.7.2. Use the result of Exercise 12.6.4 to prove that the approximation ( 12.6.4) and (1 2.6.23) gives a fourth-order accurate solution. -
BIBLIOGRAPHIC NOTES
The first general stability theory for semidiscrete approximations based on the Laplace transform technique was given by Strikwerda ( 1980). The stability con cept there corresponds to strong stability in the generalized sense for hyperbolic problems. Strikwerda also proves stability for a fourth-order approximation of Ut = ± ux, but with different boundary conditions than ours. The method of lines has been used extensively in applications, for example in fluid dynamics. However, very little analysis has been done. In Gustafsson and Kreiss ( 1983) and Johansson ( 1 993), semidiscrete approximations of model problems corresponding to incompressible flow are analyzed. Gustafsson and Oliger (1982) derived stable boundary conditions for a num ber of implicit time discretizations of a centered second-order in space approx imation of the Euler equations. Regarding optimal error estimates for approximations that are stable in the generalized sense, see Notes on Chapter 13.
13 THE LAPLACE TRANSFORM METHOD FOR FULLY DISCRETE APPROXIMATIONS : Normal Mode Analysis
I n Chapter 1 1 , we discussed fully discrete approximations for initial-boundary value problems, and the energy method was used for stability analysis. For the examples treated there, we first discretized space and left time continuous. We derived stability estimates for the systems of ODEs obtained in this way. For some standard methods of time discretization, like the backward Euler and trapezoidal methods, we obtained fully discrete methods in a straightforward manner. Stability is more difficult to verify for more general time discretiza tions. In this chapter, we instead use the more general Laplace transform tech nique. In particular, we shall use the concept of stability in the generalized sense to analyze the method of lines in Section 1 3.2. More general methods are obtained by discretizing space and time simul taneously. For example, the Lax-Wendroff method cannot be obtained by the method of lines as used before. For these general methods, only the Laplace transform technique leads to stability results. Furthermore, it gives necessary and sufficient conditions for stability. This type of analysis for fully discrete approximations is often called normal mode analysis or GKS-analysis. 13.1. GENERAL THEORY FOR APPROXIMATIONS OF HYPERBOLIC SYSTEMS
We consider the quarter-space problem for systems
au
576
A at =
au ax
+ F,
o
:::;
x <
00,
t � 0,
(13. l . 1a)
GENERAL THEORY FOR APPROXIMATIONS OF HYPERBOLIC SYSTEMS
577
with initial data
f(x)
( 1 3 . l . 1 b)
Lou(O, t) = g .
( 1 3 . l . 1 c)
u(X, 0)
=
and boundary conditions
To construct difference approximations we introduce a space step ho and a time step k > 0 that generate a mesh with meshpoints (Xj , tn ) = ( j h, nk), where j = - r + 1 , -r + 2, . . . , and n = 0, 1 , . . . , as shown in Figure 1 3. 1 . 1 . The approx imation is denoted by up. The gridpoints (Xj , tn ) with j � 1 are called interior points and the other points are called boundary points. We shall use the scalar product and norm 00
(Uj ' wj )h, L j= l
(v, wh .- (v, W) l , �
lIulI �
respectively. The simplest approximations are explicit one-step methods.
v n+ 1
Q vP + k FP , VO h, Lovt l = g n+ l . ] ]
=
j
=
1 , 2, . . . ,
( 1 3 . 1 .2a) ( 1 3 . 1 .2b)
=
( 1 3 . 1 .2c)
Here Q
p
L Bp E P
( 1 3 . 1 .3)
v = -r
t
k
h Figure 13.1.1.
x
THE LAPLACE TRANSFORM METHOD
578
is a difference operator of the type that we have discussed earlier, and we assume that B_n Bp are nonsingular. It is assumed that the boundary condi tions ( 1 3 . 1 .2c) are such that the approximation is solvable, that is, that {urI } is uniquely defined by {uP } . This is the case if, for example, the boundary conditions can be written in the form q
� (L(O) un u n+ 1 .L... /"
/""
=
"= 1
"
+
L( I ) u n+ l ) /" "
+
"
g n+ l /"
+
-r
'
1 , . . . , 0.
( 1 3 . 1 .4)
We now introduce the general technique based on the Laplace transform. We need the grid vector function up and the data to be defined for all t . Therefore, we define
and, correspondingly, for the data F and g. We also define g(t) = The transform
is now well-defined. Assuming that /j
==
° for ° $; t $; k.
0, we get
Therefore, the Laplace transform of Eq. ( 1 3 . 1 .2) with /j j
=
=
° is ( 1 3 . 1 .5a)
1 , 2, . . . ,
( 1 3 . 1 .5b) ( 1 3 . 1 .5c)
La
is the Laplace transform of Lo . Written in the form of Eq. ( 1 3 . 1 .4), these equations become q
� (e-skL(O)D /"" " .L... "=
I
+
L( 1) D ) /""
"
+
gA
,
-r
+
1 , . . . , 0.
( 1 3 . 1 .6)
GENERAL THEORY FOR APPROXIMATIONS OF HYPERBOLIC SYSTEMS
579
For convenience, we assume that Q is only the principle part; that is, there is no explicit dependence on h or k. The coefficient matrices Bv depend on A = k/h , and we assume that A is a constant when h varies. The Godunov-Ryabenkii condition and the Kreiss condition can now be defined as in the semidiscrete case and the corresponding estimates are obtained in the same way. It is convenient to define the conditions in terms of z = e sk • Because we are dealing with the principle part, we are concerned with the behavior of the solution Uj (s) for Re s ;::: 0; that is, for I z l ;::: 1 . Let Vj be defined by
and, correspondingly, for Fj and gj . We also define io(z) (13. 1 .5) can now be written in the form
ZVj = QVj + kFj , Lava = g , I I v ll h < 00,
j
:=
1 , 2, . . . ,
Lo(s). The problem (1 3. 1 .7a) ( 1 3 . 1 .7b) ( 1 3 . 1 .7c)
which is called the z transformed problem. The corresponding eigenvalue prob lem is
z CPj = Q CPj , La CP o 0, I I cp li h < 00,
j
1 , 2, . . . , ( 13. 1 .8)
and we have the following lemma.
If Eq. (13. 1.8) has an eigenvalue z with Iz l > 1, then the approximation (13.1.2) is not stable. This is the Godunov-Ryabenkii condition. Lemma 13.1.1.
Proof. If z is an eigenvalue, then v; = zncpj is a solution of Eq. ( 13 . 1 .2) with F == 0, g == 0, f = cpo At a fixed time t, we have
vJt/k
=
Z l/k ..,..,,,J. '
and for decreasing k the solution grows without bound. This proves the lemma. To derive sufficient conditions for stability, we follow the same lines as for the semidiscrete case. The analysis is very similar. The difference is that we are now considering the domain I z l ;::: 1 instead of Re s ;::: 0, and we want to estimate the solutions of Eq. (13. 1 .7) in terms of the data Fj and g. The definitions
580
THE LAPLACE TRANSFORM METHOD
and the lemmas are completely analogous to the ones given in Section 12.2; therefore, we make the presentation shorter here. Consider the transformed problem ( 1 3 . 1 .7) with F = 0 for Izl > 1 . The general solution of Eq. ( 1 3 . 1 .7a) is defined in terms of the solution of the char acteristic equation
( 1 3 . 1 .9) We have assumed stability for the problem with periodic boundary conditions. In the same way as for the semidiscrete case, we can prove the following lemma (Exercise 1 3. 1 . 1). Lemma 13.1.2. The characteristic equation (13.1.9) has no solution K = ei� , � real, if I zl > 1 . If B-r is nonsingular, there are exactly mr solutions K v with I K v l < I for I z l > 1 .
Equation ( 1 3 . l .7a) can be written i n one-step form j = 1 , 2, . . . , where the eigenvalues of M are given by the solutions K of Eq. ( 1 3. l .9). By Lemma 1 3 . 1 .2, we can transform the matrix M as in Eq. (1 2.2. 1 7a). The Godunov-Ryabinkii condition can then be stated for the transformed function w = U'u as
Izl > 1 ,
( 1 3. 1 . 1 0)
where H I is the matrix in the transformed boundary condition
The condition ( 1 3. l . 1O) is now strengthened to the determinant condition
Izl � 1 .
( 1 3. 1 . 1 1 )
We now have the following lemma.
The determinant condition is fulfilled if, and only if, the solu tions of Eq. (13.1.7) with Fj = 0 satisfy
Lemma 13.1.3.
GENERAL THEORY FOR APPROXIMATIONS OF HYPERBOLIC SYSTEMS
i
l i\ 1 2 ::;; K l gl 2 ,
v = -r + I
where K is independent of z
Izl > 1,
581
( 1 3 . 1 . 1 2)
(the Kreiss condition).
The detenninant condition, or equivalently the Kreiss condition, can be checked without the transfonnation process above. The solution of Eq. (13. 1 .7a) under the condition ( 1 3. 1 .7c) has the fonn
L I Pv ( j )K � ,
Izl >
IKv I <
1,
( 13 . 1 . 1 3)
where Pv ( j) is a polynomial in j with vector coefficients. By Lemma 1 3 . 1 .2, the solution Vj depends on mr parameters (J' = ((J J , . . . , (Jmr)T, and they are determined from the boundary conditions by a linear system C(z) (J'
=
g.
( 1 3 . 1 . 14)
When z approaches the unit circle, some of the roots K v may also approach the unit circle. For I zo = 1 , we, therefore, single out the proper solutions K v by
l
( 13. 1 . 15) for 0 > O. Generalized eigenvalues Zo are defined in analogy with the definition in Section 1 2.2. We have the next lemma.
The Kreiss condition is fulfilled if, and only if, there are no eigenvalues or generalized eigenvalues z, I zi � 1 , to the problem (13. 1.8), that is, lf Det (C(z» -f- 0, I z l � 1 .
Lemma 13.1.4.
As we have seen, the Kreiss condition can be fonnulated in several ways in the transfonned space. By going back to the physical space via the rela tion Vj (s) = Vj (z) and the inverse Laplace transfonn, we obtain, the following theorem corresponding to Theorem 1 2.2. 1 .
Theorem 13.1.1. The solutions of Eq. (13.1.2) with f = F = 0 satisfy, for any fixed j , the estimate n
L Ivil 2k v=
I
n
::;;
constant
L v=I
if, and only if, the Kreiss condition is satisfied.
I gV 1 2 k,
( 13 . 1 . 1 6)
THE LAPLACE TRANSFORM METHOD
582
REMARK. Parseval's relation yields the estimate for n = 00 . However, we use the same arguments as above to obtain the estimate for any finite
u n does not depend on g P ,
P
>
n.)
There is no need to check the Kreiss condition as
Izl
n.
---7 00 •
There are constants Co and Ko such that, for solutions of Eq. (13.1. 7) with F ° satisfy the estimate
Lemma 13.1.5.
=
(The solution
Izl � c
o
,
the
( 1 3 . 1 . 17) Proof. We have
constant
I z l 2 (II Vl l l
that is, for large enough
o +
L
I'- = -r + !
Iz l ,
and the lemma follows. Now we derive the stability estimate for the original problem under the assumption that there is an energy estimate for the Cauchy problem. To intro duce a slightly different technique than the one used for the semidiscrete case, we assume that Eq. ( 1 3 . 1 .2) is a scalar problem. We need the following lemma.
Let Q be any difference operator of the fonn (13.1.3) where the Bp are scalars. The scalar problem
Lemma 13.1.6.
u n+ 1 = Qu n ) } ' uJ = u n+ ! I'-
0,
=
g1'-n+
l
'
j = 1 , 2, . . . ,
-r
+
1,
-r
+
2, . . . , 0,
( 1 3. 1 . 1 8)
satisfies the Kreiss condition. The proof is omitted here. The lemma says that, from a stability point of view, we can always specify data explicitly at all boundary points regardless of inflow or outflow. (However, it may be difficult to find accurate data in the outflow case.) We make the assumption that the Cauchy problem satisfies an energy esti mate:
GENERAL THEORY FOR APPROXIMATIONS OF HYPERBOLIC SYSTEMS
Assumption 13.1.1.
583
The solutions of p+ ! = QVP ,
O, ± 1 , ± 2, . . . ,
j
v
satisfy the estimate ( 1 3 . 1 . 1 9)
for A = kjh � Ao, Ao > O. We shall prove the following lemma. Lemma 13.1.7.
scalar problem
There exist boundary conditions such that the solution of the n+ ! v J
=
QvJ� '
J = fJ, L 1 V0n+ ! v
j j
0,
=
=
1 , 2, . . . ,
-r +
( 1 3. 1 .20a)
1,
-r
+
2, . . . ,
( 1 3 . 1 .20b) ( 1 3 . 1 .20c)
satisfies n
L I ; 1 2k p=
v
� constant "f"� ,
j
1
=
-r +
1,
-r +
2, . . . , 1 .
( 1 3 . 1 .21)
Proof. Let wn be defined by
Wjn =
{P
v '
0,
j = -r + 1 , otherwise,
-r
+
2, . . . , 0,
the projection operator P for gridfunctions { vP Jj= ( n) . PV J =
{P
v ,
0,
_
( 1 3 . 1 .22)
� by
j = 1 , 2, . . . , . < 0, } -
( 1 3 . 1 .23)
and the injection operator R for gridfunctions { vP Jj: l by ( 1 3 . 1 .24)
THE LAPLACE TRANSFORM METHOD
584
We have
II Qu n lih = IIPQ(Ru n + wn ) II =-=,=, IIPQRu " Il =-=, = + 2Re (PQRu ", PQw" ) _ = = "
IIPQwn ll=-= ='
+
By assumption,
Furthennore,
1 , 2, . .
j
.
, r,
which gives
lIu"+ I II�
� =
lIu n ll� I l u " l l�
2Re ( QRu", Qw" ) l ,r +l + 2Re ( v n , Q Wn ) l,r'
+
+
II Qwn IlL,
By choosing the boundary conditions as
u; = 0,
J1 = -r +
2,
-r +
3 , . . . , 0, ( 1 3 . 1 .25 )
we get
Re (u n+ 1 , Qw") I, r
j = -r + 2
c > 0. Thus,
GENERAL THEORY FOR APPROXIMATIONS OF HYPERBOLIC SYSTEMS
585
showing that Eq. ( 1 3 . 1 .2 1 ) is satisfied. The lemma gives an estimate for the boundary values including vI'. As for the sernidiscrete case, we also need, in general, estimates for vJ', j � 2. Therefore, we have the following lemma.
Let L ! be the boundary operator constructed in Lemma 13.1.7. Then, for any fixed j, the solution of Eq. (13. 1.20) satisfies
Lemma 13.1.8.
n
I. v=
!
I v/ 1 2 k
� constant Il f ll �,
-r
j
+
1,
-
r
+
2, . . . .
vp, j -r + 1 , -r + . . , vI' as given boundary values
Proof By Lemma l 3 . 1 .7, we already have estimates for 2, . . . , 1 . In particular, we consider V�r ' v�r+ 3 ' . +2 and write the difference approximation as
v�+ ! Qvi' , VO /j, v n +! v n +! J
=
J
=
p.
p.
=
,
j j
=
=
2, 3, . . . ,
-r -r
J-I-
( 1 3 . 1 .26)
+
-r + 2, -r +
2,
+
3, . . . , 3, . . . , 1 .
( 1 3 . 1 .27)
Now, we consider the auxiliary problem with the special boundary operator shifted one step to the right:
w n+! Qw n wJ /j, J
=
J '
=
j j
o.
2, 3, . . . ,
-r
+
2,
-r
+
3, . . . , ( 1 3 . 1 .28)
Lemma 1 3 . 1 .7 implies
n
I. v=
!
I w� 1 2 k
� constant II f l[� ,
-r
+
2,
-r
+
3, . . . , 2.
586
THE LAPLACE TRANSFORM METHOD
The difference Yj
yp+l yJ
=
l y;+
=
0, =
=
Vj - Wj satisfies
QyP ,
vp.n + 1 - wp.n + 1 ,
)
=
)
= -r +
2,
-r +
J1.
= -r +
2,
-r +
3, . . . , 1 .
( 1 3 . 1 .29)
-r +
3, . . . , 1 .
( 1 3. 1 .30)
2, 3, . . . , 3, . . . ,
The z-transformed system is zYj
yp.
=
=
QYj , up. - wp. ,
2, 3, . . . ,
) J1.
= -r +
2,
By Lemma 1 3 . 1 .6, the solution satisfies the Kreiss condition, and, by Theorem 1 3. 1 . 1 , we get, for any fixed ) , n
L "= 1
l yj l 2 k � constant
� Thus, for ) n
L= v
1
=
1
n
L L ( I v� 1 2
p.= - r + 2 V = 1
)
constant IIfll� ,
+
-r +
I w� 1 2 )k, 2,
-r +
3, . . . .
2,
I v� 1 2 k � constant
n
L ( I W2 1 2 "=
1
+
I Y2 1 2 )k � constant llfll�.
( 1 3. 1 .3 1 )
Now the same procedure is applied for ) = 3, 4, . . . , and the lemma follows. The stability estimate for the original problem ( 1 3 . 1 .2) is obtained as it was for the semidiscrete problem. The solution W of the auxiliary problem (13.1 .20) with the special boundary conditions (and with the forcing function F included) is subtracted from the solution v of the original problem. The difference v - W satisfies a problem with zero forcing function, zero initial function but nonzero boundary data containing wp. and gw Because there are estimates for wp., we get the final estimate
( 1 3 . 1 .32) that is, strong stability.
GENERAL THEORY FOR APPROXIMATIONS OF HYPERBOLIC SYSTEMS
587
The restriction to scalar problems is introduced because an analog of Lemma 1 3 . 1 .6 is not known for systems. In the system case, the result can, of course, still be used if the matrices Bv can be simultaneously diagonalized. This is no severe restriction in the one-dimensional case, but in several space dimensions it usually is. In such a case, the techniques presented for sernidiscrete problems (Lemma 1 2.2.9 and 1 2.2. 1 0) can be used. We summarize the results in the following theorem. Theorem 13.1.2. Assume that the approximation (13. 1. 2) fulfills the Kreiss con dition and that there is an energy estimate for the corresponding Cauchy prob lem. If one of the following conditions is satisfied, then the approximation is strongly stable.
1. 2.
The coefficient matrices can be simultaneously diagonalized. r ? p.
The principle of deriving stability estimates via an auxiliary problem with special boundary conditions can also be applied to multistep schemes. How ever, the construction of the special boundary conditions is more complicated and, at the current state, we need some extra assumptions. There is a general theory where the stability follows from the Kreiss condition by itself; this will be further commented on in the Bibliographic Notes at the end of this chap ter. Because the Kreiss condition plays the central role whatever approach we choose, we extend our discussion of this condition to general multistep schemes and treat a few examples. We consider general multistep methods of the type introduced in Section 5 . 1 . The general form of the approximation is Q- 1 Vjn+I
v}a
-
_
= fJ(a ) '
q
� £..J Q
a=O
j
a Vjn -a -r
+
+
k Fjn ' 1,
-r
j +
2, . . . ;
1 , 2, . . . , a
0, 1 , . . . , q, ( 1 3 . 1 .33)
Again we assume that the approximation is solvable, that is, that a unique solu I tion vt exists. By defining Vj (t) between gridpoints as above, we can use the Laplace transform, and, by substituting z = esk, we obtain the z-transformed equations. Under the assumption that f ) == ° and a = 0, 1 , . . , q, these equa tions are formally obtained by the substitution vp = z n Uj and, similarly, for Fp and gn . We obtain
Y
.
THE LAPLACE TRANSFORM METHOD
588 q
Z Q- I Uj
=
L OUj
g,
� z-a Q,A + kFj ,
j
a =O
1 , 2, . . . ,
Iiuli h < 00.
( 1 3 . 1 .34)
The only difference when compared to the procedure for one-step schemes is that we now have higher order polynomials in z involved in the difference equa tion and possibly in the boundary conditions. The corresponding eigenvalue problem is
ZQ- I 'Pj
=
Lo 'Pj
0,
11 'P li h
q
� z- a Qa 'Pj ,
a =0
j
1 , 2, . . . ,
( 1 3 . 1 .35a) ( 1 3 . 1 .35b) (13. 1 .35c)
< 00.
Obviously, Lemma 13. 1 . 1 , formulated for one-step schemes, also holds in this case. Again, the solution of the ordinary difference equation ( 1 3 . 1 .35a) is expressed in terms of the roots K" of the characteristic equation. If the dif ference operators Qa have the form p
� B("a) E"
JI = - r
,
( 1 3 . 1 .36)
then the characteristic equation is
( 1 3. 1 .37) This equation is obtained by formally substituting up = K j Zn into the original homogeneous scheme and setting the determinant of the resulting matrix equal to zero. The Kreiss condition ( 1 3 . 1 . 1 2) is also well-defined for Eq. ( 1 3 . 1 .34) with F == 0, as well as the eigenvalues and generalized eigenvalues of Eq. ( 1 3 . 1 .35), and Lemma 1 3. 1 .4 holds. Again, the condition is checked by writing the boundary conditions as in Eq. ( 1 3 . 1 . 14) and checking that Det (C(z» -J ° for I z l � 1 . We now consider the leap-frog scheme,
GENERAL THEORY FOR APPROXIMATIONS OF HYPERBOLIC SYSTEMS
! U;�+ 1 - U;n - -- '" (Un j+ 1
1\
-
Ujn_ 1 ) '
j = 1 , 2, . . . ,
A
=
k
h'
589
( 1 3 . 1 .38a)
as an example. From Section 12.1 and 1 2.2, we know that the boundary con dition ( 1 3 . 1 .38b) is stable for the semidiscrete approximation for any integer q. The Kreiss con dition is satisfied and, furthermore, for the special cases q = 1 , 2, it has been demonstrated in Section 1 1 . 1 that the energy method can be applied directly. To check the Kreiss condition for the leap-frog scheme, we solve the char acteristic equation ( 1 3 . 1 .39) The transformed equation
has the solution
IKl l
< 1,
for
Izi
> 1.
The determinant condition is
Izl � 1 ,
( 13. 1 .40)
which is only violated if K 1 = 1 . [The matrices H I (z ) and e(z ) are scalar and, therefore, identical in this case.] The corresponding z values ± 1 are obtained from Eq. (13. 1 .39). To define K I properly in the neighborhood of z = - 1 , we let z = -(1 + 0), 0 > 0, 0 small, and we get, from Eq. ( 1 3 . 1 .39),
or
K Because 0 > 0,
Z
A > 0, we have
±1 -
o
A
.
S90
THE LAPLACE TRANSFORM METHOD
Kl
""
1
-
0/ )... ,
and obviously the Kreiss condition is violated at z = - 1 . The solution of Eq. ( 1 3 . 1 .38) is shown as a function of t in Figures 1 3. 1 .2 and 1 3 . 1 .3, for q = 1 and q = 2, respectively. The condition VN = 0 has been introduced at the boundary XN = 1 . As initial conditions, we have used
vJ vJ
-
vJ
sin (21rXj) +
,
kDovJ .
It is clearly seen how the instability originates at the outflow boundary. Also note that the oscillations are more severe in the case q = 2. Formally, the bound ary condition is more accurate than the one for q = 1 . The experiment is an illus tration of the basic fact that the stability concept is independent of the order of accuracy. The stronger oscillations obtained for q = 2 could actually be expected from the analysis. The singularity at z = - 1 of the matrix H I (z) = (K 1 1 )q becomes stronger with increasing q . This leads to a worse estimate of the solu tion of the z-transformed system -
(Z2 - 1 )Uj = )...z (Uj+ 1
(hD+)quo =
Il u li h < 00,
-
Uj- l ),
g,
j
1 , 2,
. .
., ( 1 3 . 1 .4 1 )
1 .5 u
1 .5 Figure 13.1.2.
GENERAL THEORY FOR APPROXIMATIONS OF HYPERBOLIC SYSTEMS
591
1 .5 u
1 .5
Figure 13.1.3.
because
Ivo l Obviously, we need another boundary condition. It does not help to replace Do by D+ at the boundary, because this is equivalent to Eq. ( 1 3. 1 .38b) for q = 2 (with the grid shifted one step). However, if a noncentered time differencing is used, then we get the new condition ( 1 3 . 1 .42) The detenninant condition is now violated if, and only if, z
- 1
-
"A(K 1
- 1)
=
O.
( 1 3 . 1 .43)
The system ( 1 3 . 1 .39) and ( 1 3 . 1 .43) only has the solution z = K I = 1 . But a perturbation calculation with z = 1 + 0, 0 > 0, shows that K 1 = - 1 at z = 1 . Hence, the Kreiss condition holds. The numerical result with this boundary condition is shown in Figure 1 3 . 1 .4. The oscillations have now disappeared. The accuracy is still not very good, this is because the true solution has a discontinuity in the derivative because of an incompatibility in the data at the comer x = 1 , t = O.
592
THE LAPLACE TRANSFORM METHOD 1 .5
u
- 1 .5
Figure 13.1.4.
The procedure we have used for this example in order to check the Kreiss condition is typical. If Det (C(z)) 0 for some value z zo with I zo l > 1 , the analysis i s complete, and w e know that the approximation i s unstable. If C(z) is singular for z zo with I zo l 1 , the Kreiss condition may or may not be violated. If all the corresponding roots K satisfy K (zo) < I , then zo is an eigenvalue, and the Kreiss condition is violated. If there is at least one root K 1 with 1 K I (Zo ) 1 = 1 , we must find out whether or not Zo is a generalized eigenvalue. This is done by a perturbation calculation Zo � (1 + o)zo, 0 > O. If I K 1 ((1 + o)Zo) I < 1, then there is a generalized eigenvalue, otherwise not. (The procedure described here is used as the basis for the IBSTAB-algorithm described in Section 13.3.) Next we treat another example (cf. Exercise 1 2.5.3). Consider the problem =:
[ ]
o � x � 1,
I
=:
=
f(x), u ( l ) O , t)
=
and define the difference operator
Wj
1 1
=:
=:
u (l ) u (2) u(x, O) ( u l ) (O, t)
=:
=
Wo(2) WN(2)
0,
(13.1 .44)
Q by Wj = QVj
[ � �] Dovj, D+vo( I ) ' D_vN( I ) ,
� 0,
j
1 , 2, . . . , N - 1 ,
j
0,
j = N.
( 1 3 . 1 .45)
GENERAL THEORY FOR APPROXIMATIONS OF HYPERBOLIC SYSTEMS
593
(Note that the vector grid function v has 2N + 2 components, but Qv has only 2N components.) The Crank-Nicholson approximation is
-
k !,+I + v n ) vJn + 1 - vJn = J 2 Q(VJ 1 n 1 (I I ) + ) + = 0. V� Vo n
O, l, . . . , N,
j
,
( 1 3 . 1 .46)
With the scalar product defined by
�
(v, wh we have,
WI·th
( vci2) wci2)
+
v;;) w;;» ) +
N- I
(Vj' wj) h, L j= I
Wj = Vjn+ 1 + Vjn ,
(W, QW)h -- 2"I wo(2) (W I( I ) - Wo( I » ) + 2"I wN(2) (WN(I ) - WN(I-) i ) N- i + � (w) ' ) (wj�i - wj�b + w? (wj�i - wj� i )) j= i
L
By taking the scalar product of Eq. ( 1 3 . 1 .46) with energy estimate
=
0.
v n+ 1 + v n , this gives the
showing that the method is stable, but not necessarily strongly stable. Next we check the Kreiss condition for the right quarter-space problem. The characteristic equation for the approximation at inner points is
Det
[ (Z- � (Z
- 1) K
+ 1) (K 2 - 1)
- � (Z +
1) (K 2 - 1 ) (Z - 1) K
]
0,
or, equivalently,
( A(Z ) 4(z - 1 )
+
1)
2
K2
_
(K 2
The solution of the z-transformed equation is
_
1 )2
0.
( 1 3 . 1 .47)
594
THE LAPLACE TRANSFORM METHOD
UIK{
[ �] +
U2K �
[� ] 1
,
where K I and K 2 are the solutions of Eq. ( 1 3 . 1 .47) with I K d < 1 and I K 2 1 for I z i > 1 . The condition vd l) = 0 implies U I = - U2, and the condition (z
_
1 )V-o(
2
) - � 2
(z
+
<1
1 ) (V- I( 1 ) - V-o( 1 »
implies
o.
(13. 1 .48)
Now consider the point
Zo = -
1 + hi/2 1 - hi/2
on the unit circle. The corresponding K values, K I = i, K 2 = -i, are obtained from Eq. ( 1 3 . 1 .47), and Eq. ( 1 3 . 1 .48) is, obviously, satisfied for z = zoo Thus, there is a generalized eigenvalue zoo However, both K values are double roots of the polynomial in Eq. ( 1 3. 1 .47), and we have a situation analogous to the one in the semidiscrete example in Section 12.5. The Kreiss condition is vio lated, showing that the scheme is not strongly stable. However, the generalized eigensolution contains double roots K that behave in such a way that stabil ity in the generalized sense (as defined in the next section) still holds. This is consistent with the energy estimate derived above, because one can prove that it leads to generalized stability (cf. Theorem 10.3. 1 for the continuous case). We now demonstrate how the stability theory can be applied to obtain gen eral principles for designing stable boundary conditions. The Kreiss condition may sometimes become rather complicated to check for systems of difference equations. However, for scalar equations, many results are known and we will show how they can be applied to systems. We consider general hyperbolic sys tems
au at
A auax '
o :::; x <
00,
t � 0,
( 1 3 . 1 .49)
and general multistep schemes ( 1 3 . 1 .33) and ( 1 3 . 1 .36) where the coefficient matrices B;(1) are assumed to be independent of h ; that is, we only consider the
GENERAL THEORY FOR APPROXIMATIONS OF HYPERBOLIC SYSTEMS
595
principle part. We also make the very natural assumption that the matrices B�CJ) are polynomials in A. Under this assumption, both the differential equation and its approximation can be reduced to a set of scalar equations. We assume that this is already done and that A and B}") are diagonal with u and A partitioned such that
Well-posedness requires the form ( 1 3. 1 .50) for the boundary conditions (for convenience, only homogeneous conditions are considered). It is now possible to work with the diagonal form of the approx imation to construct stable boundary conditions and then implement them for the original system. Assume that the boundary conditions for each component of the outflow vector V I are such that the Kreiss condition is fulfilled for the scalar problem. In the z-transformed space, for any fixed j, there is an estimate
|z| > 1,   (13.1.51)
where g is the inhomogeneous term in the boundary conditions. Therefore, the boundary values vj can be regarded as inhomogeneous terms in the transformed boundary conditions for the inflow vector v ll . We require that these conditions have the form
v-I-II'
=
Sv0l
+
g - II
I-' '
-r +
1,
-r +
2, . . . , 0,
( 1 3 . 1 .52)
where S is a difference operator independent of z. Lemma 13.1.6 is now applied to the inflow part of the system, and strong stability follows. It now remains to construct S in Eq. (13.1.52) so that sufficient accuracy is obtained. First, we note that if r = 1, then, obviously, we can use the physical boundary condition (13.1.50) as the inflow condition. If r ≥ 2, some extra conditions are required. Using Taylor expansions we get, from the differential equations and the boundary conditions,
THE LAPLACE TRANSFORM METHOD
(13.1.53)

By using difference formulas for ∂^ν u^I/∂x^ν, and applying Eq. (13.1.53) at x = -h, -2h, . . . , -(r - 1)h, we obtain a set of boundary conditions that has the required form. In applications, the matrix A is sometimes singular, but the method described here can still be used. The vanishing eigenvalues are included in A^I, and Eq. (13.1.50) still gives well-posed conditions. Because the inverse of A^I is never used, the procedure above is still well defined. For more general problems including several space dimensions, variable coefficients, and possibly nonlinear equations, the procedure is, of course, more complicated. But it is straightforward and may still be a realistic approach. Next we turn to the outflow problem and consider approximations of the scalar equation
∂u/∂t = a ∂u/∂x,   a > 0.   (13.1.54)
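The Taylor-expansion construction above needs difference formulas for ∂^ν u^I/∂x^ν near the boundary. As a hedged sketch (the function name and node choice are ours, not from the text), the weights of such a one-sided difference formula can be generated by solving the small Vandermonde system that the Taylor expansions produce:

```python
import math
import numpy as np

def onesided_weights(nu, nodes):
    """Weights w such that sum_i w[i]*u(x0 + nodes[i]*h) ~ h^nu * d^nu u/dx^nu.

    Solves the Vandermonde system arising from Taylor expansion about x0;
    'nodes' are integer grid offsets."""
    nodes = np.asarray(nodes, dtype=float)
    m = len(nodes)
    A = np.vander(nodes, m, increasing=True).T   # A[k, i] = nodes[i]**k
    b = np.zeros(m)
    b[nu] = math.factorial(nu)
    return np.linalg.solve(A, b)

# First derivative from x0, x0+h, x0+2h: the classic (-3/2, 2, -1/2) stencil.
w = onesided_weights(1, [0, 1, 2])
print(w)
```

The same helper with nu = 2 reproduces the standard second-difference stencil (1, -2, 1), which is what the procedure above would use for the second-derivative terms.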
Several scalar examples have been treated in the previous sections, and, with some satisfaction, we note that those results can now be used for the full system as indicated by the arguments above. For so-called translatory boundary conditions of the form

. . . = g_μ^{n+1},   C₀(-1) ≠ 0,   μ = -r + 1, -r + 2, . . . , 0,   (13.1.55)
where the same form of conditions is used at all boundary points, quite general results can be obtained. The boundary characteristic function is defined by R(z, κ).
If the conditions (13.1.55) are considered as a scheme to be applied at all points,
then the equation
(13.1.56) is the characteristic equation used to verify the von Neumann condition |z| ≤ 1. In that case, it is assumed that more than one time level is involved; but for our purposes R(z, κ) is well defined anyway. Similarly, the characteristic function for the basic scheme is

(13.1.57)

where the B_ν^(σ) are scalars. Letting

Ω(z, κ) = |P(z, κ)| + |R(z, κ)|,

one can prove the following result:
Theorem 13.1.3. Assume that the approximation (13.1.33) is consistent with the scalar outflow problem (13.1.54) and has translatory boundary conditions (13.1.55). Then the Kreiss condition is fulfilled if the condition

Ω(z, κ) > 0  for  (z, κ) ∈ {|z| = |κ| = 1, (z, κ) ≠ (1, 1), (z, κ) ≠ (-1, -1)} ∪ {1 ≤ |z|, 0 < |κ| < 1}   (13.1.58a)

and one of the conditions

(13.1.58b)

Ω(-1, -1) > 0, or
(13.1.58c)

are satisfied. The simplest outflow boundary procedure is extrapolation of some order ν,

(hD₊)^ν v_μ^{n+1} = 0,   μ = -r + 1, -r + 2, . . . , 0,   (13.1.59)
which was used for the fourth-order approximation in Section 13.1 [see Eq. (12.1.25)]. These conditions are translatory, and we have

R(z, κ) = (κ - 1)^ν.
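The condition of Theorem 13.1.3 can be sampled numerically. In the sketch below, the Lax-Wendroff characteristic function is written out by us in its standard form, and the CFL number and extrapolation order are our choices; the check verifies that Ω(z, κ) = |P(z, κ)| + |R(z, κ)| stays positive on a grid covering the set in (13.1.58a):

```python
import numpy as np

lam_a, nu = 0.5, 2   # CFL number lambda*|a| and extrapolation order (our choices)

def P(z, kap):
    # Standard Lax-Wendroff characteristic function (assumed form):
    # z minus the amplification polynomial in kappa.
    return z - (1 + 0.5*lam_a*(kap - 1/kap) + 0.5*lam_a**2*(kap - 2 + 1/kap))

def R(z, kap):
    # Translatory extrapolation boundary conditions (13.1.59): R = (kappa - 1)^nu.
    return (kap - 1)**nu

def Omega(z, kap):
    return abs(P(z, kap)) + abs(R(z, kap))

# Sample |z| = |kappa| = 1, avoiding the excluded points (1, 1) and (-1, -1),
# plus a coarse sample of the region 1 <= |z|, 0 < |kappa| < 1.
t = np.linspace(0.05, 2*np.pi - 0.05, 150)
vals = [Omega(np.exp(1j*a), np.exp(1j*b)) for a in t for b in t]
vals += [Omega(1.1*np.exp(1j*a), 0.5*np.exp(1j*b)) for a in t[::10] for b in t[::10]]
print(min(vals) > 0.0)
```

Because Lax-Wendroff is dissipative for 0 < λ|a| < 1, the only places where |P| can vanish on the unit torus are near (z, κ) = (1, 1), exactly the excluded point, so the sampled minimum stays positive.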
Now assume that we use a dissipative one-step scheme. The condition (13.1.58a) is, obviously, fulfilled for any ν except, possibly, for κ = 1, |z| = 1, z ≠ 1. But consistency implies that v_μ^n = constant gives the same constant solution v_μ^{n+1} at the next time level, and because we have a one-step scheme, P(z, 1) = 0 implies z = 1. The condition (13.1.58b) is also satisfied. Thus, the Kreiss condition is satisfied. It should be pointed out that the conditions in Theorem 13.1.3 are sufficient but, in general, not necessary. For example, to use the theorem for verifying stability for the Crank-Nicholson scheme with the one-sided forward Euler scheme at the boundary, the von Neumann condition for the boundary scheme must be satisfied. This leads to the condition k|a| ≤ h, whereas a direct stability analysis shows that the correct condition is k|a| < 2h. Finally, we note that the error estimates are obtained as in the semidiscrete case. A smooth solution of the continuous problem is substituted into the difference approximation, and the truncation error enters as inhomogeneous terms F^n, g^n, and f. The error estimate then follows directly from the strong stability estimate.

EXERCISES

13.1.1. Prove Lemma 13.1.2.
13.1.2. Prove Lemma 13.1.6 for the special case
Q,   a > 0.
13.1.3. Formulate and prove the analogy of Lemma 12.2.10 for explicit one-step methods. Use the result to prove Theorem 13.1.2 for the case r ≥ p.
13.1.4. Formulate and prove Lemma 13.1.2 for multistep methods.
13.1.5. Prove that the leap-frog scheme (13.1.38a) fulfills the Kreiss condition with the boundary condition
(13.1.60)

13.1.6. Derive an error estimate for the Lax-Wendroff approximation of
u_t = Au_x + F,   A^II < 0,   u^II(0, t) = g(t),
with boundary conditions
(v_0^I)^n = g^n,   (hD₊)^q v_0^II = 0.

13.1.7. Use the general principle described at the end of Section 13.1 to derive stable boundary conditions of the form of Eq. (13.1.52) for the linearized Euler equations (4.6.5) for a fourth-order approximation (r = 2).

13.2. THE METHOD OF LINES AND GENERALIZED STABILITY
The concept of generalized stability for semidiscrete approximations in Section 12.4 can be generalized to fully discrete approximations. First, consider one-step schemes
v_j^{n+1} = Qv_j^n + kF_j^n,   j = 1, 2, . . . ,   (13.2.1a)
v_j^0 = 0,   (13.2.1b)
L₀v_0^{n+1} = g^{n+1},   (13.2.1c)
where we have assumed zero initial data.

Definition 13.2.1. The approximation (13.2.1) is stable in the generalized sense if, for g^n = 0, the solutions satisfy an estimate
Σ_{n=1}^∞ e^{-2ηt_n} ||v^n||_h² k ≤ K(η) Σ_{n=1}^∞ e^{-2ηt_n} ||F^{n-1}||_h² k,   η > η₀,   lim_{η→∞} K(η) = 0.   (13.2.2)
The approximation (13.2.1) is strongly stable in the generalized sense if the
solutions satisfy

Σ_{n=1}^∞ e^{-2ηt_n} ||v^n||_h² k ≤ K(η) Σ_{n=1}^∞ e^{-2ηt_n} ( ||F^{n-1}||_h² + |g^n|² ) k.   (13.2.3)
As in the previous section, we define the solution for all t by extending v^n, F^n, and g^n to stepfunctions. Then we can take the Laplace transform and, by substituting z = e^{sk}, we obtain the z-transformed system

zv_j = Qv_j + kF_j,   j = 1, 2, . . . ,
L₀v_0 = g,   ||v||_h < ∞.   (13.2.4)
By Parseval's relation, we have the following theorem.
Theorem 13.2.1. The approximation (13.2.1) is stable in the generalized sense if, and only if, Eq. (13.2.4) with g = 0 has a unique solution satisfying

||v(z)||_h ≤ K(z) ||F(z)||_h,   η > η₀,   lim_{η→∞} K(z) = 0,   (13.2.5)
where z = e^{(η+iξ)k}. It is strongly stable in the generalized sense if, and only if, Eq. (13.2.4) has a unique solution satisfying

||v(z)||_h ≤ K(z) ( ||F(z)||_h + |g(z)| ),   η > η₀,   lim_{η→∞} K(z) = 0.   (13.2.6)
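The resolvent characterization in Theorem 13.2.1 can be probed numerically for a concrete scheme. In this hedged sketch (the upwind scheme, the truncation to a finite matrix, and all parameters are our assumptions), K(z) = k‖(zI - Q)⁻¹‖ is evaluated along z = e^{sk} and seen to decay as η = Re s grows, as (13.2.5) requires:

```python
import numpy as np

N, lam, k = 200, 0.5, 0.01       # truncation size, CFL number, time step (assumed)
# One-step upwind scheme v_j^{n+1} = (Qv^n)_j + kF_j^n for u_t = u_x:
# (Qv)_j = (1 - lam)*v_j + lam*v_{j+1}, truncated to an N x N matrix.
Q = (1 - lam)*np.eye(N) + lam*np.eye(N, k=1)

def K(eta, xi=0.3):
    z = np.exp((eta + 1j*xi)*k)                      # z = e^{sk}, s = eta + i*xi
    return k*np.linalg.norm(np.linalg.inv(z*np.eye(N) - Q), 2)

vals = [K(eta) for eta in (1.0, 10.0, 100.0, 1000.0)]
print(all(a > b for a, b in zip(vals, vals[1:])))    # K decreases with eta
```

For large η we have |z| = e^{ηk} ≫ 1 and K(z) ≈ k/|z| → 0, while for η near 0 the non-normality of Q makes K(z) large; both effects are visible in the computed values.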
One can prove the following theorem, which corresponds to Theorem 12.4.4.

Theorem 13.2.2. Assume that Eq. (13.2.1a) is dissipative and consistent with a strictly hyperbolic system. Then the approximation (13.2.1) is strongly stable in the generalized sense if the Kreiss condition is satisfied.
The proof of this theorem follows, essentially, the same lines as for the semidiscrete case (Exercise 13.2.3). Now we consider the method of lines. In particular, we treat time discretizations of the Runge-Kutta type and of the linear multistep type. In both cases, we prove that generalized stability follows from the same property for the semidiscrete approximation. From now on, we assume that g^{n+1} ≡ 0 in Eq. (13.2.1c). Furthermore, we use the boundary conditions to eliminate all vectors v_j, j = -r + 1, -r + 2, . . . , 0, from the approximation. The resulting system has the form
dv_j/dt = Qv_j + F_j,   j = 1, 2, . . . ,
v_j(0) = 0,   j = 1, 2, . . . .   (13.2.7)
Even if the original differential equation has constant coefficients, the difference operator Q now has variable coefficients when applied to v_j near the boundary. We assume that the approximation is stable in the generalized sense, that is, the solution of the resolvent equation

sv_j = Qv_j + F_j,   j = 1, 2, . . . ,   (13.2.8)
satisfies the estimate

||v||_h ≤ K(η) ||F||_h,   η > η₀,   (13.2.9)

where lim_{η→∞} K(η) = 0. In all our examples treated so far, the constant K(η) satisfies

K(η) ≤ constant/η,   (13.2.10)

and we assume here that this is the case. We discretize time by using a method of the Runge-Kutta type, which gives us
w_j^{n+1} = P(kQ)w_j^n + kG_j^n.   (13.2.11)
Here P and P₁ are polynomials in kQ. Furthermore, it is assumed that there is a relation between k and h such that ||kQ|| ≤ constant. Let us apply the method to the scalar ordinary differential equation

y′ = λy.

We obtain
From Section 6.7, we know that there is an open domain Ω in the complex plane μ = λk such that

|P(μ)| < 1   if μ ∈ Ω.
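For a concrete method, the size of this domain can be estimated numerically. As a hedged companion (the bisection helper and all tolerances are ours), the sketch below estimates the largest half-disc {Re μ < 0, |μ| ≤ R₁} contained in the stability domain of the classical fourth-order Runge-Kutta method, whose amplification polynomial is P(μ) = 1 + μ + μ²/2 + μ³/6 + μ⁴/24:

```python
import numpy as np

def P(mu):
    # Amplification polynomial of classical RK4 applied to y' = lambda*y.
    return 1 + mu + mu**2/2 + mu**3/6 + mu**4/24

def halfdisc_inside(R, n=200):
    """Sample |P(mu)| < 1 on the open half-disc {Re mu < 0, |mu| <= R}."""
    r = np.linspace(1e-3, R, n)
    th = np.linspace(np.pi/2 + 1e-3, 3*np.pi/2 - 1e-3, n)
    mu = r[:, None]*np.exp(1j*th[None, :])
    return bool(np.all(np.abs(P(mu)) < 1))

# Bisect for the largest admissible radius R1 (a numerical estimate only).
lo, hi = 1.0, 3.5
for _ in range(30):
    mid = 0.5*(lo + hi)
    lo, hi = (mid, hi) if halfdisc_inside(mid) else (lo, mid)
print(round(lo, 2))
```

The estimate is bounded above by the real-axis stability limit of RK4 (about 2.79) and is consistent with |P(iα)|² = 1 + α⁶(α² - 8)/576 < 1 for |α| < 2√2 on the imaginary axis.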
We now make the following assumptions.
Assumption 13.2.1. There exists a number R₁ > 0 such that the open half circle

{μ : |μ| < R₁, Re μ < 0}

belongs to Ω. If μ = iα, α real, |α| ≤ R₁, does not belong to Ω, then necessarily

P(iα) = e^{iφ},   φ real,   -π ≤ φ ≤ π.   (13.2.12)

Assumption 13.2.2. Let φ be a given real number. If μ = iα, |α| ≤ R₁, is a purely imaginary solution of

P(μ) = e^{iφ},

then it is a simple root, and there is no other purely imaginary root iβ with
For any consistent approximation, the above condition is satisfied if we restrict R₁ to be sufficiently small, because
It is also satisfied if the approximation is dissipative, that is, if μ = iα, 0 < |α| ≤ R₁, belongs to Ω. Let iα be a root of the above type. Consider the perturbed equation
η real,   η > 0.   (13.2.13)
We then have the following lemma.

Lemma 13.2.1. For sufficiently small |iξ + η|, the root of Eq. (13.2.13) can be expanded into a convergent Taylor series
Here Re μ(iξ + η) ≥ 0, and γ > 0 is real and positive. Therefore,
Proof. Because, by assumption, iα is a simple root, the Taylor expansion is valid and γ ≠ 0. Because μ ∉ Ω, we have Re μ(iξ + η) ≥ 0 for all sufficiently small |ξ|, η, η ≥ 0. Therefore γ must be real and positive, and the lemma follows.

Figure 13.2.1.

We shall now prove the following theorem.
Theorem 13.2.3. Assume that Assumptions 13.2.1 and 13.2.2 hold and that the semidiscrete approximation is stable in the generalized sense with K(η) satisfying Eq. (13.2.10). Then the fully discretized approximation is stable in the same sense if

||kQ||_h ≤ R,   (13.2.14)

where R is any constant with R < R₁.

Proof. The resolvent equation is
zw_j = P(kQ)w_j + kG_j,   z = e^{sk},   η = Re s > η₀,   (13.2.15)
where G_j = P₁(kQ)F_j. By assumption, the difference operators kQ are bounded, and P₁(kQ) is a polynomial; that is,
Therefore, by Eq. (13.2.5), we must show that the solutions of Eq. (13.2.15) satisfy

||w||_h ≤ K(z) ||G||_h,

that is,

||(zI - P(kQ))⁻¹||_h ≤ K₁(z),   lim_{η→∞} K₁(z) = 0.

For large |z|, the estimate follows easily. Because P(kQ) is a bounded operator, we get from Eq. (13.2.15)

||w||_h ≤ (1/|z|) ||P(kQ)w||_h + (constant k/|z|) ||G||_h,

and, for |z| sufficiently large,

||w||_h ≤ (constant k/|z|) ||G||_h.

Next consider |z| ≤ c₀. Let μ_ν(z) be the roots of the polynomial
z - P(μ). Then, we have

zI - P(kQ) = χ₀ ∏_ν (μ_ν(z)I - kQ),

where χ₀ is a complex constant. Thus, for every z with |z| > 1, we can write Eq. (13.2.15) in the form
(13.2.16)

We now consider each factor μ_ν(z)I - kQ. By assumption, the roots μ_ν do not belong to Ω for |z| > 1. There are three possibilities:

1. |μ_ν(z)| - R ≥ δ > 0, δ constant; Eq. (13.2.14) implies

(13.2.17)
In particular, if Re μ_ν ≤ 0, then μ_ν ∉ Ω implies

|μ_ν(z)| - R ≥ δ₁ > 0.
Thus, the above inequality holds if the constant δ > 0 is chosen sufficiently small.

2. Re μ_ν ≥ δ₂ > 0, δ₂ = constant > 0. Let s = μ_ν(z)/k, and use the resolvent condition [Eqs. (13.2.9) and (13.2.10)] for the semidiscrete problem. For k small enough, we have Re μ_ν/k > η₀, which gives

||(μ_ν(z)I - kQ)⁻¹||_h ≤ (1/k) · constant k/Re(μ_ν(z)) ≤ constant/δ₂.   (13.2.18)

3. Re μ_ν(z) > 0, but lim_{z→z₀} μ_ν(z) = iα, z₀ = e^{iφ}, α, φ real. Let
By Lemma 13.2.1, we have

(13.2.19)

Therefore, by Eq. (13.2.10), and for k small enough,

||(μ_ν(z)I - kQ)⁻¹||_h ≤ constant/(ηk).   (13.2.20)
Now we can prove the theorem. Combining the estimates (13.2.17) to (13.2.20) and observing that, for a given z = e^{iφ}, there is at most one imaginary root μ_ν(z) = iα, we obtain for |z| ≤ c₀

||(zI - P(kQ))⁻¹||_h ≤ |χ₀|⁻¹ ∏_ν ||(μ_ν(z)I - kQ)⁻¹||_h ≤ constant/(ηk) if one of the roots has the form (13.2.19), and ≤ constant otherwise.

Therefore, ηk ≤ constant implies

||w||_h ≤ (constant/η) ||F||_h.

This proves the theorem.

Instead of a Runge-Kutta method, we now consider a multistep method
(I + kβ₋₁Q)w_j^{n+1} = Σ_{σ=0}^{q} (α_σ I - kβ_σ Q) w_j^{n-σ} + kF_j^n   (13.2.21)

with real coefficients α_σ and β_σ. The resolvent equation becomes

(13.2.22)
where

P₁(z) = z^{q+1} - Σ_{σ=0}^{q} α_σ z^{q-σ},   P₂(z) = β₋₁ z^{q+1} - Σ_{σ=0}^{q} β_σ z^{q-σ}.
Again, we apply the method to the scalar differential equation y′ = λy. Then, we obtain the characteristic equation

P₁(z) - μP₂(z) = 0,   μ = λk.

We make the usual assumptions for the multistep method.

Assumption 13.2.3.
1. The equations P₁(z) = 0, P₂(z) = 0 have no root in common,
2. |z_j| ≤ 1 for all roots z_j of P₁(z) = 0,
3. and the roots z_j of P₁(z) = 0 with |z_j| = 1 are simple.
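These root conditions can be checked mechanically for a given method. A hedged sketch (the helper and tolerances are ours) verifies the three conditions of Assumption 13.2.3 directly from the coefficient lists of (13.2.21):

```python
import numpy as np

def check_root_condition(alpha, beta):
    """Check Assumption 13.2.3 for the multistep scheme (13.2.21), with
    P1(z) = z^(q+1) - sum_s alpha_s z^(q-s) and
    P2(z) = beta_{-1} z^(q+1) - sum_s beta_s z^(q-s);
    alpha = [alpha_0..alpha_q], beta = [beta_{-1}, beta_0..beta_q]."""
    a, b = np.asarray(alpha, float), np.asarray(beta, float)
    p1 = np.concatenate(([1.0], -a))
    p2 = np.concatenate(([b[0]], -b[1:]))
    r1 = np.roots(p1)
    no_common = all(abs(np.polyval(p2, z)) > 1e-10 for z in r1)   # condition 1
    inside = bool(np.all(np.abs(r1) <= 1 + 1e-10))                # condition 2
    uni = r1[np.abs(np.abs(r1) - 1) < 1e-8]                       # unimodular roots
    simple = len(uni) == len(np.unique(np.round(uni, 6)))         # condition 3
    return no_common and inside and simple

# Two-step Adams-Bashforth: w^{n+1} = w^n + k(3/2 Qw^n - 1/2 Qw^{n-1}).
# In the sign convention of (13.2.21) this means beta_0 = -3/2, beta_1 = 1/2.
print(check_root_condition([1.0, 0.0], [0.0, -1.5, 0.5]))   # -> True
```

Here P₁(z) = z² - z has the simple unimodular root z = 1 and the root z = 0, and P₂(z) = (3/2)z - 1/2 shares no root with P₁, so the check passes; replacing α by [2, -1] gives the double root (z - 1)² and the check fails.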
As in the previous case, there is an open domain Ω in the complex plane μ = λk such that

P₁(z) - μP₂(z) ≠ 0   for |z| ≥ 1, μ ∈ Ω.   (13.2.23)
We make the same construction as earlier, which leads to the following assumption.

Assumption 13.2.4. There exists a number R₁ > 0 such that the open half circle

{μ : |μ| < R₁, Re μ < 0}

belongs to Ω. If μ = iα, α real, |α| < R₁, does not belong to Ω, then there is a z = e^{iφ}, φ real, such that

(13.2.24)

Assumption 13.2.5. z = e^{iφ} is a simple root of Eq. (13.2.24).

P₁(z) has only simple roots near z = 1 and, therefore, the last assumption holds if we choose R₁ sufficiently small. If z = e^{iφ} satisfies Eq. (13.2.24), then P₂(e^{iφ}) ≠ 0. Otherwise, also P₁(e^{iφ}) = 0, and e^{iφ} would be a common root of P₁ and P₂. Therefore, for z = e^{iφ},
the quotient S(z) = P₁(z)/P₂(z) is well defined and

dS/dz = [P₂(z)P₁′(z) - P₁(z)P₂′(z)] / P₂²(z).
We consider now the perturbed equation
In the same way as Lemma 13.2.1, one can prove the following lemma.
Lemma 13.2.2. The solution μ = μ(iξ + η) of the perturbed equation has the same properties as in Lemma 13.2.1.
Definition 13.2.1, which defines stability in the generalized sense, can also be used for multistep methods, and Theorem 13.2.1 holds. We can now prove the following theorem.
Theorem 13.2.4. Assume that Assumptions 13.2.3 to 13.2.5 hold and that the semidiscrete approximation is stable in the generalized sense. Then the totally discretized multistep method (13.2.21) is stable in the same sense if

||kQ||_h ≤ R,

where R is any constant with R < R₁.

Proof. We begin by considering z values in the neighborhood of z₀, where P₂(z₀) = 0. Because P₁ and P₂ have no roots in common and kQ is bounded, we have
||w||_h ≤ (constant/|z|^{q+1}) k|z|^q ||F||_h ≤ (constant k/|z|) ||F||_h,
which is the desired estimate. Thus, for the remaining part of the proof, we can assume that P₂(z) ≠ 0 and write the resolvent equation in the form
(S(z)I - kQ)w_j = -(kz^q/P₂(z)) F_j,   j = 1, 2, . . . .   (13.2.25)
First, we consider the case |z| ≥ c₀, where c₀ is a sufficiently large constant. If β₋₁ = 0, then |S(z)| ≥ constant |z|, and we obtain, for |S(z)| ≥ 2||kQ||_h,
||(S(z)I - kQ)⁻¹||_h ≤ 1/(|S(z)| - ||kQ||_h) ≤ 2/|S(z)|.

Therefore,
||w||_h ≤ (constant k/|z|) ||F||_h,

which is the desired estimate. If β₋₁ ≠ 0, then

lim_{|z|→∞} S(z) = β₋₁⁻¹.
We first assume β₋₁ < 0. Because S(z) ∉ Ω, there is a constant δ₁ > 0 such that |β₋₁⁻¹| - R ≥ δ₁. We get

||(S(z)I - kQ)⁻¹||_h ≤ 1/(|S(z)| - ||kQ||_h) ≤ constant,

for |z| sufficiently large; that is,

||w||_h ≤ (constant k/|z|) ||F||_h.
Next we assume β₋₁ > 0. Then, recalling the resolvent estimate for the semidiscrete problem, for 0 < δ₂ < β₋₁⁻¹ and |z| sufficiently large, we get

||(S(z)I - kQ)⁻¹||_h = (1/k) ||((S(z)/k)I - Q)⁻¹||_h ≤ (1/k) · constant k/Re S(z) ≤ constant.
Thus, the desired estimate also follows in this case. We have now finished the proof for |z| ≥ c₀, and continue with the case |z| ≤ c₀. In this case, ηk ≤ constant by definition of z. Therefore, it is sufficient to prove an estimate

There are three possibilities:

1. There is a constant δ > 0 such that

|S(z)| - k||Q||_h ≥ δ.

Then
Thus, the desired estimate holds. The above assumption is satisfied if the constant δ is chosen to be sufficiently small and Re S ≤ 0, because S(z) ∉ Ω.
2. Re S(z) ≥ δ₁ > 0. Recalling the resolvent estimate for the semidiscrete problem, we get

||(S(z)I - kQ)⁻¹||_h = k⁻¹ ||((S(z)/k)I - Q)⁻¹||_h ≤ (1/k) · constant k/Re S(z) ≤ constant/δ₁.

Thus,

||w||_h ≤ (constant/δ₁) · (k|z|^q/|P₂(z)|) ||F||_h ≤ constant k ||F||_h,

and the desired estimate holds.
3. Re S(z) > 0, but lim_{z→z₀} S(z) = iα, z₀ = e^{iφ}, α, φ real, |α| ≤ R. Let

z = e^{iφ + (iξ + η)k}.

By Lemma 13.2.2,

Re S(z) ≥ constant kη.

Therefore,

||(S(z)I - kQ)⁻¹||_h ≤ constant/(kη),

that is,

||w||_h ≤ (constant/η) ||F||_h.

This completes the final part of the proof.
EXERCISES

13.2.1. Write a program for the standard fourth-order Runge-Kutta method applied to
dv_j/dt = . . . ,   j = 1, 2, . . . , N - 1,
v₀(t) = 2v₁(t) - v₂(t),
v_N(t) = g(t),

such that generalized stability is guaranteed for k/h sufficiently small.
0 ≤ x < ∞,   t ≥ 0,
L₀u(0, t) = g(t).

Construct a method based on fourth-order approximation in space and fourth-order Runge-Kutta time discretization such that the resulting approximation is stable in the generalized sense.

13.2.3. Prove Theorem 13.2.2.
13.3. SEVERAL SPACE DIMENSIONS
In Section 11.5, we used the energy method for the stability analysis of multidimensional problems. In this section, we give a brief introduction to the general theory based on the Laplace transform technique when applied to problems in two space dimensions. The Kreiss condition again plays the main role. We formally define the periodic channel problem and investigate a few simple examples. We begin by considering two-dimensional hyperbolic systems
∂u/∂t = A ∂u/∂x + B ∂u/∂y + F,   0 ≤ x < ∞,   -∞ < y < ∞,   t ≥ 0,   (13.3.1)

with 2π-periodic solutions in the y direction and boundary conditions at x = 0. The grid is defined in the usual way by

i = 1, 2, . . . ,   j = 1, 2, . . . , N.

The approximation is
v_{ij}^{n+1} = Qv_{ij}^n + kF_{ij}^n,   (x_i, y_j) ∈ Ω_h,  t ≥ 0,
L₀v_{0j}^{n+1} = g_j^{n+1},   j = 1, 2, . . . , N,
v_{ij}^{n+1} = v_{i,j+N}^{n+1},   (x_i, y_j) ∈ Ω_h,
v_{ij}^0 = f_{ij},   (x_i, y_j) ∈ Ω_h.   (13.3.2)
By extending the gridfunction v_{ij}^n as a stepfunction in time, we can use a Fourier series expansion in y and take the Laplace transform in t. With f = F = 0 and z = e^{sk}, we get the transformed system, for v_i = v_i(ξ, z), g = g(ξ, z),

zv_i = Q(ξ)v_i,   i = 1, 2, . . . ,
L₀v₀ = g,   ||v||_h < ∞.   (13.3.3)
We are back to a one-dimensional problem, but a new parameter ξ = ω₂h₂ has entered the problem. Now we must check that there is no eigenvalue z with |z| > 1 for any ξ with |ξ| ≤ π. Furthermore, the Kreiss condition must hold uniformly in ξ:

Definition 13.3.1. The Kreiss condition is satisfied if the solutions of Eq. (13.3.3) satisfy

Σ_{ν=-r+1}^{r} |v_ν(ξ, z)|² ≤ K |g(ξ, z)|²,   |z| > 1,  |ξ| ≤ π,   (13.3.4)
where the constant K is independent of g, ξ, and z.

REMARK. We only consider the principal part of Q here. In general, the condition is formulated with |z| > 1 replaced by |z| > 1 + η₀k.
The Kreiss condition leads to estimates analogous to the ones for the one-dimensional case. For example, Parseval's relation for the Fourier transform and for the Laplace transform gives the following theorem.
Theorem 13.3.1. The solutions of Eq. (13.3.2) with f = F = 0 satisfy, for any fixed i, the estimate
Σ_{p=1}^{n} Σ_{j=1}^{N} |v_{ij}^p|² h₂k ≤ constant Σ_{p=1}^{n} Σ_{j=1}^{N} |g_j^p|² h₂k,   (13.3.5)

if, and only if, the Kreiss condition is satisfied.
Estimates for the full original problem are then derived in the familiar way by constructing an auxiliary problem with special boundary conditions. In general, it is of course possible that a straightforward multidimensional generalization of a stable one-dimensional approximation becomes unstable. For example, the backward Euler method for u_t = u_x is stable with linear extrapolation along the (approximate) characteristic at the boundary, but the two-dimensional fractional step version of it for u_t = u_x + u_y is not (see Exercise 13.3.2). The necessary analysis to check the Kreiss condition is, in general, complicated for problems in several space dimensions (sometimes even in one space dimension). Thuné (1990) has developed a software system called IBSTAB that is based on a numerical algorithm. We briefly indicate how it works for problems in two space dimensions. We want to find the stability limit for λ = k/h₁ = k/h₂. The characteristic equation, which defines the general solution of the Fourier-Laplace transformed difference equation, is written as
P(z, κ, ξ, λ) = 0.   (13.3.6)
Here P is a polynomial in z, κ, λ and a trigonometric polynomial in ξ. Let κ_ν, ν = 1, 2, . . . , l, be the solutions with |κ_ν(z)| < 1 for |z| > 1. Nontrivial solutions of the eigenvalue problem exist if Det(C) = 0, and we write this equation

g(z, κ₁, . . . , κ_l, ξ, λ) = 0.   (13.3.7)
In principle, one could use a general algorithm for systems of nonlinear equations to solve Eqs. (13.3.6) and (13.3.7) for 0 ≤ ξ ≤ 2π and a sequence of λ values. However, the computing time would be prohibitive. Therefore, one must take advantage of the special properties of the problem. For the special case λ = 0, the difference scheme reduces to an approximation of ∂u/∂t = 0. Therefore, it is natural to assume that all solutions z are located in the unit disc for λ = 0. The main idea of the algorithm is to move λ in small steps and to catch any solution z that appears outside the unit circle. Because z is a continuous function of λ, it is expected to be near the unit circle for λ = λ_j + δλ, if it was inside for λ = λ_j. The implementation is done by letting z take values along the circle |z| = 1 + η, solving for the κ_ν with |κ_ν| < 1, and checking the equation g(z) = 0. If |g(z)| ≪ 1, then, and only then, the system (13.3.6) and (13.3.7) is solved. If the iterative solver converges, then the correct solution (z, κ₁, . . . , κ_l) determines whether or not the Kreiss condition is violated. If it is, then the system also classifies the type of violation:

1. Eigenvalue |z| > 1.
2. Generalized eigenvalue |z| = 1 with at least one κ_ν, where |κ_ν| = 1 is a simple root of Eq. (13.3.6).
3. Generalized eigenvalue |z| = 1, where all roots κ of Eq. (13.3.6) on the unit circle are multiple roots.
4. Eigenvalue |z| = 1, (|κ_ν| < 1, ν = 1, 2, . . . , l).

The system calls cases 1 and 2 unstable and cases 3 and 4 weakly stable.
The variable stepsizes δλ, δξ, and δθ in z = (1 + η)e^{iθ} are calculated according to a special strategy that takes the properties of P(z) and g(z) into account. With M_L(z, ξ, λ) denoting the set of κ_ν with |κ_ν| < 1 given by Eq. (13.3.6), the algorithm is:

Algorithm IBSTAB
for λ = λ_A (λ ← λ + δλ) λ_S do
  for ξ = 0 (ξ ← ξ + δξ) 2π do
    for θ = 0 (θ ← θ + δθ) 2π do
      z ← (1 + η)e^{iθ}
      (κ₁, κ₂, . . . , κ_l) ← M_L(z, ξ, λ)
      if |g(z)|/|g′(z)| ≤ μ then
        solve (13.3.6), (13.3.7)
        check stability criteria
      update stepsizes δλ, δξ, δθ
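A stripped-down version of this scan is easy to program. The sketch below is not IBSTAB: the model scheme (Lax-Wendroff in x, first-order upwind in y, for u_t = u_x + u_y), the zeroth-order extrapolation boundary condition, and all parameters are our assumptions. It sweeps λ, ξ, and θ, collects the roots κ with |κ| < 1, and flags any point where the determinant condition nearly vanishes:

```python
import numpy as np

def kappa_roots(z, xi, lam):
    """Inside roots (|kappa| < 1) of the characteristic equation (13.3.6) for a
    toy split scheme for u_t = u_x + u_y (assumed model, not from the text)."""
    gy = 1 + lam*(np.exp(1j*xi) - 1)          # upwind amplification factor in y
    a = gy*(lam + lam**2)/2                   # quadratic in kappa:
    b = gy*(1 - lam**2) - z                   # a*kappa^2 + b*kappa + c = 0
    c = gy*(lam**2 - lam)/2
    return [r for r in np.roots([a, b, c]) if abs(r) < 1]

def g(z, kappas):
    """Determinant condition (13.3.7) for the numerical boundary condition
    v_0 = v_1 (extrapolation): a nontrivial eigensolution needs 1 - kappa = 0."""
    return np.prod([1 - r for r in kappas]) if kappas else 1.0

eta, tol, flagged = 1e-3, 1e-4, []
for lam in np.linspace(0.1, 0.5, 5):          # step-ratio sweep
    for xi in np.linspace(0.0, 2*np.pi, 40):  # tangential wavenumber
        for th in np.linspace(0.0, 2*np.pi, 80):
            z = (1 + eta)*np.exp(1j*th)
            if abs(g(z, kappa_roots(z, xi, lam))) < tol:
                flagged.append((lam, xi, th))
print(len(flagged))   # number of near-violations of the Kreiss condition found
```

For this particular model no point is flagged: a root κ near 1 forces |z| ≤ |Q(κ)| ≈ 1, which contradicts |z| = 1 + η, so the scan reports an empty list.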
EXERCISES

13.3.1. Consider the Crank-Nicholson method for u_t = u_x + u_y:

Prove that this approximation is stable with the boundary condition
13.3.2. Prove that the fractional step method
is stable with the boundary condition (13.3.8) but unstable with
v_{0,j}^{n+1} = 2v_{1,j+1}^n - v_{2,j+2}^{n-1},

or
13.3.3. Prove that the time-split Crank-Nicholson scheme
is stable with the boundary condition (13.3.8) only if k/h ≤ 2.

13.4. DOMAINS WITH IRREGULAR BOUNDARIES AND OVERLAPPING GRIDS
The results obtained for the periodic channel problem and for domains with piecewise linear boundaries can be used to construct stable approximations for problems in domains with smooth boundaries. The technique is an extension of the one used in Section 9.6 to prove well-posedness of the continuous problem, and we refer back to Figure 9.6.3. Part of the domain is shown in Figure 13.4.1. The idea is to use a rectangular grid in the inner domain Ω₁ and a grid that is aligned with the boundary ∂Ω in the outer domain Ω - Ω₁. The inner grid is denoted by Ω_h and has a boundary Γ_h, which is piecewise linear if the gridpoints are connected. The outer grid Ω̃_h is bounded on the inside by Γ̃₁ and on the outside by Γ̃₂. These boundary points are located on the smooth boundaries of Ω - Ω₁. The two grids Ω_h and Ω̃_h overlap. (Formally, Ω_h and Ω̃_h are defined such that they do not contain the boundary points.) An application of this overlapping grid technique is shown in Figures 13.4.4 and 13.4.7.
Figure 13.4.1.
Figure 13.4.2. Overlapping grids.
Let us assume that a second-order accurate explicit method is used in the inner domain:

(13.4.1)

In the outer domain, we use transformed coordinates x̃ and ỹ, as in Section 10.6, and the transformed difference scheme is

(13.4.2)

The scheme shown in Figure 13.4.1 needs boundary values on Γ_h, and we use interpolation from the outer grid:

(13.4.3)

Here L is an interpolation operator, which is, usually, based on bilinear interpolation using four gridpoints in Ω̃_h. Similarly, boundary values for Eq. (13.4.2) are provided by
=
Lu n+ 1 '
(1 3.4.4)
where also L is an interpoJation operator, At the outer boundary r\, which has its gridpoints on the original boundary an , we use the boundary conditions for the differential equation together with extra numerical conditions if necessary: (1 3.4,5) In the y direction, the solutions are periodic. Thus, the method shown by Eqs.
Figure 13.4.3.
(13.4.2), (13.4.4), and (13.4.5), when considered by itself, is a periodic channel approximation. The interpolation is used for all variables in the vectors v_{ij} and ṽ_{ij}, respectively. If the problem is hyperbolic, this means that we are, actually, overspecifying at some parts of the boundary for both the inner and the outer domain if we consider the algorithm locally in time. But there is no contradiction in this procedure. Overspecification is not unstable; it is the accuracy that cannot be retained if accurate data are not available. In our case, accurate data are available from the outer grid. Unfortunately, stability results are only available for one-dimensional problems. However, computations show very robust behavior. We finish this section by presenting a computation done by Eva Pärt-Enander. The problem is to compute hypersonic inviscid flow around a body as shown in Figure 13.4.3. The flow is governed by the Euler equations, and an upwind method (Roe, 1985) is used. The computational domain and the overlapping grids are shown in Figure 13.4.4. With one single grid covering
Figure 13.4.4.
Figure 13.4.5.
the whole computational domain, there is an inherent difficulty caused by the concave corner on the body. This corner makes it impossible to construct a smooth transformation to a rectangle. There are ways of smoothing the grid, but near the corner, a discontinuity in the gridlines will always remain. This difficulty is eliminated by overlapping the critical area with a separate subgrid. Figure 13.4.5 shows iso-Mach lines (curves with equal speed) when a steady state has been reached.
Figure 13.4.6.
Figure 13.4.7.
When the Euler equations are replaced by the Navier-Stokes equations, there is a boundary layer near the body, because of zero velocity at the body. It is then convenient to use a narrow subgrid covering the boundary layer, as shown in Figure 13.4.6. The overlapping grids are shown in Figure 13.4.7. The iso-Mach lines for this case are shown in Figure 13.4.8.
Figure 13.4.8.
Figure 13.4.9.

In the last calculation, a modification of the geometry is made corresponding to a canopy window. Using the overlapping grid technique, it is easy to construct a local subgrid so that the details of the flow can be computed. Figure 13.4.9 shows the iso-Mach lines of the result around the window.
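The overlapping-grid coupling of this section can be exercised on a one-dimensional model problem (compare Exercise 13.4.1 below). In this sketch, two grids for u_t + u_x = 0 overlap on [0.9, 1.1]; each takes a Lax-Wendroff step and then receives its interior boundary value by linear interpolation from the other grid (the grids, parameters, and inflow data are all our choices):

```python
import numpy as np

def lw_step(v, lam):
    """One interior Lax-Wendroff step for u_t + u_x = 0 (needs both neighbors)."""
    out = v.copy()
    out[1:-1] = (v[1:-1] - 0.5*lam*(v[2:] - v[:-2])
                 + 0.5*lam**2*(v[2:] - 2*v[1:-1] + v[:-2]))
    return out

def interp(xg, vg, x):
    """Linear interpolation of the grid function vg (on grid xg) at point x."""
    return np.interp(x, xg, vg)

h, lam, T = 0.01, 0.8, 0.5
k = lam*h
xl = np.arange(0.0, 1.1 + h/2, h)          # left grid; overlap is [0.9, 1.1]
xr = np.arange(0.9, 2.0 + h/2, h)          # right grid
u = lambda x, t: np.sin(2*np.pi*(x - t))   # exact solution of u_t + u_x = 0
vl, vr = u(xl, 0.0), u(xr, 0.0)
t = 0.0
while t < T - 1e-12:
    vl_new, vr_new = lw_step(vl, lam), lw_step(vr, lam)
    t += k
    vl_new[0] = u(0.0, t)                      # physical inflow condition
    vl_new[-1] = interp(xr, vr_new, xl[-1])    # interpolate from right grid
    vr_new[0] = interp(xl, vl_new, xr[0])      # interpolate from left grid
    vr_new[-1] = 2*vr_new[-2] - vr_new[-3]     # outflow: linear extrapolation
    vl, vr = vl_new, vr_new
err = max(np.max(np.abs(vl - u(xl, t))), np.max(np.abs(vr - u(xr, t))))
print(err < 1e-2)
```

The interpolation points 1.1 and 0.9 lie strictly in the interior of the other grid, so each boundary value is built from already-updated interior data; with a fixed overlap width this is exactly the situation in which Starius's result for dissipative schemes applies.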
EXERCISES

13.4.1. Construct a one-dimensional scalar hyperbolic model problem with overlapping grids. Use linear interpolation between the grids and prove stability for the Lax-Wendroff scheme.
BIBLIOGRAPHIC NOTES

The general theory for initial-boundary-value problems for difference approximations was introduced in a series of papers by Kreiss in the 1960s. [The Godunov-Ryabenkii condition was introduced by Godunov and Ryabenkii
(1963).] In Kreiss (1968), a complete theory for dissipative one-step methods was presented [Osher (1969) relaxed some of the conditions]. It was shown that dissipative and consistent approximations satisfying the Kreiss condition are strongly stable if the differential equation is strictly hyperbolic. This is a stronger version of Theorem 13.2.2. In Gustafsson, Kreiss, and Sundström
(1972), a theory was presented that also included multistep schemes as well as nondissipative schemes and variable coefficients. Michelson (1983) extended the theory to the multidimensional case. In these papers, there is no assumption on symmetric coefficient matrices, and the stability concept used is that of strong stability in the generalized sense. By using the symmetry assumption in large parts of our presentation, it is possible to use a simpler technique. Furthermore, in this case, strong stability follows from the Kreiss condition. The sufficient (and convenient) stability criteria mentioned in Theorem 13.1.3 were given by Goldberg and Tadmor (1981) (where also Lemma 13.1.6 was proved). These criteria have been further refined in a series of papers by the same authors (1985, 1987, 1989). The latest are found in Goldberg (1991). The construction of the auxiliary boundary conditions and the estimate in Lemma 13.1.8 is due to Wu (1994). The results on the method of lines in Section 13.2 were obtained by Kreiss and Wu (1993). Error estimates follow directly from strong stability. For generalized stability, we must first subtract a proper function to make the initial and boundary conditions homogeneous, and in this process one may lose a factor h. However, it has been shown by Gustafsson (1975) that if there is no generalized eigenvalue at z = 1, then one can still have one order lower accuracy for the extra numerical boundary conditions. The stability condition k|a| < 2h given at the end of Section 13.1 for the Crank-Nicholson scheme was derived by Sköllermo (1975). The stability problem for overlapping grids was treated for the one-dimensional hyperbolic case by Starius (1980). He proved that if the overlapping region is fixed and independent of the mesh-size, then stability follows for dissipative schemes. A general system CMPGRD for generating overlapping grids is described in Chesshire and Henshaw (1990), and this system was used for computation of Navier-Stokes solutions in Henshaw (1987, 1992).
APPENDIX A.1
RESULTS FROM LINEAR ALGEBRA
We will collect some selected results from linear algebra needed in this book. Let A = (a_ij) be a complex m × m matrix.

Lemma A.1.1. If A has a complete set of linearly independent eigenvectors v_i, then, with T = (v_1, ..., v_m),

    T^{-1} A T = diag(λ_1, ..., λ_m),

where the λ_i are the eigenvalues of A. If the eigenvalues of A are distinct (λ_i ≠ λ_j if i ≠ j), then A has a complete set of eigenvectors.

Lemma A.1.2. Jordan Canonical Form. For every matrix A there exists a matrix T such that

    T^{-1} A T = J = diag(J_1, ..., J_r),

where the submatrices J_ν have the form

    J_ν = λ_ν I + N_ν,   ν = 1, 2, ..., r,

with N_ν zero except for ones on the superdiagonal. T^{-1} A T is called the Jordan canonical form.
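The Jordan form is easy to experiment with in a computer algebra system. The following small illustration (our own sketch, not from the book) uses SymPy's `jordan_form`, which returns T and J with A = T J T^{-1}; the matrix below has a defective eigenvalue, so one Jordan block is 2 × 2.

```python
import sympy as sp

# A has eigenvalue 2 with algebraic multiplicity 2 but only one
# eigenvector, so its Jordan form contains a 2 x 2 block.
A = sp.Matrix([[2, 1, 0],
               [0, 2, 0],
               [0, 0, 3]])

# SymPy returns T and J with A = T * J * T**(-1)
T, J = A.jordan_form()

print(J)
```

For a matrix with distinct eigenvalues the same call returns a diagonal J, in agreement with Lemma A.1.1.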
Lemma A.1.3. Schur's Lemma. For every matrix A there is a unitary matrix U such that U*AU is upper triangular.
Lemma A.1.4. Any matrix A satisfies the inequalities

    ρ(A) ≤ sup_{|u|=1} |(Au, u)| ≤ |A|,

where ρ(A) is the spectral radius of A, ρ(A) = max_i |λ_i|, and the λ_i are the eigenvalues of A. The set of values (Au, u) for all u with |u| = 1 is called the numerical range or field of values of A.
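These inequalities can be checked numerically (our own illustration, not from the text). For a unit eigenvector v one has (Av, v) = λ, so sampling the numerical range over unit vectors that include the eigenvectors exhibits ρ(A) ≤ sup_{|u|=1} |(Au, u)| ≤ |A|:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))

# Spectral radius rho(A) = max |lambda_i|
eigvals, eigvecs = np.linalg.eig(A)
rho = max(abs(eigvals))

# Sample |(Au, u)| over unit vectors; include the eigenvectors,
# for which (Av, v) = lambda exactly, so the sampled max >= rho.
samples = []
for k in range(5):
    v = eigvecs[:, k] / np.linalg.norm(eigvecs[:, k])
    samples.append(abs(np.vdot(v, A @ v)))
for _ in range(2000):
    u = rng.standard_normal(5) + 1j * rng.standard_normal(5)
    u /= np.linalg.norm(u)
    samples.append(abs(np.vdot(u, A @ u)))
num_range = max(samples)

opnorm = np.linalg.norm(A, 2)   # |A| = largest singular value
print(rho, num_range, opnorm)   # nondecreasing, as Lemma A.1.4 predicts
```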
Lemma A.1.5. If the spectral radius satisfies ρ(A) ≤ 1 − δ for δ > 0, then |A|_H ≤ 1 − δ_1 for some δ_1 > 0, where |u|_H = (u, Hu)^{1/2} for a positive definite matrix H.
Lemma A.1.6. Let M = M(x) be an m × m analytic matrix function whose eigenvalues are contained in two nonintersecting sets E_1 and E_2. Suppose E_1 contains p eigenvalues and E_2 contains m − p eigenvalues. Then there exists a nonsingular analytic matrix T(x) such that

    T(x) M(x) T^{-1}(x) = ( M_11    0
                              0   M_22 ),

where M_11 is p × p and has eigenvalues contained in E_1, and M_22 is (m − p) × (m − p) and has eigenvalues contained in E_2.

Lemma A.1.7. Let A be a matrix with distinct eigenvalues λ_j and eigenvectors u_j, and let B be a matrix with |B| ≤ 1; then, for sufficiently small ε > 0, the eigenvalues of A + εB are also distinct, and the eigenvalues and eigenvectors, μ_j and ũ_j, of A + εB satisfy

    |λ_j − μ_j| ≤ C_1 ε,   |u_j − ũ_j| ≤ C_2 ε,
for constants C_1 and C_2.

We say that a scalar-, vector-, or matrix-valued function u of a vector x is Lipschitz continuous for x in a domain Ω if there exists a constant C > 0 such that

    |u(x) − u(x′)| ≤ C |x − x′|   for all x, x′ ∈ Ω.
Lemma A.1.8. Suppose that A(x) is a Lipschitz continuous m × m matrix with distinct eigenvalues. Then there exists a nonsingular Lipschitz continuous m × m matrix T(x) such that

    T^{-1}(x) A(x) T(x) = diag(λ_1, ..., λ_m).
Let C^p(Ω) be the space of functions with p continuous derivatives on Ω. The following lemmas then hold.

Lemma A.1.9. Suppose that A(x) ∈ C^p is an m × m matrix with distinct eigenvalues. Then there exists a nonsingular m × m matrix T(x) ∈ C^p such that

    T^{-1}(x) A(x) T(x) = diag(λ_1, ..., λ_m).

Lemma A.1.10. Suppose that A(x) ∈ C^p. If the eigenvalues of A satisfy

    Re λ_j ≤ −δ < 0,

then there is a transformation T(x) ∈ C^p such that
APPENDIX A.2
LAPLACE TRANSFORM
In this section, we discuss the Laplace transform, which is closely related to the Fourier transform. Let u(t), 0 ≤ t < ∞, be a continuous function, and assume that there are constants C and α such that

    |u(t)| ≤ C e^{αt}.                                        (A.2.1)

Then, for η > α, the function

    v(t) = { e^{−ηt} u(t),   t ≥ 0,
           { 0,              t < 0,

belongs to L_2(t) and its Fourier transform

    v̂(ξ) = 𝓕v(t) = (1/√(2π)) ∫_{−∞}^{∞} e^{−iξt} v(t) dt      (A.2.2)

exists. Also,

    v(t) = (1/√(2π)) ∫_{−∞}^{∞} e^{iξt} v̂(ξ) dξ,              (A.2.3)

and by Parseval's relation

    ∫_{−∞}^{∞} |v̂(ξ)|² dξ = ∫_{−∞}^{∞} |v(t)|² dt.            (A.2.4)

Let s = iξ + η, with ξ, η real. For η > α, the Laplace transform of u is defined by
    û(s) = ∫_0^∞ e^{−st} u(t) dt.                             (A.2.5)

By Eq. (A.2.1),

    |û(s)| ≤ C/(η − α).                                       (A.2.6)

By Eq. (A.2.3),

    û(s) = √(2π) 𝓕v = √(2π) v̂(ξ).                            (A.2.7)

Therefore, we obtain, for any η > α,

    u(t) = e^{ηt} v(t) = (1/(2π)) ∫_{−∞}^{∞} e^{(iξ+η)t} û(iξ + η) dξ.   (A.2.8)

The last formula is often written in the form

    u(t) = (1/(2πi)) ∫_ℒ e^{st} û(s) ds,                      (A.2.9)

where ℒ denotes the vertical line Re s = η in the complex s plane. Parseval's relation becomes

    ∫_0^∞ e^{−2ηt} |u(t)|² dt = (1/(2π)) ∫_{−∞}^{∞} |û(iξ + η)|² dξ.    (A.2.10)
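As a concrete check of (A.2.10) (our own example, not from the text), take u(t) = e^{−t}, so α = −1 and û(s) = 1/(s + 1); with η = 0 both sides of (A.2.10) equal 1/2. A short SciPy verification:

```python
import numpy as np
from scipy.integrate import quad

eta = 0.0                          # any eta > alpha = -1 will do
u = lambda t: np.exp(-t)           # |u(t)| <= e^{alpha t}, alpha = -1
uhat = lambda s: 1.0 / (s + 1.0)   # Laplace transform, known in closed form

# Left side of (A.2.10)
lhs, _ = quad(lambda t: np.exp(-2 * eta * t) * u(t) ** 2, 0, np.inf)

# Right side of (A.2.10)
rhs, _ = quad(lambda xi: abs(uhat(1j * xi + eta)) ** 2, -np.inf, np.inf)
rhs /= 2 * np.pi

print(lhs, rhs)   # both 0.5
```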
PROPERTIES OF THE LAPLACE TRANSFORM
Suppose that u(t) has one continuous derivative and that du/dt also satisfies the estimate (A.2.1). Integration by parts gives us, for s ≠ 0,

    û(s) = ∫_0^∞ e^{−st} u(t) dt = −(1/s) e^{−st} u(t) |_0^∞ + (1/s) ∫_0^∞ e^{−st} (du/dt) dt,

that is,

    s û(s) = \widehat{du/dt} + u(0).                          (A.2.11)
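For instance (our own check, not from the text), with u(t) = e^{−t} we have û(s) = 1/(s + 1), du/dt = −e^{−t}, and u(0) = 1, so (A.2.11) says s/(s + 1) = −1/(s + 1) + 1. Numerically, at s = 2:

```python
import numpy as np
from scipy.integrate import quad

s = 2.0
u = lambda t: np.exp(-t)
du = lambda t: -np.exp(-t)          # du/dt

uhat, _ = quad(lambda t: np.exp(-s * t) * u(t), 0, np.inf)
duhat, _ = quad(lambda t: np.exp(-s * t) * du(t), 0, np.inf)

# Both sides of (A.2.11): s*uhat and (du/dt)-hat + u(0)
print(s * uhat, duhat + u(0.0))     # both 2/3
```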
Equation (A.2.11) relates the Laplace transform û(s) of u to that of du/dt. It also tells us that, for large |s|, Re s > α,

    |û(s)| ≤ constant/|s|.                                    (A.2.12)
Clearly we can use repeated integration by parts to relate u(s) to higher derivatives. In particular, if dPu/dtP is continuous and satisfies Eq. (A.2 . I ) and if dj u(O)/dtj = 0, j = 0, 1 , . . . , p 1 , then -
(A.2. 13) Using the theory of analytic functions one can prove that, for Re s > ex, the Laplace transfonn u(s) is an analytic function of s which by Eq. (A.2. 1 2) vanishes as l s i ---7 00, provided du/dt satisfies the above conditions. Therefore, one can replace the path 25 in Eq. (A.2.9) by any path 25 1 , as indicated in Figure A.2. l . In our applications of the Laplace transfonn, we often consider functions of x, t. Suppose that u(x, t) is a continuous function for ° ::; t < 00, ° ::; x ::; t, which satisfies the estimate (A.2. I ) for all x. Then
u
u(x, s)
=
fo� e-stu(x, t) dt,
°
::; x ::;
t,
Re s >
ex,
denotes the Laplace transfonn in time for every fixed x. The relation (A.2. l l ) reads
[Figure A.2.1. The complex s plane with the vertical lines Re s = α and Re s = η; the integration path ℒ along Re s = η may be deformed to a path ℒ_1 to the right of Re s = α.]
    s û = \widehat{∂u/∂t} + u(x, 0).                          (A.2.14)

Also,

    ∂û(x, s)/∂x = ∫_0^∞ e^{−st} ∂u(x, t)/∂x dt = \widehat{∂u/∂x}.   (A.2.15)
Let ‖u(·, t)‖ and ‖û(·, s)‖ denote the L_2 norm in x for every fixed t and s, respectively. The estimate (A.2.6) gives us

    |û(x, s)| ≤ C/(η − α).                                    (A.2.16)

Thus, if instead of Eq. (A.2.1)

    ‖u(·, t)‖ ≤ C e^{αt}

holds, then

    ‖û(·, s)‖ ≤ C/(η − α).

Parseval's relation (A.2.10) holds for every fixed x. Integrating over x gives us, for η > α, s = iξ + η,

    ∫_0^∞ e^{−2ηt} ‖u(·, t)‖² dt = (1/(2π)) ∫_{−∞}^{∞} ‖û(·, iξ + η)‖² dξ.   (A.2.17)
APPENDIX A.3
ITERATIVE METHODS
Let A be a nonsingular n × n matrix and x, b vectors with n elements. The linear system

    Ax = b                                                    (A.3.1)

has a unique solution. Let f(x) be a smooth vector function and ε > 0 a parameter. We want to solve the nonlinear system of equations

    Ax + ε f(x) = b                                           (A.3.2)

by using the iteration

    A x^{n+1} + ε f(x^n) = b,   n = 0, 1, 2, ...,             (A.3.3)
    x^0 = A^{−1} b.
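A minimal implementation of the iteration (A.3.3) (our own sketch; the matrix, right-hand side, and f below are illustrative choices, not from the book). Each step solves one linear system with the fixed matrix A; here f(x) = sin(x) componentwise is uniformly Lipschitz with L = 1, and ε is small enough that τ = ε|A^{−1}|L < 1.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
eps = 0.5
f = np.sin               # uniformly Lipschitz, L = 1

x = np.linalg.solve(A, b)                        # x^0 = A^{-1} b
for n in range(100):
    x_new = np.linalg.solve(A, b - eps * f(x))   # A x^{n+1} = b - eps f(x^n)
    if np.linalg.norm(x_new - x) < 1e-14:
        break
    x = x_new

residual = np.linalg.norm(A @ x + eps * f(x) - b)
print(x, residual)
```

Convergence is linear with rate τ, so shrinking ε proportionally shrinks the contraction factor.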
Theorem A.3.1. Assume that f(x) is uniformly Lipschitz continuous, that is, there is a constant L such that, for all x, y,

    |f(x) − f(y)| ≤ L |x − y|.                                (A.3.4)

If τ = ε |A^{−1}| L < 1, then Eq. (A.3.2) has a unique solution x̄ and

    lim_{n→∞} x^n = x̄.                                       (A.3.5)

Proof. Let x, y be two solutions. Then

    |x − y| = ε |A^{−1}(f(y) − f(x))| ≤ τ |x − y|,
that is,

    (1 − τ) |x − y| ≤ 0.

Therefore, x = y and uniqueness follows. Subtracting

    A x^n + ε f(x^{n−1}) = b

from Eq. (A.3.3) gives us

    x^{n+1} − x^n = −ε A^{−1} (f(x^n) − f(x^{n−1})).          (A.3.6)

Thus,

    |x^{n+1} − x^n| ≤ τ |x^n − x^{n−1}| ≤ τ² |x^{n−1} − x^{n−2}| ≤ ... ≤ τ^n |x^1 − x^0|.

Let m > n; then

    |x^m − x^n| = | Σ_{i=n}^{m−1} (x^{i+1} − x^i) | ≤ Σ_{i=n}^{m−1} τ^i |x^1 − x^0| ≤ (τ^n/(1 − τ)) |x^1 − x^0|.   (A.3.7)

Therefore, {x^n} is a Cauchy sequence which converges to a vector x̄. The vector x̄ is the unique solution of Eq. (A.3.2) because

    lim_{n→∞} |f(x^n) − f(x̄)| ≤ L lim_{n→∞} |x^n − x̄| = 0.
Let n be fixed and let m → ∞. Then the estimate (A.3.5) follows from Eq. (A.3.7). This proves the theorem.

In most applications the nonlinear part f(x) is not uniformly Lipschitz continuous. As an example consider

    x + ε x² = b.
Then

    f(x) − f(y) = x² − y² = (x + y)(x − y),                   (A.3.8)

and there is no constant L such that (A.3.4) holds for all x, y. Instead we introduce the following definition.

Definition A.3.1. f(x) is called locally Lipschitz continuous if, for every x^0, y^0, and R > 0, there exists a constant L = L(x^0, y^0, R) such that

    |f(x) − f(y)| ≤ L |x − y|   for |x − x^0| ≤ R, |y − y^0| ≤ R.

The function x² is locally Lipschitz continuous: by Eq. (A.3.8) we can choose L = |x^0| + |y^0| + 2R. Theorem A.3.1 can now be modified as follows.
Theorem A.3.2. Assume that f(x) is locally Lipschitz continuous and let L = L(x^0, 1) be the local Lipschitz constant for the ball |x − x^0| ≤ 1. If

    τ = ε |A^{−1}| L < 1,   (ε/(1 − τ)) |A^{−1} f(x^0)| < 1,

then the system (A.3.2) has a solution x̄ with

    |x̄ − x^0| < 1.                                           (A.3.10)

Also, x̄ is locally unique, that is, there is no other solution in the ball |x − x^0| < 1.

Proof. Consider the iteration (A.3.3). We have

    A(x^1 − x^0) = −ε f(x^0),

that is,

    |x^1 − x^0| ≤ ε |A^{−1} f(x^0)|,

and by assumption

    (1/(1 − τ)) |x^1 − x^0| < 1.                              (A.3.11)
We shall use induction to prove that the sequence {x^m} defined by Eq. (A.3.3) never leaves the ball B: |x − x^0| ≤ 1. By Eq. (A.3.11), x^1 ∈ B. Assume that x^j ∈ B for j = 0, 1, ..., m − 1. By Eq. (A.3.6),

    |x^{j+1} − x^j| ≤ τ |x^j − x^{j−1}|,   j = 1, ..., m − 1.

Therefore, we can use Eq. (A.3.7) with n = 1 and obtain

    |x^m − x^1| ≤ (τ/(1 − τ)) |x^1 − x^0|.

Thus,

    |x^m − x^0| ≤ |x^m − x^1| + |x^1 − x^0| ≤ (1/(1 − τ)) |x^1 − x^0| < 1,

and x^m ∈ B. Thus, we have shown that the sequence {x^m} does not leave the ball B. Therefore, all the estimates of Theorem A.3.1 hold, and Theorem A.3.2 follows.
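The scalar example x + εx² = b can be iterated directly (a sketch with illustrative numbers, not from the book); for small ε the iterates stay near x^0 = b and converge to the positive root of εx² + x − b = 0:

```python
import numpy as np

eps, b = 0.1, 1.0      # A = 1, f(x) = x^2 in (A.3.2)

x = b                  # x^0 = A^{-1} b
for n in range(100):
    x = b - eps * x ** 2      # iteration (A.3.3)

exact = (-1 + np.sqrt(1 + 4 * eps * b)) / (2 * eps)   # positive root
print(x, exact)
```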
REFERENCES
S. Abarbanel, D. Gottlieb, and E. Tadmor: Spectral methods for discontinuous problems. Numerical Methods for Fluid Dynamics II, Eds. K.W. Morton and M.J. Baines, Clarendon Press, Oxford, pp. 128-153 (1986). R.M. Beam and R.F. Warming: An implicit factored scheme for the compressible Navier-Stokes equations. AIAA J., 16, No. 4, pp. 393-402 (1978). P. Brenner, V. Thomée, and L.B. Wahlbin: Besov spaces and applications to difference methods for initial value problems. Lecture Notes in Mathematics, Vol. 434, Springer-Verlag, New York (1975). C. Canuto, M.Y. Hussaini, A. Quarteroni, and T.A. Zang: Spectral Methods in Fluid Dynamics. Springer, Berlin (1988). M. Carpenter, D. Gottlieb, and S. Abarbanel: Time-stable boundary conditions for finite-difference schemes solving hyperbolic systems: Methodology and application to high-order compact schemes. J. Comp. Phys., 111, pp. 220-236 (1994).
J.G. Charney, R. Fjørtoft, and J. von Neumann: Numerical integration of the barotropic vorticity equation. Tellus, 2, No. 4, pp. 237-254 (1950).
G. Chesshire and W.D. Henshaw: Composite overlapping meshes for the solution of partial differential equations. J. Comp. Phys., 90, pp. 1-64 (1990). R. Chin and G. Hedstrom: A dispersion analysis for difference schemes. Math. Comp., 32, pp. 1163-1170 (1978). R.V. Churchill and J.W. Brown: Fourier Series and Boundary Value Problems. McGraw-Hill, New York (1978). E. Coddington and N. Levinson: Theory of Ordinary Differential Equations. McGraw-Hill, New York (1955). S. Conte and C. de Boor: Elementary Numerical Analysis. McGraw-Hill, New York (1972). J.W. Cooley and J.W. Tukey: An algorithm for the machine calculation of complex Fourier series. Math. Comp., 19, pp. 297-301 (1965). R. Courant and K.O. Friedrichs: Supersonic Flow and Shock Waves. Interscience Publishers, New York (1948). R. Courant, K.O. Friedrichs, and H. Lewy: Über die partiellen Differenzengleichungen der mathematischen Physik. Mathematische Annalen, 100, pp. 32-74 (1928). R. Courant and D. Hilbert: Methods of Mathematical Physics. Vol. 1. Interscience Publishers, New York (1953). J. Crank and P. Nicholson: A practical method for numerical evaluation of solutions of partial differential equations of the heat-conduction type. Proc. Cambridge Philosophical Soc., 43, No. 50, pp. 50-67 (1947).
J. Douglas, Jr.: Alternating direction methods for parabolic systems in m-space variables. J. Assoc. Comp. Machinery, 9, pp. 450-456 (1962). J. Douglas, Jr.: On the numerical integration of ∂²u/∂x² + ∂²u/∂y² = ∂u/∂t by implicit methods. J. Soc. Industrial Appl. Math., 3, pp. 42-65 (1955).
J. Douglas, Jr. and J.E. Gunn: A general formulation of alternating direction methods, I. Num. Math., 6, pp. 428-453 (1964). E.C. DuFort and S.P. Frankel: Stability conditions in the numerical treatment of parabolic differential equations. Math. Tables Other Aids Computation, 7, pp. 135-152 (1953). W.R. Dyksen and J.R. Rice: Symmetric versus nonsymmetric differencing. SIAM J. Sci. Stat. Comp., 6, No. 1, pp. 45-48 (1985).
B. Engquist: A difference method for initial boundary value problems on general domains in two space dimensions. DCG Progress Report 1978, Uppsala University, Dept. of Computer Science, Report No. 82 (1978). B. Engquist, P. Lötstedt, and B. Sjögreen: Nonlinear filters for efficient shock computation. Math. Comp., 52, No. 186, pp. 509-537 (1989). B. Engquist and S. Osher: One-sided difference schemes and transonic flow. Proc. Nat. Acad. Sci. USA, 77, No. 6, pp. 3071-3074 (1980). B. Fornberg: On a Fourier method for the integration of hyperbolic equations. SIAM J. Num. Anal., 12, pp. 509-528 (1975). B. Fornberg and D.M. Sloan: A review of pseudospectral methods for solving partial differential equations. Acta Numerica, in press. L. Fox: The Numerical Solution of Two-Point Boundary Problems in Ordinary Differential Equations. Oxford University Press, Fairlawn, New Jersey (1957). K.O. Friedrichs: Symmetric hyperbolic linear differential equations. Comm. Pure Appl. Math., 7, pp. 345-390 (1954). K.O. Friedrichs: Symmetric positive linear differential equations. Comm. Pure Appl. Math., 11, pp. 333-418 (1958). C.W. Gear: Numerical Initial Value Problems in Ordinary Differential Equations. Prentice-Hall, Englewood Cliffs, NJ (1971). E. Godlewski and P.-A. Raviart: Hyperbolic systems of conservation laws. SMAI, Paris (1990). S.K. Godunov and V.S. Ryabenkii: Spectral criteria for the stability of boundary problems for non-self-adjoint difference equations. Uspekhi Mat. Nauk, 18 (1963). M. Goldberg: Simple stability criteria for difference approximations of hyperbolic initial-boundary value problems. II. Third International Conference on Hyperbolic Problems, Eds., B. Engquist and B. Gustafsson, Studentlitteratur, Lund, Sweden (1991). M. Goldberg and E. Tadmor: Convenient stability criteria for difference approximations of hyperbolic initial-boundary value problems. Math. Comp., 44, pp. 361-377 (1985). M. Goldberg and E. Tadmor: Convenient stability criteria for difference approximations of hyperbolic initial-boundary value problems. II. Math. Comp., 48, pp. 503-520 (1987). M. Goldberg and E. Tadmor: Scheme-independent stability criteria for difference approximations of hyperbolic initial-boundary value problems. II. Math. Comp., 36, pp. 605-626 (1981). M. Goldberg and E. Tadmor: Simple stability criteria for difference approximations of hyperbolic initial-boundary value problems. Nonlinear Hyperbolic Equations - Theory, Numerical Methods and Applications, Eds., J. Ballmann and R. Jeltsch, Vieweg Verlag, Braunschweig, Germany, pp. 179-185 (1989). J. Goodman and R. LeVeque: On the accuracy of stable schemes for 2D scalar conservation laws. Math. Comp., 45, pp. 15-21 (1985). D. Gottlieb, B. Gustafsson, P. Olsson, and B. Strand: On the superconvergence of Galerkin methods for hyperbolic IBVP. RIACS Technical Report 93-07 (1993). To appear in SIAM J. Num. Anal.
D. Gottlieb and S. Orszag: Numerical analysis of spectral methods: Theory and applications. CBMS Regional Conference Series in Applied Mathematics, 26, SIAM, Philadelphia (1977). D. Gottlieb and E. Tadmor: Recovering pointwise values of discontinuous data within spectral accuracy. Progress in Scientific Computing, 6, pp. 357-375 (1984). D. Gottlieb and E. Turkel: On time discretization for spectral methods. Stud. Appl. Math., 63, pp. 66-86 (1980). B. Gustafsson: The convergence rate for difference approximations to general mixed initial boundary value problems. SIAM J. Num. Anal., 18, pp. 179-190 (1981). B. Gustafsson: The convergence rate for difference approximations to mixed initial boundary value problems. Math. Comp., 29, pp. 396-406 (1975). B. Gustafsson and H.-O. Kreiss: Difference approximations of hyperbolic problems with different time scales. I: The reduced problem. SIAM J. Num. Anal., 20, No. 1, pp. 46-58 (1983). B. Gustafsson, H.-O. Kreiss, and A. Sundström: Stability theory of difference approximations for mixed initial boundary value problems. II. Math. Comp., 26, pp. 649-686 (1972). B. Gustafsson and J. Oliger: Stable boundary approximations for implicit time discretizations for gas dynamics. SIAM J. Sci. Stat. Comp., 3, No. 4, pp. 408-421 (1982). J. Hadamard: Lectures on Cauchy's Problem in Linear Partial Differential Equations. Yale University, New Haven (1921). E. Hairer, S.P. Nørsett, and G. Wanner: Solving Ordinary Differential Equations I, II. Springer-Verlag, New York (1980). A. Harten: ENO schemes with subcell resolution. J. Comp. Phys., 83, pp. 148-184 (1989). A. Harten, B. Engquist, S. Osher, and S. Chakravarthy: Uniformly high order accurate essentially nonoscillatory schemes, III. J. Comp. Phys., 71, pp. 231-303 (1987). A. Harten, J. Hyman, and P. Lax: On finite-difference approximations and entropy conditions for shocks. Comm. Pure Appl. Math., 29, pp. 297-322 (1976). A. Harten, S. Osher, B. Engquist, and S. Chakravarthy: Some results on uniformly high-order accurate essentially nonoscillatory schemes. Appl. Num. Math., 2, pp. 347-377 (1986). G. Hedstrom: Models of difference schemes for u_t + u_x = 0 by partial differential equations. Math. Comp., 29, pp. 969-977 (1975). W.D. Henshaw: A fourth-order accurate method for the incompressible Navier-Stokes equations on overlapping grids. Research Report RC 18609, IBM Research Division, Yorktown Heights, NY (1992). W.D. Henshaw: A scheme for the numerical solution of hyperbolic systems of conservation laws. J. Comp. Phys., 68, No. 1 (1987). W.D. Henshaw: The Numerical Solution of Hyperbolic Systems of Conservation Laws. Ph.D. Thesis, California Institute of Technology (1985). W.D. Henshaw, H.-O. Kreiss, and L. Reyna: A fourth-order accurate difference approximation for the incompressible Navier-Stokes equations. Research Report RC 18604, IBM Research Division, Yorktown Heights, NY (1994). C. Hirsch: Numerical Computation of Internal and External Flows. Vol. 2. Wiley, New York (1990). J.M. Hyman: A method of lines approach to the numerical solution of conservation laws. Advances in Computational Methods for PDEs-III, Eds., R. Vichnevetsky and R.S. Stepleman, IMACS, pp. 313-321 (1979). R. Jeltsch and J.H. Smit: Accuracy barriers of two time level difference schemes for hyperbolic equations. Institut für Geometrie und Praktische Mathematik, Aachen, Germany, Bericht Nr. 32 (1985). C. Johansson: Boundary conditions for open boundaries for the incompressible Navier-Stokes equation. J. Comp. Phys., 105, No. 2, pp. 233-251 (1993). C. Johansson: Well-posedness in the generalized sense for the incompressible Navier-Stokes equations. J. Sci. Comp., 6, No. 2, pp. 101-127 (1991a).
C. Johansson: Well-posedness in the generalized sense for boundary layer suppressing boundary conditions. J. Sci. Comp., 6, No. 4, pp. 391-414 (1991b).
T. Kato: Perturbation Theory for Linear Operators. Springer-Verlag, New York (1966).
G. Kreiss and G. Johansson: A note on the effect of numerical viscosity on solutions of conservation laws, unpublished.
H.-O. Kreiss: Difference approximations for the initial-boundary value problems for hyperbolic differential equations. Numerical Solutions of Nonlinear Differential Equations, Ed., D. Greenspan, John Wiley, New York, pp. 141-166 (1966). H.-O. Kreiss: On difference approximations of the dissipative type for hyperbolic differential equations. Comm. Pure Appl. Math., 17, pp. 335-353 (1964). H.-O. Kreiss: Stability theory for difference approximations of mixed initial boundary value problems. I. Math. Comp., 22, pp. 703-714 (1968). H.-O. Kreiss: Über die Lösung von Anfangsrandwertaufgaben für partielle Differentialgleichungen mit Hilfe von Differenzengleichungen. Transactions of the Royal Institute of Technology, Stockholm, No. 166 (1960). H.-O. Kreiss: Über die Stabilitätsdefinition für Differenzengleichungen die partielle Differentialgleichungen approximieren. Nordisk Tidskr. Informations-Behandling, 2, pp. 153-181 (1962). H.-O. Kreiss and J. Lorenz: Initial-Boundary Value Problems and the Navier-Stokes Equations. Academic Press, New York (1989). H.-O. Kreiss and J. Oliger: Comparison of accurate methods for the integration of hyperbolic equations. Tellus, 24, pp. 199-215 (1972). H.-O. Kreiss and J. Oliger: Stability of the Fourier method. SIAM J. Num. Anal., 16, No. 3, pp. 421-433 (1979). H.-O. Kreiss and G. Scherer: Finite element and finite difference methods for hyperbolic partial differential equations. In Mathematical Aspects of Finite Elements in Partial Differential Equations. Academic Press, Orlando, FL (1974). H.-O. Kreiss and G. Scherer: On the existence of energy estimates for difference approximations for hyperbolic systems. Technical report, Uppsala University, Dept. of Scientific Computing, Uppsala, Sweden (1977). H.-O. Kreiss and L. Wu: On the stability definition of difference approximations for the initial boundary value problem. Appl. Num. Math., 12, pp. 213-227 (1993). P.D. Lax: Weak solutions of nonlinear hyperbolic equations and their numerical computation. Comm. Pure Appl. Math., VII, pp. 159-193 (1954). P.D. Lax and B. Wendroff: Difference schemes for hyperbolic equations with high order of accuracy. Comm. Pure Appl. Math., 17, pp. 381-398 (1964). P.D. Lax and B. Wendroff: Difference schemes with high order of accuracy for solving hyperbolic equations. New York University, Courant Inst. Math. Sci., Res. Rep. No. NYO-9759 (1962). P.D. Lax and B. Wendroff: On the stability of difference schemes with variable coefficients. Comm. Pure Appl. Math., 15, pp. 363-371 (1962). P.D. Lax and B. Wendroff: Systems of conservation laws. Comm. Pure Appl. Math., 13, pp. 217-237 (1960). R. LeVeque: Numerical Methods for Conservation Laws. Birkhäuser Verlag, Basel (1990). R.J. LeVeque and L.N. Trefethen: On the resolvent condition in the Kreiss Matrix Theorem. BIT, 24, pp. 584-591 (1984). R.W. MacCormack: The effect of viscosity in hypervelocity impact cratering. AIAA Hypervelocity Impact Paper No. 69-354 (1969). R.W. MacCormack and A.J. Paully: Computational efficiency achieved by time splitting of finite difference operators. AIAA 10th Aerospace Sciences Meeting, San Diego (1972). A. Majda, J. McDonough, and S. Osher: The Fourier method for nonsmooth initial data. Math. Comp., 32, pp. 1041-1081 (1978).
C.A. McCarthy: A strong resolvent condition need not imply power-boundedness. J. Math. Anal. Appl., in press. C.A. McCarthy and J. Schwartz: On the norm of a finite boolean algebra of projections, and applications to theorems of Kreiss and Morton. Comm. Pure Appl. Math., 18, pp. 191-201 (1965). D. Michelson: Stability theory of difference approximations for multidimensional initial-boundary value problems. Math. Comp., 40, pp. 1-46 (1983). M. Mock and P. Lax: The computation of discontinuous solutions of linear hyperbolic equations. Comm. Pure Appl. Math., 31, No. 4, pp. 423-430 (1978). L. Nirenberg: Lectures on linear partial differential equations. Reg. Conf. Ser. Math., 17, AMS, Providence, RI (1972). G.G. O'Brien, M.A. Hyman, and S. Kaplan: A study of the numerical solution of partial differential equations. J. Math. Phys., 29, pp. 223-251 (1951). H. Økland: False dispersion as a source of integration errors. Sci. Rep. No. 1, Det Norske Met. Inst., Oslo, Norway (1958). P. Olsson: High-Order Difference Methods and Dataparallel Implementation. Ph.D. Thesis, Uppsala University, Uppsala, Sweden (1992). P. Olsson: Summation by parts, projections, and stability. I. Math. Comp., 64, pp. 1035-1065 (1995a). P. Olsson: Summation by parts, projections, and stability. II. Math. Comp., 65 (1995b), to appear. S.A. Orszag: Numerical simulation of incompressible flows within simple boundaries: accuracy. J. Fluid Mech., 48, pp. 75-112 (1971). S. Osher: Stability of difference approximations of dissipative type for mixed initial boundary value problems. I. Math. Comp., 23, pp. 335-340 (1969). S. Osher: Stability of parabolic difference approximations to certain mixed initial boundary value problems. Math. Comp., 26, No. 117, pp. 13-39 (1972). S. Osher and S. Chakravarthy: High resolution schemes and the entropy condition. SIAM J. Num. Anal., 21, pp. 955-984 (1984). S. Osher and E. Tadmor: On the convergence of difference approximations to scalar conservation laws. Math. Comp., 50, No. 181, pp. 19-51 (1988). K. Otto: A software tool for analysis of Runge-Kutta methods. Uppsala University, Dept. of Scientific Computing, Report No. 116 (1988). B. Parlett: Accuracy and dissipation in difference schemes. Comm. Pure Appl. Math., 19, pp. 111-123 (1966). D.W. Peaceman and H.H. Rachford Jr.: The numerical solution of parabolic and elliptic differential equations. J. Soc. Industrial Appl. Math., 3, pp. 28-41 (1955). V. Pereyra: Iterated deferred corrections for nonlinear boundary value problems. Num. Math., 11, pp. 111-125 (1968). I.G. Petrovskii: Über das Cauchysche Problem für Systeme von partiellen Differentialgleichungen. Mat. Sbornik, N.S., 44, pp. 814-868 (1937). N.A. Phillips: An example of non-linear computational instability. The Atmosphere and the Sea in Motion, Ed., Bert Bolin, Rockefeller Institute Press, New York, pp. 501-504 (1959). L.F. Richardson: The deferred approach to the limit. Phil. Trans. Roy. Soc., 226, pp. 300-349 (1927). R.D. Richtmyer and K.W. Morton: Difference Methods for Initial-Value Problems, 2nd Edition. Interscience Publishers, New York (1967). P.L. Roe: Some contributions to the modeling of discontinuous flows. Lecture Notes Appl. Math., 22, pp. 163-193 (1985). C. Shu and S. Osher: Efficient implementation of essentially non-oscillatory shock capturing schemes. J. Comp. Phys., 77, pp. 439-471 (1988). G. Sköllermo: How the boundary conditions affect the stability and accuracy of some implicit methods for hyperbolic equations. Uppsala University, Dept. of Scientific Computing, Report No. 62, Uppsala, Sweden (1975).
M.N. Spijker: On a conjecture by LeVeque and Trefethen related to the Kreiss Matrix Theorem. BIT, 31, pp. 551-555 (1991). G. Starius: On composite mesh difference methods for hyperbolic differential equations. Num. Math., 35, pp. 241-295 (1980). H.J. Stetter: Stabilizing predictors for weakly unstable correctors. Math. Comp., 9, pp. 84-89 (1965). B. Strand: Summation by parts for finite difference approximations for d/dx. J. Comp. Phys., 110, pp. 47-67 (1994). G. Strang: Accurate partial difference methods II. Non-linear problems. Num. Math., 6, pp. 37-46 (1964). G. Strang: On the construction and comparison of difference schemes. SIAM J. Num. Anal., 5, No. 3, pp. 506-517 (1968). J. Strikwerda: Initial boundary value problems for incompletely parabolic systems. Comm. Pure Appl. Math., 30, pp. 797-822 (1977). J. Strikwerda: Initial boundary value problems for the method of lines. J. Comp. Phys., 34, pp. 94-107 (1980). B. Swartz and B. Wendroff: The relative efficiency of finite difference and finite element methods. I: Hyperbolic problems and splines. SIAM J. Num. Anal., 11, No. 5, pp. 979-993 (1974). P.K. Sweby: High resolution schemes using flux limiters for hyperbolic conservation laws. SIAM J. Num. Anal., 21, pp. 995-1011 (1984). E. Tadmor: The equivalence of L2-stability, the resolvent condition, and strict H-stability. Linear Algebra Appl., 41, pp. 151-159 (1981). E. Tadmor: The exponential accuracy of Fourier and Chebyshev differencing methods. SIAM J. Num. Anal., 23, pp. 1-10 (1986). E. Tadmor: Numerical viscosity and the entropy condition for conservative difference schemes. Math. Comp., 43, No. 168, pp. 369-381 (1984). P. Thompson: Numerical Weather Analysis and Prediction. Macmillan, New York (1961). M. Thuné: A numerical algorithm for stability analysis of difference methods for hyperbolic systems. SIAM J. Sci. Stat. Comp., 11, No. 1, pp. 63-81 (1990). E.C. Titchmarsh: Introduction to the Theory of Fourier Integrals. Clarendon Press, Oxford (1937). J.L.M. van Dorsselaer, J.F.B.M. Kraaijevanger, and M.N. Spijker: Linear stability analysis in the numerical solution of initial value problems. Acta Numerica, pp. 199-237 (1993). B. van Leer: Towards the ultimate conservative difference scheme II. Monotonicity and conservation combined in a second order scheme. J. Comp. Phys., 14, pp. 361-370 (1974). J. Varah: Stability of difference approximations to the mixed initial boundary value problems for parabolic systems. SIAM J. Num. Anal., 8, pp. 598-615 (1971). J. von Neumann: Proposal and analysis of a numerical method for the treatment of hydro-dynamical shock problems. Nat. Def. Res. Com., Report AM-551 (1944). J. von Neumann and R.D. Richtmyer: A method for the numerical calculation of hydrodynamic shocks. J. Appl. Phys., 21, pp. 232-237 (1950). O. Widlund: On the rate of convergence for parabolic difference schemes I. SIAM-AMS Proc., II, pp. 60-73 (1970). O. Widlund: On the rate of convergence for parabolic difference schemes II. Comm. Pure Appl. Math., 23, pp. 79-96 (1970). O. Widlund: Stability of parabolic difference schemes in the maximum norm. Num. Math., 8, pp. 186-202 (1966). L. Wu: The semigroup stability of the difference approximations for initial boundary value problems. Math. Comp., in press. A. Zygmund: Trigonometric Series, I. Cambridge Univ. Press, London (1977).
INDEX
Cn(a,b), 3
D_−, 19
D_+, 19
D_+D_−, 19
D_0, 19
H-norm, 132, 133
Int_N, 24
L2 norm, 11, 16, 33
L2 scalar product, 11, 17, 33
L2 space, 16
S-operator, 30
z-transformed problem, 579
θ scheme, 56
accurate of order (p, q), 60
Adams-Bashforth method, 250
Adams-Moulton method, 250
algorithm IBSTAB, 613
aliasing error, 30
alternating direction implicit (ADI) method, 200
amplification factor, 41
artificial viscosity, 44, 77
backward Euler method, 55, 186
backward heat equation, 111
Beam-Warming limiter, 347
Bessel's inequality, 13
bilinear form, 11
blow up time, 317
boundary characteristic function, 596
boundary condition, 360
boundary conditions containing time derivatives, 442
boundary conditions for hyperbolic systems, 359
boundary layer, 543, 544
boundary points, 577
box finite volume method, 260
box scheme, 172
Cauchy-Schwarz inequality, 12
centered difference operator, 82
centered finite volume method, 255, 259
CFL-condition, 54
characteristics, 39, 297, 359
characteristic equation, 51, 176, 498
characteristic lines, 297
characteristic variables, 362
collocation method, 103
collocation points, 103
compatibility condition, 360
companion matrix, 158
conservation form, 254, 321, 327, 345
conservation law, 254
conservation law form of the Euler equations, 261
conservative, 327
consistency, 465, 468
consistent, 45, 48, 165
contact discontinuities, 353
convection diffusion equation, 70, 193
convergence rate, 567
converges, 48
corrector, 92
Crank-Nicholson method, 56, 72
deferred correction, 207
determinant condition, 512, 580
diagonal scaling, 109
difference approximation, 47
difference operators, 18
Dirac delta-function, 102
direct method, 56
Dirichlet and Neumann conditions, 442
Dirichlet boundary conditions, 376
discontinuous solutions, 290
discrete Fourier transform, 25
discrete norm, 21
discrete scalar product, 21
discrete solution operator S_h, 159
discrete version of Duhamel's principle, 160
dissipation, 179, 291
dissipative, 240, 278
dissipativity, 178
domain of dependence, 54
DuFort-Frankel method, 66, 279
Duhamel's principle, 147, 371
Duhamel's principle for PDE, 149
eigenvalue, 21
eigenvalue condition, 562
eigenvalue problem, 400, 497
eigenvector, 21
energy conserving, 39
energy estimates, 368
energy estimates for parabolic differential equations, 375
energy method, 183, 359, 445
entropy condition, 338
equation of state, 136, 261
equations for gas dynamics, 341
error estimates, 567
essentially nonoscillatory (ENO) methods, 350
Euclidean norm, 20
Euclidean product, 20
Euler equation, 136, 304, 372, 393
Euler method, 63
explicit, 54
extra boundary conditions, 537, 554, 568
fast Fourier transform (FFT), 25
field of values, 228
finite-element method, 100
finite speed of propagation, 39
finite-volume method, 254
first-order system, 131
first-order upwind method, 291
first-order wave equation, 38
flux-limiter method, 345
flux-splitting methods, 333
Fourier coefficients, 4
Fourier expansion, 36
Fourier method, 95, 262
Fourier series, 3, 33
Fourier transform, 171
fractional step method, 614
Galerkin's method, 100, 101, 102
general domains, 394
generalized eigensolution, 594
generalized eigenvalue, 433, 513, 581
generalized solution, 149, 290
generalized stability, 599
genuine eigenvalue, 537
Gibbs phenomenon, 5, 88
GKS-analysis, 576
Godunov method, 349
Godunov Ryabenkii condition, 497, 500, 579
Green's formula, 258
gridfunctions, 17, 39
gridpoints, 17
heat equation, 61, 138
higher order equations, 74
homogeneous boundary conditions, 371
Hyman method, 252
hyperbolic, 119
hypersonic flow, 617
ill posed, 111
implicit scheme, 55
incompatibility, 591
initial-boundary-value problems, 359
initial-value problem, 38
injection operator, 583
instability, 590
interior points, 577
internal energy, 261
internal layer, 324
interpolation operator, 617
inviscid Burgers' equation, 316
inviscid shock waves, 324
Jordan block, 118
Jordan canonical form, 176, 622
Korteweg de Vries type equation, 76, 387
Kreiss condition, 434, 512, 579, 581, 582
Kreiss matrix theorem, 177
Laplace transform, 625
Laplace transform method, 398, 411, 496
Lax Friedrichs method, 46, 291
Lax Richtmyer equivalence theorem, 170
Lax Wendroff method, 46, 180, 227, 291
Lax Wendroff type, 238
leap-frog scheme, 50, 165, 291
limiter, 347
linear multistep methods, 90
linearized equations, 136
local existence, 153
local truncation error, 165
locally Lipschitz continuous, 631
Lopatinsky condition, 428
lower order terms, 415
MacCormack method, 268
maximally semibounded, 386, 389
method of characteristics, 297, 299
method of characteristics on a regular grid, 305
method of lines, 80, 185, 599
method of Runge Kutta type, 601
min mod limiter, 350
modified eigenvalue problem, 566
modified Lax Wendroff method, 268
monitor, 334
monotone methods, 291
multi-step method, 249, 587
Navier Stokes equations, 140, 389
Neumann boundary conditions, 384
Newton's method, 74
nonlinear system, 629
nonuniform grids, 255
norm conserving, 39
norm of an operator, 22
normal mode analysis, 576
numerical range, 623
oblique boundary conditions, 442
one-step methods, 50
one-step multistage methods, 90
order of accuracy, 165, 465, 468
orthonormal, 12
overlapping grids, 619
overshoots, 291
overspecifying, 617
parabolic, 115, 122, 270, 273
parabolic system, 270, 273
parasitic solution, 52
Parseval's relation, 13
periodic channel problem, 612, 615, 617
periodic gridfunctions, 17
periodicity conditions, 40
Petrovskii condition, 128
physical boundary conditions, 571
piecewise C1 function, 9
points per wavelength, 83, 93
pole, 560
power-bounded, 175
predictor, 92
predictor corrector, 92
principal part, 527
projection operator, 583
pseudo spectral method, 102
quadrilaterals, 258
quarter-space problem, 390, 416
Rankine Hugoniot relation, 328, 336
rarefaction wave, 319
reduced eigenvalue problem, 537
reflected wave, 548
regularization using viscosity, 313
resolvent condition, 178, 412, 605
resolvent equation, 411, 528, 601, 603, 606
resolvent estimate, 609
resolvent matrix, 178
resolvent operator, 412
Richardson extrapolation, 207
Riemann invariants, 338, 341
Riemann problem, 352
Roe method, 347, 353
Runge Kutta Heun method, 247
Runge Kutta methods, 241
saw-tooth function, 4
Schrödinger type equation, 76, 115
Schur's Lemma, 125, 623
second-order parabolic, 410
second-order TVD scheme of Sweby, 347
self-adjoint form, 189
semibounded, 129, 385, 414, 467, 515
semiboundedness, 184, 485
semidiscrete solution operator, 450
semi-implicit method, 72
semi-Lagrangian methods, 306
shock strength, 324
shock-tube problem, 343
shock wave, 318
Simpson's rule, 91
skew-Hermitian, 32
slope-limiter, 349
Sobolev inequality, 377
solution operator, 144, 370
sonic point, 238
spectral method, 102
spectral radius, ρ(A), 21
speed of sound, 136
splitting methods, 195
spurious solutions, 165
stable, 44, 48, 159
stable in the generalized sense, 527, 599
stability, 157, 465
stability condition, 52
stability constant, 543
stagnation point, 238
stepfunction, 612
Strang type splitting, 268
strictly dissipative, 180
strictly hyperbolic, 119, 214, 219
strictly stable, 164
strongly hyperbolic, 119, 214, 219
strongly hyperbolic system, 361, 363, 367
strongly parabolic, 138, 153
strongly stable in the generalized sense, 527, 599
strongly well posed, 383
strongly well posed in the generalized sense, 414
subsonic flow, 373
superbee limiter, 347
superposition principle, 39, 42
supersonic flow, 372
switch, 333
symbol, 41, 127, 171
symmetric hyperbolic, 119, 152, 214, 219, 410
symmetrizer, 188, 211
test function, 101
threshold, 334
time-split method, 268
total variation, 347
translation operator, 18, 36
translatory boundary conditions, 596
trapezoidal rule, 56
traveling wave, 321, 322
trigonometric interpolation, 3, 24
truncation error, 59
TVD scheme, 347
two-dimensional hyperbolic systems, 611
two-step method, 50
unconditionally stable, 56
undershoots, 291
uniformly Lipschitz continuous, 629
upwind difference schemes, 333
van Leer limiter, 347
viscous Burgers' equation, 320
viscous shock waves, 323
von Neumann condition, 173
wave equation, 364
weak formulation, 328
weak shocks, 334
weak solutions, 328
weakly hyperbolic, 119, 214, 219
well posed, 74, 110, 113, 154
well posed in the generalized sense, 413, 414, 415
well-posedness, 106, 152, 382
work per wavelength, 97