SIAM J. SCI. COMPUT. Vol. 30, No. 3, pp. 1105–1130
© 2008 Society for Industrial and Applied Mathematics
THE MODIFIED GHOST FLUID METHOD FOR COUPLING OF FLUID AND STRUCTURE CONSTITUTED WITH HYDRO-ELASTO-PLASTIC EQUATION OF STATE∗

T. G. LIU†, W. F. XIE‡, AND B. C. KHOO§

Abstract. In this work, the modified ghost fluid method (MGFM) [T. G. Liu, B. C. Khoo, and K. S. Yeo, J. Comput. Phys., 190 (2003), pp. 651–681] is further developed and applied to treat compressible fluid-compressible structure coupling. To facilitate theoretical analysis, the structure is modeled as an elastic-plastic material with perfect plasticity and constituted with the hydro-elasto-plastic equation of state [H. S. Tang and F. Sotiropoulos, J. Comput. Phys., 151 (1999), pp. 790–815] under strong impact. This results in a coupled compressible fluid-compressible structure system which is fully hyperbolic. To understand the effect of structure deformation on the interfacial and flow status, the compressible fluid-compressible structure Riemann problem is analyzed taking material deformation into consideration, and an approximate Riemann problem solver is proposed to account for the effect of material elastic-plastic deformation. We show that the ghost fluid method can be applied to treat flow-deformable structure coupling under strong impact provided that a proper Riemann problem solver is used to predict the ghost fluid states, and that the resultant MGFM works effectively and efficiently in such situations. Various examples are presented to validate and support the conclusions reached.

Key words. modified ghost fluid method, GFM Riemann problem, approximate Riemann problem solver, fluid-compressible structure coupling

AMS subject classifications. 35L45, 65C20, 76T10

DOI. 10.1137/050647013
∗Received by the editors December 8, 2005; accepted for publication (in revised form) May 21, 2007; published electronically March 21, 2008. http://www.siam.org/journals/sisc/30-3/64701.html
†Institute of High Performance Computing, The Capricorn, Science Park II, Singapore 117528, Singapore. Current address: Department of Mathematics, Beijing University of Aeronautics and Astronautics, Beijing 100083, People's Republic of China ([email protected]).
‡Institute of High Performance Computing, The Capricorn, Science Park II, Singapore 117528, Singapore. Current address: Department of Civil and Environmental Engineering, Princeton University, Princeton, NJ 08544 ([email protected]).
§Department of Mechanical Engineering, National University of Singapore, 10 Kent Ridge Crescent, Singapore 119260, Singapore, and Singapore-MIT Alliance, 4 Engineering Drive 3, National University of Singapore, Singapore 117576, Singapore ([email protected]).

1. Introduction. The simulation of multimedium compressible flow is still a very challenging topic, especially if the density ratio of the two media is very large or one of the media is constituted with a stiff equation of state (EOS). This is because commonly used high resolution schemes for compressible flows, such as total-variation-diminishing (TVD) schemes [8, 9] and essentially nonoscillatory (ENO) schemes [11, 10, 24], which work efficiently for pure-medium compressible flows, can run into unexpected difficulties due to numerical oscillations generated in the vicinity of material interfaces. Such oscillations (especially pressure oscillations) were analyzed mathematically by Karni [12] and Abgrall and Karni [2]. To suppress unphysical oscillations, various techniques have been developed [12, 2, 1, 25, 23, 18, 22, 30, 6, 5, 3, 33, 16]. Among these techniques, the ghost fluid method (GFM) [6] provides a flexible way to treat multimedium flows. The key point of a GFM-based algorithm is to properly define the ghost fluids, which is the only difference among the various GFM-based algorithms [20, 2, 6, 5, 3, 33, 16]. One main advantage of the GFM-based algorithm is
that the extension to multidimensions is fairly straightforward, since the computations are carried out in single-phase flow when updating the flow field across the material interface(s). However, which GFM-based algorithm is the most appropriate was found to be problem related [20, 5, 16]. To better understand the underlying cause(s) of the differing applicability of the aforementioned GFMs, Liu, Khoo, and Wang [16] compared the Riemann waves provided by the GFM-based algorithms (also referred to as GFM Riemann problems) to those generated from the original Riemann problems for gas-water compressible flows, for all possible Riemann wave patterns. They found that two conditions have to be satisfied by the defined ghost fluids in order for a GFM-based algorithm to provide the correct Riemann wave (correct solution) in the respective real fluids: condition I is that the Riemann wave on the real fluid side of the GFM Riemann problem is initially consistent with that on the same side of the original Riemann problem; condition II is that the new Riemann problem formed from the GFM computation maintains the same type of solution as the original Riemann problem during the decomposition of the singularity. (More discussion of the GFM Riemann problems is found in section 3.1.) To develop a more universally applicable GFM which is not problem dependent, Liu et al. [20, 16] proposed the modified ghost fluid method (MGFM). In the MGFM, a multimedium Riemann problem is defined and solved to predict the interface status via two nonlinear characteristic equations [18]. The MGFM has been shown to be robust and efficient when applied to gas-gas or gas-liquid compressible flows [20, 16].

The intention of this work is to develop (or extend) the GFM to treat compressible fluid-compressible structure coupling under strong fluid or shock impact. The focus is on the accuracy analysis of the developed method. As is well known, numerical instability occurring in the vicinity of the fluid-structure interface is one of the notorious difficulties in simulating such problems. Because the GFM treatment of the interface updates the solution through pure media only, this difficulty is expected to be overcome provided that the ghost fluids are properly defined. Unlike fluids, a solid medium can withstand strong tension. Elastic-plastic deformation usually occurs in a solid material under strong impact, resulting in a leading elastic wave and a trailing plastic wave propagating simultaneously in the solid medium. Physically, when a solid is under elastic-plastic deformation, the instantaneous status of the solid cannot be uniquely determined without knowledge of its prior history. Thus, difficulties are encountered in analyzing the characteristic features of the solution in such situations if this irreversible process is followed through exactly. On the other hand, because the shear (deviatoric) stress is secondary in comparison to the normal stress when the structure is under strong impact, the description of the deviatoric stress and the process of plastic deformation are usually simplified. In fact, in such situations, a frequently adopted model in engineering applications is the hydro-elasto-plastic (HEP) EOS [27, 32]. In this model, perfect plasticity is assumed once the structure is under plastic compression, with the von Mises yielding condition applied.
Besides its simplicity, another advantage of using the HEP EOS is that the resulting system of governing equations for the structure is hyperbolic. As a result, a possible analytical solution, which is very useful for testing the numerical method developed, can be obtained for the coupled compressible fluid-compressible structure Riemann problem over a short period of time. With the help of the HEP EOS, we shall show that the GFM can indeed be employed to handle the compressible fluid-compressible structure coupling effectively, provided that the ghost fluid status is accurately predicted to take into
account the strong impact at the interface. In doing so, an approximate Riemann problem solver which is able to take into account the effect of material elastic-plastic deformation is proposed, leading to the further development of the MGFM in this work. To understand the influence of material deformation on the interface and flow status, the solution structure of the compressible fluid-compressible solid Riemann problem is also analyzed with consideration of material elastic-plastic compression. Numerical tests and analysis will show that the resultant MGFM works efficiently in such situations.

It should be noted that there are other methods, and even commercial software, equipped with complex constitutive equations for solid mechanics analysis. The simpler HEP EOS is employed in this work primarily for the convenience of possible theoretical analysis of the developed technique, as the focus of the present work is on fluid-structure interface treatment under strong impact, where the HEP EOS is found to work well [27, 32]. We want to emphasize that the present technique can be applied to treat the compressible fluid-compressible structure interface with a more general constitutive equation, provided that the structural normal stress is dominant and there is no tension so severe that cavitation occurs in the fluid in the vicinity of the interface. For the case of severe tension, it is currently not possible to make a theoretical statement on the performance of the developed technique; tackling the specific subject of cavitation occurrence is left to future work. One will find that the present technique can also be integrated with established structure software to treat the fluid-structure interface under strong impact.

The remaining text is arranged as follows. In section 2, the one-dimensional (1D) Euler equations are presented together with the EOS for gases, water, and the compressible solid medium. Next, the 1D fluid-solid Riemann problems are studied in detail, and the conditions for the different types of solution are analyzed. In section 3, based on the conclusions reached in section 2, an approximate Riemann problem solver is developed which works accurately when the solid medium is under elastic-plastic compression. Some theoretical conclusions are drawn vis-à-vis the approximate Riemann solver as applied to the fluid-structure coupling. In section 4, various tests are carried out with further discussion and analysis. A brief conclusion is presented in section 5.

2. The fluid-solid Riemann problem.

2.1. 1D Euler equations. The 1D Euler equations of an initial-value Riemann problem can be written as

(2.1)
$$\frac{\partial U}{\partial t} + \frac{\partial F(U)}{\partial x} = 0, \qquad U|_{t=0} = \begin{cases} U_l, & x < x_0, \\ U_r, & x > x_0, \end{cases}$$

for an inviscid, non-heat-conducting compressible medium, where $U = [\rho, \rho u, E]^T$ and $F(U) = [\rho u, \rho u^2 + p, (E + p)u]^T$ for fluids. Here ρ is the density, u is the velocity, p is the pressure, and E is the total energy. For solids, $F(U) = [\rho u, \rho u^2 - \sigma, (E - \sigma)u]^T$, where σ is the total stress, taken negative (positive) under compression (tension). By defining $p = -\sigma$, the flux for a solid has the same form as for a fluid. $U_l$ and $U_r$ are two constant states separated by the fluid-solid interface located at $x_0$. Hereafter, the subscripts "l" and "r" indicate the flow states of the left- and right-hand medium, respectively, which can be a gas, water, or solid medium. The total energy is given as

(2.2)
$$E = \rho e + \rho u^2/2,$$
where e is the specific internal energy per unit mass. For closure of the system, an EOS is required. The γ-law used for gases is

(2.3)
$$\rho e = p/(\gamma_g - 1).$$

The Tait EOS employed for the water medium has the form

(2.4)
$$p = B_w \left[\rho/\rho_0\right]^{\gamma_w} - B_w + p_0,$$

where $\gamma_w = 7.15$, $p_0 = 1.0\times10^5\,\mathrm{Pa}$, $B_w = 3.0\times10^8\,\mathrm{Pa}$, and $\rho_0 = 1000.0\,\mathrm{kg/m^3}$ in this work. The Tait EOS is independent of entropy, and hence the energy equation is not strictly required for solving compressible water flow. To facilitate programming, (2.4) can be rewritten in the form of a stiffened gas EOS as [18, 6]

(2.5)
$$\rho e = (p + \gamma_w \bar B_w)/(\gamma_w - 1),$$

where $\bar B_w = B_w - p_0$. In this way, one can directly apply the numerical solvers developed for compressible gas flow to the water medium. Overall, the EOS for a gaseous or water medium can be written in the consistent form

(2.6)
$$\rho e = \frac{p}{\gamma - 1} + \frac{\gamma B}{\gamma - 1}.$$

Here γ and B are set to $\gamma_g$ and zero for a gas, and to $\gamma_w$ and $\bar B_w$ for water, accordingly. The associated sound speed for EOS (2.6) can be expressed as $c = \sqrt{\gamma \bar p/\rho}$, where $\bar p = p + B$.
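As a compact illustration of the unified form (2.6), the following Python sketch evaluates ρe and the sound speed for either medium from the pair (γ, B); the function names and the gas value γ = 1.4 are our own illustrative choices, not quantities fixed by the paper.

```python
import math

def rho_e(p, gamma, B):
    """Internal energy per unit volume from the unified EOS (2.6)."""
    return (p + gamma * B) / (gamma - 1.0)

def sound_speed(p, rho, gamma, B):
    """Sound speed c = sqrt(gamma * (p + B) / rho) associated with EOS (2.6)."""
    return math.sqrt(gamma * (p + B) / rho)

# Gas: gamma_g with B = 0; water: Tait constants rewritten as a stiffened gas.
GAS   = dict(gamma=1.4,  B=0.0)
WATER = dict(gamma=7.15, B=3.0e8 - 1.0e5)   # B_w - p_0

print(sound_speed(1.0e5, 1000.0, **WATER))  # ~1.5e3 m/s for water at 1 atm
```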
2.2. The hydro-elasto-plastic EOS for the solid medium. When a solid is under strong impact, the total stress tensor σ can be decomposed into a dominant isotropic hydrostatic pressure $-p_h I$ and a deviatoric stress tensor S; i.e., $\sigma = -p_h I + S$. Here I is the unit tensor. There are many ways to treat both parts. Popular models for the hydrostatic pressure are the Mie–Grüneisen EOS and its variants [35]. The deviatoric stress is generally time and strain-history dependent; to calculate it, iteration and time integration are usually required [34, 29]. However, because the deviatoric stress is usually secondary when the structure is under strong impact, it may be considered as an integrated lumped sum, as carried out in [32], and added directly to the hydrostatic pressure to facilitate engineering applications. More specifically, the HEP EOS [27, 32] was obtained in such a way; it is one of the frequently used models in engineering applications, where the Murnaghan equation (a linearized Mie–Grüneisen EOS) and Hooke's law are used for the hydrostatic pressure and the shear (deviatoric) stress, respectively, with the von Mises yielding condition imposed. This HEP EOS for the 1D total stress can be expressed as [27, 32, 35, 34]

(2.7)
$$p := -\sigma = p_h(\rho) + \frac{4}{3} s(\rho_0, \tau_0, \rho),$$

where $p_h(\rho)$ and $s(\rho_0, \tau_0, \rho)$ are the hydrostatic pressure and the shear stress, respectively. Here the symbol ":=" denotes a definition, and the subscript "0" refers to an "initial" state. The hydrostatic pressure $p_h(\rho)$ can be written as

(2.8)
$$p_h(\rho) = \frac{m}{\beta}\left[\left(\frac{\rho}{\rho_a}\right)^{\beta} - 1\right] + p_a,$$
and the shear stress is described by Hooke's law with perfect plasticity as

(2.9)
$$s = \begin{cases} \tau, & |\tau| < Y/2, \\ Y \cdot \mathrm{sign}(\tau)/2, & |\tau| \ge Y/2, \end{cases}$$

with

(2.10)
$$\frac{d\tau}{dt} = \frac{G}{\rho}\frac{d\rho}{dt}.$$

Here, m, Y, G, and β are the bulk modulus, the yield stress, the modulus of rigidity, and the Grüneisen coefficient, respectively. For steel, $m = 2.25\times10^{11}\,\mathrm{Pa}$, $Y = 9.79\times10^8\,\mathrm{Pa}$, $G = 8.53\times10^{10}\,\mathrm{Pa}$, and $\beta = 3.7$, with $\rho_a = 7800\,\mathrm{kg/m^3}$ at $p_a = 1.0\times10^5\,\mathrm{Pa}$. Integrating (2.10), (2.7) can be expressed as

(2.11)
$$p = -\sigma = \begin{cases} p_h(\rho) + \frac{2}{3}Y, & \rho \ge \rho_2, \\ p_h(\rho) + \frac{4}{3}\left(G\ln\frac{\rho}{\rho_0} + \tau_0\right), & \rho_1 < \rho \le \rho_2, \\ p_h(\rho) - \frac{2}{3}Y, & \rho < \rho_1. \end{cases}$$

Here, $\rho_1 = \rho_0 e^{-(2\tau_0 + Y)/2G}$ and $\rho_2 = \rho_0 e^{-(2\tau_0 - Y)/2G}$ are the limits of elasticity under tension and compression, respectively. Physically, $\tau_0$ and $1/\rho_0$ are the shear stress and specific volume along the particle path at the last time step and depend on the previous strains. Thus, they are not constant but time dependent. Since their changes are small over a short time period, they are assumed constant in this work to facilitate the ensuing analysis. With this assumption, the exact solution of a 1D compressible fluid-compressible solid Riemann problem can be obtained in some situations over a short period of time [27]. We denote the stresses associated with $\rho_1$ and $\rho_2$ as $\sigma_1$ and $\sigma_2$, respectively, and note that $p_1 = -\sigma_1$ and $p_2 = -\sigma_2$. $p_1$ is usually negative, as $\sigma_1$ is the limit of elasticity under tension. Tension occurring at the fluid-structure interface may physically cause a gas vacuum or phase transition (cavitation) in the fluid, as the fluid usually cannot withstand tension. The appearance of vacuum and cavitation imposes difficulties in the treatment of the interface, and special treatment is required. For a reasonably small magnitude of tension, the cavitating fluid-structure interface might be treated following the suggestions made in [17]. However, as far as we are aware, how to accurately treat the interface under a large magnitude of tension is still an open issue from the viewpoint of mathematical and numerical analysis. Mathematically, to always ensure a positive interfacial pressure, a special Riemann problem solver with phase transition [14] may be designed, but such a solver has yet to be developed and tested in situations consistent with our current focus. It is reckoned that neglecting the effect of strain history on the stress may still affect the occurrence of fluid phase transition next to the interface; how to evaluate this effect theoretically is, however, not clear at the present time. This is outside the scope of the present work and is a topic we hope to address in future endeavors. As such, we consider only the structure under compression in the vicinity of the interface in this work. The employment of the von Mises yielding criterion causes a jump in the sound speed at $p_2$, the limit of elasticity under compression. If the stress is beyond $\sigma_2$, the solid experiences plastic compression. It is this jump that mimics the possible existence of two nonlinear Riemann (shock) waves, which are observed and verified by experiments.
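The piecewise total stress (2.11) together with the Murnaghan hydrostatic pressure (2.8) is straightforward to evaluate; the sketch below does so for the steel parameters quoted above. This is our own illustrative code (function names and the frozen reference state ρ0, τ0 are assumptions), not part of the original paper.

```python
import math

# Steel parameters from the text (SI units).
m, Y, G, beta = 2.25e11, 9.79e8, 8.53e10, 3.7
rho_a, p_a = 7800.0, 1.0e5

def p_hydro(rho):
    """Murnaghan hydrostatic pressure (2.8)."""
    return (m / beta) * ((rho / rho_a) ** beta - 1.0) + p_a

def total_stress_p(rho, rho0=7800.0, tau0=0.0):
    """Total 'pressure' p = -sigma from the HEP EOS (2.11).

    rho0 and tau0 freeze the recent strain history; here they are taken as
    the undisturbed state, consistent with the short-time assumption.
    """
    rho1 = rho0 * math.exp(-(2.0 * tau0 + Y) / (2.0 * G))  # tension limit
    rho2 = rho0 * math.exp(-(2.0 * tau0 - Y) / (2.0 * G))  # compression limit
    if rho >= rho2:                      # perfectly plastic compression
        return p_hydro(rho) + 2.0 * Y / 3.0
    if rho > rho1:                       # elastic regime (Hooke's law)
        return p_hydro(rho) + (4.0 / 3.0) * (G * math.log(rho / rho0) + tau0)
    return p_hydro(rho) - 2.0 * Y / 3.0  # perfectly plastic tension
```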
Fig. 1. The loading and unloading path using the HEP EOS.
Because the sound speed in elastic compression is larger than that in plastic compression in the vicinity of the jump, the nonlinear elastic shock wave usually travels ahead of the nonlinear plastic shock wave. If the solid medium is under such extreme compression that the plastic shock wave overtakes the elastic shock wave and merges with it, a single strong plastic shock wave is formed. The critical value for forming the strong plastic shock wave may be determined theoretically and will be given in section 2.3. Useful discussion of EOS (2.7)–(2.11) can also be found in [27]. It should be noted that the loading and unloading procedure under the HEP EOS is irreversible, as shown in Figure 1. At the beginning of loading, the structure is under elastic deformation and follows the loading path from the origin to A, the limit of elastic deformation. With a further increase of loading, plastic deformation of the structure occurs and follows the loading path from A to B, with perfect plasticity assumed. When the structure is unloaded from state B, it first follows the elastic path from B to C, the new limit of elastic deformation, and then experiences the perfectly plastic path from C to D. As D does not coincide with the origin, a permanent deformation has occurred. When the structure is unloaded from an initial strong tension, the remaining tension at the interface leads to the physical occurrence of flow cavitation or vacuum and causes severe difficulties in the numerical treatment and analysis. It is not our intent to deal with this unloading path in the present work; this is the scope of our future work. For long-time computation, taking into account the time- and strain-dependent $\tau_0$ and $1/\rho_0$, the detailed expression for the deviatoric stress in relation to the strains has to be used. This part is related only to the solid mechanics; one can follow exactly the manner of [34, 29, 13] to treat the full deviatoric stress. More specifically, to take the deviatoric stress fully into account as a time- and strain-dependent tensor with $\sigma = -p_h I + S$, the full three-dimensional governing equations for the structure can be written as
(2.12)
$$\frac{\partial U}{\partial t} + \frac{\partial F_h}{\partial x} + \frac{\partial G_h}{\partial y} + \frac{\partial H_h}{\partial z} + \frac{\partial F_d}{\partial x} + \frac{\partial G_d}{\partial y} + \frac{\partial H_d}{\partial z} = 0.$$
Here, $F_h$, $G_h$, and $H_h$ are the fluxes for the hydrostatic part, which are the same as those for fluids with p replaced by $p_h$; $F_d$, $G_d$, and $H_d$ are the fluxes related to the deviatoric stress components. For example, $F_d$ has the form $[0, -s_{xx}, -s_{xy}, -s_{xz}, -u s_{xx}]^T$, where $s_{xy}$ and $s_{xz}$ are shear stress components. Equation (2.12) can be solved by operator splitting:

(2.13)
$$\frac{\partial U_1}{\partial t} + \frac{\partial F_h}{\partial x} + \frac{\partial G_h}{\partial y} + \frac{\partial H_h}{\partial z} = 0, \qquad \frac{\partial U_2}{\partial t} + \frac{\partial F_d}{\partial x} + \frac{\partial G_d}{\partial y} + \frac{\partial H_d}{\partial z} = 0.$$
Fig. 2. Illustration of solution structure: (a) pure gas Riemann problem; (b) fluid-solid Riemann problem with elastic-plastic deformation.
The hydrostatic part is coupled to the fluid equations and solved first; then the deviatoric stress part is calculated. Since the fluid-structure coupling is taken into account in the computation of the former, the computation of the latter is related only to the solid dynamics. As a result, one can employ one's favorite or a well-established solid solver to treat the deviatoric stress. This is, however, not the focus of the present work. Readers are encouraged to refer to the monograph by Kulikovskii, Pogorelov, and Semenov [13], where the treatment of the deviatoric stress is computationally separated from the isotropic hydrostatic part.
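A minimal sketch of one time step of this splitting is given below; `advance_hydro` and `advance_deviatoric` stand for a coupled fluid/hydrostatic solver and any established deviatoric-stress update, and are placeholders of ours rather than routines from the paper.

```python
def time_step(U, dt, advance_hydro, advance_deviatoric):
    """One operator-splitting step for (2.13).

    First the hydrostatic subsystem (which carries the fluid-structure
    coupling) is advanced; the deviatoric subsystem then acts on the result.
    """
    U = advance_hydro(U, dt)        # subsystem with fluxes F_h, G_h, H_h
    U = advance_deviatoric(U, dt)   # subsystem with fluxes F_d, G_d, H_d
    return U
```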
2.3. Solution types of the fluid-solid Riemann problem. In the absence of fluid cavitation or vacuum, and under the employment of EOS (2.6), the solution types of a 1D gas-gas or gas-water Riemann problem are very similar to those of a pure gas Riemann problem [26, 28]. The solution in general consists of four constant regions connected by one of three centered Riemann waves: two nonlinear waves (shock wave(s) and/or rarefaction wave(s)) and a linear wave (the contact discontinuity or material interface). The two nonlinear waves are separated by the linear wave. In each medium there is only one Riemann wave, which is a nonlinear wave (see Figure 2(a) for the general solution structure of a shock tube problem). On the other hand, there can be two nonlinear waves in the solid medium when the solid is under elastic-plastic deformation, as mentioned in section 2.2. In other words, the solution of a 1D fluid-solid Riemann problem can consist of five constant regions, which are separated by the centered Riemann waves; the two nonlinear waves in the solid medium are of the same type (either two shock waves or two rarefaction waves) in the present focus of study (see Figure 2(b) for an illustration of a specific solution structure of a fluid-solid Riemann problem). Some analysis on how to solve a 1D compressible fluid-compressible solid Riemann problem may be found in [27]. Here we are interested in the ranges of conditions for each type of solution. These ranges of conditions will serve as a guide for correctly defining the ghost fluid status.

For ease of discussion and reference, we shall use a symbol like $^{W}_{R}|^{SLD}_{S}$ to denote a particular type of solution; the upper letters stand for the media on the respective left- and right-hand sides of the interface, while the lower letters specify the nonlinear Riemann waves in the associated media. "|" stands for the fluid-structure interface (the contact discontinuity). The symbol $^{G}_{R}|^{SLD}_{S}$ means that a gas (denoted by "G") medium and a solid (denoted by "SLD") medium are located on the respective left- and right-hand sides of the interface, and that a left rarefaction wave (denoted by
"R") and a right shock wave (denoted by "S") are generated in the respective gas and solid media after the "diaphragm" is removed. The symbol $^{G/W}_{S}|^{SLD}_{SS}$ means that a gas or water (denoted by "G/W") medium and a solid medium are located on the respective left- and right-hand sides of the interface, and that a left shock wave (denoted by "S") and two right shock waves (denoted by "SS") are generated in the respective fluid and solid media after the diaphragm is removed. The discussions and conclusions reached below are obtained under the assumption that pressure increases across a shock front and decreases through a rarefaction wave fan. We shall further assume in the following analysis that the fluid is located on the left while the solid medium is on the right; the fluid medium can be a gas or water. In this work, we focus on the compressible fluid-compressible solid Riemann problem (2.1) under conditions of likely engineering/physical interest, such as an initially highly pressurized flow coupled to a solid, a strong shock in the fluid impacting a nearby solid structure, a high speed jet impacting on a solid structure, or a high speed solid projectile impacting a fluid. For all of these cases of interest, the solid medium is always under compression. As a result, the possible solution types for the Riemann problem (2.1) are $^{G/W}_{R}|^{SLD}_{S}$, $^{G/W}_{R}|^{SLD}_{SS}$, $^{G/W}_{R}|^{SLD}_{S3}$, $^{G/W}_{S}|^{SLD}_{S}$, $^{G/W}_{S}|^{SLD}_{SS}$, and $^{G/W}_{S}|^{SLD}_{S3}$, limited to the solid medium under compression. Here the subscript "S3" denotes the strong plastic shock wave mentioned in section 2.2. For convenience of discussion, we define the following four functions:
(1) $$g_l^R(p, p_l) = \frac{2c_l}{\gamma - 1}\left[\left(\frac{\bar p}{\bar p_l}\right)^{\frac{\gamma-1}{2\gamma}} - 1\right], \quad p < p_l,$$

(2) $$g_{gl}^S(p, p_l) = \sqrt{[p - p_l]\left[\frac{1}{\rho_l} - \frac{1}{\rho_g(p)}\right]}, \quad p > p_l,$$

(3) $$g_{wl}^S(p, p_l) = \sqrt{[p - p_l]\left[\frac{1}{\rho_l} - \frac{1}{\rho_w(p)}\right]}, \quad p > p_l,$$

(4) $$g_{sr}^S(p, p_r) = \sqrt{[p - p_r]\left[\frac{1}{\rho_r} - \frac{1}{\rho_s(p)}\right]}, \quad p > p_r.$$
Here, $\rho_s(p)$ and $\rho_w(p)$ are the densities of the solid medium and of water at pressure p, respectively, and $\rho_g(p) = \rho_l \frac{(\gamma_g + 1)p + (\gamma_g - 1)p_l}{(\gamma_g - 1)p + (\gamma_g + 1)p_l}$. (Very briefly, for ease of reference, the subscript "l" stands for the left-hand side, "r" for the right-hand side, "g" for the gas medium, "w" for the water medium, and "s" for the solid medium. For the superscripts, "S" stands for shock and "R" for rarefaction.) For these four functions, we have the following obvious observation.

Lemma 2.1. Functions (1), (2), and (3) are differentiable and monotonically increasing with respect to p; function (4) is differentiable and monotonically increasing with respect to p in the respective elastic and plastic regions.

For the solution type $^{G/W}_{R}|^{SLD}_{S}$, we have the following conclusion.

Theorem 2.1. If the solution type is $^{G/W}_{R}|^{SLD}_{S}$ for Riemann problem (2.1), then the initial states $U_l$ and $U_r$ satisfy the conditions

(2.14)
$$p_l > p_r \quad \text{and} \quad g_l^R(p_r, p_l) < u_l - u_r < g_l^R(p_{l2}, p_l) + g_{sr}^S(p_{l2}, p_r).$$
Here, $p_{l2} = \min[p_l, p_2]$. (Note that $p_2 = -\sigma_2$ is the pressure at the limit of elasticity under compression. Here and below, the subscript "2" refers to this state of the solid at the limit of elasticity under compression.)

Proof of Theorem 2.1. By virtue of the left rarefaction wave and the right elastic shock wave, we have the following two equalities for the respective left and right nonlinear Riemann waves:

(2.15)
$$u_l - u_I^e = \frac{2c_l}{\gamma - 1}\left[\left(\frac{\bar p_I^e}{\bar p_l}\right)^{\frac{\gamma-1}{2\gamma}} - 1\right] = g_l^R(p_I^e, p_l),$$

(2.16)
$$u_I^e - u_r = \sqrt{[p_I^e - p_r]\left[\frac{1}{\rho_r} - \frac{1}{\rho_s(p_I^e)}\right]} = g_{sr}^S(p_I^e, p_r).$$

Here $u_I^e$ and $p_I^e$ are the (exact) interfacial velocity and pressure, respectively. Because the solution type is $^{G/W}_{R}|^{SLD}_{S}$, we have $p_I^e < p_l$ from the left rarefaction wave and $p_I^e > p_r$ from the right shock wave. This leads to $p_l > p_r$. Summing (2.15) and (2.16), we obtain an equation with respect to $p_I^e$:

(2.17)
$$f(p_I^e) := g_l^R(p_I^e, p_l) + g_{sr}^S(p_I^e, p_r) + u_r - u_l = 0.$$
Here, ":=" means that the right-hand side is defined as a function of $p_I^e$. The solution type $^{G/W}_{R}|^{SLD}_{S}$ physically implies that $p_l > p_I^e > p_r$ and that the solid medium is under elastic deformation. In addition, $f(p_I^e)$ is physically required to have only one root in the interval $[p_r, p_{l2}]$. Because $f(p_I^e)$ is a monotonically increasing function of $p_I^e$ in the interval $[p_r, p_{l2}]$ according to Lemma 2.1, this requires that $f(p_{l2}) \cdot f(p_r) < 0$ and $f(p_{l2}) > 0$, which leads to the second condition (inequality) of (2.14). This ends the proof of Theorem 2.1.

By solving (2.17) using Newton's iteration method, one obtains the exact interface pressure $p_I^e$. The exact interface velocity $u_I^e$ then follows from (2.15) or (2.16). Once the interface pressure and velocity are found, one can obtain the exact solution in the region of the rarefaction wave fan and behind the tail of the rarefaction wave via the rarefaction wave relations, and the solution behind the shock wave via the shock wave relations at any time. That is, the exact solution for the type $^{G/W}_{R}|^{SLD}_{S}$ is obtained. (Following similar steps, one can obtain the exact solution for the other types discussed in this work; a sketch of the Newton iteration is given below.)
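For concreteness, the following Python sketch applies Newton's method (with a numerical derivative) to (2.17) for a gas on the left; the solid density law ρ_s(p) must be supplied from the HEP EOS, and all names here are our own illustrative choices rather than code from the paper.

```python
import math

def g_lR(p, pl, cl, gamma, B=0.0):
    """Left rarefaction function (1); B = 0 for a gas, B = B_w - p_0 for water."""
    return (2.0 * cl / (gamma - 1.0)) * (
        ((p + B) / (pl + B)) ** ((gamma - 1.0) / (2.0 * gamma)) - 1.0)

def g_srS(p, pr, rho_r, rho_s):
    """Right shock function (4); rho_s(p) is the solid density law."""
    return math.sqrt((p - pr) * (1.0 / rho_r - 1.0 / rho_s(p)))

def solve_interface_pressure(ul, ur, pl, pr, cl, gamma, rho_r, rho_s,
                             p_init=None, tol=1e-10, max_iter=50):
    """Newton iteration for f(p) = g_l^R + g_sr^S + u_r - u_l = 0, eq. (2.17)."""
    f = lambda p: g_lR(p, pl, cl, gamma) + g_srS(p, pr, rho_r, rho_s) + ur - ul
    p = 0.5 * (pl + pr) if p_init is None else p_init
    for _ in range(max_iter):
        h = 1e-7 * max(abs(p), 1.0)
        dfdp = (f(p + h) - f(p - h)) / (2.0 * h)   # numerical derivative
        p_new = p - f(p) / dfdp
        p_new = max(p_new, pr * (1.0 + 1e-12))     # keep iterate admissible (p > p_r)
        if abs(p_new - p) <= tol * max(abs(p), 1.0):
            return p_new
        p = p_new
    return p
```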
Theorem 2.2. If the solution type is $^{G/W}_{R}|^{SLD}_{SS}$ for Riemann problem (2.1), then the initial states $U_l$ and $U_r$ satisfy the conditions

(2.18)
$$p_l > p_2 > p_r \quad \text{and} \quad g_l^R(p_2, p_l) < u_l - u_2 < g_{sr}^S(p_l, p_2).$$
Here, $u_2 = u_r + g_{sr}^S(p_2, p_r)$.

Proof of Theorem 2.2. For the left rarefaction wave, we have equality (2.15). Using EOS (2.7)–(2.11), the stress behind the leading elastic shock wave is limited to an increase to $p_2$ through the shock wave front. Hence, using the shock relationship, we have the following equality:

(2.19)
$$u_2 - u_r = \sqrt{[p_2 - p_r]\left[\frac{1}{\rho_r} - \frac{1}{\rho_s(p_2)}\right]} = g_{sr}^S(p_2, p_r).$$

Here $u_2$ is the velocity of the solid across the leading elastic shock front. The trailing plastic shock wave causes the solid to be under further compression such that the
velocity and pressure behind the plastic shock front are raised to $u_I^e$ and $p_I^e$. Thus, we get the following equality for the trailing plastic shock wave:

(2.20)
$$u_I^e - u_2 = \sqrt{[p_I^e - p_2]\left[\frac{1}{\rho_2} - \frac{1}{\rho_s(p_I^e)}\right]} = g_{sr}^S(p_I^e, p_2),$$

where $p_I^e \in [p_2, p_l]$. From the left rarefaction wave, the (right) leading elastic shock wave, and the trailing plastic shock wave, we can physically deduce that $p_l > p_I^e$, $p_2 > p_r$, and $p_I^e > p_2$, respectively. Thus we have $p_l > p_2 > p_r$. Summing (2.15) and (2.20), we obtain a function of $p_I^e$:

(2.21)
$$f(p_I^e) := g_l^R(p_I^e, p_l) + g_{sr}^S(p_I^e, p_2) + u_2 - u_l = 0.$$
In order for (2.21) to have a unique solution $p_I^e \in [p_2, p_l]$, the second inequality of (2.18) must hold, because $f(p_I^e)$ is a monotonically increasing function of $p_I^e$ in the interval $[p_2, p_l]$ by Lemma 2.1.

Theorem 2.3. If the solution type is $^{G/W}_{R}|^{SLD}_{S3}$ for Riemann problem (2.1), then the initial states $U_l$ and $U_r$ satisfy the conditions

(2.22)
$$p_l > p_3 > p_r \quad \text{and} \quad g_l^R(p_3, p_l) + g_{sr}^S(p_3, p_r) < u_l - u_r < g_{sr}^S(p_l, p_r).$$
Here $p_3$ refers to the point where $\frac{p_3 - p_2}{1/\rho_2 - 1/\rho_3} = \frac{p_2 - p_r}{1/\rho_r - 1/\rho_2}$ [27]; $\rho_3$ is the solid density at $p_3$. When the stress in the solid medium is beyond $p_3$, the solid medium is under strong plastic compression, as deduced using EOS (2.7)–(2.11).

Proof of Theorem 2.3. For the left rarefaction wave and the right strong plastic shock wave, we have equalities (2.15) and (2.16), respectively. The summation of (2.15) and (2.16) leads to (2.17). In order to have the solution type $^{G/W}_{R}|^{SLD}_{S3}$, (2.17) is required to have a unique solution $p_I^e \in [p_3, p_l]$. By a simple function analysis, conditions (2.22) hold.

Theorem 2.4. If the solution type is $^{G/W}_{S}|^{SLD}_{S}$ for Riemann problem (2.1), then the initial states $U_l$ and $U_r$ satisfy the conditions

(2.23)
$$p_2 > \max(p_l, p_r) \quad \text{and} \quad u_{lhs}^S < u_l - u_r < u_{rhs}^S,$$
where $u_{lhs}^S = g_l^S(\max(p_l, p_r), p_l) + g_{sr}^S(\max(p_l, p_r), p_r)$ and $u_{rhs}^S = g_l^S(p_2, p_l) + g_{sr}^S(p_2, p_r)$. Here and below, $g_l^S(p, p_l)$ is $g_{gl}^S(p, p_l)$ if the fluid on the left-hand side is a gas, while it is $g_{wl}^S(p, p_l)$ if the fluid on the left-hand side is water.

Proof of Theorem 2.4. For the left shock wave, we have the following equality using the shock relationship:

(2.24)
$$u_l - u_I^e = g_l^S(p_I^e, p_l).$$

For the right elastic shock wave, we further have equality (2.16). The left and right shock waves give rise to $p_I^e > \max(p_l, p_r)$. Since the solid medium is under elastic compression for this type of solution, we have $p_2 > p_I^e$, which leads to the first inequality of conditions (2.23). The summation of (2.16) and (2.24) gives rise to a function of $p_I^e$:

(2.25)
$$f(p_I^e) := g_l^S(p_I^e, p_l) + g_{sr}^S(p_I^e, p_r) + u_r - u_l = 0.$$
The solution type $^{G/W}_{S}|^{SLD}_{S}$ requires (2.25) to have a unique root $p_I^e \in [\max(p_l, p_r), p_2]$. This leads to the second inequality of conditions (2.23) via a simple function analysis.

Theorem 2.5. If the solution type is $^{G/W}_{S}|^{SLD}_{SS}$ for Riemann problem (2.1), then the initial states $U_l$ and $U_r$ satisfy the conditions
(2.26)
$$p_2 > p_r \quad \text{and} \quad u_{lhs}^{SS} < u_l - u_2 < u_{rhs}^{SS},$$
where $u_{lhs}^{SS} = g_l^S(\max(p_2, p_l), p_l) + g_{sr}^S(\max(p_2, p_l), p_2)$ and $u_{rhs}^{SS} = g_l^S(p_3, p_l) + g_{sr}^S(p_3, p_2)$.

Proof of Theorem 2.5. From the left shock wave, we have equality (2.24). The double shock waves in the solid medium give rise to equalities (2.19) and (2.20), respectively. The summation of equalities (2.24) and (2.20) results in a function of $p_I^e$:

(2.27)
$$f(p_I^e) := g_l^S(p_I^e, p_l) + g_{sr}^S(p_I^e, p_2) + u_2 - u_l = 0.$$
Function (2.27) is required to have a unique root $p_I^e \in [\max(p_2, p_l), p_3]$ for the solution type $^{G/W}_{S}|^{SLD}_{SS}$. This leads to conditions (2.26) via a simple function analysis.

Theorem 2.6. If the solution type is $^{G/W}_{S}|^{SLD}_{S3}$ for Riemann problem (2.1), then the initial states $U_l$ and $U_r$ satisfy the condition

(2.28)
$$u_l - u_r > g_l^S(\hat p_3, p_l) + g_{sr}^S(\hat p_3, p_r), \quad \text{with} \quad \hat p_3 = \max(p_3, p_l, p_r).$$
Proof of Theorem 2.6. From the left shock wave, we have equality (2.24). The right strong plastic shock wave in the solid medium gives rise to equality (2.16). The summation of equalities (2.24) and (2.16) leads to function (2.25). Function (2.25) is required to have a unique root $p_I^e \in [\hat p_3, \infty)$ for the solution type $^{G/W}_{S}|^{SLD}_{S3}$. This leads to condition (2.28) via function analysis.

The abovementioned conditions listed in Theorems 2.1–2.6 are also sufficient for the respective solution types if $^{G/W}_{R}|^{SLD}_{S}$, $^{G/W}_{R}|^{SLD}_{SS}$, $^{G/W}_{R}|^{SLD}_{S3}$, $^{G/W}_{S}|^{SLD}_{S}$, $^{G/W}_{S}|^{SLD}_{SS}$, and $^{G/W}_{S}|^{SLD}_{S3}$ are assumed to be the only possible solution types of the Riemann problem (2.1) for the solid under compression.

3. The MGFM for the fluid-solid problem.

3.1. The GFM-based algorithm. In a GFM-based algorithm, the level set technique is usually employed to capture the moving interface. A band of 3 to 5 grid points is defined as ghost cells in the vicinity of the interface. At the ghost cells, ghost fluid and real fluid coexist. Once the ghost fluid nodes and ghost fluid status are defined for each medium, one employs one's favorite pure/single-medium numerical scheme/solver to solve for each medium, covering both the real fluid and ghost fluid grid nodes. By combining the solution for each medium according to the new interface location, one obtains the overall solution valid for the whole computational domain at the new time step. To better understand the GFM-based algorithm, we illustrate in Figure 3 how it is applied to solve the Riemann problem (2.1).

Fig. 3. The illustration of GFM treatment for the interface.

Assuming that the material interface is located between grid nodes i and i+1 at time $t = t^n$, we want to compute the flow field at the next time level $t = t^{n+1}$. The new location of the interface is first obtained by solving the level set equation. To compute the flow field for Medium1, grid nodes (i+1, i+2, or more nodes depending
on the accuracy of the scheme used) are defined as the ghost fluid nodes for Medium1. Here the ghost fluid status (i.e., $U_r^*$ as indicated in Figure 3) is also defined with the same EOS as for Medium1. In a similar way, the ghost fluid nodes (i, i−1, i−2, ...) and the ghost fluid status for Medium2 can be defined. By defining the ghost fluid status, the boundary conditions at the interface are implicitly imposed. After the ghost fluid states are defined, one carries out the computation from node 1 to the ghost fluid node i+1 for Medium1, and from ghost fluid node i to the end grid node for Medium2. Thus, there are respective solutions for the two media in the vicinity (nodes i and i+1) of the interface. According to the new location of the interface, one can obtain the final solution in the vicinity of the interface and thus over the whole computational domain. More specifically, the solution at the real fluid nodes beyond the vicinity (nodes i and i+1) of the interface is taken as the final solution at $t = t^{n+1}$ at these grid points. As for the solution at nodes i and i+1: if the new location of the interface is still between grid nodes i and i+1, the solution of Medium1 at node i and the solution of Medium2 at node i+1 are taken as the final solution at $t = t^{n+1}$; if the interface has moved to between grid nodes i−1 and i, the solution of Medium2 at both nodes i and i+1 is taken as the final solution at these two nodes; if the new location of the interface lies between nodes i+1 and i+2, the solution of Medium1 at both nodes i and i+1 is taken as the final solution at $t = t^{n+1}$.

When the GFM-based algorithm is employed to solve the Riemann problem (2.1), it thus essentially consists of solving two separate pure-medium Riemann problems (called GFM Riemann problems) in the respective media with the associated one-sided ghost fluid in each time step. One is in the left medium with the initial conditions

(3.1)
$$\frac{\partial U}{\partial t} + \frac{\partial F(U)}{\partial x} = 0, \qquad U|_{t=0} = \begin{cases} U_l, & x < x_0, \\ U_r^*, & x > x_0, \end{cases}$$

and it is solved from grid point 1 on the left end to the ghost points. The other is in the right medium with the initial conditions

(3.2)
$$\frac{\partial U}{\partial t} + \frac{\partial F(U)}{\partial x} = 0, \qquad U|_{t=0} = \begin{cases} U_l^*, & x < x_0, \\ U_r, & x > x_0, \end{cases}$$

and it is solved from the ghost points to the end point on the right. Hereafter, "∗" indicates the ghost fluid (status). By solving (3.1) and (3.2) and combining their solutions on the respective real fluid sides according to the new interface location, the final solution is obtained at the new time step.
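The one-step logic just described can be summarized in the following Python skeleton; `single_medium_step`, `define_ghost_states`, and `advect_level_set` are placeholders of ours standing in for the user's favorite single-medium solver, the interface treatment of section 3.2, and a level set advection routine.

```python
import numpy as np

def gfm_step(U1, U2, phi, dt, single_medium_step, define_ghost_states, advect_level_set):
    """One time step of a GFM-based algorithm on a 1D grid.

    U1, U2 : arrays of conservative states for Medium1 and Medium2
             (each valid on its own side of the interface plus ghost nodes).
    phi    : level set; the interface sits at the zero crossing of phi.
    """
    phi_new = advect_level_set(phi, dt)          # new interface location
    U1g, U2g = define_ghost_states(U1, U2, phi)  # e.g., via the ARPS of sec. 3.2
    U1_new = single_medium_step(U1g, dt)         # pure-medium solve, Medium1
    U2_new = single_medium_step(U2g, dt)         # pure-medium solve, Medium2
    # Combine: take each medium's solution on its own side of the new interface
    # (here Medium1 is assumed to occupy phi <= 0).
    U = np.where(phi_new[:, None] <= 0.0, U1_new, U2_new)
    return U, phi_new
```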
Nonconservation is the obvious shortcoming of the GFM-based algorithm. There have been attempts to make the GFM-based algorithm conservative [7, 21, 4]; however, an efficient and practically useful conservative GFM-based algorithm has yet to be developed. The other problem encountered with the GFM-based algorithm is that its performance depends on the definition of the ghost fluid status. In fact, it is absolutely necessary to take into consideration the influence of wave interaction at the material interface and the effect of material properties on the interfacial status in the definition of the ghost fluid status [20, 16].

3.2. The modified ghost fluid method. To develop a more universally applicable GFM, Liu, Khoo, and Yeo [20] proposed defining and solving a multimedium Riemann problem at the interface along the normal direction via two nonlinear characteristic equations intersecting at the material interface. The resultant GFM algorithm is termed the modified GFM or MGFM. Let us briefly revisit the approximate Riemann problem solver (ARPS), or implicit characteristic method, employed in the MGFM. There are two nonlinear characteristics intersecting at the interface; one stems from the left medium flow, while the other originates from the right medium flow. In association with system (2.1), these can be written as

(3.3)
$$\frac{dp_I}{dt} + \rho_{IL} c_{IL} \frac{du_I}{dt} = 0 \quad \text{along} \quad \frac{dx}{dt} = u_I + c_{IL},$$
$$\frac{dp_I}{dt} - \rho_{IR} c_{IR} \frac{du_I}{dt} = 0 \quad \text{along} \quad \frac{dx}{dt} = u_I - c_{IR},$$

where the subscripts "I," "IL," and "IR" refer to the interface, the left-hand side of the interface, and the right-hand side of the interface, respectively. $\rho_{IL}$ ($\rho_{IR}$) and $c_{IL}$ ($c_{IR}$) are the density and sound speed on the left- (right-) hand side of the interface; $u_I$ and $p_I$ are the velocity and pressure at the interface. The states $U_{IL}$ and $U_{IR}$ can be obtained via interpolation along the two nonlinear characteristic lines tracing back from the interface into the respective media. During the decomposition of the singularity, i.e., at the moment of a shock/jet impacting on the interface, $u_{IL}$ ($p_{IL}$) and $u_{IR}$ ($p_{IR}$) are not continuous across the interface. System (3.3), therefore, has to be specially solved to ensure the correct decomposition of the singularity. As such, the ARPS based on a double-shock structure [18] is employed to solve system (3.3). This leads to the following formulas when the ARPS is applied to Riemann problem (2.1):

(3.4)
$$\frac{p_I - p_l}{W_l} + (u_I - u_l) = 0, \qquad \frac{p_I - p_r}{W_r} - (u_I - u_r) = 0,$$

with $W_l = \sqrt{\frac{p_I - p_l}{1/\rho_l - 1/\rho_{g/w}(p_I)}}$ and $W_r = \sqrt{\frac{p_I - p_r}{1/\rho_r - 1/\rho_s(p_I)}}$, since $U_{IL} = U_l$ and $U_{IR} = U_r$. Here $\rho_{g/w}(p_I)$ is $\rho_g(p_I)$ if the left medium is a gas, while it is $\rho_w(p_I)$ if the left medium is water. It has been shown [18] that the interfacial status obtained via ARPS (3.4) approximates well the exact solution of the gas-gas or gas-water Riemann problem.
3.3. Definition of ghost fluid status under elastic-plastic deformation. The success of ARPS (3.4) as applied to the gas-gas or gas-water Riemann problem depends on there being at most one nonlinear Riemann wave admitted in each medium, with the said nonlinear wave connecting to the initial status. (The structure of at most two nonlinear Riemann waves is also the basis of other previous GFM algorithms and of many high resolution schemes.) On the other hand, there can be two nonlinear Riemann waves in the solid medium when the solid medium is under elastic-plastic compression. Physically, the influence of both the plastic and the elastic wave has to be taken into account to determine the interfacial status. In ARPS (3.4), the two nonlinear Riemann waves separated by the interface are assumed to connect to the initial states $U_l$ and $U_r$ in the respective media. One might already have observed in section 2 that, for the fluid-solid Riemann problem (2.1), there are also two nonlinear Riemann waves connecting the interface to the respective left and right media. However, these two nonlinear Riemann waves do not always connect to the initial states. More specifically, the nonlinear Riemann wave in the solid medium actually connects to the state behind the elastic wave when the solid medium is under elastic-plastic deformation. As a result, the influence of the elastic wave has to be taken into account first, and then the plastic wave, in order to correctly determine the interfacial status when solving system (3.3). To do so, $U_r$ must be replaced by $U_2$, the state behind the elastic wave at the limit of elasticity under compression. This leads to the following ARPS, used when the solid is under plastic deformation:

(3.5)
$$\frac{p_I - p_l}{W_l} + (u_I - u_l) = 0, \qquad \frac{p_I - p_2}{W_{r2}} - (u_I - u_2) = 0,$$

with $W_{r2} = \sqrt{\frac{p_I - p_2}{1/\rho_2 - 1/\rho_s(p_I)}}$. System (3.5) will be employed to predict the interfacial status when the solid medium is under elastic-plastic compression. For system (3.5), we have the following conclusion.

Theorem 3.1. If the solid medium is under elastic-plastic compression after the "diaphragm" is removed for the fluid-solid Riemann problem (2.1), with the initial conditions $U_l$ and $U_r$ on the left- and right-hand sides of the interface, respectively, then system (3.5) approximates the exact Riemann solver to accuracy $O(\bar p_I^e/\bar p_l - 1)^3$.

Proof of Theorem 3.1. Because the solid medium is under elastic-plastic compression, the possible solution types are $^{G/W}_{R}|^{SLD}_{SS}$ and $^{G/W}_{S}|^{SLD}_{SS}$. For the solution type $^{G/W}_{S}|^{SLD}_{SS}$, the following equalities hold, as discussed in section 2:
(3.6)
$$u_l - u_I^e = \sqrt{[p_I^e - p_l]\left[\frac{1}{\rho_l} - \frac{1}{\rho_{g/w}(p_I^e)}\right]},$$

(3.7)
$$u_I^e - u_2 = \sqrt{[p_I^e - p_2]\left[\frac{1}{\rho_2} - \frac{1}{\rho_s(p_I^e)}\right]}.$$
Equalities (3.6) and (3.7) lead exactly to the first and second equalities in system (3.5), respectively. Thus, system (3.5) provides the exact interface status for the solution type $^{G/W}_{S}|^{SLD}_{SS}$. As for the solution type $^{G/W}_{R}|^{SLD}_{SS}$, the left rarefaction wave gives rise to the equality

(3.8)
$$u_l - u_I^e = \frac{2c_l}{\gamma - 1}\left[\left(\frac{\bar p_I^e}{\bar p_l}\right)^{\frac{\gamma-1}{2\gamma}} - 1\right].$$
The right plastic shock wave leads to the satisfaction of equality (3.7). Via Taylor series expansion, (3.8) can then be expressed as

(3.9)
$$u_l - u_I^e = \frac{c_l}{\gamma}\left(\frac{\bar p_I^e}{\bar p_l} - 1\right)\left[1 - \frac{\gamma + 1}{4\gamma}\left(\frac{\bar p_I^e}{\bar p_l} - 1\right)\right] + O\left(\left(\frac{\bar p_I^e}{\bar p_l} - 1\right)^3\right).$$
Similarly, the first equation of system (3.5) can be expanded as

(3.10)
$$u_l - u_I = \frac{c_l}{\gamma}\left(\frac{\bar p_I}{\bar p_l} - 1\right)\left[1 - \frac{\gamma + 1}{4\gamma}\left(\frac{\bar p_I}{\bar p_l} - 1\right)\right] + O\left(\left(\frac{\bar p_I}{\bar p_l} - 1\right)^3\right)$$
by replacing $\rho_{g/w}(p_I)$ with $\rho_g(p_I)$ or $\rho_w(p_I)$ and carrying out the Taylor series expansion. Comparing (3.10) to (3.9), one deduces that Theorem 3.1 is true. This ends the proof of Theorem 3.1.

On the other hand, if the structure is under stand-alone elastic or strong plastic deformation, there is only one nonlinear Riemann wave (the elastic shock wave or the strong plastic shock wave) propagating in the solid medium, and ARPS (3.4) can be applied directly to predict the interface status. Thus the ARPS, applied to the various conditions to predict the fluid-structure interface status for the fluid-solid Riemann problem (2.1), can be expressed as
(3.11)
$$\frac{p_I - p_l}{W_l} + (u_I - u_l) = 0, \qquad \frac{p_I - p_{r/2}}{W_{r/2}} - (u_I - u_{r/2}) = 0,$$
with $W_l = \sqrt{\frac{p_I - p_l}{1/\rho_l - 1/\rho_{g/w}(p_I)}}$ and $W_{r/2} = \sqrt{\frac{p_I - p_{r/2}}{1/\rho_{r/2} - 1/\rho_s(p_I)}}$. Here, $u_{r/2}$, $p_{r/2}$, and $\rho_{r/2}$ are $u_2$, $p_2$, and $\rho_2$, respectively, if conditions (2.18) of Theorem 2.2 or conditions (2.26) of Theorem 2.5 are satisfied for $U_l$ and $U_r$ (i.e., the structure is under elastic-plastic compression); otherwise, $u_{r/2}$, $p_{r/2}$, and $\rho_{r/2}$ are $u_r$, $p_r$, and $\rho_r$, respectively. Once the interfacial velocity and pressure are obtained, the interfacial densities on the respective sides can be obtained as shown in [20].
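A compact way to solve (3.11) numerically is to eliminate $u_I$ and iterate on $p_I$, refreshing the two-shock impedances $W_l$ and $W_{r/2}$ from the current iterate. The sketch below does this by damped fixed-point iteration; the density laws `rho_fluid(p)` and `rho_solid(p)`, the elastic-plastic switch (done by the caller), and the under-relaxation are our own choices, not prescriptions from the paper.

```python
import math

def impedance(p_i, p0, rho0, rho_of_p):
    """Two-shock impedance W = sqrt((p_I - p0) / (1/rho0 - 1/rho(p_I)))."""
    num = p_i - p0
    den = 1.0 / rho0 - 1.0 / rho_of_p(p_i)
    return math.sqrt(max(num / den, 1e-30))

def solve_arps(ul, pl, rho_l, rho_fluid, ur2, pr2, rho_r2, rho_solid, n_iter=30):
    """Fixed-point iteration for the interface state of ARPS (3.11).

    (ur2, pr2, rho_r2) is either the initial solid state U_r or the
    elastic-limit state U_2, chosen by the caller via Theorems 2.2/2.5.
    """
    p_i = max(pl, pr2) * 1.01                # start slightly above both pressures
    for _ in range(n_iter):
        W_l = impedance(p_i, pl, rho_l, rho_fluid)
        W_r = impedance(p_i, pr2, rho_r2, rho_solid)
        # Eliminating u_I from the two equations of (3.11):
        p_new = (W_l * W_r * (ul - ur2) + pl * W_r + pr2 * W_l) / (W_l + W_r)
        p_i = 0.5 * (p_i + max(p_new, 1e-30))  # damped update for robustness
    u_i = ul - (p_i - pl) / W_l
    return u_i, p_i
```

Whether one uses fixed-point or Newton iteration here is an implementation choice; either drives the residuals of (3.11) to zero.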
3.4. The extension to multidimensions. Quite obviously, by employing the present technique in the normal direction, the present MGFM can easily be extended to multidimensions. One can follow exactly the way described in [20, 6]. Here we briefly state another, similar but simpler, way to do the extension. We define a band of 2 to 4 grid points via |φ| < ε in the vicinity of the interface. Here φ is the level set distance function, and ε is set to be about 3 min(Δx, Δy, Δz), where Δx, Δy, and Δz are the spatial step sizes in the respective x, y, and z directions. We divide the grid points in the band into three subsets, $\Omega_\tau^{Real}$, $\Omega_\tau^{GF-Near}$, and $\Omega_\tau^{GF-far}$. $\Omega_\tau^{Real}$ consists of the real fluid points in the band; $\Omega_\tau^{GF-Near}$ includes the ghost fluid points just next to the interface; $\Omega_\tau^{GF-far}$ consists of the other ghost fluid points in the band. Next, we need to define the ghost fluid status. We shall do so first for the points in $\Omega_\tau^{GF-Near}$, via solving a 1D fluid-solid Riemann problem along the normal direction.

For illustration, we consider a point (I, J, K) in $\Omega_\tau^{GF-Near}$. For convenience, we assume φ ≥ 0 holds for grid points in $\Omega_\tau^{Real}$, so that φ < 0 for the points in $\Omega_\tau^{GF-Near}$ and $\Omega_\tau^{GF-far}$. We denote the flow status at φ ≥ 0 as $U^+$ and at φ < 0 as $U^-$. The (real) flow state at (I, J, K) is denoted $U_{I,J,K}^-$, and $u_{I,J,K}^-$ is its associated velocity field. To construct a 1D fluid-solid Riemann problem at this point (I, J, K), we extend the flow field in $\Omega_\tau^{Real}$ to this point via extrapolation along the normal direction, or via solving $U_t + n \cdot \nabla U = 0$ to steady state [6] with the flow states fixed in $\Omega_\tau^{Real}$. We denote the calculated flow status at this point as $U_{I,J,K}^+$ and its velocity field as $u_{I,J,K}^+$. Therefore, both media have flow states at this point. We then use $U_{I,J,K}^+$ and $U_{I,J,K}^-$ to define and solve a 1D fluid-solid Riemann problem to obtain the ghost fluid state $U_{I,J,K}^*$ at this point. More specifically, by projecting the respective velocity fields onto the normal and tangential directions, i.e., $u_{I,J,K}^\pm = u_{nI,J,K}^\pm n + u_{\tau I,J,K}^\pm$, we can form the 1D fluid-solid Riemann problem with the initial states $U_{nI,J,K}^-$ and $U_{nI,J,K}^+$. That is, $U_{nI,J,K}^\pm = [\rho_{I,J,K}^\pm, \rho_{I,J,K}^\pm u_{nI,J,K}^\pm, E_{nI,J,K}^\pm]^T$ with $E_{nI,J,K}^\pm = \rho_{I,J,K}^\pm e_{I,J,K}^\pm + 0.5 \rho_{I,J,K}^\pm (u_{nI,J,K}^\pm)^2$. Next, we solve this Riemann problem using (3.11) to predict the pressure and normal velocity at this point. The density can also be obtained via the respective EOS. The predicted pressure and density are used to define the pressure and density of the ghost fluid status at this point. The ghost fluid velocity at this point is the vector sum of the predicted normal velocity and $u_{\tau I,J,K}^+$. Thus the ghost fluid status at this point is defined.

Once the ghost fluid status for all points in $\Omega_\tau^{GF-Near}$ is defined, the next step is to obtain the ghost fluid status for the other ghost fluid points in $\Omega_\tau^{GF-far}$. This is done via solving $U_t \pm n \cdot \nabla U = 0$ to steady state with the fluid states fixed at the grid nodes in both $\Omega_\tau^{Real}$ and $\Omega_\tau^{GF-Near}$: if φ is negative on the real fluid side, $U_t + n \cdot \nabla U = 0$ is solved; otherwise $U_t - n \cdot \nabla U = 0$ is solved. Solving $U_t \pm n \cdot \nabla U = 0$ to steady state is very fast with a first-order upwind scheme.
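The per-point construction for $\Omega_\tau^{GF-Near}$ can be sketched as follows, reusing the `solve_arps` routine above; the normal is taken from the level set gradient, and the helper names are again our own illustrative choices rather than the paper's code.

```python
import numpy as np

def ghost_state_near(U_plus, U_minus, n, solve_arps_for_states, eos_density):
    """Define the ghost fluid state at one point of the near-interface band.

    U_plus / U_minus : (rho, u_vec, p) for the extended real-fluid state and
                       the local state on the other side of the interface.
    n                : unit normal, e.g., grad(phi)/|grad(phi)|.
    """
    rho_p, u_p, p_p = U_plus
    rho_m, u_m, p_m = U_minus
    un_p, un_m = np.dot(u_p, n), np.dot(u_m, n)     # normal components
    ut_p = u_p - un_p * n                           # tangential part kept from '+'
    # 1D fluid-solid Riemann problem in the normal direction, solved by (3.11).
    u_I, p_I = solve_arps_for_states((rho_p, un_p, p_p), (rho_m, un_m, p_m))
    rho_ghost = eos_density(p_I)                    # density from the medium's EOS
    u_ghost = u_I * n + ut_p                        # predicted normal + tangential
    return rho_ghost, u_ghost, p_I
```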
3.5. The accuracy of the present MGFM. The utilization of a proper ghost fluid status increases not only the numerical stability but also the accuracy in the vicinity of the interface [20, 33, 16]. In fact, it has been shown [15] that there is a large leading error term for those GFM-based algorithms in which the real fluid velocity and/or pressure is employed directly to define the ghost fluid status. This large leading error leads to poor performance in the treatment of the interface when simulating a strong jet or shock wave impacting on a gas-gas interface. On the other hand, this large leading error is essentially eliminated in the MGFM [15], so that the boundary conditions at the moving interface are imposed to higher order "accuracy" regardless of the flow situation. We shall show that higher order "accuracy" is achieved in the imposition and treatment of the interface when the MGFM with ARPS (3.11) is used to treat the fluid-structure interface; this is so even in the presence of a strong shock or jet impacting the interface. We have the following conclusion.

Theorem 3.2. The MGFM-based algorithm with ARPS (3.11) used to predict the ghost fluid status is higher order "accurate," with an error estimate of $O(|\bar p_I^e/\bar p_l - 1|^3)$, when applied to treat the interface for Riemann problem (2.1) with the structure under compression.

Here, the error estimate is not expressed in the classical form $O(\Delta x)^d$ because the solution of Riemann problem (2.1) can be discontinuous in the vicinity of the interface. If the interface is in balance (that is, the pressure and normal velocity are continuous across the interface), one has $p_l = p_I^e + O(\Delta x)$ in general, as $p_l$ (i.e., $p_{IL}$) is obtained via interpolation along the nonlinear characteristic lines tracing back from the interface into the left medium. As a result, the error expression returns to the classical form $O(\Delta x)^d$, and the "order of accuracy" assumes its classical meaning.

Because a GFM-based algorithm obtains the solution via essentially solving two pure-
medium GFM Riemann problems (3.1) and (3.2), Theorem 3.2 will be valid if it can be shown that both GFM Riemann problems (3.1) and (3.2) are capable of providing a higher order "accurate" numerical solution relative to the exact solution at the interface when ARPS (3.11) is used to define the ghost fluid status. Two steps are used to prove Theorem 3.2. The first is to show that the interface status predicted via ARPS (3.11) is a higher order "accurate" estimate of the exact solution at the interface. The second is to show that both GFM Riemann problems (3.1) and (3.2) provide a higher order "accurate" numerical solution relative to the exact solution at the interface when the predicted interface status is used to define the ghost fluid status. For the first step, we have Lemma 3.1, applicable to ARPS (3.11).

Lemma 3.1. For ARPS (3.11) as applied to Riemann problem (2.1) with the structure under compression, the following error estimates hold:

(1) $$u_I - u_I^e = O\left(\left|\frac{\bar p_I^e}{\bar p_l} - 1\right|^3\right),$$

(2) $$p_I - p_I^e = O\left(\left|\frac{\bar p_I^e}{\bar p_l} - 1\right|^3\right).$$

Here, $u_I$ and $p_I$ are calculated with ARPS (3.11). Lemma 3.1 is easily shown to be true by following the same procedure as in the proof of Theorem 3.1.

Lemma 3.2. Both GFM Riemann problems (3.1) and (3.2) provide a solution which is the same as the exact solution of Riemann problem (2.1) on the respective real fluid sides, with the correct interface location, if the exact interface status is used to define the ghost fluid status. That is to say, GFM Riemann problems (3.1) and (3.2) with the respective initial conditions (3.12) and (3.13),

(3.12) $$U|_{t=0} = \begin{cases} U_l, & x < x_0, \\ U_{IL}^e, & x > x_0, \end{cases}$$

(3.13) $$U|_{t=0} = \begin{cases} U_{IR}^e, & x < x_0, \\ U_r, & x > x_0, \end{cases}$$
provide the same solution on the respective real fluid sides as that of the original Riemann problem (2.1). Here, $U_{IL}^e = [\rho_{IL}^e, \rho_{IL}^e u_I^e, E_{IL}^e]^T$ and $U_{IR}^e = [\rho_{IR}^e, \rho_{IR}^e u_I^e, E_{IR}^e]^T$, where $\rho_{IL}^e$ and $\rho_{IR}^e$ are the exact densities of the respective left and right media at the interface when the "diaphragm" is removed for the Riemann problem, and $E_{IL}^e$ and $E_{IR}^e$ are the respective associated exact total energies on each side of the interface, with the exact interface pressure $p_I^e$ used. Lemma 3.2 is obviously true, as the solution of a Riemann problem of present interest is uniquely determined by the interface status.

Lemma 3.3. If GFM Riemann problems (3.1) and (3.2) are well-posed, the following error estimates hold for the respective GFM Riemann problems (3.1) and (3.2) using the MGFM with ARPS (3.11) to define the ghost fluid status:

(1) $$u_I^A - u_I^e = O\left(\left|\frac{\bar p_I^e}{\bar p_l} - 1\right|^3\right),$$

(2) $$p_I^A - p_I^e = O\left(\left|\frac{\bar p_I^e}{\bar p_l} - 1\right|^3\right),$$
(3) $$u_I^B - u_I^e = O\left(\left|\frac{\bar p_I^e}{\bar p_l} - 1\right|^3\right),$$

(4) $$p_I^B - p_I^e = O\left(\left|\frac{\bar p_I^e}{\bar p_l} - 1\right|^3\right).$$
Here, $u_I^A$ (resp., $u_I^B$) and $p_I^A$ (resp., $p_I^B$) are the exact interfacial velocity and pressure of the GFM Riemann problem (3.1) (resp., (3.2)).

Proof of Lemma 3.3. From Lemma 3.1, we have $U_r^* = U_{IL}^e + O(|\bar p_I^e/\bar p_l - 1|^3)$ for GFM Riemann problem (3.1). Thus, the solution of problem (3.1) is actually the solution with initial condition (3.12) under a perturbation of magnitude $O(|\bar p_I^e/\bar p_l - 1|^3)$. This leads to conclusions (1) and (2) of Lemma 3.3 by Lemma 3.2 and the well-posedness of GFM Riemann problem (3.1). Conclusions (3) and (4) can be proved in a similar way. This ends the proof of Lemma 3.3.

Lemma 3.3 leads directly to Theorem 3.2. One may note that the well-posedness assumption can be removed, and an error estimate one order lower in Lemma 3.3 can be obtained, by enumerating and checking every possible type of solution of GFM Riemann problems (3.1) and (3.2) and following steps similar to those used in [15]. Theorem 3.2 also implies that the conservation error is well limited. In fact, we have the following conclusion.

Theorem 3.3. The MGFM-based algorithm with ARPS (3.11) used to predict the ghost fluid status has a local conservation error of $O(|\bar p_I^e/\bar p_l - 1|^3)$ when applied to treat the interface for Riemann problem (2.1) with the structure under compression.

Proof of Theorem 3.3. We assume that the interface is located in $[x_A, x_B]$ and that $x_I$ denotes the interface position. Thus $[x_A, x_I]$ is occupied by the left medium, while $[x_I, x_B]$ is occupied by the right medium. We denote by RHSL(t), RHSR(t), and RHST(t) the conservation errors for the medium on the left-hand side of the interface, the medium on the right-hand side, and the whole computational domain, respectively, as follows:
(3.14)
$$RHSL(t) := \frac{1}{\Delta t}\left[\int_{x_A}^{x_I^{n+1}} U^{n+1}\,dx - \int_{x_A}^{x_I^{n}} U^{n}\,dx + \int_{t^n}^{t^{n+1}} (F_{IL} - u_I U_{IL} - F_A)\,dt\right],$$

(3.15)
$$RHSR(t) := \frac{1}{\Delta t}\left[\int_{x_I^{n+1}}^{x_B} U^{n+1}\,dx - \int_{x_I^{n}}^{x_B} U^{n}\,dx + \int_{t^n}^{t^{n+1}} (F_B - F_{IR} + u_I U_{IR})\,dt\right],$$

(3.16)
$$RHST(t) := \frac{1}{\Delta t}\left[\int_{x_A}^{x_B} U^{n+1}\,dx - \int_{x_A}^{x_B} U^{n}\,dx + \int_{t^n}^{t^{n+1}} (F_B - F_A)\,dt\right];$$
this is carried out via applying the integral conservation law to the GFM Riemann problem (3.1) over the interval $[x_A, x_I]$, to the GFM Riemann problem (3.2) over the interval $[x_I, x_B]$, and to the Riemann problem (2.1) over the interval $[x_A, x_B]$, from time $t^n$ to $t^{n+1}$, respectively. $F_A$ and $F_B$ are the fluxes at $x_A$ and $x_B$, respectively. $F_{IL}$ and $F_{IR}$ are the fluxes on the respective left- and right-hand sides of the interface. We then have

(3.17)
$$F_{IL} - u_I U_{IL} = [0, p_I^A, u_I^A p_I^A]^T,$$

(3.18)
$$F_{IR} - u_I U_{IR} = [0, p_I^B, u_I^B p_I^B]^T.$$
Theoretically, we should have exactly

(3.19)
$$F_{IL} - u_I U_{IL} = [0, p_I^e, u_I^e p_I^e]^T,$$

(3.20)
$$F_{IR} - u_I U_{IR} = [0, p_I^e, u_I^e p_I^e]^T,$$
with RHSL(t) = 0 and RHSR(t) = 0, and the summation of (3.14) and (3.15) leads to (3.16) with RHST(t) = 0 because of the conservation laws. According to Lemma 3.3, we obviously have $|\mathrm{RHSL}(t)| = O(|\bar{p}_{I}^{e}/\bar{p}_{l} - 1|)^{3}$ and $|\mathrm{RHSR}(t)| = O(|\bar{p}_{I}^{e}/\bar{p}_{l} - 1|)^{3}$. Summing the conservation errors attributed to the left and right media, in which the interfacial flux terms cancel, leads to the total conservation error $|\mathrm{RHST}(t)| = O(|\bar{p}_{I}^{e}/\bar{p}_{l} - 1|)^{3}$. This ends the proof of Theorem 3.3.

The conservation laws require that (3.14), (3.15), and thus (3.16) always be equal to zero. A conservative Eulerian method for multimedium flow can easily maintain (3.16) in general but not (3.14) and (3.15), owing to numerical smearing at the interface. Theorem 3.3 shows that the MGFM, although strictly nonconservative, has a well-limited conservation error for each medium.

Numerical tests show that ARPS (3.11) can provide a very accurate interface status in various flow situations. To verify Theorem 3.1 and thus Theorem 3.2, we employ ARPS (3.11) to predict the interfacial status for a series of water-steel Riemann problems of types $R^{G/W}|S_{S}^{SLD}$ and $R^{G/W}|S_{SS}^{SLD}$ (for $S^{G/W}|S_{S}^{SLD}$ and $S^{G/W}|S_{SS}^{SLD}$, ARPS (3.11) provides the exact interfacial status, and hence no further action is required), where the steel is assumed to be under one atmosphere initially and the pressure in the water changes gradually from one atmosphere to 1 Mbar. Figures 4(a) and 4(b) show the interfacial velocity and pressure predicted by ARPS (3.11) in comparison with the exact solution. The predicted results concur very well with the analytical solution, with relative errors of less than 1%.
Fig. 5. Comparison to the analytical solution for Case 1 at t = 7.17E-4: (a) velocity; (b) pressure; (c) density.
4. Numerical examples. In this section, we shall further test the integrated MGFM with ARPS (3.11) used to predict the ghost fluid status when the solid is under elastic-plastic deformation. All the computations are carried out via the MUSCL scheme [28, 31]. The CFL number is set to 0.9 over 401 uniform mesh points in the domain [0, 1] for all 1D examples. All parameters in the computation are nondimensional except for the multidimensional computation. These problems are very challenging, and no meaningful results are obtained using the original GFM [6]; hence no comparison is made in the following.

Case 1. Gas-steel Riemann problem. The solution type of this case is $R^{G}|S_{SS}^{SLD}$. The initial conditions are $u_l = 0.0$, $p_l = 100000.0$, $\rho_l = 3.0$, $\gamma_g = 2.0$; $u_r = 0.0$, $p_r = 1.0$, $\rho_r = 7.8$. The initial location of the interface is at 0.4. For better imposition of the boundary conditions at the interface, the technique developed in [33] is applied; that is, the real fluid status just next to the interface is also redefined using the predicted interface status. We ran the computation to a final time of 7.17E-4. Figures 5(a)–5(c) show the numerical results obtained by the present MGFM in comparison with the analytical solution. The steel is under elastic-plastic deformation. Both the trailing plastic and the leading elastic shock waves are captured correctly.

Case 2. Water-steel Riemann problem. The solution type of this case is a rarefaction wave in the water and both elastic and plastic shock waves in the solid. The initial conditions are $u_l = 0.0$, $p_l = 80000.0$, $\rho_l = 1.607$; $u_r = 0.0$, $p_r = 1.0$, $\rho_r = 7.8$.
Fig. 6. Comparison to the analytical solution for Case 2 at t = 6.791E-4: (a) velocity; (b) pressure; (c) density.
The initial location of the interface is at 0.5. The computation is run to a final time of 6.791E-4. Figures 6(a)–6(c) show the numerical results obtained by the present MGFM, compared to the analytical solution for the respective velocity, pressure, and density. Both the elastic and plastic shock waves are captured correctly. In all, the numerical results by the present MGFM compare well with the exact solution.

Case 3. Water-steel Riemann problem. The solution type of this case is $S^{W}|S_{SS}^{SLD}$. The initial conditions are $u_l = 50.0$, $p_l = 50000.0$, $\rho_l = 1.507$; $u_r = 0.0$, $p_r = 1.0$, $\rho_r = 7.8$. The initial location of the interface is at 0.5. The computation is run to a final time of 6.791E-4. The numerical results are compared with the analytical solution in Figures 7(a)–7(c) for velocity, pressure, and density, respectively. The numerical results compare very well with the analytical solution.

Case 4. Steel-steel Riemann problem. The technique developed in section 3 can easily be applied to solid-solid compressible problems. This is a problem of two steel rods impacting each other, resulting in both steel rods being under elastic-plastic compression. The initial conditions are $u_l = 10.0$, $p_l = 1.0$, $\rho_l = 7.8$; $u_r = -5.0$, $p_r = 1.0$, $\rho_r = 7.8$. For this problem, there is a leading elastic shock and a trailing plastic shock in each rod. Figures 8(a)–8(c) show the numerical results obtained by the present MGFM, compared to the analytical solution for velocity, pressure, and density, respectively. The numerical results compare very reasonably with the analysis.
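For concreteness, the 1D setup shared by Cases 1–4 can be sketched as follows. This is a hypothetical illustration using our own variable names, built only from the parameters stated in the text (401 uniform mesh points on [0, 1], CFL number 0.9, and the Case 2 initial data with the interface at 0.5); it is not the authors' code.

% Hypothetical sketch of the Case 2 initial data on the stated mesh:
N  = 401;
x  = linspace(0,1,N);
x0 = 0.5;                               % initial interface location
left = (x < x0);                        % water occupies the left of the interface
rho = 1.607*left + 7.8*(~left);         % densities (water | steel)
u   = zeros(1,N);                       % both media initially at rest
p   = 8.0e4*left + 1.0*(~left);         % pressures (nondimensional)
CFL = 0.9;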
Fig. 7. Comparison to the analytical solution for Case 3 at t = 6.791E-4: (a) velocity; (b) pressure; (c) density.
Case 5. This is a two-dimensional underwater explosion near a planar steel wall. The initial conditions are given as follows. An air cylinder of unit radius is located at the origin (0.0, 0.0), and the initial flow parameters for the explosion bubble are $\rho_g = 3860\,\mathrm{kg/m^3}$, $p_g = 100000\,\mathrm{bar}$, $u_g = 0\,\mathrm{m/s}$, $v_g = 0\,\mathrm{m/s}$, and $\gamma_g = 2$; the initial flow conditions for the water are $\rho_l = 1000\,\mathrm{kg/m^3}$, $p_l = 1\,\mathrm{bar}$, $u_l = 0\,\mathrm{m/s}$, and $v_l = 0\,\mathrm{m/s}$; the initial conditions for the steel wall are $\rho_s = 7800\,\mathrm{kg/m^3}$, $p_s = 1\,\mathrm{bar}$, $u_s = 0\,\mathrm{m/s}$, and $v_s = 0\,\mathrm{m/s}$. The computational domain is the rectangular region $[-6\,\mathrm{m}, 6\,\mathrm{m}] \times [-6\,\mathrm{m}, 6\,\mathrm{m}]$, and the planar wall is located at the straight line $y = 3\,\mathrm{m}$ with a thickness of 3 m. A total of $361 \times 361$ uniform grid points are distributed in the computational domain, and the CFL number is set to 0.45. Nonreflective boundary conditions are used for all the outside boundaries.

In this problem, a strong underwater shock is generated and soon impacts the wall, with the incident shock partly reflected from the wall surface and partly transmitted into the wall. The underwater shock under the present conditions is so strong that the wall undergoes elastic-plastic deformation, resulting in a leading elastic wave and a trailing plastic wave propagating simultaneously in the wall structure. Due to the large sound speed of the steel, the leading elastic shock wave inside the structure travels faster than the incident underwater shock, leading to the formation of a precursive wave propagating along the structure surface, which soon travels outside the computational domain. Later on, the reflected wave from the wall interacts with the expanding explosion bubble surface.
Fig. 8. Comparison to the analytical solution for Case 4 at t = 6.751E-4: (a) velocity; (b) pressure; (c) density.
As a result of the shock-bubble interaction, a rarefaction wave [19] forms and propagates towards the structure. The interaction of the rarefaction wave with the deforming structure causes a low-pressure region to form and eventually leads to the occurrence of cavitation next to the structure.

Figures 9(a)–9(c) show the pressure contours at three early stages. Shock-solid interactions and wave propagation through the water-solid interface can be observed from these pressure contours. More specifically, Figure 9(a) shows the pressure contours at t = 0.085ms. Due to the strong impact of the underwater shock, both elastic and plastic shock waves are observed in the wall. At time t = 0.10ms, as shown in Figure 9(b), the reflected shock from the wall has just reached the explosion bubble. In the meantime, the elastic shock wave has clearly run ahead of the plastic shock wave in the wall structure. The interaction of the reflected shock wave with the explosion bubble results in a strong rarefaction wave moving towards and arriving at the planar wall at about t = 0.125ms (see Figure 9(c)). Because the sound speed of the steel is far greater than that of the water, the leading elastic shock wave, with decreasing strength, propagates faster than the incident underwater shock, leading to the formation of a precursor wave along the structure surface, as shown in Figure 9(c) at t = 0.125ms; the regular reflection of the (original) incident underwater shock on the wall surface has also evolved into the Mach reflection at this moment.
Fig. 9. Pressure contours for Case 5: (a) t = 0.085ms; (b) t = 0.100ms; (c) t = 0.125ms.
As time progresses, the pressure next to the structure falls below the saturated vapor pressure, leading to the appearance of cavitation. The appearance of cavitation causes difficulties in the definition of the ghost fluid status, and special techniques have to be developed; the present computation was stopped before cavitation occurs.

5. Conclusions. The elastic-plastic deformation of a solid leads to two nonlinear Riemann waves propagating in the solid medium. To understand the influence of material elastic-plastic deformation on the fluid-structure interfacial status, the fluid-solid Riemann problem was first analyzed and discussed. The analysis carried out on the fluid-solid Riemann problem led to the suggestion of an ARPS to take into account the influence of material plastic deformation. This resulted in the further development of the MGFM in this work. Theoretical analysis and numerical tests showed that the MGFM with the proposed approximate Riemann solver can work effectively and efficiently.
REFERENCES

[1] R. Abgrall, How to prevent pressure oscillations in multicomponent flow calculations: A quasi-conservative approach, J. Comput. Phys., 125 (1996), pp. 150–160.
[2] R. Abgrall and S. Karni, Computations of compressible multifluids, J. Comput. Phys., 169 (2001), pp. 594–623.
[3] T. D. Aslam, A level set algorithm for tracking discontinuities in hyperbolic conservation laws II: Systems of equations, J. Sci. Comput., 19 (2003), pp. 37–62.
[4] D. A. Bailey, P. K. Sweby, and P. Glaister, A ghost fluid, moving finite volume plus continuous remap method for compressible Euler flow, Internat. J. Numer. Methods Fluids, 47 (2005), pp. 833–840.
[5] R. P. Fedkiw, Coupling an Eulerian fluid calculation to a Lagrangian solid calculation with the ghost fluid method, J. Comput. Phys., 175 (2002), pp. 200–224.
[6] R. P. Fedkiw, T. Aslam, B. Merriman, and S. Osher, A non-oscillatory Eulerian approach to interfaces in multimaterial flows (the ghost fluid method), J. Comput. Phys., 152 (1999), pp. 457–492.
[7] J. Glimm, X. L. Li, Y. Liu, and N. Zhao, Conservative front tracking and level set algorithms, Proc. Natl. Acad. Sci. USA, 98 (2001), pp. 14198–14201.
[8] A. Harten, High resolution schemes for hyperbolic conservation laws, J. Comput. Phys., 49 (1983), pp. 357–393.
[9] A. Harten, On a class of high resolution total-variation-stable finite-difference schemes, SIAM J. Numer. Anal., 21 (1984), pp. 1–23.
[10] A. Harten, B. Engquist, S. Osher, and S. R. Chakravarthy, Uniformly high-order accurate essentially nonoscillatory schemes. III, J. Comput. Phys., 71 (1987), pp. 231–303.
[11] A. Harten and S. Osher, Uniformly high-order accurate nonoscillatory schemes. I, SIAM J. Numer. Anal., 24 (1987), pp. 279–309.
[12] S. Karni, Multicomponent flow calculations by a consistent primitive algorithm, J. Comput. Phys., 112 (1994), pp. 31–43.
[13] A. G. Kulikovskii, N. V. Pogorelov, and A. Yu. Semenov, Mathematical Aspects of Numerical Solution of Hyperbolic Systems, Chapman & Hall/CRC Monogr. Surv. Pure Appl. Math. 118, Chapman & Hall/CRC, Boca Raton, FL, 2002.
[14] O. Le Metayer, J. Massoni, and R. Saurel, Modelling evaporation fronts with reactive Riemann solvers, J. Comput. Phys., 205 (2005), pp. 567–610.
[15] T. G. Liu and B. C. Khoo, The accuracy of the modified ghost fluid method for gas-gas Riemann problem, Appl. Numer. Math., 57 (2007), pp. 712–733.
[16] T. G. Liu, B. C. Khoo, and C. W. Wang, The ghost fluid method for gas-water simulation, J. Comput. Phys., 204 (2005), pp. 193–221.
[17] T. G. Liu, B. C. Khoo, and W. F. Xie, The modified ghost fluid method as applied to extreme fluid-structure interaction in the presence of cavitation, Commun. Comput. Phys., 1 (2006), pp. 898–919.
[18] T. G. Liu, B. C. Khoo, and K. S. Yeo, The simulation of compressible multi-medium flow. I: A new methodology with test applications to 1D gas-gas and gas-water cases, Comput. & Fluids, 30 (2001), pp. 291–314.
[19] T. G. Liu, B. C. Khoo, and K. S. Yeo, The simulation of compressible multi-medium flow. II: Applications to 2D underwater shock refraction, Comput. & Fluids, 30 (2001), pp. 315–337.
[20] T. G. Liu, B. C. Khoo, and K. S. Yeo, Ghost fluid method for strong shock impacting on material interface, J. Comput. Phys., 190 (2003), pp. 651–681.
[21] D. Q. Nguyen, F. Gibou, and R. Fedkiw, A fully conservative ghost fluid method and stiff detonation waves, in The 12th International Detonation Symposium, San Diego, CA, 2002.
[22] R. R. Nourgaliev, T. N. Dinh, and T. G. Theofanous, Adaptive characteristics-based matching for compressible multifluid dynamics, J. Comput. Phys., 213 (2006), pp. 500–529.
[23] R. Saurel and R. Abgrall, A multiphase Godunov method for compressible multifluid and multiphase flows, J. Comput. Phys., 150 (1999), pp. 425–467.
[24] C. W. Shu and S. Osher, Efficient implementation of essentially nonoscillatory shock-capturing schemes, J. Comput. Phys., 77 (1988), pp. 439–471.
[25] K. M. Shyue, An efficient shock-capturing algorithm for compressible multicomponent problems, J. Comput. Phys., 142 (1998), pp. 208–242.
[26] J. Smoller, Shock Waves and Reaction-Diffusion Equations, Springer-Verlag, New York, 1983.
[27] H. S. Tang and F. Sotiropoulos, A second-order Godunov method for wave problems in coupled solid-water-gas systems, J. Comput. Phys., 151 (1999), pp. 790–815.
[28] E. F. Toro, Riemann Solvers and Numerical Methods for Fluid Dynamics: A Practical Introduction, Springer-Verlag, New York, 1997.
[29] M. B. Tyndall, Numerical modelling of shocks in solids with elastic-plastic conditions, Shock Waves, 3 (1993), pp. 55–66.
[30] E. H. van Brummelen and B. Koren, A pressure-invariant conservative Godunov-type method for barotropic two-fluid flows, J. Comput. Phys., 185 (2003), pp. 289–308.
[31] B. van Leer, Towards the ultimate conservative difference scheme. III: Upstream centered finite-difference schemes for ideal compressible flow, J. Comput. Phys., 23 (1977), pp. 263–275.
[32] D. C. Wallace, Equation of state from weak shocks in solids, Phys. Rev. B (3), 22 (1980), pp. 1495–1502.
[33] C. W. Wang, T. G. Liu, and B. C. Khoo, A real ghost fluid method for the simulation of multimedium compressible flow, SIAM J. Sci. Comput., 28 (2006), pp. 278–302.
[34] M. L. Wilkins, Calculation of elastic-plastic flow, Meth. Comput. Phys., 3 (1964), pp. 211–263.
[35] M. L. Wilkins, Computer Simulation of Dynamic Phenomena, Springer-Verlag, Berlin, 1999.
© 2008 Society for Industrial and Applied Mathematics
SIAM J. SCI. COMPUT. Vol. 30, No. 3, pp. 1131–1155
LOW-DIMENSIONAL POLYTOPE APPROXIMATION AND ITS APPLICATIONS TO NONNEGATIVE MATRIX FACTORIZATION∗

MOODY T. CHU† AND MATTHEW M. LIN†

Abstract. In this study, nonnegative matrix factorization is recast as the problem of approximating a polytope on the probability simplex by another polytope with fewer facets. Working on the probability simplex has the advantage that data are limited to a compact set with a known boundary, making it easier to trace the approximation procedure. In particular, the supporting hyperplane that separates a point from a disjoint polytope, a fact asserted by the Hahn–Banach theorem, can be calculated in finitely many steps. This approach leads to a convenient way of computing the proximity map which, in contrast to most existing algorithms where only an approximate map is used, finds the unique and global minimum per iteration. This paper sets up a theoretical framework, outlines a numerical algorithm, and suggests an effective implementation. Testing results strongly evidence that this approach obtains a better low rank nonnegative matrix approximation in fewer steps than conventional methods.

Key words. nonnegative matrix factorization, polytope approximation, probability simplex, supporting hyperplane, Hahn–Banach theorem

AMS subject classifications. 41A36, 65D18, 46A22, 90C25, 65F30

DOI. 10.1137/070680436
1. Introduction. The problem of nonnegative matrix factorization (NMF) arises in a large variety of disciplines in the sciences and engineering. Its wide range of important applications, such as text mining, cheminformatics, factor retrieval, image articulation, dimension reduction, and so on, has attracted considerable research effort. Many different kinds of NMF techniques have been proposed in the literature, notably the popular Lee and Seung iterative update algorithm [17, 18]. Successful applications to various models with nonnegative data values abound; we can hardly be exhaustive and suggest [6, 11, 12, 13, 16, 20, 19, 21, 22, 24] and the references contained therein as a partial list of interesting work. The review article [27] contains many other, more recent applications and references. The basic idea behind NMF is the typical linear model

(1.1)
Y = AF,
where $Y = [y_{ij}] \in \mathbb{R}^{m\times n}$ denotes the matrix of "observed" data, with $y_{ij}$ representing, in a broad sense, the score obtained by entity j on variable i; $A = [a_{ik}] \in \mathbb{R}^{m\times p}$ is a matrix, with $a_{ik}$ representing the loading of variable i from factor k or, equivalently, the influence of factor k on variable i; and $F = [f_{kj}] \in \mathbb{R}^{p\times n}$, with $f_{kj}$ denoting the score of factor k by entity j or the response of entity j to factor k. The particular emphasis in NMF is that all entries of the matrices are required to be nonnegative. To provide readers with a few motives for the study of NMF, we briefly outline two applications below.

∗ Received by the editors January 18, 2007; accepted for publication (in revised form) July 11, 2007; published electronically March 21, 2008. http://www.siam.org/journals/sisc/30-3/68043.html
† Department of Mathematics, North Carolina State University, Raleigh, NC 27695-8205 ([email protected], [email protected]). The research of the first author was supported in part by the National Science Foundation under grants CCR-0204157 and DMS-0505880 and by NIH Roadmap for Medical Research grant 1 P20 HG003900-01. The research of the second author was supported in part by the National Science Foundation under grant DMS-0505880.
A good survey of other interesting NMF applications can be found in [27, section 6.2]. The receptor model is an observational technique commonly employed by the air pollution research community which makes use of the ambient data and source profile data to apportion sources or source categories [10, 11, 15, 26]. The fundamental principle in this model is the conservation of masses. Assume that there are p sources which contribute m chemical species to n samples. The mass balance equation within this system can be expressed via the relationship

(1.2) $y_{ij} = \sum_{k=1}^{p} a_{ik}f_{kj},$
where $y_{ij}$ is the elemental concentration of the ith chemical measured in the jth sample, $a_{ik}$ is the gravimetric concentration of the ith chemical in the kth source, and $f_{kj}$ is the airborne mass concentration that the kth source has contributed to the jth sample. In a typical scenario, only the values of $y_{ij}$ are observable; neither the sources nor the compositions of the local particulate emissions are known or measured. Thus, a critical question is to estimate the number p, the compositions $a_{ik}$, and the contributions $f_{kj}$ of the sources. For physical feasibility, the source compositions $a_{ik}$ and the source contributions $f_{kj}$ must all be nonnegative. The identification and apportionment therefore become a nonnegative matrix factorization problem for Y.

In another application, NMF has been suggested as a way to identify and classify intrinsic "parts" that make up the object being imaged by multiple observations. More specifically, each column $y_j$ of a nonnegative matrix Y now represents m pixel values of one image. The columns $a_k$ of A are basis elements in $\mathbb{R}^m$. The columns of F, belonging to $\mathbb{R}^p$, can be thought of as coefficient sequences representing the n images in the basis elements. In other words, the relationship
(1.3) $y_j = \sum_{k=1}^{p} a_k f_{kj}$
can be thought of as saying that there are standard parts $a_k$ in a variety of positions and that each image $y_j$ is made by superposing these parts together in some way. Those parts, being images themselves, are necessarily nonnegative. The superposition coefficients, each part being either present or absent, are also necessarily nonnegative. In either case above, and in many other contexts of application, we see that the p factors, interpreted as either the sources or the basis elements, play a vital role. In practice, there is a need to determine as few factors as possible; hence, a low rank nonnegative matrix approximation of the data matrix Y arises. The mathematical problem can be stated as follows: (NMF) Given a nonnegative matrix $Y \in \mathbb{R}^{m\times n}$ and a positive integer $p < \min\{m, n\}$, find nonnegative matrices $U \in \mathbb{R}^{m\times p}$ and $V \in \mathbb{R}^{p\times n}$ so as to minimize the functional
(1.4) $f(U,V) := \frac{1}{2}\|Y - UV\|_F^2.$
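In MATLAB, the functional (1.4) is a one-liner; the sketch below uses our own names and random data purely for illustration.

% The NMF objective (1.4):
m = 6; n = 8; p = 2;
Y = rand(m,n); U = rand(m,p); V = rand(p,n);
f = 0.5*norm(Y - U*V,'fro')^2;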
It has been argued that NMF can be interpreted geometrically as the problem of finding a simplicial cone which contains a cloud of data points located in the first orthant [6]. Further extending that thought, this paper recasts NMF as the problem of
approximating a polytope on the probability simplex by another polytope which has fewer facets. Our basic idea follows from computational geometry, where a complex surface is to be approximated by a simpler one. In our particular setting, the complex surface refers to the boundary of a convex polytope of n points in $\mathbb{R}^m$, whereas the simpler surface refers to that of p points in the same space. A unique characteristic in our approach is that the approximation takes place on the probability simplex. We shall exploit the fact that the probability simplex, being a compact set with a well-distinguishable boundary, makes the optimization procedure easier to manage. For convenience, we denote henceforth a nonnegative matrix U by the notation $U \succeq 0$.

This paper actually deals with two related but independent problems. The first problem considers the polytope approximation only on the probability simplex. The underlying geometry is easy to understand, but the problem is of interest in itself. More importantly, with slight modifications the idea lends its geometric characteristics naturally to the more difficult NMF problem. For both problems a common feature in our approach is alternating optimization. That is, by rewriting the equivalent formulations (at the global minimization point)

(1.5) $\min_{U,V\succeq 0}\|Y - UV\|_F^2 = \min_{U\succeq 0}\Big(\min_{V\succeq 0}\|Y - UV\|_F^2\Big)$

(1.6) $\phantom{\min_{U,V\succeq 0}\|Y - UV\|_F^2} = \min_{V\succeq 0}\Big(\min_{U\succeq 0}\|Y - UV\|_F^2\Big),$
we solve $\min_{V\succeq 0}\|Y - U_0V\|_F^2$ from an initial matrix $U_0$ to find the solution matrix $V_0$ and then solve $\min_{U\succeq 0}\|Y - UV_0\|_F^2$ to find the next iterate $U_1$. In general, $V_\ell$ is obtained from $U_\ell$, and $U_{\ell+1}$ is obtained from $V_\ell$. Our main contribution is to explain how the unique global minimizers of the two subproblems (1.5) and (1.6) can be computed alternately.

We organize our presentation as follows. Beginning in section 2, we briefly describe how the original NMF can be formulated as a low-dimensional polytope approximation through scaling. When limited to the probability simplex, we introduce in section 3 two basic mechanisms for polytope approximation: a recursive algorithm which projects a given point onto a prescribed polytope, and a descent method which slides points along the boundary of the probability simplex to improve the objective value. We emphasize that the recursive algorithm determines the unique and global proximity map in finitely many steps and thus enables our NMF method, to be discussed in section 4, to obtain a much better improvement, generally of several orders of magnitude, over the well-known Lee–Seung updating algorithm [18] per iterative step.

2. Pullback to the probability simplex. Given a nonnegative matrix $Y = [y_1, \dots, y_n] \in \mathbb{R}^{m\times n}$ with nonzero columns $y_k \in \mathbb{R}^m$, define the scaling factor σ(Y) by
(2.1) $\sigma(Y) := \operatorname{diag}\{\|y_1\|_1, \dots, \|y_n\|_1\},$

where $\|\cdot\|_1$ stands for the 1-norm of a vector, and the pullback map $\vartheta(Y)$ by

(2.2) $\vartheta(Y) := Y\sigma(Y)^{-1}.$
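In MATLAB terms, the scaling factor (2.1) and the pullback (2.2) take one line each; this is a minimal sketch with our own variable names.

% Scaling factor (2.1) and pullback map (2.2):
Y = rand(4,7) + 0.1;            % a nonnegative matrix with nonzero columns
sigmaY = diag(sum(Y,1));        % sigma(Y) = diag(||y_1||_1, ..., ||y_n||_1)
thetaY = Y/sigmaY;              % theta(Y) = Y*sigma(Y)^{-1}
sum(thetaY,1)                   % every column sums to 1, a point on D_m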
Each column of $\vartheta(Y)$ can be regarded as a point on the $(m-1)$-dimensional probability simplex $\mathcal{D}_m$ defined by

(2.3) $\mathcal{D}_m := \{\,y \in \mathbb{R}^m \mid y \succeq 0,\ 1_m^{\top}y = 1\,\},$
Fig. 2.1. Convex hull of $\vartheta(Y) \in \mathbb{R}^{3\times n}$, with n = 11.
where $1_m = [1, \dots, 1]^{\top}$ stands for the vector of all 1's in $\mathbb{R}^m$. For each given nonnegative matrix $Y \in \mathbb{R}^{m\times n}$, there is a smallest convex polytope $\mathcal{C}(Y)$ on $\mathcal{D}_m$ containing all columns of $\vartheta(Y)$. Indeed, the vertices of $\mathcal{C}(Y)$ consist of a subset of columns of $\vartheta(Y)$. If we denote this fact by $\mathcal{C}(Y) := \operatorname{conv}(\vartheta(Y)) = \operatorname{conv}(\vartheta(\bar{Y}))$, where $\bar{Y} = [y_{i_1}, \dots, y_{i_p}] \in \mathbb{R}^{m\times p}$ is a submatrix of Y, then every column of $\vartheta(Y)$ is a convex combination of columns of $\vartheta(\bar{Y})$, and we can write

(2.4) $\vartheta(Y) = \vartheta(\bar{Y})Q,$
where $Q \in \mathbb{R}^{p\times n}$ itself represents n points in the simplex $\mathcal{D}_p$. Together, we have obtained

(2.5) $Y = \vartheta(Y)\sigma(Y) = \vartheta(\bar{Y})(Q\sigma(Y)),$

which is an exact nonnegative matrix factorization of Y. It is important to note in this setting that the integer p is the number of vertices of the convex hull $\operatorname{conv}(\vartheta(Y))$. We only know that $p \le n$, but it might be that $p \ge m$. See Figure 2.1. In practice, we prefer to see that $\vartheta(Y)$ is contained in a convex hull with fewer than $\min\{m, n\}$ vertices, but clearly that is not always possible. Conversely, suppose a given Y can be factorized as the product of two nonnegative matrices $Y = UV$, with $U \in \mathbb{R}^{m\times p}$ and $V \in \mathbb{R}^{p\times n}$. We can write

(2.6) $Y = \vartheta(Y)\sigma(Y) = UV = \vartheta(U)\vartheta(\sigma(U)V)\sigma(\sigma(U)V).$
Note that the product ϑ(U )ϑ(σ(U )V ) itself is on the simplex Dm . It follows that (2.7)
ϑ(Y ) = ϑ(U )ϑ(σ(U )V ),
(2.8)
σ(Y ) = σ(σ(U )V ).
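The identities (2.6)–(2.8) are easy to confirm numerically. The following sketch uses our own variable names and random data; it is an illustration, not part of the development.

% A numerical check of (2.6)-(2.8):
U = rand(4,3); V = rand(3,7);
Y  = U*V;
sY = diag(sum(Y,1));   thetaY = Y/sY;
sU = diag(sum(U,1));   thetaU = U/sU;
sV = diag(sum(sU*V,1));                   % sigma(sigma(U)V)
norm(thetaY - thetaU*((sU*V)/sV),'fro')   % (2.7): should be ~0
norm(sY - sV,'fro')                       % (2.8): should be ~0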
Because $UV = (UD)(D^{-1}V)$ for any invertible matrix $D \in \mathbb{R}^{p\times p}$, we may assume without loss of generality that U is already a pullback so that $\sigma(U) = I_p$. It follows that $\vartheta(Y) = \vartheta(U)\vartheta(V)$ and $\sigma(Y) = \sigma(V)$.

The discussion thus far assumes equality in the factorization of Y. To hold this equality would require either that the integer p is large enough to accommodate all vertices of the convex hull of $\vartheta(Y)$ or that the matrix Y itself is of low rank p. Though these requirements cannot be met in general, the above analysis does suggest that the pullback of Y to the probability simplex might shed some interesting geometric light on NMF, which we shall study in the subsequent sections. Assuming henceforth that the matrix U is already normalized to $\mathcal{D}_m$, that is, assuming $U = \vartheta(U)$ and $\sigma(U) = I_p$, rewrite the objective functional in NMF as follows:

(2.9) $f(U,V) = \frac{1}{2}\|Y - UV\|_F^2 = \frac{1}{2}\Big\|\Big(\vartheta(Y) - U\underbrace{V\sigma(Y)^{-1}}_{W}\Big)\sigma(Y)\Big\|_F^2.$
For convenience, denote (2.10)
W = V σ(Y )−1 .
Then the columns of the product $UV\sigma(Y)^{-1} = UW$, each of which is a nonnegative combination of columns of U, can be interpreted as points in the simplicial cone of U. The problem of NMF is equivalent to finding points $\{u_1, \dots, u_p\}$ on the simplex $\mathcal{D}_m$ so that the total distance from $\vartheta(Y)$ to the simplicial cone spanned by $\{u_1, \dots, u_p\}$, measured with respect to a weighted norm, is minimized. Corresponding to each given Y, the scaling factor σ(Y) is determined. The distance between Y and UV measured in the Frobenius norm therefore can be considered as the distance between $\vartheta(Y)$ and UW measured in the norm induced from a weighted inner product. The general theory for Hilbert space, in particular the separation of disjoint convex sets by hyperplanes based on the Hahn–Banach theorem, remains applicable. The main thrust of this paper is to explain that, by working on the probability simplex $\mathcal{D}_m$, the supporting hyperplanes can be calculated effectively.

3. Polytope fitting problem on the probability simplex. Before we discuss how (2.9) can be minimized in the next section, it might be worthwhile to consider in this section a special set estimation problem that is of interest in its own right: best approximate the more complicated polytope $\operatorname{conv}(\vartheta(Y))$ by a simpler polytope $\operatorname{conv}(U)$ on the simplex $\mathcal{D}_m$, in the sense that the total distance from each point of $\vartheta(Y)$ to the polytope $\operatorname{conv}(U)$ is minimized. This is equivalent to the minimization of the objective function
(3.1) $\frac{1}{2}\|\vartheta(Y) - UW\|_F^2,$
subject to the constraints that the columns of U are in $\mathcal{D}_m$ and the columns of W are in $\mathcal{D}_p$. It is not difficult to see that this problem is similar in spirit to the well-known sphere packing problem [5] as well as to the classical problem of approximating convex bodies by polytopes [3, 4, 7, 8]. The underlying principle is also similar to that of the k-plane method [2] or the support vector machines method [1, 14], except that the geometric object used in our approximation, which is either a subspace or a polytope, is of high codimension and hence is harder to characterize. For applications,
Fig. 3.1. Convex hull of $\vartheta(Y)$ and U in $\mathcal{D}_3$.
we mention that set estimation is important in pattern analysis [9], robotic vision, and tomography. Most of the applications in pattern recognition are limited to only three-dimensional objects, whereas we are interested in higher-dimensional entities.

The optimization problem proposed in (3.1) seems to possess a profound geometric meaning in itself. Since $\operatorname{conv}(U)$ has fewer vertices than $\operatorname{conv}(\vartheta(Y))$, minimizing (3.1) means in a sense retrieving and regulating the "shape" of $\vartheta(Y)$ by fewer facets. If the data $\vartheta(Y)$ have an elongated distribution over $\mathcal{D}_m$ to begin with, the convex hull at the optimal solution U should display a simpler shape but with a similar orientation. Since the columns of W represent convex combination coefficients, the diameter of $\operatorname{conv}(U)$ should be at least as large as that of $\operatorname{conv}(\vartheta(Y))$. With that in mind, we notice that the solution U need not be unique, because we can always "expand" $\operatorname{conv}(U)$ outward a little bit toward the boundary of $\mathcal{D}_m$ and still obtain the same convex combination UW. The drawing depicted in Figure 3.1 illustrates the idea for the case m = 3 and p = 2. Note that the segment representing $\operatorname{conv}(U)$ can be extended to the two sides of the equilateral triangle while still resulting in the same nearest points to $\vartheta(Y)$. We may therefore assume that the columns of U reside on the boundary of the simplex $\mathcal{D}_m$ to begin with.

A point on the boundary of $\mathcal{D}_m$ is characterized by the location of zero(s) in its coordinates. For example, each facet has one zero. The ith column in the matrix

$\frac{1}{m-1}\begin{bmatrix} 0 & 1 & 1 & \cdots & 1 \\ 1 & 0 & 1 & \cdots & 1 \\ 1 & 1 & 0 & & 1 \\ \vdots & & & \ddots & \vdots \\ 1 & 1 & 1 & \cdots & 0 \end{bmatrix}$

is the "midpoint" of the ith facet of the simplex $\mathcal{D}_m$. When there is more than one zero among the entries, we consider the point to be on a "ridge," and so on. Putting the above thoughts together, the polytope fitting problem can be cast as a constrained optimization problem:
(3.2)
minimize    $g(U,W) = \frac{1}{2}\|\vartheta(Y) - UW\|_F^2,$
subject to  columns of $U \in \partial\mathcal{D}_m$,  $W \succeq 0$,  $1_p^{\top}W = 1_n^{\top},$
where $\partial\mathcal{D}_m$ stands for the boundary of $\mathcal{D}_m$. During the computation, it matters to keep track of which part of the boundary is involved in U. It remains to explain in the next two subsections how to solve (3.2) by alternating optimization. Our ultimate goal is to extend the techniques to the NMF problem.

3.1. Nearest point in the convex hull of U. Let $U \in \mathbb{R}^{m\times p}$ be a fixed matrix with its columns in $\partial\mathcal{D}_m$. It is not difficult to derive from linear regression theory that the expression
(3.3) $W := (U^{\top}U)^{-1}\big(U^{\top}\vartheta(Y) - 1_p\mu^{\top}\big),$

with

(3.4) $\mu^{\top} = \frac{1_p^{\top}(U^{\top}U)^{-1}U^{\top}\vartheta(Y) - 1_n^{\top}}{1_p^{\top}(U^{\top}U)^{-1}1_p},$
is a global minimizer of g(U, W) and satisfies $1_p^{\top}W = 1_n^{\top}$. The trouble is that the constraint $W \succeq 0$ often is not satisfied. As a remedy, we propose employing the proximity map guaranteed by the classical Hahn–Banach theorem to compute W. The Hahn–Banach theorem is a powerful tool in functional analysis. In the simplest terms, the theorem asserts that two disjoint convex sets in a topological vector space can be separated by a continuous linear map. The following equivalent statement serves as the basis of our theory.

Theorem 3.1. Let C be a fixed convex set in $\mathbb{R}^m$. Corresponding to each given point $x \in \mathbb{R}^m$, there is a unique point $\rho(x) \in C$ that is nearest to x.

The nearest point ρ(x) is called the proximity map of x with respect to C. In a Euclidean space, it is easy to see that the necessary and sufficient condition for ρ(x) is that the inequality
(3.5) $(x - \rho(x))^{\top}(z - \rho(x)) \le 0$
holds for all $z \in C$. In particular, $\|\rho(0)\|^2 \le \rho(0)^{\top}z$ for all $z \in C$. Recall that a hyperplane is determined by a normal vector n and a scalar c:

(3.6) $H(n,c) := \{\,x \mid n^{\top}x = c\,\}.$
The set $H^{+}(n,c) := \{x \mid n^{\top}x \ge c\}$ therefore forms a half space. If a given convex set C does not contain the origin 0, then we say that the hyperplane $H(\rho(0), \|\rho(0)\|^2)$ supports C in the sense that $C \subset H^{+}(\rho(0), \|\rho(0)\|^2)$. For our application, the key point is that, corresponding to each given column of $\vartheta(Y)$, there is a unique nearest point in $\operatorname{conv}(U)$. Being in $\operatorname{conv}(U)$, the nearest point is a convex combination of columns of U. That is, $W \succeq 0$ automatically. The remaining question is to find the proximity map for each column of $\vartheta(Y)$ with respect to $\operatorname{conv}(U)$. Toward that end, we make use of a recursive algorithm proposed by Sekitani and Yamamoto [25] to find the proximity map. This is particularly suitable when p is not too large.

Algorithm 3.1. Let U denote a given set of p points in $\mathbb{R}^m$. The following steps, denoted collectively as $\hat{x} = N(U)$, return $\hat{x} = \rho(0)$ with respect to $\operatorname{conv}(U)$.
1. Start with k := 1 and an arbitrary point $x_0$ from $\operatorname{conv}(U)$.
2. Find a supporting hyperplane.
   • $\alpha_k := \min\{x_{k-1}^{\top}p \mid p \in U\}$.
   • If $\|x_{k-1}\|^2 \le \alpha_k$, then $\hat{x} := x_{k-1}$ and stop.
3. Recursion.
   • $P_k := \{p \mid p \in U \text{ and } x_{k-1}^{\top}p = \alpha_k\}$.
   • Call $y_k := N(P_k)$.
4. Check separation.
   • $\beta_k := \min\{y_k^{\top}p \mid p \in U - P_k\}$.
   • If $\|y_k\|^2 \le \beta_k$, then $\hat{x} := y_k$ and stop.
5. Rotation and updating.
   • $\lambda_k := \max\{\lambda \mid ((1-\lambda)x_{k-1} + \lambda y_k)^{\top}y_k \le ((1-\lambda)x_{k-1} + \lambda y_k)^{\top}p,\ p \in U - P_k\}$.
   • $x_k := (1-\lambda_k)x_{k-1} + \lambda_k y_k$.
   • k := k + 1 and go to step 2.

To illustrate the recursive nature of this algorithm at step 3, we include the following MATLAB code for the convenience of readers.

function [y,c] = sekitani(U,active)
%
% The code SEKITANI computes the nearest point on the polytope generated by
% the columns of U to the origin.
%
% Reference:
%   Sekitani and Yamamoto algorithm, Math. Programming, 61 (1993), pp. 233-249.
%
% Input:
%   U      = vertices of the given polytope
%   active = set of active column indices (of U) for recursive purposes
%
% Output:
%   y = the point on conv(U) with minimal norm
%   c = convex coordinates of y with respect to U, that is, y = U*c
%
[m,p0] = size(U);
p = length(active);
eps = 1.e-13;                              % relaxed machine zero
looping = 'y';

temp0 = norm(U(:,active(1)));
S = 1;
for j = 2:p
    temp1 = norm(U(:,active(j)));
    if temp1 < temp0, temp0 = temp1; S = j; end
end
x = U(:,active(S));                        % starting point
xc = zeros(p0,1); xc(active(S)) = 1;       % initial coordinates of x

while looping == 'y'
    temp0 = x'*U(:,active);
    alpha = min(temp0);
    K0 = find(temp0 <= alpha+eps);         % find supporting hyperplanes
    if norm(x)^2 <= alpha+eps
        y = x; c = xc; looping = 'n';
        return
    else
        Pk = active(K0);
        if length(K0) == 1
            y = U(:,Pk); c = zeros(p0,1); c(Pk) = 1;
        else
            [y,c] = sekitani(U,Pk);        % recursion
        end
    end
    I_K0 = setdiff(1:p,K0);
    temp1 = y'*U(:,active(I_K0));
    beta = min(temp1);
    if norm(y)^2 <= beta+eps
        looping = 'n';
        return
    else
        temp3 = U(:,active(I_K0)) - y*ones(1,length(I_K0));
        temp5 = find(((y-x)'*temp3) < 0);
        temp4 = (x'*temp3(:,temp5))./((y-x)'*temp3(:,temp5));
        lambda = -max(temp4);              % maximal rotation parameter
        delta = lambda*(y-x);
        if norm(delta) <= max([m,p])*eps
            looping = 'n';
            return
        else
            x = x + lambda*(y-x);
            xc = xc + lambda*(c-xc);       % updated coordinates of x
            looping = 'y';
        end
    end
end
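Before moving on, a small usage sketch may help; the numbers below are arbitrary and merely illustrate the calling convention of sekitani.

% A hypothetical usage sketch of the routine sekitani:
U = rand(3,5) + 0.5;          % five vertices in R^3, bounded away from the origin
[y,c] = sekitani(U,1:5);      % nearest point of conv(U) to the origin
norm(y - U*c)                 % ~0: y is reproduced by its convex coordinates
[sum(c), min(c)]              % coordinates sum to 1 and are nonnegative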
Some remarkably nice features of the routine sekitani include that it is not based on simplicial decomposition as in the classical method [28] and hence does not require one to solve systems of linear equations, that it can start with an arbitrary point in $\operatorname{conv}(U)$, that it does not need an initial separating hyperplane, and that it terminates in finitely many steps. Details of its mathematical theory can be found in [25]. The method originally proposed in [25] computes only the nearest point y on the polytope $\operatorname{conv}(U)$ to the origin but does not provide the corresponding convex combination coefficients of y with respect to U, which are needed in our application. As indicated in the above code, the coefficients c such that $y = Uc$ with $\sum_{i=1}^{p}c_i = 1$ and $c_i \ge 0$ are attainable with just a few vector operations. To find the proximity map ρ(x) of a general point x, it is just a matter of shifting the origin. The following algorithm allows us to find the optimal solution to
problem (3.2) for each given U.

Algorithm 3.2. Given $Z = \vartheta(Y) \in \mathbb{R}^{m\times n}$ and $U \in \partial\mathcal{D}_m$, the following loop computes the convex combination coefficients W for the proximity map of Z onto $\operatorname{conv}(U)$:

S = []; C = [];
for i = 1:n
    y = Z(:,i);
    U0 = U - y*ones(1,p);          % shift the origin to the current column of Z
    active = 1:p;
    [s,c] = sekitani(U0,active);
    S = [S,s+y];                   % the proximity map of Z(:,i) on conv(U)
    C = [C,c];
end
W = C;
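The optimality of the computed proximity map can be verified directly against the characterization (3.5). The following fragment assumes the variables Z, U, p, and S from Algorithm 3.2 are still in scope; it is a hedged check rather than part of the algorithm, and since the test function is linear in z, it suffices to test the vertices of conv(U).

% Check the proximity property (3.5) for the ith column (names ours):
i = 1;
x = Z(:,i);                           % the point being projected
r = S(:,i);                           % its proximity map rho(x) on conv(U)
slack = (x - r)'*(U - r*ones(1,p));   % (x - rho(x))'(z - rho(x)) at each vertex z
max(slack)                            % should be <= 0 up to roundoff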
3.2. Improving U. Once the optimal W is found, the next step is to move the columns of U along the boundary $\partial\mathcal{D}_m$ to improve the objective value. This amounts to redefining the convex hull $\operatorname{conv}(U)$ to better fit $\operatorname{conv}(\vartheta(Y))$. We suggest carrying out this task by the projected gradient with a line search. In this section, we show how the projected gradient and the line search can be calculated efficiently.

We first characterize the projected gradient. It is easy to see that the gradient of g with respect to U at a feasible point (U, W) is given by the m × p matrix

(3.7) $\nabla_U g(U,W) := \frac{\partial g}{\partial U} = -(\vartheta(Y) - UW)W^{\top}.$

For $i = 1, \dots, p$, the ith column of $\nabla_U g(U,W)$ represents precisely the gradient $\nabla_{u_i}g(U,W)$. In general, this gradient points away from the simplex $\mathcal{D}_m$. We want to project the gradient back to $\partial\mathcal{D}_m$ so that a movement of $u_i$ in the negative direction of its projected gradient remains in $\partial\mathcal{D}_m$ and decreases the objective value of g. The probability simplex $\mathcal{D}_m$ is made of m facets, identified by the trait that one of the coordinates is zero. It is important to know onto which facet the gradient $\nabla_{u_i}g(U,W)$ is projected. Typically, we expect that an improvement of $u_i$ should take place within the same facet where $u_i$ resides. However, if $u_i$ happens to be at the intersection of two facets, then a decision should be made as to over which facet a better improvement is achieved. Crossing facets is possible. This more complicated task will be addressed later.

The projection has to be calculated column by column. We mentioned earlier that for computation it is important to track which facets of $\mathcal{D}_m$ are involved. Here is the reason. Suppose that $u_i$ resides on the jth facet of $\mathcal{D}_m$. The jth facet of $\mathcal{D}_m$ can be thought of as the intersection of the two hyperplanes $y_1 + \cdots + y_m = 1$ and $y_j = 0$ in $\mathbb{R}^m$. If we define the m × 2 matrix

(3.8) $A_j := [1_m, e_j],$

where $e_j$ is the jth standard unit vector in $\mathbb{R}^m$, then the projection of $\nabla_{u_i}g(U,W)$ onto the jth facet is given by

(3.9) $\nabla_{u_i}^{j}g(U,W) := \big(I_m - A_j(A_j^{\top}A_j)^{-1}A_j^{\top}\big)\nabla_{u_i}g(U,W).$

In fact, the projection matrix $A_j(A_j^{\top}A_j)^{-1}A_j^{\top}$ need not be calculated, since it is of
the form

(3.10) $A_j(A_j^{\top}A_j)^{-1}A_j^{\top} = \frac{1}{m-1}\begin{bmatrix} 1 & \cdots & 1 & 0 & 1 & \cdots & 1 \\ \vdots & \ddots & & \vdots & & & \vdots \\ 1 & & 1 & 0 & 1 & & 1 \\ 0 & \cdots & 0 & m-1 & 0 & \cdots & 0 \\ 1 & & 1 & 0 & 1 & & 1 \\ \vdots & & & \vdots & & \ddots & \vdots \\ 1 & \cdots & 1 & 0 & 1 & \cdots & 1 \end{bmatrix},$
where the value m − 1 occurs at the (j, j) position of the matrix. This formula offers a direct way to compute the projected gradient.

Once we obtain the projections $\nabla_{u_i}^{j}g(U,W)$, $i = 1, \dots, p$, we may use line search techniques to adjust $u_i$ to reduce the objective value. When doing the line search, it becomes critical to adjust the step size so as not to overshoot beyond the facet. That is, we do not allow any entry of the newly computed matrix U to become negative. The optimal step size, which is easy to compute, must be scaled back whenever an entry crosses zero. Zero crossing is an indication that the search path has reached a ridge and that a change of facet is necessary. A preliminary numerical algorithm for updating U is as follows.

Algorithm 3.3. Given $Z = \vartheta(Y) \in \mathbb{R}^{m\times n}$, $U \in \partial\mathcal{D}_m$, and $W \in \mathbb{R}^{p\times n}$, the following steps update U along $\partial\mathcal{D}_m$ while minimizing the objective g(U, W):

function [U,facet] = update(Z,U,W,facet)
%
% Input:
%   Z     = points to be approximated
%   U     = vertices of the current polytope
%   W     = convex combination coefficients
%   facet = indices of facets where U is residing
%
% Output:
%   U     = vertices of the new polytope
%   facet = indices of new facets where U is residing
%
[m,p] = size(U);
eps = 1.e-10;                              % termination tolerance
residue = Z - U*W;
gradient_U = residue*W';                   % negative gradient
if norm(residue,'fro') < eps, return, end

tempA = eye(m) - ones(m)/(m-1);
for i = 1:p
    Proj_matrix = tempA;
    Proj_matrix(facet(i),:) = 0;
    Proj_matrix(:,facet(i)) = 0;
    temp0 = Proj_matrix*gradient_U(:,i);
    if norm(temp0) > eps*10
        temp0 = temp0/norm(temp0);         % normalized projected gradient
    else
        temp0 = temp0*0;
    end
    proj_grad_U(:,i) = temp0;
end

temp1 = trace(proj_grad_U*gradient_U');
temp2 = norm(proj_grad_U*W,'fro')^2;
alpha = temp1/temp2;                       % step size for steepest descent

Unew = U + alpha*proj_grad_U;
Unew = Unew.*(abs(Unew)>eps*10);
[row,column] = find(Unew<0);               % detecting zero crossing
if isempty(column) == 0
    alpha_modify = alpha;
    for i = 1:length(column)
        temp = -U(row(i),column(i))/proj_grad_U(row(i),column(i));
        if abs(temp) < eps
            alpha_modify = 0; vertex = [row(i),column(i)];
        elseif temp < alpha_modify
            alpha_modify = temp; vertex = [row(i),column(i)];
        end
    end
    U = U + alpha_modify*proj_grad_U;
    facet(vertex(2)) = vertex(1);          % facet change
else
    U = Unew;
end
U = U.*(abs(U)>eps);
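As a sanity check, the projector assembled inside update (the variable tempA with the jth row and column zeroed out) can be compared numerically with the direct formula (3.9). The following is a small sketch under our own variable names; it is not part of Algorithm 3.3.

% Check that the facet projector used in UPDATE matches (3.9)-(3.10):
m = 5; j = 2;
ej = zeros(m,1); ej(j) = 1;
Aj = [ones(m,1), ej];                 % A_j = [1_m, e_j] as in (3.8)
P1 = eye(m) - Aj*((Aj'*Aj)\Aj');      % I_m - A_j(A_j'A_j)^{-1}A_j', as in (3.9)
P2 = eye(m) - ones(m)/(m-1);          % tempA in UPDATE ...
P2(j,:) = 0; P2(:,j) = 0;             % ... with the jth row and column zeroed
norm(P1 - P2,'fro')                   % should be ~0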
With Algorithms 3.2 and 3.3 established, we can use them iteratively, alternating between U and W, to solve the optimization problem (3.2). This is a descent method in both directions, and the iteration continues until convergence. Needless to say, the limit point depends on the starting value $U^{(0)}$. Some numerical examples will be given in section 5.

4. Application to the NMF. The NMF formulated in (2.9) differs from the above polytope approximation in two aspects: the weight σ(Y) changes the inner product used in the polytope approximation, and W is no longer required to be on the simplex $\mathcal{D}_p$. Nonetheless, the idea explored in the preceding section can easily be generalized. In this section, we discuss the application of polytope approximation to the NMF. We first explain where the conventional method might break down. Despite its failure, the conventional formulation sheds some insight into the geometric understanding which we shall exploit later. The Lagrangian without the constraint $W \succeq 0$ is given by
(4.1) $L(U,W;\mu) := \frac{1}{2}\big\langle(\vartheta(Y) - UW)\sigma(Y),\ (\vartheta(Y) - UW)\sigma(Y)\big\rangle + \big(1_m^{\top}U - 1_p^{\top}\big)\mu.$
Then with respect to the Frobenius inner product over the product space $\mathbb{R}^{m\times p} \times \mathbb{R}^{p\times n}$, we find that the partial gradients of L are given by

(4.2) $\frac{\partial L(U,W;\mu)}{\partial U} = -(\vartheta(Y) - UW)\sigma(Y)^2 W^{\top} + 1_m\mu^{\top},$

(4.3) $\frac{\partial L(U,W;\mu)}{\partial W} = -U^{\top}(\vartheta(Y) - UW)\sigma(Y)^2,$

respectively. The first order optimality conditions require that U and W satisfy the equations

(4.4) $U = \big(\vartheta(Y)\sigma(Y)^2 W^{\top} - 1_m\mu^{\top}\big)\big(W\sigma(Y)^2 W^{\top}\big)^{-1},$

(4.5) $W = \big(U^{\top}U\big)^{-1}U^{\top}\vartheta(Y),$

where $\vartheta(Y)$ and $\sigma(Y)$ are fixed matrices from a given Y. The Lagrange multiplier μ can be determined from the constraint $1_m^{\top}U = 1_p^{\top}$, and thus

(4.6) $\mu^{\top} = \frac{1}{m}\big(1_n^{\top}\sigma(Y)^2 W^{\top} - 1_p^{\top}W\sigma(Y)^2 W^{\top}\big).$
U W = ϑ(Y ),
except that we prefer to see W 0. It might be illuminative to consider the geometry associated with the simplest case when m = 2 and p = 1. For convenience, denote σ(Y ) = diag{σ1 , . . . , σn }. Then the NMF problem is equivalent to finding u ∈ D2 and nonnegative scalars w1 , . . . , wn such that 1 2 σ ϑ(yi ) − uwi 22 2 i=1 i n
(4.8)
f (u, w1 , . . . , wn ) :=
is minimized. The relationships (4.4) and (4.5) become n σ2 w σ 2 w − σi2 wi2 'n i i2 2 ϑ(yi ) − i'ni u= 1 (4.9) 2 , 2 i=1 σi2 wi2 i=1 σi wi i=1 (4.10)
wi =
u ϑ(yi ) , u u
i = 1, . . . , n.
Note that the vector uwi , with wi defined from (4.10), is precisely the projection of ϑ(yi ) onto u. In this case, the value of wi is guaranteed to be positive and is known as soon as u is given. It remains to find a nonnegative vector u that satisfies (4.9), (4.10), and u ∈ D2 . See Figure 4.1. Unfortunately, this simple geometry cannot be generalized. The need of W 0 cannot be guaranteed by the projection (4.5) in higher-dimensional space. In the following, we explain how to adapt the techniques developed for the polytope approximation in the preceding section for the NMF.
1144
MOODY T. CHU AND MATTHEW M. LIN
Find this direction
ϑ(y1 ) ϑ(y2 ) u ϑ(yn )
Fig. 4.1. NMF in R2 when p = 1.
4.1. Nearest point in the simplicial cone of U . With the change of variable (2.10), we shall work with the objective functional 1 2 1
(ϑ(Y ) − U W )σ(Y ) 2F = σ ϑ(yi ) − U wi 22 , 2 2 i=1 i n
(4.11)
h(U, W ) =
which is equivalent to f (U, V ) in (2.9). Clearly, if each term in (4.11) is minimized, then h(U, W ) is necessarily minimized. For each fixed U , instead of (4.5), we want to best approximate each column of ϑ(Y ) within the simplicial cone of U . Since columns of ϑ(Y ) reside on the compact set Dm , their nearest points on the simplicial cone reside in a bounded set as well. We do not need a very extended region on the cone of U to obtain the nearest points. Indeed, since wi (4.12) min ϑ(yi ) − U wi 2 = min ϑ(yi ) − (αU ) , wi 0 wi 0 α 2 we should be able to scale down the length of the vector so that wαi 1 ≤ 1 for all i = 1, . . . , n with a large enough positive scalar α. To show that such a scaling factor ˆ i of (4.12), α does exist independently of Y and U , observe that, at the minimizer w ˆ i 2 ≤ ϑ(yi ) 2 ≤ ϑ(yi ) 1 = 1,
ϑ(yi ) − U w implying that ˆ i 2 ≤ 2.
U w √ ˆ i 1 ≤ 2 m. Since we have assumed that ϑ(U ) = U , it follows that w With a large enough and fixed positive constant α, define (4.13)
= [0, αu1 , . . . , αup ]. U
∈ Rm×(p+1) represent p+1 vertices of the polytope toward which ϑ(yi ), Columns of U i = 1, . . . , n, is to find its nearest point on the simplicial cone of U . See the geometric
Fig. 4.2. Simplicial cone of U and convex hull of $\vartheta(Y) \in \mathbb{R}^{3\times n}$ with p = 2 and n = 11.
illustration depicted in Figure 4.2. By now, Algorithm 3.2 can be applied to find the convex combination coefficients $\widetilde{W} \in \mathbb{R}^{(p+1)\times n}$ for the proximity map of $\vartheta(Y)$ onto $\operatorname{conv}(\widetilde{U})$. Decompose $\widetilde{W}$ into two blocks:

(4.14) $\widetilde{W} = \begin{bmatrix} w_0 \\ W_0 \end{bmatrix},$

where $w_0$ denotes the first row of $\widetilde{W}$ and $W_0 \in \mathbb{R}^{p\times n}$. It is clear that the nearest approximation of the matrix $\vartheta(Y)$ on the simplicial cone of U is given by the product of matrices

(4.15) $\widetilde{U}\widetilde{W} = UW,$

with $W = \alpha W_0$. Note that by construction it is guaranteed that $W \succeq 0$, but W is no longer on $\mathcal{D}_p$. We stress that, given Y and U, our Algorithm 3.2 computes a p × n nonnegative matrix

(4.16) $V = W\sigma(Y),$
which is a global minimizer of the objective functional f(U, V). This is in contrast to the nature of most available NMF algorithms, in particular the well-known Lee and Seung multiplicative update rule [18]:

(4.17) $V^{+} = V \,.\!*\, (U^{\top}Y) \,./\, (U^{\top}UV),$
where only an approximate minimizer is attained. We stress also that the cost of Algorithm 3.2 is comparable with that of the multiplicative update, since the former involves only a few inner products in the process of seeking the supporting hyperplanes.

4.2. Updating U. We describe two ways to update U for the NMF from a given $W \succeq 0$, which could have been found from the above calculation. The first approach is by means of the steepest descent method. Since the minimum of a function over a larger domain generally is lower than that over a smaller domain, it is reasonable to consider making the "angle" of the simplicial cone as wide as possible. It might be beneficial to expand the simplicial cone of U sideways until it touches the boundary of $\mathcal{D}_m$. In other words, we may assume that columns of U are on the boundary $\partial\mathcal{D}_m$ to begin with. See the idea demonstrated in Figures 3.1 and 4.2. With this in mind, we calculate the gradient of h(U, W) with respect to U and obtain
(4.18) $\nabla_U h(U,W) := \frac{\partial h}{\partial U} = -(Y - UW)W^{\top}.$
We may now employ the idea proposed in section 3.2 to compute the projected gradient of ∇U h(U, W ) onto proper facets and adjust U along ∂Dm via the steepest descent method. In short, Algorithm 3.3 is applicable with the modification of setting Z = Y in the input of routine update. The second approach is even simpler and more appealing, once Algorithm 3.2 is in place. It is similar in spirit to that of Lee and Seung’s algorithm [18] where we can update U in exactly the same way as computing the optimal W by swapping the roles of U and V . More precisely, we write
(4.19) $f(U,V) = \frac{1}{2}\left\|\Big(\vartheta(Y^{\top}) - \vartheta(V^{\top})\underbrace{\big(\sigma(V^{\top})U^{\top}\sigma(Y^{\top})^{-1}\big)}_{\Phi}\Big)\sigma(Y^{\top})\right\|_F^2.$
We identify $\vartheta(V^{\top})$ as the given points upon whose simplicial cone the points in $\mathbb{R}^n$ represented by the columns of $\vartheta(Y^{\top})$ are to be approximated. We apply the procedures described in section 4.1 to compute the unique and optimal simplicial combination coefficients $\Phi \in \mathbb{R}^{p\times m}$. Then the optimal U is given by
(4.20) $U = \big(\sigma(V^{\top})^{-1}\Phi\,\sigma(Y^{\top})\big)^{\top}.$
Again this procedure is similar to Lee and Seung’s multiplicative update (4.21)
(4.21) $U^{+} := U \,.\!*\, (YV^{\top}) \,./\, (UVV^{\top}),$
except that our U found through (4.20) is a global minimizer of f(U, V) for a fixed V, which is calculated in (4.16). For completeness, we summarize the second approach in the following algorithm.

Algorithm 4.1. Given a nonnegative matrix $Y \in \mathbb{R}^{m\times n}$, an integer $p < \min\{m, n\}$, and a maximal allowable number T of iterations, each of the following iterations computes the proximity maps of Y onto the simplicial cones of $U \in \mathbb{R}^{m\times p}$ and $V \in \mathbb{R}^{p\times n}$ alternately. The resulting nonnegative matrices $U \in \mathcal{D}_m$ and V therefore progress toward minimizing f(U, V).
SigmaY = sum(Y);   Z  = Y*diag(1./SigmaY);     % pull-back of Y
SigmaYt = sum(Y'); Zt = Y'*diag(1./SigmaYt);   % pull-back of Y'
alpha = 2*sqrt(m);                             % scaling factor
U = rand(m,p);                                 % initialization of U

for iter = 1:T                                 % loop of ADI T times
    sigmaU = sum(U);
    sigmaU(find(sigmaU~=0)) = 1./sigmaU(find(sigmaU~=0));
    U = U*diag(sigmaU);                        % pull-back of U
    Utilde = [zeros(m,1),alpha*U];             % simplicial cone of U

    S = []; C = [];
    for i = 1:n
        y = Z(:,i);
        U0 = Utilde - y*ones(1,p+1);
        active = 1:p+1;
        [s,c] = sekitani(U0,active);
        S = [S,s+y]; C = [C,c];
    end
    W = alpha*C(2:p+1,:);
    V = W*diag(SigmaY);                        % optimal V, given U
    V = V.*(V>eps*10);
    if iter == T
        return                                 % end of iteration
    end

    sigmaVt = sum(V');
    sigmaVt(find(sigmaVt~=0)) = 1./sigmaVt(find(sigmaVt~=0));
    Vt = V'*diag(sigmaVt);
    Vtilde = [zeros(n,1),alpha*Vt];            % simplicial cone of V

    S = []; C = [];
    for i = 1:m
        y = Zt(:,i);
        V0 = Vtilde - y*ones(1,p+1);
        active = 1:p+1;
        [s,c] = sekitani(V0,active);
        S = [S,s+y]; C = [C,c];
    end
    U = (alpha*C(2:p+1,:)*diag(SigmaYt))';     % optimal U (w/t scaling), given V
    U = U.*(U>eps*10);
end
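For readers who wish to reproduce the comparisons reported in section 5, the Lee–Seung baseline can be sketched in a few lines. This is a minimal, hypothetical driver for the multiplicative rules (4.17) and (4.21); the loop bound maxit is our own name and not part of Algorithm 4.1.

% Lee-Seung multiplicative updates (4.17) and (4.21); a minimal sketch,
% assuming Y, U, V are given nonnegative matrices and maxit is chosen by the user:
for it = 1:maxit
    V = V .* (U'*Y) ./ (U'*U*V);
    U = U .* (Y*V') ./ (U*V*V');
end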
We emphasize that both matrices W in (4.15) (and hence V ) and Φ in (4.19) (and hence U ) are readily available from the routine sekitani which involves no matrix
inversion but only matrix-vector multiplications. Being able to compute the unique global minimizer in each alternating direction effectively in this way is quite noteworthy. We have to comment, nonetheless, on two possible drawbacks of our algorithm. First, it is difficult to predict in advance the depth of recursion needed for the calculation. When p is large, the recursion might go several levels deep to complete the process and thus drive up the overhead. Fortunately, in most NMF applications p is rather small. Since only a few vertices of the polytopes, namely, the truncated simplicial cones of either U or V, need to be checked for supporting hyperplanes, the recursion can be accomplished reasonably rapidly. Second, the way we compute the proximity map by sekitani thus far is to deal with one column of Y at a time. This is inefficient when m or n is large, which unfortunately is the case for NMF applications. Of course, since each column can be handled independently, the computation satisfies the criteria of a so-called embarrassingly parallelizable task. No particular programming effort is needed if the work is to be loaded onto a parallel computer. Still, it is desirable to be able to compute the proximity maps of multiple columns simultaneously. We think this goal can be achieved by reformulating the original ideas in the Sekitani and Yamamoto algorithm in terms of BLAS3 operations. For instance, similar to the scalar equation (3.6), we can conveniently summarize n hyperplanes in $\mathbb{R}^m$ by a single matrix equation
(4.22) $N^{\top}x = c,$
where each column of $N \in \mathbb{R}^{m\times n}$ represents a normal vector. Likewise, many of the inequalities in Algorithm 3.1 should have their counterparts as inequality systems. We are exploring this possibility and will report our investigation more formally elsewhere.

5. Numerical experiment. In this section, we provide testing results of several numerical experiments to demonstrate the workings of our algorithms.

Example 1. In order to demonstrate the polytope approximation on the probability simplex graphically, we limit ourselves to $\mathbb{R}^3$, although the theory works for general m. We randomly generate n = 18 points on the simplex. Normally, for a low rank polytope approximation of a given set of points, we would have $p < \min\{m, n\}$. With p = 3 in this example, the polytope approximation turns into a problem of seeking a "triangle" to enclose the 18 given points [8]. Needless to say, the probability simplex $\mathcal{D}_3$ itself can do the job. Does there exist at least another triangle that circumscribes these points? Applying Algorithm 3.2 followed by Algorithm 3.3 iteratively 100 times, we see in the top drawing of Figure 5.1 that a "minimal" triangle with vertices on $\partial\mathcal{D}_3$ is found. The triangle is minimal in the sense that some of the given points reside on the sides of the triangle, which thus cannot be shrunk further without leaving some points outside. The dynamics of the adjustments our algorithm makes to the triangles with vertices on $\partial\mathcal{D}_3$ is illustrated in the bottom drawing of Figure 5.1. We caution that, like most methods of an iterative nature, the limiting behavior of our method depends on the initial values. It is possible that a limiting triangle does not include all given points in its interior when the iteration is terminated, because the method has reached a local solution.

Example 2. One of the most attractive features of Algorithm 4.1 is that the method finds the global minimizer per iteration. To demonstrate this point, we design our experiment as follows. Let $A \in \mathbb{R}^{m\times p}$ and $B \in \mathbb{R}^{p\times n}$, with $p < \min\{m, n\}$, be two randomly generated nonnegative matrices. Define Y = AB, which will be
Fig. 5.1. Triangle (polytope) enclosing a prescribed set of points on $\mathcal{D}_3$.
our target data matrix. Starting from the same initial value $(U_0, V_0)$, we compare the objective values f(U, V) per iteration with those of the popular Lee–Seung multiplicative update algorithm. For a test case where m = 100, n = 50, and p = 5 over 100 iterations, Figure 5.2 demonstrates that our algorithm consistently produces a much closer approximation to Y per iteration than the Lee–Seung algorithm. In the case on the top, the objective value is reduced to 0.1909 by our method as opposed to 13.1844 obtained by the Lee–Seung method, although these values continue to improve with more iterations. In yet another random case, on the bottom, the objective value attained by our method is $3.3035 \times 10^{-4}$ as opposed to 1.1989 by Lee–Seung. We consistently discover, as indicated by these two experiments, that our method always produces an impressive
1150
MOODY T. CHU AND MATTHEW M. LIN
History of Objective Values
3
10
Chu−Lin Lee−Seung 2
f(U,V)
10
1
10
0
10
−1
10
0
20
40
60
80
100
120
Number of Iteration History of Objective Values
4
10
Chu−Lin Lee−Seung 2
f(U,V)
10
0
10
−2
10
−4
10
0
20
40
60
80
100
120
Number of Iteration Fig. 5.2. Objective values f (U, V ) per iteration by the Chu–Lin algorithm versus the Lee–Seung algorithm.
improvement by multiple orders. Example 3. To further demonstrate that our method decreases the objective values f (U, V ) more rapidly than the Lee–Seung algorithm, we generate randomly a 50 × 30 matrix Y of rank 20. We systematically seek its nonnegative matrix approximation of rank p, p = 1, . . . , 15, but allowing only 20 iterations. Each point in Figure 5.3 represents the objective value f (U, V ) after 20 iterations of the corresponding method. It appears that our method decreases the objective value as p increases, while the Lee–Seung method does not seem to be sensitive to p during the first 20 iterations.
POLYTOPE APPROXIMATION AND NMF
1151
Objective Values in 20 Iterations per Rank 140
120
100
f(U,V)
Chu−Lin Lee−Seung
80
Lee−Seung (100) 60
40
20
0 0
5
10
15
Rank of Factorization Fig. 5.3. Objective values f (U, V ) in 20 iterations per specified rank by the Chu–Lin and the Lee–Seung algorithms.
The continuous curve in the middle of Figure 5.3 represents the objective values obtained by the Lee–Seung algorithm after 100 iterations. We choose not to draw the objective values obtained by our method after 100 iterations because they are essentially the same as those after 20 iterations with mild differences at the second decimal digit. In other words, our method not only yields a better factorization than the Lee–Seung method but also seems to converge quickly to an optimal solution in the early iterations. It is a little bit premature to assess the complexity of our algorithm at this stage because we have not yet fully exploited the implementation. Algorithm 4.1 is only a rough working code. However, Figure 5.4 that measures the CPU time per sweep in the alternating optimization of our algorithm (for the same 50 × 30 matrix Y of rank 20) seems to consistently suggest that the complexity of our algorithm is exponential in p. Recall that the principal cost in Algorithm 4.1 is the repeated calling of the routine sekitani. Putting the effect of sizes of m and n aside, an exponential growth of complexity in p seems reasonable to expect from the recursive nature of the Sekitani and Yamamoto algorithms. We have indicated earlier that the routine sekitani itself should be improved by parallelization and vectorization. Such an improvement should be able to either shift the curve in Figure 5.4 downward or reduce the slope of the curve, but the exponential dependence on p should remain. Example 4. The “swimmer” database characterized in [6] consists of a set of blackand-white stick figures satisfying the so-called separable factorial articulation criteria under which the NMF is known in theory to be able to identify the standard parts in the articulation model (1.3). More specifically, each figure consists of a “torso” of 12 pixels in the center and four “limbs” of six pixels that can be in any one of four positions. With limbs in all possible positions, there are a total of 256 figures of dimension 32 × 32 pixels. Our data set is provided to us with the courtesy of Alberto Pascual-Motano [23] and Victoria Stodden [6]. A subset of these images is sketched in Figure 5.5.
1152
MOODY T. CHU AND MATTHEW M. LIN
Complexity of the Chu−Lin Algorithm per Sweep 3 CPU Time (in Seconds)
10
2
10
1
10
0
10
−1
10
−2
10
0
5
10
15
Rank of Factorization Fig. 5.4. Complexity of the Chu–Lin algorithm measured in CPU time per sweep of the alternating optimization.
Fig. 5.5. 80 sample images from the swimmer database.
POLYTOPE APPROXIMATION AND NMF
1153
Fig. 5.6. 17 parts decomposed from the swimmer database by Algorithm 4.1 in 10 iterations.
After vectorizing each image into a column, our target matrix Y is of size 1024 × 256. The numerical rank of Y is 13, but we shall decompose it with p = 17 in anticipation of 17 standard parts. We remark that the effect of this overestimation of rank of Y in the NMF is something that deserves further study, but we shall not elaborate in this discussion. Applying our algorithm with only 10 iterations, we obtain the parts sketched in Figure 5.6. It is seen that not all limbs are separated from the torso, but all positions are clearly identified. It is worth noting that frame 16 contains entirely zero information, perhaps having something to do with the overestimation of the rank in the NMF. 6. Conclusion. The notion of low-dimensional polytope approximation is investigated in this paper. Given a polytope in the first orthant, we advocate the projection of the polytope back to the probability simplex. The pullback has the advantages that the resulting polytopes are, in a sense, regulated to a more manageable compact set and that the proximity map which identifies the unique nearest point on a polytope to an arbitrarily given point can be calculated in finitely many steps. Two kinds of low-dimensional polytopes are used in the discussion. The polytopes of the first kind are restricted entirely to the probability simplex, whereas those of the second kind are truncated simplicial cones generated from polytopes of the first kind. Our main thrust in this study is to apply the low-dimensional polytope approximation to the nonnegative matrix factorization problem. The fact that we are using the proximity maps to compute the unique global minimization in each alternating direction strongly suggests that we should have obtained the best possible approximation per iteration, although there is no overall guarantee that our algorithm solves the original problem to its absolute optimality. Numerical experiments seem to evidence much
1154
MOODY T. CHU AND MATTHEW M. LIN
smaller residual errors in the factorization by our method than by the Lee–Seung multiplicative updating scheme. In attempting to convey our ideas more easily, the main task of computing the proximity map is accomplished at present column by column and thus is less competitive in speed with the Lee–Seung algorithm, which can be executed under matrix-tomatrix operations. A major topic for the next phase of study should be to investigate the possibility of computing the proximity map for multiple columns simultaneously. Such a vectorization, if realizable, would be an added power to our method, which in theory should produce the best possible approximation per alternating direction. REFERENCES [1] K. P. Bennett and C. Campbell, Support vector machines: Hype or hallelujah?, SIGKDD Explor. Newsl., 2 (2000), pp. 1–13. [2] P. S. Bradley and O. L. Mangasarian, k-Plane clustering, J. Global Optim., 16 (2000), pp. 23–32. [3] L. Chen, New analysis of the sphere covering problems and optimal polytope approximation of convex bodies, J. Approx. Theory, 133 (2005), pp. 134–145. [4] K. L. Clarkson, Algorithm for polytope covering and approximation, in Algorithms and Data Structures, Lecture Notes in Comput. Sci. 709, 1993, pp. 246–252. [5] J. H. Conway and N. J. A. Sloane, Sphere Packings, Lattices, and Groups, 3rd ed., SpringerVerlag, New York, 1999. [6] D. Donoho and V. Stodden, When does nonnegative matrix factorization give a correct decomposition into parts, in Proceedings of the 17th Annual Conference Neural Information Processing Systems, NIPS, Stanford University, Stanford, CA, 2003; also available online from http://www-stat.stanford.edu/∼donoho. [7] P. M. Gruber, Volume approximation of convex bodies by inscribed polytopes, Math. Ann., 281 (1988), pp. 229–245. [8] P. M. Gruber, Volume approximation of convex bodies by circumscribed polytopes, DIMACS Series 4, AMS, Providence, RI, 1991. [9] P. Hall and B. A. Turlach, On the estimation of a convex set with corners, IEEE Trans. Pattern Anal. Machine Intell., 21 (1999), pp. 225–234. [10] P. K. Hopke, Receptor Modeling in Environmental Chemistry, Wiley and Sons, New York, 1985. [11] P. K. Hopke, Receptor Modeling for Air Quality Management, Elsevier, Amsterdam, 1991. [12] P. O. Hoyer, Nonnegative sparse coding, Neural Networks for Signal Processing XII, Proceedings of IEEE Workshop on Neural Networks for Signal Processing, Martigny, 2002. [13] T. Kawamoto, K. Hotta, T. Mishima, J. Fujiki, M. Tanaka, and T. Kurita, Estimation of single tones from chord sounds using nonnegative matrix factorization, Neural Network World, 3 (2000), pp. 429–436. [14] S. S. Keerthi, S. K. Shevade, C. Bhattacharyya, and K. R. K. Murthy, A fast iterative nearest point algorithm for support vector machine classifier design, IEEE Trans. Neural Networks, 11 (2000), pp. 124–136. [15] E. Kim, P. K. Hopke, and E. S. Edgerton, Source identification of Atlanta aerosol by positive matrix factorization, J. Air Waste Manage. Assoc., 53 (2003), pp. 731–739. [16] H. Kim and H. Park, Nonnegative Matrix Factorization Based on Alternating Nonnegativity Constrained Least Squares and Activity Set Method, Technical report GT-CSE-07-01, George Institute of Technology, 2007; SIAM J. Matrix Anal. Appl., to appear. [17] D. D. Lee and H. S. Seung, Learning the parts of objects by nonnegative matrix factorization, Nature, 401 (1999), pp. 788–791. [18] D. D. Lee and H. S. 
Seung, Algorithms for nonnegative matrix factorization, in Advances in Neural Information Processing 13, MIT Press, Cambridge, MA, 2001, pp. 556–562. [19] E. Lee, C. K. Chun, and P. Paatero, Application of positive matrix factorization in source apportionment of particulate pollutants, Atmos. Environ., 33 (1999), pp. 3201–3212. [20] W. Liu and J. Yi, Existing and New Algorithms for Nonnegative Matrix Factorization, report, University of Texas at Austin, 2003, available at http://www.cs.utexas.edu/users/liuwg/ 383CProject/final report.pdf. [21] P. Paatero and U. Tapper, Positive matrix factorization: A non-negative factor model with optimal utilization of error estimates of data values, Environmetrics, 5 (1994), pp. 111–126.
POLYTOPE APPROXIMATION AND NMF
1155
[22] P. Paatero and U. Tapper, Least squares formulation of robust nonnegative factor analysis, Chemomet. Intell. Lab. Systems, 37 (1997), pp. 23–35. [23] A. Pascual-Montano, J. M. Carzzo, K. Lochi, D. Lehmann, and R. D. Pascual-Marqui, Nonsmooth nonnegative matrix factorization (nsNMF), IEEE Trans. Pattern Anal. Machine Intell., 28 (2006), pp. 403–415. [24] J. Piper, V. P. Pauca, R. J. Plemmons, and M. Giffin, Unmixing spectral data for space objects using independent component analysis and nonnegative matrix factorization, in Proceedings of Amos Technical Conference, Maui, HI, 2004; see http://www.wfu.edu/∼plemmons. [25] K. Sekitani and Y. Yamamoto, A recursive algorithm for finding the minimum norm point in a polytope and a pair of closest points in two polytopes, Math. Program., 61 (1993), pp. 233–249. ¨l, A receptor model using a specific nonnegative transformation [26] J. Shen and G. W. Israe technique for ambient aerosol, Atmos. Environ., 23 (1989), pp. 2289–2298. [27] S. Sra and I. S. Dhillon, Nonnegative Matrix Approximation: Algorithms and Applications, Technical report TR-06-27, University of Texas, Austin, 2006. [28] P. Wolfe, Finding the nearest point in a polytope, Math. Program., 11 (1976), pp. 128–149.
c 2008 Society for Industrial and Applied Mathematics
SIAM J. SCI. COMPUT. Vol. 30, No. 3, pp. 1156–1177
A CONTINUOUS INTERIOR PENALTY METHOD FOR VISCOELASTIC FLOWS∗ ANDREA BONITO† AND ERIK BURMAN‡ Abstract. In this paper we consider a finite element discretization of the Oldroyd-B model of viscoelastic flows. The method uses standard continuous polynomial finite element spaces for velocities, pressures, and stresses. Inf-sup stability and stability for convection-dominated flows are obtained by adding a term penalizing the jump of the solution gradient over element faces. To increase robustness when the Deborah number is high, we add a nonlinear artificial viscosity of shock-capturing type. The method is analyzed on a linear model problem, and optimal a priori error estimates are proven that are independent of the solvent viscosity ηs . Finally we demonstrate the performance of the method on some known benchmark cases. Key words. finite element methods, stabilized methods, continuous interior penalty, viscoelastic flows, Oldroyd-B AMS subject classifications. 65N12, 65N30, 76A10, 76M10 DOI. 10.1137/060677033
1. Introduction. The numerical computation of viscoelastic flows is a challenging problem that has received increasing attention during the past twenty years. The system takes the form of the incompressible Navier–Stokes equations coupled to a nonlinear hyperbolic equation for the extra stress. Several difficulties have to be handled simultaneously by the numerical method. First is the inf-sup stability condition for the velocity/pressure coupling of the Navier–Stokes equation; second, there is an inf-sup condition due to the coupling between the equation for the extra stress and the Stokes-type system of the Navier–Stokes equations. These difficulties can be studied by considering the so-called three-field Stokes equations [8, 15, 32, 43, 41, 42, 45, 3, 9]. Third, the full viscoelastic system also features a transport term and a nonlinear coupling term in the equation for the stresses; the strength of this term is measured by a parameter that can be expressed in the nondimensional Deborah number. This results in the need to stabilize also the transport term and possibly account for instabilities induced by the nonlinear terms. The existence of a slow steady viscoelastic flow has been proven by Renardy [40] in Hilbert spaces. Picasso and Rappaz [37] analyzed the stationary nonlinear case (without a transport term in the extra stress equation), and they proved a priori and a posteriori error estimates for the finite element approximation error provided the Deborah number is small. The extension to the time-dependent problem has been treated in [6] and to a stochastic model in [5, 4]. Another theoretical approach to numerical methods for viscoelastic flows was proposed by Lozinski and Owens [31]. They proved an energy estimate and introduced a numerical model guaranteeing the positive definiteness of ∗ Received by the editors December 7, 2006; accepted for publication (in revised form) October 9, 2007; published electronically March 21, 2008. http://www.siam.org/journals/sisc/30-3/67703.html † Department of Mathematics, University of Maryland, College Park, MD 20742 (andrea.bonito@ a3.epfl.ch). The work of this author was supported by the Swiss National Science Foundation Fellowship PBEL2–114311. ‡ Department of Mathematics, University of Sussex, Brighton BN1 9RF, UK (e.n.burman@sussex. ac.uk).
1156
A CIP METHOD FOR VISCOELASTIC FLOW
1157
the stress tensor. Lee and Xu [30] also emphasized the importance of keeping the stress tensor positive definite during the computations and gave some guidelines on how to construct a finite element method that would satisfy this constraint. There is a huge amount of literature on finite element methods for viscoelastic flow; we refer to the monograph by Owens and Phillips for an overview [36]. For works on methods using mixed finite element methods we refer to the review article of Baaijens [1] and references therein and the more recent works by Ervin and coworkers [20, 21]. For methods using stabilized finite elements in a framework similar to ours we refer to the work of Behr and co-workers [33] and the work of Codina [16]. Finally, note that fully three-dimensional free surface Oldroyd-B flows were successfully computed by Bonito, Picasso, and Laso [7] by using a Galerkin-leastsquare (GLS)/elastic viscous split stress procedure (EVSS). In this paper we propose a method where all of the instabilities are treated in a uniform fashion, by adding a term stabilizing the gradient jump over element faces. This type of method is a generalization of the interior penalty method for continuous approximation spaces proposed by Douglas and Dupont [18] for convection-diffusion problems. The analysis for high Peclet number problems was given by Burman and Hansbo in [13], and inf-sup stability for Stokes systems using equal order interpolation was proven in [14]. In [12], Burman, Fern´ andez, and Hansbo provided an extension to Oseen’s equations by obtaining estimates independent of the local Reynolds number stabilization term taking care of the convective term. Here we prove estimates for the interior penalty method applied to a linear model problem of viscoelastic flow by showing that the discretization is stable and has quasi-optimal convergence properties. An outline of the paper is as follows: First we introduce the linear model problem and comment on the well-posedness. In section 2 we give the finite element formulation, and in section 3 we prove an inf-sup condition. This leads to optimal a priori error estimates that are given in section 4. In section 5 we discuss an iterative solution algorithm decoupling the velocity/pressure computation from the stress computation, and we prove that the iterations converge. In section 6 we introduce an additional nonlinear stabilization term by drawing on earlier work on nonlinear hyperbolic conservation laws and nonlinear artificial viscosity. In section 7 finally we give some numerical examples demonstrating optimal convergence for smooth solutions in the nonlinear case, the effect of linear stabilization in the presence of singularities for the linear problem, and finally the effect of nonlinear stabilization in a nonlinear case. 1.1. The Oldroyd-B model of viscoelastic flow and a linear model problem. Given are ηs , ηp > 0 the solvent and polymer viscosities, respectively, and λ > 0 the relaxation time parameter of the fluid—the time for the stress to return to zero under a constant-strain condition. Denoting by u the velocities, p the pressure, and σ the extra-stress tensor, the Oldroyd-B model for viscoelastic flows takes the form
(1.1)
−∇ · (2ηs (u) + σ) + ∇p = f ∇·u=0 λ((u · ∇ ) σ − g(σ, u)) + σ − 2ηp (u) = 0 u=β σ=0
with g(σ, u) = ∇u σ + σ(∇u)T .
in Ω, in Ω, in Ω, on ∂Ω, on ∂Ωin ,
1158
ANDREA BONITO AND ERIK BURMAN
Here ∂Ωin := {x ∈ ∂Ω : β · ν < 0}, and ν denotes the unit outer normal vector of Ω. Note that the term (u · ∇ ) σ − g(σ, u) is a (stationary) Oldroyd-type derivative, which is frame invariant. Refer to [2] for more precisions and other models. The existence of a viscoelastic flow (u, σ) ∈ H 3 × H 2 satisfying (1.1) with ∂Ω ∈ 1,1 C has been proved by Renardy [39] in Hilbert spaces provided the data f are small enough in H 1 . The extension of this result to Banach spaces has been treated by Fern´ andez-Cara, Guill´en, and Ortega; see, for instance, [23]. The linear problem that we propose for the analysis is obtained from the stationary Oldroyd-B equation by dropping the nonlinear terms, i.e., taking formally g(σ, u) = 0 and replacing (u · ∇)σ by (β · ∇)σ above, where the assumption on the vector field β will be specified later. This results in a system that resembles the known three-field Stokes system but with an advective term in the equation for the stresses. The equation for the stress tensor is therefore a pure transport equation. The analysis of the method will be performed on a linear model problem of the following form:
(1.2)
− ∇ · (2ηs (u) + σ) + ∇p = f ∇·u=0 λ (β · ∇) σ + σ − 2ηp (u) = 0
in Ω, in Ω,
u=β
in Ω, on ∂Ω,
σ=0
on ∂Ωin ,
where β ∈ H 1 (Ω), ∇ · β = 0, and ∇β L∞ (Ω) ≤ M < ∞. The existence and uniqueness for this problem were proved in [19] under sufficient conditions on M and λ. In this paper we will assume that we are working in an Ω that is bounded convex polygonal and we have (at least) the additional regularity β ∈ C 1 (Ω), + , (1.3)
D1 σ L2 (Ω) + D1 p L2 (Ω) + D2 u L2 (Ω) ≤ C f L2 (Ω) + β C 1 (Ω) , where C is a constant depending only on Ω, ηs , ηp , and λ. A proof of this additional regularity in a smooth domain with small data can be found in [39, 23] for the more general problem (1.1). 2. A finite - element formulation. We will use the notations (u, v) = Ω u · v and u, vS = S u · v, with S the boundary of the domain or a face of an element. Let Th denote a conforming triangulation of Ω, and let Eh denote the set of interior faces in Th . We shall henceforth assume that the sequence of meshes {Th }0
/ . h3 [∇ph ], [∇qh ] , 2ηp e e∈Eh . / ηs + ηp ju (uh , vh ) = γu uh , vh
2ηp h[∇uh ], [∇vh ]e + γb , h ∂Ω jp (ph , qh ) = γp
e∈Eh
and (2.3)
jσ (σh , τh ) = γs
0 e∈Eh
1 h2 β · ν L∞ (e) [∇σh ], [∇τh ] e ,
1159
A CIP METHOD FOR VISCOELASTIC FLOW
where γp , γu , γb , and γs are positive constants and [v]|e denotes the jump of the quantity v over e. Moreover let us introduce the bilinear forms (2.4)
a(uh , vh ) = 2ηs ( (uh ), (vh )) − 2ηs (uh ) · ν, vh ∂Ω ,
(2.5)
b(ph , vh ) = −(ph , ∇ · vh ) + ph , vh · ν∂Ω ,
(2.6)
c(σh , vh ) = (σh , (vh )) − σh · ν, vh ∂Ω , 1 λ d(σh , τh ) = (σh + λ(β · ∇)σh , τh ) +
σh , |β · ν | τh ∂Ωin . 2ηp 2ηp
(2.7)
× Vh The method we propose then takes the form: Find (uh , σh , ph ) ∈ Vdh × Vd×d h such that (2.8) a(uh , vh ) + b(ph , vh ) − b(qh , uh ) + c(σh , vh ) − c(τh , uh ) + d(σh , τh ) . / γb (ηs + ηp ) β, vh + jp (ph , qh ) + ju (uh , vh ) + jσ (σh , τh ) = (f, vh ) + h ∂Ω ∀(vh , τh , qh ) ∈ Vdh × Vhd×d × Vh . Note that in the above formulation the boundary conditions u=β
on ∂Ω
(resp., σ = 0
on ∂Ωin ),
are imposed weakly justifying the presence of the last term in (2.2) (resp., (2.7)). For ease of notation we will also consider the following compact form, introducing the variables Uh = (uh , σh , ph ) and Vh = (vh , τh , qh ) and the finite element space Xh = Vdh × Vhd×d × Vh : A(Uh , Vh ) =a(uh , vh ) + b(ph , vh ) − b(qh , uh ) + c(σh , vh ) − c(τh , uh ) + d(σh , τh ) and J(Uh , Vh ) = jp (ph , qh ) + ju (uh , vh ) + jσ (σh , τh ) yielding the compact formulation: Find Uh ∈ Xh such that . / γb (ηs + ηp ) β, vh A(Uh , Vh ) + J(Uh , Vh ) = (f, vh ) + h ∂Ω
∀ Vh ∈ Xh .
Clearly this formulation is strongly consistent for sufficiently smooth exact solutions as pointed out in the following lemma. Lemma 2.1 (Galerkin orthogonality). Assume that U = (u, σ, p) ∈ H 2 (Ω)d × 2 H (Ω)d×d × H 2 (Ω), and then the formulation (2.8) satisfies A(U − Uh , Vh ) + J(U − Uh , Vh ) = 0
∀ Vh ∈ Xh .
Proof. The proof is immediate since under the regularity assumption there holds γ (η +η ) ju (u, vh ) = b sh p u, vh ∂Ω , jp (p, qh ) = 0, and jσ (σ, τh ) = 0. Remark 2.2. The analysis proposed below may be extended in a straightforward manner to the case where the velocities are chosen in the affine H 1 -conforming finite element space, whereas pressures and stresses are approximated by piecewise constants. In this case only the jumps of discontinuous pressures have to be stabilized, and the convection term of the stresses is discretized, by using standard upwind fluxes.
1160
ANDREA BONITO AND ERIK BURMAN
3. The inf-sup condition. For the numerical scheme (2.8) to be well-posed, it is essential that there holds an inf-sup condition uniformly in the mesh size h. In order to prove this we recall a lemma from [10] based on the Oswald interpolant [34, 27]. Lemma 3.1 (Oswald interpolation). Let πh∗ : Wh → Vh denote the Oswald interpolation operator on the finite element space. Then there exists a constant cO independent of h such that 0 1 h2s+1 [∇yh ], [∇yh ] e , 0 ≤ s ≤ 1.
hs (∇yh − πh∗ ∇yh ) 2 ≤ cO e∈Eh
Consider now the triple norm given by |||U |||2 =
1 1 1 λ
σ 2 + 2ηs (u) 2 +
p 2 +
|β · ν | 2 σ 2L2 (∂Ω) , 2ηp 2ηp 2ηp
where U = (u, σ, p) and the following corresponding discrete triple norm: |||Uh |||2h = |||Uh |||2 + J(Uh , Uh ). When β = 0, the following discrete norm will be used: |||(uh , σh , ph )|||2∗ =
1 1
σh 2 +2(ηs +ηp ) (uh ) 2 +
ph 2 +jp (ph , ph )+ju (uh , uh ). 2ηp 2ηp
We will prove that the inf-sup condition is satisfied for the discrete form. Theorem 3.2 (stability). Assume that the mesh satisfies the quasiuniformity of the mesh. Then there exist two constants γb∗ > 0 and c independent of h such that for all γp , γu , γs positive constants and for all γb > γb∗ there holds for all Uh ∈ Xh |||Uh |||h ≤ c sup
Vh ∈Xh Vh =0
A(Uh , Vh ) + J(Uh , Vh ) . |||Vh |||h
Remark 3.3. When ηs = 0, the above inf-sup condition is not sufficient to control the velocity. However, if β = 0, by using the same arguments a similar inf-sup condition holds for the discrete norm |||.|||∗ even when ηs = 0; see [3]. The additional control of (Uh ) is obtained by testing with τ = πh (uh ) in (2.8) in a third step of the proof; see [3]. Proof of Theorem 3.2. First we show that there exist Vh = (vh , τh , qh ) ∈ Xh and a constant c1 independent of h and ηs such that c1 |||Uh |||2h ≤ A(Uh , Vh ) + J(Uh , Vh ), and then we show that there exists a constant c2 independent of h and ηs such that |||Vh |||h ≤ c2 |||Uh |||h , after which the claim follows. The first part is the most laborious. The proof is made in two steps. First we establish the coercivity of the bilinear form. Then we recover control of the L2 -norm of the pressure. (1) Choosing (vh , τh , qh ) = (uh , σh , ph ) immediately leads to the equality 1 λ
σh 2 + ((β · ∇)σh , σh ) 2ηp 2ηp λ
σh , |β · ν | σh ∂Ωin . − (2ηs (uh )) · ν, uh ∂Ω + 2ηp
(3.1) A(Uh , Uh ) = 2ηs (uh ) 2 +
A CIP METHOD FOR VISCOELASTIC FLOW
1161
By using an integration by part and since ∇ · β = 0, it follows that 1
(β · ν)σh , σh ∂Ω 2 1 1 = |β · ν | σh , σh ∂Ω\∂Ωin − |β · ν | σh , σh ∂Ωin . 2 2
((β · ∇)σh , σh ) =
Moreover a Cauchy–Schwarz inequality and a trace inequality lead to ηs + ηp 1 1 − ct (2ηs ) 2 (uh ) L2 (Ω) + 1 σh L2 (Ω) h 12 uh (2ηp ) 2 ∂Ω ≤ − (2ηs (uh ) + σh ) · ν, uh ∂Ω , where ct depends only on the trace inequality. Thus, for γb sufficiently large there exists a constant c1 independent of h and ηs such that 1 λ 1/2 2 2 2 c1
σh + 2ηs (uh ) +
|β · ν | σh L2 (∂Ω) + J(Uh , Uh ) 2ηp 2ηp ≤ A(Uh , Uh ) + J(Uh , Uh ). (2) By the surjectivity of the divergence operator we know that there exists vp ∈ H01 (Ω) such that ∇ · vp = ph and vp 1,Ω ≤ c ph 0,Ω , where c is a constant independent of h and ηs . We now take vh = − 2η1p πh vp , qh = 0, and τh = 0 to obtain 1 1 − A((uh , σh , ph ), (πh vp , 0, 0)) − ju (uh , πh vp ) ≥ − 1 2ηs (uh ) 2 2ηp 2ηp 2ηs 2 1 1 −
(πh vp ) 2 −
σh 2 −
(πh vp ) 2 +
ph 2 16 1 ηp2 2ηp 8 2 ηp 2ηp 1 + (∇ph − π ∗ ∇ph , vp − πh vp ) 2ηp 1 1
(2ηs (uh ) + σh ) · ν, πh vp − vp ∂Ω − ju uh , πh vp + 2ηp 2ηp for all 1 > 0, 2 > 0. We now note that the following inequalities hold: (3.2)
(πh vp ) 2 ≤ c3 ph 2 , 1 cO (∇ph − π ∗ ∇ph , vp − πh vp ) ≤ 3 jp (ph , ph ) +
h−1 (vp − πh vp ) 2 2ηp 16 3 ηp γp for all 3 > 0 and where cO is given by Lemma 3.1. By using a standard interpolation result for the L2 -projection and the properties of the function vp , we have
h−1 (vp − πh vp ) 2 ≤ c4 ph 2 . By a trace inequality followed by an inverse inequality and the above approximation result, we have for the boundary term 1
(2ηs (uh ) + σh ) · ν, πh vp − vp ∂Ω 2ηp 2c5 ηs 5 c5 ≤ 2ηs 4 (uh ) 2 +
σh 2 + +
ph 2 2ηp 16 4 ηp2 8 5 ηp
1162
ANDREA BONITO AND ERIK BURMAN
for all 4 > 0 and 5 > 0 and where the constant c5 depends on the trace inequality, the inverse inequality, and the interpolation estimate. For the jump term on the velocities there holds 1 1 1 1 πh vp ≥ − 6 ju (uh , uh ) − ju πh vp , πh vp ju uh , 2ηp 4 6 2ηp 2ηp for all 6 > 0, and we note that by the trace inequality once again there holds 1 1 1 πh vp , πh vp ≤ c6
ph 2 , (3.3) ju 2ηp 2ηp 2ηp where c6 is independent of h. By collecting terms we see that there holds 1 1 A (uh , σh , ph ), πh vp , 0, 0 + J (uh , σh , ph ), πh vp , 0, 0 2ηp 2ηp 1 2c3 ηs c3 cO c4 2c5 ηs c5 c6 ≥ 1− − − − − −
ph 2 8 1 ηp 4 2 8 3 γp 8 4 ηp 4 5 4 6 2ηp 1 2ηs − ( 2 + 5 )
σh 2 − ( 1 + 4 )
(uh ) 2 − 6 ju (uh , uh ) − 3 jp (ph , ph ) 2ηp 2ηp for all i , i = 1, . . . , 6 positive constants. By collecting the results of (1) and (2) we may conclude that there exist constants α and cα > 0 independent of h and ηs such that for all η ≥ 0 taking (vh , τh , qh ) = (uh + α 2η1p πh vp , σh , ph ) yields cα |||Uh |||2h ≤ A(Uh , Vh ) + J(Uh , Vh ). To conclude we now need to show that |||Vh |||h ≤ c|||Uh |||h , but this follows immediately by the triangle inequality and by recalling the inequalities (3.2) and (3.3) and using the stability of the L2 -projection. 4. A priori error estimates. A priori error estimates follow from the previously proved inf-sup condition together with the proper continuities of the bilinear forms and the approximation properties of the finite element space. We will start with the latter. Let U = (u, p, σ). Lemma 4.1 (interpolation). Assume that all of the components of U are in H k+1 (Ω). Then there exists a constant c independent of h and ηs such that there holds 1 1 1 1 2 |||U − πh U |||h ≤ chk (ηp + ηs ) 2 u k+1,Ω + 1 h σ k+1,Ω + 1 h p k+1,Ω (2ηp ) 2 (2ηp ) 2 for all h < 1, where k ≥ 1 is the polynomial order of the finite element spaces. Proof. The only part that has to be investigated is the convergence order of the jump terms. Let us denote by c a generic constant independent of h and ηs . Note that by the trace inequality we have . / (ηs + ηp )γb u − πh u, u − πh u
2ηp h[∇(u − πh u)], [∇(u − πh u)]e + h ∂Ω ) ( 2 ≤ c (ηs + ηp ) ∇(u − πh u) 0,K + (ηs + ηp )h2 ∇(u − πh u) 21,K .
A CIP METHOD FOR VISCOELASTIC FLOW
1163
Summing over all e ∈ Eh yields, by the quasiuniformity of the mesh, ju (u − πh u, u − πh u) ≤ cηp h2k u 2k+1,Ω . Similarly we obtain for the pressure jp (p − πh p, p − πh p) ≤ c
1 2k+2 h
p 2k+1,Ω ηp
and the extra stress jσ (σ − πh σ, σ − πh σ) ≤ ch1+2k σ 2k+1,Ω . The claim now follows as an immediate consequence of the approximation properties of the L2 -projection. Theorem 4.2 (convergence in the mesh-dependent norm). Assume that all of the components of U = (u, σ, p) are in H k+1 (Ω) and that the hypotheses of Theorem 3.2 are satisfied; then, for all γp , γu , γs positive constants and for all γb > γb∗ , there exists a constant c independent of h and ηs such that there holds 1 1 1 1 k |||U − Uh |||h ≤ ch (ηp + ηs ) 2 u k+1,Ω + 1 h 2 σ k+1,Ω + 1 h p k+1,Ω ηp2 ηp2 for all h < 1, where k ≥ 1 is the polynomial order of the finite element spaces. Remark 4.3. Theorem 4.2 does not guarantee the control of (u − uh ) when ηs = 0. However, for ηs = 0 a similar result holds when β = 0 by using the stronger norm ||| · |||∗ : 1 1 1 |||U − Uh |||∗ ≤ chk ηp2 u k+1,Ω + 1 h σ k+1,Ω + 1 h p k+1,Ω ; ηp2 ηp2 see Remark 3.3 and [3]. Proof. First note that by the triangle inequality we have |||U − Uh |||h ≤ |||U − πh U |||h + |||Uh − πh U |||h . The convergence of the first term is immediate by Lemma 4.1. For the second term we know that the inf-sup condition holds |||Uh − πh U |||h ≤ c sup
Vh ∈Vh Vh =0
A(Uh − πh U, Vh ) + J(Uh − πh U ; Vh ) . |||Vh |||h
By Galerkin orthogonality there holds |||Uh − πh U |||h ≤ c sup
Vh ∈Vh Vh =0
A(U − πh U, Vh ) + J(U − πh U ; Vh ) . |||Vh |||h
Now we may note that by a Cauchy–Schwarz inequality there exists a constant c independent of h and ηs such that J(U − πh U, Vh ) ≤ c|||U − πh U |||h |||Vh |||h .
1164
ANDREA BONITO AND ERIK BURMAN
By considering now the standard Galerkin part we have (4.1)
= a(u − πh u, vh ) + b(p − πh p, vh ) − b(qh , u − πh u) +c(σ − πh σ, uh ) − c(τh , u − πh u) + d(σ − πh σ, τh ).
A(U − πh U ; Vh )
Let us now estimate all of the terms in the right-hand side. For ease of notation let us denote by c a generic constant independent of h and ηs . All of the terms in the right-hand side of the above equation can be estimated as follows: 1 a(u − πh u, vh ) ≤ 2ηs (u − πh u)
(vh ) + 2ηs (u − πh u) c v h h 2 L (∂Ω) ≤ c|||U − πh U ||| |||Vh |||h , b(p − πh p, vh ) ≤
1
1
(2ηp )
1 2
p − πh p (2ηp ) 2 ∇ · vh − π ∗ ∇ · vh + | p − πh p, vh · ν∂Ω | 1
1
≤ c|||U − πh U |||h |||Vh |||h + ≤ c |||U − πh U |||h +
hk+1 1
ηp2
Ch 2 1
ηp2
p − πh p ∂Ω
p k+1
ηp2 1
h2
vh ∂Ω
|||Vh |||h ,
b(qh , u − πh u) ≤ (qh , ∇ · (u − πh u)) − (qh , (u − πh u) · ν) 1
= (∇qh − π ∗ ∇qh , u − πh u) ≤ cjp (qh , qh )h−1 ηp2 u − πh u 0,Ω 1
≤ cjp (qh , qh )hk ηp2 u k+1,Ω , c(σ − πh σ, vh ) ≤ σ − πh σ (vh ) + | (σ − πh σ) · ν, vh | 1 1 ∗ − 12 2 vh |∂Ω ) ≤ 1 σ − πh σ (2ηp ) ( (vh ) − π (vh ) + |h (2ηp ) 2 ≤ c|||U − πh U ||| |||Vh |||h , c(τh , u − πh u) ≤ τh (u − πh u) + | τh · ν, u − πh u | ≤
1
1
1
(2ηp ) 2
1
τh (2ηp ) 2 ( (u − πh u) + |h− 2 (u − πh u)|∂Ω ) 1
≤ c|||Vh |||h hk (2ηp ) 2 u k+1 . For the last term, we denote by βh the L2 -projection of β onto
vh ∈ L2 (Ω)d ; ∀K ∈ Th , vh |K ∈ P0 (K) .
1
Since β ∈ C 2 (Ω) it holds that 1
β − βh L∞ ≤ ch 2 β
1
C 2 (Ω)
.
A CIP METHOD FOR VISCOELASTIC FLOW
1165
Thus we obtain λ λ ((β · ∇)(σ − πh σ), τh ) −
σ − πh σ, (β · ν)τh ∂Ωin 2ηp 2ηp λ ≤− (σ − πh σ, ((β − βh ) · ∇)τh ) 2ηp λ s(σ − πh σ, (βh · ∇)τh − π ∗ ((βh · ∇)τh )) − 2ηp 3 1 λ 2 + σ − πh σ, |β · ν | 2 τh 2ηp ∂Ω 1 1 cλ ∞ τh + |β · n | 2 τh ∂Ω ≤ 1 σ − πh σ
1 β − βh L 2ηp h 2 h2
d(σ − πh σ, τh ) =
h |β − βh | [∇τh ] L2 (Ω) + h |β | [∇τh ] L2 (Ω) c ≤ 1 ||σ − πh σ|| |||Vh |||h . h2 The claim now follows by collecting the bounds obtained and by using Lemma 4.1. Theorem 4.4 (convergence in the energy norm). Assume that u ∈ H 2 (Ω)d , σ ∈ H 1 (Ω)d×d , and p ∈ H 1 (Ω) and that the hypothesis of Theorem 3.2 are satisfied; then, for all γp , γu , γs positive constants and for all γb > γb∗ , there exists a constant c independent of h and ηs such that 1 1 1 1 1 1 |||U − Uh ||| ≤ ch 2 (ηp + ηs ) 2 h 2 u 2,Ω + 1 σ 1,Ω + 1 h 2 p 1,Ω . ηp2 ηp2 Moreover, by regularity assumption (1.3) we have 1
|||U − Uh ||| ≤ c˜h 2 , where the constant c˜ depends only on the problem data f , β, ηs , ηp , λ, and Ω. Remark 4.5. As in Theorem 4.2, for ηs = 0 a similar theorem holds when β = 0 by using the stronger norm ||| · |||∗ : 1 1 1 2 |||U − Uh |||∗ ≤ ch ηp u k+1,Ω + 1 σ k,Ω + 1 p k,Ω , ηp2 ηp2 see [3]. Proof. The proof is very similar to the previous proof but uses the decomposition |||U − Uh ||| ≤ |||U − πh U ||| + |||Uh − πh U |||h . Then the inf-sup condition is applied to the second term. However, Galerkin orthogonality no longer holds exactly, and we get |||Uh − πh U |||h ≤ c sup
Vh ∈Vh Vh =0
A(U −πh U, Vh ) + J(U −πh U, Vh )−jp (πh p, qh )−jσ (πh σ, τh ) . |||Vh |||h
Clearly the only thing that differs compared to the previous case is the nonconsistence of the pressure and the extra-stress terms, which may be treated as follows: jp (πh p, qh ) ≤ jp (πh p, πh p)|||Vh |||h ≤ ch p 1,Ω |||Vh |||h ,
1166
ANDREA BONITO AND ERIK BURMAN
where c is a generic constant independent of h and ηs . The last inequality is a consequence of the trace inequality and the stability of the L2 -projection applied in the following fashion: 0 1 h3 [∇πh p], [∇πh p] e ≤ c h2 ∇πh p 2K ≤ ch2 ∇p 2 . e∈Eh
K∈Th
The term for the extra stress can be treated in the same way. 5. A stable iterative algorithm. We propose an iterative algorithm allowing us to solve for the velocity pressure coupling independently of the extra stresses. A similar iterative method as in [22, 25, 8] is presented. The interest of such an algorithm is to decouple the velocity-pressure computation from extra-stress computation for solving (2.8). Each subiteration of the iterative algorithm consists of two steps. First, using the Navier–Stokes equation, the new approximation (unh , pnh ) is determined by using the value of the extra stress at previous step σhn−1 . Then the new approximation σhn is computed by using the constitutive relation with the value unh . More precisely, , σhn−1 , pn−1 ) to be the known approximation of (uh , σh , ph ) after n − 1 assume (un−1 h h steps. The first step consists of finding (unh , pnh ) such as (5.1)
(A+J)((unh , σhn−1 , pnh ), (vh , 0, qh ))+K(unh , un−1 , vh ) h . / γb (ηs + ηp ) β, vh = (fh , vh )+ h ∂Ω
∀(vh , ph ) ∈ Vd × V
and then finding σhn such as (5.2)
A((unh , σhn , pnh ), (0, τh , 0)) = 0
∀τh ∈ Vd×d .
Here K : H 1 (Ω)d × H 1 (Ω)d × H 1 (Ω)d → R is defined for all u1 , u2 , v ∈ H 1 (Ω)d by K[u1 , u2 , v] := 2ηp ( (u1 − u2 ), (v)) . The term K(unh , un−1 , v), which vanishes for unh = un−1 , has been added to (2.8) in h h (5.1) in order to obtain a stable iterative algorithm even when ηs vanishes. Theorem 5.1 (stability of the iterative scheme). Let (unh , σhn , pnh ) be the solution of (5.1) and (5.2) with f = 0 and β = 0. There exist γu∗ and γb∗ > 0 independent of h such that, for all γp > 0, γu ≥ γu∗ , γb ≥ γb∗ , and γσ ≥ 0, there exists a constant c > 0 such that ηp (unh ) 2 +
3 3
σhn 2 + c pnh 2 ≤ ηp (uhn−1 ) 2 +
σ n−1 2 . 16ηp 16ηp h
Proof. The same arguments as in Theorem 3.2 are used in this proof. By taking vh = unh , q = pnh in (5.1) and τ = σhn in (5.2), it follows by using the definitions of the bilinear forms (2.4), (2.5), (2.1), (2.2) that (5.3)
1 2(ηs + ηp ) (unh ) 2 +
σ n 2 + ju (unh , unh ) + jp (pnh , pnh ) + jσ (σhn , σhn ) 2ηp h ( ) = 2ηp (un−1 ) − σhn−1 , (unh ) + (σhn , (unh )) h 0 1 + (σhn−1 − σhn ) · ν, unh ∂Ω + 2ηs (unh ) · ν, unh ∂Ω .
A CIP METHOD FOR VISCOELASTIC FLOW
1167
A standard Young’s inequality leads to (σhn , (unh )) ≤ ηp (unh ) 2 +
1
σ n 2 . 4ηp h
), by using the orthogonality of the Galerkin approximation Since σhn−1 = 2ηp πh (un−1 h and a Young’s inequality again we obtain ( ) ) ( 2ηp (un−1 ) − σhn−1 , (unh ) = 2ηp (un−1 ) − 2ηp πh (un−1 ), (unh − πh∗ (unh )) h h h 1 ) 2 +
σ n−1 2 + 3ηp (unh ) − πh∗ (unh ) 2 . ≤ ηp (un−1 h 8ηp h The last term in the inequality above can be estimated by using Lemma 3.1
he [ (unh )], [ (unh )]∂Ω 3ηp (unh ) − πh∗ (unh ) 2 ≤ 3ηp cO e
3cO ≤ 2ηp γu
he [∇unh ], [∇unh ]∂Ω . 2γu e By using the trace inequality followed by an inverse estimate, we can prove that there exist two constants c˜1 , c˜2 > 0 such that 1 ) 8˜ 0 n−1 c1 2 ηp γb n n 3 1 ( n−1 2 u ,u
σh + σhn 2 + (σh − σhn ) · ν, unh ∂Ω ≤ 16ηp γb h h h ∂Ω and
2ηs ( (unh ) · ν, unh )∂Ω ≤ ηs (unh ) 2 +
c˜2 2 ηp γb n n 3 u ,u . γb h h h ∂Ω
By collecting the inequalities above in (5.3) we obtain 3
σ n 2 + ju (unh , unh ) + jp (pnh , pnh ) + jσ (σhn , σhn ) 16ηp h 3 ) 2 +
σ n−1 2 ≤ ηp (un−1 h 16ηp h . / 3cO 8˜ c1 + c˜2 (ηs + ηp )γb n n u h , uh + 2ηp γu
he [∇unh ], [∇unh ]∂Ω + . 2γu γb h ∂Ω e
ηp (unh ) 2 +
Now by choosing γu∗ and γb∗ such that 3 cO , γb∗ > 8˜ c1 + c˜2 , 2 we obtain that there exists a constant c > 0 independent of h and ηs such that γu∗ >
3
σ n 2 + c (ju (unh , unh ) + jp (pnh , pnh ) + jσ (σhn , σhn )) 16ηp h 3 ) 2 +
σ n−1 2 . ≤ ηp (un−1 h 16ηp h
(ηs + ηp ) (unh ) 2 + (5.4)
It remains to prove that there exists a constant c > 0 independent of h and ηs such that (5.5) 1
pnh 2 ≤ c (ηs + ηp ) (unh ) 2 +
σhn 2 + ju (unh , unh ) + jp (pnh , pnh ) + jσ (σh , σh ) . 2ηp
1168
ANDREA BONITO AND ERIK BURMAN
In order to do that, using the surjectivity of the divergence operator once again, there exists vpn ∈ H01 (Ω) such that ∇ · vpn = pnh and vpn 1,Ω ≤ c˜ pnh 0,Ω for a constant c˜ independent of h and ηs . By choosing now vh = −πh 2η1p vpn and qh = 0 in (5.1) and using the same arguments as in Theorem 3.2 part (2), one can prove (5.5). 6. Nonlinear stabilization terms, shock capturing. It is well known that, in the case of nonlinear hyperbolic conservation laws, linear stabilization is insufficient to assure convergence to an entropy solution. This is due to the spurious oscillations that remain close to internal layers. Such oscillations destroy the monotonicity properties needed to pass to the limit in the entropy inequality. It was shown by Johnson and Szepessy [28] (see also [29]) that a nonlinear shock-capturing term can be used to smooth the solution close to layers. This way, in the case of the Burgers’ equation one may show that the sequence of finite element solutions converges to the unique entropy solution. More recently it was shown that a nonlinear viscosity based on the gradient jumps is alone sufficient to assure convergence of finite element approximations of the Burgers’ equation using a discrete maximum principle [11]. By drawing from these experiences from the Burgers equation (see also [12]) we propose to add a stabilization term on the form jsc (uh ; σh , τh ) =
(νK ∇σ, ∇τ ),
K∈Th
with νK = γnl h2K λ maxe∈∂K [∇u] e . Note that this term is weakly consistent; clearly if u ∈ [H 2 (Ω)], then jsc (u, σh , τh ) = 0. The idea is to add an artificial viscosity that can control the nonlinear term but that vanishes at an optimal rate under mesh refinement. In particular, the nonlinear stabilization term smears the extra-stress field in zones where the velocity gradient exhibits large variations. As we shall see in the numerical section such an ad hoc nonlinear diffusion allows us to double the Deborah number that may be attained. 7. Numerical examples. Several tests are presented in this section. The computations were performed by using FreeFem++ [38]. We will restrict ourselves to the case k = 1 corresponding to P1 approximations. By considering Poiseuille flow of a linearized Oldroyd-B fluid, convergence rates for problem (1.2) consistent with Theorems 4.2 and 4.4 will be presented. Then it will be shown by using the threefield Stokes system that similar results as when using a GLS/EVSS (see, for instance, [24, 8, 25]) are obtained on a contraction 4 : 1 problem. Finally, we will focus on problem (1.1), and it will be shown that the nonlinear stabilization term allows one to reach a higher Deborah number on the flow past a cylinder in a channel problem. The stationary results provided in this section have been obtained by solving the evolutionary problem using the iterative procedure of section 5 for each time step. H 7.1. Poiseuille flow. Consider a rectangular pipe of dimensions [0, L]×[− H 2, 2] in the x−y directions, where L = 0.15 [m] and H = 0.03 [m]. The boundary conditions are the following. On the top and bottom sides (y = 0 and y = H 2 ), no-slip boundary conditions apply. On the inlet (x = 0) the velocity and the extra stress are given by
u(0, y) =
ux (y) 0
,
σ(0, y) =
σ11 (y) σ12 (y) 0 σ12 (y)
,
1169
A CIP METHOD FOR VISCOELASTIC FLOW
L2 -errors
0.01 0.001 u p
0.0001 1e-05 1e-06
σxx σxy slope 1 slope 2
1e-07 1e-08 0.001
0.01 h
Figure 7.1. Poiseuille flow: convergence orders.
respectively, with H H + y , σ11 (y) = 8ληp V 2 y 2 , and σ12 (y) = −2ηp V y. −y (7.1) ux (y) = V 2 2 On the outlet (x = L) the velocity and the pressure are given by ux (y) u(L, y) = , p(L1 , y) ≡ 0, 0 respectively. We set βx (y) = ux (y) and βy (x, y) = 0. The velocity and extra stress must satisfy (7.1) in the whole pipe, and we choose V = 1 m/s, λ = 1 s, ηs = 0, and ηp = 1 Pa s. Three unstructured meshes are used to check convergence (coarse: 50 × 10; intermediate: 100 × 20; fine: 200 × 40). In Figure 7.1, the error in the L2 norm of the velocity u, the pressure p, and extra-stress components σ11 , σ12 is plotted versus the mesh size. Clearly order one convergence rate is observed for the pressure (in fact superconvergence is observed for the pressure) and the extra stress, while the convergence rate of the velocity is order two, this being consistent with theoretical predictions. 7.2. The 4:1 planar contraction. Numerical results of computation in the 4:1 abrupt contraction flow case are presented and comparison with the GLS/EVSS method is performed. Let us recall briefly the EVSS procedure in the present context. By using the L2 of (uh ), namely, πh (uh ), it is possible to take advantage of the projection onto Vd×d h term 2ηp (uh ) present in the third equation of (1.2) to obtain control of the velocity gradient even when ηs = 0. Indeed, an iterative procedure is used to decouple the computation of the velocity pressure to the extra stress as in section 5, and the term + , 2ηp ( (uh ), (vh )) − 2ηp πh (uprevious ), (v ) h h correis added to the momentum equation (first equation of (1.2)). Here uprevious h sponds to velocity on the previous step of the iterative method. Refer, for instance, to [24, 8, 25] for a detailed description of the method. In addition, following the ideas of [26], the “reduced” GLS stabilization consists of stabilizing the pressure by adding the following term to the discrete problem: αh2 (−∇ · (2ηs (uh ) − ph + σh ) − f, ∇qh ) , ηp e∈ h
1170
ANDREA BONITO AND ERIK BURMAN
H 2
uy
Figure 7.2. Computational domain for the 4:1 contraction (1233 vertices, and since k = 1 this corresponds to (2 + 1 + 3) × 1233 = 7398 DOF).
0.2 0 -0.2 -0.4 -0.6 -0.8 -1 -1.2 -1.4 0 0.5 1 1.5 2 2.5 3 3.5 4 x
Figure 7.3. (Left) 20 isovalues of the GLS method only for the pressure from −0.9 (black) to 0.06 (white) and (right) profile of uy (x, 0.025).
where α > 0. Refer to [8] for a detailed description of those procedures as well as the link between them. This test case underlines the importance of the stabilization of the constitutive equation. The symmetry of the geometry is used to reduce the computational domain by half, as shown in Figure 7.2. Zero Dirichlet boundary conditions are imposed on the walls, and the Poiseuille velocity profile (7.1) is imposed at the inlet with V = 64 m/s, H = 0.05 [m], and β ≡ 0, natural boundary conditions on the symmetry axis and at the outlet of the domain. For all of the computations presented in this subsection we choose ηs = 0 Pa s, ηp = 1 Pa s, and λ = 0 s. The results applying only GLS are shown in Figure 7.3. Similar results obtained by using the EVSS method and the continuous interior penalty (CIP) (γu = 0.1, γp = 0.1, γs = 0) formulation are presented in Figure 7.4. Note that the choice γs = 0 is possible since β ≡ 0 in that case. The number of iterations N needed by the algorithm described in section 5 to achieve (7.2) are provided in Figures 7.5 and 7.6. The robustness with respect to the stabilization parameter γp is clearly observed; see Figure 7.5. The algorithm is more dependent on the value of the parameter γu but provides a reasonable number of iterations for γu ∈ [5.10−2 , 10]; see Figure 7.6. 7.3. The flow past a cylinder in a channel. In this test case we consider the nonlinear problem (1.1). A cylinder of radius R = 1 [m] is placed at a distance L0 = 5 [m] symmetrically in a two-dimensional (2D) channel of width H = 4 [m] and length L = 20 [m] (see Figure 7.7). Boundary conditions are imposed as follows. Poiseuille flow at the inlet and outlet is ux (y) σ11 (y) σ12 (y) u(0, y) = u(L, y) = , , σ(0, y) = σ(L, y) = 0 0 σ12 (y)
1171
uy
A CIP METHOD FOR VISCOELASTIC FLOW
0.2 0 -0.2 -0.4 -0.6 -0.8
uy
0 0.5 1 1.5 2 2.5 3 3.5 4 x 0.2 0 -0.2 -0.4 -0.6 -0.8 -1 0 0.5 1 1.5 2 2.5 3 3.5 4 x Figure 7.4. Left column: 20 isovalues from −0.9 (black) to 0.06 (white); right column: profile of uy (x, 0.025) (top: EVSS, bottom: CIP).
350 300 250 N 23 25 26 26 26
200 p
γp 10−5 10−3 10−1 1 10
150 100 50
γp γp γp γp
0
= 10−3 = 10−1 =1 = 10
-50 0
0.5
1
1.5
2 x
2.5
3
3.5
4
Figure 7.5. Robustness of the number of iterations with respect to the parameter γp (when γu = 0.1). (Left) Number of iterations for the algorithm to converge for several values of γp . (Right) Plot of some corresponding pressure profiles p(x, 0.025).
where ux (y), σ11 (y), and σ12 (y) are given by (7.1). The pressure is imposed to be zero at the outflow, p(y, L) = 0. No-slip conditions are imposed on the sphere and on the upper wall, while the y component of the velocity is imposed to vanish on the lower wall. The parameters are chosen such as described in Table 7.1. Due to the
1172
ANDREA BONITO AND ERIK BURMAN
450 400 350 N 44 26 7 4 3
300 250 200
p
γu 5.10−2 10−1 1 5 10
150 100 γu γu γu γu
50 0
= 5.10−2 = 10−1 =1 = 10
-50 0
0.5
1
1.5
2 x
2.5
3
3.5
4
Figure 7.6. Robustness of the number of iterations with respect to the parameter γu (when γp = 0.1). (Left) Number of iterations for the algorithm to converge for several values of γu . (Right) Plot of some corresponding pressure profiles p(x, 0.025).
y
H
R 000000000000000000000000000000000 111111111111111111111111111111111 111111111111111111111111111111111 000000000000000000000000000000000 000000000000000000000000000000000 111111111111111111111111111111111 000000000000000000000000000000000 111111111111111111111111111111111 000000000000000000000000000000000 111111111111111111111111111111111 000000000000000000000000000000000 111111111111111111111111111111111 111111111111111111111111111111111 000000000000000000000000000000000 L0
x
L
Figure 7.7. Cylinder radius R placed symmetrically in a 2D planar channel of half width. Table 7.1 Flow past a cylinder; parameters used. ηs 0.59 Pa s
ηp 0.41 Pa s
V 3/8 m/s
γp 0.1
γu 0.1
γσ 0.1
nonlinearity of the considered problem, we propose to seek a solution of (1.1) as the stationary limit of the corresponding evolution problem. Therefore to (1.1) we add ∂σ the time derivatives ∂u ∂t and ∂t . The time step used to reach the stationary state was chosen to be 0.01 for all of the simulations and the algorithm stopped when (7.2)
∇(un − un−1 ) L2 (Ω) < 1.e−6 .
∇u0 L2 (Ω)
The characteristics of the three different meshes used are provided in Table 7.2. 2 We define the Deborah number De = λV6RH , which will be the nondimensional parameter used to characterize the viscoelasticity of the fluid. We compare our results
1173
A CIP METHOD FOR VISCOELASTIC FLOW Table 7.2 Flow past a cylinder; meshes used.
h [m]
Coarse 0.5
Intermediate (int) 0.25
Fine 0.125
Table 7.3 Flow past a cylinder; Drag factor F ∗ without nonlinear stabilization. De
Coarse
Int
Fine
0.5 0.7 1 1.5
9.32 9.42 10.07 11.76
9.51 9.51 9.92 11.47
9.51 9.45 9.67 ×××
Table 7.4 Flow past a cylinder; drag factor with the nonlinear stabilization. For comparison the results of [17, 44] are included. De
Coarse
Int
Fine
Dou and Phan-Thien [17]
Sun et al. [44]
0.5 0.7 1 1.5 2 2.5 3
8.99 8.99 9.59 12.6 29.7 93.9 ×××
9.35 9.29 9.66 11.8 14.6 19.6 ×××
9.47 9.39 9.60 10.76 12.22 13.65 15.12
9.59 9.64 10.1 11.7 -
9.48 9.38 9.49 10.1 -
with results presented in the literature by means of the drag factor F ∗ defined by F∗ =
F , 4π(ηs + ηp )V /(6H 2 )
where F is the drag on the cylinder. Drag factors when γnl = 0 (no nonlinear stabilization used) are shown in Table 7.3 (× × × means that the pseudo time-stepping scheme did not reach a stationary state or the iterative scheme did not converge for the time step used). The values match those of [35, 17, 44] and those presented in [36, Table 9.1] (cf. Table 7.4). We now turn on the nonlinear stabilization term and set γnl = 0.1 drag factors are shown in Table 7.4. For comparison we give the results obtained by Dou and Phan-Thien [17] and Sun et al. [44]. For further comparisons we refer to [36, Table 9.1]. When the nonlinear stabilization is added, a higher Deborah number can be reached. It is interesting to see that the behavior of the method changes. When no nonlinear stabilization is present, the method can converge on coarse meshes, whereas no convergence is obtained on finer meshes; this typically indicates that the method is unstable: Increasing the number of degrees of freedom makes the algorithm deteriorate. In the stabilized case the situation is the opposite; for a high Deborah number the algorithm does not converge on coarse meshes due to divergence of the extra stress. On fine enough meshes, however, the method does converge, indicating that in this case the increasing number of degrees of freedom may be used to resolve the nonlinear operator without losing stability. We also observed that the magnitude of the nonlinear stabilization decreases under refinement.
1174
ANDREA BONITO AND ERIK BURMAN
70
coarse int fine
60 50 40 σ11 30 20 10 0 -10 -5
0
5 x
10
15
Figure 7.8. Flow past a cylinder: σ11 , De = 0.5.
160
coarse int fine
140 120 100 σ11
80 60 40 20 0 -20 -5
0
5 x
10
15
Figure 7.9. Flow past a cylinder: σ11 , De = 1.5.
The profiles of the component σ11 of the extra stress on the cylinder surface and on the wake of the cylinder at De = 0.5 (Figure 7.8), De = 1.5 (Figure 7.9), De = 2.5 (Figure 7.10), and De = 3 (Figure 7.11) are reported. For low Deborah numbers the coarse mesh approximations underestimate the stresses. However, for high Deborah numbers, the stresses on the cylinder are strongly overestimated on coarse meshes; see Figures 7.9 and 7.10.
A CIP METHOD FOR VISCOELASTIC FLOW
900
1175
coarse int fine
800 700 600 500 σ11 400 300 200 100 0 -100 -5
0
5 x
10
15
Figure 7.10. Flow past a cylinder: σ11 , De = 2.5.
200 fine
180 160 140 120 σ11
100 80 60 40 20 0 -20 -5
0
5 x
10
15
Figure 7.11. Flow past a cylinder: σ11 , De = 3.
Acknowledgments. The authors are thankful to the anonymous referees for constructive comments. REFERENCES [1] F. T. P. Baaijens, Mixed finite element methods for viscoelastic flow analysis: a review, J. Non-Newtonian Fluid Mech., 79 (1998) pp. 362–385. [2] R. Bird, C. Curtiss, R. Armstrong, and O. Hassager, Dynamics of Polymeric Liquids, Vol. 1 and 2, John Wiley & Sons, New York, 1987.
1176
ANDREA BONITO AND ERIK BURMAN
SIAM J. SCI. COMPUT. Vol. 30, No. 3, pp. 1178–1204
© 2008 Society for Industrial and Applied Mathematics

STOCHASTIC PRECONDITIONING FOR DIAGONALLY DOMINANT MATRICES∗

HAIFENG QIAN† AND SACHIN S. SAPATNEKAR‡

Abstract. This paper presents a new stochastic preconditioning approach for large sparse matrices. For the class of matrices that are rowwise and columnwise irreducibly diagonally dominant, we prove that an incomplete LDL^T factorization in a symmetric case or an incomplete LDU factorization in an asymmetric case can be obtained from random walks and used as a preconditioner. It is argued that our factor matrices have better quality, i.e., better accuracy-size trade-offs, than preconditioners produced by existing incomplete factorization methods. Therefore a resulting preconditioned Krylov-subspace iterative solver requires less computation than traditional methods to solve a set of linear equations with the same error tolerance. The advantage increases for larger and denser matrices. These claims are verified by numerical tests, and we provide techniques that can potentially extend the theory to non-diagonally-dominant matrices.

Key words. incomplete factorization, iterative solver, preconditioning, random walk

AMS subject classifications. 15A12, 15A23, 60G50, 65C05, 65F10

DOI. 10.1137/07068713X
1. Introduction. Preconditioning is a crucial part of an iterative solver. Suppose a set of linear equations is $Ax = b$, where $A$ is a given square nonsingular matrix that is large and sparse, $b$ is a given vector, and $x$ is the unknown solution vector to be computed. A preconditioner is a square nonsingular matrix $T$ such that an iterative solver can solve the transformed system $TAx = Tb$ with a higher convergence rate. The quality of a preconditioner matrix $T$ is determined by how closely it approximates $A^{-1}$, measured by the condition number of $TA$ or the spectral radius of $(I - TA)$. Existing preconditioning techniques can be roughly divided into two categories: explicit methods and implicit methods [3]. In explicit methods, which are often referred to as approximate inverse methods, the preconditioner $T$ is in the form of a matrix or a polynomial of matrices [3], [4], [28]. In implicit methods, the preconditioner $T$ is in the form of $(A')^{-1}$, where $A'$ approximates $A$ and is easier to solve [3], [28]. Although explicit preconditioning methods have the advantage of being easily parallelizable, implicit methods have been more successfully developed and more widely used. A prominent class of implicit preconditioners are those based on incomplete LU (ILU) factorization: for example, ILU(0), ILU(k), and ILUT are popular choices in numerical computation [2], [6], [28]. Another aspect of the quality of an ILU preconditioner is its stability, i.e., the condition numbers of the ILU factors, especially for indefinite matrices [7]; for diagonally dominant matrices, this is less of an issue, since most ILU methods, including ours, guarantee that the ILU factors are also diagonally dominant. The subject of this paper, except for section 8, is the class of matrices that are rowwise and columnwise irreducibly diagonally dominant, defined in Definition 1.2.

∗ Received by the editors April 1, 2007; accepted for publication (in revised form) November 2, 2007; published electronically March 21, 2008. This research was supported in part by the NSF under awards CCF-0634802 and CCF-0205227 and by an IBM fellowship. http://www.siam.org/journals/sisc/30-3/68713.html
† IBM T. J. Watson Research Center, Yorktown Heights, NY ([email protected]).
‡ Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN ([email protected]).
Definition 1.1. The matrix graph of a square matrix $A$ of dimension $N$ is a directed graph with $N$ nodes labeled $1, 2, \ldots, N$ such that, for any $i \ne j$, an edge exists from node $i$ to node $j$ if and only if $A_{i,j} \ne 0$.

Definition 1.2. A square matrix $A$ is said to be rowwise and columnwise irreducibly diagonally dominant if it satisfies the following conditions for any $i$:
1. $A_{i,i} \ge \sum_{j \ne i} |A_{i,j}|$.
2. $A_{i,i} \ge \sum_{j \ne i} |A_{j,i}|$.
3. There exists $k$ such that $A_{k,k} > \sum_{j \ne k} |A_{k,j}|$ and that there exists a directed path from node $i$ to node $k$ in the matrix graph of $A$.
4. There exists $k$ such that $A_{k,k} > \sum_{j \ne k} |A_{j,k}|$ and that there exists a directed path from node $k$ to node $i$ in the matrix graph of $A$.

It is worth noting the following: if row $i$ is strictly diagonally dominant, the third condition is trivially satisfied; if column $i$ is strictly diagonally dominant, the fourth condition is trivially satisfied; all diagonal entries are positive; $A$ is positive definite. If $A$ is symmetric, its matrix graph can be represented by an undirected graph; then the first and second conditions merge into one, and the third and fourth merge as well.

For a matrix $A$ that satisfies Definition 1.2, the most widely used preconditioners are the variants of incomplete triangular factorization [2], [19], [28]. If $A$ is asymmetric, an ILU preconditioner is in the form of $T = (\tilde{L}\tilde{D}\tilde{U})^{-1}$, where $\tilde{L}$ is a lower triangular matrix with unit diagonals, $\tilde{D}$ is a diagonal matrix, and $\tilde{U}$ is an upper triangular matrix with unit diagonals. To produce such a preconditioner, various existing techniques all perform Gaussian elimination on $A$, and each uses a specific strategy to drop insignificant entries during the process: ILU(0) applies a pattern-based strategy and allows $\tilde{L}_{i,j} \ne 0$ or $\tilde{U}_{i,j} \ne 0$ only if $A_{i,j} \ne 0$ [28]; ILUT applies a value-based strategy and drops an entry from $\tilde{L}$ or $\tilde{U}$ if its value is below a threshold [28]; a more advanced strategy can be a combination of pattern, threshold, and other size limits such as a maximum number of entries per row. Another ILU strategy, referred to as modified ILU (MILU), is to compensate the diagonal entries of $\tilde{D}$ during the factorization process to guarantee that the row sums of $\tilde{L}\tilde{D}\tilde{U}$ are equal to those of $A$ [28]. Combining the MILU strategy with the previously mentioned dropping strategies results in MILU(0), MILUT, and so on. If $A$ is symmetric, the ILU becomes the incomplete Cholesky (IC) preconditioner in the form of $T = (\tilde{L}\tilde{D}\tilde{L}^T)^{-1}$, where $\tilde{L}$ is a lower triangular matrix with unit diagonals and $\tilde{D}$ is a diagonal matrix. The various ILU strategies all apply in their symmetric forms and result in the preconditioners IC(0), ICT, MIC(0), MICT, and so on. For symmetric M-matrices, another approach is the support-graph method, which also produces a preconditioner in the form of $T = (\tilde{L}\tilde{D}\tilde{L}^T)^{-1}$ but with the property that $\tilde{L}\tilde{D}\tilde{L}^T$ corresponds to a subgraph of the matrix graph of $A$; the main difference between the (nonrecursive) support-graph and IC methods is that entries are dropped from the original matrix $A$ initially rather than from the partially factored matrix during the factorization process [5], [18], [30]. The proposed preconditioning technique in this paper belongs to the category of implicit preconditioners based on incomplete factorization, and our innovation is a stochastic procedure for building the incomplete triangular factors.
It is argued algorithmically that our factor matrices have better quality, i.e., better accuracy-size trade-offs, than preconditioners produced by existing incomplete factorization methods. Therefore the resulting preconditioned Krylov-subspace solver, which we refer to as the hybrid solver, requires less computation than traditional methods to solve a set of linear equations with the same error tolerance, and the advantage increases for larger and denser matrices. We use numerical tests to compare our method against
IC(0), ICT, MICT, and support-graph preconditioners on symmetric diagonally dominant benchmarks and compare against ILU(0), ILUT, and MILUT on asymmetric diagonally dominant benchmarks. We also provide in section 5.3 the relation to explicit factored-approximate-inverse preconditioners and in section 8 techniques that can potentially extend the theory to non-diagonally-dominant matrices.

Parts of this paper were initially published in [25] and [26], which dealt with only symmetric diagonally dominant M-matrices in specific engineering problems. This manuscript includes unpublished mathematical proofs in section 3, unpublished sections 4 and 8 on the generalization of the theory, unpublished techniques in section 6 to improve the performance, as well as comprehensive numerical results. For clarity of the presentation, sections 2 and 3 will describe the main framework of our theory in the context of a narrower class of matrices: if a matrix satisfies Definition 1.2 and is a symmetric M-matrix, it is referred to as an R-matrix in this paper as shorthand. Then in section 4 we will show that any matrix $A$ that satisfies Definition 1.2 can be handled with two extra techniques.

We will now review previous efforts to use stochastic methods to solve systems of linear equations. Historically, the theory was developed on two seemingly independent tracks, related to the analysis of potential theory [8], [15], [17], [20], [21], [23] and to the solution of systems of linear equations [11], [15], [31], [32], [34]. However, the two applications are closely related, and research along each of these tracks has resulted in the development of analogous algorithms, some of which are equivalent. The second track is discussed here. The first random-walk-based linear equation solver appeared in [11], although it was presented as a solitaire game of drawing balls from urns. It was proven in [11] that, if the matrix $A$ satisfies certain conditions, a game can be constructed and a random variable¹ $X$ can be defined such that its expected value $E[X] = (A^{-1})_{ij}$, where $(A^{-1})_{ij}$ is an entry of the inverse matrix of $A$. Two years later, the work in [34] continued this discussion in the formulation of random walks and proposed the use of another random variable, and it was argued that, in certain special cases, this variable has a lower variance than $X$ and hence is likely to converge faster. Both [11] and [34] have the advantage of being able to compute part of an inverse matrix without solving the whole system, in other words, of localizing computation. Over the years, various descendant stochastic solvers have been developed [15], [31], [32], though some of them do not have the locality property. Early stochastic solvers suffered from accuracy limitations, and this was remedied by the sequential Monte Carlo method proposed in [14] and [22], through iterative refinement. Let $x'$ be an approximate solution to $Ax = b$ found by a stochastic solver, let the residual vector be $r = b - Ax'$, and let the error vector be $z = x - x'$; then $Az = r$. The idea of the sequential Monte Carlo method is to iteratively solve $Az = r$ by using a stochastic solver and, in each iteration, to compute an approximate $z'$ that is then used to correct the current solution $x'$.
Although the sequential Monte Carlo method has existed for over forty years, it has not resulted in any powerful solver that can compete with direct and iterative solvers, because random walks are needed in every iteration, which results in a relatively high overall time complexity.

2. Stochastic linear equation solver. In this section, we study the underlying stochastic mechanism of the proposed preconditioner. It is presented as a stand-alone stochastic linear equation solver; however, in later sections, its usage is not to solve equations but to build an incomplete factorization.

¹ The notations are different from the original ones used in [11].
Fig. 2.1. An instance of a random walk game.
2.1. The generic algorithm. Let us consider a random walk "game" defined on a finite undirected connected graph representing a street map, for example, Figure 2.1. A walker starts from one of the nodes, and every day he/she goes to an adjacent node $l$ with probability $p_{i,l}$ for $l = 1, 2, \ldots, \mathrm{degree}(i)$, where $i$ is the current node, $\mathrm{degree}(i)$ is the number of edges connected to node $i$, and the adjacent nodes are labeled $1, 2, \ldots, \mathrm{degree}(i)$. The transition probabilities satisfy

(2.1) $\sum_{l=1}^{\mathrm{degree}(i)} p_{i,l} = 1.$
The walker pays an amount $m_i$ to a motel for lodging every day, until he/she reaches one of the homes, which are a subset of the nodes. Note that the motel price $m_i$ is a function of his/her current location, node $i$. The game ends when the walker reaches a home node: he/she stays there and gets awarded a certain amount of money $m_0$. We now consider the problem of calculating the expected amount of money that the walker has accumulated at the end of the walk, as a function of the starting node, assuming that he/she starts with nothing. The gain function is therefore defined as

(2.2) $f(i) = E[\text{total money earned} \mid \text{walk starts at node } i].$

It is obvious that

(2.3) $f(\text{one of the homes}) = m_0.$

For a nonhome node $i$, again assuming that the nodes adjacent to $i$ are labeled $1, 2, \ldots, \mathrm{degree}(i)$, the $f$ variables satisfy

(2.4) $f(i) = \sum_{l=1}^{\mathrm{degree}(i)} p_{i,l}\, f(l) - m_i.$
For a game with $N$ nonhome nodes, there are $N$ linear equations similar to the one above, and the solution to this set of equations gives the exact values of $f$ at all nodes.
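To make the game concrete, here is a minimal simulation sketch. It is hypothetical code, not from the paper; the three-node street map, motel prices, and transition probabilities are invented for illustration. It estimates $f$ by playing the game repeatedly and checks the estimates against the exact solution of the linear equations of the form (2.4).

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented toy game: nodes 0,1,2 are nonhome; node 3 is a home with award m0 = 0.
# p[i] lists (adjacent node, transition probability); m[i] is the motel price.
p = {0: [(1, 1.0)], 1: [(0, 0.5), (2, 0.5)], 2: [(1, 0.5), (3, 0.5)]}
m = {0: 1.0, 1: 2.0, 2: 1.0}
HOME, M0 = 3, 0.0

def play(start):
    """One game: pay the motel each day until a home is reached; return the gain."""
    gain, node = 0.0, start
    while node != HOME:
        gain -= m[node]
        nxt, probs = zip(*p[node])
        node = rng.choice(nxt, p=probs)
    return gain + M0

# Monte Carlo estimates of f(i) versus the exact solution of (2.4):
f_mc = [np.mean([play(i) for _ in range(20000)]) for i in range(3)]
P = np.array([[0, 1, 0], [0.5, 0, 0.5], [0, 0.5, 0]])  # nonhome-to-nonhome part
f_exact = np.linalg.solve(np.eye(3) - P, -np.array([m[0], m[1], m[2]]))
print(f_mc, f_exact)   # the estimates approach (-13, -12, -7)
```

Since the home award is zero here, the home node drops out of the linear system and only the nonhome-to-nonhome transitions appear in the matrix.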
In the above equations obtained from a random walk game, the set of allowable matrices is a superset of the set of R-matrices.² In other words, given a set of linear equations $Ax = b$, where $A$ is an R-matrix, we can always construct a random walk game that is mathematically equivalent, i.e., such that the $f$ values are the desired solution $x$. To do so, we divide the $i$th equation by $A_{i,i}$ to obtain

(2.5) $x_i + \sum_{j \ne i} \frac{A_{i,j}}{A_{i,i}}\, x_j = \frac{b_i}{A_{i,i}},$

(2.6) $x_i = \sum_{j \ne i} \left( -\frac{A_{i,j}}{A_{i,i}} \right) x_j + \frac{b_i}{A_{i,i}}.$
Equations (2.4) and (2.6) have seemingly parallel structures. Let $N$ be the dimension of the matrix $A$, and let us construct a game with $N$ nonhome nodes, which are labeled $1, 2, \ldots, N$. Due to the properties of an R-matrix, we have the following:
• $\left(-\frac{A_{i,j}}{A_{i,i}}\right)$ is a nonnegative value and can be interpreted as the transition probability of going from node $i$ to node $j$.
• $\left(-\frac{b_i}{A_{i,i}}\right)$ can be interpreted as the motel price $m_i$ at node $i$.
However, the above mapping is insufficient because condition (2.1) may be broken: the sum of the $\left(-\frac{A_{i,j}}{A_{i,i}}\right)$ coefficients is not necessarily one. In fact, because all rows of the matrix $A$ are diagonally dominant, this sum is always less than or equal to one. Condition (2.1) can be satisfied if we add an extra transition probability of going from node $i$ to a home node, by rewriting (2.6) as

(2.7) $x_i = \sum_{j \ne i} \left( -\frac{A_{i,j}}{A_{i,i}} \right) x_j + \frac{\sum_j A_{i,j}}{A_{i,i}} \cdot m_0 + \frac{b_i'}{A_{i,i}}, \quad \text{where } b_i' = b_i - \sum_j A_{i,j} \cdot m_0.$

It is easy to verify that $\frac{\sum_j A_{i,j}}{A_{i,i}}$ is a nonnegative value for an R-matrix and that the following mapping establishes the equivalence between (2.4) and (2.7), while satisfying (2.1) and (2.3):
• $\left(-\frac{A_{i,j}}{A_{i,i}}\right)$ is the probability of going from node $i$ to node $j$.
• $\frac{\sum_j A_{i,j}}{A_{i,i}}$ is the probability of going from node $i$ to a home with award $m_0$.
• $\left(-\frac{b_i'}{A_{i,i}}\right)$ is the motel price $m_i$ at node $i$.
The choice of $m_0$ is arbitrary because $b_i'$ always compensates for the $m_0$ term in (2.7), and in fact $m_0$ can take different values in (2.7) for different rows $i$. Therefore the mapping from an equation set to a game is not unique. A simple scheme is to let $m_0 = 0$, and then $m_i = -\frac{b_i}{A_{i,i}}$.

To find $x_i$, the $i$th entry of the solution vector $x$, a natural way is to simulate a certain number of random walks from node $i$ and use the average monetary gain in these walks as the approximated entry value. If this amount is averaged over a sufficiently large number of walks, then, by the law of large numbers [35], an acceptably accurate solution can be obtained.

² A matrix from a game has all of the properties of an R-matrix except that it may be asymmetric.

According to the central limit theorem [35], the estimation error of the above procedure is asymptotically a zero-mean Gaussian variable with variance inversely proportional to $M$, where $M$ is the number of walks. Thus there is an accuracy-runtime trade-off. In implementation, instead of fixing $M$, one may employ a stopping criterion driven by a user-specified error margin $\Delta$ and confidence level $\alpha$:
(2.8) $P[-\Delta < x_i' - x_i < \Delta] > \alpha,$

where $x_i'$ is the estimated $i$th solution entry from $M$ walks.

2.2. Two speedup techniques. In this section, we propose two new techniques that dramatically improve the performance of the stochastic solver. They will play a crucial role in the proposed preconditioning technique.

2.2.1. Creating homes. As discussed in the previous section, a single entry in the solution vector $x$ can be evaluated by running random walks from its corresponding node in the game. To find the complete solution $x$, a straightforward way is to repeat such a procedure for every entry. This, however, is not the most efficient approach. We propose a speedup technique by adding the following rule: after the computation of $x_i'$ is finished according to criterion (2.8), node $i$ becomes a new home node in the game with an award amount equal to the estimated value $x_i'$. In other words, any later random walk that reaches node $i$ terminates and is rewarded a money amount equal to the assigned $x_i'$. Without loss of generality, suppose that the nodes are processed in the natural ordering $1, 2, \ldots, N$; then, for walks starting from node $k$, the node set $\{1, 2, \ldots, k-1\}$ consists of homes where the walks terminate (in addition to the original homes generated from the strictly diagonally dominant rows of $A$), while the node set $\{k, k+1, \ldots, N\}$ consists of motels where the walks pass by.

One way to interpret this technique is by the following observation about (2.4): there is no distinction between the neighboring nodes that are homes and those that are motels, and the only reason that a walk can terminate at a home node is that its $f$ value is known and is equal to the award. In fact, any node can be converted to a home node if we know its $f$ value and assign the award accordingly. Our new rule simply utilizes the estimate $x_i' \approx x_i$ in such a conversion. Another way to interpret this technique is by looking at the source of the value $x_i'$: each walk that ends at a new home and obtains such an award is equivalent to an average of multiple walks, each of which continues walking from there according to the original game settings.

With this new method, as the computation for the complete solution $x$ proceeds, more and more new home nodes are created in the game. This speeds up the algorithm dramatically, as walks from later nodes are carried out in a game with a larger and larger number of homes, and the average number of steps in each walk is reduced. At the same time, this method helps convergence without increasing $M$, because, as mentioned earlier, each walk becomes the average of multiple walks. The only cost³ is that the game becomes slightly biased when a new home node is created, due to the fact that the assigned award value is only an estimate, i.e., $x_i' \ne x_i$; overall, the benefit of this technique dominates its cost.

³ The cost discussed here is in the context of the stochastic solver only. For the proposed preconditioner, this will no longer be an issue.

2.2.2. Bookkeeping. Direct solvers are efficient in computing solutions for multiple right-hand-side vectors after an initial matrix factorization, since only a forward/backward substitution step is required for each additional solve. Analogous to a direct solver, we propose a speedup mechanism for the stochastic solver.
Recall that, in the procedure of constructing a random walk game discussed in section 2.1, the topology of the game and the transition probabilities are solely determined by the matrix $A$ and hence do not change when the right-hand-side vector $b$ changes. Only motel prices and award values in the game are linked to $b$. When solving a set of linear equations with the matrix $A$ for the first time, we create a journey record for every node in the game, listing the following information:
• For any node $i$, record the number of walks performed from node $i$.
• For any node $i$ and any visited motel node $j$, record the number of times that walks from node $i$ visit node $j$.
• For any node $i$ and any reached home node $j$, which can be either an initial home in the original game or a new home node created by the technique from section 2.2.1, record the number of walks that start from $i$ and end at $j$.

Then, if the right-hand-side vector $b$ changes while the matrix $A$ remains the same, we do not need to perform random walks again. Instead, we simply use the journey record repeatedly and assume that the walker takes the same routes, gets awards at the same locations, and pays for the same motels, with only the award amounts and motel prices modified. Thus, after a journey record is created, new solutions can be computed efficiently by some multiplications and additions. Practically, this bookkeeping is feasible only after the technique from section 2.2.1 is in use, for otherwise the space complexity can be prohibitive for a large matrix. This bookkeeping technique will serve as an important basis of the proposed preconditioner. There the bookkeeping scheme itself gets modified, and a rigorous proof is presented in section 3.2 that the space complexity of the modified bookkeeping is upper-bounded by the space complexity of an exact matrix factorization.

3. Proof of incomplete LDL^T factorization for R-matrices. In this section, we build an incomplete LDL^T factorization of an R-matrix $A$ by extracting information from the journey record of random walks. The proof is described in two stages: section 3.1 proves that the journey record contains an approximate $L$ factor, and then section 3.2 proves that its nonzero pattern is a subset of that of the exact $L$ factor. The formula for the diagonal $D$ factor is derived in section 3.3. The factorization procedure is independent of the right-hand-side vector $b$. Any appearance of $b$ is symbolic, and the involved equations are true for any possible $b$.
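Before deriving the factorization, it may help to see the walk-and-record procedure of sections 2.1-2.2 in code. The sketch below is hypothetical (the helper name and data layout are ours, not the paper's); it assumes an R-matrix $A$, the natural node ordering, and $m_0 = 0$, and it records exactly the quantities $M_k$, $H_{k,i}$, and $J_{k,i}$ that section 3.1 uses.

```python
import numpy as np

rng = np.random.default_rng(1)
EXIT = -1   # pseudo-node: the walk exits to an initial home (award m0 = 0)

def walk_and_record(A, num_walks=100):
    """Run walks from nodes 0..N-1 in order, with the creating-homes rule of
    section 2.2.1, and record M_k, H_{k,i}, J_{k,i} of section 3.1."""
    N = A.shape[0]
    M = np.full(N, num_walks)
    H = [{} for _ in range(N)]   # H[k][i]: walks from k ending at created home i < k
    J = [{} for _ in range(N)]   # J[k][i]: motel visits at node i >= k
    for k in range(N):
        for _ in range(num_walks):
            node = k
            while node >= k:     # nodes >= k are motels; i < k and EXIT end the walk
                J[k][node] = J[k].get(node, 0) + 1
                # mapping of (2.7): go to j with probability -A[node,j]/A[node,node],
                # or exit to an initial home with probability rowsum/diagonal
                prob = -A[node] / A[node, node]
                prob[node] = 0.0
                p_exit = A[node].sum() / A[node, node]
                node = rng.choice(np.arange(-1, N), p=np.append(p_exit, prob))
            if node != EXIT:
                H[k][node] = H[k].get(node, 0) + 1
    return M, H, J
```

With these records in hand, (3.4) in the next section turns them directly into the factorization data.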
3.1. The approximate factorization. Suppose the dimension of the matrix $A$ is $N$, and its $k$th row corresponds to node $k$ in Figure 2.1, $k = 1, 2, \ldots, N$. Without loss of generality, assume that, in the stochastic solution, the nodes are processed in the natural ordering $1, 2, \ldots, N$. According to the speedup technique in section 2.2.1, for random walks that start from node $k$, the nodes in the set $\{1, 2, \ldots, k-1\}$ are already solved, and they now serve as home nodes where a random walk ends. The awards for reaching nodes $\{1, 2, \ldots, k-1\}$ are the estimated values of $\{x_1, x_2, \ldots, x_{k-1}\}$, respectively. Suppose that, in (2.7), we choose $m_0 = 0$, and hence the motel prices are given by $m_i = -\frac{b_i}{A_{i,i}}$ for $i = k, k+1, \ldots, N$. Further:
• Let $M_k$ be the number of walks carried out from node $k$.
• Let $H_{k,i}$ be the number of walks that start from node $k$ and end at node $i \in \{1, 2, \ldots, k-1\}$.
• Let $J_{k,i}$ be the number of times that walks from node $k$ pass the motel at node $i \in \{k, k+1, \ldots, N\}$.
Taking the average of the results of the $M_k$ walks from node $k$, we obtain the following equation for the estimated solution entry:

(3.1) $x_k' = \frac{\sum_{i=1}^{k-1} H_{k,i}\, x_i' + \sum_{i=k}^{N} J_{k,i} \frac{b_i}{A_{i,i}}}{M_k},$

where $x_i'$ is the estimated value of $x_i$ for $i \in \{1, 2, \ldots, k-1\}$. Note that the awards received at the initial home nodes are ignored in the above equation since $m_0 = 0$. Moving the $H_{k,i}$ terms to the left side, we obtain

(3.2) $-\sum_{i=1}^{k-1} \frac{H_{k,i}}{M_k}\, x_i' + x_k' = \sum_{i=k}^{N} \frac{J_{k,i}}{M_k A_{i,i}}\, b_i.$

By writing the above equation for $k = 1, 2, \ldots, N$ and assembling the $N$ equations together into a matrix form, we obtain

(3.3) $Y x' = Z b,$

where $x'$ is the approximate solution produced by the stochastic solver; $Y$ and $Z$ are two square matrices of dimension $N$ such that

(3.4) $Y_{k,k} = 1 \ \forall k, \quad Y_{k,i} = -\frac{H_{k,i}}{M_k} \ \forall k > i, \quad Y_{k,i} = 0 \ \forall k < i, \quad Z_{k,i} = \frac{J_{k,i}}{M_k A_{i,i}} \ \forall k \le i, \quad Z_{k,i} = 0 \ \forall k > i.$

These two matrices $Y$ and $Z$ are the journey record built by the bookkeeping technique in section 2.2.2. Obviously $Y$ is a lower triangular matrix with unit diagonal entries, $Z$ is an upper triangular matrix, and their entries are independent of the right-hand-side vector $b$. Once $Y$ and $Z$ are built from random walks, given any $b$, one can apply (3.3) and find $x'$ efficiently by a forward substitution. It is worth pointing out the physical meaning of the matrix $Y$: the negative of an entry $(-Y_{k,i})$ is asymptotically equal to the probability that a walk from node $k$ ends at node $i$, as $M_k$ goes to infinity. Another property of the matrix $Y$ is that, if a walk from node $k$ can never reach an original home node generated from a strictly diagonally dominant row of $A$, the row sum $\sum_i Y_{k,i} = \frac{M_k - \sum_i H_{k,i}}{M_k}$ is zero. From (3.3), we have

(3.5) $Z^{-1} Y x' = b.$

Since the vector $x'$ in the above equation is an approximate solution to the original set of equations $Ax = b$, it follows that⁴

(3.6) $Z^{-1} Y \approx A.$
⁴ For any vector $b$, we have $(Z^{-1}Y)^{-1} b = x' \approx x = A^{-1} b$. Therefore $A(Z^{-1}Y)^{-1} b \approx b$, and then $(I - A(Z^{-1}Y)^{-1}) b \approx 0$. Since this is true for any vector $b$, it must be true for eigenvectors of the matrix $(I - A(Z^{-1}Y)^{-1})$, and it follows that the eigenvalues of the matrix $(I - A(Z^{-1}Y)^{-1})$ are all close to zero. Thus we claim that $Z^{-1} Y \approx A$.
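As a concrete illustration (again hypothetical code, building on the walk_and_record sketch above), $Y$ and $Z$ can be assembled per (3.4) and then used through (3.3): one forward substitution per right-hand side.

```python
import numpy as np
from scipy.linalg import solve_triangular

def assemble_Y_Z(A, M, H, J):
    """Journey-record matrices Y and Z of (3.4)."""
    N = A.shape[0]
    Y, Z = np.eye(N), np.zeros((N, N))
    for k in range(N):
        for i, h in H[k].items():
            Y[k, i] = -h / M[k]
        for i, j in J[k].items():
            Z[k, i] = j / (M[k] * A[i, i])
    return Y, Z

# A small R-matrix (symmetric M-matrix satisfying Definition 1.2):
A = np.array([[3.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 3.0]])
M, H, J = walk_and_record(A, num_walks=2000)
Y, Z = assemble_Y_Z(A, M, H, J)
b = np.array([1.0, 0.0, 2.0])
x_approx = solve_triangular(Y, Z @ b, lower=True, unit_diagonal=True)  # (3.3)
print(x_approx, np.linalg.solve(A, b))   # Z^{-1} Y ~ A, so the two agree
```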
Because the inverse of an upper triangular matrix, $Z^{-1}$, is also upper triangular, (3.6) is in the form of an approximate "UL factorization" of $A$. The following definition and lemma present a simple relation between UL factorization and the more commonly encountered LU factorization.

Definition 3.1. The operator $rev(\cdot)$ is defined on square matrices: given a matrix $A$ of dimension $N$, $rev(A)$ is also a square matrix of dimension $N$ such that $rev(A)_{i,j} = A_{N+1-i,\,N+1-j}$ $\forall i, j \in \{1, 2, \ldots, N\}$.

Lemma 1. Let $A = LU$ be the LU factorization of a square matrix $A$; then $rev(A) = rev(L)\,rev(U)$ is true and is the UL factorization of $rev(A)$.

The proof of Lemma 1 is omitted. In simple terms, it states that the reverse-ordered LU factors of $A$ are the UL factors of the reverse-ordered $A$. By applying Lemma 1 to (3.6), we obtain

(3.7) $rev(Z^{-1})\, rev(Y) \approx rev(A).$

Since $A$ is an R-matrix and is symmetric, $rev(A)$ must also be symmetric, and we can take the transpose of both sides to get

(3.8) $(rev(Y))^T \left( rev(Z^{-1}) \right)^T \approx rev(A),$

which is in the form of a Doolittle LU factorization [10]: the matrix $(rev(Y))^T$ is lower triangular with unit diagonal entries; the matrix $(rev(Z^{-1}))^T$ is upper triangular.

Lemma 2. The Doolittle LU factorization of a square matrix is unique.

The proof of Lemma 2 is omitted. Let the exact Doolittle LU factorization of $rev(A)$ be $rev(A) = L_{rev(A)} U_{rev(A)}$ and its exact LDL^T factorization be $rev(A) = L_{rev(A)} D_{rev(A)} (L_{rev(A)})^T$. Since (3.8) is an approximate Doolittle LU factorization of $rev(A)$, while the exact Doolittle LU factorization is unique, it must be true that

(3.9) $(rev(Y))^T \approx L_{rev(A)},$

(3.10) $\left( rev(Z^{-1}) \right)^T \approx U_{rev(A)} = D_{rev(A)} \left( L_{rev(A)} \right)^T.$
The above two equations indicate that, from the matrix $Y$ built by random walks, we can obtain an approximation to the factor $L_{rev(A)}$ and that the matrix $Z$ contains redundant information. Section 3.3 shows how to estimate the matrix $D_{rev(A)}$ by utilizing only the diagonal entries of the matrix $Z$, and hence the rest of $Z$ is not needed at all. According to (3.4), the matrix $Y$ is the award register in the journey record and keeps track of the end nodes of random walks, while the matrix $Z$ is the motel-expense register and keeps track of all intermediate nodes of walks. Therefore the matrix $Z$ is the dominant portion of the journey record, and, by removing all of its off-diagonal entries, the modified journey record is significantly smaller than that in the original bookkeeping technique from section 2.2.2. In fact, an upper bound on the number of nonzero entries in the matrix $Y$ is provided in the next section.

3.2. The incomplete nonzero pattern. The previous section proves that an approximate factorization of an R-matrix $A$ can be obtained by random walks. It does not constitute a proof of incomplete factorization, because an incomplete factorization implies that its nonzero pattern must be a subset of that of the exact factor. Such a proof is the task of this section: to prove that an entry of $(rev(Y))^T$ can possibly be nonzero only if the corresponding entry of $L_{rev(A)}$ is nonzero.
Fig. 3.1. One step in symmetric Gaussian elimination.
For $i \ne j$, by Definition 3.1 and (3.4), the $(i,j)$ entry of $(rev(Y))^T$ is

(3.11) $\left( (rev(Y))^T \right)_{i,j} = Y_{N+1-j,\,N+1-i} = -\frac{H_{N+1-j,\,N+1-i}}{M_{N+1-j}}.$

This value is nonzero if and only if $j < i$ and $H_{N+1-j,N+1-i} > 0$, in other words, if and only if at least one random walk starts from node $(N+1-j)$ and ends at node $(N+1-i)$.

To analyze the nonzero pattern of $L_{rev(A)}$, certain concepts from the literature of LU factorization are used here, and certain conclusions are cited without proof. More details can be found in [1], [10], [12], [13], [16]. Figure 3.1 illustrates one step in the exact Gaussian elimination of a matrix: removing one node from the matrix graph and creating a clique among its neighbors. For example, when node $v_1$ is removed, a clique is formed for $\{v_2, v_3, v_4, v_5, v_6\}$, where the new edges correspond to fills added to the remaining matrix. At the same time, five nonzero values are written into the $L$ matrix, at the five entries that are the intersections⁵ of node $v_1$'s corresponding column and the five rows that correspond to nodes $\{v_2, v_3, v_4, v_5, v_6\}$.

Definition 3.2. Given a graph $G = (V, E)$, a node set $S \subset V$, and nodes $v_1, v_2 \in V$ such that $v_1, v_2 \notin S$, node $v_2$ is said to be reachable from node $v_1$ through $S$ if there exists a path between $v_1$ and $v_2$ such that all intermediate nodes, if any, belong to $S$.

Definition 3.3. Given a graph $G = (V, E)$, a node set $S \subset V$, and a node $v_1 \in V$ such that $v_1 \notin S$, the reachable set of $v_1$ through $S$, denoted by $R(v_1, S)$, is defined as $R(v_1, S) = \{v_2 \notin S \mid v_2 \text{ is reachable from } v_1 \text{ through } S\}$.

Note that, if $v_1$ and $v_2$ are adjacent, there is no intermediate node on the path between them, and then Definition 3.2 is satisfied for any node set $S$. Therefore, $R(v_1, S)$ always includes the direct neighbors of $v_1$ that do not belong to $S$.

Given an R-matrix $A$, let $G$ be its matrix graph, let $L$ be the complete $L$ factor in its exact LDL^T factorization, and let $v_1$ and $v_2$ be two nodes in $G$. Note that every node in $G$ has a corresponding row and a corresponding column in $A$ and in $L$. The following lemma can be derived from [13, p. 98], [16].

Lemma 3. The entry in $L$ at the intersection of column $v_1$ and row $v_2$ is nonzero if and only if:
1. $v_1$ is eliminated prior to $v_2$ during Gaussian elimination;
2. $v_2 \in R(v_1, \{\text{nodes eliminated prior to } v_1\})$.

⁵ In this section, rows and columns of a matrix are often identified by their corresponding nodes in the matrix graph, and matrix entries are often identified as intersections of rows and columns. The reason is that such references are independent of the matrix ordering and thereby avoid confusion due to the two orderings involved in the discussion.
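Definitions 3.2 and 3.3 amount to an ordinary restricted graph search. The following minimal sketch (a hypothetical helper with an invented four-node graph, not from the paper) computes $R(v_1, S)$ by a breadth-first traversal whose intermediate nodes are confined to $S$.

```python
from collections import deque

def reachable_set(adj, v1, S):
    """R(v1, S) per Definition 3.3: nodes outside S reachable from v1 along
    paths whose intermediate nodes, if any, all belong to S."""
    seen, out, queue = {v1}, set(), deque([v1])
    while queue:
        for w in adj[queue.popleft()]:
            if w in seen:
                continue
            seen.add(w)
            if w in S:
                queue.append(w)   # may serve as an intermediate node
            else:
                out.add(w)        # a reachable endpoint outside S
    return out

adj = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}
print(reachable_set(adj, 1, S={3}))   # {2, 4}: node 4 is reached through 3
```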
We now apply this lemma to $L_{rev(A)}$. Because the factorization of $rev(A)$ is performed in the reverse ordering, i.e., $N, N-1, \ldots, 1$, the $(i,j)$ entry of $L_{rev(A)}$ is the entry at the intersection of the column that corresponds to node $(N+1-j)$ and the row that corresponds to node $(N+1-i)$. This entry is nonzero if and only if both of the following conditions are met:
1. Node $(N+1-j)$ is eliminated prior to node $(N+1-i)$.
2. $(N+1-i) \in R(N+1-j, S_j)$, where $S_j = \{\text{nodes eliminated prior to } N+1-j\}$.
Again, because the Gaussian elimination is carried out in the reverse ordering $N, N-1, \ldots, 1$, the first condition implies that $N+1-j > N+1-i$ and hence $j < i$. The node set $S_j$ in the second condition is simply $\{N+2-j, N+3-j, \ldots, N\}$.

Recall that (3.11) is nonzero if there is at least one random walk that starts from node $(N+1-j)$ and ends at node $(N+1-i)$. Also recall that, according to section 2.2.1, when random walks are performed from node $(N+1-j)$, nodes $\{1, 2, \ldots, N-j\}$ are home nodes where walks terminate, while nodes $S_j = \{N+2-j, N+3-j, \ldots, N\}$ are the motel nodes that a walk can pass through. Therefore, a walk from node $(N+1-j)$ can possibly end at node $(N+1-i)$ only if $(N+1-i)$ is reachable from $(N+1-j)$ through the motel node set, i.e., node set $S_j$.

By now it is proven that both conditions for $(L_{rev(A)})_{i,j}$ to be nonzero are necessary conditions for (3.11) to be nonzero. Therefore, the nonzero pattern of $(rev(Y))^T$ is a subset of the nonzero pattern of $L_{rev(A)}$. Together, this conclusion and (3.9) give rise to the following lemma.

Lemma 4. $(rev(Y))^T$ is the $L$ factor of an incomplete LDL^T factorization of the matrix $rev(A)$.

This lemma indicates that, from random walks, we can obtain an incomplete LDL^T factorization of the matrix $A$ in its reversed index ordering. The remaining approximate diagonal matrix $D$ is derived in the next section.

3.3. The diagonal component. To evaluate the approximate $D$ matrix, we take the transpose of both sides of (3.10) and obtain

(3.12) $rev(Z^{-1}) \approx L_{rev(A)} D_{rev(A)}.$

Lemma 5. For a nonsingular square matrix $A$, $rev(A^{-1}) = (rev(A))^{-1}$.

The proof of Lemma 5 is omitted. By applying it to (3.12), we have

(3.13) $(rev(Z))^{-1} \approx L_{rev(A)} D_{rev(A)}, \qquad I \approx rev(Z)\, L_{rev(A)} D_{rev(A)}.$

Recall that $rev(Z)$ and $L_{rev(A)}$ are both lower triangular, that $L_{rev(A)}$ has unit diagonal entries, and that $D_{rev(A)}$ is a diagonal matrix. Therefore, the $(i,i)$ diagonal entry in the above equation is simply

$(rev(Z))_{i,i} \left( L_{rev(A)} \right)_{i,i} \left( D_{rev(A)} \right)_{i,i} \approx 1,$

(3.14) $\left( D_{rev(A)} \right)_{i,i} \approx \frac{1}{(rev(Z))_{i,i}}.$

By applying Definition 3.1 and (3.4), we finally have the equation for computing the approximate $D$ factor, given as follows:

(3.15) $\left( D_{rev(A)} \right)_{i,i} \approx \frac{1}{Z_{N+1-i,\,N+1-i}} = \frac{M_{N+1-i}\, A_{N+1-i,\,N+1-i}}{J_{N+1-i,\,N+1-i}}.$
It is worth pointing out the physical meaning of the quantity $\frac{J_{N+1-i,\,N+1-i}}{M_{N+1-i}}$: it is the average number of times that a walk from node $N+1-i$ passes node $N+1-i$ itself; in other words, it is the average number of times that the walker returns to his/her starting point before the game is over. Equation (3.15) indicates that an entry in the $D$ factor is equal to the corresponding diagonal entry of the original matrix $A$ divided by the expected number of returns.
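Putting (3.9) and (3.15) together, the symmetric preconditioner is read off the journey record alone. A sketch follows (hypothetical code continuing the earlier ones; rev implements Definition 3.1):

```python
import numpy as np

def rev(A):
    """rev(A)_{i,j} = A_{N+1-i, N+1-j} (Definition 3.1): reverse both orderings."""
    return A[::-1, ::-1]

def build_ldlt(Y, Z):
    """Incomplete LDL^T factors of rev(A): L per (3.9), D per (3.14)-(3.15)."""
    L = rev(Y).T                          # unit lower triangular
    D = np.diag(1.0 / np.diag(rev(Z)))    # 1 / Z_{N+1-i, N+1-i}
    return L, D

L, D = build_ldlt(Y, Z)                   # Y, Z from the earlier sketch
print(np.round(L @ D @ L.T - rev(A), 2))  # small residual given enough walks
```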
4. Proof of incomplete LDU/LDL^T factorization for diagonally dominant matrices. The previous two sections have presented the theory of stochastic preconditioning in the context of R-matrices. In this section, we show that the proposed preconditioner applies to any matrix $A$ that satisfies Definition 1.2.

4.1. Asymmetric matrices. Let us first remove the symmetry requirement on the matrix $A$. Recall that the construction of the random walk game and the derivation of (3.7) do not require $A$ to be symmetric. Therefore, matrices $Y$ and $Z$ can still be obtained by (3.4) from random walks, which we will refer to as $Y_A$ and $Z_A$ in this section, and (3.7) remains true for an asymmetric matrix $A$. Suppose that the exact LDU factorization of the matrix $rev(A)$ is $rev(A) = L_{rev(A)} D_{rev(A)} U_{rev(A)}$, where $L_{rev(A)}$ is a lower triangular matrix with unit diagonal entries, $U_{rev(A)}$ is an upper triangular matrix with unit diagonal entries, and $D_{rev(A)}$ is a diagonal matrix. It is easy to show, based on Lemma 2, that the LDU factorization is also unique. By substituting the factorization into (3.7), we have

(4.1) $rev(Z_A^{-1})\, rev(Y_A) \approx L_{rev(A)} D_{rev(A)} U_{rev(A)}.$

Based on the uniqueness of the LDU factorization, it must be true that

(4.2) $rev(Y_A) \approx U_{rev(A)},$

(4.3) $rev(Z_A^{-1}) \approx L_{rev(A)} D_{rev(A)}.$

By (4.2), we can approximate $U_{rev(A)}$ based on $Y_A$; by (4.3), and through the same derivation as in section 3.3, we can approximate $D_{rev(A)}$ based on the diagonal entries of $Z_A$. The remaining question is how to obtain $L_{rev(A)}$. Suppose that we construct a random walk game based on $A^T$ instead of $A$ and obtain matrices $Y_{A^T}$ and $Z_{A^T}$ based on (3.4). Then by (4.2), we have

(4.4) $rev(Y_{A^T}) \approx U_{rev(A^T)},$

where $U_{rev(A^T)}$ is the exact $U$ factor in the LDU factorization of $rev(A^T)$. It is easy to derive the following:

(4.5) $rev(A^T) = (rev(A))^T = \left( U_{rev(A)} \right)^T D_{rev(A)} \left( L_{rev(A)} \right)^T,$

(4.6) $L_{rev(A^T)} D_{rev(A^T)} U_{rev(A^T)} = \left( U_{rev(A)} \right)^T D_{rev(A)} \left( L_{rev(A)} \right)^T.$

Based on the uniqueness of the LDU factorization, it must be true that

(4.7) $U_{rev(A^T)} = \left( L_{rev(A)} \right)^T.$

By (4.4) and (4.7), we finally have $rev(Y_{A^T}) \approx (L_{rev(A)})^T$, i.e.,

(4.8) $\left( rev(Y_{A^T}) \right)^T \approx L_{rev(A)}.$

In other words, we can approximate $L_{rev(A)}$ based on $Y_{A^T}$.
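In the same hypothetical code form, and reusing rev and the record-assembly helpers from the symmetric sketches, the asymmetric factors of (4.2), (4.8), and (3.15) are assembled from the two games' records:

```python
import numpy as np

def build_ldu(Y_A, Z_A, Y_AT):
    """Incomplete LDU factors of rev(A): U from the game for A, L from the
    game for A^T, D from the diagonal of either Z (here Z_A)."""
    U = rev(Y_A)                           # (4.2): unit upper triangular
    L = rev(Y_AT).T                        # (4.8): unit lower triangular
    D = np.diag(1.0 / np.diag(rev(Z_A)))   # (3.15)
    return L, D, U                         # L @ D @ U ~ rev(A)
```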
Fig. 4.1. One step in asymmetric Gaussian elimination.
As a summary, when the matrix $A$ is asymmetric, we need to construct two random walk games, for $A$ and for $A^T$, and then, based on the two $Y$ matrices and the diagonal entries of one of the $Z$ matrices,⁶ we can approximate the LDU factorization of $rev(A)$ based on (3.4), (3.15), (4.2), and (4.8). Both the time complexity and the space complexity of building the preconditioner become roughly twice those of the symmetric case: this is the same behavior as a traditional ILU.

The remainder of this section gives an outline of the proof that the nonzero patterns of the above approximate $L$ and $U$ factors by (4.2) and (4.8) are subsets of those of the exact factors. The proof is essentially the same as in section 3.2, and we will point out only the differences due to asymmetry. Figure 4.1 illustrates one step in the exact Gaussian elimination of an asymmetric matrix: removing one node from the matrix graph and adding an edge from each of its fan-in nodes to each of its fan-out nodes; the new edges correspond to fills added to the remaining matrix. In the example of Figure 4.1, when node $v_1$ is removed, edges are added from each of its fan-in nodes $\{v_2, v_3, v_4\}$ to each of its fan-out nodes $\{v_4, v_5, v_6\}$, with the exception that no edge is added from $v_4$ to itself. At the same time, three nonzero values are written into the $L$ matrix, at the three entries that are the intersections of node $v_1$'s corresponding column and the three rows that correspond to nodes $\{v_2, v_3, v_4\}$; three nonzero values are written into the $U$ matrix, at the three entries that are the intersections of node $v_1$'s corresponding row and the three columns that correspond to nodes $\{v_4, v_5, v_6\}$. By utilizing Figure 4.1, we can prove Lemma 6, which is the asymmetric version of Lemma 3; the proof is omitted.

Definition 4.1. Given a directed graph $G = (V, E)$, a node set $S \subset V$, and nodes $v_1, v_2 \in V$ such that $v_1, v_2 \notin S$, node $v_2$ is said to be reachable from node $v_1$ through $S$ if there exists a directed path from $v_1$ to $v_2$ such that all intermediate nodes, if any, belong to $S$.

Definition 4.2. Given a directed graph $G = (V, E)$, a node set $S \subset V$, and a node $v_1 \in V$ such that $v_1 \notin S$, the reachable set of $v_1$ through $S$, denoted by $R(v_1, S)$, is defined as $R(v_1, S) = \{v_2 \notin S \mid v_2 \text{ is reachable from } v_1 \text{ through } S\}$.

Lemma 6. Suppose that the exact LDU factorization of a square matrix $A$ exists and is $A = LDU$. Let $v_1$ and $v_2$ be two nodes in the matrix graph of $A$. The entry in $L$ at the intersection of column $v_1$ and row $v_2$ is nonzero if and only if:
1. $v_1$ is eliminated prior to $v_2$ during Gaussian elimination;
2. $v_1 \in R(v_2, \{\text{nodes eliminated prior to } v_1\})$.
The entry in $U$ at the intersection of column $v_1$ and row $v_2$ is nonzero if and only if:
1. $v_2$ is eliminated prior to $v_1$ during Gaussian elimination;
2. $v_1 \in R(v_2, \{\text{nodes eliminated prior to } v_2\})$.

⁶ Due to the uniqueness of the LDU factorization, it does not matter which $Z$'s diagonals are used.
Fig. 4.2. A random walk in the modified game with scaling.
By applying Lemma 6 to the matrix $rev(A)$, and through the same procedure as in section 3.2, it can be proven⁷ that the conditions in Lemma 6 are necessary conditions for the nonzero entries in (4.2) and (4.8). Therefore, the nonzero patterns of the approximate $L$ and $U$ factors by (4.2) and (4.8) are subsets of those of the exact factors, and hence we have the following lemma, the asymmetric version of Lemma 4.

Lemma 7. $(rev(Y_{A^T}))^T$ is the $L$ factor, and $rev(Y_A)$ is the $U$ factor, of an incomplete LDU factorization of the matrix $rev(A)$.

4.2. Random walk game with scaling. This section shows that our method does not require $A$ to be an M-matrix: as long as the matrix $A$ satisfies Definition 1.2, its off-diagonal entries can be positive or complex-valued. To remove the M-matrix constraint, a new game is designed by defining a scaling factor⁸ $s$ on each direction of every edge in the original game from section 2.1. Such a scaling factor becomes effective when a random walk passes that particular edge in that particular direction and remains effective until this random walk ends.

Let us look at the stochastic solver first. A walk is shown in Figure 4.2: it passes a number of motels, each of which has its price $m_l$, $l \in \{1, 2, \ldots, \Gamma\}$, and ends at a home with a certain award value $m_{\mathrm{award}}$. The monetary gain of this walk is defined as

(4.9) $\mathrm{gain} = -m_1 - s_1 m_2 - s_1 s_2 m_3 - \cdots - \prod_{l=1}^{\Gamma-1} s_l \cdot m_\Gamma + \prod_{l=1}^{\Gamma} s_l \cdot m_{\mathrm{award}}.$

In simple terms, this new game is different from the original game in that each transaction amount during the walk is scaled by the product of the currently active scaling factors. Define the expected gain function $f$ to be the same as in (2.2), and it is easy to derive the replacement of (2.4):

(4.10) $f(i) = \sum_{l=1}^{\mathrm{degree}(i)} p_{i,l}\, s_{i,l}\, f(l) - m_i,$

where $s_{i,l}$ denotes the scaling factor associated with the direction $i \to l$ of the edge between $i$ and $l$, and the rest of the symbols are the same as defined in (2.4). In this section, we require that each $s$ factor has an absolute value of one. Therefore, the scaling in the new game never changes the magnitude of a monetary transaction and changes only its sign or phase. Section 8 will discuss other scaling schemes.
⁷ It is important to note that, in the random walk game constructed for $A^T$, the road directions are all reversed, compared to the game for $A$.
⁸ A similar concept of scaling factors can be found in [11], though tailored to its game design.
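To illustrate (4.9), here is a small sketch (hypothetical code with invented prices and unit-magnitude scaling factors; for an R-matrix all $s_l = 1$ and the original gain is recovered):

```python
def scaled_gain(motel_prices, scales, award):
    """Gain of one walk per (4.9): every transaction is multiplied by the
    product of the scaling factors activated so far (|s_l| = 1)."""
    gain, active = 0.0, 1.0
    for m_l, s_l in zip(motel_prices, scales):
        gain -= active * m_l    # pay the motel, scaled by the active product
        active *= s_l           # the factor of step l becomes effective
    return gain + active * award

# A 3-step walk in which one edge carries s = -1, as a positive
# off-diagonal entry would induce:
print(scaled_gain([1.0, 2.0, 1.0], [1.0, -1.0, 1.0], award=5.0))   # -7.0
```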
Due to the degrees of freedom introduced by the scaling factors, the allowable matrix $A$ in (4.10) is now any matrix that is diagonally dominant. Hence, given any matrix $A$ that satisfies Definition 1.2, a corresponding game can be constructed. For stochastic preconditioning, this new game design with scaling leads to the redefinition of the $H$ and $J$ quantities in section 3.1 and in all subsequent formulations:

(4.11) $H_{k,i} = \sum_{\text{walk from } k \text{ to } i} \left( \prod_{\text{step } l} s_l \right) \quad \forall k > i,$

(4.12) $J_{k,i} = \sum_{\text{partial walk from } k \text{ to } i} \left( \prod_{\text{step } l} s_l \right) \quad \forall k \le i.$
Note that a single walk from $k$ may contribute multiple terms to the summation in (4.12) if it passes node $i$ multiple times, and each such time the partial walk is defined as the segment from the start $k$ to the current passing of $i$; a partial walk in (4.12) always starts from the initial start at $k$ and not from any intermediate passing of $k$; in the case of $i = k$, the summation in (4.12) includes a zero-length partial walk for each walk performed from $k$, and the zero-length product of scaling factors is defined to be one. When all $s$ factors are set to one, the new game design is identical to the original one, and (4.11) and (4.12) simply degenerate to the original definitions of $H$ and $J$ in section 3.1. After applying the new formulas (4.11) and (4.12), all of the equations and derivations in sections 3 and 4 remain valid, and we have now proven the proposed preconditioning method for any matrix $A$ that satisfies Definition 1.2.

5. Relation to and comparison with ILU/IC/approximate-inverse methods. In this section, the proposed hybrid solver for diagonally dominant matrices is presented in its entirety, by summarizing the previous three sections; its relation to and comparison with several existing preconditioning methods are discussed.

5.1. The hybrid solver. We begin by defining the $rev(\cdot)$ operator on vectors.

Definition 5.1. The operator $rev(\cdot)$ is defined on vectors: given a vector $x$ of length $N$, $rev(x)$ is a vector of length $N$ such that $rev(x)_i = x_{N+1-i}$ $\forall i \in \{1, 2, \ldots, N\}$.

It is easy to verify that $Ax = b$ is equivalent to $rev(A)\,rev(x) = rev(b)$. By now, we have collected the necessary pieces, and the hybrid solver is summarized in the pseudocode in Algorithms 1 and 2. Here conjugate gradient (CG) and biconjugate gradient (BCG) are listed as example iterative engines; the proposed preconditioner can work with any iterative solver.

Algorithm 1. The hybrid solver for symmetric matrices.
Precondition {
  Run random walks, build the matrix $Y$, and find the diagonal entries of $Z$ using (3.4), (4.11), (4.12);
  Build $L_{rev(A)}$ using (3.9);
  Build $D_{rev(A)}$ using (3.15);
}
Given $b$, solve {
  Convert $Ax = b$ to $rev(A)\,rev(x) = rev(b)$;
  Apply CG on $rev(A)\,rev(x) = rev(b)$ with the preconditioner $(L_{rev(A)} D_{rev(A)} L_{rev(A)}^T)^{-1}$;
  Convert $rev(x)$ to $x$;
}
Algorithm 2. The hybrid solver for asymmetric matrices.
Precondition {
  Run random walks in the game for $A$, build the matrix $Y_A$, and find the diagonal entries of $Z_A$ using (3.4), (4.11), (4.12);
  Build $U_{rev(A)}$ using (4.2) based on $Y_A$;
  Build $D_{rev(A)}$ using (3.15) based on the diagonals of $Z_A$;
  Run random walks in the game for $A^T$, build the matrix $Y_{A^T}$ using (3.4), (4.11);
  Build $L_{rev(A)}$ using (4.8) based on $Y_{A^T}$;
}
Given $b$, solve {
  Convert $Ax = b$ to $rev(A)\,rev(x) = rev(b)$;
  Apply BCG on $rev(A)\,rev(x) = rev(b)$ with the preconditioner $(L_{rev(A)} D_{rev(A)} U_{rev(A)})^{-1}$;
  Convert $rev(x)$ to $x$;
}
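As a usage sketch of Algorithm 1 (hypothetical code: SciPy's cg stands in for the CG engine, and A, b, L, D, and rev come from the earlier sketches), the preconditioner is applied through a forward and a backward triangular solve:

```python
import numpy as np
from scipy.linalg import solve_triangular
from scipy.sparse.linalg import LinearOperator, cg

def hybrid_solve(A, b, L, D):
    """Algorithm 1: CG on rev(A) rev(x) = rev(b) with preconditioner
    (L D L^T)^{-1}, applied via two triangular solves and a diagonal scale."""
    d = np.diag(D)
    def apply_prec(r):
        y = solve_triangular(L, r, lower=True, unit_diagonal=True)
        return solve_triangular(L.T, y / d, lower=False, unit_diagonal=True)
    N = A.shape[0]
    prec = LinearOperator((N, N), matvec=apply_prec)
    x_rev, _ = cg(rev(A), b[::-1], M=prec)   # b[::-1] is rev(b), Definition 5.1
    return x_rev[::-1]                       # convert rev(x) back to x

print(hybrid_solve(A, b, L, D), np.linalg.solve(A, b))
```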
5.2. Comparison with ILU/IC. This section presents an algorithmic argument that the proposed preconditioner has better quality than traditional ILU or IC preconditioners. In other words, if the incomplete LDU/LDL^T factorization built by our method has the same number of nonzero entries as one built by a traditional approach, we expect a better approximation to the exact LDU/LDL^T factorization and a preconditioned Krylov-subspace solver that converges in fewer iterations. Experimental condition number comparisons are provided in section 7.

Let us use the symmetric ICT approach as an example of traditional preconditioning methods; a similar argument can be made for other existing techniques for either symmetric or asymmetric matrices, as long as they are sequential procedures based on Gaussian elimination. Suppose that in Figure 3.1, when eliminating node $v_1$, the new edge between nodes $v_2$ and $v_3$ corresponds to an entry whose value falls below a specified threshold; then ICT drops that entry from the remaining matrix, and that edge is removed from the remaining matrix graph. Later, when the algorithm reaches the stage of eliminating node $v_2$, because of that missing edge, no edge is created from $v_3$ to the neighbors of $v_2$, and thus more edges are missing; this new set of missing edges then affects later computations accordingly. Therefore, an early decision to drop an entry is propagated throughout the ICT process. On the one hand, this leads to the sparsity of the preconditioner, which is desirable; on the other hand, error accumulation occurs, and later columns of the resulting incomplete Cholesky factor may deviate from the exact Cholesky factor by a larger amount than the planned threshold. Such error accumulation is exacerbated for larger and denser matrices.

Let us generalize the above argument to all traditional ILU/IC methods and state it from a different perspective: traditional ILU/IC methods are all sequential procedures where later rows/columns are calculated based on previously computed rows/columns, which may contain errors due to dropped entries; such data dependency can be represented by a directed acyclic graph; the depth of this dependency graph increases for larger and denser matrices, and we argue that a higher data-dependency depth implies stronger error accumulation.

The hybrid solver does not suffer from such error accumulation. According to (3.4), the calculation of $Y_{k,i}$, which will become a single entry in the resulting incomplete LDU/LDL^T factorization, relies on no other $Y$ or $Z$ entries. In other words, (3.4) provides a direct unbiased Monte Carlo estimator for each single entry in an LDU/LDL^T factorization and allows obtaining any single entry without computing any other entry. Practically, a row of the matrix $Y$ can be computed together because the needed random walks are shared by its entries. When we run random walks from node $k$ and collect the $H_{k,i}$ values to build the $k$th row of the matrix $Y$ according to (3.4), we know only that the nodes $\{1, 2, \ldots, k-1\}$ are homes, and this is the only information needed.⁹ Therefore, the quality of the computed $k$th row of the matrix $Y$ does not affect other rows in any way; each row is responsible for its own accuracy, according to a criterion to be discussed in section 6.1. In fact, in a parallel computing environment, the computation of each row of $Y$ can be assigned to a different processor.

It is important to distinguish the error accumulation discussed in this section from the cost of bias discussed at the end of section 2.2.1. For an iterative solver with implicit factorization-based preconditioning, there can be two different types of error accumulation: first, the process of building the preconditioner can accumulate error, as described earlier with the ICT example; second, the process of applying the preconditioner inside an iterative solver can accumulate error; i.e., the forward/backward substitution is a sequential solving process, and later entries in the resulting vector are calculated based on earlier entries, which contain errors. The bias discussed at the end of section 2.2.1 maps, in the context of the hybrid solver, to the second type, the error accumulation in applying the preconditioner; such bias or error propagation is inevitable in all iterative solvers as long as an implicit factorization-based preconditioner is in use. Our claim here is that the hybrid solver is free of error accumulation in building the preconditioner, not in applying the preconditioner.¹⁰

In summary, due to the absence of error accumulation in building the preconditioner, we expect the proposed stochastic preconditioning to outperform traditional ILU/IC methods, and we expect a larger advantage on larger and denser matrices.

5.3. Relation to factored approximate inverses. A factored-approximate-inverse preconditioner approximates $A^{-1}$ in the form of a product of, typically, three matrices and is obtained by either a norm-minimization approach or an inverse triangular factorization [3], [4]. As an explicit preconditioning method, it has the advantages of parallelism as well as better stability, compared to ILU/IC, although its accuracy-size trade-off is often inferior. It was shown in [4] that triangular factored approximate inverses can be viewed as approximations of the inverses of LDU/LDL^T factors. This section shows that such a factored-approximate-inverse preconditioner can be easily produced by our stochastic procedure and that we again have the advantage of being free of error accumulation over existing methods. By applying Lemma 5 to (4.3), we have

$rev(Z_A)^{-1} \approx L_{rev(A)} D_{rev(A)},$
$rev(Z_A) \approx D_{rev(A)}^{-1} L_{rev(A)}^{-1},$

(5.1) $D_{rev(A)}\, rev(Z_A) \approx L_{rev(A)}^{-1}.$
⁹ As a side note, recall the property of the matrix $Y$ that, for an R-matrix $A$, if a walk from node $k$ can never reach an original home node generated from a strictly diagonally dominant row of $A$, the row sum $\sum_i Y_{k,i} = \frac{M_k - \sum_i H_{k,i}}{M_k}$ is guaranteed to be zero. This is a property of an exact LDL^T factorization of an R-matrix and is maintained naturally in our preconditioner. This aspect is similar in flavor to MILU/MIC, which achieve this property by compensating their diagonal entries.
¹⁰ After a row of the matrix $Y$ is calculated, it is possible to add a postprocessing step to drop insignificant entries. The criterion can be any of the strategies used in traditional incomplete factorization methods. With such postprocessing, the hybrid solver still maintains the advantage of independence between row calculations. This is not included in our implementation.
Since $Z_A$ can be built by (3.4) and $D_{rev(A)}$ can be built by (3.15), the above equation gives a stochastic approach to approximate the matrix $L_{rev(A)}^{-1}$.

Suppose that we construct a game based on $A^T$ instead of $A$ and obtain matrices $Z_{A^T}$ and $D_{rev(A^T)}$ based on (3.4) and (3.15). Then by (5.1), we have

(5.2) $D_{rev(A^T)}\, rev(Z_{A^T}) \approx L_{rev(A^T)}^{-1}.$

Based on the uniqueness of the LDU factorization, and by (4.6), it must be true that

(5.3) $L_{rev(A^T)} \approx \left( U_{rev(A)} \right)^T.$

By combining the above two equations, we get

$D_{rev(A^T)}\, rev(Z_{A^T}) \approx \left( \left( U_{rev(A)} \right)^T \right)^{-1},$
$\left( D_{rev(A^T)}\, rev(Z_{A^T}) \right)^T \approx U_{rev(A)}^{-1},$

(5.4) $\left( rev(Z_{A^T}) \right)^T D_{rev(A^T)} \approx U_{rev(A)}^{-1}.$

The above equation gives a stochastic approach to approximate the matrix $U_{rev(A)}^{-1}$. With (5.1) and (5.4), we can construct a factored-approximate-inverse preconditioner in the form of $\tilde{U}_{rev(A)}^{-1} \tilde{D}_{rev(A)}^{-1} \tilde{L}_{rev(A)}^{-1}$, where $\tilde{U}_{rev(A)}^{-1}$, $\tilde{D}_{rev(A)}^{-1}$, and $\tilde{L}_{rev(A)}^{-1}$ approximate $U_{rev(A)}^{-1}$, $D_{rev(A)}^{-1}$, and $L_{rev(A)}^{-1}$, respectively. For a symmetric matrix $A$, since $U_{rev(A)}^{-1} = (L_{rev(A)}^{-1})^T$, only (5.1) is needed. It is worth pointing out the physical meaning of this preconditioner: the approximate-inverse factor is contained in the motel register matrix $Z$ rather than the award register matrix $Y$.

Comparing the above stochastic approach against existing methods for building triangular factored approximate inverses, we again note that existing methods are sequential procedures where later rows/columns are calculated based on previously computed rows/columns, which contain errors, and that our approach has the advantage of being free of error accumulation. Therefore, the stochastic approximate-inverse preconditioning in this section may potentially give a better performance. In this paper, we choose to focus on implicit stochastic preconditioning and will not further discuss the stochastic approximate-inverse method beyond this section.
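For the symmetric case, this construction is a few lines on top of the earlier sketches (hypothetical code, reusing rev and the record matrices): by (5.1), the approximate inverse factor lives in the motel register $Z$, not in $Y$.

```python
import numpy as np

def approx_inverse_factors(Z):
    """Factored approximate inverse of rev(A), symmetric case of section 5.3:
    D per (3.15) and L^{-1} ~ D rev(Z) per (5.1)."""
    D = np.diag(1.0 / np.diag(rev(Z)))
    L_inv = D @ rev(Z)     # approximates the inverse of the unit lower factor
    return L_inv, D

L_inv, D = approx_inverse_factors(Z)              # Z from the earlier sketches
inv_approx = L_inv.T @ np.linalg.inv(D) @ L_inv   # ~ rev(A)^{-1}
print(np.round(inv_approx @ rev(A), 2))           # close to the identity
```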
6. Implementation issues. This section describes several implementation aspects of the stochastic preconditioning. The goal is to minimize the cost of building the preconditioner and to achieve a better accuracy-size trade-off. 6.1. Stopping criterion. The topic of this section is the accuracy control of the preconditioner, that is, how one should choose Mk , the number of walks from node k, to achieve a certain accuracy level in estimating its corresponding entries in the LDU/LDLT factorization. In section 2.1, the stopping criterion in the stochastic solver is defined on the result of a walk; it is not applicable to preconditioning because here it is necessary for the criterion to be independent of the right-hand-side vector b. In our implementation, a new stopping criterion is defined on a value that is a function of only the matrix A, as follows. Let Ξk = E[length of a walk from node k], and let Ξk be the average length of the Mk walks. The stopping criterion is Ξk − Ξk (6.1) < Δ > α, P −Δ < Ξk
1196
HAIFENG QIAN AND SACHIN S. SAPATNEKAR
where Δ is a relative error margin and α is a confidence level; for example, α = 99%. Practically, this criterion is checked by the following inequality: √ 1−α ΔΞk Mk −1 >Q (6.2) , σk 2 where σk is the standard deviation of the lengths of the Mk walks and Q is the standard normal complementary cumulative distribution function. Thus, Mk is determined dynamically, and random walks are run from node k until condition (6.1) is satisfied. Different choices of Δ and/or α result in different trade-off points between accuracy and size of the proposed preconditioner. Our experience with three-dimensional (3D) Laplacian matrices and VLSI circuit placement matrices, for which results will be presented in section 7, suggests that a Δ value between 30% and 40% with α = 99% gives a reasonably good trade-off, and the condition (6.1) is typically satisfied within 100 walks. In practice, it is also necessary to impose a lower bound on Mk , e.g., 20 walks. Note that this is not the only way to design the stopping criterion: It can also be defined on quantities other than Ξk (for example, the expected number of returns). 6.2. Exact computations for one-step walks. The technique in this section is a special treatment for the random walks with length 1, which we refer to as onestep walks. Such a walk occurs when an immediate neighbor of the starting node is a home node, and the first step of the walks happens to go there. The idea is to replace stochastic computations of one-step walks with their deterministic limits. Without loss of generality, assume that the node ordering in the hybrid solver is the natural ordering 1, 2, . . . , N . Let us consider the Mk walks from node k, and suppose that at least one of its immediate neighboring nodes is a home, which could be either an initial home, if the kth row of the matrix A is strictly diagonally dominant, or a node j such that j < k. Among the Mk walks, let Mk,1 be the number of one-step walks, and let Hk,i,1 be the portion of (4.11) contributed by one-step walks that go to node i, where node i is an arbitrary node such that i < k. For the case that node i is not adjacent to node k, Hk,i,1 is simply zero. Equation (3.4) can be rewritten as Mk − Mk,1 Hk,i − Hk,i,1 Hk,i Hk,i,1 Yk,i = − =− − (6.3) · . Mk Mk Mk Mk − Mk,1 By applying the H values in (4.11), the mapping between (2.7) and (4.10), and the fact that every scaling factor s has unit magnitude, we can derive the following: (6.4) (6.5)
Hk,i,1 Ak,i = sstep k to i · P [first step goes to node i] = − , Mk →∞ Mk Ak,k Mk − Mk,1 lim = P [first step goes to a nonabsorbing node] Mk →∞ Mk ' j>k |Ak,j | P [first step goes to node j] = . = Ak,k lim
j>k
H
M −M
k,i,1 We modify (6.3) by replacing the term M and the term k Mk k,1 with their k limits given by the above two equations and obtain a new formula for Yk,i : ' Hk,i − Hk,i,1 Ak,i j>k |Ak,j | Yk,i = − (6.6) · . Ak,k Ak,k Mk − Mk,1
1197
STOCHASTIC PRECONDITIONING H
−H
, which The remaining stochastic part of this new equation is the term Mk,ik −Mk,i,1 k,1 can be evaluated by considering only random walks whose length is at least two; in other words, one-step walks are ignored. In implementation, this can be realized by simulating the first step of walks by randomly picking one of the nonabsorbing neighbors of node k; note that then the number of random walks would automatically be (Mk − Mk,1 ), and no adjustment is needed. With a similar derivation, the Zk,k formula11 in (3.4) can be modified to (6.7)
Zk,k
1 = − Ak,k
'
'
j>k |Ak,j | A2k,k
+
j>k |Ak,j | A2k,k
Jk,k − Jk,k,1 · , Mk − Mk,1
where Jk,k,1 is the portion of (4.12) contributed by one-step walks. Obviously Jk,k,1 = Mk,1 , and therefore (6.8)
Zk,k
1 = + Ak,k
'
j>k |Ak,j | A2k,k
Jk,k − Mk,1 · −1 . Mk − Mk,1 J
−M
k,1 , again can be The remaining stochastic part of this new equation, the term Mk,k k −Mk,1 evaluated by considering only random walks with length being at least two. Practically, such a computation is concurrent with evaluating Yk,i ’s based on (6.6). The benefit of replacing (3.4) with (6.6) and (6.8) is twofold: • Part of the evaluation of Yk,i and Zk,k entries is converted from stochastic computation to its deterministic limit, and the accuracy is potentially improved. For a node k where all neighbors have lower indices, i.e., when all neighbors are home nodes, (6.6) and (6.8) become exact: They translate to the exact entry values in the complete LDU/LDLT factorization. • By avoiding simulating one-step walks, the amount of computation in building the preconditioner is reduced. For a node k where all neighbors are homes, the stochastic parts of (6.6) and (6.8) disappear, and no walks are needed.
6.3. Reusing walks. Without loss of generality, assume that the node ordering in the hybrid solver is the natural ordering 1, 2, . . . , N . A sampled random walk is completely specified by the node indices along the way and hence can be viewed as a sequence of integers {k1 , k2 , . . . , kΓ } such that k1 > kΓ , that k1 ≤ kl ∀l ∈ {2, . . . , Γ − 1}, and that an edge exists between node kl and node kl+1 ∀l ∈ {1, . . . , Γ − 1}. If a sequence of integers satisfies the above requirements, it is referred to as a legal sequence and can be mapped to an actual random walk. Due to the fact that a segment of a legal sequence may also be a legal sequence, it is possible to extract multiple legal sequences from a single simulated random walk and use them also as random walks in the evaluation of (3.4) or its placement, (6.6) and (6.8). However, there are rules that one must comply with when extracting these legal sequences. A fundamental premise is that random samples must be independent of each other. If two walks share a segment, they become correlated. Note that, if two walks have different starting nodes, they never participate in the same equation (6.6) or (6.8) and hence are allowed to share segments; if two walks have the same starting nodes, however, they are prohibited from overlapping. Moreover, due to the technique in the previous section, any one-step walk should be ignored. 11 Recall
that we need only diagonal entries of the matrix Z.
1198
HAIFENG QIAN AND SACHIN S. SAPATNEKAR
(a) (b)
{2, 4, 6, 4, 5, 7, 6, 3, 2, 5, 8, 1} {4, 6, 4, 5, 7, 6, 3} {5, 7, 6, 3}
{5, 8, 1}
Fig. 6.1. An example of (a) the legal sequence of a simulated random walk and (b) three extra walks extracted from it.
Figure 6.1 shows an example of extracting multiple legal sequences from a single simulated random walk. The sequence {2, 5, 8, 1} cannot be used because it has the same starting node as the entire sequence; the sequence {4, 5, 7, 6, 3} cannot be used because it has the same starting node as {4, 6, 4, 5, 7, 6, 3} and the two sequences overlap.12 On the other hand, {5, 7, 6, 3} and {5, 8, 1} are both extracted because they do not overlap and hence are two independent random walks. Considering all of the above requirements, the procedure for an R-matrix is shown in Algorithm 3, where the extracted legal sequences are directly accounted for in the M , H, and J accumulators, which are defined as in section 3.1. Note that the simulated random walk is never stored in memory, and the only extra storage due to this technique is the stacks, which contain a monotonically increasing sequence of integers at any moment. For a random walk game with scaling for a non-R-matrix, more storage is needed to keep track of products of scaling factors, and the increments of H and J variables should be the proper products of scaling factors instead of ones. Algorithm 3. Extract multiple walks from a single simulation, for an R-matrix. stack1.push( k1 ); stack2.push( 1 ); For l = 2, 3, . . . , until the end of walk, do { While( kl < stack1.top() ){ If( l > stack2.top()+1 ){ k = stack1.top(); Mk = Mk + 1; Hk ,kl = Hk ,kl + 1; Jk ,k = Jk ,k + 1; } stack1.pop(); stack2.pop(); } If( kl > stack1.top() ){ stack1.push( kl ); stack2.push( l ); } else Jkl ,kl = Jkl ,kl + 1; }
This technique reduces the preconditioning runtime by fully utilizing the information contained in each simulated walk such that it contributes to (6.6) and (6.8) as multiple walks. It guarantees that no two overlapping walks have the same starting node and hence does not hurt the accuracy of the produced preconditioner. The only 12 It is also legitimate to extract {4, 5, 7, 6, 3} instead of {4, 6, 4, 5, 7, 6, 3}. However, the premise of random sampling must be fulfilled: The decision of whether to start a sequence with k2 = 4 must be made without the knowledge of numbers after k2 , and the decision of whether to start a sequence with k4 = 4 must be made without the knowledge of numbers after k4 . The strategy in Algorithm 3 is to start a sequence as early as possible and hence produces {4, 6, 4, 5, 7, 6, 3} instead of {4, 5, 7, 6, 3}.
1199
STOCHASTIC PRECONDITIONING
cost of this technique is that the node ordering must be determined beforehand, and hence pivoting is not allowed during the incomplete factorization.13 6.4. Matrix ordering. In existing factorization-based preconditioning techniques, matrix ordering affects the performance. The same statement is true for the proposed stochastic preconditioner. In general, since we perform an incomplete LDU/LDLT factorization on the reverse ordering of the matrix A, we can utilize any existing ordering method on A and then reverse the resulting ordering; in this way, any benefit of that ordering method can be inherited by us. For example, a reversed approximate minimum degree (AMD) ordering [1] is likely to improve the accuracysize trade-off; a reversed reverse Cuthill–McKee (RCM) ordering [9], which becomes the original Cuthill–McKee ordering, is likely to improve cache efficiency. 7. Numerical results. Twelve symmetric benchmark matrices are used to compare the stochastic preconditioner against IC(0), ICT, MICT, and support-graph preconditioners; twelve asymmetric benchmark matrices are used to compare the stochastic preconditioner against ILU(0), ILUT, and MILUT preconditioners. The first set of symmetric benchmarks are generated by SPARSKIT [29] by finitedifference discretization, with the regular seven-point stencil, of the 3D Laplace equation ∇2 u = 0 with a Dirichlet boundary condition. The matrices correspond to 3D grids with sizes 50 by 50 by 50, 60 by 60 by 60, up to 100 by 100 by 100, and a righthand-side vector with all entries being 1 is used with each of them. They are listed in Tables 7.1, 7.3 and 7.4 as benchmarks m1–m6. Another six application-specific benchmarks, m7–m12, are reported in Tables 7.3, 7.4, and 7.2: They are symmetric placement matrices from VLSI designs and are denser than the 3D-grid matrices. The twelve asymmetric benchmarks m1’–m12 in Tables 7.5 and 7.6 are derived from m1 to m12, respectively: Each of them is generated by randomly switching signs of the off-diagonal entries in a symmetric benchmark and randomly removing a fraction of its off-diagonal entries. MATLAB is used to generate the IC(0), ICT, MICT, ILU(0), ILUT, and MILUT preconditioners. Three matrix ordering algorithms are available in MATLAB: minimum degree ordering [12], AMD [1], and RCM [9]. AMD results in the best performance on the benchmarks and is used for all. TAUCS [33] is used to generate Table 7.1 Condition number comparison of the stochastic preconditioner against ICT, MICT, and support-graph preconditioners on the symmetric 3D-grid benchmarks. N is the dimension of a matrix; E is the number of nonzero entries of a matrix; C1 is the condition number of the original matrix; S is the number of nonzero entries of the preconditioner (L and D matrices); C2 is the condition number after split preconditioning. Matrix m1 m2 m3 m4 m5 m6
N
E
C1
1.25e5 2.16e5 3.43e5 5.12e5 7.29e5 1.00e6
8.60e5 1.49e6 2.37e6 3.55e6 5.05e6 6.94e6
1.05e3 1.51e3 2.04e3 2.66e3 3.36e3 4.13e3
ICT S C2 1.72e6 19.4 3.00e6 27.7 4.80e6 37.4 7.20e6 48.5 1.03e7 61.0 1.42e7 75.2
MICT S C2 1.72e6 30.3 3.00e6 43.2 4.80e6 58.4 7.20e6 75.8 1.03e7 95.2 1.42e7 117
Support-graph S C2 1.73e6 623 3.11e6 564 4.79e6 587 7.30e6 1.46e3 1.03e7 866 1.42e7 2.07e3
Stochastic S C2 1.71e6 4.72 3.02e6 4.84 4.87e6 5.13 7.35e6 4.78 1.06e7 5.05 1.46e7 5.15
13 For irreducibly diagonally dominant matrices, pivoting is not needed. For more general matrices to be discussed in section 8, the usage of this technique may be limited. Note that this technique is optional, and, without it, the advantages of the hybrid solver still hold.
1200
HAIFENG QIAN AND SACHIN S. SAPATNEKAR
Table 7.2 Condition number comparison of the stochastic preconditioner against ICT, MICT, and support-graph preconditioners on the symmetric VLSI placement benchmarks. N , E, C1, S, and C2 are as defined in Table 7.1. Matrix m7 m8 m9 m10 m11 m12
N
E
C1
7.28e4 2.07e5 2.67e5 4.04e5 4.39e5 8.54e5
1.10e6 2.14e6 3.19e6 5.09e6 8.06e6 9.23e6
1.96e5 2.45e4 1.13e5 1.13e4 5.73e4 1.30e4
ICT S C2 1.28e6 743 3.00e6 234 4.06e6 278 6.32e6 310 8.07e6 181 1.26e7 385
MICT S C2 1.28e6 1.06e3 3.00e6 320 4.06e6 315 6.32e6 467 8.07e6 211 1.26e7 537
Support-graph S C2 1.26e6 1.60e3 3.04e6 2.17e3 4.08e6 1.20e4 6.45e6 8.85e3 8.07e6 1.02e4 1.25e7 9.12e3
Stochastic S C2 1.28e6 6.66 2.96e6 5.83 4.05e6 6.25 6.41e6 6.40 8.09e6 6.77 1.26e7 6.17
Table 7.3 Computational complexity comparison of CG using the stochastic preconditioner against using IC(0), ICT, MICT, and support-graph preconditioners, to solve the twelve symmetric benchmarks for one right-hand-side vector, with 10−6 error tolerance. N , E, S are as defined in Table 7.1; I is the number of iterations to reach 10−6 error tolerance; M is the total number of multiplications; R is our speedup ratio, measured by the corresponding M value divided by the M value of stochastic preconditioning. Matrix N E S Stochastic I M S IC(0) I M R S ICT I M R S MICT I M R S Support- I graph #1 M R S Support- I graph #2 M R
m1 m2 m3 m4 m5 m6 m7 m8 m9 m10 m11 m12 1.35e5 2.16e5 3.43e5 5.12e5 7.29e5 1.00e6 7.28e4 2.07e5 2.67e5 4.04e5 4.39e5 8.54e5 8.60e5 1.49e6 2.37e6 3.55e6 5.05e6 6.94e6 1.10e6 2.14e6 3.19e6 5.09e6 8.06e6 9.23e6 1.71e6 3.02e6 4.87e6 7.35e6 1.06e7 1.46e7 1.28e6 2.96e6 4.05e6 6.41e6 8.09e6 1.26e7 17 17 18 18 18 19 21 20 22 21 22 20 8.14e7 1.43e8 2.43e8 3.65e8 5.25e8 7.63e8 8.30e7 1.78e8 2.72e8 4.10e8 5.72e8 7.57e8 4.93e5 8.53e5 1.36e6 2.03e6 2.89e6 3.97e6 5.87e5 1.17e6 1.73e6 2.75e6 4.25e6 5.04e6 59 70 81 92 94 104 127 151 165 142 115 187 1.38e8 2.84e8 5.23e8 8.88e8 1.29e9 1.96e9 3.26e8 8.02e8 1.27e9 1.73e9 2.11e9 4.25e9 1.70 1.99 2.16 2.43 2.46 2.57 3.92 4.51 4.68 4.23 3.68 5.61 1.72e6 3.00e6 4.80e6 7.20e6 1.03e7 1.42e7 1.28e6 3.00e6 4.06e6 6.32e6 8.07e6 1.26e7 21 25 29 32 36 40 60 72 88 71 56 100 1.01e8 2.09e8 3.87e8 6.40e8 1.03e9 1.57e9 2.37e8 6.46e8 1.09e9 1.37e9 1.45e9 3.78e9 1.24 1.47 1.59 1.75 1.96 2.06 2.85 3.63 4.01 3.35 2.54 4.99 1.72e6 3.00e6 4.80e6 7.20e6 1.03e7 1.42e7 1.28e6 3.00e6 4.06e6 6.32e6 8.07e6 1.26e7 27 32 38 43 46 52 71 86 101 85 65 119 1.30e8 2.68e8 5.07e8 8.60e8 1.31e9 2.04e9 2.80e8 7.72e8 1.25e9 1.65e9 1.69e9 4.50e9 1.59 1.88 2.09 2.35 2.50 2.68 3.37 4.34 4.60 4.01 2.95 5.94 4.95e5 8.51e5 1.35e6 2.10e6 2.91e6 3.92e6 5.53e5 1.20e6 1.72e6 2.74e6 4.14e6 5.05e6 315 349 491 492 584 628 274 438 870 678 999 1000 7.40e8 1.42e9 3.16e9 4.82e9 8.05e9 1.18e10 6.84e8 2.35e9 6.69e9 8.27e9 1.81e10 2.27e10 9.09 9.93 13.0 13.2 15.3 15.5 8.24 13.2 24.6 20.2 31.6 30.0 1.73e6 3.11e6 4.79e6 7.30e6 1.03e7 1.42e7 1.26e6 3.04e6 4.08e6 6.45e6 8.07e6 1.25e7 195 194 205 319 251 387 200 315 763 545 720 633 9.41e8 1.66e9 2.73e9 6.44e9 7.19e9 1.52e10 7.83e8 2.85e9 9.48e9 1.07e10 1.87e10 2.38e10 11.6 11.7 11.3 17.6 13.7 20.0 9.42 16.0 34.9 26.1 32.7 31.4
Table 7.4 Cost of the stochastic preconditioning, measured by the total number of random-walk steps performed, and by the physical preconditioning CPU times T 1 of our implementation on a Linux workstation with 2.8 GHz CPU frequency. T 2 is the solving CPU time of our implementation with 10−6 error tolerance. The units for T 1 and T 2 are second. Matrix m1 m2 m3 m4 m5 m6 m7 m8 m9 m10 m11 #Step 3.67e7 6.86e7 1.16e8 1.83e8 2.74e8 3.91e8 2.26e7 4.89e7 6.80e7 9.11e7 1.14e8 T1 7.05 13.91 23.97 38.30 58.63 83.86 6.12 11.86 18.70 28.96 55.80 T2 2.38 5.64 10.71 17.68 28.56 41.04 2.04 5.29 8.34 14.45 22.80
m12 1.99e8 77.63 31.10
STOCHASTIC PRECONDITIONING
1201
Table 7.5 Condition number comparison of the stochastic preconditioner against ILUT and MILUT on the twelve asymmetric benchmarks. N , E, C1, C2 are as defined in Table 7.1; S is the number of nonzero entries of the asymmetric preconditioner (L, D, U matrices). Matrix m1’ m2’ m3’ m4’ m5’ m6’ m7’ m8’ m9’ m10’ m11’ m12’ N 1.25e5 2.16e5 3.43e5 5.12e5 7.29e5 1.00e6 7.28e4 2.07e5 2.67e5 4.04e5 4.39e5 8.54e5 E 8.60e5 1.49e6 2.37e6 3.55e6 5.05e6 6.94e6 1.10e6 2.14e6 3.19e6 5.09e6 8.06e6 9.23e6 C1 552 632 713 785 743 856 8.47e3 1.93e3 1.85e4 1.46e3 2.78e3 1.39e3 Stochastic S 2.99e6 5.27e6 8.50e6 1.28e7 1.85e7 2.55e7 2.57e6 6.12e6 8.72e6 1.28e7 1.70e7 2.62e7 C2 3.21 3.21 3.38 3.27 3.46 3.45 3.28 4.05 3.81 4.39 5.10 3.89 ILUT S 3.04e6 5.30e6 8.47e6 1.27e7 1.81e7 2.50e7 2.54e6 6.01e6 8.91e6 1.24e7 1.67e7 2.63e7 C2 15.6 17.8 20.0 22.0 22.7 24.0 60.8 35.8 42.9 54.7 35.6 50.0 MILUT S 3.04e6 5.30e6 8.47e6 1.27e7 1.81e7 2.50e7 2.54e6 6.01e6 8.91e6 1.24e7 1.67e7 2.63e7 C2 21.6 24.4 27.9 31.1 32.8 35.0 81.2 54.9 57.9 79.0 62.6 71.7
Table 7.6 Computational complexity comparison of BCG using the stochastic preconditioner against using ILU(0), ILUT, MILUT preconditioners, to solve the twelve asymmetric benchmarks for one righthand-side vector, with 10−6 error tolerance. N , E are as defined in Table 7.1; S is as defined in Table 7.5; I, M , R are as defined in Table 7.3. Matrix N E
S Stochastic I M S ILU(0) I M R S ILUT I M R S MILUT I M R
m1’ m2’ m3’ m4’ m5’ m6’ m7’ m8’ m9’ m10’ m11’ m12’ 1.25e5 2.16e5 3.43e5 5.12e5 7.29e5 1.00e6 7.28e4 2.07e5 2.67e5 4.04e5 4.39e5 8.54e5 8.60e5 1.49e6 2.37e6 3.55e6 5.05e6 6.94e6 1.10e6 2.14e6 3.19e6 5.09e6 8.06e6 9.23e6 2.99e6 5.27e6 8.50e6 1.28e7 1.85e7 2.55e7 2.57e6 6.12e6 8.72e6 1.28e7 1.70e7 2.62e7 16 17 17 16 17 18 17 17 17 17 17 18 1.37e8 2.56e8 4.11e8 5.82e8 8.87e8 1.29e9 1.34e8 3.05e8 4.37e8 6.57e8 9.04e8 1.38e9 8.60e5 1.49e6 2.37e6 3.55e6 5.05e6 6.94e6 1.10e6 N/A 3.19e6 5.09e6 8.06e6 N/A 60 66 73 78 77 87 88 N/A 90 89 82 N/A 2.59e8 4.93e8 8.68e8 1.39e9 1.95e9 3.02e9 4.32e8 N/A 1.32e9 2.06e9 2.90e9 N/A 1.88 1.93 2.11 2.38 2.20 2.34 3.24 N/A 3.01 3.14 3.20 N/A 3.04e6 5.30e6 8.47e6 1.27e7 1.81e7 2.50e7 2.54e6 6.01e6 8.91e6 1.24e7 1.67e7 2.63e7 25 27 29 31 32 34 45 47 46 43 38 54 2.17e8 4.07e8 6.98e8 1.12e9 1.65e9 2.41e9 3.51e8 8.34e8 1.20e9 1.63e9 2.00e9 4.16e9 1.58 1.60 1.70 1.92 1.86 1.86 2.63 2.73 2.75 2.48 2.21 3.01 3.04e6 5.30e6 8.47e6 1.27e7 1.81e7 2.50e7 2.54e6 6.01e6 8.91e6 1.24e7 1.67e7 2.63e7 33 34 38 39 42 45 55 57 59 54 46 67 2.86e8 5.13e8 9.15e8 1.41e9 2.16e9 3.19e9 4.29e8 1.01e9 1.54e9 2.05e9 2.42e9 5.16e9 2.09 2.01 2.23 2.42 2.44 2.46 3.21 3.31 3.52 3.12 2.68 3.73
the support-graph preconditioners, which are based on the nonrecursive version of the augmented maximum-weight-basis algorithm proposed in [5]. Our solver package [27] generates the stochastic preconditioners. The condition number measurements in Tables 7.1, 7.2, and 7.5 are all performed in MATLAB; the support-graph preconditioners and the stochastic preconditioners are read into MATLAB via binary files. The preconditioned CG and BCG solves in Tables 7.3 and 7.6 are all performed in MATLAB as well. The T 2 runtimes in Table 7.4, which are not used in any comparison, are measured on CG solves by our solver package [27]. In Tables 7.1 and 7.2, and Figure 7.1, the condition number comparisons are based on roughly equal preconditioner sizes: The dropping thresholds of ICT and MICT are tuned, and the accuracy-size trade-offs of the support-graph preconditioner as well as the proposed stochastic preconditioner are adjusted, such that the sizes of the factors produced by all four methods are similar; i.e., the S values in the tables are close. A clear trend can be observed in Figure 7.1 that, when the matrix size increases, the performances of ICT and MICT both degrade, while the performance of the
1202
HAIFENG QIAN AND SACHIN S. SAPATNEKAR
Fig. 7.1. The condition number C2 after preconditioning as a function of the matrix dimension N for the results in Table 7.1.
stochastic preconditioner remains relatively stable. This is consistent with our argument in section 5.2: When the matrix is larger and denser, the effect of error accumulation in traditional methods becomes stronger, and the benefit of stochastic preconditioning becomes more prominent. The same explanation holds for the fact that the performance gap in Table 7.2 is bigger than in Table 7.1, and the reason is that the m7–m12 benchmarks are denser than m1–m6; i.e., m7–m12 have a higher average number of nonzeros per row. In Table 7.2, the performances of ICT and MICT fluctuate significantly due to the different structural and numerical properties of the benchmarks m7–m12; again, the stochastic preconditioner remains stable. In Table 7.3, we use a different metric to compare the stochastic preconditioning against the IC(0), ICT, MICT preconditioners, as well as the support-graph preconditioner at two different trade-off points, one with a size similar to IC(0) and the other with a size similar to ICT. The complexity metric is the number of double-precision multiplications needed at the PCG solving stage for the equation set Ax = b, in order to converge with an error tolerance of 10−6 , i.e., b − Ax 2 < 10−6 · b 2 . In Table 7.3, the I values are from MATLAB, and the M values are calculated as M = I · (S · 2 + E + N · 4), which is the best possible implementation based on the PCG pseudocodes in [2], [28]. Again, Table 7.3 suggests that PCG with stochastic preconditioning requires the least amount of computation to reach the same convergence criterion and that the speedup ratio R gets higher for larger and/or denser matrices. The computational costs of building the proposed preconditioner are reported in Table 7.4. For each benchmark, the cost is shown in two forms: the amount of computation measured by the total number of random-walk steps, as well as the corresponding physical CPU runtime T 1. A random-walk step is an element operation in the proposed algorithm and involves one random number generation plus O(log(degree(i))) logic operations, where degree(i) is the degree of the current node. The choice of random number generator (or quasi-random numbers as an alternative) is a separate issue, and our implementation uses one from [24, p. 279]. The physical solving runtime T 2 is also included as a reference. Admittedly, the preconditioning runtime T 1 is more than the typical runtime of a traditional incomplete factorization;
STOCHASTIC PRECONDITIONING
1203
however, it is not a large overhead, gets easily amortized over multiple re-solves, and is worthwhile given the speedup achieved in the solving stage, i.e., the lower T 2. In Table 7.4, T 1 is no more than three times T 2; hence, given the potential T 2 reduction suggested by Table 7.3, it is likely that, for a relatively large matrix, our preconditioning would save overall runtime even for just a single right-hand-side vector. In Tables 7.5 and 7.6, the condition number and the computational complexity comparisons are repeated for the asymmetric benchmarks m1’–m12’, and the stochastic preconditioning is compared against ILUT, MILUT, and ILU(0). The ILU(0) data points for m8’ and m12’ are unavailable due to MATLAB failures. In Table 7.6, based on the preconditioned BCG pseudocode in [2], the M formula is M = I · (S · 2 + E · 2 + N · 7). Again, Tables 7.5 and 7.6 suggest that the stochastic preconditioning results in the least condition numbers, and the least amount of computation to reach convergence, and that the advantages gets more prominent for larger and/or denser matrices. A reference implementation of the stochastic preconditioning as well as the hybrid solver is available to the public [27]. 8. Future work. So far it is required that, in the random walk game, every scaling factor s must have an absolute value of one. Therefore, the scaling in the game never changes the magnitude of a monetary transaction and only changes its sign or phase. This section discusses implications of nonunitary scaling factors. If the scaling factors are allowed to take arbitrary complex values, the allowable matrix A in (4.10) becomes any matrix such that the diagonal entries are nonzero. In other words, if a matrix A has nonzero diagonal entries, a random walk game exists such that the f values, if they uniquely exist, satisfy a set of linear equations where the matrix is A. However, if there exist scaling factors with absolute values over 1, numerical problems may potentially occur since the product of scaling factors may be unbounded. How to quantify this effect and to analyze the corresponding convergence rate is an open question for future research. Acknowledgments. The authors thank Sani R. Nassif for his contribution to the stochastic solver, Yousef Saad for helpful discussions, and Howard Elman and the two reviewers for constructive suggestions in revising this manuscript. REFERENCES [1] P. R. Amestoy, T. A. Davis, and I. S. Duff, An approximate minimum degree ordering algorithm, SIAM J. Matrix Anal. Appl., 17 (1996), pp. 886–905. [2] R. Barrett, M. Berry, T. F. Chan, J. W. Demmel, J. Donato, J. Dongarra, V. Eijkhout, R. Pozo, C. Romine, and H. A. van der Vorst, Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, SIAM, Philadelphia, PA, 1994. [3] M. Benzi and M. Tuma, A sparse approximate inverse preconditioner for nonsymmetric linear systems, SIAM J. Sci. Comput., 19 (1998), pp. 968–994. [4] M. Bollhofer and Y. Saad, On the relations between ILUs and factored approximate inverses, SIAM J. Matrix Anal. Appl., 24 (2002), pp. 219–237. [5] E. Boman, D. Chen, B. Hendrickson, and S. Toledo, Maximum-weight-basis preconditioners, Numer. Linear Algebra Appl., 11 (2004), pp. 695–721. [6] T. C. Chan and H. A. van der Vorst, Approximate and Incomplete Factorizations, Technical report, Department of Mathematics, University of Utrecht, The Netherlands, 1994. [7] E. Chow and Y. Saad, Experimental study of ILU preconditioners for indefinite matrices, J. Comput. Appl. 
Math., 86 (1997), pp. 387–414. [8] J. H. Curtiss, Sampling methods applied to differential and difference equations, in Proceedings of the IBM Seminar on Scientific Computation, 1949, pp. 87–109.
1204
HAIFENG QIAN AND SACHIN S. SAPATNEKAR
[9] E. Cuthill and J. McKee, Reducing the bandwidth of sparse symmetric matrices, in Proceedings of the ACM National Conference, 1969, pp. 157–172. [10] I. S. Duff, A. M. Erisman, and J. K. Reid, Direct Methods for Sparse Matrices, Oxford University Press, New York, 1986. [11] G. E. Forsythe and R. A. Leibler, Matrix inversion by a Monte Carlo method, Mathematical Tables and Other Aids to Computation, 4 (1950), pp. 127–129. [12] A. George and J. W. H. Liu, The evolution of the minimum degree ordering algorithm, SIAM Rev., 31 (1989), pp. 1–19. [13] A. George and J. W. H. Liu, Computer Solution of Large Sparse Positive Definite Systems, Prentice-Hall, Englewood Cliffs, NJ, 1981. [14] J. H. Halton, Sequential Monte Carlo, Proc. Cambridge Philos. Soc., 58 (1962), pp. 57–78. [15] J. M. Hammersley and D. C. Handscomb, Monte Carlo Methods, Methuen, London, 1964. [16] P. Heggernes, S. C. Eisenstat, G. Kumfert, and A. Pothen, The computational complexity of the minimum degree algorithm, in Proceedings of the 14th Norwegian Computer Science Conference, 2001, pp. 98–109. [17] R. Hersh and R. J. Griego, Brownian motion and potential theory, Scientific American, 220 (1969), pp. 67–74. [18] A. Joshi, Topics in Optimization and Sparse Linear Systems, Ph.D. thesis, University of Illinois at Urbana-Champaign, 1997. [19] D. S. Kershaw, The incomplete Cholesky-conjugate gradient method for the iterative solution of systems of linear equations, J. Comput. Phys., 26 (1978), pp. 43–65. [20] C. N. Klahr, A Monte Carlo method for the solution of elliptic partial differential equations, in Mathematical Methods for Digital Computers, John Wiley and Sons, New York, 1962. [21] A. W. Knapp, Connection between Brownian motion and potential theory, J. Math. Anal. Appl., 12 (1965), pp. 328–349. [22] A. W. Marshall, The use of multi-stage sampling schemes in Monte Carlo, in Symposium of Monte Carlo Methods, John Wiley and Sons, New York, 1956, pp. 123–140. [23] M. E. Muller, Some continuous Monte Carlo methods for the Dirichlet problem, Ann. Math. Statist., 27 (1956), pp. 569–589. [24] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes in C: The Art of Scientific Computing, 2nd ed., Cambridge University Press, New York, 1994. [25] H. Qian, S. R. Nassif, and S. S. Sapatnekar, Random walks in a supply network, in Proceedings of the ACM/IEEE Design Automation Conference, 2003, pp. 93–98. [26] H. Qian and S. S. Sapatnekar, A hybrid linear equation solver and its application in quadratic placement, in ACM/IEEE International Conference on Computer-Aided Design Digest of Technical Papers, 2005, pp. 905–909. [27] H. Qian and S. S. Sapatnekar, The Hybrid Linear Equation Solver Release, available at http://mountains.ece.umn.edu/∼sachin/hybridsolver. [28] Y. Saad, Iterative Methods for Sparse Linear Systems, SIAM, Philadelphia, PA, 2003. [29] Y. Saad, SPARSKIT, version 2, available at http://www-users.cs.umn.edu/∼saad/software/ SPARSKIT/sparskit.html. [30] D. Spielman and S. H. Teng, Nearly-Linear Time Algorithms for Preconditioning and Solving Symmetric, Diagonally Dominant Linear Systems, available at http://www.arxiv.org/abs/ cs.NA/0607105, 2006. [31] A. Srinivasan and V. Aggarwal, Stochastic linear solvers, in Proceedings of the SIAM Conference on Applied Linear Algebra, 2003. [32] C. J. K. Tan and M. F. Dixon, Antithetic Monte Carlo linear solver, in Proceedings of the International Conference on Computational Science, Springer, New York, 2002, pp. 383– 392. [33] S. 
Toledo, TAUCS, version 2.2, available at http://www.tau.ac.il/∼stoledo/taucs. [34] W. Wasow, A note on the inversion of matrices by random walks, Mathematical Tables and Other Aids to Computation, 6 (1952), pp. 78–81. [35] R. D. Yates and D. J. Goodman, Probability and Stochastic Processes: A Friendly Introduction for Electrical and Computer Engineers, John Wiley and Sons, New York, 1999.
SIAM J. SCI. COMPUT. Vol. 30, No. 3, pp. 1205–1227
c 2008 Society for Industrial and Applied Mathematics
RESTORATION OF CHOPPED AND NODDED IMAGES BY FRAMELETS∗ JIAN-FENG CAI† , RAYMOND CHAN‡ , LIXIN SHEN§ , AND ZUOWEI SHEN¶ Abstract. In infrared astronomy, an observed image from a chop-and-nod process can be considered as the result of passing the original image through a high-pass filter. Here we propose a restoration algorithm which builds up a tight framelet system that has the high-pass filter as one of the framelet filters. Our approach reduces the solution of restoration problem to that of recovering the missing coefficients of the original image in the tight framelet decomposition. The framelet approach provides a natural setting to apply various sophisticated framelet denoising schemes to remove the noise without reducing the intensity of major stars in the image. A proof of the convergence of the algorithm based on convex analysis is also provided. Simulated and real images are tested to illustrate the efficiency of our method over the projected Landweber method. Key words. tight frame, chopped and nodded image, projected Landweber method, convex analysis AMS subject classifications. 42C40, 65T60, 68U10, 94A08 DOI. 10.1137/040615298
1. Introduction. We start with a very brief introduction to the formation of chopped and nodded images and refer the readers to the papers by Bertero et al. in [2, 3, 4, 5] for details. In ground-based astronomy at midinfrared wavelengths (λ ≈ 5– 20 μm), the weak astronomical signal f is corrupted by the overwhelming thermal background produced by the atmosphere and the telescope. The observed signal s from the direction (x, y) at time t on the detector plane is the superposition of the weak signal f (x, y) together with a large time-varying background η(x, y, t) coming from the atmosphere and telescope optics, i.e., s = f (x, y) + η(x, y, t). To extract the celestial source f , we need to eliminate the effect of the background η(x, y, t). A common approach called chop and nod is employed. Chopping refers to the rapid modulation of the telescope beam between the target and an empty sky area. Nodding refers to a second chopping sequence done with the telescope pointing to an offset position. In essence, if the target position is (x, y) and the two sky areas are (x, y + Δ) and (x, y − Δ), the chopping and nodding techniques produce a second ∗ Received by the editors September 18, 2004; accepted for publication (in revised form) November 9, 2007; published electronically March 21, 2008. http://www.siam.org/journals/sisc/30-3/61529.html † Department of Mathematics, the Chinese University of Hong Kong, Shatin, Hong Kong, China. Current address: Temasek Laboratories and Department of Mathematics, National University of Singapore, 2 Science Drive 2, 117543 Singapore (
[email protected]). ‡ Department of Mathematics, the Chinese University of Hong Kong, Shatin, Hong Kong, China (
[email protected]). Research supported in part by HKRGC grants CUHK 400503 and CUHK DAG 2060257. This work was partially done while this author was visiting the Institute for Mathematical Sciences, National University of Singapore in 2003. The visit was supported by the Institute. § Corresponding author. Department of Mathematics, Syracuse University, Syracuse, NY 13244 (
[email protected]). ¶ Department of Mathematics, National University of Singapore, 2 Science Drive 2, 117543 Singapore (
[email protected]). Research supported in part by grant R-146-000-060-112 at the National University of Singapore.
1205
1206
J.-F. CAI, R. CHAN, L. SHEN, AND Z. SHEN
difference of the observed signal s (1.1)
−f (x, y − Δ) + 2f (x, y) − f (x, y + Δ) + e,
where the quantity Δ is called the chopping throw or chopping amplitude and e := −η(x, y − Δ, t ) + 2η(x, y, t) − η(x, y + Δ, t ) with the times t and t being close to t. Under suitable assumptions [2], the term e can be modeled by a white Gaussian process. One gets the so-called chopped and nodded image: (1.2)
g(x, y) := −f (x, y − Δ) + 2f (x, y) − f (x, y + Δ) + e.
The effect of noise e to the accuracy of the restoration is discussed in section 5. To restore f , we need to solve the inversion problem (1.2) from g. In this paper, we consider only the case when the chopping amplitude Δ is an integral multiple of the sampling distance in the detector plane, i.e., Δ = K is an integer. We remark that this assumption can be restrictive in some situations; see [3, 5, 21]. We can write (1.2) in the following discrete form: (1.3)
gj,m = −fj,m−K + 2fj,m − fj,m+K + ej,m ,
where gj,m and fj,m are the samples of g(x, y) and f (x, y), respectively, and ej,m are the samples of e at (j, m). For each j, let gj be the vector whose mth entry is given by gj,m , m = 1, 2, . . . , N , respectively, and fj be the vector with the nth entry given by fj,n−K , n = 1, 2, . . . , N + 2K. Then (1.3) can be written in a matrix form: (1.4)
gj = Afj + ej ,
where the (m, n)th entry of A is given by A(m, n) = −δm,n + 2δm+K,n − δm+2K,n , with m = 1, 2, . . . , N , n = 1, 2, . . . , N + 2K, and δm,n = 0 if m = n and δn,n = 1. The matrix A is called the imaging matrix. Formulation (1.4) is similar to deconvolution problems except that here A is a high-pass filter instead of being a low-pass filter. One standard approach for deconvolution problems is to find fj such that Afj ≈ gj (data-fitting) while requiring fj to be smooth in a certain sense (regularization). In [4], the projected Landweber method is used to find the solution of (1.4). It is defined as follows: 6 5 (n+1) (n) (n) (1.5) fj = P+ fj + AT (gj − Afj ) , n = 0, 1, . . . , where P+ is the projection operator onto the set of nonnegative vectors (since brightness distribution of a celestial source must be nonnegative) and is a relaxation parameter which satisfies 0 < < 2/λ1 , with λ1 being the largest eigenvalue of AT A. For a detail discussion of the method, see [1, Chapter 6]. It was pointed out in [1, 4] that the projected Landweber method has a regular(n) ization property, known as semiconvergence: the iterates fj first approach the true image, but the noise will be amplified when n is larger than a certain threshold. Thus a stopping criterion, called the discrepancy principle, is introduced in order to obtain the best approximation of the required solution. However, due to the special structure of the matrix A, the restored images always have some kinds of artifacts (see Figure 5.4(b)). In this paper, we introduce a tight frame method for solving (1.4). There are many papers on using wavelet methods to solve inverse problems and, in particular,
RESTORATION OF CHOPPED-NODDED IMAGES BY FRAMELETS
1207
deconvolution problems. One of the main ideas is to construct a wavelet or “waveletinspired” basis that can almost diagonalize the given operator. The underlying solution has a sparse expansion with respect to the chosen basis. The wavelet-vaguelette decomposition proposed in [20, 22] and the deconvolution in mirror wavelet bases in [28, 29] can both be viewed as examples of this strategy. Another approach is to apply Galerkin-type methods to inverse problems by using an appropriate, but fixed, wavelet basis (see, e.g., [6, 13]). Again, the idea there is that, if the given operator has a sparse representation and the solution has a sparse expansion with respect to the wavelet basis, the inversion is reduced approximately to the inversion of a truncated operator. Recently, two new iterative thresholding ideas have been proposed in [10, 11, 12] and [17, 24]. Instead of requiring the system to have an almost diagonal representation of the operator, they require only that the underlying solution has a sparse expansion with respect to a given orthonormal but not necessary wavelet basis or to a tight frame system. The main idea of [17] is to expand the current iterate with respect to the chosen orthonormal basis for a given algorithm such as the Landweber method. Then a thresholding algorithm is applied to the coefficients of this expansion. The results are then combined to form the next iterate. The algorithm of [17] is shown to converge to the minimizer of a certain cost functional. An essentially same algorithm for inverting convolution operator acting on objects that are sparse in the wavelet domain is given in [24]. The tight frame algorithm that we are going to propose for (1.4) is closer to the approach in [10, 11, 12], where a high-resolution image reconstruction or, more generally, a deconvolution problem with the convolution kernel being a low-pass filter are considered. An analysis of the convergence and optimal property of these algorithms is given in [10]. In fact, our algorithm in this paper is motivated by the ideas in [10, 11, 12] to convert the deconvolution problem g = Af into an inpainting problem in the transformed domain. To make use of the given data g for inpainting, one needs to construct a tight frame system where the given convolution operator A corresponds to one of the framelet masks of the system, say, h. Then the convolution equation g = Af can be viewed as giving us the framelet coefficients of f corresponding to the given mask h. Hence the problem is to find the framelet coefficients of f corresponding to the framelet masks other than h. In short, by choosing one of the framelet masks of the system corresponding to A, the deconvolution problem is converted to the problem of inpainting in the framelet domain—finding the missing framelet coefficients. This is to be compared with inpainting in the image domain where we are given part of the image and finding the missing part of the image. Here we iteratively regenerate the missing framelet coefficients. The noise is removed by thresholding at each iteration. We will see that this iterative algorithm converges and its limit satisfies certain optimal properties. To make all of these work, we need to build up a tight framelet system from the A given in (1.4). This can be done by using the unitary extension principle of [34]. We remark here that, for our A, it is impossible to construct an orthonormal (or nonredundant) wavelet system where A corresponds to one of the masks. 
We note that, since tight frames are redundant systems, information lost along one framelet direction can still be contained in, and hence recovered from, other framelet directions. In fact, the redundancy not only helps in recovering the missing framelet coefficients, it also helps in reducing artifacts introduced by the thresholding denoising scheme built in the algorithm as pointed out in [14]. We further remark that, unlike the approaches in [6, 13, 20, 22, 28, 29], our approach does not attempt
1208
J.-F. CAI, R. CHAN, L. SHEN, AND Z. SHEN
to find a tight frame system under which the convolution operator can be sparsely represented. Instead, similar to the approaches in [10, 11, 12, 17, 24], we require only that the underlying solution f has a sparse representation under the tight frame system we constructed. It is shown in [7, 8] that piecewise smooth functions with a few spikes do have sparse representations by compactly supported tight frame systems. Hence, implicitly, we assume that f is piecewise smooth with possibly some spikes. Furthermore, the fact that the limit minimizes a functional of the 1 norm of framelet coefficients is desirable in image restoration as has already been observed by many researchers (see, e.g., [10]). Before getting into the technical details of our approach, we summarize the main ideas in the following: 1. Designing a tight framelet system. In view of (1.3), we consider the chopped and nodded image g in (1.2) as the output obtained by passing the true image f through the high-pass filter: ⎛ ⎞ (1.6)
⎝−1, 0, . . . , 0, 2, 0, . . . , 0, −1⎠ . K−1
K−1
We first design a tight framelet system from a multiresolution analysis that has this chop-and-nod high-pass filter (1.6) as one of the framelet masks. Then g can be understood as framelet coefficients of f corresponding to the framelet with (1.6) as the framelet mask. The restoration of f becomes the problem of recovering the coefficients of other framelets and the coefficients of the coarse level approximations (low-frequency information) in the tight frame representation of f . 2. Restoring the missing information in f . The missing coefficients of f are found by an iterative procedure. The previous iterate is first decomposed by a tight framelet decomposition. Then the missing coefficients of f are approximated by the corresponding coefficients in the previous iterate and combined with g in the tight framelet reconstruction algorithm to obtain a new iterate. The tight framelet decomposition and reconstruction algorithms are based on those given by [18]. We will see that the projected Landweber method (1.5) with = 1/16 is a simple version of our tight frame algorithm, where no noise removal thresholding is done on any framelet coefficients. This observation not only gives a new interpretation of the Landweber method, but it also gives a platform to understand the method in terms of framelet theory and multiresolution analysis. 3. Denoising by thresholding. In our tight frame algorithm, the denoising is done by damping the framelet coefficients by using a tight framelet denoising scheme. The scheme can denoise part of the components in each iterate and leave other components intact. It is because our tight frame approach can identify the precise components that need to be denoised. In contrast, the denoising scheme in [17, 24] is applied to every component of the iterate. As is shown in the numerical simulation, this step also helps to remove some of the artifacts. Since we have to restrict the solution onto the set of nonnegative vectors due to the physical meaning of f , the analysis in [10], which is based on framelet analysis, cannot be applied here. In this paper, we give an entirely different approach from [10] to prove the convergence of our framelet algorithm. The analysis uses the framework
RESTORATION OF CHOPPED-NODDED IMAGES BY FRAMELETS
1209
of proximal forward-backward splitting proposed in [15] constructed under the theory of convex analysis and optimization. It will be shown in our numerical tests that the timing of our tight frame algorithm is higher than those of Landweber-type methods (see, e.g., [3, 5]) but is quite manageable. Our method can be used as a good postprocessing method to clean up these kinds of chopped and nodded infrared images. The outline of the paper is as follows. In section 2, we give a brief review on tight frames and design filters used in this paper. In section 3, we give our iterative algorithm. In section 4, we prove the convergence of the algorithm. Results on simulated and real images are presented in section 5. 2. Tight framelet analysis. The chopped and nodded image g in (1.2) and (1.3) can be viewed as the result of passing the original image f through a high-pass filter given in (1.6). Since we have only the high-pass filter available to us, it can only be a framelet mask. We therefore have to design a framelet system and its associated multiresolution analysis with this given high-pass filter. Here we note that the associated multiresolution analysis is important in order to have a fast algorithm for the framelet transform, and the framelet transform is a necessity for any frameletbased algorithm. To this end, we will make use of the unitary extension principle for tight frame constructions given in [34]. We also remark that there are no papers that we are aware of on building a wavelet or framelet system and its associated multiresolution analysis when one of the given wavelet or framelet masks is a highpass filter. We start with the basics of tight frames in section 2.1 and their constructions in section 2.2. As can be seen in (1.4), the recovering of the chopped and nodded images can be reduced to the restoration of one-dimensional signals (along every fixed j in (1.4)). We therefore consider the univariate setting only. However, it is straightforward to extend the analysis given here to the multivariate case. 2.1. Preliminaries on tight framelets. A system X ⊂ L2 (R) is called a tight frame of L2 (R) if
f, hh ∀f ∈ L2 (R). (2.1) f= h∈X
This is equivalent to (2.2)
f 22 =
| f, h|2
∀f ∈ L2 (R),
h∈X
where ·, · and · 2 = ·, · are the inner product and norm, respectively, of L2 (R). It is clear that an orthonormal basis is a tight frame, and a tight frame is a generalization of an orthonormal basis. A tight frame preserves the identities (2.1) and (2.2) which hold for an arbitrary orthonormal basis of L2 (R). But it sacrifices the orthonormality and the linear independence of the system in order to get more flexibility. Therefore tight frames can be redundant. This redundancy is often useful in image processing applications such as denoising; see [16]. If X(Ψ) is the collection of the dilations and the shifts of a finite set Ψ ⊂ L2 (R), i.e., 1/2
X(Ψ) = {2k/2 ψ(2k x − j) : ψ ∈ Ψ, k, j ∈ Z}, then X(Ψ) is called a wavelet (or affine) system. In this case the elements in Ψ are called the generators. When X(Ψ) is a tight frame for L2 (R), then ψ ∈ Ψ are called (tight) framelets.
1210
J.-F. CAI, R. CHAN, L. SHEN, AND Z. SHEN
A normal framelet construction starts with a refinable function. A compactly supported function φ ∈ L2 (R) is refinable (a scaling function) with a refinement mask τφ if it satisfies & & φ(2·) = τφ φ. Here φ& is the Fourier transform of φ, and τφ is a trigonometric polynomial with τφ (0) = 1; i.e., a refinement mask of a refinable function must be a low-pass filter. One can define a multiresolution analysis from a given refinable function; we omit the detailed discussion here and refer the readers to [19, 26]. For a given compactly supported refinable function, the construction of tight framelet systems is to find a finite set Ψ that can be represented in the Fourier domain as & ψ(2·) = τψ φ& for some 2π-periodic τψ . The unitary extension principle of [34] says that the wavelet system X(Ψ) generated by a finite set Ψ forms a tight frame in L2 (R) provided that the masks τφ and {τψ }ψ∈Ψ satisfy (2.3)
τφ (ω)τφ (ω + γπ) +
τψ (ω)τψ (ω + γπ) = δγ,0 ,
γ = 0, 1,
ψ∈Ψ
for almost all ω in R. Practically, we require all masks to be trigonometric polynomials. Thus, (2.3) together with the fact that τφ (0) = 1 imply that τψ (0) = 0 for all ψ ∈ Ψ. Hence, {τψ }ψ∈Ψ must correspond to high-pass filters. The sequences of Fourier coefficients of τψ , as well as τψ itself, are called framelet masks. The construction of framelets Ψ essentially is to design, for a given refinement mask τφ , framelet masks {τψ }ψ∈Ψ such that (2.3) holds. The unitary extension principle of [34] gives the flexibility in designing filters and will be used here. A more general principle of construction tight framelets, the oblique extension principle, was obtained recently in [18]. 2.2. Filter design. For any positive integer K, we need to design a set of framelets Ψ such that the chop-and-nod high-pass filter given in (1.6) is one of the framelet masks (up to a constant factor). Note that the trigonometric polynomial corresponding to this chop-and-nod high-pass filter is sin2 (Kω/2). We have the following result. Proposition 2.1. For an arbitrary odd number K, let τ2 (ω) = sin2 (Kω/2) be the given chop-and-nod high-pass filter given in (1.6). Let + ω, + ω, + ω, √ τ0 (ω) = cos2 K and τ1 (ω) = − −2 sin K cos K . 2 2 2 Then τ0 , τ1 , and τ2 satisfy (2.3). Furthermore, 1. the function |x| 1 −K if x ∈ [−K, K], 2 (2.4) φ(x) = K 0 otherwise is the refinable function with the refinement mask τ0 , and
RESTORATION OF CHOPPED-NODDED IMAGES BY FRAMELETS
1211
2. ψ1 and ψ2 , defined by (2.5)
& ψ&1 (2ω) = τ1 (ω)φ(ω),
& ψ&2 (2ω) = τ2 (ω)φ(ω),
are tight framelets; i.e., X(Ψ), with Ψ = {ψ1 , ψ2 }, forms a tight frame of L2 (R). Proof. The proof is a straightforward extension of spline framelets given in [34]. For completeness, we give an outline of the proof here. First, the spline theory says immediately that the refinable function corresponding to the mask τ0 is the piecewise |x| 1 −K linear spline with 1/K at the origin and 0 at ±K, i.e., φ(x) = K 2 for x ∈ [−K, K] 2 and 0 otherwise. Hence, φ ∈ L (R) has a support [−K, K]. Second, we can check directly that τ0 , τ1 , and τ2 satisfy (2.3). Therefore, X(Ψ) is a tight frame for L2 (R) by the unitary extension principle of [34]. The Fourier coefficients of the masks τ0 , τ1 , and τ2 are ⎧ ⎫ ⎨1 1 1⎬ , 0, . . . , 0, , 0, . . . , 0, a= (2.6) , ⎩ 4 2 4 ⎭ K−1 K−1 ⎧ ⎫ √ ⎬ ⎨ √2 2 , 0, . . . , 0, (2.7) , b1 = − ⎩ 4 4 ⎭ 2K−1 ⎫ ⎧ ⎨ 1 1 1⎬ b2 = − , 0, . . . , 0, , 0, . . . , 0, − (2.8) . ⎩ 4 2 4 ⎭ K−1
K−1
Clearly b2 matches the chop-and-nod filter in (1.6). This, together with (2.4) and (2.5), leads to ψi = 2 bi (j)φ(2 · −j), i = 1, 2. j∈Z
Hence, the framelets are precisely the piecewise linear functions supported on [−K, K]. When K is even, the simple construction of the tight frame in Proposition 2.1 does not work. Nevertheless, the filter design for even K can be done similarly. Since the real astronomical images we have are all obtained by using odd K, we omit the discussion for this case. 2.3. Matrix form. To implement our algorithm, we need to convert the filters to operators in matrix forms. For this, we need to consider boundary conditions, i.e., assumptions of the true image outside the region of interest. For simplicity, we use symmetric (reflective) extension here; see [32]. For other extensions, the derivation is similar; see, for example, [9]. In fact, for our method, the difference between symmetric and periodic boundary extensions is small. For a given sequence h = {h(j)}Jj=−J , we let T (h) be the M -by-M Toeplitz matrix ⎡ ⎤ h(0) · · · h(−J) 0 ⎢ .. ⎥ .. .. ⎢ . ⎥ . . h(−J) ⎢ ⎥ ⎢ ⎥ .. .. T (h) := ⎢ h(J) . . . , . . h(−J) ⎥ ⎢ ⎥ ⎢ ⎥ . . . . .. .. .. .. ⎣ ⎦ 0 h(J) ··· h(0)
1212
J.-F. CAI, R. CHAN, L. SHEN, AND Z. SHEN
where M > 2J + 1. We also ⎡ h(1) ⎢ h(2) ⎢ ⎢ .. ⎢ . T (h) := ⎢ ⎢ h(J − 1) ⎢ ⎣ h(J)
define two matrices: h(2) . · ·. ·. .. . . . .. .. . ..
0J×(M −J−1) 0
0(M −J)×J and
⎡
⎢ ⎢ ⎢ ⎢ Tr (h) = ⎢ ⎢ ⎢ ⎣
⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎦
0(M −J)×(M −J−1)
0(M −J)×(M −J−1)
⎤
0(M −J)×J ..
0 . .. . . .. h(−J) h(−J + 1) · · ·
0J×(M −J−1)
⎤
h(J − . 1) h(J) ..
..
.
. .. . .. h(−2)
h(−J) h(−J + 1) .. . h(−2) h(−1)
⎥ ⎥ ⎥ ⎥ ⎥. ⎥ ⎥ ⎦
They correspond to the symmetric boundary conditions on the left and on the right, respectively. Finally let S+ (h) and S− (h) be S+ (h) = T (h) + T (h) + Tr (h)
and S− (h) = −T (h) + T (h) − Tr (h).
For the masks a, b1 , and b2 given in (2.6)–(2.8), their corresponding decomposition (and, respectively, reconstruction) matrices are (2.9)
H0 = S+ (a),
H1 = S+ (b1 ),
H2 = S+ (b2 )
(and 0 = S+ (a), H
1 = S− (−b1 ), H
2 = S+ (b2 ), H
i = H T = Hi for i = 0 and 2. respectively), with J = K and M = N + 2K. Clearly H i T 1 = H = H1 . By using (2.3), it is easy to verify that Since b1 is antisymmetric, H 1 (2.10)
0 H0 + H 1 H1 + H 2 H2 = I, H
where I is the identity. 3. Framelet algorithm. In this section, we first show that the Landweber method with = 1/16 is a framelet method with no thresholding. Then we introduce a framelet thresholding scheme. By using that, we derive our main algorithm. 3.1. Landweber algorithm. Notice that H2 defined by (2.9) and the imaging matrix A in (1.4) are related by: ⎡ ⎤ ∗ 1 (3.1) H2 = ⎣ A ⎦ , 4 ∗ where ∗ denotes nonzero matrices of size K by (N +2K); cf. (1.6) and (2.8). Therefore, for any given f (n) we write (3.2)
2 H2 f (n) = H 2 ΛH2 f (n) + 1 AT Af (n) , H 16
RESTORATION OF CHOPPED-NODDED IMAGES BY FRAMELETS
where
1213
⎞
⎛
Λ = diag ⎝1, · · · , 1, 0, · · · , 0, 1, · · · , 1⎠ . K
N
K
By replacing Af (n) in (3.2) by g and using (2.10), we obtain an iteration 1 T (n+1) (n) (n) (n) (3.3) f = P+ H 0 H 0 f + H 1 H1 f + H2 ΛH2 f + A g . 16 The idea of the iteration (3.3) is as follows. We view the recovering of f as the reconstruction of the finer level approximations of f from a given framelet coefficient sequence H2 f . Note that the major part of the sequence H2 f is already given as g. We need the sequences H0 f and H1 f , which we do not have. At the (n + 1)th iteration of the algorithm, we use the corresponding coefficient sequences of f (n) to approximate the missing ones. The first term in the right-hand side of (3.3) represents the approximation of the low-frequency components of f , whereas the second term improves its high-frequency approximation. Finally P+ [·] ensures that f (n+1) is a nonnegative vector. 0 H0 f (n) + H 1 H1 f (n) = f (n) − H 2 H2 f (n) . Hence by (3.2), (3.3) can be By (2.10), H rewritten as 1 (n+1) (n) T T (n) = P+ f + (A g − A Af ) . (3.4) f 16 By comparing it with (1.5), we see that (3.3), which uses the tight frame approach, is just a reformulation of the projected Landweber method with = 1/16. As a side product of our approach, this new formulation puts the Landweber method within the multiresolution analysis framework and gives analytical representations of the chopped and nodded image g and the restored image f in terms of the tight framelet systems. The analysis tells us that, at each iteration, the Landweber method tries to improve the low-resolution approximation and the missing framelet coefficients of the true image, and it does so by combining the corresponding parts in the previous iterate with the given chopped and nodded image. In the next section, we will see that we denoise each f (n) by damping the framelet coefficients by using a framelet denoising scheme; see (3.10). By comparing (3.3) with (3.10), we see that the framelet coefficients are not denoised at all in (3.3), which is equivalent to (3.4). Our new framelet viewpoint allows us to incorporate more sophisticated nonlinear denoising schemes to identify and remove the noise more accurately than the Landweber method (3.4). 3.2. Framelet denoising scheme. Here we introduce the framelet denoising scheme. We can use any reasonable framelet systems for our noise removal. For simplicity, we just use the framelet system in √Proposition 2.1 by choosing K to be 1. √ 2 1 1 1 Its masks are α = { 4 , 2 , 4 }, β1 = {− 4 , 0, 42 }, and β2 = {− 14 , 12 , − 14 }. They are short masks and hence will give a more efficient algorithm in noise suppression. To perform a multilevel framelet decomposition without downsampling in noise removal, we need the mask α at level : ⎫ ⎧ ⎪ ⎪ ⎨1 1 1⎬ . α() = , 0, · · · , 0, , 0, · · · , 0, ⎪ ⎭ ⎩ 4 2 4 ⎪ 2(−1) −1
2(−1) −1
1214
J.-F. CAI, R. CHAN, L. SHEN, AND Z. SHEN ()
()
The masks β1 and β2 can be given similarly. Let ()
G0 = S+ (α() )
(3.5)
()
and Gi
()
= S+ (βi ),
i = 1, 2; = 1, 2, . . . .
Then the multilevel decomposition matrix to level L for this tight framelet system is ⎡ ⎤ :L−1 (L−) =0 G0 ⎢ (L) :L−1 (L−) ⎥ ⎡ ⎤ ⎢ G1 ⎥ GL =1 G0 ⎥ ⎢ ⎢ (L) :L−1 (L−) ⎥ ⎢ ⎥ ⎥ ⎢ ⎢ G2 ⎥ =1 G0 ⎥≡⎢ ⎥, (3.6) G=⎢ .. ⎥ ⎢ ⎢ ⎥ ⎥ ⎣ GH ⎦ ⎢ . ⎥ ⎢ (1) ⎥ ⎢ G1 ⎦ ⎣ (1) G2 = GT . By the tight framelet theory and the corresponding reconstruction matrix G in [18], we have =G *L GL + G ; GG H GH = I.
(3.7)
For an arbitrary f , the thresholding scheme D is given by the following formula: ; *L GL f + G D(f ) = G H TuH (GH f ).
(3.8) Here
TuH ((x1 , . . . , xl , . . . )T ) = (tu1 (x1 ), . . . , tul (xl ), . . . )T , with ⎞T
⎛ (3.9)
⎟ ⎜ uH = (u1 , . . . , ul , . . . )T = ⎝λL , . . . , λL , . . . , λ , . . . , λ , . . . , λ1 , . . . , λ1 ⎠ 2(N +2K)
2(N +2K)
2(N +2K)
and tλ (x) being tλ (x) = sgn(x) max(|x| − λ, 0), which is referred to as the soft thresholding. According to [23], the thresholding parameters λ are chosen to be λ = 2−/2 κ 2 log(N + 2K), where κ is the variance of the noise contained in f (n) estimated numerically by the method given in [23]. Our thresholding denoising scheme in the framelet domain is similar to that in the orthonormal wavelet domain [23]. As already pointed out by many authors (see, for example, [18, 29, 35]), framelets give better denoising results. 3.3. Main algorithm. By applying the thresholding scheme (3.8) on (3.3), we have our main algorithm for one-dimensional (1D) signals. Algorithm 1. (i) Let L be the number of framelet decomposition levels and f (0) be an initial guess. (ii) Iterate on n until convergence: (3.10) f
(n+1)
1 T (n) (n) (n) = P+ H0 D(H0 f ) + H1 D(H1 f ) + H2 ΛH2 f + A g , 16
RESTORATION OF CHOPPED-NODDED IMAGES BY FRAMELETS
1215
where D is the soft-thresholding operator given in (3.8) and P+ is the projection operator onto the set P+ of nonnegative vectors defined by P+ = {f : f ≥ 0 componentwise} . Both framelet masks in (2.6)–(2.8) and masks in the previous subsection for denoising have a total of 8 nonzero elements. Therefore the computational cost of the framelet decomposition and reconstruction in (3.10) needs 16(L + 1)(N + 2K) multiplicative operations, while the cost of applying D is O((N + 2K) log(N + 2K)). In contrast, the projected Landweber method requires (5N + 6K) multiplicative operations per iteration [2]. For 2D astronomical images, we first recall that the chop-and-nod process is a 1D process. In fact, in (1.4), we can restore the image columnwise for each fixed j. Thus there is no need to change the restoration part of Algorithm 1 . More precisely, i }2 for 2D images. However, we can still use the 1D tight framelet system {Hi , H i=0 to better capture the noise and artifacts in between the columns, we use a 2D noise by their 2D versions which can reduction scheme. More precisely, we replace G and G be obtained easily by using tensor products. For example, for 2D image f expressed as a matrix, the framelet decomposition of f gives the data: {Gf GT }, where G are given in (3.6). The denoising scheme in 2D is T T L GL f GT G T D(f ) = G L L + GL TuLH (GL f GH )GH H Tu (GH f GT )G T + G H Tu (GH f GT )G T . +G L
HL
L
HH
H
H
4. Analysis of Algorithm 1. In this section, we prove the convergence of Algorithm 1. We also show that its limit satisfies a certain minimization property. For simplicity, we give the 1D proof here. The proof can be extended to 2D images easily. Our proof is based on the framework of proximal forward-backward splitting proposed in [15] constructed under the theory of convex analysis and optimization. We first show that our algorithm can be written as an alternate direction algorithm for a minimization problem. Then we show that it converges. 4.1. An equivalent formulation. In the ? following, we partition any vector > x ∈ R(4L+3)(N +2K) into xT = xTH0 , xTH1 , xTH2 such that xH0 , xH1 ∈ R(2L+1)(N +2K) and xH2 ∈ RN +2K . Notice that by (3.1) we have ⎤ ⎡ ⎡ ⎤ ⎡ ⎤ 0 0 0 ? > 1 T 1 2 ⎣g/4⎦ . ∗ AT ∗ ⎣g/4⎦ = H2T ⎣g/4⎦ = H A g= 16 4 0 0 0 Therefore, in matrix form, the iteration (3.10) can be rewritten into ⎡ ⎤ Tu (GH0 f (n) ) (n) ⎥ 5 6⎢ ⎢ Tu (GH1 f⎡ ) ⎤⎥ (n+1) ⎥, 0G H 1G H 2 ⎢ (4.1) f = P+ H 0 ⎢ ⎥ ⎣ΛH2 f (n) + ⎣g/4⎦⎦ 0 where (4.2)
u≡
uL uH
=
0 uH
,
1216
J.-F. CAI, R. CHAN, L. SHEN, AND Z. SHEN
with uH given in (3.9) and by (3.8) and uL is a zero vector in RN +2K . Denote that ⎡ ⎤ Tu (GH0 f (n) ) ⎡ (n) ⎤ ⎢ Tu (GH1 f (n) ) ⎥ xH ⎢ ⎥ ⎡ ⎤ ⎢ (n)0 ⎥ (n) ⎢ ⎥ (4.3) 0 ⎥ ≡ x ≡ ⎣xH1 ⎦ ∈ R(4L+3)(N +2K) . ⎢ ⎣ΛH2 f (n) + ⎣g/4⎦⎦ (n) xH2 0 We are going to show that each component here is a solution to a minimization problem. First, since the soft thresholding is equivalent to a minimization procedure (cf. [15, 33]), i.e., @ A 1 tμ (z) = arg min (z − y)2 + |μy| , μ ≥ 0, y 2 we have
@ Tu (z) = arg min y
A 1 2
z − y 2 + diag(u)y 1 . 2
(n)
Therefore, {xHi }1i=0 are solutions of the following minimization problems: (4.4)
@
(n)
xHi ≡ Tu (GHi f (n) ) = arg min y
A 1
GHi f (n) − y 22 + diag(u)y 1 , 2
i = 0, 1.
Second, we can show that, for an arbitrary vector x, ⎡ ⎤ @ A 0 1
x − y 22 = Λx + ⎣g/4⎦ , (4.5) arg min y∈C 2 0 where
⎧ ⎡ ⎤⎫ 0 ⎬ ⎨ C = h ∈ RN +2K : (I − Λ)h = ⎣g/4⎦ . ⎩ ⎭ 0
Indeed, for any vector z ∈ C, we have
x − z 22 =
K
(xi − zi )2 +
i=1
=
K
N +K i=K+1
(xi − zi )2 +
i=1
N +K i=K+1
(xi − zi )2 +
N +2K
(xi − zi )2
i=N +K+1
(xi − gi−K /4)2 +
N +2K
(xi − zi )2
i=N +K+1
⎛ ⎡ ⎤⎞2 N +K 0 2 . ⎝ ⎣ ⎦ ⎠ ≥ (xi − gi−K /4) = x − Λx + g/4 0 i=K+1 2
Equation (4.5) shows that (4.6)
(n)
xH2
(n) xH2
is the solution of the minimization problem: ⎡ ⎤ @ A 0 1
H2 f (n) − y 22 . ≡ ΛH2 f (n) + ⎣g/4⎦ = arg min y∈C 2 0
RESTORATION OF CHOPPED-NODDED IMAGES BY FRAMELETS
1217
Define the indicator function ιC for the closed convex set C by 0, h ∈ C, (4.7) ιC (h) = +∞, h ∈ C. Equation (4.6) can be rewritten as @ (n)
xH2 = arg min
(4.8)
y
Denote that
A 1
H2 f (n) − y 22 + ιC (y) . 2
⎡ ⎤ GH0 B = ⎣GH1 ⎦ . H2
(4.9)
By (4.4) and (4.8), x(n) can be written as the solution of the minimization problem: (4.10) (n)
x
@ = arg min x
A 1 (n) 2
Bf − x 2 + diag(u)xH0 1 + diag(u)xH1 1 + ιC (xH2 ) . 2
By substituting (4.3) into (4.1), we obtain B5 6 C 0G H 1G H 2 x(n) = P+ [B T x(n) ]. f (n+1) = P+ H By the definition of P+ , f (n+1) is the solution of A @ 1 T (n)
B x − f 22 + ιP+ (f ) , (4.11) f (n+1) = arg min f 2 where ιP+ is the indicator function of P+ defined similar to (4.7). By combining (4.10) and (4.11), we can rewrite our iteration (3.10) in Algorithm 1 as (4.12) x(n) = arg minx 12 Bf (n) − x 22 + diag(u)xH0 1 + diag(u)xH1 1 + ιC (xH2 ) , f (n+1) = arg minf 12 B T x(n) − f 22 + ιP+ (f ) . 4.2. Convergence. To prove the convergence of the iteration (4.12), we recall the definitions of Moreau’s proximal operator and Moreau’s envelope originally introduced in [30, 31]. For any convex and lower semicontinuous function ξ, the proximal operator is defined by A @ 1
f − h 22 + ξ(h) . (4.13) proxξ (f ) ≡ arg min h 2 Moreau’s envelope, which is a convex and differentiable function, is defined by A @ 1 1 2 (4.14)
f − h 2 + ξ(h) . ξ(f ) ≡ min h 2 By Lemma 2.5 in [15], the gradient of the envelope 1 ξ is given by (4.15)
∇(1 ξ(f )) = f − proxξ (f ).
1218
J.-F. CAI, R. CHAN, L. SHEN, AND Z. SHEN
Define ϕ(x) ≡ diag(u)xH0 1 + diag(u)xH1 1 + ιC (xH2 ). By (4.12) and (4.13), we obtain f (n+1) = proxιP (B T x(n) ) = proxιP (B T proxϕ (Bf (n) )). +
+
Since, by (2.10) and (3.7), B T B = I, we have f (n+1) = proxιP (f (n) − B T Bf (n) + B T proxϕ (Bf (n) )) +
= proxιP (f (n) − B T (Bf (n) − proxϕ (Bf (n) ))). +
By (4.15) and the chain rule (as ∇ is the gradient with respect to f (n) ), (4.16)
∇(1 ϕ(Bf (n) )) = B T (Bf (n) − proxϕ (Bf (n) )).
Therefore, (4.17)
f (n+1) = proxιP (f (n) − ∇(1 ϕ(Bf (n) ))). +
Thus we see that (3.10) in Algorithm 1, which is equivalent to (4.12), is equivalent to (4.17). Let F1 (f ) ≡ ιP+ (f ) and F2 (f ) ≡ 1 ϕ(Bf ). Then (4.17) becomes f (n+1) = proxF1 (f (n) − ∇F2 (f (n) )). This iteration is the proximal forward-backward splitting in [15] for the minimization problem min{F1 (f ) + F2 (f )}, f
which is equivalent to minf ∈P+ F2 (f ). By (4.14), it is further equivalent to @ @ AA 1 (4.18) min min .
Bf − x 22 + diag(u)xH0 1 + diag(u)xH1 1 f ∈P+ x∈{x:xH2 ∈C} 2 We now show that the iteration (4.17), which is equivalent to Algorithm 1, converges to a minimizer of (4.18). For this, we need the following result of [15], which is stated here in the finite-dimensional case. Proposition 4.1. Consider the minimization problem minf ∈R(N +2K) {F1 (f ) + F2 (f )}, where F1 is a convex and lower semicontinuous function and F2 is a convex and differentiable function with a 1/ν-Lipschitz continuous gradient. Let 0 < μ < 2ν, and then, for any initial guess f (0) , the iteration f (n+1) = proxF1 (f (n) − μ∇F2 (f (n) )) converges to a minimizer of F1 (f ) + F2 (f ) whenever a minimizer exists. We first verify that the conditions on F1 and F2 are satisfied. Lemma 4.2. F1 (f ) ≡ ιP+ (f ) is a convex and lower semicontinuous function, and F2 (f ) ≡ 1 ϕ(Bf ) is a convex and differentiable function with a 1-Lipschitz continuous gradient.
RESTORATION OF CHOPPED-NODDED IMAGES BY FRAMELETS
1219
Proof. First, since P+ is a closed and convex set, ιP+ (f ) is a convex and lower semicontinuous function. Furthermore, since ϕ is lower semicontinuous and convex, by Lemma 2.5 in [15] the envelope function 1 ϕ(·) is always convex and differentiable; hence 1 ϕ(Bf ) is convex and differentiable. Next, we show the 1/ν-Lipshitz continuity of ∇(1 ϕ(Bf (n) )). For this, we note that, by Lemma 2.4 in [15], the inequality
(f − proxξ (f )) − (h − proxξ (h)) 2 ≤ f − h 2 holds for any convex and lower semicontinuous ξ. Hence by (4.16),
∇(1 ϕ(Bf )) − ∇(1 ϕ(Bh)) 2 = B T (Bf − proxϕ (Bf )) − B T (Bh − proxϕ (Bh)) 2 ≤ B T 2 (Bf − proxϕ (Bf )) − (Bh − proxϕ (Bh)) 2 ≤ B T 2 B(f − h) 2 ≤ B T 2 B 2 f − h 2 = f − h 2 . The last equality comes from the identity B T B = I, and hence B T 2 = B 2 = 1. Here the Lipshitz constant 1/ν = 1, and hence ν = 1. To show the convergence, it remains to prove the existence of a minimizer. Theorem 4.3. If K and N are relatively prime, then (4.18) has a minimizer. The proof of this theorem is long, so we put it in the appendix. We remark that the requirement on K and N can always be achieved by adjusting K or N . Now we state our convergence theorem. Theorem 4.4. When K and N are relatively prime, Algorithm 1 converges to a minimizer of (4.18) for arbitrary initial guess f (0) . Proof. By Lemma 4.2, both F1 and F2 satisfy the conditions in Theorem 4.1. Moreover, by Theorem 4.3, when K and N are relatively prime, a minimizer of minf {F1 (f ) + F2 (f )} exists. Therefore, by Theorem 4.1, (4.17) converges to a minimizer of (4.18) for any initial guess. As we have already shown, Algorithm 1 is equivalent to (4.17), and therefore Algorithm 1 converges to a minimizer of (4.18) for any initial guess f (0) . In (4.18), since part of the vector x is exactly the given data g, the term Bf −x 22 reflects the closeness of the solution Af to the given data g. Since Bf are the framelet (packet) coefficients of f , the terms also reflects the closeness of the framelet packet coefficients to x. The 1 norms of framelet (packet) coefficients are closely related to the Besov norm of f (see, e.g., [8, 25]). Hence the term 1
Bf − x 22 + diag(u)xH0 1 + diag(u)xH1 1 2 in (4.18) balances the closeness of the data and smoothness of the underlying solution. 5. Numerical results. In this section, we test Algorithm 1 against the projected Landweber method for 1D and 2D examples. For comparison, we also include the algorithms developed in [3, 5]. We have used the following two error measures proposed in [2]: 1. the relative restoration error (RRE) > ?
f (n) + mean(f ∗ − f (n) ) − f ∗ 2 ςn := ,
f ∗ 2 2. the relative discrepancy error (RDE): εn :=
Af (n) − g 2 .
g 2
1220
J.-F. CAI, R. CHAN, L. SHEN, AND Z. SHEN
1
1.2
2.5
2
1
0.8 1.5
0.8 1
0.6
0.6
0.5
0.4
0
0.4 −0.5
0.2 0.2 −1
0 1
38
165
202
−1.5 1
38
(a)
165
202
0 1
(b)
1
1
0.8
0.8
0.8
0.6
0.6
0.4
0.4
0.4
0.2
0.2
0.2
38
165
(d)
202
0 1
38
165
(e)
165
202
165
202
(c)
1
0 1
38
202
0
1
38
(f)
Fig. 5.1. Example 1. (a) Original signal; (b) chopped and nodded signal with Gaussian noise of standard deviation 0.02; (c) result by the projected Landweber method (64 iterations); (d) result by the algorithm in [5] (6 iterations); (e) result by the algorithm in [3] (4 iterations); and (f) result by Algorithm 1 (650 iterations).
Here f ∗ is the true signal and f (n) is the nth iterate. In [2], RRE is used for synthetic data while RDE is for real data. For comparison, for synthetic data, we will also give ςOR,n , which is the RRE computed by restricting both f ∗ and f (n) onto the observation region (OR), i.e., the support of g. For synthetic data we stop the iteration when ςn reaches its minimum value. For real data, we stop the iteration when |εn+1 − εn | < 10−3 . This stopping criterion was used in [3, 5]. The initial guess for all algorithms is set to be zero. For Algorithm 1, the number of decomposition level L in (3.6) is 5 in all experiments. We first test three 1D examples, the first two of which are given in [2]. In all three examples, N = 128 and K = 37. The true object f ∗ has 202 points, and the observation region lies between points 38 and 165. Example 1. The true object consists of two narrow Gaussian functions (simulating two bright stars over a black background), one inside and one outside the observation region (see Figure 5.1(a)). Example 2. The true object consists of one narrow Gaussian function (simulating a bright star) over a smooth background (see Figure 5.2(a)). Example 3. The true object consists of two narrow Gaussian functions (simulating two bright stars with different intensities) over a smooth background (see Figure 5.3(a)). White Gaussian noise with standard deviations σ = 0.01, 0.02, and 0.04 is added to the chopped and nodded signals (see Figures 5.1(b)–5.3(b)). The results for these three examples are tabulated in Tables 5.1–5.3. In all three examples, we can see the significant improvement of our method over the projected Landweber method and those in [3, 5]. We remark that the algorithm in [3] (resp., [5]) assumes that the signal f ∗ lives on [K + 1, N + K] and imposes the periodic (resp., Dirichlet) boundary
1221
RESTORATION OF CHOPPED-NODDED IMAGES BY FRAMELETS 1
2
1.5
1
0.8 1
0.8 0.6 0.5
0.6 0
0.4
0.4 −0.5
0.2 0.2
−1
0 1
38
165
202
−1.5 1
38
(a)
165
202
0
1
1
0.8
0.8
0.6
0.6
0.6
0.4
0.4
0.4
0.2
0.2
0.2
165
202
0 1
38
(d)
165
202
(c)
1
38
38
(b)
0.8
0 1
1
165
202
0
1
38
(e)
165
202
(f)
Fig. 5.2. Example 2. (a) Original signal; (b) chopped and nodded signal with Gaussian noise of standard deviation 0.02; (c) result by the projected Landweber method (38 iterations); (d) result by the algorithm in [5] (7 iterations); (e) result by the algorithm in [3] (3 iterations); and (f) result by Algorithm 1 (771 iterations). 1
1.2
2
1.5
1
0.8 1
0.8 0.6 0.5
0.6 0
0.4
0.4 −0.5
0.2 0.2
−1
0 1
38
165
202
−1.5 1
38
(a)
165
202
0 1
(b)
1
0.8
0.8
0.8
0.6
0.6
0.6
0.4
0.4
0.4
0.2
0.2
0.2
1
38
165
(d)
202
0 1
38
165
(e)
165
202
165
202
(c)
1
1
0
38
202
0
1
38
(f)
Fig. 5.3. Example 3. (a) Original signal; (b) chopped and nodded signal with Gaussian noise of standard deviation 0.02; (c) result by the projected Landweber method (38 iterations); (d) result by the algorithm in [5] (7 iterations); (e) result by the algorithm in [3] (3 iterations); and (f) result by Algorithm 1 (669 iterations).
1222
J.-F. CAI, R. CHAN, L. SHEN, AND Z. SHEN Table 5.1 Results for Example 1 by adding white noise with a varying standard derivation σ.
σ 0.01 0.02 0.04
Projected Landweber method ςn ςOR,n 0.1862 0.2409 0.1921 0.2423 0.2170 0.2561
Algorithm in [5]
Algorithm in [3]
ςn 0.6967 0.6983 0.7020
ςn 0.6968 0.6986 0.7024
ςOR,n 0.0438 0.0557 0.1035
ςOR,n 0.0344 0.0506 0.1148
Algorithm 1 ςn 0.0437 0.0496 0.1175
ςOR,n 0.0235 0.0334 0.1018
Table 5.2 Results for Example 2 by adding white noise with a varying standard derivation σ.
σ 0.01 0.02 0.04
Projected Landweber method ςn ςOR,n 0.2094 0.1558 0.2223 0.1635 0.2514 0.1848
Algorithm in [5]
Algorithm in [3]
ςn 0.3031 0.3039 0.3080
ςn 0.3632 0.3634 0.3647
ςOR,n 0.1968 0.1975 0.2052
ςOR,n 0.2724 0.2733 0.2767
Algorithm 1 ςn 0.0291 0.0368 0.0682
ςOR,n 0.0224 0.0255 0.0420
Table 5.3 Results for Example 3 by adding white noise with a varying standard derivation σ.
σ 0.01 0.02 0.04
Projected Landweber method ςn ςOR,n 0.2124 0.1568 0.2254 0.1644 0.2547 0.1857
Algorithm in [5]
Algorithm in [3]
ςn 0.3072 0.3080 0.3120
ςn 0.3925 0.3929 0.3944
ςOR,n 0.1982 0.1988 0.2065
ςOR,n 0.3074 0.3085 0.3121
Algorithm 1 ςn 0.0508 0.0695 0.0894
ςOR,n 0.0396 0.0507 0.0548
Table 5.4 Required numbers of iterations and the CPU time (in seconds) of four algorithms for three examples.
σ
Projected Landweber method Itr CPU time
Algorithm in [5]
Algorithm in [3]
Itr
Itr
CPU time
Itr
CPU time
4 4 3
0.0003 0.0005 0.0003
904 650 546
4.0630 2.9234 2.4531
3 3 3
0.0003 0.0004 0.0003
928 771 582
4.1656 3.4693 2.6312
3 3 3
0.0005 0.0005 0.0003
906 669 445
4.1172 3.2010 2.0625
0.01 0.02 0.04
148 64 39
0.0227 0.0095 0.0059
6 6 5
0.01 0.02 0.04
48 38 25
0.0073 0.0056 0.0033
7 7 6
0.01 0.02 0.04
48 38 25
0.0077 0.0058 0.0036
7 7 6
CPU time Example 1 0.0009 0.0009 0.0008 Example 2 0.0011 0.0010 0.0008 Example 3 0.0011 0.0009 0.0010
Algorithm 1
condition at K +1 to N +K. In contrast, we assume the Neumann boundary condition for the signal at indices 1 and N + 2K. We show the visual results in Figures 5.1– 5.3. Table 5.4 shows the numbers of iterations and the CPU time for generating the corresponding results listed in Tables 5.1–5.3. The CPU time is the average time of 100 runs of the algorithms on a 2.16 GHz Pentium-IV Dell laptop. We see that the timing of our algorithm is higher than those of Landweber-type methods but is quite manageable. Our method can be used as a good postprocessing method to clean up these kinds of infrared images.
RESTORATION OF CHOPPED-NODDED IMAGES BY FRAMELETS
(a)
(b)
1223
(c)
Fig. 5.4. K = 37 and N = 128. (a) The chopped and nodded image g; (b) result from the projected Landweber method (241 iterations and εn = 0.000993); and (c) result from Algorithm 1 (202 iterations and εn = 0.000994).
Finally we consider a real 2D image obtained from United Kingdom Infra-Red Telescope [2]. The results are given in Figure 5.4. It is clear from the figures that our results have much less noise. 6. Appendix. In this appendix, we prove Theorem 4.3, i.e., the existence of a minimizer of (4.18). Notice that (4.18) is equivalent to min F2 (f ),
(6.1)
f ∈P+
where
@
F2 (f ) ≡ 1 ϕ(Bf ) = min x
A 1
Bf − x 22 + diag(u)xH0 1 + diag(u)xH1 1 + ιC (xH2 ) . 2
Instead of considering the above minimization problem, we consider the following minimization problem first: (6.2)
min F2 (f ),
f ∈W
where W = w : wT 1 = 0 and 1 is the vector of all ones. We will then prove the existence of a minimizer of (6.2). Then we show that the existence of a minimizer of (6.2) implies the existence of a minimizer of (6.1). This concludes that (4.18) has a minimizer. To start, we need the following lemma on the eigenvalues and eigenvectors of the matrices appearing in the matrix B in (4.9). In what follows, the discrete cosine transform (DCT) matrix refers to the DCT matrix of type II in [27]. Lemma 6.1. The eigenvalues γi of the matrix H0 in (2.9) are given by (6.3)
γi = cos2
iKπ , 2(N + 2K)
i = 0, 1, . . . , N + 2K − 1,
and the corresponding eigenvectors are the columns of the DCT matrix. The eigen() () values γi of the matrices G0 in (3.5) are given by ()
γi
= cos2
i2−1 π , 2(N + 2K)
i = 0, 1, . . . , N + 2K − 1,
and the corresponding eigenvectors are also columns of the DCT matrix.
1224
J.-F. CAI, R. CHAN, L. SHEN, AND Z. SHEN
Proof. By the definition of H0 and Lemma 3.1 in [32], the matrix H0 can be diagonalized by the DCT matrix. By the formula (3.3) in [32], after calculation, we () can see that the eigenvalues of H0 are given exactly by (6.3). Since G0 are special () cases of H0 with K = 2−1 , the statements for G0 can be proved similarly. The following lemma shows the existence of a minimizer of (6.2). Lemma 6.2. If K and N are relatively prime, then (6.2) has a minimizer. Proof. Since F2 (f ) is lower semicontinuous and convex, in order to prove the existence of a minimizer of (6.2), by Theorem 2.5.1(ii) in [36], it suffices to show that F2 is coercive in W; i.e., if w ∈ W is such that w 2 → ∞, then F2 (w) → ∞. Let w ∈ W, and we define A @ 1 ∗ 2 x ≡ arg min
Bw − x 2 + diag(u)xH0 1 + diag(u)xH1 1 + ιC (xH2 ) . x 2 Then by the definition of B in (4.9) and (4.4), @ A 1 x∗H0 = arg min
GH0 w − xH0 22 + diag(u)xH0 1 = Tu (GH0 w). xH0 2 By (4.2), we have 1
Bw − x∗ 22 + diag(u)x∗H0 1 + diag(u)x∗H1 1 + ιC (x∗H2 ) 2 ≥ diag(u)x∗H0 1 = diag(uH )TuH (GH H0 w) 1 .
F2 (w) ≡
Let λm and λM be the smallest and largest entries in uH , respectively. Then F2 (w) ≥ λm GH H0 w 1 − λm λM (N + 2K) ≥ λm GH H0 w 2 − λm λM (N + 2K). Denote that v = H0 w. By (3.7), we obtain F2 (w) ≥ λm GH v 2 − λm λM (N + 2K) D = λm vT GTH GH v − λm λM (N + 2K) D = λm vT (I − GTL GL )v − λm λM (N + 2K). (6.4) D In order to estimate vT (I − GTL GL )v, we need to consider the eigenvalues and eigenvectors of I − GTL GL = I −
L 4
()
(G0 )T
=1
L−1 4
(L−)
G0
.
=0
()
By Lemma 6.1, all of the matrices G0 can be diagonalized by the DCT matrix. Therefore, the matrix I − GTL GL can be diagonalized by the DCT matrix, too, and the eigenvalues are given by 1−
L 4
()
(γi )2 ,
=0
which, by Lemma 6.1, equals 0 if and only if i = 0. Therefore, the null space of I − GTL GL is of dimension 1. It can be verified straightforwardly that (I − GTL GL )1 = 0.
RESTORATION OF CHOPPED-NODDED IMAGES BY FRAMELETS
1225
Hence, the subspace spanned by 1 is the null space of I − GTL GL . It implies that wT (I − GTL GL )w ≥ σ w 22
(6.5)
∀w ∈ W,
where σ is the second smallest eigenvalue of I − GTL GL . On the other hand, by (6.3), the eigenvalues of H0 are γi = cos2 θi , where θi ≡
π iK iKπ , = 2(N + 2K) 2 N + 2K
i = 0, 1, . . . , N + 2K − 1.
If K and N are relatively prime, then θi cannot be an odd integral multiple of Therefore, H0 is nonsingular, and the smallest eigenvalues
π 2.
γ = min γi > 0.
(6.6)
i
Furthermore, when K and N are relatively prime, θi cannot be an even integral multiple of π2 unless i = 0. It implies that 1 is a simple eigenvalue of H0 . It is easy to verify that 1 is the corresponding eigenvector. Therefore, W is an invariant subspace of H0 ; i.e, v = H0 w is also in W. Hence, by (6.5), we obtain vT (I − GTL GL )v ≥ σ v 22 . This together with (6.4), (6.6), and the definition of v imply that √ √ F2 (w) ≥ λm σ v 2 − λm λM (N + 2K) = λm σ H0 w 2 − λm λM (N + 2K) √ ≥ λm σγ w 2 − λm λM (N + 2K). Thus, F2 (w) goes to infinity when w 2 goes to infinity. It means that F2 is coercive in W. Hence a minimizer of (6.2) exists by Theorem 2.5.1(ii) in [36]. Next we show that the existence of a minimizer of (6.2) implies the existence of a minimizer of (6.1). Lemma 6.3. Assume that a minimizer of (6.2) exists. Then a minimizer of (6.1) also exists. Proof. First, we observe that, if the identity F2 (f + c1) = F2 (f ) ∀c ∈ R
(6.7)
holds for every f , the conclusion follows immediately. Indeed, let w∗ ∈ W be a minimizer of (6.2), i.e., F2 (w) ≥ F2 (w∗ ) for all w ∈ W. Let p∗+ = w∗ + ρ1, where ρ is the absolute value of the smallest entry of w∗ . It is obvious that p∗+ ∈ P+ . For any p+ ∈ P+ , we orthogonally decompose it into p+ = w + δ1, where w ∈ W. Then F2 (p+ ) = F2 (w + δ1) = F2 (w) ≥ F2 (w∗ ) = F2 (w∗ + ρ1) = F2 (p∗+ ), which implies that p∗+ is a minimizer of F2 (f ) in P+ , i.e., a minimizer of (6.1). Next, we verify the equality (6.7). For this, we first define @ A 1 (L) (L) (L)
y − xH0 22 + diag(uL )xH0 1 E0 (y) ≡ min (L) 2 xH 0
and
@ (H) E0 (y)
≡ min (H)
xH
0
A 1 (H) 2 (H)
y − xH0 2 + diag(uH )xH0 1 . 2
1226
J.-F. CAI, R. CHAN, L. SHEN, AND Z. SHEN (L)
By (4.2), uL is a zero vector. Therefore, E0 (y) = 0 for all y. Define @ A 1
y − xH1 22 + diag(u)xH1 1 E1 (y) ≡ min y 2 and
@ E2 (y) ≡ min y
A 1
y − xH2 22 + ιC (xH2 ) . 2
Since the minimization of each term in 1
Bf − x 22 + diag(u)xH0 1 + diag(u)xH1 1 + ιC (xH2 ) 2 is independent, we have (L)
(H)
F2 (f ) = E0 (GL H0 f ) + E0
(GH H0 f ) + E1 (GH1 f ) + E2 (H2 f ).
By direct calculation, one has H0 1 = 1,
H1 1 = 0,
H2 1 = 0,
GH 1 = 0.
This leads to GH H0 (f + c1) = GH H0 f ,
GH1 (f + c1) = GH1 f ,
H2 (f + c1) = H2 f
for all c ∈ R. Hence (L)
(H)
F2 (f + c1) = E0 (GL H0 (f + c1)) + E0 (GH H0 (f + c1)) +E1 (GH1 (f + c1)) + E2 (GH2 (f + c1)) (H)
= E0 =
(GH H0 f ) + E1 (GH1 f ) + E2 (GH2 f ) (L) (H) E0 (GL H0 f ) + E0 (GH H0 f ) + E1 (GH1 f )
+ E1 (GH2 f ) = F2 (f ),
which completes the proof of (6.7). By combining Lemmas 6.2 and 6.3, we obtain Theorem 4.3. Acknowledgments. We thank Prof. Bertero for introducing us to the topic and his advice and help in our research. We thank the referees for providing us with valuable comments and insightful suggestions which have brought improvements to several aspects of this manuscript. REFERENCES [1] M. Bertero and P. Boccacci, Introduction to Inverse Problems in Imaging, Institute of Physics, Bristol, 1998. [2] M. Bertero, P. Boccacci, F. D. Benedetto, and M. Robberto, Restoration of chopped and nodded images in infrared astronomy, Inverse Problems, 15 (1999), pp. 345–372. [3] M. Bertero, P. Boccacci, A. Custo, C. De Mol, and M. Robberto, A Fourier-based method for the restoration of chopped and nodded images, Astron. Astrophys., 406 (2003), pp. 765–772. [4] M. Bertero, P. Boccacci, and M. Robberto, An inversion method for the restoration of chopped and nodded images, in Infrared Astronomical Instrumentation, Proc. SPIE 3354, A. M. Fowler, ed., 1998, pp. 877–886. [5] M. Bertero, P. Boccacci, and M. Robberto, Inversion of second-difference operators with application to infrared astronomy, Inverse Problems, 19 (2003), pp. 1427–1443. [6] G. Beylkin, R. Coifman, and V. Rokhlin, Fast wavelet transforms and numerical algorithms I, Comm. Pure Appl. Math., 44 (1991), pp. 141–183. [7] L. Borup, R. Gribonval, and M. Nielsen, Tight wavelet frames in Lebesgue and Sobolev spaces, J. Funct. Spaces Appl., 2 (2004), pp. 227–252.
RESTORATION OF CHOPPED-NODDED IMAGES BY FRAMELETS
1227
[8] L. Borup, R. Grivonbal, and M. Nielsen, Bi-framelet systems with few vanishing moments characterize Besov spaces, Appl. Comput. Harmon. Anal., 17 (2004), pp. 3–28. [9] S. Serra-Capizzano, A note on antireflective boundary conditions and fast deblurring models, SIAM J. Sci. Comput., 25 (2003), pp. 1307–1325. [10] A. Chai and Z. Shen, Deconvolution: A wavelet frame approach, Numer. Math., 106 (2007), pp. 529–587. [11] R. Chan, T. Chan, L. Shen, and Z. Shen, Wavelet algorithms for high-resolution image reconstruction, SIAM J. Sci. Comput., 24 (2003), pp. 1408–1432. [12] R. Chan, S. D. Riemenschneider, L. Shen, and Z. Shen, Tight frame: The efficient way for high-resolution image reconstruction, Appl. Comput. Harmon. Anal., 17 (2004), pp. 91– 115. [13] A. Cohen, M. Hoffmann, and M. Reiss, Adaptive wavelet Galerkin methods for linear inverse problems, SIAM J. Numer. Anal., 42 (2004), pp. 1479–1501. [14] R. Coifman and D. Donoho, Translation-Invariant De-Noising, in Wavelets and Statistics, Vol. 103, A. Antoniadis and G. Oppenheim, eds., Springer-Verlag, Berlin, 1995, pp. 125– 150. [15] P. Combettes and V. Wajs, Signal recovery by proximal forward-backward splitting, Multiscale Model. Simul. 4 (2005), pp. 1168–1200. [16] I. Daubechies, Ten Lectures on Wavelets, CBMS-NSF Regional Conf. Ser. in Appl. Math. 61, SIAM, Philadelphia, 1992. [17] I. Daubechies, M. Defrise, and C. D. Mol, An iterative thresholding algorithm for linear inverse problems with a sparsity constraint, Comm. Pure Appl. Math., 57 (2004), pp. 1413– 1541. [18] I. Daubechies, B. Han, A. Ron, and Z. Shen, Framelets: MRA-based constructions of wavelet frames, Appl. Comput. Harmon. Anal., 14 (2003), pp. 1–46. [19] C. de Boor, R. DeVore, and A. Ron, On the construction of multivariate (pre)-wavelets, Constr. Approx., 9 (1993), pp. 123–166. [20] C. De Mol and M. Defrise, A note on wavelet-based inversion methods, in inverse problem, image analysis and medical imaging, in Contemp. Math., American Mathematics Society, Providence, RI, 2002, pp. 85–96. [21] F. Di Benedetto, The m-th difference operator applied to l2 functions on a finite interval, Linear Algebra Appl., 366 (2003), pp. 173–198. [22] D. Donoho, Nonlinear solution of linear inverse problems by wavelet-vagulette decomposition, Appl. Comput. Harmon. Anal., (1995), pp. 101–126. [23] D. Donoho and I. Johnstone, Ideal spatial adaptation by wavelet shrinkage, Biometrika, 81 (1994), pp. 425–455. [24] M. Figueiredo and R. D. Nowak, An EM algorithm for wavelet-based image restoration, IEEE Trans. Image Process., 12 (2003), pp. 906–916. [25] Y. Hur and A. Ron, Caplets: Wavelet Representations without Wavelets, preprint, 2005. [26] R. Jia and Z. Shen, Multiresolution and wavelets, Proc. Edinb. Math. Soc., 37 (1994), pp. 271– 300. [27] T. Kailath and V. Olshevsky, Displacement structure approach to discrete-trigonometrictransform based preconditioners of G. Strang type and of T. Chan type, SIAM J. Matrix Anal. Appl., 26 (2005), pp. 706–734. ´, Deconvolution by thresholding in mirror wavelet bases, [28] J. Kalifa, S. Mallat, and B. Rouge IEEE Trans. Image Process., 12 (2003), pp. 446–457. [29] S. Mallat, A Wavelet Tour of Signal Processing, 2nd ed., Academic Press, New York, 1999. [30] J.-J. Moreau, Fonctions convexes duales et points proximaux dans un espace hilbertien, C.R. Acad. Sci. Paris S´er. A Math., 255 (1962), pp. 1897–2899. [31] J.-J. Moreau, Proximit´ e et dualit´ e dans un espace hilbertien, Bull. Soc. Math. France, 93 (1965), pp. 273–299. [32] M. Ng, R. Chan, and W. 
Tang, A fast algorithm for deblurring models with Neumann boundary conditions, SIAM J. Sci. Comput., 21 (2000), pp. 851–866. [33] M. Nikolova, Local strong homogeneity of a regularized estimator, SIAM J. Appl. Math., 61 (2000), pp. 633–658. [34] A. Ron and Z. Shen, Affine system in L2 (Rd ): The analysis of the analysis operator, J. Funct. Anal., 148 (1997), pp. 408–447. [35] L. Shen, I. Kakadiaris, M. Papadakis, I. Konstantinidis, D. Kouri, and D. Hoffman, Image denoising using a tight frame, IEEE Trans. Image Process., 15 (2006), pp. 1254– 1263. ˇlinescu, Convex Analysis in General Vector Spaces, World Scientific, River Edge, NJ, [36] C. Za 2002.
SIAM J. SCI. COMPUT. Vol. 30, No. 3, pp. 1228–1250
c 2008 Society for Industrial and Applied Mathematics
MULTILEVEL LINEAR SAMPLING METHOD FOR INVERSE SCATTERING PROBLEMS∗ JINGZHI LI† , HONGYU LIU‡ , AND JUN ZOU† Abstract. A novel multilevel algorithm is presented for implementing the widely used linear sampling method in inverse obstacle scattering problems. The new method is shown to possess asymptotically optimal computational complexity. For an n × n sampling mesh in R2 or an n × n × n sampling mesh in R3 , the proposed algorithm requires one to solve only O(nN −1 ) far-field equations for a RN problem (N=2,3), and this is in sharp contrast to the original version of the method which needs to solve nN far-field equations. Numerical experiments are presented to illustrate the promising feature of the algorithm in significantly reducing the computational cost of the linear sampling method. Key words. multilevel linear sampling method, inverse scattering problems, optimal computational complexity AMS subject classifications. 78A45, 35R30 DOI. 10.1137/060674247
1. Introduction. In their original work [5], Colton and Kirsch developed a “simple” method for the shape reconstruction in inverse scattering problems which is nowadays known as the linear sampling method (LSM). The method has been extensively studied and extended in several directions; we refer the reader to [8] and [12] for a comprehensive review. The current work is mainly concerned with an implementation technique of the LSM. We take as our model problem the inverse acoustic sound-soft obstacle scattering by time-harmonic plane waves. But like the original LSM, our algorithm can be equally applied to other inverse problems, such as the acoustic sound-hard obstacle scattering or electromagnetic obstacle scattering. Consider a sound-soft scatterer D, which is assumed to be the open complement of an unbounded domain of class C 2 in RN (N = 2, 3), that is, we include scattering from more than one (but finitely many) component obstacle in our analysis. Given an incident field ui , the presence of the obstacle will give rise to a scattered field us . Throughout, we take ui (x) = exp{ikx · d} to be a time-harmonic plane wave, where √ i = −1, d ∈ RN −1 , and k > 0 are, respectively, the incident direction and wave number. We define u(x) = ui (x) + us (x) to be the total field, which satisfies the following Helmholtz system (cf. [6], [7]): ⎧ 2 ¯ ⎪ in RN \D, ⎨ Δu + k u = 0 (1.1) u(x) = 0 on ∂D, ⎪ ⎩ (N −1)/2 ∂us ( ∂r − ikus ) = 0, limr→∞ r where r = |x| for any x ∈ RN . The direct problem (1.1) has been well understood, and ¯ ∩ C(RN \D). Moreover, it is known that there exists a unique solution u ∈ C 2 (RN \D) ∗ Received by the editors November 7, 2006; accepted for publication (in revised form) November 15, 2007; published electronically March 21, 2008. http://www.siam.org/journals/sisc/30-3/67424.html † Department of Mathematics, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong (
[email protected],
[email protected]). The third author’s work was substantially supported by Hong Kong RGC grants 404105 and 404606. ‡ Department of Mathematics, University of Washington, Box 354350, Seattle, WA 98195 (hyliu@ math.washington.edu).
1228
MULTILEVEL LINEAR SAMPLING METHOD
1229
the asymptotic behavior at infinity of the scattered wave us is governed by A @ eik|x| 1 s u∞ (ˆ as |x| → ∞, (1.2) u (x) = x) + O (N −1)/2 |x| |x| uniformly for all directions x ˆ = x/|x| ∈ SN −1 . The analytic function u∞ (ˆ x) is defined N −1 and called the far-field pattern (see [7]). We shall write on the unit sphere S x; D, d, k) to specify its dependence on the observation direction x ˆ, the obstacle u∞ (ˆ D, the incident direction d, and the wave number k. The inverse obstacle scattering x; d, k) for x ˆ, d ∈ SN −1 problem is to determine ∂D from the measurement of u∞ (ˆ and fixed k > 0. This problem has been playing an indispensable role in many areas of sciences and technology such as radar and sonar, medical imaging, geophysical exploration, and nondestructive testing (see, e.g., [7]). Next, we shall give a brief description of the LSM for solving this important inverse problem. First, we introduce the far-field operator F : L2 (SN −1 ) → L2 (SN −1 ) defined by (1.3) (F g)(ˆ x) := u∞ (ˆ x, d)g(d)ds(d), x ˆ ∈ SN −1 . SN −1
The LSM uses g as an indicator and solves the following far-field equation: (1.4)
(F g)(ˆ x) = Φ∞ (ˆ x, z),
x ˆ ∈ SN −1 , z ∈ RN ,
where x, z) = γ exp{−ikˆ x · z} Φ∞ (ˆ √ with γ = 1/4π in R3 and γ = eiπ/4 / 8πk in R2 . The following theorem forms the basis of the LSM (see, e.g., Theorem 4.1 in [8]). Theorem 1.1. Assume that k 2 is not a Dirichlet eigenvalue for −Δ in D. Then the following hold: 1. For z ∈ D and a fixed ε > 0 there exists a gεz ∈ L2 (SN −1 ) such that (1.5)
F gεz − Φ∞ (·, z) L2 (SN −1 ) < ε and lim gεz L2 (SN −1 ) = ∞.
z→∂D
¯ and any given ε > 0, every gεz ∈ L2 (SN −1 ) that satisfies 2. For z ∈ RN \D
F gεz − Φ∞ (·, z) L2 (SN −1 ) < ε ensures lim gεz L2 (SN −1 ) = ∞.
ε→0
The LSM elegantly turns the reconstruction of the shape of obstacle D into the process of numerically determining the indicator function g z in Theorem 1.1. The general procedure is stated as follows (see also Chapter 4 in [3]). Algorithm LSM. 1. Select a mesh Th of sampling points in a region Ω which contains D.
1230
JONGZHI LI, HONGYU LIU, AND JUN ZOU
2. Use the Tikhonov regularization and the Morozov discrepancy principle to compute an approximate solution g z to the far-field equation (1.4) for each mesh point z of Th . 3. Select a cut-off value c ; then, count z ∈ D if g z L2 (SN −1 ) ≤ c and z ∈ D if
g z L2 (SN −1 ) > c. A mathematical justification was given in [1] for the use of the LSM to determine D through the information of the indicator function g z . The LSM has been proven to be numerically very successful and shown to possess several remarkable merits; see [8], [12], and the references therein. In fact, it has become one of the most important reconstruction algorithms in inverse obstacle scatterings. But there are also some drawbacks for the method; e.g., it fails to work when meeting some interior eigenvalue problem, and there is no standard strategy to choose the cut-off values; see [3]. Though the computational complexity is also one of its drawbacks, it is considerably more efficient and less computationally expensive than many other nonlinear methods such as the one based on the Lippmann–Schwinger equation (cf. [10]). Nevertheless, the LSM can still be very expensive in numerical implementations, as one has to solve the far-field equation (1.4) for each sampling point. There are several works which attempt to either circumvent the cost of sampling or improve the image results; see, e.g., [2], [4]. In the present work, we shall make an effort to reduce the computational complexity of the method. As one can see, for an R2 problem, by using an n × n mesh, we have to solve one linear integral equation (1.4) at each mesh point, which amounts to n2 totally, while it is n3 in the R3 case. And the computational counts can be huge in certain particular situations such as the following: When little a priori information on D is available, the initial guess region Ω should be chosen to be moderately larger than D. In order to achieve a high-resolution reconstruction of D, one needs a very fine mesh over Ω, thus leading to a very large n. When the scatterer consists of multiple component obstacles with the distance between each two being several times larger than the sizes or diameters of the two obstacles, the initial region Ω is then required to contain all these components, which means that Ω must be chosen to be much larger than actually needed. Our main goal in this paper is to provide a fast numerical procedure to implement the LSM, and thus further consolidate its effectiveness, and, more importantly, to rigorously justify that the numerical algorithm has optimal computational complexity in terms of the mesh size n. To the best of our knowledge, this important issue on computational complexity of the LSM has not been seriously investigated yet. In the next section, we will address the motivations and implementation details of the new algorithm and then prove its asymptotically optimal computational complexity. In section 3, some numerical experiments are performed to illustrate the promising feature of the algorithm in significantly reducing the computational cost of the LSM. 2. Multilevel linear sampling method. In this section, we will present a multilevel linear sampling method (MLSM), together with some theoretical analysis. For the sake of simplicity, we will carry out our discussion in R2 , but all of the subsequent results can be straightforwardly extended to the three-dimensional case. Let D ⊂ R2 be a bounded domain as shown in Figure 1 (top-left) and suppose that we are going to use an n × n mesh for the LSM with (n − 1)2 cells of equal size. Clearly, in order to get some satisfactory reconstruction for the profile of D, the mesh must be moderately fine in some sense. 
However, by performing the LSM on this fine mesh, we have to spend considerable computational cost in finding the indicator
1231
MULTILEVEL LINEAR SAMPLING METHOD
remote cell
D
D
inner cell
remote cell inner cell remote cell at fine level
D
remote cell at fine level remote cell at fine level
inner cell inner cell at fine level
inner cell
remote cell
inner cell at fine level
Fig. 1. Label-and-remove scheme.
functions in those “remote” cells which are far away from the scatterer D, or in those “inner” cells which lie deeply inside D; e.g., see the red and blue colored regions in Figure 1 (top-right). So, it would be very advantageous if we could get rid of the remote and inner cells in our computations. This can be naturally realized with a coarser mesh. In fact, this is reasonable since the indicator function g z has very large norms for z in those remote cells while very small norms for z in those inner cells. Moreover, it is noted that the cut-off value c for the LSM in the fine mesh is still applicable on the coarsened mesh. Here, we would like to remark that, as pointed out in [3], the choice of c is rather heuristic and there is still no standard strategy for it. To be more precise, we first choose a coarse grid covering the sampling region Ω and perform the LSM on this coarse level. Then based on the results of the LSM we will label and remove those remote and inner cells. Then, we refine the mesh on the remaining sampling region and perform the LSM again in this fine level to label and remove those fine remote and inner cells; e.g., see Figure 1 (bottom-left) for those remote and inner cells. By doing this labeling and removing technique in a multilevel way, we can reconstruct the profile of D more accurately. We would like to remark that in many cases the number of trimmed cells could be very large and thus save a lot of computational time, especially when the scatterer is composed of multiple components with the distance between each two being several times larger than the
1232
JONGZHI LI, HONGYU LIU, AND JUN ZOU
sizes or diameters of the two components (see, e.g., Figure 1 (bottom-right)). Now, we are ready to formulate our algorithm in detail. In the following, the sampling region Ω is always chosen to be a square in R2 . Then, let {Tk }L k=1 be a nested sequence of meshes on the sampling domain Ω such that Tk+1 is a refinement of Tk for k = 1, . . . , L − 1. Throughout, we assume that Tk+1 is an nk+1 × nk+1 mesh while Tk is an nk × nk mesh, where nk+1 = 2nk − 1 for k = 1, . . . , L − 1. That is, we refine the mesh Tk by equally subdividing every subsquare in Tk into four subsquares of Tk+1 . Then if the mesh length of Tk is hk for k = 1, . . . , L − 1, then hk+1 = hk /2. Now, the MLSM can be formulated as follows. Algorithm MLSM. 1. Set k = 0 and choose an initial mesh for the sampling region Ω. 2. Apply the LSM scheme on the kth-level mesh to investigate those mesh points which have not been examined previously. 3. For a given cut-off value c, independent of the level k, classify and label the kth-level subsquares (cells) into three sets—namely, remote cells, boundary cells, and inner cells—based on the cut-off value principle in the LSM. A cell is labeled as “remote” if the norms of the indicator functions at the vertices of the cell are all larger than c, while a cell is labeled as “inner” if the norms of the indicator functions at the vertices of the cell are all less than or equal to c, and other remaining cells will be labeled as “boundary cells.” Then remove the remote and inner cells. 4. Refine the remaining sampling mesh. 5. Set k = k + 1, and if k ≤ L, go to Step 2. It is remarked that in order to exclude the extreme case that the obstacle is trapped into a single subsquare of the sampling mesh, the initial mesh should be chosen to be mildly fine such that both “remote” cells and “inner” cells exist. Next, we will show that the MLSM algorithm is asymptotically optimal in computational complexity. For this purpose, we first present some lemmas. In the following, we denote by Γ a C 2 -smooth curve in R2 which forms the boundary of a bounded domain G. For any h > 0, we define two curves parallel to Γ:
(2.1)
Γ+ h :={x + hν(x), x ∈ Γ, and ν(x) is the unit normal to Γ at x directed to the exterior of G},
(2.2)
Γ− h :={x − hν(x), x ∈ Γ, and ν(x) is the unit normal to Γ at x directed to the exterior of G}.
Then we have the following lemma. + Lemma 2.1. There exist constants h+ 0 > 0 and 0 < α0 ≤ 1 such that (2.3)
+ dist(Γ, Γ+ h ) ≥ α0 h
whenever 0 < h < h+ 0.
+ Proof. Assume contrarily that there are no constants h+ 0 and α0 such that (2.3) ˆ 1 and ˆ 1 = 1/2, there must exist an h1 such that 0 < h1 < h holds. Then, for h + + + ˆ dist(Γ, Γh1 ) < h1 /2; otherwise Lemma 2.1 is true with h0 = h1 and α0 = 1/2. ˆ 2 and ˆ 2 = min{h1 , 1/22 }, there must exist an h2 such that 0 < h2 < h Next, for h + + + 2 2 ˆ dist(Γ, Γh2 ) < h2 /2 ; otherwise Lemma 2.1 is true with h0 = h2 and α0 = 1/2 . Conˆ k = min{hk−1 , 1/2k } (k ≥ tinuing with this procedure, we have by induction that for h k ˆ 3), there exists an hk such that 0 < hk < hk and dist(Γ, Γ+ hk ) < hk /2 . So we obtain
MULTILEVEL LINEAR SAMPLING METHOD
1233
a positive sequence {hk }∞ k=1 such that (2.4)
dist(Γ, Γ+ hk ) lim = 0. k→∞ hk
lim hk = 0 and
k→∞
+ + 2 Since both Γ and Γ+ hk are compact sets in R , there exist xk ∈ Γ and yk ∈ Γhk for any k ∈ N such that + dist(Γ, Γ+ hk ) = |xk − yk |.
(2.5) Set (2.6)
yk = yk+ − hk ν(yk+ ) ∈ Γ
for k ∈ N,
+ where ν(yk+ ) is the unit outward normal to Γ+ hk at yk . By extracting subsequences if necessary, we may assume that
(2.7)
lim xk = x0
k→∞
and
lim yk = y0 .
k→∞
By (2.6), (2.7), we see that limk→∞ yk+ = y0 , which together with (2.4), (2.5) implies that (2.8)
x0 = y0 = x∗
for some x∗ ∈ Γ. Noting that ν(x) is continuous, for an arbitrary ε > 0 there exists δ > 0 such that (2.9)
|ν(x) − ν(x∗ )| < ε ∀ x ∈ Bδ (x∗ ) ∩ Γ,
where Bδ (x∗ ) = {x ∈ R2 ; |x − x∗ | < δ}. By (2.7) and (2.8), we know that there exists kε ∈ N such that xk ∈ Bδ (x∗ ), yk ∈ Bδ (x∗ ) ∀k > kε . Furthermore, by (2.4), we can assume that kε is chosen such that (2.10)
dist(Γ, Γ+ 1 hk ) < hk 2
∀k > kε ,
1 hk 2
∀k > kε .
namely (2.11)
|xk − yk+ | <
It is noted that by (2.11) we must have xk = yk for all k > kε , since otherwise we would have |xk − yk+ | = |yk − yk+ | = hk . Let τ (x) be the tangential to Γ at x, and we know from (2.9) that (2.12)
|τ (x) − τ (x∗ )| < ε ∀ x ∈ Bδ (x∗ ) ∩ Γ.
−−→ → Next, we investigate the angle ∠(− x− k yk , τ (yk ))(∈ [0, π/2]) between the two vectors xk yk and τ (yk ) for k > k . From the geometric interpretation of Lagrange’s theorem, we
1234
JONGZHI LI, HONGYU LIU, AND JUN ZOU
→ know that there exists ξk ∈ Bδ (x∗ ) ∩ Γ such that τ (ξk ) is parallel to − x− k yk . By (2.12), we know that (2.13) λk := τ (ξk ), τ (yk ) = τ (x∗ ) − O(ε), τ (x∗ ) − O(ε) = 1 − O(ε)
as ε → +0,
where ·, · is the inner product in R2 . Hence, (2.14)
√ → θk := ∠(− x− k yk , τ (yk )) = arccos λk = O( ε)
as ε → +0.
Now, let xk yk yk+ denote the triangle with vertices xk , yk , and yk+ . It is easily seen −→ → −− + x− that the interior angle of xk yk yk+ at yk , namely ∠(− k yk , yk yk ), is either π/2 + θk or π/2 − θk . Then, by (2.10) and (2.14), we take ε0 > 0 to be sufficiently small and kε0 ∈ N be sufficiently large such that for all k > kε0 , (2.15)
|xk − yk+ | 1 < hk 2
and
sin
+π
, 1 − θk > . 2 2
−→ → −− + Then, in the case that ∠(− x− k yk , yk yk ) = π/2 + θk > π/2, |xk − yk+ | > |yk − yk+ | = hk , −→ → −− + and in the case that ∠(− x− k yk , yk yk ) = π/2 − θk , |xk − yk+ | |xk − yk+ | 1 = + ≥ sin(π/2 − θk ) > . hk 2 |yk − yk | In both cases, we have a contradiction with the first inequality in (2.15). This completes the proof of Lemma 2.1. − Lemma 2.2. There exist constants h− 0 > 0 and 0 < α0 ≤ 1 such that (2.16)
− dist(Γ, Γ− h ) ≥ α0 h
whenever 0 < h < h− 0.
Proof. The lemma can be proved in a most similar way to that of Lemma 2.1. Lemma 2.3. There exist constants h0 > 0 and α0 > 0 such that √ dist(Γ, Γ± 2h whenever 0 < h < h0 . α0 h ) ≥ Proof. Set √ α0 =
2
min(α0+ , α0− )
and h0 =
− min(h+ 0 , h0 ) , α0
where α0± and h± 0 are constants given in Lemmas 2.1 and 2.2. Then, it is easy to − verify that when h < h0 , namely α0 h < min(h+ 0 , h0 ), √ + 2h (2.17) dist(Γ, Γ+ α0 h ) ≥ α0 α0 h ≥ and (2.18)
− dist(Γ, Γ− α0 h ) ≥ α0 α0 h ≥
√
2h.
MULTILEVEL LINEAR SAMPLING METHOD
1235
The following theorem is crucial to our subsequent investigation. Theorem 2.4. Let T be an n × n mesh on the sampling region Ω. There exist two constants κ0 > 0 and n0 ∈ N such that ∂D lies on at most κ0 n subsquares of T for all n ≥ n0 . Proof. To ease the discussion, we assume that the scatterer D is composed of a single component obstacle. That is, D is a bounded domain. But we remark that our subsequent proof can be easily modified to the case that D has finitely many connected components. Let Γ := ∂D in Lemma 2.3. Take n ∈ N to be sufficiently large such that the mesh length h of T satisfies h < h0 . Suppose that ∂D lies on m subsquares of T . By (2.17) and (2.18), it is easily seen that these m subsquares must lie in the ring-shaped − region formed by Γ+ α0 h and Γα0 h . Let s0 denote the area occupied by this ring-shaped region, ω0 = |Ω| be the area of Ω, and η0 = |∂D| be the length of the boundary curve ∂D. Then, we have mh2 ≤ s0 ≤ 2η0 α0 h; hence m≤
2η0 α0 . h
By noting n2 h2 = ω0 , we further have 2η0 α0 m≤ √ n. ω0 Now, the theorem is seen to be held with F E 2η0 α0 κ0 = √ , ω0 where for a positive number a, a denotes the smallest integer not less than a. The above theorem shows that for a sufficient fine n × n mesh, ∂D lies on at most O(n) subsquares. We next show that ∂D also lies on at least O(n) subsquares. Theorem 2.5. Let T be an n × n mesh on the sampling region Ω. There exist two constants β0 > 0 and m0 ∈ N such that ∂D lies on at least β0 n subsquares of T for all n ≥ m0 . Proof. As in Theorem 2.4, we need only to consider the simple case that D is a connected bounded domain, and that the subsequent proof is easily modified to the case that D has finitely many connected components. By our assumption on the sampling mesh, we may choose T to be fine enough such that there is at least one inner cell. Take one of the edges of this cell and denote its connected extension in D by AB with the two endpoints A and B lying on ∂D (see Figure 2). We suppose that AB lies on m subsquares of T . Let A0 , A1 , . . . , Am be the vertices of those subsquares, all lying on the extended line of AB and ordered in the direction from A to B (see Figure 2). By our organization, A either is A0 or lies between A0 and A1 , and B either is Am or lies between Am−1 and Am , whereas Aj , j = 1, 2, . . . , m − 1, all lie inside of D. We denote by l0 , l1 , . . . , lm those line segments of T in Ω which, respectively, pass through A0 , A1 , . . . , Am . Noting that D is connected, by the fundamental property of connected set, we know that lj , j = 1, 2, . . . , m − 1 must have intersection with ∂D. We denote by A 1 , A 2 , . . . , A m−1 those intersection
1236
JONGZHI LI, HONGYU LIU, AND JUN ZOU
A0
l0
A A’ 1 A
l1
1
A’2
A2
l2
D A
m−1
B
A’m−1
l
m−1
lm
Am
Fig. 2. Illustration of the proof of Theorem 2.5.
points which lie on one side of AB. It is remarked that Aj for j = 1, . . . , m − 1 is not necessarily unique. Now, by the connectedness of ∂D, we know that between A and A 1 , B and A m−1 , and A j and A j+1 for j = 1, . . . , m − 2 there must be a connected part of ∂D which lies in the stripped region, respectively formed by l0 and l1 , lm−1 and lm , and lj and lj+1 for j = 1, . . . , m − 2. Therefore, if we suppose that ∂D lies on m subsquares of T , then there must be at least one from those subsquares which lies in the stripped region formed by lj and lj+1 for j = 0, 1, . . . , m − 1. Hence, we have m ≤ m. Next, we set A0 = A and Am = B, and by noting that |Aj Aj+1 | ≤ h for j = 0, 1, . . . , m − 1, we have m−1
|Aj Aj+1 | ≤ mh,
i.e., |AB| ≤ mh.
j=0
Finally, we have by noting n2 h2 = |Ω| that (2.19)
m ≥ m ≥ |AB|
1 ≥ β0 n, h
with β0 = |AB|/ |Ω|. This completes the theorem. Theorems 2.4 and 2.5 reveal that in order to achieve a good reconstruction of the scatterer D, we need to at least solve O(n) far-field equations (1.4) with a fine n × n sampling mesh. Now, we are ready to present the main result—that the algorithm MLSM possesses the asymptotically optimal computational complexity. Theorem 2.6. Consider an L-level MLSM algorithm with a nested sequence of sampling mesh {Tk }L k=1 . Suppose that for each k, Tk is of size nk × nk with mesh length hk such that 0 < h1 < h0 , where h0 is given in Lemma 2.3 corresponding to ∂D. Then, by using the MLSM to reconstruct ∂D, the far-field equation (1.4) is solved O(nL ) times in total. Proof. We denote by Ck , k = 1, 2, . . . , L, the points to be investigated on the kth level. By Theorem 2.4, we know that ∂D lies on at most κ0 nk subsquares of Tk . Next, when we turn to the (k + 1)th level, by our description of the MLSM, we need
MULTILEVEL LINEAR SAMPLING METHOD
1237
unexplored point
checked point
Fig. 3. Illustration of the proof of Theorem 2.6.
only to investigate the mesh points on those subsquares of Tk+1 which have not been examined before and can be easily seen to be at most 5κ0 nk mesh points as shown in Figure 3. Hence, we have (2.20)
Ck ≤ Ck−1 + 5κ0 nk−1 ,
k = 2, . . . , L,
where nk−1 = (nk + 1)/2. Recursively, we can obtain CL ≤ CL−1 + 5κ0 nL−1 , CL−1 ≤ CL−2 + 5κ0 nL−2 , ······ ······ C2 ≤ C1 + 5κ0 n1 . By summing up the above inequalities we get CL ≤ C1 + 5κ0 [nL−1 + nL−2 + · · · + n1 ]. 'k Since it is easy to deduce that nL−k = nL /2k + j=1 1/2j for k = 1, 2, . . . , L − 1, we see that CL ≤ C1 + 5κ0 (L + nL ), i.e., CL ≤ O(nL )
for sufficiently large nL ∈ N.
This means that the MLSM has the asymptotically optimal computational complexity. This completes the proof. Remark 2.7. As we have pointed out earlier, all of the results in this section can be modified to the R3 case, where the MLSM algorithm needs to solve far-field equations (1.4) O(n2L ) times.
1238
JONGZHI LI, HONGYU LIU, AND JUN ZOU
3. Numerical experiments with discussions. In this section, we perform three tests to illustrate the effectiveness and efficiency of the newly proposed MLSM algorithm. All of the programs in our experiments are written in MATLAB and run on a Pentium 3GHz PC. The scatterer in system (1.1) will be chosen to be the kite-shaped object which has been widely tested in inverse scattering problems (see, e.g., [5], [8], and [12]). There are a total of three tests to be considered, which are, respectively, referred to as SK, SKn, and DKn. For experiments SK and SKn, the scatterer D is composed of a single kite. However, in experiment SK, we would not add noise to the synthetic far-field data, and in experiment SKn, we add random noise. For experiment DKn, the scatterer D is composed of two kites, and the synthetic far-field data is also added with random noise. The other parameters chosen for these experiments are listed in Table 1. Table 1 Experimental parameters for the tests.
Sampling domain Ω Incident wave number k Finest level nL Upper threshold c1 Lower threshold c2 Noise level δ No. of incident directions No. of observation directions
Test 1 (SK)
Test 2 (SKn)
Test 3 (DKn)
[−3, 3] × [−3, 3] 1 129 0.03 0.03 0
[−3, 3] × [−3, 3] 1 129 0.032 0.032 0.10
[−4, 8] × [−4, 8] 1 129 0.03 0.02 0.05
32 32
It is noted that for experiment DKn, we have taken two cut-off values, c1 and c2 , c1 < c2 , instead of only one cut-off value, c. Since in DKn, the scatterer is composed of two kites, it is better to take a range of cut-off values, i.e., [c1 , c2 ], which enables us to get a buffer region of locating the boundary of the underlying object. Like in the original LSM, we label as inner those points with the norm of distributed density g less than c1 and remote those points with the norm of distributed density g greater than c2 . The synthetic far-field data are generated by solving the layer potential operator equation with Nystr¨ om’s method (see section 3.5, Chapter 3 in [7]). We compute the far-field patterns at 32 equidistantly distributed observation points (cos tj , sin tj ), tj = 2jπ/32, j = 0, 1, . . . , 31, and 32 equidistantly distributed incident directions (cos τj , sin τj ), τj = 2jπ/32, j = 0, 1, . . . , 31. The far-field patterns we obtain are subjected pointwise to uniform random noise. The uniform random noise is added according to the following formula: uδ∞ = u∞ + δr1 |u∞ | exp(iπr2 ), where r1 and r2 are two uniform random numbers, both ranging from –1 to 1, and δ is the noise level. For each mesh point z, the corresponding far-field equation (1.4) is solved by using the Tikhonov regularization method (cf. [3]), with the regularization parameter determined by the standard L-curve method. In tests 1 and 2, the kite-shaped object D is shown in Figure 4 with the boundary ∂D given by the following parametric form: (3.1)
x(t) = (cos t + 0.65 cos 2t − 0.65, 1.5 sin t),
0 ≤ t ≤ 2π.
MULTILEVEL LINEAR SAMPLING METHOD
1239
3
2
1
0
−1
−2
−3 −3
−2
−1
0
1
2
3
Fig. 4. Kite-shaped obstacle.
8
6
4
2
0
−2
−4 −4
−2
0
2
4
6
8
Fig. 5. Two kite-shaped objects.
For test 3, the two kite-shaped objects are shown in Figure 5; they are derived from the kite in Figure 4 by rigid motions: the bottom-left one is the kite of Figure 4 after a counterclockwise rotation by π/2, and the top-right one is the kite of Figure 4 after a counterclockwise rotation by π/4 followed by a displacement of 5 units in both the horizontal and vertical directions. We now turn to experiment SK. First, we solve the far-field equation (1.4) on the finest mesh (129 × 129) to find $g_z$ for each sampling mesh point z. In order to view the behavior of $g_z$ over the sampling mesh, we plot the negative logarithm of its $L^2$-norm, namely $-\log \|g_z\|_{L^2(S^1)}$, in a 3D graph (see Figure 6(a)), though such scalings are not needed in our MLSM procedure for those tests.
Fig. 6. Test 1 (SK): Surface and contours of the negative logarithm of the $L^2$-norm of $g_z$ with and without regularization. (a) The negative logarithm of the $L^2$-norm of $g_z$ plotted in 3D. (b) Contours of the negative logarithm of the $L^2$-norm of $g_z$. (c) The same surface without regularization in deriving $g_z$. (d) The corresponding contours without regularization in deriving $g_z$.
The corresponding contours of $-\log \|g_z\|_{L^2(S^1)}$ are given in Figure 6(b) for a 2D view. Then we can use the cut-off value principle to detect the kite, and this gives the original LSM; we refer the reader to [9] for a glance at the numerical outcome. We remark that the regularization is crucial in the numerical procedure. Even in this noise-free case with δ = 0, regularization is still necessary, since the exact far-field data $u_\infty$ is not available and is computed here numerically by Nyström's method, which introduces approximation errors in addition to the normal rounding-off errors. We have also plotted the negative logarithm of the $L^2$-norm of $g_z$ obtained by solving the far-field equation without regularization, from which it can be seen that the reconstruction would be rather unsatisfactory; see the 3D display and 2D contour curves in Figures 6(c) and 6(d), respectively. This phenomenon reflects the ill-posed nature of the problem at hand and is consistent with the one observed in [9]. Next, we apply our (6-level) MLSM to this problem with $n_L := n_6 = 129$ and plot the evolution of the detected boundary of the underlying object level by level. Figures 7 and 8 demonstrate that the boundary of the kite-shaped object is approximated in a clearly improving manner as we go from coarse to fine meshes, while the points examined are kept within the order $O(n_L)$.
Fig. 7. MLSM iteration for test 1 (SK), levels 1–3. Figures on the left: Refinement of the previous coarse grid. Figures on the right: The remote and inner cells are removed.
Fig. 8. MLSM iteration for test 1 (SK), levels 4–6. Figures on the left: Refinement of the previous coarse grid. Figures on the right: The remote and inner cells are removed.
Table 2
Number of points checked by the MLSM at each level and the total number of points checked by MLSM and LSM in the tests.

              Level of grid
              1     2     3     4     5     6      MLSM    LSM
    Test 1    25    8     25    56    121   248    483     16641
    Test 2    25    8     34    61    129   260    517     16641
    Test 3    25    24    36    71    163   375    694     16641
This first experiment SK suggests that the MLSM performs as well as the original LSM but at a significantly reduced computational cost. We have also counted the numbers of points exploited by the MLSM and listed them in Table 2. For test 1, the count is 483, roughly one thirtieth of that for the LSM, which is 16641 (= 129 × 129). Here we would like to point out an important observation about the implementation of the MLSM: if, at a certain level, a cell is labeled as remote (or inner) and we trim it from this level, but part of its boundary is left for the next level, then all of the sampling points of this fine level lying on that part of the boundary should be labeled as having been exploited, since they are obviously remote (resp., inner). We take the number 8 in the second level for test 1 as an illustration. From the left subfigure of level 2 in Figure 7, we know that a total of 16 new points come out from the refinement of the mesh. But eight of these sixteen points, which lie on the outermost boundary of the second-level mesh, need not be exploited by our MLSM: they lie on the boundary of remote cells trimmed at the first level, so we know they are remote points without examining them. The same rule applies in interpreting the numbers at the finer levels and in the other tests. Next, we add 10% uniform random noise to the far-field data and run the MLSM again for the SKn case. The evolution of the boundary of the kite is illustrated in Figures 9 and 10. The total number of points examined to locate the boundary is 517, almost the same as in the SK case (see Table 2). Then, we test our MLSM for the DKn case with 5% uniform random noise added to the far-field data and plot the evolution of the boundary of the two kites in Figures 11 and 12. Note that there is a slight increase in the number of exploited points, which is due to the buffer range of cut-off values used in this test. Finally, for each experiment we plot all the subsquares checked during the MLSM procedure in a single figure; see Figure 13 for test 1 (SK), Figure 14 for test 2 (SKn), and Figure 15 for test 3 (DKn). These figures give a concrete picture of how the MLSM identifies the boundary of the underlying object. For comparison, we list in Table 2 the number of points examined at each level of the MLSM procedure and the total numbers of points examined by MLSM and LSM for all three tests. It can be seen from Table 2 that the number of points examined at each level is about $\sigma_0 n_k$ with $\sigma_0 \approx 2$, which is consistent with our theoretical analysis in section 2. To corroborate the asymptotically optimal computational complexity, we perform the three tests again with the finest-level mesh size $n_L$ being 33, 65, 129, and 257, respectively. For all of these experiments, we start with the coarsest mesh given by $n_1 = 5$. Furthermore, we let the cut-off value c be the average of $c_1$ and $c_2$ in Table 1; this is to eliminate the possible deterioration due to the additional points checked in the buffer region.
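The trimming rule described above admits a compact implementation. The following MATLAB fragment is a hedged sketch of one level, not the authors' code: cells is a list of active squares, and the helpers corners (the mesh points of a cell), refine (subdivision into subsquares), and gnorm (returning $\|g_z\|$ for a mesh point, cached so that no point is examined twice) are hypothetical names:

    kept = {};
    for m = 1:numel(cells)
        v = cellfun(@(p) gnorm(p), corners(cells{m}));   % examine the cell's mesh points
        if all(v < c1) || all(v > c2)   % wholly inner or wholly remote: trim the cell
            continue
        end
        kept = [kept, refine(cells{m})];   % a boundary cell survives to the next level
    end
    cells = kept;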
Fig. 9. MLSM iteration for test 2 (SKn), levels 1–3. Figures on the left: Refinement of the previous coarse grid. Figures on the right: The remote and inner cells are removed.
Fig. 10. MLSM iteration for test 2 (SKn), levels 4–6. Figures on the left: Refinement of the previous coarse grid. Figures on the right: The remote and inner cells are removed.
Fig. 11. MLSM iteration for test 3 (DKn), levels 1–3.
Fig. 12. MLSM iteration for test 3 (DKn), levels 4–6.
Fig. 13. One kite-shaped object (SK).
Fig. 14. One kite-shaped object (SKn).
The total number of points examined and the time for each test are listed in Table 3. Moreover, we compare the time cost of MLSM and LSM only for test 2: since the computational cost of investigating one point is relatively fixed, the LSM times for tests 1 and 3 would differ only slightly from that for test 2. As shown in Table 3, the computational cost of the MLSM grows linearly as $n_L$ increases, compared with the quadratic growth of the time consumption of the traditional LSM. It can also be seen that the number of far-field equations solved in each test is around $\zeta_0 n_L$ with $\zeta_0 \approx 4$, which further verifies our results in section 2.
Fig. 15. Two kite-shaped objects (DKn).

Table 3
Comparison of different nL in the tests.

            MLSM                                                LSM
            Test 1            Test 2            Test 3          Test 2
    nL      Pts.   Time(s)    Pts.   Time(s)    Pts.   Time(s)  Pts.    Time(s)
    33      114    0.61       128    0.67       138    0.72     1089    5.98
    65      235    1.29       257    1.40       266    1.43     4225    24.07
    129     483    2.77       517    2.96       516    2.95     16641   99.16
    257     989    5.60       1037   5.81       1016   5.75     66049   396.90
4. Concluding remarks. A novel multilevel linear sampling method (MLSM) is investigated in detail for reconstructing scatterers from far-field measurements. Both theoretical analysis and numerical experiments demonstrate the asymptotically optimal computational complexity of the MLSM. The new method is mainly a new implementation of the linear sampling method (LSM) for inverse scattering problems. It can significantly reduce the computational cost of the LSM without any deterioration in the quality of the reconstructed scatterers. Exactly as the LSM, the MLSM applies not only to inverse acoustic sound-soft obstacle scattering but equally to inverse acoustic sound-hard obstacle scattering, inverse electromagnetic obstacle scattering, and the factorization method in inverse scattering problems (see [11]).
Acknowledgment. The authors would like to thank the two anonymous referees for their constructive and thoughtful comments, which have led to a significant improvement of the results and presentation of the current work.
REFERENCES
[1] T. Arens, Why linear sampling works, Inverse Problems, 20 (2004), pp. 163–173.
[2] R. Aramini, M. Brignone, and M. Piana, The linear sampling method without sampling, Inverse Problems, 22 (2006), pp. 2237–2254.
[3] F. Cakoni and D. Colton, Qualitative Methods in Inverse Scattering Theory, Springer-Verlag, Berlin, 2006.
[4] D. Colton, H. Haddar, and M. Piana, The linear sampling method in inverse electromagnetic scattering theory, special section on imaging, Inverse Problems, 19 (2003), pp. S105–S137.
[5] D. Colton and A. Kirsch, A simple method for solving inverse scattering problems in the resonance region, Inverse Problems, 12 (1996), pp. 383–393.
[6] D. Colton and R. Kress, Integral Equation Methods in Scattering Theory, John Wiley and Sons, New York, 1983.
[7] D. Colton and R. Kress, Inverse Acoustic and Electromagnetic Scattering Theory, 2nd ed., Springer-Verlag, Berlin, 1998.
[8] D. Colton and R. Kress, Using fundamental solutions in inverse scattering, Inverse Problems, 22 (2006), pp. R49–R66.
[9] D. Colton, M. Piana, and R. Potthast, A simple method using Morozov's discrepancy principle for solving inverse scattering problems, Inverse Problems, 13 (1997), pp. 1477–1493.
[10] A. Kirsch, An Introduction to the Mathematical Theory of Inverse Problems, Springer-Verlag, New York, 1996.
[11] A. Kirsch and N. Grinberg, The Factorization Method for Inverse Problems, Oxford University Press, Oxford, UK, to appear.
[12] R. Potthast, A survey on sampling and probe methods for inverse problems, Inverse Problems, 22 (2006), pp. R1–R47.
SIAM J. SCI. COMPUT. Vol. 30, No. 3, pp. 1251–1277
© 2008 Society for Industrial and Applied Mathematics

A DISTRIBUTED SDP APPROACH FOR LARGE-SCALE NOISY ANCHOR-FREE GRAPH REALIZATION WITH APPLICATIONS TO MOLECULAR CONFORMATION∗

PRATIK BISWAS†, KIM-CHUAN TOH‡, AND YINYU YE§

Abstract. We propose a distributed algorithm for solving Euclidean metric realization problems arising from large 3-D graphs, using only noisy distance information and without any prior knowledge of the positions of any of the vertices. In our distributed algorithm, the graph is first subdivided into smaller subgraphs using intelligent clustering methods. Then a semidefinite programming relaxation and gradient search method are used to localize each subgraph. Finally, a stitching algorithm is used to find affine maps between adjacent clusters, and the positions of all points in a global coordinate system are then derived. In particular, we apply our method to the problem of finding the 3-D molecular configurations of proteins based on a limited number of given pairwise distances between atoms. The protein molecules, all with known molecular configurations, are taken from the Protein Data Bank. Our algorithm is able to reconstruct reliably and efficiently the configurations of large protein molecules from a limited number of pairwise distances corrupted by noise, without incorporating domain knowledge such as the minimum separation distance constraints derived from van der Waals interactions.

Key words. semidefinite programming, anchor-free graph realization, molecular conformation

AMS subject classifications. 49M27, 90C06, 90C22, 90C26, 92E10, 92-08

DOI. 10.1137/05062754X
1. Introduction. Semidefinite programming (SDP) relaxation techniques can be used for solving a wide range of Euclidean distance geometry problems, such as data compression, metric-space embedding, covering and packing, and chain folding [12, 20, 23, 38]. More recently, SDP has been applied to machine learning problems such as nonlinear dimensionality reduction [35]. One particular instance of the distance geometry problem arises in sensor network localization [5, 6, 8] where, given the positions of some sensors in a network (the sensor nodes' known positions are called anchors) and a set of pairwise distances between sensors and between sensors and anchors, the positions of all the sensor nodes in the network have to be computed. This problem can be abstracted into a 2-D or 3-D graph realization problem, that is, finding the positions of the vertices of a graph given some constraints on the edge lengths. Another instance of the graph realization problem arises in molecular conformation, specifically, protein structure determination. It is well known that protein structure determination is of great importance for studying the functions and properties of proteins. In order to determine the structure of protein molecules, nuclear magnetic resonance (NMR) experiments are performed to estimate lower and upper bounds on interatomic distances [13, 18]. Additional knowledge about the bond angles and lengths between atoms also yields information about relative positions of atoms.

∗Received by the editors March 24, 2005; accepted for publication (in revised form) November 27, 2007; published electronically March 21, 2008. http://www.siam.org/journals/sisc/30-3/62754.html
†Electrical Engineering, Stanford University, Stanford, CA 94305 ([email protected]).
‡Department of Mathematics, National University of Singapore, 2 Science Drive 2, Singapore 117543, Singapore ([email protected]).
§Management Science and Engineering and, by courtesy, Electrical Engineering, Stanford University, Stanford, CA 94305 ([email protected]).
In the simplest form, given a subset of all the pairwise distances between the atoms of a molecule, the objective is to find a conformation of all the atoms in the molecule such that the distance constraints are satisfied. Since this problem is also an abstraction of graph realization, most of the concepts that were developed for the sensor network localization problem can be used directly for the molecular conformation problem. In fact, the terms localization and realization will be used interchangeably throughout this paper. However, some crucial improvements to the basic algorithm have to be made before it can be applied to the molecular conformation problem. In [8], the problem was described in two dimensions, although it can be extended to higher dimensions, as is illustrated in this paper. The improvements suggested in [5] provide much better performance for noisy distance data. However, the SDP methods described in [5] and [8] solved problems of relatively small sizes, with the number of points typically below 100 or so. A distributed version of the algorithm was described in [7] for larger sets of points, in which the larger problem was broken down into smaller subproblems corresponding to local clusters of points. The assumption made in the large-scale sensor network problem was the existence of anchor nodes all across the network, i.e., points whose positions are known prior to the computation. These anchor nodes play a major role in determining the positions of unknown nodes in the distributed algorithm, since the presence of anchors in each cluster is crucial in facilitating the clustering of points and the process of stitching together different clusters. The anchor nodes are used to create clusters by including all sensors within one or two hops of their radio range. When the positions for unknown nodes are computed, they are already in the global coordinate system, since the anchor positions have been incorporated into the computation. Furthermore, the presence of anchors helps to dampen the propagation of errors in the estimated positions to other clusters when the problem is solved by a distributed algorithm. Without anchors, we need alternative methods of clustering the points and stitching the clusters together. In this paper, we propose a distributed algorithm for solving large-scale noisy anchor-free Euclidean metric realization problems arising from 3-D graphs, to address precisely the issues just mentioned. In the problem considered here, there are no a priori determined anchors, as is the case in the molecular conformation problem. Therefore the strategy of creating local clusters for distributed computation is different. We perform repeated matrix permutations on the sparse distance matrix to form local clusters within the structure. The clusters are built keeping in mind the need to maintain a reasonably high degree of overlap between adjacent clusters; i.e., there should be enough common points considered between adjacent clusters. This is used to our advantage when we combine the results from different clusters during the stitching process. Within each cluster, the positions of the points are first estimated by solving an SDP relaxation of a nonconvex minimization problem that seeks to minimize the sum of errors between given and estimated distances. A gradient-descent minimization method, first described in [22], is used as a postprocessing step after the SDP computation to further reduce the estimation errors.
The refinement of errors by a local optimization method is especially important, as the distance data in a cluster may be too sparse, or the noise in the distance measures too high, to determine a unique realization in the required dimensional space. In that case, the SDP solution lies in a higher-dimensional space, and a simple projection of that solution into a lower-dimensional space does not yield correct positions. Fortunately, it can often serve as a good starting iterate for a local optimization method to obtain a lower-dimensional realization that satisfies the given distance constraints.
After the gradient-descent postprocessing step, poorly estimated points within each cluster are isolated, and their positions are recomputed when more points are correctly estimated. The solution of each individual cluster yields a different orientation in its local coordinate system, since there are no anchors to provide global coordinate information. The local configuration may be rotated, reflected, or translated while still respecting the distance constraints. This was not a problem when anchors were available, as they would ensure that each cluster followed the same global coordinate system. Instead, in this paper, we use a least squares–based affine mapping between the local coordinates of common points in overlapping clusters to create a coherent conformation of all points in a global coordinate system. We test our algorithm on protein molecules of varying sizes and configurations. The protein molecules, all with known molecular configurations, are taken from the Protein Data Bank [4]. Our algorithm is able to reliably reconstruct the configurations of large molecules with thousands of atoms quite efficiently and accurately based on given upper and lower bounds on limited pairwise distances between atoms. To the best of our knowledge, there are no computational results reported in the literature for determining molecular structures of this scale by using only sparse and noisy distance information. However, there is still room for improvement in our algorithm in the case of very sparse or highly noisy distance data. For simplicity, our current SDP-based distributed algorithm does not incorporate the lower bound constraints generated from van der Waals (VDW) interactions between atoms. However, such constraints can naturally be incorporated into the SDP model. Given that our current algorithm performs quite satisfactorily without the VDW lower bound constraints, we are optimistic that with the addition of such constraints and other improvements in the future, our algorithm would perform well even for the more difficult case of highly noisy and very sparse distance data. Section 2 of this paper describes related work in distance geometry, SDP relaxations, and molecular conformation, and situates our work in that context. Section 3 elucidates the distance geometry problem and the SDP relaxation models. A preliminary theory for anchor-free graph realization is also developed. In particular, regularization ideas for improving the SDP solution quality in the presence of noise are discussed. The intelligent clustering and cluster stitching algorithms are introduced in sections 4 and 5, respectively. Postprocessing techniques to refine the SDP estimated positions are discussed in section 6. Section 7 describes the complete distributed algorithm. Section 8 discusses the performance of the algorithm on protein molecules from the Protein Data Bank [4]. Finally, in section 9, we conclude with a summary of the paper and outline some work in progress to improve our distributed algorithm.
2. Related work. Owing to their wide applicability, a lot of attention has been paid to Euclidean distance geometry problems. The use of SDP relaxations to solve this class of problems involves relaxing the nonconvex quadratic distance constraints into convex linear constraints over the cone of positive semidefinite matrices. It is illustrated through sensor network problems in [8].
Similar relaxations have also been developed in [2, 21, 35]. As the number of points and pairwise distances increases, it becomes computationally intractable to solve the entire SDP in a centralized fashion. With special focus on anchored sensor network localization problems, a distributed technique is proposed in [7]. This involves solving smaller clusters of points in parallel and using information from points in different clusters in subsequent iterations to refine the solutions.
Building on the ideas in [7], the authors in [11] proposed an adaptive rule-based clustering strategy to sequentially divide a global anchored graph localization problem (in two dimensions) into a sequence of subproblems. The technique in localizing each subproblem is similar to that in [7], but the clustering and stitching strategies are different. It is reported that the improved techniques can localize anchored graphs very efficiently and accurately. Interestingly, not only does the SDP relaxation method solve for unknown points, but the solution matrix also provides an error measure for each estimation. Furthermore, the dual of the SDP relaxation gives insights into the localizability of the given set of points. In fact, the issue of localizability—that is, the existence of a unique configuration of points satisfying the distance constraints—is closely linked to the rigidity of the graph structure underlying the set of points. These issues are explored in detail in [28]. Many approaches have been developed for the molecular distance geometry problem. An overall discussion of the methods and related software is provided in [39]. Some of the approaches are briefly described below. When the exact distances between all pairs of n points are given, a valid configuration can be obtained by computing the eigenvalue decomposition of an $(n-1)\times(n-1)$ matrix (which can be obtained through a linear transformation of the squared distance matrix). Note that if the configuration of n points can be realized in a d-dimensional space, then the aforementioned matrix must have rank d, and the eigenvectors corresponding to the d nonzero eigenvalues give a valid configuration. So a decomposition can be found and a configuration constructed in $O(n^3)$ arithmetic operations. The EMBED algorithm [13] exploits this idea for sparse and noisy distances by first performing bound smoothing, that is, preprocessing the available data to remove geometric inconsistencies and finding valid estimates for unknown distances. Then a valid configuration is obtained through the eigenvalue decomposition of the inner product matrix, and the estimated positions are then used as the starting iterate for local optimization methods on certain nonlinear least squares problems. Classical multidimensional scaling (MDS) is the general class of methods that takes inexact distances as input and extracts a valid configuration from them based on minimizing the discrepancy between the inexact measured distances and the distances corresponding to the estimated configuration. The inexact distance matrix is referred to as a dissimilarity matrix in this framework. Since the distance data is also incomplete, the problem also involves completing the partial distance matrix. The papers [31, 32, 33] consider this problem of completing a partial distance matrix, as well as the more general problem of finding a distance matrix of prescribed embedding dimension that satisfies specified lower and upper bounds, for use in MDS-based algorithms. In [32], good conformation results were reported for molecules with a few hundred atoms each, under the condition that all the pairwise distances below 7 Å (specified in the form of lower and upper bounds with a gap of 0.02 Å) were given. Also worth noting in this regard is the distance geometry program APA described in [26], which applies the idea of a data box, a rectangular parallelepiped of dissimilarity matrices that satisfy some given upper and lower bounds on distances.
An alternating projection–based optimization technique is then used to solve for both a dissimilarity matrix that lies within the data box and a valid embedding of the points, such that the discrepancy between the dissimilarity matrix and the distances from the embedding is minimized.
The ABBIE software package [19], on the other hand, exploits the concepts of graph rigidity to solve for smaller subgraphs of the entire graph defined by the points and distances, finally combining the subgraphs to find an overall configuration. It is especially advantageous to solve for smaller parts of the molecule and to provide certificates confirming that the distance information is not enough to uniquely determine the positions of certain atoms. Our approach tries to retain these advantages by solving the molecule in a distributed fashion, that is, solving smaller clusters and later assembling them together. Some global optimization methods attempt to attack the problem of finding a conformation which fits the given data as a large nonlinear least squares problem. For example, a global smoothing and continuation approach is used in the DGSOL algorithm [24]. To prevent the algorithm from getting stuck at one of the large number of possible local minimizers, the nonlinear least squares problem (with an objective that is closely related to the refinement stage of the EMBED algorithm) is mollified to smoother functions so as to increase the chance of locating the global minimizer. However, it can still be difficult to find the global minimizer from various random starting points, especially with noisy distance information. More refined methods that try to circumvent such difficulties have also been developed in [25], though with limited success. Another example is the GNOMAD algorithm [36], also a global optimization method, which takes special care to satisfy the physically inviolable minimum separation distance, or VDW, constraints. For GNOMAD, the VDW constraints are crucial in reducing the search space in the case of very sparse distance data. Obviously, the VDW constraints can easily be incorporated into any molecular conformation problem that is modeled by an optimization problem. In [36], the success of the GNOMAD algorithm was demonstrated on four molecules (the largest one has 5591 atoms) under the assumption that 30–70% of the exact pairwise distances below 6 Å were given. In addition, the given distances included those from covalently bonded atoms and those between atoms that share covalent bonds with the same atom. Besides optimization-based methods, there are geometry-based methods proposed for the molecular conformation problem. The effectiveness of simple geometric build-up (also known as triangulation) algorithms has been demonstrated in [14] and [37] for molecules when exact distances within a certain cut-off radius are all given. Basically, this approach involves using the distances between an unknown atom and previously determined neighboring atoms to find the coordinates of the unknown atom. The algorithm progressively updates the number of known points and uses them to compute points that have not yet been determined. However, the efficacy of such methods for large molecules with very sparse and noisy data has not yet been demonstrated. In this paper, we will attempt to find the structures of molecules with sizes varying from hundreds to several thousands of atoms, given only upper and lower bounds on some limited pairwise distances between atoms. The approach described in this paper also performs distance matrix completion, similar to some of the methods described above. The extraction of the point configuration after matrix completion is still the same as in the MDS methods.
The critical difference lies in the use of an SDP relaxation for completing the distance matrix. Furthermore, our distributed approach avoids the issue of intractability for very large molecules by splitting the molecule into smaller subgraphs, much like the ABBIE algorithm [19], and then stitching together the different clusters. Some of the atoms which are incorrectly estimated are solved separately using the correctly estimated atoms as anchors. The latter approach bears some similarities to the geometric build-up algorithms.
In this way, we have adapted and improved some of the techniques used in previous approaches, but we have also introduced new ideas generated from recent advances in SDP to attack the twin problems of dealing with noisy and sparse distance data and the computational intractability of large-scale molecular conformation.
3. The SDP model. We first present a nonconvex quadratic programming formulation of the position estimation problem (in the molecular conformation context) and then introduce its SDP relaxation model. Assume that we have m known points (called anchors), $a_k \in \mathbb{R}^d$, $k = 1, \ldots, m$ (note that m = 0 if no anchor exists), and n unknown points, $x_j \in \mathbb{R}^d$, $j = 1, \ldots, n$. Suppose that we know the upper and lower bounds on the Euclidean distances between some pairs of unknown points, specified in the edge set $\mathcal{N}$, and the upper and lower bounds on the Euclidean distances between some pairs of unknown points and anchors, specified in the edge set $\mathcal{M}$. For the rest of the point pairs, the upper and lower bounds are the trivial bounds ∞ and 0. We define the lower bound distance matrices $\underline{D} = (\underline{d}_{ij})$ and $\underline{H} = (\underline{h}_{ik})$, where $\underline{d}_{ij}$ is specified if $(i,j) \in \mathcal{N}$ and $\underline{d}_{ij} = 0$ otherwise, and $\underline{h}_{ik}$ is specified if $(i,k) \in \mathcal{M}$ and $\underline{h}_{ik} = 0$ otherwise. The upper bound distance matrices $\bar{D} = (\bar{d}_{ij})$ and $\bar{H} = (\bar{h}_{ik})$ are defined similarly, with $\bar{d}_{ij} = 0$ if $(i,j) \notin \mathcal{N}$ and $\bar{h}_{ik} = 0$ if $(i,k) \notin \mathcal{M}$. We let $D = (d_{ij})$ be the mean of $\underline{D}$ and $\bar{D}$, i.e., $d_{ij} = (\underline{d}_{ij} + \bar{d}_{ij})/2$. The realization problem for the graph $(\{1,\ldots,n\}, \mathcal{N}; \{a_1,\ldots,a_m\}, \mathcal{M})$ is to determine the coordinates of the unknown points $x_1, \ldots, x_n$, given the upper and lower bound distance matrices $\underline{D}$, $\bar{D}$, $\underline{H}$, and $\bar{H}$. Let $X = [x_1\ x_2\ \ldots\ x_n]$ be the $d \times n$ matrix that needs to be determined. The realization problem just mentioned can be formulated as the following feasibility problem:
(1) Find $X$ s.t. $\underline{d}_{ij}^2 \le \|x_i - x_j\|^2 \le \bar{d}_{ij}^2\ \ \forall (i,j) \in \mathcal{N}$, $\quad \underline{h}_{ik}^2 \le \|x_i - a_k\|^2 \le \bar{h}_{ik}^2\ \ \forall (i,k) \in \mathcal{M}$.

We can write $\|x_i - x_j\|^2 = e_{ij}^T X^T X e_{ij}$ and $\|x_i - a_k\|^2 = (e_i; -a_k)^T [X\ I]^T [X\ I](e_i; -a_k)$, where $e_{ij} = e_i - e_j$. Here $e_i$ is the $i$th unit vector of appropriate dimension, $I$ is the $d \times d$ identity matrix, and $(e_i; -a_k)$ is the vector obtained by appending $-a_k$ to $e_i$. Let $Y = X^T X$ and $Z = [Y\ X^T;\ X\ I]$. Then problem (1) can be rewritten as

(2) Find $Z$ s.t. $\underline{d}_{ij}^2 \le e_{ij}^T Y e_{ij} \le \bar{d}_{ij}^2\ \ \forall (i,j) \in \mathcal{N}$, $\quad \underline{h}_{ik}^2 \le (e_i; -a_k)^T Z (e_i; -a_k) \le \bar{h}_{ik}^2\ \ \forall (i,k) \in \mathcal{M}$, $\quad Z = [Y\ X^T;\ X\ I]$, $\quad Y = X^T X$.
The above problem (2) is unfortunately nonconvex. Our method is to relax it to a semidefinite program by relaxing the constraint $Y = X^T X$ to $Y \succeq X^T X$ (meaning that $Y - X^T X$ is positive semidefinite). The last matrix inequality is equivalent to (see Boyd et al. [9])
$$Z = \begin{bmatrix} Y & X^T \\ X & I \end{bmatrix} \succeq 0, \quad Z \text{ symmetric}.$$
Thus, the SDP relaxation of (2) can be written as the following standard SDP problem:

(3) Find $Z$ s.t. $\underline{d}_{ij}^2 \le e_{ij}^T Z e_{ij} \le \bar{d}_{ij}^2\ \ \forall (i,j) \in \mathcal{N}$, $\quad \underline{h}_{ik}^2 \le (e_i; -a_k)^T Z (e_i; -a_k) \le \bar{h}_{ik}^2\ \ \forall (i,k) \in \mathcal{M}$, $\quad e_i^T Z e_i = 1\ \ \forall\ n+1 \le i \le n+d$, $\quad (e_i + e_j)^T Z (e_i + e_j) = 2\ \ \forall\ n+1 \le i < j \le n+d$, $\quad Z \succeq 0$.

Note that the last two sets of equality constraints in (3) specify that the lower-right $d \times d$ block of Z is the identity matrix. We note that if there are additional constraints of the form $\|x_i - x_j\| \ge L$ coming from knowledge about the minimum separation distance between any two points, such constraints can be included in the semidefinite program (3) by adding inequality constraints of the form $e_{ij}^T Z e_{ij} \ge L^2$. In molecular conformation, the minimum separation distances corresponding to the VDW interactions are used in an essential way to reduce the search space in the atom-based constrained optimization algorithm (GNOMAD) described in [36]. The minimum separation distance constraints are also easily incorporated into the MDS framework [26, 32]. For the anchor-free case where $\mathcal{M} = \emptyset$ (empty set), the SDP problem (3) can be reduced in dimension by replacing Z by Y and removing the last $d(d+1)/2$ equality constraints, i.e.,

(4) Find $Y$ s.t. $\underline{d}_{ij}^2 \le e_{ij}^T Y e_{ij} \le \bar{d}_{ij}^2\ \ \forall (i,j) \in \mathcal{N}$, $\quad Ye = 0$, $\quad Y \succeq 0$.
This is because when $\mathcal{M} = \emptyset$, the (1,2) and (2,1) blocks of Z are always equal to zero if the starting iterate for the interior-point method used to solve (3) is chosen to be so. Note that we add the extra constraint $Ye = 0$ to eliminate the translational invariance of the configuration by putting the center of gravity of the points at the origin, i.e., $\sum_{i=1}^n x_i = 0$. Note also that, in the anchor-free case, if the graph is localizable (defined in the next subsection), a realization $X \in \mathbb{R}^{d\times n}$ can no longer be obtained from the (2,1) block of Z but needs to be computed from the inner product matrix Y by factorizing it into the form $Y = X^T X$ via an eigenvalue decomposition (as has been done in previous methods discussed in the literature review). In the noisy case, the inner product matrix Y typically has rank greater than d. In practice, X is chosen to be the best rank-d approximation, obtained from the eigenvectors corresponding to the d largest eigenvalues. The configuration so obtained is a rotated or reflected version of the actual point configuration.
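As a concrete illustration of this extraction step, the following MATLAB fragment (a minimal sketch under our own variable names, not the authors' code) recovers a rank-d configuration from an SDP solution Y:

    d = 3;
    [V, E] = eig((Y + Y')/2);               % symmetrize against round-off
    [lam, idx] = sort(diag(E), 'descend');
    lam = max(lam(1:d), 0);                 % guard against tiny negative eigenvalues
    X = diag(sqrt(lam)) * V(:, idx(1:d))';  % d-by-n; unique up to rotation/reflection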
3.1. Theory of anchor-free graph realization. In order to establish the theoretical properties of the SDP relaxation, we consider the case where all the given distances in $\mathcal{N}$ are exact, i.e., without noise. A graph $G = (\{1,\ldots,n\}, D)$ is localizable in dimension d if (i) it has a realization X in $\mathbb{R}^{d\times n}$ such that $\|x_i - x_j\| = d_{ij}$ for all $(i,j) \in \mathcal{N}$; (ii) it cannot be realized (nontrivially) in a higher-dimensional space. We let $D = (d_{ij})$ be the $n \times n$ matrix whose $(i,j)$ element is the given distance between points i and j when $(i,j) \in \mathcal{N}$, and zero otherwise. It is shown for the exact distances case in [28] that if the graph with anchors is localizable, then the SDP relaxation will produce a unique optimal solution Z with its (1,1) block equal to $X^T X$. For the anchor-free case where $\mathcal{M} = \emptyset$, it is clear that the realization cannot be unique, since the configuration may be translated, rotated, or reflected and still preserve the same distances. To remove the translational invariance, we add an objective function that minimizes the norm of the solution:

(5) minimize $\sum_{j=1}^n \|x_j\|^2$ s.t. $\|x_i - x_j\|^2 = d_{ij}^2\ \ \forall (i,j) \in \mathcal{N}$.

What this minimization does is translate the center of gravity of the points to the origin; that is, if $\bar{x}_j$, $j = 1, \ldots, n$, is a realization of the problem, then the realization generated from (5) will be $\bar{x}_j - \bar{x}$, $j = 1, \ldots, n$, where $\bar{x} = \frac{1}{n}\sum_{j=1}^n \bar{x}_j$, subject to only rotation and reflection. The norm minimization also helps the following SDP relaxation of (5) to have bounded solutions:

(6) minimize $\mathrm{Trace}(Y) = I \bullet Y$ s.t. $e_{ij}^T Y e_{ij} = d_{ij}^2\ \ \forall (i,j) \in \mathcal{N}$, $\quad Y \succeq 0$,

where $Y \in \mathcal{S}^n$ (the space of $n \times n$ symmetric matrices), I is the identity matrix, and $\bullet$ denotes the standard matrix inner product. We note that a model similar to (6) is also proposed in [1], but with the objective function replaced by $\|Y\|_F^2$. The dual of the SDP relaxation (6) is given by

(7) maximize $\sum_{(i,j)\in\mathcal{N}} w_{ij} d_{ij}^2$ s.t. $I - \sum_{(i,j)\in\mathcal{N}} w_{ij}\, e_{ij} e_{ij}^T \succeq 0$.

Note that the dual is always feasible and has an interior, since $w_{ij} = 0$ for all $(i,j) \in \mathcal{N}$ is an interior feasible solution. Thus the primal optimal value in (6) is always attained. However, the dual optimal value in (7) may not be attained unless the primal problem (6) is strictly feasible.
From the standard duality theorem for SDP, we have the following proposition.
Proposition 1. Let $\bar{Y} \succeq 0$ be an optimal solution of (6), and suppose that the dual optimal value in (7) is attained, with $\bar{U} = I - \sum_{(i,j)\in\mathcal{N}} \bar{w}_{ij}\, e_{ij} e_{ij}^T \succeq 0$ being an optimal slack matrix. Then we have the following:
1. The complementarity condition holds: $\bar{Y} \bullet \bar{U} = 0$ or $\bar{Y}\bar{U} = 0$.
2. $\mathrm{Rank}(\bar{Y}) + \mathrm{Rank}(\bar{U}) \le n$.
In general, a primal (dual) max-rank solution is a solution that has the highest rank among all solutions of the primal (6) (dual (7)). It is known that various path-following interior-point algorithms compute the max-rank solutions for both the primal and dual in polynomial time. We now investigate when the SDP (6) will be an exact relaxation, given that the partial distance data $(d_{ij})$ is exact. For the anchored case, it was proved in [28] that the condition of exact relaxation is equivalent to the rank of the SDP solution $\bar{Y}$ being d. However, for the anchor-free case, we are unable to prove this. Instead, we derive an alternative result.
Definition 1. Problem (5) is d-localizable if there is no $x_j \in \mathbb{R}^h$, $j = 1, \ldots, n$, with $h \ne d$, such that $\|x_i - x_j\|^2 = d_{ij}^2$ for all $(i,j) \in \mathcal{N}$. For $h > d$, the condition should exclude the trivial case $x_j = (\bar{x}_j; 0)$ for $j = 1, \ldots, n$.
The d-localizability indicates that the distances cannot be embedded by a nontrivial realization in a higher-dimensional space, and cannot be "flattened" to a lower-dimensional space either. We now develop the following theorem.
Theorem 1. If problem (5) is d-localizable, then the solution matrix $\bar{Y}$ of (6) is unique, and its rank equals d. Furthermore, if $\bar{Y} = \bar{X}^T \bar{X}$ and the dual optimal value of (7) is attained, then $\bar{X} = (\bar{x}_1, \ldots, \bar{x}_n) \in \mathbb{R}^{d\times n}$ is the unique minimum-norm localization of the graph with $\sum_{j=1}^n \bar{x}_j = 0$ (subject to only rotation and reflection).
Proof. Since problem (5) is d-localizable, by definition every feasible solution matrix Y of (6) has rank d. Thus, there is a rank-d matrix $\bar{V} \in \mathbb{R}^{d\times n}$ such that any feasible solution matrix Y can be written as $Y = \bar{V}^T P \bar{V}$, where P is a $d \times d$ symmetric positive definite matrix. We show that the solution is unique by contradiction. Suppose that there are two feasible solutions $Y^1 = \bar{V}^T P^1 \bar{V}$ and $Y^2 = \bar{V}^T P^2 \bar{V}$, where $P^1 \ne P^2$ and, without loss of generality, $P^2 - P^1$ has at least one negative eigenvalue. (Otherwise, if $P^1 - P^2$ has at least one negative eigenvalue, we can interchange the roles of $P^1$ and $P^2$; the only case left to be considered is when all the eigenvalues of $P^2 - P^1$ are equal to zero, but this case is not possible since it implies $P^1 = P^2$.) Let $Y(\alpha) = \bar{V}^T (P^1 + \alpha(P^2 - P^1)) \bar{V}$. Clearly, $Y(\alpha)$ satisfies all the linear constraints of (6), and it has rank d for $0 \le \alpha \le 1$. But there is an $\alpha' > 1$ such that $P^1 + \alpha'(P^2 - P^1)$ is positive semidefinite but not positive definite; that is, one of its eigenvalues becomes zero. Thus, $Y(\alpha')$ has rank less than d but is feasible for (6), which contradicts the fact that the graph cannot be "flattened" to a lower-dimensional space.
Let $\bar{U}$ be an optimal dual matrix of (7). Then any optimal solution matrix $\bar{Y}$ satisfies the complementarity condition $\bar{Y}\bar{U} = 0$. Note that $\bar{U}e = e$, where e is the vector of all ones. Thus, we have $\bar{X}^T \bar{X} e = \bar{Y} e = \bar{Y}\bar{U} e = 0$, which further implies that $\bar{X}e = 0$.
Theorem 1 has an important implication for a distributed graph localization algorithm. It suggests that if a subgraph is d-localizable, then the subconfiguration is the same (up to translation, rotation, and reflection) as the corresponding portion of the global configuration. Thus, one may attempt to localize a large graph by finding a sequence of d-localizable subgraphs. Theorem 1 also says that if the graph G is d-localizable, then the optimal solution of the SDP is given by $\bar{Y} = \bar{X}^T \bar{X}$ for some $\bar{X} = [\bar{x}_1, \ldots, \bar{x}_n] \in \mathbb{R}^{d\times n}$ such that $\bar{X}e = 0$. It is now clear that when G is d-localizable, we have $\bar{Y} = \bar{X}^T \bar{X}$, and hence $\bar{Y}_{jj} = \|\bar{x}_j\|^2$ for $j = 1, \ldots, n$. But in general, when the given distances are not exact but corrupted with noise, we have only $\bar{Y} - \bar{X}^T \bar{X} \succeq 0$. This inequality, however, may give a measure of the quality of the estimated positions. For example, the individual trace
(8) $T_j := \bar{Y}_{jj} - \|\bar{x}_j\|^2$
may give an indication of the quality of the estimated position $\bar{x}_j$, where a smaller trace indicates more accuracy in the estimation.
3.2. Regularization term. When the measured distances have errors, the distance constraints usually contradict each other, and so there is no localization in $\mathbb{R}^d$; in other words, $Y \ne X^T X$. However, since the SDP approach relaxes the constraint $Y = X^T X$ into $Y \succeq X^T X$, it is still possible to locate the points in a higher-dimensional space (or choose a Y with a higher rank) such that they satisfy the distance constraints exactly. The optimal solution in a higher-dimensional space always results in a smaller violation of the distance constraints than the one constrained to $\mathbb{R}^d$. Furthermore, the max-rank property [17] implies that solutions obtained through interior-point methods for solving SDPs converge to the maximum-rank solutions. Hence, because of the relaxation of the rank requirement, the solution is "lifted" to a higher-dimensional space. For example, imagine a rigid structure consisting of a set of points in a plane (with the points having specified distances from each other). If we perturb some of the specified distances, the configuration may need to be readjusted by setting some of the points outside the plane. The above discussion leads us to the question of how to round the higher-dimensional (higher-rank) SDP solution into a lower-dimensional (in this case, rank-3) solution. One way is to ignore the augmented dimensions and use the projection $X^*$ as a suboptimal solution, which is the case in [8]. However, the projection typically leads to points getting "crowded" together. (Imagine the projection of the top vertex of a pyramid onto its base.) This is because a large contribution to the distance between two points could come from the dimensions we choose to ignore. In [35], regularization terms have been incorporated into the SDP arising from kernel learning in nonlinear dimensionality reduction. The purpose is to penalize folding and to find a stretched map of the set of points while respecting local distance constraints. Here we propose a similar strategy to ameliorate the difficulty of crowding. Our strategy is to convert the feasibility problem (1) into a maximization problem using the following regularization term as the objective function:
(9) $\sum_{i=1}^n \sum_{j=1}^n \|x_i - x_j\|^2$.
The new optimization problem is

(10) maximize $\sum_{i=1}^n \sum_{j=1}^n \|x_i - x_j\|^2$ s.t. $\underline{d}_{ij}^2 \le \|x_i - x_j\|^2 \le \bar{d}_{ij}^2\ \ \forall (i,j) \in \mathcal{N}$, $\quad \sum_{i=1}^n x_i = 0$,

and the SDP relaxation is

(11) maximize $\langle I - ee^T/n,\ Y \rangle$ s.t. $\underline{d}_{ij}^2 \le e_{ij}^T Y e_{ij} \le \bar{d}_{ij}^2\ \ \forall (i,j) \in \mathcal{N}$, $\quad Ye = 0$, $\quad Y \succeq 0$,

where e is the vector of all ones.
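For readers who wish to experiment, problem (11) can be stated almost verbatim in a modeling tool. The following sketch assumes the CVX toolbox in SDP mode; the edge list E (m × 2) and the bound vectors dlo and dhi are our own names, not part of the paper:

    J = eye(n) - ones(n)/n;
    cvx_begin sdp
        variable Y(n,n) symmetric
        maximize( trace(J*Y) )
        subject to
            Y*ones(n,1) == 0;
            Y >= 0;                          % positive semidefiniteness in sdp mode
            for t = 1:size(E,1)
                i = E(t,1); j = E(t,2);      % e_ij'*Y*e_ij = Y(i,i)+Y(j,j)-2*Y(i,j)
                dlo(t)^2 <= Y(i,i) + Y(j,j) - 2*Y(i,j) <= dhi(t)^2;
            end
    cvx_end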
As mentioned before, the constraints $\sum_{i=1}^n x_i = 0$ and $Ye = 0$ remove the translational invariance of the configuration of points by putting the center of gravity at the origin. The addition of a regularization term penalizes folding between the points and maximizes the separation between them, while still respecting local distance constraints. The idea of regularization has also been linked to tensegrity theory and the realizability of graphs in lower dimensions; see [27]. The notion of stress is used to explain this. By maximizing the distance between some vertices in a graph, the graph gets stretched out, and there is a nonzero stress induced on the edges of the graph. For the configuration to remain in equilibrium, the total stress on a vertex must sum to zero. In order for the overall stress to cancel out completely, the graph must be in a low-dimensional space. One important point to be noted is that in the case of very sparse distance data, there may often be two or more disjoint blocks within the given distance matrix; that is, the graph represented by the distance matrix may not be connected and may have more than one component. In that case, using the regularization term leads to an unbounded objective function, since the disconnected components can be pulled arbitrarily far apart. Therefore, care must be taken to identify the disconnected components before applying (11) to the individual components.
4. Clustering. The SDP problem (11) is computationally intractable when there are several hundred points. Therefore we divide the entire molecule into smaller clusters of points and solve a separate SDP for each cluster. The clusters need to be chosen such that there is enough distance information between the points in a cluster for it to be localized accurately, but at the same time only enough that it can also be solved efficiently. In order to do this, we make use of matrix permutations that reorder the points in such a way that the points which share the most distance information with each other are grouped together. In the problem described in this paper, we have an upper and a lower bound distance matrix, but for simplicity we will describe the operations in this section on just the partial distance matrix D. In the actual implementation, the operations described are performed on both the upper and lower bound distance matrices. This does not make a difference, because the operations performed here exploit information only about the connectivity graph of the set of points.
Fig. 1. (a) Schematic diagram of the quasi-block-diagonal structure considered in the distributed SDP algorithm. (b) The shaded region is the (2,2) block of $D_2$ that is not overlapping $D_1$. Points corresponding to this shaded block are reordered again via a symmetric reverse Cuthill–McKee permutation.
We perform a symmetric permutation of the partial distance matrix D to aggregate the nonzero elements towards the main diagonal. Let $\tilde{D}$ be the permuted matrix. In our implementation, we used the function symrcm in MATLAB to perform the symmetric reverse Cuthill–McKee permutation [15] on D. In [40], the same permutation is also used in a domain decomposition method for fast manifold learning. The permuted matrix $\tilde{D}$ is next partitioned into a quasi-block-diagonal matrix with variable block sizes. Let the blocks be denoted by $D_1, \ldots, D_L$. A schematic diagram of the quasi-block-diagonal structure is shown in Figure 1. The size of each block (except the last) is determined as follows. Starting with a minimum block size, say 50, we extend the block size incrementally until the number of nonzero elements in the block is above a certain threshold, say 1000. We start the process of determining the size of each block from the upper-left corner and proceed sequentially to the lower-right corner of $\tilde{D}$. Observe that there are overlapping subblocks between adjacent blocks. For example, the second block overlaps with the first at its upper-left corner and with the third at its lower-right corner. The overlapping subblocks serve an important purpose. For convenience, consider the overlapping subblock between the second and third blocks. This subblock corresponds to points that are common to the configurations defined by their respective distance matrices, $D_2$ and $D_3$. If the third block determines a localizable configuration $X_3$, then the common points in the overlapping subblock can be used to stitch the localized configuration $X_3$ to the current global configuration determined by the first two blocks. In general, if the kth block is localizable, then the overlapping subblock between the (k−1)st and kth blocks is used to stitch the kth localized configuration to the global configuration determined by the aggregation of all the previous blocks. Geometrically, the above partitioning process splits the job of determining the global configuration defined by $\tilde{D}$ into L smaller jobs, each of which tries to determine the subconfiguration defined by $D_k$; the subconfigurations are then assembled sequentially from k = 1 to L to reconstruct the global configuration. As the overlapping subblocks are of great importance in stitching a subconfiguration to the global configuration, they should have as high a connectivity as possible.
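The clustering step can be sketched in a few lines of MATLAB; the block size 50 and nonzero threshold 1000 follow the description above, while the overlap of 20 indices is one choice within the range discussed below, and the assumption that D is stored as a MATLAB sparse matrix is ours:

    p = symrcm(spones(D));           % reverse Cuthill-McKee on the sparsity pattern
    Dt = D(p, p);                    % permuted matrix with nonzeros near the diagonal
    n = size(Dt, 1); blocks = {}; s = 1;
    while true
        e = min(s + 49, n);          % start from a minimum block size of 50
        while e < n && nnz(Dt(s:e, s:e)) < 1000
            e = e + 1;               % grow until enough distances fall in the block
        end
        blocks{end+1} = [s, e];      % record the block [start, end]
        if e == n, break; end
        s = e - 19;                  % 20 overlapping indices with the next block
    end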
In our implementation, we find the following strategy to be reasonably effective. After the blocks $D_1, \ldots, D_L$ are determined, starting with $D_2$, we perform a symmetric reverse Cuthill–McKee permutation on the (2,2) subblock of $D_2$ that is not overlapping $D_1$, and repeat the same process sequentially for all subsequent blocks. To avoid introducing excessive notation, we still use $D_k$ to denote the permuted kth block. It is also worth noting that, in the case of highly noisy or very sparse data, the size of the overlapping subblocks needs to be set higher for the stitching phase to succeed. The more common points there are between two blocks, the more robust the stitching between them. This is also true when the number of subblocks which need to be stitched is large (that is, when the number of atoms in the molecule is large). However, increasing the number of common points also has an impact on the runtime, and therefore we must choose the overlapping subblock sizes judiciously. Experiments indicated that a subblock size of 15–20 was sufficient for molecules with less than 3000 atoms, but subblock sizes of 25–30 were more suitable for larger molecules.
5. Stitching. After all the individual localization problems corresponding to $D_1, \ldots, D_L$ have been solved, we have L subconfigurations that need to be assembled together to form the global configuration associated with $\tilde{D}$. Suppose that the current configuration determined by the blocks $D_i$, $i = 1, \ldots, k-1$, is given by the matrix $X^{(k-1)} = [U^{(k-1)}, V^{(k-1)}]$. Suppose also that $F^{(k-1)}$ records the global indices of the points that are currently labeled as localized in the current global configuration. Let $X_k = [V_k, W_k]$ be the points in the subconfiguration determined by $D_k$. (For k = 1, $V_k$ is the null matrix. For k = 2, $V_k$ and $W_k$ correspond to the unshaded and shaded subblocks, respectively, of $D_2$ in Figure 1(b).) Here $V^{(k-1)}$ and $V_k$ denote the positions of the points corresponding to the overlapping subblock between $D_{k-1}$ and $D_k$, respectively. Let $I_k$ be the global indices of the points in $W_k$. Note that the global indices of the unlocalized points for the blocks $D_1, \ldots, D_{k-1}$ are given by $J^{(k-1)} = \bigl(\bigcup_{i=1}^{k-1} I_i\bigr) \setminus F^{(k-1)}$. We will now concentrate on stitching the subconfiguration $D_k$ with the global index set $I_k$. Note that the points in the subconfiguration $D_k$, obtained by solving the SDP on $D_k$, will most likely contain points that have been correctly estimated and points that have been estimated less accurately. It is essential that we isolate the badly estimated points, so as to ensure that their errors are not propagated when estimating subsequent blocks, and so that we may recalculate their positions when more points have been correctly estimated. To detect the badly estimated points, we use a combination of two error measures. Let $x_j$ be the position estimated for point j. Set $\hat{F} \leftarrow F^{(k-1)}$. We use the trace error $T_j$ from (8) and the local error
(12) $\hat{E}_j = \sqrt{\dfrac{\sum_{i\in\hat{N}_j} (\|x_i - x_j\| - d_{ij})^2}{|\hat{N}_j|}}$,
where $\hat{N}_j = \{i \in \hat{F} : i < j,\ \tilde{D}_{ij} \ne 0\}$. We require $\max\{T_j, \hat{E}_j\} \le T_\varepsilon$ as a necessary condition for the point to be correctly estimated. In the case when we are just provided with upper and lower bounds on distances, we use the mean distance in the local error measure calculations, i.e.,
(13) $d_{ij} = (\bar{d}_{ij} + \underline{d}_{ij})/2$.
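In code, the two error measures can be evaluated as follows. This is a sketch with our own names (X for the estimated d × n positions, Dmean for the matrix of mean distances (13), inF an n × 1 logical vector marking membership in $\hat{F}$, and Yjj the jth diagonal entry of the SDP solution), and it assumes MATLAB R2017b or later for vecnorm:

    Nj = find(inF(1:j-1) & (Dmean(1:j-1, j) ~= 0));   % neighbors i < j in F-hat
    Ej = sqrt(sum((vecnorm(X(:,Nj) - X(:,j)) - Dmean(Nj,j)').^2) / max(numel(Nj), 1));
    Tj = Yjj - norm(X(:,j))^2;       % the trace error (8)
    ok = max(Tj, Ej) <= Teps;        % necessary condition for "correctly estimated"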
The use of multiple error measures is crucial, especially when the distance information provided is noisy. In the noise-free case, it is much easier to isolate badly estimated points using only the trace error $T_j$.
But in the noisy case, there are no reliable error measures. By using multiple error measures, we hope to identify all the bad points which might escape detection when only a single error measure is used. Setting the tolerance $T_\varepsilon$ for the error is also an important parameter selection issue. For very sparse distance data and highly noisy cases, where even accurately estimated points may have significant error measure values, there is a trade-off between the choice of tolerance and the number of points that are flagged as badly estimated. If the tolerance is too tight, we might end up discarding too many points that are actually quite accurately estimated. The design of informative error measures is still an open issue, and there is room for improvement. As our results will show, the stitching phase of the algorithm is the one most susceptible to noise and inaccurate estimation, and we need better error measures to make it more robust. Now we have two cases to consider in the stitching process:
(i) Suppose that there are enough points in the overlapping subblocks $V_k$ and $V^{(k-1)}$ that are well estimated/localized; then we can stitch the kth subconfiguration directly to the current global configuration $X^{(k-1)}$ by finding the affine mapping that matches points in $V^{(k-1)}$ and $V_k$ as closely as possible. Mathematically, we solve the following linear least squares problem:
(14) $\min\bigl\{\, \|B(V_k - \alpha) - (V^{(k-1)} - \beta)\|_F : B \in \mathbb{R}^{d\times d} \,\bigr\}$,
where $\alpha$ and $\beta$ are the centroids of $V_k$ and $V^{(k-1)}$, respectively. Once an optimal B is found, set
$$\hat{X} = [U^{(k-1)},\ V^{(k-1)},\ \beta + B(W_k - \alpha)], \qquad \hat{F} \leftarrow F^{(k-1)} \cup I_k.$$
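The alignment (14) reduces to a single least-squares solve in MATLAB. In the sketch below, Vk and Vprev hold the d × p coordinates of the well-estimated overlap points in the local and global frames, and Uprev and Wk the remaining points; the names are ours, and implicit expansion (MATLAB R2016b or later) is assumed:

    alpha = mean(Vk, 2);  beta = mean(Vprev, 2);
    B = (Vprev - beta) / (Vk - alpha);     % least-squares solution of (14)
    Wk_global = beta + B*(Wk - alpha);     % stitched coordinates of the new points
    Xhat = [Uprev, Vprev, Wk_global];      % the update following (14)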
We should mention that in the stitching process it is very important to exclude points in $V^{(k-1)}$ and $V_k$ that are badly estimated/unlocalized when solving (14), to avoid destroying the current global configuration. It should also be noted that there may be points in $W_k$ that are incorrectly estimated in the SDP step. Performing the affine transformation on these points is useless, because they are in the wrong position in the local configuration to begin with. To deal with these points, we re-estimate their positions using the correctly estimated points as anchors. This procedure is exactly the same as that described in case (ii).
(ii) If there are not enough common points in the overlapping subblocks, then the stitching process described in (i) cannot be carried out successfully. In this case, the solution obtained from the SDP step for $D_k$ is discarded. That is, the positions in $W_k$ are discarded, and they are determined via the current global configuration $X^{(k-1)}$ point-by-point as follows.
Set $\hat{X} \leftarrow X^{(k-1)}$ and $\hat{F} \leftarrow F^{(k-1)}$. Let $T_\varepsilon = 10^{-4}$.
For $j \in J^{(k-1)} \cup I_k$, do the following:
(a) Formulate the new SDP problem (3) with $\mathcal{N} = \emptyset$ and $\mathcal{M} = \{(i,j) : i \in \hat{F},\ \tilde{D}_{ij} \ne 0\}$, where the anchor points are given by $\{\hat{X}(:,i) : (i,j) \in \mathcal{M}\}$.
(b) Let $x_j$ and $T_j$ be the newly estimated position and trace error from the previous step. Compute the local error measure $\hat{E}_j$.
(c) If $j \in J^{(k-1)}$, set $\widetilde{X}(:, j) = x_j$; else, set $\widetilde{X} = [\widetilde{X}, x_j]$. If $\max\{T_j, \widetilde{E}_j\} \leq T_\varepsilon$, then set $\widetilde{F} \leftarrow \widetilde{F} \cup \{j\}$.
End (for).

Notice that in attempting to localize the points corresponding to $W_k$, we also attempt to estimate the positions of those previously unlocalized points whose indices are recorded in $J^{(k-1)}$. Furthermore, we use previously estimated points as anchors to estimate new points. This not only helps in stitching new points into the current global configuration, but also increases the chances of correcting the positions of previously badly estimated points (since more anchor information is available when more points are correctly estimated).

6. Postprocessing refinement by a gradient descent method. The positions estimated from the SDP and stitching steps can be further refined by applying a local optimization method to the following problem:

(15) $\min_{X\in\mathbb{R}^{d\times n}} f(X) := \sum_{(i,j)\in\mathcal{N}} \left(\|x_i - x_j\| - d_{ij}\right)^2 + \sum_{(j,k)\in\mathcal{M}} \left(\|x_j - a_k\| - h_{jk}\right)^2.$
The method we suggest for improving the current solution is to move every position along the negative gradient direction of the function f(X) in (15) so as to reduce the error function value. We now explain the gradient method in more detail. Let $N_j = \{i : (i,j) \in \mathcal{N}\}$ and $M_j = \{k : (j,k) \in \mathcal{M}\}$. By using the fact that $\nabla_x\|x - b\| = (x - b)/\|x - b\|$ if $x \neq b$, it is easy to show that for the objective function f(X) in (15), the gradient $\nabla_j f$ with respect to the position $x_j$ is given by

(16) $\nabla_j f(X) = 2\sum_{i\in N_j}\left(1 - \frac{d_{ij}}{\|x_j - x_i\|}\right)(x_j - x_i) \;+\; 2\sum_{k\in M_j}\left(1 - \frac{h_{jk}}{\|x_j - a_k\|}\right)(x_j - a_k).$
Notice that $\nabla_j f$ involves only points that are connected to $x_j$; thus $\nabla_j f$ can be computed in a distributed fashion. The gradient-descent method is a local optimization method that generally does not deliver the global optimal solution of a nonconvex problem unless a good starting iterate is available. The graph realization problem described in this paper is a nonconvex optimization problem, so a pure gradient-descent method alone would not work. However, the SDP estimated solutions are generally close to the global minimum, and so they serve as excellent initial points for the local optimization.

Different objective functions can also be used in the gradient-descent method; for example, one could have considered the objective function $\sum_{(i,j)\in\mathcal{N}} (\|x_i - x_j\|^2 - d_{ij}^2)^2 + \sum_{(j,k)\in\mathcal{M}} (\|x_j - a_k\|^2 - h_{jk}^2)^2$ instead of the one in (15). But we have found no significant differences in the quality of the estimated positions produced by the gradient method using either objective function. The one considered in (15) is easily computed and is a good indicator of the estimation error. In our implementation, we use the gradient refinement step quite extensively, both after the SDP step for a single block and after each stitching phase between two blocks. In the single-block case, the calculation does not involve any anchor points, but when used after stitching, we fix the previously correctly estimated points as anchors.
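A sketch of one sweep of the gradient method based on (16); the data layout (cell arrays Nj, Mj of neighbor and anchor index sets, matching distance lists D, H, and a small fixed step size t) is an illustrative assumption:

    function X = descent_sweep(X, A, Nj, Mj, D, H, t)
    % One fixed-step gradient-descent sweep for (15) using the gradient (16).
    %   X - d-by-n current positions;  A - d-by-m anchor positions
    %   Nj{j}, Mj{j} - index sets of point j;  D{j}, H{j} - given distances
    for j = 1:size(X, 2)
        g = zeros(size(X, 1), 1);
        for s = 1:numel(Nj{j})
            v = X(:, j) - X(:, Nj{j}(s));
            g = g + 2 * (1 - D{j}(s) / norm(v)) * v;
        end
        for s = 1:numel(Mj{j})
            v = X(:, j) - A(:, Mj{j}(s));
            g = g + 2 * (1 - H{j}(s) / norm(v)) * v;
        end
        X(:, j) = X(:, j) - t * g;   % positions are updated sequentially here;
    end                              % a line search could replace the fixed step
    end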
7. A distributed SDP algorithm for anchor-free graph realization. We will now describe the complete distributed algorithm for solving a large-scale anchor-free graph realization problem. To facilitate the description of our distributed algorithm, we first describe the centralized algorithm for solving (1). It is important to note that the terms localization and realization are used interchangeably.

Centralized graph localization (CGL) algorithm.
Input: $(\underline{D}, \overline{D}, \mathcal{N}; \{a_1, \ldots, a_m\}, \mathcal{M})$.
Output: Estimated positions $[\bar{x}_1, \ldots, \bar{x}_n] \in \mathbb{R}^{d\times n}$ and corresponding accuracy measures: trace errors $T_1, \ldots, T_n$ and local error measures $\widetilde{E}_1, \ldots, \widetilde{E}_n$.
1. Formulate the optimization problem (10) and solve the resulting SDP. Let $X = [x_1, \ldots, x_n]$ be the estimated positions obtained from the SDP solution.
2. Perform the gradient-descent algorithm on (15) using the SDP solution as the starting iterate to obtain the refined estimated positions $[\bar{x}_1, \ldots, \bar{x}_n]$.
3. For each $j = 1, \ldots, n$, label the point j as localized or unlocalized based on the error measures $T_j$ and $\widetilde{E}_j$.

Distributed anchor-free graph localization (DAFGL) algorithm.
Input: Upper and lower bounds on a subset of the pairwise distances in a molecule.
Output: A configuration of all the atoms in the molecule that is closest (in terms of the RMSD error described in section 8) to the actual molecule from which the measurements were taken.
1. Divide the entire point set into subblocks using the clustering algorithm described in section 4 on the sparse distance matrices.
2. Apply the CGL algorithm to each subblock.
3. Stitch the subblocks together using the procedure described in section 5. After each stitching phase, refine the point positions again using the gradient-descent method described in section 6 and update their error measures.

Some remarks are in order for the DAFGL algorithm. In step 2, each cluster can be solved individually, so the computation is highly distributive. When using the CGL algorithm to solve each cluster, the computational cost is dominated by the solution of the SDP problem (the SDP cost is in turn determined by the number of given distances). For a graph with n nodes and m given pairwise distances, the computational complexity of solving the SDP is roughly $O(m^3) + O(n^3)$, provided that sparsity in the SDP data is fully exploited. For a graph with 200 nodes and the number of given distances being 10% of the total number of pairwise distances, the SDP would have roughly 2000 equality constraints and matrix variables of dimension 200. Such an SDP can be solved on a Pentium IV 3.0 GHz PC with 2GB RAM in about 36 and 93 seconds using the general purpose SDP software SDPT3-3.1 [34] and SeDuMi-1.05 [29], respectively.

The computational efficiency of the CGL algorithm can certainly be improved in various ways. First, the SDP problem need not be solved to high accuracy; a low accuracy SDP solution suffices if it is used only as a starting iterate for the gradient-descent algorithm. There are various highly efficient methods (such as iterative solver-based interior-point methods [30] or the SDPLR method of Burer and Monteiro [10]) for obtaining a low accuracy SDP solution. Second, a dedicated solver based on a dual scaling algorithm can also speed up the SDP computation; substantial speedup can be expected if the computation exploits the low rank structure present in the constraint matrices. However, as our focus in this paper is not on improving the computational efficiency of the CGL algorithm, we shall not discuss this issue further.
In the numerical experiments conducted in section 8, we use the software packages SDPT3-3.1 and SeDuMi to solve the SDP in the CGL algorithm. An alternative is to
use the software DSDP5.6 [3], which is expected to be more efficient than SDPT3-3.1 or SeDuMi.

The stitching process in step 3 is sequential in nature. But this does not imply that the distributed DAFGL algorithm is redundant and that the centralized CGL algorithm is sufficient for computational purposes. For a graph with 10000 nodes and the number of given distances being 1% of the total number of all pairwise distances, the SDP problem that needs to be solved by the CGL algorithm would have 500000 constraints and matrix variables of dimension 10000. Such a large-scale SDP is well beyond the range that can be solved routinely on a standard workstation available today. By considering smaller blocks, the distributed algorithm does not suffer from the limitation faced by the CGL algorithm. If there are multiple computer processors available, say p of them, the distributed algorithm can also take advantage of the extra computing power. The strategy is to divide the graph into p large blocks using step 1 of the DAFGL algorithm and then apply the DAFGL algorithm to localize one large block on each processor.

We end this section with two observations on the DAFGL algorithm. First, we observed that usually the outlying points, which have low connectivity, are not well estimated in the initial stages of the method. As the number of well-estimated points grows, more and more of these "loose" points are estimated by the gradient-descent algorithm. As the molecules get larger, the frequency of encountering nonlocalizable subconfigurations in step 3 also increases; thus the point-by-point stitching procedure described in section 5 gets visited more and more often. Second, for large molecules, the sizes of the overlapping blocks need to be larger for the stitching algorithm in section 5 to be robust (more common points generally lead to more accurate stitching). But to accommodate larger overlapping blocks, each subgraph in the DAFGL algorithm must correspondingly be larger, and that in turn increases the problem size of the SDP relaxation. In our implementation, we apply the idea of dropping redundant constraints to reduce the computational effort when selecting large overlapping subblock sizes of 100–150. This strategy works because many of the distance constraints sit in the densest parts of the subblock, and the points in these dense sections can be estimated quite well with only a fraction of those distance constraints. Therefore, in the SDP step, we limit the number of distance constraints for each point to fewer than 6; any further distance constraints are not included in the SDP step. This allows us to choose large overlapping block sizes while the corresponding SDPs for the larger clusters can still be solved without too much additional computational effort.
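The exact rule used to drop redundant constraints is not spelled out above, so the following greedy Matlab sketch is only one plausible realization of the capping idea; Dsub is assumed to be the sparse distance matrix of one subblock and K the per-point cap:

    K = 5;                                   % cap: fewer than 6 constraints/point
    [I, J] = find(triu(Dsub, 1));            % all given pairs (i < j)
    keep = false(size(I));  cnt = zeros(size(Dsub, 1), 1);
    for s = 1:numel(I)
        if cnt(I(s)) < K && cnt(J(s)) < K    % admit the pair only if neither
            keep(s) = true;                  % endpoint has reached its cap
            cnt(I(s)) = cnt(I(s)) + 1;  cnt(J(s)) = cnt(J(s)) + 1;
        end
    end
    Ired = I(keep);  Jred = J(keep);         % reduced constraint set for the SDP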
8. Computational results. To evaluate our DAFGL algorithm, numerical experiments were performed on protein molecules, with the number of atoms in each molecule ranging from a few hundred to a few thousand. We conducted our numerical experiments in Matlab on a single Pentium IV 3.0 GHz PC with 2GB of RAM. The known 3-D coordinates of the atoms were taken from the Protein Data Bank (PDB) [4]. These were used to generate the true distances between the atoms. Our goal in the experiments was to reconstruct as closely as possible the known molecular configuration for each molecule, using only distance information generated from a sparse subset of all the pairwise distances. This information was in the form of upper and lower bounds on the actual pairwise distances. For each molecule, we generated the partial distance matrix as follows. If the distance between two atoms was less than a given cut-off radius R, the distance was kept; otherwise, no distance information was known about the pair. The cut-off
radius R is chosen to be 6 Å (1 Å = 10⁻⁸ cm), which is roughly the maximum distance that NMR techniques can measure between two atoms. Therefore, in this case, $\mathcal{N} = \{(i,j) : \|x_i - x_j\| \leq 6\,\text{Å}\}$.

We then perturb the distances to generate upper and lower bounds on the given distances in the following manner. Assuming that $\hat{d}_{ij}$ is the true distance between atom i and atom j, we set

$\bar{d}_{ij} = \hat{d}_{ij}\left(1 + |\bar{\varepsilon}_{ij}|\right), \qquad \underline{d}_{ij} = \hat{d}_{ij}\max\left(0,\ 1 - |\underline{\varepsilon}_{ij}|\right),$

where $\bar{\varepsilon}_{ij}, \underline{\varepsilon}_{ij} \sim N(0, \sigma_{ij}^2)$. By varying $\sigma_{ij}$ (which we keep the same for all pairwise distances), we control the noise in the data. This is a multiplicative noise model, in which a larger distance value carries more uncertainty in its measurement. For the rest of our discussion, we will refer to $\sigma_{ij}$ in percentage values; for example, $\sigma_{ij} = 0.1$ will be referred to as 10% noise on the upper and lower bounds.

Typically, not all the distances below 6 Å are known from NMR experiments. Therefore, we will also present results for the DAFGL algorithm when only a randomly chosen fraction of all the distances below 6 Å is used in the calculation.

Let $\mathcal{Q}$ be the set of orthogonal matrices in $\mathbb{R}^{d\times d}$ (d = 3). We measure the accuracy performance of our algorithm by the following criteria:

(17) $\mathrm{RMSD} = \frac{1}{\sqrt{n}}\min\left\{ \left\|X^{\mathrm{true}} - QX - h\mathbf{1}^T\right\|_F \;:\; Q\in\mathcal{Q},\ h\in\mathbb{R}^d \right\},$

(18) $\mathrm{LDME} = \left( \frac{1}{|\mathcal{N}|}\sum_{(i,j)\in\mathcal{N}} \left(\|x_i - x_j\| - d_{ij}\right)^2 \right)^{1/2}.$
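The following sketch generates bounds according to the multiplicative noise model above and evaluates the RMSD criterion (17) by centering followed by an orthogonal Procrustes fit; the inputs dhat (true distances), sigma, and the d-by-n matrices Xtrue and X are assumptions for illustration.

    % Upper and lower bounds from true distances dhat with noise level sigma:
    dub = dhat .* (1 + abs(sigma * randn(size(dhat))));        % upper bounds
    dlb = dhat .* max(0, 1 - abs(sigma * randn(size(dhat))));  % lower bounds

    % RMSD (17): centering removes the translation h; the SVD of the
    % cross-covariance gives the optimal orthogonal Q.
    n = size(X, 2);
    Xc = X - mean(X, 2) * ones(1, n);
    Tc = Xtrue - mean(Xtrue, 2) * ones(1, n);
    [U, S, V] = svd(Tc * Xc');
    rmsd = norm(Tc - (U * V') * Xc, 'fro') / sqrt(n);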
The first criterion, RMSD (root mean square deviation), requires knowledge of the true configuration, whereas the second does not. Thus the second criterion, LDME (local distance matrix error), is more practical, but it is also less reliable in evaluating the true accuracy of the constructed configuration: LDME typically gives lower values than the RMSD, and as the noise increases it becomes an unreliable indicator of the true error.

When 90% or more of the distances (below 6 Å) were given and were not corrupted by noise, the molecules considered here were estimated very precisely, with RMSD = 10⁻⁴–10⁻⁶ Å. This goes to show that the algorithm performs remarkably well when there is enough exact distance data. In this regard, our algorithm is competitive with the geometric build-up algorithm in [37], which is designed specifically for graph localization with exact distance data. With exact distance data, we have solved molecules that are much larger and with much sparser distance data than those considered in [37]. However, our focus in this paper is on molecular conformation with noisy and sparse distance data, and we will present results only for such cases.

In Figure 2, the original true and the estimated atom positions are plotted for some of the molecules, with varying amounts of distance data and noise. The open green circles correspond to the true positions and the solid red dots to their estimated positions from our computation. The error offset between the true and estimated positions for an individual atom is depicted by a solid blue line.
Fig. 2. Comparison of actual and estimated molecules. (a) 1PTQ (402 atoms) with 30% of distances below 6 Å and 1% noise on upper and lower bounds, RMSD = 0.9858 Å. (b) 1AX8 (1003 atoms) with 70% of distances below 6 Å and 5% noise on upper and lower bounds, RMSD = 0.8184 Å. (c) 1TOA (4292 atoms) with 100% of distances below 6 Å and 10% noise on upper and lower bounds, RMSD = 1.5058 Å.
The solid lines, however, are not visible for most of the atoms in the plots because we are able to estimate the positions very accurately. The plots show that even for very sparse data (as in the case of 1PTQ), the estimation for most of the atoms is accurate. The atoms that are badly estimated are the ones that have too little distance information to be localized. The algorithm also performs well for very noisy data and for large molecules, as demonstrated for 1AX8 and 1TOA. The largest molecule we tried the algorithm on is 1I7W, with 8629 atoms. Figure 3 shows that even when the distance data is highly noisy (10% error on upper and lower bounds), the estimation is close to the original, with an RMSD error of 1.3842 Å.
Fig. 3. 1I7W (8629 atoms) with 100% of distances below 6 Å and 10% noise on upper and lower bounds, RMSD = 1.3842 Å.
However, despite using more common points for stitching, the DAFGL algorithm can sometimes produce estimations with high RMSD error for very large molecules, due to a combination of irregular geometry, very sparse distance data, and noisy distances. Ultimately, the problem boils down to being able to correctly identify when a point has been badly estimated. Our measures using the trace error (8) and the local error (12) are able to isolate the majority of the points that are badly estimated, but do not always succeed when many of the points are badly estimated. In fact, for such molecules, a lot of the computation time is spent in the point-by-point stitching phase, where we repeatedly attempt to solve for better estimations of the badly estimated points; if that fails repeatedly, the estimations continue to be poor. If the number of badly estimated points is very high, it may affect the stitching of the subsequent clusters as well. In such cases, the algorithm more or less fails to find a global configuration. Examples of such cases are the 1NF7 molecule (5666 atoms) and the 1HMV molecule (7398 atoms) solved with 10% noise: while they are estimated correctly when there is no noise, the 1NF7 estimation returns an RMSD of 25.1061 Å and the 1HMV estimation an RMSD of 28.3369 Å. A more moderate case of stitching failure can be seen in Figure 4 for the molecule 1BPM with 50% of the distances below 6 Å and 5% error on upper and lower bounds; the problem lies in particular clusters (circled in the figure): although they have correct local geometry, their positions with respect to the entire molecule are wrong. This indicates that the stitching procedure has failed because some of the common points were not estimated correctly but were nevertheless used in the stitching process, thus destroying the entire local configuration. So far, this is the weakest part of the algorithm, and future work is heavily focused on developing better error measures to isolate the badly estimated points and to improve the robustness of the stitching process.

In Figure 5, we plot the 3-D configuration (via the Swiss PDB viewer [16]) of some of the molecules to the left and their estimated counterparts (with different distance
Fig. 4. 1BPM (3672 atoms) with 50% of distances below 6 Å and 5% noise on upper and lower bounds, RMSD = 2.4360 Å.
Fig. 5. Comparison between the original (left) and reconstructed (right) configurations for various protein molecules using the Swiss PDB viewer. (a) 1HOE (558 atoms) with 40% of distances below 6 Å and 1% noise on upper and lower bounds, RMSD = 0.2154 Å. (b) 1PHT (814 atoms) with 50% of distances below 6 Å and 5% noise on upper and lower bounds, RMSD = 1.2014 Å. (c) 1RHJ (3740 atoms) with 70% of distances below 6 Å and 5% noise on upper and lower bounds, RMSD = 0.9535 Å. (d) 1F39 (1534 atoms) with 85% of distances below 6 Å and 10% noise on upper and lower bounds, RMSD = 0.9852 Å.
Table 1. Results for 100% of distances below 6 Å and 10% noise on upper and lower bounds.

PDB ID | No. of atoms | % of total pairwise distances used | RMSD (Å) | LDME (Å) | CPU time (secs)
1PTQ | 402  | 8.79 | 0.1936 | 0.2941 | 107.7
1HOE | 558  | 6.55 | 0.2167 | 0.2914 | 108.7
1LFB | 641  | 5.57 | 0.2635 | 0.1992 | 129.1
1PHT | 814  | 5.35 | 1.2624 | 0.2594 | 223.9
1POA | 914  | 4.07 | 0.4678 | 0.2465 | 333.1
1AX8 | 1003 | 3.74 | 0.6408 | 0.2649 | 280.1
1F39 | 1534 | 2.43 | 0.7338 | 0.2137 | 358.0
1RGS | 2015 | 1.87 | 1.6887 | 0.1800 | 665.9
1KDH | 2923 | 1.34 | 1.1035 | 0.2874 | 959.1
1BPM | 3672 | 1.12 | 1.1965 | 0.2064 | 1234.7
1RHJ | 3740 | 1.10 | 1.8365 | 0.1945 | 1584.4
1HQQ | 3944 | 1.00 | 1.9700 | 0.2548 | 1571.8
1TOA | 4292 | 0.94 | 1.5058 | 0.2251 | 979.5
1MQQ | 5681 | 0.75 | 1.4906 | 0.2317 | 1461.1
data inputs) to the right. As can be seen clearly, the estimated counterparts closely resemble the original molecules.

The results shown in the following tables are a good representation of the performance of the algorithm on molecules of different sizes with different types of distance data sets. The numerical results presented in Table 1 are for the case when all distances below 6 Å are used and perturbed with 10% noise on the lower and upper bounds. Table 2 contains the results for the case when only 70% of the distances below 6 Å are used and perturbed with 5% noise, and also for the case when only 50% of the distances below 6 Å are used and perturbed with 1% noise. The results for 50% distances and 1% noise are representative of cases with sparse distance information and low noise; 100% distances and 10% noise represent relatively denser but highly noisy distance information; and 70% distances and 5% noise is a middle ground between the two extremes.

We can see from the values in Table 1 that LDME is not a very good measure of the actual estimation error as given by RMSD, since the former does not correlate well with the latter. Therefore we do not report the LDME values in Table 2. From the tables, it can be observed that for relatively dense distance data (100% of all distances below 6 Å), the estimation error stays below 2 Å even when the upper and lower bounds are very loose. The algorithm is thus quite robust to high noise when there is enough distance information. The estimation error is also quite low for most molecules when the distance information is very sparse but much more precise. In the sparse distance cases, it is the molecules with more irregular geometries that suffer most from the lack of distance data and exhibit high estimation errors. The combination of sparsity and noise has a detrimental impact on the algorithm's performance, as can be seen from the results with 70% distances and 5% noise.

In Figure 6, we plot the CPU times required by our DAFGL algorithm to localize a molecule with n atoms versus n (with different types of distance inputs).
Table 2. Results with 70% of distances below 6 Å and 5% noise on upper and lower bounds, and with 50% of distances below 6 Å and 1% noise on upper and lower bounds.

PDB ID | No. of atoms | 70% distances, 5% noise: RMSD (Å) | CPU time (secs) | 50% distances, 1% noise: RMSD (Å) | CPU time (secs)
1PTQ | 402  | 0.2794 | 93.8   | 0.7560 | 22.1
1HOE | 558  | 0.2712 | 129.6  | 0.0085 | 32.5
1LFB | 641  | 0.4392 | 132.5  | 0.2736 | 41.6
1PHT | 814  | 0.4701 | 129.4  | 0.6639 | 53.6
1POA | 914  | 0.4325 | 174.7  | 0.0843 | 54.1
1AX8 | 1003 | 0.8184 | 251.9  | 0.0314 | 71.8
1F39 | 1534 | 1.1271 | 353.1  | 0.2809 | 113.4
1RGS | 2015 | 4.6540 | 613.3  | 3.5416 | 308.2
1KDH | 2923 | 2.5693 | 1641.0 | 2.8222 | 488.4
1BPM | 3672 | 2.4360 | 1467.1 | 1.0502 | 384.8
1RHJ | 3740 | 0.9535 | 1286.1 | 0.1158 | 361.5
1HQQ | 3944 | 8.9106 | 2133.5 | 1.6610 | 418.4
1TOA | 4292 | 9.8351 | 2653.6 | 1.5856 | 372.6
1MQQ | 5681 | 3.1570 | 1683.4 | 2.3108 | 1466.2
Fig. 6. CPU time taken to localize a molecule with n atoms versus n, for the three settings: 100% distances with 10% noise, 70% distances with 5% noise, and 50% distances with 1% noise.
As the distance data becomes sparser, the number of constraints in the SDP is also reduced, and therefore the SDPs generally take less time to solve. However, many points may be incorrectly estimated in the SDP phase, and so the stitching phase usually takes longer in these cases. This behavior is exacerbated by the presence of higher noise. Our algorithm is reasonably efficient in solving large problems. The spikes seen in the graphs for some of the larger molecules correspond to cases with high RMSD error, in which the algorithm fails to find a valid configuration of points, either
due to very noisy or very sparse data. A lot of time is spent in recomputing points in the stitching phase, and many of these points are repeatedly estimated incorrectly. The number of badly estimated points grows at each iteration of the stitching process, which becomes more and more time-consuming. But, given the general computational performance, we expect our algorithm to be able to handle even larger molecules if the number of badly estimated points can be kept low. We are also investigating methods that discard repeatedly badly estimated points from future calculations.

9. Conclusion and work in progress. An SDP-based distributed method for solving the distance geometry problem in three dimensions with incomplete and noisy distance data and without anchors has been described. The entire problem is broken down into subproblems by intelligent clustering methods. An SDP relaxation problem is formulated and solved for each cluster. Matrix decomposition is used to find local configurations of the clusters, and a least squares–based stitching method is applied to find a global configuration. Gradient-descent methods are also employed in intermediate steps to refine the quality of the solution. The performance of the algorithm is evaluated by using it to find the configurations of large protein molecules with a few thousand atoms each. The distributed SDP approach can solve large problems having favorable geometries with good accuracy and speed when 50–70% of the distances below 6 Å (corrupted by a moderate level of 0–5% noise in both the lower and upper bounds) are given. The current DAFGL algorithm needs to be improved in order to work on very sparse (30–50% of all distances below 6 Å) and highly noisy (10–20% noise) data, which is often the case for actual NMR data used to deduce molecular conformation. For the rest of this section, we outline some possible improvements to the current algorithm.

One of the main difficulties we encounter in the current distributed algorithm is the propagation of position estimation errors from one cluster to other clusters during the stitching process when the given distances are noisy. Even though we have had some success in overcoming this difficulty, it is not completely alleviated in the current paper. The difficulties faced by our current algorithm on very noisy or very sparse data are particularly noticeable for very large molecules (which correspond to a large number of subblocks that need stitching). Usually, the estimation for many of the points in some of the subblocks from the CGL step is not accurate enough in these cases. This is especially problematic when there are too few common points that can be used for stitching with the previous block, or when the error measures used to identify the bad points fail to filter out some of the badly estimated common points. This usually leads to the error propagating to subsequent blocks as well. Therefore, the bad-point detection and stitching phases need to be made more robust. To reduce the effect of inadvertently using a badly estimated point for stitching, we can increase the number of common points used in the stitching process and, at the same time, use more sophisticated stitching algorithms that not only stitch correctly estimated points but also isolate the badly estimated ones.
As far as stitching is concerned, currently we use the coordinates of the common points between two blocks to find the best affine mapping that would bring the two blocks into the same coordinate system. Another idea is to fix the values in the matrix Y in (11) that correspond to the common points, based on the values that were obtained for it in solving the SDP for the previous block. By fixing the values, we are indirectly using the common points to anchor the new block with the previous one. The dual of the
SDP relaxation also merits further investigation, for possible improvements in the computational effort required and for more robust stitching results.

With regard to error measures, it would be useful to study whether the local error measure (12) can be made more sophisticated by checking distances against a larger set of points, as opposed to just the immediate neighborhood. In some cases, the distances are satisfied within local clusters, but the entire cluster itself is badly estimated (and the local error measure fails to filter out many of the badly estimated points in this scenario). We also need to investigate the careful selection of the tolerance $T_\varepsilon$ used to decide which points have been estimated correctly and can be used for stitching.

As has been noted before, our current algorithm does not use the VDW minimum separation distance constraints of the form $\|x_i - x_j\| \geq L_{ij}$ described in [26] and [36]. The VDW constraints played an essential role in those previous works in reducing the search space of valid configurations, especially in the case of sparse data, where many possible configurations fit the sparse distance constraints. As mentioned in section 3, VDW lower bound constraints can be added to the SDP model, but we would need to keep track of the type of atom to which each point corresponds. However, one must be mindful that the VDW constraints should be added only when necessary, so as not to introduce too many redundant constraints. If the VDW constraints between all pairs of atoms are added to the SDP model, then the number of lower bound constraints is of order $n^2$, where n is the number of points; thus even if n is below 100, the total number of constraints could be in the thousands. However, many of the VDW constraints involve two very remote points and are often inactive or redundant at the optimal solution. Therefore, we can adopt an iterative active constraint generation approach: we first solve the SDP problem while completely ignoring the VDW lower bound constraints; we then check the VDW lower bound constraints at the current solution for all the points and add the violated ones to the model. We can repeat this process until all the VDW constraints are satisfied. Typically, we would expect only O(n) VDW constraints to be active at the final solution.

We are optimistic that by combining the ideas presented in previous work on molecular conformation (especially incorporating domain knowledge such as the minimum separation distances derived from VDW interactions) with the distributed SDP-based algorithm in this paper, the improved distributed algorithm will be able to calculate the conformation of large protein molecules with satisfactory accuracy and efficiency. Based on the performance of the current DAFGL algorithm, and the promising improvements outlined above, a realistic future target is to correctly calculate the conformation of a large molecule (with 5000 atoms or more) given only about 50% of all pairwise distances below 6 Å, corrupted by 10–20% noise.
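A sketch of the verification step inside the active constraint generation loop described above; X holds the current estimates, and Lij is a hypothetical sparse matrix of VDW minimum separations (its construction from atom types is not shown):

    [I, J] = find(triu(Lij, 1));             % all pairs with a VDW lower bound
    viol = false(size(I));
    for s = 1:numel(I)
        viol(s) = norm(X(:, I(s)) - X(:, J(s))) < Lij(I(s), J(s));
    end
    addI = I(viol);  addJ = J(viol);         % violated bounds: append these
                                             % constraints and re-solve the SDP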
Acknowledgments. We would like to thank the anonymous referees for their insightful comments and suggestions, which led to a major revision of the paper.

REFERENCES

[1] S. Al-Homidan and H. Wolkowicz, Approximate and exact completion problems for Euclidean distance matrices using semidefinite programming, Linear Algebra Appl., 406 (2005), pp. 109–141.
[2] A. Y. Alfakih, A. Khandani, and H. Wolkowicz, Solving Euclidean distance matrix completion problems via semidefinite programming, Comput. Optim. Appl., 12 (1999), pp. 13–30.
[3] S. J. Benson, Y. Ye, and X. Zhang, Solving large-scale sparse semidefinite programs for combinatorial optimization, SIAM J. Optim., 10 (2000), pp. 443–461.
[4] H. M. Berman, J. Westbrook, Z. Feng, G. Gilliland, T. N. Bhat, H. Weissig, I. N. Shindyalov, and P. E. Bourne, The Protein Data Bank, Nucleic Acids Res., 28 (2000), pp. 235–242.
[5] P. Biswas, T.-C. Liang, K.-C. Toh, T.-C. Wang, and Y. Ye, Semidefinite programming approaches for sensor network localization with noisy distance measurements, IEEE Trans. Autom. Sci. Engrg., Special Issue on Distributed Sensing, 3 (2006), pp. 360–371.
[6] P. Biswas, T.-C. Liang, T.-C. Wang, and Y. Ye, Semidefinite programming based algorithms for sensor network localization, ACM Trans. Sensor Networks, 2 (2006), pp. 188–220.
[7] P. Biswas and Y. Ye, A distributed method for solving semidefinite programs arising from ad hoc wireless sensor network localization, in Multiscale Optimization Methods and Applications, W. W. Hager, S.-J. Huang, P. M. Pardalos, and O. A. Prokopyev, eds., Springer-Verlag, New York, 2006, pp. 69–82.
[8] P. Biswas and Y. Ye, Semidefinite programming for ad hoc wireless sensor network localization, in Proceedings of the Third International Symposium on Information Processing in Sensor Networks, ACM Press, New York, 2004, pp. 46–54.
[9] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, Stud. Appl. Math. 15, SIAM, Philadelphia, 1994.
[10] S. Burer and R. D. C. Monteiro, A nonlinear programming algorithm for solving semidefinite programs via low-rank factorization, Math. Programming, 95 (2003), pp. 329–357.
[11] M. W. Carter, H. H. Jin, M. A. Saunders, and Y. Ye, SpaseLoc: An adaptive subproblem algorithm for scalable wireless sensor network localization, SIAM J. Optim., 17 (2006), pp. 1102–1128.
[12] B. Chazelle, C. Kingsford, and M. Singh, The side-chain positioning problem: A semidefinite programming formulation with new rounding schemes, in Proceedings of the Paris C. Kanellakis Memorial Workshop on Principles of Computing & Knowledge (PCK50), ACM Press, New York, 2003, pp. 86–94.
[13] G. Crippen and T. Havel, Distance Geometry and Molecular Conformation, Wiley, New York, 1988.
[14] Q. Dong and Z. Wu, A geometric build-up algorithm for solving the molecular distance geometry problem with sparse distance data, J. Global Optim., 26 (2003), pp. 321–333.
[15] A. George and J. W. Liu, Computer Solution of Large Sparse Positive Definite Systems, Prentice–Hall, Englewood Cliffs, NJ, 1981.
[16] N. Guex and M. C. Peitsch, SWISS-MODEL and the Swiss-PdbViewer: An environment for comparative protein modeling, Electrophoresis, 18 (1997), pp. 2714–2723.
[17] O. Güler and Y. Ye, Convergence behavior of interior point algorithms, Math. Programming, 60 (1993), pp. 215–228.
[18] T. F. Havel and K. Wüthrich, An evaluation of the combined use of nuclear magnetic resonance and distance geometry for the determination of protein conformation in solution, J. Molec. Biol., 182 (1985), pp. 281–294.
[19] B. A. Hendrickson, The Molecular Problem: Determining Conformation from Pairwise Distances, Ph.D. thesis, Department of Computer Science, Cornell University, Ithaca, NY, 1991.
[20] G. Iyengar, D. Phillips, and C. Stein, Approximation algorithms for semidefinite packing problems with applications to maxcut and graph coloring, in Eleventh Conference on Integer Programming and Combinatorial Optimization, Berlin, 2005.
[21] M. Laurent, Matrix completion problems, The Encyclopedia of Optimization, 3 (2001), pp. 221–229.
[22] P. Biswas, T.-C. Liang, T.-C. Wang, and Y. Ye, Semidefinite programming based algorithms for sensor network localization, ACM Trans. Sensor Networks, 2 (2006), pp. 188–220.
[23] N. Linial, E. London, and Y. Rabinovich, The geometry of graphs and some of its algorithmic applications, Combinatorica, 15 (1995), pp. 215–245.
[24] J. J. Moré and Z. Wu, Global continuation for distance geometry problems, SIAM J. Optim., 7 (1997), pp. 814–836.
[25] J. J. Moré and Z. Wu, ε-Optimal Solutions to Distance Geometry Problems via Global Continuation, Preprint MCS–P520–0595, Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL, 1994.
[26] R. Reams, G. Chatham, W. K. Glunt, D. McDonald, and T. L. Hayden, Determining protein structure using the distance geometry program APA, Computers & Chemistry, 23 (1999), pp. 153–163.
[27] A. M.-C. So and Y. Ye, A semidefinite programming approach to tensegrity theory and realizability of graphs, in Proceedings of the 17th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), Miami, FL, SIAM, Philadelphia, 2006, pp. 766–775.
[28] A. M.-C. So and Y. Ye, Theory of semidefinite programming relaxation for sensor network localization, Math. Programming, 109 (2007), pp. 367–384.
[29] J. F. Sturm, Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones, Optim. Methods Softw., 11 & 12 (1999), pp. 625–633.
[30] K.-C. Toh, Solving large scale semidefinite programs via an iterative solver on the augmented systems, SIAM J. Optim., 14 (2003), pp. 670–698.
[31] M. W. Trosset, Applications of multidimensional scaling to molecular conformation, Comput. Sci. Statist., 29 (1998), pp. 148–152.
[32] M. W. Trosset, Distance matrix completion by numerical optimization, Comput. Optim. Appl., 17 (2000), pp. 11–22.
[33] M. W. Trosset, Extensions of classical multidimensional scaling via variable reduction, Comput. Statist., 17 (2002), pp. 147–162.
[34] R. H. Tütüncü, K. C. Toh, and M. J. Todd, Solving semidefinite-quadratic-linear programs using SDPT3, Math. Programming, Ser. B, 95 (2003), pp. 189–217.
[35] K. Q. Weinberger, F. Sha, and L. K. Saul, Learning a kernel matrix for nonlinear dimensionality reduction, in Proceedings of the Twenty-First International Conference on Machine Learning (ICML '04), ACM Press, New York, 2004, pp. 839–846.
[36] G. A. Williams, J. M. Dugan, and R. B. Altman, Constrained global optimization for estimating molecular structure from atomic distances, J. Comput. Biol., 8 (2001), pp. 523–547.
[37] D. Wu and Z. Wu, An updated geometric build-up algorithm for molecular distance geometry problems with sparse distance data, J. Global Optim., 37 (2007), pp. 661–673.
[38] K. Varadarajan, S. Venkatesh, Y. Ye, and J. Zhang, Approximating the radii of point sets, SIAM J. Comput., 36 (2007), pp. 1764–1776.
[39] J.-M. Yoon, Y. Gad, and Z. Wu, Mathematical modeling of protein structure using distance geometry, in Numerical Linear Algebra and Optimization, Y. Yuan, ed., Scientific Press, Beijing, China, 2004.
[40] Z. Zhang and H. Zha, Principal manifolds and nonlinear dimensionality reduction via tangent space alignment, SIAM J. Sci. Comput., 26 (2004), pp. 313–338.
© 2008 Society for Industrial and Applied Mathematics
SIAM J. SCI. COMPUT. Vol. 30, No. 3, pp. 1278–1295
THE GENERALIZED SINGULAR VALUE DECOMPOSITION AND THE METHOD OF PARTICULAR SOLUTIONS∗

TIMO BETCKE†

Abstract. A powerful method for solving planar eigenvalue problems is the method of particular solutions (MPS), which is also well known under the name "point matching method." The implementation of this method usually depends on the solution of one of three types of linear algebra problems: singular value decomposition, generalized eigenvalue decomposition, or generalized singular value decomposition. We compare and give geometric interpretations of these different variants of the MPS. It turns out that the most stable and accurate of them is based on the generalized singular value decomposition. We present results to this effect and demonstrate the behavior of the generalized singular value decomposition in the presence of a highly ill-conditioned basis of particular solutions.

Key words. eigenvalues, method of particular solutions, point matching, subspace angles, generalized singular value decomposition

AMS subject classifications. 65F15, 65F22, 65N25

DOI. 10.1137/060651057
1. Introduction. The idea of the method of particular solutions (MPS) is to approximate eigenvalues and eigenfunctions of

(1.1a) $-\Delta u = \lambda u$ in $\Omega$,
(1.1b) $u = 0$ on $\partial\Omega$,
from a space of particular solutions that satisfy (1.1a) but not necessarily (1.1b). In this article we assume that Ω is a planar region. A famous article on this method was published in 1967 by Fox, Henrici, and Moler [17], who used the MPS to compute the smallest eigenvalues of the L-shaped region to up to 8 digits of accuracy. Very similar ideas were also contained in the earlier papers by Conway and Farnham [10] and Conway and Leissa [11]. The MPS is also known under the name "point matching method" in the literature (see, for example, [10]). Closely related is also the method of fundamental solutions [9, 23]. The results of this paper apply equally well to the application of these methods to elliptic eigenvalue problems.

The MPS is especially effective for very accurate computations. Although mesh-based methods like FEM can be tuned to deliver exponential convergence on certain regions (e.g., hp-FEM methods), the implementation can be a difficult task, while the MPS can often be implemented in a few lines of Matlab code (see, for example, the Matlab code given in [8]). The MPS also seems very well suited to computing eigenvalues with very large wave numbers: while the matrix sizes in FEM-based methods grow rapidly for high eigenvalues, the computational effort of the MPS stays reasonable, especially if accelerated variants like the "scaling method" are used [1].

∗ Received by the editors January 27, 2006; accepted for publication (in revised form) November 28, 2007; published electronically March 21, 2008. http://www.siam.org/journals/sisc/30-3/65105.html
† School of Mathematics, The University of Manchester, Oxford Road, Manchester, M13 9PL, United Kingdom ([email protected]).
Unfortunately, the MPS suffers from problems for complicated regions, coming from ill-conditioning of the basis functions. These problems were observed in the paper of Fox, Henrici, and Moler [17] and also noted by later authors (see, for example, [13]). In [8] we returned to the original idea of the MPS and showed that reformulating the MPS as a problem of computing the angle between certain subspaces makes it applicable to a variety of polygonal and other planar regions. Numerical examples show that, even for complicated regions, eigenvalues can be computed to 10 digits or more with this subspace angle approach.

While writing [8] we were not aware that, independently of the numerical analysis community, physicists had developed very similar methods in connection with semiclassical mechanics and quantum chaos. This fact was brought to our attention in 2004 by Barnett. The physicists are particularly interested in eigenmodes related to high wave numbers. One of the leaders of this effort has been Heller, who together with his colleagues has developed methods closely related to the MPS [19, 20], though using different terminology. Another key contribution in this area was the "scaling method" of Vergini and Saraceno. These ideas were recently brought together and improved in Barnett's thesis [1].

In this paper we review the various methods of particular solutions and show that a suitable tool to describe them is the generalized singular value decomposition. Of the various linear algebra tools which are used in the different methods, i.e., the singular value decomposition (SVD), the generalized eigenvalue decomposition (GEVD), and the generalized singular value decomposition (GSVD), it turns out that the GSVD leads to the most robust and widely applicable approach. Furthermore, it turns out that the subspace angle method proposed in [8] is just a GSVD in disguise. Hence, the stability results which we discuss in this paper are also valid for the subspace angle method and lead to a further understanding of this method.

The paper is organized as follows. In section 2 we present the MPS and its implementations using the SVD, GEVD, and GSVD. While singular values are perfectly conditioned, this is not true for generalized singular values; therefore, in section 3 we investigate the numerical stability of the GSVD approach. In section 4 we analyze a regularization strategy for the GSVD, which was proposed by Barnett for the GEVD approach to the MPS. In section 5 we discuss the limits of the GSVD if the basis of particular solutions admits only ill-conditioned representations of approximate eigenfunctions. The paper finishes in section 6 with a short summary and conclusions. All notation is standard. We will frequently use $\epsilon_{\mathrm{mach}}$ for the machine precision ($\epsilon_{\mathrm{mach}} \approx 2.2\times 10^{-16}$ in IEEE arithmetic).

2. The method of particular solutions. The MPS approximates eigenpairs $(\lambda_k, u_k)$ of (1.1) from a space of functions that satisfy (1.1a) but not necessarily (1.1b). Let

(2.1) $\mathcal{A}(\lambda) := \mathrm{span}\{\Phi_1(\lambda; z), \ldots, \Phi_p(\lambda; z)\} \subset C^2(\Omega) \cap C(\overline{\Omega})$
be such a space; therefore,

$-\Delta\Phi_k(\lambda; z) = \lambda\,\Phi_k(\lambda; z), \qquad k = 1, \ldots, p,$

for $z \in \Omega$. Fox, Henrici, and Moler used Fourier–Bessel functions of the form $\Phi_k(\lambda; z) = J_{\alpha k}(\sqrt{\lambda}\,r)\sin(\alpha k\theta)$, which are the exact solutions of (1.1) in a wedge with interior angle $\pi/\alpha$. In physics, real plane waves and evanescent plane waves
are frequently used as particular solutions [5]. Other possible sets of basis functions are fundamental solutions, which solve the eigenvalue equation (1.1a) in Ω but have singularities located outside of Ω [15]. To simplify the notation we will from now on write $\Phi_k(z)$ instead of $\Phi_k(\lambda; z)$, since the dependence of the particular solutions on λ will be clear from the context.

2.1. An SVD-based formulation of the MPS. Let $z_1, \ldots, z_n \in \partial\Omega$ be boundary collocation points. We are looking for a value λ for which there exists a linear combination

$\Phi = \sum_{k=1}^{p} c_k \Phi_k$
of basis functions which is small at these points. Then we hope that Φ is a good approximation to an eigenfunction of (1.1). Let $A_B(\lambda)$ be the matrix of basis functions evaluated at the $z_k$, i.e., $(A_B(\lambda))_{jk} = \Phi_k(z_j)$, $j = 1, \ldots, n$, $k = 1, \ldots, p$. The method is then formulated as the following minimization problem:

(2.2) $\min_{\lambda}\ \min_{c\in\mathbb{R}^p\setminus\{0\}} \frac{\|A_B(\lambda)c\|_2}{\|c\|_2} = \min_{\lambda}\ \xi_p(\lambda),$
where $\xi_p(\lambda)$ is the smallest singular value of the matrix $A_B(\lambda)$. The formulation (2.2) is due to Moler [24]. In earlier approaches the number of collocation points n was chosen equal to the number of basis functions p; in that case $A_B(\lambda)$ is square, and λ was determined by solving $\det(A_B(\lambda)) = 0$ (see, for example, [17]).

The SVD approach can fail if $A_B(\lambda)$ is ill-conditioned for some $\lambda > 0$ far from an eigenvalue. Assume that λ is not close to an eigenvalue and that $A_B(\lambda)$ is ill-conditioned. Then there exists a vector $c \in \mathbb{R}^p$ with $\|c\|_2 = 1$ such that $\|A_B(\lambda)c\|_2 \ll 1$. But the unique solution of (1.1) when λ is not an eigenvalue is the zero function. Hence, the function defined by the coefficient vector c will approximate this zero function. In [8] these functions are called "spurious solutions" of (1.1). In Figure 2.1 we demonstrate this failure for the example of the L-shaped region from Figure 2.2. The upper left plot shows the curve $\xi_p(\lambda)$ for p = 10 Fourier–Bessel basis functions of the form $\Phi_k(r, \theta) = J_{2k/3}(\sqrt{\lambda}\,r)\sin\frac{2k\theta}{3}$. The origin of the polar coordinates is at the re-entrant corner, with the line θ = 0 directed as in Figure 2.2. The minima of $\xi_p(\lambda)$ in the upper left plot of Figure 2.1 point to the first three eigenvalues of (1.1) on this region. In the lower left plot we have chosen p = 60. Now the minima at the eigenvalues are no longer visible on this plotting scale, since $\xi_p(\lambda)$ is small also away from the eigenvalues. The cure is to choose basis functions that are approximately orthogonal in Ω. Such bases were analytically constructed for some regions by Moler in [24]. An automatic way to obtain bases that are approximately orthogonal in the interior of the region is delivered by the GSVD.

2.2. A GSVD formulation. We want to cure the problem of ill-conditioned bases in the SVD approach by orthogonalizing the basis functions in the interior of Ω. Let $\bar{z}_1, \ldots, \bar{z}_m \in \Omega$ be a set of interior points, and let $A_I(\lambda)$ be the matrix of basis
Fig. 2.1. Comparison of the SVD and GSVD approach on the L-shaped region for p = 10 and p = 60 basis functions.

Fig. 2.2. The L-shaped region (polar coordinates at the re-entrant corner, with θ ranging from 0 to 3π/2).
functions evaluated at these interior points, i.e., $(A_I(\lambda))_{jk} = \Phi_k(\bar{z}_j)$, $j = 1, \ldots, m$, $k = 1, \ldots, p$. For the moment assume that $m \geq p$ and that $A_I(\lambda)$ has full column rank. Let $A_I(\lambda) = Q(\lambda)R(\lambda)$ be the QR decomposition of $A_I(\lambda)$. Instead of the discrete basis set given by the columns of $\begin{bmatrix} A_B(\lambda) \\ A_I(\lambda) \end{bmatrix}$ we use

$\begin{bmatrix} A_B(\lambda) \\ A_I(\lambda) \end{bmatrix} R(\lambda)^{-1} = \begin{bmatrix} A_B(\lambda)R(\lambda)^{-1} \\ Q(\lambda) \end{bmatrix}$

in the SVD approach. This is equivalent to orthogonalizing the particular solutions in a discrete inner product over the interior discretization points. We obtain

$\sigma_1(\lambda) := \min_{y\in\mathbb{R}^p\setminus\{0\}} \frac{\|A_B(\lambda)R(\lambda)^{-1}y\|_2}{\|y\|_2}.$

This approach guarantees that every coefficient vector y with $\|y\|_2 = 1$ leads to a trial function of unit discrete norm over the interior points $\bar{z}_1, \ldots, \bar{z}_m$. We therefore avoid spurious solutions that are nearly zero in the interior of the region.
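In Matlab, for a fixed λ, both quantities can be computed in a few lines; here AB and AI denote $A_B(\lambda)$ and $A_I(\lambda)$, with $m \geq p$ and AI of full column rank assumed for the QR variant:

    xi = min(svd(AB));             % xi_p(lambda) in the SVD approach (2.2)
    [Q, R] = qr(AI, 0);            % economy-size QR factorization of A_I
    sig1 = min(svd(AB / R));       % sigma_1 = min ||A_B R^{-1} y||_2 / ||y||_2
    sig1b = min(gsvd(AB, AI));     % the same value via the GSVD of {A_B, A_I}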
For $y = R(\lambda)x$ it follows that

$\sigma_1(\lambda) = \min_{y\in\mathbb{R}^p\setminus\{0\}} \frac{\|A_B(\lambda)R(\lambda)^{-1}y\|_2}{\|y\|_2} = \min_{x\in\mathbb{R}^p\setminus\{0\}} \frac{\|A_B(\lambda)x\|_2}{\|A_I(\lambda)x\|_2}.$
We can reformulate the last equation as the generalized eigenvalue problem

$A_B(\lambda)^T A_B(\lambda)\,x(\lambda) = \sigma_1(\lambda)^2\,A_I(\lambda)^T A_I(\lambda)\,x(\lambda),$

where $\sigma_1(\lambda)^2$ is the smallest eigenvalue of the pencil $\{A_B(\lambda)^T A_B(\lambda),\ A_I(\lambda)^T A_I(\lambda)\}$. However, by using the GSVD we can compute the value $\sigma_1(\lambda)$ directly, without forming the squared matrices $A_B(\lambda)^T A_B(\lambda)$ and $A_I(\lambda)^T A_I(\lambda)$. The definition of the GSVD in the following theorem is a simplified version of the definition given by Paige and Saunders in [25].

Theorem 2.1 (GSVD). Let $A \in \mathbb{R}^{n\times p}$ and $B \in \mathbb{R}^{m\times p}$ be given with $n \geq p$. Define $Y = \begin{bmatrix} A \\ B \end{bmatrix}$ and assume that $\mathrm{rank}(Y) = p$. There exist orthogonal matrices $U \in \mathbb{R}^{n\times n}$ and $W \in \mathbb{R}^{m\times m}$ and a nonsingular matrix $X \in \mathbb{R}^{p\times p}$ such that

$A = USX^{-1}, \qquad B = WCX^{-1},$

where $S \in \mathbb{R}^{n\times p}$ and $C \in \mathbb{R}^{m\times p}$ are diagonal matrices defined as $S = \mathrm{diag}(s_1, \ldots, s_p)$ and $C = \mathrm{diag}(c_1, \ldots, c_{\min\{m,p\}})$ with $0 \leq s_1 \leq \cdots \leq s_p \leq 1$ and $1 \geq c_1 \geq \cdots \geq c_{\min\{m,p\}} \geq 0$. Furthermore, it holds that $s_j^2 + c_j^2 = 1$ for $j = 1, \ldots, \min\{m,p\}$ and $s_j = 1$ for $j = m+1, \ldots, p$.

If $m < p$, we define

(2.3) $c_{m+1} = \cdots = c_p = 0.$
Then $s_j^2 + c_j^2 = 1$ for all $j = 1, \ldots, p$. The values $\sigma_j = s_j/c_j$ are called the generalized singular values of the pencil $\{A, B\}$. If $c_j = 0$, then $\sigma_j = \infty$. The jth column $x_j$ of X is the right generalized singular vector associated with $\sigma_j$.

From Theorem 2.1 it follows that $c_j^2 A^T A x_j = s_j^2 B^T B x_j$. Hence, the finite generalized singular values are the square roots of the finite generalized eigenvalues of the pencil $\{A^T A, B^T B\}$. But, as in the case of the standard SVD, they can be computed without using this squared formulation. In Matlab this is implemented by the gsvd function. Similarly to singular values, the finite generalized singular values of a pencil $\{A, B\}$ have a minimax characterization:
(2.4) $\sigma_j = \min_{\substack{H\subset\mathbb{R}^p \\ \dim(H)=j}}\ \max_{\substack{x\in H \\ Bx\neq 0}}\ \frac{\|Ax\|_2}{\|Bx\|_2}.$

This minimax characterization is an immediate consequence of the minimax characterization of singular values; a short proof is, for example, contained in [6, Thm. 3.4.2]. It follows that the value $\sigma_1(\lambda)$ is the smallest generalized singular value of the pencil $\{A_B(\lambda), A_I(\lambda)\}$. Approximations to the eigenvalues of (1.1) are then given by
the minima of $\sigma_1(\lambda)$ as a function of λ. Note that the GSVD does not require $m \geq p$ for $A_I(\lambda)$. In the two right-hand plots of Figure 2.1 we show the smallest generalized singular value $\sigma_1(\lambda)$ for different values of λ on the L-shaped region. While for p = 10 basis functions it is similar to the curve computed by the SVD approach, we see a large difference for p = 60 basis functions. As explained earlier, the SVD fails here, but the GSVD still lets us easily spot the three minima that point to the eigenvalues.

The application of the GSVD to the MPS was also considered in unpublished work by Eisenstat.¹ His motivation was the minimization of error bounds for the MPS. In the physics community a related approach was introduced by Heller under the name plane wave decomposition method (PWDM) [19, 20]; he used only one point in the interior of the region to normalize the approximate eigenfunctions. A discussion of this method is contained in [1]. In the engineering literature the GSVD has been used in a related context to regularize boundary element formulations for the Laplace eigenvalue problem [21].

The GSVD has an interesting interpretation in terms of angles between subspaces. The smallest principal angle $\theta_1$ between two spaces $S_1 \subset \mathbb{R}^n$ and $S_2 \subset \mathbb{R}^n$ is defined by

$\cos\theta_1 = \max_{\substack{x\in S_1,\ \|x\|_2=1 \\ y\in S_2,\ \|y\|_2=1}} \langle x, y\rangle.$
Theorem 2.2. Define $D_0 \subset \mathbb{R}^{n+m}$ as the space of vectors whose first n entries are zero. Denote by $\mathcal{A}(\lambda)$ the span of the columns of $A(\lambda) := \begin{bmatrix} A_B(\lambda) \\ A_I(\lambda) \end{bmatrix}$. Let $\theta_1(\lambda)$ be the smallest principal angle between $D_0$ and $\mathcal{A}(\lambda)$. Then $\tan\theta_1(\lambda) = \sigma_1(\lambda)$, where $\sigma_1(\lambda)$ is the smallest generalized singular value of the pencil $\{A_B(\lambda), A_I(\lambda)\}$.

Proof. Let $D_0 := \begin{bmatrix} 0 \\ I \end{bmatrix} \in \mathbb{R}^{(n+m)\times m}$. Then

$\cos\theta_1(\lambda) = \max_{\substack{x\in\mathbb{R}^m \\ y\in\mathbb{R}^p}} \frac{\langle D_0 x,\ A(\lambda)y\rangle}{\|D_0 x\|_2\,\|A(\lambda)y\|_2} = \max_{\substack{x\in\mathbb{R}^m \\ y\in\mathbb{R}^p}} \frac{x^T A_I(\lambda)y}{\|x\|_2\,\|A(\lambda)y\|_2} = \max_{y\in\mathbb{R}^p} \frac{\|A_I(\lambda)y\|_2}{\sqrt{\|A_B(\lambda)y\|_2^2 + \|A_I(\lambda)y\|_2^2}},$

from which it follows that

$\tan\theta_1(\lambda) = \min_{y\in\mathbb{R}^p} \frac{\|A_B(\lambda)y\|_2}{\|A_I(\lambda)y\|_2} = \sigma_1(\lambda).$
We can therefore interpret the GSVD approach in a different way: we want to minimize the angle $\theta_1(\lambda)$ between the space of functions that are zero on the boundary collocation points and the space of particular solutions evaluated on boundary and interior points. Based on this idea the subspace angle method was introduced in [8]. Theorem 2.2 shows that this idea is completely equivalent to the GSVD approach. Indeed, let $(c_1(\lambda), s_1(\lambda))$ be the generalized singular value pair associated with $\sigma_1(\lambda)$, that is, $\sigma_1(\lambda) = s_1(\lambda)/c_1(\lambda)$ and $s_1(\lambda)^2 + c_1(\lambda)^2 = 1$. Then the subspace angle method from [8] computes the value $s_1(\lambda)$.

¹ He used it to compute the first eigenvalues of the C-shaped region on the occasion of Cleve Moler's 60th birthday in 1999.
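Theorem 2.2 is also easy to check numerically. A small sketch, assuming AB and AI are given and [AB; AI] has full column rank:

    [n, p] = size(AB);  m = size(AI, 1);
    D0 = [zeros(n, m); eye(m)];      % orthonormal basis of the space D_0
    [Qa, Ra] = qr([AB; AI], 0);      % orthonormal basis of the span A(lambda)
    c1 = max(svd(D0' * Qa));         % cosine of the smallest principal angle
    theta1 = acos(min(c1, 1));
    % tan(theta1) agrees with min(gsvd(AB, AI)) up to roundoff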
2.3. Comparison to generalized eigenvalue formulations. Based on the minimization of error bounds for the MPS, Kuttler and Sigillito [22] published in 1978 a formulation of the MPS which uses the GEVD. (Eisenstat remarked that this idea even goes back to Bergman in 1936 [4].) It was independently rediscovered by Heller's student Barnett [1]. Let the minimal error on the boundary within the space $\mathcal{A}(\lambda)$ be defined as

(2.5) $t_m(\lambda) = \min_{\Phi\in\mathcal{A}(\lambda)\setminus\{0\}} \frac{\|\Phi\|_{\partial\Omega}}{\|\Phi\|_{\Omega}},$

where

(2.6) $\|\Phi\|_{\partial\Omega} := \left(\int_{\partial\Omega} \Phi(s)^2\,ds\right)^{1/2} = \langle\Phi,\Phi\rangle_{\partial\Omega}^{1/2}, \qquad \|\Phi\|_{\Omega} := \left(\int_{\Omega} \Phi(x,y)^2\,dx\,dy\right)^{1/2} = \langle\Phi,\Phi\rangle_{\Omega}^{1/2}$

are the $L^2$-norms of Φ on the boundary ∂Ω and in the interior of Ω, and $\langle\cdot,\cdot\rangle_{\partial\Omega}$ and $\langle\cdot,\cdot\rangle_{\Omega}$ are the associated inner products. If $t_m(\lambda) = 0$, then λ is an eigenvalue. Usually we will not be able to represent an eigenfunction exactly as a linear combination of functions in $\mathcal{A}(\lambda)$; therefore, we look for the minima of $t_m(\lambda)$, which are then approximations to the eigenvalues of (1.1). This strategy was also proposed by Eisenstat in [14]. $t_m(\lambda)$ can be expressed as

$t_m(\lambda)^2 = \min_{\Phi\in\mathcal{A}(\lambda)\setminus\{0\}} \frac{\|\Phi\|_{\partial\Omega}^2}{\|\Phi\|_{\Omega}^2} = \min_{x\in\mathbb{R}^p\setminus\{0\}} \frac{x^T F(\lambda)x}{x^T G(\lambda)x},$

where $(F(\lambda))_{jk} := \langle\Phi_j,\Phi_k\rangle_{\partial\Omega}$ and $(G(\lambda))_{jk} := \langle\Phi_j,\Phi_k\rangle_{\Omega}$. Hence, the value $t_m(\lambda)^2$ is just the smallest eigenvalue $\mu_1(\lambda)$ of the generalized eigenvalue problem

(2.7) $F(\lambda)x(\lambda) = \mu(\lambda)G(\lambda)x(\lambda).$
Barnett used this formulation to compute eigenvalues of the stadium billiard to several digits of accuracy [1]. In practice the integrals appearing in this method are usually evaluated by quadrature rules of the form

$\int_{\partial\Omega} \Phi(s)^2\,ds \approx \sum_{j=1}^{n} w_j^B\,\Phi^2(z_j) \qquad\text{and}\qquad \int_{\Omega} \Phi(x,y)^2\,dx\,dy \approx \sum_{j=1}^{m} w_j^I\,\Phi^2(\bar{z}_j)$

with positive weights $w_j^B$ and $w_j^I$. Let

(2.8) $W_B = \mathrm{diag}(w_1^B, \ldots, w_n^B) \qquad\text{and}\qquad W_I = \mathrm{diag}(w_1^I, \ldots, w_m^I).$

Then

(2.9) $\bar{F}(\lambda) = A_B(\lambda)^T W_B A_B(\lambda), \qquad \bar{G}(\lambda) = A_I(\lambda)^T W_I A_I(\lambda)$

are the matrices obtained by the quadrature rules. For the smallest eigenvalue $\bar{\mu}_1(\lambda)$ of the pencil $\{\bar{F}, \bar{G}\}$ we have $t_m(\lambda) \approx \bar{\mu}_1(\lambda)^{1/2}$. However, the structure of the pencil $\{\bar{F}, \bar{G}\}$ allows the application of the GSVD to compute an approximation of $t_m(\lambda)$ directly; namely, for the smallest generalized singular value $\bar{\sigma}_1(\lambda)$ of the pencil $\{W_B^{1/2}A_B(\lambda),\ W_I^{1/2}A_I(\lambda)\}$ we have $\bar{\sigma}_1(\lambda) = \bar{\mu}_1(\lambda)^{1/2} \approx t_m(\lambda)$.
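In Matlab this weighted variant is a one-line change from the sketch in section 2.2; wB and wI are the assumed vectors of positive quadrature weights:

    WBh = diag(sqrt(wB));  WIh = diag(sqrt(wI));    % W_B^(1/2) and W_I^(1/2)
    sigbar1 = min(gsvd(WBh * AB, WIh * AI));        % approximates t_m(lambda)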
Fig. 2.3. Left: The computed value $\mu_1(\lambda) - \mu_1(\lambda_1)$ close to $\lambda_1$ on the L-shaped region. Right: The value $\sigma_1(\lambda) - \sigma_1(\lambda_1)$ on the same region.
The only differences from the formulation in section 2.2 are the matrices $W_B^{1/2}$ and $W_I^{1/2}$ coming from the quadrature rules. It is not important to choose a very accurate quadrature rule: numerical experiments suggest that we just need enough boundary points to sample the trial functions on ∂Ω and sufficiently many random interior points to ensure that we do not get spurious solutions that are almost zero in the interior of Ω [8].

While with the GSVD we directly compute an approximation for $t_m(\lambda)$, with the GEVD we compute an approximation for $t_m(\lambda)^2$, which can limit the attainable accuracy in computing the minima of $t_m(\lambda)$, as we demonstrate now. In [2] Barnett showed that around an eigenvalue $\lambda_k$ the function $\mu_1(\lambda)$ behaves quadratically. We can therefore model it there as $\mu_1(\lambda) \approx \mu_1(\lambda_k) + C(\lambda - \lambda_k)^2$ for some $C > 0$. Computing the eigenvalues of $\{\bar{F}(\lambda), \bar{G}(\lambda)\}$ by a standard solver like Matlab's eig can produce absolute errors at least on the order of machine precision. In an interval around $\lambda_k$ of width $2\sqrt{\epsilon_{\mathrm{mach}}/C}$ these errors are of the same magnitude as, or larger than, $|\mu_1(\lambda) - \mu_1(\lambda_k)|$, due to the quadratic behavior of $\mu_1(\lambda)$ there. Hence, in this interval of size $\Theta(\sqrt{\epsilon_{\mathrm{mach}}})$ we may not be able to detect the minimum of $t_m(\lambda)$. If we instead compute the smallest generalized singular value $\sigma_1(\lambda)$ of the pencil $\{A_B(\lambda), A_I(\lambda)\}$ directly by a standard GSVD solver like gsvd in Matlab, we can expect errors in the computed value $\tilde{\sigma}_1(\lambda)$ on the order of machine precision if the problem is well conditioned. Since $\sigma_1(\lambda)$ is almost linear close to $\lambda_k$, the floating point errors are of the same magnitude as, or larger than, $|\sigma_1(\lambda) - \sigma_1(\lambda_k)|$ only in an interval around $\lambda_k$ of width $\Theta(\epsilon_{\mathrm{mach}})$. Hence, we expect to find the minima of $\sigma_1(\lambda)$ to an accuracy of almost machine precision.

In Figure 2.3 we demonstrate this for the example of the L-shaped region from Figure 2.2. From now on we plot only the computed points rather than connected lines, to better emphasize numerical errors in the plotted curves. We approximate $\mu_1(\lambda)$ by the smallest eigenvalue of the pencil $\{A_B(\lambda)^T A_B(\lambda),\ A_I(\lambda)^T A_I(\lambda)\}$, where the boundary points are equally spaced and the interior points are randomly chosen. Since the reason for the $\sqrt{\epsilon_{\mathrm{mach}}}$ accuracy problem is the quadratic nature of $\mu_1(\lambda)$ close to an eigenvalue $\lambda_k$, and not the accuracy of the quadrature rule, this simple approximation is justified. The smallest generalized singular value $\sigma_1(\lambda)$ is computed by the GSVD of $\{A_B(\lambda), A_I(\lambda)\}$.
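For reference, the two computations compared in Figure 2.3 can be sketched as follows for a fixed λ (AB, AI as before):

    mu1 = min(eig(AB' * AB, AI' * AI));   % squared GEVD formulation, ~ t_m^2
    sig1 = min(gsvd(AB, AI));             % direct GSVD computation,  ~ t_m
    % near an eigenvalue, mu1 varies quadratically in lambda while sig1 is
    % almost linear, which is why the GSVD locates the minimum more sharply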
1286
TIMO BETCKE
by the GSVD of {AB (λ), AI (λ)}. In the left-hand plot of Figure 2.3 the function √ μ1 (λ) has a plateau of width on the order of mach close to λ1 in which the values are essentially determined by numerical errors, making it hard to detect the minimum to more than the square root of machine precision. In contrast, in the right-hand plot of Figure 2.3 we show the computed value σ1 (λ) on a finer scale. The function behaves almost linearly, and the minimum can easily be determined to 12 digits and more (in [8] we give 14 digits). Another attempt to solve this problem is to compute the zeros of the derivative μ1 (λ) of μ1 (λ) instead of the minima of μ(λ). Such an approach was used by Driscoll with great success in a related method [13]. But this approach makes it necessary to accurately compute derivatives of F (λ) and G(λ), which might not always be possible. 3. The effect of ill-conditioning. The SVD approach of the MPS fails if the matrix AB (λ) is highly ill-conditioned for some λ far away from an eigenvalue, since the MPS then approximates functions which are zero everywhere in the region. This cannot happen with the GSVD approach since we scale the approximate eigenfunctions to have unit norm in the interior of the region. But while the singular values of a matrix A are perfectly conditioned, the generalized singular values of a pencil {A, B} might be ill-conditioned, introducing large errors in the computed generalized singular values. In this section we investigate these errors and their influence on the ability to detect eigenvalues with the GSVD approach. In Figure 3.1 we show the famous GWW-1 isospectral drum [13, 18]. Eigenfunctions on this region can have singularities at the four corners which are marked by black dots. To obtain accurate eigenvalue and eigenfunction approximations we need to represent these singularities in the approximation basis. With 60 basis functions around each singularity we obtain the approximation λ1 ≈ 2.53794399979 for the first eigenvalue on this region (for details see [8]). We believe all digits to be correct. Let us have a look at the corresponding plot of tan θ(λ) in Figure 3.2, which is computed with Matlab’s GSVD function as the smallest generalized singular value of the pencil {AB (λ), AI (λ)}. On the boundary we used 120 Chebyshev distributed points on each line segment, and in the interior we spread 200 randomly distributed points in the 3
2
1
0
−1
−2
−3
−3
−2
−1
0
1
2
3
Fig. 3.1. The famous GWW-1 isospectral drum. Eigenfunctions can have singularities only at the dotted corners.
THE GSVD AND THE METHOD OF PARTICULAR SOLUTIONS
1287
1 0.9 0.8 0.7
tan θ(λ)
0.6 0.5 0.4 0.3 0.2 0.1 0 1
1.5
2 λ
2.5
3
Fig. 3.2. Plot of tan θ1 (λ) for the GWW-1 isospectral drum. Before coming close to the first eigenvalue we can observe large variation.
smallest rectangle that contains the isospectral drum and then used those 88 points that were inside the drum. The values for different λ show a large variation before coming close to the first eigenvalue where the variation seems to disappear on this plotting scale. The matrix A(λ) = [AB (λ)T AI (λ)T ]T is numerically almost singular for all values λ > 0.2 However, still we are able to detect the minimum of the subspace angle curve. In the following section we investigate this behavior in more detail. >A? 3.1. The error of the GSVD. Let A ∈ Rn×p and B ∈ Rm×p , let Y = B , and ˜ ˜ ˜ assume that rank(Y ) = p. We define a perturbed pencil {A, B} as A = A + ΔA and ˜ = B +ΔB. If (s, c) is a generalized singular value pair of {A, B}, the corresponding B ˜ B} ˜ is denoted by (˜ perturbed generalized singular value pair of {A, s, c˜). Furthermore, s s˜ let σ = c and σ ˜ = c˜ be the corresponding generalized singular values. The right generalized singular vector associated with σ is denoted by x, and the right generalized singular vector associated with σ ˜ is denoted by x ˜. From Theorem 2.1 it follows that
Ax 2 = s and Bx 2 = c with corresponding identities for the perturbed quantities. The difference of σ ˜ and σ can be estimated by considering condition numbers of generalized singular values. Define cond(σ) = lim
sup
δ→0 max( E 2 , F 2 )≤δ
|˜ σ − σ| . δ
In [26] Sun showed that for a simple finite generalized singular value σ the condition number cond(σ) is (3.1)
cond(σ) =
x 2 ( Ax 2 + Bx 2 )
x 2
x 2 (1 + σ). = (1 + σ) = 2
Bx 2
Bx 2 c
The forward error of the GSVD is given as [26, eq. (2.3)] ) ( (3.2) |˜ σ − σ| ≤ cond(σ) max( ΔA 2 , ΔB 2 ) + O max( ΔA 2 , ΔB 2 )2 . 2 We always scale the columns of A(λ) to the unit norm in order to avoid artificial ill-conditioning which is just due to the bad scaling of the Fourier–Bessel functions.
1288
TIMO BETCKE
Let us now return to the GSVD-based MPS. In order for the GSVD approach to be successful we need to ensure that the perturbed value σ ˜1 (λ) is small only if λ is close to an eigenvalue λk . In [14] it is shown that (3.3)
Φ ∂Ω |λ − λk | ≤C min λk Φ∈A(λ)\{0} Φ Ω
for a constant C > 0 that depends only on the region. If we choose a sufficient number of well-distributed boundary and interior points for the method, then we can assume that
Φ ∂Ω
AB (λ)x ∂Ω ≈ C˜
Φ Ω
AI (λ)x Ω for the vector x of coefficients of u in the basis particular solutions and a constant C˜ > 0. A precise relationship between these quantities can be established by the use of quadrature rules and estimating their error. We obtain (3.4)
|λ − λk |
AB (λ)x ∂Ω ˆ 1 (λ) Cˆ min = Cσ p λk x∈R \{0} AI (λ)x Ω
ˆ Numerical experiments in [6] suggest that this is indeed a good for a constant C. estimate. Hence, the unperturbed generalized singular value σ1 (λ) cannot be small if λ is not close to λk and if we choose a sufficient number of discretization points. In practice we are working with the perturbed generalized singular value σ ˜1 (λ) of the pencil {AB (λ) + ΔAB (λ), AI (λ) + ΔAI (λ)}. For a backward stable GSVD method we can assume that max{ ΔAB (λ) 2 , ΔAI (λ) 2 } ≤ K mach , where K is a moderate constant that depends only on the dimension of the problem. Hence, from (3.2) it follows to first order that (3.5)
|˜ σ1 (λ) − σ1 (λ)| ≤ K
x1 (λ) 2 (1 + σ1 (λ)) mach . c1 (λ)
Therefore, if x1 (λ) 2 is of moderate size, then we can expect that the errors in σ ˜1 (λ) are small. The following lemma gives an estimate on x1 (λ) depending on λ. Lemma 3.1. Let σ = s/c be a generalized singular value of the pencil {A, B}, and let x be its corresponding right generalized singular vector. Then (3.6)
x 2 ≤
s , ξp
where ξp is the smallest singular value of A. Proof. From Theorem 2.1 we have Ax 2 = s. Since Ax 2 ≥ ξp x 2 , the result follows. Combining this lemma with (3.5) leads to first order in mach to (3.7)
|˜ σ1 (λ) − σ1 (λ)| ≤ K
σ1 (λ) (1 + σ1 (λ)) mach , ξp (λ)
where ξp (λ) is the smallest singular value of AB (λ). Assume that (3.8)
˜1 (λ)) mach ξ˜p (λ) ≥ (1 + σ
THE GSVD AND THE METHOD OF PARTICULAR SOLUTIONS
1289
for the smallest singular value ξ˜p (λ) of AB (λ) + ΔAB (λ). Then if we treat {AB (λ), AI (λ)} as a perturbation of {AB (λ) + ΔAB (λ), AI (λ) + ΔAI (λ)}, we obtain from (3.7) to first order the bound ˜1 (λ). |˜ σ1 (λ) − σ1 (λ)| ≤ K σ Combining this with (3.4), it follows that |λ − λk | ˆ + K)˜ C(1 σ1 (λ). λk
(3.9)
Hence, even if we perturb AB (λ) and AI (λ), under the assumption that (3.8) holds we have a bound on the relative distance to the next eigenvalue, which also implies that σ ˜1 (λ) can become small only close to an eigenvalue. However, (3.8) is likely to hold for ΔAB (λ) 2 ≈ mach , since then ξp (λ) is perturbed by a quantity in the order of mach . 4. Regularizing the GSVD. In practice it is often useful to remove the oscillations in the computed values for tan θ1 (λ). Hence, we want to regularize the GSVD approach. In this section we will discuss a regularization strategy that is based on an idea by Barnett to regularize the generalized eigenvalue formulation. Let us plot the smallest generalized eigenvalue μ1 (λ) of AB (λ)T AB (λ)x(λ) = μ(λ)AI (λ)T AI (λ)x(λ), which is obtained by setting all weights to 1 in the quadrature rule used for the GEVD approach (see (2.8)). The resulting curve in Figure 4.1 shows large variation. Several of the computed values are negative, and Matlab even returned some complex values for μ1 (λ). The problem is the large common numerical nullspace of AB (λ) and AI (λ). In [1] Barnett projected out this nullspace. Using our notation, this can be done in the following way. Let AI (λ) = U (λ)Σ(λ)V (λ)T be the SVD of AI (λ). Now define a threshold ˆ, and let η1 (λ) ≥ · · · ≥ ηk (λ) > ˆ be the singular values of AI (λ) that are larger than ˆ. Partition V (λ) as V (λ) = [V1 (λ) V2 (λ)], 2.5 2 1.5
0.5
1
μ (λ)
1
0 −0.5 −1 −1.5 −2 1
1.5
2 λ
2.5
3
Fig. 4.1. μ1 (λ) in the case of the GWW-1 isospectral drum.
1290
TIMO BETCKE
where V1 (λ) contains the first k columns and V2 (λ) contains the last p − k columns of V (λ). Then the regularized generalized eigenvalue problem is defined as V1 (λ)T AB (λ)T AB (λ)V1 (λ)ˆ x(λ) = μ ˆ(λ)V1 (λ)T AI (λ)T AI (λ)V1 (λ)ˆ x(λ). A similar strategy was proposed and analyzed by Fix and Heiberger in [16]. The righthand-side matrix now has the singular values η1 (λ)2 ≥ · · · ≥ ηk (λ)2 > ˆ2 . Therefore, to remove all numerically zero singular values of AI (λ)T AI (λ) we need to choose √ ˆ > mach . In [1] Barnett uses a threshold of ˆ = 10−7 . We can apply the same strategy to the GSVD formulation. Then instead of finding the smallest generalized singular value σ1 (λ) of the pencil {AB (λ), AI (λ)} we find the smallest generalized singular value σ ˆ1 (λ) of {AB (λ)V1 (λ), AI (λ)V1 (λ)}. But for the GSVD the following strategy for obtaining a regularization matrix V1 (λ) is more suitable. Let
QB (λ) AB (λ) = R(λ) AI (λ) QI (λ) be the QR decomposition of A(λ). Compute the SVD of R(λ) as R(λ) = UR (λ)ΣR (λ)VR (λ)T .
(4.1)
?T > Note that the singular values of R(λ) are identical to those of AB (λ)T AI (λ)T . The regularization matrix V1 (λ) is defined as the first k columns of VR (λ) associated with those singular values of R(λ) which are above the threshold ˆ. The generalized singular values of {AB (λ)V1 (λ), AI (λ)V1 (λ)} are now identical to those of {QB (λ)U1 (λ), QI (λ)U1 (λ)}, where U1 (λ) contains the first k columns of UR (λ). The smallest generalized singular value of {AB (λ), AI (λ)} is only modestly changed with this strategy if it is not too ill-conditioned. This is shown in the following theorem. Theorem 4.1. Let σ1 = s1 /c1 be the smallest generalized singular value and x1 its corresponding right generalized singular vector of the pencil {A, B} with A ∈ Rn×p and B ∈ Rm×p . Let the regularization matrix V1 ∈ Rp×k be obtained by the strategy described above, and denote by σ ˆj , j = 1, . . . , k, the generalized singular values of the pencil {AV1 , BV1 }. Then the following hold: (a) For all generalized singular values σ ˆj of the pencil {AV1 , BV1 }, σj ≤ σ ˆj . (b) If ˆ x1 2 < c1 , then ˆ1 ≤ σ1 ≤ σ
s1 + ˆ x1 2 . c1 − ˆ x1 2
Proof. Let V2 be the orthogonal complement of V1 ; i.e., V = [V1 V2 ] is an orthogonal Then AV2 y 2 ≤ ˆ y 2 and BV2 y 2 ≤ ˆ y 2 for all y ∈ Rp−k since > AVmatrix. ? 2 ≤ ˆ. Let x1 = V1 y1 + V2 y2 . We have BV2
2
AV1 y1 2 = Ax1 − AV2 y2 2 ≤ Ax1 2 + AV2 y2 2 ≤ s1 + ˆ y2 2 and
BV1 y1 2 = Bx1 − BV2 y2 2 ≥ Bx1 2 − BV2 y2 2 ≥ c1 − ˆ y2 2 .
THE GSVD AND THE METHOD OF PARTICULAR SOLUTIONS
1291
1.4
1.2
regularized tanθ(λ)
1
0.8
0.6
0.4
0.2
0 1
1.5
2 N
2.5
3
Fig. 4.2. The regularized curve of tan θ1 (λ) for the GWW-1 isospectral drum. The variation has disappeared on this plotting scale (compare with Figure 3.2).
With y2 2 ≤ x1 2 and the minimax characterization in (2.4) it follows that σ ˆ1 ≤
AV1 y1 2 s1 + ˆ x1 2 ≤ .
BV1 y1 2 c1 − ˆ x1 2
The fact that σj ≤ σ ˆj , j = 1, . . . , k, also follows immediately from (2.4), since restricting the pencil {A, B} to {AV1 , BV1 } corresponds to minimizing only over a subset of all possible subspaces. A similar result for the regularization of ill-conditioned generalized eigenvalue problems was proved in [16]. Let us apply this theorem to the pencil {AB (λ), AI (λ)}. If σ1 (λ) 1 close to an eigenvalue, then c1 (λ) ≈ 1, and we obtain σ ˆ1 (λ) ≤
s1 (λ) + ˆ x1 (λ) 2 σ(λ) + ˆ x1 (λ) 2 ≈ c1 (λ) − ˆ x1 (λ) 2 1 − ˆ x1 (λ) 2
= σ1 (λ) + (1 + σ1 (λ))ˆ x1 (λ) 2 + O((ˆ x1 (λ) 2 )2 ). Hence, to first order the change of σ1 (λ) is essentially at most ˆ x1 (λ) 2 . This result can also be obtained by noting that AB (λ) − AB (λ)V1 (λ)V1 (λ)T 2 ≤ ˆ, AI (λ) − AI (λ)V1 (λ)V1 (λ)T 2 ≤ ˆ and applying (3.2). Therefore, close to an eigenvalue we can expect only a small penalty due to this regularization strategy if x1 (λ) 2 is of moderate size there. For example in the case of ˆ1 (λ1 ) = 2.3 × 10−11 , the GWW-1 isospectral drum the parameter ˆ = 10−14 leads to σ −11 while the original value is σ1 (λ1 ) = 1.90 × 10 . The upper bound from Theorem 4.1 is σ ˆ1 (λ1 ) ≤ 7.76×10−11 . The right generalized singular vector x1 (λ1 ) has a magnitude of 103 in that example. In Figure 4.2 the regularized curve tan θˆ1 (λ) = σ ˆ1 (λ) is plotted. The variation away from the eigenvalue is not visible on this scale any more. The following argument due to Eisenstat explains this effect. Let ˆ be the regularization parameter, let y(λ), y(λ) 2 = 1 be a right singular vector corresponding to the smallest singular value ξp (λ) of AB (λ)V1 (λ), and let
1292
TIMO BETCKE
ψp (λ) = AI (λ)V1 (λ)y(λ) 2 . Then by definition 2
A (λ) B V1 (λ)y(λ) ≥ ˆ2 ξp (λ)2 + ψp (λ)2 = AI (λ) 2
ξp (λ) ψp (λ) .
and σ ˆ1 (λ) ≤ Ignoring the higher order term and the factor 1 + σ1 (λ) in (3.7), the computed value σ ˜1 (λ) from the regularized problem satisfies |˜ σ1 (λ) − σ ˆ1 (λ)| ≤ σ ˆ1 (λ)K If ξp (λ) ≥
√ˆ , 2
mach . ξp (λ)
then |˜ σ1 (λ) − σ ˆ1 (λ)| ≤
which is a relative bound if ˆ > this together with σ ˆ1 (λ) ≤
√
ξp (λ) ψp (λ) ,
√
2ˆ σ1 (λ)
K mach , ˆ
2K mach . If ξp (λ) <
√ˆ , 2
then ψp (λ) ≥
√ˆ . 2
Taking
it follows that
ˆ1 (λ)| ≤ K |˜ σ1 (λ) − σ
√ mach mach ≤ 2K , ψp (λ) ˆ
an absolute bound. By increasing ˆ we reduce the bound on the difference between the computed and the exact smallest generalized singular value of the regularized problem in both cases. The SVD-based regularization strategy proposed in this section is not the only > B (λ) ? possible strategy. One can also apply a rank-revealing QR decomposition to A AI (λ) that selects a subset of the columns of this matrix and thereby avoids round-off errors introduced by multiplying QB (λ)U1 (λ). In practice both strategies behaved similarly for our examples. 5. Limits of the GSVD approach. What are the limits of the GSVD approach? Assume that we have a basis of particular solutions for which (5.1)
min Φ∈A(λk )
Φ ∂Ω = O( mach ),
Φ Ω
where λk is an eigenvalue of (1.1). Hence, with a good discretization it also follows that σ1 (λk ) = O( mach ). Then it is still possible that the coefficient vector c = (c1 , . . . , cp )T of the function Φ=
p
ck Φk
k=1
from A(λ) that achieves the minimum in (5.1) has very large coefficients; that is,
c 2 1. But then we can also expect that x1 (λk ) 2 1, where x1 (λk ) is the right generalized singular vector associated with the generalized singular value σ1 (λk ). This may limit the accuracy to which we can compute σ1 (λk ) in floating point arithmetic. From (3.5) it follows that to first order |˜ σ1 (λ) − σ1 (λ)| ≤ K
x1 (λ) 2 (1 + σ1 (λ)) mach ≈ K x1 (λ) 2 mach c1 (λ)
THE GSVD AND THE METHOD OF PARTICULAR SOLUTIONS
1293
Fig. 5.1. A circular L region. 2
10
Fourier−Bessel Real Plane Waves 0
10
−2
10
1
tan(λ )
−4
10
−6
10
−8
10
−10
10
−12
10
0
10
20
30
40
50
60
70
N
Fig. 5.2. Convergence of σ1 (λ1 ) = tan θ1 (λ1 ) on the circular L region using a Fourier–Bessel and a real plane wave basis set.
for σ1 (λ) 1. We therefore have to expect in the worst case that σ ˜1 (λk ) ≈ σ1 (λk ) + K x1 (λk ) 2 mach σ1 (λk ). This shows that it is not enough to have a basis of particular solutions that can approximate an eigenfunction to high accuracy. We also need to ensure that the coefficients of the approximate eigenfunction in that basis do not grow too much. Figure 5.2 shows σ1 (λ1 ) = tan θ1 (λ1 ) for two different basis sets at the smallest eigenvalue λ1 of the circular L region in Figure 5.1. Using a growing number of Fourier–Bessel functions, we can minimize σ1 (λ1 ) until 10−12 . But with a real plane wave basis that theoretically leads to the same rate of convergence on this region, we can minimize σ1 (λ) only up to 10−3 . This becomes clear by looking at x1 (λ1 ) 2 . If N = 20, we have for the Fourier–Bessel basis the value x1 (λ1 ) 2 ≈ 10, while the same value for the real plane wave basis is approximately 9.7×1012 . One might be tempted to explain this effect purely algebraically with the condition numbers of the > B (λ1 ) ? discrete basis A(λ1 ) = A . At N = 20 for the Fourier–Bessel basis set we have AI (λ1 ) 3 κ2 (A(λ1 )) ≈ 3.8 × 10 , and for the plane waves we obtain the value 9.7 × 1014 , where κ2 (A(λ1 )) is the condition number in the 2-norm of A(λ1 ). However, at N = 70 the
1294
TIMO BETCKE
condition number of the Fourier–Bessel basis set has grown to κ2 (A(λ1 )) ≈ 9 × 1013 . But still we only have x1 (λ1 ) 2 ≈ 2.8 × 103 for this basis set. This behavior cannot be improved by regularization, since it follows from Theorem 4.1 that the error in σ1 (λ) introduced by regularizing is itself on the order of ˆ x1 (λ) 2 . We emphasize that the coefficient growth phenomenon is not a property of a certain algorithm for finding approximate eigenfunctions of a set of particular solutions but a property of the underlying basis set itself. For fundamental solution bases this was recently investigated in [3]. 6. Conclusions. In this article we showed that the GSVD is the right framework for computing accurate approximations of eigenvalues and eigenfunctions of (1.1) from a basis of particular solutions. While SVD-based approaches fail if AB (λ) is highly ill-conditioned, the GSVD still allows accurate approximations of eigenvalues and eigenfunctions in this case, as the two examples suggest. Eigenvalues and eigenfunctions on several challenging regions are also computed in [7, 8, 27] with the subspace angle method which is equivalent to the GSVD approach as we showed in section 2.2. The advantage compared to the GEVD is that we do not work with a squared formulation that may suffer from limited accuracy. Furthermore, the regularization strategy discussed in section 4 allows us to smooth the curve σ1 (λ) with only a small penalty on the minimum of the curve at an eigenvalue λk . Accurate bounds for the relative distance of an approximation λ to the next eigenvalue λk can also be obtained from the smallest generalized singular value σ1 (λ). This is discussed in [8, 6, 14]. The choice of optimal sets of particular solutions for different regions is currently under investigation. But if the basis admits approximations to high accuracy, then, as this paper shows, the GSVD approach is a robust and easily implementable way to obtain them. Acknowledgments. Most of this work was done while the author was a Ph.D. student of Nick Trefethen at Oxford University; his comments and ideas were invaluable for this work. I am also very grateful for the discussions with Stan Eisenstat; his remarks significantly improved this paper. Alex Barnett pointed me to the work done by physicists on this subject and also contributed many fruitful suggestions. I would also like to thank Heike Fassbender and Jens Zemke for their comments on the first drafts of this paper. REFERENCES [1] A. H. Barnett, Dissipation in Deforming Chaotic Billiards, Ph.D. thesis, Department of Physics, Harvard University, Cambridge, MA, 2000. [2] A. H. Barnett, Inclusion of Dirichlet eigenvalues in the semiclassical limit via a boundary spectral problem, in preparation. [3] A. H. Barnett and T. Betcke, Stability and convergence of the method of fundamental solutions for Helmholtz problems on analytic domains, J. Comput. Phys., submitted. ¨ [4] S. Bergman, Uber ein Verfahren zur Konstruktion der N¨ aherungsl¨ osungen der Gleichung ¨ die Knickung von rechteckigen Platten bei Δu + τ 2 u = 0. Anhang zur Arbeit: Uber Schubbeanspruchung, Appl. Math. Mech., 3 (1936), pp. 97–106. [5] M. V. Berry, Evanescent and real waves in quantum billiards and Gaussian beams, J. Phys. A, 27 (1994), pp. L391–L398. [6] T. Betcke, Numerical Computation of Eigenfunctions of Planar Regions, Ph.D. thesis, Computing Laboratory, Oxford University, Oxford, UK, 2005.
THE GSVD AND THE METHOD OF PARTICULAR SOLUTIONS
1295
[7] T. Betcke and L. N. Trefethen, Computations of eigenvalue avoidance in planar domains, Proc. Appl. Math. Mech., 4 (2004), pp. 634–635. [8] T. Betcke and L. N. Trefethen, Reviving the method of particular solutions, SIAM Rev., 47 (2005), pp. 469–491. [9] A. Bogomolny, Fundamental solutions method for elliptic boundary value problems, SIAM J. Numer. Anal., 22 (1985), pp. 644–669. [10] H. D. Conway and K. A. Farnham, The free flexural vibrations of triangular, rhombic and parallelogram plates and some analogies, Int. J. Mech. Sci., 7 (1965), pp. 811–816. [11] H. D. Conway and A. W. Leissa, A method for investigating certain eigenvalue problems of the buckling and vibration of plates, J. Appl. Mech., 27 (1960), pp. 557–558. [12] J. Descloux and M. Tolley, An accurate algorithm for computing the eigenvalues of a polygonal membrane, Comput. Methods Appl. Mech. Engrg., 39 (1983), pp. 37–53. [13] T. A. Driscoll, Eigenmodes of isospectral drums, SIAM Rev., 39 (1997), pp. 1–17. [14] S. C. Eisenstat, On the rate of convergence of the Bergman–Vekua method for the numerical solution of elliptic boundary value problems, SIAM J. Numer. Anal., 11 (1974), pp. 654–680. [15] R. Ennenbach and H. Niemeyer, The inclusion of Dirichlet eigenvalues with singularity functions, ZAMM Z. Angew. Math. Mech., 76 (1996), pp. 377–383. [16] G. Fix and R. Heiberger, An algorithm for the ill-conditioned generalized eigenvalue problem, SIAM J. Numer. Anal., 9 (1972), pp. 78–88. [17] L. Fox, P. Henrici, and C. Moler, Approximations and bounds for eigenvalues of elliptic operators, SIAM J. Numer. Anal., 4 (1967), pp. 89–102. [18] C. Gordon, G. Webb, and S. Wolpert, Isospectral plane domains and surfaces via Riemannian orbifolds, Invent. Math., 110 (1992), pp. 1–22. [19] E. J. Heller, Bound-state eigenfunctions of classically chaotic Hamiltonian systems: Scars of periodic orbits, Phys. Rev. Lett., 53 (1984), pp. 1515–1518. [20] E. J. Heller, Wavepacket dynamics and quantum chaology, in Proceedings of the 1989 Les Houches Summer School on “Chaos and Quantum Physics,” M. J. Giannoni, A. Voros, and J. Zinn-Justin, eds., Elsevier Science, North–Holland, New York, 1991, pp. 547–663. [21] S. R. Kuo, W. Yeih, and Y. C. Wu, Applications of the generalized singular-value decomposition method on the eigenproblem using the incomplete boundary element formulation, J. Sound and Vibration, 235 (2000), pp. 813–845. [22] J. R. Kuttler and V. G. Sigillito, Bounding eigenvalues of elliptic operators, SIAM J. Math. Anal., 9 (1978), pp. 768–773. [23] R. Mathon and R. L. Johnston, The approximate solution of elliptic boundary-value problems by fundamental solutions, SIAM J. Numer. Anal., 14 (1977), pp. 638–650. [24] C. B. Moler, Accurate Bounds for the Eigenvalues of the Laplacian and Applications to Rhombical Domains, Report CS-TR-69-121 (1969), Department of Computer Science, Stanford University, Palo Alto, CA; also available online at ftp://reports.stanford.edu/ pub/cstr/reports/cs/tr/69/121/CS-TR-69-121.pdf [25] C. C. Paige and M. A. Saunders, Towards a generalized singular value decomposition, SIAM J. Numer. Anal., 18 (1981), pp. 398–405. [26] J.-G. Sun, Condition number and backward error for the generalized singular value decomposition, SIAM J. Matrix Anal. Appl., 22 (2000), pp. 323–341. [27] L. N. Trefethen and T. Betcke, Computed eigenmodes of planar regions, AMS Contemp. Math., 412 (2006), pp. 297–314.
SIAM J. SCI. COMPUT. Vol. 30, No. 3, pp. 1296–1317
c 2008 Society for Industrial and Applied Mathematics
A MONOMIAL CHAOS APPROACH FOR EFFICIENT UNCERTAINTY QUANTIFICATION IN NONLINEAR PROBLEMS∗ JEROEN A. S. WITTEVEEN† AND HESTER BIJL† Abstract. A monomial chaos approach is presented for efficient uncertainty quantification in nonlinear computational problems. Propagating uncertainty through nonlinear equations can be computationally intensive for existing uncertainty quantification methods. It usually results in a set of nonlinear equations which can be coupled. The proposed monomial chaos approach employs a polynomial chaos expansion with monomials as basis functions. The expansion coefficients are solved for using differentiation of the governing equations, instead of a Galerkin projection. This results in a decoupled set of linear equations even for problems involving polynomial nonlinearities. This reduces the computational work per additional polynomial chaos order to the equivalence of a single Newton iteration. Error estimates are derived, and monomial chaos is applied to uncertainty quantification of the Burgers equation and a two-dimensional boundary layer flow problem. The results are compared with results of the Monte Carlo method, the perturbation method, the Galerkin polynomial chaos method, and a nonintrusive polynomial chaos method. Key words. uncertainty quantification, polynomial chaos, computational fluid dynamics, non deterministic approaches AMS subject classifications. 65C20, 65C30, 65N30 DOI. 10.1137/06067287X
1. Introduction. In practical computational problems, physical parameters and boundary conditions are often subject to uncertainty. Until recently, these physical uncertainties were usually neglected, which resulted in a single deterministic run for the mean values of the uncertain parameters. Nowadays, such a deterministic simulation is no longer adequate for reliable computational predictions. Therefore, there has recently been a growing interest in accounting for physical uncertainties in computational problems [4]. Modeling uncertainty has been common in computational structure mechanics for some time now [15]. Uncertainty analysis in computational fluid dynamics is relatively new [12, 19, 24]. An important characteristic of problems in fluid dynamics is that they are governed by a system of nonlinear partial differential equations. The discretization of these equations for realistic flows results in computational problems involving millions of unknowns. This makes deterministic computational fluid dynamics already computationally intensive. Therefore, it is important to apply uncertainty quantification methods which can deal with the nonlinearities in an efficient way. By developing efficient methods, uncertainty quantification can become economically feasible in practical flow problems. Physical problems are usually described mathematically in terms of differential equations. The uncertainty in these problems can often be modeled by parametric uncertainties. Therefore, parametric uncertainty is considered in a physical model ∗ Received by the editors October 20, 2006; accepted for publication (in revised form) November 29, 2007; published electronically March 21, 2008. This research is supported by the Technology Foundation STW, applied science division of NWO, and the technology programme of the Ministry of Economic Affairs. http://www.siam.org/journals/sisc/30-3/67287.html † Department of Aerospace Engineering, Delft University of Technology, Kluyverweg 1, 2629 HS Delft, The Netherlands (
[email protected],
[email protected]).
1296
A MONOMIAL CHAOS APPROACH
1297
described by the following differential equation for u(x, t, ω) with operator L and source term S: (1.1)
L(x, t, α(ω); u(x, t, ω)) = S(x, t, α(ω)),
x ∈ D, t ∈ [0, T ],
with appropriate initial and boundary conditions and α(ω) an uncertain parameter with a known uncertainty distribution. The argument ω is used to emphasize the fact that an uncertain variable is a function of the random event ω ∈ Ω of the probability space (Ω, σ, P ). Equation (1.1) is an uncertainty quantification problem for the uncertain variable u(x, t, ω). Below, four widely used uncertainty quantification methods are briefly reviewed for comparison with the proposed monomial chaos approach: the Monte Carlo method, the perturbation method [10], the Galerkin polynomial chaos method [3], and a nonintrusive polynomial chaos method [9]. For simplicity, the methods are reviewed for the case of a single uncertain parameter α(ω). They all have extensions to higher dimensions. The Monte Carlo method. A robust approach to solving (1.1) is the Monte Carlo method. It is based on solving the deterministic problem multiple times for a set of N realizations of the uncertain parameter {αk }N k=1 with αk ≡ α(ωk ), (1.2)
L(x, t, αk ; uk (x, t)) = S(x, t, αk ),
k = 1, . . . , N,
with uk (x, t) ≡ u(x, t, ωk ). The stochastic properties of the output can be obtained from the set of N realizations of the uncertain variable {uk (x, t)}N k=1 . Due to the slow convergence rate, the standard Monte Carlo approach can be impractical when solving a single deterministic problem already involving a large amount of computational work. Methods exist to improve the convergence rate of standard Monte Carlo, such as Latin-hypercube sampling and variance reduction techniques; see, for example, [6]. The perturbation method. A fast method for determining low-order statistics is the perturbation method (also called the moment method) [7, 10, 16]. It has recently been applied to problems in computational fluid dynamics [12, 17]. In the perturbation method the statistical moments of the output are expanded around the expected value of the uncertain parameter using Taylor series expansions. These expansions are usually truncated at second order, since for higher orders the equations become extremely complicated [3, 10]. The second-order estimate of the mean value is given by [10] as ∂ 2 u 1 + Var(α(ω)) 2 , (1.3) E[u(x, t, ω)] ≈ u(x, t, ω) 2 ∂α α=μα α=μα with μα ≡ E[α(ω)]. For the first-order approximation, this relation reduces to E[u(x, t, ω)] ≈ u(x, t, ω)|μα . The first-order estimate of the variance is given as (1.4)
Var[u(x, t, ω)] ≈
2 ∂u Var[α(ω)]. ∂α α=μα
The moment approximations require the computation of the first and second sensitivity derivatives of the solution u(x, t, ω) with respect to the uncertain parameter α(ω) for α(ω) = μα . A method for evaluating these sensitivity derivatives is the continuous sensitivity equation method [7, 16]. In the continuous sensitivity equation method a ∂iu differential equation for the ith sensitivity derivative ∂α i |μα is obtained by implicit
1298
JEROEN A. S. WITTEVEEN AND HESTER BIJL
differentiation of the governing equation (1.1) with respect to α for α(ω) = μα . The resulting equation is called the ith continuous sensitivity equation ∂i ∂i (1.5) L(x, t, α(ω); u(x, t, ω)) = S(x, t, α(ω)) . i i ∂α ∂α μα μα The application of the perturbation method is limited to low-order approximations for small perturbations, i.e., inputs with a small variance. Furthermore, the method cannot readily be extended to compute the probability distribution function of the response process [3, 10]. The Galerkin polynomial chaos method. A method that is not limited to loworder statistics and small perturbations is the polynomial chaos expansion introduced by Ghanem and Spanos [3]. The method has recently been applied to computational fluid dynamics [19, 24]. The polynomial chaos expansion is a polynomial expansion of orthogonal polynomials in terms of random variables to approximate the uncertainty distribution of the output. The method is based on the homogeneous chaos theory of Wiener [21]. The homogeneous polynomial chaos expansion, which is based on Hermite polynomials and Gaussian random variables, can approximate any functional in L2 (C) and converges in the L2 (C) sense [2]. It can achieve an exponential convergence rate for Gaussian input distributions due to the orthogonality of the Hermite polynomials with respect to the Gaussian measure. The exponential convergence has been extended to other input distributions by employing other basis polynomials [20, 22, 23]. The expansion coefficients are determined in the context of a stochastic finite element approach by using a Galerkin projection in probability space. The polynomial chaos expansions of the uncertain input parameter α(ω) and the uncertain solution u(x, t, ω) are (1.6)
α(ω) =
1
αj Φj (ξ(ω)),
j=0
u(x, t, ω) =
∞
ui (x, t)Φi (ξ(ω)),
i=0
where {Φi (ξ)}∞ i=0 is a set of orthogonal polynomials and the random variable ξ(ω) is given by a linear transformation of α(ω) to an appropriate standard domain, i.e. [−1, 1], [0, ∞), or (−∞, ∞). Due to this linear transformation the polynomial chaos expansion of α(ω) in (1.6) is exact within the first two terms. For the numerical implementation the polynomial chaos expansion for u(x, t, ω) in (1.6) is truncated to (p+1) terms, where p is the polynomial chaos order of the approximation. Substituting the truncated expansions into (1.1) and performing a Galerkin projection onto each polynomial basis {Φi (ξ)}pi=0 results in a coupled set of (p + 1) deterministic equations ⎞ ⎞ K J ⎛ K J ⎛ p 1 1 αj Φj ; ui Φi ⎠ , Φk = S ⎝x, t, αj Φj ⎠ , Φk (1.7) L ⎝x, t, j=0
i=0
j=0
for k = 0, 1, . . . , p. This system of equations can be solved using standard iterative methods [5]. The Galerkin polynomial chaos method can be intrusive to implement and computationally intensive to solve, due to the coupled set of equations (1.7). A nonintrusive polynomial chaos method. To avoid solving a coupled set of equations, a nonintrusive polynomial chaos method can be used. It approximates the polynomial chaos coefficients by solving a series of deterministic problems. An example of a nonintrusive polynomial chaos method is the method of Hosder and
1299
A MONOMIAL CHAOS APPROACH
Walters (see [9, 18]). The polynomial chaos expansion coefficients {uk (x, t)}pk=0 in (1.6) are approximated by evaluating the deterministic problem at (p + 1) points in random space {ξk }pk=0 , with ξk ≡ ξ(ωk ), L(x, t, αk ; u∗k (x, t)) = S(x, t, αk ),
(1.8)
k = 0, 1, . . . , p,
where u∗k (x, t) is the realization of u(x, t, ω) for α(ω) = αk . The polynomial chaos coefficients {uk (x, t)}pk=0 are then approximated by the following relatively small linear system: ⎛ (1.9)
Φ0 (ξ0 )
⎜ Φ (ξ ) ⎜ 0 1 ⎜ .. ⎜ ⎝ . Φ0 (ξp )
Φ1 (ξ0 )
···
Φ1 (ξ1 ) .. .
··· .. .
Φ1 (ξp )
···
Φp (ξ0 )
⎞⎛
u0 (x, t)
⎟ ⎜ u (x, t) ⎟⎜ 1 ⎟⎜ .. ⎟⎜ ⎠⎝ . up (x, t) Φp (ξp ) Φp (ξ1 ) .. .
⎞
⎛
u∗0 (x, t)
⎟ ⎜ u∗ (x, t) ⎟ ⎜ 1 ⎟=⎜ .. ⎟ ⎜ ⎠ ⎝ . u∗p (x, t)
⎞ ⎟ ⎟ ⎟, ⎟ ⎠
which can be solved using a single LU decomposition. This nonintrusive polynomial chaos method can be shown to converge to the Galerkin polynomial chaos expansion coefficients under certain conditions [9]. As for the Monte Carlo method (1.2), nonintrusive polynomial chaos results in a set of equations (1.8) which coincide with the deterministic problem for varying parameter values. However, the number of deterministic evaluations can be orders of magnitude smaller than for a standard Monte Carlo simulation due to the combination with the polynomial chaos expansion. Compared to solving the problem deterministically, using a nonintrusive polynomial chaos method results in a multiplication of computational work by a factor (p + 1). For computationally very intensive problems this increase of computational work can be a major drawback for the application of uncertainty quantification. Consider, for example, practical applications of nonlinear computational fluid dynamics in time dependent problems involving complex geometries. These deterministic problems can already take weeks or even longer to solve. An increase of this amount of computational work by a factor (p + 1) is significant. Especially in iterative design processes of industrial applications this can make uncertainty quantification impractical. On the other hand, uncertainty quantification is in these cases essential for robust design optimization. Therefore, there is a need for a further reduction of the computational costs of uncertainty quantification methods. In this paper, a monomial chaos approach is proposed to reduce the costs of uncertainty quantification in computationally intensive nonlinear problems. The method employs the polynomial chaos expansion with monomials as basis functions. The monomial chaos expansion coefficients are solved for using differentiation of the governing equations, instead of a Galerkin projection. This results in a decoupled set of linear equations even for problems involving polynomial nonlinearities. This reduces the computational work per additional polynomial chaos order to the equivalence of a single Newton iteration. Therefore, monomial chaos can be a computationally efficient alternative for existing uncertainty quantification methods in nonlinear problems. The monomial chaos approach is introduced in this paper for one uncertain input parameter to demonstrate the properties of the method and to make a basic comparison with other uncertainty quantification methods. The extension of monomial chaos to multiple uncertain parameters and random fields is briefly addressed. The paper is organized as follows. The monomial chaos is introduced and error estimates derived in section 2. In section 3 the monomial chaos is applied to
1300
JEROEN A. S. WITTEVEEN AND HESTER BIJL
the Burgers equation to demonstrate the properties of the proposed approach for a standard nonlinear advection-diffusion test problem in one dimension. The results are compared with results of the perturbation method, the Galerkin polynomial chaos method, and the nonintrusive polynomial chaos method in section 4. In section 5 the monomial chaos is applied to a two-dimensional boundary layer flow problem as an example of a standard nonlinear test problem from two-dimensional incompressible fluid dynamics. In section 6 the conclusions are summarized. 2. The monomial chaos approach. In this section the monomial chaos approach is proposed. In section 2.1 the monomial chaos approach is introduced in general as applied to (1.1). Error estimates are given in section 2.2. 2.1. The monomial chaos formulation. The monomial chaos approach employs a polynomial chaos expansion with monomials as basis functions to determine the uncertainty distribution of the output. The equations for the monomial chaos expansion coefficients are obtained by differentiating the deterministic equation with respect to the uncertain input parameter. This results in a decoupled set of (p + 1) equations for the (p + 1) coefficients of a pth-order monomial chaos expansion, in which each equation solves for a monomial chaos coefficient sequentially. Due to the product rule the differentiation of the governing equations also results in a set of linear equations even for problems involving polynomial nonlinearities. This reduces the additional computational work per polynomial chaos order to the equivalence of a single Newton iteration. Therefore, monomial chaos can be an efficient alternative for uncertainty quantification in computationally intensive nonlinear problems. Consider the application of monomial chaos to a physical model involving polynomial nonlinearities and parametric uncertainty given by (1.1), L(x, t, α(ω); u(x, t, ω)) = S(x, t, α(ω)). The uncertain parameter α(ω) and the solution u(x, t, ω) are expanded into a polynomial chaos expansion (2.1)
α(ω) =
1
αj Ψj (ξ(ω)),
u(x, t, ω) =
j=0
∞
ui (x, t)Ψi (ξ(ω)),
i=0
where the random variable ξ(ω) is given by a linear transformation of the uncertain input parameter α(ω) to an appropriate standard domain, i.e., [−1, 1], [0, ∞), or (−∞, ∞) [23]. Due to this linear transformation the polynomial chaos expansion of α(ω) in (2.1) is exact within the first two terms. In the monomial chaos the basis polynomials {Ψi (ξ)}∞ i=0 are monomials around ξ(ω) = μξ with μξ ≡ E[ξ(ω)]: (2.2)
Ψi (ξ(ω)) = (ξ(ω) − μξ )i ,
i = 0, 1, . . . .
These monomials are chosen as basis functions because they satisfy the property i!, i = j, dj Ψi (2.3) = j dξ ξ=μξ 0, i = j, which says that taking the jth derivative of the monomials {Ψi (ξ)}∞ i=0 with respect to ξ at ξ(ω) = μξ results in a nonzero term for i = j only. This property of monomials results in the decoupled set of equations for the monomial chaos coefficients {ui (x, t)}.
1301
A MONOMIAL CHAOS APPROACH
Substitution of the monomial chaos expansions (2.1) with (2.2) into the governing equation (1.1) results in ⎞ ⎛ ⎞ ⎛ 1 ∞ 1 αj Ψj (ξ); ui (x, t)Ψi (ξ)⎠ = S ⎝x, t, αj Ψj (ξ)⎠ . (2.4) L ⎝x, t, j=0
i=0
j=0
To obtain a set of equations for the expansion coefficients of the solution {ui (x, t)}, (2.4) is differentiated with respect to ξ for ξ(ω) = μξ . Taking the kth derivative of (2.4) results in an equation for the kth expansion coefficient uk (x, t), ⎛ ⎞ 1 ∞ k ∂ L ⎝x, t, αj Ψj (ξ); ui (x, t)Ψi (ξ)⎠ k ∂ξ j=0
i=0
ξ=μξ
⎛
⎞ 1 ∂k ⎝ = k S x, t, αj Ψj (ξ)⎠ ∂ξ
(2.5)
j=0
ξ=μξ
for k = 0, 1, . . . . This set of equations can be discretized using standard discretization techniques [8]. Due to the combination of differentiation of (2.4) and property (2.3), all higher-order coefficients {ui (x, t)}∞ i=k+1 drop out of the equation, which results in a decoupled set of equations for uk (x, t) in terms of {ui (x, t)}k−1 i=0 . This is illustrated in section 3 where the monomial chaos is applied to the Burgers equation. Furthermore, the decoupled set of equations (2.5) is linear in uk (x, t) due to the product rule in differentiation, even if the governing equation (1.1) contains polynomial nonlinearities (except for k = 0). The equation for k = 0 coincides with the deterministic problem for the expected value of the uncertain parameter μα . For nonlinear problems solved using Newton linearization, the additional computational work per polynomial chaos order is proportional to one Newton iteration. A pth-order monomial chaos approximation of the solution u(x, t, ω) is given by truncating the monomial chaos expansion for u(x, t, ω) in (2.1) at p. The monomial chaos coefficients {αj }1j=0 of the uncertain parameter α(ω) with a known uncertainty distribution can be determined by differentiating the monomial chaos expansion for α(ω) in (2.1) with respect to ξ for ξ(ω) = μξ , which results, using property (2.3), in 1 dj α (2.6) αj = , j = 0, 1, j! dξ j ξ=μξ j
where ddξαj |μξ is known and α0 = μα . Equations (2.5) are similar to the continuous sensitivity equations (1.5) of the perturbation method, which are obtained by implicit differentiation. The monomial chaos method can be viewed as an extension of the perturbation method, which is usually limited to second-order approximations of the first two moments. The monomial chaos approach can be employed for obtaining higher-order approximations of the uncertainty distribution and the statistical moments of the output at computational costs equivalent to those of the perturbation method. On the other hand, in the monomial chaos approach the uncertain parameter and variables are expanded in a polynomial expansion as in the polynomial chaos method, using monomials instead of orthogonal polynomials in the polynomial chaos method.
1302
JEROEN A. S. WITTEVEEN AND HESTER BIJL
The monomial chaos approach can therefore be applied to the same set of arbitrary input probability distributions as the polynomial chaos method. The outputs of the monomial chaos approach are higher-order approximations of the distribution and the statistical moments by solving a decoupled set of linear equations, instead of a possibly coupled set of nonlinear equations in the polynomial chaos method. Next to the relatively low computational work per polynomial chaos order the monomial chaos has additional advantages which are important for practical applications. First, the polynomial chaos order of the monomial chaos approximation can be determined during the computation while solving for the higher-order coefficients sequentially. Second, the equations (2.5) depend only on the mean value of the uncertain input parameter μα . Therefore, the influence of different input uncertainty distributions and variances can be studied after solving (2.5) once. This is an important property since in practical problems the input distribution itself can also be subject to uncertainty. The monomial chaos is moderately intrusive, since for solving (2.5) the summation of the matrix and vector entries in the deterministic solver have to be modified. For decreasing the intrusiveness of the monomial chaos, the differentiation of the governing equations can be replaced by finite difference differentiation in random space. Here, the monomial chaos approach is considered for a single uncertain input parameter. The monomial chaos approach can be extended to multiple uncertain input parameters by introducing a vector of random variables ξ(ω) = (ξ1 (ω), . . . , ξn (ω)), where n is the number of uncertain input parameters. The basis consists in that case of multidimensional monomials Ψi (ξ(ω)), which are tensor products of the onedimensional monomials Ψi (ξj (ω)). The set of equations for the monomial chaos coefficients (2.5) is then derived using mixed partial derivatives of (2.4) with respect to the random variables {ξj (ω)}nj=0 . The uncertainty quantification methods reviewed in section 1 can also be extended to multiple uncertain input parameters. For comparison, in the extension of the perturbation method to multiple uncertain input parameters, the statistical moments of the output are expanded around the expected value of the uncertain parameters using multidimensional Taylor series expansions. The polynomial chaos method can be extended to n uncertain parameters by using a multidimensional polynomial chaos expansion in (1.6). The multidimensional polynomial chaos expansion is based on a vector of random variables ξ(ω) = (ξ1 (ω), . . . , ξn (ω)) and multidimensional orthogonal polynomials Φi (ξ(ω)). A random field can be handled by the monomial chaos method by first representing the random field in terms of a finite number of independent random input parameters using a Karhunen–Lo`eve expansion [11] as in the polynomial chaos method. For a random field with a relatively high spatial correlation, the number of random input parameters needed to reach a reasonable accuracy with the Karhunen–Lo`eve expansion can be small. In that case the monomial chaos method can be applied to the random input parameters to resolve the effect of the random field. For random fields and random processes with low correlation the required number of random input parameters can be much higher, and approaches other than the monomial chaos method or the polynomial chaos method can be more competitive. 2.2. Error estimates. 
In this section error estimates for the monomial chaos approach are derived. For simplicity the arguments x and t are dropped. After computing the monomial chaos coefficients in (2.5), approximations of the mean μu , variance σu2 , higher-order moments, and the distribution function can be derived. If
A MONOMIAL CHAOS APPROACH
1303
the uncertain variable u(ω) is expanded in an infinite monomial chaos series, the mean μu is given in terms of the monomial chaos coefficients {ui }∞ i=0 by (2.7)
μu =
∞
ui μξ,i ,
i=0
with μξ,i the ith central statistical moment of ξ(ω), (2.8) μξ,i = Ψi (ξ)pξ (ξ)dξ, supp(ξ)
with μξ,0 = 1, μξ,1 = 0 and where supp(ξ) and pξ (ξ) are the support and the probability density of ξ(ω), respectively. The variance σu2 is given by σu2 =
(2.9)
∞ ∞
u ˜i u ˜j μξ,i+j ,
i=0 j=0
with (2.10)
u ˜i =
ui − μu , ui ,
i = 0, i = 1, 2, . . . .
In the numerical implementation the infinite series in (2.7) and (2.9) are truncated at a polynomial chaos order p. The errors in the approximation in the mean εμu and the variance εσu2 due to the truncation of the monomial chaos expansion are then given by εμu = −
(2.11)
∞
ui μξ,i ,
i=p+1
and (2.12)
ε
σu2
= −2
∞
∞
ui uj μξ,i+j .
i=p+1 j=p+1
If the monomial chaos coefficients ui decrease fast enough with i for i = p+1, p+2, . . . , such that the leading truncation error term is due to neglecting the (p+1)th coefficient, then the truncation errors can be estimated as (2.13)
εμu ≈ −up+1 μξ,p+1
and (2.14)
εσu2 ≈ −2u2p+1 μ2ξ,p+1 ,
which says that the leading error term in the approximation of the variance σu2 results in an underestimation. These a posteriori error estimates can be used in a stopping criterion for determining the polynomial chaos order p of the monomial chaos approximation. Another contribution to the error in the mean μu and the variance σu2 can be due to the divergence of the monomial chaos expansion in a part of the domain of ξ(ω). In
1304
JEROEN A. S. WITTEVEEN AND HESTER BIJL
case of an input distribution with an infinite support, i.e., ξ(ω) ∈ (−∞, ∞), there is always a domain ξ(ω) ∈ (−∞, ξ − ]∪[ξ + , ∞) in which the monomial chaos expansion of u(ω), (2.1), diverges. However, it is demonstrated in the following propositions that the divergence of the monomial chaos expansion in ξ(ω) ∈ (−∞, ξ − ] ∪ [ξ + , ∞) results in errors ε˜μu , ε˜σu2 which are in general small with respect to the truncation errors εμu and εσu2 , and the mean μu and variance σu2 . Proposition 2.1. Let ξ(ω) ∈ (−∞, ∞) and let ξ(ω) ∈ (ξ − , ξ + ) be the domain of convergence of the monomial chaos expansion of u(ω), (2.1). If the probability density pξ (ξ) of ξ(ω) decreases fast enough as ξ → ±∞ such that ∞ ξ+ ∞ (2.15) |ui | Ψi (ξ)pξ (ξ)dξ ui Ψi (ξ)pξ (ξ)dξ , (−∞,ξ− ]∪[ξ+ ,∞) ξ− i=p+1 i=0 then the error in the monomial chaos approximation of the mean μu due to the divergence of the monomial chaos expansion in ξ(ω) ∈ (−∞, ξ − ] ∪ [ξ + , ∞) is small compared to the truncation error; i.e., |˜ εμu | |εμu |. Proof. The error ε˜μu in the monomial chaos approximation of the mean μu due to the divergence in ξ(ω) ∈ (−∞, ξ − ] ∪ [ξ + , ∞) is defined as ∞ (2.16) ε˜μu = ui Ψi (ξ)pξ (ξ)dξ, i=0
(−∞,ξ − ]∪[ξ + ,∞)
with (2.17) ∞ ∞ ui Ψi (ξ)pξ (ξ)dξ ≤ |ui | Ψi (ξ)pξ (ξ)dξ . (−∞,ξ− ]∪[ξ+ ,∞) (−∞,ξ − ]∪[ξ + ,∞) i=0
i=0
The truncation error εμu in the monomial chaos approximation of the mean μu due to the truncation of the monomial chaos expansion at p is given by (2.11), ∞ ∞ ui Ψi (ξ)pξ (ξ)dξ (2.18) εμu = − −∞
i=p+1 ∞
=−
i=p+1
ui
(−∞,ξ − ]∪[ξ + ,∞)
∞
Ψi (ξ)pξ (ξ)dξ −
i=p+1
ui
ξ+
ξ−
Ψi (ξ)pξ (ξ)dξ,
with (2.19) ∞ ∞ ui Ψi (ξ)pξ (ξ)dξ ≤ |ui | Ψi (ξ)pξ (ξ)dξ . − + − + (−∞,ξ ]∪[ξ ,∞) (−∞,ξ ]∪[ξ ,∞) i=p+1 i=0 According to the assumption, |˜ εμu | |εμu |. Proposition 2.2. Let ξ(ω) ∈ (−∞, ∞), and let ξ(ω) ∈ (ξ − , ξ + ) be the domain of convergence of the monomial chaos expansion of u(ω), (2.1). If (i) the probability density pξ (ξ) of ξ(ω) decreases fast enough as ξ → ±∞ such that ∞ ξ+ ∞ (2.20) ui Ψi (ξ)pξ (ξ)dξ ui Ψi (ξ)pξ (ξ)dξ , (−∞,ξ − ]∪[ξ + ,∞) ξ− i=0
i=0
1305
A MONOMIAL CHAOS APPROACH
or (ii) the probability density pξ (ξ) of ξ(ω) decreases fast enough as ξ → ±∞ such that (2.15) holds and the probability density pξ (ξ)of ξ(ω) decreases fast enough as ξ → ±∞ such that p ∞ ∞ ∞ (2.21) ui Ψi (ξ)pξ (ξ)dξ ui Ψi (ξ)pξ (ξ)dξ , −∞ −∞ i=p+1 i=0 then |˜ εμu | |μu |. Proof. The error ε˜μu in the monomial chaos approximation of the mean μu due to the divergence in ξ(ω) ∈ (−∞, ξ − ] ∪ [ξ + , ∞) is given by (2.16), ε˜μu =
∞
ui
i=0
(−∞,ξ − ]∪[ξ + ,∞)
Ψi (ξ)pξ (ξ)dξ.
The mean μu is given by (2.7) and (2.8), which can be written as (2.22)
μu =
∞
ui
i=0
(−∞,ξ − ]∪[ξ + ,∞)
Ψi (ξ)pξ (ξ)dξ +
∞
ui
i=0
ξ+
ξ−
Ψi (ξ)pξ (ξ)dξ.
According to assumption (i), |˜ εμu | |μu |. The mean μu based on an infinite monomial chaos series expansion of u(ω) is given by (2.7) and (2.8), which can also be written as ∞ ∞ p ∞ ui Ψi (ξ)pξ (ξ)dξ + ui Ψi (ξ)pξ (ξ)dξ. (2.23) μu = i=0
−∞
i=p+1
−∞
The error εμu in the monomial chaos approximation of the mean μu due to the truncation of the monomial chaos expansion at p is given by (2.11), (2.24)
εμu = −
∞ i=p+1
ui
∞
−∞
Ψi (ξ)pξ (ξ)dξ.
According to assumption (ii), we have |εμu | |μu |. The result of Proposition 2.1 gives |˜ εμu | |μu |. An example of a probability distribution that can satisfy the assumptions of Propositions 2.1 and 2.2 is the Gaussian distribution with density function pξ (ξ) = D
(1/ 2πσξ2 )exp(−(ξ − μξ )2 /(2σξ2 )), and μξ and σξ2 the mean and variance of ξ(ω), respectively. This probability density function is exponentially decreasing as ξ → ±∞. Whether a given Gaussian probability distribution satisfies the assumptions depends on the combination of a not too large variance of the uncertain input parameter through σξ2 and sufficient regularity of the uncertain variable u(ω) through {ui }∞ i=0 , − + ξ , and ξ . One can use (2.15), (2.20), and (2.21) to verify whether the monomial chaos expansion of a certain order p is appropriate to use in a particular application. Similar propositions hold for the variance σu2 and the errors ε˜σu2 and εσu2 . 3. Application of monomial chaos. In this section the monomial chaos is applied to the Burgers equation. The test problem is intended for demonstrating the properties of monomial chaos applied to a nonlinear problem and for comparing the results to those of other methods. The Burgers equation is often used to study
1306
JEROEN A. S. WITTEVEEN AND HESTER BIJL
1
0.8
velocity u
ν=0→ 0.6
←ν=0.25 ←ν=0.5 ←ν=1 ν=2→ ν=∞→
0.4
0.2 sensor location x=0.5→ 0 0
0.2
0.4 0.6 position x
0.8
1
Fig. 1. Deterministic solution of the nonlinear advection-diffusion problem for several values of the viscosity parameter ν = {0; 0.25; 0.5; 1; 2; ∞}.
the nonlinear advection-diffusion phenomena of fluid dynamics in one dimension [1], and also in combination with the effect of uncertainty [13, 19, 25]. The efficiency of uncertainty quantification in computational fluid dynamics applications is important, since deterministically fluid dynamics simulations can already result in high computational costs. The monomial chaos formulation for the Burgers equation is given in section 3.1. In section 3.2 numerical results for monomial chaos are presented. 3.1. Burgers’ equation. In this section the one-dimensional steady nonlinear advection-diffusion problem known as the viscous Burgers equation is considered [1]. The Burgers equation for the velocity u(x, ω) in one dimension is given by (3.1)
u
∂u ∂2u − ν 2 = 0, ∂x ∂x
x ∈ [0, 1],
with an uncertain viscosity ν(ω). The deterministic boundary conditions are u(0, ω) = 1 and u(1, ω) = 0. The solution of the deterministic variant of (3.1) is shown in Figure 1 for several values of the viscosity ν = {0; 0.25; 0.5; 1; 2; ∞}. In Figure 1 also the sensor location xsl = 0.5 is shown. The monomial chaos expansions for the uncertain viscosity ν(ω) and the velocity u(x, ω) are (3.2)
ν(ω) =
1
νj Ψj (ξ(ω)),
u(x, ω) =
j=0
∞
ui (x)Ψi (ξ(ω)),
i=0
where ξ(ω) is a linear transformation of ν(ω) to a standard domain and {Ψi (ξ)}∞ i=0 are monomials around ξ(ω) = μξ given by (2.2). The expansion coefficients {νj }1j=0 of the viscosity with a known uncertainty distribution are given by 1 dj ν (3.3) νj = , j = 0, 1, j! dξ j ξ=μξ j
where ddξνj |μξ is known. Substituting the monomial chaos expansions (3.2) into the Burgers equation (3.1) results in (3.4)
∞ ∞
∞
dui d2 ui Ψi (ξ)Ψj (ξ)uj Ψi (ξ)Ψj (ξ)νj = 0. − dx dx2 i=0 j=0 i=0 j=0 1
A MONOMIAL CHAOS APPROACH
1307
Taking the kth derivative of (3.4) with respect to ξ for ξ(ω) = μξ and using the Leibniz identity and property (2.3) results in a differential equation for uk (x), (3.5)
k k l=0
dul − uk−l (x) l dx
k l=max{0,k−1}
k d2 ul νk−l 2 = 0, l dx
k = 0, 1, . . . .
Terms without uk (x) can be brought to the right-hand side of (3.5), which results in (3.6a)
u0
du0 d2 u0 = 0, − ν0 dx dx2
k = 0,
(3.6b)
k−1 k du0 duk d2 uk d2 uk−1 dul + u0 − ν0 u + kν = − (x) , uk k−l 1 dx dx dx2 dx dx2 l
k = 1, 2, . . . .
l=1
As mentioned before, the equation for k = 0, (3.6a), coincides with the deterministic problem for the mean value of the uncertain viscosity ν0 . Equations (3.6b) form a decoupled set of equations for the higher-order monomial chaos coefficients uk (x), with k = 1, 2, . . . , as function of {uj (x)}k−1 j=0 which can be solved sequentially for increasing k. These equations are linear in uk (x). The computational work for solving each equation of (3.6b) is equivalent to one Newton iteration for solving (3.6a). Therefore, monomial chaos results in relatively low computational costs per additional polynomial chaos order compared to the deterministic solve. A pth-order approximation of the solution for u(x, ω) can be obtained by truncating the monomial expansion for u(x, ω) in (3.2) at p. The error estimates (2.13) and (2.14) can be used to determine a suitable polynomial chaos order p of the approximation. Equations (2.7) and (2.9) can be used to determine the approximation of the mean and the variance of the velocity u(x, ω). 3.2. Results for Burgers’ equation. In this section results of the monomial chaos for the Burgers equation are presented. In section 4, the results of the monomial chaos approach are compared to results of the perturbation method, the Galerkin polynomial chaos method, and a nonintrusive polynomial chaos method as reviewed in section 1. For this comparison two error measures are used for the error in the mean εμu (x) and the variance εσu2 (x) at the sensor location xsl = 0.5: σ 2 (xsl ) − σ 2 (xsl ) μu (xsl ) − μu,ref (xsl ) u u,ref , εσu2 = (3.7) εμu = . 2 μu,ref (xsl ) σu,ref (xsl ) The reference solution is a Monte Carlo simulation based on 106 realizations of the uncertain parameter ν(ω) evenly spaced in sample space ω ∈ [0, 1]. Approximations of the probability distribution function and the probability density function are also presented. A second-order finite volume method is used to discretize the spatial domain. The nonlinear problem is solved using Newton linearization with an appropriate convergence criterion εnl = 10−9 for the L∞ -norm of the residual, which results for this problem in four Newton iterations. The mean value of the uncertain input is assumed to be μν = 1. Probability distributions with either a finite or a (semi-)infinite support of the uncertain viscosity ν(ω) are considered. The uniform distribution is chosen for the distribution on the finite domain. This corresponds to the assumption of an interval uncertainty, which
1308
JEROEN A. S. WITTEVEEN AND HESTER BIJL
1
1 probability distribution function
p=7
velocity u
0.8
0.6
0.4
0.2 mean uncertainty bars 0 0
0.2
0.4 0.6 postition x
0.8
(a) mean and uncertainty bars
1
exact MCh p=7 MCh p=3
0.8 0.6 0.4 0.2 0 0.52
0.54
0.56
0.58 0.6 velocity u
0.62
0.64
0.66
(b) distribution function at xsl
Fig. 2. Monomial chaos (MCh) results for the uniform input distribution.
is often used in practical applications in case not enough information is available to prescribe an uncertainty distribution. The input coefficient of variation for the uniform distribution is covν = 0.3. Physical uncertainties are often described using a normal distribution. Since the viscosity is a positive physical parameter, the lognormal distribution is selected instead of the normal distribution for the distribution on the (semi-)infinite domain. For the lognormal distribution an input coefficient of variation of covν = 0.2 is considered to limit the main parameter variations to the same range as for the uniform distribution. It has been verified that variation of the input coefficient of variation covν and the choice of the sensor location xsl do not affect the results significantly in comparison with the other methods. 3.2.1. Results for the uniform input distribution. In Figure 2 the monomial chaos results for the uniform input distribution are presented. In Figure 2(a) the mean μu and the 90% uncertainty intervals are given as function of x. The uncertainty is largest in the interior of the domain due to the deterministic boundary conditions. The uncertainty bars are asymmetrical with respect to the mean, which was expected from the deterministic parameter study of Figure 1. In Figure 2(b) the approximation of the probability distribution function at the sensor location xsl is shown. The monomial chaos approximations for p = 3 and p = 7 are compared to the reference solution. The 7th-order approximation is very accurate, and the 3rd-order approximation results in a less accurate resolution of the tails of the distribution. In Figure 3 the error convergence of the monomial chaos is given as a function of both the polynomial chaos order and the computational work for the uniform input distribution. In the same figure results for the perturbation method are given, which are discussed in section 4. The mean and the variance converge on average exponentially as functions of polynomial chaos order; see Figure 3(a). The odd coefficients do not contribute to the approximation of the mean (2.7), since the central moments μν,i of ν(ω) are zero for odd i. This is the reason for the staircase convergence of the approximation of the mean. In Figure 3(b), the error convergence as a function of the computational work is given in terms of the equivalent number of deterministic solves. The error convergence with respect to computational work is four times faster than the convergence with respect to polynomial chaos order. For p = 0 the monomial chaos results in a
1309
A MONOMIAL CHAOS APPROACH
0
0
10
10
MCh: mean MCh: variance PM: mean PM: variance
−1
10
−1
10
−2
−2
10 error
error
10
−3
10
−4
10
−5
−5
10
10
−6
0
−3
10
−4
10
10
MCh: mean MCh: variance PM: mean PM: variance
−6
2
4 6 polynomial chaos order p
8
(a) polynomial chaos order
10
10
0
2 4 6 8 computational work [# deterministic solves]
10
(b) computational work
Fig. 3. Error convergence of the monomial chaos (MCh) and the perturbation method (PM) for the uniform input distribution.
deterministic solve for the mean value of the uncertain input μν . In this case four Newton iterations are required to solve the nonlinear problem. Per additional polynomial chaos order a linear problem has to be solved. The computational work for these linear solves is equivalent to one Newton iteration for the nonlinear problem. For an 8th-order monomial chaos approximation of the mean with an error of 1 · 10−5 this results in computational costs equivalent to three deterministic solves. These results depend on the number of Newton iterations required for the deterministic problem. 3.2.2. Results for the lognormal input distribution. The results of the monomial chaos for the lognormal input distribution are given in Figure 4. In Figure 4(a) the monomial chaos approximation of the probability density function for p = 3 and p = 7 is compared to the reference solution at the sensor location xsl . Especially near the tails of the distribution the 7th-order approximation is more accurate than the 3rd-order approximation. In Figure 4(b) the weighted error in the approximation of the probability density function is shown. The error weighted with its probability is small near the point of highest probability, which corresponds approximately to μν , and it vanishes in the tails. In Figure 5 the error convergence of the monomial chaos is given for the lognormal input distribution. The mean and the variance converge, but the error convergence is less regular than for the uniform input distribution. Initially the convergence is less smooth due to the alternating over- and underestimation in combination with the asymmetrical input distribution. The first-order coefficient u1 (xsl ) has no contribution to the approximation of the mean, since the first-order central moment μν,1 of ν(ω) is by definition zero. The error convergence with respect to computational work is again four times faster than with respect to polynomial order; see Figure 5(b). 4. Comparison with other methods. In this section the results of the monomial chaos for the Burgers equation are compared to the results of the perturbation method [10], the polynomial chaos method [3], and a nonintrusive polynomial chaos method [9]. An error convergence study with respect to the Monte Carlo reference solution is performed as a function of polynomial chaos order and computational work. For the Galerkin polynomial chaos method an optimal polynomial basis is constructed based on the input uncertainty distribution. For a nonintrusive polyno-
1310
JEROEN A. S. WITTEVEEN AND HESTER BIJL
exact MCh p=7 MCh p=3
25 20 15 10 5 0 0.52
MCh p=7 MCh p=3
0.5 weighted error
probability density function
30
0.4 0.3 0.2 0.1
0.54
0.56
0.58 0.6 velocity u
0.62
0.64
0 0.52
0.66
(a) probability density
0.54
0.56
0.58 0.6 velocity u
0.62
0.64
0.66
(b) weighted error
Fig. 4. Monomial chaos (MCh) results for the lognormal input distribution.
0
0
10
10
MCh: mean MCh: variance PM: mean PM: variance
−1
10
−1
10
−2
MCh: mean MCh: variance PM: mean PM: variance
−2
error
10
error
10
−3
−3
10
10
−4
−4
10
0
10
2
4 6 polynomial chaos order p
(a) polynomial order
8
10
0
2 4 6 8 computational work [# deterministic solves]
10
(b) computational work
Fig. 5. Error convergence of the monomial chaos (MCh) and the perturbation method (PM) for the lognormal input distribution.
mial chaos method the solution is not unique since the samples ξk in random space in (1.9) can be chosen arbitrarily [9]. Here the sampling points are chosen uniformly distributed in ω. 4.1. Comparison with the perturbation method. In contrast with the monomial chaos approach, the perturbation method results only in low-order approximations of the mean and the variance. These results are compared to the results of the monomial chaos in Figures 3 and 5 for the uniform and lognormal input distribution, respectively. The results of the perturbation method are similar to those of the monomial chaos for p = 0, 1, 2. Higher-order monomial chaos approximations for the uniform input distribution are orders of magnitude more accurate than those of the perturbation method. This demonstrates that the monomial chaos method can be viewed as an extension of the perturbation method to higher-order approximations of the mean, the variance, and the distribution function. Also the computational costs of the monomial chaos approach and the perturbation method of the same order are
1311
A MONOMIAL CHAOS APPROACH
0
0
10
10 MCh: mean MCh: variance PC: mean PC: variance
−2
−2
error
10
error
10
MCh: mean MCh: variance PC: mean PC: variance
−4
−4
10
10
−6
10
0
−6
2
4 6 polynomial chaos order p
(a) polynomial order
8
10
10
0
2 4 6 8 10 12 computational work [# deterministic solves]
(b) computational work
Fig. 6. Error convergence of the monomial chaos (MCh) and the Galerkin polynomial chaos (PC) for the uniform input distribution.
similar; see Figures 3(b) and 5(b). For higher-order approximations the monomial chaos approach maintains these low computational costs per additional polynomial chaos order. 4.2. Comparison with the Galerkin polynomial chaos method. The mean and variance approximations of the monomial chaos approach and the Galerkin polynomial chaos method are compared in Figures 6 and 7 for the uniform and lognormal input distribution, respectively. In terms of polynomial chaos order, the Galerkin polynomial chaos method results in exponential and faster convergence than the monomial chaos approach; see Figures 6(a) and 7(a). However, solving the coupled set of nonlinear equations in the Galerkin polynomial chaos method results in a relatively fast increase of computational work per polynomial chaos order in comparison with the monomial chaos approach. Let p be the polynomial chaos order, nN be the number of Newton iterations for solving the nonlinear problem, and nGS be the number of Gauss–Seidel iterations for solving the coupled system of the Galerkin polynomial chaos. Then the monomial chaos approach results in an amount of computational work equivalent to ( npN + 1) deterministic solves. The computational work for the Galerkin polynomial chaos is equivalent to nGS (p + 1) deterministic solves. Therefore, in this case the monomial chaos approach converges as a function of computational work by approximately a factor of three faster than the Galerkin polynomial chaos method; see Figures 6(b) and 7(b). 4.3. Comparison with a nonintrusive polynomial chaos method. In Figures 8 and 9 the error convergence of the monomial chaos and a nonintrusive polynomial chaos method is compared for the uniform and lognormal distribution, respectively. The nonintrusive polynomial chaos method achieves a slightly higher error convergence rate as a function of the polynomial chaos order; see Figures 8(a) and 9(a). The absolute errors are approximately of the same order of magnitude as those of the monomial chaos. The computational work of the nonintrusive polynomial chaos method per additional polynomial chaos order is equivalent to a nonlinear deterministic solve. So, the nonintrusive polynomial chaos results in an amount of computational work equivalent to (p + 1) deterministic solves compared to ( npN + 1) for the monomial chaos approach. This results in this case in an approximately two times higher
1312
JEROEN A. S. WITTEVEEN AND HESTER BIJL
0
0
10
10 MCh: mean MCh: variance PC: mean PC: variance
−2
−2
error
10
error
10
MCh: mean MCh: variance PC: mean PC: variance
−4
−4
10
10
−6
10
0
−6
2
4 6 polynomial chaos order p
8
10
10
0
(a) polynomial order
2 4 6 8 10 12 computational work [# deterministic solves]
(b) computational work
Fig. 7. Error convergence of the monomial chaos (MCh) and the Galerkin polynomial chaos (PC) for the lognormal input distribution.
MCh: mean MCh: variance NIPC: mean NIPC: variance
0
10
−2
error
error
−2
10
−4
10
−6
0
10
−4
10
10
MCh: mean MCh: variance NIPC: mean NIPC: variance
0
10
−6
2
4 6 polynomial chaos order p
(a) polynomial order
8
10
10
0
2 4 6 8 computational work [# deterministic solves]
10
(b) computational work
Fig. 8. Error convergence of the monomial chaos (MCh) and a nonintrusive polynomial chaos method (NIPC) for the uniform input distribution.
error convergence rate as a function of computational work for the monomial chaos approach compared to the nonintrusive polynomial chaos method; see Figures 8(b) and 9(b). 5. Application to two-dimensional boundary layer flow. In this section the monomial chaos approach is applied to a two-dimensional incompressible boundary layer flow as a standard test problem of computational fluid dynamics [14]. Uncertainty quantification in computational fluid dynamics can be highly expensive in practical applications due to the large computational work already involved in solving the deterministic problem. Monomial chaos can be a computationally efficient alternative for uncertainty quantification in this type of problem. For two-dimensional flow along a flat plate the Navier–Stokes equations of viscous fluid dynamics reduce to the nonlinear two-dimensional incompressible boundary layer
1313
A MONOMIAL CHAOS APPROACH
2
2
10
10 MCh: mean MCh: variance NIPC: mean NIPC: variance
0
−2
10
−4
−2
10
−4
10
0
0
10 error
error
10
MCh: mean MCh: variance NIPC: mean NIPC: variance
10 2
4 6 polynomial chaos order p
8
10
(a) polynomial order
0
2 4 6 8 computational work [# deterministic solves]
10
(b) computational work
Fig. 9. Error convergence of the monomial chaos (MCh) and a nonintrusive polynomial chaos method (NIPC) for the lognormal input distribution.
equations ∂u ∂v + = 0, ∂x ∂y
(5.1a)
(5.1b)
ρu
∂u ∂2u ∂u + ρv − μ 2 = 0, ∂x ∂y ∂x
where u and v are the velocity components parallel and perpendicular to the plate, respectively. The flow is assumed to be laminar, the pressure gradient normal to the plate is neglected, and the density ρ and viscosity μ are assumed to be uniform and independent of temperature. The boundary layer equations describe the conservation of mass (5.1a) and the conservation of momentum in the free stream direction (5.1b). The flat plate is aligned with the free stream direction x; see Figure 10. The free stream velocity u∞ equals unity, and the density at standard sea level conditions, ρISA = 1.225kg/m3 , is used. The computational domain has length 1m and height 0.05m and is discretized with cells of length Δx = 1 · 10−3 m with an aspect ratio of 2. A mixed upwind-central discretization is used. The flat plate of length 0.9m starts at x = 0.1m. To solve the deterministic problem, eight Newton iterations were required to reach the convergence criterion of εu = 1 · 10−4 in the L1 -norm. The uncertainty is introduced in terms of an uncertain dynamic viscosity coefficient μ(ω). The uncertainty is described by a lognormal distribution, since viscosity is a positive physical parameter. The mean value is the viscosity at standard sea level conditions, μISA = 1.789 · 10−5 kg/ms, and the coefficient of variation is covμ = 5%. The effect of the uncertainty in the viscosity on the velocity field and the drag of the flat plate are considered. A third-order monomial chaos expansion is employed to solve for the uncertainty propagation in the boundary layer flow. The uncertain velocity components u(x, y, ω) and v(x, y, ω) and the viscosity μ(ω) are expanded in a monomial chaos expansion. After substitution and differentiation of the governing equations (5.1), the uncertainty
1314
JEROEN A. S. WITTEVEEN AND HESTER BIJL
y .05 8
u 8
u=u v=0
0
.1
u=v=0
x
1
Fig. 10. The two-dimensional boundary layer flow problem.
quantification problem is given by ∂vk ∂uk + = 0, ∂x ∂y
(5.2a)
(5.2b)
k k l=0
l
uk−l
k ∂ul k ∂ul + − vk−l l ∂x ∂y l=0
k l=max{0,k−1}
k ∂ 2 ul μk−l 2 = 0, ∂x l
for k = {0, 1, 2, 3}. In Figure 11 the results for the mean μu (x, y) and the standard deviation σu (x, y) of the u-velocity field are shown. The presence of the flat plate results in a typical boundary layer behavior of the mean u-velocity field; see Figure 11(a). The standard deviation of the u-velocity field has local maxima inside the boundary layer and near the leading edge of the flat plate. It vanishes both near the flat plate further downstream and in the outer flow; see Figure 11(b). The error estimates (2.13) and (2.14) estimate a maximum error of 4 · 10−6 and 8 · 10−5 in the mean and variance field, respectively. The drag Fdrag (ω) of the two-sided flat plate is a function of the uncertain viscosity μ(ω) and the uncertain velocity gradient at the wall ∂u ∂y |y=0 (ω), 1 ∂u (5.3) Fdrag (ω) = 2 τw (ω)dx = 2 μ(ω) (ω)dx, ∂y L 0.1 y=0
where τw (ω) is the skin friction. In Figure 12 the third-order monomial chaos approximation of the uncertainty distribution of the drag is shown. In Figure 12(a) the probability distribution function is compared to a Monte Carlo simulation based on 100 realizations uniformly sampled in ω. The results show good agreement. In Figure 12(b) the error in the distribution function weighted by its probability is given. The error is minimal for the drag corresponding to the mean value of the viscosity and vanishes in the tails. The additional computational costs of the presented uncertainty quantification are equivalent to less than a deterministic solve. As mentioned before, solving the nonlinear deterministic problem requires eight Newton iterations. The third-order monomial chaos results in three linear solves in addition to the deterministic solve
1315
A MONOMIAL CHAOS APPROACH
(a) mean μu (x, y)
(b) standard deviation σμ (x, y)
Fig. 11. Uncertain u-velocity field in the two-dimensional boundary layer flow problem subject to uncertain viscosity.
0.14 0.12
0.8
0.1 0.6
weighted error
probability distribution function
1
0.4 0.2 Monomial Chaos Monte Carlo
0 3
3.2
3.4 drag [N]
3.6
3.8
0.08 0.06 0.04 0.02
−3
x 10
0
(a) probability distribution function
3
3.2
3.4 drag [N]
3.6
3.8 −3
x 10
(b) weighted error
Fig. 12. Uncertainty distribution of the drag in the two-dimensional boundary layer flow problem subject to uncertain viscosity.
for the mean value of the uncertain input parameter. So, the additional computational costs for the uncertainty quantification using monomial chaos are in this case equivalent to 38 of the computational cost for solving the deterministic problem. Performing uncertainty quantification in computationally intensive practical applications is economically feasible with this order of computational costs. 6. Summary. A monomial chaos approach is proposed for efficient uncertainty quantification in computationally intensive nonlinear problems. The proposed approach employs a polynomial chaos expansion with monomials as basis functions. The equations for the deterministic coefficients are obtained by differentiating the governing equations. Propagating uncertainty through nonlinear equations can be computationally intensive for other polynomial chaos methods. It usually results in a set of nonlinear equations which can be coupled. The proposed monomial chaos approach results in a decoupled set of linear equations even for problems involving
1316
JEROEN A. S. WITTEVEEN AND HESTER BIJL
polynomial nonlinearities. This reduces the computational work per additional polynomial chaos order to the equivalence of a single Newton iteration. Error estimates for the monomial chaos approach have been presented. It has been demonstrated numerically that the monomial chaos approach can achieve a 2–3 times faster convergence as a function of computational work than other polynomial chaos methods. Application to a two-dimensional flow problem demonstrated that the additional computational work for performing an uncertainty quantification using monomial chaos can be smaller than a single deterministic solve. REFERENCES [1] D. A. Anderson, J. C. Tannehill, and R. H. Pletcher, Computational Fluid Mechanics and Heat Transfer, Ser. Comput. Methods Mech. Thermal Sci., McGraw–Hill, New York, 1997. [2] R. H. Cameron and W. T. Martin, The orthogonal development of nonlinear functionals in series of Fourier-Hermite functionals, Ann. Math., 48 (1947), pp. 385–392. [3] R. G. Ghanem and P. Spanos, Stochastic Finite Elements: A Spectral Approach, SpringerVerlag, New York, 1991. [4] R. G. Ghanem and S. F. Wojtkiewicz, eds., Special issue on uncertainty quantification, SIAM J. Sci. Comput., 26 (2004), issue 2. [5] A. Greenbaum, Iterative Methods for Solving Linear Systems, SIAM, Philadelphia, 1997. [6] J. M. Hammersley and D. C. Handscomb, Monte Carlo Methods, Methuen’s Monographs on Applied Probability and Statistics, Fletcher & Son, Norwich, CT, 1964. [7] E. J. Haug, K. Choi, and V. Komkov, Design Sensitivity Analysis of Structural Systems, Academic Press, Orlando, FL, 1986. [8] C. Hirsch, Numerical Computation of Internal and External Flows, Vol. 1: Fundamentals of Numerical Discretization, Wiley, Chichester, UK, 1988. [9] S. Hosder, R. W. Walters, and R. Perez, A non-intrusive polynomial chaos method for uncertainty propagation in CFD simulations, in Proceedings of the 44th AIAA Aerospace Sciences Meeting and Exhibit, Reno, NV, 2006, AIAA-2006-891. [10] M. Kleiber and T. D. Hien, The Stochastic Finite Element Method, John Wiley and Sons, New York, 1992. `ve, Probability Theory, 4th ed., Springer-Verlag, New York, 1977. [11] M. Loe ´ [12] J.-N. Mahieu, S. Etienne, D. Pelletier, and J. Borggaard, A second-order sensitivity equation method for laminar flow, Int. J. Comput. Fluid D, 19 (2005), pp. 143–157. [13] L. Mathelin and O. P. Le Maˆitre, A posteriori error analysis for stochastic finite element solutions of fluid flows with parametric uncertainties, in Proceedings of the European Conference on Computational Fluid Dynamics (ECCOMAS CFD 2006), Egmond aan Zee, The Netherlands, P. Wesseling, E. O˜ nate, and J. P´eriaux, eds., TU Delft, The Netherlands, 2006. [14] H. T. Schlichting and K. Gersten, Boundary-Layer Theory, Springer, Berlin, 2000. ¨ller, ed., A state-of-the-art report on computational stochastic mechanics, Prob. [15] G. I. Schue Engrg. Mech., 12 (1997), pp. 197–321. [16] L. G. Stanley and D. L. Stewart, Design Sensitivity Analysis: Computational Issues of Sensitivity Equation Methods, Frontiers Appl. Math. 25, SIAM, Philadelphia, 2002. ´ Turgeon, D. Pelletier, and J. Borggaard, A general continuous sensitivity equation [17] E formulation for complex flows, Numer. Heat Transfer B, 42 (2002), pp. 485–408. [18] R. W. Walters, Towards stochastic fluid mechanics via polynomial chaos—Invited, in Proceedings of the 41st AIAA Aerospace Sciences Meeting and Exhibit, Reno, NV, 2003, AIAA-2003-0413. [19] R. W. Walters and L. 
Huyse, Uncertainty Analysis for Fluid Mechanics with Applications, NASA research report NASA/CR-2002-21149, NASA Langley Research Center, Hampton, VA, 2002; available online from http://historical.ncstrl.org/tr/pdf/icase/TR-2002-1.pdf. [20] X. Wan and G. E. Karniadakis, Beyond Wiener-Askey expansions: Handling arbitrary PDFs, J. Sci. Comput., 27 (2006), pp. 455–464. [21] N. Wiener, The homogeneous chaos, Amer. J. Math., 60 (1938), pp. 897–936. [22] J. A. S. Witteveen and H. Bijl, Modeling arbitrary uncertainties using Gram-Schmidt polynomial chaos, in Proceedings of the 44th AIAA Aerospace Sciences Meeting and Exhibit, Reno, NV, 2006, AIAA-2006-896.
A MONOMIAL CHAOS APPROACH
1317
[23] D. Xiu and G. E. Karniadakis, The Wiener–Askey polynomial chaos for stochastic differential equations, SIAM J. Sci. Comput., 24 (2002), pp. 619–644. [24] D. Xiu and G. E. Karniadakis, Modeling uncertainty in flow simulations via generalized polynomial chaos, J. Comput. Phys., 187 (2003), pp. 137–167. [25] D. Xiu and G. E. Karniadakis, Uncertainty modeling of Burgers’ equation by generalized polynomial chaos, in Computational Stochastic Mechanics, 4th International Conference on Computational Stochastic Mechanics, Corfu, Greece, 2002, P. D. Spanos and G. Deodatis, eds., Millpress, Rotterdam, The Netherlands, 2003, pp. 655–661.
SIAM J. SCI. COMPUT. Vol. 30, No. 3, pp. 1318–1340
c 2008 Society for Industrial and Applied Mathematics
ON MULTISYMPLECTICITY OF PARTITIONED RUNGE–KUTTA METHODS∗ BRETT N. RYLAND† AND ROBERT I. MCLACHLAN‡ Abstract. Previously, it has been shown that discretizing a multi-Hamiltonian PDE in space and time with partitioned Runge–Kutta methods gives rise to a system of equations that formally satisfy a discrete multisymplectic conservation law. However, these previous studies use the same partitioning of the variables into two parts in both space and time. This gives rise to a large number of cases to be considered, each with its own set of conditions to be satisfied. We present here a much simpler set of conditions, covering all of these cases, where the variables are partitioned independently in space and time into an arbitrary number of parts. In general, it is not known when such a discretization of a multi-Hamiltonian PDE will give rise to a well-defined numerical integrator. However, a numerical integrator that is explicit will typically be well defined. In this paper, we give sufficient conditions on a multi-Hamiltonian PDE for a Lobatto IIIA–IIIB discretization in space to give rise to explicit ODEs and an algorithm for constructing these ODEs. Key words. partitioned Runge–Kutta, multisymplectic, multi-Hamiltonian, Lobatto IIIA-IIIB AMS subject classifications. 37M15, 37K05, 35L05 DOI. 10.1137/070688468
1. Introduction. A multi-Hamiltonian PDE in one time and one space dimension is a PDE which can be written as a first order system in the form (1.1)
Kzt + Lzx = ∇z S(z),
where z ∈ Rn , K and L are nonzero skew-symmetric matrices, and S(z) is a smooth function [5]. Along solutions, z(t, x), to (1.1) the multisymplectic conservation law, (1.2)
ωt + κx = 0,
holds, where ω = 12 Kdz ∧ dz and κ = 12 Ldz ∧ dz are 2-forms and dz satisfies the first variation of the PDE, (1.3)
Kdzt + Ldzx = Dzz S(z)dz,
where Dzz S(z) is a symmetric matrix. One definition of a multisymplectic integrator is a numerical method that exactly preserves a discrete analogue of (1.2) (a so-called discrete multisymplectic conservation law) by applying a symplectic one-step method in space and time [11]. An important fact here is that multisymplectic integrators do not conserve (1.2) exactly, but rather different multisymplectic integrators preserve different discrete multisymplectic conservation laws, i.e., different discretizations of (1.2). This is in contrast ∗ Received by the editors April 17, 2007; accepted for publication (in revised form) November 29, 2007; published electronically March 21, 2008. This work was supported in part by the Marsden Fund of the Royal Society of New Zealand. http://www.siam.org/journals/sisc/30-3/68846.html † Department of Mathematics, University of Bergen, Johannes Bruns gate 12, Bergen S008, Norway (
[email protected]). The work of this author was partially supported by Education New Zealand. ‡ Mathematics Department, Massey University, Private Bag 11-221, Palmerston North, New Zealand (
[email protected]).
1318
ON MULTISYMPLECTICITY OF PRK METHODS
1319
to symplectic integrators for ODEs, which conserve symplecticity exactly. Some of the consequences of preserving a discrete multisymplectic conservation law are the following: (i) exact preservation of some integrals, e.g., potential vorticity [14]; (ii) both energy and momentum are approximately locally conserved [5, 7, 18]; (iii) quasi-periodic orbits and chaotic regions are preserved (KAM theory) [22]; (iv) the ability to take comparatively large time-steps and retain long-time stability [12]. In the past several authors [7, 13, 17, 18] have given discretizations of (1.1) which they have shown to formally satisfy a discrete multisymplectic conservation law. What these authors typically fail to consider is whether the resulting system of equations forms a well-defined numerical integrator. Some problems that may occur in such discretizations are [20] (i) there may be no obvious choice of dependent variables; (ii) the discrete equations may not be well defined locally (i.e., there may not be one equation per dependent variable per cell); (iii) the discrete equations may not be well defined globally (i.e., there may not be one equation per dependent variable across all spatial grid points when boundary conditions are imposed); (iv) the discrete equations may not have a solution, or may not have a unique solution or isolated solutions. Difficulties due to these problems already occur for the most popular multisymplectic integrator, the Preissman box scheme. With periodic boundary conditions in one space dimension, the discrete equations typically have only solutions with an odd number of grid points, while with an even number of grid points they have no solution (nonlinear problems) or an infinite number of solutions (linear problems). With higher order Runge–Kutta (RK) methods these problems are even worse [19]. Problems (iii) and (iv) will, in general, be avoided if a discretization method is used which gives rise to explicit multisymplectic integrators. In order to construct an explicit multisymplectic integrator, it is necessary for the discretization in each dimension to be explicit and symplectic. For PDEs in one space and one time dimension, this condition means that a symplectic spatial discretization must give rise to explicit ODEs in time (or vice versa, since space and time are treated on an equal footing). This rules out discretization by symplectic RK methods. However, for some partitioned Runge–Kutta (PRK) methods this is possible, e.g., the well-known 5point method obtained by applying leapfrog in space and time to the nonlinear wave equation, utt − uxx = −V (u), gives the explicit multisymplectic integrator [7] ⎡ ⎤ 1 ? 1 ⎢ 1 > ⎥ 1 −2 1 u = −V (u), (1.4) ⎣ −2 ⎦ u − 2 2 (Δt) (Δx) 1 where we have used the notation of centered stencils. Thus, in this paper we will be concerned with applying a PRK discretization in space to obtain explicit ODEs in time. In particular we will consider the Lobatto IIIA–IIIB class of PRK discretization, which, under certain requirements on the PDE, avoids problems (i) and (ii) and allows explicit ODEs to be obtained. The remainder of this paper consists of four sections. In section 2 we will describe a PRK discretization with an arbitrary number of parts and show that such a discretization in time and space gives rise to a natural discrete multisymplectic
1320
BRETT N. RYLAND AND ROBERT I. MCLACHLAN
conservation law which is formally satisfied. In section 3 we give the conditions on the coefficients of a PRK discretization to be of Lobatto IIIA–IIIB type and specify our reasons for considering the Lobatto IIIA–IIIB class of PRK discretization. In section 4 we give the conditions on a multi-Hamiltonian PDE such that the application of a Lobatto IIIA–IIIB discretization in space allows one to construct explicit ODEs and then present an algorithm for constructing these ODEs. We follow this with several examples of PDEs that satisfy these conditions (such as the nonlinear wave equation and the nonlinear Schr¨ odinger equation) and some examples of PDEs that do not. In section 5 we will discuss some properties of the ODEs formed through our construction algorithm and give a shortcut for constructing these ODEs. We will also discuss the discretization of these ODEs in time and their behavior with respect to boundary conditions. 2. PRK discretization. When a differential equation, (2.1)
zt = f (z),
n is discretized with a PRK discretization, the vector ' of dependent variables z ∈ R is (γ) nγ partitioned into several parts, z ∈ R with γ nγ = n. Typically, the number of parts is two, but it is possible for the number of parts to be as high as n. A grid is then introduced where we take the grid points (or nodes) (for convenience only) to have equal spacing Δt, and we adopt the following notation: let cell i be the region in (γ) the domain defined by t ∈ [iΔt, (i + 1)Δt), let z γ be the entry γ in z, let zi ∈ Rnγ (γ) be the vector of variables in part γ at the node in cell i, let Zi,j ∈ Rnγ be the vector of variables in part γ at stage j in cell i, and let the lack of a raised index (γ) indicate the unpartitioned variable. For an r-stage PRK discretization of (2.1) one obtains a set of equations coupling the node values zi to the stage values Zi,j at r internal stages given by (γ)
(γ)
Zi,j = zi
+ Δt
r
(γ)
(γ)
(γ)
(γ)
ajk ∂t Zi,k ,
j = 1, . . . , r,
k=1
(2.2) (γ)
(γ)
zi+1 = zi
+ Δt
r
bj ∂t Zi,j ,
j=1
for each γ, where the new variables ∂t Zi,j satisfy (2.1), i.e., (2.3)
∂t Zi,j = f (Zi,j ), (γ)
(γ)
and the coefficients bj and ajk are chosen to satisfy certain order conditions. The conditions for a two-part PRK discretization of a canonical Hamiltonian ODE, with partitioning z(1) = q and z(2) = p, to be symplectic are [1] (2.4)
(1) (2)
(1) (2)
(1) (2)
−akj bk − bj ajk + bj bk = 0
for all j, k,
while the conditions for an RK discretization (i.e., a one-part PRK discretization with z(1) = z, n1 = n) of the same ODE to be symplectic are [21] (2.5)
(1) (1)
(1) (1)
(1) (1)
−akj bk − bj ajk + bj bk = 0
for all j, k.
Generally, for a PRK discretization with coefficients satisfying (2.4), the coefficients will not satisfy (2.5).
ON MULTISYMPLECTICITY OF PRK METHODS
1321
When the PDE (1.1) is discretized in space with an r-stage PRK discretization, the set of equations that one obtains is given by (γ)
(γ)
Zi,j = zi
+ Δx
r
(γ)
(γ)
(γ)
(γ)
ajk ∂x Zi,k ,
j = 1, . . . , r,
k=1
(2.6) (γ)
(γ)
zi+1 = zi
+ Δx
r
bj ∂x Zi,j ,
j=1
for each γ, where the new variables ∂x Zi,j satisfy (1.1), i.e., K∂t Zi,j + L∂x Zi,j = ∇z S(Zi,j ).
(2.7)
Equations (2.6) and (2.7) form a differential-algebraic equation (DAE) for Zi,j and zi . However, in this DAE there are no ODEs for the node values, and the constraints apply only to LZi,j , not Zi,j . Furthermore, L may not have full rank, which may prevent one from obtaining a system of explicit ODEs for the Zi,j . Previous studies of the PDE (1.1) discretized in space and time with PRK methods have concluded that such discretizations satisfy a natural discrete approximation of the multisymplectic conservation law (1.2) [13]. However, these studies use the same partitioning of the variables for both the space and time discretizations, which leads to a large number of cases to be considered, each with its own set of conditions to be satisfied. This choice of partitioning in each dimension is important, as the conditions for the discretized equations to satisfy the discrete multisymplectic conservation law depend upon K and L. For example, given a multi-Hamiltonian PDE and a two-part PRK discretization in time with coefficients satisfying (2.4), if the PDE has no time derivatives of the variables in the second part, then the discretization is in fact an RK discretization with the same coefficients as the first of the PRK pair, which will not in general satisfy (2.5). To consider the most general case, we will now assume the finest possible partitioning of the variables, namely n parts, where for each entry γ in z we have that nγ = 1 and the part z(γ) consists simply of the variable z γ . We will use the notation γ,n,m to represent the entry γ in z at stage j of cell i in space and stage m of cell dZi,j n in time, where a lack of either the index j or m indicates the node variable of cell i be the coefficients of the in space or cell n in time, respectively. Also, let bj(γ) and a(γ) ij (γ) (γ) spatial PRK discretization associated with the variable z γ , and let Bm and Anm be the coefficients of the temporal PRK discretization associated with the variable z γ . The following theorem gives a much simpler set of conditions for PRK discretizations of (1.1) in space and time to satisfy a discrete multisymplectic conservation law. Since it immediately applies to any other partitioning of the variables by simply (γ) (γ) equating the bj and aij coefficients of the appropriate parts in space or time, this set of conditions encompasses all of the cases considered in previous studies. Theorem 2.1. A multi-Hamiltonian PDE (1.1) discretized by a PRK method in space and another PRK method in time has a discrete multisymplectic conservation law, given by (2.8)
Δx
j
n+1 n bj (ωi,j − ωi,j ) + Δt
m
n,m Bm (κn,m ) = 0, i+1 − κi
1322
BRETT N. RYLAND AND ROBERT I. MCLACHLAN
' γ,n β,n n where ωi,j = 12 β,γ Kβγ dZi,j ∧ dZi,j and κn,m = i when the following conditions hold: (γ)
bj
'
1 2
β,γ
Lβγ dZiγ,n,m ∧ dZiβ,n,m
= bj ,
(2.9) (γ) (β)
(γ) (β)
(γ) (β)
−akj bk − bj ajk + bj bk
=0
for all j, k and pairs (β, γ) such that Lβγ = 0 and (γ) = Bm , Bm
(2.10) (β) (γ) (β) (γ) (β) −A(γ) nm Bn − Bm Amn + Bm Bn = 0
for all m, n and pairs (β, γ) such that Kβγ = 0. Proof. (2.11) (
n,m κn,m i+1 − κi
=
)
, 1 + γ,n,m β,n,m Lβγ dZi+1 ∧ dZi+1 − Lβγ dZiγ,n,m ∧ dZiβ,n,m 2 β,γ
1 Lβγ = 2
dZiγ,n,m
+ Δx
(γ) γ,n,m bj ∂x dZi,j
∧
dZiβ,n,m
j
β,γ
+ Δx
(β) β,n,m bk ∂x dZi,k
dZiγ,n,m
−
∧
dZiβ,n,m
k
(β) (γ) 1 γ,n,m β,n,m γ,n,m β,n,m = Lβγ Δx dZi ∧ bk ∂x dZi,k + bj ∂x dZi,j ∧ dZi 2 j β,γ
k
2
+ (Δx)
(γ) (β) γ,n,m bj bk ∂x dZi,j
∧
β,n,m ∂x dZi,k
j,k
(γ) 1 (β) γ,n,m γ,n,m β,n,m dZi,k ∧ bk ∂x dZi,k = Lβγ Δx − Δx akj ∂x dZi,j 2 j β,γ
k
+ Δx
(γ) γ,n,m bj ∂x dZi,j
∧
β,n,m dZi,j
− Δx
j
+ (Δx)2
(β) β,n,m ajk ∂x dZi,k
k
(γ) (β)
γ,n,m β,n,m bj bk ∂x dZi,j ∧ ∂x dZi,k
j,k
(β) γ,n,m (γ) 1 β,n,m γ,n,m β,n,m = Lβγ Δx bk dZi,k ∧ ∂x dZi,k + bj ∂x dZi,j ∧ dZi,j 2 j β,γ
k
ON MULTISYMPLECTICITY OF PRK METHODS
+ (Δx)2
(γ) (β)
(γ) (β)
(γ) (β)
− akj bk − bj ajk + bj bk
1323
γ,n,m β,n,m ∂x dZi,j ∧ ∂x dZi,k
j,k
= Δx
(γ)
γ,n,m β,n,m bj Lβγ ∂x dZi,j ∧ dZi,j
β,γ,j
( 1 (γ) (β) (γ) (β) (γ) (β) ) γ,n,m β,n,m − akj bk − bj ajk + bj bk ∂x dZi,j + (Δx)2 Lβγ ∧ ∂x dZi,k . 2 β,γ
j,k
When Lβγ is nonzero, the (Δx)2 term above is zero if (γ) (β)
(γ) (β)
(γ) (β)
−akj bk − bj ajk + bj bk
(2.12)
=0
for all j, k.
Similarly, (2.13)
(
) γ,n,m β,n,m n+1 n (γ) = Δt − ωi,j Bm Kβγ ∂t dZi,j ∧ dZi,j ωi,j β,γ,m
( ) 1 (γ) (β) γ,n,m β,n,m (γ) (β) (γ) (β) − Alm Bl − Bm + (Δt)2 Kβγ Aml + Bm Bl ∂t dZi,j ∧ ∂t dZi,k 2 β,γ
m,l
and when Kβγ is nonzero, the (Δt)2 term is zero if (β) (γ) (β) (γ) (β) −A(γ) nm Bn − Bm Amn + Bm Bn = 0
(2.14)
for all m, n.
Now, writing (1.3) in components and taking its wedge product with dz β gives ( ) Kβγ ∂t dz γ ∧ dz β + Lβγ ∂x dz γ ∧ dz β = 0 for all β (2.15) γ
since Dzz S(z) is symmetric. Thus, in general (γ) ( γ,n,m β,n,m ) (γ) (2.16) bj Bm Lβγ ∂x dZi,j ∧ dZi,j γ,j,m
=−
( (γ) (γ) γ,n,m β,n,m ) bj Bm Kβγ ∂t dZi,j ∧ dZi,j
γ,j,m (γ)
(γ)
when bj = bj and Bm = Bm for all j, m, and γ. Therefore, if (2.9) and (2.10) hold, then we can see from (2.11) and (2.13) that the discrete multisymplectic conservation law (2.8) holds. The discrete multisymplectic conservation law (i.e., (2.8)) is an approximation to the integral
(i+1)Δx
(ω(x, (n + 1)Δt) − ω(x, nΔt)) dx
(2.17) iΔx
(n+1)Δt
(κ((i + 1)Δx, t) − κ(iΔx, t)) dt = 0,
+ nΔt
which is the integral of (1.2) over the cell with one corner at (iΔx, nΔt).
1324
BRETT N. RYLAND AND ROBERT I. MCLACHLAN
Now, suppose we have a two-part PRK discretization in space where the coefficients satisfy (2.4) but not (2.5); then for (2.12) to be satisfied the partitioning of the variables in space must be chosen such that κ has terms only of the form dz (1) ∧ dz (2) . Similarly, given a two-part PRK discretization in time where the coefficients satisfy (2.4) but not (2.5), for (2.14) to be satisfied the partitioning of the variables in time must be chosen such that ω has only terms of the form dz (1) ∧ dz (2) . Theorem 2.1 shows that if the partitioning in space and time is chosen appropriately, then a PRK discretization in space and time with coefficients satisfying (2.4) will result in an integrator that formally satisfies a multisymplectic conservation law given by (2.8). However, this does not guarantee that the integrator is well defined. The approach we take to obtaining a well-defined multisymplectic integrator is to apply an explicit symplectic PRK discretization in each dimension. We define an explicit discretization in space as a discretization for which the time derivatives of the dependent variables may be written explicitly in terms of the dependent variables. Their derivation may involve solving linear systems, but these must be independent of the PDE. An explicit local discretization is an explicit discretization for which these ODEs depend only on nearby values of the dependent variables. In section 4 we will give the conditions on a multi-Hamiltonian PDE such that one can obtain an explicit local symplectic PRK discretization in space based on Lobatto IIIA–IIIB, and we will give an algorithm for obtaining the explicit ODEs in time. 3. Lobatto IIIA–IIIB. The particular class of PRK discretization that we consider in this paper is a two-part discretization known as Lobatto IIIA–IIIB. For (1) (2) (1) (2) these methods, the coefficients aij , aij and bj = bj = bj are determined by [8] r
B(r) :
bi ck−1 = i
i=1
(3.1)
C(r) :
r
(1)
aij ck−1 = j
j=1
D(r) :
r i=1
(2)
bi ck−1 aij = i
1 k
for k ≤ r,
1 k c k i
for i = 1, . . . , r and k ≤ r,
1 bj (1 − ckj ) k
for j = 1, . . . , r and k ≤ r,
where the ci are zeros of the Lobatto quadrature polynomial (3.2)
) dr−2 ( r−1 x (x − 1)r−1 . r−2 dx
While the Lobatto IIIA and Lobatto IIIB classes of RK methods have each been known since the mid 1960s, their coefficients do not satisfy (2.5), and it was discovered only relatively recently that the Lobatto IIIA–IIIB class of PRK methods formed by combining Lobatto IIIA and Lobatto IIIB has coefficients that satisfy (2.4) [16, 23]. Thus for a discretization of (1.1), if the partitioning of the variables in each of the space and time dimensions can be chosen such that the 2-form associated with each dimension has terms only of the form dz (1) ∧ dz (2) , then the resulting integrator will satisfy a discrete multisymplectic conservation law.
1325
ON MULTISYMPLECTICITY OF PRK METHODS
The reason we consider the Lobatto IIIA–IIIB class of PRK discretizations is because their coefficients are related in the following way: (1)
arj = bj
(2)
ai1 = b1
(3.3)
a1j = 0,
(3.4)
air = 0,
(1)
for all j,
(2)
for all i,
and the (r − 2) × (r − 2) matrix C with entries (3.5)
Ci−1,j−1 =
(1)
(2)
aik (bl − δkl )alj
for 2 ≤ i, j ≤ r − 1
k,l
is invertible. The relations given in (3.3) and (3.4) are a direct consequence of (3.1) and (3.2) and give us three properties which will be required in our algorithm for constructing explicit ODEs in the next section. First, from (3.3) we can see that, for γ = 1, a node value is equal to the first stage value associated with that node and also equal to the last stage value associated with the previous node. Second, (3.4) gives us that both ' ' (2) (2) j bj ajr and b1 − j bj aj1 are zero. Lastly, (3.3) and (3.4) together give
(3.6)
(1)
(2)
aik (bl − δkl )alj = 0
if either i ∈ {1, r} or j ∈ {1, r},
k,l
where δkl is the Kronecker delta. The invertibility of C can then be shown via the Frobenius inequality and will be used directly in the construction algorithm. The coefficients for Lobatto IIIA–IIIB methods can be written succinctly as pairs of Butcher tableaux; we give below the coefficients for r = 2, 3, and 4:
(3.7)
r=2:
IIIA:
0
0
0
1
1 2
1 2
1 2
1 2
0 ,
IIIB:
1
1 2 1 2
0
1 2
1 2
0 .
Second order Lobatto IIIA–IIIB is often referred to as generalized leapfrog:
(3.8)
r=3:
IIIA:
0
0
0
0
0
1 2
5 24 1 6
1 3 2 3
1 − 24 1 6
1 2
1 6
2 3
1 6
1
,
IIIB:
1
1 6 1 6 1 6
− 16
0
1 3 5 6
0
1 6
2 3
1 6
0
,
1326
BRETT N. RYLAND AND ROBERT I. MCLACHLAN
r=4:
IIIA:
0
0
1
√ 5− 5 10 √ 5+ 5 10
IIIB:
√ 5− 5 10 √ 5+ 5 10
1
0
0
11+ 5 120 √ 11− 5 120 1 12
25−13 5 120 √ 25+ 5 120 5 12
√ −1+ 5 120 √ −1− 5 120 1 12
1 12
5 12
5 12
1 12
(3.9) 0
0
√ 25− 5 120 √ 25+13 5 120 5 12
√
1 12 1 12 1 12 1 12
√ −1− 5 24 √ 25+ 5 120 √ 25+13 5 120 √ 11− 5 24
√ −1+ 5 24 √ 25−13 5 120 √ 25− 5 120 √ 11+ 5 24
1 12
5 12
5 12
√
,
0 0 0 . 0 1 12
4. Explicit ODEs. In the one dimensional situation (i.e., time integration), the dependent variables are the zi ; (2.2) determines the stage variables Zi,j and defines a map from zi to zi+1 . In contrast, for situations where the dimension is greater than one (e.g., for PDEs of the form of (1.1)), if one applies a PRK discretization in space, then the dependent variables will typically be the stage variables Zi,j , while the node variables zi and the new variables ∂x Zi,j will be eliminated using the PDE to yield a set of ODEs in time for the Zi,j . As we shall see in the following theorem, this elimination depends upon the structure not only of K and L, but also of S(z). Theorem 4.1. Consider a multi-Hamiltonian PDE (1.1), where the K and L matrices have the following structure: ⎡ ⎢ K = ⎣ I 12 (d1 +d2 )
(4.1)
⎤
−I 12 (d1 +d2 )
⎥ ⎦,
⎡
Id1
⎢ L=⎣
0d2
⎤ ⎥ ⎦,
−Id1
0d1
where d1 = n − rank(K), d2 = n − 2d1 ≤ d1 , Id is the d × d identity matrix, and 0d is the d × d zero matrix. Let the variables z be partitioned into two parts z(1) ∈ Rd1 +d2 and z(2) ∈ Rd1 , where we denote the first d1 components of z(1) by q, the last d2 components of z(1) by v, and the components of z(2) by p such that the PDE may be written as ⎡ (4.2)
⎢ ⎣ I 12 (d1 +d2 )
⎤⎡
−I 12 (d1 +d2 ) 0d1
⎤ q ⎥⎢ ⎥ ⎦⎣ v ⎦ p t
⎡
Id1
⎢ +⎣
0d2 −Id1
⎤⎡
⎤ ⎤ ⎡ q ∇q S(z) ⎥⎢ ⎥ ⎥ ⎢ ⎦ ⎣ v ⎦ = ⎣ ∇v S(z) ⎦ . p x ∇p S(z)
If the function S(z) can be written in the form (4.3)
S(z) = T (p) + V (q) + V& (v),
ON MULTISYMPLECTICITY OF PRK METHODS
1327
where T (p) = 12 pt βp and V& (v) = 12 vT αv such that |β| = 0 and |α| = 0, then applying an r-stage Lobatto IIIA–IIIB PRK discretization in space to the PDE leads to a set of explicit local ODEs in time in the stage variables associated with q. Proof. A general outline of the proof of this theorem is as follows. We first make use of the form of S(z) to rewrite (4.2) by eliminating the v variables. The r-stage Lobatto IIIA–IIIB discretization is then applied to the resulting PDE. Next, we make use of the requirements of the theorem in order to eliminate the node variables and the stage variables associated with p and to rearrange the resulting equations to obtain explicit local ODEs in time in the stage variables associated with q. This elimination and rearrangement is carried out by way of a five-step construction algorithm. Due to the form of S(z), the central d2 rows of (4.2) allow us to write entry i in v as (4.4)
vi =
d2
(α−1 )i,j ∂t qj+ 12 (d1 −d2 )
j=1
and hence (4.5)
∂t vi =
d2
(α−1 )i,j ∂t2 qj+ 12 (d1 −d2 ) .
j=1
Substituting (4.5) into (4.2), we can eliminate the v variables in favor of higher order derivatives in time of the q variables. This lets us write (4.2) as (4.6)
Kzt + Lzx − Eztt = ∇z S(z),
where z, K, L, E, and S(z) are the new vectors, matrices, and functions given below: ⎡ ⎤ −I 12 (d1 −d2 )
⎢ ⎥ ⎥ ⎢ q 0d2 ⎥, ⎢ z= , K=⎢ ⎥ p ⎦ ⎣ I 12 (d1 −d2 ) 0d1 (4.7) ⎤ ⎡ 0 12 (d1 −d2 )
⎥ ⎢ ⎥ ⎢ Id1 α−1 ⎥, ⎢ L= , E =⎢ ⎥ −Id1 0 12 (d1 −d2 ) ⎦ ⎣ 0d1 and S(z) = T (p) + V (q). Note that if d2 = 0, then (4.2) and (4.6) are identical; i.e., V& (v) ≡ 0 and E is a d1 × d1 matrix of zeros. We shall now give a five-step algorithm for constructing explicit local ODEs in time from an r-stage Lobatto IIIA–IIIB PRK discretization of (4.6). However, before we begin, it is necessary to introduce the following notation which will be used throughout the remainder of this text: (i) ziη is the node variable in cell i for the entry η in z, η is the stage variable at stage j in cell i for the entry η in z, (ii) Zi,j η (iii) Zi is the vector of stage variables in cell i for the entry η in z,
1328
BRETT N. RYLAND AND ROBERT I. MCLACHLAN
(iv) Zi is the tensor of stage variables for all values of η in cell i, η is a variable representing the first (n = 1) and second (n = 2) time (v) ∂tn Zi,j η , derivatives of Zi,j (vi) ∂zη S(Zi ) is the vector of stage values at cell i obtained by taking the derivative of the function S(z) with respect to the entry η in z, (vii) A(1) is the r × r matrix of aij values for Lobatto IIIA, (viii) A(2) is the r × r matrix of aij values for Lobatto IIIB, (ix) b is the common vector of length r of bj values for Lobatto IIIA and IIIB, (x) 1 is a vector of length r with all entries equal to 1. Now, (4.6) discretized in space by an r-stage Lobatto IIIA–IIIB PRK discretization results in the following system of implicit ODEs: Qηi = qiη 1 + ΔxA(1) (−∂pη T (Pi )),
(4.8)
η = qiη + ΔxbT (−∂pη T (Pi )), qi+1
(4.9)
Pηi = pηi 1 + ΔxA(2) (∂qη V (Qi ) + giη ),
(4.10)
pηi+1 = pηi + ΔxbT (∂qη V (Qi ) + giη ),
(4.11)
for 1 ≤ η ≤ d1 , where (4.12) ⎧ η+ 1 (d +d ) ⎪ ⎪ ∂t Qi 2 1 2 , ⎪ ⎨ η− 1 (d +d ) giη = −∂t Qi 2 1 2 , ⎪ ⎪ 1 ⎪ ⎩ 'd2 (α−1 ) 1 2 θ+ 2 (d1 −d2 ) , η− 2 (d1 −d2 ),θ ∂t Qi θ=1
1 ≤ η ≤ 12 (d1 − d2 ), 1 2 (d1 1 2 (d1
+ d2 ) < η ≤ d1 ,
− d2 ) < η ≤ 12 (d1 + d2 ).
It should be noted that for the simpler case where d2 = 0, the third option for giη vanishes. CONSTRUCTION ALGORITHM. Step 1. A special property of the Lobatto IIIA discretization is that the first row of the coefficient matrix A(1) is zero and the last row of A(1) is bT . Due to this property, we can see that the first row of (4.8) gives qiη = Qηi,1 , and η = Qηi,r . Furthermore, from comparing the last row of (4.8) with (4.9) gives qi+1 η η these two identities we can conclude that Qi,r = Qi+1,1 , ∂t Qηi,r = ∂t Qηi+1,1 , and ∂t2 Qηi,r = ∂t2 Qηi+1,1 . Step 2. Since T (p) = 12 pT βp and |β| = 0 we have that Pi = β −1 ∇p T (Pi ). Also a property of all RK and PRK discretizations is that bT 1 = 1. Therefore we can substitute Pηi from (4.10) into (4.9) and rearrange to get
(4.13)
pηi = −
d1 + , 1 (β −1 )η,ζ (Qζi+1,1 − Qζi,1 ) − ΔxbT A(2) (∂qη V (Qi ) + giη ). Δx ζ=1
Note that this rearrangement is possible since β operates on the index η, while b and A(2) operate on the index j as given in the notation scheme.
ON MULTISYMPLECTICITY OF PRK METHODS
1329
Step 3. Substituting Pηi from (4.10) into (4.8) and then substituting pηi from (4.13) into the resulting equation gives ⎛ ⎞ d1 βη,ζ (Pζi )⎠ Qηi = Qηi,1 1 − ΔxA(1) ⎝ ζ=1
⎛ ⎞ d1 = Qηi,1 1 − ΔxA(1) ⎝ βη,ζ (pζi 1 + ΔxA(2) (∂qζ V (Qi ) + giζ ))⎠ (4.14)
= Qηi,1 1 − ΔxA(1)
ζ=1 d1
βη,ζ
ζ=1
−
d1 + , 1 (β −1 )ζ,ξ (Qξi,r − Qξi,1 ) Δx ξ=1
− ΔxbT A(2) (∂qζ V (Qi ) + giζ ) 1 + ΔxA(2) (∂qζ V (Qi ) + giζ )
.
Rearranging and applying β −1 gives
(4.15)
d1 6 5 1 (β −1 )η,ζ Qζi − Qζi,1 1 − A(1) (Qζi,r − Qζi,1 )1 2 (Δx) ζ=1 5 6 = A(1) (bT A(2) (∂qη V (Qi ) + giη ))1 − A(2) (∂qη V (Qi ) + giη )
= A(1) (1bT − I)A(2) (∂qη V (Qi ) + giη ). Now, the first and last rows of the left-hand side of (4.15) are zero, as are the first and last rows and columns of A(1) (1bT − I)A(2) . Therefore, we denote rows 2 to r − 1 of [Qζi − Qζi,1 1 − A(1) (Qζi,r − Qζi,1 )1] by dζi , the block of A(1) (1bT − I)A(2) from (2, 2) to (r − 1, r − 1) by C, and rows 2 to r − 1 of ∂qη V (Qi ) + giη by eηi . Then, noting that C has full rank due to (3.5), we can write d1 1 (β −1 )η,ζ C−1 dζi = eηi . (Δx)2
(4.16)
ζ=1
Recalling the definition of giη , (4.16) immediately allows us to write down explicit formulas for ∂t Qηi,k in terms of Qi for 1 < k < r and 1 ≤ η ≤ 12 (d1 −d2 ) or 12 (d1 +d2 ) < η ≤ d1 and for ∂t2 Qηi,k in terms of Qi for 1 < k < r and 12 (d1 − d2 ) < η ≤ 12 (d1 + d2 ). Step 4. Substituting pηi from (4.13) into (4.11) for both pηi and pηi+1 gives (4.17)
−
d1 1 (β −1 )η,ζ (Qζi+2,1 − 2Qζi+1,1 + Qζi,1 ) (Δx)2 ζ=1
η = bT A(2) (∂qη V (Qi+1 ) + gi+1 ) + (bT − bT A(2) )(∂qη V (Qi ) + giη )
for each η. Of importance here is that (4.17) does not involve the variables ∂t Qηi+1,r or 2 η ∂t Qi+1,r since the last entry of bT A(2) is zero. Neither does it involve the variables ∂t Qηi,1 or ∂t2 Qηi,1 since the first entry of bT − bT A(2) is also zero.
1330
BRETT N. RYLAND AND ROBERT I. MCLACHLAN
Step 5. Substituting the formulas for ∂t Qηi,k and ∂t2 Qηi,k found in Step 3 into (4.17) and recalling that ∂t Qηi,r = ∂t Qηi+1,1 and ∂t2 Qηi,r = ∂t2 Qηi+1,1 , we can obtain explicit formulas for ∂t Qηi+1,1 in terms of Qi and Qi+1 for 1 ≤ η ≤ 12 (d1 − d2 ) and 1 1 2 η 2 (d1 + d2 ) < η ≤ d1 and for ∂t Qi+1,1 in terms of Qi and Qi+1 for 2 (d1 − d2 ) < η ≤ 1 2 (d1 + d2 ). Thus, for each cell i in our grid, we have a system of explicit ODEs for either the first or second time derivatives of the stage variables Qi in terms of local values of Qi . While the conditions on K, L, and S(z) in the above theorem may at first appear restrictive, they allow several important equations such as the nonlinear wave and nonlinear Schr¨ odinger equations. A notable exception is the Korteweg–de Vries equation for which S(z) is not separable. It is also worth noting that the conditions on K, L, and S(z) are the same as those required for the continuous system to be written as a system of PDEs in the variables q and are similar to those required for a separable Hamiltonian system to be written as a system of second order ODEs. The structure of K is known as the “Darboux normal form” of K and a change of coordinates will allow any skew-symmetric matrix to be written this way. If putting K in Darboux normal form gives L the structure ⎤ ⎡ Λ ⎥ ⎢ 0d2 (4.18) L=⎣ ⎦ −ΛT for some d1 × d1 matrix Λ with |Λ| = 0, then the following change of coordinates in the p variables can put L in the form given in (4.1). Let p ˆ = Λp and T&(ˆ p) = −1 p) = Λ∇p T (p) = Λβp = ΛβΛ−1 p ˆ ) = T (p); then ∇pˆ S(z) = ∇pˆ T&(ˆ ˆ and S(z) T (Λ p ˆ T (ΛβΛ−1 )ˆ p. still has the desired structure S(z) = V (q) + 12 p The upper left (d1 + d2 ) × (d1 + d2 ) block of L being all zeros is fulfilled for PDEs which, when written as a first order system with K in Darboux normal form, have no equations involving both a time and space derivative of the same variable; i.e., ztη + zxη = f (z) does not appear for any η. 4.1. Examples. Here we give several examples of common multi-Hamiltonian PDEs. For the PDEs that satisfy the requirements of Theorem 4.1 we give the ODEs that one obtains by applying the construction algorithm to those PDEs. For PDEs that do not satisfy the requirements of Theorem 4.1 we show why they fail and where the construction algorithm breaks down. We also give a PDE constructed so as to require the full use of Theorem 4.1. 4.1.1. Nonlinear wave equation. Our first example is the nonlinear wave equation, (4.19)
utt = uxx − V (u),
which can be written as a multi-Hamiltonian PDE in the form ⎡ ⎤ ⎡ ⎤ ⎡ u 0 −1 0 0 ⎢ ⎥ ⎢ ⎥ ⎢ (4.20) z = ⎣ v ⎦, K = ⎣ 1 0 0 ⎦, L = ⎣ 0 w 0 0 0 −1 and S(z) = V (u) + 12 v 2 − 12 w2 .
of (4.2) with [5] ⎤ 0 1 ⎥ 0 0 ⎦ 0 0
ON MULTISYMPLECTICITY OF PRK METHODS
1331
Here, d1 = d2 = 1 with z(1) = {u, v} and z(2) = {w}. We also have α = −β = 1; thus we can see that K, L, and S(z) satisfy the requirements of Theorem 4.1. Upon eliminating the variable v, we obtain the PDE (4.6) with
u 0 0 0 1 1 0 (4.21) z= , K= , L= , E= w 0 0 −1 0 0 0 and S = V (u) − 12 w2 . Applying the construction algorithm for r = 2 gives the following pair of ODEs for each cell i: 1 ∂t2 Ui,1 = (Ui−1,1 − 2Ui,1 + Ui+1,1 ) − V (Ui,1 ), 2 (Δx) (4.22) ∂t2 Ui,2 = ∂t2 Ui+1,1 . Recalling from Step 1 that qi = Qi,1 and noting that the last ODE is simply the first ODE of the next cell, it is convenient to drop the second ODE and rewrite the first ODE in terms of the node variable ui : 1 (4.23) ∂t2 ui = (ui−1 − 2ui + ui+1 ) − V (ui ). (Δx)2 Applying the construction algorithm for r = 3 gives the following triplet of ODEs for each cell i: 1 ∂t2 Ui,1 = (−Ui−1,1 + 8Ui−1,2 − 14Ui,1 + 8Ui,2 − Ui+1,1 ) − V (Ui,1 ), (Δx)2 (4.24)
∂t2 Ui,2 =
1 (4Ui,1 − 8Ui,2 + 4Ui+1,1 ) − V (Ui,2 ), (Δx)2
∂t2 Ui,3 = ∂t2 Ui+1,1 , which cannot be written in terms of the node variables alone. 4.1.2. NLS equation. Our second example is the famous cubic-potential nonlinear Schr¨ odinger (NLS) equation, (4.25)
iψt + ψxx + 2|ψ|2 ψ = 0,
where ψ ∈ C. Taking ψ = p + iq and separating the real and imaginary components of NLS allows the PDE to be written in the form of (4.2) with [15] ⎡ ⎤ ⎤ ⎡ ⎤ ⎡ 0 0 1 0 0 −1 0 0 p ⎢ ⎥ ⎢ 0 ⎢ 1 0 0 0 ⎥ ⎢ q ⎥ 0 0 1 ⎥ ⎥ ⎢ ⎥ ⎥, K = ⎢ (4.26) z=⎢ ⎥ ⎥, L = ⎢ ⎢ ⎢ ⎥ ⎦ ⎣ ⎦ ⎣ −1 0 0 0 0 0 0 0 ⎣ v ⎦ 0 −1 0 0 0 0 0 0 w and S = − 12 (p2 + q 2 )2 − 12 (v 2 + w2 ). Here we have d1 = 2 and d2 = 0 with z(1) = {p, q} and z(2) = {v, w}. S(z) can be written as (4.3) with V (q) = − 12 (p2 + q 2 ) and T (p) = 12 pT βp, where
−1 0 v (4.27) β= and p = , 0 −1 w and thus the NLS equation also satisfies the requirements of Theorem 4.1.
1332
BRETT N. RYLAND AND ROBERT I. MCLACHLAN
Applying the construction algorithm for an r-stage discretization gives r ODEs for each element of z(1) at cell i. As with the nonlinear wave equation, if we use the 2-stage discretization, then for each element of z(1) at cell i we can drop the ODE for the second stage variable and write the ODE for the first stage variable in terms of the node variables. The resulting ODEs are ∂t pi = − (4.28) ∂t qi =
1 (qi−1 − 2qi + qi+1 ) − 2(p2i + qi2 )qi , (Δx)2
1 (pi−1 − 2pi + pi+1 ) + 2(p2i + qi2 )pi . (Δx)2
These are precisely the ODEs one obtains by applying second order finite differences in space to (4.25). The same statement applies for other PDEs that satisfy the conditions of Theorem 4.1; thus we note that 2-stage Lobatto IIIA–IIIB discretization in space is equivalent to second order finite differences in space up to second order differences when applied to such a PDE. For r = 3 we obtain a triplet of ODEs for each element of z(1) at cell i: ∂t Pi,1 = −
1 (−Qi−1,1 + 8Qi−1,2 − 14Qi,1 + 8Qi,2 − Qi+1,1 ) (Δx)2 2 + Q2i,1 )Qi,1 , − 2(Pi,1
∂t Pi,2 = −
1 2 (4Qi,1 − 8Qi,2 + 4Qi+1,1 ) − 2(Pi,2 + Q2i,2 )Qi,2 , (Δx)2
∂t Pi,3 = ∂t Pi+1,1 , (4.29) ∂t Qi,1 =
1 (−Pi−1,1 + 8Pi−1,2 − 14Pi,1 + 8Pi,2 − Pi+1,1 ) (Δx)2 2 + 2(Pi,1 + Q2i,1 )Pi,1 ,
∂t Qi,2 =
1 2 (4Pi,1 − 8Pi,2 + 4Pi+1,1 ) + 2(Pi,2 + Q2i,2 )Pi,2 , (Δx)2
∂t Qi,3 = ∂t Qi+1,1 . 4.1.3. Boussinesq equation. Our third example is the “good” Boussinesq equation, (4.30)
ptt = (εpxx + V (p))xx ,
which, when written as a multi-Hamiltonian PDE, shares the same z, z(1) , z(2) , K, and L as the NLS equation above [9]. The only difference is the function S(z) which is given by S(z) = −V (p) + 12 (w2 − 1ε v 2 ). (The class of Boussinesq equations includes a broad range of PDEs, some of which satisfy the conditions of Theorem 4.1.) As before, the requirements of Theorem 4.1 are satisfied, and applying the construction algorithm gives r ODEs for each element of z(1) at cell i. For r = 2, we once again drop the ODEs for the second stage variables and write the first ODEs in
1333
ON MULTISYMPLECTICITY OF PRK METHODS
terms of the node variables as ∂t pi =
1 (qi−1 − 2qi + qi+1 ), (Δx)2
∂t qi =
ε (pi−1 − 2pi + pi+1 ) + V (p). (Δx)2
(4.31)
For r = 3 we get
(4.32)
∂t Pi,1 =
1 (−Qi−1,1 + 8Qi−1,2 − 14Qi,1 + 8Qi,2 − Qi+1,1 ), (Δx)2
∂t Pi,2 =
1 (4Qi,1 − 8Qi,2 + 4Qi+1,1 ), (Δx)2
∂t Pi,3 = ∂t Pi+1,1 , ∂t Qi,1 =
ε (−Pi−1,1 + 8Pi−1,2 − 14Pi,1 + 8Pi,2 − Pi+1,1 ) + V (Pi,1 ), (Δx)2
∂t Qi,2 =
ε (4Pi,1 − 8Pi,2 + 4Pi+1,1 ) + V (Pi,2 ), (Δx)2
∂t Qi,3 = ∂t Qi+1,1 . 4.1.4. Korteweg–de Vries (KdV) equation. Our fourth example is the KdV equation, (4.33)
ut = V (u)x + νuxxx ,
which can be written in the form of (4.2) with [6] ⎡ ⎤ ⎡ ⎤ u 0 −1 0 0 ⎢ φ ⎥ ⎢ 1 0 0 0 ⎥ ⎢ ⎥ ⎢ ⎥ (4.34) z=⎢ ⎥, K = ⎢ ⎥, ⎣ v ⎦ ⎣ 0 0 0 0 ⎦ w 0 0 0 0
⎡
0 ⎢ 0 ⎢ L=⎢ ⎣ −1 0
0 0
1 0
0 −1
0 0
⎤ 0 1 ⎥ ⎥ ⎥ 0 ⎦ 0
1 2 v . Here, d1 = 2, d2 = 0 and z is partitioned into and with S(z) = − 12 uw − V (u) − 2ν (1) (2) z = {u, φ} and z = {v, w}. While the K and L matrices have the required structure for Theorem 4.1, the function S(z) does not. Specifically, the −uw term in S(z) prevents us from writing T (p) = 12 pT βp, and so step 2 of the construction algorithm cannot be carried out. For example, discretizing the KdV equation with two-stage Lobatto IIIA–IIIB gives
1 vi+ 12 = vi− 12 + Δx(∂t φi − V (ui ) − (wi+ 12 + wi− 12 )), 4 wi+ 12 = wi− 12 − Δx∂t ui , (4.35)
1 −ui+1 = −ui − Δx vi+ 12 , ν 1 −φi+1 = −φi − Δx (ui + ui+1 ), 4
1334
BRETT N. RYLAND AND ROBERT I. MCLACHLAN
where ui = Ui,1 , ui+1 = Ui,2 , φi = Φi,1 , φi+1 = Φi,2 , vi+ 12 = Vi,1 = Vi,2 , and wi+ 12 = Wi,1 = Wi,2 . 1 (ui+1 − ui ) and M ui = Introducing the operators D and M , where Dui = Δx 1 (u + u ), allows us to write this system as i+1 i 2 1 Dvi− 21 = ∂t φi − V (ui ) − M wi− 12 , 2 Dwi− 12 = −∂t ui , (4.36)
1 −Dui = − vi+ 12 , ν 1 −Dφi = − M ui . 2
Eliminating all the variables other than the original variable u gives the implicit ODE (4.37)
M ∂t ui = DV (ui ) + νD3 ui−1 .
In general, M is not invertible; thus further conditions are required (e.g., periodic boundary conditions with an odd number of grid points) to form a well-defined integrator from this implicit ODE. This is none other than the narrow box scheme, introduced in [2] and derived as a finite volume scheme (and shown to be more accurate than the box scheme) in [3]. Thus, we have shown that the narrow box scheme is multisymplectic. 4.1.5. Benjamin–Bona–Mahony (BBM) equation. Our fifth example is the BBM equation [4], (4.38)
ut − αuxxt = V (u)x .
This equation can be written in the ⎡ α 0 − 12 2 ⎢ α 0 ⎢ −2 0 ⎢ 1 ⎢ 0 0 (4.39) K=⎢ 2 ⎢ 0 0 ⎣ 0 0 0 0
form of (1.1) [19] with ⎤ ⎡ 0 0 0 ⎥ ⎢ 0 0 ⎥ ⎢ 0 ⎥ ⎢ ⎥ 0 0 ⎥, L = ⎢ ⎢ 0 ⎢ ⎥ 0 0 ⎦ ⎣ 0 − α2 0 0
z = [u, θ, φ, w, ρ]T , ⎤ α 0 0 0 2 ⎥ 0 0 0 0 ⎥ ⎥ 0 0 −1 0 ⎥ ⎥ ⎥ 0 1 0 0 ⎦ 0 0 0 0
and S(z) = uw − V (u) − α2 θρ. Putting K into its Darboux normal form results in an L of the form
03 Λ (4.40) L= , −ΛT 02 where Λ is a 3 × 2 matrix. The matrix L does not have the form of (4.18), and so it cannot be written in the form of (4.1) by applying a change of variables. Thus, the BBM equation does not satisfy the requirements of Theorem 4.1. However, partitioning z into z(1) = {u, θ, φ} and z(2) = {w, ρ}, then discretizing the BBM equation with two-stage Lobatto IIIA–IIIB using the D and M notation
1335
ON MULTISYMPLECTICITY OF PRK METHODS
gives α α 1 Dρi− 12 = M wi− 12 − V (ui ) − ∂t θi + ∂t φi , 2 2 2 α α 0 = − M ρi− 12 + ∂t ui , 2 2 1 −Dwi− 12 = − ∂t ui , 2
(4.41)
Dφi = M ui , α α − Dui = − M θi . 2 2 Eliminating θ, φ, w, and ρ gives the implicit ODE (M 2 − αD2 )∂t ui = M DV (ui ).
(4.42)
As with the KdV equation, the operator on the left-hand side cannot be locally inverted, although it is at least typically invertible. 4.1.6. Pad´ e–II equation. Our sixth example is the equation ut − αuxxt = V (u)x + νuxxx ,
(4.43)
which contains a mixture of the third order derivatives found in the KdV and BBM 9 equations. This equation is referred to as the Pad´e–II equation in [10] when ν = 10 , 19 1 2 1 3 α = 10 , and V (u) = − 2 u − 6 u . It can be written in the form of (1.1) [19] with z = [u, θ, φ, w, ρ, v]T , ⎡
(4.44)
0
⎢ −α ⎢ 2 ⎢ 1 ⎢ 2 K=⎢ ⎢ 0 ⎢ ⎢ ⎣ 0 0
α 2
− 12
0
0
0 0 0 0
0
⎡
⎤
0
0
0
0
0 0 0
0 ⎥ ⎥ ⎥ 0 0 0 ⎥ ⎥, 0 0 0 ⎥ ⎥ ⎥ 0 0 0 ⎦
0
0
0
0
⎢ 0 ⎢ ⎢ ⎢ 0 L=⎢ ⎢ 0 ⎢ ⎢ α ⎣ −2 −ν
0
⎤
0
0
0
α 2
0 0 0
0 0 1
0 −1 0
0 0 0
0
0
0
0
⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥ 0 ⎦
0
0
0
0
0
ν 0 0 0
and S(z) = uw − V (u) − ν2 v 2 − α2 θρ. Putting K into its Darboux normal form results in an L of the form (4.45)
L=
03
Λ
−ΛT
03
,
where Λ is a 3 × 3 matrix with rank(Λ) = 2. Thus, we cannot write L in the form of (4.1), and so the Pad´e–II equation does not satisfy the requirements of Theorem 4.1. However, partitioning z into z(1) = {u, θ, φ} and z(2) = {w, ρ, v}, then discretizing the Pad´e–II equation with two-stage Lobatto IIIA–IIIB using the D and M notation
1336
BRETT N. RYLAND AND ROBERT I. MCLACHLAN
gives α α 1 Dρi− 12 + νDvi− 12 = M wi− 12 − V (ui ) − ∂t θi + ∂t φi , 2 2 2 α α 0 = − M ρi− 12 + ∂t ui , 2 2 1 −Dwi− 12 = − ∂t ui , 2
(4.46)
Dφi = M ui , α α − Dui = − M θi , 2 2 −νDui = −νvi− 12 . Eliminating θ, φ, w, ρ, and v gives the implicit ODE (M 2 − αD2 )∂t ui = M DV (ui ) + νM D3 ui−1 .
(4.47)
As in the previous two examples, the operator on the left-hand side of (4.47) cannot be locally inverted. 4.1.7. A made-up example. Our last example is contrived to satisfy the requirements of Theorem 4.1 and demonstrates the case when d2 = d1 and d2 = 0. We have chosen d1 = 3, d2 = 1, and a multi-Hamiltonian PDE (1.1) with z = [q 1 , q 2 , q 3 , v, p1 , p2 , p3 ]T , ⎡
0 0 1
⎢ ⎢ ⎢ ⎢ ⎢ ⎢ (4.48) K = ⎢ 0 ⎢ ⎢ 0 ⎢ ⎢ ⎣ 0 0
0 0 0
−1 0 0
0 −1 0
1 0 0 0
0 0 0 0
0 0 0 0
⎤ 0 0 0 ⎥ 0 0 0 ⎥ ⎥ 0 0 0 ⎥ ⎥ ⎥ 0 0 0 ⎥, ⎥ 0 0 0 ⎥ ⎥ ⎥ 0 0 0 ⎦ 0 0 0
⎡ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ L=⎢ ⎢ ⎢ ⎢ ⎢ ⎣
0
0
0 0
0 0
0 −1 0 0
0 0 −1 0
and S(z) = V (q) + 12 pT βp + α2 v 2 , where α is a constant and ⎡
1 ⎢ β=⎣ 1 − 12
(4.49)
⎤ 1 − 12 ⎥ 1 0 ⎦. 0 1
This corresponds to the PDE 1 2 ∂t q 1 = −2qxx + 2qxx + ∂q3 V (q),
(4.50)
1 2 2 1 2 3 ∂ q = −4qxx + 3qxx − 2qxx − ∂q2 V (q), α t 1 2 3 − 4qxx + 2qxx − ∂q1 V (q). ∂t q 3 = 4qxx
0
0
1
0
0
0 0
0 0
0 0
1 0
0 1
⎤
⎥ ⎥ ⎥ ⎥ ⎥ ⎥ 0 0 0 0 0 ⎥ ⎥ 0 0 0 0 0 ⎥ ⎥ ⎥ 0 0 0 0 0 ⎦ −1 0 0 0 0
1337
ON MULTISYMPLECTICITY OF PRK METHODS
Eliminating the variable v in favor of with (4.51) ⎡ ⎡ 1 ⎤ 0 0 −1 0 0 q ⎢ 0 0 0 0 0 ⎢ q2 ⎥ ⎢ ⎥ ⎢ ⎢ ⎢ 3 ⎥ ⎢ 1 0 0 0 0 ⎢ q ⎥ ⎥, K = ⎢ z=⎢ ⎢ 0 0 0 0 0 ⎢ p1 ⎥ ⎢ ⎥ ⎢ ⎢ ⎢ 2 ⎥ ⎣ 0 0 0 0 0 ⎣ p ⎦ 0 0 0 0 0 p3
higher derivatives in time of q 2 gives (4.6)
0 0 0
⎤
⎥ ⎥ ⎥ ⎥ ⎥, 0 ⎥ ⎥ ⎥ 0 ⎦ 0
⎡
0 0 ⎢ 0 0 ⎢ ⎢ ⎢ 0 0 L=⎢ ⎢ −1 0 ⎢ ⎢ ⎣ 0 −1 0 0
0 0
⎤ 0 0 ⎥ ⎥ ⎥ 0 0 1 ⎥ ⎥, 0 0 0 ⎥ ⎥ ⎥ 0 0 0 ⎦ 0 0 0 1 0
0 0 0 −1
0 1
S(z) = V (q) + 12 pT βp, and the only nonzero entry of E given by E2,2 = α1 . If we apply the construction algorithm for r = 2, then once again we can drop the ODEs for the second stage variables and write the ODEs for the first stage variables in terms of the node variables giving the following ODEs at cell i: (4.52) 1 1 2 1 2 ∂t qi1 = (−2qi−1 + 2qi−1 + 4qi1 − 4qi2 − 2qi+1 + 2qi+1 ) + ∂q3 V (qi ), (Δx)2 ∂t2 qi2 =
α 1 2 3 1 2 3 (−4qi−1 + 3qi−1 − 2qi−1 + 8qi1 − 6qi2 + 4qi3 − 4qi+1 + 3qi+1 − 2qi+1 ) (Δx)2 − α∂q2 V (qi ),
∂t qi3 =
1 1 2 3 (4qi−1 − 4qi−1 + 2qi−1 − 8qi1 + 8qi2 − 4qi3 (Δx)2 1 2 3 + 4qi+1 − 4qi+1 + 2qi+1 ) − ∂q1 V (qi ).
5. Discussion. We would like to point out that the discretization in space by Lobatto IIIA–IIIB in the above examples only modifies the linear component of the multi-Hamiltonian PDE, i.e., the discrete approximation of Lzx . The reason for this is that, throughout the construction algorithm, the nonlinear components of the multiHamiltonian PDE always appear coupled to the time derivatives as the expression ∂qη V (Qi ) + giη . Furthermore, we note that, in the examples above, the same pattern of coefficients arises from discretizing different PDEs with the same order Lobatto IIIA–IIIB discretization. For example, with r = 2 the coefficients in the approximation of qxx have a weighting proportional to [1, −2, 1], while for r = 3 these coefficients are proportional to [−1, 8, −14, 8, −1] for the first ODE and [4, −8, 4] for the second ODE. This behavior continues for higher values of r; e.g., for r = 4 the approximation of qxx in the first ODE has coefficients proportional to [1, 12 (25 − 15 (5)), 12 (25 + 15 (5)), −52, 12 (25 + 15 (5)), 12 (25 − 15 (5)), 1], the second ODE has coefficients 3 (5)], and the third ODE has coefficients proportional to [5 + 3 (5), −20, 10, 5 − proportional to [5−3 (5), 10, −20, 5+3 (5)]. For higher values of r these patterns of the coefficients in the approximation of qxx become increasingly complicated, yet for a given value of r these patterns remain the same regardless of the PDE under consideration. The reason these patterns of coefficients occur for different PDEs is due to (4.16) and (4.17). For a given value of r, C and dζi are fixed regardless of the PDE. Similarly, the coefficients bT A(2) and bT − bT A(2) in (4.17) are completely determined by r.
1338
BRETT N. RYLAND AND ROBERT I. MCLACHLAN
Thus, when solving (4.16) and (4.17) for giη , the same weighting of the nearby stage variables occurs for qxx for different PDEs. ζ at stage j of cell i is given For an r stage discretization, the approximation to qxx by (5.1) −
r−1 1 −1 1 −1 ζ (C d ) = (C )j−1,k−1 ((1 − ck )Qζi,1 − Qζi,k + ck Qζi,r ) i j−1 (Δx)2 (Δx)2 k=2
for 2 ≤ j ≤ r − 1 and 1 ≤ ζ ≤ r, where C and ck are given by (3.5) and (3.2), ζ at stage 1 of cell i + 1 is given by respectively. The approximation to qxx (5.2)
1 2b1 (Δx)2
r−1 +
(bT A(2) )k (C−1 dζi+1 )k−1 + (bT − bT A(2) )k (C−1 dζi )k−1
,
k=2
+
(Qζi+2,1
−
2Qζi+1,1
+
Qζi,1 )
,
where b1 is the first entry in b. This suggests the following shortcut: 1. Write the PDE with only terms of the form zxx (no zx ). 2. Replace the zxx terms with the PRK finite differences of the desired order. Now, the system of ODEs that one obtains from applying Theorem 4.1 to an appropriate PDE can be written as a Hamiltonian system; e.g., for the Boussinesq equation and r = 2, the system of ODEs at node i can be written as ∂t zi = J−1 ∇zi Hi ,
(5.3) where zi =
(5.4)
qi pi
,
J−1 =
0
1
−1
0
and (5.5)
Hi =
1 (−qi−1 qi + qi2 − qi qi+1 + εpi−1 pi − εp2i + εpi pi+1 ) + V (pi ). (Δx)2
If the nonlinear terms in such a Hamiltonian system are separable, then one can apply an explicit symplectic PRK discretization in time to obtain an explicit (and hence well defined) high order local multisymplectic integrator. If the nonlinear terms are not separable, then other explicit time integrators may be applied, e.g., symplectic splitting methods [20], which may give superior performance (in terms of speed and stability) over implicit integrators. Even if no explicit time integrator can be applied to the Hamiltonian system, there may be some benefits to having a spatial discretization that gives rise to explicit ODEs; e.g., the ODEs may be less stiff than those obtained from an implicit discretization. In the examples in the previous section, the systems of ODEs arising from the nonlinear wave equation and the Boussinesq equation both have separable Hamiltonians and thus allow for a high order explicit symplectic PRK discretization to be applied in time. The NLS equation is not so fortunate, however; its nonlinearity is
ON MULTISYMPLECTICITY OF PRK METHODS
1339
only quadratic, and thus for an r-stage Lobatto IIIA–IIIB discretization in time it is necessary to solve a system of r − 1 coupled quadratic equations for each update of Pi or Qi . For r = 2, this quadratic equation can be solved explicitly (in particular, the same root of the quadratic is always taken) and an explicit (and hence well defined), local, high order in space, multisymplectic integrator can be formed. Another point that we would like to make is about how the ODEs that one obtains from our construction algorithm handle boundary conditions. Many other discretization schemes (e.g., implicit midpoint, higher order Gaussian Runge–Kutta) either do not remain well defined or require extra conditions to be so [2, 19]. However, our ODEs remain well defined under periodic, Dirichlet, and Neumann boundary conditions without any further restrictions. For example, 3-stage Lobatto IIIA–IIIB applied to the NLS equation with Neumann boundary conditions, ψx = 0, applied to the left boundary as v1 = w1 = 0 leads to the following ODEs: ∂t P1,1 = − (5.6) ∂t Q1,1 =
1 2 (−14Q1,1 + 16Q1,2 − 2Q2,1 ) − 2(P1,1 + Q21,1 )Q1,1 , (Δx)2
1 2 (−14P1,1 + 16P1,2 − 2P2,1 ) + 2(P1,1 + Q21,1 )P1,1 , (Δx)2
which are equivalent to the first and fourth lines of (4.29), where the points outside the domain are treated as phantom points, i.e., Q0,1 = Q2,1 and Q0,2 = Q1,2 . Finally, we would like to point out that although Theorem 4.1 is stated for the Lobatto IIIA–IIIB class of PRK discretizations, it applies equally well to any PRK discretization satisfying (3.3), (3.4), and (3.5). We leave it as an open question whether there are any other PRK discretizations that satisfy (3.3), (3.4), and (3.5). In this paper we have deliberately restricted our attention to the structural properties of PRK discretization, namely its multisymplecticity and explicitness. Results on its dynamical properties, such as order and dispersion, will be reported elsewhere. Acknowledgments. BR would like to thank the CWI, Amsterdam, for their hospitality, and Education New Zealand for financial support. REFERENCES [1] L. Abia and J. M. Sanz-Serna, Partitioned Runge–Kutta methods for separable Hamiltonian systems, Math. Comp., 60 (1993), pp. 617–634. [2] U. M. Ascher and R. I. McLachlan, Multisymplectic box schemes and the Korteweg–de Vries equation, Appl. Numer. Math, 48 (2004), pp. 255–269. [3] U. M. Ascher and R. I. McLachlan, On symplectic and multisymplectic schemes for the Korteweg–de Vries equation, J. Sci. Comput., 25 (2005), pp. 83–104. [4] T. B. Benjamin, J. L. Bona, and J. J. Mahony, Model equations for long waves in nonlinear dispersive systems, Philos. Trans. Roy. Soc. London Ser. A, 272 (1972), pp. 47–78. [5] T. J. Bridges, Multi-symplectic structures and wave propagation, Math. Proc. Cambridge Philos. Soc., 121 (1997), pp. 147–190. [6] T. J. Bridges and G. Derks, Unstable eigenvalues and the linearization about solitary waves and fronts with symmetry, Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci., 455 (1999), pp. 2427–2469. [7] T. J. Bridges and S. Reich, Multi-symplectic integrators: Numerical schemes for Hamiltonian PDEs that conserve symplecticity, Phys. Lett. A, 284 (2001), pp. 184–193. [8] J. C. Butcher, Implicit Runge–Kutta processes, Math. Comp., 18 (1964), pp. 50–64. [9] J. B. Chen, Multisymplectic geometry, local conservation laws and Fourier pseudospectral discretization for the “good” Boussinesq equation, Appl. Math. Comput., 161 (2005), pp. 55– 67. [10] R. Fetecau and D. Levy, Approximate model equations for water waves, Commun. Math. Sci., 3 (2005), pp. 159–170.
1340
BRETT N. RYLAND AND ROBERT I. MCLACHLAN
[11] J. Frank, B. E. Moore, and S. Reich, Linear PDEs and numerical methods that preserve a multisymplectic conservation law, SIAM J. Sci. Comput., 28 (2006), pp. 260–277. [12] E. Hairer, C. Lubich, and G. Wanner, Geometric Numerical Integration, 2nd ed., Springer– Verlag, Berlin, 2006. [13] J. L. Hong, Y. Liu, and G. Sun, The multi-symplecticity of partitioned Runge–Kutta methods for Hamiltonian PDEs, Math. Comp., 75 (2005), pp. 167–181. [14] P. E. Hydon, Multisymplectic conservation laws for differential and differential-difference equations, Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci., 461 (2005), pp. 1627–1637. [15] A. L. Islas and C. M. Schober, Multi-symplectic methods for generalized Schr¨ odinger equations, Future Generation Computer Systems, 19 (2003), pp. 403–413. [16] L. Jay, Symplectic partitioned Runge–Kutta methods for constrained Hamiltonian systems, SIAM J. Numer. Anal., 33 (1996), pp. 368–387. [17] B. E. Moore and S. Reich, Multi-symplectic integration methods for Hamiltonian PDEs, Future Generation Computer Systems, 19 (2003), pp. 395–402. [18] S. Reich, Multi-symplectic Runge–Kutta collocation methods for Hamiltonian wave equations, J. Comput. Phys., 157 (1999), pp. 473–499. [19] B. N. Ryland, Multisymplectic Integration, Ph.D. thesis, Institute of Fundamental Sciences, Massey University, Palmerston North, New Zealand, 2007. [20] B. N. Ryland, R. I. McLachlan, and J. Frank, On multisymplecticity of partitioned Runge– Kutta and splitting methods, Int. J. Comput. Math., 84 (2007), pp. 847–869. [21] J. M. Sanz-Serna, Runge–Kutta schemes for Hamiltonian systems, BIT, 28 (1988), pp. 877–883. [22] Z. Shang, KAM theorem of symplectic algorithms for Hamiltonian systems, Numer. Math., 83 (1999), pp. 477–496. [23] G. Sun, Symplectic partitioned Runge–Kutta methods, J. Comput. Math., 11 (1993), pp. 365–372.
SIAM J. SCI. COMPUT. Vol. 30, No. 3, pp. 1341–1361
c 2008 Society for Industrial and Applied Mathematics
A NINE POINT SCHEME FOR THE APPROXIMATION OF DIFFUSION OPERATORS ON DISTORTED QUADRILATERAL MESHES∗ ZHIQIANG SHENG† AND GUANGWEI YUAN† Abstract. A nine point scheme is presented for discretizing diffusion operators on distorted quadrilateral meshes. The advantage of this method is that highly distorted meshes can be used without the numerical results being altered remarkably, and it treats material discontinuities rigorously and offers an explicit expression for the face-centered flux; moreover, it has only the cell-centered unknowns. We prove that the method is stable and has first-order convergence on distorted meshes. Numerical experiments show that the method has second-order or nearly second-order accuracy on distorted meshes. Key words. finite volume scheme, nine point scheme, diffusion equation, distorted meshes, convergence AMS subject classifications. 65M06, 65M12, 65M55 DOI. 10.1137/060665853
1. Introduction. Accurate and reliable discretization methods for the diffusion equation on distorted meshes are very important for numerical simulations of Lagrangian hydrodynamics and magnetohydrodynamics. As with the finite element method and the finite difference method, the finite volume method is a discretization technique for solving partial differential equations (PDEs). It is obtained by integrating the PDEs over a control volume, and it represents in general the conservation of certain physical quantities of interest, such as mass, momentum, or energy. Due to this natural association, the finite volume method is widely used in practical problems such as computational fluid dynamics [18]. Moreover, it is flexible enough to be applied to complex space domains, and because it works directly on the physical domain rather than on the computational domain through coordinate transformations, it can be easily used in adaptive mesh strategies. A finite volume scheme for solving diffusion equations on nonrectangular meshes is proposed in [10], which is the so-called nine point scheme on arbitrary quadrilateral meshes. This scheme has only cell-centered unknowns after cell-vertex unknowns are eliminated by taking them as the arithmetic average of the neighboring cell-centered unknowns. However, this simple interpolation loses significant accuracy on moderately and highly skewed meshes. Some similar schemes are discussed in [8, 17]. The scheme in [7] has both cell-centered unknowns and vertex unknowns and leads to a symmetric positive definite matrix. Although numerical experiments show that this scheme indeed has second-order accuracy, no theoretical proof is given. In [19], the authors analyze the scheme theoretically, and give a construction of a finite volume scheme for diffusion equations with discontinuous coefficients. The theoretical ∗ Received by the editors July 24, 2006; accepted for publication (in revised form) December 4, 2007; published electronically March 21, 2008. This work was partially supported by the National Basic Research Program (2005CB321703), the National Nature Science Foundation of China (60533020), and Basic Research Project of National Defense (A1520070074). http://www.siam.org/journals/sisc/30-3/66585.html † Laboratory of Computational Physics, Institute of Applied Physics and Computational Mathematics, P.O. Box 8009, Beijing, 100088, China (
[email protected], yuan
[email protected]).
1341
1342
ZHIQIANG SHENG AND GUANGWEI YUAN
analysis of the scheme is also given in [6]; moreover, the method is extended to Leray– Lions-type elliptic problems in [5]. The scheme in [9] consists of a cell-centered variational method based on a smooth mapping between the logical mesh coordinates and the spatial coordinates. This scheme leads to a symmetric positive definite matrix. However, the assumption of a smooth mapping is too restrictive. It also loses significant accuracy on moderately and highly skewed meshes. The MDHW scheme in [12] rigorously treats material discontinuities and yields second-order accuracy regardless of the smoothness of the mesh. This scheme has face-centered unknowns in addition to cell-centered unknowns. The support operators method (SOM) in [11, 13, 15, 16] gives second-order accuracy on both smooth and nonsmooth meshes either with or without material discontinuities, and SOM generally leads to a symmetric positive definite matrix. However, SOM has both cell-centered and face-centered unknowns or has a dense diffusion matrix, and there is no explicit discrete expression for the normal flux on a cell edge. The method of multipoint flux approximations (MPFA) [1, 2, 3, 4] has only cellcentered unknowns and gives an explicit expression for the face-centered flux. The flux is approximated by a multipoint flux expression based on transmissibility coefficients. These coefficients are computed by locally solving a small linear system. In this paper we present a nine point scheme for diffusion equations with continuous and discontinuous coefficients on distorted quadrilateral meshes. The basic idea for constructing the scheme is as follows. First, by integrating the diffusion equation over a control volume (cell) and using the Gauss theorem, we obtain the expression for the integral flux on a cell side. Second, we discretize the flux using cell-centered unknowns, vertex unknowns, and face-centered unknowns on each cell. Third, we eliminate the face-centered unknowns by continuity of the normal flux component across a cell edge. Then we eliminate the vertex unknowns using the cell-centered unknowns, and the elimination procedure is a focus of this work. Hence, there are only cell-centered unknowns in the expression of our discrete normal flux. Furthermore, it is worth pointing out that the expression for the normal flux component has a specific physical meaning and an intuitive geometric explanation, so it facilitates implementation in codes. In addition, we can obtain the vertex values, which are important in some numerical methods of Lagrangian magnetohydrodynamics. Our scheme reduces to the standard five point scheme on rectangular grids and often leads to a nonsymmetric matrix for general quadrilateral meshes. We prove that our scheme is stable and gives first-order convergence. Moreover, we extend the method to general model diffusion problems. Although we prove only that the scheme is first-order accurate, the numerical experiments show that it is nearly second-order accurate on distorted meshes. The rest of this paper is organized as follows. In section 2, we describe the construction of the finite volume scheme for stationary diffusion problems on distorted quadrilateral meshes. In section 3, we prove that the scheme is stable and has first-order accuracy. We extend the method to general model diffusion problems in section 4. Then we present some numerical experiments to show their performance on several test problems in section 5. Finally, we end with some concluding remarks. 2. 
Construction of scheme for diffusion equation. 2.1. Problem and notation. Consider the stationary diffusion problem
(2.1)
−∇ · (κ(x)∇u) = f (x)
in Ω,
1343
NINE POINT SCHEME
KA
LA A dK ,V
V
K
L dL,V B
KB
LB
Fig. 2.1. The mesh stencil.
(2.2)
u(x) = 0
on ∂Ω,
where Ω is an open bounded polygonal set of R2 with boundary ∂Ω. Let us denote the cell and cell center by K and L, the vertex by A and B, and the cell side by σ (see Figure 2.1). If the cell side σ is a common edge of cells K and L, and its vertices are A and B, then we denote σ = K|L = BA. Let J be the set of all cells, E be the set of all cell sides, Eint be the set of all cell sides not on ∂Ω, Eext be the set of all cell sides on ∂Ω, and EK be the set of all cell sides of cell K. Denote h = (supK∈J m(K))1/2 , where m(K) is the area of cell K. We need the following two assumptions. Assumption (H1): u(x) ∈ C 2 (K), κ(x) ∈ C 1 (K), and f (x) ∈ C 1 (K) for all K ∈ J. Assumption (H2): There is a constant C > 0 such that 1 |A − B| ≤ |K − L| ≤ C|A − B|, C
K ∈ J.
Assumption (H2) implies that the aspect ratio is bounded; i.e., there are no extremely long and thin cells. 2.2. The expression of flux. By integrating (2.1) over the cell K and using the Green formula, one obtains (2.3) FK,σ = f (x)dx, K
σ∈EK
where FK,σ is the normal face flux on the edge σ, defined by (2.4) FK,σ = − κ(x)∇u(x) · nK,σ dl, σ
with nK,σ the outward unit normal on the edge σ of cell K (see Figure 2.2). In Figure 2.2, I is the midpoint of σ = BA; τBA and τKI are unit tangential vectors on BA and KI, respectively; and θK,σ is the angle between nK,σ and τKI . Notice that
1344
ZHIQIANG SHENG AND GUANGWEI YUAN
A
G W BA G nL,V G W LI K
T L,V
I
G W KI T K ,V G nK ,V
L
G Q KI
G W AB B
Fig. 2.2. Some notation.
nK,σ = − tan θK,σ τBA + and the normal face flux can be written as (2.5) FK,σ = tan θK,σ κ(x)∇u(x) · τBA dl − σ
1 τKI , cos θK,σ
1 cos θK,σ
κ(x)∇u(x) · τKI dl. σ
By the Taylor expansion, one has 1 u(I) − u(K) = ∇u(x) · (I − K) + (HI − HK ), 2 1 u(A) − u(B) = ∇u(x) · (A − B) + (HA − HB ), 2 where HI = (∇2 u(rx + (1 − r)I)(I − x), I − x), ∇2 u(rx + (1 − r)I) is the Hessian matrix of u at the point rx + (1 − r)I, and r is a number in [0, 1]. The other notations HK , HA , and HB have similar meanings. It follows that κ(x)∇u(x) · τKI =
HI − HK κ(K) (u(I) − u(K)) − κ(K) |I − K| 2|I − K| − (∇κ(rK + (1 − r)x) · (K − x)) ∇u(x) · τKI ,
κ(x)∇u(x) · τBA =
κ(K) H A − HB (u(A) − u(B)) − κ(K) |A − B| 2|A − B| − (∇κ(rK + (1 − r)x) · (K − x)) ∇u(x) · τBA .
Substitute the above two equations into (2.5) to obtain (2.6)
FK,σ = −τK,σ (u(I) − u(K) − DK,σ (u(A) − u(B))) + RK,σ ,
1345
NINE POINT SCHEME |A−B|κ(K) where τK,σ = |I−K| cos θK,σ = Similarly, we have
|A−B|κ(K) , dK,σ
DK,σ =
sin θK,σ |I−K| , |A−B|
and RK,σ = O(h2 ).
FL,σ = −τL,σ (u(I) − u(L) − DL,σ (u(B) − u(A))) + RL,σ ,
(2.7)
sin θL,σ |I−L| , |A−B|
|A−B|κ(L) |A−B|κ(L) , DL,σ = where τL,σ = |I−L| cos θL,σ = dL,σ By continuity of the normal flux component
and RL,σ = O(h2 ).
FK,σ = −FL,σ ,
(2.8) we can obtain u(I) =
(2.9)
1 (τK,σ u(K) + τL,σ u(L) + (τK,σ DK,σ − τL,σ DL,σ )(u(A) − u(B))) τK,σ + τL,σ +
1 (RK,σ + RL,σ ). τK,σ + τL,σ
Substitute (2.9) into (2.6) to obtain (2.10) where τσ =
¯ K,σ , FK,σ = −τσ (u(L) − u(K) − Dσ (u(A) − u(B))) + R τK,σ τL,σ τK,σ +τL,σ
=
|A−B| dK,σ κ(K)
d
L,σ + κ(L)
, Dσ =
(L−K,A−B) , |A−B|2
¯ K,σ = and R
τL,σ RK,σ −τK,σ RL,σ τK,σ +τL,σ
= O(h2 ). By (2.8) and (2.10), we immediately get (2.11)
¯ L,σ , FL,σ = −τσ (u(K) − u(L) − Dσ (u(B) − u(A))) + R
¯ K,σ . ¯ L,σ = −R where R The expressions for the normal flux components have specific physical meanings and clear geometric meanings. For instance, FK,σ consists of two parts: one is the flux from the cell center K to the cell center L, and the other is the flux from the vertex B to the vertex A. If the lines KL and AB are perpendicular with each other, then the normal flux component FK,σ is equal to the flux from the cell center K to the cell center L, and the flux from the vertex B to the vertex A has no contribution for FK,σ . If the lines KL and AB are not perpendicular with each other, then both the flux from the cell center K to the cell center L and the flux from the vertex B to the vertex A have contribution for FK,σ . It is obvious that there are vertex unknowns u(A) and u(B) in addition to cellcentered unknowns in the expressions (2.10) and (2.11) of the normal flux component. In the next section, we will consider how to eliminate the vertex unknowns. 2.3. The expression of vertex unknowns. The main goal of this section is to obtain the local expression of the normal flux component with only cell-centered unknowns. We will propose a method of eliminating the vertex unknowns in (2.10) and (2.11) by locally approximating them with surrounding cell-centered unknowns. ¯ K,σ = O(h2 ) in (2.10) and R ¯ L,σ = O(h2 ) in (2.11), we require that the Noticing that R approximation of vertex unknowns be second order so as not to affect the approximate accuracy of the normal flux. Now, we will describe in detail a method of eliminating the vertex unknowns.
1346
ZHIQIANG SHENG AND GUANGWEI YUAN
2.3.1. Continuous coefficient. Let KA , LA , K, and L be the cells sharing the vertex A. When the coefficient κ(x) is continuous, we use the following Taylor expansions: u(KA ) = u(A) + ux xKA A + uy yKA A + O(h2 ), u(K) = u(A) + ux xKA + uy yKA + O(h2 ), u(L) = u(A) + ux xLA + uy yLA + O(h2 ), u(LA ) = u(A) + ux xLA A + uy yLA A + O(h2 ), where xKA A = xKA − xA , yKA A = yKA − yA , and the others have similar definitions. As referred to above, we will approximate u(A) by the linear weighted combination of u(KA ), u(LA ), u(K), and u(L). In order to make this approximation second order, we should select some coefficients such that the terms containing ux and uy vanish. Specifically, multiply the above four equations by ωAi (i = 1, 2, 3, 4), respectively, and add the resulting equations to obtain (2.12)
u(A) = ωA1 u(KA ) + ωA2 u(K) + ωA3 u(L) + ωA4 u(LA ) + RA ,
where ωAi (i = 1, 2, 3, 4) satisfy the following relation: ⎧ ωA1 + ωA2 + ωA3 + ωA4 = 1, ⎪ ⎨ xKA A ωA1 + xKA ωA2 + xLA ωA3 + xLA A ωA4 = 0, (2.13) ⎪ ⎩ yKA A ωA1 + yKA ωA2 + yLA ωA3 + yLA A ωA4 = 0, and RA = O(h2 ). The matrix associated with the system of equations (2.13) is denoted by M , which is of size 3 × 4. Thus the linear system (2.13) reduces to an underdetermined system M ω = b, where ω = (ωA1 , ωA2 , ωA3 , ωA4 )T and b = (1, 0, 0)T . We can obtain the solution ω by different methods. In this paper, we set ω = M T ω and solve the system of equations about ω : M M T ω = b. Since the meshes are convex quadrangles, the four points KA , K, LA , and L are not on a straight line. It follows that the determinant of M M T is not equal to zero. Once the solution ω is computed, the original unknown ω can be obtained (see [14]). In a similar way, we have (2.14)
u(B) = ωB1 u(K) + ωB2 u(KB ) + ωB3 u(LB ) + ωB4 u(L) + RB ,
where ωBi (i = 1, 2, 3, 4) are required to satisfy a similar relation and RB = O(h2 ). 2.3.2. Discontinuous coefficient. When the diffusion coefficient κ(x) is discontinuous (piecewise continuous), the gradient of u(x) is discontinuous across the discontinuity. That is, ux and uy in the Taylor expansion formulae of the above subsection may be discontinuous across the discontinuity. In this case, we use the continuity of the normal flux components at a vertex and the continuity of the tangential gradients on a cell side to eliminate the vertex unknowns. Next, we present the elimination procedure in detail. We use the following Taylor expansions: (2.15)
¯K , u(KA ) = u(A) + ∇u(A)|KA · (xKA − xA ) + R A
1347
NINE POINT SCHEME
(2.16)
¯K , u(K) = u(A) + ∇u(A)|K · (xK − xA ) + R
(2.17)
¯L, u(L) = u(A) + ∇u(A)|L · (xL − xA ) + R
(2.18)
¯L , u(LA ) = u(A) + ∇u(A)|LA · (xLA − xA ) + R A
where ∇u(A)|KA is the gradient of u(x) on cell KA taking the value at the vertex A, ¯K = and ∇u(A)|K , ∇u(A)|L , and ∇u(A)|LA have similar meanings. Moreover, R A 2 2 2 2 ¯ ¯ ¯ O(h ), RK = O(h ), RLA = O(h ), and RLA = O(h ). To approximate u(A) by the linear combination of u(KA ), u(K), u(L), and u(LA ) with second-order accuracy, we choose the combination coefficients such that the terms containing ∇u(A)|KA , ∇u(A)|K , ∇u(A)|L , and ∇u(A)|LA vanish. This can be done by using the continuity of the normal flux component at a vertex and the continuity of the tangential gradients on a cell side. The normal fluxes at vertex A on the sides S1 , S2 , S3 , and S4 are denoted by f1 , f2 , f3 , and f4 , respectively, where counterclockwise is taken to be the positive flux direction (see Figure 2.3). By the continuity of the normal flux components at vertex A for each edge, we have (2.19)
¯ 1 = −κ(K)∇u(A)|K · nKK + R ¯ 2 ≡ f1 , κ(KA )∇u(A)|KA · nKA K + R A
(2.20)
¯ 3 = −κ(L)∇u(A)|L · nLK + R ¯ 4 ≡ f2 , κ(K)∇u(A)|K · nKL + R
(2.21)
¯ 5 = −κ(LA )∇u(A)|L · nL L + R ¯ 6 ≡ f3 , κ(L)∇u(A)|L · nLLA + R A A
(2.22)
¯ 7 = −κ(KA )∇u(A)|K · nK L + R ¯ 8 ≡ f4 . κ(LA )∇u(A)|LA · nLA KA + R A A A
Here nKA K is the unit normal vector to the common side of cells KA and K with the ¯ i = O(h), (i = direction from KA to K (the others have similar definitions), and R 1, 2, . . . , 8). From (2.19) and (2.22), we have ⎧ ¯1 f1 −R ⎨ nT KA K · ∇u(A)|KA = κ(KA ) , ¯8 f4 −R ⎩ nT KA LA · ∇u(A)|KA = − κ(KA ) .
KA
LA
S4 f3
S1
f4
f2 S3
A
K
f1 S2
L
Fig. 2.3. The normal flux at vertex A.
1348
ZHIQIANG SHENG AND GUANGWEI YUAN
This system of equations can be written as ⎡ X∇u(A)|KA = ⎣ where
X=
Introduce the matrix
¯1 f1 −R κ(KA ) ¯ 4 −R8 − fκ(K A)
nTKA K
R=
0 −1
1 0
⎦,
.
nTKA LA
⎤
.
Then, the determinant of X is TKA = det X = nTKA K · RnKA LA . TKA is equal to twice the area of the triangle spanned by the vectors nKA K and nKA LA (the vectors nKA K and nKA LA form a right-handed system). The inverse of matrix X is given by X −1 =
1 [RnKA LA , −RnKA K ]. T KA
It follows that (2.23)
1 1 ¯1) + ¯ 8 ). RnKA LA (f1 − R RnKA K (f4 − R κ(KA )TKA κ(KA )TKA
∇u(A)|KA =
In the same way, we have (2.24)
∇u(A)|K =
1 1 ¯3) + ¯ 2 ), RnKKA (f2 − R RnKL (f1 − R κ(K)TK κ(K)TK
(2.25)
∇u(A)|L =
1 1 ¯5) + ¯ 4 ), RnLK (f3 − R RnLLA (f2 − R κ(L)TL κ(L)TL
(2.26)
∇u(A)|LA =
1 1 ¯7) + ¯ 6 ), RnLA L (f4 − R RnLA KA (f3 − R κ(LA )TLA κ(LA )TLA
where TK = nTKL · RnKKA , TL = nTLLA · RnLK , and TLA = nTLA KA · RnLA L . Substitute (2.23) into (2.15) to obtain (2.27)
¯K , u(KA ) = u(A) + ωKA ,4 f4 + ωKA ,1 f1 + R A
where ωKA ,4 =
1 |KA − A| cos θ2 , κ(KA )TKA
ωKA ,1 = −
1 |KA − A| cos θ1 , κ(KA )TKA
¯ = O(h2 ). Here θ1 and θ2 are the angles between the segment KA A and two and R KA cell sides (see Figure 2.4). Similarly, substitute (2.24), (2.25), and (2.26) into (2.16), (2.17), and (2.18), respectively, to obtain (2.28)
¯K , u(K) = u(A) + ωK,1 f1 + ωK,2 f2 + R
(2.29)
¯L , u(L) = u(A) + ωL,2 f2 + ωL,3 f3 + R
(2.30)
¯L , u(LA ) = u(A) + ωLA ,3 f3 + ωLA ,4 f4 + R A
1349
NINE POINT SCHEME
KA
LA
T2 T3
KA
T1 T8
TK A
T7 A T T 4 T5 6
K
LA
T LA A
TK L
TL
K
L
Fig. 2.4. The angles associated with vertex A.
where ωK,1 =
1 |K − A| cos θ4 , κ(K)TK
ωK,2 = −
1 |L − A| cos θ6 , κ(L)TL
ωL,3 = −
ωL,2 = ωLA ,3 =
1 |LA − A| cos θ8 , κ(LA )TLA
1 |K − A| cos θ3 , κ(K)TK
1 |L − A| cos θ5 , κ(L)TL
ωLA ,4 = −
1 |LA − A| cos θ7 , κ(LA )TLA
¯ , and R ¯ are all O(h2 ). ¯ , R and R K L LA Multiply (2.27)–(2.30) by ωAi (i = 1, 2, 3, 4), respectively, and add the resulting equations to obtain ωA1 u(KA ) + ωA2 u(K) + ωA3 u(L) + ωA4 u(LA ) =
4
ωAi u(A) + (ωA1 ωKA ,1 + ωA2 ωK,1 )f1 + (ωA2 ωK,2 + ωA3 ωL,2 )f2
i=1
(2.31)
¯A, + (ωA3 ωL,3 + ωA4 ωLA ,3 )f3 + (ωA4 ωLA ,4 + ωA1 ωKA ,4 )f4 + R
¯ A = O(h2 ). where R In order to obtain second-order approximation, the coefficients of f1 , f2 , f3 , and f4 should be zero, and the coefficient of u(A) should be 1, which leads to five equations about ωAi . However, there are only four unknowns ωAi (i = 1, . . . , 4). Note that the five equations are not independent. We will find their relations by the continuity of the tangential gradients on the cell side. Specifically, the tangential gradient on the edge S1 , which is the common edge of the cells K and KA , is continuous; that is, (2.32)
∇u(A)|KA · (−RnKA K ) = ∇u(A)|K · (RnKKA ).
Substitute (2.23) and (2.24) into the above equation to obtain (2.33)
¯ 1 + a2 R ¯ 8 − a3 R ¯ 3 − a4 R ¯ 2 ), a1 f1 + a2 f4 = a3 f2 + a4 f1 + (a1 R
1350
ZHIQIANG SHENG AND GUANGWEI YUAN cos θ
KA cos θK 1 where a1 = κ(KA )T , a2 = − κ(KA1)TK , a3 = κ(K)T , and a4 = − κ(K)T . Here θKA KA K K A and θK are shown in Figure 2.4. Similarly, note that the tangential gradients on the edge Si (i = 2, 3, 4) are continuous, respectively; that is,
∇u(A)|K · (−RnKL ) = ∇u(A)|L · (RnLK ), ∇u(A)|L · (−RnLLA ) = ∇u(A)|LA · (RnLA L ), ∇u(A)|LA · (−RnLA KA ) = ∇u(A)|KA · (RnKA LA ). It follows that (2.34)
¯ 3 + b2 R ¯ 2 − b3 R ¯ 5 − b4 R ¯ 4 ), b1 f2 + b2 f1 = b3 f3 + b4 f2 + (b1 R
(2.35)
¯ 5 + c2 R ¯ 4 − c3 R ¯ 7 − c4 R ¯ 6 ), c1 f3 + c2 f2 = c3 f4 + c4 f3 + (c1 R
(2.36)
¯ 7 + d2 R ¯ 6 − d3 R ¯ 1 − d4 R ¯ 8 ). d1 f4 + d2 f3 = d3 f1 + d4 f4 + (d1 R
Here b1 = c1 = d1 =
cos θK , κ(K)TK
cos θL , κ(L)TL
cos θLA , κ(LA )TLA
b2 = −
c2 = −
d2 = −
1 , κ(K)TK
1 , κ(L)TL
1 , κ(LA )TLA
b3 =
b4 = −
1 , κ(LA )TLA
c4 = −
1 , κ(KA )TKA
d4 = −
c3 = d3 =
1 , κ(L)TL
cos θL , κ(L)TL
cos θLA , κ(LA )TLA
cos θKA . κ(KA )TKA
From (2.33)–(2.36), we can find the relation between fi0 and fi (i = i0 ). Specifically, we can obtain the relation between f2 and f1 , f4 from (2.33), f2 = t1 f1 + t2 f4 + Rs1 ,
(2.37) where t1 = to obtain
a1 −a4 a3 , t2
=
a2 a3 ,
and Rs1 = O(h). Substitute the relation (2.37) into (2.31)
ωA1 u(KA ) + ωA2 u(K) + ωA3 u(L) + ωA4 u(LA ) =
4
ωAi u(A)
i=1
+ (ωA1 ωKA ,1 + ωA2 (ωK,1 + ωK,2 t1 ) + ωA3 ωL,2 t1 )f1 + (ωA3 ωL,3 + ωA4 ωLA ,3 )f3 (2.38)
¯A + (ωA4 ωLA ,4 + ωA1 ωKA ,4 + ωA2 ωK,2 t2 + ωA3 ωL,2 t2 )f4 + R ,
¯ = O(h2 ). where R A Letting the coefficients of f1 , f3 , and f4 be zero, we obtain four equations about four unknowns ωAi (i = 1, . . . , 4). It follows that u(A) = ωA1 u(KA ) + ωA2 u(K) + ωA3 u(L) + ωA4 u(LA ) + RA , ¯ = O(h2 ). = −R where RA A Similarly, we can obtain the relation between fi0 and fi (i = i0 ) from (2.34) or (2.35) or (2.36). Substitute this relation into (2.31) and let the coefficients of fi be zero; then we can obtain the expression of u(A).
NINE POINT SCHEME
1351
It is obvious that we obtain four different expressions of u(A) with four groups (j) (j) (j) (j) of combination coefficients (ωA1 , ωA2 , ωA3 , ωA4 ) (1 ≤ j ≤ 4). In order to make the expression of u(A) unique, we simply take ω = ω (j0 ) such that 4
(2.39)
(j )
|ωAi0 | = min
1≤j≤4
i=1
4
(j)
|ωAi |.
i=1
Hence, we get the unique expression of u(A) as follows: (2.40)
u(A) = ωA1 u(KA ) + ωA2 u(K) + ωA3 u(L) + ωA4 u(LA ) + RA ,
where RA = O(h2 ). 2.4. The nine point scheme. Substituting the expression of vertex unknowns into (2.10), we have FK,σ = −τσ {u(L) − u(K) − Dσ [ωA1 u(KA ) + ωA2 u(K) + ωA3 u(L) + ωA4 u(LA ) − (ωB1 u(K) + ωB2 u(KB ) + ωB3 u(LB ) + ωB4 u(L))]} + WK,σ , where WK,σ = O(h2 ). Denote F¯K,σ = −τσ {u(L) − u(K) − Dσ [ωA1 u(KA ) + ωA2 u(K) + ωA3 u(L) + ωA4 u(LA ) − (ωB1 u(K) + ωB2 u(KB ) + ωB3 u(LB ) + ωB4 u(L))]}. n+1 n+1 = F¯K,σ + WK,σ . Therefore, FK,σ Let
FK,σ = −τσ {uL − uK − Dσ [ωA1 uKA + ωA2 uK + ωA3 uL + ωA4 uLA − (ωB1 uK + ωB2 uKB + ωB3 uLB + ωB4 uL )]};
(2.41)
then the finite volume scheme of the problem (2.1)–(2.2) is given as follows: (2.42)
FK,σ = fK m(K),
K ∈ Ω,
σ∈EK
(2.43)
uK = 0,
K ∈ ∂Ω,
where fK = f (K), and K ∈ ∂Ω means that K is a boundary edge and also the midpoint of the boundary edge. It is obvious that the scheme at cell K is coupled with the eight cells around it; hence there are nine cells in our stencil (see Figure 2.5), so we call the scheme a nine point scheme. Our scheme reduces to the standard five point scheme on rectangular grids (see Figure 2.6). The scheme often leads to a system with a nonsymmetric matrix for general quadrilateral meshes. It is straightforward to extend our scheme on distorted quadrilateral meshes to arbitrary polygonal meshes.
1352
ZHIQIANG SHENG AND GUANGWEI YUAN
K
Fig. 2.5. Nine point stencil.
K
Fig. 2.6. Five point stencil.
3. Stability and convergence of scheme. In order to obtain the theorems of stability and convergence, we introduce the following assumption (H3): 2 ≤ τσ Dσ2 ωA 1
1 − ε0 − ε τKA |K , 16
2 τσ Dσ2 ωA ≤ 4
1 − ε0 − ε τLA |L , 16
2 τσ Dσ2 ωB ≤ 2
1 − ε0 − ε τK|KB , 16
2 τσ Dσ2 ωB ≤ 3
1 − ε0 − ε τL|LB , 16
Dσ (ωA3 + ωA4 − ωB3 − ωB4 ) ≤
ε0 , 2
where ε0 ∈ (0, 1) is a given constant, and ε > 0 is a small constant satisfying ε0 +ε < 1. The above assumption (H3) implies a geometric constraint on cell deformation. 2 For an orthogonal mesh, Dσ = 0. The inequality τσ Dσ2 ωA ≤ 1−ε160 −ε τKA |K can 1 |A−B| 2 be rewritten as τKτσ|K Dσ2 ωA ≤ 1−ε160 −ε . When κ = 1, we have τσ = |K−L| cos θ , 1 A
sin θ Dσ = |L−K| |A−B| . It is obvious that the assumption is a constraint on some geometric parameters, which include the angle between the cell side and the segment connecting neighboring cell centers, and the ratio between the length of the cell side and the length of the segment connecting neighboring cell centers.
1353
NINE POINT SCHEME
3.1. Stability. Now we prove that our scheme is stable. Theorem 3.1. Assume that (H1), (H2), and (H3) are satisfied. Then there exists a constant C, independent of h, such that τσ (uL − uK )2 ≤ C |fK |2 m(K). σ∈E
K∈J
Proof. Multiplying (2.42) by uK and summing up the resulting products for K ∈ J , we get (3.1) FK,σ uK = fK uK m(K). K∈J σ∈EK
K∈J
Notice that FK,σ uK = − τσ (uL − uK − Dσ (ωA1 uKA + ωA2 uK + ωA3 uL K∈J σ∈EK
K∈J σ∈EK
+ ωA4 uLA − (ωB1 uK + ωB2 uKB + ωB3 uLB + ωB4 uL )))uK = τσ (uL − uK )2 + τσ Dσ (ωA1 uKA + ωA2 uK + ωA3 uL σ∈E
σ∈Eint
+ ωA4 uLA − (ωB1 uK + ωB2 uKB + ωB3 uLB + ωB4 uL ))(uK − uL ) τσ (uL − uK )2 + τσ Dσ (ωA1 (uKA − uK ) = σ∈E
σ∈Eint
+ ωA4 (uLA − uL ) + ωB2 (uK − uKB ) + ωB3 (uL − uLB ) + (ωA2 + ωA1 − ωB1 − ωB2 ) · uK + (ωA3 + ωA4 − ωB3 − ωB4 )uL )(uK − uL ) τσ (uL − uK )2 + τσ Dσ (ωA1 (uKA − uK ) = σ∈E
σ∈Eint
+ ωA4 (uLA − uL ) + ωB2 (uK − uKB ) + ωB3 (uL − uLB ))(uK − uL ) − τσ Dσ (ωA3 + ωA4 − ωB3 − ωB4 )(uK − uL )2 .
(3.2)
σ∈Eint
Substitute (3.2) into (3.1) to obtain τσ (uL − uK )2 − τσ Dσ (ωA3 + ωA4 − ωB3 − ωB4 )(uK − uL )2 σ∈E
+
σ∈Eint
τσ Dσ (ωA1 (uKA − uK ) + ωA4 (uLA − uL ) + ωB2 (uK − uKB )
σ∈Eint
(3.3)
+ ωB3 (uL − uLB ))(uK − uL ) =
K∈J
fK uK m(K).
1354
ZHIQIANG SHENG AND GUANGWEI YUAN
By the Cauchy inequality, τσ (uL − uK )2 − τσ Dσ (ωA3 + ωA4 − ωB3 − ωB4 )(uK − uL )2 σ∈E
≤
σ∈Eint
1 ( 2 2 τσ (uL − uK )2 + 2 τσ Dσ2 ωA (uKA − uK )2 + ωA (uLA − uL )2 1 4 2
σ∈Eint
σ∈Eint
2 2 + ωB (uK − uKB )2 + ωB (uL − uLB )2 2 3
+
)
C ε |fK |2 m(K) + |uK |2 m(K). ε 4C K∈J
K∈J
Apply the assumption (H3) and the Sobolev inequality to obtain ε C τσ (uL − uK )2 ≤ |fK |2 m(K), 4 ε
σ∈E
K∈J
and hence,
τσ (uL − uK )2 ≤ C
σ∈E
|fK |2 m(K).
K∈J
Thus the scheme (2.42)–(2.43) is stable. 3.2. Convergence. Equation (2.3) is equivalent to the following equation:
(3.4)
FK,σ = fK m(K) + SK m(K),
σ∈EK
where fK = f (K), SK = K (f (x) − fK )dx/m(K). Obviously, |SK | ≤ Ch by assumption (H1). Let eK = u(K) − uK , and subtract (2.42) from (3.4) to obtain
(3.5)
GK,σ = SK m(K) −
σ∈EK
WK,σ ,
σ∈EK
where GK,σ = F¯K,σ − FK,σ . Now we present an error estimate for the scheme (2.42)–(2.43). Theorem 3.2. Assume that (H1), (H2), and (H3) are satisfied. Then there exists a constant C, independent of h, such that
1/2 τσ (eL − eK )
2
≤ Ch.
σ∈E
Proof. Multiplying (3.5) by eK , and summing up the resulting products for K ∈ J , we get (3.6) GK,σ eK = SK eK m(K) − WK,σ eK . K∈J σ∈EK
K∈J
K∈J σ∈EK
NINE POINT SCHEME
1355
Notice that GK,σ eK = − τσ Dσ (ωA3 + ωA4 − ωB3 − ωB4 )(eK − eL )2 K∈J σ∈EK
σ∈Eint
+
τσ (eL − eK )2 +
σ∈E
τσ Dσ (ωA1 (eKA − eK )
σ∈Eint
+ ωA4 (eLA − eL ) + ωB2 (eK − eKB ) + ωB3 (eL − eLB ))(eK − eL ).
(3.7)
Denote Wσ = WK,σ , notice that WL,σ = −WK,σ , and substitute (3.7) into (3.6) to obtain τσ (eL − eK )2 − τσ Dσ (ωA3 + ωA4 − ωB3 − ωB4 )(eK − eL )2 σ∈E
σ∈Eint
+
τσ Dσ (ωA1 (eKA − eKA ) + ωA4 (eLA − eL ) + ωB2 (eK − eKB )
σ∈Eint
=
K∈J
+ ωB3 (eL − eLB ))(eK − eL ) SK eK m(K) − Wσ (eK − eL ). σ∈E
By the Cauchy inequality, we have τσ (eL − eK )2 − τσ Dσ (ωA3 + ωA4 − ωB3 − ωB4 )(eK − eL )2 σ∈E
≤
σ∈Eint
1 > 2 2 τσ (eL − eK )2 + 2 τσ Dσ2 ωA (eKA − eK )2 + ωA (eLA − eL )2 1 4 2
σ∈Eint
σ∈Eint
? ε 2 2 τσ (eK − eL )2 + ωB (eK − eKB )2 + ωB (eL − eLB )2 + 2 3 8 σ∈E
+
2 ε |Wσ |2 + |eK |2 m(K) + Ch2 . ετσ 8C1
σ∈E
K∈J
Applying the assumption (H3) and the Sobolev inequality, we obtain τσ (uL − uK )2 ≤ Ch2 . σ∈E
4. Extension to general model diffusion equation. Consider now the general model diffusion problem (4.1) (4.2)
−∇ · (κ(x)∇u) = f (x) u(x) = 0
in Ω, on ∂Ω,
where κ(x) is a positive definite 2 × 2 matrix. Making the same operations as those at the beginning of subsection 2.2, and noticing that (κ∇u) · ν = ∇u · (κt ν),
1356
ZHIQIANG SHENG AND GUANGWEI YUAN
we obtain
(4.3)
FK,σ =
where
f (x)dx, K
σ∈EK
FK,σ = −
(4.4)
∇u(x) · κ(x)tnK,σ dl. σ
Since vectors τBA and τKI cannot be collinear, there exist α(x) and β(x) depending on κ such that κ(x)tnK,σ = −α(x)τBA + β(x)τKI ,
(4.5) where α(x) =
1 νKI · (κ(x)tnK,σ ), cos θK,σ
β(x) =
1 nK,σ · (κ(x)tnK,σ ). cos θK,σ
Using a similar derivation as in section 2, we have FK,σ = −τσ {u(L) − u(K) − Dσ [u(A) − u(B)]} + RK,σ , FL,σ = −τσ {u(K) − u(L) − Dσ [u(B) − u(A)]} + RL,σ , where τσ =
|A−B|
|I−K| |I−L| β(K) + β(L)
, Dσ =
|I−K|α(K) |A−B|β(K)
+
|I−L|α(L) |A−B|β(L) ,
and RK,σ = −RL,σ = O(h2 ).
Let FK,σ = −τσ {uL − uK − Dσ [ωA1 uKA + ωA2 uK + ωA3 uL + ωA4 uLA − (ωB1 uK + ωB2 uKB + ωB3 uLB + ωB4 uL )]} , where ωAi and ωBi satisfy relations similar to those in section 2. Then we obtain the finite volume scheme of problem (4.1)–(4.2). It is obvious that we have theorems similar to those in section 3. 5. Numerical experiments. Let us now present some numerical results to illustrate the behavior of the proposed finite volume scheme. The symmetric linear systems are solved by the conjugate gradient (CG) method, and the nonsymmetric linear systems are solved by the biconjugate gradient stabilized algorithm (BICGSTAB) (see [14]). Let Ω be the unit square, and let ∂ΩS , ∂ΩE , ∂ΩN , ∂ΩW be the boundaries of Ω. The first distorted mesh is a random mesh (see [8]). The random mesh over the physical domain Ω = [0, 1] × [0, 1] is defined by xij = Ii + σI (Rx − 0.5), yij = Ji + σ J (Ry −0.5), where σ ∈ [0, 1] is a parameter and Rx and Ry are two normalized random variables. The second distorted mesh is a Z-mesh as described in [9]. Figure 5.1 displays a random mesh generated with σ = 0.7 and a Z-mesh. 5.1. Linear elliptic equation. In order to test the scheme given in section 2 and compare it with some known methods, we consider the following linear elliptic equation whose analytic solution is u = 2 + cos(πx) + sin(πy): −∇ · (∇u) = π 2 (cos(πx) + sin(πy)) u = 2 + cos(πx) ∇u · ν = 0
in
Ω,
on ∂ΩS ∪ ∂ΩN , on ∂ΩE ∪ ∂ΩW .
1357
NINE POINT SCHEME
1
1
0.9
0.9
0.8
0.8
0.7
0.7
0.6
0.6
’y’
’y’
Table 5.1 gives the error between the exact solution and the numerical solution on the above mentioned random meshes, where NPCT (nine point scheme with cellcentered unknowns and Taylor expansion) is our new method, NPC is the method of [10] in which all of the combination coefficients are 14 , NPCV is the method of constructing the scheme on both cell centers and vertices (see [7, 19]), and MPFA is the method of multipoint flux approximations in [1]. NPCT, NPC, and MPFA lead to linear systems with a nonsymmetric matrix, and we use BICGSTAB to solve them. NPCV leads to a linear system with a symmetric matrix, and we use CG to solve it. From Table 5.1, one can see clearly that MPFA produces the best results for this case and exhibits nearly second-order convergence. NPCT produces the next best results for this case and also exhibits nearly second-order convergence. NPCV also has nearly second-order convergence, while NPC is the worst, failing to converge as the number of cells is increased. Compared with NPCV, our method not only provides more accurate results, but also is less expensive. This is because our method has less than half the unknowns of NPCV; i.e., our method has only the cell-centered unknowns, and NPCV has both cell-centered unknowns and vertex unknowns.
0.5
0.5
0.4
0.4
0.3
0.3
0.2
0.2
0.1
0.1
0
0
0.5
0
1
0
0.5
’x’
1
’x’
Fig. 5.1. Distorted quadrilateral mesh (24 × 24; left: random mesh, right: Z-mesh).
Table 5.1 Results for the linear elliptic equation on a random mesh.
NPCT MPFA NPCV NPC
I ×J Maximum error time(s) Maximum error time(s) Maximum error time(s) Maximum error time(s)
12 × 12 1.59e-2 – 1.49E-2 – 1.67e-2 – 0.23 –
24 × 24 4.01e-3 – 3.58E-3 – 4.28e-3 – 0.15 –
48 × 48 1.07e-3 0.25 1.05E-3 0.28 1.25e-3 0.40 0.14 0.22
96 × 96 3.05e-4 2.43 2.72E-4 2.02 3.30e-4 3.63 0.14 2.62
192 × 192 8.74e-5 28.85 6.99E-5 24.55 9.31e-5 57.94 0.14 37.08
Figure 5.2 displays the error between the exact solution and the numerical solution for the above-mentioned Z-mesh. The dashed line refers to first-order convergence, the solid line refers to second-order convergence, and the dash-dotted line gives the error of our method for the linear elliptic equation. One can see clearly that our
1358
ZHIQIANG SHENG AND GUANGWEI YUAN
10
First-order Second-order Linear elliptic equation General model diffusion equation
-1
Error
10
0
10-2
10-3
10
-4
0
50
100
150
200
I=J
Fig. 5.2. The results on Z-mesh.
method exhibits nearly second-order convergence in this case. Although we proved our scheme to be first-order accurate in theory, numerical experiments show that it appears to be second-order accurate for the tested problems. 5.2. The general model diffusion problem. In this subsection, we consider the general model diffusion problem −∇ · (κ(x)∇u) = f (x) u = sin(πx) sin(πy)
in
Ω,
on ∂Ω,
where κ(x) is a symmetric positive definite matrix, κ(x) = RDRT , and cos θ − sin θ d1 0 , R= , D= sin θ cos θ 0 d2 2 2 2 2 θ = 5π 12 , d1 = 1 + 2x + y , d2 = 1 + x + 2y . The solution is chosen to be u(x, y) = sin(πx) sin(πy).
Table 5.2 Results for the general model diffusion problem.
NPCT MPFA NPCV NPC
I ×J Maximum error time(s) Maximum error time(s) Maximum error time(s) Maximum error time(s)
12 × 12 9.65e-3 – 1.50E-2 – 1.30e-2 – 8.45e-2 –
24 × 24 2.55e-3 – 3.27E-3 – 3.24e-3 – 9.42e-2 –
48 × 48 8.21e-4 0.25 9.53E-4 0.28 9.32e-4 0.27 7.71e-2 0.23
96 × 96 2.12e-4 2.15 2.52E-4 2.17 2.48e-4 2.57 8.39e-2 2.12
192 × 192 5.46e-5 45.58 6.97E-5 36.32 7.36e-5 37.08 8.09e-2 40.77
Table 5.2 gives the error between the exact solution and the numerical solution. From this table, one can see that NPCT produces the best results for this case and
1359
NINE POINT SCHEME 1 0.9 0.8 0.7
y
0.6 0.5 0.4 0.3 0.2 0.1 0
0
0.5
1
x
Fig. 5.3. Random mesh with a discontinuity in x = 2/3 (24 × 24).
Table 5.3 Results for the linear elliptic equation with discontinue coefficients.
NPCT MPFA NPCV
I ×J Maximum error time(s) Maximum error time(s) Maximum error time(s)
12 × 12 8.56e-2 – 9.10E-2 – 1.81E-1 –
24 × 24 2.97e-2 – 3.07E-2 – 3.53E-2 –
48 × 48 1.04e-3 0.30 1.03E-2 0.30 1.12E-2 0.32
96 × 96 2.81e-3 2.32 2.79E-3 2.42 3.13E-3 2.48
192 × 192 8.01e-4 37.53 8.22E-4 34.20 8.63E-4 34.00
exhibits nearly second-order convergence. NPCV and MPFA produce the next best results for this case and also exhibit nearly second-order convergence, while NPC is the worst, failing to converge as the number of cells is increased. In Figure 5.2, the dotted line gives the error of our scheme for the general model diffusion equation on a Z-mesh. One can see clearly that our scheme exhibits nearly second-order convergence for this case. 5.3. Linear elliptic equation with discontinuous coefficients. Consider the following linear elliptic equation with discontinuous coefficients: −∇ · (κ(x, y)∇u) = f (x, y) u(x, y) = 0
in
Ω,
on ∂Ω,
where κ(x, y) =
4, (x, y) ∈ (0, 23 ] × (0, 1), 1, (x, y) ∈ ( 23 , 1) × (0, 1),
and f (x, y) =
20π 2 sin πx sin 2πy,
(x, y) ∈ (0, 23 ] × (0, 1),
20π 2 sin 4πx sin 2πy, (x, y) ∈ ( 23 , 1) × (0, 1).
1360
ZHIQIANG SHENG AND GUANGWEI YUAN
The exact solution is u(x, y) =
sin πx sin 2πy,
(x, y) ∈ (0, 23 ] × (0, 1),
sin 4πx sin 2πy, (x, y) ∈ ( 23 , 1) × (0, 1).
Since κ is discontinuous at x = 2/3, we use the randomly distorted mesh shown in Figure 5.3. Hence each cell is homogeneous, but material properties may vary between cells. We give the error between the exact solution and the numerical solution on a randomly distorted mesh in Table 5.3. From this table, we see that our scheme has nearly second-order convergence for this problem with discontinuous coefficients. 6. Conclusion. We present a new construction of vertex unknowns in the nine point scheme for discretizing diffusion operators on distorted quadrilateral meshes. The resulting scheme has only the cell-centered unknowns and has a local stencil; moreover, it treats material discontinuities rigorously and offers an explicit expression for the face-centered flux. Furthermore, the expression of the normal flux component has a specific physical meaning. In addition, we can obtain the vertex values. Our scheme is nonsymmetric in general and reduces to the standard five point scheme on rectangular grids. Although the construction of our scheme is described only on distorted quadrilateral meshes, it is straightforward to extend it to arbitrary polygonal meshes. Acknowledgments. The authors thank the two reviewers for their numerous constructive comments and suggestions that helped to improve the paper significantly. REFERENCES [1] I. Aavatsmark, An introduction to multipoint flux approximations for quadrilateral grids, Comput. Geosci., 6 (2002), pp. 405–432. [2] I. Aavatsmark, T. Barkve, Ø. Bøe, and T. Mannseth, Discretization on unstructured grids for inhomogeneous, anisotropic media. Part I: Derivation of the methods, SIAM J. Sci. Comput., 19 (1998), pp. 1700–1716. [3] I. Aavatsmark, T. Barkve, Ø. Bøe, and T. Mannseth, Discretization on unstructured grids for inhomogeneous, anisotropic media. Part II: Discussion and numerical results, SIAM J. Sci. Comput., 19 (1998), pp. 1717–1736. [4] I. Aavatsmark and G. T. Eigestad, Numerical convergence of the MPFA O-methods and U-method for general quadrilateral grids, Int. J. Numer. Methods Fluids, 51 (2006), pp. 939–961. [5] B. Andreianov, F. Boyer, and F. Hubert, Discrete duality finite volume schemes for LerayLions type elliptic problems on general 2D meshes, Numer. Methods Partial Differential Equations, 23 (2007), pp. 145–195. [6] K. Domelevo and P. Omnes, A finite volume method for the Laplace equation on almost arbitrary two-dimensional grids, ESAIM Math. Model. Numer. Anal., 39 (2005), pp. 1203–1249. [7] F. Hermeline, A finite volume method for the approximation of diffusion operators on distorted meshes, J. Comput. Phys., 160 (2000), pp. 481–499. [8] W. Huang and A. M. Kappen, A Study of Cell-Center Finite Volume Methods for Diffusion Equations, Mathematics Research Report 98-10-01, Department of Mathematics, University of Kansas, Lawrence KS, 1998. [9] D. S. Kershaw, Differencing of the diffusion equation in Lagrangian hydrodynamic codes, J. Comput. Phys., 39 (1981), pp. 375–395. [10] D. Li, H. Shui, and M. Tang, On the finite difference scheme of two-dimensional parabolic equation in a non-rectangular mesh, J. Numer. Methods Comput. Appl., 4 (1980), pp. 217–224. [11] K. Lipnikov, M. Shashkov, and D. Svyatskiy, The mimetic finite difference discretization of diffusion problem on unstructured polyhedral meshes, J. Comput. Phys., 211 (2006), pp. 473– 491.
NINE POINT SCHEME
1361
[12] J. E. Morel, J. E. Dendy, M. L. Hall, and S. W. White, A cell centered Lagrangian-mesh diffusion differencing scheme, J. Comput. Phys., 103 (1992), pp. 286–299. [13] J. E. Morel, R. M. Roberts, and M. J. Shashkov, A local support-operators diffusion discretization scheme for quadrilateral r–z meshes, J. Comput. Phys., 144 (1998), pp. 17–51. [14] Y. Saad, Iterative Method for Sparse Linear Systems, PWS Publishing, New York, 1996. [15] M. Shashkov, Conservative Finite-Difference Methods on General Grids, CRC Press, Boca Raton, FL, 1996. [16] M. Shashkov and S. Steinberg, Support-operator finite-difference algorithms for general elliptic problems, J. Comput. Phys., 118 (1995), pp. 131–151. [17] J. Wu, S. Fu, and L. Shen, A difference scheme with high resolution for the numerical solution of nonlinear diffusion equation, J. Numer. Methods Comput. Appl., 24 (2003), pp. 116–128. [18] X. Ye, A new discontinuous finite volume method for elliptic problems, SIAM J. Numer. Anal., 42 (2004), pp. 1062–1072. [19] G. Yuan and Z. Sheng, Analysis of accuracy of a finite volume scheme for diffusion equations on distorted meshes, J. Comput. Phys., 224 (2007), pp. 1170–1189.
© 2008 Society for Industrial and Applied Mathematics
SIAM J. SCI. COMPUT. Vol. 30, No. 3, pp. 1362–1386
NUMERICAL VARIATIONAL METHODS APPLIED TO CYLINDER BUCKLING∗ JIŘÍ HORÁK†, GABRIEL J. LORD‡, AND MARK A. PELETIER§ Abstract. We review and compare different computational variational methods applied to a system of fourth order equations that arises as a model of cylinder buckling. We describe both the discretization and implementation, in particular how to deal with a one-dimensional null space. We show that we can construct many different solutions from a complex energy surface. We examine numerically convergence in the spatial discretization and in the domain size. Finally we give a physical interpretation of some of the solutions found. Key words. mountain-pass algorithm, steepest descent, continuation AMS subject classifications. 65Z05, 65N06, 65K10, 35J35, 35J60 DOI. 10.1137/060675241
1. Introduction. We describe complementary approaches to finding solutions of systems of fourth order elliptic PDEs. The techniques are applied to a problem that arises in the classic treatment of an isotropic cylindrical shell under axial compression, but they are also applicable to a wide range of problems such as waves on a suspension bridge [4, 5], the Fučík spectrum of the Laplacian [6], and the formation of microstructure [12, 3]. The cylindrical shell offers a computationally challenging and physically relevant problem with a complex energy surface. We take as our model for the shell the Von Kármán–Donnell equations, which can be rescaled (see the appendix and [7]) to the form

(1.1) Δ²w + λw_xx − φ_xx − 2[w, φ] = 0,
(1.2) Δ²φ + w_xx + [w, w] = 0,

where the bracket is defined as

(1.3) [u, v] = ½ u_xx v_yy + ½ u_yy v_xx − u_xy v_xy.

The function w is a scaled inward radial displacement measured from the unbuckled (fundamental) state, φ is the Airy stress function, and λ ∈ (0, 2) is a load parameter. The unknowns w and φ are defined on a two-dimensional spatial domain Ω = (−a, a) × (−b, b), where x ∈ (−a, a) is the axial and y ∈ (−b, b) is the tangential coordinate. Since the y-domain (−b, b) represents the circumference of the cylinder, the following boundary conditions are prescribed:

(1.4a) w is periodic in y and w_x = (Δw)_x = 0 at x = ±a,
(1.4b) φ is periodic in y and φ_x = (Δφ)_x = 0 at x = ±a,
as shown in Figure 1 (i), (ii).

∗Received by the editors November 17, 2006; accepted for publication (in revised form) September 4, 2007; published electronically March 28, 2008. http://www.siam.org/journals/sisc/30-3/67524.html
†Department of Mathematics, Universität Köln, 50931 Köln, Germany ([email protected]).
‡Department of Mathematics, Heriot-Watt University, Edinburgh EH14 4AS, UK ([email protected]).
§Department of Mathematics and Computer Science, Technische Universiteit Eindhoven, 5600 MB Eindhoven, The Netherlands ([email protected]).
Fig. 1. (i) The geometry of the cylinder; (ii) the computational domain and the boundary conditions; (iii) one quarter of the domain and the corresponding boundary conditions.
1.1. Functional setting. We search for weak solutions w, φ of (1.1)–(1.4) in the space

X = { ψ ∈ H²(Ω) : ψ_x(±a, ·) = 0, ψ is periodic in y, and ∫_Ω ψ = 0 }

with the norm

‖w‖²_X = ∫_Ω ( (Δw)² + (Δφ₁)² ),

where φ₁ ∈ H²(Ω) is the unique solution of

(1.5) Δ²φ₁ = −w_xx, φ₁ satisfies (1.4b), and ∫_Ω φ₁ = 0.

This norm is equivalent to the H²-norm on X, and with the appropriate inner product ⟨·, ·⟩_X the space X is a Hilbert space. Alternatively, if the load parameter λ ∈ (0, 2) is fixed, another norm,

‖w‖²_{X,λ} = ∫_Ω ( (Δw)² + (Δφ₁)² − λ w_x² ),

can be used. Because of the estimate

∫_Ω w_x² = −∫_Ω w w_xx = ∫_Ω w Δ²φ₁ = ∫_Ω Δw Δφ₁ ≤ ½ ∫_Ω (Δw)² + ½ ∫_Ω (Δφ₁)² = ½ ‖w‖²_X,

it is equivalent to ‖·‖_X and hence also to the H²-norm on X. The corresponding inner product will be denoted ⟨·, ·⟩_{X,λ}.
Equations (1.1)–(1.2) are related to the stored energy E, the average axial shortening S, and the total potential given by

(1.6) E(w) := ½ ∫_Ω ( (Δw)² + (Δφ)² ), S(w) := ½ ∫_Ω w_x², F_λ = E − λS.

Note that the function φ in (1.6) is determined from w by solving (1.2) with boundary conditions (1.4b). All the functionals E, S, and F_λ belong to C¹(X), i.e., are continuously Fréchet differentiable. The fact that (1.1) is a reformulation of the stationarity condition F′_λ = E′ − λS′ = 0 will be important in section 2, and we therefore briefly sketch the argument. It is easy to see that

S′(w)·h = −∫_Ω w_xx h.

For E′(w)·h, let w, φ ∈ X solve (1.2) and h, ψ ∈ X solve Δ²ψ = −h_xx − [h, h] − 2[w, h]. Then, assuming sufficient regularity on w,

E(w + h) − E(w) = ∫_Ω Δw Δh + ½ ∫_Ω (Δh)² + ∫_Ω Δφ Δψ + ½ ∫_Ω (Δψ)²
= ∫_Ω ( h Δ²w − h φ_xx − 2[w, h]φ ) + ½ ∫_Ω (Δh)² + ½ ∫_Ω (Δψ)² − ∫_Ω [h, h]φ,

where we used integration by parts several times. The last three integrals are O(‖h‖²_X) for ‖h‖_X → 0, and it can be shown by integration by parts that

(1.7) ∫_Ω [w, h] φ = ∫_Ω h [w, φ].

Therefore

F′_λ(w)·h = E′(w)·h − λS′(w)·h = ∫_Ω h ( Δ²w + λw_xx − φ_xx − 2[w, φ] ).
1.2. Review of some variational numerical methods. We now describe the variational methods used to find numerical approximations of critical points of the total potential F_λ. In our numerical experiments these methods are accompanied by Newton's method and continuation. The advantage of this approach is that it combines the knowledge of global features of the energy landscape with local ones of a neighborhood of a critical point. The details related to spatial discretization will be discussed in section 2, the Newton-based methods in section 3.

1.2.1. Steepest descent method (SDM). Let the load parameter λ ∈ (0, 2) be fixed; we work in a discretized version of (X, ⟨·, ·⟩_{X,λ}). We try to minimize the total potential F_λ by following its gradient flow. We solve the initial value problem

dw(t)/dt = −∇_λ F_λ(w(t)), w(0) = w₀,

with a suitable starting point w₀ on some interval (0, T]. This problem is then discretized in t (a minimal sketch is given below). In [7] it was shown that w = 0 is a local minimizer of F_λ. Indeed, if ‖w₀‖_{X,λ} is small, the numerical solution w(t) converges to zero as t tends to infinity.
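A minimal sketch of such a time discretization is an explicit Euler step on the gradient flow. Here `grad_F` stands for the discretized gradient ∇_λF_λ of section 2.3; the step size, tolerance, and iteration cap are illustrative choices, not the authors' settings.

```python
import numpy as np

def sdm(w0, grad_F, dt=1e-2, tol=1e-8, max_steps=100_000):
    """Explicit-Euler discretization of dw/dt = -grad F(w), w(0) = w0."""
    w = np.array(w0, dtype=float)
    for _ in range(max_steps):
        g = grad_F(w)
        if np.linalg.norm(g) < tol:    # (approximately) a critical point
            break
        w -= dt * g                    # one steepest descent step
    return w
```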
Fig. 2. Deforming the path in the main loop of the MPA: point z_m is moved a small distance in the direction −∇_λ F_λ(z_m) and becomes z_m^new. This step is repeated until the mountain-pass point w_MP is reached.
If, on the other hand, ‖w₀‖_{X,λ} is large, the numerical solution w(t) stays bounded away from zero. In most of our experiments, the numerical algorithm did not converge for t → ∞ in the large norm case. The only exception, for a relatively small value of λ, will be mentioned later in section 5.3. Nevertheless, for a sufficiently large computational domain Ω and a sufficiently large t > 0 we obtain F_λ(w(t)) < 0. Such a state w(t) is needed for the mountain-pass algorithm as explained below. Existence of this state was also proved in [7].

1.2.2. Mountain-pass algorithm (MPA). The algorithm was first proposed in [2] for a second order elliptic problem in one dimension and extended in [5] to a fourth order problem in two dimensions. We give a brief description of the algorithm here; a sketch follows below. Let the load parameter λ ∈ (0, 2) be fixed; we work again in a discretized version of (X, ⟨·, ·⟩_{X,λ}). We denote by w₁ = 0 the local minimum of F_λ and take a point w₂ such that F_λ(w₂) < 0 (in practice this point is found using the SDM). We take a discretized path {z_m}_{m=0}^p connecting z₀ = w₁ with z_p = w₂. After finding the point z_m at which F_λ is maximal along the path, this point is moved a small distance in the direction of the steepest descent −∇_λ F_λ(z_m). Thus the path has been deformed and the maximum of F_λ lowered. This deforming of the path is repeated until the maximum along the path cannot be lowered anymore: a critical point w_MP has been reached. Figure 2 illustrates the main idea of the method.

The MPA is local in its nature. The numerical solution w_MP it finds has the mountain-pass property in a certain neighborhood only. The choice of the path endpoint w₂ may influence to which critical point the algorithm converges. Different choices of w₂ are in turn achieved by choosing different initial points w₀ in the SDM.
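In outline, the main loop can be sketched as follows. `F` and `grad_F` are the discretized total potential F_λ and its gradient (assumed given); path redistribution and other practical details from [2, 5] are omitted, and all parameter values are illustrative.

```python
import numpy as np

def mpa(w1, w2, F, grad_F, p=40, step=1e-2, tol=1e-8, max_iter=100_000):
    """Deform a discretized path z_0 = w1, ..., z_p = w2 by repeatedly
    moving its highest interior point a small distance along -grad F."""
    z = [w1 + t * (w2 - w1) for t in np.linspace(0.0, 1.0, p + 1)]
    for _ in range(max_iter):
        m = max(range(1, p), key=lambda j: F(z[j]))  # maximum along the path
        g = grad_F(z[m])
        if np.linalg.norm(g) < tol:   # maximum cannot be lowered anymore:
            break                     # a mountain-pass point is reached
        z[m] = z[m] - step * g        # lower the maximum of F along the path
    return z[m]
```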
1.2.3. Constrained steepest descent method (CSDM). We fix the amount of shortening S of the cylinder. This is often considered to be what actually occurs in experiments. We work now in a discretized version of (X, ⟨·, ·⟩_X). Let C > 0 be a fixed number and define a set of w with constant shortening:

(1.8) M = { w ∈ X : S(w) = C }.

Critical points of E under this constraint are critical points of F_λ, where λ is a Lagrange multiplier. The simplest such points are local minima of the stored energy E on M. We need to follow the gradient flow of E on M; hence we solve the initial value problem

dw(t)/dt = −P_{w(t)} ∇E(w(t)), w(0) = w₀ ∈ M

for t > 0. P_w denotes the orthogonal projection in X on the tangent space of M at w ∈ M:

P_w u = u − ( ⟨∇S(w), u⟩_X / ‖∇S(w)‖²_X ) ∇S(w).

The details of the algorithm can be found in [4]. The initial value problem is solved by repeating the following two steps (a sketch of one step is given below): given a point w ∈ M, find w̄ = w − Δt P_w ∇E(w), where Δt > 0 is small, and define w_new = c w̄, where the scaling coefficient c is chosen so that w_new ∈ M. The algorithm is stopped when ‖P_w ∇E(w)‖_X is smaller than a prescribed tolerance. The corresponding load is given by

λ = ⟨∇S(w), ∇E(w)⟩_X / ‖∇S(w)‖²_X.
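A sketch of one CSDM step, under the assumption that the X-gradients, the X-inner product, and S are available as callables. Since S is quadratic, S(cw) = c²S(w), one convenient choice of the scaling coefficient is c = √(C/S(w̄)); the names below are ours, not the authors'.

```python
import numpy as np

def csdm_step(w, C, grad_E, grad_S, inner_X, S, dt=1e-2):
    """One step of the CSDM on M = {S(w) = C}: project the gradient of E
    onto the tangent space of M, take a small descent step, rescale to M."""
    gS, gE = grad_S(w), grad_E(w)
    # orthogonal projection P_w applied to grad E
    PgE = gE - (inner_X(gS, gE) / inner_X(gS, gS)) * gS
    w_bar = w - dt * PgE
    # S is quadratic, so rescaling by sqrt(C / S(w_bar)) returns to M
    return np.sqrt(C / S(w_bar)) * w_bar
```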
1.2.4. Constrained mountain-pass algorithm (CMPA). Let C > 0 and let M be the set of w with constant shortening given in (1.8). We would like to find mountain-pass points of E on M. The method has been described in [4] in detail. We need two local minima w₁, w₂ of E on M, which can be found using the CSDM. The algorithm is then similar to the MPA. We take a discretized path {z_m}_{m=0}^p ⊂ M connecting z₀ = w₁ with z_p = w₂. After finding the point z_m at which E is maximal along the path, this point is moved a small distance in the tangent space to M at z_m in the direction of the steepest descent −P_{z_m} ∇E(z_m) and then scaled (as in the CSDM) to come back to M. Thus the path has been deformed on M and the maximum of E lowered. This deforming of the path is repeated until the maximum along the path cannot be lowered anymore: a mountain-pass point of E on M has been reached. The load λ is computed as in the CSDM. The choice of the endpoints w₁ and w₂ will in general influence to which critical point the algorithm converges.

1.3. Computational domains. We consider the problem on the domain Ω (Figure 1 (ii)) both without further restraints and under a symmetry assumption, which reduces the computational complexity. In the latter case we assume

(1.9) w(x, y) = w(−x, y) = w(x, −y), φ(x, y) = φ(−x, y) = φ(x, −y) for (x, y) ∈ Ω.

By looking for solutions w, φ ∈ X that satisfy (1.9) we effectively reduce the domain to one quarter: Ω¼ = (−a, 0) × (−b, 0), as shown in Figure 1 (iii). One needs to solve (1.1)–(1.2) only on Ω¼ with the boundary conditions

(1.10) w_ν = (Δw)_ν = φ_ν = (Δφ)_ν = 0 on ∂Ω¼,
where ν denotes the outward normal direction to the boundary. Hence we search for weak solutions of (1.1)–(1.2), (1.10) in the space

X¼ = { ψ ∈ H²(Ω¼) : ψ_ν = 0 on ∂Ω¼, and ∫_{Ω¼} ψ = 0 }.

We can then use (1.9) to extend these functions to the whole Ω. We have performed numerical experiments both with and without the symmetry assumption. For the sake of simplicity we will give a detailed description of the numerical methods for the second case only, where the boundary conditions are the same on all sides of Ω¼. The first case, with periodic conditions on two sides of Ω, is very similar and will be briefly mentioned in Remark 2.1.

1.4. Solving the biharmonic equation. In order to obtain φ for a given w, one has to solve (1.2); to compute the norm of w, one has to solve (1.5). Both problems are of the form

(1.11) Δ²ψ = f in Ω¼, ψ_ν = (Δψ)_ν = 0 on ∂Ω¼, ∫_{Ω¼} ψ = 0,

where f ∈ L¹(Ω¼) is given. If ∫_{Ω¼} f = 0, then (1.11) has a unique weak solution ψ in X¼. It is a straightforward calculation to verify that the right-hand sides of equations in (1.2) and (1.5) have zero average. In the discretization described below, problem (1.11) is treated as a system:

(1.12) −Δu = f, −Δv = u in Ω¼, u_ν = v_ν = 0 on ∂Ω¼, ∫_{Ω¼} u = ∫_{Ω¼} v = 0.

The system has a unique weak solution (u, v) ∈ (H¹(Ω¼))². Since the domain Ω¼ has no reentrant corners, Theorem 1.4.5 of [9] guarantees that v ∈ H²(Ω¼) and therefore that the two formulations are equivalent.
2. Finite difference discretization. We discretize the domain Ω¼ by a uniform mesh (x_m, y_n) ∈ Ω¼ with M points in the x-direction and N points in the y-direction:

x_m = −a + (m − ½)Δx, m ∈ {1, . . . , M}, y_n = −b + (n − ½)Δy, n ∈ {1, . . . , N},

where Δx = a/M, Δy = b/N. We represent the values of some function w on Ω¼ at these points by a vector w = (w_i)_{i=1}^{MN}, where w_i = w(x_m, y_n) and i = (n − 1)M + m. In our notation we will not distinguish between w as a function and w as a corresponding vector. The vector w can also be interpreted as a block vector with N blocks, each containing M values of a single row of the mesh. We introduce the following conventions for notation:
• For two matrices A^M = (a_ij)_{i,j=1}^M and B^N = (b_kℓ)_{k,ℓ=1}^N we define A^M ⊗ B^N := (b_kℓ A^M)_{k,ℓ=1}^N, which is an N × N block matrix, and each block is an M × M matrix.
• For two vectors u = (u_i)_{i=1}^{MN}, v = (v_i)_{i=1}^{MN} we define the componentwise product u ⊙ v = (u_i v_i)_{i=1}^{MN}.
To discretize second derivatives we use the standard central finite differences (with Neumann boundary conditions [10]). Let Id^M denote the M × M identity matrix and define another M × M matrix:

A₂^M =
⎡  1 −1             ⎤
⎢ −1  2 −1          ⎥
⎢     ⋱  ⋱  ⋱      ⎥
⎢        −1  2 −1   ⎥
⎣           −1  1   ⎦.

The second derivatives −∂xx, −∂yy and the biharmonic operator Δ² with the appropriate boundary conditions are approximated by

A_xx = (1/Δx²) A₂^M ⊗ Id^N, A_yy = (1/Δy²) Id^M ⊗ A₂^N, A_Δ² = (A_xx + A_yy)²,

respectively.
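To make the Kronecker-product construction concrete, here is a minimal NumPy/SciPy sketch; the sizes, names, and the use of scipy.sparse are our illustrative choices, not part of the paper.

```python
import numpy as np
import scipy.sparse as sp

def A2(M):
    """A_2^M: tridiagonal [-1, 2, -1] with corner entries 1
    (homogeneous Neumann conditions on the cell-centered mesh)."""
    main = 2.0 * np.ones(M)
    main[0] = main[-1] = 1.0
    off = -np.ones(M - 1)
    return sp.diags([off, main, off], [-1, 0, 1], format="csr")

# Illustrative sizes; the paper uses dx = dy = 0.5 with a = b = 100.
M, N, dx, dy = 200, 200, 0.5, 0.5
# With the ordering i = (n-1)*M + m (x varies fastest), the paper's
# A^M (x) Id^N corresponds to scipy's kron(Id_N, A^M).
Axx = sp.kron(sp.identity(N), A2(M)) / dx**2   # approximates -d^2/dx^2
Ayy = sp.kron(A2(N), sp.identity(M)) / dy**2   # approximates -d^2/dy^2
AD2 = (Axx + Ayy) @ (Axx + Ayy)                # discrete biharmonic operator
```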
2.1. Discretization of E, S, and the bracket [·, ·]. Supposing that we can solve the discretized version of (1.2),

(2.1) A_Δ² φ − A_xx w + [w, w]₂ = 0,

we can also evaluate the energy E and the shortening S:

(2.2) E(w) = 2 ( wᵀA_Δ² w + φᵀA_Δ² φ ) Δx Δy, S(w) = 2 ( wᵀA_xx w ) Δx Δy.

In order to solve (2.1) we need to be able to solve the biharmonic equation and choose a discretization of the bracket [·, ·]. This bracket appears in the equations in two different roles: in (1.2) the bracket is part of the mapping w → φ, and therefore of the definition of the energy E; in (1.1), which represents the stationarity condition E′ − λS′ = 0, the bracket appears as a result of differentiating E with respect to w and applying partial integration. As a result, we need to use two different forms of discretization for the two cases. In both cases the bracket requires a discretization of the mixed derivative ∂xy. One choice is to use one-sided finite differences. Define M × M matrices

(2.3)
A^M_{1L} =
⎡  0           ⎤
⎢ −1  1        ⎥
⎢     ⋱  ⋱    ⎥
⎣       −1  1  ⎦
,
A^M_{1R} =
⎡ −1  1        ⎤
⎢     ⋱  ⋱    ⎥
⎢       −1  1  ⎥
⎣           0  ⎦.
Ax =
1 Δx
N AM 1 ⊗ Id ,
Ay =
1 Δy
IdM ⊗ AN 1 ,
Axy = −Ax Ay .
For the definition of φ in terms of w (see (1.2)) we choose (2.5)
[w, w]2 = (Axx w) (Ayy w) − (Axy w) (Axy w) ,
NUMERICAL VARIATIONAL METHODS, CYLINDER BUCKLING
1369
and the corresponding choice for (1.1) is (2.6) [w, φ]1 = 12 Ayy {(Axx w) φ} + 12 Axx (Ayy w) φ − ATxy (Axy w) φ . These two definitions are related in the sense given in (1.7): [w, h]T2 φ = hT [w, φ]1 for all h. With these definitions the partial derivatives of discretized E and S with respect to the components of w are given by (2.7)
E (w) = AΔ2 w + Axx φ − 2[w, φ]1 ,
S (w) = Axx w .
2.2. Solving the discretized biharmonic equation. Matrix AΔ2 is symmetric and has a one-dimensional null space: AΔ2 1 = 0, where 1 and 0 are vectors with M N -components which are all one and all zero, respectively. The same is true for Axx and Ayy . For a given vector f we would like to solve (2.8)
AΔ2 ψ = f ,
1T ψ = 0 .
A unique solution exists if and only if f has zero average, i.e., 1T f = 0. So we must verify that the discretized versions of the right-hand sides in (1.2), (1.5) satisfy this condition. Let w ∈ RM N ; then 1T Axx w = 0 , (2.9) (2.10)
1T [w, w]2 = (Axx w)T (Ayy w) − (Axy w)T (Axy w) = wT Axx Ayy w − wT ATxy Axy w = 0 ,
where the last equality holds because Aᵀ_x A_x = A_xx and Aᵀ_y A_y = A_yy, and because the x- and y-matrices commute. We have, in fact, shown that the integration-by-parts formula from the continuous case holds for our choice of spatial discretization. This is not true for an arbitrary discretization but is key for a successful scheme. The inverse of matrix A_Δ² on the subspace of vectors with zero average, denoted with a slight abuse of notation by A⁻¹_Δ², can be found, for example, using the fast cosine transform described below in section 2.4.

2.3. Computing the gradient. The variational methods of this paper are based on a steepest descent flow and modifications of this algorithm. The direction of the steepest descent of E at a point w ∈ X is opposite to the gradient of E at w. The gradient is the Riesz representative of the Fréchet derivative, and hence we need to find a vector u ∈ X such that E′(w)·v = ⟨u, v⟩ for all v ∈ X. The inner product is either ⟨·, ·⟩_X or ⟨·, ·⟩_{X,λ}, and hence the gradient depends on the choice of the inner product. We use the notation u = ∇E(w) for the gradient in (X, ⟨·, ·⟩_X) and u = ∇_λE(w) for the gradient in (X, ⟨·, ·⟩_{X,λ}). To find the discretized version of the gradient, we first need to discretize the inner product. Let u, v ∈ R^{MN}, 1ᵀu = 1ᵀv = 0. The inner product is evaluated in the following way:

⟨u, v⟩_{X,λ} = 4 ( uᵀA_Δ² v + (φᵘ₁)ᵀA_Δ² φᵛ₁ − λuᵀA_xx v ) Δx Δy
= 4 ( uᵀ( A_Δ² + A_xx A⁻¹_Δ² A_xx − λA_xx ) v ) Δx Δy,
where φᵘ₁, φᵛ₁ are solutions of the discretized version of (1.5) with w replaced by u and v, respectively, and we assume that we work on Ω¼. For w ∈ R^{MN}, 1ᵀw = 0, the Riesz representative of E′(w) given in (2.7) is computed as

(2.11) ∇_λE(w) = ( A_Δ² + A_xx A⁻¹_Δ² A_xx − λA_xx )⁻¹ E′(w).
As in the case of A⁻¹_Δ² we have abused notation here since the inverse makes sense only on a subspace of vectors with zero average. It can be easily verified that 1ᵀE′(w) = 0. The numerical evaluation of ∇_λS and of the ⟨·, ·⟩_X-gradients is similar.

2.4. Fourier coordinates. In Fourier coordinates most of the finite difference operators become diagonal matrices. This increases the efficiency of the numerical algorithm and makes it possible to easily find the inverse of matrices like A_Δ². See, for example, [1]. On a uniform grid, it is standard procedure to apply some form of the fast Fourier transform to diagonalize finite difference matrices like A₂^M (see, e.g., [11]). Due to the Neumann boundary conditions (1.10) we need to employ the fast cosine transform. We define the M × M matrices

C_f^M = (1/√(2M)) ( 2 cos( (i−1)(2j−1)π / (2M) ) )_{i,j=1}^{M,M},
C_b^M = (1/√(2M)) [ 1 | ( 2 cos( (2i−1)(j−1)π / (2M) ) )_{i=1,j=2}^{M,M} ]

(the first column of C_b^M consists of ones), which have the following properties:

C_f^M C_b^M = Id^M, C_f^M A₂^M C_b^M = Λ^M,

where Λ^M = diag( 2 − 2 cos( (m−1)π/M ) )_{m=1}^M. Hence they are inverses of each other and diagonalize A₂^M. We further define the matrices

C_f = C_f^M ⊗ C_f^N, C_b = C_b^M ⊗ C_b^N,
which diagonalize A_xx, A_yy, and A_Δ²:

(2.12) C_f A_xx C_b = Λ_xx, C_f A_yy C_b = Λ_yy, C_f A_Δ² C_b = Λ_Δ²,

where the diagonal matrices are given by

(2.13) Λ_xx = (1/Δx²) Λ^M ⊗ Id^N, Λ_yy = (1/Δy²) Id^M ⊗ Λ^N, Λ_Δ² = (Λ_xx + Λ_yy)².
For a vector w ∈ R^{MN} we introduce its Fourier coordinates ŵ by

ŵ = C_f w, w = C_b ŵ.

We note that 1ᵀw = 0 if and only if the first component of ŵ is zero. Most of the computations involved in the variational methods described in section 1.2 can be done in the Fourier coordinates. One needs to go back to the original coordinates only when evaluating the brackets (2.5) and (2.6), because they are nonlinear and involve the discretized mixed derivative operator A_xy.
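The following sketch builds dense C_f^M, C_b^M from the formulas above and uses them to solve the zero-average biharmonic system (2.8); in practice a fast cosine transform would replace the dense matrix products. The function names and the small-scale dense approach are our assumptions.

```python
import numpy as np

def cosine_matrices(M):
    """Dense C_f^M, C_b^M as defined above: C_f C_b = Id and
    C_f A_2^M C_b = diag(2 - 2 cos((m-1) pi / M))."""
    i = np.arange(1, M + 1)[:, None]
    j = np.arange(1, M + 1)[None, :]
    Cf = 2.0 * np.cos((i - 1) * (2 * j - 1) * np.pi / (2 * M)) / np.sqrt(2 * M)
    Cb = 2.0 * np.cos((2 * i - 1) * (j - 1) * np.pi / (2 * M)) / np.sqrt(2 * M)
    Cb[:, 0] = 1.0 / np.sqrt(2 * M)   # first column of C_b consists of ones
    return Cf, Cb

def solve_biharmonic(f, M, N, dx, dy):
    """Solve A_D2 psi = f with 1^T psi = 0, cf. (2.8); assumes 1^T f = 0
    and the grid ordering i = (n-1)*M + m used above."""
    CfM, CbM = cosine_matrices(M)
    CfN, CbN = cosine_matrices(N)
    lam = lambda K: 2.0 - 2.0 * np.cos(np.arange(K) * np.pi / K)
    LD2 = (lam(M)[None, :] / dx**2 + lam(N)[:, None] / dy**2) ** 2
    F = CfN @ f.reshape(N, M) @ CfM.T            # Fourier coordinates of f
    F = np.divide(F, LD2, out=np.zeros_like(F), where=LD2 != 0.0)
    return (CbN @ F @ CbM.T).ravel()             # back to grid coordinates
```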
2.5. Alternative discretization of −∂xy. The fast Fourier transform provides us with another discretization of the mixed derivative which is not biased to the left or right. In analogy to (2.13) and (2.12) we define

Ã_xy = S Λ_xy C_f, Λ_xy = (1/(ΔxΔy)) √(Λ^M) ⊗ √(Λ^N),

where S is the fast sine transform matrix

S = S^M ⊗ S^N, S^M = (1/√(2M)) [ 0 | ( 2 sin( (2i−1)(j−1)π / (2M) ) )_{i=1,j=2}^{M,M} ]

(the first column of S^M is zero).
Property (2.10) also holds with A_xy replaced by Ã_xy.

Remark 2.1. When discretizing the problem on the full domain Ω with boundary conditions (1.4), we need to use different matrices in the x- and y-directions. In the x-direction we use the matrices described above; in the y-direction, to discretize the second derivatives, for example, we use

A₂ =
⎡  2 −1            −1 ⎤
⎢ −1  2 −1            ⎥
⎢     ⋱  ⋱  ⋱        ⎥
⎢        −1  2 −1     ⎥
⎣ −1           −1  2  ⎦.

In this direction the fast Fourier transform is used instead of the fast cosine/sine transform.

3. Newton's method. We use Newton's method in two different ways. The first is to improve the numerical approximations obtained by the variational numerical methods. Since these are sometimes slow to converge, it is often faster to stop such an algorithm early and use its result as an initial guess for Newton's method. The second use for Newton's method is as part of a numerical continuation algorithm (see section 3.3).

3.1. Newton's method for given load parameter λ. This method can be used to improve solutions obtained by the MPA. Let λ ∈ (0, 2) be given. We are solving
(3.1) G(w, φ) = [ G₁ ; G₂ ] = [ A_Δ² w − λA_xx w + A_xx φ − 2[w, φ]₁ ; −A_Δ² φ + A_xx w − [w, w]₂ ] = [ 0 ; 0 ]

for w and φ with zero average using Newton's method. The matrix we need to invert is
(3.2) G′(w, φ) = [ ∂G₁/∂w  ∂G₁/∂φ ; ∂G₂/∂w  ∂G₂/∂φ ] = [ A_Δ² − λA_xx − 2B₁   A_xx − 2B₂ ; A_xx − 2B₂ᵀ   −A_Δ² ],

where

B₁ = ∂[w, φ]₁/∂w = ½ A_xx (diag φ) A_yy + ½ A_yy (diag φ) A_xx − Aᵀ_xy (diag φ) A_xy,
B₂ = ∂[w, φ]₁/∂φ = ½ ( ∂[w, w]₂/∂w )ᵀ = ½ A_xx (diag A_yy w) + ½ A_yy (diag A_xx w) − Aᵀ_xy (diag A_xy w).
From the properties of A_xx, A_yy, A_xy, A_Δ² it follows that the matrix G′(w, φ) is symmetric and singular, and its null space is spanned by [1; 0] and [0; 1]. To describe how to find the inverse of G′(w, φ) on a subspace orthogonal to its null space, we introduce a new notation for the four blocks of G′(w, φ) from (3.2):

G′(w, φ) = [ G₁₁  G₁₂ ; G₁₂ᵀ  G₂₂ ].

For given vectors u, η with zero average we need to find vectors v, ζ with zero average such that

(3.3) [ G₁₁  G₁₂ ; G₁₂ᵀ  G₂₂ ] [ v ; ζ ] = [ u ; η ].

Let the tilde denote the block of the first MN − 1 rows and columns of a matrix and the first MN − 1 components of a vector. The matrix [ G̃₁₁  G̃₁₂ ; G̃₁₂ᵀ  G̃₂₂ ] is symmetric, nonsingular, and sparse. It can be inverted by a linear sparse solver. System (3.3) is then solved in the following steps:

[ G̃₁₁  G̃₁₂ ; G̃₁₂ᵀ  G̃₂₂ ] [ r̃ ; ρ̃ ] := [ ũ ; η̃ ],
s := −(1/(MN)) 1ᵀr̃,  v = [ r̃ + s1 ; s ],
σ := −(1/(MN)) 1ᵀρ̃,  ζ = [ ρ̃ + σ1 ; σ ].
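A minimal sketch of this reduce-and-shift procedure, written for a single symmetric matrix with null space span{1}; the paper applies the same steps to the 2 × 2 block matrix of (3.2). The function name and the use of SciPy's sparse solver are illustrative.

```python
import numpy as np
import scipy.sparse.linalg as spla

def solve_zero_average(G, u):
    """Solve G x = u for the zero-average x, where G is a sparse symmetric
    matrix (CSR) with null space spanned by the ones vector and 1^T u = 0:
    drop the last row/column, solve the reduced system, shift back."""
    n = G.shape[0]
    G_red = G[:-1, :][:, :-1].tocsc()          # leading (n-1) x (n-1) block
    r = spla.spsolve(G_red, np.asarray(u)[:-1])
    s = -r.sum() / n                           # shift enforcing 1^T x = 0
    return np.concatenate([r + s, [s]])
```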
3.2. Newton's method for given S. This method can be used to improve solutions obtained by the CSDM and the CMPA. Let C > 0 be given. We are looking for numerical solutions of (1.1)–(1.2) in the set M defined by (1.8). Hence we are solving

(3.4) [ A_Δ² w − λA_xx w + A_xx φ − 2[w, φ]₁ ; −A_Δ² φ + A_xx w − [w, w]₂ ; −½ wᵀA_xx w + C/(4ΔxΔy) ] = [ 0 ; 0 ; 0 ]
for w and φ with zero average and λ using Newton's method. The approach is very similar to that described in the previous section; the resulting matrix is symmetric and has just one more row and column than the matrix in (3.2).
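A sketch of assembling that bordered Jacobian, under the assumption that the 2MN × 2MN matrix of (3.2) has already been built as a sparse matrix `Gp`; the function name is ours, and the signs follow (3.4).

```python
import numpy as np
import scipy.sparse as sp

def augmented_jacobian(Gp, Axx, w):
    """Border G'(w, phi) of (3.2) with one extra column, the
    lambda-derivative [-Axx w; 0], and the matching constraint-gradient
    row; the resulting matrix is symmetric, as stated in the text."""
    n = len(w)
    c = sp.vstack([sp.csr_matrix((-(Axx @ w)).reshape(-1, 1)),
                   sp.csr_matrix((n, 1))])     # column [-Axx w; 0]
    return sp.bmat([[Gp, c], [c.T, None]], format="csr")
```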
3.3. Continuation. To follow branches of solutions (λ, w) of (3.1) we adopt a continuation method described in [8]. We introduce a parameter s ∈ R by adding a constraint: pseudoarclength normalization (in the (λ, ‖w‖_X)-plane). For a given value of s we are solving

(3.5) Ḡ(w, φ, λ) = [ G₁ ; G₂ ; G₃ ] = [ A_Δ² w − λA_xx w + A_xx φ − 2[w, φ]₁ ; −A_Δ² φ + A_xx w − [w, w]₂ ; θ ⟨ẇ₀, w − w₀⟩_X + (1 − θ) λ̇₀ (λ − λ₀) − (s − s₀) ] = [ 0 ; 0 ; 0 ]

for w, φ with zero average and the load λ, where the value of θ ∈ (0, 1) is fixed (e.g., θ = ½). We assume that we are given a value s₀, an initial point (λ₀, w₀) on
the branch, and an approximate direction (λ̇₀, ẇ₀) of the branch at this point (an approximation of the derivative d/ds (λ(s), w(s))|_{s=s₀}). System (3.5) is solved for a discrete set of values of s in some interval (s₀, s₁) by Newton's method. Then a new initial point on the branch is defined by setting w₀ = w(s₁), λ₀ = λ(s₁), s₀ = s₁, a new direction (λ̇₀, ẇ₀) at this point is computed, and the process is repeated. The matrix we need to invert in Newton's method is
(3.6) Ḡ′(w, φ, λ) = [ G′(w, φ, λ)  g ; hᵀ  d ],

g = [ −A_xx w ; 0 ], h = [ 4θ Â ẇ₀ ΔxΔy ; 0 ], d = (1 − θ) λ̇₀,

where Â = A_Δ² + A_xx A⁻¹_Δ² A_xx. Solving a linear system with this matrix amounts to solving system (3.3) for two right-hand sides. For a given u ∈ R^{2MN} with [1ᵀ 1ᵀ]u = 0 and a given η ∈ R we want to solve

(3.7) [ G′(w, φ, λ)  g ; hᵀ  d ] [ v ; ζ ] = [ u ; η ]

for v with [1ᵀ 1ᵀ]v = 0 and ζ. System (3.7) is solved in the following steps:
solve G′(w, φ, λ) v₁ = g and G′(w, φ, λ) v₂ = u, then set

ζ = (η − hᵀv₂) / (d − hᵀv₁), v = v₂ − ζ v₁.
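The two-solve procedure above can be written directly; `solve_G` is a hypothetical callable applying the inverse of G′(w, φ, λ) on the zero-average subspace (e.g., via the sparse factorization of section 3.1), h is a vector, and d a scalar.

```python
def bordered_solve(solve_G, g, h, d, u, eta):
    """Solve the bordered system (3.7) with two inner solves."""
    v1 = solve_G(g)                       # G' v1 = g
    v2 = solve_G(u)                       # G' v2 = u
    zeta = (eta - h @ v2) / (d - h @ v1)  # scalar bordering equation
    return v2 - zeta * v1, zeta
```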
Remark 3.1. Note that in this implementation we simply follow a solution of the equation; there is no guarantee that this remains a local minimum, a mountain-pass solution, or a constrained mountain-pass solution (cf. Figure 12).

Remark 3.2. Newton's method and continuation have been implemented using only a one-sided discretization of the mixed derivative ∂xy and only on the domain Ω¼ assuming symmetry (1.9). The alternative discretization of ∂xy described in section 2.5 uses the fast cosine/sine transform; the resulting matrix Ã_xy is not sparse, and therefore we would obtain a dense block G₁₂ in system (3.3), which would prevent us from using a sparse solver. On the full domain Ω we assume periodicity of w and φ in the y-direction. Hence for a discretization with a small step Δy the matrix we invert when solving (3.3) would become close to singular. The shift in the y-direction is prevented by assuming the symmetry w(x, y) = w(x, −y), φ(x, y) = φ(x, −y).

4. Numerical solutions. We fix the size of the domain and the size of the space step for the following numerical computations: a = b = 100, Δx = Δy = 0.5. We obtain solutions using the variational techniques SDM, MPA, CSDM, and CMPA. Table 1 provides a summary of which discretization was used in which algorithm.

4.1. A mountain-pass solution on the full domain Ω. The first numerical experiments were done on the full domain Ω, without symmetry restrictions, and with the unbiased (Fourier) discretization of ∂xy (section 2.5). For a fixed load λ = 1.4 we computed a mountain-pass solution using the MPA (section 1.2.2). For endpoints we used w₁ = 0 and a second point w₂ obtained by the SDM (here the initial point for the SDM was chosen to have a single peak centered at x = y = 0). The graph of the
Table 1. Summary of which spatial discretization was used in the different numerical methods.

                        Mixed derivative ∂xy      Computational domain
                        One-sided    Fourier      Full      1/4
Variational methods        √            √           √        √
Newton/continuation        √                                 √

Fig. 3. Mountain-pass solution for λ = 1.4 found using the MPA on the full domain Ω with ∂xy discretized using the fast Fourier transform.
solution w_MP is shown in Figure 3 (left). The figure on the right shows w_MP rendered on a cylinder (see the appendix for details on the scaling of the geometry), and we see it has the form of a single dimple. The value of shortening for this solution is S(w_MP) = 14.93529. Alternatively, if we apply the CSDM with S = 14.93529 and use a function with a single peak in the center of the domain as the initial condition w₀, we also obtain the same solution w_MP, this time as a local minimizer of E under constrained S. We remark that although the MPA and the CSDM have a local character, we have not found any numerical mountain-pass solution with the total potential F_λ smaller than F_λ(w_MP) for λ = 1.4. Similarly, using the CSDM we have not found any solution with energy E smaller than E(w_MP) under the constraint S = 14.93529. We briefly discuss the physical relevance of this solution in section 6.

4.2. Solutions under symmetry restrictions. The solution w_MP of Figure 3 satisfies the symmetry property (1.9). In the computations described below we enforced this symmetry and worked on the quarter domain Ω¼, thus reducing the complexity of the problem. In order to improve the variational methods by combining them with Newton's method, we also discretized the mixed derivative ∂xy using left-sided finite differences. The influence of this choice on the numerical solution is described in section 5.1.

4.2.1. CSDM. We first fixed S = 40 and used the CSDM to obtain constrained local minimizers of E as shown in Table 2. They are ordered according to the increasing value of stored energy E. Their graphs and renderings on a cylinder are shown in Figure 4. Solution 1.1 is similar to the single dimple solution w_MP described above, and according to Table 2 it has, indeed, the smallest value of E.

4.2.2. MPA. We then used the MPA for fixed λ = 1.4 and various choices of w₂ to obtain the local mountain-pass points of F_λ as shown in Table 3. They are ordered according to the increasing value of the total potential F_λ. The shape of their graph is very similar to that of the CSDM solutions discussed above and depicted in
Table 2. Numerical solutions obtained by the CSDM on Ω¼ with ∂xy discretized using left-sided finite differences. Graphs are shown in Figure 4.

CSDM   λ          S    E          Fλ         Same shape as MPA
1.1    1.108121   40   56.85636   12.53151   2.1
1.2    1.299143   40   62.76150   10.79577   2.2
1.3    1.316146   40   63.21646   10.57063   2.3
1.4    1.311687   40   63.64083   11.17334   2.4
1.5    1.309586   40   63.70623   11.32278   2.5
1.6    1.328997   40   64.00875   10.84889   2.6
1.7    1.344898   40   64.52244   10.72651   2.7
Table 3. Numerical solutions obtained by the MPA on Ω¼ with ∂xy discretized using left-sided finite differences.

MPA    λ     S          E          Fλ         Same shape as CSDM
2.1    1.4   17.73822   29.42997   4.596460   1.1
2.2    1.4   29.85121   49.08882   7.297132   1.2
2.3    1.4   31.28849   51.39952   7.595635   1.3
2.4    1.4   31.41723   52.01893   8.034809   1.4
2.5    1.4   31.22992   51.84074   8.118852   1.5
2.6    1.4   32.77491   54.15818   8.273314   1.6
2.7    1.4   34.19888   56.56472   8.686284   1.7
Table 4. Numerical solutions obtained by the CMPA/Newton on Ω¼ with ∂xy discretized using left-sided finite differences. Graphs are shown in Figures 5 and 6.

CMPA   λ          S    E          Fλ         Endpoints
3.1    1.310815   40   63.98996   11.55737   1.2, 1.4
3.2    1.332112   40   64.38609   11.10161   1.3, 1.6
3.3    1.447626   40   66.49032   8.585294   1.2, 1.3 or 1.3, 1.4
3.4    1.440841   40   66.72079   9.087129   1.1, 1.4
3.5    1.447594   40   66.97057   9.066810   1.1, 1.6
3.6    1.484790   40   68.20637   8.814758   1.3, 1.5
3.7    1.477769   40   68.23274   9.121955   1.1, 1.5∗
3.8    1.482261   40   68.41086   9.120428   1.1, 1.7
3.9    1.413917   40   68.56697   12.01028   1.1, 1.2
3.10   1.520975   40   68.83818   7.999162   1.2, 1.5
3.11   1.475705   40   69.00087   9.972652   1.1, 1.5∗
3.12   1.532000   40   69.27379   7.993781   1.4, 1.5†
3.13   1.527108   40   69.35834   8.274019   1.2, 1.6
3.14   1.551762   40   69.47838   7.407904   1.4, 1.5†
3.15   1.547955   40   69.68292   7.764712   1.6, 1.7
3.16   1.539785   40   69.78487   8.193480   1.5, 1.7
3.17   1.546480   40   69.85900   7.999795   1.3, 1.7
3.18   1.549780   40   70.16253   8.171339   1.2, 1.7
3.19   1.561117   40   70.74117   8.296474   1.4, 1.7
Figure 4, so we do not show their graphs here. Solution 2.1 is again the single dimple solution, and the table shows that it has the smallest value of Fλ . 4.2.3. CMPA. We then fixed S = 40 and applied the CMPA to obtain constrained local mountain passes of E as shown in Table 4. They are again ordered according to the increasing value of stored energy E. Their graphs and renderings on a cylinder are shown in Figures 5 and 6. For endpoints w1 , w2 of the path in the CMPA we used the constrained local minimizers 1.1–1.7.
Fig. 4. Numerical solutions found using the CSDM with axial end shortening S = 40 (panels 1.1–1.7). More details are in Table 2.
Fig. 5. Numerical solutions found using the CMPA/Newton with S = 40 (panels 3.1–3.10). More details are in Table 4.
Fig. 6. Numerical solutions found using the CMPA/Newton with S = 40 (panels 3.11–3.19). More details are in Table 4.
There are 21 possible pairs (w₁, w₂) to be used but only 19 solutions in Table 4. The algorithm did not converge for the following three pairs: (1.1, 1.3), (1.5, 1.6), and (1.4, 1.6). This is most likely due to the complicated nature of the energy landscape between these endpoints. On the other hand, two choices of pairs, denoted by ∗ and † in the table, yielded two solutions each. When the path is deformed it sometimes comes close to another critical point of E which is not a constrained mountain pass. In that case the algorithm slows down and one can apply Newton's method to such a point. It is a matter of luck whether Newton's method converges. The CMPA then runs further and might converge to another point, this time a constrained mountain-pass point. And finally, two choices of (w₁, w₂) yielded the same solution 3.3.

5. Remarks on the numerics.

5.1. Bias in the discretization of ∂xy. In this section we examine the influence of the discretization of the mixed derivative ∂xy on the numerical solution. We recall that the mixed derivative ∂xy can be discretized using left-/right-sided finite differences (2.3), (2.4) or using the fast Fourier transform (section 2.5). For comparison we use the single-dimple solution on Ω = (−100, 100)² at load λ = 1.4 obtained by the MPA. Let Δx = Δy = 0.5. Table 5 gives a list of numerical experiments together with the values of shortening and energy. Figure 7 shows a profile of the numerical solutions in the circumferential direction at x = 0.

Table 5. Single-dimple numerical solution obtained by the MPA with and without the symmetry assumption (1.9) and with various kinds of discretization of ∂xy.

Domain     Discretization of ∂xy   λ     S          E          Fλ
Ω or Ω¼    Fourier                 1.4   14.93529   24.71825   3.808850
Ω          left-/right-sided       1.4   14.93617   24.70828   3.797636
Ω¼         left-sided              1.4   17.73822   29.42997   4.596460
Ω¼         right-sided             1.4   12.81205   21.16342   3.226549

Fig. 7. Profile of the single-dimple numerical solution w_MP at x = 0 obtained by the MPA with and without the symmetry assumption (1.9) and with various kinds of discretization of ∂xy (curves: Fourier ∂xy; Ω, left-sided ∂xy; Ω¼, left-sided ∂xy; Ω¼, right-sided ∂xy).
Fig. 8. Influence of the size of the space step Δx, Δy on the numerical solution w_MP obtained by the MPA for three different kinds of discretization of ∂xy. Let w_L, w_R denote the numerical solutions obtained using the left- and right-sided discretization of ∂xy, respectively, and w_CS the one using the fast cosine/sine transform. Left: comparison of the solutions in the maximum norm; right: the value of shortening S.
On the full domain Ω with no assumption on symmetry of solutions, the discretization of ∂xy using the left-/right-sided finite differences (2.3) provides a numerical solution that is slightly asymmetric (Figure 7, thin solid line). The Fourier transform provides a symmetric solution (Figure 7, thick solid line). The same numerical solution can be obtained on Ω¼ under the symmetry assumption (1.9) with ∂xy discretized using the fast cosine/sine transform. On Ω¼ the symmetry of numerical solutions is guaranteed by assumption (1.9). The use of a left-/right-sided discretization of ∂xy does, however, have an influence on the shape of the numerical solution, as Figure 7 shows (the dotted and the dashed line).

5.2. Convergence. We now turn to the influence of the size of the space step Δx, Δy on the numerical solution. We run the MPA on Ω¼ under the symmetry assumption (1.9) with ∂xy discretized using (a) the fast cosine/sine transform, (b) left-sided finite differences, (c) right-sided finite differences. We consider Δx = Δy = 0.5, 0.4, 0.3, 0.2, 0.1, i.e., we take 200, 250, 333, 500, and 1000 points in both axis directions, respectively. Figure 8 illustrates convergence as Δx, Δy → 0 of the numerical solutions obtained by various types of discretization of ∂xy.

5.3. Dependence on the size of the domain. As observed in [7], the localized nature of the solutions suggests that they should be independent of domain size, in the sense that for a sequence of domains of increasing size the solutions converge (for instance, uniformly on compact subsets). Such a convergence would also imply convergence of the associated energy levels. Similarly, we would expect that the aspect ratio of the domain is of little importance in the limit of large domains. We tested these hypotheses by computing the single-dimple solution on domains of different sizes and aspect ratios. In all the computations the space step Δx = Δy = 0.5 is fixed. In order to use the continuation method of section 3.3, we discretized ∂xy using the left-sided finite differences. We also assumed symmetry of solutions given by (1.9) and worked on Ω¼. We recall the notation of computational domains, Ω = (−a, a) × (−b, b), Ω¼ = (−a, 0) × (−b, 0).
Fig. 9. The single-dimple mountain-pass solution with λ = 1.4 computed under the assumption of symmetry (1.9) with left-sided discretization of ∂xy for various domain sizes (panels: (a, b) with a, b ∈ {50, 100, 200}).
Figure 9 shows the results for load λ = 1.4. First we notice that the central dimple has almost the same shape in all the cases shown. But there seems to be a difference in the slope of the “flat” part leading to this dimple. On domains with small a (short cylinder) the derivative in the circumferential y-direction in this part is larger than on domains with larger a (longer cylinder). The circumferential length b seems to be less important for the shape of the solution: for example, the cases (200, 50) and (200, 100) look like restrictions of the case (200, 200) to smaller domains. We take a closer look at domains of sizes (a, b) = (100, 100), (100, 200), (200, 100), and (200, 200) and the corresponding solutions w100,100 , w100,200 , w200,100 , and w200,200 shown in the figure. We compare the first three with the last one, respectively. It does not make sense to compare the values of w itself since the energy functional Fλ depends on derivatives of w only. We choose to compare wxx and wyy . Table 6 gives the infinity norm of the relative differences. Figure 10 shows graphs of the difference w100,100 − w200,200 and of the second derivatives (w200,200 − w100,100 )xx , (w200,200 − w100,100 )yy on the subdomain (−100, 0)2 . We conclude that solutions on different domains compare well; the maximal difference in the second derivatives of w is three orders of magnitude smaller than the supremum norm of the same derivative. We also observe that varying the length
Table 6. Comparison of the second derivatives of solutions from Figure 9 computed on domains with (a, b) = (100, 100), (100, 200), (200, 100), and (200, 200).

w              ‖(w − w_{200,200})_xx‖∞ / ‖(w_{200,200})_xx‖∞   ‖(w − w_{200,200})_yy‖∞ / ‖(w_{200,200})_yy‖∞
w_{100,100}    2.835 · 10⁻³                                     5.313 · 10⁻³
w_{100,200}    1.943 · 10⁻³                                     4.917 · 10⁻³
w_{200,100}    1.827 · 10⁻⁴                                     9.638 · 10⁻⁴

Fig. 10. Comparison of solutions w_{100,100}, w_{200,200} from Figure 9 obtained on square domains with a = b = 100 and a = b = 200, respectively, and their second derivatives (panels: w_{100,100} − w_{200,200}; (w_{200,200} − w_{100,100})_xx; (w_{200,200} − w_{100,100})_yy). For a reference, we note that ‖(w_{200,200})_xx‖∞ = 1.064522 and ‖(w_{200,200})_yy‖∞ = 0.8242491.
Fig. 11. Continuation of the single-dimple solution found as a numerical mountain pass for λ = 1.4 on domains of various sizes for a range of values of λ. Left: F_λ(w) as a function of λ; right: ‖w‖²_X as a function of λ (curves for a ∈ {50, 100, 200} and b ∈ {50, 100, 200}).
parameter a while keeping the circumference parameter b fixed causes larger changes in the numerical solution than varying the cylinder circumference while keeping the length fixed. Another way of studying the influence of the domain size on the numerical solution is comparing solution branches obtained by continuation as described in section 3.3. We start with the mountain-pass solution for λ = 1.4 shown in Figure 9 and continue it for both λ > 1.4 and λ < 1.4. The results are presented in Figure 11. We observe that the branches corresponding to the considered domains do not differ much for the range of λ between approximately 0.71 and 2. Below λ ≈ 0.71 the size of the domain, particularly the length of the cylinder described by a, has a strong influence. The graph on the right shows that the larger (longer) the domain Ω, the smaller the value
Fig. 12. A detailed look at the continuation of the single-dimple solution on the domain with a = b = 100 (curve of ‖w‖²_X against λ with mountain-pass points and a local minimizer of F_λ marked; panels: (a) λ ≈ 0.593; (b) λ = 0.61 (mountain pass); (c) λ = 1.4 (mountain pass); (d) λ = 1.95 (mountain pass)).
of λ at which the norm ‖w‖_X starts to rapidly increase for decreasing λ. The graph on the left shows the energy F_λ(w) along a solution branch. The data shown here correspond to the ones in the graph on the right marked by a solid line. The dashed line in the right graph also shows some data after the first limit point is passed. Figure 12 shows how the graph of w(x, y) changes as a solution branch is followed. Here we chose a square domain with a = b = 100 and plotted the solution for four values of λ (note that Figures 12 (c), 9 (100, 100), and 7 (dotted line) show the same numerical solution). We observe that with decreasing λ the height of the central dimple increases, the dimple becomes wider, and the ripples (present at λ close to 2) disappear. In Figure 12 (a) we observe that new dimples are being formed next to the central dimple. It should be noted that although we started the continuation at λ = 1.4 at a mountain-pass point, not all the points along a continuation branch are mountain passes. Since it is not feasible to use the MPA to verify this for each point, we chose just a few. Still on the example of the domain with a = b = 100 in Figure 12, the circles on the continuation branch mark those points that have also been found by the MPA (for λ = 0.61, 0.65, 0.8, 1.0, 1.2, 1.4, 1.6, 1.8, 1.95). As described in section 1.2.2, in order to start the MPA a point w₂ is needed such that F_λ(w₂) < 0. The analysis in [7] shows that for a given λ such a point exists provided the domain Ω is large enough, and in practice it is found by the SDM of section 1.2.1. This was indeed the case for all the chosen values of λ except for λ = 0.61. In this case, starting from some w₀ with a large norm, the SDM provides a trajectory w(t) such that F_λ(w(t)) > 0 for all t > 0. In fact, the steepest descent method converges to a local minimizer w_M with F_λ(w_M) ≈ 76.1. This is hence no mountain pass but, nevertheless, lies on the same continuation branch and is marked by a triangle in the figure. Despite F_λ(w_M) > 0 we can still try to run the MPA with w₂ = w_M. It converges and yields w_MP with F_λ(w_MP) ≈ 94.8 (marked by a circle at λ = 0.61 and shown in graph (b)). The comparison of solutions computed on different domains and their respective energies suggests that for each λ we are indeed dealing with a single, localized function
defined on R², of which our computed solutions are finite-domain adaptations. Based on this suggestion and the above discussion of the mountain-pass solutions we could, for example, conclude that the mountain-pass energy

V(λ, Ω) := inf_{w₂} { F_λ( w_MP(λ, Ω, w₂) ) : F_λ(w₂) < 0 }
is a finite-domain approximation of a function V(λ), whose graph almost coincides with that of V(λ, Ω) for λ not too small (cf. Figure 11 (left)).

6. Discussion.

6.1. Variational numerical methods. We have seen that, given a complex energy surface, many solutions may be found using these variational techniques. For example, for a fixed end shortening of S = 40, Figures 4, 5, and 6 all show solutions. Which of these solutions is of greatest relevance depends on the question that is being asked.

In the context of the cylinder (and similar structural applications) the mountain-pass solution from the unbuckled state (w₁ = 0) with minimal energy is of physical interest. Often the experimental buckling load may be at 20–30% of the linear prediction from a bifurcation analysis (in our scaling this corresponds to λ = 2). This uncertainty in the buckling load is a drawback for design, and so "knockdown" factors have been introduced, based on experimental data. It was argued in [7] that the energy of the mountain-pass solution w_MP in fact provides a lower bound on the energy required to buckle the cylinder, and so these solutions provide bounds on the (observed) buckling load of the cylinder.

This example illustrates an important aspect of the (constrained) mountain-pass algorithm: its explicit nonlocality. The algorithm produces a saddle point which has an additional property: it is the separating point (and level) between the basins of attraction of the endpoints w₁ and w₂.

Another technique to investigate a complex energy surface is to perform a simulated annealing computation, essentially solving the SDM (or the CSDM) problem with additive stochastic forcing. The aim of these techniques is often to find a global minimizer (if it exists) where there are a large number of local minimizers. Here, by either the MPA or the CMPA, we find the solution between two such minima and so obtain an estimate of the surplus energy needed to change between local minima.

6.2. Numerical issues. The numerical issues that we encountered are of two types. First there are the requirements related to the specific problem of the Von Kármán–Donnell equations, such as the discretization of the mixed derivative and the bracket, and the fact that the solutions are symmetric and highly localized. For other difficulties the cause is less clear. For smaller values of λ each of the variational methods converged remarkably slowly. Newton's method provides a way of improving the convergence, but the question of whether this slow convergence is typical for a whole class of variational problems is relevant. It would be interesting to connect the rate of convergence of, for instance, the SDM to certain easily measurable features of the energy landscape.

Appendix. This appendix shows the transformation between the Von Kármán–Donnell equations in the physical coordinates and their scaled form (1.1)–(1.2). For the full derivation of the Von Kármán–Donnell equations we refer to [7].
In the physical coordinates the Von Kármán–Donnell equations, the stored energy, and the shortening assume the following form:

(A.1) ( t² / (12(1 − ν²)) ) Δ̃²w̃ + ( P / (2πREt) ) w̃_x̃x̃ − (1/R) φ̃_x̃x̃ − 2[w̃, φ̃]~ = 0,
(A.2) Δ̃²φ̃ + (1/R) w̃_x̃x̃ + [w̃, w̃]~ = 0,
(A.3) Ẽ(w̃) = ( t³E / (24(1 − ν²)) ) ∫_Ω̃ (Δ̃w̃)² + (tE/2) ∫_Ω̃ (Δ̃φ̃)², S̃(w̃) = ( 1/(4πR) ) ∫_Ω̃ (w̃_x̃)²,

where (x̃, ỹ) ∈ Ω̃ := (0, L) × (0, 2πR), x̃ is the axial coordinate, ỹ the circumferential coordinate, L the length of the cylinder, R its radius, t the thickness of the shell, E Young's modulus, and ν Poisson's ratio. The cylinder is subject to a compressive axial force P. The function w̃ is an inward radial displacement measured from the unbuckled state, and φ̃ is the Airy stress function. The bracket [·, ·]~ is defined as in (1.3), only the derivatives are taken with respect to the physical coordinates x̃, ỹ. We note that Ẽ does not include the energy of the prebuckling deformation.

In order to eliminate most of the physical parameters, we rescale the equations using the following transformation:

x̃ = (2πR√k)(x + a), a = L/(4πR√k), x ∈ (−a, a),
ỹ = (2πR√k)(y + b), b = 1/(2√k), y ∈ (−b, b),

where k = t / ( 8π²R √(3(1 − ν²)) ). We introduce new unknown functions,

w̃(x̃, ỹ) = (4π²Rk) w(x, y), φ̃(x̃, ỹ) = (4π²Rk)² φ(x, y),

and a load parameter λ = P √(3(1 − ν²)) / (πEt²). We then obtain (1.1)–(1.2), which depend on three parameters only: λ, a, and b. The functionals in physical coordinates defined in (A.3) and their scaled counterparts defined in (1.6) are related through

Ẽ(w̃) = tER²(4π²k)³ E(w), S̃(w̃) = 8π³Rk² S(w).

Although the scaled energy and Young's modulus in the above equations are both denoted by E, Young's modulus does not appear in the scaled version of the problem. This avoids possible confusion. The numerical results in this paper are presented mostly as graphs of the function w(x, y) over the domain Ω = (−a, a) × (−b, b), and as a rendering of w̃ on a cylinder. For this rendering we used a sample cylinder of thickness t = 0.247 mm and Poisson's ratio ν = 0.3. Hence, for example, a computational domain Ω = (−100, 100)² corresponds to a physical cylinder of length 47.58 cm and radius 7.57 cm.
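A quick numerical check of these scaling formulas, using the sample-cylinder parameters quoted above (t = 0.247 mm, ν = 0.3, a = b = 100); the script is our sketch of the arithmetic.

```python
import numpy as np

t, nu = 0.247e-3, 0.3                     # shell thickness [m], Poisson ratio
a = b = 100.0                             # scaled half-lengths of the domain
k = (1.0 / (2.0 * b)) ** 2                # from b = 1/(2 sqrt(k))
R = t / (8.0 * np.pi**2 * k * np.sqrt(3.0 * (1.0 - nu**2)))
L = 4.0 * np.pi * R * np.sqrt(k) * a      # from a = L/(4 pi R sqrt(k))
print(f"R = {R:.4f} m, L = {L:.4f} m")    # ~0.0757 m and ~0.4758 m, matching
                                          # the 7.57 cm and 47.58 cm quoted
```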
REFERENCES

[1] C. Canuto, M. Y. Hussaini, A. Quarteroni, and T. A. Zang, Spectral Methods in Fluid Dynamics, Springer, New York, 1987.
[2] Y. S. Choi and P. J. McKenna, A mountain pass method for the numerical solution of semilinear elliptic problems, Nonlinear Anal., 20 (1993), pp. 417–437.
[3] G. Friesecke and J. B. McLeod, Dynamic stability of non-minimizing phase mixtures, Proc. Royal Soc. London A, 453 (1997), pp. 2427–2436.
[4] J. Horák, Constrained mountain pass algorithm for the numerical solution of semilinear elliptic problems, Numer. Math., 98 (2004), pp. 251–276.
[5] J. Horák and P. J. McKenna, Traveling waves in nonlinearly supported beams and plates, in Nonlinear Equations: Methods, Models, and Applications (Bergamo, 2001), Progr. Nonlinear Differential Equations Appl. 54, Birkhäuser, Basel, 2003, pp. 197–215.
[6] J. Horák and W. Reichel, Analytical and numerical results for the Fučík spectrum of the Laplacian, J. Comput. Appl. Math., 161 (2003), pp. 313–338.
[7] J. Horák, G. J. Lord, and M. A. Peletier, Cylinder buckling: The mountain pass as an organizing center, SIAM J. Appl. Math., 66 (2006), pp. 1793–1824.
[8] H. B. Keller, Numerical solution of bifurcation and nonlinear eigenvalue problems, in Applications of Bifurcation Theory (Proceedings of the Advanced Seminar, University of Wisconsin, Madison, WI, 1976), Publ. Math. Res. Center 38, Academic Press, New York, 1977, pp. 359–384.
[9] V. Maz'ya, S. Nazarov, and B. Plamenevskij, Asymptotic Theory of Elliptic Boundary Value Problems in Singularly Perturbed Domains. Vol. I, Oper. Theory Adv. Appl. 111, Birkhäuser Verlag, Basel, 2000. Translated from the German by Georg Heinig and Christian Posthoff.
[10] R. D. Richtmyer and K. W. Morton, Difference Methods for Initial-Value Problems, 2nd ed., Krieger, Malabar, FL, 1994.
[11] G. Strang, Introduction to Applied Mathematics, Wellesley-Cambridge Press, Wellesley, MA, 1986.
[12] P. J. Swart and P. J. Holmes, Energy minimization and the formation of microstructure in dynamic anti-plane shear, Arch. Rational Mech. Anal., 121 (1992), pp. 37–85.
SIAM J. SCI. COMPUT. Vol. 30, No. 3, pp. 1387–1412
© 2008 Society for Industrial and Applied Mathematics
STOPPING CRITERIA FOR RATIONAL MATRIX FUNCTIONS OF HERMITIAN AND SYMMETRIC MATRICES∗ ANDREAS FROMMER† AND VALERIA SIMONCINI‡ Abstract. Building upon earlier work by Golub, Meurant, Strakoš, and Tichý, we derive new a posteriori error bounds for Krylov subspace approximations to f(A)b, the action of a function f of a real symmetric or complex Hermitian matrix A on a vector b. To this purpose we assume that a rational function in partial fraction expansion form is used to approximate f, and the Krylov subspace approximations are obtained as linear combinations of Galerkin approximations to the individual terms in the partial fraction expansion. Our error estimates come at very low computational cost. In certain important special situations they can be shown to actually be lower bounds of the error. Our numerical results include experiments with the matrix exponential, as used in exponential integrators, and with the matrix sign function, as used in lattice quantum chromodynamics simulations, and demonstrate the accuracy of the estimates. The use of our error estimates within acceleration procedures is also discussed. Key words. matrix functions, partial fraction expansions, error estimates, error bounds, Lanczos method, Arnoldi method, CG iteration AMS subject classifications. 65F30, 65F10, 65F50 DOI. 10.1137/070684598
1. Introduction. Matrix functions arise in a large number of application problems, and over recent years efforts to enhance their effective numerical computation have significantly encouraged their use in discretization and approximation methods. These include exponential integrators, which require the computation of the matrix exponential exp(A) or of ϕ(A) with ϕ(t) = (exp(t) − 1)/t, and have recently emerged for numerically solving stiff or oscillatory systems of ordinary differential equations; see, e.g., [22, 15]. Another example arises when simulating chiral fermions in lattice quantum chromodynamics (QCD). Here, one has to solve linear systems of the form (P + sign(A))x = b, where P is a permutation matrix and A is the Wilson fermion matrix; see [3]. In general, given a square matrix A, the matrix function f(A) can be defined for a sufficiently smooth function f by means of an appropriate spectral decomposition of A; see, e.g., [23]. In both the examples cited above, as well as in many other applications, the matrix A can be very large. Then it is practically impossible to explicitly compute f(A) by means of a spectral decomposition of A, since f(A) will in general be full even if A is sparse. Fortunately, in these applications the knowledge of the action of the matrix function on a vector is usually all that is required, and not the matrix function itself. For example, when solving the system (P + sign(A))x = b with an iterative solver, each step will usually require the computation of (P + sign(A))p for a certain vector p which changes at each iteration.

∗Received by the editors March 7, 2007; accepted for publication (in revised form) September 10, 2007; published electronically March 28, 2008. http://www.siam.org/journals/sisc/30-3/68459.html
†Fachbereich Mathematik und Naturwissenschaften, Universität Wuppertal, Gauß-Straße 20, D-42097 Wuppertal, Germany ([email protected]). This author was partly supported by DFG grant Fr755 17-2.
‡Dipartimento di Matematica, Università di Bologna, Piazza di Porta S. Donato, 5, I-40127 Bologna, Italy, and CIRSA, Ravenna, Italy ([email protected]).
In the case of some functions such as the exponential, the sign, the square-root, and trigonometric functions, a particularly attractive approach for large matrices exploits the powerful rational function approximation

f(t) ≈ g(t) = p_{s₁}(t) / p_{s₂}(t),

where p_i(t) are polynomials of degree i; see, e.g., [36, 2]. The built-in MATLAB [26] function for the matrix exponential, for example, uses a Padé rational approximation. Rational functions may be conveniently employed in a matrix context by using a partial fraction expansion, namely (assuming that multiple poles do not occur)

(1.1) g(t) = p_{s₁}(t) / p_{s₂}(t) = p_{s₁−s₂}(t) + Σ_{i=1}^{s} ωᵢ / (t − σᵢ).

Since the computation of p_{s₁−s₂}(A)b is trivial—although numerical accuracy may become an issue if A is large—we assume p_{s₁−s₂} = 0 and concentrate on the sum representing the fractional part. When applied to a matrix A, this gives
z = g(A)b =
s
−1
ωi (A − σi I)
b=
i=1
s
ωi xi .
i=1
For large dimensional problems, the solutions xi of the shifted linear systems (1.3)
(A − σi I)xi = b,
i = 1, . . . , s,
are i , so as to obtain z = 's approximated by more cheaply computable vectors x ω x . This can be done by using a single approximation space for all shifted i=1 i i systems. More precisely, if the columns of the matrix V span such an approximation space, then each approximate system solution may be written as x i = V yi for some yi , i = 1, . . . , s, so that (1.4)
z =
s i=1
ωi x i = V
s
ω i yi .
i=1
This strategy is particularly welcomed in view of the fact that constructing the approximation space is commonly the most expensive step in the whole approximation process. It is important to realize that such a procedure corresponds to employing a projection technique for g(A)b; we refer to [9] for a discussion of this connection and for further references. The aim of this paper is to obtain cheap as well as sufficiently accurate estimates of the Euclidean norm of the error (1.5)
z − g(A)b ,
which can then serve as a stopping criterion for an iterative process computing a sequence of approximations z. To this end, we build upon a large amount of work available in the literature, on estimating the energy-norm or Euclidean norm of the error when the conjugate gradient method (CG) is used to solve M x = b with M Hermitian and positive definite (hereafter Hpd); see, e.g., [4, 12, 13, 28, 30, 43, 44]. We recall at this point that ad hoc (mostly a priori) error estimates for projectiontype approximations of the matrix exponential have been recently developed [7, 42,
STOPPING CRITERIA FOR RATIONAL MATRIX FUNCTIONS
1389
21]. On the other hand, a posteriori bounds have also been derived for the matrix sign function [45]. The estimates proposed in this paper directly aim at approximating the error norm (1.5) of the rational approximation and are therefore of a general nature. The overall approximation error is then a combination of the error of the rational function as an approximation to f , a quantity which is usually known a priori, and the error (1.5). Our estimates are particularly useful in two situations: when the matrix A is Hpd and all σi are negative or when A is real symmetric with the σi being complex. We show experimentally that the new estimates can be very tight, and that they are lower bounds when A − σi I is Hpd for all i. The stopping criterion associated with the bound can be cheaply included in Krylov subspace projection-type methods for solving the combined shifted linear systems, and we discuss the corresponding implementation within the Lanczos and the CG framework. The paper is organized as follows. In section 2 we review some important facts for Krylov subspace approximations to the solution of linear systems, and we establish the connection between the projection approach and Galerkin approximations. Section 3 discusses in more detail the CG method as an iterative method producing Galerkin approximations. After briefly explaining the known fact that CG can be implemented very efficiently in the context of multiple shifted linear systems, we highlight the role of the computed CG coefficients as ingredients of error estimates for the individual systems. In section 4 we briefly discuss alternative estimates and stopping criteria. Section 5 is the core section where we develop our new error estimates and prove some results on their accuracy. In section 6 we discuss details of an efficient implementation. Numerical experiments are contained in section 7, and, finally, section 8 shows how our estimates can be extended to shift-and-invert acceleration procedures that were recently proposed in [46] and [31]. Exact arithmetic is assumed throughout. This is a crucial issue, since rounding errors may substantially affect computed quantities in our context. To ensure the practical usefulness of the error estimates and the related stopping criteria to be developed in this work one thus needs an additional error analysis with respect to floating point arithmetic. Such an error analysis is beyond the scope of the work presented here. Let us just mention that in the case of the CG iteration such an error analysis has been presented in [43]. The fact that the stopping criteria to be developed here build on those for CG which have been identified as stable in [43] may thus be taken as a first hint on their numerical reliability. We close this introduction with a word on notation. As a rule, we will use superscripts to denote the iteration number, and subscripts to denote the associated system, (k) so that ri is the residual after k iterations of the ith system. For certain matrices that are independent of the systems considered, we will prefer the simpler notation Vk (say) rather than V (k) . We use the Euclidean 2-norm for vectors. Moreover, for a column vector x ∈ Cn , we use xT = [(x)1 , . . . , (x)n ] and xH = [(x)1 , . . . , (x)n ], where (v)i denotes the ith component of the vector v, and (x)i the conjugate of the ith component of x. 2. Basic facts on Krylov subspace approximation. 
Given a linear system M x = b and an approximation space K, an approximate solution x ∈ K to x may be obtained by imposing some orthogonality condition on the corresponding residual r = b − M x . For M Hermitian or complex symmetric, a common strategy is to impose that the residual be orthogonal to the approximation space, and this is called the Galerkin condition. Such an orthogonality condition may be given, for instance,
1390
ANDREAS FROMMER AND VALERIA SIMONCINI
in terms of the standard Euclidean inner product, that is, (2.1)
v H r :=
n
(v)i ( r)i = 0,
v ∈ K.
i=1
A particularly convenient approximation space is given by the Krylov subspace Km (M, b) = span{b, M b, M 2 b, . . . , M m−1 b}. Starting with v (0) , a normalized version of b, a basis of this space can be constructed iteratively, one vector at a time, as {v (0) , v (1) , . . . , v (m−1) }, so that for each m, Km ⊆ Km+1 . The following classical matrix relation, the Arnoldi recurrence, provides the overall procedure for generating a nested orthonormal basis for Km (A, b) as the columns of the matrices Vm = [v (0) , v (1) , . . . , v (m−1) ]: (2.2)
M Vm = Vm Tm + v (m) tm+1,m eTm ,
Vm = [v (0) , v (1) , . . . , v (m−1) ].
Here, Tm is an m × m upper Hessenberg matrix which reduces to a tridiagonal matrix if M is self-adjoint with respect to the inner product in use. The vector em is the mth vector of the canonical basis, and tm+1,m is a normalization factor [40]. Note that while the entries in Vm and Tm depend on the underlying inner product, the general recursive form (2.2) remains unchanged. In the case of the standard Euclidean inner product, Vm will satisfy VmH Vm = Im , the identity matrix of dimension m, and if M ∈ Cn×n is Hermitian, Tm is tridiagonal and the Arnoldi recurrence (2.2) simplifies accordingly (see below). The choice of an appropriate inner product depends on the matrix properties. In our context, the use of a bilinear form different from the Euclidean inner product turns out to be very convenient in the case where M ∈ Cn×n is complex symmetric T T (M 'n = M ). In this case, M is self-adjoint with respect to the bilinear form x y = i=1 (x)i (y)i . To make the notation more uniform, we use ·, ·∗ to denote either of
x, yH := xH y =
n i=1
(x)i (y)i ,
x, yT := xT y =
n
(x)i (y)i .
i=1
Both forms will be called “inner products,” the first one being definite, the second one indefinite. Hence, we assume that the basis generated with the Arnoldi process (2.2) is orthogonal with respect to the employed inner product ·, ·∗ when no additional specification is needed. With a slight abuse of terminology, we will use the following definition from now on. Definition 2.1. A shifted complex symmetric matrix is a matrix of the form M = A − σI, with A real symmetric and σ ∈ C. Note that shifted complex symmetric matrices are a subclass of the complex symmetric matrices. When M is Hermitian (resp., complex symmetric) and the basis vectors are orthogonal with respect to ·, ·H (resp., ·, ·T ), the upper Hessenberg matrix Tm = VmH M Vm is Hermitian (resp., Tm = VmT M Vm is complex symmetric) and thus tridiagonal. This allows one to derive the next basis vector v (m) in (2.2) by using only the previous two basis vectors. The resulting short-term recurrence is the well-known Lanczos procedure for generating the Krylov subspace associated with M and b. We will henceforth always employ ∗ = H when M is Hpd, and ∗ = T when M is shifted complex symmetric.
1391
STOPPING CRITERIA FOR RATIONAL MATRIX FUNCTIONS
We next recall some key facts about Galerkin approximations in the Krylov subspace when the coefficient matrix has a shifted form; see, e.g., [34, 35]. Proposition 2.2. Let the system M x = b be given, with M = A − σI for some σ ∈ C. Then we have the following: 1. Km (A − σI, b) = Km (A, b) (invariance with respect to shift). 2. Let x (σ) ∈ Km (A, b) be the Galerkin approximation to M x = b with the given inner product, and r(σ) = b − M x (σ). For any σ ∈ C we have r(σ) = (−1)m ρ(σ)v (m) , where v (m) is the (m + 1)st Krylov subspace basis vector from (2.2) and |ρ(σ)| = r(σ) . The first result shows that when solving systems that differ only for the shifting parameter σ, approximations can be carried out in a single approximation space. The second result says that the residuals associated with the shifted systems are all collinear to the next basis vector. In terms of the Arnoldi relation, shift invariance results in (A − σI)Vm = Vm (Tm − σI) + v (m) tm+1,m eTm ,
Vm∗ Vm = I,
where Vm and Tm are the same for all σ’s. Denoting by x (σ) = Vm y(σ) ∈ Km (A, b) the Galerkin approximation for x(σ), the solution of (A − σI)x = b, we have 0 = Vm∗ b − Vm∗ (A − σI)Vm y(σ) = e1 β − (Tm − σI)y(σ),
1/2
β = b, b∗ .
Assuming (Tm − σI) nonsingular, it follows that y(σ) = (Tm − σI)−1 e1 β, so that (2.3)
x (σ) = Vm y(σ) = Vm (Tm − σI)−1 e1 β.
By substituting this quantity into the residual r(σ) = b − M x (σ), it also follows that r(σ) = v (m) eTm y(σ)tm+1,m , which is related to Proposition 2.2.2. In particular, this relation shows that neither x (σ) nor r(σ) need to be explicitly computed to get (2.4)
r(σ), r(σ)∗ = (eTm y(σ)tm+1,m )∗ (eTm y(σ)tm+1,m ).
Remark 2.3. If A in M = A − σI is real and b is also real, then Vm is real. Complex arithmetic for σ ∈ C arises only in the computation of the approximate solution y(σ). Moreover, all residuals are complex multiples of a real vector. In this case, we also have a particularly simple relation between the iterates belonging to pairs of conjugate shifts. If x (σ) = Vm y(σ) is the approximate solution to (A − σI)x = b in (2.3), then y(σ) = (Tm − σI)−1 e1 β = (Tm − σI)−1 e1 β = y(σ). This shows that x (σ) is identical to x (σ), so that x (σ) need not be computed explicitly. Clearly, one also has r(σ) = r(σ). 3. CG-type approximations and their error estimates. In the previous section we recalled that when Tm is tridiagonal the Arnoldi process reduces to a short (three)-term recurrence. In fact, if in addition, Tm can be factorized as a product of two bidiagonal matrices, then a coupled two-term recurrence can be obtained. In the case when M (and thus Tm ) is Hpd, then Tm = Lm LHm , with Lm lower bidiagonal,
1392
ANDREAS FROMMER AND VALERIA SIMONCINI
Choose x(0) , set r(0) = b − M x(0) , p(0) = r(0) for k = 1, 2, . . . do γ (k−1) = r(k−1) , r(k−1) ∗ / p(k−1) , M p(k−1) ∗ x(k) = x(k−1) + γ (k−1) p(k−1) r(k) = r(k−1) − γ (k−1) M p(k−1) δ (k) = r(k) , r(k) ∗ / r(k−1) , r(k−1) ∗ p(k) = r(k) + δ (k) p(k−1) end for Fig. 3.1. CG for the two inner products (∗ = H or ∗ = T).
which gives rise to the classical CG method. A possible implementation is reported in Figure 3.1 (take ∗ = H). In the case when M = A − σI is shifted complex symmetric and b is real, the Lanczos procedure in the inner product ·, ·T yields the shifted complex symmetric tridiagonal matrix Tm − σI. An implementation of the CG method in this inner product has been proposed in [27]. It is also subsumed in Figure 3.1, now with ∗ = T. We next focus on some specific properties of the CG algorithm that allow us to derive error estimates for the rational function approximation. In section 6 we show that these quantities are easily available in practical algorithms that realize the rational function approximation. Before we move on to discussing error estimates obtained from the CG coefficients, we need to provide sufficient conditions for the CG recurrence to exist. If M = A − σI with A Hpd and σ ≤ 0 real, then M as well as Tm − σI are also Hpd, and therefore the Lanczos approximate solution can be computed by solving (2.3). In addition, a Cholesky factorization of the form Tm − σI = Lm LHm exists, which yields the CG recurrence; see [10, 40]. When A is Hermitian and σ ∈ C is nonreal, the Lanczos recurrence still exists, Tm is Hermitian, and Tm − σI is nonsingular, since Tm has only real eigenvalues. However, the existence of a Cholesky-type factorization of Tm − σI that formally produces the CG recurrence (in the inner product ·, ·T ) is not obvious. For a general complex symmetric matrix such a factorization does not necessarily exist unless additional hypotheses on the matrix are assumed; see [19]. Fortunately, for the shifted tridiagonal matrices we are dealing with, the existence of the Cholesky-type factorization is guaranteed by the following result. It thus ensures that the CG procedure does not break down, so that the associated coefficients can be recovered. Proposition 3.1. Let T& be a Hermitian matrix, and let σ be a complex number with nonzero imaginary part. Let T = T& − σI. Then the symmetric root-free Cholesky decomposition T = LDLT with L lower triangular with unit diagonal, D diagonal, exists. Proof. We first show that the traditional LU -decomposition of T without pivoting exists. Therefore, L and U are lower and upper tridiagonal, respectively. By using [18, Theorem 9.2], this decomposition exists, and all diagonal entries of U are nonzero, if det(Ti ) is nonzero for i = 1, . . . , k − 1. Here, Ti denotes the i × i principal submatrix of T . Now note that T&i is Hermitian, since T& is. Therefore, if Ti were singular, σ would be an eigenvalue of T&i , which is impossible, since all eigenvalues of T&i are real. T with D the diagonal matrix containing the diagonal To conclude, write U = DL T T and T = T T = LDL elements of U . Then T = LDL . By the uniqueness of the LU -factorization we get L = L.
STOPPING CRITERIA FOR RATIONAL MATRIX FUNCTIONS
1393
Corollary 3.2. Let A ∈ Rn×n be symmetric, and let σ ∈ C, !(σ) = 0. Then there is no breakdown in the CG algorithm with ·, ·T when applied to M = A − σI with b real. Proof. This algorithm updates iterates using the complex Cholesky factorization of the tridiagonal matrix Tk = T&k − σI, where T&k comes from the classical Lanczos procedure. Since T&k = VkT AVk is real and symmetric, Proposition 3.1 guarantees the existence of the factorization. Our error estimates will make use of the following relations in the CG methods. For the standard Euclidean inner product, they date back to the original work of Hestenes and Stiefel [17] and have recently been highlighted in [43, formulas (1.4)– (1.7)]. For the bilinear form x, yT they can be derived in a completely analogous manner so that we do not reproduce a proof here. For the expressions associated with
e(k) , e(k) ∗ we refer to the corresponding relations in [43, formulas (11.1)–(11.3)]; see also [29] for related results. Lemma 3.3. Let M be either Hpd or shifted complex symmetric. Let e(k) = x − x(k) denote the error at iteration k, where x(k) is the iterate of the CG method for M x = b and r(k) = M e(k) its residual. For d ∈ N, using the CG coefficients γ (k) , define η (k,d) :=
d−1
γ (k+i) r(k+i) , r(k+i) ∗ ,
i=0
ϕ(k,d) :=
d ,
p(k+i) , p(k+i) ∗ + (k+i) (k+i) (k+i+1) (k+i+1) , e +
r , e
r . ∗ ∗
p(k+i) , M p(k+i) ∗ i=0
The following relations hold at any iteration k and for any d > 0:
r(k) , e(k) ∗ = r(k+d) , e(k+d) ∗ + η (k,d) ,
e(k) , e(k) ∗ = e(k+d) , e(k+d) ∗ + ϕ(k,d) . Note that r(k) , e(k) ∗ = M e(k) , e(k) ∗ . Therefore, in case M is Hpd (and ∗ = H), all inner products in the definition of η (k,d) and ϕ(k,d) are positive, from which it follows that η (k,d) and ϕ(k,d) are both positive and
r(k) , e(k) H ≥ η (k,d) ,
e(k) , e(k) H ≥ ϕ(k,d) .
In the case that M is shifted complex symmetric, we can only write the estimates (3.1)
r(k) , e(k) T ≈ η (k,d) ,
(3.2)
e(k) , e(k) T ≈ ϕ(k,d) ,
since these are in general complex quantities and ·, ·T is not definite. In either case, the estimates (3.1) and (3.2) are close to equalities whenever the quantities r(k+d) , e(k+d) ∗ , e(k+d) , e(k+d) ∗ at iteration k + d are small (but not necessarily very small) compared to those at iteration k; see [43]. Despite the lack of any minimization property in the bilinear form with ∗ = T, it can be shown that the convergence behavior of CG applied to the shifted complex symmetric matrix M is driven by the spectral properties of M in a way somehow similar to the Hpd case;
1394
ANDREAS FROMMER AND VALERIA SIMONCINI
see [25]. We thus expect at least linear convergence for sufficiently large k, and in particular, we expect that the error e(k) decreases with k. The term η (k,d) is very easily computable from the scalar quantities of the CG iteration. For ϕ(k,d) this is not so, since its computation involves the unavailable quantities r(k+i) , e(k+i) ∗ . Estimating those as in (3.1), we get an estimate τ (k,d) for ϕ(k,d) with τ (k,d) =
(3.3)
d
p(k+i) , p(k+i) ∗ (η (k+i,d) + η (k+i+1,d) ). (k+i) , M p(k+i)
p ∗ i=0
Using this instead of ϕ(k,d) in (3.2), we obtain the estimate
e(k) , e(k) ∗ ≈ τ (k,d) ,
(3.4)
which is now again very cheaply computable from the CG coefficients. Therefore, after k + d iterations it is possible to compute the estimates η (k,d) , τ (k,d) associated with the CG error at iteration k. For M Hpd and ∗ = “H”, we can summarize this discussion as follows; see [43]. Lemma 3.4. If M is Hpd, then for d ∈ N at any iteration k the following hold:
r(k) , e(k) H ≥ η (k,d) ,
e(k) , e(k) H ≥ ϕ(k,d) ,
e(k) , e(k) H ≥ e(k+d) , e(k+d) H + τ (k,d) ≥ τ (k,d) . When M is Hpd and has a shifted form, we now show that the norm of the residual of the Galerkin approximation gets smaller as the shift moves the matrix spectrum away from the origin. We will need this result later in section 5. Lemma 3.5. Assume that A is Hermitian and positive definite, and consider the two linear systems (A − σi I)x = b, σi ∈ R, i = 1, 2, with 0 ≥ σ1 > σ2 . Let v (k) denote (k) (k) (k) the (k + 1)st Lanczos vector. Let ri = (−1)k ρi v (k) , with ρi ∈ R, i = 1, 2, be the residuals associated with a Galerkin approximation in Kk (A, b). Then for any k ≥ 0 (k) with v (k) = 0, we have ρi > 0, i = 1, 2, and, in addition, for d ≥ 0 with v (k+d) = 0, (k)
ρ2
(3.5)
(k)
ρ1
(k+d)
≥
ρ2
(k+d)
ρ1
≥ 0.
Proof. We use the expressions (see [34, 45]) (k)
(k)
ρ1 = ρ0
k 4
1
, (k) ν=1 1 − σ1 /Θν
(k)
(k)
ρ2 = ρ0
k 4
1 (k)
ν=1
(k)
1 − σ2 /Θν
.
Here, ρ0 ≥ 0 is the norm of the residual of the Galerkin approximation to Ax = b in (k) Kk (A, b), and the Θν ’s denote the Ritz values of A in Kk (A, b), i.e., the (all positive) (k) eigenvalues of the Hpd matrix Tk from the Lanczos process. This shows that ρi ≥ 0, i = 1, 2. We have (k)
(3.6)
ρ2
(k)
ρ1
=
k (k) 4 1 − σ1 /Θν (k)
ν=1
1 − σ2 /Θν
.
STOPPING CRITERIA FOR RATIONAL MATRIX FUNCTIONS
1395
Since the Ritz values for two consecutive values of k interlace, we can order them in (k+d) (k) such a manner that Θν ≤ Θν for ν = 1, . . . , k. Because σ2 < σ1 ≤ 0, the fraction 1−σ1 /t σ2 −σ1 1−σ2 /t = 1 + t−σ2 is a positive and monotonically increasing function of t ∈ [0, ∞). Applying this to each factor in (3.6), we obtain (k)
ρ2
(k)
ρ1 Since 0 <
1−σ1 /Θ(k+d) ν (k+d)
1−σ2 /Θν
≥
k (k+d) 4 1 − σ1 /Θν (k+d)
ν=1
1 − σ2 /Θν
.
≤ 1 for ν = k + 1, . . . , k + d, we can multiply the expression on
the right-hand side with these factors to obtain (k)
ρ2
(k)
ρ1
≥
k+d 4
1 − σ1 /Θν
ν=1
1 − σ2 /Θν
(k+d) (k+d)
(k+d)
=
ρ2
(k+d)
,
ρ1
which is the desired inequality. 4. Standard estimates. Given the vector z = g(A)b and its approximation zk obtained after k steps of an iterative process, a simple stopping criterion consists of 's (k) monitoring the change of these iterates. Since (see (1.4)) zk = i=1 ωi Vk yi , within the Lanczos method we can cheaply compute
(4.1)
Δk,d
s (k) yi (k+d) = zk − zk+d = ωi − yi , 0d i=1
where 0d is the zero vector of d components. Note that the second equality holds H Vk+d = I, which is the case in both our situations: A real symmetric only if Vk+d and all shifts complex (∗ = T and Vk+d is real) as well as A Hermitian and all shifts real (∗ = H). In the case of the CG algorithm, for instance, and for d = 1, 's (k−1) (k−1) Δk,1 may be computed as Δk,1 = i=1 ωi γi pi
. A criterion based on Δk,d is commonly employed in linear and nonlinear iterative solvers [24]. Its effectiveness strongly depends on the convergence behavior of the iterative process, and in fact Δk,d may be small due to temporary stagnation, rather than to a satisfactory convergence of the approximate solution. In our context, convergence is in many cases linear (with a good rate) or even superlinear (cf., e.g., [21] for superlinear convergence results for exp(x)), except possibly for the first phase, where almost complete stagnation may occur. Our numerical experience fully confirms these phenomena, showing that (4.1) is very reliable when convergence enters the second phase (good linear convergence), whereas it may fail completely in the case of initial stagnation; cf. section 7. Another stopping criterion may be derived by trying to generalize the concept of residual stemming from the linear system setting, by resorting to the partial fraction (k) expansion of g in (1.2). Let xi be the Galerkin approximation to (A − σi I)−1 b (k) in Kk (A, b), ri the associated residual, and assume that (A − σi I) is not highly 's (k) ill-conditioned for any i. If the quantity r(k) = i=1 ωi ri is small, then one may ' (k) s reasonably argue that z (k) = i=1 ωi xi is a good approximation to z. This provides
1396
ANDREAS FROMMER AND VALERIA SIMONCINI
an argument for using r(k) as stopping criterion [39]. In particular, since for each i (k) (k) we have ri = (−1)k ρi v (k) and v (k) = 1, we can define s s (k) (k) (k) k (k) (4.2) := v ωi (−1) ρi = ω i ρi , i=1
i=1
which can be cheaply computed at each iteration; cf. [25] for a related discussion. Experience in the context of evaluating the exponential (cf., e.g., [46, 25, 37, 39, 41]) has shown that (k) may dramatically underestimate the actual error in the early convergence phase until good information on the spectrum of A is generated in the Krylov subspace. After the stagnation phase terminates, we have observed a reasonable agreement, at least in terms of slope, with respect to the true error; cf. section 7. However, (k) can still differ from the exact error by some orders of magnitude. The approach just discussed can be refined. Assume that we know bounds 1 , 2 such that spec(A) ⊆ [1 , 2 ]. Starting from g(A)b − z (k) =
s
s + , (k) (k) ωi (A − σi I)−1 b − xi ωi (A − σi I)−1 ri =
i=1
=
s
i=1
(−1)k ρi ωi (A − σi I)−1 v (k) , (k)
i=1
with v (k) = 1, we see that (4.3)
g(A)b − z
(k)
s (k) −1
≤ ρi ωi (A − σi I) ≤ i=1
max |h(k) (t)|,
t∈[1 ,2 ]
where h
(k)
(t) =
s (k) ρ ωi i
i=1
t − σi
.
Similarly, (4.4)
g(A)b − z (k) ≥ min |h(k) (t)|. t∈[1 ,2 ]
We can obtain upper and lower bounds for the error if we can bound h(k) in [1 , 2 ] from above and from below, respectively. In some situations, for example for rational approximations to the sign function or the square root of an Hpd matrix, the function h(k) is monotone, so that the maximum and the minimum can be read off from its values at 1 or 2 . In other situations, for example for the exponential, the extremal values of h(k) are attained in the interior of the interval [1 , 2 ], and we can use standard optimization methods to find them. Of course, this process should be comparably cheap. In experiments not reported here, we used Rump’s intlab toolbox [38], and we obtained bounds of the extrema by just subdividing the interval [1 , 2 ] into several subintervals and computing an image interval containing the range of |h(k) | on each of these subintervals using standard interval arithmetic. Bounds for the maximum and the minimum are then obtained from the endpoints of the image intervals.
STOPPING CRITERIA FOR RATIONAL MATRIX FUNCTIONS
1397
The previous discussion allows us to better interpret the classical estimate (k) from (4.2), at least in the case where A is Hpd, 0 ≥ σ1 > σ2 > · · · > σs , and all ωi (k) are positive. By Lemma 3.5, we then know that ρi ≥ 0 for all i. Thus we have (k) (k) ≤ h(k) (t) ≤ , 2 − σs 1 − σ1
t ∈ [1 , 2 ].
We finish this section by referring to some earlier work on Gaussian quadrature and iterative methods. For A Hpd, in [11] Golub and Meurant developed an elegant theory on moments and Gaussian quadrature with respect to discrete measures allowing them to obtain methods for computing lower and upper bounds to quantities of the form v H f (A)v; see also [13]. This theory was then elaborated to be cheaply included in CG-type algorithms, leading to important practical developments, on which the results of this paper are based; see, e.g., [12, 28, 29, 43]. 5. New error estimates for rational approximations. Our goal is to develop estimates for the 2-norm of the error when the approximation z (k) is determined 's (k) (k) as z (k) = i=1 ωi xi and each xi is obtained by a Galerkin procedure. For simplicity of notation we assume b = 1 from now on. Then s s ωi (A − σi I)−1 b − ωi Vk (Tk − σi I)−1 e1 .
g(A)b − z (k) = i=1
i=1
Our estimates rely only on the quantities available from the CG processes for each of the s systems (A − σi I)x = b, as described in section 3. In particular, using the estimates from (3.1) and (3.4), we now prove the following result. Theorem 5.1. In the systems (A−σi I)xi = b, assume that either of the following assumptions holds: (a) A is Hpd and σi real, i = 1, . . . , s, (b) A is real symmetric and b is real. (k) For each i, let xi be the Galerkin approximation to xi with respect to the kth Krylov subspace (obtained via CG or Lanczos), which is assumed to exist. This is the case, (k) (k) e.g., if σi ≤ 0 in case (a) and if !(σi ) = 0 in case (b). Let ri = (−1)k ρi v (k) be the associated residual. With the definitions in Lemma 3.3 and in (3.3), it holds that 2 s (k) ωi xi ≈ η (k,d) + τ (k,d) , g(A)b −
(5.1)
i=1
2
where
η
(k,d)
=
s i,j=1,¯ σi =σj
τ (k,d) =
s i,j=1,σ i =σj
ω i ωj σ i − σj
(k,d)
ω ¯ i ω j τj
(k)
ρj
(k,d) η (k) i ρi
.
(k)
−
ρi
(k,d) η (k) j ρj
,
1398
ANDREAS FROMMER AND VALERIA SIMONCINI
Proof. We have 2 s s + ,H (k) (k) (k) (5.2) g(A)b − ωi xi = ω i ωj (A − σi I)−1 ri (A − σj I)−1 rj i=1
i,j=1
2
=
s
ω i ωj (ei )H ej . (k)
(k)
i,j=1
We discuss each summand in (5.2) depending on whether σ i = σj or not. First, let σ i = σj . Using 1 1 1 1 = (5.3) · − , (t − σ i )(t − σj ) σ i − σj t − σi t − σj we see that in this case ,H + (k) (k) (A − σj I)−1 rj (A − σi I)−1 ri + , 1 (k) (k) (k) (k) (ri )H (A − σ i I)−1 rj − (ri )H (A − σj I)−1 rj σ i − σj + , 1 (k) (k) (k) (k) = (ei )H rj − (ri )H ej . σ i − σj
=
(k)
(k)
Recall that ri = (−1)k ρi v (k) . If A and b are real and A is symmetric, then v (k) is a real vector and (ei )H rj (k)
(k)
= (−1)k ρj (ei )H v (k) = (−1)k ρj (v (k) )T ei (k)
(k)
(k)
(k)
(k)
ρj
(k)
(k)
(r )T ei (k) i
=
=
ρi Analogously, (ri )H ej (k)
(k)
=
(k)
ρi
(k)
ρj
(k)
ρj
(k)
(k)
ρi
(k)
rj , ej T . (k)
(k)
(k)
(k)
=
(e )H ri (k) i (k)
(k)
=
ρi (k)
=
is real, and also
(k)
ρj
as well as, analogously, (ri )H ej (k)
(k)
ri , ei T .
If A is Hermitian and all poles σi are real, then ρi (ei )H rj
(k)
(k)
ρi
(k)
ρj
(k)
ρj
(k) ρi
(k)
(k)
ri , ei H ,
(k)
rj , ej H . With our general notation ·, ·∗ ,
we can subsume both cases by writing +
(A −
(k) σi I)−1 ri
,H (k) (A − σj I)−1 rj =
1 σ i − σj
(k)
(k) (k)
r , ei ∗ (k) i ρi
−
ρi
(k) (k)
r , ej ∗ (k) j ρj (k)
Now, let σ i = σj . If A is real symmetric and b is real, then xi (k) ri
=
(k) (k) rj , ei
=
(k) ej ;
see Remark 2.3. Therefore, (ei )H ej (k)
(k)
= (ej )T ej (k)
(k)
(k)
(k)
ρj
(k)
= ej , ej T ,
(k)
= xj
.
and
STOPPING CRITERIA FOR RATIONAL MATRIX FUNCTIONS
1399
while for A Hermitian and σi = σj real we have (ei )H ej = ej , ej H . Therefore, (5.2) can be written as 2 (k) s s (k) ρj ω i ωj ρi (k) (k) (k) (k) (k) ωi xi =
r , ei ∗ − (k) rj , ej ∗ g(A)b − σ i − σj ρ(k) i ρj i=1 i,j=1,σ i =σj i 2 (k)
(5.4)
+
s
(k)
(k)
(k)
(k)
(k)
ω i ωj ej , ej ∗ .
i,j=1,¯ σi =σj
By using the estimates from (3.1) and (3.4), the result follows. (k) Except for the ρi ’s—which can be updated very cheaply (see section 6)—all quantities used in estimate (5.1) are directly available from the respective CG pro(k,d) (k,d) cesses. More precisely, the quantities ηi and τj needed in η (k,d) and τ (k,d) are built up from quantities available in iterations k, . . . , k + d and k, . . . , k + 2d, respectively. Therefore, after k + 2d iterations of the CG or Lanczos methods, it is possible to estimate the 2-norm of the error at iteration k. At convergence, the overall estimation procedure will have required only 2d additional iterations to get often very accurate estimates throughout the convergence history. The parameter d can be fixed a priori or adjusted dynamically; in many cases a small constant value, d = 2, . . . , 5, is satisfactory. (k+j) Remark 5.2. Since τ (k,d) requires the quantities ηi for j = 0, . . . , 2d, we can even use, at the same computational cost, the improved estimate 2 s (k) (5.5) ωi xi ≈ η (k,2d) + τ (k,d) . g(A)b − i=1
2
Remark 5.3. For any d > 0 the quantities η (k,d) and τ (k,d) are real in the following two cases of interest: 1. A is (complex) Hermitian and all shifts are real; 2. A is real symmetric, b is real, the nonreal shifts σi come in conjugate pairs, and ω i = ωj whenever σ i = σj . Indeed, in case 1, every single summand in η (k,d) and τ (k,d) is real. In case 2, each summand in η (k,d) and τ (k,d) corresponding to an index pair (i, j) has a complex conjugate summand corresponding to the index pair (j, i). We now proceed to further investigate the error estimate (5.1) in the case of A Hpd and negative poles σi . This cases arises, for instance, for rational approximations to the sign function as well as to the inverse square root. We show that the estimates we derive are then lower bounds for the true error. 5.4. Let A be Hermitian and positive definite and b ∈ Cn . Let g(t) = 's Theorem −1 . Assume that ωi ∈ R, ωi > 0, i = 1, . . . , s, and that the poles σi are i=1 ωi (t − σi ) real and satisfy σi < σj < 0 for i < j. Let η (k,d) , τ (k,d) be defined as in Theorem 5.1. Then, for any d ≥ 0, 's (k) 1. g(A)b − i=1 ωi xi 2 ≥ η (k,d) + τ (k,d) ; 's (k) 2. g(A)b − i=1 ωi xi 2 ≥ η (k,2d) + τ (k,d) . Proof. Note first that the condition σ i = σj is now equivalent to i = j. Since σj > σi for i < j, Lemma 3.5 yields (k)
(5.6)
ρi
(k)
ρj
(k+d)
≥
ρi
(k+d)
ρj
,
1400
ANDREAS FROMMER AND VALERIA SIMONCINI
where all quantities are real. In the following, we use the symbol & to denote quantities related to iteration k + d. Define w &i = v&H (A − σi I)−1 v& ∈ R with v& the (complex) (k+d) H (k+d) Lanczos vector at stage k + d. We have w &i > 0 for all i and (ri ) ei = ρ&2i w &i . Therefore, using (3.1), the part that was neglected when passing from the first sum in (5.4) to its estimate η (k,d) can be bounded as follows: (k) s (k) ρj ωi ωj ρi (k+d) H (k+d) (k+d) H (k+d) (r ) ei − (k) (rj ) ej σi − σj ρ(k) i ρ i,j=1,i=j
i
s
=2
i,j=1,i<j s
≥2
i,j=1,i<j
j
ωi ωj σi − σj
(k)
(k)
ρj
ρ&2 w & (k) i i ρi
−
ρi
ρ&2 w & (k) j j ρj
ωi ωj (& ρj ρ&i w &i − ρ&j ρ&i w &j ) . σi − σj
In the last line we have used that σi − σj < 0 for i < j and that, according to (5.6), (k)
(k)
ρj
ρ&2 ≤ ρ&i ρ&j (k) i
ρi
ρ&2 (k) j
and
ρi
ρj
) (A − σi I)−1 ri
(k+d) H
&i = (rj Now, ρ&j ρ&i w
(k+d)
≥ ρ&j ρ&i .
, which together with (5.3) shows that
1 (k+d) H (k+d) (& ρj ρ&i w &i − ρ&j ρ&i w &j ) = (ej ) ei . σi − σj This gives us s i,j=1,i=j
ωi ωj σi − σj
(k)
ρj
(k+d) H (k+d) (r ) ei (k) i ρi s
≥2
(k)
−
ρi
(k+d) H (k+d) (r ) ej (k) j ρj
(k+d) H (k+d) ) ej
ωi ωj (ei
i,j=1,i<j
(5.7)
=
s
(k+d) H (k+d) ) ej .
ωi ωj (ei
i,j=1,i=j
According to Lemma 3.4, the part that was neglected when passing from the 's (k+d) H (k+d) second sum in (5.4) to its estimate τ (k,d) is not smaller than j=1 ωj2 (ej ) ej . Putting things together, we obtain 2 s (k) ωi xi g(A)b − i=1
≥ η (k,d) + τ (k,d) + 2
2
s
(k+d) H (k+d) ) ej
ωi ωj (ei
i,j=1,i<j
+
s j=1
2 s (k+d) = η (k,d) + τ (k,d) + g(A)b − ωi xi . i=1
2
(k+d) H (k+d) ) ej
ωj2 (ej
STOPPING CRITERIA FOR RATIONAL MATRIX FUNCTIONS
1401
This proves part 1. For 2, we obtain in a similar manner as before 2 s (k) ωi xi g(A)b − i=1
2 s
≥ η (k,2d) + τ (k,d) +
(k)
ω i ω j ρj
i,j=1,i=j s
−
(k)
ω i ω j ρi
(σi −
(k) σj )ρi
(k+2d) H (k+2d) ) ei
(ri
(k+2d) H (k+2d) ) ej
(r (k) j (σi − σj )ρj
i,j=1,i=j
⎛
s
(k+d) H (k+d) ) ej
ωj2 (ej
j=1
s
ω i ω j ρj
i,j=1,i<j
(σi − σj )ρi
= η (k,2d) + τ (k,d) + 2 · ⎝
+
(k) (k)
(k+2d) H (k+2d) ) ei
(ri
(k)
− +
(5.8)
s
ω i ω j ρi (σi −
(k+2d) H (k+2d) (r ) ej (k) j σj )ρj
(k+d) H (k+d) ) ej .
ωj2 (ej
j=1
The same steps that lead to (5.7) yield
s
(k)
ω i ω j ρj (σi −
i,j=1,i<j
≥
s
(k+2d) H (k+2d) (r ) ei (k) i σj )ρi
(k)
−
ωi ωj ρi (σi −
(k+2d) H (k+2d) (r ) ej (k) j σj )ρj
(k+2d) H (k+2d) ) ej
ωi ωj (ei
i,j=1,i<j
=
s
ωi ωj ρ&i ρ&j v&H (A − σi I)−1 (A − σj I)−1 v&j ≥ 0,
i,j=1,i<j
where the last inequality follows from the positive definiteness of the matrix (A − σi I)−1 (A − σj I)−1 and the fact that ρ&i ρ&j is positive. The result in 2 now 's (k+d) H (k+d) follows since j=1 ωj2 (ej ) ej ≥ 0. Note that the proof of this theorem also shows that we have a monotone behavior, 'p 'p (k) (k+1)
g(A)b − i=1 ωi xi ≥ g(A)b − i=1 ωi xi
, as soon as η (k,1) + τ (k,1) ≥ 0. Al(k,d) (k,d) though it is easy to see that the individual terms ηi and τj are nonnegative (see [43]), we did not succeed in proving that the latter condition always holds. Numerical evidence strongly suggests that this is true. 6. Implementation aspects. The bounds obtained in Theorems 5.1 and 5.4 allow us to derive a computable and rather cheap stopping criterion. For a fixed small value of d, after k iterations of the iterative process, with k > 2d, it is possible to derive an error estimate at iteration k − 2d. As an example, for d = 3, after 20 iterations it is possible to give an estimate of the error norm at iteration 14. Starting with k = 1, a sketch of the procedure is as follows:
1402 1. 2. 3. 4. We
ANDREAS FROMMER AND VALERIA SIMONCINI
Expand Krylov subspace from k − 1 to k. Update projected approximate solution y(σ) for all σ’s. If k > 2d, compute estimate of error at step k − 2d. If not converged, set k = k + 1 and goto 1. suggest to stop the iteration as soon as the following criterion is satisfied: 1
|ηη (k,2d) + τ (k,d) | 2 ≤ tol, where tol is a user-selected tolerance. We next discuss some additional details associated with the actual implementation of the overall approach. (k) The computation of η (k,2d) , τ (k,d) requires knowledge of the CG coefficients γi (k) and δi . When using the Lanczos recurrence (2.2), the CG coefficients are related to the entries of the tridiagonal matrix Tk = (tij ) as follows (cf., e.g., [40]): D (6.1)
tk+1,k+1 − σi =
1 (k)
γi
+
(k−1) δi , (k−1) γi
(k−1)
δi
tk,k+1 =
(k−1)
,
γi
while tk+1,k = tk,k+1 and tk+1,k = tk,k+1 for ∗ = “T” and ∗ = “H”, respectively. Moreover, τ (k,d) requires the computation of τik,d , which for i = 1, . . . , s contains the factor (k)
π ˆi,k =
(k)
pi , pi ∗ (k)
(k)
pi , (A − σi I)pi ∗
.
Once again, this quantity is not directly available in the case when the Lanczos process is employed. However, a short-term recurrence can be used for its update. We first observe that (k)
π ˆi,k =
(k)
pi , pi ∗ (k)
(k)
pi , (A − σi I)pi ∗
(k) (k) (k) pi , pi ∗ . (k) (k)
ri , ri ∗
= γi
(k−1)
Using the CG recurrence and the orthogonality between pi (k)
(k)
(k)
(k)
(k)
(k−1)
pi , pi ∗ = ri , ri ∗ + (δi )2 pi (0)
Therefore, starting with δi have (k)
(k)
(0)
(0)
(k−1)
, pi (0)
(k)
and ri , we have ∗ .
(0)
= 0, πi,0 = pi , pi ∗ = ri , ri ∗ , i = 1, . . . , s, we
(k)
(6.2) πi,k = ri , ri ∗ + (δi )2 πi,k−1 ,
(k)
π ˆi,k = γi
πi,k , (k) (k)
ri , ri ∗
k = 1, 2, . . . .
In the following we report a possible implementation1 of the whole procedure with the new stopping criterion. The Lanczos recurrence is employed as the underlying Krylov space method. We assume that A and b are given as the coefficients 's as well 1 ωi and the poles σi of the rational function g(t) = i=1 ωi t−σi . Without loss of generality, we scale b so that b = 1. 1 For the sake of readability we do not address the possible optimizations in the presence of complex conjugate shifts; see Remark 2.3.
STOPPING CRITERIA FOR RATIONAL MATRIX FUNCTIONS
1403
Algorithm PFE-Lanczos. Choose d, tol, maxit {for error estimate and stopping} I = {1, 2, . . . , s} {nonconverged systems} for i, j = 1, , . . . , s do {various initializations} if σ i = σj , then ξi,j = ω i ωj /(σ i − σj ) else ξi,j = 0, end end for (−1) (0) (0) (0) β = 0, v0 = b, v−1 = 0, πi,0 = 1, γi = 1, δi = 0, ρi = 1, πi = 1 (i ∈ I) for k = 1, . . . , maxit do {iteration} ∗ q, tk,k = α, {Lanczos coefficients and vectors} q = Avk−1 − βvk−2 , α = vk−1 v = q − αvk−1 , β = ( v ∗ v)1/2 , vk = v/β, tk+1,k = β, tk,k+1 = β, vk = v/β, (k−1) (k−1) (k−2) γi = 1/(α − σi − δi /γi ), i ∈ I, {CG coefficients γi using (6.1)} (k) (k−1) 2 (k−1) 2 (k−1) πi = (ρi ) + (δi ) πi , i ∈ I, {update factors in (6.2)} (k) (k) (k−1) (k−1) (k−1) π ˆi = (πi /ρi )(γi /ρi ), i ∈ I, (k) (k−1) 2 δi = β 2 (γi ) ,i∈I {CG coefficients δi using (6.1)} if k − 1 ≥ 2d, then {compute ⎛ ⎞ error estimate} k k (j) (m−1) (m−1) (j−1) (j−1) 2 π ˆi ⎝γi (ρi ) +2 γi (ρi )2 ⎠ , i ∈ I, i = j=k−d
τ k,d =
m=j−d
ωi2 i ,
i∈I
ˆi =
k
(m−1)
γi
m=k−d
η k,2d =
j∈I m∈I
(m−1) 2
(ρj
ˆj
) ,
i ∈ I,
ξ ρ(k−2d) (k−2d) j,m m
ρj
−
ˆm
(k−2d) ξ ρ (k−2d) j,m j ρm
,
1
est = |ττ k,d + η k,2d | 2 end if yi = (Tk − σi Ik )−1 e1 , i ∈ I (k) ρi = eTk yi tk+1,k , i ∈ I, if est < tol, then s k−1 ωi yi , zk = (wk )i+1 vi , stop wk = i=1
{get projected solutions} {factors for residuals} {iteration converged} {zk approx. solution as in (1.4)}
i=0
else (k) {remove converged systems} remove i with |ρi | ≈ mach from I end if end for Note that, in this implementation, the whole set of Lanczos basis vectors is needed only when retrieving the approximate solution upon convergence. So, during the iteration phase, older vectors may be stored in secondary memory and accessed again only at convergence. An alternative is a two-sweep strategy, where the basis vectors are recomputed a second time. On the other hand, the number of basis vectors involved in the approximation procedure does not depend on the number of shifted systems to be solved. Therefore, the storage requirements do not increase significantly with the number of shifted systems, and the additional cost for getting a solution for the s shifted tridiagonal systems (Tk − σi Ik )yi = e1 is just O(sk) in iteration k. The
1404
ANDREAS FROMMER AND VALERIA SIMONCINI
computational cost for computing the various coefficients is O(s2 ) per iteration. This shows that the dominant cost per iteration is caused by the computation of the next Lanczos basis vector, where we have to invest one matrix-vector multiplication and O(n) additional work for the vector updates. An analogous implementation relying on the CG method for multiple shifted systems can also be derived; see [47]. There, again, the Krylov subspace is expanded only once for all shifted systems at a time. In exact arithmetic, at each CG iteration the approximate solutions and the associated residuals are the same as those obtained in the Lanczos procedure; see (2.3). The algorithmic difference is that they are updated at each iteration, so that the whole subspace basis needs not be stored. As opposed to the Lanczos procedure, however, the direction vectors p(k) depend on the shift σ in M = A − σI. As was shown in [47], it is possible to retrieve the CG coefficients γ (k) and δ (k) for a shifted system from those of the nonshifted system at little constant cost. For each additional shift, one direction vector and one solution iterate need to be stored so that the memory requirements increase linearly with the number of shifts. This is in contrast to the Lanczos-based approach, where memory requirements increase linearly with the number of iterations performed. The computational cost of CG for each additional shifted system is O(n) per iteration for the updates of the iterates and of the direction vectors, plus O(1) for retrieving the shifted CG coefficients. Summarizing, we see that the computational costs of the Lanczos and the CG methods for solving several shifted linear systems are slightly in favor of the former, which does not require s direction vector updates. However, if memory requirements become crucial, the multiple CG approach is preferable. 7. Numerical experiments. In this section we report on our numerical experience with the discussed stopping criteria. In particular, we compare our new estimate (5.5) for a very low value of d, usually d ≤ 5, with the estimates using Δk,d from (4.1), for the same value of d, and the classical estimate (k) from (4.2). For completeness, in one case with A Hpd we also give the bounds obtained via the operator norm bounds in (4.3) and (4.4). We consider four different rational functions, each approximating a function of 1 interest in applications: sign(x), x− 2 , exp(x), and cos(x). In our experiments we use both matrices from real application problems, and “academic” examples with diagonal matrices. The latter case is not a major restriction since the performance of the methods in exact arithmetic depends only on the spectrum and the initial vector, and not on the sparsity of the matrix. Example 7.1. The Wilson fermion matrix Q results as a discretization of the theory of quantum chromodynamics (QCD), which explains the strong interaction between the quarks as constituents of matter. Recent developments that for the first time respect the physically important “chiral symmetry” (see, e.g., [3]) produce models in which systems of the form P + sign(Q), P a simple permutation matrix, have to be solved repeatedly. This is done with a Krylov subspace method, so that in each step one has to compute sign(Q)b for some vector b. The matrix Q is Hermitian and indefinite. Here we report on numerical results obtained with the matrix D available in the QCD collection of the Matrix Market [32] as configuration conf5.4-00l8x8-2000.mtx with κc = 0.15717. 
Q is then given as Q = P (I − 43 κc D), with P the permutation ⎛ ⎞ 0 0 1 0 ⎜ 0 0 0 1 ⎟ ⎟ n P = I3 ⊗ ⎜ ⎝ 1 0 0 0 ⎠ ⊗ I 12 . 0 1 0 0
STOPPING CRITERIA FOR RATIONAL MATRIX FUNCTIONS
1405
2
10
true error Δk,d ρ(k) New estimate bounds from (4.3) and (4.4)
0
10
−2
norm of error
10
−4
10
−6
10
−8
10
−10
10
0
50
100
150 200 250 number of iterations
300
350
400
Fig. 7.1. Results for the Zolotarev rational approximation of sign(Q) for the QCD matrix Q. The 30 smallest eigenvalues are deflated. We plot the convergence history of the Krylov subspace approximation and various error estimates for s = 11 poles. Both the new estimate and Δk,d use d = 5.
The dimension of the system is n = 12 · 84 ≈ 50,000. We compute sign(Q)b for a randomly generated vector b. To this purpose, we first compute two numbers 0 < a1 < a2 such that spec(Q) ⊂ [−a2 , −a1 ] ∪ [a1 , a2 ]. We then approximate sign(t) rational approximation; see [36]. This on [−a2 , −a1 ] ∪ [a1 , a2 ] with the Zolotarev 's t approximation has the form g&(t) = i=1 ωi t2 +α , ωi , αi > 0, and it is an ∞ best i approximation. The number of poles s was chosen such that the ∞ -error was less than 10−7 , that is, s = 11. To compute g&(Q)b, we actually computed g(Q) · (Qb) with (7.1)
g(t) =
s i=1
ωi
t2
1 . + αi
Since Q2 is Hpd, Theorem 5.4 applies, so that our estimates will be lower bounds of the true error. We also used a deflation technique which has become common in realistic QCD computations: Since the density of the eigenvalues of Q close to 0 is relatively small, it pays to compute q eigenvalues of Q which are smallest in modulus, λ1 , . . . , λq , say, beforehand using a Lanczos procedure for Q2 . With Π denoting the orthogonal projector along the space spanned by the corresponding eigenvectors wi , i = 1, . . . , q, we then work with the matrix ΠQΠ and the vector Πb. In this manner, we effectively shrink the eigenvalue intervals for Q, so that we need fewer poles for an accurate Zolotarev approximation, and, in addition, the linear systems to be solved converge more rapidly. The vector sign(Q)b can be retrieved as sign(ΠQΠ)Πb + sign(diag(λ1 , . . . , λq )) · (I − Π)b. In QCD practice, this approach results in a major speedup, since sign(Q)b must usually be computed repeatedly for various vectors b. The convergence plot in Figure 7.1 shows that the error is monotonically decreasing. Our new estimate (with d = 5) is quite close to the true error, and it is a lower bound in accordance with Theorem 5.4. The plot also gives upper and lower bounds as obtained by (4.3) and (4.4), as well as the classical estimate (k) from (4.2) and the estimate Δk,d from (4.1) with d = 5. Our new estimate is the most precise of all. Let us mention that [45] gives an alternative way for obtaining an upper estimate for the error with the Zolotarev approximation. This estimate assumes that we first
1406
ANDREAS FROMMER AND VALERIA SIMONCINI
2
2
10
10 true error Δk,d
0
10
New estimate
−2
New estimate
10
−4
−4
10
norm of error
norm of error
ρ(k)
−2
10
−6
10
−8
10
10
−6
10
−8
10
−10
−10
10
10
−12
−12
10
10
−14
10
true error Δk,d
0
10
ρ(k)
−14
0
10
20
30
40 50 60 number of iterations
70
80
90
100
10
0
50
100 150 number of iterations
200
250
√ Fig. 7.2. Results for the (11, 12) Zolotarev rational function approximation of 1/ x. Convergence history of the Krylov subspace approximation with various error estimates. Left: A is a 200 × 200 diagonal matrix with uniformly distributed values in [1, 1000]. Right: A is a 3000 × 3000 diagonal matrix with clustered values in [1, 104 ]. In both examples, we used d = 2 for the new estimate and for Δk,d .
compute the Galerkin approximation for g(Q)b and then postmultiply by Q, whereas here we use just the opposite order; i.e., we first multiply b by Q. Example 7.2. We consider the Zolotarev rational function approximation to the inverse square root function, g(A)b ≈ A−1/2 b; see [36, Chapter 4]. This is directly related to the sign function as g(t) = g(t1/2 ) with g from (7.1). So, again, our new estimates will be lower bounds of the error if A is Hpd. In both experiments below, we employ d = 2. We use a rational function with denominator of degree s = 12 (and numerator of degree s − 1 = 11), and a 200 × 200 diagonal matrix A with uniformly distributed values in the interval [1, 1000]; b is the normalized vector of all ones. With these parameters, the accuracy of the Zolotarev approximation turns out to be of the order of 10−7 . The convergence history of our Krylov subspace method together with the error estimates are reported in the left-hand plot of Figure 7.2. The figure shows the good agreement of the new estimate with the true error. The norm Δk,d (for d = 2) between different approximate solutions is also a good estimate, since complete stagnation of the process is never observed. The estimate (k) is not sharp, losing at least two orders of magnitude. In the right-hand plot A was taken a diagonal matrix of size 3000 × 3000 with clustered eigenvalues. There are 300 clusters, the centers of which are uniformly distributed in [1, 104 ]. Each cluster contains 10 eigenvalues obtained by perturbing the center with a random relative change uniformly distributed in [0, 10−4 ]. The analysis in [33] explains that we are now to expect stagnation phases such as those observed in the plot. The error estimates become less tight—and they are too small—when stagnation occurs, whereas they are quite good when there is progress in the approximation. The new error estimate is again slightly better than Δk,d (d = 2 in both estimates). As in Example 7.1 we know that the new estimate represents a lower bound by Theorem 5.4. Example 7.3. In this example we consider the Chebyshev rational approximation g(A)b to the exponential function exp(−A)b. The coefficients of the two polynomials of the same degree appearing in g have been tabulated in [5] for several different degrees. It is known that the error associated with this approximation is maxt>0 | exp(−t) −
1407
STOPPING CRITERIA FOR RATIONAL MATRIX FUNCTIONS 2
0
10
10
true error Δ
true error Δ
ρ New estimate
ρ New estimate
k,d (k)
0
10
−2
10
k,d (k)
−5
10
norm of error
norm of error
−4
10
−6
10
−8
10
−10
10 −10
10
−12
10
−14
10
−15
0
10
20
30
40
number of iterations
50
60
70
10
0
50
100
150
number of iterations
Fig. 7.3. Results for the Chebyshev rational function approximation of the exponential. Convergence history of the Krylov subspace approximation with various error estimates. Left: A stems from the 2D Laplace operator on a 30 × 30 grid. Right: A is a diagonal matrix of dimension 400 with uniformly distributed eigenvalues in [0, 2000]. In both examples, we used d = 2 for the new estimate and for Δk,d .
g(t)| = O(10−s ), where s is the degree of the polynomials in the rational function. In this case, the poles σi and the coefficients ωi in the partial fraction expansion are complex and appear in conjugate pairs. Therefore, the code with ∗ = T is employed. In both experiments below we use d = 2. We first consider the 900 × 900 matrix A = with A stemming from the finite difference discretization of the two dimensional 0.1 A, Laplace operator in the unit square and homogeneous boundary conditions, and thus A is real symmetric and positive definite; b is taken as the scaled vector of all ones. The convergence in the approximation to the vector g(A)b, with g of degree s = 14, is reported in the left-hand plot of Figure 7.3, together with the considered error estimates. Both the new bound and the difference Δk,d , for the same value of d, are able to closely follow the true convergence behavior of the error. The classical bound (k) stays almost two orders of magnitude above the actual error, although the slope is very similar. We next consider an example in which the convergence history shows an initial stagnation phase, and the picture changes significantly. To this end, we let A be a diagonal 400 × 400 matrix with uniformly distributed values in the interval [0, 2000], and b, g(t) as above. The results are reported in the right-hand plot of Figure 7.3. Both the classical estimates (k) and those relying on Δk,d are completely unreliable throughout the stagnation stage, whereas our new estimate provides a useful estimate. After that phase, all curves behave as in the previous example, with the new estimate being the sharpest one. We refer to [39] for an improved, possibly more reliable stopping criterion than that based on (k) , in the case of the exponential function. In this context, we also mention that a stopping criterion based on a variant of Δk,d is also used in [46, section 4]. Example 7.4. Finally, we consider the (8, 8) Pad´e rational approximation to the cosine function, as described in [20, formula (4.1)]; see also [16]. We consider a diagonal 1000 × 1000 matrix A with diagonal real elements uniformly distributed in [0, π], and b equal to a normalized vector of uniformly distributed random values in [0,1]. The moderate norm of A ensures an accuracy of the Pad´e approximation on the order of 10−8 . The poles and coefficients in the partial fraction expansion of the Pad´e function arise in complex conjugates, yielding a shifted complex symmetric matrix M .
1408
ANDREAS FROMMER AND VALERIA SIMONCINI
0
true error Δk,d
−2
ρ New estimate
10
(k)
10
−4
10
−6
norm of error
10
−8
10
−10
10
−12
10
−14
10
−16
10
−18
10
0
5
10
15
20
25
number of iterations
Fig. 7.4. Results for the (8, 8) Pad´ e rational function approximation to the cosine. Convergence history of the Krylov subspace approximation with various error estimates. A is a 1000 × 1000 diagonal matrix with uniformly distributed values in [−π, π]. We used d = 2 for the new estimate and for Δk,d .
The results for the Krylov subspace iteration are displayed in Figure 7.4. They confirm the high accuracy of the new estimate and of the estimate based on Δk,d . Actually, these two estimates are so close to each other that they cannot be distinguished in the plot of Figure 7.4. We used d = 2 for both the new estimate and Δk,d . The classical estimate (k) has a dramatic oscillating behavior, which makes the estimate completely unreliable. We also observe that the true error stagnates at the level ≈ 10−13 , which for this problem appears to be the method’s final attainable accuracy. This phenomenon deserves a deeper analysis, in view of similar discussions in the case of iterative system solvers for linear systems; see, e.g., [14]. 8. Acceleration procedures. Acceleration procedures have been proposed to enhance the convergence of the approximation to matrix functions; see, e.g., [46, 31, 6, 8, 1]. In [46] and [31] a procedure based on shift-and-invert Lanczos is proposed to accelerate the Krylov subspace approximation of the action of matrix functions to a vector, f (A)b, so that fewer iterations are required to reach the desired accuracy. For a given real parameter μ > 0, the procedure first constructs the Krylov subspace Kk ((I − μA)−1 , b), by means of the Arnoldi recurrence (I − μA)−1 Vk = Vk Tk + tk+1,k vk+1 eTk , and then determines an approximation to f (A)b as Vk f ((I −Tk−1 )/μ)e1 . For f a rational function, it was shown in [37, Proposition 3.1] that the shift-andinvert Lanczos procedure amounts to approximating each system solution (A−σi I)−1 b &= in Kk ((I − μA)−1 , b) by imposing the Galerkin condition. To be specific, set A 1 &i = σi μ−1 . Then (I − μA)−1 and σ + , &−σ & · (A − σi I) = 1 − μσi A &i I , A μ and the shift-and-invert Lanczos procedure amounts to solving, for i = 1, . . . , p, the systems , + 1 − μσi & &−σ & (8.1) x, b = Ab. A &i I x & = &b, with x &= μ The linear systems in (8.1) have precisely the same shifted structure as those (k) & &b); in the previous sections. Let x &i be the Galerkin solution to system i in Kk (A,
1409
STOPPING CRITERIA FOR RATIONAL MATRIX FUNCTIONS 0
0
10
10
true error Δ
true error Δ
ρ New estimate
ρ New estimate
k,d (k)
−2
10
k,d (k)
−4
10
−5
norm of error
norm of error
10 −6
10
−8
10
−10
10 −10
10
−12
10
−14
10
−15
0
5
10
15
20
25
30
10
0
20
40
number of iterations
60
80
100
120
number of iterations
Fig. 8.1. Convergence of the acceleration procedure. Left: approximation to the Chebyshev rational function approximating the exponential. A is the discretization of an elliptic operator; see Example 8.1. √ Right: approximation to the (11, 12) Zolotarev rational function approximation of the function 1/ x; see Example 8.2. In both cases, we used d = 2 for the new estimate and for Δk,d .
note that the generation of this subspace requires solving a system with (I − μA) at each iteration. The acceleration procedure is therefore effective only if one can solve (k) (k) μ x & be the these systems efficiently, e.g., using a multigrid method. Let xi = 1−μσ i i corresponding approximate solution to the original system (A − σi I)x = b; see (8.1). Then g(A)b −
s
(k)
ωi xi
=
i=1
s
(k)
ωi (xi − xi )
i=1
=
s i=1
μωi (k) (k) (& xi − x &i ) ≡ ω &i (& xi − x &i ). 1 − μσi i=1 s
's The results(k)of Theorems 5.1 and 5.4 can now be used to estimate the error &i (& xi − x &i ). We show the behavior of the shift-and-invert acceleration procei=1 ω dure in the next two examples. Example 8.1. We consider the approximation of the operation exp(−tA)b, where t = 0.1, b is the scaled unit vector, and A is the 10,000 × 10,000 matrix stemming from the five point finite difference discretization of the operator L(u) = (a(x, y)ux )x + (b(x, y)uy )y ,
a(x, y) = 1 + y − x,
b(x, y) = 1 + x + x2 ,
in [0, 1]2 ; see [46]. The standard Lanczos approach is extremely inefficient on this problem, whereas the shift-and-invert acceleration strategy is very competitive [37]. In Figure 8.1 we report the performance of the procedure for s = 14, d = 2 and the acceleration parameter μ taken as μ = 1/ maxi |σi |; cf. [37] for a justification of this choice. The results fully confirm the effectiveness of the error estimate even in the acceleration context and highlight its sharpness whenever convergence is fast. Note that the simple estimate Δk,d is equally good, owing to the fast convergence rate, whereas the classical residual-based estimate (k) is unable to capture the true order of magnitude of the error. Example 8.2. We conclude with the use of the shift-and-invert Lanczos procedure for accelerating the approximation of A−1/2 b in Example 7.2 with A of size 3000×3000.
Example 8.2. We conclude with the use of the shift-and-invert Lanczos procedure for accelerating the approximation of A^{−1/2} b in Example 7.2 with A of size 3000 × 3000.
For this example, after some tuning we set the acceleration parameter to be equal to μ = 1/(100 + min_i |σ_i|) and we considered d = 2. With the same data as in Example 7.2, the results are displayed in the right-hand plot of Figure 8.1 and once again show an optimal behavior of our new error estimate compared to (k).

9. Conclusions. In this paper we have shown that a sharp error estimate may be obtained for the approximation by Krylov subspace methods of the action of rational matrix functions to a vector. Our results are sufficiently general to be applicable to a large class of rational functions, commonly employed to approximate not necessarily analytic functions such as the exponential, the sign, the square root, and trigonometric functions. Our estimates rely on known error estimates for Hermitian positive definite systems; however, we apply them to a wider class of matrices and to the far more general context of rational functions. Under certain hypotheses, we were able to prove that our estimates are true lower bounds of the Euclidean norm of the rational function error. We have also discussed practical implementation issues, showing that our estimates can be cheaply included in a Lanczos or CG procedure. We also showed that a classical measure of the error, the difference Δ_{k,d} between two iterates, may be a good indicator of the actual convergence history, unless complete stagnation takes place.

Acknowledgment. We would like to thank Zdeněk Strakoš for his helpful comments on an earlier version of this paper and for suggesting the second matrix in Example 7.2.

REFERENCES

[1] O. Axelsson and A. Kucherov, Real valued iterative methods for solving complex symmetric linear systems, Numer. Linear Algebra Appl., 7 (2000), pp. 197–218.
[2] G. A. Baker and P. Graves-Morris, Padé approximants, Encyclopedia of Mathematics and Its Applications, Vol. 59, Cambridge University Press, Cambridge, UK, 1996.
[3] A. Borici, A. Frommer, B. Joó, A. Kennedy, and B. Pendleton, QCD and numerical analysis III, in Proceedings of the Third International Workshop on Numerical Analysis and Lattice QCD, Edinburgh, UK, 2003, Lecture Notes in Comput. Sci. Engrg. 47, Springer, Berlin, 2005.
[4] D. Calvetti, S. Morigi, L. Reichel, and F. Sgallari, Computable error bounds and estimates for the conjugate gradient method, Numer. Algorithms, 25 (2000), pp. 79–88.
[5] A. J. Carpenter, A. Ruttan, and R. S. Varga, Extended numerical computations on the 1/9 conjecture in rational approximation theory, in Rational Approximation and Interpolation, P. R. Graves-Morris, E. B. Saff, and R. S. Varga, eds., Lecture Notes in Math. 1105, Springer-Verlag, Berlin, 1984, pp. 383–411.
[6] P. Castillo and Y. Saad, Preconditioning the matrix exponential operator with applications, J. Sci. Comput., 13 (1999), pp. 225–302.
[7] V. Druskin and L. Knizhnerman, Two polynomial methods of calculating functions of symmetric matrices, U.S.S.R. Comput. Math. Math. Phys., 29 (1989), pp. 112–121.
[8] V. Druskin and L. Knizhnerman, Extended Krylov subspaces: Approximation of the matrix square root and related functions, SIAM J. Matrix Anal. Appl., 19 (1998), pp. 755–771.
[9] A. Frommer and V. Simoncini, Matrix functions, in Model Order Reduction: Theory, Research Aspects and Applications, W. H. A. Schilders and H. A. van der Vorst, eds., Mathematics in Industry, Springer, Heidelberg, 2008, to appear.
[10] G. Golub and C. F. Van Loan, Matrix Computations, 3rd ed., The Johns Hopkins University Press, Baltimore, MD, 1996.
[11] G. H. Golub and G. A. Meurant, Matrices, moments and quadrature, in Numerical Analysis 1993, D. Griffiths and G. Watson, eds., Pitman Res. Notes Math. 303, Longman Scientific and Technical, Harlow, UK, 1994, pp. 105–156.
[12] G. H. Golub and G. A. Meurant, Matrices, moments and quadrature II; How to compute the norm of the error in iterative methods, BIT, 37 (1997), pp. 687–705.
[13] G. H. Golub and Z. Strakoš, Estimates in quadratic formulas, Numer. Algorithms, 8 (1994), pp. 241–268.
[14] A. Greenbaum, Estimating the attainable accuracy of recursively computed residual methods, SIAM J. Matrix Anal. Appl., 18 (1997), pp. 535–551.
[15] E. Hairer, C. Lubich, and G. Wanner, Geometric Numerical Integration. Structure-preserving Algorithms for Ordinary Differential Equations, Springer Ser. Comput. Math. 31, Springer, Berlin, 2002.
[16] G. I. Hargreaves and N. J. Higham, Efficient algorithms for the matrix cosine and sine, Numer. Algorithms, 40 (2005), pp. 383–400.
[17] M. R. Hestenes and E. Stiefel, Methods of conjugate gradients for solving linear systems, J. Res. National Bureau of Standards, 49 (1952), pp. 409–436.
[18] N. J. Higham, Accuracy and Stability of Numerical Algorithms, SIAM, Philadelphia, PA, 1996.
[19] N. J. Higham, Factorizing complex symmetric matrices with positive definite real and imaginary parts, Math. Comput., 67 (1998), pp. 1591–1599.
[20] N. J. Higham and M. I. Smith, Computing the matrix cosine, Numer. Algorithms, 34 (2003), pp. 13–26.
[21] M. Hochbruck and C. Lubich, On Krylov subspace approximations to the matrix exponential operator, SIAM J. Numer. Anal., 34 (1997), pp. 1911–1925.
[22] M. Hochbruck and A. Ostermann, Exponential Runge-Kutta methods for parabolic problems, Appl. Numer. Math., 53 (2005), pp. 323–339.
[23] R. A. Horn and C. R. Johnson, Topics in Matrix Analysis, Cambridge University Press, Cambridge, UK, 1994.
[24] C. T. Kelley, Iterative Methods for Linear and Nonlinear Equations, Frontiers in Appl. Math. 16, SIAM, Philadelphia, 1995.
[25] L. Lopez and V. Simoncini, Analysis of projection methods for rational function approximation to the matrix exponential, SIAM J. Numer. Anal., 44 (2006), pp. 613–635.
[26] The MathWorks, Inc., MATLAB 7, September, 2004.
[27] J. A. Meijerink and H. A. van der Vorst, An iterative solution method for linear systems of which the coefficient matrix is a symmetric M-matrix, Math. Comp., 31 (1977), pp. 148–162.
[28] G. Meurant, The computation of bounds for the norm of the error in the conjugate gradient algorithm, Numer. Algorithms, 16 (1997), pp. 77–87.
[29] G. Meurant, Estimates of the l2 norm of the error in the conjugate gradient algorithm, Numer. Algorithms, 40 (2005), pp. 157–169.
[30] G. Meurant and Z. Strakoš, The Lanczos and conjugate gradient algorithms in finite precision arithmetic, in Acta Numer. 15, Cambridge University Press, Cambridge, UK, 2006, pp. 471–542.
[31] I. Moret and P. Novati, RD-rational approximations of the matrix exponential, BIT, 44 (2004), pp. 595–615.
[32] National Institute of Standards and Technology, Matrix Market, available online at http://math.nist.gov/MatrixMarket/.
[33] D. P. O'Leary, Z. Strakoš, and P. Tichý, On sensitivity of Gauß-Christoffel quadrature, Numer. Math., 107 (2007), pp. 147–174.
[34] C. C. Paige, B. N. Parlett, and H. A. van der Vorst, Approximate solutions and eigenvalue bounds from Krylov subspaces, Numer. Linear Algebra Appl., 2 (1995), pp. 115–134.
[35] B. N. Parlett, A new look at the Lanczos algorithm for solving symmetric systems of linear equations, Linear Algebra Appl., 29 (1980), pp. 323–346.
[36] P. P. Petrushev and V. A. Popov, Rational Approximation of Real Functions, Cambridge University Press, Cambridge, UK, 1987.
[37] M. Popolizio and V. Simoncini, Refined Acceleration Techniques for Approximating the Matrix Exponential Operator, Technical report, Dipartimento di Matematica, Università di Bologna, 2006.
[38] S. M. Rump, INTLAB—INTerval LABoratory, SIAM J. Matrix Anal. Appl., to appear.
[39] Y. Saad, Analysis of some Krylov subspace approximations to the matrix exponential operator, SIAM J. Numer. Anal., 29 (1992), pp. 209–228.
[40] Y. Saad, Iterative Methods for Sparse Linear Systems, The PWS Publishing Company, Boston, 1996; 2nd ed., SIAM, Philadelphia, 2003.
[41] R. B. Sidje, Expokit: A software package for computing matrix exponentials, ACM Trans. Math. Software, 24 (1998), pp. 130–156.
[42] D. E. Stewart and T. S. Leyk, Error estimates for Krylov subspace approximations of matrix exponentials, J. Comput. Appl. Math., 72 (1996), pp. 359–369.
[43] Z. Strakoš and P. Tichý, On error estimation in the conjugate gradient method and why it works in finite precision computations, Electronic Trans. Numer. Anal., 13 (2002), pp. 56–80.
[44] Z. Strakoš and P. Tichý, Error estimation in preconditioned conjugate gradients, BIT, 45 (2005), pp. 789–817.
[45] J. van den Eshof, A. Frommer, T. Lippert, K. Schilling, and H. van der Vorst, Numerical methods for the QCD overlap operator. I: Sign-function and error bounds, Comput. Phys. Commun., 146 (2002), pp. 203–224.
[46] J. van den Eshof and M. Hochbruck, Preconditioning Lanczos approximations to the matrix exponential, SIAM J. Sci. Comput., 27 (2006), pp. 1438–1457.
[47] J. van den Eshof and G. L. Sleijpen, Accurate conjugate gradient methods for families of shifted systems, Appl. Numer. Math., 49 (2004), pp. 17–37.
SIAM J. SCI. COMPUT. Vol. 30, No. 3, pp. 1413–1429
c 2008 Society for Industrial and Applied Mathematics
LIMITED DATA X-RAY TOMOGRAPHY USING NONLINEAR EVOLUTION EQUATIONS∗ VILLE KOLEHMAINEN† , MATTI LASSAS‡ , AND SAMULI SILTANEN§ Abstract. A novel approach to the X-ray tomography problem with sparse projection data is proposed. Nonnegativity of the X-ray attenuation coefficient is enforced by modelling it as max{Φ(x), 0}, where Φ is a smooth function. The function Φ is computed as the equilibrium solution of a nonlinear evolution equation analogous to the equations used in level set methods. The reconstruction algorithm is applied to (a) simulated full and limited angle projection data of the Shepp–Logan phantom with sparse angular sampling and (b) measured limited angle projection data of in vitro dental specimens. The results are significantly better than those given by traditional backprojection-based approaches, and similar in quality (but faster to compute) compared to the algebraic reconstruction technique. Key words. limited angle tomography, X-ray tomography, level set, nonlinear evolution equation AMS subject classifications. 44A12, 92C55, 65N21, 65R32 DOI. 10.1137/050622791
1. Introduction. In medical X-ray tomography, the inner structure of a patient is reconstructed from a collection of projection images. The widely used computerized tomography (CT) imaging uses an extensive set of projections acquired from all around the body. Reconstruction from such complete data is by now well understood, the most popular method being filtered backprojection (FBP). However, there are many clinical applications where three-dimensional (3D) information is helpful, but a complete projection data set is not available. For example, in mammography and intraoral dental imaging, the X-ray detector is in a fixed position behind the tissue, and the X-ray source moves with respect to the detector. In these cases the projections can be taken only from a view angle significantly less than 180◦ , leading to a limited angle tomography problem. In some applications, such as surgical imaging, projections are available from all around the body, but the radiation dose to the patient is minimized by keeping the number of projections small. In addition, the projections are typically truncated to detector size, leading to a local tomography problem. We refer to the above types of incomplete data as sparse projection data. Sparse projection data does not contain sufficient information to completely describe the tissue, and thus successful reconstruction requires some form of regularization or a priori information. It is well known that traditional reconstruction methods, such as FBP, are not well suited for sparse projection data [27, 20]. More promising ∗ Received by the editors January 17, 2005; accepted for publication (in revised form) July 16, 2007; published electronically April 9, 2008. This work was supported by National Technology Agency of Finland (TEKES, contract 206/03), the Academy of Finland (projects 203985 and 108299), and Finnish Centre of Excellence in Inverse Problems Research (Academy of Finland CoE-project 213476). http://www.siam.org/journals/sisc/30-3/62279.html † Department of Physics, University of Kuopio, P.O. Box 1627, 70211 Kuopio, Finland (ville. kolehmainen@uku.fi). ‡ Institute of Mathematics, Helsinki University of Technology, P.O. Box 1100, 02015 TKK, Finland (matti.lassas@tkk.fi). § Institute of Mathematics, Tampere University of Technology, P.O. Box 553, 33101 Tampere, Finland (samuli.siltanen@tut.fi).
approaches include algebraic reconstruction (variants of the algebraic reconstruction technique (ART)) [1, 5, 25], tomosynthesis [8], total variation methods [15, 24, 7, 6], Bayesian inversion [13, 30, 33, 15, 28, 17], variational methods [16], and deformable models [12, 3, 10, 42, 19]. We introduce a novel variant of the level set method, where the X-ray attenuation coefficient is modelled as the function max{Φ(x), 0} with Φ a smooth function. Thus we make use of the natural a priori information that the X-ray attenuation coefficient is always nonnegative (the intensity of X-rays does not increase inside tissue). We assume that the attenuation coefficient v ∈ L²(Ω) for a bounded subset Ω ⊂ R² and use the following linear model for the direct problem:

(1.1)  m = Av + ε,
where A is a linear operator on L²(Ω) with appropriate target space and ε is measurement noise. To reconstruct v approximately from m, we solve numerically the evolution equation

(1.2)  ∂t φ(x, t) = −A∗(A(f(φ(x, t))) − m) + βΔφ(x, t),  (ν · ∇ − r)φ(x, t)|∂Ω = 0,
with a suitable initial condition φ(0) = φ0 and r ≥ 0, β > 0. Here ν is the interior normal vector of the boundary. The cutoff function f : R → R is given by

(1.3)  f(s) = s for s > 0,  f(s) = 0 for s ≤ 0.

We denote

(1.4)  Φ(x) := lim_{t→∞} φ(x, t),

and consider the function

(1.5)  w(x) = f(Φ(x))
as the reconstructed attenuation coefficient. The main difference between traditional level set methods [22, 31, 9, 29, 23, 36, 40] and our algorithm is that we represent the attenuation coefficient as f (Φ), as opposed to the traditional form H(Φ) with H the Heaviside function. Using f means that the attenuation coefficient is represented by the smooth level set function itself in the regions bounded by zero level sets; in classical level set methods the attenuation coefficient would be constant in those regions. In our experiments related to dental radiology, using f produces high quality reconstructions, whereas using H and piecewise constant representation leads to unacceptable quality. In addition to computational results, we provide proofs of some aspects of our method. In Theorem 4.1 we show that the solution of the evolution equation (1.2) exists when the measurement equation (1.1) comes from one of the two most popular models for X-ray tomography: the pencil beam model or the Radon transform. Further, by Theorem 4.2 we know that the limit (1.4) exists when β is large enough and r > 0. While the proofs of the theorems rest on rather standard theory of nonlinear evolution equations, the novelty of our results lies in the combination of good reconstructions from measured data and theoretical justification for the proposed algorithm. The proofs rely essentially on the use of f instead of H.
Let us review earlier relevant level set studies. There are some approaches avoiding the Heaviside function but retaining the piecewise constant representation [18, 35]. Feng, Karl, and Castañón [10] show tomographic reconstructions with smooth variation from simulated noisy limited-view Radon transform data. Yu and Fessler [42] use pixel-based representation inside components defined by level sets and an alternating minimization algorithm (our method simultaneously recovers the level sets and the interior). Villegas et al. [37] use the Heaviside function and level sets for piecewise smooth geological inversion. A rigorous existence and uniqueness proof for the solution is given by Nguyen and Hoppe [21] in the context of amorphous surface growth.

This paper is organized as follows. In section 2 we discuss the pencil beam model and the Radon transform. Equation (1.2) is derived in section 3, and the solution is shown to converge to a nonnegative reconstruction of the attenuation coefficient in section 4. Reconstructions from sparse projection data, both simulated and measured (in vitro), are presented in section 5. We conclude our results in section 6.

2. X-ray measurement models. In medical X-ray imaging, an X-ray source is placed on one side of the target tissue. Radiation passes through the tissue, and the attenuated signal is detected on the other side; see the left-hand illustration in Figure 2.1. We model a two-dimensional (2D) slice through the target tissue by a rectangle Ω ⊂ R² and a nonnegative attenuation coefficient v : Ω → [0, ∞). The tissue is contained in a subset Ω1 ⊂ Ω, and v(x) ≡ 0 for x ∈ Ω \ Ω1. This leads to the linear model

(2.1)  ∫_L v(x) dx = log I0 − log I1,
where L is the line of the X-ray, I0 is the initial intensity of the X-ray beam when entering Ω, and I1 is the attenuated intensity at the detector. Below we present two popular ways to organize and interpret collections of measured line integrals (2.1) in the form (1.1): the Radon transform and the pencil beam model. We neglect scattering phenomena and effects of nonmonochromatic radiation, or beam hardening.

2.1. Radon transform. We define the operator A appearing in (1.1) by

A : L²(Ω) → L²(D),  (Av)(θ, s) = ∫_{L(θ,s)} v(x) dx,
where L(θ, s) = {x ∈ R² : x1 cos θ + x2 sin θ = s}. We allow models of limited angle and local tomography by taking

D = {(θ, s) : θ ∈ [θ0, θ1], s ∈ [s0(θ), s1(θ)]},

where 0 ≤ θ0 < θ1 ≤ 2π and −∞ ≤ s0(θ) < s1(θ) ≤ ∞. Further, we assume that ε ∈ L²(D). We remark that A is a compact operator; see [20].
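For instance, a discrete limited angle data set of the kind described by the set D can be produced with scikit-image's Radon transform; the angular range and sampling below are illustrative choices, not the paper's measurement settings.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon

# Limited angle sinogram: projections only for theta in [theta0, theta1].
v = shepp_logan_phantom()                  # scikit-image's 400 x 400 phantom
theta0, theta1 = 0.0, 100.0                # total opening angle of 100 degrees
theta = np.linspace(theta0, theta1, 21)    # 21 projection directions
sinogram = radon(v, theta=theta)           # column j holds the projection at theta[j]
```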
2.2. Pencil beam model. Suppose we take N1 projection images with a digital detector consisting of N2 pixels. Then our data consists of integrals of v over N := N1 N2 different lines L in (2.1). Accordingly, the linear operator in (1.1) is defined as A : L²(Ω) → R^N, the measurement is a vector m ∈ R^N, and noise is modelled by a Gaussian zero-centered random vector ε taking values in R^N.
Fig. 2.1. Left: typical measurement in X-ray transmission tomography. Right: in the discrete pencil beam model the domain Ω is divided into pixels.
We remark that raw data in X-ray tomography consists of photon counts I1 that obey Poisson statistics. However, a large number of photons are usually counted, and in addition, the projection data m represents the logarithm of the count data. It has been shown in [4, 33] that statistics of such data can be reasonably well approximated with the additive Gaussian noise model. The pencil beam model needs to be discretized for practical computations. A square containing the domain Ω is divided into a lattice of disjoint pixels Ωi with i = 1, . . . , M. The attenuation map v is approximated by a constant value within each pixel:

(2.2)  v(x) ≈ ∑_{i=1}^{M} v_i χ_i(x),

where χ_i is the characteristic function of Ωi. Using (2.2), the integral (2.1) can be approximated by the weighted sum of pixel values:

(2.3)  ∫_L v(x) dx ≈ ∑_{i=1}^{M} v_i μ(Ωi ∩ L),
where μ(Ωi ∩ L) is the length of the line segment Ωi ∩ L. See Figure 2.1 for an illustration. For discussion on different discretizations for the attenuation function v(x) and approximations for the line integrals, see [14]. Furthermore, according to (2.2), the attenuation map can be identified with the coefficient vector v = (v1, v2, . . . , vM)^T ∈ R^M. Thus the discrete approximation to the operator A can be expressed as an N × M matrix with CN√M nonzero entries. We mention that computation of a discrete Radon transform can be done very effectively using algorithms given in [2] or in a matrix-free fashion utilizing graphics processing hardware; see, e.g., [41]. A sketch of how one row of this matrix can be assembled is given below.
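The following sketch computes the weights μ(Ωi ∩ L) of (2.3) for a single ray by a simple parametric (Siddon-type) grid traversal; the unit square domain, the endpoint convention, and the function name ray_row are illustrative assumptions.

```python
import numpy as np

def ray_row(p0, p1, n):
    """One row of the pencil beam matrix: intersection lengths of the ray
    from p0 to p1 (both inside [0,1]^2) with an n x n pixel grid."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    ts = [0.0, 1.0]                       # ray parameters of the segment endpoints
    for axis in range(2):                 # add parameters of all pixel edge crossings
        if abs(d[axis]) > 1e-12:
            t = (np.arange(n + 1) / n - p0[axis]) / d[axis]
            ts.extend(t[(t > 0.0) & (t < 1.0)])
    ts = np.unique(ts)
    row = np.zeros(n * n)
    for ta, tb in zip(ts[:-1], ts[1:]):
        mid = p0 + 0.5 * (ta + tb) * d    # segment midpoint identifies the pixel hit
        i, j = np.minimum((mid * n).astype(int), n - 1)
        row[i * n + j] = (tb - ta) * np.linalg.norm(d)   # mu(Omega_i ∩ L)
    return row

# example: a diagonal ray through a 4 x 4 grid
print(ray_row((0.0, 0.1), (1.0, 0.9), 4).reshape(4, 4))
```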
3. The evolution equation.

3.1. Classical level set method for inverse obstacle problems. Consider a physical parameter of the form σ = σ0 + cχ_{Ω1}, where σ0(x) is known background,
c is a constant, and the characteristic function χ_{Ω1}(x) causes a discontinuity at the boundary ∂Ω1. In inverse obstacle problems one aims to recover the set Ω1 from indirect measurements of σ. For instance, the parameter σ may be sound speed or electrical impedance, and one may measure scattered waves or voltage-to-current boundary maps, respectively. In the classical level set approach the obstacle is represented as H(Φ), where H is the Heaviside function and Φ is smooth. The boundary ∂Ω1 of the obstacle is given by the zero level set of Φ. The measurement is typically written in the form m = A(H(Φ)) =: Q(Φ). In the classical level set method the function Φ is found as the limit Φ(x) := lim_{t→∞} φ(x, t), where φ is the solution of the evolution equation

(3.1)  φ_t = −θ(φ, ∇x φ)[(DQ|φ)∗(Q(φ) − m)].
Here θ is a nonnegative function and (DQ|φ)ρ is the Gateaux derivative of Q at the point φ in direction ρ ∈ C0^∞(Ω), defined by

(3.2)  (DQ|φ)ρ = lim_{s→0+} (Q(φ + sρ) − Q(φ))/s,
and (DQ|φ)∗ is the adjoint operator of DQ|φ. The intuition behind this approach is the following. Define a cost functional

(3.3)  F0(u) = (1/2)‖A(H(u)) − m‖²_{L²(D)}

and compute

lim_{s→0+} (∂/∂s) F0(u + sρ) = ∫_Ω (DQ|u)ρ (Q(u) − m) dx = ∫_Ω ρ (DQ|u)∗(Q(u) − m) dx.

Then we have formally

∂t F0(φ) = lim_{s→0+} (∂/∂s) F0(φ + sφ_t) = −∫_Ω θ(φ, ∇x φ)[(DQ|φ)∗(Q(φ) − m)]² dx ≤ 0,
and thus lim_{t→∞} F0(φ(x, t)) = F0(Φ) is expected to be small. We refer the reader to [9] and the references therein for more details on solving inverse obstacle problems with classical level set methods.

3.2. Motivation for the new method. Consider the linear measurement (1.1) in the case that the X-ray attenuation coefficient v is smooth and differs from zero only inside a subset Ω1 ⊂ Ω. Now the operator A∗A arising from the Radon transform or pencil beam model is nonlocal, and mathematical justification of the classical level set approach described in section 3.1 seems difficult. Also, our numerical experiments suggest that the classical level set method does not give satisfactory reconstructions when applied to the tomographic data available to us. We want to design an algorithm that (i) constructs an approximation Ω2 for the subset Ω1, and
(ii) with given approximation Ω2 produces a reconstruction w that solves the Tikhonov regularization problem

w = arg min_u { (1/2)‖A(u) − m‖²_{L²(D)} + (β/2)‖∇u‖²_{L²(Ω)} },

where β > 0 is a parameter and the minimum is taken over all u satisfying

(3.4)  u|_{Ω\Ω2} ≡ 0,
(3.5)  u|_{Ω2} ∈ H0¹(Ω2) = {g ∈ L²(Ω2) : ∇g ∈ L²(Ω2), g|_{∂Ω2} = 0}.
However, goal (ii) depends on goal (i). One possibility for overcoming this problem would be to iteratively alternate between (i) and (ii) similarly to [42], but we follow another route.

3.3. Formulation of the new method. We approximate the X-ray attenuation coefficient v by w = f(Φ), where f is given by (1.3) and Φ is smooth. Note that (3.4) is achieved naturally with ∂Ω2 given by the zero level set of Φ. The measurement of X-ray projection images is now modelled by m = A(f(Φ)). In the new method the function Φ is found as the limit Φ(x) := lim_{t→∞} φ(x, t), where φ is the solution of the evolution equation
(3.6)  ∂t φ = −(A∗(A(f(φ)) − m)) + βΔφ,  (∂ν − r)φ|∂Ω = 0,  φ(x, 0) = φ0(x),
with ν the interior normal of ∂Ω, β > 0 a regularization parameter, and r ≥ 0. Compare (3.6) to (3.1) with the choice θ ≡ 1. The function w in goals (i) and (ii) of section 3.2 satisfies

(3.7)  βΔw − A∗(A(w) − m) = 0  in Ω2.
The solution of evolution equation (3.6) converges to the solution of (3.7) and simultaneously produces a useful approximation Ω2 for Ω1. Note that since the result of the evolution is nonnegative, it follows from (3.7) that w in goals (i) and (ii) is nonnegative as well. How did we come up with such a formulation? Tikhonov regularization leads to the cost functional

(3.8)  F(u) = (1/2)‖A(f(u)) − m‖²_{L²(D)} + (β/2)‖∇u‖²_{L²(Ω)}.
Computing the derivative ∂t F(u) similarly to the process in section 3.1 suggests the evolution

(3.9)  ∂t φ = −H(φ)(A∗(A(f(φ)) − m)) + βΔφ.

However, (3.9) is numerically unstable. Outside the level set Ω2(t) := {x | φ(x, t) > 0} the evolution is driven by the term βΔφ alone, pushing φ typically toward constant value zero in Ω \ Ω2. This in turn creates spurious and unstable components of the level set Ω2. Thus we drop the Heaviside function in (3.9) and arrive at (3.6). Numerical tests show that evolution (3.6) is numerically stable and gives much better reconstructions than (3.9).
4. Existence proof for the new method. In the following theorem we show that (3.6) has a strong L²(Ω) solution. We remark that similar analysis fails for the classical level set approach because using H instead of f leads to a heat equation with very singular source terms.

Theorem 4.1. Let A : L²(Ω) → L²(D) and m ∈ L²(D), where D is either a subset of R² equipped with the Lebesgue measure, or D = {1, 2, . . . , N} equipped with the counting measure. Assume φ0 ∈ W^{1,2}(Ω), r ≥ 0, and β > 0. Then the evolution equation

(4.1)  ∂t φ = −A∗(A(f(φ)) − m) + βΔφ  in Ω × R+,
(4.2)  (∂ν − r)φ|∂Ω = 0,
(4.3)  φ(x, 0) = φ0(x),

has a solution φ ∈ W¹([0, T]; L²(Ω)) ∩ L²([0, T]; H¹(Ω)) for any T > 0.

Remark. The assumptions of Theorem 4.1 are satisfied when A is the Radon transform (including limited angle and local tomography cases). Further, R^N can be identified with the space L²(D) when D = {1, 2, . . . , N} is equipped with the counting measure. This way we see that Theorem 4.1 covers the pencil beam model as well.

Proof. We consider first the equation on a finite time interval [0, T]. We denote

B(φ) = A∗A(f(φ)),  g = A∗m.
Let ψ(x, t) = e^{−ηt} φ(x, t), η ≥ 0. Since f(sa) = sf(a) for s > 0, (4.1)–(4.3) is equivalent to

(4.4)  ∂t ψ = [−ηψ − A∗A(f(ψ)) + βΔψ] + g,
(4.5)  ψ(x, 0) = φ0,  (∂ν − r)ψ|∂Ω = 0,
for t ∈ [0, T]. Next we consider this equation. Let V = W^{1,2}(Ω) be the Sobolev space and V' its dual. Then V is separable and reflexive, and the embedding V → L²(Ω) is compact. In the following, Δ is always the Laplacian defined with the Robin boundary condition (∂ν − r)φ|∂Ω = 0. This operator has a continuous extension Δ : V → V'. Let λ0 ≥ 0 be the smallest Robin eigenvalue of −Δ in Ω. We choose the value of η such that for some ε1, ε2 > 0

‖A∗A‖_{L(L²(Ω))} ≤ η + (β − ε2)λ0 − ε1.

First, since ‖f(u)‖_{L²(Ω)} ≤ ‖u‖_{L²(Ω)} and A∗A : L²(Ω) → L²(Ω) is continuous, we observe that the operator u → βΔu + B(u) is hemicontinuous V → V', that is, for any u, v ∈ V the map s → ⟨βΔ(u + sv) + B(u + sv), v⟩ is continuous from R to R. Also, it satisfies the estimate

(4.6)  ‖βΔ(u) + B(u)‖_{V'} ≤ c3 ‖u‖_V.

Because of the choice of η, we have that

(4.7)  ⟨ηu + B(u), u⟩ ≥ (ε1 − (β − ε2)λ0)‖u‖²_{L²(Ω)}.
Since Δ is defined with Robin boundary conditions, we see that

⟨(−Δu), u⟩ ≥ λ0‖u‖²_{L²(Ω)}.

Using (4.7) we see that

(4.8)  ⟨−βΔu + B(u) + ηu, u⟩ ≥ ⟨(−ε2 Δu), u⟩ + ⟨(ε2 − β)Δu + B(u) + ηu, u⟩ ≥ ε2‖∇u‖²_{L²(Ω)} + ε1‖u‖²_{L²(Ω)} ≥ min(ε1, ε2)‖u‖²_V.

Since |f(s1) − f(s2)| ≤ |s1 − s2| and A∗A : L²(Ω) → L²(Ω) is continuous, we have

(4.9)  |⟨B(u1) − B(u2), u1 − u2⟩| ≤ ‖A∗A‖_{L(L²(Ω))}‖u1 − u2‖²_{L²(Ω)},
(4.10)  ⟨(η − βΔ)(u1 − u2), u1 − u2⟩ ≥ (η + βλ0)‖u1 − u2‖²_{L²(Ω)}.

Thus the operator Au = −βΔu + B(u) + ηu is hemicontinuous and satisfies

(4.11)  ⟨A(u1) − A(u2), u1 − u2⟩ ≥ (η + βλ0 − ‖A∗A‖_{L(L²(Ω))})‖u1 − u2‖²_{L²(Ω)} ≥ ε1‖u1 − u2‖²_{L²(Ω)}.

This means by definition that A is a strictly monotone operator. Thus we have shown that A(·) is hemicontinuous and satisfies (4.6), (4.8), and (4.11). Hence all assumptions of the existence theorem [32, Prop. III.4.1] concerning quasilinear parabolic equations are valid (see also [32, Lemma II.2.1]), and we see that (4.1)–(4.3) has a unique solution in the weak sense on any time interval [0, T] and thus on t ∈ R+; that is, ψ ∈ C([0, T]; L²(Ω)) ∩ L²(0, T; V), ∂t ψ ∈ L²(0, T; V'), and for any u ∈ C([0, T]; L²(Ω)) ∩ L²(0, T; V) with u(T) = 0 we have
−∫_0^T ⟨∂t ψ(t), u(t)⟩_{V'×V} dt + ∫_0^T ⟨A(ψ(t)), u(t)⟩_{V'×V} dt = ∫_0^T ⟨g(t), u(t)⟩_{L²(Ω)×L²(Ω)} dt + ⟨φ0, u(0)⟩_{L²(Ω)×L²(Ω)}.
Next we show that ψ is a strong solution. Observe that ψ(t) is a weak solution of the linear equation

(4.12)  ∂t ψ(t) + ηψ − βΔψ(t) = g̃(t),  ψ(0) = φ0,  (∂ν − r)ψ|∂Ω = 0,

where g̃(t) = −B(ψ(t)) + g ∈ C([0, T]; L²(Ω)). Using (4.12) we see by [32, Prop. III.4.2] that ψ ∈ L^∞(0, T; V), ∂t ψ ∈ L²(0, T; L²(Ω)), and in L²(Ω) we have

∂t ψ(t) = −ηψ(t) + βΔψ(t) − B(ψ(t)) + g = −A(ψ(t)) + g  for a.e. t ∈ [0, T].
Next we show that the limit lim_{t→∞} φ(x, t) exists for r > 0 and large β.

Theorem 4.2. Let A and m be as in Theorem 4.1. Let φ be the solution of (4.1) with r > 0. Then there is β0 > 0 depending on Ω and A such that if β > β0, then the limit

Φ(x) := lim_{t→∞} φ(x, t)
exists in the topology of L²(Ω) and Φ is a solution of the equation

(4.13)  A∗A(f(Φ)) − βΔΦ = A∗m  in Ω,  (∂ν − r)Φ|∂Ω = 0.
Proof. Recall that Au = −βΔu + B(u) + ηu. Since A : V → V' is strictly monotone and hemicontinuous and satisfies (4.6) and (4.8), it follows from [32, Thm. II.2.1 and Lemma II.2.1] that the equation

(4.14)  A(Φ) = g,  Φ ∈ V,

has a unique solution. Denote S1(u) = ⟨A(u + Φ) − A(Φ), u⟩. Then S1(u − Φ) = ⟨A(u) − g, u − Φ⟩. By (4.8) we see that

(4.15)  S1(u − Φ) ≥ min(ε1, ε2)‖u − Φ‖²_V.
Now for a.e. t ∈ R+ we have for the solution ψ(t) of (4.12)

∂t(‖ψ(t) − Φ‖²_{L²(Ω)}) = 2⟨∂t ψ(t), ψ(t) − Φ⟩ = 2⟨−A(ψ(t)) + g, ψ(t) − Φ⟩ = −2S1(ψ(t) − Φ),

and, denoting ε3 = min(ε1, ε2), we see that ∂t(‖ψ(t) − Φ‖²_{L²(Ω)}) ≤ −ε3‖ψ(t) − Φ‖²_{L²(Ω)}. Thus s(t) = ‖ψ(t) − Φ‖²_{L²(Ω)} satisfies s(t) ≤ s(0) exp(−ε3 t), implying that

‖ψ(t) − Φ‖_{L²(Ω)} ≤ ‖ψ(0) − Φ‖_{L²(Ω)} exp(−ε3 t/2).

Finally, when β is large enough, we can choose η = 0. Then φ(t) = ψ(t), and the claim follows.

5. Computational results. We test the proposed method with simulated and measured projection data. The first test case is a simulated example of full angle and limited angle tomography with sparse projection data from the classical Shepp–Logan phantom. With the full angle data we study the effect of angular sampling on reconstruction quality. Results with FBP and ART are given as references. For details on FBP, see [14, 20], and for details on ART, see [1, 5, 25]. The second test case is limited angle tomography with data measured from a tooth specimen, and the third test case uses intraoral projection data from a dry skull. In these cases, reconstructions with the traditional tomosynthetic method are shown as reference. Tomosynthesis, or unfiltered backprojection, is widely used for dental imaging with few projections; see [38, 11, 39]. We use the 2D discrete pencil beam model in all computations. We discretize the evolution equation (4.1)–(4.3) using a finite difference scheme where φ(x, t_k) is approximated with the piecewise constant function
(5.1)  φ(x, t_k) ≈ ∑_{i=1}^{M} φ_i(t_k) χ_i(x)
Fig. 5.1. Simulated sparse full angle projection data from the Shepp–Logan phantom. First column from the left: the Shepp–Logan phantom. Second column: data in sinogram form. In all cases the total view angle is 180◦ . The number of projections from top to bottom is 37 (5◦ step), 19 (10◦ ), 13 (15◦ ), and 10 (20◦ ), respectively. The missing parts of the sinograms are denoted by black. Third column: reconstructions with FBP. Fourth column: reconstructions with ART. Fifth column: reconstructions with the proposed method. See Table 5.1 for relative errors of the reconstructions and the computation times.
and central differencing is used for the computation of the partial derivatives. A homogeneous Neumann boundary condition ∇φ · ν = 0 is used at the exterior boundary ∂Ω. For temporal discretization we employ an explicit Euler method. 3D reconstructions are formed as stacks of reconstructed 2D slices. The computations are carried out using MATLAB 7.1 on a modern desktop computer (3.2GHz Pentium 4 processor with 4GB random access memory). A sketch of this time stepping is given below.
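For concreteness, here is a minimal sketch (in Python rather than the MATLAB used by the authors) of the discretized evolution with explicit Euler stepping; the function name, the step size dt, and the iteration count are illustrative assumptions, while A, m, beta, and the cutoff f follow the notation above.

```python
import numpy as np

def reconstruct(A, m, shape, beta=0.1, dt=1e-3, steps=5000):
    """Explicit Euler time stepping for the evolution equation (4.1):
    phi <- phi + dt * ( -A^T (A f(phi) - m) + beta * Lap(phi) )."""
    f = lambda s: np.maximum(s, 0.0)            # cutoff function (1.3)

    def lap(phi):
        # five-point Laplacian with homogeneous Neumann boundary condition
        p = np.pad(phi.reshape(shape), 1, mode="edge")
        return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2]
                + p[1:-1, 2:] - 4.0 * p[1:-1, 1:-1]).ravel()

    phi = np.zeros(A.shape[1])                  # initial level set function
    for _ in range(steps):
        residual = A @ f(phi) - m
        phi += dt * (-(A.T @ residual) + beta * lap(phi))
    return f(phi).reshape(shape)                # reconstruction w = f(Phi), cf. (1.5)
```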
5.1. Full and limited angle tomography with sparse angular sampling. We simulate projection data using the Shepp–Logan phantom of size 256 × 256 shown in Figure 5.1. Using the conventional parallel beam CT imaging geometry, 37 one-dimensional projections from a total view angle of 180° (with 5° steps) are generated. The number of line integrals in each projection is 180, leading to a total number of data N = 37 × 180 = 6660. Additive Gaussian noise with standard deviation 3% of the maximum value of the generated projections is added to the data, leading to a signal-to-noise ratio of 25 dB. We divide the domain Ω ⊂ R² into M = 180 × 180 = 32400 regular pixels. The smoothing parameter in the evolution equation is taken to be β = 0.1. Table 5.1 shows the numbers of data we use. Figure 5.1 shows the full angle data in sinogram form and reconstructions from the respective data. The number of projections is from top to bottom 37, 19, 13, and 10, respectively. In FBP reconstructions we reduce the effects of noise by applying the Ram–Lak filter multiplied by the Hamming window in the frequency domain and use nearest neighbor interpolation
Table 5.1. Measurement parameters, relative L²-errors, and computation times of the reconstructions given in Figure 5.1.

Number of projections | Angular step size | N    | Error of FBP | Error of ART | Error of new method
37                    | 5°                | 6660 | 60.7 %       | 44.4 %       | 48.8 %
19                    | 10°               | 3420 | 85.9 %       | 52.4 %       | 54.3 %
13                    | 15°               | 2340 | 107.4 %      | 58.1 %       | 57.7 %
10                    | 20°               | 1800 | 127.0 %      | 62.1 %       | 60.5 %

Number of projections | Computation time of FBP | Computation time of ART | Computation time of new method
37                    | 0.2 s                   | 523.1 s                 | 49.6 s
19                    | 0.1 s                   | 145.1 s                 | 29.4 s
13                    | 0.1 s                   | 57.8 s                  | 22.0 s
10                    | 0.1 s                   | 26.1 s                  | 17.5 s
Fig. 5.2. Simulated limited angle projection data from the Shepp–Logan phantom. First column from the left: the Shepp–Logan phantom. Second column: data in sinogram form. The data consisted of 21 projections with 5° steps, leading to a total opening angle of 100°. The missing parts of the sinogram are denoted by black. Third column: reconstruction with FBP. Fourth column: reconstruction with ART. Fifth column: reconstruction with the proposed method. See Table 5.2 for relative errors of the reconstructions and the computation times.

Table 5.2. Measurement parameters, relative L²-errors, and computation times of the limited angle reconstructions given in Figure 5.2. The data consisted of 21 projections from a total opening angle of 100° (number of data N = 3780).

                 | FBP    | ART    | New method
Error            | 90.3 % | 65.6 % | 61.6 %
Computation time | 0.1 s  | 59.0 s | 27.8 s
in the backprojection process. In the ART reconstructions, the iteration is terminated once the least squares residual reaches the expected level of the measurement noise. Table 5.1 contains the computation times and the relative L²-errors of the reconstructions shown in Figure 5.1. Figure 5.2 shows the results from the limited angle data, which consisted of 21 projections with a total opening angle of 100° (5° steps). The relative L²-errors and computation times of the reconstructions are tabulated in Table 5.2.

5.2. Sparse limited angle data from a tooth specimen. We acquire projection images using full angle cone beam CT geometry, so full angle reconstructions are available as ground truth for the limited angle reconstructions. We use a commercial intraoral X-ray detector Sigma and a dental X-ray source Focus.¹ The detector is based on charge coupled device technology. The size of the

¹Sigma and Focus are registered trademarks of PaloDEx Group.
Fig. 5.3. Left: the experimental setup. The X-ray source is on the left, and the detector is attached to the camera stand on the right. The tooth phantom is positioned on the rotating platform. Right: illustration of the projection geometry. Circles denote the source locations for the full angle data (23 projections from total view angle of 187°). The projections used in limited angle computations (10 projections from view angle of 76°) are denoted by black dots within the circles. For clarity, the location and alignment of the detector with respect to the source are depicted for only one source location.
Fig. 5.4. Left: projection radiograph of the tooth specimen. Note that the radiograph is shown with inverted color map. Middle: one 2D slice of the (transformed) projection data in sinogram form. The projections are collected from a total view angle of 187◦ (23 projections with 8.5◦ steps). Right: the part of data used in the limited angle reconstructions (10 projections from a total view angle of 76◦ ). The missing part of the sinogram is denoted by black.
imaging area is 34mm × 26mm, and the resolution is 872 × 664 pixels with pixel size 0.039mm × 0.039mm. Signal-to-noise ratio in our experiments is 34 dB. In the experimental setup, the detector and the X-ray source are attached into fixed positions such that the source direction is normal to the detector array. The distance from the focal spot to the detector is 840mm. The tooth specimen is placed on a rotating platform so that projections from different angles can be obtained. The distance from the center of rotation to the detector is 56mm. Projection angles are read from a millimeter scale paper attached to the rotating platform. The measurement setup is illustrated in Figure 5.3. We take 23 projection images from a total view angle of 187◦ (with 8.5◦ steps) and transform them into projection data of the form (2.1). We use two angular samplings: 23 projections from a total view angle of 187◦ and 10 projections from a total view angle of 76◦ . The size of the data vector m for each 2D problem is N = 664 × 23 = 15272 in the full angle case and N = 664 × 10 = 6640 in the limited angle case. Figure 5.4 shows one of the projection radiographs and one 2D slice of the projection data in sinogram form.
Fig. 5.5. Left column: reconstructed slices from full angle data consisting of 23 projection images from a total view angle of 187◦ . Center column: backprojected reconstructions from limited angle data (10 projections from a total view angle of 76◦ ). Right column: reconstructions with the proposed method from the same limited angle data. Relative L2 -errors from top to bottom: 29%, 30%, and 40%.
We divide the 26mm × 26mm square domain Ω ⊂ R² into M = 166 × 166 = 27556 regular pixels, leading to a pixel size of ∼0.16mm × 0.16mm. The smoothing parameter in the evolution equation is β = 0.1. Results are shown in Figures 5.5 and 5.6.

5.3. Sparse intraoral data from a dry skull. We model intraoral X-ray imaging by placing the detector in a fixed position inside the mouth of a dry skull right behind the teeth. A metal reference ball is attached in front of the teeth at a distance of 14mm from the detector for calibration. We move the X-ray source on an approximately circular arc at a distance of ∼590mm from the detector. A photograph and a schematic illustration of the experimental setup are shown in Figure 5.7. We take seven projection images with approximately equal angular steps from a total view angle of 60°. This represents roughly the maximum view angle that can be used in practice. Projection angles are estimated based on the shift of the reference ball in the images. Figure 5.8 shows one projection radiograph from this data set and one slice of the data in sinogram form. Note that the sinogram is truncated from the upper side (i.e., it does not go to zero on the upper side). Thus, in addition to being a limited angle case, the problem contains features of a local tomography problem [34, 26]. The number of data for each 2D problem is N = 872 × 7 = 6104, and the smoothness parameter in the evolution equation is β = 0.1. The domain Ω ⊂ R² is
Fig. 5.6. Vertical slices from 3D reconstructions obtained as stacks of 2D reconstructions. Left column: reconstruction from full angle data consisting of 23 projection images from a total view angle of 187◦ . Center column: backprojected reconstruction from limited angle data (10 projections from a total view angle of 76◦ ). Right column: reconstruction with the proposed method from the same limited angle data.
Fig. 5.7. Left: geometry for intraoral measurements. The detector is in fixed position inside the patient’s mouth. The source locations are denoted by black dots (7 projections from a total view angle of 60◦ ). Right: experimental setup.
a 61mm × 25mm rectangle divided into M = 393 × 160 = 62880 regular pixels with size ∼0.16mm × 0.16mm. The results for the dry skull case are shown in Figure 5.9.

6. Conclusion. We introduce a novel reconstruction method for tomographic problems. Our approach is inspired by level set methods. The algorithm is given in the form of a nonlinear evolution equation, for which we prove existence of solutions and convergence to a limit function considered as the reconstruction.
Fig. 5.8. Left: Intraoral projection radiograph of the head phantom. The location of the image of the metal ball used to estimate the projection angles is indicated. Right: One 2D slice of data in sinogram form. The projections are collected from a total view angle of 60◦ . The black parts denote the missing parts of the sinogram.
Fig. 5.9. Approximate 3D reconstruction from limited angle projection data from the dry skull. The 3D reconstruction was obtained as a stack of 2D reconstructions. The data consists of 7 intraoral projection images collected from a total view angle of 60◦ . Left column: vertical slices from a backprojected reconstruction. Right column: respective slices with the proposed method.
Reconstructions computed from simulated full angle and limited angle data show that the proposed method clearly decreases the reconstruction error compared to FBP, and the reconstruction errors are similar to those of ART. The computation times of the new method are significantly smaller than those of ART. Further, we perform realistic experiments involving specimens of dental tissue. The new method gives excellent results in all test cases as judged by visual inspection. Diagnostically crucial information, such as the position of tooth roots, is more clearly visible in the reconstructions using the new method than in tomosynthetic slices. The new method is easy to implement, and most of the computational effort goes into linear projection and backprojection operations (for which highly optimized hardware implementations are available). One drawback of the proposed method is that the reconstruction f(Φ) is always a continuous function, although the target tissue is known to have jumps in the attenuation coefficient.
REFERENCES
[1] A. H. Andersen, Algebraic reconstruction in CT from limited views, IEEE Trans. Med. Imaging, 8 (1989), pp. 50–55.
[2] A. Averbuch and Y. Shkolnisky, 3D Fourier based discrete Radon transform, Appl. Comput. Harmon. Anal., 15 (2003), pp. 33–69.
[3] X. L. Battle, G. S. Cunningham, and K. M. Hanson, 3D tomographic reconstruction using geometrical models, in Proceedings of SPIE 3034, Medical Imaging: Image Processing, K. M. Hanson, ed., 1997, pp. 346–357.
[4] C. Bouman and K. Sauer, A generalized Gaussian image model for edge-preserving MAP estimation, IEEE Trans. Image Process., 2 (1993), pp. 296–310.
[5] C. Byrne, Block-iterative interior point optimization methods for image reconstruction from limited data, Inverse Problems, 16 (2000), pp. 1405–1419.
[6] E. J. Candès, J. Romberg, and T. Tao, Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information, IEEE Trans. Inform. Theory, 52 (2006), pp. 489–509.
[7] A. H. Delaney and Y. Bresler, Globally convergent edge-preserving regularized reconstruction: An application to limited-angle tomography, IEEE Trans. Image Process., 7 (1998), pp. 204–221.
[8] J. T. Dobbins and D. J. Godfrey, Digital x-ray tomosynthesis: Current state of the art and clinical potential, Phys. Med. Biol., 48 (2003), pp. R65–R106.
[9] O. Dorn and D. Lesselier, Level set methods for inverse scattering, Inverse Problems, 22 (2006), pp. R67–R131.
[10] H. Feng, W. C. Karl, and D. A. Castañón, A curve evolution approach to object-based tomographic reconstruction, IEEE Trans. Image Process., 12 (2003), pp. 44–57.
[11] D. G. Grant, Tomosynthesis: A three-dimensional radiographic imaging technique, IEEE Trans. Biomedical Engrg., 19 (1972), pp. 20–28.
[12] K. M. Hanson, G. S. Cunningham, G. R. Jennings, and D. R. Wolf, Tomographic reconstruction based on flexible geometric models, in Proceedings of the IEEE International Conference on Image Processing (ICIP-94), 1994, pp. 145–147.
[13] K. M. Hanson and G. W. Wecksung, Bayesian approach to limited-angle reconstruction in computed tomography, J. Opt. Soc. Amer., 73 (1983), pp. 1501–1509.
[14] A. C. Kak and M. Slaney, Principles of Computerized Tomographic Imaging, Classics Appl. Math. 33, SIAM, Philadelphia, 2001.
[15] V. Kolehmainen, S. Siltanen, S. Järvenpää, J. P. Kaipio, P. Koistinen, M. Lassas, J. Pirttilä, and E. Somersalo, Statistical inversion for medical X-ray tomography with few radiographs: II. Application to dental radiology, Phys. Med. Biol., 48 (2003), pp. 1465–1490.
[16] J. Kybic, T. Blu, and M. Unser, Variational approach to tomographic reconstruction, in Proceedings of SPIE (San Diego, 2001), Vol. 4322, Medical Imaging 2001: Image Processing, Milan Sonka, Kenneth M. Hanson, eds., 2001, pp. 30–39.
[17] M. Lassas and S. Siltanen, Can one use total variation prior for edge-preserving Bayesian inversion?, Inverse Problems, 20 (2004), pp. 1537–1563.
[18] J. Lie, M. Lysaker, and X.-C. Tai, A binary level set model and some applications to Mumford–Shah image segmentation, IEEE Trans. Image Process., 15 (2006), pp. 1171–1181.
[19] A. Mohammad-Djafari and K. Sauer, Shape reconstruction in x-ray tomography from a small number of projections using deformable models, in Proceedings of the 17th International Workshop on Maximum Entropy and Bayesian Methods (MaxEnt97), Boise, ID, 1997.
[20] F. Natterer, The Mathematics of Computerized Tomography, John Wiley & Sons, Chichester, UK, Teubner, Stuttgart, 1986.
[21] C.-D. Nguyen and H. W. Hoppe, Amorphous surface growth via a level set approach, Nonlinear Anal., 66 (2007), pp. 704–722.
[22] S. Osher and R. Fedkiw, Level Set Methods and Dynamic Implicit Surfaces, Springer, New York, 2003.
[23] S. Osher and F. Santosa, Level set methods for optimization problems involving geometry and constraints. I. Frequencies of a two-density inhomogeneous drum, J. Comput. Phys., 171 (2001), pp. 272–288.
[24] M. Persson, D. Bone, and H. Elmqvist, Total variation norm for three-dimensional iterative reconstruction in limited view angle tomography, Phys. Med. Biol., 46 (2001), pp. 853–866.
[25] C. Popa and R. Zdunek, Kaczmarz extended algorithm for tomographic image reconstruction from limited data, Math. Comput. Simulation, 65 (2004), pp. 579–598.
[26] A. G. Ramm and A. I. Katsevich, The Radon Transform and Local Tomography, CRC Press, Boca Raton, FL, 1996.
[27] R. M. Ranggayyan, A. T. Dhawan, and R. Gordon, Algorithms for limited-view computed tomography: An annotated bibliography and a challenge, Appl. Optics, 24 (1985), pp. 4000–4012.
[28] M. Rantala, S. Vänskä, S. Järvenpää, M. Kalke, M. Lassas, J. Moberg, and S. Siltanen, Wavelet-based reconstruction for limited angle X-ray tomography, IEEE Trans. Med. Imaging, 25 (2006), pp. 210–217.
[29] F. Santosa, A level-set approach for inverse problems involving obstacles, ESAIM Control Optim. Calc. Var., 1 (1996), pp. 17–33.
[30] K. Sauer, S. James, Jr., and K. Klifa, Bayesian estimation of 3-D objects from few radiographs, IEEE Trans. Nucl. Sci., 41 (1994), pp. 1780–1790.
[31] J. A. Sethian, Level Set Methods and Fast Marching Methods, Cambridge University Press, Cambridge, UK, 1999.
[32] R. E. Showalter, Monotone Operators in Banach Space and Nonlinear Partial Differential Equations, AMS, Providence, RI, 1997.
[33] S. Siltanen, V. Kolehmainen, S. Järvenpää, J. P. Kaipio, P. Koistinen, M. Lassas, J. Pirttilä, and E. Somersalo, Statistical inversion for medical x-ray tomography with few radiographs: I. General theory, Phys. Med. Biol., 48 (2003), pp. 1437–1463.
[34] K. T. Smith and F. Keinert, Mathematical foundations of computed tomography, Appl. Optics, 24 (1985), pp. 3950–3957.
[35] B. Song and T. F. Chan, Fast Algorithm for Level Set Segmentation, UCLA CAM Report 02-68, University of California, Los Angeles, CA, 2002.
[36] J. S. Suri, K. Liu, S. Singh, S. N. Laxminarayan, X. Zeng, and L. Reden, Shape recovery algorithms using level sets in 2-D/3-D medical imagery: A state-of-the-art review, IEEE Trans. Inf. Technol. Biomed., 6 (2002), pp. 8–28.
[37] R. Villegas, O. Dorn, M. Moscoso, M. Kindelan, and F. J. Mustieles, Simultaneous characterization of geological shapes and permeability distributions in reservoirs using the level set method, in Proceedings of the SPE Europec/EAGE Annual Conference and Exhibition, Vienna, 2006, pp. 1–12.
[38] R. L. Webber and R. A. Horton, Method and System for Creating Three-Dimensional Images Using Tomosynthetic Computed Tomography, U.S. Patent 6,289,235, Wake Forest University, Winston-Salem, NC, September 11, 2001.
[39] R. L. Webber, R. A. Horton, D. A. Tyndall, and J. B. Ludlow, Tuned-aperture computed tomography (TACT). Theory and application for three-dimensional dento-alveolar imaging, Dentomaxillofac. Radiol., 26 (1997), pp. 53–62.
[40] R. T. Whitaker and V. Elangovan, A direct approach to estimating surfaces in tomographic data, Med. Image Anal., 6 (2002), pp. 235–249.
[41] F. Xu and K. Mueller, RapidCT: Acceleration of 3D computed tomography on GPUs, in Proceedings of the ACM Workshop on General-Purpose Computing on Graphics, 2004, p. C-13.
[42] D. F. Yu and J. A. Fessler, Edge-preserving tomographic reconstruction with nonlocal regularization, IEEE Trans. Med. Imaging, 21 (2002), pp. 159–173.
c 2008 Society for Industrial and Applied Mathematics
SIAM J. SCI. COMPUT. Vol. 30, No. 3, pp. 1430–1458
IMPROVEMENT OF SPACE-INVARIANT IMAGE DEBLURRING BY PRECONDITIONED LANDWEBER ITERATIONS∗ PAOLA BRIANZI† , FABIO DI BENEDETTO† , AND CLAUDIO ESTATICO‡ Abstract. The Landweber method is a simple and flexible iterative regularization algorithm, whose projected variant provides nonnegative image reconstructions. Since the method is usually very slow, we apply circulant preconditioners, exploiting the shift invariance of many deblurring problems, in order to accelerate the convergence. This way reasonable reconstructions can be obtained within a few iterations; the method becomes competitive and more robust than other approaches that, although faster, sometimes lead to lower accuracy. Some theoretical analysis of convergence is given, together with numerical validations. Key words. Landweber, two-level Toeplitz and circulant matrices, preconditioning, regularization AMS subject classifications. 65F22, 65F10, 45Q05, 15A18 DOI. 10.1137/050636024
1. Introduction. Image deblurring is the process of correcting degradations from a detected image. In a first analysis [4], the process of image formation is described by a Fredholm operator of the first kind; in many applications the blurring system is assumed to be space-invariant, so that the mathematical model is the following:

(1.1)  g(x, y) = ∫_{R²} K(x − θ, y − ξ) f∗(θ, ξ) dθ dξ + ω(x, y),
where f∗ is the (true) input object, K is the space-invariant integral kernel of the operator, also called the point spread function (PSF), ω is the noise which arises in the process, and g is the observed data. The image restoration problem is the inversion of (1.1): Given the observed data g, we want to recover (an approximation of) the true data f∗. Its discrete version requires one to invert a linear system, typically of very large size and very sensitive to data error, due to the ill-posed nature of the continuous problem [16]. Space invariance leads to strong algebraic structures in the system matrix; depending on the boundary conditions enforced in the discretization, we find circulant, Toeplitz, or even more complicated structures related to sine/cosine transforms (see [33] for details). Exploiting structures in the inversion algorithm is necessary to face computational issues. The problem of noise sensitivity is usually addressed by using regularization methods, where a suitable (sometimes more than one) parameter controls the degree of bias in the computed solutions. There are several techniques in the literature (such as Tikhonov [16] or truncated SVD [22]), but in large-scale problems the main choice is given by iterative regularization algorithms, where the parameter is represented by the ∗ Received by the editors July 14, 2005; accepted for publication (in revised form) July 23, 2007; published electronically April 9, 2008. This work was partially supported by MIUR, grants 2002014121, 2004015437, and 2006017542. http://www.siam.org/journals/sisc/30-3/63602.html † Dipartimento di Matematica, Università di Genova, Via Dodecaneso 35, 16146 Genova, Italy (
[email protected],
[email protected]). ‡ Dipartimento di Matematica e Informatica, Universit` a di Cagliari, Via Ospedale 72, 09124 Cagliari, Italy (
[email protected]).
number of iterations: The method works if an early stop prevents the reconstruction of noisy components in the approximated solution [14, 4]. The simplest iterative technique in this class is the Landweber method [27], proposed in 1951 but in agreement with an older work of Cimmino (see the historical overview in [2]); besides its easy implementation, this method presents very good regularization and robustness features. An early and very comprehensive analysis of the method is contained in [34]; a wide variety of even more recent applications can be found in [3, 6, 5, 29, 38]. In section 6 we report further details from the related literature.

In many real problems, the use of a priori information is basic for obtaining a substantial improvement in reconstructions; an important instance in imaging is taking into account nonnegativity constraints. In the past years more attention has been paid to faster Krylov methods such as conjugate gradient applied to normal equations (CGLS) [18] or GMRES [8], but, unfortunately, these methods do not provide nonnegative iterates, independently of the initial guess. Although the recent literature has proposed specific approaches to enforce sign constraints (see, e.g., [21]), the Landweber method allows for a straightforward extension in order to do so, leading to the projected Landweber method [12]. On the other hand, its main disadvantage is that convergence may be very slow in practical applications (see the bibliographic notes in section 6).

In this paper we aim to overcome this disadvantage of the Landweber method by proposing an acceleration technique specifically designed for the space-invariant setting. Following a general idea introduced in [34, 31], we study the effect on this method of structure-based preconditioning techniques recently investigated for conjugate gradient iterations [20, 15]. We prove that such preconditioners can improve the convergence speed of the Landweber method (10 to 20 iterations are often sufficient to obtain a reasonable reconstruction), preserving its regularization capabilities; this way the method becomes more competitive with respect to other algorithms from a computational point of view. The same considerations could be extended to other preconditioning proposals [19, 24]. We stress that the removal of computational disadvantages allows us to emphasize the advantages of Landweber in comparison to other iterative methods:
• simplicity (we are able to give a formal proof of convergence and regularization behavior for the nonprojected version);
• flexibility (sign or even other constraints are easily incorporated; there are several parameters at our disposal, so a fine-tuning can be performed according to time/accuracy demands);
• robustness (little sensitivity to the inaccurate choice of parameters).

It is worth mentioning that the Landweber idea has applications in nonlinear inverse problems, too; in this context, it has been successfully applied to many real problems due to its robustness and strong regularization capabilities (see the survey in [13] and the references therein).

The paper is organized as follows. In section 2 we introduce the Landweber method, the circulant preconditioning technique (section 2.1), and a first convergence analysis in the simplest case of periodic boundary conditions (section 2.2). In section 3 we use this analysis to discuss the choice of parameters which define the preconditioned method.
In section 4 we show how a convergence analysis can be performed without the simplifying assumptions made in [34], by developing the case study of Dirichlet boundary conditions. Numerical results are presented in section 5, and final remarks
are given in section 6. Technical details on convergence for suitable parameter values are given in the appendix.

2. Landweber method and preconditioning. The discretization of (1.1), with image size n = (n_1, n_2), reads as

g = Af∗ + ω,

where g, f∗, ω represent the column-ordered vectors of the corresponding quantities and the matrix A discretizes the kernel K [4]. In order to enforce the same finite length N = n_1 n_2 on all of the vectors g, f∗, ω, appropriate “boundary conditions” must be applied; for an exhaustive survey of possible choices, see [33]. This way A is a square N × N matrix having a multilevel structure depending on the specific choice; for instance, A is a block Toeplitz matrix with Toeplitz blocks in the case of zero (Dirichlet) boundary conditions, and A is a block circulant matrix with circulant blocks if periodic boundary conditions are assumed.

In the discrete setting, given the blurred and noisy image g, we want to recover a suitable approximation f of the true image f∗ by computing a regularized solution of the least squares problem min_f ‖Af − g‖_2 [4]. Because the continuous problem is known to be ill-posed, the matrix A has ill-determined rank: its smallest singular values accumulate to zero as N increases.

In this paper we deal with the Landweber method [27], which is the following iterative method for solving the normal equation A∗Af = A∗g. Let f_0 be an arbitrarily chosen initial guess; as we will see later, a recommended choice is f_0 ≡ 0. Compute, for k = 0, 1, 2, ..., the iterate

(2.1)  f_{k+1} = f_k + τ(A∗g − A∗Af_k),
where τ is a fixed value which should belong to (0, 2/‖A∗A‖) in order to ensure convergence along every direction.

The Landweber method (2.1) can be studied in several ways. It corresponds to the method of successive approximations for the computation of a fixed point of the operator G(f) = f + τ(A∗g − A∗Af). Moreover, it is the simplest method which returns the minimum point of the convex functional H(f) = (1/2)‖Af − g‖_2², since A∗g − A∗Af_k = −∇H(f_k) is the steepest descent direction. By induction, it is simple to verify that

f_{k+1} = τ Σ_{i=0}^{k} (I − τA∗A)^i A∗g + (I − τA∗A)^{k+1} f_0.
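For concreteness, the basic iteration (2.1) takes only a few lines of code. The following NumPy fragment is a minimal sketch under our own naming, assuming a dense matrix A (in deblurring practice the products by A and A∗ would instead be applied via FFTs):

```python
import numpy as np

def landweber(A, g, tau, n_iter, f0=None):
    """Plain Landweber iteration (2.1): f_{k+1} = f_k + tau*(A* g - A* A f_k)."""
    f = np.zeros(A.shape[1]) if f0 is None else f0.copy()
    Atg = A.T @ g                            # A* g is fixed across iterations
    for _ in range(n_iter):
        f = f + tau * (Atg - A.T @ (A @ f))  # steepest descent step on H(f)
    return f

# tau must belong to (0, 2/||A*A||); for instance,
# tau = 1.0 / np.linalg.norm(A, 2)**2 is always admissible.
```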
The Landweber algorithm belongs to the class of Krylov methods [4]. If we consider the fixed initial guess f_0 ≡ 0, the iteration can be written as f_{k+1} = Q_{k,τ}(A∗A) A∗g, where Q_{k,τ} is the polynomial of degree k defined as Q_{k,τ}(t) = τ P_k(τt), with

(2.2)  P_k(s) = Σ_{i=0}^{k} (1 − s)^i = Σ_{i=0}^{k} \binom{k+1}{i+1} (−s)^i = (1 − (1 − s)^{k+1})/s

if s ≠ 0, and P_k(0) = k + 1. The method is linear, since the polynomial Q_{k,τ} does not depend on g. We remark that, if t ∈ (0, 2/τ) ∩ (0, ‖A∗A‖], then Q_{k,τ}(t) → t^{−1} (k → +∞), and, if t ∈ [0, 2/τ) ∩ [0, ‖A∗A‖], then |tQ_{k,τ}(t)| = 1 − (1 − tτ)^{k+1} ≤ 1. These two latter properties of Q_{k,τ} state that the Landweber method is a continuous regularization algorithm, where the number of iterations k plays the role of regularization parameter [14, Theorem 6.1].
Basically, the first iterations of the method filter out the components of the data mainly corrupted by noise; hence, an early stop of the deblurring process improves the stability and gives a good noise filtering. Notice that Q_{k,τ}(t) ≥ Q_{1,τ}(‖A∗A‖) = τ(2 − τ‖A∗A‖) > 0 for all t ∈ [0, ‖A∗A‖], which can be useful to improve the numerical stability.

A wide analysis of the convergence properties of the method was first carried out more than thirty years ago [34]. Indeed, by exploiting its simple formulation, the behavior of the iterations along each eigendirection can be successfully assessed. Here, in order to briefly study the regularization properties of the algorithm, we analyze the convergence of Q_{k,τ}(t) in a (right) neighborhood of 0. The function Q_{k,τ}(t) is a polynomial of degree k such that

(2.3)  Q_{k,τ}(0) = τ(k + 1),   Q′_{k,τ}(0) = −(1/2)τ²(k + 1)k.
By continuity arguments, the values of Q_{k,τ}(t) are bounded by τ(k + 1) in a right neighborhood of 0. The “level” of regularization of the kth iteration is summarized by the values Q_{k,τ}(0) = O(k) and Q′_{k,τ}(0) = O(k²), with Q′_{k,τ}(0) < 0 for k ≥ 1. At the kth iteration, the approximation of the largest eigenvalues of the Moore–Penrose generalized inverse of A∗A, that is, the approximation of the reciprocals of the smallest nonnull eigenvalues of A∗A, is bounded by τ(k + 1) and decreases at a rate O(k²). Furthermore, the kth iteration of the Landweber algorithm has basically the same regularization effects as the Tikhonov regularization method with regularization parameter α = (τ(k + 1))^{−1} > 0, where f_α = (A∗A + αI)^{−1} A∗g is Tikhonov's α-regularized solution of the normal equations [14].

The Landweber method is a linear regularization algorithm, which admits a modification known as the projected Landweber method, very useful to solve inverse problems where some specific constraints on the solution play an important role. For example, the nonnegativity constraint f∗ ≥ 0 is very common in image deblurring, whereas many classical regularization methods do not ensure any sign property for the computed reconstructions. The projected variant consists of the following simple modification of (2.1):
(2.4)  f_{k+1} = P_+[f_k + τ(A∗g − A∗Af_k)],
where P_+ is the projection onto the nonnegative cone. This leads to a nonlinear algorithm, for which the theoretical understanding is not complete [12]; it is proved that the iterates converge, for exact data, to a minimizer of ‖Af − g‖_2 among nonnegative vectors. Other important observed properties are just conjectures at this moment: we have numerical evidence of semiconvergence for noisy data, and the natural initial guess f_0 = 0 seems to provide the convergence of f_k to the least squares nonnegative solution having minimal norm [31]. Because of this lack of effective mathematical tools for investigating the convergence, the projected version of the method will not be considered in the following theoretical analysis. Nevertheless, the numerical experiments of section 5 will concern both the projected and the nonprojected variants.
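In code, the projected variant (2.4) differs from the plain sketch above by a single componentwise clipping; again this is only a hedged dense-matrix illustration:

```python
import numpy as np

def projected_landweber(A, g, tau, n_iter):
    """Projected variant (2.4): map each iterate back to the nonnegative cone."""
    f = np.zeros(A.shape[1])
    Atg = A.T @ g
    for _ in range(n_iter):
        f = np.maximum(f + tau * (Atg - A.T @ (A @ f)), 0.0)  # P_+ at every step
    return f
```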
It is interesting to make a comparison with the widely used CGLS method [18]. The CGLS method is a nonlinear regularization algorithm, and its kth iteration is f_{k+1} = P_{k,g}(A∗A) A∗g, provided that f_0 ≡ 0 as before. Here P_{k,g} = P_{k,g}(t) is a polynomial of degree k which depends on the input data g. The value P_{k,g}(0), which mainly controls the level of regularization at the kth iteration, is usually much greater than k + 1 [18]. This implies that the CGLS method is faster than the Landweber one. This is confirmed by recalling that the CGLS method is an optimal Krylov method, in the sense that the error at any iteration is minimized among all of the Krylov polynomials. On the other hand, in the absence of a reliable stopping rule, the fast convergence of the CGLS method may be a drawback in image deblurring, since it can give rise to a fast amplification of the components of the restored image f_k which are related to the noise on the input data [17]. We can summarize that the regularization of the Landweber method is high, whereas the convergence speed is low. In the following, our aim is to improve its convergence speed without losing its very favorable regularization capabilities.

2.1. Circulant regularizing preconditioners. As already noticed, the discrete image deblurring system A∗Af = A∗g has ill-determined rank, since the continuous problem is ill-posed. The Landweber scheme is a suitable algorithm for solving that system, since it is a very effective regularization method. The drawback is that the method is often quite slow; in several applications, such as astronomical image deblurring, thousands of iterations could be necessary. Here we improve the convergence speed by means of preconditioning techniques. Basically, in order to speed up the convergence of any iterative method, a preconditioner is often taken as an approximation of the (possibly generalized) inverse of the system matrix. Following [34], the N × N linear system A∗Af = A∗g is replaced by the algebraically equivalent system
(2.5)  DA∗Af = DA∗g,
where the N × N matrix D is the preconditioner which approximates the generalized inverse of A∗A. This way the preconditioned version of the method reads as follows:
(2.6)  f_{k+1} = P_+[τDA∗g + (I − τM)f_k],
where M := DA∗A is the preconditioned matrix [31]. We stress that the least squares problem underlying (2.5) has been changed by inserting the preconditioner; therefore, the iteration (2.6) does not necessarily converge to the same limit of (2.4) with exact data. In the case of real data, as observed in [31, page 449], we cannot even expect in principle the same behavior as the nonpreconditioned method. From now on we will consider for the theoretical discussion just the nonprojected variant, for which the operator P_+ does not appear on the right-hand side of (2.6). In this case f_{k+1} depends linearly on f_k, whence we obtain the closed formula for the case f_0 = 0

(2.7)  f_k = G_k A∗g,   G_k := τ P_{k−1}(τM) D,
P_k(t) being the polynomial introduced in (2.2). The new limitation for τ becomes 0 < τ < 2/‖DA∗A‖.
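The preconditioned iteration (2.6) admits an equally short sketch. As before, this is only an illustrative dense-matrix rendering with our own naming; in the BCCB setting introduced below, every product would instead be carried out in O(N log N) operations via two-dimensional FFTs:

```python
import numpy as np

def prec_projected_landweber(A, D, g, tau, n_iter):
    """Iteration (2.6): f_{k+1} = P_+[ tau*D A* g + (I - tau*D A* A) f_k ]."""
    f = np.zeros(A.shape[1])
    DAtg = D @ (A.T @ g)                   # D A* g is fixed across iterations
    for _ in range(n_iter):
        f = tau * DAtg + f - tau * (D @ (A.T @ (A @ f)))
        np.maximum(f, 0.0, out=f)          # drop this line for the nonprojected variant
    return f
```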
If B denotes an approximation of the matrix A, we construct the preconditioner D by computing D = (B∗B)†, where the symbol † denotes the Moore–Penrose generalized inverse. Since for space-invariant problems A has a two-level Toeplitz-like structure, we look for B inside the matrix algebra C of block circulant matrices with circulant blocks (BCCB). The BCCB matrices are very useful in the Toeplitz context since they provide fast diagonalization and matrix-vector multiplication within O(N log N) operations, via the two-dimensional fast Fourier transform (FFT). From now on, we consider the T. Chan optimal approximation B = B_opt of the system matrix A; that is, B_opt solves the following minimization problem [10]:
(2.8)  B_opt = arg min_{X∈C} ‖A − X‖_F,
where ‖·‖_F is the Frobenius norm, ‖G‖_F² = Σ_{i,j} |(G)_{i,j}|². Since B_opt is the best approximation of A in the space C of the BCCB matrices with respect to the Frobenius norm, it “inherits” the spectral distribution of A. This means that, if A has ill-determined rank, the same will hold for B_opt. Hence the solution of the preconditioned system DA∗Af = DA∗g with D = (B∗_opt B_opt)† leads to worse numerical results, due to amplification of the components related to the noise of g. Unlike B_opt, any useful preconditioner for deblurring should approximate the system matrix only in the subspace less sensitive to data errors.

According to [20], this so-called signal subspace corresponds to the largest singular values of A, in the sense that it is spanned by the associated singular vectors. On the other hand, the noise subspace is related to the smallest singular values and represents the components where the direct reconstruction is more contaminated by data errors. In the Toeplitz deblurring context, the problem of locating these two fundamental subspaces was first studied by Hanke, Nagy, and Plemmons [20]. Having fixed a small real value α > 0 called the truncation parameter, the signal space can be roughly identified with the eigendirections corresponding to the eigenvalues of B_opt greater than α. If the parameter is well chosen, we can believe that the noise space falls into the directions related to the eigenvalues of B_opt with an absolute value smaller than α. Therefore the authors proposed in [20] to set all of these eigenvalues equal to 1; in that way, the convergence speed increases in the signal space only, without fast amplification of the noisy components.

On these grounds, we now extend the approach of [20] by providing a family of different filtering procedures. Given a BCCB matrix G, let λ_1(G), λ_2(G), ..., λ_N(G) denote its eigenvalues with respect to the fixed basis of eigenvectors collected into the columns of the two-dimensional unitary Fourier matrix. If α > 0 is a truncation parameter, we define the regularizing BCCB preconditioner D = D_α as the matrix whose eigenvalues λ_1(D_α), λ_2(D_α), ..., λ_N(D_α) are such that

(2.9)  λ_i(D_α) = F_α(λ_i(B∗_opt B_opt)),

where the function F_α : R_+ → R_+ is one of the eight filters of Table 2.1. Notice that the central column of the table prescribes the eigenvalues along the noise space, and the last one refers to the signal space. The filter I comes from Tikhonov regularization [4], and the filter III is that of Hanke, Nagy, and Plemmons [20]. The filter IV was introduced by Tyrtyshnikov, Yeremin, and Zamarashkin in [37], and VIII is the Showalter filter for asymptotic regularization [14]. For the polynomial filter V, we consider an integer p > 0. It is worth noticing two important properties common to all of these filters:
• if the truncation parameter α is small enough, then λ_i(D_α) approximates the reciprocal of λ_i(B∗_opt B_opt) on the signal space;
• since F_α is a bounded function in all eight cases, the eigenvalues of D_α have a uniform upper bound independent of the dimension.
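Returning for a moment to (2.8): the minimizer over circulants has a well-known explicit averaging formula, which the following hedged one-level sketch (our own naming) recalls; the two-level BCCB approximation applies the same formula level by level:

```python
import numpy as np

def tchan_optimal_circulant(a, n):
    """First column of the T. Chan circulant minimizing ||A - X||_F over
    circulants (one-level sketch). `a[k + n - 1]` stores the diagonal a_k
    of the Toeplitz matrix A, for k = -(n-1), ..., n-1."""
    c = np.empty(n)
    c[0] = a[n - 1]                                          # a_0
    for k in range(1, n):
        c[k] = ((n - k) * a[k + n - 1] + k * a[k - 1]) / n   # mixes a_k and a_{k-n}
    return c                                                 # eigenvalues: np.fft.fft(c)
```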
Table 2.1
Regularizing functions for the optimal-based preconditioners.

Filter    F_α(t) for 0 ≤ t < α       F_α(t) for t ≥ α
I         (t + α)^{−1}               (t + α)^{−1}
II        0                          t^{−1}
III       1                          t^{−1}
IV        α^{−1}                     t^{−1}
V         α^{−(p+1)} t^p             t^{−1}
VI        …                          t^{−1}
VII       …                          t^{−1}
VIII      ∫_0^{1/α} e^{−ts} ds       ∫_0^{1/α} e^{−ts} ds
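As an illustration of (2.9), the eigenvalues of D_α can be produced directly from those of B∗_opt B_opt. The sketch below implements a few of the filters of Table 2.1; function and variable names are ours, not from the original formulation:

```python
import numpy as np

def filtered_preconditioner_eigs(t, alpha, filt):
    """Eigenvalues (2.9) of D_alpha from the eigenvalues t >= 0 of Bopt* Bopt."""
    if filt == "I":                    # Tikhonov: same expression on both branches
        return 1.0 / (t + alpha)
    d = np.where(t >= alpha, 1.0 / np.maximum(t, alpha), 0.0)  # signal space: 1/t
    noise = t < alpha
    if filt == "III":                  # Hanke-Nagy-Plemmons: 1 on the noise space
        d[noise] = 1.0
    elif filt == "IV":                 # constant 1/alpha on the noise space
        d[noise] = 1.0 / alpha
    # filt == "II" (low pass) keeps the zeros already written on the noise space
    return d
```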
Further properties of these filters, together with a wide numerical comparison, can be found in [15]. It is worth mentioning that the role played here by B_opt can be replaced by other popular basic preconditioners, such as those of Strang or R. Chan type; see [30]. We restrict our investigation to B_opt just because this was the original choice made in [20], but we expect similar results in the other cases.

2.2. A simplified convergence analysis: Periodic boundary conditions. The theoretical analysis of preconditioned iterations is in general a complicated task, due to the incomplete knowledge of how the singular vectors of the blurring matrix A and of the preconditioner D are related. We will try to give partial results in section 4, with special emphasis on the case where A is Toeplitz; for now we want to carry out just a preliminary analysis, in order to give useful insights about the role played by the different parameters at our disposal. For this reason, in the present section we overcome the main difficulty by considering the following simplifying assumption, which will be removed later: both A∗A and D are circulant matrices.

It is understood that this assumption is restrictive, but it corresponds to the practical application of periodic boundary conditions in the discretization of the continuous space-invariant blurring model, as described in [4, 23, 33]. This is very common in some imaging contexts where the boundary effect is negligible, e.g., in astronomical observations of isolated sources. With this choice, A can be regarded as the classical Strang [35] circulant approximation of the PSF matrix. In order to avoid a trivial situation, we assume that D is obtained from the T. Chan approximation of the Toeplitz pattern of the PSF, determined before applying boundary conditions; therefore B_opt is no longer related to the minimizer of (2.8), which in this case would reduce to A itself. As stated in the appendix (Lemma A.1), every λ_i(B∗_opt B_opt) considered in (2.9) is a good approximation of the corresponding eigenvalue of A∗A.

In light of the circulant structure, A∗A and D are diagonalized by a common eigenvector basis, specifically represented by the discrete Fourier transform F (here we intend the two-dimensional transform, but we will use the same symbol independently of the number of dimensions):

A∗A = F Λ_A F∗,  Λ_A = diag(λ_1^A, ..., λ_N^A);   D = F Λ_D F∗,  Λ_D = diag(λ_1^D, ..., λ_N^D).
In this case D satisfies the Strand condition for regularizing preconditioners (compare [34, eq. (34)]), and therefore the sequence f_k of preconditioned iterations preserves the semiconvergence property [31, Proposition 3.1]. Under this strong assumption, the convergence analysis of the preconditioned Landweber method becomes straightforward; we will obtain results similar to those of [34] concerning the “response” of (continuous) iterations, but we will use the language of numerical linear algebra, since we are directly interested in the discrete case.

Both the preconditioned matrix M and the iteration matrix G_k, such that f_k = G_k A∗g, introduced in (2.7), can be diagonalized in the same way as A∗A and D:

M = DA∗A = F Λ_M F∗,   Λ_M = Λ_D Λ_A = diag(λ_1^M, ..., λ_N^M),   λ_j^M = λ_j^D λ_j^A;

G_k = τ P_{k−1}(τM) D = F Λ_{G_k} F∗,   Λ_{G_k} = τ P_{k−1}(τΛ_M) Λ_D = diag(λ_1^{G_k}, ..., λ_N^{G_k}).
Recalling the closed formula (2.2) for P_{k−1}(t), whenever λ_j^M ≠ 0 we have

(2.10)  λ_j^{G_k} = τ P_{k−1}(τλ_j^M) λ_j^D = τ (1 − (1 − τλ_j^M)^k)/(τλ_j^M) · λ_j^D = (1 − (1 − τλ_j^D λ_j^A)^k)/λ_j^A.
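The spectral picture of this subsection is easy to reproduce numerically. The following hedged fragment (our own naming; strictly positive eigenvalues assumed) evaluates the factors of (2.10) and the relative errors of the forthcoming relation (2.11) mode by mode:

```python
import numpy as np

def spectral_factors(lam_A, lam_D, tau, k):
    """Per-mode iteration factor (2.10) and relative error (2.11); lam_A, lam_D > 0."""
    lam_M = lam_D * lam_A                       # eigenvalues of M = D A* A
    g_k = (1.0 - (1.0 - tau * lam_M) ** k) / lam_A
    rel_err = np.abs(1.0 - tau * lam_M) ** k    # decays fast where lam_M is near 1/tau
    return g_k, rel_err
```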
Now we are ready to study the components of the kth Landweber approximation f_k and of the generalized solution f† along the Fourier directions v_1, ..., v_N. On one hand, f_k = G_k A∗g implies F∗f_k = Λ_{G_k} F∗A∗g, and therefore the jth Fourier component of f_k is

v_j∗ f_k = λ_j^{G_k} γ_j,   where γ_j = v_j∗ A∗g,

in agreement with [34, eq. (59)]. On the other hand, we can use for f† the expression f† = (A∗A)† A∗g, so that F∗f† = Λ_A† F∗A∗g, whence

v_j∗ f† = γ_j/λ_j^A  if λ_j^A > 0,   v_j∗ f† = 0  if λ_j^A = 0.

We obtain the following relation for the relative error on the components lying outside the null space of A:

(2.11)  |v_j∗ f_k − v_j∗ f†| / |v_j∗ f†| = |λ_j^{G_k} − 1/λ_j^A| / (1/|λ_j^A|) = |1 − τλ_j^D λ_j^A|^k.
It is evident that the rate of decay for this error, as the iterations proceed, heavily depends on the value of the parameter τ and on the distance of the preconditioned eigenvalues λ_j^D λ_j^A from 1. As described in the previous subsection, this distance is reduced just in the signal space, where λ_j^D approximates the reciprocal of the optimal circulant preconditioner eigenvalue (which in turn is a good approximation of λ_j^A).

It is worth noticing that (2.10) does not apply to the directions belonging to the null space; instead, we may deduce from (2.3) the relation P_{k−1}(0) = k, which implies

(2.12)  v_j∗ f_k = τ k λ_j^D γ_j,
so that this component is amplified as the iteration count k increases, unless λ_j^D is very small. A similar behavior can be generally observed along the noise space, since under our assumptions these directions correspond to the indices such that λ_j^A < α, the small threshold used to define the regularizing preconditioner. If this occurs, from the inequalities

1 − iτλ_j^D α ≤ (1 − τλ_j^D λ_j^A)^i ≤ 1,

we obtain the bounds

P_{k−1}(τλ_j^M) = Σ_{i=0}^{k−1} (1 − τλ_j^D λ_j^A)^i ∈ [k(1 − (k/2)τλ_j^D α), k];

substituting into the relation

v_j∗ f_k = λ_j^{G_k} γ_j = τ P_{k−1}(τλ_j^M) λ_j^D γ_j

gives

(2.13)  τ k λ_j^D |γ_j| (1 − (k/2)τλ_j^D α) ≤ |v_j∗ f_k| ≤ τ k λ_j^D |γ_j|.
The last relation also gives a dependence between the chosen threshold α and the iteration count k, since both of them affect the lower bound for the amplification of the undesired components. The different roles of the parameters k, τ, α and of the filter (which mainly involves the magnitude of λ_j^D in the noise space) are now made explicit in (2.11), (2.12), and (2.13); we are ready to present a full discussion in the next section.

3. Choice of parameters. The preconditioned Landweber method described so far involves four parameters to choose: the relaxation parameter τ, the regularization parameter k (iteration count), the filter parameter α, and finally the type of filtering function F_α.

As already shown in (2.11), the parameter τ allows us to control the convergence of the iterations towards the solution of the normal equations. More precisely, the relative reconstruction error is largest (≈ 100%) along the components for which the value of λ_j^D λ_j^A is close to 0 or 2τ^{−1}, whereas it is smallest (≈ 0%) when λ_j^D λ_j^A is close to τ^{−1}. This implies that the convergence is always slow in the noise space, where λ_j^D λ_j^A is small, but an appropriate choice of τ enables us to “select” the most important subspace of components to be resolved first in the reconstruction process. We recall that, by using a suitable filtering preconditioner D as explained in section 2.1, the numbers λ_j^D λ_j^A giving the spectrum of DA∗A can be made clustered at unity in the signal space and clustered at zero in the noise space; in that case, the simplest choice τ ≡ 1 should provide good results in terms of convergence speed. It is worth noticing that for some choices of the filter (such as those labeled I and VIII in Table 2.1) the reciprocal function is only approximated, and therefore the preconditioned spectrum has a slightly different distribution; in these cases another value of τ could provide better results. In any case, as we will see in section 4, the constant τ must be such that τλ_j^M ∈ (0, 2), at least along the signal components. In the case of periodic boundary
conditions, all of the filters listed in Table 2.1 ensure that the choice τ = 1 always verifies the constraint above for α not too large; a proof is given in the appendix. With other boundary conditions (for instance, the Dirichlet conditions used in the experiments of section 5), the constraint can be violated by a few outliers, but numerical evidence suggests the conjecture that the corresponding eigendirections lie in the noise subspace; in practice, since we often compute few iterations, these diverging components are still negligible with respect to all of the converging ones, especially for the large-scale systems arising in real applications.

Choosing the number of iterations k can heavily affect the performance when a method is not robust. If we underestimate k, we can produce restorations which are not sufficiently accurate, because the “signal” component (2.11) of the relative error has not yet reached an acceptable value. If we overestimate k, we perform too many iterations and thus do not improve the efficiency of the method; nevertheless, we can still obtain satisfactory restorations, provided that the unwanted components (related to k through the expressions (2.12) and (2.13)) have not been amplified too much. On the other hand, in the numerical experiments performed in section 5 on simulated data (see, in particular, Figure 5.3), the restoration error first decreases and then increases, preserving the property of semiconvergence [4], and, in general, the region of the minimum is large enough; this allows us to choose k in a wide range of values without consequences on the restoration. Therefore we propose to apply for general problems the simplest and most efficient stopping rule for estimating the optimal value of k, that is, the discrepancy principle: if we have an estimate ε = ‖ω‖ = ‖Af∗ − g‖ of the noise level, we stop the iterations when the residual ‖r_k‖ = ‖Af_k − g‖ becomes less than ε. In the case where such an estimate of the noise level is not at our disposal, [29] gives a stopping criterion connected to the general behavior of the residual ‖r_k‖, which rapidly decreases in a few iterations and then decreases much more slowly; it looks “natural” to stop the iterations at the beginning of the flat region. The authors also find that the performance of the method does not change significantly for different values of the parameter k in this region (in their specific example, the range was 50 ≤ k ≤ 100).
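A minimal sketch of the discrepancy principle just described, under our own naming and a dense operator (the stopping test itself is independent of how A is applied):

```python
import numpy as np

def landweber_discrepancy(A, g, tau, eps, max_iter=200):
    """Run (2.1) and stop as soon as ||A f_k - g|| drops below the noise level eps."""
    f = np.zeros(A.shape[1])
    for k in range(max_iter):
        r = A @ f - g
        if np.linalg.norm(r) < eps:
            break                       # discrepancy principle satisfied
        f = f - tau * (A.T @ r)         # equivalent to f + tau*(A*g - A*A f)
    return f, k
```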
Concerning the choice of α, we recall here the “recipe” of [20]. We consider the discrete Fourier transform (DFT) of the right-hand side g, and we observe that there is an index r at which the Fourier coefficients begin to stagnate; this corresponds to the components where the random error starts to dominate the data vector g. Then we consider the eigenvalues λ_i of B∗_opt B_opt, obtained again by means of a DFT as discussed in section 2.1; we take as an approximation of the filter parameter α the magnitude of the eigenvalue λ_r. In many cases this strategy is difficult to apply, due to a great uncertainty in locating the index r; in case of doubt, it is better to underestimate its value, because this is equivalent to taking a greater value of α in order to prevent the reconstruction of noisy components.

The choice of the type of filtering function F_α depends on the features of the problem and on the action we want to apply to the high frequency components in the reconstruction. The filters presented in Table 2.1 can be classified into two categories: “smooth” filters and “sharp” filters. For instance, the “smooth” filters I and VIII have the same expression for all t, whereas the “sharp” filtering functions from II to VII have a discontinuity at t = α. The smooth filters give an approximation of the reciprocal function everywhere,
so that the error in (2.11) is somewhat reduced in the noise space, too; therefore they allow us to modulate the restoration of the high frequencies according to the problem. On the other hand, the sharp filters do not try to invert on the noise space, and so they do not introduce further improvement in the solution; hence they produce a sequence that at first reduces the restoration error and afterwards becomes stable, because for t ≤ α the filter functions vary only slightly. This behavior is particularly desirable in the case of a strongly ill-conditioned problem, since it makes the method more robust with respect to a wrong choice of the “critical” parameters k and α. As we can see in the numerical results of section 5, the nature of the problem may suggest that we use one type of filter instead of another.

4. Convergence analysis: The Toeplitz case. If the assumption of a common eigenvector basis for A and D is dropped, the classical Strand approach does not work, and the convergence analysis of the preconditioned Landweber method becomes more involved if we look at the Fourier components. An alternative way to quantify the acceleration and regularization behavior of the iterations is to perform the analysis with respect to the eigenvector basis of the preconditioned matrix M = DA∗A; this choice needs no particular assumption on the structure of A∗A. The argument is the classical one for stationary iterative methods; recalling that the iterates satisfy the recurrence relation
(4.1)  f_{k+1} = τDA∗g + (I − τM)f_k,
the generalized solution f† can be expressed in a fixed-point form:

(4.2)  A∗Af† = A∗g  ⟹  f† = τDA∗g + (I − τM)f†.
Subtracting (4.2) from (4.1), we obtain for the kth error a recurrence relation leading to the closed formula

(4.3)  f_k − f† = (I − τM)^k (f_0 − f†) = −(I − τM)^k f†,
having assumed the standard choice for the initial guess, that is, f_0 = 0.

Let M = V_M Λ_M V_M^{−1} be the eigenvalue decomposition of the preconditioned matrix, where Λ_M is real nonnegative because M is symmetrizable (i.e., similar to a symmetric matrix) and semidefinite, but V_M may be not unitary. If we define φ_j^k and φ_j^† as the jth components of f_k and f†, respectively, along the eigenvector basis V_M, by (4.3) we obtain

φ_j^k − φ_j^† = −(1 − τλ_j^M)^k φ_j^†,
λ_j^M being the generic eigenvalue of M; this kth componentwise error tends to zero provided that τλ_j^M ∈ (0, 2). From this characterization we can draw the following general conclusions:
1. Along the directions where λ_j^M ≈ 1, the relative error on the associated component of f† decreases in magnitude with a linear rate close to 1 − τ; in the case τ = 1, the reconstruction of such components is heavily accelerated.
2. Along the directions where λ_j^M is small enough, the associated component of f_k stays bounded as follows:

|φ_j^k| = |1 − (1 − τλ_j^M)^k| |φ_j^†| = τλ_j^M Σ_{l=0}^{k−1} (1 − τλ_j^M)^l |φ_j^†| ≤ τ k λ_j^M |φ_j^†|.
Hence such components are poorly reconstructed, provided that the iterations are stopped early; in particular, f_k has no component along the null space of M. On the other hand, the spectral analysis carried out in [20, Theorem 6.1] for the filter III (but extendable to several other choices) proves that the eigenvalues of M have a unit cluster and accumulate to zero, without giving any insight about the related eigenvectors; hence we are sure that the statements in items 1 and 2 do not refer to the empty set. Anyway, our conclusions are of no practical relevance unless we give an answer to some crucial questions:
• Are the directions considered in item 1 related to the signal space (where we desire a fast reconstruction)?
• Are we sure that the noise space (where reconstruction is not wanted) falls into the directions considered in item 2?
This delicate matter is crucial, as similarly pointed out in [26], and no exact knowledge is at our disposal. In space-invariant deblurring problems, it is known that signal and noise spaces can be described in terms of low and high frequencies (see, e.g., [20]), but no direct relation between the frequency-related Fourier basis and the eigenvectors of M is known in the literature, except for the trivial case of common bases assumed in section 2.2. In the case where the blurring operator A is a Toeplitz matrix (this occurs when Dirichlet boundary conditions are imposed on the PSF; see [4, 23, 33]), we take some insight from an indirect relation known as equal distribution.

Definition 4.1 (see [39]). The eigenvectors of the sequence {B_n}, where B_n is an n × n matrix, are distributed like the sequence of unit vectors {q_k^(n)}, with q_k^(n) ∈ C^n for k = 1, ..., n, if the discrepancies

r_k^(n) := ‖B_n q_k^(n) − ⟨q_k^(n), B_n q_k^(n)⟩ q_k^(n)‖_2

are clustered around zero, in the sense that

∀ε > 0 :  #{ k ∈ {1, ..., n} : |r_k^(n)| > ε } = o(n).
Since the discrepancies are a sort of measure of how much the {qk } behave like the eigenvectors of Bn , our goal is now to show that the eigenvectors of M are distributed like the Fourier vectors and therefore are frequency-related; this way we have a partial positive answer to the questions addressed above. Clearly, in order to apply Definition 4.1 we should think of M as the element of a sequence of matrices indexed by their total size, which of course is the product of the individual ones along the different directions. Thus we are not sure that the sequence is well-defined for any positive integer n, and it is better to adopt the typical multi-index notation of multilevel matrices (see, e.g., [36]), with a slight change in the original Definition 4.1. From now on, we put n = (n1 , n2 ) ∈ N2 as the vector of individual dimensions, in the sense that Mn is a (block) two-level matrix of total size N (n) := n1 n2 . By the multi-indices i = (i1 , i2 ) and j = (j1 , j2 ) we label a single entry of Mn , located at the inner i2 , j2 position of the block having the indices i1 , j1 at the outer level. The indices il , jl at each level (l = 1, 2) range from 1 to the respective dimension nl . The same notation applies to the preconditioners Dn and to the rectangular Toeplitz blurring matrices Am,n ∈ RN (m)×N (n) , too. This way we may consider a
(double) sequence {M_n}_{n∈N²} and adjust the concept of equal distribution to our two-level setting, according to the following new definition.

Definition 4.2. Consider a sequence {B_n}_{n∈N²} of two-level matrices as described before and a sequence of unit vectors

{ q_k^(n) : n = (n_1, n_2) ∈ N², k ∈ {1, ..., n_1} × {1, ..., n_2} } ⊆ C^{N(n)}.

We say that the eigenvectors of B_n are distributed like the sequence {q_k^(n)} if the discrepancies r_k^(n) as in Definition 4.1 satisfy

∀ε > 0 :  #{ k ∈ {1, ..., n_1} × {1, ..., n_2} : |r_k^(n)| > ε } = o(N(n)).
Our vectors q_k^(n) will be the two-dimensional Fourier vectors, indexed in such a way that they correspond to the columns of the two-level Fourier matrix

(4.4)  F = (1/√n_1) [ exp( ı̂ 2π(i_1 − 1)(j_1 − 1)/n_1 ) ]_{i_1,j_1=1}^{n_1} ⊗ (1/√n_2) [ exp( ı̂ 2π(i_2 − 1)(j_2 − 1)/n_2 ) ]_{i_2,j_2=1}^{n_2},
where ⊗ denotes the Kronecker (tensor) product and ı̂ is the imaginary unit.

Lemma 4.3. For a sequence {T_n(a)}_{n∈N²} of two-level Toeplitz matrices generated [36] by the L² bivariate function a(x), the eigenvectors are distributed like the Fourier vectors, in the sense of Definition 4.2.

Proof. The same statement has been proved in [39] for the one-level case; the proof was a direct consequence of the estimate

(4.5)  ‖T_n(a) − C_n(a)‖_F² = o(n),
where C_n(a) is the optimal circulant preconditioner of T_n(a), denoted as B_opt in (2.8). The last result has been extended to the two-level version

‖T_n(a) − C_n(a)‖_F² = o(N(n))

in the paper [36], so that the proof can be fully generalized by following the same steps as in [39], without any substantial change.

The one-level equidistribution result has been further extended to L¹ generating functions in [40]; since that proof no longer uses (4.5), we cannot say that Lemma 4.3 holds for a ∈ L¹, too. The result is probably true but is not of interest in image deblurring, where the generating functions are related to the PSFs and hence have a high degree of regularity.

In order to extend the equidistribution result to our matrices M_n = D_n A∗_{m,n} A_{m,n}, we must overcome two difficulties: the presence of the first factor D_n (this will not be a great problem, since the Fourier vectors are exactly the eigenvectors of D_n) and the loss of Toeplitz structure in forming the normal equation matrix product A∗_{m,n} A_{m,n}. The next lemma takes care of this second topic.

Lemma 4.4 (see [11]). Let {A_{m,n}} be a sequence of two-level Toeplitz matrices generated by a continuous bivariate function a(x). Then for every ε > 0 there exist
two sequences {R_n}, {E_n} of N(n) × N(n) matrices and a constant s such that

A∗_{m,n} A_{m,n} = T_n(f) + R_n + E_n   for all n, m,

where f(x) = |a(x)|², ‖E_n‖_2 < ε for m and n large enough, and each R_n is a low-rank matrix having the following multilevel pattern:

R_n = (R_{i_1,j_1})_{i_1,j_1=1}^{n_1},   R_{i_1,j_1} = (R_{i,j})_{i_2,j_2=1}^{n_2},
where each element R_{i,j} is nonzero only if all of the indices i_1, i_2, j_1, j_2 take the first or the last s values.

The continuity assumption for a(x) is stronger than the L² hypothesis used in Lemma 4.3 but again is not restrictive in the image deblurring context. We end this section by proving the equal distribution of the eigenvectors of our preconditioned matrices with respect to the Fourier vectors.

Theorem 4.5. Under the same assumptions as in Lemma 4.4, the eigenvectors of the matrices M_n are distributed like the Fourier vectors, in the sense of Definition 4.2.

Proof. First we will show that the equidistribution property holds for the normal system matrices A∗_{m,n} A_{m,n}, too. Notice that for all B ∈ C^{N×N} and for all q ∈ C^N, as observed, e.g., in [39],

⟨q, Bq⟩ = arg min_{λ∈C} ‖Bq − λq‖_2,
so that in order to prove an equidistribution result for the sequence {B_n} it suffices to show an o(N(n)) bound for vectors of the form B q_k^(n) − λ q_k^(n), for any suitable λ, in place of the discrepancies r_k^(n).

Now let λ := ⟨q_k^(n), T_n(f) q_k^(n)⟩, where q_k^(n) is a Fourier vector and f is given by Lemma 4.4. Then for every ε > 0 and m, n large enough,

(4.6)  ‖A∗_{m,n} A_{m,n} q_k^(n) − λ q_k^(n)‖_2 = ‖T_n(f) q_k^(n) + R_n q_k^(n) + E_n q_k^(n) − λ q_k^(n)‖_2 ≤ r_k^(n) + ‖R_n q_k^(n)‖_2 + ‖E_n‖_2 ‖q_k^(n)‖_2 < ‖R_n q_k^(n)‖_2 + 2ε

except for o(N(n)) multi-indices k, where we have denoted by r_k^(n) the discrepancy referred to the Toeplitz matrix T_n(f) and we have applied Lemmas 4.3 and 4.4.

In order to manage the product R_n q_k^(n), we must take into account the sparsity pattern of R_n given by Lemma 4.4 and the following general expression for the j = (j_1, j_2)th entry of the Fourier vector q_k^(n), deduced from (4.4):

(4.7)  (q_k^(n))_j = (1/√N(n)) exp( 2πı̂ [ (j_1 − 1)(k_1 − 1)/n_1 + (j_2 − 1)(k_2 − 1)/n_2 ] ).

The structure of R_n implies that the product with the Fourier vector involves just the first and the last s values of the indices j_1 and j_2; moreover, only such entries of the product are nonzero. More precisely, by exploiting the usual multi-index notation we obtain

‖R_n q_k^(n)‖_2² = Σ_{i∈I_1×I_2} | Σ_{j∈I_1×I_2} R_{i,j} (q_k^(n))_j |²,
where I_l = {1, ..., s} ∪ {n_l − s + 1, ..., n_l}, l = 1, 2. Setting c := max_{i,j} |R_{i,j}| and using the expression (4.7), we have the bound

| Σ_{j∈I_1×I_2} R_{i,j} (q_k^(n))_j | ≤ c Σ_{j∈I_1×I_2} |(q_k^(n))_j| = c #(I_1 × I_2)/√N(n) = 4cs²/√N(n),

whence

‖R_n q_k^(n)‖_2² ≤ #(I_1 × I_2) · 16c²s⁴/N(n) = 64c²s⁶/N(n),
which can be made less than ε provided that N(n) is large enough. Substituting into (4.6) and observing that ε is arbitrarily small, we have proved the equal distribution between the eigenvectors of A∗_{m,n} A_{m,n} and the Fourier vectors.

Concerning the preconditioned matrices, it suffices to observe that, for μ := λ · λ_k^D,

(4.8)  ‖M_n q_k^(n) − μ q_k^(n)‖_2 = ‖D_n [A∗_{m,n} A_{m,n} q_k^(n) − λ q_k^(n)]‖_2 ≤ ‖D_n‖_2 ε
except for o(N(n)) values of k, where we have used the property that the Fourier vectors are also eigenvectors of D_n. Since ‖D_n‖_2 equals the maximal eigenvalue of D_n, which has a uniform upper bound for regularizing preconditioners (see section 2.1), the inequality (4.8) proves the equal distribution result for the matrices {M_n}.

5. Numerical results. In this section we provide some numerical experiments illustrating the effectiveness of the preconditioned Landweber method, both in the basic version and in the projected variant. In particular, the deblurring capabilities of the method will be tested through both synthetic and widely used experimental data. The analysis of section 3 is a useful starting point for the appropriate choice of all of the parameters of the algorithm; along this direction, the main aim of the present section is to compare the results related to different settings. As already shown (see section 2), our discrete model of image formation is shift-invariant image blurring, and it can be simply written as follows:
(5.1)  g = Af∗ + ω,
where g is the blurred data, A is the multilevel Toeplitz matrix version of the PSF associated to Dirichlet boundary conditions, f∗ is the input true object, and ω is the noise which arises in the process. The deblurring problem is to recover a suitable approximation f of the true object f∗ by means of the knowledge of g, A, and some statistical information about the noise ω. We consider two different test examples.

T1. In the first one, the true image f∗ is the 256 × 256 brain section of Figure 5.1 (top left), and the convolution operator A is a Gaussian PSF with standard deviation σ = 5 shown in Figure 5.1 (top right), with an estimated condition number of 1.2 · 10^20. In this first example, we compare the restorations corresponding to different levels of artificial Gaussian white noise ω, where the relative data error ‖ω‖/‖Af∗‖ ranges from 3% to 30%, with ‖·‖ being the vector 2-norm.

T2. In the second example, the object f∗ to be recovered is the 256 × 256 satellite image of Figure 5.2 (top left), while the blurring operator and the blurred data are experimental and plotted in Figure 5.2 (top right, bottom). These
Fig. 5.1. Test set 1—True object, PSF, and synthetic blurred image with relative noise ≈ 8%.
Fig. 5.2. Test set 2—True data, experimental PSF, and blurred image.
test data have been developed by the U.S. Air Force Phillips Laboratory, Lasers and Imaging Directorate, Kirtland Air Force Base, New Mexico, and they are widely used in the literature [32]. This time A is just moderately ill-conditioned (cond(A) ≈ 1.3 · 10^6), but the data image g is corrupted by a noise of unknown distribution, corresponding to ‖ω‖/‖Af∗‖ ≈ 5%. In addition, for the same true object f∗ and blurring operator A, we consider another corrupted data set g, developed by Bardsley and Nagy [1] and related to a statistical model where the noise comes from both Poisson and Gaussian distributions, plus a background contribution, so that the global noise level is
‖ω‖/‖Af∗‖ ≈ 1%. These two sets of data allow us to numerically compare the features of the method with other techniques proposed in [1, 19, 21, 24, 28].

According to Table 2.1 of section 2.1, we test the following four filters F_α:
F1. F_α is the Tikhonov filter I;
F2. F_α is the low pass filter II;
F3. F_α is the Hanke, Nagy, and Plemmons filter III;
F4. F_α is the p-polynomial vanishing filter V, with p = 1.
The convergence parameter τ of the method is set to 1, providing a good compromise between fast convergence and noise filtering for the preconditioned Landweber method, as discussed in section 3. We stress that the choice of the regularization parameter α is not a simple task, so we attempt several values in order to select the best one. In test T2 we try to adopt the strategy proposed by Hanke, Nagy, and Plemmons [20] recalled in section 3, which is based on an appropriate estimate of the noise space, and we suggest some improvements and remarks. The number of iterations can be fairly well controlled by means of the discrepancy principle, even though it can be underestimated with respect to the optimal one; therefore we prefer to present the best achievable results within the first 200 iterations. All of the experiments have been implemented in MATLAB 6.1 and performed on an IBM PC, with a floating-point precision of 10^{-16}.

5.1. Test 1 (synthetic data with different levels of noise). We test the projected variant of the Landweber method, where each iteration f_k is projected onto the nonnegative cone (see the end of section 2), using the four filters for α ranging from 0.005 to 0.1, and we take the best restoration among the first 200 iterations.
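Purely as an illustration of how the ingredients above combine, the following NumPy sketch renders the whole scheme (filtered BCCB preconditioner with filter F3, τ = 1, projection on positives) under the simplified periodic model of section 2.2; the actual experiments were run in MATLAB and use Dirichlet rather than periodic boundary conditions, and all names here are ours:

```python
import numpy as np

def fft_prec_projected_landweber(psf, g, alpha, tau=1.0, n_iter=200):
    """Preconditioned projected Landweber, periodic model: psf is assumed
    centered and zero-padded to the same (float) grid as the image g."""
    a = np.fft.fft2(np.fft.ifftshift(psf))        # eigenvalues of A (BCCB model)
    t = np.abs(a) ** 2                            # eigenvalues of A* A
    d = np.where(t >= alpha, 1.0 / np.maximum(t, alpha), 1.0)   # filter F3
    f = np.zeros(g.shape)
    Atg_hat = np.conj(a) * np.fft.fft2(g)         # Fourier image of A* g
    for _ in range(n_iter):
        res_hat = Atg_hat - t * np.fft.fft2(f)    # Fourier image of A* g - A* A f
        f = f + tau * np.real(np.fft.ifft2(d * res_hat))   # D applied spectrally
        np.maximum(f, 0.0, out=f)                 # projection onto the nonnegative cone
    return f
```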
Table 5.1
Test 1—Best relative restoration errors and number of iterations for 3% and 8% noise. Each entry reports the minimal RRE and, in parentheses, the iteration at which it is attained.

3% relative noise (no preconditioning: 0.1794 (200)):
α        F1            F2            F3            F4
0.005    0.2028 (1)    0.1900 (3)    0.1898 (3)    0.1908 (2)
0.01     0.1890 (2)    0.1979 (7)    0.1969 (7)    0.1849 (4)
0.02     0.1854 (3)    0.2012 (199)  0.1847 (199)  0.1862 (13)
0.03     0.1823 (7)    0.2055 (200)  0.1798 (200)  0.1844 (43)
0.05     0.1782 (16)   0.2172 (123)  0.1796 (200)  0.1801 (200)
0.1      0.1752 (49)   0.2325 (200)  0.1794 (200)  0.1861 (200)

8% relative noise (no preconditioning: 0.1855 (200)):
α        F1            F2            F3            F4
0.005    0.2107 (1)    0.1968 (2)    0.1967 (2)    0.2031 (2)
0.01     0.2016 (2)    0.2024 (3)    0.2018 (3)    0.1930 (3)
0.02     0.1925 (2)    0.2074 (19)   0.1967 (139)  0.1946 (7)
0.03     0.1907 (5)    0.2090 (179)  0.1886 (200)  0.1949 (15)
0.05     0.1896 (9)    0.2188 (200)  0.1867 (200)  0.1914 (116)
0.1      0.1881 (20)   0.2335 (200)  0.1858 (200)  0.1910 (200)
Table 5.1 shows the values of the minimal relative restoration error (RRE) ‖f_k − f∗‖/‖f∗‖ and the corresponding iteration number k. The first part of the table reports the results with 3% of relative data error, the second part with 8%.

First of all, we point out that the nonpreconditioned Landweber method is really slower than the preconditioned version: after 200 iterations, the RRE without preconditioning matches the accuracy provided by the Tikhonov filter F1 within at most 20 iterations (see the columns F1 of Table 5.1 relative to the values α = 0.05 for 3% of noise and α = 0.1 for 8% of noise). As expected, when the noise is higher the filtering parameter α has to be larger, and the method has to be stopped earlier. Indeed, both α and k together play the role of regularization parameter, as can be observed by comparing the two parts of the table. In particular, a good choice of α should be a compromise between noise filtering and fast convergence. Small values of α yield low noise filtering; in the first rows of the table, the noise dominates the restoration process, and the RREs are larger. In this case, it is interesting to notice that the filter F4 outperforms the others, because the problem stays ill-conditioned and F4 “cuts” very much on the noisy components. On the other hand, too large values of α do not speed up the convergence, as shown by the last rows, especially for filters F3 and F4.

The best restorations for this example are given by the Tikhonov filter F1, provided that a good value of α has been chosen. The corresponding convergence histories, that is, the values of all RREs versus the first 200 iterations, are shown in Figure 5.3. In this graph, it is quite evident that small filtering parameters α provide fast convergence and low accuracy, while large filtering parameters α provide high accuracy and low convergence speed. The graph of Figure 5.3 confirms how large the improvement provided by the preconditioned Landweber method is with respect to the nonpreconditioned case. Moreover, the shape of any semiconvergence graph is very regular, without oscillations and fast changes of concavity, even for low values of the filtering parameter α (see also the analogous Figures 5.7 and 5.9 related to Test 2). This good behavior is not common in other preconditioned iterative strategies which, although faster, may give much lower stability (see, e.g., [28, Figure 3.2]).
Fig. 5.3. Test 1—RREs versus the first 200 iterations; noise 3%, filter F1. (Curves: No Prec., α = 0.03, α = 0.05, α = 0.1.)
We remark that the low pass filter F2 always gives worse results. The reason is that this filter neglects all of the eigenvalues lower than α; the corresponding components are completely lost, and the reconstruction is done without those pieces of information. All of the other filters take such components into account, although the restoration is slow therein, and hence the results are more accurate.

A comparison between the projected and nonprojected versions is shown in Table 5.2, related to filters F1 and F4. The blocks labeled “with positivity” concern the Landweber method with projection on positives, while those labeled “without positivity” concern the method without projection. Notice that the positivity constraint improves the results, since the projected variant always leads to the minimal RREs and the maximal convergence speed. We can say that the projection on positives acts as a regularizer, since it reduces the instability due to the noise on the data. In addition, the higher regularizing properties of the projected variant allow us to adopt smaller values of the regularization parameter α, which accelerates the method.

As graphical examples of restorations, the two images on the top of Figure 5.4 are the best restorations after 200 iterations of the nonpreconditioned Landweber method, with projection onto positive values (left) and without projection (right), for input data with 8% of relative noise. The images in the central row are the best restorations with filter F1 and α = 0.03: 5 iterations with projection on positives on the left and 3 iterations without projection on the right. The images at the bottom are the best restorations with filter F4, again with α = 0.03: 15 iterations with projection on positives on the left and 32 iterations without projection on the right. The images of the projected version on the left are noticeably better than the images of the classical Landweber method on the right, since the ringing effects are considerably reduced, especially on the background of the object. It is worth noticing that the corresponding numerical values of the RREs in Table 5.2 are not so different and do not allow one to fully appreciate these qualitative improvements.
Table 5.2
Test 1—Best relative restoration errors and number of iterations for filters F1 and F4.

No preconditioning:
Noise                3%            8%            30%
With positivity      0.1794 (200)  0.1855 (200)  0.2185 (29)
Without positivity   0.1817 (200)  0.1928 (156)  0.2263 (15)

Filter F1, with positivity:
α        3%            8%            30%
0.01     0.1890 (2)    0.2016 (2)    0.1866 (2)
0.03     0.1823 (7)    0.1907 (5)    0.2276 (1)
0.1      0.1752 (49)   0.1881 (20)   0.2219 (3)

Filter F1, without positivity:
α        3%            8%            30%
0.01     0.2091 (1)    0.2142 (1)    0.2083 (1)
0.03     0.1906 (4)    0.1977 (3)    0.2361 (1)
0.1      0.1763 (69)   0.1929 (16)   0.2268 (2)

Filter F4, with positivity:
α        3%            8%            30%
0.01     0.1849 (4)    0.1930 (3)    0.2642 (1)
0.03     0.1844 (43)   0.1949 (15)   0.2301 (2)
0.1      0.1861 (200)  0.1910 (200)  0.2240 (8)

Filter F4, without positivity:
α        3%            8%            30%
0.01     0.1891 (3)    0.1964 (2)    0.2886 (1)
0.03     0.1851 (48)   0.1949 (32)   0.2395 (2)
0.1      0.1895 (200)  0.1956 (200)  0.2293 (5)
Fig. 5.4. Test 1—Best reconstructions with α = 0.03, for relative noise 8%, within 200 iterations. Left: With projection on positives. Right: Without projection.
5.2. Test 2 (experimental data). The experimental data of the blurred image with about 5% of noise are used here first (see Figure 5.2). Table 5.3 shows the best RREs ‖f_k − f∗‖/‖f∗‖ and the corresponding iterations k, obtained by using the four preconditioners and several thresholds α with the projected variant of the Landweber method.

In this second test, we try to adopt the strategy proposed by Hanke, Nagy, and Plemmons for the choice of the regularization parameter α. Basically, the Fourier spectral components of the (“unregularized”) optimal preconditioner B∗B are compared with the Fourier spectral components of the blurred and noisy image g, in order to estimate the components of g where the noise dominates the signal. We summarize the procedure by the following three steps: (i) collect in decreasing order the
Table 5.3
Test 2—Best relative restoration errors and number of iterations (no preconditioning: 0.4689 (200)).

α        F1            F2            F3            F4
0.01     0.5176 (2)    0.4837 (200)  0.4351 (200)  0.5347 (3)
0.02     0.4924 (5)    0.5449 (200)  0.4568 (200)  0.5132 (17)
0.03     0.4249 (19)   0.5581 (200)  0.4606 (200)  0.4146 (200)
0.04     0.3929 (37)   0.5752 (200)  0.4630 (200)  0.4340 (200)
0.05     0.3758 (62)   0.5911 (200)  0.4642 (200)  0.4496 (200)
0.06     0.3623 (99)   0.6171 (200)  0.4677 (200)  0.4626 (200)
eigenvalues (computed by two-dimensional FFT) of the circulant optimal preconditioner B∗B; (ii) look at the Fourier components of the blurred image g with respect to the same ordering as in step (i), and choose the first index, say r, such that all of the following harmonics have small widths of approximately constant size; (iii) λ_r(B∗B) is the truncation parameter α for the filtered optimal preconditioner D_α.

A graphic example of the three steps is given in Figure 5.5. The graph on the top shows all of the 256² Fourier components, and the graph on the bottom is the corresponding zoom onto the first 500. As already mentioned, the procedure is not simple to apply. Indeed, there is a large set of “reasonable” indices where the Fourier components of g start to be approximately constant, which gives rise to very different truncation parameters α. In the figure, the value α ≈ 10^{-3} seems to be a good “stagnating” value, corresponding to a noise space generated by the Fourier components of index greater than about 350. Unfortunately, this choice gives too small a parameter, as can be observed in Table 5.3, and the results are unsatisfactory due to the high contribution of the noise in the restoration process. In our test, good values of α lie between 0.4 · 10^{-1} and 10^{-1}, corresponding to a noise space generated by the Fourier components more or less after the first 30–70. Hence, in order to avoid noise amplification, it seems better to overestimate the noise space by choosing a truncation parameter α higher than the one located by steps (i)–(iii). In this way, although the convergence may be slower, the restoration is generally good.

Since it is easy to miss the proper value of α, it is worth noticing that sharp filters are not appropriate for such situations, due to their discontinuity with respect to this parameter. This is confirmed by the column F4 of Table 5.3: there is a remarkable gap in the results between the second and the third rows (in particular, a dramatic change occurs from 17 iterations for α = 0.02 to 200 iterations for α = 0.03). Similarly, other experiments not reported in the table show very irregular performances for the filter F3 when α goes from 0.0087 (RRE of 0.5734 in 3 iterations) to 0.01 (RRE of 0.4351 in 200 iterations).

Concerning the quality of the restorations, we can say that after 200 iterations the RRE without preconditioning is still very high (RRE = 0.4689), and the reconstruction is unsatisfactory, as shown on the top left of Figure 5.6. A similar accuracy is provided within about 10 iterations by the Tikhonov filter, with α = 0.03, for instance. The Tikhonov filter F1 again gives the best results, and for this filter the choice of the regularization parameter is simpler, since it is not very sensitive with respect to nonoptimal choices.
Fig. 5.5. Test 2—Choice of the regularization parameter α.
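A hedged NumPy sketch of steps (i)–(iii), with the stagnation index r supplied by visual inspection as in Figure 5.5 (names and interface are ours, not from [20]):

```python
import numpy as np

def hnp_alpha(psf, g, r):
    """Recipe of [20], sketched: alpha = magnitude of the r-th eigenvalue of
    B*B, with eigenvalues sorted decreasingly and r located where the equally
    reordered Fourier data of g begin to stagnate. psf and g share one grid."""
    t = np.abs(np.fft.fft2(np.fft.ifftshift(psf))) ** 2   # eigenvalues of B*B, step (i)
    order = np.argsort(t.ravel())[::-1]                   # decreasing ordering
    g_hat = np.abs(np.fft.fft2(g)).ravel()[order]         # data spectrum, step (ii)
    # plot g_hat, pick the stagnation index r by eye, then:
    return t.ravel()[order][r]                            # step (iii)
```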
The other choices, especially F2 and F3, do not provide good results, since they bring too much regularization; that is, the convergence speed is too slow along the components related to the smallest eigenvalues. Notice that the restorations with F1 are good for any α between 0.03 and 0.06. Within a few iterations, we obtain very fast and sufficiently good restorations with the low values α = 0.03 and α = 0.04 (see the corresponding images of Figure 5.6). On the other hand, if we are more interested in low RREs than in fast computation, we can adopt a larger α: for instance, by using α = 0.06, within about 100 iterations the details of the restored image are accurate, as shown by the image on the center right of Figure 5.6.

The results of this numerical test can be directly compared with those of other solving techniques used in the literature on the same input data set (see, e.g., [19, 21, 24, 28]). For instance, we consider the (preconditioned) conjugate gradient and the (preconditioned) steepest descent methods applied to the normal equations (CGLS/PCGLS and RNSD/PRNSD), as developed by Lee and Nagy [28].
Fig. 5.6. Test 2—Restorations of the Landweber method with positivity, filter F1. Panels: No Prec. (RRE = 0.4689, It = 200); α = 0.03 (RRE = 0.4249, It = 19); α = 0.04 (RRE = 0.3929, It = 37); α = 0.06 (RRE = 0.3623, It = 99); α = 0.08 (RRE = 0.3438, It = 200); True Data.
The preconditioner used in [28] corresponds to our BCCB preconditioner with filter F3 and regularization parameter α chosen by a cross-validation technique, which works very well in this case. We recall that the RNSD method is similar to the Landweber one (2.1), since the two methods differ only in the choice of the convergence parameter τ: it changes at every iteration in the former, so that

f_{k+1} = arg min_{f = f_k + τA∗(g−Af_k)} ‖A∗(g − Af)‖_2,

whereas it is simply constant in the latter.
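For comparison purposes, one steepest-descent step with the exact line search written above can be sketched as follows; this is our own illustrative rendering of the RNSD idea, not the actual code of [28]:

```python
import numpy as np

def rnsd_step(A, g, f):
    """One steepest-descent step on the normal equations with exact line search."""
    d = A.T @ (g - A @ f)                  # direction A*(g - A f)
    Md = A.T @ (A @ d)                     # A* A d
    tau = np.dot(d, Md) / np.dot(Md, Md)   # minimizes ||A*(g - A f) - tau A* A d||_2
    return f + tau * d
```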
Fig. 5.7. Test 2—RREs versus the first 200 iterations; experimental blurred data with 5% of noise, filter F1.
The convergence history of Figure 5.7 can be directly compared with [28, Figure 4.2]. By a direct comparison of the RRE values, we can say that the preconditioned Landweber method is approximately as fast as the CGLS method without preconditioning acceleration. In addition, as expected by virtue of its better choice of τ, the RNSD method is faster than the (unpreconditioned) Landweber one (for instance, RNSD reaches after 50 iterations an RRE value of about 0.50 instead of our value of 0.56), and the same is true for the corresponding preconditioned versions. Concerning the graphical quality of the restored images, comparing the output of Figure 5.6 (see, for instance, the top right image) and [28, Fig. 4.3], the situation is different, since the projection on positives of the proposed method substantially reduces the artifacts due to ringing effects, as already noticed at the end of Test 1.

These results show that the method is not competitive if one considers the convergence speed only, but the quality of the restorations is generally good in a moderate number of iterations, with high regularization effectiveness. In addition, we notice that the projected Landweber method with the regularizing preconditioners presented here allows us to recover better restorations than other widely used deblurring algorithms: with a large value of the regularization parameter the convergence is not fast, but the restored image is very good (RRE = 0.3510 at iteration 158 with α = 0.07; RRE = 0.3438 at iteration 200 with α = 0.08). In this case, the method can be very favorable when speed is not crucial. The image on the bottom left of Figure 5.6, relative to α = 0.08, where RRE = 0.3438, seems better than others provided in the previous literature on the same example (consider, e.g., [19, 21, 24]).

As a final test, we compare the projected Landweber algorithm with the constrained methods proposed by Bardsley and Nagy in [1]. In particular, we consider the same blurred data developed and used in [1] as a simulation of a charge-coupled-device (CCD) camera image of the satellite taken from a ground-based telescope. These input data are shown in Figure 5.8 (top), where the noise comes from both Gaussian and Poisson distributions plus a background contribution, so that the relative noise is about 1% (see [1, section 4] for details). The PSF and the true image are the same as in the previous test (see Figure 5.2).

The first method used in [1] is basically a nonnegatively constrained RNSD method, where at any iteration the convergence parameter τ is the result of a more involved minimization procedure which guarantees the nonnegativity of the restoration [25]. A BCCB preconditioned version of this first method is also considered (RNSD/PRNSD).
[Figure: blurred and 1% noisy data (top); restoration with RRE = 0.4084, It = 28, α = 0.04 (bottom left); restoration with RRE = 0.3326, It = 200, α = 0.1 (bottom right).]
Fig. 5.8. Test 2—Input data of [1] and restorations of the Landweber method with positivity, filter F1.
[Figure: RRE curves for the unpreconditioned method and for α = 0.04, 0.06, 0.08, 0.10.]
Fig. 5.9. Test 2—RREs versus the first 200 iterations, filter F1; satellite blurred data with 1% of noise of [1].
As similarly noticed in the previous test, the convergence history of Figure 5.9 shows that the unpreconditioned Landweber algorithm is slower than all of these more sophisticated methods, as can be seen by a direct comparison with [1, Figure 4.4 (left)]. The preconditioned version is a bit faster than both RNSD and WRNSD (about 70 iterations are needed by these two
methods to achieve an RRE value of 0.4), but it is much slower than the preconditioned RNSD method. On the other hand, the convergence curves of Figure 5.9 show again that the projected Landweber algorithm has a very regular semiconvergence behavior, more pronounced than that of the other approaches. When speed is not the primary aim, the method becomes competitive, also in its preconditioned version, due to its very high regularization capabilities; in particular, the restorations are quite good, and the application of a stopping rule may be simplified. For instance, Figures 5.8 (bottom left) and 5.8 (bottom right) show two different restorations, the first related to a good compromise between speed and accuracy and the second to high accuracy.

6. Conclusions. On account of its reliability and ease of implementation, we have proposed a regularizing preconditioned version of the Landweber algorithm for space-invariant image deblurring, which speeds up the convergence without losing the regularization effectiveness. Our theoretical contribution has proceeded along two main directions. First, a basic analysis of the method in a simplified setting related to periodic boundary conditions has been given, in agreement with the theory developed by Strand [34] but described here in the language of linear algebra. In this way, the convergence properties of the algorithm have been readily characterized, providing practical rules for the choice of the several parameters of the method. Second, Dirichlet boundary conditions have been considered as a case study of other settings not covered in [34]. In particular, by proving that the eigenvectors of the preconditioned matrix are distributed like the Fourier vectors, we have shown that the algorithm is able to speed up the convergence along the eigendirections related to the signal space only. These arguments also extend the results of [20] and the subsequent literature, where signal and noise spaces were described in terms of low and high frequencies, regardless of the eigenvectors of the preconditioned matrix.

The numerical results have confirmed the main properties of the method, namely robustness and flexibility; on these grounds, the method may become a valid tool for the solution of inverse problems arising in real applications. More precisely, in seismology the Landweber method has been used to estimate the source time function [3] because it provides numerically stable and physically reasonable solutions by introducing positivity, causality, and support constraints. In astronomy the projected Landweber (PL) method is considered a routine method for the analysis of large binocular telescope images [5], together with Tikhonov regularization and ordered subsets-expectation maximization (OS-EM); in addition, as stated in [7], Landweber should be preferred to other methods because it is more flexible and modifiable (e.g., by taking into account the support of localized objects) in order to obtain superresolution effects. Again, the PL method is the routine method used for the restoration of chopped and nodded images in thermal infrared astronomy [6]; it is able to determine the response function in chirp-pulse microwave computerized tomography [29] in a reasonable number of iterations (as discussed in section 3, without problems of parameter estimation); even for more ill-conditioned problems in multiple-image deblurring [38], Landweber is more robust than the other methods in the literature.
Indeed, other widely used strategies, such as the CGLS, GMRES, or MRNSD methods [38], are less reliable: although much faster, these methods may lead to less accurate restorations than the proposed method when the signal-to-noise ratio is not large or the problem is severely ill-posed (as formerly noticed in [17]). For instance, in these faster methods it is essential to choose the filtering preconditioner
with very high accuracy and to stop the iterations very close to the optimal point, and both of these tasks are as crucial as they are difficult. On the contrary, the regularizing preconditioned Landweber method with positivity allows us to operate with a higher margin of safety, since it is much less sensitive to nonoptimal settings of the parameters.

Appendix A. Convergence for τ = 1. As observed in section 3, τ = 1 is the best parameter choice in order to speed up the preconditioned Landweber method. In any case, we must ensure that this value is suitable for convergence along the directions related to the signal; this is true if the preconditioned eigenvalue $\lambda_j^M$ is less than 2 for any spectral component of interest. In this appendix we give a formal proof of the inequality $\lambda_j^M < 2$ for all $j$, assuming periodic boundary conditions.

In what follows, we will use the same notation as in section 2.2 concerning $\lambda_j^D$, $\lambda_j^M$, $\lambda_j^A$: notice that all of these eigenvalues implicitly depend on $n = (n_1, n_2)$. The first step consists of proving a "spectral equivalence" between $A^*A$ and $B_{\mathrm{opt}}^* B_{\mathrm{opt}}$ (from which the definition of the preconditioner $D$ comes).

Lemma A.1. Let $\lambda_j^B := \lambda_j(B_{\mathrm{opt}}^* B_{\mathrm{opt}})$, $B_{\mathrm{opt}}$ being the optimal T. Chan approximation of the PSF. Then
$$\lim_{n \to \infty}\; \max_{j=1,\dots,N} |\lambda_j^A - \lambda_j^B| = 0.$$
Proof. Since we are imposing periodic boundary conditions, as observed in section 2.2 both $A$ and $B_{\mathrm{opt}}$ are square circulant matrices, representing Strang or T. Chan approximations of the PSF matrix. It is well known (see, e.g., [9, Lemma 5] or [36, Theorem 7.1]) that they are spectrally equivalent in the following sense:

(A.1) $\quad \lim_{n \to \infty}\; \max_{j=1,\dots,N} |\lambda_j(A) - \lambda_j(B_{\mathrm{opt}})| = 0.$
The circulant algebraic structure of the matrices involved yields the relations
$$\lambda_j^A = \lambda_j(A^*A) = |\lambda_j(A)|^2, \qquad \lambda_j^B = \lambda_j(B_{\mathrm{opt}}^* B_{\mathrm{opt}}) = |\lambda_j(B_{\mathrm{opt}})|^2;$$
thus
$$|\lambda_j^A - \lambda_j^B| = |\overline{\lambda_j(A)}\lambda_j(A) - \overline{\lambda_j(B_{\mathrm{opt}})}\lambda_j(B_{\mathrm{opt}})| \le |\lambda_j(A)|\,|\lambda_j(A) - \lambda_j(B_{\mathrm{opt}})| + |\lambda_j(B_{\mathrm{opt}})|\,|\lambda_j(A) - \lambda_j(B_{\mathrm{opt}})| \le (\|A\|_2 + \|B_{\mathrm{opt}}\|_2)\cdot|\lambda_j(A) - \lambda_j(B_{\mathrm{opt}})|.$$
Applying (A.1) and the uniform boundedness of $A$ and $B_{\mathrm{opt}}$, the thesis follows.

Theorem A.2. Under the assumption of periodic boundary conditions, if the threshold parameter α is sufficiently small, then any filter listed in Table 2.1 determines a preconditioner $D$ such that $\|DA^*A\|_2 < 2$.

Proof. Since
$$\|DA^*A\|_2 = \max_{j=1,\dots,N} |\lambda_j^D \lambda_j^A|$$
and $\lambda_j^D = F_\alpha(\lambda_j^B)$ can change its expression according to the comparison between $\lambda_j^B$ and α, we distinguish two cases.
Case $\lambda_j^B \ge \alpha$. It is easy to check that the inequality $\lambda_j^D \le 1/\lambda_j^B$ holds for all of the filters I–VIII of Table 2.1. Set $\epsilon = \alpha$, and apply Lemma A.1; for $n$ large enough, we have $\lambda_j^A < \lambda_j^B + \alpha$ for all $j$ and then
$$0 < \lambda_j^D \lambda_j^A \le \frac{\lambda_j^A}{\lambda_j^B} < \frac{\lambda_j^B + \alpha}{\lambda_j^B} = 1 + \frac{\alpha}{\lambda_j^B} \le 2.$$

Case $\lambda_j^B < \alpha$. As before, $\lambda_j^A < \lambda_j^B + \alpha$ for large $n$, and in this case we obtain $\lambda_j^A < 2\alpha$. Examining the eight filters of Table 2.1, it can be observed that $\lambda_j^D$ always has an upper bound: $1$ for filter III and $\alpha^{-1}$ for the others. It follows that
$$0 < \lambda_j^D \lambda_j^A < \begin{cases} 2\alpha & \text{for filter III,}\\ 2 & \text{for filters I, II, IV–VIII.} \end{cases}$$
Hence the thesis is proved, with the only restriction α ≤ 1 for filter III. It is worth noticing that the maximal eigenvalue is attained for the indices $j$ such that $\lambda_j^B \approx \alpha$ or even smaller: if the same situation occurs for other boundary conditions, we could justify the conjecture that large outliers (not belonging to the interval $(0, 2)$) correspond to eigendirections outside the signal subspace, as confirmed by the experiments, where the semiconvergence property of the method is preserved.

Acknowledgment. The authors express their thanks to Prof. J. Nagy, who kindly provided the blurred data for Test 2 of section 5.

REFERENCES

[1] J. Bardsley and J. Nagy, Covariance preconditioned iterative methods for nonnegatively constrained astronomical imaging, SIAM J. Matrix Anal. Appl., 27 (2006), pp. 1184–1197.
[2] M. Benzi, Gianfranco Cimmino's contributions to numerical mathematics, in Atti del Seminario di Analisi Matematica, Volume Speciale: Ciclo di Conferenze in Memoria di Gianfranco Cimmino, 2004, Dipartimento di Matematica dell'Università di Bologna, Tecnoprint, Bologna, 2005, pp. 87–109.
[3] M. Bertero, D. Bindi, P. Boccacci, M. Cattaneo, C. Eva, and V. Lanza, Application of the projected Landweber method to the estimation of the source time function in seismology, Inverse Problems, 13 (1997), pp. 465–486.
[4] M. Bertero and P. Boccacci, Introduction to Inverse Problems in Imaging, Institute of Physics, Bristol, UK, 1998.
[5] M. Bertero and P. Boccacci, Image restoration methods for the large binocular telescope, Astron. Astrophys. Suppl. Ser., 147 (2000), pp. 323–332.
[6] M. Bertero, P. Boccacci, F. Di Benedetto, and M. Robberto, Restoration of chopped and nodded images in infrared astronomy, Inverse Problems, 15 (1999), pp. 345–372.
[7] M. Bertero and C. De Mol, Super-resolution by data inversion, in Progress in Optics XXXVI, E. Wolf, ed., Elsevier, Amsterdam, 1996, pp. 129–178.
[8] D. Calvetti, B. Lewis, and L. Reichel, On the regularizing properties of the GMRES method, Numer. Math., 91 (2002), pp. 605–625.
[9] R. Chan and X.-Q. Jin, A family of block preconditioners for block systems, SIAM J. Sci. Statist. Comput., 13 (1992), pp. 1218–1235.
[10] T. F. Chan, An optimal circulant preconditioner for Toeplitz systems, SIAM J. Sci. Statist. Comput., 9 (1988), pp. 766–771.
[11] F. Di Benedetto, Solution of Toeplitz normal equations by sine transform based preconditioning, Linear Algebra Appl., 285 (1998), pp. 229–255.
[12] B. Eicke, Iteration methods for convexly constrained ill-posed problems in Hilbert space, Numer. Funct. Anal. Optim., 13 (1992), pp. 413–429.
[13] H. W. Engl and P. Kügler, Nonlinear inverse problems: Theoretical aspects and some industrial applications, in Multidisciplinary Methods for Analysis Optimization and Control of Complex Systems, Math. Ind. 7, Springer, Berlin, 2004, pp. 3–48.
[14] H. W. Engl, M. Hanke, and A. Neubauer, Regularization of Inverse Problems, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1996.
[15] C. Estatico, A classification scheme for regularizing preconditioners, with application to Toeplitz systems, Linear Algebra Appl., 397 (2005), pp. 107–131.
[16] C. W. Groetsch, The Theory of Tikhonov Regularization for Fredholm Equations of the First Kind, Res. Notes Math. 105, Pitman, Boston, 1984.
[17] M. Hanke, Accelerated Landweber iterations for the solutions of ill-posed problems, Numer. Math., 60 (1991), pp. 341–373.
[18] M. Hanke, A second look at Nemirovskii's analysis of the conjugate gradient method, in Beiträge zur angewandten Analysis und Informatik, E. Schock, ed., Shaker Verlag, Aachen, Germany, 1994, pp. 123–135.
[19] M. Hanke and J. Nagy, Inverse Toeplitz preconditioners for ill-posed problems, Linear Algebra Appl., 284 (1998), pp. 137–156.
[20] M. Hanke, J. Nagy, and R. Plemmons, Preconditioned iterative regularization for ill-posed problems, in Numerical Linear Algebra (Kent, OH, 1992), de Gruyter, Berlin, 1993, pp. 141–163.
[21] M. Hanke, J. Nagy, and C. Vogel, Quasi-Newton approach to nonnegative image restorations, Linear Algebra Appl., 316 (2000), pp. 223–236.
[22] P. C. Hansen, Truncated singular value decomposition solutions to discrete ill-posed problems with ill-determined numerical rank, SIAM J. Sci. Statist. Comput., 11 (1990), pp. 503–518.
[23] A. Jain, Fundamentals of Digital Image Processing, Prentice-Hall, Englewood Cliffs, NJ, 1989.
[24] J. Kamm and J. Nagy, Kronecker product and SVD approximations in image restoration, Linear Algebra Appl., 284 (1998), pp. 177–192.
[25] L. Kaufman, Maximum likelihood, least squares, and penalized least squares for PET, IEEE Trans. Med. Imag., 12 (1993), pp. 200–214.
[26] M. E. Kilmer and D. P. O'Leary, Pivoted Cauchy-like preconditioners for regularized solution of ill-posed problems, SIAM J. Sci. Comput., 21 (1999), pp. 88–110.
[27] L. Landweber, An iteration formula for Fredholm integral equations of the first kind, Amer. J. Math., 73 (1951), pp. 615–624.
[28] K. Lee and J. Nagy, Steepest descent, CG and iterative regularization of ill-posed problems, BIT, 43 (2003), pp. 1003–1017.
[29] M. Miyakawa, K. Orikasa, M. Bertero, P. Boccacci, F. Conte, and M. Piana, Experimental validation of a linear model for data reduction in chirp-pulse microwave CT, IEEE Trans. Med. Imag., 21 (2002), pp. 385–395.
[30] M. Ng, Iterative Methods for Toeplitz Systems, Oxford University Press, London, 2004.
[31] M. Piana and M. Bertero, Projected Landweber method and preconditioning, Inverse Problems, 13 (1997), pp. 441–464.
[32] M. C. Roggemann and B. Welsh, Imaging through Turbulence, CRC Press, Boca Raton, FL, 1996.
[33] S. Serra-Capizzano, A note on antireflective boundary conditions and fast deblurring models, SIAM J. Sci. Comput., 25 (2003), pp. 1307–1325.
[34] O. N. Strand, Theory and methods related to the singular-function expansion and Landweber's iteration for integral equations of the first kind, SIAM J. Numer. Anal., 11 (1974), pp. 798–825.
[35] G. Strang, A proposal for Toeplitz matrix calculations, Stud. Appl. Math., 74 (1986), pp. 171–176.
[36] E. E. Tyrtyshnikov, A unifying approach to some old and new theorems on distribution and clustering, Linear Algebra Appl., 232 (1996), pp. 1–43.
[37] E. E. Tyrtyshnikov, A. Y. Yeremin, and N. L. Zamarashkin, Clusters, preconditioners, convergence, Linear Algebra Appl., 263 (1997), pp. 25–48.
[38] R. Vio, J. Nagy, L. Tenorio, and W. Wamsteker, A simple but efficient algorithm for multiple-image deblurring, Astron. Astrophys., 416 (2004), pp. 403–410.
[39] N. L. Zamarashkin and E. E. Tyrtyshnikov, Distribution of eigenvalues and singular values of Toeplitz matrices under weakened conditions on the generating function, Russian Acad. Sci. Sb. Math., 188 (1997), pp. 1191–1201.
[40] N. L. Zamarashkin and E. E. Tyrtyshnikov, On the distribution of eigenvectors of Toeplitz matrices with weakened requirements on the generating function, Russian Math. Surveys, 52 (1997), pp. 1333–1334.
SIAM J. SCI. COMPUT. Vol. 30, No. 3, pp. 1459–1473
© 2008 Society for Industrial and Applied Mathematics

AN AUGMENTED LAGRANGIAN APPROACH TO LINEARIZED PROBLEMS IN HYDRODYNAMIC STABILITY∗

MAXIM A. OLSHANSKII† AND MICHELE BENZI‡

Abstract. The solution of linear systems arising from the linear stability analysis of solutions of the Navier–Stokes equations is considered. Due to indefiniteness of the submatrix corresponding to the velocities, these systems pose a serious challenge for iterative solution methods. In this paper, the augmented Lagrangian-based block triangular preconditioner introduced by the authors in [SIAM J. Sci. Comput., 28 (2006), pp. 2095–2113] is extended to this class of problems. We prove eigenvalue estimates for the velocity submatrix and deduce several representations of the Schur complement operator which are relevant to the numerical properties of the augmented system. Numerical experiments on several model problems demonstrate the effectiveness and robustness of the preconditioner over a wide range of problem parameters.

Key words. Navier–Stokes equations, incompressible flow, linear stability analysis, eigenvalues, finite elements, preconditioning, iterative methods, multigrid

AMS subject classifications. 65F10, 65N22, 65F50

DOI. 10.1137/070691851
1. Introduction. In this paper we consider the numerical solution of the following problem: Given a mean velocity field U, a forcing term f, a scalar α ≥ 0, and a viscosity coefficient ν, find a velocity-pressure pair {u, p} which solves

(1.1) $\quad -\nu\Delta u - \alpha u + (U \cdot \nabla)u + (u \cdot \nabla)U + \nabla p = f$ in Ω,
(1.2) $\quad -\mathrm{div}\, u = 0$ in Ω,
(1.3) $\quad u = 0$ on ∂Ω

on a given domain $\Omega \subset \mathbb{R}^d$ (with d = 2 or 3). We assume Ω to be bounded and with a sufficiently smooth boundary ∂Ω, except in section 4, where we briefly consider the case $\Omega = \mathbb{R}^d$. Imposing on the pressure the additional condition $\int_\Omega p\, dx = 0$, we assume the system to have exactly one solution. This problem typically arises in the linear stability analysis of solutions of the Navier–Stokes equations; see, e.g., [8, section 7.2.1]. Such analysis leads to the solution of an eigenvalue problem, in particular, to the determination of eigenvalues close to the imaginary axis. Indeed, a necessary condition for the (original) flow solution to be linearly stable is that the real parts of all the eigenvalues are negative. This type of analysis is especially useful in the determination of values of the Reynolds number above which a steady state flow becomes unstable. Shift-and-invert type methods are often used for the solution of the eigenvalue problem, leading (on the continuous level) to systems of the form (1.1)–(1.3); see, e.g., [5].

∗Received by the editors May 15, 2007; accepted for publication (in revised form) September 7, 2007; published electronically April 9, 2008. http://www.siam.org/journals/sisc/30-3/69185.html
†Department of Mechanics and Mathematics, Moscow State M. V. Lomonosov University, Moscow 119899, Russia (
[email protected]). The work of this author was supported in part by the Russian Foundation for Basic Research grants 06-01-04000 and 05-01-00864 and by a Visiting Fellowship granted by the Emerson Center for Scientific Computation at Emory University. ‡ Department of Mathematics and Computer Science, Emory University, Atlanta, GA 30322 (
[email protected]). The work of this author was supported in part by the National Science Foundation grant DMS-0511336.
As a prototypical problem (with U = 0) for the linearized Navier–Stokes equations we also consider the following indefinite Stokes-type problem:

(1.4) $\quad -\nu\Delta u - \alpha u + \nabla p = f$ in Ω,
(1.5) $\quad -\mathrm{div}\, u = 0$ in Ω,
(1.6) $\quad u = 0$ on ∂Ω.

This problem also arises in the stability analysis of the Ladyzhenskaya–Babuška–Brezzi (LBB) condition for incompressible finite elements for linear elasticity or Stokes flow; see [15]. It is worth noting that similar problems arise in other contexts as well, e.g., electromagnetism. As we shall see, the development of solvers for (1.4)–(1.6) and a good understanding of their capabilities and limitations are crucial steps towards efficient numerical solution methods for (1.1)–(1.3); these, in turn, are necessary for analyzing the spectra (or pseudospectra, see [23, 24]) of operators arising in fluid mechanics.

Discretization of (1.4)–(1.6) using LBB-stable finite elements (see, e.g., [8]) results in a saddle point system of the form

(1.7) $\quad \begin{pmatrix} A - \alpha M_u & B^T \\ B & 0 \end{pmatrix} \begin{pmatrix} u \\ p \end{pmatrix} = \begin{pmatrix} f \\ 0 \end{pmatrix},$

where A is the discretization of the vector Laplacian, $M_u$ the velocity mass matrix, and $B^T$ the discrete gradient. Note that A is symmetric positive definite, whereas $A - \alpha M_u$ is indefinite for α > 0 sufficiently large, making the system (1.7) difficult to solve. In the case of the full system (1.1)–(1.3), the matrix A also contains the discretization of the first order terms in (1.1) and is nonsymmetric. Again, the matrix $A - \alpha M_u$ will generally have eigenvalues on both sides of the imaginary axis, making the solution of system (1.7) by iterative methods a challenge. The present paper is devoted to the development of such methods, building on the work described in [2] for the case α = 0.

The remainder of this paper is organized as follows. In section 2 we briefly recall the augmented Lagrangian-based block preconditioner from [2]. Section 3 is devoted to an analysis of the spectrum of the (1,1) block in the augmented system corresponding to the discrete Stokes-like problem (1.7) with A symmetric and positive definite. Analysis of the preconditioner also requires knowledge of the eigenvalue distribution of the Schur complement of the augmented system; some analysis of the spectrum of this operator is presented, for a few different model problems, in section 4. Numerical experiments illustrating the performance of the preconditioner are discussed in section 5. Some concluding remarks are given in section 6.

2. Augmented Lagrangian approach. Here we briefly recall the augmented Lagrangian (AL) approach used in [2] for the case with α = 0. For convenience, define $A_\alpha := A - \alpha M_u$. The original system (1.7) is replaced with the equivalent one

(2.1) $\quad \begin{pmatrix} A_\alpha + \gamma B^T W^{-1} B & B^T \\ B & 0 \end{pmatrix} \begin{pmatrix} u \\ p \end{pmatrix} = \begin{pmatrix} f \\ 0 \end{pmatrix},$

where γ > 0 is a parameter and $W \approx M_p$ is a diagonal approximation to the pressure mass matrix. In our case W is a scaled identity. We consider a block triangular preconditioner of the form

(2.2) $\quad \mathcal{P} = \begin{pmatrix} \hat A_\alpha & B^T \\ 0 & \hat S \end{pmatrix}.$
Here the matrix $\hat A_\alpha$ is not given explicitly; rather, $\hat A_\alpha^{-1}$ represents an inexact solver for linear systems involving the matrix $A_\alpha + \gamma B^T W^{-1} B$. For the case α = 0, excellent results were obtained in [2] with a multigrid iteration based on a method due to Schöberl [22]. We discuss the approximate multigrid solver $\hat A_\alpha^{-1}$ in section 5. For the choice of $\hat S$ we consider two possibilities: a simple scaled mass matrix preconditioner

(2.3) $\quad \hat S^{-1} := -(\nu + \gamma) M_p^{-1},$

and one which takes into account the presence of the α-term, namely

(2.4) $\quad \hat S^{-1} := -(\nu + \gamma) M_p^{-1} + \alpha \left( B \hat M_u^{-1} B^T \right)^{-1},$

where $\hat M_u$ is a diagonal approximation to the velocity mass matrix. Note that $B \hat M_u^{-1} B^T$ can be seen as a mixed approximation to the pressure Poisson problem with Neumann boundary conditions and that (2.4) resembles the Cahouet–Chabard Schur complement preconditioner [4] initially proposed for the instationary Stokes problem. However, since the reactive term in (1.4) is now negative, the α-term enters (2.4) with the opposite sign compared to the Cahouet–Chabard preconditioner.

The block triangular preconditioner (2.2) can be used with any Krylov subspace method for nonsymmetric linear systems, such as GMRES [21] or BiCGStab [26]; if, however, the action of $\hat A_\alpha^{-1}$ or of $\hat S^{-1}$ is computed via a nonstationary inner iteration, then a flexible variant (such as FGMRES [20]) must be used. We note that some preliminary experiments with a block triangular preconditioner (2.2) for systems of the form (1.7) arising from marker-and-cell (MAC) discretizations of flow problems can be found in [1]. The results in [1] show the good performance of the AL-based approach, especially in terms of robustness with respect to problem and algorithmic parameters. In that paper, however, the crucial question of how to efficiently approximate the action of $(A_\alpha + \gamma B^T W^{-1} B)^{-1}$ was left open. In this paper we propose some reasonably effective ways to address this difficult problem.
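To fix ideas, the action of the block triangular preconditioner (2.2) on a residual can be sketched in a few lines of Python. This is only an illustration: the (1,1) block is solved exactly by a sparse direct solver, whereas in the experiments below its inverse is replaced by one multigrid V(1,1) cycle, and $\hat S$ is taken diagonal as in (2.3).

```python
import scipy.sparse.linalg as spla

def apply_prec(Ahat, B, Shat_diag, r_u, r_p):
    """Apply P^{-1}, with P = [[Ahat, B^T], [0, Shat]], to a residual (r_u, r_p):
    first solve the pressure block, then the velocity block with the coupling
    term B^T p moved to the right-hand side."""
    p = r_p / Shat_diag                    # Shat diagonal, e.g., -(nu+gamma) M_p
    u = spla.spsolve(Ahat, r_u - B.T @ p)  # Ahat stands for A_alpha + gamma B^T W^{-1} B
    return u, p
```

When the exact solve is replaced by an inner iteration, the resulting nonstationary preconditioner must be used with a flexible outer method such as FGMRES, as noted above.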
3. Eigenvalue estimates. In this section we analyze the eigenvalues of the submatrix $A + \gamma B^T W^{-1} B$ in the augmented problem (2.1) corresponding to the Stokes problem (with α = 0). Information about its eigenvalue distribution is of interest since it helps to understand the performance of the (inexact) multigrid solver for the (1,1) block of (2.1), which is an essential component of the entire approach. In particular, we will show that under certain assumptions the eigenvalues of the problem

(3.1) $\quad (A + \gamma B^T W^{-1} B) u = \lambda_\gamma u$

tend for γ → ∞ to the (generalized) eigenvalues of the problem

(3.2) $\quad \begin{pmatrix} A & B^T \\ B & 0 \end{pmatrix} \begin{pmatrix} u \\ p \end{pmatrix} = \lambda \begin{pmatrix} I_n & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} u \\ p \end{pmatrix}.$

To show this we need the following assumptions. Let $A \in \mathbb{R}^{n \times n}$ be symmetric positive definite, i.e.,

(3.3) $\quad A = A^T$ and $c_1 I_n \le A$

with some $c_1 > 0$. In (3.3), we have used "≤" to denote the usual positive semidefinite ordering. Let $B \in \mathbb{R}^{m \times n}$. Assume that the matrix $S = -B A^{-1} B^T$ is also nonsingular.
Owing to (3.3), this is equivalent to assuming that

(3.4) $\quad S = S^T$ and $c_2 I_m \le -S,$

with some $c_2 > 0$. Note that these two assumptions together imply that the block matrix on the left-hand side of (3.2) is also nonsingular and $n \ge m$. Also assume that $W \in \mathbb{R}^{m \times m}$ is symmetric positive definite. Finally, let $c_3$ be a positive constant from the estimate

(3.5) $\quad \|Bv\| \le c_3 \|A^{\frac12} v\| \quad \forall\, v \in \mathbb{R}^n.$
We note that the constants in (3.3)–(3.5) do not depend on γ, but may depend on n and m, which are often related to the discretization parameter h. For the Stokes problem additional discussion is given in Remark 3.2. With the above assumptions the main result of this section is the following theorem on the generalized eigenvalues of (3.2). Here and throughout the paper, the matrix norm used is the spectral norm.

Theorem 3.1. The problem (3.2) has $n - m$ real finite eigenvalues $0 < \lambda_1 \le \dots \le \lambda_{n-m}$. There are $n - m$ eigenvalues $0 < \lambda_{\gamma,1} \le \dots \le \lambda_{\gamma,n-m}$ of (3.1) such that

(3.6) $\quad |\lambda_k^{-1} - \lambda_{\gamma,k}^{-1}| \le C\, \gamma^{-1} \|W\|$

with $C = (1 + c_1^{-\frac12} c_3)^2 c_2^{-2}$. The remaining m eigenvalues of (3.1) can be estimated from below as

(3.7) $\quad \lambda_{\gamma,k} \ge C^{-1} \gamma\, \|W\|^{-1}.$
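Before turning to the proof, the convergence stated in (3.6) is easy to observe numerically. The following small experiment (an illustration with random data, not taken from the paper) builds a dense symmetric positive definite A and a full-rank B, computes the n − m generalized eigenvalues of (3.2) through an orthonormal basis of ker(B), and compares them with the smallest n − m eigenvalues of (3.1) for growing γ; the error on the left-hand side of (3.6) decays like $\gamma^{-1}$.

```python
import numpy as np
from scipy.linalg import eigh, null_space

rng = np.random.default_rng(0)
n, m = 20, 8
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # symmetric positive definite
B = rng.standard_normal((m, n))      # full rank (with probability 1)
W = np.eye(m)

# Generalized eigenvalues of (3.2): eigenvalues of A restricted to ker(B).
Z = null_space(B)                    # orthonormal basis of ker(B), shape n x (n-m)
lam = np.sort(eigh(Z.T @ A @ Z, eigvals_only=True))

for gamma in [1e1, 1e2, 1e3, 1e4]:
    Ag = A + gamma * B.T @ np.linalg.solve(W, B)
    lam_g = np.sort(eigh(Ag, eigvals_only=True))[: n - m]
    err = np.max(np.abs(1 / lam - 1 / lam_g))    # left-hand side of (3.6)
    print(f"gamma = {gamma:8.0f}:  max |1/lam_k - 1/lam_gamma,k| = {err:.2e}")
```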
Proof. From the assumption (3.4) we conclude that B has full rank and thus dim(ker(B)) = n − m. Let $P : \mathbb{R}^n \to \ker(B)$ be the orthogonal projector. The problem (3.2) is equivalent to the eigenvalue problem $PAu = \lambda u$ for $u \in \ker(B)$. Since the operator $PA$ is self-adjoint and positive definite on the kernel of B, the problem has n − m positive real eigenvalues. Denoting $p = Bu$, we rewrite (3.1) as

(3.8) $\quad \begin{pmatrix} A & B^T \\ B & -\gamma^{-1} W \end{pmatrix} \begin{pmatrix} u \\ p \end{pmatrix} = \begin{pmatrix} \lambda_\gamma u \\ 0 \end{pmatrix}.$

We will also use the following notations for the block matrices:
$$\mathcal{A}_\gamma := \begin{pmatrix} A & B^T \\ B & -\gamma^{-1} W \end{pmatrix}, \qquad \mathcal{I}_\delta := \begin{pmatrix} I_n & 0 \\ 0 & \delta I_m \end{pmatrix}.$$
Letting $\mu_k = \lambda_k^{-1}$, $\mu_{\gamma,k} = \lambda_{\gamma,k}^{-1}$, and
$$\mathcal{A} := \mathcal{A}_\infty = \begin{pmatrix} A & B^T \\ B & 0 \end{pmatrix},$$
we can rewrite (3.2) and (3.8) in the form

(3.9) $\quad \mathcal{A}^{-1} \mathcal{I}_0 x = \mu x, \qquad \mathcal{A}_\gamma^{-1} \mathcal{I}_0 x = \mu_\gamma x.$

All eigenvalues of (3.9) are real and nonnegative. Positive $\mu_k$ and $\mu_{\gamma,k}$ correspond to finite real eigenvalues of (3.2) and (3.8), while zero $\mu_k$ and $\mu_{\gamma,k}$ correspond to infinite
eigenvalues of (3.2) and (3.8). Proving (3.6) and (3.7) is equivalent to showing the upper bound

(3.10) $\quad |\mu_k - \mu_{\gamma,k}| \le C\, \gamma^{-1} \|W\|$

for all eigenvalues of $\mathcal{A}^{-1}\mathcal{I}_0$ and $\mathcal{A}_\gamma^{-1}\mathcal{I}_0$, with $C = (1 + c_1^{-\frac12} c_3)^2 c_2^{-2}$. Consider the following auxiliary eigenvalue problem:

(3.11) $\quad \mathcal{A}_\gamma^{-1} \mathcal{I}_\delta x = \mu_\gamma^\delta x$

with some δ > 0. As before, the case γ = ∞ will denote the matrix $\mathcal{A}_\infty = \mathcal{A}$ with zero (2,2) block. The matrix $\mathcal{A}_\gamma^{-1}\mathcal{I}_\delta$ is nonsingular and self-adjoint in the $\langle \mathcal{I}_\delta\, \cdot, \cdot \rangle$ scalar product. By the triangle inequality we get

(3.12) $\quad |\mu_k - \mu_{\gamma,k}| \le |\mu_k - \mu_{\infty,k}^\delta| + |\mu_{\infty,k}^\delta - \mu_{\gamma,k}^\delta| + |\mu_{\gamma,k}^\delta - \mu_{\gamma,k}|.$

For the second term on the right-hand side of (3.12) we will prove the estimate

(3.13) $\quad |\mu_{\infty,k}^\delta - \mu_{\gamma,k}^\delta| \le C\, \gamma^{-1} \|W\|$

with $C = (1 + c_1^{-\frac12} c_3)^2 c_2^{-2}$ independent of δ (for small enough values of δ). Since the eigenvalues are continuous functions of the matrix elements, the first and third terms on the right-hand side of (3.12) vanish as δ → 0. Therefore, passing to the limit in (3.12) with δ → 0 we obtain the desired bound (3.10).

It remains to prove (3.13). Consider the block factorization
$$\mathcal{A}_\gamma^{-1} = \begin{pmatrix} I_n & -A^{-1} B^T \\ 0 & I_m \end{pmatrix} \begin{pmatrix} A^{-1} & 0 \\ 0 & S_\gamma^{-1} \end{pmatrix} \begin{pmatrix} I_n & 0 \\ -B A^{-1} & I_m \end{pmatrix},$$
where $S_\gamma = -B A^{-1} B^T - \gamma^{-1} W$. Using this factorization we obtain

(3.14) $\quad \|\mathcal{A}^{-1} - \mathcal{A}_\gamma^{-1}\| \le (1 + \|A^{-1} B^T\|)^2\, \|S^{-1} - S_\gamma^{-1}\|$
for any γ ∈ (0, ∞]. Using (3.3) and (3.5) we immediately get $\|A^{-1} B^T\| \le c_1^{-\frac12} c_3$. For the last term in (3.14) we obtain

(3.15) $\quad \|S^{-1} - S_\gamma^{-1}\| \le \|S^{-1} S_\gamma - I_m\|\, \|S_\gamma^{-1}\| = \gamma^{-1} \|S^{-1} W\|\, \|S_\gamma^{-1}\| \le \gamma^{-1} \|S^{-1}\|\, \|S_\gamma^{-1}\|\, \|W\| \le \gamma^{-1} c_2^{-2} \|W\|.$

In the last inequality we used the symmetry and positive definiteness of W and (3.4) to conclude that $c_2 I_m \le -S \le -S + \gamma^{-1} W = -S_\gamma$ and thus $\|S^{-1}\| \le c_2^{-1}$ and $\|S_\gamma^{-1}\| \le c_2^{-1}$. Substituting the bound (3.15) into (3.14) we get

(3.16) $\quad \|\mathcal{A}^{-1} - \mathcal{A}_\gamma^{-1}\| \le (1 + c_1^{-\frac12} c_3)^2 c_2^{-2}\, \gamma^{-1} \|W\|.$
The Courant–Fischer theorem gives for the kth eigenvalue of problem (3.11) the characterization
$$\mu_{\gamma,k}^\delta = \max_{S \in V_{k-1}} \; \min_{0 \ne y \in S^\perp} \frac{\langle \mathcal{A}_\gamma^{-1} \mathcal{I}_\delta y, \mathcal{I}_\delta y \rangle}{\langle \mathcal{I}_\delta y, y \rangle},$$
where $V_{k-1}$ denotes the family of all (k − 1)-dimensional subspaces of $\mathbb{R}^{n+m}$. The inequality $\min_y (a(y) + b(y)) \le \min_y a(y) + \max_y b(y)$ implies that $\min_y a(y) - \min_y b(y) \le \max_y (a(y) - b(y))$. Using this we estimate, assuming δ ∈ (0, 1],
$$\mu_{\infty,k}^\delta - \mu_{\gamma,k}^\delta \le \max_{S \in V_{k-1}} \max_{y \in S^\perp} \frac{\langle (\mathcal{A}^{-1} - \mathcal{A}_\gamma^{-1}) \mathcal{I}_\delta y, \mathcal{I}_\delta y \rangle}{\langle \mathcal{I}_\delta y, y \rangle} \le \max_{y \in \mathbb{R}^{n+m}} \frac{\langle (\mathcal{A}^{-1} - \mathcal{A}_\gamma^{-1}) \mathcal{I}_\delta y, \mathcal{I}_\delta y \rangle}{\langle \mathcal{I}_\delta y, y \rangle} \le \|\mathcal{A}^{-1} - \mathcal{A}_\gamma^{-1}\| \max_{y \in \mathbb{R}^{n+m}} \frac{\langle \mathcal{I}_\delta y, \mathcal{I}_\delta y \rangle}{\langle \mathcal{I}_\delta y, y \rangle} \le \|\mathcal{A}^{-1} - \mathcal{A}_\gamma^{-1}\| \le (1 + c_1^{-\frac12} c_3)^2 c_2^{-2}\, \gamma^{-1} \|W\|.$$
One can estimate the difference $\mu_{\gamma,k}^\delta - \mu_{\infty,k}^\delta$ in the same way, so we have the desired bound on $|\mu_{\infty,k}^\delta - \mu_{\gamma,k}^\delta|$, i.e., inequality (3.13). The theorem is proved.

Remark 3.2. Assume that the matrix on the left-hand side of (3.2) results from an LBB-stable finite element (or finite difference) discretization of the Stokes problem and W is the diagonal approximation to the mass matrix (or the m × m identity). Then the assumptions of Theorem 3.1 are satisfied. Depending on the boundary conditions for the Stokes problem, the matrix S may have a one-dimensional kernel. This singularity of S can be overcome by restricting the discrete pressure to lie in the subspace of all functions $p_h$ satisfying $(p_h, 1) = 0$. Moreover, if the mesh is quasi-uniform and W is the diagonal of the pressure mass matrix, then for finite element discretizations one has in (3.6) and (3.7) that $C\|W\| = O(h^{-d})$, where h denotes the mesh size. Indeed, it can be easily shown that the standard ellipticity, continuity, and stability conditions for the corresponding finite element bilinear forms imply $c_1 = O(h^d)$, $c_2 = O(h^d)$, $c_3 = O(h^{\frac d2})$, and $\|W\| = O(h^d)$; see, e.g., Lemma 3.3 in [17]. For MAC finite difference discretizations, one has $C\|W\| = O(1)$; the same holds true for finite elements if the problem is scaled in such a way that $\lambda_1 = O(1)$ and $\lambda_{\gamma,1} = O(1)$.

Remark 3.3. Two conclusions can be drawn from Theorem 3.1. First of all, solving (3.1) for large enough γ can be used as a penalty method for finding the eigenvalues of the saddle point problem (3.2). The theorem shows first order convergence of the eigenvalues with respect to the small parameter $\gamma^{-1}$. In the literature one can find results on the first order convergence of the solution of the penalized problem to the solution of the saddle point problem with zero (2,2) block; see, e.g., [3, 16] and [11, Thm. 7.2], but, to our knowledge, no result about eigenvalue convergence was known. Second, the kth eigenvalue of the augmented problem (3.1) is, in general, larger than the kth eigenvalue of the nonaugmented problem (γ = 0), since
$$\max_{S \in V_{k-1}} \; \min_{0 \ne y \in S^\perp} \frac{\langle (A + \gamma B^T W^{-1} B) y, y \rangle}{\langle y, y \rangle} \ge \max_{S \in V_{k-1}} \; \min_{0 \ne y \in S^\perp} \frac{\langle A y, y \rangle}{\langle y, y \rangle}.$$
Therefore, for a fixed α the problem $(A + \gamma B^T W^{-1} B - \alpha I_n) y = f$ is, in general, "less indefinite" for γ > 0. This property is advantageous for the multigrid solves. For example, in the case of the Stokes problem with Dirichlet boundary conditions in the unit square, it is known that (after appropriate scaling) $\lambda_{\min}(A) \approx 2\pi^2 \approx 20$, whereas for the minimal eigenvalue of (3.2) one has $\lambda_{\min} \approx 52.3$; see [12, section 36.3].

4. Analysis of the Schur complement for the augmented system. In [2] it was shown that the clustering of the eigenvalues of the augmented matrix in (2.1) preconditioned by the matrix $\mathcal{P}$ in (2.2) depends on the distribution of the eigenvalues of the Schur complement $-B(A_\alpha + \gamma B^T W^{-1} B)^{-1} B^T$. (Although the analysis in [2] was done for α = 0, the same holds true for α ≠ 0.) In this section we study the spectrum of the Schur complement operator for several model problems.
4.1. Analysis for the periodic problem. Consider the indefinite linearized Navier–Stokes periodic problem in two or three space dimensions with an additional "grad-div" term (augmentation). Assume that the mean flow U is constant. In this case the term $(u \cdot \nabla)U$ vanishes and the equations read
$$-\nu\Delta u - \alpha u - \gamma\nabla\,\mathrm{div}\,u + (U \cdot \nabla)u + \nabla p = f \quad \text{in } \mathbb{R}^d, \qquad -\mathrm{div}\,u = 0 \quad \text{in } \mathbb{R}^d.$$
Denote by $S_\gamma$ the Schur complement operator for this problem. For the harmonic $q(x) = \exp(i(c \cdot x))$, where $x \in \mathbb{R}^d$, $c \in \mathbb{N}^d$, $d = 2, 3$, it is straightforward to compute¹

(4.1) $\quad -S_\gamma q = \lambda_c q \quad \text{with} \quad \lambda_c = \frac{|c|^2}{(\nu + \gamma)|c|^2 - \alpha + i(U \cdot c)},$

where $|c|^2 = c \cdot c$. It is clear that for large enough $|c|$ we have $(\nu + \gamma)\lambda_c \approx 1$. Let us estimate the number of "poor" eigenvalues, such that $|(\nu + \gamma)\lambda_c| \le \varepsilon$ or $|(\nu + \gamma)\lambda_c| \ge \varepsilon^{-1}$ for some (reasonably small) $\varepsilon \in (0, 1)$. This is easy to check for the case of U = 0. One finds that
$$|(\nu + \gamma)\lambda_c| \le \varepsilon \iff |c|^2 \le \frac{\alpha}{(\varepsilon^{-1} + 1)(\nu + \gamma)},$$
$$|(\nu + \gamma)\lambda_c| \ge \varepsilon^{-1} \iff (1 - \varepsilon) \le |c|^2 \alpha^{-1} (\nu + \gamma) \le (1 + \varepsilon).$$
Thus, assuming ν and ε are fixed, we have that the number of "poor" eigenvalues is O(α/γ) for α, γ → ∞.

4.2. Analysis for nonperiodic problem in unbounded domain. The analysis in subsection 4.1 can be extended to the case of nonsymmetric problems posed in $\mathbb{R}^d$ under certain (standard) assumptions:

(4.2) $\quad -\nu\Delta u - \alpha u - \gamma\nabla\,\mathrm{div}\,u + (U \cdot \nabla)u + \nabla p = f$ in $\mathbb{R}^d$,
(4.3) $\quad -\mathrm{div}\,u = 0$ in $\mathbb{R}^d$,
(4.4) $\quad u \to 0$ as $|x| \to \infty$.
For the definite case (α ≤ 0), the system (4.2)–(4.4) is also known as the Oseen problem, and the proper weak formulation of the problem can be found in [14]. Let us consider further the following problem: Given $f \in C^\infty(\mathbb{R}^d)$, d = 2, 3, with compact support, find $u \in H^1(\mathbb{R}^d)^d$, $p \in L^2(\mathbb{R}^d)$ satisfying in the weak sense:

(4.5) $\quad -\nu\Delta u - \alpha u - \gamma\nabla\,\mathrm{div}\,u + (U \cdot \nabla)u + \nabla p = 0$ in $\mathbb{R}^d$,
(4.6) $\quad -\mathrm{div}\,u = f$ in $\mathbb{R}^d$,
(4.7) $\quad u \to 0$ as $|x| \to \infty$.
We note that problem (4.5)–(4.7) can be interpreted as finding the pressure p satisfying $S_\gamma^\infty p = f$, where $S_\gamma^\infty$ defines the pressure Schur complement operator for the problem (4.2)–(4.4).

¹The simplest way to show (4.1) is to look for v solving $-\nu\Delta v - \alpha v - \gamma\nabla\,\mathrm{div}\,v + (U \cdot \nabla)v = \nabla q(x)$ in the form $v = k \exp(i(c \cdot x))$. This gives a 2 × 2 or 3 × 3 system for the vector k. Solving this system for k, one finds $-S_\gamma q = \mathrm{div}\,v = i(c \cdot k)\,q$.
Lemma 4.1. Assume that the problem (4.5)–(4.7) has a unique solution. Then

(4.8) $\quad p = \left( -(\nu + \gamma)\Delta - \alpha + (U \cdot \nabla) \right) G * f,$

where $G(x) = (4\pi)^{-1}|x|^{-1}$ for d = 3 and $G(x) = (2\pi)^{-1}\ln|x|^{-1}$ for d = 2 is the Green's function (fundamental solution) for the Laplace operator, and ∗ stands for convolution.

Proof. By definition, the fundamental solution {v, q} for (4.5)–(4.7) satisfies

(4.9) $\quad -\nu\Delta v - \alpha v - \gamma\nabla\,\mathrm{div}\,v - (U \cdot \nabla)v + \nabla q = 0$ in $\mathbb{R}^d$,
(4.10) $\quad -\mathrm{div}\,v = \delta_0(x)$ in $\mathbb{R}^d$,
(4.11) $\quad v \to 0, \; q \to 0$ as $|x| \to \infty$,

where $\delta_0$ stands for the Dirac delta at the origin. Denote by $\tilde f$ the Fourier transform of f:
$$\tilde f(c) = \int_{\mathbb{R}^d} f(x) \exp(-i\, c \cdot x)\, dx.$$
One easily finds from (4.9)–(4.11) that
$$\nu |c|^2 \tilde v_i - \alpha \tilde v_i + \gamma c_i (c \cdot \tilde v) - i(U \cdot c)\tilde v_i + i c_i \tilde q = 0 \quad \text{for } i = 1, \dots, d, \qquad c \cdot \tilde v = i.$$
Solving for $\tilde q$, we obtain $\tilde q = \left( (\nu + \gamma)|c|^2 - \alpha - i(U \cdot c) \right) |c|^{-2}$. The inverse Fourier transform gives
$$q = \left( -(\nu + \gamma)\Delta + \alpha + (U \cdot \nabla) \right) G(x).$$
Denote by L the differential operator on the left-hand side of (4.5)–(4.6) and by $L^*$ its adjoint. A standard argument yields
$$p = \{0, \delta_0(x)\} * \{u, p\} = L^*\{v, q\} * \{u, p\} = \{v, q\} * L\{u, p\} = q * f.$$
The proof is complete.

A similar technique as in Lemma 4.1 was used by Kay, Loghin, and Wathen in [13] to construct a preconditioner for the discrete Schur complement of the Oseen problem. As one may expect, the $\lambda_c^{-1}$'s in (4.1) are the eigenvalues of the periodic counterpart of the operator on the right-hand side of (4.8).

4.3. Analysis for problem in a bounded domain. Consider the augmented indefinite Stokes problem (we assume here U = 0) in two or three dimensions with nonstandard boundary conditions in a bounded domain Ω with Lipschitz boundary ∂Ω:

(4.12) $\quad -\nu\Delta u - \alpha u - \gamma\nabla\,\mathrm{div}\,u + \nabla p = f$ in Ω,
(4.13) $\quad -\mathrm{div}\,u = g$ in Ω,
(4.14) $\quad u \cdot n = 0, \quad n \times \mathrm{rot}\,u = 0$ on ∂Ω,

where n is a normal vector on ∂Ω. For $\Omega \subset \mathbb{R}^2$ the boundary conditions are slightly different: $u \cdot n = \mathrm{rot}\,u = 0$. For γ = 0 and α = 0 (or if the term αu enters the
momentum equation with the positive sign) the well-posedness of the problem was shown in [9, 18]. Denote by $S_\gamma$ the Schur complement operator for this problem. Using the technique developed in [18] one obtains the following representation for $S_\gamma^{-1}$.

Lemma 4.2. Assume that the problem (4.12)–(4.14) is well-posed. Then

(4.15) $\quad S_\gamma^{-1} = -(\nu + \gamma) I - \alpha \Delta_N^{-1}$ on $L_0^2(\Omega)$,

where $\Delta_N^{-1}$ is the solution operator for the Poisson problem with Neumann boundary conditions.

Proof. Define the following space:
$$V = H_0(\mathrm{div}) \cap H(\mathrm{rot}) = \{ u \in L^2(\Omega) \mid \mathrm{div}\,u \in L^2(\Omega),\; \mathrm{rot}\,u \in L^2(\Omega)^{2d-3},\; u \cdot n|_{\partial\Omega} = 0 \}.$$
We note that the properties of V are well studied in [10]. The weak form of (4.12)–(4.14) reads (cf. [18]): Given $f \in V^{-1}$, $g \in L_0^2$, find {u, p} in $V \times L_0^2$ satisfying
$$(\nu + \gamma)(\mathrm{div}\,u, \mathrm{div}\,v) + \nu(\mathrm{rot}\,u, \mathrm{rot}\,v) - \alpha(u, v) - (p, \mathrm{div}\,v) - (\mathrm{div}\,u, q) = \langle f, v \rangle + (g, q)$$
for any {v, q} in $V \times L_0^2$. Here, as usual, $\langle \cdot, \cdot \rangle$ denotes the duality pairing between V and $V^{-1}$. The well-posedness of (4.12)–(4.14) implies that the Schur complement operator is nonsingular on $L_0^2$. For arbitrarily given $r \in L_0^2$, consider

(4.16) $\quad p = S_\gamma^{-1} r.$

In weak form, equality (4.16) can be written as

(4.17) $\quad (\nu + \gamma)(\mathrm{div}\,u, \mathrm{div}\,v) + \nu(\mathrm{rot}\,u, \mathrm{rot}\,v) - \alpha(u, v) - (p, \mathrm{div}\,v) - (\mathrm{div}\,u, q) = (r, q)$

for any {v, q} in $V \times L_0^2$, with some auxiliary velocity $u \in V$. Using $-\mathrm{div}\,u = r$ in (4.17) one gets

(4.18) $\quad -(\nu + \gamma)(r, \mathrm{div}\,v) + \nu(\mathrm{rot}\,u, \mathrm{rot}\,v) - \alpha(u, v) = (p, \mathrm{div}\,v).$

Furthermore, for arbitrary $\psi \in L_0^2$ consider $v = \nabla \Delta_N^{-1} \psi$. Note that $v \in V$ and $\mathrm{rot}\,v = 0$. Substituting this into (4.18) we get

(4.19) $\quad -(\nu + \gamma)(r, \psi) - \alpha(r, \Delta_N^{-1}\psi) = (p, \psi) \quad \text{for all } \psi \in L_0^2.$
Relation (4.19) is the weak form of $-(\nu + \gamma) r - \alpha \Delta_N^{-1} r = p$. Since the function $r \in L_0^2$ in (4.16) was taken to be arbitrary, equalities (4.16) and (4.19) yield (4.15).

In [18] the relation (4.15) was shown for the case of γ = 0 and αu entering the momentum equation with the positive sign (in that case the second term in (4.15) should be added rather than subtracted). Representation (4.15) shows that the formula (4.1) for U = 0 can be extended to the nonperiodic case by replacing |c| with the eigenvalues of the Poisson problem with Neumann boundary conditions.

The expression (4.1) for the eigenvalues $\lambda_c$, as well as relations (4.15) and (4.8), show that for the Schur complement, γ plays the same role as the viscosity ν. This explains why setting γ = O(1) (assuming U = O(1) and α = O(1)) is sufficient for providing convergence rates independent of the Reynolds number if sufficiently accurate solvers (or preconditioners) are used for the (1,1) block. This effect is not recovered by the purely algebraic analysis of the augmented system (Theorem 4.2 in [2]), where, under similar assumptions, ν-independent bounds for the Schur complement of the Oseen system were proved only for $\gamma = O(\nu^{-1})$. Furthermore, the analysis of the periodic problem in subsection 4.1 shows that increasing γ leads to a reduction in the number of "poor" eigenvalues of the Schur complement.
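The symbol formula (4.1) can be verified numerically mode by mode, following the footnote below (4.1): for each wave vector c one solves the small d × d system for the amplitude k and recovers $\lambda_c$ from div v. A minimal sketch (all parameter values are arbitrary illustrations):

```python
import numpy as np

def schur_symbol(c, nu, gamma, alpha, U):
    """Solve (nu|c|^2 - alpha + i U.c) k + gamma (c.k) c = i c for k and
    return the eigenvalue of -S_gamma on the mode exp(i c.x); the overall
    sign is fixed so as to match the convention of (4.1)."""
    c = np.asarray(c, dtype=float)
    M = (nu * (c @ c) - alpha + 1j * (U @ c)) * np.eye(c.size) \
        + gamma * np.outer(c, c)
    k = np.linalg.solve(M, 1j * c)
    return -1j * (c @ k)

nu, gamma, alpha = 0.01, 1.0, 5.0
U = np.array([1.0, 0.0])
for c in ([1.0, 0.0], [2.0, 3.0], [5.0, 1.0]):
    c = np.array(c)
    lam = (c @ c) / ((nu + gamma) * (c @ c) - alpha + 1j * (U @ c))  # formula (4.1)
    print(np.allclose(schur_symbol(c, nu, gamma, alpha, U), lam))   # True
```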
5. Numerical experiments. In our numerical experiments we use isoP2-P0 or isoP2-P1 finite elements on uniform grids. In all experiments for the Stokes-like problem we set ν = 1. First, we tested a multigrid method to solve a linear system of equations with the matrix $A_\alpha + \gamma B^T W^{-1} B$ from the (1,1) block in the matrix (2.1). We need this multigrid further to define the preconditioner $\hat A$ in (2.2). Note that the problem in the (1,1) block can be interpreted as a discrete Helmholtz-type problem augmented with the term $\gamma B^T W^{-1} B$. To solve the system we consider a multigrid V(1,1) cycle. Since the problem is indefinite, we have to ensure that the coarsest mesh is fine enough. We use the same criterion as the one suggested in [6]: perform the coarsening while the mesh size satisfies $h\sqrt{\alpha} \le 0.5$. If the inequality fails to hold, then the mesh is treated as the coarsest one. For the smoother, we consider a block Gauss–Seidel method similar to those proposed by Schöberl in [22] for the linear elasticity problem; see also [2]. The restriction operator is canonical, and the prolongation operator is based on the solution of local subproblems; see again [22, 2]. This multigrid method was proved to be robust with respect to the variation of γ for the case of α = 0; see [22]. One smoothing step consists of one forward and one backward sweep of the block Gauss–Seidel method. On the coarsest grid we do not solve the problem exactly; rather, we perform 30 iterations of left-preconditioned GMRES with the same block Gauss–Seidel iteration as a preconditioner, and we use FGMRES for the outer iteration.

In Table 5.1 we give the number of iterations (and timings) for the preconditioned flexible GMRES method applied to the system $(A_\alpha + \gamma B^T W^{-1} B) v = f$, where $A_\alpha = A - \alpha I_n$. As a preconditioner in FGMRES we use one V(1,1) cycle of the multigrid described above. We use the zero right-hand side (f = 0) and a vector with random entries uniformly distributed in [0, 1] as the initial guess. The stopping criterion was a drop of the 2-norm of the residual by $10^{-6}$. Note that for fixed α the method scales perfectly with h. When α becomes considerably larger, the number of iterations increases. The dependence of the number of iterations on γ is not significant for smaller α; however, for larger α an appropriate choice of the augmentation parameter γ can reduce the number of iterations considerably.

Table 5.1. Results for the indefinite Helmholtz-type problem; isoP2-P0 finite elements. Number of preconditioned FGMRES iterations and CPU times in seconds.
α       h       γ = 0        γ = 1        γ = 10        γ = 10²       γ = 10³
100     1/256   4 (12s)      6 (19s)      5 (16s)       7 (22s)       7 (22s)
100     1/512   4 (52s)      6 (79s)      5 (65s)       7 (92s)       7 (94s)
400     1/256   4 (14s)      6 (14s)      7 (24s)       6 (21s)       6 (21s)
400     1/512   4 (54s)      4 (54s)      5 (64s)       4 (54s)       6 (81s)
1600    1/256   21 (101s)    157 (792s)   14 (68.6s)    9 (44s)       25 (124s)
1600    1/512   10 (148s)    26 (394s)    9 (134s)      7 (104s)      10 (150s)
6400    1/256   > 200        > 200        115 (1247s)   20 (214s)     74 (798s)
6400    1/512   > 200        > 200        25 (520s)     13 (269s)     67 (1442s)
This phenomenon can be explained with the help of the analysis of section 3, which predicts that the problem becomes less indefinite for γ > 0.

In Table 5.2 we show the number of iterations (and timings) for preconditioned FGMRES applied to the system (2.1). The method was restarted after every 200 iterations, if necessary. As a preconditioner in FGMRES we use the block triangular matrix $\mathcal{P}$ from (2.2), with $\hat A^{-1}$ implicitly defined through the execution of one V(1,1) cycle of the above-mentioned multigrid method, and $\hat S$ defined through $\hat S^{-1} = -(\nu + \gamma) M_p^{-1}$. We use again a zero right-hand side and a vector $\{u_0, p_0\}$ with entries randomly distributed in [0, 1] as the initial guess. The stopping criterion was the same as before. We can see that, similar to the Helmholtz case, for fixed α the method scales perfectly with h. When α becomes considerably larger, the growth in the number of iterations is more noticeable than for the Helmholtz case. An appropriate choice of γ > 0 is now even more crucial than for the Helmholtz case.

In Table 5.3 we show iteration counts for the same problem as in Table 5.2 and a slightly different preconditioner $\mathcal{P}$. We now use (2.4) to define the action of the approximate inverse of the Schur complement, with $\hat A^{-1}$ defined as before. The matrix $B \hat M_u^{-1} B^T$ from (2.4) is not inverted exactly; instead, we use four V(0,4) cycles of a standard geometric multigrid method to define an approximate inverse of $B \hat M_u^{-1} B^T$. Since the number m of pressure degrees of freedom is small compared to the total number n + m, this does not increase considerably the cost of applying the block preconditioner $\mathcal{P}$. The table shows results for two pairs of finite elements: isoP2-P0 and isoP2-P1. Poor convergence for the case of γ = 10³ with isoP2-P1 occurs because the multigrid solver for the (1,1) block is not effective in this case. As discussed in [2], the multigrid method we used for the (1,1) block is more sensitive to the ratio γ/ν for this finite element pair.

The results show that although including the $\alpha (B \hat M_u^{-1} B^T)^{-1}$ term in the Schur complement preconditioner leads to some improvement for the case of small γ and large α, it does not have as dramatic an effect on the performance as it does in the positive definite case² (α ≤ 0; see [4]).

²By the "positive definite case" we mean that the (1,1) block $A_\alpha$ is positive definite. Therefore the Schur complement $S = -B A_\alpha^{-1} B^T$ is negative definite on $(\ker(B^T))^\perp$. This is always the case for α ≤ 0.
Table 5.2. Results for the indefinite Stokes-type problem, $\hat S^{-1} = -(\nu + \gamma) M_p^{-1}$; isoP2-P0 finite elements. Number of preconditioned FGMRES iterations and CPU times in seconds.

α       h       γ = 0         γ = 1         γ = 10         γ = 10²       γ = 10³
100     1/256   18 (58s)      15 (49s)      10 (29s)       9 (29s)       9 (29s)
100     1/512   14 (191s)     13 (178s)     9 (123s)       9 (124s)      8 (109s)
400     1/256   151 (627s)    91 (360s)     40 (149s)      10 (36s)      10 (36s)
400     1/512   127 (2113s)   89 (1431s)    43 (638s)      9 (127s)      8 (112s)
1600    1/256   > 600         > 600         130 (708s)     26 (130s)     25 (124s)
1600    1/512   > 600         > 600         136 (2310s)    22 (344s)     11 (173s)
6400    1/256   > 600         > 600         > 600          220 (1755s)   105 (1173s)
6400    1/512   > 600         > 600         > 600          145 (3603s)   45 (1004s)
Table 5.3. Indefinite Stokes-type problem, $\hat S^{-1} = -(\nu + \gamma) M_p^{-1} + \alpha (B \hat M_u^{-1} B^T)^{-1}$. Results for isoP2-P1 / isoP2-P0 finite elements. Number of preconditioned FGMRES iterations.

α       h       γ = 0           γ = 1           γ = 10         γ = 10²      γ = 10³
100     1/256   24 / 18         13 / 13         10 / 8         17 / 9       > 600 / 8
100     1/512   23 / 11         13 / 11         9 / 8          9 / 9        > 600 / 8
400     1/256   138 / 93        53 / 44         48 / 28        24 / 10      > 600 / 10
400     1/512   135 / 66        42 / 40         49 / 22        19 / 9       > 600 / 8
1600    1/256   > 600 / > 600   159 / > 600     48 / 124       24 / 19      > 600 / 27
1600    1/512   > 600 / > 600   153 / 379       49 / 58        19 / 13      > 600 / 11
6400    1/256   > 600 / > 600   > 600 / > 600   337 / > 600    42 / 200     > 600 / 125
6400    1/512   > 600 / > 600   > 600 / > 600   109 / > 600    36 / 85      > 600 / 70
This observation deserves further comments. As suggested by (4.15) and in terms of Fourier analysis, the preconditioner $\hat S^{-1} = -(\nu + \gamma) M_p^{-1} + \alpha (B \hat M_u^{-1} B^T)^{-1}$ is optimal for the Schur complement of the matrix in (2.1) for all values of α. For the positive definite case (α ≤ 0), using this choice of $\hat S^{-1}$ together with a good preconditioner $\hat A^{-1}$ in (2.2) is well known to ensure the fast convergence of the preconditioned iterative method for solving (2.1). Comparing the results in Tables 5.1 and 5.3, we conclude that this is not the case if the (1,1) block in (2.1) becomes indefinite (α > 0) and no augmentation is used (γ = 0). A possible explanation of this different behavior is that for α > 0 and γ = 0 the matrix S may have eigenvalues with small absolute values. Passing from the periodic problem or the model problem (4.12)–(4.14) to the original one introduces a (small) perturbation which preserves the spectral equivalence of the Schur complement operators in the positive definite case, leading to an efficient preconditioner. However, the same perturbation in the indefinite case (α > 0) can produce large relative changes of the eigenvalues near the origin, leading to poor preconditioning for this part of the spectrum of S. These observations may indicate that, without augmentation, finding a proper block preconditioner for the indefinite linearized Navier–Stokes problem can be a very difficult task.

5.1. Newton linearization. In this subsection we consider (1.1)–(1.3) with two examples of the mean flow in Ω = (0, 1)²: a Poiseuille flow
$$U = (8y(1 - y),\; 0)$$
and a flow mimicking a rotating vortex
$$U = \left( 4(2y - 1)(1 - x)x,\; -4(2x - 1)(1 - y)y \right).$$
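Both mean flows are divergence-free, as required by (1.2); this can be confirmed symbolically, for instance with the following short check (an illustrative script, not part of the original experiments):

```python
import sympy as sp

x, y = sp.symbols('x y')

flows = {
    "Poiseuille":      (8 * y * (1 - y), sp.Integer(0)),
    "rotating vortex": (4 * (2*y - 1) * (1 - x) * x, -4 * (2*x - 1) * (1 - y) * y),
}

for name, (U1, U2) in flows.items():
    div_U = sp.simplify(sp.diff(U1, x) + sp.diff(U2, y))
    print(f"div U ({name}) = {div_U}")   # both print 0
```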
The instability of the Poiseuille flow is a well-studied problem in the literature [19, 25]. The common definition of the Reynolds number for this problem is Re = 0.5H|U(0, 0.5)|ν⁻¹, where H is the height of the channel. In our setting it holds that Re = ν⁻¹. It is well known (see, e.g., [25, 19]) that the eigenvalue of (1.1)–(1.3) with minimal real part approaches the imaginary axis as O(Re⁻¹) for Re → ∞. Thus, in our experiments we set α = 1. This leads to increasingly indefinite problems as Re → ∞.

Both problems are discretized by finite elements. In our experiments we use both isoP2-P0 and isoP2-P1 elements. Since the SUPG-type stabilization technique applied to (1.1)–(1.3) in the context of finite element methods leads to a bulk of additional terms in the matrix of the resulting system of algebraic equations, we apply the SUPG stabilization only in the preconditioner (see [2] for the details of the stabilization used). The latter is done on every grid level of our geometric multigrid and is known to be necessary to ensure that the multigrid method for the (1,1) block is efficient for the case of a small diffusion coefficient ν. For the stiffness matrix, which enters the residual part of our iterative method, we use fine enough meshes to keep all local mesh Reynolds numbers reasonably small. Moreover, we do not incorporate the discretization of the term $(u \cdot \nabla)U$ in the preconditioner. Numerical experiments show that when the mean velocity field U is smooth (this is typical for a mean flow in a linear stability analysis), adding the discrete term $(u \cdot \nabla)U$ only in the residual does not affect the convergence of the preconditioned solver in any substantial way. A similar experience for Newton-type Navier–Stokes linearizations is reported in [7].

The block triangular preconditioner $\mathcal{P}$ is used with FGMRES to solve linear systems of the form (2.1) corresponding to Newton linearizations of these problems with additional negative reaction terms. Now the inverse of $\hat A$ consists of one V(1,1) cycle of the multigrid method used in [2]. The main difference with the multigrid method used in the symmetric case (see results for the Helmholtz and Stokes problems in Tables 5.1–5.3) is that now we do not impose any restriction on the coarsest grid; thus we take it to be very coarse, and the coarse grid problem is solved exactly. (We have no good explanation why for this problem the need for a restriction on the size of the coarsest grid depending on the ratio $\alpha h^2/\nu$ is not observed. It is possible that the additional skew-symmetric terms make the coarse grid correction less sensitive to the disturbance of the eigenvalues with smallest absolute real part introduced by the discretization of the problem on a coarser grid. For the symmetric case this phenomenon of "sign changes" is discussed in [6].) Also, we use $\hat S^{-1} = -(\nu + \gamma) M_p^{-1}$, where $M_p^{-1}$ is an approximate solver for the pressure mass matrix; see again [2] for details. Iteration counts and timings are given in Tables 5.4 and 5.5.

Table 5.4. Results for linearized Navier–Stokes problems with indefinite term; α = 1, isoP2-P0 elements, γ = 1. Number of preconditioned FGMRES iterations and CPU times.
U                   h       Re = 1      Re = 10     Re = 10²    Re = 10³
Poiseuille flow     1/256   13 (57s)    13 (57s)    16 (71s)    31 (140s)
Poiseuille flow     1/512   13 (268s)   13 (269s)   16 (339s)   26 (545s)
rotating vortex     1/256   13 (56s)    12 (53s)    18 (79s)    45 (203s)
rotating vortex     1/512   13 (264s)   12 (242s)   18 (370s)   46 (976s)
Table 5.5. Results for linearized Navier–Stokes problems with indefinite term; α = 1, isoP2-P1 elements. Number of preconditioned FGMRES iterations and CPU times.

                            Re = 1      Re = 10     Re = 10²    Re = 10³
γ                           1           1           1           0.1
U                   h
Poiseuille flow     1/256   14 (59s)    14 (59s)    22 (92s)    35 (148s)
Poiseuille flow     1/512   15 (271s)   14 (254s)   24 (444s)   30 (554s)
rotating vortex     1/256   14 (58s)    14 (59s)    24 (102s)   51 (221s)
rotating vortex     1/512   15 (273s)   14 (253s)   25 (458s)   52 (995s)
Once again, we observe the perfect scaling with respect to h and a relatively mild degradation in the performance of the preconditioner for increasing Reynolds numbers. Note that γ = 1 works very well for all cases except for the isoP2-P1 elements with Reynolds number Re = 10³, where a smaller value of γ is needed.

6. Conclusions. In this paper we have extended the augmented Lagrangian-based preconditioner described in [2] to the case where the (1,1) block in the saddle point problem is indefinite, an important subproblem in the linear stability analysis of solutions to the Navier–Stokes equations. We have derived estimates for the eigenvalues of various operators and matrices of interest. In particular, we have shown that the augmentation influences the system in two ways: it makes the (1,1) block of the system less indefinite, and it improves the numerical properties of the Schur complement matrix exactly in the way additional viscosity would. We have tested the preconditioner on some challenging model problems. Our results indicate that the augmented Lagrangian-based block triangular preconditioner is effective and robust over a wide range of problem parameters, including highly indefinite cases.

REFERENCES

[1] M. Benzi and J. Liu, Block preconditioning for saddle point systems with indefinite (1,1) block, Internat. J. Comput. Math., 84 (2007), pp. 1117–1129.
[2] M. Benzi and M. A. Olshanskii, An augmented Lagrangian-based approach to the Oseen problem, SIAM J. Sci. Comput., 28 (2006), pp. 2095–2113.
[3] M. Bercovier, Perturbation of mixed variational problems. Applications to mixed finite element methods, RAIRO Anal. Numér., 12 (1978), pp. 221–236.
[4] J. Cahouet and J.-P. Chabard, Some fast 3-D finite element solvers for the generalized Stokes problem, Internat. J. Numer. Methods Fluids, 8 (1988), pp. 869–895.
[5] K. A. Cliffe, T. J. Garratt, and A. Spence, Eigenvalues of block matrices arising from problems in fluid mechanics, SIAM J. Matrix Anal. Appl., 15 (1994), pp. 1310–1318.
[6] H. C. Elman, O. Ernst, and D. P. O'Leary, A multigrid method enhanced by Krylov subspace iteration for discrete Helmholtz equations, SIAM J. Sci. Comput., 23 (2001), pp. 1291–1315.
[7] H. C. Elman, D. Loghin, and A. J. Wathen, Preconditioning techniques for Newton's method for the incompressible Navier–Stokes equations, BIT, 43 (2003), pp. 961–974.
[8] H. C. Elman, D. Silvester, and A. J. Wathen, Finite Elements and Fast Iterative Solvers: With Applications in Incompressible Fluid Dynamics, Numer. Math. Sci. Comput., Oxford University Press, Oxford, UK, 2005.
[9] V. Girault, Incompressible finite element methods for Navier–Stokes equations with nonstandard boundary conditions in R³, Math. Comp., 51 (1988), pp. 55–74.
[10] V. Girault and P.-A. Raviart, Finite Element Methods for Navier–Stokes Equations: Theory and Algorithms, Springer Ser. Comput. Math. 5, Springer-Verlag, Berlin, 1986.
[11] R. Glowinski, Numerical Methods for Nonlinear Variational Problems, Springer Ser. Comput. Phys. 4, Springer-Verlag, New York, 1984.
[12] R. Glowinski, Finite Element Methods for Incompressible Viscous Flow, Handb. Numer. Anal. 9, North–Holland, Amsterdam, 2003.
[13] D. Kay, D. Loghin, and A. J. Wathen, A preconditioner for the steady-state Navier–Stokes equations, SIAM J. Sci. Comput., 24 (2002), pp. 237–256.
[14] O. A. Ladyzhenskaya, The Mathematical Theory of Viscous Incompressible Flow, Gordon and Breach Science, New York and London, 1963.
[15] D. S. Malkus, Eigenproblems associated with the discrete LBB condition for incompressible finite elements, Internat. J. Engrg. Sci., 19 (1981), pp. 1299–1310.
[16] J. T. Oden and O.-P. Jacquotte, Stability of some mixed finite element methods for Stokesian flows, Comput. Methods Appl. Mech. Engrg., 43 (1984), pp. 231–248.
[17] M. A. Olshanskii and Yu. V. Vassilevski, Pressure Schur complement preconditioners for the discrete Oseen problem, SIAM J. Sci. Comput., 29 (2007), pp. 2686–2704.
[18] M. A. Olshanskii, On the Stokes problem with model boundary conditions, Sb. Math., 188 (1997), pp. 603–620. Translated from Mat. Sb., 188 (1997), pp. 127–144.
[19] S. C. Reddy, P. J. Schmid, and D. S. Henningson, Pseudospectra of the Orr–Sommerfeld operator, SIAM J. Appl. Math., 53 (1993), pp. 15–47.
[20] Y. Saad, A flexible inner-outer preconditioned GMRES algorithm, SIAM J. Sci. Comput., 14 (1993), pp. 461–469.
[21] Y. Saad and M. H. Schultz, GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Statist. Comput., 7 (1986), pp. 856–869.
[22] J. Schöberl, Multigrid methods for a parameter dependent problem in primal variables, Numer. Math., 84 (1999), pp. 97–119.
[23] L. N. Trefethen and M. Embree, Spectra and Pseudospectra: The Behavior of Nonnormal Matrices and Operators, Princeton University Press, Princeton, NJ, 2005.
[24] L. N. Trefethen, A. E. Trefethen, S. C. Reddy, and T. A. Driscoll, Hydrodynamic stability without eigenvalues, Science, 261 (1993), pp. 578–584.
[25] L. N. Trefethen, A. E. Trefethen, and P. J. Schmid, Spectra and pseudospectra for Poiseuille flow, Comput. Methods Appl. Mech. Engrg., 175 (1999), pp. 413–420.
[26] H. A. van der Vorst, Bi-CGSTAB: A fast and smoothly converging variant of Bi-CG for the solution of nonsymmetric linear systems, SIAM J. Sci. Statist. Comput., 13 (1992), pp. 631–644.
© 2008 Society for Industrial and Applied Mathematics
SIAM J. SCI. COMPUT. Vol. 30, No. 3, pp. 1474–1489
FAST MULTILEVEL ALGORITHM FOR A MINIMIZATION PROBLEM IN IMPULSE NOISE REMOVAL∗ RAYMOND H. CHAN† AND KE CHEN‡ Abstract. An effective 2-phase method for removing impulse noise was recently proposed. Its phase 1 identifies noisy pixel candidates by using median-type filters. Then in phase 2, it restores only the noisy pixel candidates by some variational methods. The resulting method can handle salt-and-pepper noise and random-valued impulse noise at a level as high as 90% and 60%, respectively. The aim of this paper is to generalize a fast multilevel method for Gaussian denoising to solving the minimization problem arising in phase 2 of the 2-phase method. The multilevel algorithm gives better images than standard optimization methods such as the Newton method or the conjugate gradient method. Also it can handle more general regularization functionals than the smooth ones previously considered. Supporting numerical experiments on two-dimensional gray-scale images are presented. Key words. image restoration, impulse noise, regularization, multilevel methods AMS subject classifications. 68U10, 65F10, 65K10 DOI. 10.1137/060654931
1. Introduction. Image denoising is one of the basic problems in image processing [1, 9, 19, 20]: Given an observed image z ∈ Rn×n , restore a “quality” image u ∈ Rn×n such that z = u + η, with η being some noise matrix. Here by quality, we mean that the problem of finding u from z is a typical inverse problem which does not have a unique solution without regularization. How to model η properly depends on the context in which the given image z is gathered. For images contaminated by environments or a transmission process, all pixels contain noise but the whole image is still vaguely “visible” to the human eye. Then the Gaussian noise model for η is commonly used [19, 20]; we assume that η is sampled from a Gaussian distribution with zero mean and some variance which may be estimated from z. For other noisy images generated by imperfect imaging equipment, e.g., malfunctioning sensors and faulty memory, the impulse noise model for z appears to be more appropriate [10, 14]. Here although the underlying image may not be visible to human eyes, the belief is that some pixels do contain the true values and the others contain completely incorrect values, with both locations being unknown. Recently an effective 2-phase method for removing impulse noise was proposed [4, 6]. In phase 1, it tries to identify noisy pixel candidates by using some median-type filters; see [10, 14, 15, 18], and the references therein. Then in phase 2, it restores only the noisy pixel candidates by variational methods. It is similar to doing an inpainting on the noisy pixel candidates. By using the functional proposed in [14], the resulting method can handle salt-and-pepper noise and random-valued impulse noise at a level as high as 90% and 60%, respectively; see [4, 6, 12]. The main difficulty in this 2-phase ∗ Received by the editors March 23, 2006; accepted for publication (in revised form) October 1, 2007; published electronically April 9, 2008. http://www.siam.org/journals/sisc/30-3/65493.html † Department of Mathematics, Chinese University of Hong Kong, Shatin, Hong Kong SAR, China (
[email protected]). Research was supported in part by HKRGC grant 400405 and CUHK DAG grant 2060257. ‡ Department of Mathematical Sciences, University of Liverpool, Peach Street, Liverpool L69 7ZL, UK (
[email protected]). Research was supported in part by The Leverhulme Trust RF/9/ RFG/2005/0482 and the Department of Mathematics, CUHK.
method is in solving the optimization problem that arises in the second phase. The minimization functional consists of an ℓ¹ data-fitting term and either a smooth or a nonsmooth regularization term; see [14, 6, 4]. In [5, 1], we have shown that, in phase 2, there is no need to consider the data-fitting term, as the data are fitted exactly for nonnoisy candidates. It remains only to minimize the smooth or nonsmooth regularization functional over the noisy candidate set obtained from phase 1. In this paper, we will develop a multilevel acceleration method for this optimization problem which is based on the method in [7] for Gaussian noise and is more robust and reliable even for nonsmooth regularization functionals such as the total-variation norm model [19].

We will use the following notation. Let A denote the index set of all pixels, i.e., A = {(i, j) | i = 1, . . . , n; j = 1, . . . , n}, and let N denote the index set of pixels that are found to be noisy by a filtering method in phase 1. The remaining pixels are denoted by N^c = A \ N, and we will simply call them correct pixels. For any noisy candidate pixel (i, j) ∈ N, denote by V_{i,j} the set of its neighbors (not including itself). Then the splitting V_{i,j} = (V_{i,j} ∩ N) ∪ (V_{i,j} ∩ N^c) will separate correct pixels from the noisy candidates. In the second phase of the 2-phase method, we shall restore the pixels in N by solving the following minimization problem [5, 1]:

(1) \[ \min_{u_{i,j}\in\mathbb{R},\,(i,j)\in N} F(u), \qquad F(u)=\sum_{(i,j)\in N}\Bigg(\sum_{(m,r)\in V_{i,j}\cap N^c} 2\phi(u_{i,j}-z_{m,r})+\sum_{(m,r)\in V_{i,j}\cap N}\phi(u_{i,j}-u_{m,r})\Bigg), \]

where φ is an even edge-preserving potential function. We emphasize that, since data fitting is done exactly on N^c, there is only the regularization term in formulation (1), and hence no regularization parameter is needed here. Examples of the above φ are

(2) \[ \phi(t)=|t|, \]
(3) \[ \phi(t)=\sqrt{t^2+\beta},\quad \beta>0, \]
(4) \[ \phi(t)=|t|^{\gamma},\quad 1<\gamma\le 2. \]
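For concreteness, the three potentials can be coded directly; a minimal Python sketch (the default values of β and γ below are illustrative choices, not taken from the paper):

import math

def phi_abs(t):                    # (2): nonsmooth at t = 0
    return abs(t)

def phi_sqrt(t, beta=1e-2):        # (3): smooth approximation of |t|
    return math.sqrt(t * t + beta)

def phi_power(t, gamma=1.5):       # (4): |t|^gamma with 1 < gamma <= 2
    return abs(t) ** gamma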
Here when (2) is used, one normally approximates it by (3) with a small β as a regularized version; consequently one has to address the sensitivity of numerical techniques with respect to the choice of β. For smooth φ(t), e.g., (3) and (4) above, one can solve (1) by standard optimization methods. In fact, Newton's method with continuation and a conjugate gradient method were tried in [5] and [1], respectively. Readers familiar with Gaussian noise removal [19] can connect (2) to the variational seminorm [16]

\[ \|u\|_{*}=\int_{\Omega}\big(|u_x|+|u_y|\big), \]

where Ω = [0, 1]² is the continuous image domain. This seminorm is not rotationally invariant, but one may immediately relate it to the well-known rotationally invariant total variation (TV) seminorm [19, 9, 20]:

\[ \|u\|_{TV}=\int_{\Omega}\sqrt{u_x^2+u_y^2}. \]
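The two discrete seminorms are easy to compare numerically. The sketch below uses forward differences with the last difference set to zero (a Neumann-type convention consistent with, though not spelled out in, the text):

import numpy as np

def seminorms(u):
    # forward differences; appending the last row/column makes the
    # boundary difference zero
    ux = np.diff(u, axis=0, append=u[-1:, :])
    uy = np.diff(u, axis=1, append=u[:, -1:])
    aniso = np.sum(np.abs(ux) + np.abs(uy))    # ||u||_*  (anisotropic)
    tv = np.sum(np.sqrt(ux ** 2 + uy ** 2))    # ||u||_TV (isotropic)
    return aniso, tv

print(seminorms(np.outer(np.arange(8.0), np.ones(8))))  # ramp test image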
By using the TV seminorm, we can suggest a restoration model similar to (1):

(5) \[ \min_{u_{i,j}\in\mathbb{R},\,(i,j)\in N} F_{TV}(u),\qquad F_{TV}(u)=\sum_{(i,j)\in N}\sqrt{(u_{i,j}-u_{i+1,j})^2+(u_{i,j}-u_{i,j+1})^2}, \]
where we take as zero those differences involving u_{i,n+1}, u_{n+1,j} for all i, j. Here u_{m,r} = z_{m,r} for all (m, r) ∈ N^c, as these indices correspond to the correct pixels. Our task here is to accelerate the solution procedure of the variational models in phase 2; that is, we shall discuss how to solve the minimization problems (1) and (5). For smooth φ(t), one can solve the Euler–Lagrange equation of (1) as we did in [5, 1]. Standard multigrid [2, 11] is difficult to use on this equation, since we are solving it on the irregular grid points N and coarsening will be a problem. The method that we are going to use is a multilevel method that solves the minimization problems (1) and (5) directly and has been shown to work well for Gaussian noise removal [7]. Since minimization is done not on the whole index set A but on N, special modification is needed. Problem (1) with the choice (3) will be considered as the main task, while discussions on problem (1) with (2) and problem (5), although not done before because of the nonsmoothness of the regularization functionals, will be supplementary, as the multilevel adaptation is the same.

The plan is to review our recently proposed multilevel method for Gaussian noise removal [7] in section 2. Section 3 presents details of the implementation of a multilevel method for model (1). Numerical experiments are reported in section 4, where we shall use the new method first as a combined method with the Newton continuation method [5] and then as a stand-alone method. The merits of using (5) are also considered and highlighted.

2. Review of a multilevel method for optimization. We now briefly review the multilevel method proposed in [7] for removing Gaussian noise. One nice advantage of the method is that it can be applied to nonsmooth functionals, as it does not require their derivatives. Given z ∈ R^{n×n} as before, we illustrate the method in solving the standard TV model [19]:

\[ \min_{u} J(u),\qquad J(u)=\alpha\int_{\Omega}\sqrt{u_x^2+u_y^2}+\frac12\int_{\Omega}(u-z)^2, \]

which is discretized to give rise to the optimization problem

(6) \[ \min_{u\in\mathbb{R}^{n\times n}} J(u),\qquad J(u)=\alpha\sum_{i=1}^{n-1}\sum_{j=1}^{n-1}\sqrt{(u_{i,j}-u_{i,j+1})^2+(u_{i,j}-u_{i+1,j})^2}+\frac12\sum_{i=1}^{n}\sum_{j=1}^{n}(u_{i,j}-z_{i,j})^2, \]

with the rescaled parameter α = ᾱ/h and h = 1/(n − 1). For simplicity, we shall assume n = 2^L. Let the standard coarsening be used, giving rise to L + 1 levels k = 1 (finest) and 2, . . . , L, L + 1 (coarsest). Denote the dimension of level k by τ_k × τ_k, with τ_k = n/2^{k−1}. As a prelude to multilevel methods, consider the minimization of (6) by the coordinate descent method on the finest level 1:

(7) \[ \begin{cases}\text{Given } u^{(0)}=(u_{i,j}^{(0)})=(z_{i,j}) \text{ with } l=0,\\ \text{solve } u_{i,j}^{(l)}=\operatorname{argmin}_{u_{i,j}\in\mathbb{R}} J^{loc}(u_{i,j}) \text{ for } i,j=1,2,\dots,n,\\ \text{set } u^{(l+1)}=(u_{i,j}^{(l)}) \text{ and repeat the above step with } l=l+1\\ \text{until a prescribed stopping step on } l,\end{cases} \]
where

(8) \[ J^{loc}(u_{i,j})=\alpha\Big\{\sqrt{(u_{i,j}-u_{i+1,j}^{(l)})^2+(u_{i,j}-u_{i,j+1}^{(l)})^2}+\sqrt{(u_{i,j}-u_{i-1,j}^{(l)})^2+(u_{i-1,j}^{(l)}-u_{i-1,j+1}^{(l)})^2}+\sqrt{(u_{i,j}-u_{i,j-1}^{(l)})^2+(u_{i,j-1}^{(l)}-u_{i+1,j-1}^{(l)})^2}\Big\}+\frac12(u_{i,j}-z_{i,j})^2. \]

Due to Neumann's condition for the continuous variable u, all difference terms involving subscript indices larger than n are set to zero. Note that each subproblem in (7) is only one-dimensional. To introduce the multilevel algorithm, we rewrite (7) in an equivalent form:

(9) \[ \begin{cases}\text{Given } u^{(0)}=(u_{i,j}^{(0)})=(z_{i,j}) \text{ with } l=0,\\ \text{solve } \hat c=\operatorname{argmin}_{c\in\mathbb{R}} J^{loc}(u_{i,j}^{(l)}+c),\ \text{set } u_{i,j}^{(l)}=u_{i,j}^{(l)}+\hat c,\\ \text{set } u^{(l+1)}=(u_{i,j}^{(l)}) \text{ and repeat the above step with } l=l+1\\ \text{until a prescribed stopping step on } l,\end{cases} \]
where i, j = 1, 2, . . . , n. Here each subproblem can be interpreted as finding the best correction constant ĉ at the current approximation u_{i,j}^{(l)} on level 1. Likewise, one may consider a 2×2 block of pixels with pixel values denoted by the current approximation ũ. Our multilevel method for k = 2 is to look for the best correction constant to update this block so that the underlying merit functional (relating to all four pixels) achieves a local minimum. One sees that this idea operates on level 2. If we repeat the idea with larger blocks, we arrive at levels 3 and 4 with, respectively, 4×4 and 8×8 blocks.

If we write down the above idea in formulas, it may appear complicated, but the idea is simple. On level k, set b = 2^{k−1}, k₁ = (i−1)b+1, k₂ = ib, ℓ₁ = (j−1)b+1, and ℓ₂ = jb. Then the (i, j)th computational block (stencil) involving the single constant c_{i,j} on level k can be depicted in terms of pixels of level 1 as follows:

(10) \[ \begin{array}{ccccc}
\vdots & \vdots & & \vdots & \vdots\\
\tilde u_{k_1-1,\ell_2+1}+c_{i-1,j+1} & \tilde u_{k_1,\ell_2+1}+c_{i,j+1} & \cdots & \tilde u_{k_2,\ell_2+1}+c_{i,j+1} & \tilde u_{k_2+1,\ell_2+1}+c_{i+1,j+1}\\
\tilde u_{k_1-1,\ell_2}+c_{i-1,j} & \tilde u_{k_1,\ell_2}+c_{i,j} & \cdots & \tilde u_{k_2,\ell_2}+c_{i,j} & \tilde u_{k_2+1,\ell_2}+c_{i+1,j}\\
\vdots & \vdots & & \vdots & \vdots\\
\tilde u_{k_1-1,\ell_1}+c_{i-1,j} & \tilde u_{k_1,\ell_1}+c_{i,j} & \cdots & \tilde u_{k_2,\ell_1}+c_{i,j} & \tilde u_{k_2+1,\ell_1}+c_{i+1,j}\\
\tilde u_{k_1-1,\ell_1-1}+c_{i-1,j-1} & \tilde u_{k_1,\ell_1-1}+c_{i,j-1} & \cdots & \tilde u_{k_2,\ell_1-1}+c_{i,j-1} & \tilde u_{k_2+1,\ell_1-1}+c_{i+1,j-1}\\
\vdots & \vdots & & \vdots & \vdots
\end{array} \]

Clearly there is only one unknown constant c_{i,j}, and we shall obtain a one-dimensional subproblem. After some algebraic manipulation [7, 8], we find that the local minimization problem min_{c_{i,j}} J(ũ + P_k c_{i,j}) (with P_k an interpolation operator distributing c_{i,j} to a b × b block on level k as illustrated above) is equivalent to the problem min_{c_{i,j}} G(c_{i,j}), where

(11) \[ \begin{aligned}G(c_{i,j})&=\alpha\sum_{\ell=\ell_1}^{\ell_2-1}\sqrt{(c_{i,j}-h_{k_1-1,\ell})^2+v_{k_1-1,\ell}^2}+\alpha\sum_{m=k_1}^{k_2}\sqrt{(c_{i,j}-v_{m,\ell_2})^2+h_{m,\ell_2}^2}\\ &\quad+\alpha\sum_{\ell=\ell_1}^{\ell_2-1}\sqrt{(c_{i,j}-h_{k_2,\ell})^2+v_{k_2,\ell}^2}+\alpha\sum_{m=k_1}^{k_2}\sqrt{(c_{i,j}-v_{m,\ell_1-1})^2+v_{m,\ell_1-1}^2}\\ &\quad+\alpha\sqrt{2}\,\sqrt{(c_{i,j}-\bar v_{k_2,\ell_2})^2+\bar h_{k_2,\ell_2}^2}+\frac{b^2}{2}(c_{i,j}-\tilde w_{i,j})^2\end{aligned} \]
and

(12) \[ \begin{cases}\tilde z_{m,\ell}=z_{m,\ell}-\tilde u_{m,\ell},\qquad \tilde w_{i,j}=\operatorname{mean}\tilde z(k_1:k_2,\,\ell_1:\ell_2)=\dfrac{\sum_{m=k_1}^{k_2}\sum_{\ell=\ell_1}^{\ell_2}\tilde z(m,\ell)}{b^2},\\[4pt] v_{m,\ell}=\tilde u_{m,\ell+1}-\tilde u_{m,\ell},\qquad \bar v_{k_2,\ell_2}=\dfrac{v_{k_2,\ell_2}+h_{k_2,\ell_2}}{2},\\[4pt] h_{m,\ell}=\tilde u_{m+1,\ell}-\tilde u_{m,\ell},\qquad \bar h_{k_2,\ell_2}=\dfrac{v_{k_2,\ell_2}-h_{k_2,\ell_2}}{2}.\end{cases} \]

The solution of this one-dimensional minimization problem defines the updated solution ũ = ũ + P_k c_{i,j}. We obtain a multilevel method by cycling through all levels and all blocks on each level.

Two remarks are due here. First, would the derived multilevel algorithm converge? The answer is no if the functional J is nonsmooth. However, in [7], we found that the wrongly converged solution is incorrect only near flat patches of the solution. The idea of detecting such flat patches during the iteration and incorporating new local minimizations based on the patches was suggested in [7]; essentially we implement a new coarse level. Second, how would one solve the one-dimensional minimization problem? Our experience suggests either a fixed-point-based Richardson iteration or the Newton method. On the coarsest level, the TV term is unchanged by adding c, so the problem has an exact solution. To avoid the gradient becoming zero on other levels, we need a regularizing parameter δ [7], which does not influence the final convergence as long as it is small (e.g., 10^{−20}). Here the solution of each local minimization problem needs only to be approximate, as with the smoothing steps of a multigrid method for an operator equation.

One might question the advantage of solving (6) this way, or ask: Why can one not solve a regularized version of (6) at the very beginning? The reason is that with a small but nonzero δ the local patches (of the true solution) are smoothed out and are less easy to detect in order to speed up the convergence by a specially designed coarse level. Consequently, if implementing such an approach, one would observe sensitivity of the method when δ changes. Overall the revised multilevel method [7] for solving (6) is the following.

Algorithm 1. Given z and an initial guess ũ = z, with L + 1 levels:
1. Iteration starts with u_old = ũ (ũ contains the initial guess before the first iteration and the updated solution at all later iterations).
2. Smooth for ν iterations the approximation on the finest level 1; i.e., solve (7) for i, j = 1, 2, . . . , n.
3. Iterate for ν times on each coarse level k = 2, 3, . . . , L + 1:
• compute z̃ = z − ũ, w̃_{i,j}, v_{m,ℓ}, and h_{m,ℓ} via (12);
• compute the minimizer c of (11) if k ≤ L; or, on the coarsest level k = L + 1, take the correction constant simply as c = mean(w̃) = mean(z − ũ);
• add the correction ũ = ũ + P_k c, where P_k is the interpolation operator distributing c_{i,j} to the corresponding b × b block on level k as illustrated in (10).
4. On level k = 1, check the possible patch size for each position (i, j): patch = {(i′, j′) : |ũ_{i′,j′} − ũ_{i,j}| < ε} for some small ε. Implement the piecewise constant update as with step 3.
5. If ‖ũ − u_old‖₂ is small enough, exit with u = ũ; otherwise return to step 1 and continue iterations.
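To make the control flow of Algorithm 1 concrete, here is a heavily simplified Python sketch. It is not the authors' implementation: the local one-dimensional problems are solved by a crude grid search on the global functional J of (6) rather than by the closed forms (8) and (11), the patch step 4 is omitted, and the candidate range for the corrections is an arbitrary assumption.

import numpy as np

def J(u, z, alpha):
    # discrete TV functional (6)
    ux = np.diff(u, axis=0)[:, :-1]
    uy = np.diff(u, axis=1)[:-1, :]
    return alpha * np.sum(np.sqrt(ux ** 2 + uy ** 2)) + 0.5 * np.sum((u - z) ** 2)

def best_constant(u, z, alpha, sl, candidates):
    # 1-D local problem: best constant correction on the block `sl`
    best_c, best_val = 0.0, J(u, z, alpha)
    for c in candidates:
        v = u.copy()
        v[sl] += c
        val = J(v, z, alpha)
        if val < best_val:
            best_c, best_val = c, val
    return best_c

def multilevel(z, alpha=5.0, nu=2, cycles=3):
    n = z.shape[0]                        # assume n = 2**L
    L = int(np.log2(n))
    u = z.astype(float).copy()            # initial guess u = z (step 1)
    cands = np.linspace(-30.0, 30.0, 61)  # assumed correction range
    for _ in range(cycles):
        u_old = u.copy()
        for k in range(1, L + 2):         # levels 1 (finest) .. L+1 (coarsest)
            b = 2 ** (k - 1)
            for _ in range(nu):           # steps 2 and 3
                for i in range(0, n, b):
                    for j in range(0, n, b):
                        sl = (slice(i, i + b), slice(j, j + b))
                        u[sl] += best_constant(u, z, alpha, sl, cands)
        if np.linalg.norm(u - u_old) < 1e-3 * n:   # step 5
            break
    return u

On small test images this reproduces the per-pixel smoothing of step 2 (b = 1) and the piecewise constant coarse corrections of step 3; a practical code would replace the grid search by the Richardson or Newton solvers mentioned above.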
We note that, whenever the TV seminorm is used (resulting in a nonsmooth J), the solution will allow local constants. Such local constants lead to the hemivariateness of the solution, which may prevent local minimizations from reaching the global minimizer [17]. Step 4 here is to overcome this; see [8]. Finally we remark that the above method can also be adapted for solving the Poisson denoising model [3].

3. A modified multilevel method for the 2-phase approach. The main difference between our new problem (1) and problem (6) is that in (1) we do the minimization only on a subset N, and we do not update any pixels in N^c. Therefore we have to adapt the above multilevel method accordingly. Figure 1 shows a typical example of noisy pixels in N (denoted by ×) mixed with correct pixels in N^c (denoted by •). Clearly some subproblems are empty if all pixels within them are correct pixels. This will be the main characteristic of our new multilevel method.

First of all, we show how to use the coordinate descent method on the finest level 1:

(13) \[ \begin{cases}\text{Given } u^{(0)}=(u_{i,j}^{(0)})=(z_{i,j}) \text{ with } l=0,\\ \text{solve } \hat c=\operatorname{argmin}_{c\in\mathbb{R}} F^{loc}(u_{i,j}^{(l)}+c),\ u_{i,j}^{(l)}=u_{i,j}^{(l)}+\hat c \text{ for } (i,j)\in N,\\ \text{set } u^{(l+1)}=(u_{i,j}^{(l)}) \text{ and repeat the above step with } l=l+1\\ \text{until a prescribed stopping step on } l,\end{cases} \]
Fig. 1. Illustration of subproblems (on levels 1–4) for a piecewise constant multigrid method for impulse denoising.
Fig. 2. Illustration of interacting pixels in block (2, 2) in Figure 1 (level 3).
which modifies the method (9) for the related Gaussian denoising case, where minimization is carried out for all pixels. Here the local merit functional is defined as

\[ F^{loc}(u_{i,j})=\sum_{(m,r)\in V_{i,j}\cap N^c}2\phi(u_{i,j}-z_{m,r})+\sum_{(m,r)\in V_{i,j}\cap N}\phi(u_{i,j}-u_{m,r}), \]
which is already in discrete form for φ(t) defined in (2)–(4). For the TV norm, it is defined as J^loc in (8), but we do the minimization only for (i, j) ∈ N. The discretization and minimization proceed as with J^loc before, with the only modification of not involving terms which are not linked to noisy pixels.

Next we consider how to formulate the coordinate descent method on level k > 1 (as in Figure 1). To introduce the main idea and formulation, we need to generalize the previous notation V_{i,j} from a single pixel to a block of pixels. Denote by D(i, j) the index set of all pixels of block (i, j) on level k (k ≥ 1) and by V_{D(i,j)} the set of neighboring pixels of D(i, j). Let D(i, j) = B(i, j) ∪ I(i, j) be a nonoverlapping splitting, separating the boundary pixels B(i, j) and the interior pixels I(i, j) of block (i, j). Therefore on levels k = 1, 2, I(i, j) is empty and D(i, j) = B(i, j) for all (i, j). On level 1, clearly, D(i, j) = {(i, j)}, so V_{i,j} = V_{D(i,j)} in this case. For an example on level 3, consider block (i, j) = (2, 2) as depicted in Figure 1: D(2, 2), B(2, 2), and I(2, 2) contain 16, 12, and 4 members, respectively, and V_{D(2,2)} contains 20 members (neighbors).

To simplify the block minimization problem min_{c_{i,j}∈R} F(ũ + P_k c_{i,j}) for block (i, j) on level k, we point out these characteristics:
1. Only the noisy pixels D(i, j) ∩ N need to be taken into consideration in the minimization.
2. All noisy pixels in B(i, j) ∩ N enter the minimization formula.
3. The noisy pixels in I(i, j) ∩ N enter the minimization formula only if one of their neighbors is a correct pixel.
In Figure 2, we illustrate the above observations by using the example of block (2, 2) from Figure 1. There are only 8 noisy pixels (in D(2, 2) ∩ N, denoted by ⊗) within this block of 16 pixels; in particular, all 5 members in B(2, 2) ∩ N enter the minimization formula, while only 2 members (out of the total 3) in I(2, 2) ∩ N do. This is because any two neighboring interior pixels have the same difference in gray values before and after adding a constant; see the (3,3)th member in this block.
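A direct transcription of F^loc may be helpful; the sketch below assumes a 4-connected neighborhood for V_{i,j} (the text leaves the stencil generic) and a boolean mask noisy marking the set N:

import numpy as np

def f_loc(val, i, j, u, z, noisy, phi):
    """Evaluate F^loc at the trial value `val` for the noisy pixel (i, j)."""
    n = u.shape[0]
    total = 0.0
    for m, r in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
        if not (0 <= m < n and 0 <= r < n):
            continue                            # differences outside the image are dropped
        if noisy[m, r]:
            total += phi(val - u[m, r])         # neighbor in N
        else:
            total += 2.0 * phi(val - z[m, r])   # neighbor in N^c
    return total

The coordinate descent (13) then amounts to minimizing f_loc over val for every (i, j) ∈ N, leaving all pixels in N^c untouched.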
We are ready to state a simple and computable form of the block minimization problem min_{c_{i,j}∈R} F(ũ + P_k c_{i,j}) for block (i, j). This problem is equivalent to minimizing the following:

(14) \[ \begin{aligned}G^{loc}(c_{i,j})&=\sum_{(i_1,j_1)\in I(i,j)\cap N}\ \sum_{(m,r)\in V_{i_1,j_1}\cap N^c}2\phi(\tilde u_{i,j}+c_{i,j}-z_{m,r})\\ &\quad+\sum_{(i_1,j_1)\in B(i,j)\cap N}\Bigg(\sum_{(m,r)\in V_{i_1,j_1}\cap N^c}2\phi(\tilde u_{i,j}+c_{i,j}-z_{m,r})+\sum_{(m,r)\in V_{i_1,j_1}\cap N\cap V_{D(i,j)}}\phi(\tilde u_{i,j}+c_{i,j}-\tilde u_{m,r})\Bigg)\\ &=\sum_{(i_1,j_1)\in D(i,j)\cap N}\ \sum_{(m,r)\in V_{i_1,j_1}\cap N^c}2\phi(\tilde u_{i,j}+c_{i,j}-z_{m,r})+\sum_{(i_1,j_1)\in B(i,j)\cap N}\ \sum_{(m,r)\in V_{i_1,j_1}\cap N\cap V_{D(i,j)}}\phi(\tilde u_{i,j}+c_{i,j}-\tilde u_{m,r})\\ &=\sum_{(i_1,j_1)\in D(i,j)\cap N}\ \sum_{(m,r)\in V_{i_1,j_1}\cap N^c}2\phi(c_{i,j}-\tilde z_{m,r})+\sum_{(i_1,j_1)\in B(i,j)\cap N}\ \sum_{(m,r)\in V_{i_1,j_1}\cap N\cap V_{D(i,j)}}\phi(c_{i,j}-\tilde z_{m,r}),\end{aligned} \]
where z̃_{m,r} = z_{m,r} − ũ_{i,j} for (m, r) ∈ N^c and z̃_{m,r} = ũ_{m,r} − ũ_{i,j} for (m, r) ∈ N. Here (14) is a one-dimensional minimization problem for c_{i,j} that may be solved by any available solver (as remarked in the previous section). Once solved, we add the constant correction: ũ = ũ + P_k c_{i,j}.

As in the Gaussian noise case, whenever the TV seminorm is used, the solution will allow local constants, which may prevent local minimizations from reaching the global minimizer [8, 17]. Following [8] (see step 4 of Algorithm 1), we detect whether such a "patch" exists:

(15) \[ H=\text{patch}=\big\{(i',j')\ \big|\ |\tilde u_{i',j'}-\tilde u_{i,j}|<\varepsilon \text{ and } (i,j),(i',j')\in N\big\} \]

for small ε (e.g., 10^{−3}). If this happens, the coordinate descent method can get stuck. Our solution is to let each patch of such pixels form a local block minimization problem. Assume that the above patch H is embedded in some rectangular block D(i, j) of pixel indices. The local block minimization problem is to find the best constant c_{i,j} to be added to all of the noisy candidates in D(i, j) so that the overall merit functional is minimized. This proceeds exactly as in (14). Our overall algorithm is as follows.

Algorithm 2. Given the observed image z, an initial guess ũ, and the noisy candidate set N:
1. Iteration starts with u_old = ũ.
2. Smooth for ν iterations the approximation on the finest level 1; i.e., solve (13) for i, j = 1, 2, . . . , n.
3. Iterate for ν iterations on each coarse level k = 2, 3, . . . , L + 1:
• compute z̃ = z − ũ;
• compute the minimizer c of (14);
• add the correction ũ = ũ + P_k c.
4. On level k = 1, find each patch H via (15), and implement the piecewise constant update as with step 3.
5. If ‖ũ − u_old‖₂ is small enough, exit with u = ũ; otherwise return to step 1 and continue iterations.
In our experiments, we take ν = 2.

To estimate the complexity of Algorithm 2, we need to estimate the number of terms in (14). For the given image z ∈ R^{n×n}, let the predicted noise level from phase 1 be 100w%; e.g., w = 0.5 for 50% noise. First, the cardinality of N is approximately wN, with N = n² for an n × n image; consequently the cardinality of N^c is (1 − w)N. Second, on level k, the number of operations associated with all interior pixels V_{i₁,j₁} ∩ N^c is about 2wN, while the number of terms associated with all boundary pixels V_{i₁,j₁} ∩ N ∩ V_{D(i,j)} is about 4(N/b²)(1 − w)b = 4(1 − w)N/2^{k−1}. Therefore, similarly to [8], we can estimate the complexity of one cycle of Algorithm 2 as follows:

\[ sw\sum_{k=1}^{L+1}\big(2wN+4(1-w)N/2^{k-1}\big)\approx 2sN(L+1)w^2+8Nsw(1-w)\approx N\log(N), \]
which is close to the optimal O(N) expected from a multilevel method. Here s is the total number of local relaxations for solving each local minimization per cycle (i.e., the total number from all ν iterations per cycle).

4. Numerical experiments. Here we shall demonstrate the usefulness of the proposed multilevel method (Algorithm 2). Restoration performance is quantitatively measured by the peak signal-to-noise ratio (PSNR)

\[ \mathrm{PSNR}=\mathrm{PSNR}(r,u)=10\log_{10}\frac{255^2}{\frac{1}{mn}\sum_{i,j}(r_{i,j}-u_{i,j})^2}, \]
where r_{i,j} and u_{i,j} denote the pixel values of the original image and the restored image, respectively, with r, u ∈ R^{m×n}. Here we assume that z_{i,j}, r_{i,j}, u_{i,j} ∈ [0, 255]. We will consider salt-and-pepper noise here, and the noisy candidate set N is detected by the adaptive median filter (AMF); see [4, 13]. We note that AMF uses median values to restore pixels in N to obtain the restored image on A. But, for the 2-phase method, AMF is used just to obtain N, and pixels in N are then restored by minimizing (1) in phase 2. We will consider three different ways of restoration here:
• MG—our Algorithm 2 with the initial image ũ on A given by AMF.
• NT—the Newton continuation method [5] with the initial image ũ on A given by AMF.
• MG/NT—Algorithm 2 with the initial image ũ on A given by the solution obtained by NT.
We have three sets of experiments. In set 1, we compare the performance of MG/NT with NT for φ(t) = √(t² + 10²) as used in [1]. In set 2, we compare the performance of MG with NT for the same φ. Finally, in set 3, we consider the performance of MG with φ(t) = |t| and with the TV model (5). Since these functionals are nonsmooth, such cases cannot be handled by NT [5] or conjugate gradient methods [1] unless we regularize the functional. It turns out that models (2) and (5) may produce better restored images than the popular choice (3) in cases of a high noise level.

Set 1—comparison of MG/NT with NT. We have taken 4 test images of size 512 × 512 and experimented with various levels of salt-and-pepper-type noise. We summarize the results in Table 1.
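The PSNR formula translates into a one-line function; a small sketch assuming float images with values in [0, 255]:

import numpy as np

def psnr(r, u):
    """Peak signal-to-noise ratio of the restoration u against the original r."""
    mse = np.mean((np.asarray(r, dtype=float) - np.asarray(u, dtype=float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

For instance, psnr(r, z) gives the entries of the PSNR(r, z) column of Table 1 below.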
Table 1
Comparison of restoration quality of u_{MG/NT} = ũ with u_{NT} (P: problem, δ: improvement).

P  Noise  PSNR(r,z)  PSNR(r,u_AMF)  PSNR(r,u_NT)  PSNR(r,ũ)    δ
1   50%     8.51        32.41          38.95        39.49     0.53
1   70%     7.05        28.17          33.66        35.09     1.42
1   95%     5.73        19.66          23.32        25.70     2.38
2   50%     8.37        28.92          32.77        32.95     0.18
2   70%     6.90        26.05          29.78        30.13     0.35
2   95%     5.58        19.66          23.51        25.10     1.58
3   50%     7.79        27.86          31.27        31.15    −0.11
3   70%     6.32        25.29          29.02        29.09     0.07
3   95%     5.01        18.87          23.61        25.01     1.41
4   50%     8.47        33.76          40.65        40.79     0.13
4   70%     7.02        29.55          35.41        36.76     1.35
4   95%     5.69        19.97          24.21        27.24     3.03
One can observe in Table 1 that the improvement by MG/NT over NT alone is larger for higher noise levels. The comparative results for the noise level of 70% are shown in Figures 3 and 4. We note that the improvement can be bigger for smaller images; we show in Figure 5 a test example with 64 × 64 resolution and 95% noise for an image taken from problem 4, where the improvement in PSNR is about 6 dB.

Set 2—comparison of MG with NT. The previous tests show that our MG works best if it uses the result from NT as an initial guess. Here we test how MG performs without such good initial guesses. We display the results in Table 2, where one can see that improvements are still observed in most cases, especially when the noise level is high. Clearly, in most test cases, MG/NT gives the best restoration result. In separate experiments, we have compared the speed of our MG with NT as two different solvers. It turns out that the CPU times are comparable for images up to the size of 512 × 512. However, for larger images with n ≥ 1024, NT cannot be run on the same Pentium PC with 2 GB of memory (due to the excessive memory requirement), while MG can be run and shows the same quality of restoration after 2–3 cycles.

Set 3—behavior of MG/NT with nonsmooth regularization functionals. We now discuss how Algorithm 2 performs with models (2) and (5). We note that local minimization using model (2) leads to problems of the type

\[ \min_{c\in\mathbb{R}} G(c),\qquad G(c)=\sum_{j=1}^{p}|c-c_j| \]

for some integer p, which has the exact solution c = c_min = median([c₁, . . . , c_p]).
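This median property is easy to check numerically; a quick sketch (not from the paper) compares the median against a brute-force grid search:

import random

random.seed(1)
cs = sorted(random.uniform(0.0, 255.0) for _ in range(7))   # p = 7 samples
G = lambda c: sum(abs(c - cj) for cj in cs)
c_med = cs[len(cs) // 2]                        # exact minimizer for odd p
c_grid = min((0.05 * k for k in range(5101)), key=G)
print(G(c_med) <= G(c_grid) + 1e-9)             # expect True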
Our experiments show that MG/NT is still quite useful for high-noise cases, as seen from Figure 6 for images with 95% noise, where we compare NT (left plot), MG/NT with (2) (middle plot), and MG/NT with model (5) (right plot). We can observe that the improvement over NT is not as great as that of MG/NT with (3) in the set 1 tests. However, it is pleasing to see that such (previously untested) nonsmooth functionals can lead to better restored images.

Finally, we discuss the number of levels that can be used in Algorithm 2. In the previous tests, we used the full number of levels. This point arises because all the local minimizations help to achieve global convergence in a less nested way than geometric multigrid for a partial differential equation. As discussed before, we always need the patch level and the corresponding coarse-level minimization on each patch block. We wish to see whether other coarse levels are needed.
Fig. 3. Comparison of MG/NT with NT with 70% salt-and-pepper noise (problems 1 and 2).
Fig. 4. Comparison of MG/NT with NT with 70% salt-and-pepper noise (problems 3 and 4).
Fig. 5. Comparison of MG/NT with NT at 95% noise for a small image (panels: true image; noisy input, PSNR = 5.81; phase 1 (AMF), PSNR = 17.36; NT, PSNR = 17.31; MG/NT, PSNR = 23.09).

Table 2
Comparison of restoration quality of MG with NT (δ: improvement).

Problem  Noise  PSNR(r,u_NT)  PSNR(r,u_MG)    δ
1         50%      38.95         39.17       0.22
1         70%      33.66         35.06       1.40
1         95%      23.32         25.37       2.05
2         50%      32.77         32.42      −0.36
2         70%      29.78         30.06       0.28
2         95%      23.51         24.78       1.27
3         50%      31.27         30.63      −0.63
3         70%      29.02         29.03       0.01
3         95%      23.61         24.72       1.12
4         50%      40.65         40.53      −0.12
4         70%      35.41         36.73       1.32
4         95%      24.21         26.80       2.59
In Figures 7 and 8, respectively, we display the cases ν = 1 and ν = 2 for test problems 1 and 2 with 90% noise and n = 512. There we compare the obtained PSNR values as the number of MG cycles increases for levels = 1, 2, and 10 (the maximal number of levels), plotted against the corresponding PSNR value from NT. Clearly one observes that there is no harm in using more MG levels, that a larger ν makes the use of full MG levels less necessary, and, above all, that convergence does not require all coarse levels. The related idea of starting the iterations from the coarsest level rather than the finest was also tested (on the above 4 examples), and no advantages were observed. However, for extremely simple
Fig. 6. Performance of MG/NT with models (2) and (5) with 95% salt-and-pepper noise. For problem 1: PSNR(r, u_NT) = 23.32, PSNR(r, u_eq.(2)) = 24.39, and PSNR(r, u_TV) = 23.86. For problem 2: PSNR(r, u_NT) = 23.52, PSNR(r, u_eq.(2)) = 24.40, and PSNR(r, u_TV) = 24.08.
Fig. 7. Convergence history for ν = 1 with problems 1 (left) and 2 (right).
images that contain a few large piecewise constant features, it is not difficult to see that this (full-multigrid-like) idea will be useful. In any case, we emphasize again that the patch level is always needed for convergence.

Remark 1. It is of interest to remark on why the initial image ũ on N should influence the convergence of Algorithm 2. We believe that the initial image ũ supplied by phase 1, which is obtained by local median-type ideas, commonly has overestimated piecewise constant patches present in ũ that our existing multilevel method
Fig. 8. Convergence history for ν = 2 with problems 1 (left) and 2 (right).
(Algorithm 2) cannot handle. A similar problem also exists for the piecewise constant multilevel method (Algorithm 1) for Gaussian denoising. For instance, if the true image (in one dimension) is r = [10 10 10 10 20 20 20 20] and the initial image is given as ũ = [10 10 10 10 10 10 20 20], Algorithm 1 will not work, while the initial image ũ = [1 1 1 1 50 50 50 50] or any random vector ũ will be fine. This is related to the assumption of constant patches being correctly detected [8]. In this sense, one idea would be to ensure that the initial image does not have any constant patches at all.

5. Conclusions. We have generalized a multilevel method previously proposed for the standard Gaussian denoising model to solve an impulse denoising model. The multiresolution strategy of the new algorithm is found to give better restoration results for images with a high noise level, with improvements of up to 2–3 dB. As the multilevel method has nearly optimal complexity, it naturally offers a fast solution procedure for extremely large images.

Acknowledgments. The authors thank J. F. Cai, CUHK, for providing various assistance in the numerical experiments. They are grateful to all three anonymous referees for their critical and helpful comments.

REFERENCES

[1] J. Cai, R. Chan, and B. Morini, Minimization of edge-preserving regularization functional by conjugate gradient type methods, in Image Processing Based on Partial Differential Equations, Math. Phys., X. C. Tai et al., eds., Springer-Verlag, Berlin, 2006, pp. 109–122.
[2] R. Chan, T. Chan, and J. Wan, Multigrid for differential-convolution problems arising from image processing, in Proceedings of the Workshop on Scientific Computing, G. Golub, S. Lui, F. Luk, and R. Plemmons, eds., Springer-Verlag, Berlin, 1997, pp. 58–72.
[3] R. Chan and K. Chen, Multilevel algorithm for a Poisson noise removal model with total-variation regularisation, Int. J. Comput. Math., 84 (2007), pp. 1183–1198.
[4] R. Chan, C. Ho, and M. Nikolova, Salt-and-pepper noise removal by median-type noise detectors and detail-preserving regularization, IEEE Trans. Image Process., 14 (2005), pp. 1479–1485.
[5] R. Chan, C. Ho, C. Leung, and M. Nikolova, Minimization of detail-preserving regularization functional by Newton's method with continuation, in Proceedings of the IEEE International Conference on Image Processing, Genova, Italy, 2005, pp. 125–128.
[6] R. Chan, C. Hu, and M. Nikolova, An iterative procedure for removing random-valued impulse noise, IEEE Signal Process. Lett., 11 (2004), pp. 921–924.
[7] T. Chan and K. Chen, On a nonlinear multigrid algorithm with primal relaxation for the image total variation minimisation, Numer. Algorithms, 41 (2006), pp. 387–411.
[8] T. Chan and K. Chen, An optimization-based multilevel algorithm for total variation image denoising, Multiscale Model. Simul., 5 (2006), pp. 615–645.
[9] T. Chan and J. Shen, Image Processing and Analysis—Variational, PDE, Wavelet, and Stochastic Methods, SIAM, Philadelphia, 2005.
[10] T. Chen and H. Wu, Space variant median filters for the restoration of impulse noise corrupted images, IEEE Trans. Circuits Syst. II, 48 (2001), pp. 784–789.
[11] M. Donatelli and S. Serra-Capizzano, On the regularizing power of multigrid-type algorithms, SIAM J. Sci. Comput., 27 (2006), pp. 2053–2076.
[12] Y. Dong, R. Chan, and S. Xu, A detection statistic for random-valued impulse noise, IEEE Trans. Image Process., 16 (2007), pp. 1112–1120.
[13] H. Hwang and R. A. Haddad, Adaptive median filters: New algorithms and results, IEEE Trans. Image Process., 4 (1995), pp. 499–502.
[14] M. Nikolova, A variational approach to remove outliers and impulse noise, J. Math. Imaging Vision, 20 (2004), pp. 99–120.
[15] T. Nodes and N. Gallagher, Jr., The output distribution of median type filters, IEEE Trans. Commun., COM-32 (1984), pp. 532–541.
[16] S. Osher and S. Esedoglu, Decomposition of images by the anisotropic Rudin–Osher–Fatemi model, Comm. Pure Appl. Math., 57 (2004), pp. 1609–1626.
[17] J. Ortega and W. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970.
[18] G. Pok, J. Liu, and A. Nair, Selective removal of impulse noise based on homogeneity level information, IEEE Trans. Image Process., 12 (2003), pp. 85–92.
[19] L. Rudin, S. Osher, and E. Fatemi, Nonlinear total variation based noise removal algorithms, Phys. D, 60 (1992), pp. 259–268.
[20] C. Vogel, Computational Methods for Inverse Problems, SIAM, Philadelphia, 2002.
© 2008 Society for Industrial and Applied Mathematics
SIAM J. SCI. COMPUT. Vol. 30, No. 3, pp. 1490–1507
A DISCRETE OPTIMIZATION APPROACH TO LARGE SCALE SUPPLY NETWORKS BASED ON PARTIAL DIFFERENTIAL EQUATIONS∗ A. FÜGENSCHUH†, S. GÖTTLICH‡, M. HERTY‡, A. KLAR‡§, AND A. MARTIN†
Abstract. We introduce a continuous optimal control problem governed by ordinary and partial differential equations for supply chains on networks. We derive a mixed-integer model by discretization of the dynamics of the partial differential equations and by approximations to the cost functional. Finally, we investigate numerically properties of the derived mixed-integer model and present numerical results for a real-world example. Key words. supply chains, conservation laws, networks, optimization AMS subject classifications. 90B10, 65M DOI. 10.1137/060663799
1. Introduction. Supply chain management is usually defined as a set of approaches utilized to efficiently integrate suppliers, manufacturers, warehouses, and stores so that merchandise is produced and distributed in the right quantities, to the right locations, and at the right time, in order to minimize system-wide costs while satisfying service level requirements [26]. Regarding this definition, supply chain modeling and simulation is obviously characterized by many different scales and several different mathematical approaches. On the one hand there are discrete event simulations based on considerations of individual parts. On the other hand, continuous models using partial differential equations (PDEs) have been introduced; see, e.g., [2, 3, 4] and [8] for a general overview. They describe the evolution of the density of parts in the supply chain. In the economic literature, optimization of supply chains often refers to a wide variety of problems, where a couple of independent entities (such as companies) depend on each other in a network-like structure, where the output of one entity is used as the input of another. On such a very coarse scale, mainly models based on mixed-integer programming are used; see [6]. As a starting point we consider continuous supply chain models based on PDEs. They have been derived using a time recursion process. In [2] a conservation law for the density of parts was derived:

(1.1) \[ \partial_t\rho+\partial_x\min\Big\{\frac{L}{T}\rho,\ \mu(x)\Big\}=0. \]

Here, L and T are averaged length and processing time. In [12, 13] a network model based on these equations was introduced by adding suitable equations for queues. ∗Received by the editors June 27, 2006; accepted for publication (in revised form) December 20, 2007; published electronically April 9, 2008. This work was supported by the University of Kaiserslautern Excellence Cluster "Dependable Adaptive Systems and Mathematical Modeling," the DFG–SFB 666 at TU Darmstadt, the program DFG–SPP 1253, and the DAAD PPP D/0628176. http://www.siam.org/journals/sisc/30-3/66379.html †Department of Mathematics, TU Darmstadt, Schloßgartenstraße 7, 64289 Darmstadt, Germany (
[email protected],
[email protected]). ‡ Department of Mathematics, TU Kaiserslautern, Postfach 3049, 67653 Kaiserslautern, Germany (
[email protected],
[email protected]). § Fraunhofer Institute ITWM, Fraunhofer-Platz 1, 67663 Kaiserslautern, Germany (klar@itwm. fhg.de).
This model was investigated analytically in [15]. Although the computation times for the model are smaller than those of a discrete event approach, they are still excessively large, in particular if the optimization of large networks with many processors is considered. Starting from the PDE model we derive a simplified model. This is done using a straightforward two-point discretization of the equations on each arc of the network. For the optimization of the supply chain model one could proceed in different ways. In the PDE framework a natural approach would be to use an optimization procedure based on an adjoint calculus for the original PDE model or the discretized version. Here we use a second approach: the resulting discretized equations are interpreted as a mixed-integer problem, relating the PDE scale to a mixed-integer problem. This gives rise to a linear programming based branch-and-cut approach for the numerical solution of the optimization problem which allows the treatment of large scale problems [23, 9]. Other benefits of such an approach are the easy inclusion of extensions like, for example, the computation of bounded queues, time-dependent controls (distribution rates) A^{v,e}(t), maintenance intervals, and other reasonable constraints. From the point of view of the operations management literature for supply chains, advantages include the following: The PDE approach gives a guideline to develop new and dynamically more accurate models, since the usual mixed-integer problem can be viewed as a simple and very coarse (two-point) approximation of the PDE. Moreover, the approach opens a way to introduce nonlinearities in a straightforward and consistent way into these models and to treat nonlinear problems like chemical production lines by appropriate methods. In general, the paper shows a connection between the PDE approach and the mixed-integer approach, which yield (after discretizing the PDE) essentially the same models but differ in the solution method (gradient computations for adjoint systems versus combinatorial methods). For a comparison of the two approaches (i.e., on the one hand the treatment of the optimization problem in the PDE context using adjoint equations and a gradient scheme, and on the other hand the interpretation of the discretized equations as a mixed-integer problem and the use of a corresponding solver like CPLEX) we refer to [18].

The outline of the paper is as follows: In section 2 we introduce a continuous optimal control problem governed by PDEs for supply chains on networks and derive a mixed-integer model after discretization of the PDEs using approximations to the cost functional. In section 3, we present computational results, first on a sequence of processors, then on more complex networks and real-world applications.

2. Derivation of the mixed-integer problem. The starting point for the derivation of a mixed-integer problem for supply chains is the continuous network model recently introduced in [2, 12]. Below we recall the proposed continuous model and its corresponding optimal control problem. For a detailed discussion of the properties and the validity of this model we refer to [2] and [12].

First, let us give a brief idea of the detailed discussion to follow: In the next section we will assume that a general supply chain is represented as a network of directed arcs. On each of its arcs the dynamics of the density evolution of the processed parts is given by a PDE.
At each vertex of the network we can influence the distribution of the mass flux of the parts by a control. Also, different arc dynamics are coupled at a vertex by an ordinary differential equation (ODE). Finally, we measure certain quantities on the whole network and obtain the optimal control problem by asking for a control such that the dynamics on arcs and vertices maximize the measured quantity. The dynamics, controls, and measured
Fig. 2.1. Two arcs (processors) linked by queue q².
quantities will be made precise below in sections 2.1 and 2.2. The general optimal control problem is then reformulated by approximations as a mixed-integer problem in section 2.3. Moreover, the mixed-integer problem allows for many extensions of the current model; we discuss possible extensions in section 2.4.

2.1. The continuous problem based on PDEs. We introduce a directed graph (V, A) consisting of a set of arcs A and a set of vertices V. Each arc e ∈ A corresponds to a single processor and is parameterized in space by an interval x ∈ [a^e, b^e]. We assume that each processor has constant maximal processing capacity μ^e, constant processing velocity u^e, and length L^e := b^e − a^e. Different processors are coupled at vertices, and queues are located at the vertices; see below. We introduce the following notation: For a fixed vertex v we denote by δ_v^− the set of all ingoing arcs and by δ_v^+ the set of all outgoing arcs. Let x_v^e := a^e if e ∈ δ_v^+, and x_v^e := b^e if e ∈ δ_v^−. We explain this notation in Figure 2.1, where a simple situation is depicted in which only two processors are linked. We have one ingoing arc, i.e., δ_v^− = {e₁}, and one outgoing arc, i.e., δ_v^+ = {e₂}. Obviously, it holds that b¹ = a² at the vertex. To describe more general situations with multiple incoming and outgoing arcs at a single vertex, we use the notation x_v^e in the following.

The model in [12] describes the evolution of the density of parts ρ^e(x, t) at position x and time t inside each processor e and the time evolution of the buffering queue q^e(t) of processor e. On each arc e, ρ^e(x, t) is transported with velocity u^e if the mass flux is less than the maximal processing rate; i.e., ρ^e satisfies the conservation law

(2.1) \[ \partial_t\rho^e+\partial_x f^e(\rho^e)=0,\qquad f^e(\rho):=\min\{\mu^e,u^e\rho\}. \]
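To fix ideas, a first-order upwind update of (2.1) for a single processor is a few lines of Python; the parameters and grid below are hypothetical and are not part of the mixed-integer derivation of section 2.3:

import numpy as np

mu, u_vel, L, nx = 1.0, 1.0, 1.0, 50    # hypothetical processor data
dx = L / nx
dt = 0.5 * dx / u_vel                   # respects the CFL condition

def flux(rho):
    return np.minimum(mu, u_vel * rho)  # f^e(rho) = min{mu^e, u^e * rho}

def upwind_step(rho, inflow_flux):
    f = flux(rho)
    f_left = np.concatenate(([inflow_flux], f[:-1]))  # flux entering each cell
    return rho + dt / dx * (f_left - f)

rho = upwind_step(np.zeros(nx), inflow_flux=0.8)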
At an arbitrary vertex v ∈ V, different processors with different maximal capacities are connected. Here we have the freedom to distribute the total mass flux from all incoming supply chains to the outgoing processors; see Figure 2.2. We introduce a time-dependent vector (A^{v,e}(t))_{e∈δ_v^+} ∈ R^{|δ_v^+|} having entries A^{v,e}(t) ∈ [0, 1] and satisfying Σ_{e∈δ_v^+} A^{v,e}(t) = 1. The functions A^{v,e}(t) will be obtained as solutions of the optimization problem described below. Depending on the capacity of the outgoing processor e ∈ δ_v^+, incoming parts will either be buffered in the queue q^e or passed on to the processor. Therefore, a queue q^e satisfies the ODE

(2.2) \[ \partial_t q^e(t)=A^{v,e}(t)\Bigg(\sum_{\tilde e\in\delta_v^-}f^{\tilde e}(\rho^{\tilde e}(x_v^{\tilde e},t))\Bigg)-f^e(\rho^e(x_v^e,t)), \]

where the outgoing mass flux f^e is given by

(2.3) \[ f^e(\rho^e(x_v^e,t))=\begin{cases}\min\Big\{A^{v,e}(t)\sum_{\tilde e\in\delta_v^-}f^{\tilde e}(\rho^{\tilde e}(x_v^{\tilde e},t));\ \mu^e\Big\}, & q^e(t)=0,\\ \mu^e, & q^e(t)>0.\end{cases} \]
Fig. 2.2. Network with distribution rates A^{1,2}(t) and A^{2,6}(t).
Equation (2.3) is motivated as follows: Fix an outgoing processor e. If its buffering queue still contains parts at time t, then those parts should be processed at the maximal capacity μ^e. Otherwise, assume the queue is empty. Then we process either the total inflow to this processor or, if the inflow exceeds the maximal capacity, we work at maximal capacity. Finally, we supplement (2.1) with initial data and obtain the continuous supply chain model for the evolution of (ρ^e, q^e)_{e∈A} on the network (V, A) by (2.1), (2.2), and (2.3).

As already discussed in [5], (2.3) induces a discontinuous dependence on the queue q^e(t). In this article, the following regularization is proposed: Given a small parameter ε ≪ 1, then, on a time scale t ∈ O(1/ε), the regularization (2.4) relaxes to (2.3):

(2.4) \[ f^e(\rho^e(x_v^e,t))=\min\Big\{\mu^e;\ \frac{q^e(t)}{\varepsilon}\Big\}. \]

Last, we remark that for initial data

(2.5) \[ u^e\rho_0^e(x)\le\mu^e, \]

(2.1) simplifies: Due to the presence of the buffering queues q^e in front of each processor, we guarantee that the inflow u^e ρ^e(x_v^e, t) for e ∈ δ_v^+ is bounded by μ^e. Hence, under assumption (2.5) on the initial data, the conservation law (2.1) reduces to the simple transport equation

(2.6) \[ \partial_t\rho^e+u^e\partial_x\rho^e=0. \]
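The interplay of the queue ODE (2.2) and the regularized outflow (2.4) is already visible at a single 1-in/1-out vertex as in Figure 2.1, where A^{v,e} ≡ 1. A minimal explicit-Euler sketch with hypothetical data:

# queue in front of processor 2, fed by the flux leaving processor 1
mu2, eps, dt = 1.0, 0.1, 0.01
q, t = 0.0, 0.0
while t < 5.0:
    incoming = 2.0 if t < 1.0 else 0.0   # hypothetical flux f^{e_1} at the vertex
    outgoing = min(mu2, q / eps)         # regularized outflow (2.4)
    q = max(q + dt * (incoming - outgoing), 0.0)  # (2.2) with A = 1; the clamp is a numerical safeguard
    t += dt
print(q)  # the queue fills while incoming > mu2 and then drains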
2.2. The continuous optimal control problem. The entries of the distribution matrices A^{v,e}(t), v ∈ V, are the controls for the supply chain process described above. We determine the optimal controls A^{v,e}(t) as solutions to an optimal control problem. The general problem on a graph (V, A) with given initial data ρ_0^e(x) satisfying (2.5), initial queues q^e(0), control horizon T, constant processing velocities u^e, e ∈ A, and constant processing rates μ^e, e ∈ A, is

(2.7) \[ \min_{A^{v,e}(t),\,v\in\mathcal V}\int_0^T\sum_{e\in\mathcal A}\int_{a^e}^{b^e}F(\rho^e(x,t),q^e(t))\,dx\,dt \]

subject to

(2.8a) \[ \partial_t\rho^e+u^e\partial_x\rho^e=0, \]

(2.8b) \[ f^e(\rho^e(x_v^e,t))=\min\Big\{\mu^e;\ \frac{q^e(t)}{\varepsilon}\Big\}, \]
(2.8c) \[ \partial_t q^e(t)=A^{v,e}(t)\Bigg(\sum_{\tilde e\in\delta_v^-}f^{\tilde e}(\rho^{\tilde e}(x_v^{\tilde e},t))\Bigg)-f^e(\rho^e(x_v^e,t)). \]
The cost functional in (2.7) will be chosen later with respect to the real-world application in section 3.4. One possibility consists in using

(2.9) \[ F(\rho^e,q^e)=\rho^e(x,t)+\frac{1}{L^e}q^e(t), \]

which measures the quantity of goods per time unit, but we stress that the following discussion proceeds analogously for different, but separable, cost functionals. Similar to [10], the cost functional (2.9) can be interpreted as a maximization of the outflow; see also [13] for more examples of the choice of F. Problem (2.7), (2.8) is an optimization problem with partial and ordinary differential equations and algebraic equations as constraints. Each evaluation of the cost functional (2.7) requires in particular the solution of the PDE on the whole network. Usually, a numerical solution of the problem is obtained by a suitable discretization of the PDEs and ODEs and the application of an appropriate nonlinear optimization method, like an adjoint calculus; see, for example, [18]. Depending on the discretization in space and time, one obtains a hierarchy of different models, ranging from a very coarse two-point discretization for each arc, which gives a fast but not accurate solution of the PDE, to a very fine discretization of the dynamics on each arc, which is accurate but very expensive. In the next section, we propose a reformulation of the discretized versions of problem (2.7), (2.8) in terms of a mixed-integer model. Depending on the discretization, this reformulation then allows for a fast optimization with respect to the distribution at vertices even for large scale networks.
2.3. From a continuous optimization problem towards a mixed-integer formulation. The reformulation is based on a coarse grid discretization of the PDE (2.8). This is possible, since (2.8a) does not allow for complex dynamics like backwards traveling shock waves. Hence, we propose a two-point upwind discretization of each arc e. Finally, a reformulation of (2.8b) using binary variables yields the mixed-integer model for supply chains. The details are as follows: For each fixed arc e ∈ A we introduce two variables for the flux at the boundary and a single variable for the queue for each time t of a time grid t = 1, . . . , N_T:

(2.10) \[ f_t^e:=f^e(\rho^e(a^e,t)),\qquad g_t^e:=f^e(\rho^e(b^e,t)),\qquad q_t^e:=q^e(t)\quad\forall e,t. \]
A two-point upwind discretization in space and time of (2.8a) is given by

(2.11) \[ g_{t+1}^e=g_t^e+\frac{\Delta t}{L^e}u^e(f_t^e-g_t^e)\quad\forall e,t, \]
where we use the same time discretization Δt for all arcs e. Condition (2.8b) is reformulated by introducing binary variables ζ_t^e ∈ {0, 1} for e ∈ A, t = 1, . . . , N_T:

(2.12a) \[ \mu^e\zeta_t^e\le f_t^e\le\mu^e, \]

(2.12b) \[ \frac{q_t^e}{\varepsilon}-M\zeta_t^e\le f_t^e\le\frac{q_t^e}{\varepsilon}, \]

(2.12c) \[ \mu^e\zeta_t^e\le\frac{q_t^e}{\varepsilon}\le\mu^e(1-\zeta_t^e)+M\zeta_t^e, \]
where M is a sufficiently large constant; more precisely, M may be set to T·max_{e∈A} μ^e. Next, we need to reformulate the coupling conditions (2.8c) and (2.8b). We introduce variables h_t^e for the total inflow to arc e at x = a^e and require the following equalities for each vertex v ∈ V:

(2.13a) \[ \sum_{e\in\delta_v^+}h_t^e=\sum_{e\in\delta_v^-}g_t^e\quad\forall v,t, \]

(2.13b) \[ q_{t+1}^e=q_t^e+\Delta t\,(h_t^e-f_t^e)\quad\forall e,t. \]
Note that we use an explicit time discretization of the ODE. This is mainly due to the fact that an implicit discretization would introduce an additional coupling between different arcs of the network; the explicit discretization introduces only a local coupling between the arcs connected at a fixed vertex v ∈ V. From condition (2.8b), we observe that the ODE is stiff whenever 0 < q^e(t) ≤ εμ^e. A suitable discretization of the ODE and the PDE should satisfy the CFL condition and a stiffness condition. Hence, we choose Δt as

(2.14) \[ \Delta t=\min\{\varepsilon;\ L^e/u^e : e\in\mathcal A\} \]

in the case of the coarsest discretization. A natural choice for ε is ε = Δx/u^e since, as already mentioned above, q^e(t)/ε represents a relaxed flux. In more detail, we know that a flux can be rewritten as the product of the part density and the processing velocity, f^e = u^e ρ^e. Since the density at the first discretization point x = a^e of the processor is the same as q^e/Δx, the parameter ε is determined. This leads to the condition

(2.15) \[ \Delta t=\min\{L^e/u^e : e\in\mathcal A\}. \]
Moreover, we have the following box constraints for all e ∈ A and t = 1, . . . , N_T:

(2.16) \[ 0\le f_t^e\le\mu^e,\qquad 0\le g_t^e\le\mu^e,\qquad 0\le q_t^e. \]
Finally, we assign initial data to f_1^e, g_1^e, and q_1^e. For a discretization of the cost functional we use a trapezoidal rule in space and a rectangle rule in time and obtain

(2.17) \[ \Delta t\sum_{e,t}\frac{L^e}{2}\Big(F(f_t^e/u^e,\,q_t^e)+F(g_t^e/u^e,\,q_t^e)\Big). \]

Summarizing, the mixed-integer model derived by discretization of the network formulation of the supply chain dynamics is given by

(2.18) \[ \min\ (2.17)\quad\text{subject to}\quad (2.11),\ (2.12),\ (2.13),\ (2.16). \]
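To see how the pieces fit together, the following sketch assembles (2.11)-(2.13) and the big-M constraints (2.12) for a single processor with a prescribed inflow profile, using the PuLP modeling package. It is an illustration only, not the Zimpl/CPLEX setup used in section 3; all parameter values are hypothetical, and the objective is a simple stand-in for (2.17). Since the inflow is fixed, the constraints pin down the dynamics, so the model is in effect a feasibility problem.

import pulp

NT, mu, eps, L, u_vel = 12, 1.0, 0.5, 1.0, 1.0
dt = min(eps, L / u_vel)                           # timestep as in (2.14)
M = NT * mu / eps                                  # a safe big-M constant
h = [2.0 if t < 3 else 0.0 for t in range(NT)]     # prescribed inflow h_t

prob = pulp.LpProblem("supply_chain", pulp.LpMinimize)
f = [pulp.LpVariable(f"f{t}", 0, mu) for t in range(NT)]   # box constraints (2.16)
g = [pulp.LpVariable(f"g{t}", 0, mu) for t in range(NT)]
q = [pulp.LpVariable(f"q{t}", 0) for t in range(NT)]
zeta = [pulp.LpVariable(f"zeta{t}", cat="Binary") for t in range(NT)]

prob += pulp.lpSum(q)                              # stand-in for the cost (2.17)
for t in range(NT - 1):
    prob += g[t + 1] == g[t] + dt * u_vel / L * (f[t] - g[t])  # (2.11)
    prob += q[t + 1] == q[t] + dt * (h[t] - f[t])              # (2.13b)
for t in range(NT):
    prob += mu * zeta[t] <= f[t]                               # (2.12a)
    prob += q[t] / eps - M * zeta[t] <= f[t]                   # (2.12b)
    prob += f[t] <= q[t] / eps
    prob += mu * zeta[t] <= q[t] / eps                         # (2.12c)
    prob += q[t] / eps <= mu * (1 - zeta[t]) + M * zeta[t]
prob += f[0] == 0
prob += g[0] == 0
prob += q[0] == 0                                              # initial data

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([round(pulp.value(q[t]), 3) for t in range(NT)])         # queue history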
A few remarks are in order. First, it is a matter of simple calculations to recover the entries of the distribution vectors A_t^{v,e} from the values of h_t^e. Second, other objective functionals can be envisioned, and in the case of a nonlinearity in (2.7), we might have to introduce additional binary variables to obtain a mixed-integer approximation. This is standard and can be found, for example, in [10]. Third, if we use an implicit discretization of the ODE (2.8c),

(2.19) \[ q_{t+1}^e=q_t^e+\Delta t\,\big(h_{t+1}^e-f_{t+1}^e\big), \]
we end up with no restriction on the timestep. From the continuous point of view, however, such an approach is not favorable due to the additionally introduced strong coupling between all arcs of the network. We conclude the modeling with the following remark: In the particular case of a supply chain consisting of a sequence of processors and vertices of degree at most two, there is no possibility of distributing parts. In this case, the mixed-integer model coincides with the two-point upwind discretization of the PDE, and both yield the same dynamics. The mixed-integer problem reduces to a feasibility problem in this case.

2.4. Model extensions. In real-world examples the model introduced so far is too simple to give realistic results. Hence, we propose a few extensions to the mixed-integer model (2.18) on an arbitrary network. We give examples which will also be used in section 3.4.
1. Finite size buffers. Usually, in the design of production lines, it is mandatory to limit the size of the buffering queues q_t^e. This condition can be implemented in the mixed-integer context by adding box constraints as follows:

(2.20) \[ q_t^e\le\text{const}\quad\forall e,t. \]
Similarly, we could add the constraints q^e(t) ≤ const to the continuous problem (2.7), (2.8) and obtain a state-constrained optimal control problem.
2. Optimal inflow profile. Under the assumption of finite sizes of the buffering queues, the question arises of finding the maximum possible inflow to the network such that the buffering capacities of the queues are not exceeded. This can be modeled by replacing the cost functional (2.17) or (2.7), respectively, by the objective function

(2.21)    max Σ_{e∈Ã, t} f_t^e,

where Ã ⊂ A is the set of all inflow arcs of the network.
3. Processor shutdown due to maintenance. Maintenance of processors can also be included in the mixed-integer model: Assume that processor ẽ has to be switched off for maintenance for N consecutive time intervals. Further assume that this period can be chosen freely during the whole simulation time t = 1, . . . , N_T. Then we supplement the mixed-integer model with the conditions

(2.22a)    h_{t+l}^{ẽ} ≤ max{μ^e : e ∈ A} · |A| · (1 − φ_t^{ẽ})    ∀ t, ∀ l = 0, . . . , N − 1,

(2.22b)    Σ_{t=1}^{N_T} φ_t^{ẽ} = 1,
where for each processor e ∈ A and every time t we introduce a binary variable φ_t^e ∈ {0, 1} that indicates whether process e is shut down at time t. If φ_{t_0}^{ẽ} = 1, then the maintenance interval starts at time t_0, and during the time interval [t_0, t_0 + N] the processor ẽ is not available.
4. Min-up/min-down restrictions. The previously presented shutdown scenario can be seen as a specific case of the general min-up/min-down restriction. Takriti, Krasenbrink, and Wu [27] (see also [20]) formulated min-up/min-down time constraints in their study of a unit commitment problem. These can also be useful for some specific real-world applications of our model where fast and frequent switching of a processor between up and down is unwanted, for example, because it causes increased operator stress and reduced machine life. Let U^e and D^e be positive integers. If process e switches from inactive (or down) to active (or up), then it must stay up for at least U^e timesteps (or at least till the end of the considered planning horizon). Conversely, if process e switches from up to down, then it must stay down for at least D^e consecutive timesteps (or at least till the end of the planning horizon). According to [27], these constraints can be formulated as follows: for the min-up restrictions,
(2.23)    ϕ_t^e − ϕ_{t−1}^e ≤ ϕ_τ^e    ∀ 2 ≤ t < τ ≤ min{t + U^e − 1, N_T},

and for the min-down restrictions,

(2.24)    ϕ_{t−1}^e − ϕ_t^e ≤ 1 − ϕ_τ^e    ∀ 2 ≤ t < τ ≤ min{t + D^e − 1, N_T}.
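In a modeling layer these families of inequalities are produced by a double loop over t and τ. The following sketch (our own illustration; the variable bookkeeping via coefficient dictionaries is an assumed convention, not part of the paper's Zimpl model) emits the min-up constraints (2.23); the min-down constraints (2.24) are obtained analogously by swapping the roles of ϕ_t and ϕ_{t−1} and replacing the right-hand side by 1 − ϕ_τ.

    def min_up_constraints(e, U, NT):
        # Generate (2.23): phi[e][t] - phi[e][t-1] - phi[e][tau] <= 0
        # for all 2 <= t < tau <= min(t + U - 1, NT).
        cons = []
        for t in range(2, NT + 1):
            for tau in range(t + 1, min(t + U - 1, NT) + 1):
                cons.append(({(e, t): 1.0, (e, t - 1): -1.0, (e, tau): -1.0}, 0.0))
        return cons  # list of (coefficient dict, right-hand side) pairs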
In contrast to φ, ϕ_t^e ∈ {0, 1} represents the activity of process e at time t: we have ϕ_t^e = 1 if and only if e is up in timestep t. The inequalities

(2.25)    h_t^e ≤ ϕ_t^e · max{μ^ẽ : ẽ ∈ A}    ∀ t
are now used to switch off the processors.

3. Numerical results. All computations are performed on the same platform, namely, a 2.4 GHz AMD64X2 Linux computer with 2 GB of RAM. Instances of the mixed-integer problems are set up using the modeling language Zimpl [19] and solved with ILOG Cplex 9.1 [17] using default settings. We begin with a general remark: When considering optimization problems such as (2.7), (2.8), we have the choice of either applying the mixed-integer approach leading to (2.18) or solving the continuous optimality system of (2.7), (2.8). For the latter problem, typically gradient descent methods are used, together with a suitable discretization for the numerical solution; this work has been carried out in [18]. We emphasize that if we use the same discretization as in section 2.3 for the continuous problem, we obtain exactly the same optimality system; the difference between the continuous approach [18] and the mixed-integer problem (2.18) is just the solution method. For a further comparison of the computational times of the mixed-integer problem with the continuous approach we refer to [18].

3.1. Chain of processors. We report on computing times for a sequence of processors as in Figure 3.1.

Fig. 3.1. Sample network.
As already pointed out, for this particular network the mixed-integer problem is in fact a feasibility problem, since there is no possibility of distributing parts at vertices. Moreover, in this case the mixed-integer problem is a coarse grid discretization of the PDE dynamics given by (2.8a), (2.8b), and (2.8c). Hence, the computing times in Table 3.1 are presolve times of the mixed-integer problem. The most important technique here is so-called bounds strengthening, where the bounds of one variable are propagated onto the lower and upper bounds of others.
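The effect of bounds strengthening on these chain instances can be seen in a toy calculation (our own illustration, not taken from [25]): propagating interval bounds forward through the queue recursion (2.13b) fixes every queue variable once the inflows and outflows are pinned down by the boundary data.

    def propagate_queue_bounds(q1, h_bounds, f_bounds, dt):
        # Forward interval propagation for q_{t+1} = q_t + dt*(h_t - f_t).
        # h_bounds[t] and f_bounds[t] are (lower, upper) pairs; when they
        # are degenerate (lower == upper), the queue bounds collapse to a
        # point and the variable can be eliminated, which is what the
        # presolve achieves on the chain instances.
        lo = hi = q1                                # q_1 is fixed by the initial data
        out = [(lo, hi)]
        for (h_lo, h_hi), (f_lo, f_hi) in zip(h_bounds, f_bounds):
            lo = max(0.0, lo + dt * (h_lo - f_hi))  # 0 <= q_t, cf. (2.16)
            hi = hi + dt * (h_hi - f_lo)
            out.append((lo, hi))
        return out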
Table 3.1
Computing times for a sequence of N_P processors and N_T time intervals. The mixed-integer problem (MIP) consists of 3 · N_T · N_P real and N_T · N_P binary variables.

N_P    N_T    MIP [sec]   |   N_P    N_T     MIP [sec]
 10    500    0.360       |    25     200    0.340
 20    500    0.830       |    25     400    0.830
 30    500    1.340       |    25     600    1.350
 40    500    1.950       |    25     800    1.950
 50    500    2.510       |    25    1000    2.600
100    500    5.600       |    25    2000    6.290
Details of this can be found in [25] (see also [1, 9] for a survey). The solution time for the mixed-integer problem is thus the time needed to eliminate all variables by bounds strengthening, which we empirically found to be proportional to N_T · ln(N_T) and N_P · ln(N_P). The results in Table 3.1 are given for different numbers of processors N_P and different numbers of time intervals N_T. For each arc e ∈ A the capacities μ^e, velocities u^e, and lengths L^e are all set to 1. We use a sinusoidal inflow profile at x = a^1 and zero initial data on all other arcs.

3.2. Convergence results. In this section, we investigate the behavior of the solution for finer space discretizations. To this end, we introduce an equidistant grid in space: each arc is now discretized with D internal points such that x_i ∈ [a^e, b^e] represents one space discretization point and Δx = L^e/(D − 1) is the spatial grid size. We denote by y_t^{e,i}, i = 1, . . . , D, the flux inside the processor measured at the points x_i ∈ [a^e, b^e], with y_t^{e,1} = f_t^e the incoming flux into the processor at x = a^e. All fluxes y_t^{e,i} are bounded by the maximal capacity on the arc:

(3.1)    0 ≤ y_t^{e,i} ≤ μ^e    ∀ e, ∀ i, ∀ t.
Obviously, D = 2 yields the two-point upwind discretization introduced in section 2.3 with Δx = L^e. Using the new notation, (2.11) can be rewritten as

(3.2)    y_{t+1}^{e,i} = y_t^{e,i} + (Δt/Δx) u^e (y_t^{e,i−1} − y_t^{e,i})    ∀ e, t, i = 2, . . . , D.

Furthermore, condition (2.15) changes to

(3.3)    Δt = min{Δx/u^e : e ∈ A},
which induces smaller time intervals and, consequently, larger systems. For the computation, all other network equations remain unchanged. The underlying network for our convergence analysis is depicted in Figure 3.2. We use the parameter setting u^e = 1, L^e = 1 ∀ e ∈ A and μ = (100, 40, 30, 20, 20, 5, 10, 10), where μ = (μ^1, . . . , μ^8). We consider the following optimization problem with y_t^{e,D} = g_t^e:

(3.4)    − Σ_t (1/(t + 1)) y_t^{8,D},

(3.5)    min_{A_t^{v,e}, v∈V} (3.4)  subject to  (3.2), (2.12), (2.13), (3.1).
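A minimal sketch of the refined dynamics, assuming a single arc and our own function names: one timestep of the update (3.2), vectorized over the D discretization points, with the capacity bounds (3.1) enforced by clipping.

    import numpy as np

    def upwind_step(y, f_in, u, dx, dt, mu):
        # One step of (3.2) on a single arc; y holds y_t^{e,1..D}, with
        # y[0] = f_t^e the incoming flux and y[-1] = g_t^e the outflow.
        y = y.copy()
        y[0] = min(f_in, mu)                         # boundary flux, capped by (3.1)
        y[1:] += (dt / dx) * u * (y[:-1] - y[1:])    # interior update, i = 2,...,D
        return np.clip(y, 0.0, mu)                   # enforce 0 <= y <= mu, cf. (3.1)

With D = 2 and Δx = L^e this reduces to the two-point scheme of section 2.3, and the timestep is restricted by (3.3).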
Fig. 3.2. Maximal processing capacities on each arc.
Fig. 3.3. Inflow f^1(t) into the system (left). Outflow y_t^{8,D} of the system plotted for different space discretizations Δx = 1 and Δx = 0.03125 (right).
Table 3.2
Convergence results for finer space discretizations.

Δx         Δt Σ_t y_t^{8,D}
1          495
0.5        472.5
0.25       461.25
0.125      455.625
0.0625     452.8125
0.03125    451.4063
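The convergence behavior reported in Table 3.2 can be checked directly from the tabulated values; the following snippet computes the observed order of convergence toward the reference value 450 (explained below) over the successive halvings of Δx.

    import numpy as np

    # Column of Table 3.2 for dx = 1, 0.5, ..., 0.03125.
    vals = np.array([495.0, 472.5, 461.25, 455.625, 452.8125, 451.4063])
    errs = vals - 450.0                      # reference value, see below
    print(np.log2(errs[:-1] / errs[1:]))     # observed orders, all close to 1

The observed order is one, i.e., the error decreases linearly with Δx, consistent with a first-order upwind scheme.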
The inflow into the network is always prescribed at the first arc. We choose the continuous inflow profile shown in Figure 3.3. Obviously, it holds that ∫_0^T f^1(t) dt = 450. The conservation of mass guarantees that the flux fed into the network also leaves the network after several timesteps (for T large enough); cf. Figure 3.3. Therefore, this value can be used as a reference value for the comparison of different space discretizations. In Table 3.2, we observe that for smaller step sizes Δx the values of Δt Σ_t y_t^{8,D} converge to the value of the integral. We also recognize that the difference between two computed values decreases linearly with the space grid size, and the error decreases to 0.5% for the finest spatial grid. Depending on the actual application, other choices for Δx might be reasonable and have to be chosen according to the desired accuracy; note that smaller values of Δx significantly increase the computational effort. Another example using the same setting where only the length of one arc is changed, L^7 = 10, shows that the solution of the coarse discretization Δx = 1 is smoothed out on this arc, whereas the solution of the fine discretization Δx = 0.03125
Fig. 3.4. Evolution of the flux y_t^{7,D} plotted for different space discretizations Δx = 1 and Δx = 0.03125.
Fig. 3.5. A network with X = 4 and Y = 3.
converges to the limit given by μ^7 = 10; see Figure 3.4. This result also supports the use of mixed-integer models for solving PDE-constrained optimal control problems.

3.3. Connected networks. Here, we solve the mixed-integer problem on connected networks to gain further insight into the computational complexity of our mixed-integer approach. The network we use is a composition of standard building blocks. Each block has two inflow and two outflow arcs, and the blocks are coupled into a rectangular two-dimensional network of size X × Y. An example of a 4 × 3 network is shown in Figure 3.5. In what follows we present the results of several tests on this type of network. As mentioned above, the corresponding mixed-integer models (2.18) were solved by Cplex, using a linear programming-based branch-and-cut approach. The underlying linear programming relaxation, in which the integrality constraints on the binary variables are dropped, is solved using the simplex method. The relaxation is further strengthened by introducing cutting planes. Among the various cuts known in the literature (see [7] or [9], for instance), the following turned out to be particularly useful: implied bound cuts [16], flow cover cuts [24, 14], mixed-integer rounding cuts [22], disjunctive cuts [21], and Gomory fractional cuts [11]. In our first experiment, the values L, v, μ are set to 1 for all processors; thus ε = 1. Moreover, we set N_T = 100. For different sizes of networks, X, Y ∈ {2, . . . , 5}, we evaluate the time for solving the corresponding mixed-integer models. The results are given in Table 3.3. Clearly, the larger the network in the X or Y direction, the more time is needed to compute a globally optimal solution. In a further test we computationally evaluate the capability of our approach to solve large scale instances of the problem, consisting of networks with several hundreds of vertices and arcs. For this we take k × 2 networks of the above type, where k varies
Table 3.3
Computing times in seconds for a sequence of X × Y processors and 100 time intervals.

        X = 2   X = 3   X = 4   X = 5
Y = 2     2       4       5      13
Y = 3     4       6      12      20
Y = 4     4      13      32      48
Y = 5     8      24      56      93
Fig. 3.6. Computing times in seconds for k × 2 networks and 100 + k/20 (left) and 100 + k (right) timesteps.
Table 3.4
Computing times in seconds for the 4 × 3 processors in Figure 3.5 and various time intervals.

N_T    Time [sec]   |   N_T    Time [sec]
 10        1        |   200       48
 50        4        |   250       74
100       12        |   300      104
150       26        |   350      144
from 2 to 1000. Note that such a network consists of around 2k vertices and 5k arcs. In order to leave sufficient time for the dynamics of the flow, we select the number of timesteps N_T depending on the network's respective size k. To this end, we conducted two tests: in the first we let N_T = 100 + k/20 (see the left-hand graph of Figure 3.6), and in the second we let N_T = 100 + k (right-hand graph of Figure 3.6). In both experiments one can see that the computing time grows rapidly with the size of the instances. As a result of these tests we conclude that the number of timesteps is the more limiting factor in solving large scale instances of the problem, whereas the size of the network (number of arcs and vertices) is not as crucial. In the next experiment, we want to evaluate the dependency of the solution times on the number of timesteps. We consider here the network with X = 4 and Y = 3 as in Figure 3.5. Again we set L, v, μ := 1 for all processors. The number of timesteps varies from 10 to 350. The results are given in Table 3.4. As one might expect, the solution time increases if more timesteps are taken into account.
Fig. 3.7. Distribution of computation times.
Fig. 3.8. Layout of a production line.
Empirically, the solution time is proportional to N_T · ln(N_T), which corresponds to the solution time for the chain of processors above. In a final experiment, we test the dependency of the solution time on the actual values of the instance. That is, we take the network with X = 3 and Y = 3 and set v, μ := 1 for all processors. The length of each processor is chosen uniformly at random from {1, . . . , 5}. We generated 10,000 instances in this way and solved them to optimality. The distribution of the solution times is shown in Figure 3.7. It might come as a surprise that the solution time depends heavily on the actual values of L. Some instances were solved quickly (9 seconds was the minimum), but at the other end of the range there were instances that needed up to 20,000 seconds. Figure 3.7 shows the solution time distribution for those instances that needed less than 60 seconds, which are approximately 77% of all instances. At the moment it is not understood why some instances are easy and others are so difficult to solve.

3.4. A real-world application. As already mentioned, we finally consider a real-world example to illustrate the mixed-integer model and its extensions. For our investigations, we use the network given in Figure 3.8 and its abstract form in Figure 3.9. The first arc in Figure 3.9 is artificial and is used only to prescribe the inflow profile. In Figure 3.8, the layout of a supply chain producing toothbrushes is
Fig. 3.9. Corresponding network.

Table 3.5
Relevant data split according to maximal processing rates μ^e, lengths L^e, processing velocities u^e, and bounds q̄^e.

Processor    μ^e        u^e        L^e    q̄^e
1            100        0.01333    1      100
2            0.71       0.35714    1.5    18
3–8          0.06666    0.01333    1      8
9            0.71       0.04762    3      1
10–11        0.24       0.119      1.5    1
12           0.71       0.35714    1.5    1
shown. We are interested in simulating and optimizing the number of pallets on which the half-finished toothbrushes are carried through the network. In detail, pallets are fed into the network at processor 2, where each pallet is assembled before being processed in processors 3–11. The finished toothbrushes are removed in processor 12, and the pallets start circulating again. We assume that the amount of pallets is given by an inflow profile which we prescribe on arc 1. Here, the inflow data is given by a step function: f_t^1 := 0.852 for 0 ≤ t ≤ N_T/2 and f_t^1 := 0 otherwise. Further data is given in Table 3.5. The aim of the optimization problem is to maximize the outflow of processor 12, i.e., g_t^{12}; that is, to optimize the amount of pallets (in particular finished toothbrushes) passed through processor 12 under the constraints introduced in section 2.3. The mixed-integer problem is adapted by replacing the cost functional (2.17) with the objective function
(3.6)    − Σ_t (1/(t + 1)) g_t^{12}

and is given by

(3.7)    min_{A_t^{v,e}, v∈V} (3.6)  subject to  (2.11), (2.12), (2.13), (2.16).
We present results on (3.7) as well as on several of the extensions discussed in section 2.4. For all computations we fix the smoothing parameter ε = 1, the run-time T = 200, and the constant M defined above.

3.4.1. Finite size buffers. In the first setting, we compare the situation of unbounded and bounded queues (i.e., finite size buffers). In the latter case the restriction

(3.8)    q_t^e ≤ q̄^e

is additionally imposed in (3.7), with q̄^e as defined in Table 3.5. In Figure 3.10, the evolution of the queues in the case of unbounded and bounded queues is shown. For
Fig. 3.10. Amount of parts in the queues in the unbounded (left) and bounded (right) cases.
Fig. 3.11. Distribution rates A_t^{3,4} in the unbounded (left) and bounded (right) cases.
simplicity, we plot only the queues q_t^2, q_t^3, and q_t^4. Obviously, these queues are forced to stay below the box constraints in the case of finite size buffers. The queues q_t^9, . . . , q_t^{12} remain empty. Figure 3.11 shows the time evolution of the distribution rates A_t^{3,4}, i.e., the percentage of the incoming parts coming from arc 2 that is distributed to arc 4. The additional constraints (3.8) on the queues lead to a qualitatively different behavior of this distribution factor.

3.4.2. Shutdown of processor 3 for maintenance. We report on results for the maintenance of processor 3. We assume this processor to be down for at least N = 20 consecutive time intervals. The mixed-integer problem is then given by (3.7) with the additional constraint (2.22) for arc ẽ = 3. In Figure 3.12, we investigate the time evolution of the queues q^2, q^3, and q^4 in the situation of bounded queues with no maintenance and in the case of bounded queues and shutdown due to maintenance. In the latter case the redistribution to queues q^3 and q^4 is such that the queues reach the allowed maximal buffer size. In Figure 3.13 we plot the time evolution of the optimal distribution of parts coming from arc 2 and entering arc 3. The optimal time for a shutdown was determined to be ΔT = [180, 200], and in the right part of Figure 3.13 we see that, of course, A_t^{3,3} = 0 during this interval.

3.4.3. Inflow profile optimization. In this example, we are interested in the shape of an optimal inflow profile subject to the buffering capacities of the queues. Due
Fig. 3.12. Number of pallets in the case of only bounded queues (left) and maintenance in processor 3 (right).
Fig. 3.13. Distribution rates A_t^{3,3} in the case of bounded queues (left) and for the shutdown of processor 3 (right).
to this fact we define the objective function by (3.9), where we add binary variables with a small factor in the objective to reduce the computing time of the mixed-integer problem. Hence, we consider

(3.9)    Σ_{e=1, t} f_t^e + (1/1000) Σ_{e,t} ζ_t^e,
with the corresponding mixed-integer problem

(3.10)    max (3.9)  subject to  (2.11), (2.12), (2.13), (2.16), and q_t^e ≤ q̄^e.
Additionally, we assume that the inflow does not exceed the time limit t = N_T/2 and that |f_t^1 − f_{t+1}^1| ≤ 1/10, in order to avoid large oscillations. On the left side of Figure 3.14 the default inflow profile is shown: a constant inflow f_t^1 = 0.852 for t ≤ 100. The optimal inflow profile is different and exhibits small oscillations. In Figure 3.15 we observe the effect of the inflow profile on the time evolution of the buffering queues.
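For concreteness, the default step inflow and the oscillation bound can be written down directly (a small sketch with our own names; in the mixed-integer model the bound is linearized as the two inequalities f_t^1 − f_{t+1}^1 ≤ 1/10 and f_{t+1}^1 − f_t^1 ≤ 1/10 on the free inflow variables).

    import numpy as np

    NT = 200
    t = np.arange(1, NT + 1)
    f1_default = np.where(t <= NT / 2, 0.852, 0.0)   # step profile of section 3.4

    def oscillation_ok(f1, bound=0.1):
        # |f_t^1 - f_{t+1}^1| <= 1/10 for all t
        return bool(np.all(np.abs(np.diff(f1)) <= bound))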
Fig. 3.14. Standard inflow profile (left) and optimal inflow profile (right).
Fig. 3.15. Evolution of the queues using standard inflow (left) and optimal inflow (right).
4. Summary and conclusions. Starting from a PDE model we derived a simplified model by using a straightforward two-point discretization of the equations on each arc. For the optimization of the supply chain model, the resulting equations are interpreted as a mixed-integer problem. This allows a linear programming approach for the numerical solution of the optimization problem. In addition, some extensions of this model are presented and investigated numerically. Benefits of the new model are the computation of time-dependent controls A^{v,e}(t) and the ease of extension. As a next step, multicommodity aspects and further extensions will be included in the model. Moreover, comparisons with other models and optimization approaches will be performed. In particular, a comparison of the present optimization approach based on the mixed-integer interpretation with an optimization procedure based on the adjoint approach to the original PDE model or its discretized version is currently under investigation.

REFERENCES

[1] E. D. Andersen and K. D. Andersen, Presolving in linear programming, Math. Programming, 71 (1995), pp. 221–245.
[2] D. Armbruster, P. Degond, and C. Ringhofer, A model for the dynamics of large queuing networks and supply chains, SIAM J. Appl. Math., 66 (2006), pp. 896–920.
[3] D. Armbruster, P. Degond, and C. Ringhofer, Kinetic and fluid models for supply chains supporting policy attributes, Bull. Inst. Math. Acad. Sin. (N.S.), 2 (2007), pp. 433–460.
[4] D. Armbruster, D. Marthaler, and C. Ringhofer, Kinetic and fluid model hierarchies for supply chains, Multiscale Model. Simul., 2 (2003), pp. 43–61.
[5] D. Armbruster, C. de Beer, M. Freitag, T. Jagalski, and C. Ringhofer, Autonomous control of production networks using a pheromone approach, Phys. A, 363 (2006), pp. 104–114.
[6] R. E. Bixby, D. Simchi-Levi, A. Martin, and U. Zimmermann, Mathematics in the supply chain, Oberwolfach Rep., 1 (2004), pp. 963–1036.
[7] R. E. Bixby, M. Fenelon, Z. Gu, E. Rothberg, and R. Wunderling, MIP: Theory and practice—closing the gap, in System Modelling and Optimization, Kluwer Academic, Boston, MA, 2000, pp. 19–49.
[8] C. F. Daganzo, A Theory of Supply Chains, Springer-Verlag, New York, Berlin, Heidelberg, 2003.
[9] A. Fügenschuh and A. Martin, Computational integer programming and cutting planes, in Handbook on Discrete Optimization, K. Aardal, G. Nemhauser, and R. Weismantel, eds., Handbooks Oper. Res. Management Sci. 12, Elsevier, Amsterdam, 2005, pp. 69–122.
[10] A. Fügenschuh, M. Herty, A. Klar, and A. Martin, Combinatorial and continuous models for the optimization of traffic flows on networks, SIAM J. Optim., 16 (2006), pp. 1155–1176.
[11] R. E. Gomory, An Algorithm for the Mixed Integer Problem, Tech. Report RM-2597, The Rand Corporation, Santa Monica, CA, 1960.
[12] S. Göttlich, M. Herty, and A. Klar, Network models for supply chains, Commun. Math. Sci., 3 (2005), pp. 545–559.
[13] S. Göttlich, M. Herty, and A. Klar, Modelling and optimization of supply chains on complex networks, Commun. Math. Sci., 4 (2006), pp. 315–330.
[14] Z. Gu, G. L. Nemhauser, and M. W. P. Savelsbergh, Lifted cover inequalities for 0-1 integer programs, INFORMS J. Comput., 10 (1998), pp. 427–437.
[15] M. Herty, A. Klar, and B. Piccoli, Existence of solutions for supply chain models based on partial differential equations, SIAM J. Math. Anal., 39 (2007), pp. 160–173.
[16] K. Hoffman and M. Padberg, Improving representations of zero-one linear programs for branch-and-cut, ORSA J. Comput., 3 (1991), pp. 121–134.
[17] ILOG CPLEX Division, Incline Village, NV; information available at http://www.cplex.com.
[18] C. Kirchner, M. Herty, S. Göttlich, and A. Klar, Optimal control for continuous supply network models, Netw. Heterog. Media, 1 (2006), pp. 675–688.
[19] T. Koch, Rapid Mathematical Programming, Ph.D. thesis, Technische Universität Berlin, 2004; available online from http://edocs.tu-berlin.de/diss/2004/koch_thorsten.pdf.
[20] J. Lee, J. Leung, and F. Margot, Min-up/min-down polytopes, Discrete Optim., 1 (2004), pp. 77–85.
[21] A. N. Letchford, On disjunctive cuts for combinatorial optimization, J. Comb. Optim., 5 (2001), pp. 299–315.
[22] H. Marchand and L. A. Wolsey, Aggregation and mixed integer rounding to solve MIPs, Oper. Res., 49 (2001), pp. 363–371.
[23] G. Nemhauser and L. A. Wolsey, Integer and Combinatorial Optimization, Wiley-Interscience, New York, 1999.
[24] M. W. Padberg, T. J. van Roy, and L. A. Wolsey, Valid linear inequalities for fixed charged problems, Oper. Res., 33 (1985), pp. 842–861.
[25] M. W. P. Savelsbergh, Preprocessing and probing for mixed integer programming problems, ORSA J. Comput., 6 (1994), pp. 445–454.
[26] D. Simchi-Levi, P. Kaminsky, and E. Simchi-Levi, Managing the Supply Chain: The Definitive Guide for the Business, McGraw–Hill, New York, 2003.
[27] S. Takriti, B. Krasenbrink, and L. S. Y. Wu, Incorporating fuel constraints and electricity spot prices into the stochastic unit commitment problem, Oper. Res., 48 (2000), pp. 268–280.
SIAM J. SCI. COMPUT. Vol. 30, No. 3, pp. 1508–1526

© 2008 Society for Industrial and Applied Mathematics

PERFORMANCE AND ACCURACY OF LAPACK'S SYMMETRIC TRIDIAGONAL EIGENSOLVERS∗

JAMES W. DEMMEL†, OSNI A. MARQUES‡, BERESFORD N. PARLETT†, AND CHRISTOF VÖMEL‡

Abstract. We compare four algorithms from the latest LAPACK 3.1 release for computing eigenpairs of a symmetric tridiagonal matrix. These include QR iteration, bisection and inverse iteration (BI), the divide-and-conquer method (DC), and the method of multiple relatively robust representations (MR). Our evaluation considers speed and accuracy when computing all eigenpairs, and additionally subset computations. Using a variety of carefully selected test problems, our study includes a variety of today's computer architectures. Our conclusions can be summarized as follows. (1) DC and MR are generally much faster than QR and BI on large matrices. (2) MR almost always does the fewest floating point operations, but at a lower MFlop rate than all the other algorithms. (3) The exact performance of MR and DC strongly depends on the matrix at hand. (4) DC and QR are the most accurate algorithms, with observed accuracy O(√n ε). The accuracy of BI and MR is generally O(nε). (5) MR is preferable to BI for subset computations.

Key words. LAPACK, symmetric eigenvalue problem, inverse iteration, divide and conquer, QR algorithm, multiple relatively robust representations, algorithm, accuracy, performance, benchmark

AMS subject classifications. 15A18, 15A23

DOI. 10.1137/070688778
1. Introduction. One goal of the latest 3.1 release [28] of LAPACK [2] is to produce the fastest possible symmetric eigensolvers subject to the constraint of delivering small residuals and orthogonal eigenvectors. For an input matrix A that may be dense or banded, one standard approach is the conversion to tridiagonal form T, then finding the eigenvalues and eigenvectors of T, and last transforming the eigenvectors of T into eigenvectors of A. Depending on the situation, all the eigenpairs or just some of them may be desired. LAPACK, for some algorithms, allows selection by eigenvalue indices (“find λ_i, λ_{i+1}, . . . , λ_j, where λ_1 ≤ λ_2 ≤ · · · ≤ λ_n are all the eigenvalues in increasing order, and their eigenvectors”) or by an interval (“find all the eigenvalues in [a, b] and their eigenvectors”). This paper analyzes the performance and accuracy of four algorithms:
1. QR iteration, in LAPACK's driver STEV (QR for short);
2. bisection and inverse iteration, in STEVX (BI for short);
3. divide and conquer, in STEVD (DC for short);
4. multiple relatively robust representations, in STEVR (MR for short).
Section 2 gives a brief description of these algorithms with references. For a representative picture of each algorithm's capacities, we developed an extensive set of test matrices [11], broken into two classes: (1) “practical matrices” based on reducing matrices from a variety of practical applications to tridiagonal form, and generating some other tridiagonals with similar spectra, and (2) synthetic

∗ Received by the editors April 19, 2007; accepted for publication (in revised form) December 26, 2007; published electronically April 9, 2008. http://www.siam.org/journals/sisc/30-3/68877.html
† Mathematics Department and Computer Science Division, University of California, Berkeley, CA 94720 ([email protected], [email protected]).
‡ Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 ([email protected], [email protected]).
“testing matrices” chosen to have extreme distributions of eigenvalues or other properties designed to exercise one or more of the algorithms; see section 3.1 for details. The timing and accuracy tests were performed on a large set of current computing platforms, which are described in section 3.2. Ideally, one of these algorithms would be the best in all circumstances. In reality, the performance of an algorithm may depend on the matrix, the platform, and possibly underlying libraries like the basic linear algebra subprograms (BLAS) [1, 5], so we need to judge very carefully. The goal of this paper is to study and illuminate some aspects of this astonishing variability in behavior.

Section 4 presents performance results when computing all eigenpairs. Its first part, section 4.1, consists of an overall summary of performance on practical matrices across all investigated architectures: DC and MR are usually much faster than QR and BI for these matrices. Section 4.2 looks at one architecture in detail and shows that MR almost always does the fewest floating point operations but at a lower MFlop rate than all the other algorithms. For certain matrix classes for which there is a great deal of deflation, DC becomes much faster. Section 4.3 further illustrates the dependence of algorithm performance on certain matrix characteristics by closely studying the behavior on selected synthetic test problems.

The performance of subset computations is analyzed in section 5. Only BI and MR allow the computation of subsets at reduced cost. We show that MR beats BI on average and identify matrices with subsets where one algorithm wins over the other.

Section 6 shows that QR and DC are the most accurate algorithms, measured in terms of producing both pairwise orthogonal eigenvectors and small residual norms. MR is less accurate but still achieves errors of size O(nε), where n is the dimension and ε is machine epsilon. Depending on the matrix and platform, it is known that BI may completely fail to guarantee orthogonal eigenvectors [13], though this is rare and occurred only in a few subset calculations with our test matrices. Summary and conclusions are given in section 7.

2. Description of algorithms. Table 2.1 gives an overview of LAPACK's symmetric eigensolvers; see also [2, 4]. The driver column lists the LAPACK driver name. The subset column indicates whether the algorithm can compute subsets at reduced cost. With respect to memory, we note that QR uses the least, then MR and BI, and DC uses the most. The workspace reported for DC corresponds to the case COMPZ = “I”; the workspace reported for BI is for SYEVX, the driver that combines STEBZ and STEIN.

Table 2.1
LAPACK codes for computing the eigenpairs of a symmetric matrix of dimension n.

Algorithm   Driver   Subsets   Workspace                          References
QR          STEV     N         Real: 2n − 2                       [10, 23, 31]
DC          STEVD    N         Real: 1 + 4n + n², int.: 3 + 5n    [7, 8, 24]
BI          STEVX    Y         Real: 8n, int.: 5n                 [13, 26]
MR          STEVR    Y         Real: 18n, int.: 10n               [12, 15, 16, 18, 32, 33]

2.1. QR iteration. QR applies a sequence of similarity transforms to the tridiagonal T until its off-diagonal elements become negligible and the diagonal elements have converged to the eigenvalues of T. It consists of a bulge-chasing procedure that implicitly includes shifts and uses only plane rotations while preserving the tridiagonal form [31]. The plane rotations are accumulated to find the eigenvectors of T. The overall complexity of the method is 3bn³ + O(n²), where b denotes the average
number of bulge chases per eigenvalue; see [10]. No part of the bulge-chasing procedure currently makes use of higher levels (that is, 2 or 3) of the BLAS.

2.2. Bisection and inverse iteration. Bisection based on Sturm sequences requires O(nk) operations to compute k eigenvalues of T. If the distance between the eigenvalues is large enough (relative to ‖T‖), then computing the corresponding eigenvectors by inverse iteration is also an O(nk) process. If, however, the eigenvalues are not well separated, Gram–Schmidt orthogonalization is employed to try to achieve numerically orthogonal eigenvectors. In this case the complexity of the algorithm increases to O(nk²), and in the worst case, where almost all eigenvalues of T are “clustered,” it can increase to O(n³). Furthermore, from the accuracy point of view this procedure is not guaranteed to be reliable; see [10, 13]. Neither bisection nor inverse iteration makes use of higher-level BLAS.

2.3. Divide and conquer. The divide-and-conquer method can be described in terms of a binary tree where each node corresponds to a submatrix and its eigenpairs, obtained through recursively dividing the matrix in halves; see the exposition in [10]. The tree is processed bottom up, starting with submatrices of size 25 or smaller.¹ DC uses QR to solve the small eigenproblems and then computes the eigenpairs of a parent using the already computed eigenpairs of the children. A parent's eigenvalues can be computed as solutions of a secular equation. The eigenvector computation consists of two steps. The first one is a relatively inexpensive scaling step. The second one, which is most of the work, multiplies the eigenvectors of the current matrix by the eigenvector matrix accumulated so far. This step uses the level 3 BLAS (BLAS 3) routine GEMM (dense matrix-matrix multiply). In the worst case, DC is an O(n³) algorithm; on the practical matrices studied in Table 4.1, the effective exponent is less than three. The complexity of the eigenvector computation can sometimes be reduced substantially by a process called deflation: if a submatrix eigenvalue nearly equals another, or certain entries in the submatrix eigenvector are small enough, the corresponding matrix column can be excluded from the BLAS 3 operation. In Table 4.3, we will see that for some matrices deflation may occur for most eigenpairs, substantially accelerating the computation.

2.4. Multiple relatively robust representations. MR is a sophisticated variant of inverse iteration that avoids Gram–Schmidt orthogonalization and thus becomes an O(n²) algorithm. The algorithm can be described in terms of a (generally irregular) representation tree. The root node describes the entire spectrum of T, and the children define gradually refined eigenvalue approximations. See the references in Table 2.1 for the details. The overall complexity of the algorithm depends on the clustering of the eigenvalues: if some eigenvalues of T agree to d digits on average, then the algorithm has to do work proportional to dn². The algorithm uses a random perturbation to ensure with high probability that eigenvalues cannot be too strongly clustered; see [17] for details. MR cannot make use of higher-level BLAS.

3. The testing environment.

3.1. Description of test matrices. In this section, we give a brief overview of the collection of test matrices used in this study. A more detailed description of the testing infrastructure is given in [11]. We focus on two types of matrices.
¹ In a future release, the current fixed threshold of 25 will be tunable for each platform to get the highest performance.
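For readers who wish to experiment, all four drivers are reachable from Python through SciPy's eigh_tridiagonal convenience wrapper over the LAPACK routines of Table 2.1. This sketch assumes a SciPy version whose driver list includes the four names below, and that eigenvalue indices are zero-based in this interface.

    import numpy as np
    from scipy.linalg import eigh_tridiagonal

    n = 1000
    d = np.random.rand(n)        # diagonal of T
    e = np.random.rand(n - 1)    # off-diagonal of T

    # Full spectrum with each of the four algorithms of Table 2.1.
    for driver in ("stev", "stevd", "stevx", "stevr"):
        w, v = eigh_tridiagonal(d, e, lapack_driver=driver)

    # Subsets at reduced cost (supported by BI and MR only), here the
    # ten smallest eigenpairs; the driver is chosen automatically.
    w10, v10 = eigh_tridiagonal(d, e, select="i", select_range=(0, 9))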
Table 3.1
Platforms, timers, and BLAS libraries used for testing.

Architecture         Symbol   MHz     OS          Compiler                     Timer      BLAS
Power 3              SP3      375     AIX         IBM xlf90 -O3                PAPI       ESSL
Power 5              SP5      1900    AIX         IBM xlf90 -O3                PAPI       ESSL
Sun UltraSparc 2i    SUN      650     Solaris     SUN f90 forte 7.0 -O4        CPU TIME   SUNPERF
MIPS R12000          SGI      600     IRIX        MIPSpro 7.3.1.3m -O2         ETIME      SCS
Itanium 2            ITN2     1400    Linux       Intel ifort 9.0 -O2          ETIME      MKL
Pentium 4 Xeon       P4       4000    Linux       Intel ifort 9.0 -O3          ETIME      MKL
Cray X1              X1       800     UNICOS/mp   Cray ftn 5.4.0.4 -O2         CPU TIME   LIBSCI
Opteron              OPT      2200    Linux       Pathscale pathf90 2.1 -O3    CPU TIME   ACML
The first class of tridiagonals stems from important applications and thus is relevant to a group of users. For the smaller matrices, the tridiagonal form of the sparse matrices was obtained with LAPACK's tridiagonal reduction routine sytrd. For the larger matrices we generated tridiagonals by means of a simple Lanczos algorithm without reorthogonalization which, in finite precision, tends to produce copies of eigenvalues as clusters.
• Matrices obtained from Fann using the NWChem computational chemistry package [3, 27]: These matrices have clustered eigenvalues that require a large number of reorthogonalizations in BI. This motivated the development of MR, which can cope well with this type of matrix; see [12, 14].
• Examples from sparse matrix collections, including matrices from a wide range of applications in the BCSSTRUC1 set in [20, 21, 22] and matrices from the Alemdar, National Aeronautics and Space Administration, and Cannizzo sets in [9]. These matrices, coming from a variety of applications including power system networks, shallow wave equations, and finite-element problems, were chosen for their spectra, which typically consist of a part with eigenvalues varying almost “continuously” and another part with several isolated large clusters of eigenvalues of varying tightness.
The second class of matrices are synthetic “test matrices” that exhibit the strengths, weaknesses, or idiosyncrasies of a particular algorithm. This class includes distributions that are already used in LAPACK's tester, synthetic distributions, and matrices that are used in [12]. Furthermore, it includes Wilkinson matrices [25, 34] and glued Wilkinson matrices (see, for example, [17]). A detailed description of these matrices is given later, in Table 4.3.

3.2. Description of test platforms. In order to reliably quantify accuracy and performance and also to detect architecture-specific issues, we perform tests on a large number of today's computer systems, including a variety of superscalar processors (Power 3, Power 5, Xeon, and Opteron), an explicitly parallel instruction computing (EPIC) architecture (Itanium 2), and a vector computer (X1). Table 3.1 summarizes the architectures, compilers, and timers used for our experiments.

3.3. Important metrics. For a symmetric tridiagonal matrix T ∈ R^{n×n}, computed eigenvectors Z = [z_1 z_2 . . . z_m] and corresponding eigenvalues Λ = (λ_1, . . . , λ_m), m ≤ n, we compute both the loss of orthogonality
(3.1)    O(Z) = max_{i≠j} |z_i^T z_j| / (nε)
and the largest residual norm

(3.2)    R(Λ, Z) = max_i ‖T z_i − λ_i z_i‖ / (‖T‖ nε).
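Both metrics are straightforward to evaluate; a direct transcription (our own code, with T stored as a dense array for simplicity, which is adequate for moderate n):

    import numpy as np

    def loss_of_orthogonality(Z, eps=np.finfo(float).eps):
        # O(Z) = max_{i != j} |z_i^T z_j| / (n*eps), cf. (3.1)
        n = Z.shape[0]
        G = Z.T @ Z
        np.fill_diagonal(G, 0.0)        # only off-diagonal entries enter (3.1)
        return np.abs(G).max() / (n * eps)

    def largest_residual(T, Z, lam, eps=np.finfo(float).eps):
        # R(Lambda, Z) = max_i ||T z_i - lam_i z_i|| / (||T|| * n * eps), cf. (3.2)
        n = T.shape[0]
        R = T @ Z - Z * lam             # column i holds T z_i - lam_i z_i
        res = np.linalg.norm(R, axis=0).max()
        return res / (np.linalg.norm(T, 2) * n * eps)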
The factor nε in the denominators of (3.1) and (3.2) is used for normalization; here ε refers to the relative machine precision. For an algorithm to be satisfactorily accurate, both of these metrics should be bounded by a modest constant for all matrices. Because of its prominent role in DC, we also measure the amount of deflation encountered during the algorithm. We define the fraction of deflated eigenvalues fr_defl as the total number of deflations over the total number of submatrix eigenvalues, over all submatrices encountered while running DC. If fr_defl is close to 1, nearly all eigenvalues and eigenvectors were computed with little work, and if fr_defl is close to 0, then the maximum amount of floating point work was done, mostly in calls to the BLAS 3 routine GEMM.

4. Performance when computing all eigenpairs.

4.1. Comparison on practical matrices across architectures. Table 4.1 summarizes the results for all architectures when computing the full spectrum. The “slope of time trends” refers to the fitting of a straight trend line to the timing data on a log-log plot: if the running time satisfied the simple formula t = c · n^e for some constants c and e, then the measured run times would lie on the straight line log t = log c + e · log n with slope e when plotted on a log-log scale.

Table 4.1
Performance summary for large (n ≥ 500) practical matrices when computing all eigenpairs.

        Slope of time trends       QR/MR               BI/MR               DC/MR
Arch.   QR   BI   DC   MR     min   med   max     min   med   max     min    med   max
SP3     3.3  2.1  2.8  2.5    1.5   10    48      .66   4.1   75      .10    1.3   2.8
SP5     3.0  2.6  2.5  2.3    1.5   12    110     .66   3.8   150     .067   1.1   7.0
SUN     3.8  2.0  2.6  2.4    11    70    710     .91   4.7   180     .35    2.5   4.6
SGI     3.5  3.2  2.7  2.3    2.0   26    180     .75   7.1   310     .17    1.9   11
ITN2    3.0  2.4  2.5  2.3    1.6   15    110     .63   3.3   72      .060   .82   6.0
P4      3.0  2.6  2.5  2.4    1.6   15    92      .50   2.9   91      .058   .86   4.4
X1      2.4  2.0  1.9  2.2    .56   1.8   5.2     .86   3.2   16      .024   .24   .66
OPT     2.9  2.9  2.5  2.2    4.3   41    190     .74   6.3   300     .18    2.1   12

DC and MR are usually much faster than QR and BI for large practical matrices (dimension 500 or larger). The median ratio run time(QR)/run time(MR) varies from 1.8 to 70 across platforms (10 to 70 omitting the Cray X1) and is as large as 710. The median ratio run time(BI)/run time(MR) varies from 2.9 to 7.1 across platforms and is as large as 310. MR is faster than DC on large practical matrices (i.e., the median value of the ratio run time(DC)/run time(MR) exceeds one) on 5 of our 8 platforms and is slower than DC on 3 platforms. MR ranges from 12 times faster than DC to 40 times slower on these matrices (12 times faster to 17 times slower omitting the Cray X1). Compared to the other architectures, one observes extraordinary trend line slopes of QR, BI, and DC on the Cray X1. At this point, the reasons for this are not fully understood; the results are reported for reference, as a detailed investigation would require extra work outside the scope of this paper. However, the superfast speed of DC can be explained by fast vendor BLAS; see also the remarks at the end of section 4.2.

4.2. Performance details for practical matrices on the Opteron. This section studies in depth the situation on one target architecture, the Opteron.
Table 4.2
Performance summary for practical matrices on Opteron.

                                  n < 363                  n ≥ 363
Metric               Alg(s)    Min    Median   Max     Min    Median   Max
Time Ratios          QR/MR     .57    1.3      4.5     2.6    38       190
                     BI/MR     .75    1.3      4.8     .74    5.8      300
                     DC/MR     .34    .46      1.6     .18    2.0      12
Flop Count Ratios    QR/MR     2.4    4.8      12      9.0    74       390
                     BI/MR     1.2    2.1      6.4     1.1    9.8      380
                     DC/MR     .72    1.0      3.3     .5     8.3      66
GFlop Rates          QR        1.9    2.2      2.5     1.0    1.4      2.0
                     BI        .66    .92      1.1     .6     .9       1.3
                     DC        1.1    1.4      1.7     1.5    2.9      4.1
                     MR        .55    .58      .8      .5     .6       .8
GFlop Ratios         QR/MR     2.6    3.5      4.1     1.5    2.1      3.5
                     BI/MR     .83    1.6      2.0     .9     1.6      2.3
                     DC/MR     1.7    2.3      2.7     2.5    4.4      6.9
In this section, we call a matrix small whenever its dimension is less than n = 363, which marks the largest n for which an n-by-n double precision matrix (of eigenvectors) can fit in the Opteron's 1MB L3 cache.² Table 4.2 summarizes the performance of the four algorithms on the Opteron, for eigenvector matrices that fit in the cache (n < 363) and those that do not (n ≥ 363). Figures 4.1 and 4.2 show the run time and flop counts, respectively, of all algorithms on a log-log plot. The color and symbol code used for all plots is as follows: QR data is blue, using “+”; BI data is magenta, using “x”; DC data is red, using “o”; and MR data is black, using diamonds to mark data points. The straight lines in the plots are least-squares fits to the data of the same color, for n ≥ 363; the slopes of these lines are shown in the legend of each plot in parentheses after the name of the corresponding algorithm. The slopes indicate that QR is an O(n^2.9) algorithm measured by time and an O(n^3.0) algorithm measured by flop counts, reasonably matching the expected O(n³). The same is true of BI, although the BI data are rather more spread out; this is because BI does O(n) flops on eigenvectors with well-separated eigenvalues and up to O(n²) work per eigenvector on matrices with tightly clustered eigenvalues. MR is O(n^2.2) measured either way and has the lowest exponent of any algorithm, though slightly higher than the anticipated O(n²); given the spread-out nature of the data, it is not clear how significant the “.2” part of the slope is. Interestingly, DC is O(n^2.5) measured using time and O(n^2.8) measured using flop counts. As we will see, this is because the MFlop rate for DC increases considerably with dimension. Note that the run times for n < 363 increase somewhat more slowly than these slopes for the larger matrices would indicate; these are the dimensions where the output matrix fits in the L3 cache. Flop counts offer an interesting insight. MR always does fewer flops than QR and BI, up to 390 times and 380 times fewer, respectively. MR does up to 66 times fewer flops than DC, and never more than twice as many, with a median of 8.3 times fewer flops than DC for large matrices.
¨ J. DEMMEL, O. MARQUES, B. PARLETT, AND C. VOMEL
1514
Time vs n for Opteron, practical matrices
4
10
QR ( 2.9) BI ( 2.9) DC ( 2.5) MR ( 2.2)
3
Running times in seconds
10
2
10
1
10
0
10
−1
10
−2
10
−3
10
1
2
10
3
10
10
4
10
Dimension
Fig. 4.1. Run time of all algorithms on Opteron. The slopes of the least-squares fit, shown in parentheses, are computed from the larger matrices.
Fig. 4.2. Flop counts of all algorithms on Opteron. The slopes of the least-squares fit, shown in parentheses, are computed from the larger matrices.
Figure 4.3 shows the flop counts of each algorithm relative to those of MR; PAPI [6, 19] was used to obtain the flop counts. Figure 4.4 shows the MFlop rates. MR is generally the slowest algorithm by this metric: MR does the fewest flops, but at a higher cost per flop. Indeed, an inspection shows that the number of divides MR performs always exceeds a fixed significant nonzero fraction of the total number of floating point operations; see [18, 30]. It is also natural to ask why QR's MFlop rate drops when n increases past 362, BI's and MR's remain roughly the same for all n, and DC's MFlop rate increases
Fig. 4.3. Flop counts relative to MR’s for practical matrices on Opteron. MFlops vs n for Opteron, practical matrices
4
10
MFlops
QR ( 0.0 ) BI ( 0.0 ) DC ( 0.3 ) MR (−0.0)
3
10
2
10 1 10
2
3
10
10
4
10
Dimension
Fig. 4.4. MFlop rates for practical matrices on Opteron.
continuously with n. In the case of QR, the algorithm updates pairs of columns of the eigenvector matrix during the course of a QR sweep. This means that many sweeps touch the whole eigenvector matrix, performing just a few floating point operations per memory reference. This BLAS1-like operation on the eigenvector matrix explains the drop in the MFlop rate when the eigenvector matrix no longer fits in the L3 cache. In contrast, BI and MR compute one eigenvector at a time, so as long as a few vectors of length n fit in the cache, there will be minimal memory traffic and the MFlop rate should remain constant. DC's flop count and MFlop rate are more complicated and depend on two phenomena unique to DC: the use of level 3 BLAS to update the eigenvector matrix, and deflation, as discussed in section 2.3.
Table 4.3
Selected synthetic test matrices from [11].

Property                                       Line color   Line type       Symbol   Fraction deflated (DC)
                                                                                     min      max
More strongly clustered eigenvalues (S)        Blue
  n − 1 evs at 1/κ, 1 ev at 1                               Solid           S1       .98      1
  n − 1 evs at 1, 1 ev at 1/κ                               Dashed          S2       .88      .98
Weakly clustered eigenvalues (W)               Red
  Evs at 1 and 1/κ · [1 : n − 1]                            Solid           W1       .34      .81
  Evs at 1/κ, 2, and 1 + 1/√κ · [1 : n − 2]                 Dashed          W2       .01      .05
  Evs at 1 + 100/κ · [1 : n]                                Dashed-dotted   W3       .06      .09
Geometric distributions (G)                    Green
  Exactly geometric                                         Solid           G1       .16      .36
  Randomly geometric                                        Dashed          G2       .16      .19
Uniform distributions (U)                      Black
  Exactly uniform                                           Solid           U1       0        .03
  Randomly uniform                                          Dashed          U2       0        .03
Wilkinson W_{m+1}                              Cyan         Solid           Wi       .35      .84
Glued Wilkinson                                Cyan         Dashed          GW       .59      .78
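The last two rows of Table 4.3 can be generated in a few lines; the sketch below uses the common convention that a Wilkinson matrix of odd dimension 2m + 1 has diagonal (m, . . . , 1, 0, 1, . . . , m) and unit off-diagonals, and joins copies of it by a small "glue" value δ; the particular value of δ is an illustrative assumption.

    import numpy as np

    def wilkinson(m):
        # Diagonal and off-diagonal of a (2m+1)-dimensional Wilkinson matrix.
        d = np.abs(np.arange(-m, m + 1)).astype(float)
        e = np.ones(2 * m)
        return d, e

    def glued_wilkinson(m, copies, delta=1e-12):
        # Block-diagonal copies of the Wilkinson matrix, joined by small
        # glue entries delta on the connecting off-diagonal positions.
        d = np.tile(wilkinson(m)[0], copies)
        e = np.ones(d.size - 1)
        e[2 * m :: 2 * m + 1] = delta
        return d, e

The glue produces groups of eigenvalues of extreme tightness, which is precisely what makes these matrices hard for MR.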
If there is little deflation, most of the flops will be performed in calls to the BLAS 3 routine GEMM on large matrices, so the MFlop rate should increase, as long as the larger matrices on which GEMM is called result in faster MFlop rates. On the other hand, if there is a lot of deflation, DC will perform many fewer flops, but they will be scalar operations that do not run fast. In the case of practical matrices, the fraction of deflated eigenvalues fr_defl never exceeds .502 and has a median value of .125, so we expect DC to perform many flops, as shown by the slope O(n^2.8). It is thus crucial for DC to use the fastest available BLAS library. Table 4.1 shows that on the Cray X1, DC on practical matrices behaves like an O(n^1.9) algorithm; this is because the speed of the level 3 BLAS increases very rapidly as the size of these matrices increases.

4.3. Performance details for synthetic matrices on the Opteron. In this section, we study in depth how the performance of an algorithm can depend on the matrix at hand, focusing on the differences between DC and MR. The goal is to push the algorithms to their extremes, in both the positive and the negative sense. Table 4.3 lists the matrices considered; for more details and references, see [11]. Matrices that are generated from a given eigenvalue distribution using the LAPACK tester have a trailing “p” or “n” indicating that all eigenvalues are positive or are randomly negated, respectively. κ denotes the matrix condition number. By default, we choose κ = 1/ε, with ε being the working accuracy. For the strongly clustered eigenvalues, we use either κ = 1/ε or κ = 1/√ε, indicated by an additional trailing “e” or “s,” respectively. As an example, “S1ne” refers to a matrix from class S1 with randomly negated eigenvalues and κ = 1/ε. The last two rows refer to Wilkinson-type matrices. The last two columns of Table 4.3 show the fraction of deflations in DC as defined in section 3.3. For some matrix classes (S, W1, Wi, and GW), the fraction deflated is significant, sometimes even close to 1, while for others (U, W2, and W3) there are nearly no deflations. For the cases with a significant amount of deflation, we expect a noticeable speedup for DC. Moreover, Wilkinson and glued Wilkinson matrices are known to be notoriously difficult for MR due to strongly clustered eigenvalues [17]; thus we expect to see MR perform poorly on these classes of matrices. To verify our intuition, we show least-squares fits of time and flop counts normalized by n² separately for each matrix
Fig. 4.5. Performance trend lines of DC for run time divided by n² on Opteron.
Fig. 4.6. Performance trend lines of DC for flop counts divided by n² on Opteron.
class. Figures 4.5 and 4.6 show the data for DC, and Figures 4.7 and 4.8 for MR. Note that the vertical axes in Figures 4.5 and 4.7 are the same, going from 10⁻⁸ to 10⁻⁶; however, the vertical axis for MR in Figure 4.8 is ten times lower than the one for DC in Figure 4.6. For reference, we also added “Prac,” the practical matrices from section 4.2. The dotted black lines indicate the slopes +1 and −1, in order to make the slopes of the trend lines easier to read off. The figures exhibit how much the performance of both algorithms depends on the matrix class and can vary dramatically. Figure 4.6 shows that DC either does close to O(n³) flops (trend lines nearly parallel to the upper diagonal dotted black line) or, when deflation is extreme, does close to O(n) flops (the other trend lines).
Fig. 4.7. Performance trend lines of MR for run time divided by n² on Opteron.
Fig. 4.8. Performance trend lines of MR for flop counts divided by n² on Opteron.
It is interesting to compare to the run time of DC shown in Figure 4.5. As noted in section 4.2, the majority of flops in DC is done using BLAS 3 so that the trend lines for the timings do not completely align with those for the flop counts. The nearly horizontal trend lines of MR in Figure 4.8 indicate that the algorithm does close to O(n2 ) flops for the corresponding matrix classes. In this case, run time trends fairly well reflect the flop trend lines. We now report on some of the classes individually. Our examples are classes where • both DC and MR are much faster than on practical matrices (Figure 4.9 showing “strongly clustered” matrices with positive eigenvalues (S1pe)),
• MR is much faster than DC (Figure 4.10, showing "uniformly distributed" matrices with positive eigenvalues (U2p)), and
• DC is much faster than MR (Figure 4.11, showing glued Wilkinson matrices (GW)).
Each of these figures has six subplots. The rows, from top to bottom, show run time, flop counts, and MFlop rate. The left column shows the data (except for the MFlop rate in the last row) normalized by n^2, and the right column presents the data relative to the MR results.

For the strongly clustered matrices in Figure 4.9, the fraction of eigenvalues deflated in DC is at least 88% and often 100%. This makes DC appear to be much faster than O(n^2). In fact, both DC and MR perform much faster for these matrices than they do on practical matrices of the same size.

DC runs uniformly slower on the uniformly distributed matrices than on practical matrices. One example is shown in Figure 4.10. Note that the fraction deflated in DC is less than 3%. This means that the algorithm performs O(n^3) work, at the speed of the BLAS 3 routine GEMM.
Fig. 4.9. Performance data for strongly clustered matrices with positive eigenvalues (S1pe) on Opteron. κ = 1/ε.
Fig. 4.10. Performance data for uniformly distributed matrices with positive eigenvalues (U2p) on Opteron.
Several other matrix classes with similarly few deflations are given in [11]. Classical orthogonal polynomials such as Chebyshev, Legendre, Laguerre, and Hermite, defined by three-term recurrences, give rise to symmetric tridiagonal matrices with very small amounts of deflation in DC. Interestingly, these matrices pose no difficulties for MR: Their eigenvalues are not very strongly clustered. For such matrices, as for the uniformly distributed ones in Figure 4.10, MR can run much faster than DC.

Finally, Wilkinson and glued Wilkinson matrices are the matrix classes on which DC performs fastest and MR performs worst. The results for the latter class are shown in Figure 4.11. The difficulties of MR for these matrices are well understood: Since the eigenvalues of glued matrices come in groups of small size but extreme tightness, the representation tree generated by the MR algorithm is very broad, and the overhead for the tree generation is considerable; see [11, 17]. On top of the difficulties of MR, the fraction deflated in DC lies in [59%, 78%]; that is, DC is extraordinarily efficient and even faster than for practical matrices.
Fig. 4.11. Performance data for glued Wilkinson matrices (GW) on Opteron.
5. Subset computations: Performance. Table 5.1 shows timing statistics of BI relative to MR for all test matrices. For each matrix, we randomly chose 15 subsets by index range ("find eigenvalues IL:IU") and 15 subsets by intervals ("find eigenvalues in [VL,VU]"). One can see that there are subsets for which BI is up to six times faster than MR, whereas there are other intervals where it is 27 times slower than MR (there is some variation in the maximum ratios across the machines which we cannot explain satisfactorily, so the largest ratios should be taken with a grain of salt). This shows that the performance depends on the subset. The medians show that MR on average is faster, and thus preferable to BI, for subsets.

It is interesting to investigate which matrices show the biggest differences in run time. MR does significantly better than BI on subsets of Fann's practical matrices. This should not be a surprise, as it is known that BI requires a large number of reorthogonalizations for these matrices; see the remarks in section 3.1 and [12, 14]. BI's relatively best performances occur in two different cases.
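Both subset modes are easy to experiment with, e.g., through SciPy's tridiagonal driver, which wraps the LAPACK routines compared here. The sketch below assumes SciPy's eigh_tridiagonal interface, with lapack_driver='stebz' selecting bisection plus inverse iteration and 'stemr' selecting the MRRR algorithm (the 0-based index convention is SciPy's, not LAPACK's):

    import numpy as np
    from scipy.linalg import eigh_tridiagonal

    rng = np.random.default_rng(0)
    n = 2000
    d = rng.standard_normal(n)        # diagonal of a random tridiagonal T
    e = rng.standard_normal(n - 1)    # off-diagonal of T

    # Subset by index ("find eigenvalues IL:IU").
    w_bi, z_bi = eigh_tridiagonal(d, e, select='i', select_range=(10, 60),
                                  lapack_driver='stebz')
    # Subset by interval ("find eigenvalues in [VL,VU]") with the MRRR driver.
    w_mr, z_mr = eigh_tridiagonal(d, e, select='v', select_range=(-0.5, 0.5),
                                  lapack_driver='stemr')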
Table 5.1 Performance summary of BI relative to MR when computing subsets of eigenpairs, either by index (eigenvalues IL:IU) or by interval (eigenvalues in [VL,VU]).
             Subset performance (run time of BI relative to MR)
             By index                  By interval
    Arch.    min    med    max        min    med    max
    SP3      .24    1.3    8.5        .22    1.3    9.0
    SP5      .22    1.2    6.5        .20    1.2    6.8
    SUN      .38    1.7    25.2       .32    1.7    27.4
    SGI      .30    1.3    15.2       .29    1.4    14.2
    ITN2     .24    1.1    4.7        .22    1.1    4.9
    P4       .19    1.1    7.2        .16    1.1    7.8
    X1       .39    1.4    4.3        .31    1.5    4.5
    OPT      .30    1.3    21.1       .28    1.3    16.7
First, there are the "very easy" tests where BI does not require any reorthogonalization and both BI and MR are very fast. More interesting, in a pathological sense, are the Wilkinson and glued Wilkinson matrices. As noted in [17], MR requires a substantial amount of work to deal with the extremely strong clustering of the eigenvalues, and its overhead turns out to be more significant than the reorthogonalization required by BI.

6. Accuracy.

6.1. Residual norm and loss of orthogonality. Using the metrics for orthogonality loss and residual norm in (3.1) and (3.2), respectively, the worst-case residuals and losses of orthogonality over all matrices and all platforms are reported in Table 6.1. Some plots detailing the accuracy on the Opteron are given in Figure 6.1. The important trend is that the errors decrease as n increases for DC and QR. Given the n in the denominators of these formulas, this means that the DC and QR errors do not increase proportionally to n. This is confirmed by the slopes of the trend lines shown in the legends. In general, inverse iteration and MR do not achieve the same level of accuracy as QR and DC; their accuracy is O(nε) in general. For matrices that split into smaller blocks, the block size, rather than the matrix dimension, governs the orthogonality loss.

6.2. Reliability: Comparing MR from LAPACK 3.1 to version 3.0. For the LAPACK 3.1 release, we have done extensive development on MR. It now allows the computation of subsets of eigenpairs [29]. Moreover, the new algorithm is significantly more reliable than the version in LAPACK 3.0, which had very large errors on a significant subset of our test matrices.
Table 6.1 Summary of the worst observed result on any practical or testing matrix, given as multiples of nε.
                       Worst case error for all matrices
             Residual                        Orthogonality loss
    Arch.    QR     BI     DC     MR         QR     BI     DC     MR
    SP3      .23    100    .13    22         .34    140    .25    70
    SP5      .30    59     .13    18         .39    70     .25    163
    SUN      .20    331    .11    14         .45    440    .19    92
    SGI      .30    210    .13    14         .46    280    .19    160
    ITN2     .30    240    .13    29         .45    320    .19    190
    P4       .30    39     .13    33         .38    53     .19    140
    X1       .30    34     .11    80         .46    45     .19    160
    OPT      .30    100    .11    14         .46    130    .19    160
Fig. 6.1. Residuals and losses of orthogonality for all matrices on Opteron. (Top: All practical matrices. Bottom: All synthetic testing matrices. Note the difference in vertical scales.)
Figure 6.2 shows that the version of MR tested here is more accurate than the old MR from LAPACK 3.0, which not only had numerous large errors as shown in the plot but failed to return any answer on 22 of our test matrices, including 9 practical matrices. There are two reasons for the improved reliability and accuracy of the 3.1 version of MR. First, it includes a remedy to the recently discovered problem that, for a tridiagonal where no off-diagonal entries satisfy the splitting criterion, the eigenvalues can still be numerically indistinguishable down to the underflow threshold; see [17]. Second, the internal accuracy threshold on relative gaps has been tightened. This
Fig. 6.2. Residual norms and loss of orthogonality. Failures of the LAPACK 3.0 version of MR have been corrected for version 3.1.
threshold is directly proportional to the upper bounds on the numerical residual and orthogonality of the computed vectors; see [18]. Instead of a threshold of one over the matrix dimension, a fixed threshold of 10^{-3} in double precision is used. This makes the new MR more accurate on larger matrices.

7. Summary and conclusions. In preparation for the latest LAPACK 3.1 release, we performed a systematic study of the performance and accuracy of the computation of eigenpairs of a symmetric tridiagonal matrix. Our evaluation considers the speed and accuracy of QR iteration, bisection and inverse iteration, the divide-and-conquer method, and the method of multiple relatively robust representations when computing all, or a subset of, the eigenpairs of a variety of matrices on today's computer architectures. Our conclusions are as follows:
1. DC and MR are generally much faster than QR and BI on large matrices. MR is faster than DC on large practical matrices (i.e., the median value of the ratio run time(DC) over run time(MR) exceeds one) on five of our eight platforms, and is slower than DC on three platforms. For matrix classes with a great deal of deflation, DC becomes much faster.
2. Using hardware performance counters to count floating point operations on the Opteron, we discover that MR almost always does the fewest floating point operations, but at a lower MFlop rate than all the other algorithms. This is because it performs more divides than the other algorithms: The number of divides MR performs always exceeds a fixed, significant, nonzero fraction of the total number of floating point operations, whereas the fraction of divides for the other algorithms approaches zero as the dimension grows.
DC has an MFlop rate that grows significantly with n (as proportionally more operations are done using level 3 BLAS GEMM operations). This increase in DC's MFlop rate is enough to make DC look like an O(n^{2.5}) algorithm, as determined empirically by fitting a straight line to the log of the running time, even though it is an O(n^{2.8}) algorithm as determined by the operation count. QR's MFlop rate drops when the eigenvector matrix is too large to fit in the cache; its use of memory, repeatedly sweeping over the eigenvector matrix performing Givens rotations with just 1.5 flops per memory reference, is inefficient. BI's complexity depends on the amount of reorthogonalization, that is, on the eigenvalue clustering.
3. QR and DC are the most accurate algorithms, measured both in terms of producing pairwise orthogonal eigenvectors and in terms of small residual norms ‖Tx − λx‖. MR is less accurate but still achieves errors of size O(nε), where n is the dimension and ε is machine epsilon: never more than 190nε loss of orthogonality and 80nε residuals for any matrix on any platform. Depending on the matrix and platform, it is known that BI may completely fail to guarantee orthogonal eigenvectors [13], though this is rare and did not occur on any of our test matrices.
4. MR is preferable to BI for subset computations: Independent of the architecture, the median of the relative run time BI/MR exceeds one.
5. The LAPACK 3.1 version of MR addresses some reliability issues of version 3.0.

For computing all eigenpairs of a symmetric tridiagonal matrix, QR, BI, and MR use the least memory (n^2 + O(n)), whereas DC uses about twice as much (2n^2 + O(n)). For the dense symmetric eigenvalue problem, QR needs n^2 + O(n), MR and BI need 2n^2 + O(n), and DC needs 3n^2 + O(n). If memory is not an obstacle, the choice between DC and MR is matrix-dependent; however, unless the performance differences are substantial, they will be masked in the dense case by the reduction to tridiagonal form and the backtransformation of the eigenvectors. DC is the algorithm of choice when superior accuracy matters. When computing a subset of the eigenvalues and vectors, MR is the algorithm of choice over BI.

Acknowledgments. We thank the two referees for their careful reading and detailed comments that helped improve the presentation.

REFERENCES

[1] Basic Linear Algebra Subprograms Technical Forum Standard, Internat. J. High Perform. Comput. Appl., 16 (2002), pp. 1–111 and pp. 115–199.
[2] E. Anderson, Z. Bai, C. Bischof, S. Blackford, J. Demmel, J. Dongarra, J. Du Croz, A. Greenbaum, S. Hammarling, A. McKenney, and D. Sorensen, LAPACK Users' Guide, 3rd ed., SIAM, Philadelphia, 1999.
[3] E. Apra et al., NWChem, a computational chemistry package for parallel computers, version 4.7, Technical report, Pacific Northwest National Laboratory, Richland, WA, 2005.
[4] Z. Bai, J. Demmel, J. Dongarra, A. Ruhe, and H. van der Vorst, Templates for the Solution of Algebraic Eigenvalue Problems—A Practical Guide, SIAM, Philadelphia, 2000.
[5] L. S. Blackford, J. W. Demmel, J. J. Dongarra, I. S. Duff, S. Hammarling, G. Henry, M. Heroux, L. Kaufman, A. Lumsdaine, A. Petitet, R. Pozo, K. Remington, and R. C. Whaley, An updated set of basic linear algebra subprograms (BLAS), ACM Trans. Math. Software, 28 (2002), pp. 135–151.
[6] S. Browne, J. Dongarra, N. Garner, G. Ho, and P. Mucci, A portable programming interface for performance evaluation on modern processors, Int. J. High Perf. Comp. Appl., 14 (2000), pp. 189–204.
[7] J. Bunch, P. Nielsen, and D. Sorensen, Rank-one modification of the symmetric eigenproblem, Numer. Math., 31 (1978), pp. 31–48.
[8] J. J. M. Cuppen, A divide and conquer method for the symmetric tridiagonal eigenproblem, Numer. Math., 36 (1981), pp. 177–195.
[9] T. A. Davis, University of Florida sparse matrix collection, NA Digest, 92 (1994), NA Digest, 96 (1996), and NA Digest, 97 (1997).
[10] J. W. Demmel, Applied Numerical Linear Algebra, SIAM, Philadelphia, 1997.
[11] J. W. Demmel, O. A. Marques, B. N. Parlett, and C. Vömel, A testing infrastructure for symmetric tridiagonal eigensolvers, Technical report LBNL-61831, Lawrence Berkeley National Laboratory, Berkeley, CA, 2006.
[12] I. S. Dhillon, A New O(n^2) Algorithm for the Symmetric Tridiagonal Eigenvalue/Eigenvector Problem, Ph.D. thesis, University of California, Berkeley, CA, 1997.
[13] I. S. Dhillon, Current inverse iteration software can fail, BIT, 38 (1998), pp. 685–704.
[14] I. S. Dhillon, G. Fann, and B. N. Parlett, Application of a new algorithm for the symmetric eigenproblem to computational quantum chemistry, in Proceedings of the Eighth SIAM Conference on Parallel Processing for Scientific Computing, SIAM, Philadelphia, 1997.
[15] I. S. Dhillon and B. N. Parlett, Multiple representations to compute orthogonal eigenvectors of symmetric tridiagonal matrices, Linear Algebra Appl., 387 (2004), pp. 1–28.
[16] I. S. Dhillon and B. N. Parlett, Orthogonal eigenvectors and relative gaps, SIAM J. Matrix Anal. Appl., 25 (2003), pp. 858–899.
[17] I. S. Dhillon, B. N. Parlett, and C. Vömel, Glued matrices and the MRRR algorithm, SIAM J. Sci. Comput., 27 (2005), pp. 496–510.
[18] I. S. Dhillon, B. N. Parlett, and C. Vömel, The design and implementation of the MRRR algorithm, ACM Trans. Math. Software, 32 (2006), pp. 533–560.
[19] J. J. Dongarra, S. Moore, P. Mucci, K. Seymour, D. Terpstra, and H. You, Performance Application Programming Interface (PAPI), http://icl.cs.utk.edu/papi/, 2007.
[20] I. S. Duff, R. G. Grimes, and J. G. Lewis, Sparse matrix test problems, ACM Trans. Math. Software, 15 (1989), pp. 1–14.
[21] I. S. Duff, R. G. Grimes, and J. G. Lewis, Users' Guide for the Harwell-Boeing Sparse Matrix Collection (Release I), Technical report RAL-TR-92-086, Atlas Centre, Rutherford Appleton Laboratory, Oxfordshire, UK, 1992.
[22] I. S. Duff, R. G. Grimes, and J. G. Lewis, The Rutherford-Boeing Sparse Matrix Collection, Technical report RAL-TR-97-031, Atlas Centre, Rutherford Appleton Laboratory, Oxfordshire, UK, 1997. Also Technical report ISSTECH-97-017 from Boeing Information & Support Services and Report TR/PA/97/36 from CERFACS, Toulouse.
[23] G. H. Golub and C. van Loan, Matrix Computations, 3rd ed., The Johns Hopkins University Press, Baltimore, MD, 1996.
[24] M. Gu and S. C. Eisenstat, A divide-and-conquer algorithm for the symmetric tridiagonal eigenproblem, SIAM J. Matrix Anal. Appl., 16 (1995), pp. 172–191.
[25] N. J. Higham, Algorithm 694: A collection of test matrices in MATLAB, ACM Trans. Math. Software, 17 (1991), pp. 289–305.
[26] I. C. F. Ipsen, Computing an eigenvector with inverse iteration, SIAM Rev., 39 (1997), pp. 254–291.
[27] R. A. Kendall et al., High performance computational chemistry: An overview of NWChem, a distributed parallel application, Comput. Phys. Comm., 128 (2000), pp. 260–283.
[28] LAPACK 3.1, http://www.netlib.org/lapack/lapack-3.1.0.changes, 2006.
[29] O. A. Marques, B. N. Parlett, and C. Vömel, Computations of eigenpair subsets with the MRRR algorithm, Numer. Linear Algebra Appl., 13 (2006), pp. 643–653.
[30] O. A. Marques, E. J. Riedy, and C. Vömel, Benefits of IEEE-754 features in modern symmetric tridiagonal eigensolvers, SIAM J. Sci. Comput., 28 (2006), pp. 1613–1633.
[31] B. N. Parlett, The Symmetric Eigenvalue Problem, SIAM, Philadelphia, 1998.
[32] B. N. Parlett and I. S. Dhillon, Fernando's solution to Wilkinson's problem: An application of double factorization, Linear Algebra Appl., 267 (1997), pp. 247–279.
[33] B. N. Parlett and I. S. Dhillon, Relatively robust representations of symmetric tridiagonals, Linear Algebra Appl., 309 (2000), pp. 121–151.
[34] J. H. Wilkinson, The Algebraic Eigenvalue Problem, Oxford University Press, Oxford, 1965.
SIAM J. SCI. COMPUT. Vol. 30, No. 3, pp. 1527–1547
© 2008 Society for Industrial and Applied Mathematics
MULTIGRID ONE-SHOT METHOD FOR AERODYNAMIC SHAPE OPTIMIZATION∗

SUBHENDU BIKASH HAZRA†

Abstract. The paper deals with a numerical method for aerodynamic shape optimization using simultaneous pseudo-timestepping. We have recently developed a method for the optimization problem in which stationary states are obtained by solving the pseudo-stationary system of equations representing the state, costate, and design equations. The method requires no additional globalization techniques in the design space, and a reduced sequential quadratic programming (RSQP) methods-based preconditioner can be used for convergence acceleration. The overall cost of computation is reduced up to 70% of that of traditional gradient methods. However, the number of optimization iterations is comparatively large, since we update the design parameters in each time-step of the state and costate runs. In this paper we use a multigrid strategy to reduce the total number of optimization iterations while keeping the idea of the one-shot method. Design examples of drag reduction, together with geometrical constraints, for an RAE2822 airfoil and an SCT wing are included. The total effort required for the convergence of the optimization problem is less than 2 times (4 times) that of the forward simulation runs in two dimensions (in three dimensions).

Key words. shape optimization, simultaneous pseudo-timestepping, multigrid methods, Euler equations, preconditioner, RSQP methods, reduced Hessian, one-shot method, airfoil, wing

AMS subject classifications. 49J20, 65L20, 65N20

DOI. 10.1137/060656498
1. Introduction. Due to their cost effectiveness, numerical methods are being used extensively by scientists and engineers in the preliminary design phase in aerodynamics. The optimization problems arising in preliminary aerodynamic design involve systems of partial differential equations (PDEs) which model the fluid dynamics. As progress has been made in computer technology and in numerical algorithms for PDEs, efficient computational fluid dynamics (CFD) codes are available to solve the fluid flow problems. However, for solving the optimization problem using, for example, adjoint-based gradient methods (see [19]), one has to solve the flow equations as well as the adjoint equations quite accurately several times. Despite using efficient CFD techniques, the overall cost of computation is quite high in these methods. In [12] we proposed a new method for solving such optimization problems using simultaneous pseudo-timestepping. In [13, 10, 9, 11] we applied the method for solving aerodynamic shape optimization problems without additional state constraints, and in [14, 16, 15] we applied the method to problems with additional state constraints. The overall cost of computation in all the applications has been 2–8 times that of the forward simulation runs, whereas in traditional gradient methods the cost of computation is 30–60 times that of the forward simulation runs. In all the applications mentioned above the cost of computation is reduced drastically in comparison to the traditional gradient methods. However, the number of optimization iterations is comparatively large since we update the design parameters after each time-step of the state and costate solver. Therefore, additional computational overhead due to, for example, grid generation, surface parameterization, etc., is high, especially for problems in three dimensions, since they are to be performed ∗ Received by the editors April 6, 2006; accepted for publication (in revised form) August 24, 2007; published electronically April 18, 2008. http://www.siam.org/journals/sisc/30-3/65649.html † Department of Mathematics, University of Trier, D-54286 Trier, Germany (
[email protected]).
in each optimization iteration. In this paper we use a multigrid strategy to accelerate the optimization convergence. The "optimization-based" multigrid method, as proposed and applied to model problems in [31, 28, 25, 26], has been used here. The basic difference is that we use the multigrid method in the context of simultaneous pseudo-timestepping. That means different optimization subproblems of similar structure are solved on different grid levels using one-shot simultaneous pseudo-timestepping. The coarse grid solution can be used to find the optimal direction of the fine grid optimization problem efficiently. Also, the problem on the coarse grid is computationally less expensive than that on the fine grid. Since the subproblems on the different grid levels are of similar structure, another advantage is that the same algorithm and the same software modules can be used to solve them on all grid levels.

We have included computational examples of drag reduction, with some geometrical constraints, for an RAE2822 airfoil and for a supersonic-cruise-transport (SCT) wing. The number of optimization iterations is reduced by more than 65% relative to single grid computations, and the overall cost of computation of the optimization problem on the fine grid is less than 2 forward simulation runs in two dimensions and less than 4 forward simulation runs in three dimensions.

The paper is organized as follows. In the next section we discuss the abstract formulation of the shape optimization problem and its reduction to the preconditioned pseudo-stationary system of PDEs. Section 3 presents the multigrid algorithm which we have used in the application examples. Numerical results are presented in section 4. We draw our conclusions in section 5.

2. The optimization problem and pseudo-unsteady formulation of the KKT conditions. The focus of the present work is on aerodynamic shape optimization problems, which are large scale PDE constrained optimization problems. These problems can be written in abstract form as (see [12, 13])

(1)    min I(w, q)  s.t.  c(w, q) = 0,
where (w, q) ∈ X × P (X, P are appropriate Hilbert spaces) and I : X × P → R and c : X × P → Y are twice Frechet-differentiable (with Y an appropriate Hilbert space). The Jacobian, J = ∂c/∂w, is assumed to be invertible. Here, the equation c(w, q) = 0 represents the steady-state flow equations (in our case the Euler equations) together with boundary conditions, w is the vector of dependent variables, and q is the vector of design variables. The objective I(w, q) is the drag of an airfoil/wing for the purposes of this paper.

The necessary optimality conditions can be formulated using the Lagrangian functional

(2)    L(w, q, λ) = I(w, q) − λ^T c(w, q),

where λ is the Lagrange multiplier, or adjoint variable, from the dual Hilbert space and the second term on the right-hand side is a duality pairing. If ẑ = (ŵ, q̂) is a minimizer, then there exists a λ̂ such that

(3)    ∇_z L(ẑ, λ̂) = ∇_z I(ẑ) − λ̂^T ∇_z c(ẑ) = 0.

Hence, the necessary optimality conditions, known as the Karush–Kuhn–Tucker (KKT) conditions, are
(4a)    c(w, q) = 0             (state equation),
(4b)    ∇_w L(w, q, λ) = 0      (costate equation),
(4c)    ∇_q L(w, q, λ) = 0      (design equation).
It is to be noted that the statement of the optimality conditions is formal for the target problems, both in the function-space setting and for the discretized problems. In general, derivatives like ∂c/∂w cannot be guaranteed to exist. However, the total derivatives with respect to the design variables typically exist and are all that is necessary for computations.

Adjoint-based gradient methods have been used in most practical applications for solving the preceding system of equations. The disadvantage of these methods is their high computational cost, due to the fact that the state and costate equations have to be solved quite accurately in each optimization iteration. Computational results based on these methodologies have been presented in [20, 21, 29, 30, 7] on structured grids; an application on unstructured grids has been presented in [1].

One can use, for example, reduced sequential quadratic programming (RSQP) methods to solve the above set of equations. A discussion of the method can be found in [12, section 2] and the references therein. A step of this method can also be interpreted as an approximate Newton step for the necessary conditions of finding the extremum of problem (1), since the updates of the variables are computed according to the linear system

       [ 0      0         A^T       ] [ Δw ]   [ −∇_w L ]
(5)    [ 0      B         (∂c/∂q)^T ] [ Δq ] = [ −∇_q L ],
       [ A      ∂c/∂q     0         ] [ Δλ ]   [ −c     ]

where A is the approximation of the Jacobian J and B is the reduced Hessian.

In [9, 10, 11, 12, 13, 14, 15, 16] we used a new method for solving problem (4) based on simultaneous pseudo-timestepping. In this method, to determine the solution of (4), we look for the steady-state solutions of the following pseudo-time embedded evolution equations:

(6)    dw/dt + c(w, q) = 0,
       dλ/dt + ∇_w L(w, q, λ) = 0,
       dq/dt + ∇_q L(w, q, λ) = 0.
This formulation is advantageous since the steady-state flow (as well as adjoint) solution is obtained by integrating the pseudo-unsteady Euler (or Navier–Stokes) equations in this problem class. Therefore, one can use the same timestepping philosophy for the whole set of equations, and preconditioners can be used to accelerate the convergence. The preconditioner that we have used stems from RSQP methods, as discussed above and in detail in [12].

2.1. Preconditioning and the solution algorithm. The pseudo-time embedded system (6) usually results, after semidiscretization, in a stiff system of ODEs. Therefore, explicit timestepping schemes may converge very slowly or might even diverge. In order to accelerate convergence, this system needs some preconditioning. We
use the inverse of the matrix in (5) as a preconditioner for the timestepping process. Hence, the pseudo-time embedded system that we consider is

       [ dw/dt ]       [ −∇_w L ]
(7)    [ dq/dt ]  =  K [ −∇_q L ],
       [ dλ/dt ]       [ −c     ]

where

       [ 0      0         A^T       ]^{-1}
K  =   [ 0      B         (∂c/∂q)^T ]
       [ A      ∂c/∂q     0         ]

       [ A^{-1} (∂c/∂q) B^{-1} (∂c/∂q)^T A^{-T}     −A^{-1} (∂c/∂q) B^{-1}     A^{-1} ]
   =   [ −B^{-1} (∂c/∂q)^T A^{-T}                    B^{-1}                     0      ]
       [ A^{-T}                                      0                          0      ].
This seems natural since (5) can be considered as an explicit Euler discretization of the corresponding timestepping that we envision. Also, due to its block structure, the preconditioner is computationally inexpensive. The preconditioner employed is similar to the preconditioners for KKT systems discussed in [3, 2] in the context of Krylov subspace methods and in [4] in the context of Lagrange–Newton–Krylov–Schur methods.

The block matrices A and A^T corresponding to the state and the costate equations in the preconditioner are just identity matrices in the current implementation. In that case system (7) reduces to

       [ dw/dt ]   [ (∂c/∂q) B^{-1} (∂c/∂q)^T    −(∂c/∂q) B^{-1}    I ] [ −∇_w L ]
(8)    [ dq/dt ] = [ −B^{-1} (∂c/∂q)^T             B^{-1}            0 ] [ −∇_q L ],
       [ dλ/dt ]   [ I                             0                 0 ] [ −c     ]

which we may rewrite as

(9a)    dλ/dt = −∇_w L,
(9b)    B dq/dt = −( ∇_q L − (∂c/∂q)^T ∇_w L ),
(9c)    dw/dt + (∂c/∂q) dq/dt = −c.
Within the inexact RSQP preconditioner, one has to look for an appropriate approximation of the reduced Hessian. In particular, when PDEs constitute the state equations, the reduced Hessian can often be expressed as a pseudo-differential operator; this can be exploited for preconditioning purposes, as in [12]. However, the problem of aerodynamic shape optimization involves a nonlinear system of hyperbolic PDEs, whose solution often contains discontinuities, especially in the transonic and supersonic regimes. Hence, the symbol of the Hessian is difficult to deduce in terms of pseudo-differential operators, and it is necessary to find some other means of approximating the reduced Hessian during the optimization iterations. As shown in [9], a reduced Hessian update based on the most recent reduced gradient, g = ∇_q L − (∂c/∂q)^T (A^T)^{-1} ∇_w L, and parameter update information is good enough for this problem class, so we use the same update strategy here.
enough for this problem class, so we use the same update strategy here. We define sk := (qk+1 − qk ) and zk := (gk+1 − gk ), where k represents the iteration number. Then the reduced Hessian update is based on the sign of the product (zkT sk ). If the sign is positive, the reduced Hessian is approximated by Γk =
zkT sk ¯ k δij , , Bk = βΓ zkT zk
where β¯ is a constant. Otherwise, it is approximated by βδij , where β is another constant. Additionally, we impose upper and lower limits on the factor so that T
z sk < βmax . βmin < β¯ kT zk z k This prevents the optimizer from taking steps that are too small or too large. The constants can be chosen, e.g., depending on the accuracy achieved in one time-step by the forward and adjoint solvers. The accuracy achieved in one time-step depends on many factors in a CFD code, such as the geometry, the type, and the size of the computational grid, type and order of spatial discretization scheme, timestepping scheme, CFL condition, acceleration techniques used, etc. For example, the same timestepping scheme results in a larger reduction of the residual in a coarser grid computation than in a finer grid computation. Similarly, a larger reduction of the residual results when the multigrid acceleration technique is used for the PDE solver than when single grid computations ¯ βmin , βmax ) are to be chosen so that larger are used. Hence, accordingly the betas (β, design step is used in a coarser grid computation than in a finer grid computation. Similarly, for a multigrid CFD solver a larger design step is used than for a single grid CFD solver. There is no mathematical formula so far to determine the constants; the user of the method needs to perform a few trials to come up with optimized values of those so that they represent proper scaling of the updates of design parameters for the CFD solver used for a particular application example in the optimization problem. In the present paper, we have implemented this method for the shape design examples using the Euler equations. The details of governing equations, discretization, geometry parameterization, gradient computation, and grid perturbation strategy can be found in [13] for the two-dimensional (2D) problem and in [16] for the threedimensional (3D) problem. The overall algorithm for solving system (9) reads as follows. Algorithm 1. The simultaneous pseudo-timestepping for the preconditioned system is as follows: (0) (1) (2) (3)
Set k := 0; start at some initial guess w0 , λ0 , q0 . Compute λk+1 using 5-stage Runge–Kutta timestepping as discussed in [13]. Determine some approximation Bk−1 = γI (as discussed previously) of the reduced Hessian of the Lagrangian. March the design L equation one step in time using q k+1 = q k − Δtγ˜ g , where g˜ =
∇q L −
(4) (5)
∂c ∂q
∇w L
.
Compute wk+1 using 5-stage Runge–Kutta timestepping to w˙ = −c as discussed in [13]. k := k + 1; go to (1) until convergence.
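Step (2) amounts to a safeguarded scalar secant estimate. A minimal sketch follows; the numerical constants are illustrative assumptions, to be tuned per solver as discussed above:

    import numpy as np

    def reduced_hessian_scale(s, z, beta_bar=1.0, beta=1.0,
                              beta_min=0.1, beta_max=10.0):
        # s = q_{k+1} - q_k, z = g_{k+1} - g_k (reduced-gradient difference)
        if z @ s > 0.0:
            b = beta_bar * (z @ s) / (z @ z)      # B_k = beta_bar * Gamma_k * I
            b = min(max(b, beta_min), beta_max)   # enforce beta_min < . < beta_max
        else:
            b = beta                              # fall back to a fixed constant
        return 1.0 / b                            # gamma with B_k^{-1} = gamma*I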
In the above algorithm, g̃ is an approximation to g, the reduced gradient of the objective function.

3. The multigrid algorithm. The optimization problem that we are dealing with involves PDEs as constraints. Therefore, multigrid methods for PDEs [5, 8, 27, 22] can be used to accelerate the convergence of the PDE solver, thereby accelerating the convergence of the optimization problem. This has been done in [11, 9] to improve the results of [13]. In this paper we use an "optimization-based" multigrid method, as discussed in [31, 28, 25, 26], for the full optimization problem. The basic difference in the current implementation is that we use multigrid in the context of simultaneous pseudo-timestepping.

In the multigrid algorithm we solve different optimization subproblems on different grid levels. The coarse grid solution, which is obtained with much less computational effort, is used to accelerate the convergence of the optimization problem on the fine grid. The solution methodology is based on simultaneous pseudo-timestepping, as mentioned in section 2. The ideas are demonstrated through practical aerodynamic shape optimization applications with the Euler equations in two as well as in three dimensions.

We denote by h the current mesh resolution and by H the next coarser mesh resolution; Î_H^h denotes the prolongation operator, and Î_h^H denotes the restriction operator. The multigrid algorithm reads as follows (a schematic code sketch is given after the algorithm).

Algorithm 2.
(i) If on the finest level, solve partially

(10)    min I(w_h, q_h)  s.t.  c(w_h, q_h) = 0,

and get q_h^(1).
(ii) Compute g^(1) = Î_h^H ∇I.
(iii) Compute q_H^(1) = Î_h^H q_h^(1).
(iv) Solve in each coarse grid iteration

(11)    min I(w_H, q_H) + (g^(1))^T q_H  s.t.  c(w_H, q_H) = 0

to get the update vector e_H = −Δt B_k^{-1} g_H (as in step (3) of Algorithm 1).
(v) Compute e_h = Î_H^h e_H.
(vi) Update q_h^new = q_h^(1) + e_h.
(vii) Go to step (iii) in case of more than one coarse grid iteration. Otherwise,
(viii) solve partially

(12)    min I(w_h, q_h)  s.t.  c(w_h, q_h) = 0,

with initial solution q_h^new.
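As referenced above, a schematic sketch of one V-cycle follows. Everything in it is a stand-in: solve_partially represents a few one-shot iterations of Algorithm 1 on the given level (assumed to return the updated design vector and the reduced gradient), and the transfer operators use linear interpolation and injection, as in the actual implementation.

    import numpy as np

    def restrict(v):
        # simple injection, as used for the design variables
        return v[::2]

    def prolong(v, n_fine):
        # linear interpolation of a coarse design vector to the fine space
        xc = np.linspace(0.0, 1.0, len(v))
        xf = np.linspace(0.0, 1.0, n_fine)
        return np.interp(xf, xc, v)

    def v_cycle(q_h, solve_partially, coarse_iters=1):
        q_h, grad_h = solve_partially(q_h, level="fine")       # (i): problem (10)
        g1 = restrict(grad_h)                                  # (ii)
        for _ in range(coarse_iters):
            q_H = restrict(q_h)                                # (iii)
            q_H_new, _ = solve_partially(q_H, level="coarse",
                                         shift=g1)             # (iv): problem (11)
            q_h = q_h + prolong(q_H_new - q_H, len(q_h))       # (v), (vi)
        q_h, _ = solve_partially(q_h, level="fine")            # (viii): problem (12)
        return q_h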
This defines the V-cycle template of the multigrid algorithm. The objective function of the coarse grid problem differs from that in [25, 26] since we use inexact gradients (a simple adjustment by a correction term for inexact gradients, as suggested in [28], leads to the current objective function).

Lemma 3.1. The gradient of the coarse grid problem (11) at q_H^(1) is the projection of the gradient of the fine grid problem (10) at q_h^(1), together with an additive correction.
Proof. A straightforward calculation of the gradients of the objective functions leads to the result.

This fact ensures that steps based on the coarse grid problem will lead to (faster) improvement for the fine grid problem. The computations are started on the finest level. To solve "partially" means that a few iterations of the one-shot method are carried out. Linear interpolation is used for prolongation, and simple injection is used for restriction (of the design variables). Problems (10) and (11) are of similar structure; hence, all steps of Algorithm 1 can be carried out on the respective discretization levels. Only for problem (11) is step (3) of Algorithm 1 carried out after the prolongation of the update vector to the next finer level. Therefore, we can use all the modules of the codes developed in our earlier works on the different discretization levels with minor modifications.

4. Numerical results and discussions. The test cases chosen are for minimizing the drag of a profile or wing under geometric constraints; that is, the objective being minimized is C_d.

4.1. Drag reduction with constant thickness for the RAE2822 airfoil. The optimization method is applied to test cases of the RAE 2822 airfoil at Mach number 0.73 and angle of incidence 2 degrees. The physical domain is discretized using an algebraically generated (193 × 33) C-grid. This is the grid on the finest level (denoted by L1). The next coarser grid (denoted by L2) is of size (97 × 17), and the coarsest grid (denoted by L3) is of size (49 × 9). On these grid levels the preconditioned pseudo-stationary equations, resulting from the necessary optimality conditions corresponding to the optimization subproblems of section 3, are solved using Algorithm 1.

The airfoil is decomposed into thickness and camberline for parameterization purposes. The parameters for the thickness are kept unchanged to satisfy the constraint of constant thickness. The camberline is parameterized by 21 Hicks–Henne [17] parameters; this is the number of design parameters on L1. The number of design parameters is 11 on L2 and 6 on L3. The complete optimization cycle is performed under the optimization platform SynapsPointerPro [6].

We start the optimization iteration (i.e., w_0 and λ_0) with the solution obtained after 100 time-steps of the state and costate equations on any level. During a switch from h to H or from H to h, the restart facility is used to read the solution of the last iteration on the same level. Since there is considerable change in the geometry when the computations return to a particular level, a few (35, in the current implementation) time-steps of the state and costate solvers are carried out to reduce the numerical error in the computation (see the figures of the state and costate convergence histories). After the convergence of the optimization problem, another 100 time-steps are carried out (on L1) for the state equation to get more accurate values of the surface pressure and force coefficients (which are comparable to the values obtained by other optimization methods). We use the FLOWer code [23, 24] of the German Aerospace Center (DLR), which has been modified and integrated for one-shot methods in [9, 10, 11, 12, 13, 14, 15, 16], for solving the forward and adjoint equations. In the current study we have used the reduced Hessian approximation explained in section 2 at all grid levels.
The values of the constants β̄ and β, as well as β_min and β_max, which represent the scaling of the updates of the design parameters, are chosen so that the design steps are larger on the coarser levels than on the finest level. Table 1 presents the number of iterations required for the convergence of the optimization problem around a local minimizer (where a shock-free airfoil results) in all the cases of the 2D computations reported below.
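For reference, one common form of the Hicks–Henne "bump" functions used in such camberline parameterizations is sketched below. This is an illustration of the general family; the exact variant and width exponent used with [17] may differ:

    import numpy as np

    def hicks_henne(x, x_peak, t=3.0):
        # bump that vanishes at x = 0 and x = 1 and peaks at x = x_peak;
        # t controls the bump width (t = 3 is an illustrative choice)
        m = np.log(0.5) / np.log(x_peak)
        return np.sin(np.pi * x**m) ** t

    # The camberline is perturbed as camber(x) + sum_j a_j * b_j(x);
    # the amplitudes a_j are the design parameters (21 of them on L1).
    x = np.linspace(0.0, 1.0, 201)
    peaks = np.linspace(0.05, 0.95, 21)
    a = np.zeros_like(peaks)            # baseline: all amplitudes zero
    delta = sum(aj * hicks_henne(x, xj) for aj, xj in zip(a, peaks))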
Fig. 1. Convergence history of the optimization iterations (single grid).
Fig. 2. Convergence history of state and costate residuals (single grid).
Figure 1 presents the optimization convergence history of the single grid computation reported in [9]. Figure 2 presents the convergence history of the state and costate residuals, and Figure 3 presents a comparison of the baseline and optimized camberlines, airfoils, and surface pressure distributions for the single grid computation.

Case 1: Multigrid computation on two grid levels. In this case the computations on two grid levels are carried out using the multigrid strategy explained in Algorithm 2. We start the computation on the finest level (L1). Each V-cycle consists of 4 iterations on the finest level and 6 iterations on the coarser level (L2). The optimization requires 3 V-cycles to approach convergence; after the last V-cycle, another 16 iterations are carried out on the finest level to reach the convergence of the optimization problem. The optimization convergence history is presented in Figure 4, and Figure 5 presents the convergence history of the state and costate residuals on both levels. One notices in Figure 4 that the drag value on L2 (the coarser grid) is higher than that on L1 (the finer grid), even though the shock is much weaker on the coarser grid (see Figure 21). As is well known in 2D CFD, the computed drag value has two components: one due to the shock (the "wave drag") and one due to numerical error (the "spurious drag"). On coarser grids the wave drag is smaller, but the numerical error leads to a higher total drag value. Figure 6 presents a comparison of the baseline and optimized camberlines, airfoils, and surface pressure distributions.

In these applications, the airfoil shapes are smooth (except at the leading and trailing edges) on any of the meshes used. On the other hand, smooth features in the airfoil can give rise to nonsmooth features (e.g., shocks) in the flow. This coupling between low-frequency and high-frequency features suggests that it is not clear a priori whether one should necessarily reduce the number of design variables on coarser meshes, since the airfoil profiles are already smoothed by being represented on coarser meshes.
Fig. 3. Comparison of camberlines, airfoils, and surface pressure distributions (single grid).
Fig. 4. Convergence history of the optimization iterations (Case 1).
Fig. 5. Convergence history of state and costate residuals on level-1 (left) and level-2 (right) (Case 1).
We test this by using the same number of design variables on L1 and L2; all the other parameters and criteria remain the same as above. Figure 7 presents the comparison of the convergence histories for both cases (one with the reduced number of design parameters on L2 as above, marked "Drag," and the other with the same number of design parameters on L2, marked "Drag(all-par)"). The force coefficients obtained are also presented in Table 1 (Case 1 (3V, all-par)). The results are quite close. This also reflects the fact that, since the design variables themselves are not discretized quantities, one can, in principle, use the same set of design variables at all mesh levels.
Fig. 6. Comparison of camberlines, airfoils, and surface pressure distributions (Case 1).
Fig. 7. Comparison of convergence history of the optimization iterations (Case 1).
Fig. 8. Convergence history of the optimization iterations (Case 1, 4 V-cycles).
However, in the computations reported here we used a reduced number of design variables on the coarser mesh levels.

Next, we carry out the same computation for one more V-cycle, that is, for 4 V-cycles. After the 4th V-cycle, another 16 iterations are carried out on L1, as in the earlier computation. The optimization convergence history is presented in Figure 8. As we see, the last V-cycle has almost no effect in reducing the drag on the fine grid level, since the algorithm has already arrived at a point very close to an optimizer. This confirms that the multigrid computations are effective during the early optimization iterations, i.e., when far away from the solution, and it also indicates a certain numerical stability of the method. Figure 9 presents the convergence history of the state and costate solutions on both levels, and Figure 10 presents a comparison of the baseline and optimized camberlines, airfoils, and surface pressure distributions of both computations. There is no significant difference in the final values.
Fig. 9. Convergence history of state and costate residuals on level-1 (left) and level-2 (right) (Case 1, 4 V-cycles).
Fig. 10. Comparison of camberlines, airfoils, and surface pressure distributions (Case 1).
Case 2: Multigrid computation on three grid levels. In this case the multigrid computations are carried out on three grid levels. The initial iterations are started on the finest level (L1), where 4 iterations are carried out. Then the computations are carried out, for just 1 iteration, on the next coarser level (L2). Next we proceed to the coarsest level (L3), where 3 iterations are carried out. Then 2 iterations are carried out in the prolongation from L3 to L2. Finally, 4 iterations are carried out on the finest level (L1). Two V-cycles and a total of 50 optimization iterations are required for convergence. The convergence histories are presented in Figures 11 and 12, and a comparison of the baseline and optimized camberlines, airfoils, and surface pressure distributions is presented in Figure 13.

In this case the number of optimization iterations remains the same as in Case 1; however, one fewer V-cycle is necessary, thereby reducing the total number of state and costate iterations on the finest level. The grid on L3 is too coarse, and due to the dominating "spurious" drag (see Figure 21) the computational cost is not reduced significantly compared with the two-level computations. Since the computations on the coarser levels are cheaper, one would expect to perform more optimization iterations there; but, as mentioned, the dominating "spurious" drag on the coarsest level makes this unattractive. One has to find a balance between the number of iterations and the numerical error present in the solution. The iteration counts mentioned here, in all the computations, are based on computational experience.
Fig. 11. Convergence history of the optimization iterations (Case 2).
Fig. 12. Convergence history of state and costate residuals on level-1 (left), level-2 (middle), and level-3 (right) (Case 2).
Fig. 13. Comparison of camberlines, airfoils, and surface pressure distributions (Case 2).
Case 3: Multigrid computation on two grid levels with a larger parameter space. In this case the design space is parameterized by 41 design parameters; the numbers of design parameters on L1, L2, and L3 are 41, 21, and 11, respectively. The computations are carried out on two grid levels: initially 4 iterations are carried out on the finest level (L1), and then 4 iterations on L2. Two V-cycles and a total of 30 iterations are required for the convergence of the optimization problem. The convergence histories are presented in Figures 14 and 15, and a comparison of the baseline and optimized camberlines, airfoils, and surface pressure distributions is presented in Figure 16. As we argued in [14], with a finer parameter space the convergence of the optimization is faster in the pseudo-time one-shot method. This is true in the multigrid context as well.
Fig. 14. Convergence history of the optimization iterations (Case 3).
Fig. 15. Convergence history of state and costate residuals on level 1 (left) and level 2 (right) (Case 3).
Fig. 16. Comparison of camberlines, airfoils, and surface pressure distributions (Case 3).
Case 4: Multigrid computation on three grid levels with a larger parameter space. In this case the computations are carried out on three grid levels as in Case 2, with the parameterization explained in Case 3. On the finest level 4 iterations are carried out; on the next coarser level (L2) as well as on the coarsest level (L3), 1 iteration is carried out. In the prolongation steps from L3 to L2 and from L2 to L1, 1 iteration each is carried out. The convergence of the optimization requires one V-cycle and a total of 30 optimization iterations. The convergence history of the drag is presented in Figure 17, that of the state and costate iterations in Figure 18, and a comparison of camberlines, airfoils, and surface pressure distributions in Figure 19. In this case also, the total number of optimization iterations is the same as in Case 3; however, one fewer V-cycle is necessary, and thus a reduced number of state and costate iterations is required on the fine grid level.
Fig. 17. Convergence history of the optimization iterations (Case 4).
Fig. 18. Convergence history of state and costate residuals on level 1 (left), level 2 (middle), and level 3 (right) (Case 4).
Fig. 19. Comparison of camberlines, airfoils, and surface pressure distributions (Case 4).
One sufficiently converged forward solution needs about 350 time-steps, so the total effort on the finest level is less than two forward solutions. Table 1 presents a comparison of the number of iterations and the force coefficients of the baseline and optimized geometries for all the multigrid optimization cases above. The optimized force coefficients are almost the same in all the cases, while the total number of optimization iterations is reduced by up to 75% relative to the single grid computation. The drag reduction is about 63% in all the cases. The lift and pitching moment coefficients are also presented in the same table. Since
Table 1. Comparison of the number of iterations and the force coefficients for the baseline and optimized airfoils using different multigrid iterations.

    Geometry               Iter    CD             CL             CM
    Baseline               --      0.849651E-02   0.826399E+00   0.126806E+00
    Single grid            130     0.314641E-02   0.746177E+00   0.105484E+00
    Case 1 (3V)            50      0.314252E-02   0.732291E+00   0.103026E+00
    Case 1 (3V, all-par)   50      0.310599E-02   0.726345E+00   0.101171E+00
    Case 1 (4V)            60      0.309381E-02   0.724621E+00   0.100792E+00
    Case 2                 50      0.317398E-02   0.733226E+00   0.102741E+00
    Case 3                 30      0.304247E-02   0.733132E+00   0.102170E+00
    Case 4                 30      0.307547E-02   0.736523E+00   0.103127E+00
there is no constraint on these two quantities, they are also reduced, by about 11% and 19%, respectively. Figure 20 presents the pressure and Mach contours of the baseline and optimized (Case 4) geometries; this confirms that the optimization yields a shock-free airfoil.

4.2. Drag reduction with geometric constraints for the SCT wing. In this case the optimization is carried out for drag reduction with geometric constraints for an SCT wing at Mach number 2.0 and angle of attack 3.22949 degrees. The geometric constraints are taken care of via the parameterization of the wing.
Fig. 20. Comparison of baseline (left) and optimized (right) pressure (top) and Mach (bottom) contours (Case 4).
Fig. 21. Surface pressure distributions on (193 × 33) (left), (97 × 17) (middle), and (49 × 9) (right) grids.
Fig. 22. Convergence histories of the optimization iterations (single grid).
constraints are taken care of via the parameterization of the wing. The physical domain is discretized by a grid of C-H topology consisting of (97 x 17 x 25) grid points. The wing is decomposed into thickness, camberline, and twist distributions for parameterization purposes (see [16] for details). The resulting wing is constructed by linear lofting of the modified wing sections. The thickness deformation is based on B-splines which leave free the range and the chordwise position of the maximum thickness, the leading edge radius, and the trailing edge angle at 8 wing sections. The positions of these sections are chosen according to the spanwise distribution of the geometric constraints on maximum thickness. The camberline is modified by adding 10 Hicks-Henne functions at 8 wing sections. The twist distribution is described by a Bezier curve defined by 10 nodes. The center of rotation for the twist is set at the leading edge of the wing. A total of 122 design variables are used to change the twist, the thickness, and the camberline. This is the number of design variables in the finest level. The complete optimization cycle is performed within the optimization platform SynapsPointerPro [6].

Case 5: Single grid results. We start the optimization iteration (i.e., w^0 and lambda^0) with the solution obtained after 200 time-steps of the state and 250 time-steps of the costate equations. A total of 500 optimization iterations are carried out for drag reduction. Convergence histories of the drag as well as state and costate solutions are presented in Figure 22. Figure 23 presents a comparison of initial and final geometries and pressure distributions at 4 different spanwise sections. From the pressure distributions in the figure, we see that the pressure peak is reduced almost all over the wing.
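A minimal sketch of such a camberline perturbation may make the parameterization concrete. It assumes the standard Hicks-Henne bump of [17], b(x) = [sin(pi x^m)]^t with m = log(0.5)/log(x_M) placing the peak at x_M; the function names, the exponent t, and the sample parameter values below are illustrative assumptions, not the exact implementation used here.

import numpy as np

def hicks_henne_bump(x, x_peak, t=3.0):
    # Hicks-Henne "sine bump" b(x) = sin(pi * x**m)**t on x in (0, 1],
    # with exponent m chosen so that the peak falls at x_peak.
    m = np.log(0.5) / np.log(x_peak)
    return np.sin(np.pi * x**m) ** t

def perturbed_camber(x, base_camber, amplitudes, peaks):
    # Add one weighted bump per design variable to the baseline camberline.
    camber = base_camber.copy()
    for a, xp in zip(amplitudes, peaks):
        camber += a * hicks_henne_bump(x, xp)
    return camber

# Example: 10 bumps per wing section, as in the text.
x = np.linspace(1e-6, 1.0, 101)      # chordwise stations x/c
base = 0.02 * x * (1.0 - x)          # an arbitrary baseline camberline
peaks = np.linspace(0.1, 0.9, 10)    # assumed bump peak locations
amps = np.zeros(10)                  # design variables; zero = baseline
camber = perturbed_camber(x, base, amps, peaks)

The bump amplitudes are the design variables seen by the optimizer; the smoothness of the bumps keeps the perturbed geometry smooth for any amplitude vector.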
Fig. 23. Comparison of initial and optimized wing-sections and pressure distributions at 4 different sections η = 0.24, 0.39, 0.49, 0.70 (from top-left to bottom) (single grid).
Fig. 24. Convergence history of the optimization iterations (Case 6).
Case 6: Multigrid results on two grid levels. In this case the same computations as in Case 5 are carried out using the optimization-based multigrid method on two grid levels. The fine grid discretization and parameterization are as described above. The coarser computational grid consists of (49 x 9 x 13) grid points. In the parameterization for the coarser level, the thickness parameterization is kept unchanged from that described above for the fine level, since the geometric constraints are taken care of by this. The camberline is modified by 6 Hicks-Henne functions at 8 wing sections, and the twist distribution is described by a Bezier curve defined by 6 nodes. This results in a total of 86 parameters in the coarser level. The initial optimization iteration starts at the fine level (L1), where 15 iterations are carried out. In the coarser level (L2) 4 iterations are carried out. The optimization requires 4 V-cycles and a total of 180 optimization iterations; a schematic sketch of one such two-level cycle follows. The optimization convergence history is presented in Figure 24. The convergence history of state and costate residuals is presented in Figure 25.
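The following is a sketch only, under the assumption of generic restriction/prolongation routines and a simultaneous pseudo-timestepping update of state w, costate lambda, and design p on each level; the helper names and the exact cycling details are illustrative, not the implementation used here.

def one_shot_vcycle(w, lam, p, fine, coarse, n_fine=15, n_coarse=4):
    # Pre-iterations: simultaneous pseudo-timestepping on the fine level.
    for _ in range(n_fine):
        w, lam, p = fine.pseudo_timestep(w, lam, p)
    # Restrict the current iterate and iterate on the coarse subproblem.
    wc, lamc, pc = coarse.restrict(w, lam, p)
    pc0 = pc
    for _ in range(n_coarse):
        wc, lamc, pc = coarse.pseudo_timestep(wc, lamc, pc)
    # Prolongate the coarse design correction back to the fine level
    # and continue iterating there.
    p = p + fine.prolongate(pc - pc0)
    for _ in range(n_fine):
        w, lam, p = fine.pseudo_timestep(w, lam, p)
    return w, lam, p

Because the coarse subproblem has the same structure as the fine one, the same pseudo_timestep routine can serve on both levels, which is the reuse of software modules noted in the conclusions.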
Fig. 25. Convergence history of state and costate residuals on level 1 (left) and level 2 (right) (Case 6).
Fig. 26. Comparison of initial and optimized wing-sections and pressure distributions at 4 different sections at η = 0.24, 0.39, 0.49, 0.70 (from top-left to bottom) (Case 6).
Baseline and optimized geometries and surface pressure distributions at 4 wing sections are presented in Figure 26. The results are quite similar to those obtained by the single grid computation. Similarity can also be seen in the pressure and Mach contours presented in Figure 27. Table 2 presents a comparison of the number of iterations and the baseline and optimized force coefficients obtained using single grid as well as multigrid computations. Using the multigrid strategy, a 68% drag reduction is achieved in 180 iterations, whereas the single grid computation needs about 500 iterations. One fully converged forward simulation run needs about 400 iterations on the fine grid level. Hence, the total effort required on the fine grid level is less than that of 4 forward simulation runs. The lift coefficient (CL) values presented in the same table show a reduction of more than 58%. This is due to the fact that in three dimensions the computed drag value contains those components mentioned in two
Fig. 27. Pressure (top) and Mach (bottom) contours on the wing obtained by single grid (left) and multigrid (right) computations.
dimensions and additionally a third component, known as induced drag, which is due to lift [18]. Therefore, the large drag reduction in this case comes at the cost of a loss of lift. Hence, a practical shape optimization problem (especially in three dimensions) should involve additional constraints (e.g., drag reduction at constant lift together with geometric constraints).

5. Conclusions. An "optimization-based" multigrid strategy is used in the context of simultaneous pseudo-timestepping methods for aerodynamic shape
Table 2
Comparison of the number of iterations and force coefficients for baseline and optimized wings using single grid and multigrid computations.

Geometry      Iter   CD             CL             CM
Baseline      --     0.972837E-02   0.120660E+00   0.350336E-01
Single grid   500    0.293458E-02   0.452202E-01   0.473493E-02
Multigrid     180    0.315004E-02   0.515958E-01   0.705556E-02
optimization. The preconditioned pseudo-stationary state, costate, and design equations are integrated simultaneously in time at different discretization levels. The coarse grid solution, which is less expensive to compute, is used to accelerate the convergence of the optimization problem on the fine grid. Due to the similar structure of the optimization subproblems at different levels, the same algorithm and software modules can be used, with minor modification, to solve those subproblems. The overall convergence is achieved in about 25%-35% of the effort required by single grid computations. The overall cost of computation is less than 2 times that of the forward simulation runs in two dimensions and less than 4 times that of the forward simulation runs in three dimensions. Application to problems with additional state constraints is our next goal.

Acknowledgments. I would like to thank DLR, Braunschweig, for giving access to the FLOWer code. Thanks are also due to the anonymous referees for their comments and suggestions on this work.

REFERENCES

[1] W. K. Anderson and V. Venkatakrishnan, Aerodynamic design optimization on unstructured grids with a continuous adjoint formulation, Comput. Fluids, 28 (1999), pp. 443-480.
[2] A. Battermann and M. Heinkenschloss, Preconditioners for Karush-Kuhn-Tucker systems arising in the optimal control of distributed systems, in Control and Estimation of Distributed Parameter Systems (Vorau, 1996), W. Desch, F. Kappel, and K. Kunisch, eds., Birkhäuser, Basel, 1998, pp. 15-32.
[3] A. Battermann and E. W. Sachs, Block preconditioners for KKT systems in PDE-governed optimal control problems, in Fast Solution of Discretized Optimization Problems, K. H. Hoffmann, R. H. W. Hoppe, and V. Schulz, eds., Birkhäuser, Basel, 2001, pp. 1-18.
[4] G. Biros and O. Ghattas, Parallel Lagrange-Newton-Krylov-Schur methods for PDE-constrained optimization. Part I: The Krylov-Schur solver, SIAM J. Sci. Comput., 27 (2005), pp. 687-713.
[5] A. Brandt, Multigrid Techniques: 1984 Guide with Applications to Fluid Dynamics, GMD-Studien 85, Gesellschaft für Mathematik und Datenverarbeitung, St. Augustin, Germany, 1984.
[6] O. Fromman, SynapsPointerPro v2.50, Synaps Ingenieure Gesellschaft mbH, Bremen, Germany, 2002.
[7] N. R. Gauger and J. Brezillon, Aerodynamic shape optimization using adjoint method, J. Aero. Soc. India, 54 (2002), pp. 247-254.
[8] W. Hackbusch, Multigrid Methods and Applications, Springer Ser. Comput. Math. 4, Springer-Verlag, Berlin, 1985.
[9] S. B. Hazra, Reduced Hessian updates in simultaneous pseudo-timestepping for aerodynamic shape optimization, in Proceedings of the 44th Annual AIAA Aerospace Science Meeting and Exhibit, Reno, NV, 2006, AIAA paper 2006-933.
[10] S. B. Hazra, An efficient method for aerodynamic shape optimization, in Proceedings of the 10th Annual AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, Albany, NY, 2004, AIAA paper 2004-4628.
[11] S. B. Hazra and N. Gauger, Simultaneous pseudo-timestepping for aerodynamic shape optimization, PAMM, 5 (2005), pp. 743-744.
[12] S. B. Hazra and V. Schulz, Simultaneous pseudo-timestepping for PDE-model based optimization problems, BIT, 44 (2004), pp. 457-472.
[13] S. B. Hazra, V. Schulz, J. Brezillon, and N. R. Gauger, Aerodynamic shape optimization using simultaneous pseudo-timestepping, J. Comput. Phys., 204 (2005), pp. 46-64.
[14] S. B. Hazra and V. Schulz, Simultaneous pseudo-timestepping for aerodynamic shape optimization problems with state constraints, SIAM J. Sci. Comput., 28 (2006), pp. 1078-1099.
[15] S. B. Hazra and V. Schulz, Simultaneous pseudo-timestepping for state constrained optimization problems in aerodynamics, in Real-Time PDE-Constrained Optimization, L. Biegler, O. Ghattas, M. Heinkenschloss, D. Keyes, and B. van Bloemen Waanders, eds., Computational Science and Engineering 3, SIAM, Philadelphia, 2007, pp. 169-182.
[16] S. B. Hazra, V. Schulz, and J. Brezillon, Simultaneous Pseudo-Timestepping for 3D Aerodynamic Shape Optimization, Forschungsbericht 05-2, FB IV - Mathematik/Informatik, Universität Trier, Trier, Germany, 2005.
[17] R. M. Hicks and P. A. Henne, Wing design by numerical optimization, J. Aircraft, 15 (1978), pp. 407-412.
[18] H. Ashley and M. Landahl, Aerodynamics of Wings and Bodies, Dover, New York, 1965.
[19] A. Jameson, Aerodynamic design via control theory, J. Sci. Comput., 3 (1988), pp. 233-260.
[20] A. Jameson, Automatic design of transonic airfoils to reduce the shock induced pressure drag, in Proceedings of the 31st Annual Israel Conference on Aviation and Aeronautics, Tel Aviv, Israel, 1990, pp. 5-17.
[21] A. Jameson, Optimum aerodynamic design using CFD and control theory, in Proceedings of the 12th Annual AIAA Computational Fluid Dynamics Conference, San Diego, CA, 1995, AIAA paper 95-1729.
[22] A. Jameson, Solution of the Euler equations for two dimensional transonic flow by a multigrid method, Appl. Math. Comput., 13 (1983), pp. 327-356.
[23] N. Kroll, C. C. Rossow, K. Becker, and F. Thiele, The MEGAFLOW - A Numerical Flow Simulation System, presented at the 21st ICAS Symposium, Melbourne, Australia, 1998, paper 98-2.7.4.
[24] N. Kroll, C. C. Rossow, K. Becker, and F. Thiele, The MEGAFLOW project, Aerosp. Sci. Technol., 4 (2000), pp. 223-237.
[25] R. M. Lewis and S. G. Nash, A multigrid approach to the optimization of systems governed by differential equations, in Proceedings of the 8th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, Long Beach, CA, 2000, AIAA paper 2000-4890.
[26] R. M. Lewis and S. G. Nash, Model problems for the multigrid optimization of systems governed by differential equations, SIAM J. Sci. Comput., 26 (2005), pp. 1811-1837.
[27] S. F. McCormick, Multilevel Adaptive Methods for Partial Differential Equations, Frontiers Appl. Math. 6, SIAM, Philadelphia, 1989.
[28] S. G. Nash, A multigrid approach to discretized optimization problems, Optim. Methods Softw., 14 (2000), pp. 99-116.
[29] J. Reuther and A. Jameson, Aerodynamic shape optimization of wing and wing-body configurations using control theory, in Proceedings of the 33rd AIAA Aerospace Sciences Meeting and Exhibit, Reno, NV, 1995, AIAA paper 95-0123.
[30] J. Reuther, A. Jameson, J. Farmer, L. Martinelli, and D. Saunders, Aerodynamic shape optimization of complex aircraft configurations via an adjoint formulation, in Proceedings of the 34th Annual AIAA Aerospace Sciences Meeting and Exhibit, Reno, NV, 1996, AIAA paper 96-0094.
[31] S. Ta'asan, Multigrid One-Shot Methods and Design Strategy, Lecture Notes on Optimization 4, Von Karman Institute, Sint-Genesius-Rode, Belgium, 1997.
SIAM J. SCI. COMPUT. Vol. 30, No. 3, pp. 1548–1571
c 2008 Society for Industrial and Applied Mathematics
ALGEBRAIC MULTIGRID SOLVERS FOR COMPLEX-VALUED MATRICES∗ SCOTT P. MACLACHLAN† AND CORNELIS W. OOSTERLEE‡ Abstract. In the mathematical modeling of real-life applications, systems of equations with complex coefficients often arise. While many techniques of numerical linear algebra, e.g., Krylovsubspace methods, extend directly to the case of complex-valued matrices, some of the most effective preconditioning techniques and linear solvers are limited to the real-valued case. Here, we consider the extension of the popular algebraic multigrid method to such complex-valued systems. The choices for this generalization are motivated by classical multigrid considerations, evaluated with the tools of local Fourier analysis, and verified on a selection of problems related to real-life applications. Key words. multigrid, algebraic multigrid, complex-valued matrices AMS subject classifications. 65F10, 65N22, 65N55 DOI. 10.1137/070687232
1. Introduction. Many real-world physical systems may be modeled mathematically using the tools of partial differential equations. For many such models, the degrees of freedom are naturally real-valued, for example, displacements in an elastic body or velocities of a fluid. For some models, however, complex-valued degrees of freedom also arise naturally, as in frequency-domain modeling of electromagnetic waves or other phenomena. Because of the many interesting real-valued models, development of the numerical linear algebra tools needed for the solution of discrete linear systems has focused on the real-valued case. While some of these techniques may be easily extended to the complex-valued case (e.g., GMRES and BiCGStab for general matrices, or conjugate gradients for complex Hermitian matrices), many require special consideration to generalize the appropriate principles to the complex-valued case. Here, we consider the generalization of the algebraic multigrid method [6, 23], an effective solver (or preconditioner) for many linear systems that arise from the discretization of elliptic or parabolic differential equations. The complex-valued linear systems considered here arise from different physical applications, for example, in modeling electromagnetic waves. Under the assumption of time-harmonic variation in the electromagnetic fields, Maxwell’s equations may be reduced into a scalar Helmholtz equation with a complex shift (see, e.g., [16]). Similarly, when the acoustic (or elastic) wave equation is considered in the frequency domain, Sommerfeld boundary conditions and attenuation both introduce a complex component in the resulting Helmholtz equation; multigrid solvers for these (indefinite) matrices were considered in [15]. In the field of lattice quantum chromodynamics (QCD), a model of the interactions of fermions (or quarks) on a lattice is given in ∗ Received by the editors April 3, 2007; accepted for publication (in revised form) August 27, 2007; published electronically April 18, 2008. This research was supported by the European Community’s Sixth Framework Programme, through Marie Curie International Incoming Fellowship MIF1-CT2006-021927. http://www.siam.org/journals/sisc/30-3/68723.html † Department of Mathematics, Tufts University, 503 Boston Avenue, Medford MA 02155 (scott.
[email protected]). ‡ Delft University of Technology, Faculty of Electrical Engineering, Mathematics and Computer Science, Mekelweg 4, 2628 CD Delft, The Netherlands (
[email protected]) and CWI, National Research Institute for Mathematics and Computer Science, Amsterdam, The Netherlands.
terms of a complex-valued gauge field that directly leads to a complex-valued linear system of equations [7, 24].

Multigrid methods are a family of techniques known to provide optimal (or near-optimal) solution of the linear systems that arise in many real-world applications. Through the careful coupling of a relaxation scheme (to reduce high-frequency errors) and a coarse-grid correction process (to reduce low-frequency errors), geometric multigrid techniques are among the most efficient solvers available for models with slowly varying coefficients [26]. For problems with significant heterogeneity, either in the coefficients of the continuous model or the geometry on which it is discretized, algebraic multigrid (AMG) techniques often perform as well as geometric techniques do for homogeneous models. Although first proposed in the 1980s [6, 23], there has been much recent interest in AMG because of its potential to handle large-scale models with realistic material properties and geometries, particularly in parallel environments [7, 16].

The success of AMG for a wide range of models is due to a careful combination of multigrid principles with more general ideas of numerical linear algebra. This combination, however, does not automatically yield a black-box approach for solving all types of linear systems. Effective multigrid performance results from a complementary choice of local relaxation and coarse-grid correction; AMG is not freed from these constraints, even though it is no longer dependent on knowledge of many details of the discrete problem under consideration. AMG performance for problems of structural mechanics, for example, is greatly improved if AMG is tempered with knowledge of the block structure of the linear system [23]. Such extensions to AMG are, in principle, straightforward and are not considered here.

While there has been much development of AMG for real-valued matrices, less investigation has occurred for complex-valued matrices. Lahaye et al. consider AMG for the Helmholtz equation with a complex shift and apply AMG to the real part of the matrix in order to define the coarse grids and interpolation operators [16]. For these models, the dominant part of the operator (corresponding to the second-order derivative terms) is entirely real, while the imaginary part represents only a mass matrix term, and, so, coarsening the complex-valued problem based on its real part is quite effective. Generalizing this approach, Reitzinger, Schreiber, and van Rienen propose using a real-valued auxiliary matrix to define the AMG hierarchy [21]. Such an approach is again appropriate when it is known that the dominant part of the operator may be represented by a real matrix. Both these approaches, however, require knowing how to split the given matrix in such a way as to define a real-valued auxiliary problem. Such an approach, then, is less general than the AMG approach for real-valued systems, which is based only on the entries in the linear system.

An alternate approach is to consider the equivalent real form of the complex system, splitting A \in C^{n \times n} into its real and imaginary parts, A = A^{(R)} + \imath A^{(I)}, and rewriting Au = b as

\[
\begin{pmatrix} A^{(R)} & -A^{(I)} \\ A^{(I)} & A^{(R)} \end{pmatrix}
\begin{pmatrix} u^{(R)} \\ u^{(I)} \end{pmatrix}
=
\begin{pmatrix} b^{(R)} \\ b^{(I)} \end{pmatrix}.
\]

Day and Heroux consider several possible orderings of the equivalent real form and show that ILU preconditioners applied to the equivalent real forms may be as effective as those applied to the complex formulation [13].
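For concreteness, a minimal sketch (not part of the original development) of assembling the equivalent real form above from a given complex sparse matrix follows; the function name and the small example system are illustrative assumptions only.

import numpy as np
import scipy.sparse as sp

def equivalent_real_form(A):
    # Equivalent real form of A = A_R + i*A_I:
    # [[A_R, -A_I], [A_I, A_R]], acting on the stacked vector [u_R; u_I].
    A = sp.csr_matrix(A)
    A_R, A_I = A.real, A.imag
    return sp.bmat([[A_R, -A_I], [A_I, A_R]], format="csr")

# Example: a small complex system Au = b and its real-form equivalent.
A = sp.csr_matrix(np.array([[4.0 + 1j, -1.0], [-1.0, 4.0 - 2j]]))
b = np.array([1.0 + 0j, 2.0 - 1j])
K = equivalent_real_form(A)
rhs = np.concatenate([b.real, b.imag])

The doubled dimension and doubled nonzero count of K foreshadow the storage and work comparisons made for the block (systems) formulation in section 3.3.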
Adams uses an approach based on applying smoothed aggregation multigrid [28] to the equivalent real form [1]; such an approach was first considered in a two-level setting in [27]. The smoothed aggregation framework bases the multigrid interpolation operator on a specified set of so-called rigid body modes for the stiffness matrix (i.e., the dominant differential operator)
and, so, these modes may be easily extended to match those of the equivalent real form. The adaptive smoothed aggregation multigrid method [9] is also applied to the equivalent real form of a system from QCD in [7].

Here, we consider the application of AMG directly to complex-valued linear systems, which may be more efficient than approaches based on the equivalent real form. In section 2, we give an introduction to the classical algebraic multigrid method, as it applies to symmetric real-valued matrices. Then, in section 3, we consider the extension of this algorithm to complex-valued matrices. These options are then analyzed using local Fourier analysis (LFA) in section 4. Finally, based on the choices recommended by the analysis in sections 3 and 4, a complex AMG algorithm is tested for several realistic models in section 5.

2. AMG for symmetric real-valued matrices. Just as in all multigrid methods, the key to achieving efficiency in algebraic multigrid is an effective partitioning of the space of errors. In geometric multigrid, this partitioning is based on the ideas of smooth and oscillatory errors; those that appear smooth (relative to the underlying grid) are given to the coarse grid for resolution, while oscillatory errors must be appropriately attenuated by the chosen relaxation scheme. In algebraic multigrid, these roles are reversed; the subspace of errors that are effectively reduced by relaxation is taken to be fixed, and all complementary errors (the so-called algebraically smooth errors) must be reduced by an appropriate coarse-grid correction. An important step in designing an effective AMG approach, then, is to characterize the errors that are slow to be attenuated by the chosen relaxation process.

AMG was originally proposed as an extension of the successful geometric multigrid methods for finite-difference discretizations of Poisson's equation on irregular meshes [6]; as such, it is easily motivated by considering the performance of a simple relaxation scheme, such as the Jacobi iteration, for the class of M-matrices. A positive-definite (real-valued) matrix, A, is said to be an M-matrix if a_{ij} \leq 0 for i \neq j. Furthermore, for an M-matrix, A, unknown i is said to strongly depend on unknown j if -a_{ij} \geq \theta \max_{k \neq i} \{-a_{ik}\} for some \theta \in (0, 1]. Following these definitions, Jacobi and Gauss-Seidel relaxation can be shown to be slow to reduce errors that vary slowly between strongly connected nodes in the M-matrix, A, and that yield small residuals, b - Au, compared to the errors in u [4]. Consider, then, defining interpolation to a fine-grid node, i, for such an algebraically smooth error. Using the small-residual property, localized to node i, we write

(2.1)  (Ae)_i = \sum_j a_{ij} e_j = a_{ii} e_i + \sum_{j \in F_i} a_{ij} e_j + \sum_{k \in C_i} a_{ik} e_k \approx 0,
where adj(i) = \{j : a_{ij} \neq 0\} is split into the two sets C_i and F_i, where C_i is the set of all coarse-grid points on which i strongly depends and F_i = adj(i) \setminus C_i. Assuming equality in (2.1) gives

(2.2)  a_{ii} e_i = -\sum_{j \in F_i} a_{ij} e_j - \sum_{k \in C_i} a_{ik} e_k
so that, if the sum over F_i were not present, (2.2) could be used to directly define an interpolation stencil for node i in terms of its coarse-grid neighbors, k \in C_i. Thus, the task of defining interpolation is one of eliminating the connections to j \in F_i from (2.2). If a_{ij} is small, relative to other coefficients in row i of A, then e_j does not contribute much to this balance. To define "small," we return to the definition of strong
connections; let

S_i = \left\{ j : -a_{ij} \geq \theta \max_{k \neq i} \{-a_{ik}\} \right\}.
Rather than completely removing connections to F_i^w = F_i \setminus S_i from the balance, they are added to the diagonal by making the approximation that e_j \approx e_i. Note that this is a relatively safe choice; if point j has been wrongly classified as a weak connection, then e_j \approx e_i, since algebraically smooth errors vary slowly along strong connections. Thus, defining F_i^s = F_i \cap S_i and F_i^w = F_i \setminus F_i^s, (2.2) is transformed into

(2.3)  \left( a_{ii} + \sum_{j \in F_i^w} a_{ij} \right) e_i = -\sum_{j \in F_i^s} a_{ij} e_j - \sum_{k \in C_i} a_{ik} e_k.
In the AMG coarse-grid selection process, each strongly connected fine-grid neighbor, j, of point i, is ensured to also be strongly connected to at least one point in C_i. Then, the value of e_j in (2.3) may be approximated by a weighted average of j's strongly connected neighbors in C_i. However, since a weak connection between j and k \in C_i is reflected by a small coefficient, a_{jk}, it is safe to take a simpler approach and approximate e_j by a weighted average of all its neighbors in C_i,

(2.4)  e_j \approx \left( \sum_{k \in C_i} a_{jk} e_k \right) \Big/ \left( \sum_{k \in C_i} a_{jk} \right).
Substituting this into (2.3), we arrive at the AMG interpolation formula for the fine-grid point, i, as

(2.5)  e_i = -\sum_{k \in C_i} \left( \frac{ a_{ik} + \sum_{j \in F_i^s} \frac{a_{ij} a_{jk}}{\sum_{l \in C_i} a_{jl}} }{ a_{ii} + \sum_{j \in F_i^w} a_{ij} } \right) e_k.
With this definition of interpolation, the goals of the AMG coarse-grid selection process are clear. Each fine-grid point, i, should be strongly connected to (at least) one coarse-grid point, k, in order to take advantage of the property that e_i \approx e_k. Further, the requirement that each strongly connected fine-grid neighbor of i be itself strongly connected to some strongly connected coarse-grid neighbor of i must also be enforced. Finally, as with all multigrid schemes, there is the desire to make the coarse grid as small as possible, such that a good correction to the troublesome error components on the fine grid is still available. An initial coarse grid is selected as a maximal independent subset of the graph of strong connections [23]; thus, each fine-grid point must be strongly connected to at least one coarse-grid point, but the coarse set does not contain any pairs of strongly connected nodes. Then, a second pass of coarsening is performed, adding points to the tentative coarse grid from the first pass, ensuring that the necessary strong connections exist.

Finally, now that we have specified how to choose a coarse grid and interpolation from it, it remains to be seen how to restrict residuals to that grid and how to define an operator on the coarse grid. Both questions are answered by making use of the fact that the symmetric and positive-definite matrix, A, defines an inner product and norm. Defining the A-inner product as \langle u, v \rangle_A = v^T A u and the A-norm as \|u\|_A^2 = u^T A u, the coarse-grid correction, P e_c, that minimizes the A-norm of the corrected error satisfies P^T A P e_c = P^T (b - Ax). Thus, consistent with this variational principle, restriction is taken to be P^T, where P is the AMG interpolation operator, and the coarse-grid operator is chosen to be P^T A P.
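To make the construction concrete, a minimal dense-matrix sketch of the interpolation weights in (2.5) follows; the function name, dense storage, and the precomputed coarse set C and strong-neighbor set S_i are assumptions for illustration. The Galerkin coarse operator is then P^T A P, as above.

import numpy as np

def interpolation_row(A, i, C, S_i):
    # Classical AMG interpolation weights for fine point i, following (2.5).
    # C is the set of coarse points; S_i the strong neighbors of i.
    nbrs = [j for j in range(A.shape[0]) if j != i and A[i, j] != 0]
    C_i = [k for k in nbrs if k in C and k in S_i]      # strong coarse
    F_s = [j for j in nbrs if j not in C and j in S_i]  # strong fine
    F_w = [j for j in nbrs if j not in S_i]             # weak, lumped
    denom = A[i, i] + sum(A[i, j] for j in F_w)
    weights = {}
    for k in C_i:
        num = A[i, k]
        for j in F_s:
            # Coarsening guarantees each strong fine neighbor j has
            # strong connections in C_i, so this sum is nonzero.
            row_sum = sum(A[j, l] for l in C_i)
            num += A[i, j] * A[j, k] / row_sum
        weights[k] = -num / denom
    return weights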
3. AMG for complex-valued matrices. Here, we consider the needed generalizations of the AMG components in the extension to the complex-valued case. In making these choices, we would like to design an algorithm that is consistent with the algorithm from section 2 for the case of a real-valued symmetric operator and that makes sense for the special cases of a complex-valued symmetric or Hermitian operator. In this section, we analyze these choices from an operator point of view.

3.1. Relaxation. In generalizing the AMG algorithm to complex-valued matrices, we must ensure that relaxation performs as expected, in particular that (weighted) Jacobi and Gauss-Seidel relaxation are convergent for a reasonable class of problems and that they act as appropriate smoothers. Smoothing properties are discussed in detail in section 4. Just as classical AMG was originally proposed for M-matrices, for which the convergence of Jacobi and Gauss-Seidel is well understood [31, section 4.5], we consider here H-matrices, the complex generalization of M-matrices.

Definition 3.1. Let A \in C^{n \times n} be such that its comparison matrix,

(\mathcal{M}(A))_{ij} = \begin{cases} |a_{ii}| & \text{if } i = j, \\ -|a_{ij}| & \text{if } i \neq j, \end{cases}

is an M-matrix. Then, A is called an H-matrix.

For this class of matrices, the convergence of both weighted Jacobi and Gauss-Seidel relaxation is given in [29].

Theorem 3.2 (Theorem 1 from [29]). For any nonsingular H-matrix, A \in C^{n \times n}, let D be the diagonal of A and -L be the strictly lower triangular part of A (so that A - (D - L) = U is strictly upper triangular). Taking J_\omega(A) = I - \omega D^{-1} A to be the error propagation operator for the weighted Jacobi iteration with weight \omega and G_\omega(A) = I - \omega (D - \omega L)^{-1} A to be the error propagation operator for the weighted Gauss-Seidel (SOR) iteration with weight \omega, then
- \rho(J_1(A)) \leq \rho(J_1(\mathcal{M}(A))) < 1,
- for any \omega \in (0, 2/(1 + \rho(J_1(A)))), \rho(J_\omega(A)) \leq \omega \rho(J_1(A)) + |1 - \omega| < 1, and
- for any \omega \in (0, 2/(1 + \rho(J_1(\mathcal{M}(A))))), \rho(G_\omega(A)) \leq \omega \rho(J_1(A)) + |1 - \omega| < 1,
where \rho(B) denotes the spectral radius of matrix B.

Note, in particular, that the first point of the theorem, convergence of the unweighted Jacobi iteration for both A and \mathcal{M}(A), guarantees convergence of the underrelaxed weighted Jacobi iteration (\omega \in (0, 1)) as stated in the second point. Similarly, the convergence of the (unweighted) Gauss-Seidel iteration is also guaranteed. Obviously, the class of H-matrices is not the only class of complex-valued matrices for which Jacobi and Gauss-Seidel are convergent. However, as we are primarily interested in the performance of these schemes as smoothers, we would like to know more about the spectra of the Jacobi and Gauss-Seidel iteration matrices than simple bounds like those in Theorem 3.2 can give. As these spectra depend strongly on that of A, we will use LFA to gain more insight into smoothing in section 4.

3.2. Coarse-grid correction. The definition of a good AMG coarse-grid correction scheme depends, of course, on the properties of the relaxation that it complements. While these properties are highly problem dependent, there are still certain broad principles that can guide AMG development. Central among these is that errors that are slow to be reduced by relaxation must lie in (or near to) the range of interpolation and that their residuals must be accurately restricted to the coarse grid. Here, we consider the components of the coarse-grid correction process independently and the principles that guide their selection.
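As a small illustration of Definition 3.1 and the first bound of Theorem 3.2 above, the following sketch (dense storage and the example matrix are illustrative assumptions) builds the comparison matrix and compares the spectral radii of the Jacobi error propagation operators for A and \mathcal{M}(A).

import numpy as np

def comparison_matrix(A):
    # M(A) of Definition 3.1: |a_ii| on the diagonal, -|a_ij| off it.
    M = -np.abs(A)
    np.fill_diagonal(M, np.abs(np.diag(A)))
    return M

def jacobi_spectral_radius(A, omega=1.0):
    # Spectral radius of J_omega(A) = I - omega * D^{-1} A.
    J = np.eye(A.shape[0]) - omega * np.diag(1.0 / np.diag(A)) @ A
    return max(abs(np.linalg.eigvals(J)))

# For an H-matrix A, Theorem 3.2 gives
# rho(J_1(A)) <= rho(J_1(M(A))) < 1.
A = np.array([[4.0 + 1j, -1.0, 1j],
              [-1.0, 4.0 - 1j, -1.0],
              [1j, -1.0, 4.0]])
print(jacobi_spectral_radius(A), jacobi_spectral_radius(comparison_matrix(A)))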
Interpolation. Within the multigrid coarse-grid correction process, fine-grid errors are updated by the calculation e_{new} = (I - P B_c^{-1} R A) e_{old}, where B_c^{-1} represents the approximate (or exact) inversion of the true coarse-grid operator, A_c. Such a correction affects only the parts of e_{old} that are in the range of P. Thus, the first principle for AMG coarse-grid correction does not change from the real-valued case; algebraically smooth errors must be in the range of P.

To accomplish this, standard AMG principles should be applied. For Hermitian and positive-definite matrices, the small-residual assumption (residuals of errors that are slow to be reduced by relaxation are small) again holds for Jacobi and Gauss-Seidel (see, for instance, [23, section 4.4]). Similarly, for Hermitian and positive-definite H-matrices, errors that are slow to be reduced by these relaxation schemes must vary slowly over connections where |a_{ij}| is large within row i of A. Thus, with a similar definition of strong connections, we can define interpolation for complex-valued matrices using (2.5), just as in the real-valued case. As for real-valued matrices, these requirements amount to assumptions on the class of matrices to which the complex AMG algorithm will be applied. If these assumptions are violated by the given problem, alternate techniques (such as the adaptive AMG algorithm [10]) should be used to define interpolation; see section 5.3. Here, we use a simple extension of the classical AMG strong-connection measure, S_i = \{j : |a_{ij}| \geq \theta \max_{k \neq i} |a_{ik}|\}. This choice is justifiable for H-matrices, A, such that \sum_j a_{ij} \approx 0 for each i, similarly to the real case, where it is justifiable for M-matrices that satisfy the same conditions [23]. Under this assumption, it must also be the case that algebraically smooth errors vary slowly between strongly connected points. Once this definition is made, AMG coarse grids may be selected using a maximal independent set algorithm, as in classical AMG [23]. Choice of strong connections, and AMG coarsening in general, is still an area of active research [5, 8, 18].

It is interesting to note the relationship between multigrid approaches for nonsymmetric real matrices and the equivalent real form of a complex matrix. Writing A \in C^{n \times n} as A = A^{(R)} + \imath A^{(I)} for A^{(R)}, A^{(I)} \in R^{n \times n}, the complex system, Au = b, can be expressed in terms of its real parts as

(3.1)  \begin{pmatrix} A^{(R)} & -A^{(I)} \\ A^{(I)} & A^{(R)} \end{pmatrix} \begin{pmatrix} u^{(R)} \\ u^{(I)} \end{pmatrix} = \begin{pmatrix} b^{(R)} \\ b^{(I)} \end{pmatrix},

where u = u^{(R)} + \imath u^{(I)} and b = b^{(R)} + \imath b^{(I)}. Dendy [14] suggests that for (nonsymmetric) matrices, interpolation should be built based on the symmetric part of the operator. This is motivated by considering convection-diffusion problems, where numerical experiments show that bilinear interpolation works well (when the second-order term is the constant-coefficient Laplacian), even when the convective term dominates. More recently, LFA has been used to confirm that this choice of interpolation works well for these problems [30]. For a Hermitian operator, the equivalent real form is symmetric (as A_{ij}^{(I)} = -A_{ji}^{(I)}) and, so, applying this principle results in no loss of generality. For a complex symmetric operator, on the other hand, the symmetric part of the equivalent real form is a block-diagonal matrix, and this principle suggests determining information based only on the real part of A. Indeed, this approach has been investigated for complex matrices several times; cf. [16, 21].
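A minimal sketch of this modulus-based strength measure, for a dense matrix stored in numpy (the function name and dense storage are illustrative assumptions):

import numpy as np

def strong_connections(A, theta=0.25):
    # S_i = { j != i : |a_ij| >= theta * max_{k != i} |a_ik| },
    # the complex extension of the classical AMG strength test.
    n = A.shape[0]
    S = []
    for i in range(n):
        off = [(j, abs(A[i, j])) for j in range(n) if j != i and A[i, j] != 0]
        cutoff = theta * max((m for _, m in off), default=0.0)
        S.append({j for j, m in off if cutoff > 0.0 and m >= cutoff})
    return S

For a real M-matrix this reduces to the classical test, since |a_{ij}| = -a_{ij} off the diagonal.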
We could consider generalizing this choice here, for example, by using the Hermitian part of the operator (which is the real part of a complex-symmetric matrix); however, such a choice would prejudice our algorithm toward the case where the differentially dominant operator occurs in the real part. In particular, consider a matrix arising in the discretization of -\Delta u + \imath k^2 u = f. Multiplying this equation by the complex unity, \imath, gives -k^2 u - \imath \Delta u = \imath f. Since the discrete Laplacian is symmetric (and not Hermitian), the Hermitian part of this operator corresponds only to the mass matrix, and any information about the diffusion term would be lost in interpolation. In principle, it seems wise to base interpolation on the differentially dominant term, if this is easily identified and extracted from the rest of the operator, but such a choice would not be consistent with the algebraic setting of the multigrid interpolation operator. In sections 4 and 5, we investigate these choices more thoroughly; however, because of examples such as that above, it appears that using only part of the matrix to generate interpolation is too restrictive to be applicable in all interesting cases as a black-box solver. Therefore, in section 5, we use the natural complex extension of (2.5) to define interpolation.

Restriction. Choosing restriction operators for the non-Hermitian definite case is more complicated, as no variational principle may be applied. One consideration for this choice is that the result of applying the rule to Hermitian-definite operators reduces to a variational approach when appropriate. Here, we propose several techniques for choosing restriction operators, motivated primarily by AMG considerations.

A common assumption in algebraic multigrid is that the residual vector after relaxation is small (close to zero), particularly at gridpoints associated only with the fine grid (the so-called F-points). This arises from a reduction-based multigrid (MGR) viewpoint [17, 22]. In MGR (or AMGr), relaxation is assumed to have an error propagation operator of the form

I - \begin{pmatrix} A_{ff}^{-1} & 0 \\ 0 & 0 \end{pmatrix} A.

After such a relaxation, the residual is exactly zero at the F-points. As the role of restriction is to transfer the residual from the fine grid to the coarse grid, this analysis suggests the choice of simple injection for restriction. Making this choice, however, is based on a rather extreme assumption that residuals at F-points are so small that they can be neglected entirely in the coarse-grid problem. The choice of restriction as injection is rarely used in AMG, particularly in the cases of Hermitian-definite or complex-symmetric operators, where the use of injection in the Galerkin product often leads to poor convergence.

Dendy suggests that restriction should be determined as the adjoint of interpolation for the adjoint of A [14], based on experiments with convection-diffusion problems. However, this idea may also be justified by considering a two-level (nonsymmetric) multigrid iteration with error-propagation operator,
(3.2)  T = (I - M_2^{-1} A)(I - P B_c^{-1} R A)(I - M_1^{-1} A),

where M_1 and M_2 represent the approximate inverses used in the (stationary) pre- and postrelaxation steps and B_c represents the action of the coarse-grid solve process for some coarse-grid matrix, A_c. For a matrix, A, that is Hermitian definite, the usual variational conditions (that result in nonzero restriction weighting from the F-points) provide explicit guidance. In the general case, A itself cannot be used to define an appropriate norm, but the normal form, A^*A, can (where A^* denotes the Hermitian transpose of A). Considering, then, the A^*A inner product and norm, we see that the adjoint of T in the A^*A inner product is (A^*A)^{-1} T^* (A^*A) and so

\|T\|_{A^*A} = \|(A^*A)^{-1} T^* (A^*A)\|_{A^*A}.
Following (3.2), we can write

T^* = \left( I - A^* (M_1^{-1})^* \right) \left( I - A^* R^* (B_c^{-1})^* P^* \right) \left( I - A^* (M_2^{-1})^* \right)
    = A^* \left( I - (M_1^{-1})^* A^* \right) \left( I - R^* (B_c^{-1})^* P^* A^* \right) \left( I - (M_2^{-1})^* A^* \right) (A^*)^{-1}
    = A^* \hat{T} (A^*)^{-1},

where \hat{T} takes the form of a two-grid cycle on A^*, with the roles of R and P interchanged with their Hermitian transposes. Putting these together, we have that

\|T\|_{A^*A}^2 = \max_v \frac{\left\langle (A^*A)^{-1} T^* (A^*A) v,\; T^* (A^*A) v \right\rangle}{\left\langle (A^*A) v, v \right\rangle}
             = \max_v \frac{\left\langle ((A^*)^{-1} T^* A^*) A v,\; ((A^*)^{-1} T^* A^*) A v \right\rangle}{\left\langle A v, A v \right\rangle}
             = \max_w \frac{\left\langle \hat{T} w, \hat{T} w \right\rangle}{\left\langle w, w \right\rangle} = \|\hat{T}\|_2^2.
Thus, the multigrid cycle given by T can be an effective cycle for A (measured in the A^*A-norm) if and only if \hat{T} is an effective cycle for A^* (measured in the \ell_2-norm). But, to design an effective cycle for A^*, we should apply the same principles to the choice of interpolation for this cycle (now R^*) as we would for the cycle for A. In particular, the principle that R^* accurately represents the algebraically smooth errors of A^* should be enforced. In other words, R^* should be constructed as we would construct AMG interpolation for A^*; R should be the adjoint of interpolation for the adjoint of A, just as was proposed for the real case in [14].

When A is also symmetric or Hermitian, this argument is consistent with typical multigrid approaches. If A is Hermitian, then A^* = A, and this approach says that restriction should be the adjoint of AMG interpolation for A, R = P^*. This is, of course, consistent with the variational conditions that typically guide multigrid development in the Hermitian-definite case. For complex-symmetric A = A^{(R)} + \imath A^{(I)}, A^* = A^{(R)} - \imath A^{(I)}. If the rule for creating the AMG-style interpolation preserves this conjugation, then P(A^*) = \overline{P(A)}. In other words, the choice of R(A) = [P(A^*)]^* results in R(A) = P^T(A) if A is complex symmetric and if

(3.3)  \Re\left( P(A^{(R)} + \imath A^{(I)}) \right) = \Re\left( P(A^{(R)} - \imath A^{(I)}) \right)
       and \Im\left( P(A^{(R)} + \imath A^{(I)}) \right) = -\Im\left( P(A^{(R)} - \imath A^{(I)}) \right),

where \Re(M) denotes the matrix whose (i, j)th entry is the real part of m_{ij} and \Im(M) is defined similarly for the imaginary part. In practice, this means that if the rule for determining interpolation only involves basic arithmetic operations (over which complex conjugation can be distributed) and the same points are selected as strong and weak connections for A and A^*, then this rule results in a restriction operator that is the (non-Hermitian) transpose of interpolation.

A subspace decomposition point of view suggests a third approach for choosing restriction. When A is Hermitian and definite, a natural partition arises for C^n, into the range of P and its A-orthogonal complement. In a two-level multigrid cycle, the coarse-grid correction stage exactly eliminates errors that lie in the range of P, while errors that are A-orthogonal to this space must be adequately reduced by relaxation
on the fine grid. Let \hat{P} be the \ell_2-projection onto the range of the full-rank operator, P, and let A be Hermitian and definite. Then \langle \hat{P} v, w \rangle_A = \langle v, A^{-1} \hat{P} A w \rangle_A. Thus, by the fundamental theorem of linear algebra, the space C^n may be partitioned as C^n = \mathcal{R}(\hat{P}) \oplus_A \mathcal{N}(A^{-1} \hat{P} A). But, since \hat{P} is a projection onto the range of P, \mathcal{R}(\hat{P}) = \mathcal{R}(P). Furthermore, because \hat{P} is an \ell_2-projection, it is Hermitian, and since P has full rank, \mathcal{N}(A^{-1} \hat{P} A) = \mathcal{N}(P^* A). Thus, we have that C^n = \mathcal{R}(P) \oplus_A \mathcal{N}(P^* A). Analyzing the error within the multigrid iteration using this subspace decomposition, we can identify those errors within the range of P as being the algebraically smooth errors. Thus, errors that are quickly attenuated by relaxation must lie in the null-space of P^* A. Within the multigrid error propagation operator (as in (3.2)), we can then identify the role of the residual projection (application of P^* A or, in the non-Hermitian case, RA) as being to filter out those errors that can be easily treated through relaxation alone. As in this non-Hermitian case A no longer defines a proper inner product, we can only consider the \ell_2-adjoint of RA, A^* R^*, to use this analysis. Requiring that \mathcal{N}(RA) includes all errors that are effectively reduced by relaxation is then equivalent to requiring that \mathcal{R}(A^* R^*) includes the algebraically smooth errors. Thus, if only algebraically smooth errors are to be in the range of A^* R^*, then the small-residual assumption implies that \mathcal{R}(A(A^* R^*)) is small on fine-grid points. But \mathcal{R}(A(A^* R^*)) = \mathcal{R}((AA^*) R^*), suggesting that R^* must be accurate for algebraically smooth errors of the normal equations, AA^*. This leads to another possible rule for defining restriction, as the Hermitian conjugate of an AMG interpolation for the normal operator, AA^*. Such a choice, while motivated by typical AMG considerations, is not as attractive from a cost perspective as those discussed previously. The costs of forming AA^* in order to form restriction are obviously significant and would almost certainly lead to an increase in complexity of the AMG coarse-grid operators if applied within (2.5). On the other hand, if the basic AMG interpolation scheme is adapted to such complications, then this approach can be quite effective. Investigation of a similar approach within smoothed aggregation multigrid is currently underway [11].

Forming the coarse-grid operator. In many cases, physical intuition may be used to define an appropriate coarse-grid operator that complements the given choices of interpolation and restriction, but this is difficult to use consistently in the algebraic setting considered here. Instead, we choose the obvious generalization of the Galerkin condition from the symmetric or Hermitian definite case and we define A_c = RAP. In particular, this can be viewed as a restriction of the fine-grid operator, A, to exactly those components identified as needing correction from the coarse grid (the algebraically smooth errors). Multiplication on the right serves to restrict the domain of A to the range of P that, by assumption, contains these errors. Multiplication on the left by R restricts the range of A to that of R^*, which may be chosen, as described above, based on an understanding of the action of A on algebraically smooth errors. Using the definition of R discussed above, this choice for the coarse-grid operator also preserves the Hermitian symmetry or regular symmetry (if conditions (3.3) are satisfied) of the fine-grid operator.
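The rule just described is easy to state operationally. The following sketch (not from the original development; build_interpolation is a placeholder for the complex generalization of (2.5)) shows the resulting setup step for one level:

import numpy as np

def complex_amg_setup_level(A, build_interpolation):
    # Interpolation is built from A; restriction is the Hermitian
    # transpose of the interpolation built for the adjoint A^H; the
    # coarse operator is the Galerkin product A_c = R A P.
    P = build_interpolation(A)
    R = build_interpolation(A.conj().T).conj().T
    A_c = R @ A @ P
    return P, R, A_c

For Hermitian A this reduces to R = P^*, and for complex-symmetric A satisfying (3.3) to R = P^T, matching the special cases discussed above.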
While it is possible to make separate choices of restriction and interpolation for use in the Galerkin product and in the multigrid cycle, in this paper we choose the same R and P for both roles. 3.3. Relation to systems AMG. While the proposed approach directly treats the complex values in the given matrix, it is also possible to implement this approach indirectly, within an existing AMG code that allows so-called point-based treatment of systems of equations. Viewing the equivalent real form (3.1) as a system of real-
valued equations with two unknowns per discrete node point (the real and imaginary parts of the complex value), decisions within AMG can be made based on the 2 x 2 blocks that represent a_{ij} = a_{ij}^{(R)} + \imath a_{ij}^{(I)} as

A_{ij} = \begin{pmatrix} a_{ij}^{(R)} & -a_{ij}^{(I)} \\ a_{ij}^{(I)} & a_{ij}^{(R)} \end{pmatrix}.
In this setting, pointwise relaxation on the complex form of A is equivalent to block relaxation, simultaneously solving the 2 x 2 blocks associated with each node point. The definition of strong connections is recovered using the L2-norm of the block matrices, A_{ij}, in place of |a_{ij}|. Interpolation (and, thus, restriction) may be defined using block matrices (and block inverses), following (2.5) to define the 2 x 2 nodal blocks of interpolation. Finally, the Galerkin condition also can be computed in this block form. The extra costs of such an approach, however, make the complex-valued approach proposed here attractive, especially for the very large systems that arise in many naturally complex-valued applications. A naive implementation of the block algorithm would require twice the storage and twice the work to compute a matrix-vector product as the complex-valued AMG algorithm does. Furthermore, increased work would also be necessary in the AMG setup algorithm, unless modifications to the systems AMG code were made to take advantage of the complex structure. With such modifications, however, the computations performed by the systems AMG algorithm would simply mimic those described here.

4. LFA. Since the early days of multigrid, Fourier smoothing and two-grid analyses have been used to make quantitative estimates of the smoothing properties of basic iterative methods and for quantitative evaluation of the other multigrid components in a two-grid method; see, for example, [3, 25, 26]. LFA (called local mode analysis in [3]) is the main multigrid analysis option for problems that do not lead to Hermitian and positive-definite matrices, such as the complex-valued problems of interest here. As we are interested in the definition of the coarse-grid correction components within a multigrid cycle, smoothing analysis alone is not sufficient, and we also consider two- and three-grid LFA [30]. Especially in this complex-valued setting, increased insight into the quality of the transfer-operator-dependent Galerkin coarsening is of value. Here, three-grid analysis is briefly outlined for two-dimensional problems with standard coarsening. We consider a discrete problem, A_h u_h = f_h, where u_h represents the exact discrete solution on a regular grid with mesh size h. The main idea in the Fourier analysis is to formally extend all multigrid components to an infinite grid, G_h := \{x = (k_x h, k_y h) : k_x, k_y \in Z\}. On G_h, we have a unitary basis of grid functions called the Fourier components, \varphi_h(\theta, x) := \exp(\imath \theta \cdot x / h) = \exp(\imath k \cdot \theta) with x \in G_h, k = (k_x, k_y), and Fourier frequencies, \theta = (\theta_x, \theta_y) \in R^2. These components are eigenfunctions of any discrete, real- or complex-valued operator, A_h, on G_h with constant coefficients.
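As a small numerical companion to the analysis below, the following sketch computes the LFA smoothing factor of damped Jacobi for the finite difference complex-shifted Helmholtz operator -\Delta u + \imath k^2 u considered in section 4.1 (the sampling resolution and parameter defaults are illustrative assumptions):

import numpy as np

def jacobi_smoothing_factor(omega, k2=1600.0, h=1.0 / 64, n=256):
    # Symbol of the FD stencil of -Delta + i*k^2 and of its diagonal;
    # the smoothing factor is the max amplification of the Jacobi
    # error operator, 1 - omega * Ahat(theta)/Ahat_diag, over the
    # high frequencies max(|theta_x|, |theta_y|) >= pi/2.
    theta = np.linspace(-np.pi, np.pi, n, endpoint=False)
    tx, ty = np.meshgrid(theta, theta)
    symbol = (4.0 - 2.0 * np.cos(tx) - 2.0 * np.cos(ty)) / h**2 + 1j * k2
    diag = 4.0 / h**2 + 1j * k2
    amp = np.abs(1.0 - omega * symbol / diag)
    high = (np.abs(tx) >= np.pi / 2) | (np.abs(ty) >= np.pi / 2)
    return amp[high].max()

# e.g. jacobi_smoothing_factor(0.8); note Figure 4.1 reports factors
# for nu = 2 sweeps, i.e. the square of this one-sweep factor.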
Recall that the error, e_h^m = u_h - u_h^m, after iteration m is transformed by a two-grid operator as

(4.1)  e_h^{m+1} = T_h^{2h} e_h^m with T_h^{2h} = S_h^{\nu_2} K_h^{2h} S_h^{\nu_1} and K_h^{2h} = I_h - P_{2h}^h (A_{2h})^{-1} R_h^{2h} A_h,

and, after a three-grid cycle, is given by

(4.2)  e_h^{m+1} = T_h^{4h} e_h^m with T_h^{4h} = S_h^{\nu_2} K_h^{4h} S_h^{\nu_1} and K_h^{4h} = I_h - P_{2h}^h \left( I_{2h} - (T_{2h}^{4h})^\gamma \right) (A_{2h})^{-1} R_h^{2h} A_h.

Here, T_{2h}^{4h} defined by (4.1) reads T_{2h}^{4h} = S_{2h}^{\nu_2} (I_{2h} - P_{4h}^{2h} (A_{4h})^{-1} R_{2h}^{4h}) S_{2h}^{\nu_1}. A_h, A_{2h}, and A_{4h} correspond to discretizations on the h-, 2h-, and 4h-grids, although A_{2h} and A_{4h} may also be based on Galerkin principles, as described above. S_h and S_{2h} are the smoothing operators on the fine and the first coarse grid, and \nu_i (i = 1, 2) represents the number of pre- and postsmoothing steps. R_h^{2h}, R_{2h}^{4h} and P_{2h}^h, P_{4h}^{2h} denote restriction and prolongation operators, respectively, between the different grids. I_h and I_{2h} are the identity operators with respect to the h- and the 2h-grids. Instead of inverting A_{2h}, as is done in (4.1), the 2h-equations are solved approximately in a three-grid cycle (4.2) by performing \gamma two-grid iterations, T_{2h}^{4h}, with zero initial approximation. This is reflected by the replacement of (A_{2h})^{-1} from (4.1) by the expression (I_{2h} - (T_{2h}^{4h})^\gamma)(A_{2h})^{-1}.
and
2g 2 Θ2g high = (−π, π] \ Θlow ,
in such a way that the low-frequency components are “visible” on both grids Gh and G2h . Each low-frequency component is coupled with three related high-frequency components that alias on G2h , leading to a splitting of the Fourier space into fourdimensional subspaces, the spaces of 2h-harmonics: span{ϕ(θ α , x); α = (αx , αy ), αx , αy ∈ {0, 1}} θ=θ
00
∈
Θ2g low
and θ
αx αy
with
:= (θx − αx sign(θx )π, θy − αy sign(θy )π).
Th2h is unitarily equivalent to a block diagonal matrix consisting of 4 × 4 blocks. This simple representation is then used to calculate the corresponding spectral radius and, thus, the LFA two-grid convergence factor, ρ2g . The smoothing factor, μ, which measures the reduction of high-frequency error components by relaxation is defined based on a coarse-grid correction operator in (4.1) that annihilates the low-frequency error components. Kh2h is thus replaced in (4.1) by a projection, Q2h h , onto the space of high frequencies. Similar to the two-grid case, in three-grid LFA we distinguish between low and high frequencies, but now with respect to three grids, Gh , G2h , and G4h . It is then appropriate to divide the Fourier space into a direct sum of 16-dimensional subspaces, the so-called 4h-harmonics [30]. As a consequence, Th4h is unitarily equivalent to a block diagonal matrix with at most 16×16 blocks. We obtain the LFA three-grid convergence factor, ρ3g , as the supremum of the spectral radii of the block matrices. The assumptions needed for LFA to be valid seem far from the algebraic setting considered here; however, the construction of the complex-valued smoothing and coarse-grid
ALGEBRAIC MULTIGRID FOR COMPLEX-VALUED MATRICES
1559
correction components can still be guided by the LFA results. In particular, the algorithm should also converge well for structured-grid problems, under the usual LFA restrictions. In this sense, LFA results serve as a first indication of the quality of the coarse-grid correction proposed. 4.1. LFA results. As Fourier analysis applies only to structured grids, we analyze a structured-grid variant of the AMG interpolation described above. The natural structured-grid variant of AMG interpolation is that used in Dendy’s black-box multigrid (BMG); for details, see [2, 14]. For restriction, our operator of choice is
h ] (A ), Rh2h = [P2h
(4.3)
as in section 3.2. Next to this choice of restriction, we also include the straight injection operator in our evaluation. The third option discussed in section 3.2 with restriction based on AA is not considered here due to the expense needed to calculate it. We start the LFA experiments with a Laplacian-type operator, ⎤ ⎡ −1 ∧ A = ⎣ ı 4 ı ⎦. −1 We fix two red–black Jacobi relaxation sweeps as the smoother with ω = 0.9 (as discussed below) and compare the performance of the simple real-valued transfer operators of full-weighting restriction (FW) and bilinear interpolation (BL) to the complex-valued BMG interpolation and restriction, based on (4.3) and on injection (INJ). Table 4.1 gives the LFA two- and three-grid convergence factors. The LFA smoothing factor for this red–black relaxation is μ2 = 0.217. We see in Table 4.1 that the complex-valued transfer operators perform satisfactorily, slightly better than the real-valued transfer operators. Injection also performs well on this problem, giving superior results with BMG interpolation, even in combination with red–black relaxation and five point stencils. Similar behavior is seen when the red–black Jacobi relaxation is replaced by a forward–backward Gauss–Seidel relaxation. We next consider a definite Helmholtz operator, −Δu + αu, discretized either by standard finite differences or by bilinear finite elements on a uniform mesh, leading to the standard O(h2 ) discretization stencils. Figure 4.1 displays smoothing factors for this problem, with α = k 2 (real) and α = k 2 ı (complex), and their dependence on the relaxation parameter ω (commonly used in multigrid smoothers). Three smoothers are compared: pointwise damped Jacobi, damped red–black Jacobi (which is identical to red–black Gauss–Seidel for five-point stencils) and lexicographical Gauss–Seidel (a forward sweep followed by a backward sweep) with ν = 2 smoothing steps. One forward–backward pair of Gauss–Seidel sweeps is considered as two smoothing steps. We compare in each Table 4.1 LFA two- and three-grid convergence factors for a Laplacian with complex entries.
ρ2g
BL-FW 0.217
BL-INJ 0.299
BMG-BMG 0.188
BMG-INJ 0.158
ρV 3g
0.217
0.310
0.188
0.158
1560
S. P. MACLACHLAN AND C. W. OOSTERLEE
(a) Pointwise Jacobi
(b) Red–black Jacobi
(c) Forward–backward Gauss–Seidel
Fig. 4.1. Dependence of LFA smoothing factors (ν = 2) on the relaxation parameter, ω, for Jacobi, red–black Jacobi, and forward–backward Gauss–Seidel smoothers. Table 4.2 LFA smoothing, two- and three-grid factors for the FE discrete complex Helmholtz operator.
μ2 ρ2g ρV 3g
ω-JAC ω = 0.9
ω-JAC-RB ω = 0.9
GS-FWBW ω = 1.0
0.12 0.14 0.35
0.12 0.15 0.29
0.18 0.17 0.17
subfigure the finite difference and finite element stencils for problems with positive real- or complex-valued Helmholtz terms, α = k 2 or α = k 2 ı. Parameters are set as h = 1/64, k 2 = 1600. Table 4.2 presents two- and three-grid LFA convergence factors for V(1,1) cycles applied to the FE discretization of the complex-valued Helmholtz operator with complex-valued transfer operators. The LFA three-grid V -cycle factors show degradation for standard and red–black Jacobi relaxation, which is an indication that the coarse-grid problems are not defined optimally for these smoothers. A closer look at the Galerkin coarse-grid operators built with these transfer operators shows that on the third grid, operators with only positive elements arise. While the convergence with Jacobi relaxation degrades, Gauss–Seidel relaxation is not influenced by these coarse-grid discretizations and performs satisfactorily. Finally, we mention that the use of injection as the restriction operator for this Helmholtz operator did not lead to satisfactory LFA factors. The two- and three-grid factors increase to at least 0.56 for different smoothers and interpolation operators, indicating the advantages of nontrivial restriction. Remark 1. A remark on the need for complex-valued interpolation follows. The LFA assumptions of full coarsening and constant stencils are more restrictive than our expectations for AMG. While LFA does provide useful insight into several properties of multigrid for complex systems, our analysis cannot distinguish between the benefits of real-valued and complex-valued interpolation operators. Instead, we provide a simple example (a special case of the operators considered in section 5.3) to demonstrate the benefits of a “fully complex” AMG approach. Consider the Hermitian matrix, A, defined over a two-dimensional mesh by 4ui,j − e−iφi−1,j ui−1,j − eiφi,j ui+1,j − e−iψi,j−1 ui,j−1 − eiψi,j ui,j+1 = fi,j , where the fields, {φi,j } and {ψi,j }, are chosen randomly. Applying the AMG strengthof-connection test to this stencil suggests that all neighboring points are strongly
connected, as all off-diagonal entries are of the same size. As a result, AMG naturally chooses a structured red–black coarsening pattern. In this setting, two-level AMG with complex interpolation operators is an exact solver; partitioning

    A = ⎡ A_ff  A_fc ⎤
        ⎣ A_cf  A_cc ⎦ ,

A_ff is diagonal, and AMG naturally chooses the complex interpolation operator

    P = ⎡ −A_ff^{−1} A_fc ⎤
        ⎣         I       ⎦ .

By contrast, choosing a real-valued interpolation operator, say,

    P = ⎡ −A_ff^{−1} |A_fc| ⎤
        ⎣          I        ⎦ ,

where |M| denotes the matrix with entries |m_ij|, leads to a two-level convergence factor of 0.585. This discrepancy remains in three-level and multilevel convergence factors (0.155 with complex interpolation, 0.589 with real interpolation). LFA based on red–black coarsening may give further insight into this choice and is a question for future research.

5. Numerical results. All numerical experiments are run on a 64-bit AMD Athlon 3700+ system, running at 2.2 GHz, with 3 GB of RAM. We use the standard GNU compiler collection (gcc) C compiler with appropriate optimization options enabled for this machine. This compiler supports the C99 complex standard and, so, we use the native complex arithmetic functions to implement the algorithms described above. Systems on the coarsest level of the multigrid hierarchy are solved using a direct solver (LAPACK's zgbtrs routine); in all examples, multigrid coarsening is continued until the coarsest level is so small that this is a negligible part of the iteration cost. For all examples, we consider V(1,1) multigrid cycles, using Gauss–Seidel relaxation ordered so that the coarse-grid points are relaxed first on the downward sweep of the V-cycle and last on the upward sweep (the so-called CF–FC ordering), with (complex-valued) interpolation chosen as the generalization of (2.5) from the real-valued case.

5.1. Simple problems. First, we consider variants of several simple problems for which standard multigrid and AMG performance are well understood, in order to demonstrate that the generalization to complex arithmetic maintains these properties. Additionally, this provides a benchmark for comparison of the costs of AMG in real arithmetic versus complex arithmetic.

Table 5.1 shows the performance of real-valued AMG for bilinear finite-element discretizations of the positive-definite Poisson equation, with and without a positive-definite shift, −Δu = f and −Δu + k^2 u = f, with k = 0.625/h.

Table 5.1
Real-valued AMG performance for finite-element Poisson, with and without a definite shift.

                            −Δu                       −Δu + k^2 u
                    standard Si  modulus Si    standard Si  modulus Si
512 × 512    ρMG       0.116       0.116          0.041       0.041
             # Iters.    7           7              6           6
             tsolve     2.6         2.6            2.6         2.4
1024 × 1024  ρMG       0.136       0.136          0.041       0.041
             # Iters.    7           7              6           6
             tsolve    10.4        10.4           10.4         9.8
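The two strong-connection tests compared in Table 5.1 are defined precisely in the discussion below; as a hedged Python sketch (the row dictionary and its values here are hypothetical), the standard measure tests −a_ij against the largest −a_ik in the row, while the modulus-based measure uses |a_ij|, which also extends directly to complex entries:

    def strong_neighbors(row, theta=0.25, modulus=False):
        # row: {j: a_ij} off-diagonal entries of one matrix row (hypothetical).
        vals = {j: (abs(a) if modulus else -a) for j, a in row.items()}
        cutoff = theta * max(vals.values())
        return {j for j, v in vals.items() if v >= cutoff}

    row = {1: -1.0, 2: -1.0, 3: 0.4, 4: -0.05}
    print(strong_neighbors(row))                # standard: {1, 2}
    print(strong_neighbors(row, modulus=True))  # modulus-based: {1, 2, 3}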
Table 5.2
Complex-valued AMG performance for simple problems.

                        512 × 512                  1024 × 1024
                  ρMG    # Iters.  tsolve    ρMG    # Iters.  tsolve
−Δu              0.116      7       4.0     0.136      7       17.5
−ıΔu             0.116      7       4.0     0.136      7       16.6
−Δu + k^2 u      0.041      6       3.8     0.041      6       15.4
−Δu + ık^2 u     0.171     11       5.3     0.172     12       22.5
The coarse-grid selection is based on a strong-connection definition of either −a_ij ≥ θ max_{k≠i}{−a_ik} (standard) or |a_ij| ≥ θ max_{k≠i}{|a_ik|} (modulus-based) for θ = 0.25 and, here, we take R = P^T and A_c = P^T A P. Shown in Table 5.1 are the maximum convergence factor observed over (up to) 200 iterations, as well as the iteration count and total time needed to reduce the ℓ2-norm of the residual by a relative factor of 10^9 for each problem. For the unshifted Poisson problem, there is no difference in the results using the standard or modulus-based definition of strong connections, due to the M-matrix structure of the finite-element operators on these regular meshes. This structure is preserved on coarse meshes, so that the standard and modulus-based definitions coincide, and the performance of the two approaches is identical. The same is not true for the shifted problem, where the coarsening of the mass matrix induces positive off-diagonal entries in the coarse-grid operators. As a result, the cost of a multigrid V(1,1) cycle is slightly lower for the modulus-based measure of strength of connection, leading to slightly faster times for that approach.

Table 5.2 details the performance of the complex-valued AMG algorithm for these problems, along with two simple complex generalizations. For the real-valued problems, we see that the complex AMG solver performs the same as the real-valued AMG solver does (cf. Table 5.1) when measured in terms of convergence factors, ρMG, or iteration counts. In terms of CPU time, however, there is a premium to be paid for doing complex arithmetic; the cost is only 50% to 70% greater than that of the real-valued AMG algorithm. For the complex Poisson operator, −ıΔu, performance of complex AMG essentially matches that of both the real and complex AMG algorithms applied to the usual Poisson operator. Different results are seen for the complex-shifted Helmholtz operator, −Δu + ık^2 u, where the convergence factors (while still bounded nicely away from 1.0) increase somewhat. Comparing to Figure 4.1, we see that the solver performance for the complex-shifted problem is quite close to that predicted by LFA with lexicographic relaxation order, while the performance for the real shift is much better. Using lexicographic-ordered Gauss–Seidel relaxation for the positive shift leads to performance similar to that predicted by LFA. For this problem, however, an unsymmetric ordering of relaxation offers a significant improvement over lexicographic ordering.

The complexity of the algebraic multigrid iterations may be measured in terms of the AMG grid complexity, cg, and operator complexity, cA. The grid complexity, defined as the sum of the number of grid points on all levels in the AMG hierarchy divided by the number of grid points on the first level, gives a measure of the storage costs needed for coarse-level approximations, residuals, and right-hand sides during the multigrid iteration. The operator complexity, defined as the sum of the number of nonzeros in the system matrices defined on all levels of the AMG hierarchy divided by
the number of nonzeros in the fine-grid operator, gives both a measure of the storage needed for the coarse-grid operators within the multigrid hierarchy and of the cost per multigrid V-cycle, relative to that of a fine-grid matrix-vector multiply. For real-valued AMG, we measure these complexities in terms of real values stored, while for complex-valued AMG, we measure them by the number of complex values stored. For the problems considered in this section, all grid and operator complexities are quite low. Using real-valued AMG with the standard definition of Si for the shifted problem resulted in the largest complexities, cg = 1.38 and cA = 1.63. For all other problems considered here, grid and operator complexities were nearly uniform, with cg = 1.33 and cA = 1.41. These are typical of geometric multigrid complexities for the regular stencil patterns and structured grids considered in this section.

5.2. Unstructured grid application. In this section, we consider the effect of unstructured grids on the performance of the complex-valued AMG algorithm. The discrete problems arise from a linear finite-element discretization of a Helmholtz problem with complex shift that arises from a reduction of Maxwell's equations [16]. As a result, the discrete problem is complex symmetric; thus, we consider complex AMG with the choice R = P^T as discussed above, along with preconditioning of BiCGStab (as CG is no longer a suitable choice).

In the special case of a time-harmonic source current, Maxwell's equations may be reduced to a frequency-domain Helmholtz equation for the z-component of the Fourier transform of a vector potential, A. Details of this reduction can be found, for example, in [16], resulting in the equation for Â_z,

    −∇ · ( (1/μ) ∇Â_z ) + ıωσ Â_z = Ĵ_{s,z}.

We consider only half an annular domain, as depicted in Figure 5.1, discretized using standard linear finite elements. Lahaye et al. solve these systems using a real-valued AMG algorithm (based on the real part of the system matrix) as a preconditioner for BiCGStab [16].
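The grid and operator complexities defined above are straightforward to compute from any stored hierarchy; a minimal Python sketch, assuming a list of per-level scipy.sparse matrices ordered from finest to coarsest:

    def complexities(levels):
        # levels: per-level system matrices, finest first (scipy.sparse).
        c_g = sum(A.shape[0] for A in levels) / levels[0].shape[0]
        c_A = sum(A.nnz for A in levels) / levels[0].nnz
        return c_g, c_A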
Fig. 5.1. Mesh geometry for induction motor. Reprinted with permission from D. Lahaye, H. De Gersem, S. Vandewalle, and K. Hameyer, Algebraic multigrid for complex symmetric systems, IEEE Trans. Magn., 36 (2000), pp. 1535–1538. Copyright © 2000 IEEE.
Table 5.3
AMG performance for finite-element models of induction motor.

Problem                     Solver           cg     cA    tsetup  tsolve  # Iters.
1028 nodes, nnz = 6520      real AMG         1.80   2.17   0.0     0.0     19
                            complex AMG      1.80   2.17   0.0     0.0     20
                            AMG-BiCGStab     1.80   2.17   0.0     0.0      6.5
                            cAMG-BiCGStab    1.80   2.17   0.0     0.0      6
3959 nodes, nnz = 26601     real AMG         1.85   2.67   0.0     0.1     29
                            complex AMG      1.85   2.67   0.0     0.1     39
                            AMG-BiCGStab     1.85   2.67   0.0     0.1      8.5
                            cAMG-BiCGStab    1.85   2.67   0.0     0.1      9
15302 nodes, nnz = 104926   real AMG         1.83   2.86   0.1     0.6     29
                            complex AMG      1.82   2.85   0.2     0.7     32
                            AMG-BiCGStab     1.83   2.86   0.1     0.4      9
                            cAMG-BiCGStab    1.82   2.85   0.2     0.3      8
34555 nodes, nnz = 239661   real AMG         1.81   2.91   0.4     1.7     31
                            complex AMG      1.81   2.91   0.4     1.7     30
                            AMG-BiCGStab     1.81   2.91   0.4     1.0      8.5
                            cAMG-BiCGStab    1.81   2.91   0.4     1.0      8.5
75951 nodes, nnz = 529317   real AMG         1.77   2.87   1.0     4.5     31
                            complex AMG      1.77   2.87   1.1     4.2     29
                            AMG-BiCGStab     1.77   2.87   1.0     2.6      8.5
                            cAMG-BiCGStab    1.77   2.87   1.1     2.5      8
We consider five different resolutions on the half-annulus geometry of Figure 5.1. The triangulation on the coarsest mesh has 1028 nodes, while subsequent meshes are refinements of this initial triangulation. The finest mesh has 75,951 nodes and approximately 530,000 nonzero entries in the system matrix. For each mesh, we consider the performance of ILU-preconditioned BiCGStab and of AMG based on the real part of the matrix and on the complete, complex matrix, both as a standalone solver and as a preconditioner for BiCGStab.

The performance of the AMG variants for these problems is detailed in Table 5.3. For each problem and each AMG approach, we measure both the grid complexity, cg, and operator complexity, cA, of the complex AMG solver. Additionally, we report setup and solve times (in seconds), as well as the number of iterations needed to reduce the ℓ2-norm of the residual by a relative factor of 10^9. Because of the small sizes of the least-refined meshes, some times are below the threshold that can be accurately measured; such times are reported as 0.0.

Table 5.3 shows that the complex-valued AMG achieves performance similar to that seen with the real-valued AMG preconditioning investigated in [16]. In particular, the numbers of iterations required to reduce the residual from a zero initial guess by a relative factor of 10^9 are quite close to those of an AMG algorithm based solely on the (real) differential part of the operator. In timing these results, we have not optimized the distribution of real- and complex-valued arithmetic in the real-valued AMG case. This means that, in practice, preconditioning based solely on the real part of the operator is more efficient, given the added efficiency possible using real-valued storage for the interpolation operators.

For comparison, we consider the performance of BiCGStab preconditioned with ILU(0) for these problems. Note that, because of the fixed nonzero structure in the preconditioner (matching that of A), the effective operator complexity for these preconditioners is 2, as both the original matrix and its ILU factors must be computed and stored. Because of the simple calculation of this factorization, setup times for these preconditioners are negligible. Iteration costs, however, are significant, with
over 6000 iterations needed for the second-largest grid and stalling convergence on the largest problem.
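For readers wishing to reproduce a comparable baseline, the following Python sketch (a stand-in using scipy, not the code timed in the paper) wraps an incomplete-LU factorization as a preconditioner for BiCGStab; note that scipy's spilu is a threshold ILU, so it only approximates the fixed-pattern ILU(0) described above:

    import scipy.sparse.linalg as spla

    def ilu_bicgstab(A, b, maxiter=6000):
        # Threshold ILU as a stand-in for ILU(0); complex matrices are fine,
        # since SuperLU supports complex arithmetic.
        ilu = spla.spilu(A.tocsc(), drop_tol=1e-4, fill_factor=2)
        M = spla.LinearOperator(A.shape, matvec=ilu.solve, dtype=A.dtype)
        x, info = spla.bicgstab(A, b, M=M, maxiter=maxiter)
        return x, info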
5.3. Gauge Laplacian. While examples such as the problems considered in the previous subsections are interesting, they do not require the full strength of the complex AMG algorithm introduced here. In order for a complex-valued approach to be a necessity, the dynamics of the problem must be somehow inherently complex valued, whereas for problems such as the complex-shifted Helmholtz equation, the dominant part of the operator is, in the end, real-valued. In this section, we discuss model problems from a large class of applications with inherently complex dynamics, related to the numerical simulation of quantum field theory [20, 24].

Quantum chromodynamics (QCD) is the part of the standard model of physics that describes the strong interaction between quarks and gluons within particles such as protons and neutrons. In order to test the validity of the standard model, QCD is used to make predictions of properties of known particles, which may be independently measured using a particle accelerator [12]. Discretizing QCD requires the computation of a Feynman path integral over certain Grassmann (anticommuting) variables, which may be simplified by introducing an effective gauge field [20]. Such a simplification, however, introduces an inverse of the Dirac operator. A full description of this problem is beyond the scope of this work; we refer the interested reader to [20, 24] and will concentrate here on a model problem that displays many of the same complications as the discretization of the full Dirac operator.

The Dirac equation is a first-order system of 12 coupled PDEs posed on a four-dimensional space. The 12 variables (scalar-functional degrees of freedom), however, appear as a tensor-product of a four-dimensional space (associated with the quantum dynamical spin) with a three-dimensional space (associated with the quantum dynamical color). The Dirac operator is thus block-structured, with differential coupling between variables of the same spin index but only algebraic coupling between variables of different spins. Exploiting this structure, the Dirac operator may be written as
    M(A) = Σ_{μ=1}^{4} ( γ_μ ⊗ (I_3 ∂_μ − ıA_μ) ) − m I_{12},
where μ = 1, . . . , 4 denotes the four canonical space–time directions, {γ_μ}_{μ=1}^{4} are four fixed unitary matrices with all entries 0, ±1, and ±ı, ∂_μ is the standard partial derivative in the μ-direction, I_3 and I_{12} are the 3 × 3 and 12 × 12 identity matrices, respectively, the constant, m, is a mass parameter, and ⊗ denotes a standard tensor product of operators. The field of complex 3 × 3 matrices, A_μ(x), is known as the gauge potential and is chosen through a Monte Carlo process in the numerical simulation of QCD.

Here, we consider a two-dimensional model related to the Dirac equation, with a scalar potential, a(x, μ) (where x now varies over a two-dimensional lattice, and μ = 1, 2). Considering the covariant derivative term, D_μ = ∂_μ − ıa(x, μ), we can discretize this term over the lattice by integrating its action along the edge between two lattice points. Just as ∫_{x_i}^{x_j} ∂_μ ψ(x) dx_μ = ψ(x_j) − ψ(x_i) leads to a standard finite-
difference discretization of the first derivative, ∂μ , we may approximate
    ∫_{x_i}^{x_j} (∂_μ − ıa(x, μ)) ψ(x) dx_μ = ∫_{x_i}^{x_j} e^{ıa(x,μ)} ∂_μ ( e^{−ıa(x,μ)} ψ(x) ) dx_μ
                                             ≈ e^{ıa(x̂,μ)} ( e^{−ıa(x_j,μ)} ψ(x_j) − e^{−ıa(x_i,μ)} ψ(x_i) ).
Choosing x̂ = x_i leads to the approximation

    D_μ ψ(x) ≈ (1/h) ( e^{ı(a(x_i,μ)−a(x_j,μ))} ψ(x_j) − ψ(x_i) ),

while choosing x̂ = x_j leads to

    D_μ ψ(x) ≈ (1/h) ( ψ(x_j) − e^{ı(a(x_j,μ)−a(x_i,μ))} ψ(x_i) ).
Defining α(x_i, x_j) = a(x_i, μ) − a(x_j, μ) to be the weighting of the covariant derivative over the lattice edge between nodes i and j, we see that these two choices of x̂ lead to closely related forward and backward difference formulae. Defining the second-derivative stencil as the weighted difference of these forward and backward differences (to define the first derivatives at x_{i±1/2}, assuming nodes x_i, x_{i±1} are adjacent), we have

    −D_μ^2 ψ(x_i) ≈ (1/h^2) ( −e^{−ıα(x_{i−1},x_i)} ψ(x_{i−1}) + 2ψ(x_i) − e^{ıα(x_i,x_{i+1})} ψ(x_{i+1}) ).

Proper specification of the gauge potential is needed in order to appropriately model QCD applications. Here, we consider the case of a unit lattice spacing (h = 1) and take α(x_i, x_j) to be a random variable of the form α(x_i, x_j) = 2πβθ(x_i, μ), where β is a temperature parameter and θ(x_i, μ) is chosen independently for each node, x_i, and direction, μ, on the lattice from a normal distribution with mean 0 and variance 1. We consider doubly periodic two-dimensional lattices. For β = 0, this recovers the positive-semidefinite five-point finite-difference Laplacian; for β > 0, the matrices are positive definite and Hermitian. Figure 5.2 shows the convergence factors for three variants of multigrid applied to the discrete covariant Laplacian on a 513 × 513 grid as β increases.
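A Python sketch of this construction (an illustration under the conventions above, with a hypothetical helper name, not the authors' code): assemble the doubly periodic two-dimensional gauge Laplacian with unit spacing and edge phases α = 2πβθ, θ ~ N(0, 1):

    import numpy as np
    import scipy.sparse as sp

    def gauge_laplacian(n, beta, seed=0):
        rng = np.random.default_rng(seed)
        # One random phase per node and direction: alpha = 2*pi*beta*theta.
        alpha = 2 * np.pi * beta * rng.standard_normal((n, n, 2))
        A = sp.lil_matrix((n * n, n * n), dtype=complex)
        idx = lambda i, j: (i % n) * n + (j % n)   # doubly periodic indexing
        for i in range(n):
            for j in range(n):
                k = idx(i, j)
                A[k, k] = 4.0
                # Direction 0 couples (i,j)-(i+1,j); direction 1 couples (i,j)-(i,j+1).
                A[k, idx(i + 1, j)] = -np.exp(1j * alpha[i, j, 0])
                A[k, idx(i - 1, j)] = -np.exp(-1j * alpha[(i - 1) % n, j, 0])
                A[k, idx(i, j + 1)] = -np.exp(1j * alpha[i, j, 1])
                A[k, idx(i, j - 1)] = -np.exp(-1j * alpha[i, (j - 1) % n, 1])
        return A.tocsr()

For beta = 0 this reduces to the periodic five-point Laplacian, and the opposing edge phases make the matrix Hermitian for any beta.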
Fig. 5.2. Convergence factors for covariant Laplacian with varying β.
The simplest variant is multigrid based on full coarsening, with bilinear interpolation and full-weighting restriction. The method of [16, 21] is also considered, where the classical (Ruge–Stüben) AMG coarsening and interpolation are performed based on the real part of the matrix, denoted by “real AMG.” Finally, the complex AMG proposed here is also tested. For β = 0, the matrix is the usual five-point periodic Laplacian, and all three methods perform well. As β increases, however, the performance of geometric multigrid and real AMG quickly degrades, confirming that important information is lost when the imaginary part of A is discarded. Only the performance of the fully complex AMG remains consistently good as β increases. In particular, note that complex AMG converges roughly twice as fast as real AMG.

The complex AMG operator complexities remain steady for β away from 0; when β = 0.25, cA = 3.04, while cA = 3.05 for β = 1.0. Operator complexities for geometric multigrid are approximately 1.60 for all grids, while real AMG generates complexities that are similar to those of complex AMG for small β but that are somewhat smaller for larger β; when β = 0.25, cA = 2.63, while cA = 2.06 for β = 1.0. Thus, even with the performance advantage of real arithmetic over complex arithmetic, the complex AMG solver proposed here is more efficient than a solver with real-valued transfer operators for these problems. As β increases, the effect of the randomness in the definition of the covariant Laplacian becomes more pronounced.

An interesting test problem arises when the covariant Laplacian appears in combination with a Helmholtz term,

    −Σ_μ D_μ^2 ψ(x) + m ψ(x),
where the coefficient, m, is chosen so that the matrices remain positive definite but match the conditioning of the usual Laplacian. Such a shift mimics the behavior of the mass term in the Dirac operator. To do this, we compute the maximum-magnitude eigenvalue, λ, of the matrix, M, obtained by taking Σ_μ D_μ^2 (so that the off-diagonal entries have positive sign) and setting the diagonal to zero. By Gerschgorin's theorem, we expect the largest eigenvalue of −Σ_μ D_μ^2 to be approximately 4 + λ ≈ 8, while the smallest should be roughly 4 − λ, where 0 ≤ λ ≤ 4. Then, m is chosen so that 4 − λ + m = 8h^2, i.e., m = 8h^2 − (4 − λ). We then diagonally scale the matrix by 1/(4 + m) so that it has constant unit diagonal.

Even for large h, the effect of such a shift on AMG performance can be dramatic. The eigenvector approximation criterion [4, 19] states that, for AMG to be effective, each eigenvector of A must be approximated by something in the range of interpolation with accuracy proportional to its eigenvalue. For large eigenvalues of A, the shift by m has little effect on this approximation property. For the smallest eigenvalues, however, the shift by m has a significant effect, and these modes may be very slow to be resolved by a simple AMG cycle, as very accurate interpolation is needed to complement the very slow performance of relaxation on the modes associated with the smallest eigenvalues of A. Table 5.4 shows some representative AMG convergence factors.

However, this shift is relatively significant for only a few modes of the matrix and, thus, the poor AMG performance is easily overcome through the use of a Krylov subspace accelerator. As A is Hermitian and positive definite (and the AMG cycle can easily also be made so), we consider here the performance of an AMG-preconditioned conjugate gradient algorithm.
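The shift construction just described can be sketched in a few lines of Python (assuming the gauge_laplacian helper from the previous sketch, so that the unshifted diagonal is 4; eigsh is used for the maximum-magnitude eigenvalue of the Hermitian off-diagonal part):

    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def shift_and_scale(A, h=1.0):
        # Off-diagonal part with positive sign and zero diagonal.
        offdiag = -(A - sp.diags(A.diagonal()))
        lam = abs(spla.eigsh(offdiag, k=1, which='LM',
                             return_eigenvectors=False)[0])
        m = 8.0 * h**2 - (4.0 - lam)
        B = A + m * sp.eye(A.shape[0], dtype=A.dtype)
        return B / (4.0 + m)   # constant unit diagonal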
Table 5.4 Complex AMG convergence factors for the shifted covariant Laplacian, with variation in β and grid size.
               β = 0.0    β = 0.25    β = 0.5    β = 0.75    β = 1.0
65 × 65        0.104      0.988       0.993      0.993       0.990
129 × 129      0.143      0.998       0.997      0.997       0.998
257 × 257      0.166      0.9992      0.9996     0.9995      0.9993
513 × 513      0.231      0.99986     0.99987    0.99985     0.99988
Fig. 5.3. Convergence histories for geometric multigrid, AMG based on the real part of the matrix, complex AMG and adaptive complex AMG for the 513 × 513 shifted covariant Laplacian operator with β = 1.0. Solid lines indicate unaccelerated performance, while dashed lines indicate MG-PCG performance.
Figure 5.3 shows the convergence histories of geometric multigrid and AMG (both based on the real part of the matrix and the complex AMG proposed here) for β = 1.0. For all three methods, slow (and stalling) convergence is seen for the unaccelerated solvers, while the MG-PCG combinations converge (relatively) quickly. Notice that the complex-AMG-PCG combination beats the real-AMG-PCG technique for convergence to any fixed tolerance by a factor of roughly 2.

An alternative to preconditioning for overcoming the slowed convergence for the shifted covariant Laplacian is the use of adaptive multigrid techniques [9, 10]. In adaptive AMG [10], the approximation (2.4) used to collapse a strong connection between two fine-grid points, i and j, is replaced by one that takes into account the form of a representative algebraically smooth error exposed by adding an initial relaxation phase to the AMG setup algorithm. Thus, on each level in the AMG setup, we first relax on the homogeneous problem, Au = 0, with a random initial guess for u, to expose errors that relaxation is slow to resolve. This prototypical algebraically smooth error is then used in the definition of interpolation, in place of the AMG assumption that such errors vary slowly along strong connections. When this error is very different from the constant, the improvement in performance of adaptive AMG over classical AMG may be significant, as the classical AMG algorithm aims to satisfy the eigenvector approximation criterion for the constant vector only.

Figure 5.4 shows the algebraically smooth error found by performing 200 iterations of Gauss–Seidel relaxation on Au = 0 (so that the error is well resolved), with a random initial guess for u, on the shifted covariant Laplacian with β = 1.0 on a 65 × 65 grid. Thus, we expect a significant benefit of using adaptive AMG over the classical AMG assumption. Indeed, in Figure 5.3, the adaptive AMG convergence, both with and without PCG acceleration, is significantly better than that of any of the other methods.
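The relaxation probe used in the adaptive setup is simple to reproduce; a Python sketch (Gauss–Seidel written as a lower-triangular solve, with a hypothetical matrix A assumed to be a scipy CSR matrix):

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def smooth_error(A, sweeps=200, seed=1):
        rng = np.random.default_rng(seed)
        n = A.shape[0]
        u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
        L = sp.tril(A, k=0).tocsr()    # diagonal plus strictly lower part
        U = (A - L).tocsr()            # strictly upper part
        for _ in range(sweeps):        # Gauss-Seidel sweeps on A u = 0
            u = spla.spsolve_triangular(L, -(U @ u), lower=True)
        return u                       # representative algebraically smooth error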
Fig. 5.4. Algebraically smooth error for the shifted covariant Laplacian with β = 1.0 on a 65 × 65 mesh, with the real part shown at left and the imaginary part shown at right.

Table 5.5
Adaptive AMG setup times and convergence factors for the shifted covariant Laplacian.
               β = 0.5            β = 0.75           β = 1.0
              tsetup    ρ        tsetup    ρ        tsetup    ρ
65 × 65       0.04 s    0.431    0.04 s    0.375    0.03 s    0.454
129 × 129     0.2 s     0.341    0.2 s     0.308    0.2 s     0.440
257 × 257     0.8 s     0.467    1.1 s     0.463    0.9 s     0.391
513 × 513     5.3 s     0.576    3.5 s     0.442    3.6 s     0.457
Table 5.5 shows adaptive AMG setup times and convergence factors for several grid sizes and values of β. The convergence factors in Table 5.5 do not appear to degrade as β increases and degrade only slightly with increasing problem size. It is not immediately clear whether this degradation is due only to the increase in grid size or whether it is related to the changes in the random sample taken for the gauge field on each grid. Setup times scale nearly linearly with problem size, although a slight increase in the work needed (relative to problem size) for the adaptive AMG setup stage is required for each finer grid. In comparison, the setup time for regular AMG on the 513 × 513 grid is 3.2 s, while approximately 0.36 s is required for a single V(1,1) cycle on that grid. Thus, the increase in cost of the adaptive AMG setup phase over the regular AMG setup phase (3.5–5.3 seconds versus 3.2 seconds) is roughly equivalent to the cost of one to six V-cycles, much less than the expected improvement offered in the adaptive AMG solve phase.

6. Conclusions. A natural extension of the algebraic multigrid method for complex-valued matrices is presented. Unlike previous extensions, our approach is completely algebraic in nature and relies on no special structure of the complex-valued matrix. Choices for the generalization are motivated by a combination of classical multigrid considerations and local Fourier analysis. Numerical results confirm the performance on simple model problems, realistic complex Helmholtz problems on unstructured meshes, and, in combination with Krylov acceleration or adaptive multigrid ideas, on ill-conditioned matrices based on covariant derivatives.

Acknowledgments. The authors would like to thank Domenico Lahaye for the problems considered in section 5.2 and James Brannick and Mike Clark for helpful discussions in defining the covariant Laplacian operators considered in section 5.3.
REFERENCES
[1] M. F. Adams, Algebraic multigrid methods for direct frequency response analyses in solid mechanics, Comput. Mech., 39 (2007), pp. 497–507.
[2] R. E. Alcouffe, A. Brandt, J. E. Dendy, and J. W. Painter, The multigrid method for the diffusion equation with strongly discontinuous coefficients, SIAM J. Sci. Statist. Comput., 2 (1981), pp. 430–454.
[3] A. Brandt, Multi-level adaptive solutions to boundary-value problems, Math. Comp., 31 (1977), pp. 333–390.
[4] A. Brandt, Algebraic multigrid theory: The symmetric case, Appl. Math. Comput., 19 (1986), pp. 23–56.
[5] A. Brandt, General highly accurate algebraic coarsening, Electron. Trans. Numer. Anal., 10 (2000), pp. 1–20.
[6] A. Brandt, S. F. McCormick, and J. W. Ruge, Algebraic Multigrid (AMG) for Automatic Multigrid Solution with Application to Geodetic Computations, Tech. report, Institute for Computational Studies, Colorado State University, 1982.
[7] J. Brannick, M. Brezina, D. Keyes, O. Livne, I. Livshits, S. MacLachlan, T. Manteuffel, S. McCormick, J. Ruge, and L. Zikatanov, Adaptive smoothed aggregation in lattice QCD, in Domain Decomposition Methods in Science and Engineering XVI, Lecture Notes in Comput. Sci. Engrg. 55, Springer, New York, 2007, pp. 505–512.
[8] J. Brannick, M. Brezina, S. MacLachlan, T. Manteuffel, S. McCormick, and J. Ruge, An energy-based AMG coarsening strategy, Numer. Linear Algebra Appl., 13 (2006), pp. 133–148.
[9] M. Brezina, R. Falgout, S. MacLachlan, T. Manteuffel, S. McCormick, and J. Ruge, Adaptive smoothed aggregation (αSA) multigrid, SIAM Rev., 47 (2005), pp. 317–346.
[10] M. Brezina, R. Falgout, S. MacLachlan, T. Manteuffel, S. McCormick, and J. Ruge, Adaptive algebraic multigrid, SIAM J. Sci. Comput., 27 (2006), pp. 1261–1286.
[11] M. Brezina, T. Manteuffel, S. McCormick, J. Ruge, G. Sanders, and P. Vassilevski, On Smooth Aggregation Multigrid for Nonsymmetric A, Tech. report, University of Colorado, Boulder, 2006.
[12] C. Davies and P. Lepage, Lattice QCD meets experiment in hadron physics, AIP Conf. Proc., 717 (2004), pp. 615–624.
[13] D. Day and M. A. Heroux, Solving complex-valued linear systems via equivalent real formulations, SIAM J. Sci. Comput., 23 (2001), pp. 480–498.
[14] J. E. Dendy, Black box multigrid for nonsymmetric problems, Appl. Math. Comput., 13 (1983), pp. 261–283.
[15] Y. A. Erlangga, C. W. Oosterlee, and C. Vuik, A novel multigrid based preconditioner for heterogeneous Helmholtz problems, SIAM J. Sci. Comput., 27 (2006), pp. 1471–1492.
[16] D. Lahaye, H. De Gersem, S. Vandewalle, and K. Hameyer, Algebraic multigrid for complex symmetric systems, IEEE Trans. Magn., 36 (2000), pp. 1535–1538.
[17] S. MacLachlan, T. Manteuffel, and S. McCormick, Adaptive reduction-based AMG, Numer. Linear Algebra Appl., 13 (2006), pp. 599–620.
[18] S. MacLachlan and Y. Saad, A greedy strategy for coarse-grid selection, SIAM J. Sci. Comput., 29 (2007), pp. 1825–1853.
[19] S. F. McCormick and J. W. Ruge, Multigrid methods for variational problems, SIAM J. Numer. Anal., 19 (1982), pp. 924–929.
[20] I. Montvay and G. Münster, Quantum Fields on a Lattice, Cambridge Monogr. Math. Phys., Cambridge University Press, Cambridge, 1994.
[21] S. Reitzinger, U. Schreiber, and U. van Rienen, Algebraic multigrid for complex symmetric matrices and applications, J. Comput. Appl. Math., 155 (2003), pp. 405–421.
[22] M. Ries, U. Trottenberg, and G. Winter, A note on MGR methods, Linear Algebra Appl., 49 (1983), pp. 1–26.
[23] J. W. Ruge and K. Stüben, Algebraic Multigrid (AMG), in Multigrid Methods, S. F. McCormick, ed., Frontiers in Appl. Math. 3, SIAM, Philadelphia, 1987, pp. 73–130.
[24] J. Smit, Introduction to Quantum Fields on a Lattice, Cambridge Lecture Notes Phys. 15, Cambridge University Press, Cambridge, 2002.
[25] K. Stüben and U. Trottenberg, Multigrid Methods: Fundamental Algorithms, Model Problem Analysis and Applications, in Multigrid Methods, W. Hackbusch and U. Trottenberg, eds., Lecture Notes in Math. 960, Springer-Verlag, Berlin, 1982, pp. 1–176.
[26] U. Trottenberg, C. W. Oosterlee, and A. Schüller, Multigrid, Academic Press, London, 2001.
[27] P. Vaněk, J. Mandel, and M. Brezina, Two-Level Algebraic Multigrid for the Helmholtz Problem, in Domain Decomposition Methods 10 (Boulder, CO, 1997), Contemp. Math. 218, AMS, Providence, RI, 1998, pp. 349–356.
[28] P. Vaněk, J. Mandel, and M. Brezina, Algebraic multigrid by smoothed aggregation for second and fourth order elliptic problems, Computing, 56 (1996), pp. 179–196.
[29] R. S. Varga, On recurring theorems on diagonal dominance, Linear Algebra Appl., 13 (1976), pp. 1–9.
[30] R. Wienands and C. W. Oosterlee, On three-grid Fourier analysis for multigrid, SIAM J. Sci. Comput., 23 (2001), pp. 651–671.
[31] D. M. Young, Iterative Solution of Large Linear Systems, Academic Press, New York, 1971.
© 2008 Society for Industrial and Applied Mathematics
SIAM J. SCI. COMPUT. Vol. 30, No. 3, pp. 1572–1595
MULTILEVEL PROJECTION-BASED NESTED KRYLOV ITERATION FOR BOUNDARY VALUE PROBLEMS∗ YOGI A. ERLANGGA† AND REINHARD NABBEN† Abstract. We propose a multilevel projection-based method for acceleration of Krylov subspace methods. The projection is constructed in a similar way as in deflation but shifts small eigenvalues to the largest one instead of to zero. In contrast with deflation, however, the convergence rate of a Krylov method combined with this new projection method is insensitive to an inaccurate solve of the Galerkin matrix, which with some particular choice of deflation subspaces is closely related to the coarse-grid solve in multigrid or domain decomposition methods. Such an insensitivity allows the use of inner iterations to solve the Galerkin problem. An application of a Krylov subspace method to the associated Galerkin system with the projection preconditioner leads to a multilevel, nested Krylov iteration. In this multilevel projection Krylov subspace method, information about small eigenvalues to be projected is contained implicitly in the Galerkin system associated with the matrix of the linear system to be solved. These small eigenvalues, from a Krylov method point of view, are responsible for slow convergence. In terms of projection methods, this is conceptually similar to multigrid but different in the sense that in multigrid the projection is done by the smoother. Furthermore, with the only condition being that the deflation matrices are full rank, we have in principle more freedom in choosing the deflation subspace. Intergrid transfer operators used in multigrid are some of the possible candidates. We present numerical results from solving the Poisson equation and the convection-diffusion equation, both in two dimensions. The latter represents the case where the related matrix of coefficients is nonsymmetric. By using a simple piecewise constant interpolation as the basis for constructing the deflation subspace, we obtain the following results: (i) h-independent convergence for the Poisson equation and (ii) convergence almost independent of h and the Péclet number for the convection-diffusion equation. Key words. Krylov subspace, GMRES, deflation, multilevel projection, Poisson equation, convection-diffusion equation AMS subject classifications. 65F10, 65F50, 65N22, 65N55 DOI. 10.1137/070684550
1. Introduction. It is well known that the convergence of Krylov subspace methods applied to the linear system

(1.1)    Au = b,    A ∈ R^{n×n},
depends to some extent on the spectrum of A. Suppose that A is symmetric positive definite (SPD). The convergence of the conjugate gradient (CG) method then depends in particular on the condition number of A, denoted by κ(A) [19], which in this case is equal to the ratio of the largest eigenvalue to the smallest one. If A is obtained from a discretization of PDEs with elliptic, self-adjoint operators, the convergence is then typically characterized by the smallest eigenvalues, which can be of order O(h^2). So, as the grid is refined, the convergence usually deteriorates.

For general matrices, no practical convergence bounds similar to the SPD case exist so far. Nevertheless, it is commonly accepted that a Krylov subspace method will converge faster if A has a more clustered spectrum. Similar to the SPD case, the convergence, however, may be hampered if some eigenvalues are very close to zero.

∗ Received by the editors March 7, 2007; accepted for publication (in revised form) September 20, 2007; published electronically April 18, 2008. The research was supported by the Deutsche Forschungsgemeinschaft (DFG) project NA248/2-2. http://www.siam.org/journals/sisc/30-3/68455.html
† Institut für Mathematik, TU Berlin, MA 3-3, Strasse des 17. Juni 136, D-10623 Berlin, Germany ([email protected], [email protected]).
In order to improve the convergence, preconditioners are usually incorporated. A usual way of understanding the role of a preconditioner in Krylov subspace methods is again by looking at the condition number of the preconditioned system (in the case of SPD systems). An efficient preconditioning matrix M is usually chosen such that the condition number of M^{-1}A, denoted by κ(M^{-1}A), is close to one. Hence, it is very natural to seek M such that M^{-1}A ≈ I, where I is the identity. Incomplete decompositions (e.g., incomplete LU and incomplete Cholesky) and the approximate inverse, among others, belong to this class of preconditioners. This approach, however, does not exploit detailed information about the spectrum of A.

Since small eigenvalues are responsible for slow convergence, the convergence of a Krylov subspace method can be accelerated if, by any means, components in the residuals which correspond to the small eigenvalues can be removed during the iterations. One way to achieve this is by deflating a number of the smallest eigenvalues to zero. Nicolaides [14] shows that, by adding some vectors related to the smallest eigenvalues, the convergence of CG may be improved. For GMRES, Morgan [10] shows that, by augmenting the Krylov subspace with some eigenvectors related to small eigenvalues, these eigenvectors no longer have components in the residuals, and the convergence bound of GMRES can be made smaller; thus, a faster convergence may be expected. See a somewhat unified discussion on this subject by Eiermann, Ernst, and Schneider in [1]. A similar approach is proposed in [6], where a projection matrix resembling deflation of some small eigenvalues is used as a preconditioner. Suppose that the r smallest eigenvalues are to be deflated to zero. Define the projection

(1.2)    P_D = I − AZE^{-1}Y^T,    E = Y^T AZ,
where Z, Y ∈ R^{n×r} have rank r. Here Z is called the deflation subspace. It can be proved [6, 12] that, if A is SPD and Z = Y are any rectangular matrices with full rank, then the spectrum of P_D A contains r zero eigenvalues. Furthermore, it has also been shown in [11] that, with larger r, the effective condition number reduces as well. Hence, if one uses a larger deflation subspace, the convergence can be improved considerably.

A larger deflation subspace, however, raises a negative implication. Since in the preconditioning step one needs to compute E^{-1}, a larger deflation subspace can make this computation more difficult and expensive. Related to this, it has been shown in [12] that P_D is sensitive to inaccurate inversion of E. This means that E must be inverted exactly (by a direct method for sparse systems) or iteratively up to machine precision to obtain fast convergence.

We note here that the form E = Y^T AZ in (1.2) is similar to the coarse-grid matrix used in multigrid and domain decomposition methods. If Z ≠ Y, it is called a Petrov–Galerkin coarse-grid approximation. In our discussion, we do not particularly call this term “the coarse-grid matrix.” Instead, we generally refer to the product E = Y^T AZ as the “Galerkin matrix” associated with the matrix A and deflation subspaces Z and Y. The linear system related to this Galerkin matrix is then called the “Galerkin system.”

Of importance is that, compared to some other forms of projection-like preconditioners, e.g., the abstract balancing preconditioner [9] or the additive coarse-grid correction preconditioner [15], under certain conditions, the deflation preconditioner leads to better-conditioned systems and faster convergence [11, 12, 13, 3, 21]. Furthermore, with a straightforward implementation, a CG with deflation, for example, requires fewer matrix-vector multiplications than a CG with the abstract balancing preconditioner. The convergence rate of CG with the abstract balancing preconditioner is, however, insensitive to an inexact solve of the Galerkin system.
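For concreteness, one application of P_D to a vector costs one multiplication with A, two thin matrix-vector products, and one small r × r solve; a minimal Python sketch (an illustration, not the paper's algorithm):

    from scipy.linalg import lu_factor, lu_solve

    class Deflation:
        """Apply P_D = I - A Z E^{-1} Y^T with E = Y^T A Z factored once."""
        def __init__(self, A, Z, Y):
            self.A, self.Z, self.Y = A, Z, Y
            self.E = lu_factor(Y.T @ (A @ Z))   # small r x r Galerkin matrix
        def apply(self, v):
            w = lu_solve(self.E, self.Y.T @ v)  # E^{-1} Y^T v
            return v - self.A @ (self.Z @ w)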
It is actually the sensitivity of deflation to an inexact solve of the Galerkin system which hinders the use of a large deflation subspace to enhance the convergence. In this case, work spent on this solve will undermine gains in the number of iterations.

In this paper, we propose another projection-like method which allows the exploitation of a large deflation subspace. The proposed method is found to be stable with respect to an inaccurate solve of the Galerkin system. With this advantage, an iterative procedure based on a Krylov method can be designed to accomplish this solve, which can then be considered as approximately solving a coarse-grid problem. The convergence rate for solving the Galerkin system is accelerated by employing the same projection-type preconditioner on the Galerkin matrix E, leading to multilevel, nested iterations. We call the resultant method the “multilevel projection Krylov method.” Assuming that the inverse of the Galerkin matrix E is available, GMRES [17] combined with our stable projection preconditioner requires only one additional matrix-vector multiplication. This is the same as for the unstable deflation preconditioner and less than for the stable abstract balancing preconditioner.

This paper is organized as follows. In section 2, we discuss deflation from an eigenvalue computation point of view. We show that the deflation preconditioner (1.2) can be derived from the deflation process in eigenvalue computations. Based on this view, we propose our stable abstract projection-type preconditioner. In section 3, a spectral analysis is given for the projected linear systems. We establish that, for arbitrary full-rank deflation matrices Z and Y, the deflation and new projection preconditioners lead to projected linear systems of similar spectra. Sections 4 and 5 discuss implementation aspects of the abstract projection preconditioner. Particularly in section 5, we develop our multilevel projection algorithm and give a brief discussion of its relation with multigrid. One of the possible choices of the deflation subspace, based on agglomeration, is discussed in section 6. Computational results are presented in section 7 for two model problems: the Poisson equation and the convection-diffusion equation. Section 8 draws some conclusions.

2. Wielandt deflation and deflation preconditioner. For the purpose of analysis, we consider more general notation for the linear system (1.1), namely,

(2.1)    Âû = b̂,
where Â is nonsingular and in general nonsymmetric. Â can be understood as the preconditioned matrix. For example, for left preconditioning, Â = M^{-1}A, and therefore û = u and b̂ = M^{-1}b. Â can also represent a split preconditioned matrix, i.e., Â = M_1^{-1}AM_2^{-1}, with û = M_2 u and b̂ = M_1^{-1}b.

We assume that the eigenvalues of Â, denoted by λ(Â), or λ if it is clear from the context, are all positive real numbers. Denote the spectrum of Â by σ(Â). Then σ(Â) = {λ_1, . . . , λ_n} and λ_i ≤ λ_{i+1} for all i; hence, we order the eigenvalues of Â increasingly. This spectral assumption is satisfied by symmetric positive definite matrices. For nonsymmetric matrices, this assumption does not generally hold. Unless the symmetric part is sufficiently dominating, the eigenvalues can be complex. For the convection-diffusion equation, this assumption is satisfied if the diffusion part is sufficiently dominating.

We start our discussion by looking at the action of the deflation preconditioner (1.2) on Â. But in doing so, we prefer not to follow the augmented Krylov subspace point of view. Instead, we will formulate the deflation preconditioner alternatively from the deflation process used in computing a few largest or smallest eigenvalues in eigenvalue
computations. Note that, due to the use of new notation, (1.2) is now rewritten as P_D̂ = I − ÂZÊ^{-1}Y^T, where Ê = Y^T ÂZ.

Consider the Power method [5, 27, 18] applied to Â^{-1}. The method iteratively computes an eigenvalue of Â^{-1}, which in this case is equal to the smallest eigenvalue (in magnitude) of Â, i.e., λ_1, and the corresponding eigenvector z_1. The second smallest eigenvalue can also be computed by using the Power method, but after deflating λ_1 to zero. Define the deflation process as follows:

(2.2)    Â_1 = Â − λ_1 z_1 y^T,    y^T z_1 = 1,

with y an arbitrary vector yet to be defined. This is called the Wielandt deflation [5, 27, 18]. Application of the Power method on Â_1^{-1} results in the eigenpair (λ_2, z_2).

In eigenvalue computations, shifting the first, already-computed, small eigenvalues to zero is, however, not the only way to ensure computing the next smallest eigenvalue, if we are concerned only with a few, say, r ≪ n, smallest eigenvalues. For example, we define the generalized deflation process as follows:

(2.3)    Â_{1,γ_1} = Â − γ_1 z_1 y^T,    y^T z_1 = 1,    γ_1 ∈ R.
Regarding this generalized deflation, we have the following theorem.

Theorem 2.1 (generalized Wielandt). Let Â_{1,γ_1} be defined as in (2.3). The spectrum of Â_{1,γ_1} is then given by

(2.4)    σ(Â_{1,γ_1}) = {λ_1 − γ_1, λ_2, . . . , λ_n}.
Proof. The proof follows, e.g., [5, 27, 18]. For i ≠ 1, the left eigenvector y_i of Â_{1,γ_1} satisfies the relation y_i^T Â_{1,γ_1} = y_i^T (Â − γ_1 z_1 y^T) = λ̃_i y_i^T, or Â_{1,γ_1}^T y_i = (Â^T − γ_1 y z_1^T) y_i = Â^T y_i − γ_1 y z_1^T y_i = λ_i y_i, due to the orthogonality of z_1 and y_i. Since the eigenvalues of a matrix and its transpose are the same, the above statement is proved for i ≠ 1. For i = 1, Â_{1,γ_1} z_1 = (Â − γ_1 z_1 y^T) z_1 = Âz_1 − γ_1 z_1 y^T z_1 = (λ_1 − γ_1) z_1.

It is clear from the above theorem that, by choosing γ_1 = λ_1, we obtain a spectrum which is related to the Wielandt deflation (2.2). But, if we are concerned only with λ_2, this value can also be captured from Â_{1,γ_1}^{-1} by the Power method if λ_1 − γ_1 ≥ λ_2. The shift γ_1 cannot, however, be chosen arbitrarily, because it is possible that 0 < λ_1 − γ_1 < λ_2. In this case, the Power method will return λ_1 − γ_1 as the smallest eigenvalue of Â_{1,γ_1} and not λ_2. The problem, however, is that λ_2 is the value that we are going to compute from Â_{1,γ_1}; hence, λ_2 is not known yet. Since λ_1, λ_2 ≤ λ_n, one alternative for the shift, without having to know any information about λ_2, is to choose γ_1 = λ_1 − λ_n. In this case, by Theorem 2.1 we actually shift λ_1 to λ_n, and the application of the Power method to Â_{1,γ_1}^{-1} gives λ_2. We summarize these choices of shifts in the following corollary.

Corollary 2.2. Let Â_{1,γ_1} be defined as in (2.3). Then, (i) for γ_1 = λ_1, σ(Â_{1,γ_1}) = {0, λ_2, . . . , λ_n} (Wielandt deflation), and (ii) for γ_1 = λ_1 − λ_n, σ(Â_{1,γ_1}) = {λ_n, λ_2, . . . , λ_n}.

We now suppose that the r smallest eigenvalues are already known. The (r + 1)th eigenvalue can next be computed by the Power method by first applying simultaneous deflation of the r smallest eigenvalues to Â. Denote by Z = [z_1 . . . z_r] the eigenvector matrix corresponding to the first r eigenvalues, and define

(2.5)    Â_r = Â − ZΓ_r Y^T,    Y^T Z = I,

where Γ_r = diag(γ_1, . . . , γ_r) and Y = [y_1 . . . y_r]. Z is often called the deflation subspace.
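A small numerical illustration of these two shift choices (a hypothetical 6 × 6 SPD example with known spectrum, taking y = z_1 the normalized eigenvector):

    import numpy as np

    rng = np.random.default_rng(0)
    Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))
    lam = np.array([0.01, 0.1, 0.5, 1.0, 2.0, 4.0])
    A = Q @ np.diag(lam) @ Q.T          # SPD with known spectrum
    z = Q[:, [0]]                       # unit eigenvector for lambda_1; y = z
    for gamma in (lam[0], lam[0] - lam[-1]):
        A1 = A - gamma * (z @ z.T)
        print(np.round(np.sort(np.linalg.eigvalsh(A1)), 3))
    # First line: lambda_1 deflated to 0; second: lambda_1 shifted to lambda_n.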
The following theorem is related to deflation (2.5).

Theorem 2.3. Let Â_r be defined as in (2.5), with Γ_r = diag(γ_1, . . . , γ_r) and Y^T Z = I. Then, for i = 1, . . . , r:
(i) for γ_i = λ_i, the spectrum of Â_r is σ(Â_r) = {0, . . . , 0, λ_{r+1}, . . . , λ_n};
(ii) for γ_i = λ_i − λ_n, the spectrum of Â_r is σ(Â_r) = {λ_n, . . . , λ_n, λ_{r+1}, . . . , λ_n}.

Proof. We consider the case with Γ_r = diag(γ_1, . . . , γ_r). From (2.5), for i = 1, . . . , r, we have

    Â_r z_i = (Â − ZΓ_r Y^T) z_i = Âz_i − [z_1 . . . z_r] diag(γ_1, . . . , γ_r) [y_1 . . . y_r]^T z_i
            = λ_i z_i − γ_i z_i = (λ_i − γ_i) z_i,

because of the orthogonality condition in (2.5). For i = r + 1, . . . , n, the orthogonality condition leads to the relation Â_r z_i = λ_i z_i. The desired results follow immediately after substituting the particular choices of Γ_r.

Thus, while the first r eigenvalues are simultaneously deflated (or shifted), the remaining n − r eigenvalues are untouched. The rectangular matrix Y is, however, yet to be defined.

Let Y = [y_1 . . . y_r] be the eigenvector matrix of Â^T. In this case, the eigenvalue matrix Ê = diag(λ_1, . . . , λ_r) satisfies the relation Ê = Y^T ÂZ. We first consider the case where γ_i = λ_i, i = 1, . . . , r. In this case, Γ_r = Ê, and

    Â_r = Â − ZÊY^T = Â − ZÊÊ^{-1}ÊY^T = Â − ÂZÊ^{-1}Y^T Â,

because ÂZ = ZÊ and Â^T Y = YÊ. Thus,

(2.6)    Â_r = (I − ÂZÊ^{-1}Y^T)Â =: P_D̂ Â,

or

(2.7)    Â_r = Â(I − ZÊ^{-1}Y^T Â) =: ÂQ_D̂.
(In this paper, the notations P (e.g., P_D̂) and Q (e.g., Q_D̂) are reserved for denoting the left and the right projection preconditioner, respectively.) So, suppose that (2.1) is left preconditioned by P_D̂ := I − ÂZÊ^{-1}Y^T, i.e.,

(2.8)    P_D̂ Âû = P_D̂ b̂.

The spectrum of P_D̂ Â is then given by Theorem 2.3. With Z and Y consisting of the eigenvectors of Â and Â^T, respectively, σ(P_D̂ Â) = σ(ÂQ_D̂).

If a Krylov subspace method is applied to (2.8), the method implicitly approximates eigenvalues of P_D̂ Â. Since the smallest eigenvalue of P_D̂ Â is λ_{r+1}, the method will approximate λ_{r+1} and not λ_1. In the Krylov subspace terminology, this also means that the eigenspace spanned by Z, which is associated with the eigenvalues λ_i, i = 1, . . . , r, no longer has components in the residuals; these components are projected out of the residual. (This fact gives rise to the notion of a “projection preconditioner,” and P_D̂ is one of the projection preconditioners.) It is then instructive to use the effective condition number κ_eff instead of the usual condition number κ to measure the convergence rate of a Krylov method. If Â is symmetric positive definite, we then have that κ_eff := λ_n/λ_{r+1} ≤ λ_n/λ_1 =: κ because λ_{r+1} ≥ λ_1. In this case, a Krylov subspace method applied to P_D̂ Â will converge faster than if applied to Â.

We now consider the case where γ_i = λ_i − λ_n, i = 1, . . . , r. In this case, Γ_r = Ê − λ_n I_r, with I_r the identity matrix of appropriate size. For Y = [y_1 . . . y_r] the
eigenvector matrix of Â^T and Ê = Y^T ÂZ, we have

    Â_{r,γ} = Â − ZΓ_r Y^T = Â − Z(Ê − λ_n I_r)Y^T = Â − ZÊY^T + λ_n ZY^T
            = Â − ZÊÊ^{-1}ÊY^T + λ_n ZÊ^{-1}ÊY^T = Â − ÂZÊ^{-1}Y^T Â + λ_n ZÊ^{-1}Y^T Â.

Thus,

(2.9)    Â_{r,γ} = (I − ÂZÊ^{-1}Y^T + λ_n ZÊ^{-1}Y^T)Â =: P_N̂ Â.
Similar to P_D̂ Â, if a Krylov subspace method is applied to P_N̂ Â, λ_{r+1} will be the smallest eigenvalue to be approximated and not λ_1. Since P_D̂ Â and P_N̂ Â are spectrally similar (cf. Theorem 2.3), one may expect that the convergence of a Krylov subspace method applied to them should also be similar.

The form (2.9) is the left preconditioning version of this type of projection. A right preconditioning version of it can be derived as follows:

    Â_{r,γ} = Â − ZΓ_r Y^T = Â − ZÊY^T + λ_n ZY^T
            = Â − ZÊÊ^{-1}ÊY^T + λ_n ÂZÊ^{-1}Y^T = Â − ÂZÊ^{-1}Y^T Â + λ_n ÂZÊ^{-1}Y^T.

Thus,

(2.10)    Â_{r,γ} = Â(I − ZÊ^{-1}Y^T Â + λ_n ZÊ^{-1}Y^T) =: ÂQ_N̂.
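In practice, one application of P_N̂ reuses the same small Galerkin solve as deflation plus one extra scaled term; a Python sketch (E_lu denotes a precomputed LU factorization of E = Y^T A Z, e.g., from scipy's lu_factor, and lam_n may be an estimate of λ_n, cf. section 4.2):

    from scipy.linalg import lu_solve

    def apply_PN(A, Z, Y, E_lu, lam_n, v):
        # P_N v = v - A Z E^{-1} Y^T v + lam_n Z E^{-1} Y^T v, cf. (2.9).
        w = lu_solve(E_lu, Y.T @ v)     # one small r x r solve, reused twice
        return v - A @ (Z @ w) + lam_n * (Z @ w)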
Remark 2.4 (balancing preconditioner). Instead of shifting small eigenvalues to zero, as in deflation, small eigenvalues can also be shifted to one, as in the case of the balancing preconditioner [9, 8], widely used in domain decomposition methods. In abstract formulation, for Â = A, the balancing preconditioner can be written as

(2.11)    P_B = Q_D M^{-1} P_D + ZE^{-1}Y^T,    E = Y^T AZ,
where M is the preconditioner associated with the coarse-grid solve. See also the monographs [20, 22] for different expositions of the balancing preconditioner and [12, 3] for comparisons with deflation.

Remark 2.5 (additive coarse-grid correction). The idea of shifting the small eigenvalues towards the maximum eigenvalue is not new and has been discussed in [15]. The resultant projection operator in that paper is, however, different from ours. In fact, the projection operator in [15] belongs to the additive coarse-grid correction preconditioner.

Remark 2.6 (multistep fixed point iteration). Many existing multilevel methods can be viewed as multistep fixed point iterations. A multistep fixed point iteration consists of a sequence of smoothing and coarse-grid correction. A multigrid or domain decomposition method can always be represented by or decomposed into this sequence. From an abstract point of view, e.g., the balancing preconditioner is a fixed point iteration with the following steps: coarse-grid correction, smoothing, and then coarse-grid correction. The new error e_new can be related to the previous one by

(2.12)    e_new = K e_old,
where K is the iteration matrix. For P_N̂, however, we could not find a decomposition which associates it with a multistep fixed point iteration. Hence, similar to deflation, it is somewhat necessary
to see P_N̂ Â fully only from the Krylov subspace iteration context and not from a fixed point iteration.

It has been shown for SPD matrices in [12] and for nonsymmetric matrices in [3] that, for some starting vectors û_0, a Krylov subspace method with deflation produces, respectively, an Â-norm of the error and a 2-norm of the residual which are never larger than those for the abstract balancing preconditioner. Since P_N̂ Â and P_D̂ Â (as well as ÂQ_N̂ and ÂQ_D̂) have similar spectra, it is also worthwhile to consider P_N̂ and Q_N̂ as another type of projection-based preconditioner for Â.

We note here that our definition of P_D̂ is similar to that used in [16]. By direct computation, we have that P_D̂^2 = P_D̂; hence, P_D̂ is a projector. P_N̂, however, is not a projector, because P_N̂^2 ≠ P_N̂. We classify P_N̂ as a projection preconditioner only because the action of P_N̂ on Â results in the projection of some eigenvalues of Â to a value suitable for convergence acceleration of a Krylov subspace method.

3. Further spectral properties. In this section, we focus mainly on the left projection preconditioner P_N̂ Â, with Â = M^{-1}A. We rewrite P_N̂ below:

(3.1)    P_N̂ ≡ I − ÂZÊ^{-1}Y^T + λ_n ZÊ^{-1}Y^T = P_D̂ + λ_n ZÊ^{-1}Y^T,
where Ê = Y^T ÂZ = Y^T M^{-1}AZ. We first note that, if Â is symmetric, it is natural to set Y = Z in order to preserve symmetry. This choice results in a symmetric P_D̂ Â for a symmetric Â. But, even with Â symmetric and Z = Y, P_N̂ Â is nonsymmetric.

In relation to P_D̂, we have the following properties, which are valid for any possible preconditioning formulation for Â: M^{-1}A (left), AM^{-1} (right), and M_1^{-1}AM_2^{-1} (split preconditioning).

Lemma 3.1. Let Z, Y ∈ R^{n×r} be any matrices of rank r, and let Â be nonsingular. Then the following relations hold:
(i) P_D̂ ÂZ = Y^T ÂQ_D̂ = 0;
(ii) Y^T P_D̂ = Q_D̂ Z = 0;
(iii) P_D̂ Â = ÂQ_D̂.
Proof. As all equalities can be established by direct computations, we show only the proof of the first part of (i); the rest can be proved similarly. In this case, we have

(3.2)    P_D̂ ÂZ = ÂZ − ÂZÊ^{-1}Y^T ÂZ = 0,

due to Ê = Y^T ÂZ.

In section 2 we considered a projection which is based on invariant vectors associated with Â (and Â^T). In practice, such invariant vectors are not known and have yet to be computed. This computation itself is already expensive. In order to make the projection preconditioner viable, we need to use vectors which are different from eigenvectors to construct Z and Y. In the discussions to follow, we neither specify the vectors for Z and Y nor determine how to construct them. We just assume that such vectors are available.

We now need to see how the spectrum of P_N̂ Â (and ÂQ_N̂) behaves under “arbitrary” Z and Y. We have an intermediate result, which establishes the spectral equivalence of P_N̂ Â with another but similar preconditioner, as follows.

Theorem 3.2. Let Z, Y ∈ R^{n×r} be any rectangular matrices of rank r, and let P_N̂ be defined as in (3.1). Define P_B̃ = Q_D̂ P_D̂ + λ_n ZÊ^{-1}Y^T. Then

(3.3)    σ(P_N̂ Â) = σ(P_B̃ Â).
Proof. We have that

    P_N̂ Â ≡ P_D̂ Â + λ_n ZÊ^{-1}Y^T Â = P_D̂^2 Â + λ_n ZÊ^{-1}Y^T Â
          = P_D̂ ÂQ_D̂ + λ_n ZÊ^{-1}Y^T Â = P_D̂ ÂQ_D̂ + λ_n I − λ_n Q_D̂
          = λ_n (I + (λ_n^{-1} P_D̂ Â − I)Q_D̂).

Thus,

    λ ∈ σ(P_N̂ Â) = σ(λ_n (I + (λ_n^{-1} P_D̂ Â − I)Q_D̂)) = λ_n + μ,

with μ ∈ σ(λ_n (λ_n^{-1} P_D̂ Â − I)Q_D̂). But

    σ(λ_n (λ_n^{-1} P_D̂ Â − I)Q_D̂) = σ(λ_n Q_D̂ (λ_n^{-1} P_D̂ Â − I)).

Hence,

    λ ∈ σ(P_N̂ Â) = σ(λ_n I + (P_D̂ Â − λ_n I)Q_D̂) = σ(λ_n I + Q_D̂ (P_D̂ Â − λ_n I))
                = σ(λ_n I + Q_D̂ P_D̂ Â − λ_n Q_D̂) = σ((Q_D̂ P_D̂ + λ_n ZÊ^{-1}Y^T)Â),

which completes the proof.

Notice that the preconditioner P_B̃ := Q_D̂ P_D̂ + λ_n ZÊ^{-1}Y^T is very similar to the abstract balancing preconditioner and is the same as the abstract balancing preconditioner if M = I and λ_n = 1 (cf. (2.11) in Remark 2.4).

We now consider P_D̂ Â. For arbitrary full-rank Z and Y, the first r eigenvalues of P_D̂ Â are zero. (This is because P_D̂ ÂZ = 0; cf. Lemma 3.1.) Denote by μ_i the eigenvalues of P_D̂ Â for r + 1 ≤ i ≤ n. The corresponding eigenvectors are z_i; thus P_D̂ Âz_i = μ_i z_i. The next theorem relates the spectrum of P_D̂ Â with that of P_B̃ Â.

Theorem 3.3. Let Z, Y ∈ R^{n×r} be any matrices of rank r. Let Â be nonsingular. If
(3.4)    σ(P_D̂ Â) = {0, . . . , 0, μ_{r+1}, . . . , μ_n},

then

(3.5)    σ(P_B̃ Â) = {λ_n, . . . , λ_n, μ_{r+1}, . . . , μ_n}.
Proof. For i = 1, . . . , r, we know that P_D̂ ÂZ = 0. Thus,

(3.6)    P_B̃ ÂZ = Q_D̂ P_D̂ ÂZ + λ_n ZÊ^{-1}Y^T ÂZ = λ_n Z.

In this case, the columns of Z = [z_1 . . . z_r] are eigenvectors of P_D̂ Â and P_B̃ Â. For r + 1 ≤ i ≤ n, we have

    P_B̃ ÂQ_D̂ z_i = Q_D̂ P_D̂ ÂQ_D̂ z_i + λ_n ZÊ^{-1}Y^T ÂQ_D̂ z_i = Q_D̂ P_D̂ Âz_i = μ_i Q_D̂ z_i,

because of Lemma 3.1. Thus, μ_i ∈ σ(P_B̃ Â), and the corresponding eigenvector is Q_D̂ z_i.

Combining Theorems 3.2 and 3.3 results in the following spectral comparison.
Theorem 3.4. Let Z, Y ∈ R^{n×r} be any matrices of rank r. Let Â be nonsingular. If

(3.7)    σ(P_D̂ Â) = {0, . . . , 0, μ_{r+1}, . . . , μ_n},

then

(3.8)    σ(P_N̂ Â) = {λ_n, . . . , λ_n, μ_{r+1}, . . . , μ_n}.

Thus, for any full-rank Z and Y, the spectra of P_N̂ Â and P_D̂ Â are still similar.

Suppose that Â is SPD, and set Y = Z. In this case, Ê is symmetric, and P_D̂ Â = Â − ÂZÊ^{-1}Z^T Â is also symmetric and amounts to the difference between an SPD matrix Â and a positive semidefinite matrix ÂZÊ^{-1}Z^T Â. If λ_i and μ_i are the eigenvalues of Â and P_D̂ Â, respectively, then μ_n ≤ λ_n (cf. [7]). The condition number of P_D̂ Â is κ(P_D̂ Â) = μ_n/μ_{r+1}. P_N̂ Â is, however, not symmetric, even if Â is SPD and Y = Z. The condition number of P_N̂ Â becomes very complicated to determine. We define, for any nonsingular matrix with positive real eigenvalues, the quality of spectral clustering as the ratio between the largest and the smallest eigenvalue, i.e., κ̂(Â) := λ_max(Â)/λ_min(Â). For P_N̂ Â we have κ̂(P_N̂ Â) := λ_n/μ_{r+1} ≥ μ_n/μ_{r+1} =: κ̂(P_D̂ Â); i.e., the spectrum of P_D̂ Â is better clustered than the spectrum of P_N̂ Â. Since Krylov subspace methods converge rapidly for a linear system whose matrix has a well-clustered spectrum, this conclusion again shows the effectiveness of deflation for Krylov subspace convergence acceleration. But there exists an ω_1 ∈ R such that μ_n = ω_1 λ_n. Thus, if λ_n in P_N̂ is replaced by ω_1 λ_n, then κ̂(P_D̂ Â) = κ̂(P_N̂ Â), and P_D̂ Â and P_N̂ Â have the same quality of clustering. In this situation, we may expect a similar convergence for P_D̂ Â and P_N̂ Â.

Before closing this section, we establish the spectral equivalence between P_N̂ Â and ÂQ_N̂ for any full-rank Z, Y ∈ R^{n×r} in the following theorem.

Theorem 3.5. Let Z, Y ∈ R^{n×r} be any matrices of rank r. Then
(3.9)
Proof. By using the definition of QNˆ , we have ˆ E ˆ −1 Y T = AQ ˆ E ˆ −1 Y T ˆ ˆ = AQ ˆ ˆ + λn AZ ˆ 2ˆ + λn AZ AQ N D D ˆ ˆ + λn (I − P ˆ ) = λn (I + P ˆ (λ−1 ˆ ˆ − I)). = PDˆ AQ D D D n AQD Thus, ˆ ˆ − I))) = λn + μ, ˆ ˆ ) = σ(λn (I + P ˆ (λ−1 λ ∈ σ(AQ N D n AQD ˆ ˆ − I)). Since with μ ∈ σ(λn PDˆ (λ−1 n AQD ˆ ˆ − I)) = σ(λn (λ−1 ˆ ˆ − I)P ˆ ), σ(λn PDˆ (λ−1 n AQD n AQD D we then have ˆ ˆ ) = σ(λn I + λn P ˆ (λ−1 ˆ ˆ − I)) = σ(λn I + λn (λ−1 ˆ ˆ − I)P ˆ ) λ ∈ σ(AQ n AQD N D n AQD D ˆ ˆ P ˆ − λn P ˆ ) = σ(A(Q ˆ ˆ P ˆ + λn Z E ˆ −1 Y T )) = σ(λn I + AQ D
D
D
D
D
ˆ −1 Y T )A). ˆ = σ((QDˆ PDˆ + λn Z E ˆ = σ(AQ ˆ ˆ ). Because of Theorem 3.2, we finally get σ(PNˆ A) N
MULTILEVEL PROJECTION KRYLOV METHODS
1581
ˆ ˆ. Therefore, the spectral analysis given previously for PNˆ Aˆ also holds for AQ N ˆ ˆ can also be well Furthermore, the convergence behavior of GMRES applied to AQ N ˆ explained by the spectrum of PNˆ A. 4. Implementation aspects. If we use a small deflation subspace Z, the results ˆ is still small, and from section 3 appear to be in favor of PDˆ . In this case, the matrix E its inverse can therefore be easily computed and stored. In this section, we discuss ˆ needed in our projection-based an important aspect related to the inversion of E preconditioner. So far, we have not particularly specified Z and Y . Since E ≡ Y T AZ is the Galerkin product associated with the matrix A, we will just call E “the Galerkin ˆ will be then called matrix” throughout this section. The linear system related to E “the Galerkin system.” 4.1. Inexact coarse-grid solve. For deflation, it has been shown theoretically as well as numerically that larger deflation subspace leads to better convergence [12, 6]. ˆ must be computed, there is, however, a Since the inverse of the Galerkin matrix E limit with respect to the size of the deflation subspace in order to make the overall ˆ one may think of using an iterperformance efficient. For a large Galerkin matrix E, ative method to approximately invert it. It has been shown in [12, 21], however, that the convergence rate of a Krylov method with deflation is sensitive to the inaccuracy in solving the Galerkin system. In this case, the smallest eigenvalues are not shifted (or deflated) exactly to zero but to very small values 0 < | | λ1 . The presence of very small eigenvalues makes the convergence even worse. If we choose to use an iterative method to solve the Galerkin system, then a very tight termination criterion must be employed. Such a problematic behavior of PDˆ does not appear in PNˆ . This is shown in the following proposition. Proposition 4.1. Let Z = [z1 . . . zr ] be a matrix whose columns are the eigenˆ E −1 Y T + λn Z E −1 Y T , ˆ and assume that, in P * = I − AZ vectors of A, N −1 = diag 1 − 1 . . . 1 − r , E λ1 λr ˆ where | i |i=1,r 1. Then the spectrum of PN *A is (4.1)
ˆ σ(PN *A) = {(1 − 1 )λn + λ1 1 , . . . , (1 − r )λn + λr r , λr+1 , . . . , λn }.
Proof. For i = 1, . . . , r, we have −1 Y T AZ ˆ ˆ ˆ −1 Y T AZ ˆ + λn Z E ˆ PN *AZ = AZ − AZ E 1 − 1 1 − r ˆ − AZdiag ˆ ,..., diag (λ1 , . . . , λr ) = AZ λ1 λr 1 − 1 1 − r + λn Zdiag ... diag (λ1 , . . . , λr ) λ1 λr ˆ − AZdiag ˆ = AZ (1 − 1 , . . . , 1 − r ) + Zdiag (λn (1 − 1 ), . . . , λn (1 − r )) = Zdiag ((1 − 1 )λn + λ1 1 , . . . , (1 − r )λn + λr r ) . For r + 1 ≤ i ≤ n, ˆ ˆ i + λn Z E ˆ i ˆ ˆ −1 Z T Az −1 Z T Az PN *Azi = Azi − AZ E ˆ E −1 Z T (λi zi ) + λn Z E −1 Z T (λi zi ) = λi zi , = λi zi − AZ because of orthogonality.
1582
YOGI A. ERLANGGA AND REINHARD NABBEN
ˆ With the Galerkin system solved only approximately, we have that κ ˆ(PN *A) := max{max{(1 − i )λn /λr+1 : i = 1, . . . , r}, λn /λr+1 }, with κ ˆ indicating the quality of ˆ ∼ ˆ Therefore, with P ˆ , ˆ (PN ˆ (PNˆ A). spectral clustering. For | i | 1, κ *A) = λn /λr+1 = κ N ˆ −1 the convergence rate of a Krylov method will not be dramatically deteriorated if E is computed without a sufficient accuracy. For PNˆ , having such a stability with respect to inexact solves of the Galerkin system allows us to enlarge the deflation subspace and solve the Galerkin system with an iterative method. This will shift many of the small eigenvalues and cluster the spectrum as much as possible around λn . Such a spectral clustering will in general lead to a faster convergence in terms of number of iterations. 4.2. Estimation of the maximum eigenvalue. Another point related to PNˆ is the determination of λn . In practice, λn is not known and has yet to be computed. This computation is expensive and must be avoided. At first sight, this seems to be a serious drawback of PNˆ as compared to PDˆ . In many cases, however, λn can be approximated, for example, by using Gerschgorin’s theorem, discussed below. First, the next proposition shows the spectrum of PNˆ Aˆ if λn is obtained from an approximation. Its proof is similar to, e.g., the proof of Proposition 4.1. Proposition 4.2. Let λn and |δ| λn be the largest eigenvalue of Aˆ and ˆ a constant, respectively. Z and Y consist of the right and left eigenvectors of A, = I − respectively, associated with the first r eigenvalues. In the case where PN * −1 −1 T ˆ ˆ ˆ ˆ AZ E Y + λn,est Z E Y , where λn,est = λn + δ, the spectrum of PN *A is ˆ σ(PN *A) = {λn,est , . . . , λn,est , λr+1 , . . . , λn }. Thus, as long as λn,est is not too far from λn (i.e., the error δ is not of the same order as λn ), the convergence rate of a Krylov subspace method with PNˆ is insensitive to λn,est . But, in this case, there again exists an ω2 ∈ R such that λn = ω2 λn,est , and ˆ ˆ ˆ (PNˆ A). therefore κ ˆ (PN *A) = κ A simple way to estimate λn is based on the Gerschgorin theorem. We skip this theorem and its proof, as they are well known, and refer the reader to, e.g., [24]. One consequence of the Gerschgorin theorem, which is relevant to our estimate on the maximum eigenvalue, is stated below. ˆ be the maximum eigenvalue of Aˆ in magnitude. I.e., Theorem 4.3. Let ρ(A) ˆ = max{|λ| : λ ∈ σ(A)}. ˆ ˆ is the spectral radius of A. ˆ For any Aˆ ∈ Cn×n , ρ(A) ρ(A) then ˆ ≤ max ρ(A) (4.2) |ˆ ai,j |. i∈N
j∈N
Proof. See [24, p. 4]. Example 4.4. To illustrate how sharp this estimate is, we first consider a symmetric case obtained from a finite difference discretization of the one-dimensional (1D) Poisson equation: −
d2 u = f, dx2
x = (0, 1),
with Dirichlet boundary conditions on x = 0, 1. Set M = I. Hence, Aˆ = A. The matrix A is SPD, ρ(A) = λn (A), and the bound (4.2) reduces to the bound for λn (A), i.e., λn ≤ max (4.3) |ai,j | = λn,est . i∈N
j∈N
MULTILEVEL PROJECTION KRYLOV METHODS
1583
Table 4.1 The computed maximum eigenvalue and its estimate for the 1D Poisson equation. n 10 50 100
λn 3.92E+2 9.99E+3 3.99E+4
λn,est 4.E+2 1.E+4 4.E+4
Table 4.2 The computed maximum eigenvalue and its estimate for the 1D convection-diffusion equation. Pe 20 40 100
|λn | 3.15E+2 6.29E+2 1.58E+3
|λn,est | 3.20E+2 6.40E+2 1.60E+3
The computed eigenvalue λn and the estimate λn,est are presented in Table 4.1 for different numbers of grid points. As clearly seen, λn,est is very close to λn . Example 4.5. Now we consider a finite difference discretization of the 1D convection-diffusion equation: 1 d2 u du − = q, dx P e dx2
x = (0, 1),
with Dirichlet boundary conditions on x = 0, 1. P e is the P´eclet number and q the source term. The resultant matrix of coefficients A is nonsymmetric. The nonsymmetric part becomes dominant if the P´eclet number is large (convection-dominated problem). We set M = I; therefore, Aˆ = A. The computed largest eigenvalue and its estimate are shown in Table 4.2. As compared to the computed maximum eigenvalue, the estimate again is quite good, even when P e is large, and in some eigenvalues may be complex-valued. In this case the spectral radius ρ(A) = max{|λ| : λ ∈ σ(A)}, but ρ(A) is not necessarily the same as the actual value of it. All convergence results presented in section 7 are based on this type of estimation ˆ of the maximum eigenvalue λn (A). 5. Multilevel projection, nested iteration. As discussed early in section 3, a very much-clustered spectrum of PNˆ Aˆ can be obtained by taking the deflation subspace Z ∈ Rn×r to be sufficiently large. A large matrix Z will, however, lead to a ˆ ∈ Rr×r (but still r < n). In such a case, solving the Galerkin large Galerkin matrix E system will be very costly. Proposition 4.1 suggests, however, that using only an ˆ −1 would not hamper the convergence rate of a Krylov subspace approximation to E method dramatically. One may, for example, use an incomplete LU factorization to ˆ and use these factors to approximately solve the Galerkin system. An ILU facE torization, however, is not well parallelizable and requires significant memory during factorization and to store the factors. In this section, we discuss the use of (inner) iterations to perform the inversion ˆ In the end, we show that such inner iterations can be constructed in a fashion of E. that leads to another type of multilevel methods. Before proceeding, we recall that PNˆ Aˆ is nonsymmetric, even if Aˆ is symmetric. Therefore, a Krylov subspace method for a nonsymmetric system should be employed, e.g., GMRES or Bi-CGSTAB [23]. Particularly in our case, we base the construction of the iterative algorithm on GMRES.
1584
YOGI A. ERLANGGA AND REINHARD NABBEN
We note that the left and right preconditioned GMRES produces different residuals. The left preconditioned GMRES, for example, produces the preconditioned residual during the iteration. In cases where the preconditioner does not accurately approximate A, the actual error associated with the preconditioned residual may still not be small when the termination criterion based on the preconditioned residual is reached. Since the actual residual is not a by-product of the algorithm, evaluation of the actual residual must be done at the expense of unnecessary computation of the approximate solution and one matrix-vector multiplication to compute the residual. This is not the case if the right preconditioned GMRES is used. In fact, the right preconditioned GMRES produces the actual residual during the iteration. Since we are going to also use GMRES for the inner iteration, it is important that the inner iteration terminates when the actual residual is already small. In this case, the matrix E should have already been inverted sufficiently accurately. This motivates the use of the right preconditioned GMRES throughout the rest of our discussion. The right preconditioned GMRES solves the system ˆ ˆ ˆu AQ Nˆ = b
(5.1) or, with Aˆ = AM −1 , (5.2)
AM −1 QNˆ u ˆ = b, where u = M −1 QN u ˆ,
for any nonsingular M . Since we assume any full ranked Z and Y , and since we are ˆ we need the scaling ω1 and ω2 to be able to shift going to use an estimate for λn (A), ˆ Recall from sections 3 and r small eigenvalues as close as possible to μn = λn (PNˆ A). 4.2 that μn = ω1 λn and λn = ω2 λn,est , and we have μn = ω1 ω2 λn,est . Below, we fully write QNˆ after accommodating this scaling: (5.3)
ˆ −1 Y T AM −1 + ωλn,est Z E ˆ −1 Y T , QNˆ = I − Z E
ˆ = Y T (AM −1 )Z, E
ˆ ˆ are spectrally where ω := ω1 ω2 is called the shift scaling factor. Since PNˆ Aˆ and AQ N the same (cf. Theorem 3.5), the convergence behavior of both implementations can be expected to be the same. 5.1. Two-level projection. To make the presentation self-contained, we write in Algorithm 1 below the right preconditioned GMRES for solving (5.2). Algorithm 1. FGMRES preconditioned by M and QNˆ . 1. Choose u0 , ω, and λn,est . Compute QN . 2. Compute r0 = b − Au0 , β = r0 2 , and v1 = r0 /β. 3. For j = 1, 2, . . . , k 4. xj := QNˆ vj ; 5. w := AM −1 xj . 6. For i = 1, 2, . . . , j 7. hi,j = (w, vj ); 8. w := w − hi,j vi . 9. Endfor 10. hj+1,j := w 2 and vj+1 = w/hj+1,j . 11. Endfor ˆ k = {hi,j }1≤i≤j+1;1≤j≤k . 12. Set Xk := [x1 . . . xk ] and H ˆ k y 2 and uk = u0 + M −1 Xk yk . 13. Compute yk = arg miny βe1 − H First notice that, in Algorithm 1, we store Xk instead of Vk := [v1 . . . vk ] in order to accommodate variable preconditioners. If QNˆ and M are constant during the
MULTILEVEL PROJECTION KRYLOV METHODS
1585
course of iterations, they can be taken out from Xk , and the approximate solution is extracted from Vk . In this case, Xk is no longer needed, and only Vk is to be stored in memory. With constant M and QNˆ , the second part of line 13 becomes uk = u0 + M −1 QNˆ Vk yk . Algorithm 1 is actually the flexible version of GMRES (or FGMRES). Second, in line 4 of Algorithm 1, we need to compute xj = QNˆ vj . In general, the matrix QNˆ is not sparse. Therefore, computing QNˆ only once and storing it in the memory are not advisable, especially if the size of Aˆ is large. In the actual implementation, the action of QNˆ on Aˆ is done implicitly as follows. By using the definition of QNˆ (5.3), line 4 of Algorithm 1 is rewritten as follows: ˆ −1 Y T Av ˆ −1 Y T vj ˆ j + ωλn,est Z E xj = vj − Z E ˆ −1 Y T (AM −1 − ωλn,est I)vj . = vj − Z E By setting s := AM −1 vj and v& := Y T (s − ωλn,est vj ), we then have (5.4)
ˆ −1 v&. xj = vj − Z E
ˆ −1 v& =: v can in principle be approximately computed by solving the In (5.4), E Galerkin system (5.5)
ˆ v, v& = E
v&, v ∈ Rr ,
for v iteratively. Once v is determined, we can continue computing t := Z v and finally xj := s − t. We note that Z : Rr → Rn and Y T : Rn → Rr . Without being specific to any particular choice of Z and Y , we call the linear map of x ∈ Rn into Rr a restriction and the linear map of w ∈ Rr into Rn a prolongation. In this case, Z is the prolongation operator, and Y T is the restriction operator. In summary, we write Algorithm 2 below. This algorithm is equivalent to Algorithm 1 but with line 4 expanded according to the above discussion. Algorithm 2. FGMRES preconditioned by M and QNˆ . 1. Choose u0 , ω, and λn,est . 2. Compute r0 = b − Au0 , β = r0 2 , and v1 = r0 /β. 3. For j = 1, 2, . . . , k 4. s := AM −1 vj . 5. Restriction: v& := Y T (s − ωλn,est vj ). ˆ v = v&. 6. Solve for v: E 7. Prolongation: t := Z v. 8. xj := vj − t. 9. w := AM −1 xj . 10. For i = 1, 2, . . . , j 11. hi,j = (w, vj ); 12. w := w − hi,j vi . 13. Endfor 14. Compute hj+1,j := w 2 and vj+1 = w/hj+1,j . 15. Endfor ˆ k = {hi,j }1≤i≤j+1;1≤j≤k . 16. Set Xk := [x1 . . . xk ] and H ˆ k y 2 and uk = u0 + M −1 Xk yk . 17. Compute yk = arg miny βe1 − H Notice from Algorithm 2 that at each FGMRES iteration one requires two matrixvector multiplications with A, two preconditioner solves, and one solve of small matrix
1586
YOGI A. ERLANGGA AND REINHARD NABBEN
ˆ Considering the case with M = I, the work needed by FGMRES with Q ˆ is the E. N same as the work with deflation (in this case, λn,est = 0) and less than the work with the abstract balancing preconditioner PB . In terms of computational work, this shows a clear advantage of QNˆ over the abstract balancing preconditioner. 5.2. Multilevel, nested iterations. We now focus on solving the Galerkin system (5.5). We will base our solution method for the Galerkin system on Krylov subspace iterations. To proceed, we need to change our notations. Without loss of generality, we set M = I; hence, Aˆ = A. We denote now the matrix A by A(1) , i.e., A(1) := A, and the Galerkin matrix E by A(2) , i.e., A(2) := E = Y T A(1) Z. The projection preconditioner (1) for A(1) is denoted by QN , where (5.6)
(1)
QN = I (1) − A(1) Z (1,2) A(2)
−1
T
(1)
Y (1,2) + ω (1) λn,est Z (1,2) A(2)
−1
T
Y (1,2) ,
(1)
where ω (1) is the shift scaling factor related to λn (A(1) ), and so on. Equation (5.6) is the two-level projection, discussed in section 5.1. If, for a special case, A(1) is SPD, and Z (1,2) = Y (1,2) , then A(2) is also SPD because T
q T A(2) q = q T Z (1,2) A(1) Z (1,2) q = (Z (1,2) q)T A(1) (Z (1,2) q) > 0
(5.7)
for q = 0. We cannot in general derive an interlacing property between σ(A(1) ) and σ(A(2) ) due to arbitrariness in Z (1,2) . But suppose that, in the worst case, σ(A(2) ) contains 0 < λmin (A(2) ) = λ1 (A(2) ) λ1 (A(1) ) and λmax (A(2) ) = λn (A(2) ) λn (A(1) ). In this case, we may expect a worse convergence rate of a Krylov method for solving the Galerkin system than the original one. This means that a substantial amount of work has to be spent on the Galerkin system (5.5). We can improve the convergence rate by applying a projection method similar to (5.6) to the Galerkin system (5.5). In the case that the Galerkin system is better conditioned than the original system, incorporating a projection preconditioner into the Galerkin system may improve the convergence further. Consider the Galerkin system (5.5) with M (1) = I. With right projection preconditioning, we write the Galerkin problem (5.5) as, after changing notations: (2)
A(2) QN p(2) = v&(2) ,
(5.8)
(2)
v&(2) = QN p(2) ,
p(2) ∈ Rr ,
where (5.9)
(2)
QN = I (2) − Z (2,3) A(3)
−1
T
(2)
Y (2,3) A(2) + ω (2) λn,est Z (2,3) A(3)
T
−1
T
Y (2,3) .
Here A(3) := Y (2,3) A(2) Z (2,3) , and Z (2,3) and Y (2,3) are again any rectangular matrices with full rank. A (inner) Krylov subspace method is then employed to solve the preconditioned Galerkin system (5.8) up to a specified accuracy. (2) In QN one needs to solve a Galerkin system problem associated with the Galerkin matrix A(3) . If A(3) is small, this can be done exactly by using a direct method. If A(3) is larger, a similar process to (5.8) has to be performed. Given a set of matrices A() , = 1, . . . , m, a multilevel nested Krylov subspace iteration results if, at every level < m, the right preconditioned system (5.10)
()
A() QN p() = v&() ,
()
v&() = QN p() ,
MULTILEVEL PROJECTION KRYLOV METHODS
1587
is solved, with −1
()
T
QN = I () − Z (,+1) A(+1) Y (,+1) A() (5.11)
−1
()
+ ω () λn,est Z (,+1) A(+1) Y (,+1)
T
and (5.12)
T
A(+1) = Y (,+1) A() Z (,+1) .
At = m, the Galerkin matrix A(m) is already small, and the associated Galerkin system can be solved exactly by a sparse direct method. At level = 1, . . . , m − 1, the problem (5.10) is solved iteratively by FGMRES. Remark 5.1. Different from the usual nested Krylov subspace iteration, where the inner iteration is used to solve a preconditioner of the same dimension as A(1) , in this multilevel projection method, since dim A(2) dim A(1) , the inner iteration solves a much smaller system. The final algorithm of the multilevel projection with nested Krylov methods is −1 written in Algorithm 3, with M (1) = I and Aˆ(1) = A(1) M (1) . In this algorithm, the FGMRES parts are simplified, by removing the solution computation steps. Algorithm 3. Multilevel projection with FGMRES. Given A(1) , M (1) , Z (,+1) , Y (,+1) for = 1, . . . , m, with A(1) = A and M (1) = M . T −1 () Compute Aˆ(+1) = Y (,+1) Aˆ() Z (,+1) , where Aˆ(1) = A(1) M (1) , and λn,est for = 1, . . . , m − 1, and choose ω () . 1. = 1. (1) With an initial guess u0 = 0, solve A(1) u(1) = b(1) with FGMRES by computing −1 2. s(1) := A(1) M (1) v (1) ; T (1) 3. restriction: v&(2) := Y (1,2) (s(1) − ω (1) λn,est v (1) ); 4. if + 1 = m −1 5. solve exactly Aˆ(2) v&(2) = v(2) ; 6. else 7. = + 1. (2) With v0 = 0, solve Aˆ(2) v(2) = v&(2) with FGMRES by computing 8. s(2) := Aˆ(2) v (2) ; T (2) 9. restriction: v&(3) := Y (2,3) (s(2) − ω (2) λn,est v (2) ); 10. if + 1 = m −1 11. solve exactly Aˆ(3) v&(3) = v(3) ; 12. else 13. = + 1; 14. ... 15. endif 16. prolongation: t(2) = Z (2,3) v(3) ; 17. x(2) = v (2) − t(2) ; 18. w(2) = Aˆ(2) x(2) ; 19. endif 20. prolongation: t(1) = Z (1,2) v(2) ; 21. x(1) = v (1) − t(1) ; −1 22. w(2) = A(1) M (1) x(2) .
1588
YOGI A. ERLANGGA AND REINHARD NABBEN
5.3. Multilevel projection and multigrid. At a first view, multigrid methods and our new multilevel projection Krylov method appear to be similar. But in principal they are different. To illustrate this, we take as an example a multigrid iteration with one pre- and postsmoothing. The error relation between two consecutive multigrid iterations can be written as (5.13)
h −1 H ej+1 = (I − M −1 Ah )(I − Ah IH AH Ih )(I − M −1 Ah )ej ,
where Ah = A is the fine-grid matrix, AH the coarse-grid matrix, M a matrix related h the restriction and interpolation matrix, respectively. to the smoother, and IhH and IH H h In multigrid, AH = Ih Ah IH is called “the Galerkin coarse-grid approximation matrix.” Thus, compared to PNˆ (or QNˆ ) in our multilevel projection Krylov method, they have similar ingredients. Indeed both methods try to project certain quantities, which are responsible for slow convergence of an iterative method. In the classical multigrid method, the projection is done by smoothing steps, which can be done with a Richardson-like iteration as Jacobi, Gauss–Seidel, or other more complicated methods. Some parts of errors which cannot be projected efficiently by smoothers then are projected out by the smoothers in the coarse-grid-correction step, which is the multilevel part in a multigrid method. In our multilevel projection Krylov method, some eigenvalues of the original operator are projected into one point. Information about the projected eigenvalues is contained in the Galerkin matrix. Hence, the accuracy of solving the Galerkin problem determines the effectiveness of our projection method. This projection, however, is done in a recursive multilevel way. Since we use, e.g., FGMRES for solving the Galerkin system, FGMRES then can be considered as a sort of smoother in our multilevel projection Krylov method. Its role, however, is not necessarily to smooth certain quantities as the smoother in multigrid. FGMRES (or any Krylov method) does not actually distinguish smooth or rough quantities (such as errors). Hence, such a distinction may not be relevant in our multilevel projection method. We also note that, in the classical geometric multigrid, the interpolation matrix is usually chosen such that errors on the coarse grid can accurately be interpolated into the fine grid. This can be achieved only if the errors on the coarse grid satisfies certain smoothness conditions. In the multilevel projection method proposed here, however, there exists more freedom in choosing the deflation subspace Z and Y . The only condition that must be satisfied is that Z and Y are full rank. This condition is naturally satisfied by the interpolation and restriction matrices in multigrid. But such matrices are only one of many possible options for constructing Z and Y . In the multilevel projection Krylov method, Z and Y can be built without any geometrical relation between the fine grid and the coarse grid. If one uses, e.g., eigenvectors of A and AT for Z and Y , respectively, for a class of matrices A satisfying the assumptions in section 2, E = diag(λ1 , . . . , λn ). In this case, we can hardly interpret the matrix E as a sort of coarse-grid discretization of PDEs. Hence, in our multilevel projection method, to call E “the coarse-grid matrix” appears to be too restrictive. In the next section, we make a specific choice of prolongation and restriction operators for our multilevel projection Krylov method, which is based on geometrical information about the underlying problem. 6. The choice of deflation subspaces Z and Y . In this section, we present one possible choice of constructing the deflation subspaces.
MULTILEVEL PROJECTION KRYLOV METHODS
Ω1
Ω2
Ω3
Ω4
1589
Fig. 6.1. 2D mesh with partitions.
6.1. Agglomeration and redistribution. For ease of presentation, we assume ˆ M = I. Hence, in the two-level projection notation, in this section that, in A, T E = Y AZ. For some classes of problems, the bilinear interpolation is a powerful technique to construct the prolongation operator in geometric multigrid methods. The restriction operator can be taken as the transpose of the prolongation operator. We can in principle directly bring these operators into our multilevel projection setting and use them as the deflation subspaces. We are, however, not going to do this. Instead, we will use a simpler technique than the bilinear interpolation, which is discussed below. Suppose that the domain Ω with index set I = {i|ui ∈ Ω} is partitioned into l nonoverlapping subdomains Ωj , j = 1, . . . , l, with respective index Ij = {i ∈ I|ui ∈ Ωj }. Then Z = [zij ] is defined as (6.1)
zij =
1,
i ∈ Ij ,
0,
i∈ / Ij .
We then set Y = Z. (This construction can also be considered as interpolation, called the piecewise constant interpolation [14].) To illustrate this construction, consider a square domain Ω discretized into an 8×8 equidistant mesh. The domain Ω is partitioned into four nonoverlapping subdomains: Ω1 , Ω2 , Ω3 , Ω4 , as shown in Figure 6.1. In this case, Z is a rectangular matrix with 4 columns. The first column has entries equal to one for points in Ω1 and zero for Ω \ Ω1 , etc. As the result, the matrix Z ∈ R64×4 has only 64 nonzeros, which is considered sparse. We can refine the partition by increasing the number of subdomains. Consider the case where every subdomain Ωj occupies only four points, shown in Figure 6.1 by the dashed lines. This mimics the standard coarsening in the geometric multigrid. However, in this case there exists no connectivity between points in Ωj and points in Ω \ Ωj . With (6.1), the prolongation and restriction processes are simpler than the prolongation and restriction matrices in the geometric multigrid. The resultant prolongation and restriction matrices are also sparser. Furthermore, this construction can easily be extended to arbitrary triangulation in finite elements, with complex local refinements.
1590
YOGI A. ERLANGGA AND REINHARD NABBEN
Consider again Figure 6.1 partitioned with four subdomains Ω1 , . . . , Ω4 . We number the unknowns as u = [u1 . . . u4 ]T . Thus, ⎡ A11 ⎢A21 ⎢ A=⎣ A31 A41
(6.2)
A12 A22 A32 A42
A13 A23 A33 A43
⎤ A14 A24 ⎥ ⎥, A34 ⎦ A44
with Ajj a block matrix of coefficients related to i ∈ Ij , j = 1, . . . , 4. Denote by 1j = {1, . . . , 1}T a vector of ones with length equal to the number of nodes in Ωj , and E = [eij ]. It is easy to show that eij = 1Ti Aij 1j .
(6.3)
Thus, the coefficients eij are nothing but the sum of all coefficients of Aij . If there is no connectivity between two subdomains, then Aij = 0, and as the consequence eij = 0. Thus, the Galerkin matrices E and A have a similar structure. (6.3) also suggests that eij need not be computed explicitly from E = Z T AZ and can be determined by simply summing up all coefficients of Aij . For comparison, in multigrid the restriction and prolongation matrices are usually stored in memory. In a few cases, one can avoid storing these matrices by computing the matrices once they are needed. This, however, should be done with an extra expense in computational work. We can further save some work and memory with respect to the action of Z and Y T on a vector. For example, because Y = Z, line 5 of Algorithm 2 can be written as, after dropping the iteration counter, (6.4)
v& = Z (s − ωλn v) = T
(si − ωλn vi ) · · ·
i∈I1
T (si − ωλn vi )
.
i∈Ik
Hence, the action of Z T on the vector s − ωλn v can be regarded as agglomerating values on the fine grid into the coarse grid. Again dropping the iteration counter, for t := vZ v (line 7, Algorithm 2), notice that (6.5)
ti = vi ,
i ∈ Ij ,
j = 1, . . . , k.
This is nothing but redistributing values on the coarse grid into the fine grid. With this choice of Z we observe that the fine-coarse grid transfer reduces to only agglomerating values of the fine grid and redistributing values of the coarse grid. They are simple processes. Our numerical computations reveal that, even with these simple processes and without any additional preconditioner M , the convergence is quiet satisfactory for some classes of problems. This construction of prolongation and restriction operators will not, however, lead to a good multigrid method, as will be seen in our numerical results in section 7. 6.2. Additional preconditioner M . A simple preconditioner which can be incorporated in the algorithm and at the same time preserves the structure of A at every coarse-grid level is the diagonal scaling. In this case, M = diag(A). In the multigrid language, the diagonal scaling is associated with the point Jacobi smoother.
MULTILEVEL PROJECTION KRYLOV METHODS
1591
Consider again the partition of domain Ω into 4 subdomains as depicted in Figure 6.1. By using the same numbering as for (6.2), we have ⎡ −1 ⎤ −1 −1 −1 D11 A11 D22 A12 D33 A13 D44 A14 ⎢D−1 A21 D−1 A22 D−1 A23 D−1 A24 ⎥ 11 22 33 44 ⎥ (6.6) AM −1 = ⎢ ⎣D−1 A31 D−1 A32 D−1 A33 D−1 A34 ⎦ , 11 22 33 44 −1 −1 −1 −1 D11 A41 D22 A42 D33 A43 D44 A44 where Djj = diag(Ajj ). An eigenvalue estimate for AM −1 can be obtained by using Theorem 4.3. In this case, the estimate is given by (6.7) |ai,j /ai,i |. λn,est (AM −1 ) = max i∈N
j∈N
7. Numerical experiments. In this section we present the application of the multilevel projection method for solving linear systems of equations arising from some PDEs. We consider two problems: the Poisson equation and the convection-diffusion equation. The Poisson equation represents the class of PDEs where the discretized form is symmetric (and positive definite). In this case standard methods, e.g., multigrid and domain decomposition methods, or deflated incomplete Cholesky CG (ICCG) already work sufficiently well. The second problem is related to the nonsymmetric linear system. 7.1. The Poisson equation. The first equation to solve is the Poisson equation: (7.1)
−∇ · ∇u = g
in Ω = (0, 1)2 .
For the boundary condition we set the homogeneous Dirichlet conditions on Γ = ∂Ω. The source term is the unit source placed in the middle of the domain. The computational domain is discretized by the central difference. The resultant matrix of coefficients is symmetric positive definite. We run the multilevel FGMRES to solve the linear system. The multilevel FGMRES is terminated if the residual of the iteration at level = 1 has been reduced by six orders of magnitude. No restart is used for the FGMRES. All coarse-grid problems are solved with a few FGMRES iterations (mostly only two). At the last level ( = m − 1), the related mth Galerkin problem is solved exactly. We set the shift scaling factor ω () = 1. The deflation subspace Z (,+1) is constructed by using (6.1) based on agglomeration of four neighboring points; see Figure 6.1. The convergence results are shown in Table 7.1. To obtain the results, five grid levels are employed. Hence, for example, the coarsest grid for a 2562 mesh is 162 . In general, one can still go further until only one point remains in the domain. On top of the convergence results in Table 7.1, the integers between parentheses indicate the numbers of FGMRES iterations used at level > 1. So, for instance, in the first group of convergence results, 4 FGMRES iterations are used at the second level, 2 at the third level, and so on, denoted compactly by (4, 2, 2, 2), etc. From Table 7.1 we see that the method mostly converges after 14 iterations. The error at convergence, i.e., econv = u − uconv , where u = A−1 b is obtained from a direct method, is also quite satisfactory (which is of order 10−6 ). Only when the method is run with intermediate iterations (2, 2, 2, 2) does the convergence slightly deteriorate. Since, compared to others, only two iterations are done at the second level ( = 2), it gives an indication that the Galerkin system at this level is not solved sufficiently
1592
YOGI A. ERLANGGA AND REINHARD NABBEN Table 7.1 FGMRES iteration counts for the 2D Poisson equation.
N 322 642 1282 2562
Iter 14 14 14 14
(4,2,2,2) u − uconv 2 4.53E–07 1.08E–06 4.50E–06 5.55E–06
Iter 14 14 14 14
(4,3,3,3) u − uconv 2 6.38E–07 1.55E–06 3.11E–06 6.01E–06
N 322 642 1282 2562
Iter 15 16 16 16
(2,2,2,2) u − uconv 2 1.05E–06 8.60E–07 2.45E–06 3.40E–06
Iter 14 14 14 14
(6,2,2,2) u − uconv 2 2.04E–07 4.70E–07 9.71E–07 1.82E–06
accurately. The convergence, however, seems to be insensitive to the accuracy of the solutions at the remaining levels. Therefore, it is the accuracy at the second level which is important. Using only two-level projection with the Galerkin system at level = 2 solved exactly, FGMRES converges after 14 iterations. Hence, for (4, 2, 2, 2), e.g., the method already converges with the best it can achieve if only one level preconditioner is used. Observe that the method converges independently of the grid size h. For comparison, multigrid with V-cycle, one pre- and post- red-black Gauss–Seidel smoothing, and bilinear interpolation achieves the same order of residual reduction after 11 iterations. If bilinear interpolation is replaced by the piecewise constant interpolation, multigrid does not converge after 40 iterations. In this case, for a 642 mesh the averaged residual reduction factor is only about 0.98. 7.2. The convection-diffusion equation. In this section we consider a nonsymmetric linear system arising from a discretization of the 2D convection-diffusion equation: 2 1 ∂ u ∂2u ∂u − (7.2) + 2 = 0 in Ω = (−1, 1)2 , ∂y P e ∂x2 ∂y where P e is the P´eclet number. The boundary conditions are determined as follows: u(−1, y) ≈ −1, u(1, y) ≈ 1, u(x, −1) = x, and u(x, 1) = 0. This problem resembles vertical flows with a boundary layer at the upper wall (y = 1). This problem is described in [2] and solved under finite element settings. For our second example, we use the vertex-centered finite volume discretization described in [26]. The flux term is approximated by using an upwind scheme. Figure 7.1 shows solutions for P e = 20 and 200; the boundary layer near the upper wall becomes thinner by an increase in P e. As discussed in [2], a multigrid method based on Galerkin coarse-grid discretization and bilinear interpolation intergrid transfer does not lead to an efficient method for P e > 1. In fact, it is a divergent method. Convergence can still be obtained if, e.g., interpolation based on [28] is used, even though the convergent is not of the “textbook multigrid” one. By using this interpolation one needs to incorporate properties of the flows into the grid transfers [28, 25]. First we consider problems with P e = 20, 50, 100, and 200 and construct the related linear systems based on equidistant meshes with 1282 , 2562 , and 5122 grid
0.5
0.5
0
0
y
y
MULTILEVEL PROJECTION KRYLOV METHODS
−0.5
1593
−0.5
−0.5
0 x
−0.5
0.5
0 x
0.5
Fig. 7.1. Contour of solutions of the 2D convection-diffusion equation with vertical wind: P e = 20 (left) and 200 (right).
Table 7.2 FGMRES iteration counts for the 2D convection-diffusion equation solved on equidistant grids.
Grid 1282 2562 5122
20 16 16 15
50 16 16 16
P e: 100 18 16 16
200 24 17 15
Table 7.3 FGMRES iteration counts for the 2D convection-diffusion equation preconditioned by diagonal scaling with local grid refinements near the upper wall.
Grid 1282 2562
20 17 16
50 16 16
P e: 100 18 16
200 25 23
points. The convergence results from FGMRES are shown in Table 7.2. The (4,2,2,2) multilevel projection is performed. Furthermore, we set ω () = 0.8. The deflation subspace Z (,+1) is constructed by using (6.1) based on agglomeration of four neighboring points, and Y (,+1) = Z (,+1) . The norm of error between the solutions obtained by FGMRES and a direct method is of order 10−5 . From Table 7.2, we observe an almost h- and P e-independent convergence. For P e = 200 solved on the 1282 mesh, convergence is reached after 24 iterations. In this case, however, nonphysical spurious wiggles appear in the solution at the vicinity of the upper wall (y = +1). Even though the solution is not correct due to these wiggles, the method is still convergent. In order to get rid of wiggles, a grid refinement is employed near the wall y = +1. Table 7.3 shows the results after including the grid refinement. We still employ the same construction of Z as in the case without grid refinements, so we do not take into account the influence of grid stretching in the refinement zone. We have also used diagonal scaling as the additional preconditioner M . For all cases wiggles do not appear in the solutions. The norm of error between the solutions obtained by FGMRES and a direct method is of order 10−5 . The convergence is observed to be almost independent of h and P e.
1594
YOGI A. ERLANGGA AND REINHARD NABBEN
8. Conclusion. In this paper, a multilevel projection-based Krylov subspace method has been discussed. From an abstract point of view, the projection is done similarly to the way the deflation preconditioner is used for Krylov subspace methods. With this projection, some small eigenvalues are shifted to the largest one. We showed that the spectrum of the projected system is similar to the spectrum of the original system preconditioned by the deflation preconditioner. This projection, however, has been shown to be insensitive with respect to an inexact solve of the Galerkin (or coarse-grid) system. This allows the construction of multilevel projected Krylov subspace iterations, working on a sequence of Galerkin (coarse-grid) problems. We claimed that the associated projection operator could not be decomposed into a sequence of smoothing and coarse-grid correction, typically used in multigrid and domain decomposition methods. Hence, the method presented here is not of the fixed point iteration type and must be considered solely from Krylov subspace iteration methods. While we in general have freedom in choosing the deflation subspace (we need only the full rank condition to determine the deflation subspace), in this paper we used a simple way of constructing the deflation subspace, which was based on the piecewise constant interpolation. With this particular choice, the process could be seen as simply agglomerating values to the coarse grid, solving coarse-grid problems, and redistributing values from the coarse grid to the fine grid. We could obtain convergence, which is almost independent of mesh and physical parameters, for a class of problems considered in this paper. With the algebraic approach we have used, at this moment we could not explain this nice convergence property. An application of our multilevel projection Krylov method with similar convergence behavior has also been done for the indefinite Helmholtz equation and is submitted for publication as well [4]. Finally, we state that this new multilevel Krylov method consists of several ingredients, such as a preconditioner for Krylov iterations, restriction and prolongation operators, approximation of the maximum eigenvalue, and approximation of the Galerkin matrix. In this paper, we have chosen a specific choice of all of these ingredients, some of which are integral parts of successful methods, such as multigrid (or domain decomposition) methods. Nevertheless, other choices or new developments in those methods can be easily implemented as well in our new multilevel Krylov framework (to obtain an even faster convergence). REFERENCES [1] M. Eiermann, O. G. Ernst, and O. Schneider, Analysis of acceleration strategies for restarted minimal residual methods, J. Comput. Appl. Math., 123 (2000), pp. 261–292. [2] H. C. Elman, D. J. Silvester, and A. J. Wathen, Finite Elements and Fast Iterative Solvers, Oxford University Press, Oxford, 2005. [3] Y. A. Erlangga and R. Nabben, Deflation and balancing preconditioners for Krylov subspace methods applied to nonsymmetric matrices, SIAM J. Matrix Anal. Appl., to appear. [4] Y. A. Erlangga and R. Nabben, On a Multilevel Krylov Method for the Helmholtz Equation Preconditioned by the Shifted Laplacian, submitted. [5] D. K. Faddeev and V. N. Faddeeva, Computational Methods of Linear Algebra, Freeman, San Francisco, 1963. [6] J. Frank and C. Vuik, On the construction of deflation-based preconditioners, SIAM J. Sci. Comput., 23 (2001), pp. 442–462. [7] R. A. Horn and C. R. 
Johnson, Matrix Analysis, Cambridge University Press, Cambridge, 1985. [8] J. Mandel and M. Brezina, Balancing domain decomposition for problems with large jumps in coefficients, Math. Comp., 216 (1996), pp. 1387–1401.
MULTILEVEL PROJECTION KRYLOV METHODS
1595
[9] J. Mandel, Balancing domain decomposition, Comm. Numer. Methods Engrg., 9 (1993), pp. 233–241. [10] R. B. Morgan, A restarted GMRES method augmented with eigenvectors, SIAM J. Matrix Anal. Appl., 16 (1995), pp. 1154–1171. [11] R. Nabben and C. Vuik, A comparison of deflation and coarse grid correction applied to porous media flow, SIAM J. Numer. Anal., 42 (2004), pp. 1631–1647. [12] R. Nabben and C. Vuik, A comparison of deflation and the balancing preconditioner, SIAM J. Sci. Comput., 27 (2006), pp. 1742–1759. [13] R. Nabben and C. Vuik, A Comparison of Abstract Versions of Deflation, Balancing and Additive Coarse Grid Correction Preconditioners, report 07-01, Delft University of Technology, Department of Applied Mathematical Analysis, Delft, 2007. [14] R. A. Nicolaides, Deflation of conjugate gradients with applications to boundary value problems, SIAM J. Numer. Anal., 24 (1987), pp. 355–365. [15] A. Padiy, O. Axelsson, and B. Polman, Generalized augmented matrix preconditioning approach and its application to iterative solution of ill-conditioned algebraic systems, SIAM J. Matrix Anal. Appl., 22 (2000), pp. 793–818. ¨ llin and W. Fichtner, Accuracy and performance issues of spectral preconditioners in [16] S. Ro semiconductor device simulation, in Proceedings of the European Conference on Computational Fluid Dynamics ECCOMAS CFD 2006, P. Wesseling, E. Onate, and J. Periaux, eds., TU Delft, 2006. [17] Y. Saad and M. H. Schultz, GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Comput., 7 (1986), pp. 856–869. [18] Y. Saad, Numerical Methods for Large Eigenvalue Problems, Halstead Press, New York, 1992. [19] Y. Saad, Iterative Methods for Sparse Linear Systems, SIAM, Philadelphia, 2003. [20] B. Smith, P. Bjorstad, and W. Gropp, Domain Decomposition: Parallel Multilevel Methods for Elliptic Partial Differential Equations, Cambridge University Press, Cambridge, 1996. [21] J. M. Tang, R. Nabben, C. Vuik, and Y. A. Erlangga, Theoretical and Numerical Comparison of Various Projection Methods Derived from Deflation, Domain Decomposition and Multigrid Methods, report 07-04, Delft University of Technology, Delft Institute of Applied Mathematics, Delft, 2007. [22] A. Tosseli and O. Widlund, Domain Decomposition Methods, Springer-Verlag, Berlin, 2005. [23] H. A. van der Vorst, Bi-CGSTAB: A fast and smoothly converging variant of Bi-CG for the solution of nonsymmetric linear systems, SIAM J. Sci. Comput., 13 (1992), pp. 631–644. [24] R. S. Varga, Gerschgorin and His Circles, Springer-Verlag, Berlin, 2004. [25] P. Wesseling, An Introduction to Multigrid Methods, John Wiley and Sons, Chichester, 1991. [26] P. Wesseling, Principles of Computational Fluids Dynamics, Springer-Verlag, Berlin, 2001. [27] J. H. Wilkinson, The Algebraic Eigenvalue Problem, Oxford University Press, London, 1995. [28] P. M. de Zeeuw, Matrix-dependent prolongations and restrictions in a blackbox multigrid solver, J. Comput. Appl. Math., 3 (1990), pp. 1–27.
SIAM J. SCI. COMPUT. Vol. 30, No. 3, pp. 1596–1612
c 2008 Society for Industrial and Applied Mathematics
COUPLING CONDITIONS FOR NETWORKED SYSTEMS OF EULER EQUATIONS∗ MICHAEL HERTY† Abstract. We are interested in the dynamics of the Euler equations on networks. We study the situation of a single vertex where the dynamics on the connected arcs are given by the polytropic Euler equations. Such problems arise, for example, in the context of gas transport in pipe networks. For the Euler equations in one spatial dimension coupling conditions for the vertex have been proposed recently. We present further conditions and introduce a two-dimensional representation of the vertex. We conduct numerical simulations and compare the obtained results with the theoretical predictions. Key words. Euler system at vertices, coupling conditions for compressible fluids AMS subject classifications. 35L65, 65N99, 76N10 DOI. 10.1137/070688535
1. Introduction. There has been intense research in the area of physical processes on networks governed by hyperbolic partial differential equations in recent years. Most popular is the discussion on traffic flow networks [16, 24, 7, 22], supply chain networks [14, 1, 2, 18], telecommunication networks [13], and gas networks. The coupled systems have been studied analytical [11, 4] as well as numerically [20]. Typically, the analytical treatment is split among the modeling of the dynamics on arcs and at the vertices. The models for the dynamics on arcs in the above mentioned applications are hyperbolic partial differential equations in one space dimension. Depending on the physical context there is a variety of established and well-understood models. The point of interest is therefore the dynamics induced by the vertices (or intersections). Here, the key points are the coupling conditions which have to be imposed and which yield boundary conditions for the one-dimensional models on the arcs. Different conditions depending on the physical process have been prescribed and investigated analytically. Depending on the model, there have been long and ongoing discussions. We only mention a few contributions in the case of traffic flow networks [7, 24, 15, 22] and gas dynamics [11, 10, 9]. Since the precise form of the proposed conditions influences heavily the dynamics on the whole network, there has been an ongoing discussion on this topic. Here, we want to contribute to this discussion a numerical study of coupling conditions for the full Euler equations used to model and predict, for example, gas flow in pipe networks. Recently, a theoretical discussion was been carried out for coupled systems of one-dimensional Euler equations [8]. Therein, coupling conditions have been proposed and the existence of solutions has been proven. No numerical considerations have been carried out. We provide numerical results for the discussion in [8] and other possible coupling condition introduced later. Additionally, we compare these to numerical simulations: The simulations are conducted for a zoomed situation where we consider the intersection not as single point but as a two-dimensional spatial domain. ∗ Received by the editors April 17, 2007; accepted for publication (in revised form) September 20, 2007; published electronically April 18, 2008. The author acknowledges the support of the DFG research program “1253” and the DAAD Vigoni project D/06/19582. http://www.siam.org/journals/sisc/30-3/68853.html † RWTH Aachen, Mathematik. Templergraben 55, D-52056 Aachen, Germany (herty@mathrc. rwth-aachen.de).
1596
COUPLING CONDITIONS FOR EULER EQUATIONS
1597
We simulate the gas flow in the domain and average the final solution to provide a comparison of the values of the conservative variables with the theoretically predicted results. We study this situation in different geometries; of course the one-dimensional model cannot capture the full two-dimensional dynamics. However, we expect and observe qualitatively similar results to validate the proposed coupling conditions. For real-world applications the main objective is to keep the one-dimensional description of the dynamics on the arcs. Therefore, boundary conditions have to be supplied. Our study presents one possibility to obtain such boundary conditions and validate existing conditions. Related work has been conducted in case of traffic flow intersections in [21]. Therein, a multilane model instead of a two-dimensional representation has been considered and simulated. There is also recent work on two-dimensional simulations of a simplified gas dynamic model in [23]. In contrast to [23] we consider here the full Euler system, introduce additional conditions, provide numerical results for the theoretical investigations of [8], present a different two-dimensional modeling, and consider a different setup for the examples as well as new averaging procedures. The paper is organized as follows. In section 2 we recall the coupling conditions for the one-dimensional system of Euler equations as well as some of the theoretical results proposed in [8]. In section 3 we model the corresponding two-dimensional situations and present simulation results in section 4. 2. Theoretical results in the one-dimensional situation. We consider the polytropic Euler equations at a single vertex with n connected arcs. Each arc j is described by a vector νj ∈ R3 \{0}. νj is originating from the vertex and parameterized by x ∈ R+ ; see Figure 1 where on each arc i ∈ {1, . . . , n} the polytropic Euler equations are x ∈ R+ , t > 0, and (1a) (1b) (1c) (1d)
∂t ρi + ∂x (ρi ui ) = 0, ( ) ∂t (ρi ui ) + ∂x u2i ρi + pi = 0, ∂t Ei + ∂x (ui (Ei + pi )) = 0, pi 1 + ρi u i . Ei = γ−1 2
ν
i
i
Fig. 1. One-dimensional setup for illustration of theoretical results.
Here γ is the adiabatic exponent and (1d) is the equation of state. We introduce the
1598
MICHAEL HERTY
entropy Si , the mass flux qi , and the speed of sound ci (2)
Si = log ei − (γ − 1) log ρi , qi = ρi ui , ci =
γpi /ρi .
p Further, the internal energy ρe satisfies ρe = γ−1 . We prescribe initial data (¯ ρi , ρ¯i u ¯i , + ¯ Ei )(x), x ∈ R on each arc. Furthermore, we have to prescribe boundary conditions at x = 0. These boundary conditions will arise from conditions at the intersection. In [8] the proposed conditions are based on the conservation of mass, conservation of energy and equality of the “dynamic pressure” Pi = u2i ρi + pi . Mathematically the proposed conditions imposed at the intersection are as follows: n ' (M) Conservation of mass:
νi qi (t, 0+) = 0, t > 0, i=1 n '
(E) Conservation of energy:
νi (ui (Ei + pi )) (t, 0+) = 0, t > 0,
i=1
(P) Equality of the dynamic pressure: ( 2 ) ( ) ui ρi + pi (t, 0+) = u2j ρj + pj (t, 0+) ∀i = j, t > 0, (S) Entropy is nonincreasing:
n '
νi qi (t, 0+)Si (t, 0+) ≥ 0, t > 0.
i=1
Remark 2.1. In the previous conditions, the pipe exiting a junction is described through a vector νi ∈ R3 \{0} directed along the pipe. The cross section of the pipe equals the norm of νi . There are alternative conditions replacing (P). In [8] it is assumed that the total ' 'n q2 linear momentum i=1 ( ρii + pi )(t, 0+)νi varies along i νi . This condition implies the condition (P ). In Proposition 2.4 we discuss the alternative condition of equal pressure pi (t, 0+) = pj (t, 0+) ∀i = j, t > 0. It can be shown that the system (1) together with the conditions (M), (E), (P) yield a well-posed initial boundary value problem [8, Theorem 3.2]. We recall the assumptions and some assertions for convenience and later comparison with the twodimensional situation. The eigenvalues of the Euler equations are given by (3)
λ1 = u − c, λ2 = u, λ3 = u + c.
− The set of subsonic data is defined as A0 := A+ 0 ∪ A0 , where
(4) (5)
+ + A+ 0 := {(ρ, q, E) ∈ R × R × R : λ1 (ρ, q, E) ≤ 0 and u ≥ 0}, + + A− 0 := {(ρ, q, E) ∈ R × R × R : λ3 (ρ, q, E) ≥ 0 and u ≤ 0}.
¯i )(x), i The following theorem yields existence provided that we have functions (¯ ρi , q¯i , E = 1, . . . , n given such that the conditions (M), (E), (P) are satisfied. The functions are assumed to be independent of time and strictly entropic. We call a state ¯i )n (t, x) strictly entropic, if it satisfies a strict entropy inequality, i.e., if (¯ ρi , q¯i , E i=1 n
νi ¯ qi (t, 0+)S¯i (t, 0+) > 0.
i=1
Theorem 2.2 (see [8, Theorem 3.2]). Assume strictly entropic, time-independent ¯1 )(x) ∈ int A+ and (¯ ¯i )(x) ∈ int A− for i = 2, . . . , n are functions (¯ ρ1 , q¯1 , E ρi , q¯i , E 0 0 given. Assume these functions satisfy (M ), (E), (P ) and are solutions to (1).
COUPLING CONDITIONS FOR EULER EQUATIONS
1599
Then, there exists positive constant δ such that for any initial data (ρi , qi , Ei ) ∈ ¯i ) + L1 (R+ ; R+ × R × R+ ) such that T V (ρi , qi , Ei ) ≤ δ there exists a weak (¯ ρi , q¯i , E entropic solution (ρi , qi , Ei )ni=1 to (1) and a.e. t > 0 the trace of the solution satisfies (M), (E), (P), and (S). The assertion is closely related to classical results: A special case of the previous theorem is the case n = 2, ν1 = −ν2 , and constant initial data. Then, the problem is equivalent to a classical Riemann problem for the polytropic Euler equations. Proposition 2.3 (see [8, Proposition 2.5]). Let n = 2, ν1 = −ν2 = 0, and ¯i ) for i = 1 and i = 2. assume constant initial data (¯ ρi , q¯i , E ˜ Denote by (˜ ρ, q˜, E)(x, t) the solution to the standard Riemann problem for (1) with initial data (6)
˜ (˜ ρ, q˜, E)(x, 0) =
¯2 ), (¯ ρ2 , −¯ q2 , E ¯ (¯ ρ1 , q¯1 , E1 ),
x<0 . x>0
˜ Then the functions (ρ1 , q1 , E1 )(x, t) := (˜ ρ, q˜, E)(x, t) if x > 0, and (ρ2 , q2 , E2 )(x, t) ˜ = (˜ ρ, −˜ q , E)(−x, t) if x < 0 are solutions in the sense of Theorem 2.2 and, in particular, satisfy (M), (E), (P), and (S). Furthermore, the converse also holds true: If ˜ ρ, q˜, E)(x, t) is a solution (ρi , qi , Ei ) is a solution in the sense of Theorem 2.2, then (˜ to a standard Riemann problem with data (6). 2.1. Numerical solution to the coupling conditions. We briefly describe the numerical treatment of the coupling conditions for given initial data Vi,0 := (ρi,0 , qi,0 , Ei,0 ), i = 1, . . . , n. The techniques follow the lines of the proof of the previous theorem [7, 22, 10]. Assume Vi,0 are stationary states satisfying the coupling ¯i ) be a small perturbation of Vi,0 with the ρi , q¯i , E conditions (M ), (E), (P ). Let V¯i = (¯ + − ¯ ¯ additional property V1 ∈ A0 and Vi ∈ A0 , i ≥ 2. The perturbations do not necessarily satisfy the conditions (M), (E), and (P). We construct states Vi satisfying the conditions and such that the following property holds true: The restriction to x > 0 of a solution (ρi , qi , Ei )(x, t) to the equations (1a)–(1d) to a Riemann problem and with initial data (7)
¯ Vi (ρi , qi , Ei )(x, 0) = Vi
x<0 x≥0
consisting of waves of nonnegative speed, only. This property guarantees that we have limx→0 (ρi , qi , Ei (x, t)) = Vi for t > 0. To construct the states V˜i , we consider parameterizations of the 2- and 3-wave curves. In the case of the 2-wave curve we obtain the following: For an arbitrary initial datum V¯i , any state Vi (σ) for |σ| sufficiently small can be connected to Vi by a 2-contact discontinuity if (8)
T 1 2 2 ¯ Vi (σ) = Vi + σ 1, q¯i /¯ ρi , q¯i /¯ ρi =: L2 (σ, V¯i ). 2
Since V¯1 ∈ A+ 0 the contact discontinuity travels with nonnegative speed. In the case of a 3-wave we obtain the following: For any given initial datum V¯i , every state Vi can be connected by either a 3-(Lax-)shock or a 3-rarefaction wave to
1600
MICHAEL HERTY
V¯i if the state Vi = Vi (σ) belongs to the curve given by ⎧ ⎛ ⎞ 1+βσ/p¯i ρ¯i ⎪ σ/ p ¯ +β ⎪ i ⎪ ⎜ ⎟ ⎪ ⎪ ⎜u ⎟ ⎪ √ 2ci √1−σ/p¯i ⎪ ¯ − i ⎪ ⎝ ⎛ ⎞ ⎪ γ(γ−1) 1+βσ/p¯i ⎠ ⎪ ρi ⎨ σ i ⎠ (σ) = (9) L3 (σ; V¯i ) := ⎝u ⎛ ⎞ ⎪ ⎪ pi ( ρ¯σi )1/γ ρ¯i ⎪ ⎪ + , ⎪ ⎪ ⎜ ⎟ ⎪ 2ci ⎪ 1 − ( p¯σi )(γ−1)/2/γ ⎠ ¯i − γ−1 ⎝u ⎪ ⎪ ⎩ σ
for σ ≥ p¯i ,
for σ < p¯i ,
γ+1 [30, 17]. Since V¯i ∈ A0 , we have nonnegativity of the wave speeds in the for β = γ−1 solution to (7). It remains to solve numerically the following nonlinear problem for (σ1 , . . . , σn , τ ) :
(10) 0 = F (τ, σ1 , . . . , σn ) ⎛ ⎞ '
ν1 q1 (L2 (τ ; L3 (σ1 ; V¯1 ))) + 'i>1 νi qi (L3 (σi , V¯i )) ¯ ⎟ ⎜ ν1 (u1 (E1 + p1 ))(L2 (τ ; L3 (σ1 ; V¯1 ))) + i>1 νi (ui (Ei + pi ))(L3 (σi , Vi ))⎟ ⎜ 2 2 ¯ ¯ ⎜ ⎟ ρ + p )(L (τ ; L (σ ; V ))) − (u ρ + p )(L (σ , V )) (u 1 2 3 1 1 2 3 2 2 1 1 2 2 := ⎜ ⎟. ⎜ ⎟ .. ⎝ ⎠ . 2 2 ¯ ¯ (u ρ1 + p1 )(L2 (τ ; L3 (σ1 ; V1 ))) − (un ρn + pn )(L3 (σn , Vn )) 1
Here, qi (L3 (σi , V¯i )) denotes the mass flux qi = ρ˜i u ˜i of the point V˜i (σ) given by (9). The problem (10) has a unique solution [8, Lemma 4.1], and we use Newton’s method to determine the solution (σ1∗ , . . . , σn∗ , τ ∗ ). The desired state at the intersection is finally given by Vi = L3 (σi∗ ; V¯i ) for i ≥ 2 and V1 = L2 (τ ∗ ; L3 (σ1∗ ; V¯1 )). 2.2. Alternative coupling conditions. In the case of simplified gas dynamics other coupling conditions have been introduced replacing (P) [5, 10]. We replace condition (P) by requiring the equality of the pressure at the vertex, i.e., (P ) Equality of the pressure: pi (t, 0+) = pj (t, 0+) ∀i = j, t > 0 In the case of n = 2 both conditions (P ) and (P ) yield the same solutions. We prove well-posedness of the conditions (M ), (E), and (P ) in the case of constant initial ¯ then the following proposition holds. data. Let V¯ := (¯ ρ, q¯, E), − ¯ Proposition 2.4. Given constant states V¯1 ∈ int A+ 0 and Vi ∈ int A0 additionally satisfying u ¯i + c¯i ≥ u1 for i = 2, . . . , n. Then, there exists δ > 0 such that for all initial states Vi , i = 1, . . . , n with Vi − V¯i ∞ ≤ δ, there exists a weak entropic solution (ρi , qi , Ei )(x, t) i = 1, . . . , n to (1), and the trace of the solution satisfies (M ), (E), and (P ) for t > 0. Proof. The proof is similar to the proof of [8, Theorem 2.7]. Using the parametrization (8), (9), and σ = (τ, σ1 , σ2 , . . . , σn ), we solve (11) instead of (10), (11) 0 = F (σ ) ⎛
(11) \quad 0 = F(\vec\sigma) := \begin{pmatrix} \nu_1\,q_1(L_2(\tau; L_3(\sigma_1; \bar V_1))) + \sum_{i>1} \nu_i\,q_i(L_3(\sigma_i, \bar V_i)) \\ p_1(L_2(\tau; L_3(\sigma_1; \bar V_1))) - p_2(L_3(\sigma_2, \bar V_2)) \\ p_2(L_3(\sigma_2; \bar V_2)) - p_3(L_3(\sigma_3, \bar V_3)) \\ \vdots \\ p_{n-1}(L_3(\sigma_{n-1}; \bar V_{n-1})) - p_n(L_3(\sigma_n, \bar V_n)) \\ \nu_1\,(u_1(E_1 + p_1))(L_2(\tau; L_3(\sigma_1; \bar V_1))) + \sum_{i>1} \nu_i\,(u_i(E_i + p_i))(L_3(\sigma_i, \bar V_i)) \end{pmatrix}.
To prove well-posedness it suffices to verify that det D_{\vec\sigma} F(\vec\sigma)\big|_{\vec\sigma=0} is nonzero. We have

\nu_1\lambda_2(\bar V_1)\,\prod_{i=1}^{n}\nu_i\lambda_3(\bar V_i)\;\det\begin{pmatrix} 1 & 1 & 1 & & \ldots & 1 \\ a_0 & a_1 & -a_2 & & & \\ & & a_2 & -a_3 & & \\ & & & & \ddots & \\ A_0 & A_1 & A_2 & & \ldots & A_n \end{pmatrix} = \nu_1\lambda_2(\bar V_1)\,\prod_{i=1}^{n}\nu_i\lambda_3(\bar V_i)\,\Biggl( \prod_{j=2}^{n} a_j\,(A_1 - A_0) + \sum_{i=2}^{n}\bigl(A_i - a_0 A_i + a_0 A_1 - A_0\bigr)\prod_{\substack{j=2\\ j\ne i}}^{n} a_j \Biggr),
where
\nu_1\lambda_2(\bar V_1)\,a_0 = \partial_\tau\,p_1(L_2(\tau; L_3(\sigma_1; \bar V_1))) = 0,
\nu_1\lambda_3(\bar V_1)\,a_1 = \partial_{\sigma_1}\,p_1(L_2(\tau; L_3(\sigma_1; \bar V_1))) = \bar c_1^{\,2},
\nu_j\lambda_3(\bar V_j)\,a_j = \partial_{\sigma_j}\,p_j(L_3(\sigma_j; \bar V_j)) = \bar c_j^{\,2}, \quad j = 2, \ldots, n,
\nu_1\lambda_2(\bar V_1)\,A_0 = \tfrac{1}{2}\,\bar u_1^3,
\nu_j\lambda_3(\bar V_j)\,A_j = \lambda_3(\bar V_j)\,\frac{\bar E_j + \bar p_j}{\bar\rho_j} + \bar u_j\,\bar c_j, \quad j = 1, \ldots, n.

Due to the assumptions on V̄ᵢ, we observe that λ₂(V̄₁), A₁ − A₀, λ₃(V̄ᵢ), and aᵢ, i = 1, ..., n, are positive. Further, Aᵢ − A₀ ≥ ½((ūᵢ + c̄ᵢ)² − u₁²) ≥ 0 for i = 2, ..., n. This yields the assertion.

3. Discussion of the two-dimensional situation. For further investigation of conditions for coupling the Euler equations, we propose a zooming of the local situation. Here, the intersection of several arcs is not considered as a single point but as a two-dimensional domain. The domain depends on the number of attached arcs as well as their directions νᵢ. On the zoomed domain, we solve the polytropic Euler system given by (12a)–(12e):

(12a) \partial_t \rho + \partial_x(\rho u) + \partial_y(\rho v) = 0,
(12b) \partial_t(\rho u) + \partial_x\bigl(u^2\rho + p\bigr) + \partial_y(\rho u v) = 0,
(12c) \partial_t(\rho v) + \partial_x(\rho u v) + \partial_y\bigl(v^2\rho + p\bigr) = 0,
(12d) \partial_t E + \partial_x\bigl(u(E + p)\bigr) + \partial_y\bigl(v(E + p)\bigr) = 0,
(12e) E = \frac{p}{\gamma - 1} + \frac{\rho}{2}\bigl(u^2 + v^2\bigr).
We assign initial data according to the connected arcs; see below. We compute a fully evolved solution and average on subdomains. The resulting values are then compared with the predictions of the coupling conditions of the one-dimensional model. This approach is motivated by the following considerations: The one-dimensional model is typically used for simulation purposes since it is computationally less expensive than the two-dimensional model; see Table 1. Further, it is assumed that the two-dimensional dynamics happen on a much faster timescale compared to the dynamics in the pipes. Hence, we want to use the simulation results for our one-dimensional model. We therefore consider only the region close to the center of the computational domain. Furthermore, we know that in the case of constant initial data the theoretical
results predict constant states at the intersection; see below for more details. Hence, we are interested in time-independent values obtained by two-dimensional computations. This motivates the averaging procedures given below. If one is interested in the transient behavior of the values, then the results of the two-dimensional simulations should be used as follows: For a combination of data the tables of the two-dimensional results yield the state which eventually will be attained by the dynamics at the intersection. Further remarks can be found in section 4.4. Table 1 CPU times in seconds for a two- and a one-dimensional model on a tee-shaped geometry and three connected pipes, respectively.
Grid (2d)    2d CPU [sec]    Grid (1d)    1d CPU [sec]
40 x 40      19.41           40           0.71
80 x 80      172.95          80           2.79
120 x 120    615.51          120          6.28
240 x 240    >15 min         240          24.14
Next, we turn to the setup of the two-dimensional case. With the other cases being similar, we restrict the discussion to the example of an intersection with n = 4 and ν₁ = (1, 0) = −ν₂, ν₃ = (0, 1) = −ν₄.
Fig. 2. Zoom to obtain a two-dimensional geometry for numerical simulations.
We consider the two-dimensional domain as indicated in Figure 2 and set U(x, y, t) = (ρ, ρu, ρv, E)(x, y, t). We assume data V̄ᵢ := (ρ̄ᵢ, q̄ᵢ, Ēᵢ), i = 1, ..., 4, satisfying the assertions of Theorem 2.2 of the one-dimensional model. We use this data to prescribe initial data for (12) as follows. We fix a width w and decompose the computational domain into four subdomains according to the arcs. Then we set

(13) \quad U(x, y, 0) = \begin{cases} (\bar\rho_1, -\bar q_1, 0, \bar E_1), & (x, y) \in \Omega_1 := \{\,x \le 0,\ |y| \le w\,\}, \\ (\bar\rho_2, 0, \bar q_2, \bar E_2), & (x, y) \in \Omega_2 := \{\,|x| \le w,\ y \le 0\,\}, \\ (\bar\rho_3, \bar q_3, 0, \bar E_3), & (x, y) \in \Omega_3 := \{\,x \ge 0,\ |y| \le w\,\}, \\ (\bar\rho_4, 0, -\bar q_4, \bar E_4), & (x, y) \in \Omega_4 := \{\,|x| \le w,\ y \ge 0\,\}. \end{cases}

Note that in the assertion of the theorem the arcs one and four are pointing out of the intersection. According to Proposition 2.3, we change the sign of the initial velocity
in the corresponding domains. We compute at every time step the cell averages Ūᵢ for δ ∈ (0, ∞):

(14) \quad \bar U_i(\delta, t) := \frac{1}{|\Omega_i^\delta|} \int_{\Omega_i^\delta} U(x, y, t)\, dx\, dy, \quad i = 1, \ldots, 4,

on

\Omega_1^\delta := \{(x, y) : -\delta \le x + w \le 0,\ |y| \le w\}, \qquad \Omega_3^\delta := \{(x, y) : 0 \le x - w \le \delta,\ |y| \le w\},
\Omega_2^\delta := \{(x, y) : |x| \le w,\ 0 \le y - w \le \delta\}, \qquad \Omega_4^\delta := \{(x, y) : |x| \le w,\ -\delta \le y + w \le 0\}.

For a given tolerance ε, we simulate the two-dimensional model (12) and (13) until time τ such that
(15) \quad \sum_{i=1}^{4} \Bigl\| \partial_t \sup_{\gamma} \partial_\delta \bar U_i(\tau, \gamma) \Bigr\|_2 \le \epsilon.
The termination criterion (15) is a measure for the current dynamics inside each subdomain and indicates changes in the averaged moments (ρ̃ᵢ, q̃ᵢ, Ẽᵢ). The criterion proposed implies that only the traces of the solution near the intersection are important to the comparison. Exiting waves are lost to the comparison. We use this measure for the following reason: In the following case we can expect a finite termination time τ: For constant data V̄ᵢ the solution in the one-dimensional model according to Theorem 2.2 is a superposition of shocks, contact discontinuities, and rarefaction waves moving with nonnegative velocity. Additionally, the trace of the solution at x = 0 satisfies (M), (E), and (P). In the one-dimensional model we observe waves moving out of the intersection, leaving states Vᵢ(0, t) =: Ṽᵢ independent of time behind. Hence, for sufficiently small δ we average on domains Ωᵢ^δ which are close to the center of our computational domain and consider only traces of the two-dimensional solution. The measure (15) indicates whether or not we have reached stationary values for the averaged moments. Once the termination criterion is satisfied, we denote by Ũᵢ := Ūᵢ(τ, 0) the reference values for comparison with the theoretical predictions Ṽᵢ.

Remark 3.1. The previous modeling can also be applied to different kinds of junctions, and we give numerical results in the case of three and four connected arcs. Related work on two-dimensional models of junctions exists. In the case of traffic intersections, the two-dimensional domain is modeled by several lanes [21]. In the case of fluid dynamics, for a different setting and assumptions, some work has been carried out for the p-system in [23].

4. Numerical results. The conservation law is solved numerically using a second-order finite volume method proposed by Jin and Xin [26] in the conservative variables (ρ, ρu, ρv, E). The relaxation system can be solved without using Riemann solvers or characteristic decompositions. It is based on a nonoscillatory upwind discretization in space and TVD time integration [34, 19]. We use the MUSCL scheme with minmod slope limiter and a strongly stable TVD integration scheme. The case of multiple space dimensions is treated by dimensional splitting. We use a local estimate of the characteristic speeds. The subcharacteristic condition is satisfied locally. The schemes have also been extended to higher-order methods [6] using weighted essentially nonoscillatory (WENO) reconstructions; see, e.g., [3]. For further details we refer to [25, 33, 31, 29]. Of course, other numerical integration schemes can be applied to resolve the dynamics; see, for example, [27, 3, 28, 30] for further references.
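To illustrate how (14) and (15) can be realized on top of such a finite volume solver, the following sketch (ours; it assumes a uniform grid with coordinate arrays X, Y, a hypothetical update function step, and simple difference quotients for the derivatives in (15)) monitors the subdomain averages until they become stationary:

import numpy as np

def cell_averages(U, X, Y, w, delta):
    # Discrete version of (14): mean of the conserved variables U (shape [4, Ny, Nx])
    # over the four subdomains Omega_i^delta adjacent to the intersection.
    masks = [
        (-delta <= X + w) & (X + w <= 0) & (np.abs(Y) <= w),   # Omega_1^delta
        (np.abs(X) <= w) & (0 <= Y - w) & (Y - w <= delta),    # Omega_2^delta
        (0 <= X - w) & (X - w <= delta) & (np.abs(Y) <= w),    # Omega_3^delta
        (np.abs(X) <= w) & (-delta <= Y + w) & (Y + w <= 0),   # Omega_4^delta
    ]
    return np.array([U[:, m].mean(axis=1) for m in masks])     # shape [4, 4]

def run_until_stationary(U, step, X, Y, w, deltas, dt, eps):
    # Advance the 2d solution until a discrete analogue of the monitor (15)
    # drops below eps; deltas is an array of at least two sample values of delta.
    avg = lambda V: np.array([cell_averages(V, X, Y, w, d) for d in deltas])
    prev, t = avg(U), 0.0
    while True:
        U = step(U, dt)                        # one time step of the relaxation scheme
        t += dt
        cur = avg(U)
        # difference quotients in delta and t, sup over the delta samples, cf. (15)
        monitor = np.abs(np.gradient(cur, deltas, axis=0)
                         - np.gradient(prev, deltas, axis=0)).max(axis=0) / dt
        if np.linalg.norm(monitor) <= eps:
            return U, t, cur[0]                # averages at the smallest delta
        prev = cur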
We set γ = 1.4 and use Nx = Ny = 250 grid points in space on a [−1, 1] × [−1, 1] grid (except in the example on grid convergence in section 4.3). We use solid wall boundary conditions. We choose the termination tolerance in (15) of the same order as the discretization width and let δ vary in (0, 1/2). The tolerance for solving the nonlinear system (10) is 10⁻⁸. The computations have been performed on an Intel 2 GHz processor with 1 GB RAM.

4.1. Relation to known results and computational times. We present results for a standard Riemann problem to validate the construction presented in section 2. We denote by V̄ᵢ the initial data and by Ṽᵢ the intermediate state at the intersection. The solutions to (10) are compared for the Lax shock tube test. The initial data V̄₁ = (ρ₁, q₁, E₁) = (0.5, 0, 0.4275) and V̄₂ = (0.445, 0.311, 8.928) clearly do not satisfy (P). The states Ṽᵢ obtained by solving (10) are Ṽ₁ = (+0.3197, +0.5642, +6.0498) and Ṽ₂ = (+0.3197, −0.5642, +6.0498). The parametrization of the wave curves together with Proposition 2.3 yields a solution consisting of a 1-rarefaction wave of negative velocity, followed by a 2-contact of positive velocity and a 3-(Lax-)shock. This coincides with the expected solution to the standard Riemann problem with left data Ṽ₂ and right data Ṽ₁. The predicted wave speeds are in [−2.6326, −1.3538] for the rarefaction and s₂ = +1.7645 and s₃ = +2.3234 for the contact and the shock, respectively.

We show that even in the case of a single pipe-to-pipe intersection a full two-dimensional simulation is computationally expensive. We compare the computational times for the two-dimensional simulation on a tee-shaped geometry with a one-dimensional simulation on three connected pipes in Table 1. The initial data on the pipes is taken from Table 6, data ID 2. The time horizon is T = 0.5. The results indicate that it is not desirable to simulate a complex network by means of a two-dimensional model. We therefore aim to verify one-dimensional coupling conditions by sample calculations for common two-dimensional geometries of intersections. The values obtained by these simulations can be used as reference values for simulations of one-dimensional models.

4.2. Dependence on the initial data. Intersections with three connected arcs are most common when considering applications in gas pipeline systems [12, 32]. Typical fittings between pipes are tee-shaped and are hence modeled by a junction with ν₁ = (0, −1), ν₂ = (−1, 0), and ν₃ = (1, 0).

Perturbations of steady-state solutions. We consider steady data V̄ᵢ: We have ūᵢ = 0 and p̄ᵢ = 1 for i = 1, 2, 3. The densities ρ̄ᵢ are chosen as random numbers in (0, 1). We now study two possible perturbations: At first, we consider a 1-rarefaction wave of negative velocity on arc one hitting the vertex. We report the final states obtained by the coupling conditions (M), (E), (P) and (M), (E), (P') after interaction of the wave with the vertex as well as the averaged values of the numerical simulation as discussed in section 3. The results are summarized in Table 3. Qualitatively the results are similar, and densities, velocities, and energies are of the same order of magnitude. We observe also quantitatively similar results for the final states on arc number three. Significant differences occur for the final values in the case of the first arc in some examples, e.g., (ID 1), (ID 2), and (ID 3). These examples correspond to comparatively large initial densities. The difference can be due
to the fact that the perturbation is generated on the first arc and therefore we can expect more complex dynamics on this arc.

Table 2
Initial densities and perturbations in the case of a 1-rarefaction wave on arc one. Further initial data is ūᵢ = 0 and p̄ᵢ = 1.

                  ID 1      ID 2      ID 3      ID 4      ID 5
Initial densities:
ρ1              +0.9218   +0.4057   +0.4103   +0.1389   +0.6038
ρ2              +0.7382   +0.9355   +0.8936   +0.2028   +0.2722
ρ3              +0.1763   +0.9169   +0.0579   +0.1987   +0.1988
Perturbation:
ρ1              +0.8588   +0.3780   +0.3822   +0.1294   +0.5625
q1              +0.0744   +0.0494   +0.0496   +0.0289   +0.0602
E1              +2.2673   +2.2673   +2.2673   +2.2673   +2.2673
Table 3
Final states in (ρ, q, E)ᵀ for perturbations (ID 1 to ID 5 as given by Table 2) of a steady solution by a 1-rarefaction wave, reported for arcs one and three only. Values in the rows (P) and (P') correspond to solutions satisfying conditions (M), (E), (P) and (M), (E), (P'), respectively. The row (2d) corresponds to values of the 2d simulation as explained in section 3. The column labels ID j refer to the initial data used.

Arc  Cplg.  ID 1                         ID 2                         ID 3                         ID 4                         ID 5
1    (P)    (+0.3482, +0.0455, +2.3847)  (+0.8652, +0.1204, +2.2875)  (+0.2191, +0.0423, +2.3802)  (+0.1892, +0.0501, +2.3130)  (+0.2243, +0.0359, +2.3810)
1    (P')   (+0.3496, +0.0468, +2.393)   (+0.8733, +0.1299, +2.3051)  (+0.2204, +0.0440, +2.3931)  (+0.1907, +0.0531, +2.3282)  (+0.2251, +0.0368, +2.3890)
1    (2d)   (+0.6674, +0.0817, +2.3805)  (+0.6738, +0.0901, +2.2861)  (+0.3401, +0.0536, +2.3787)  (+0.1621, +0.0411, +2.3082)  (+0.4523, +0.0677, +2.3651)
3    (P)    (+0.1709, −0.0149, +2.3940)  (+0.8667, −0.0599, +2.3127)  (+0.0561, −0.0086, +2.3939)  (+0.1891, −0.0249, +2.3330)  (+0.1925, −0.0165, +2.3896)
3    (P')   (+0.1707, −0.0154, +2.3910)  (+0.8627, −0.0646, +2.2979)  (+0.0560, −0.0089, +2.3894)  (+0.1884, −0.0264, +2.3226)  (+0.1923, −0.0170, +2.3867)
3    (2d)   (+0.1829, −0.0108, +2.2744)  (+0.8238, −0.0422, +2.1989)  (+0.0804, −0.0067, +2.2730)  (+0.1797, −0.0173, +2.2191)  (+0.1862, −0.0113, +2.2711)
Second, we consider perturbations which introduce two 1-shock waves of negative velocity approaching on arcs two and three. Again, we report the final states after interaction with the vertex for both the theoretical conditions as well as the results of the two-dimensional numerical simulation. The final states are given in Table 5. As in the previous example, the order of magnitude of the values is the same. The perturbation has been generated on arcs two and three, and therefore we observe significant differences in the absolute final values of the conditions (P), (P'), and the numerical results. Additionally, the results on arc one differ, which can be due to the influence of the geometry as well as to the more complex two-dimensional dynamics. Of course, the complex two-dimensional dynamics cannot be fully captured by the one-dimensional models, and we can only verify whether the qualitative behavior coincides. Here, we observe similar trends and the persistence of qualitative features: For example, the sign of the velocities is the same in the numerics as well as in the theoretically predicted results. Further, the order of magnitude of the values is the same. Trends also remain, and arcs with higher values for the density in the theory correspond to
Table 4
Initial data and perturbation by a 1-shock wave on arcs two and three. Further initial data is ūᵢ = 0 and p̄ᵢ = 1.

                  ID 1      ID 2      ID 3      ID 4      ID 5
Initial densities:
ρ1              +0.8462   +0.6721   +0.6813   +0.5028   +0.3046
ρ2              +0.5252   +0.8381   +0.3795   +0.7095   +0.1897
ρ3              +0.2026   +0.0196   +0.8318   +0.4289   +0.1934
Perturbation:
ρ1              +0.5844   +0.9327   +0.4223   +0.7896   +0.2111
q1              −0.1032   −0.1304   −0.0877   −0.1199   −0.0620
E1              +2.9130   +2.9130   +2.9130   +2.9130   +2.9130
Table 5
Final states in (ρ, q, E)ᵀ for perturbations (ID 1 to ID 5 as given by Table 4) of a steady solution by two 1-shock waves, reported for arcs one and three only. Values in rows (P) and (P') correspond to solutions satisfying conditions (M), (E), (P) and (M), (E), (P'), respectively. The row (2d) corresponds to values of the 2d simulation as explained in section 3. The column labels ID j refer to the initial data used.

Arc  Cplg.  ID 1                         ID 2                         ID 3                         ID 4                         ID 5
1    (P)    (+0.6067, +0.1159, +3.0794)  (+0.9499, +0.1780, +3.0078)  (+0.4396, +0.0953, +3.0899)  (+0.8012, +0.1689, +2.9947)  (+0.2191, +0.0697, +3.0791)
1    (P')   (+0.6130, +0.1223, +3.1080)  (+0.9641, +0.1930, +3.0467)  (+0.4439, +0.1002, +3.1169)  (+0.8139, +0.1840, +3.0352)  (+0.2214, +0.0736, +3.1076)
1    (2d)   (+0.6507, +0.0473, +2.7351)  (+0.8016, +0.0349, +2.6171)  (+0.6022, +0.0740, +2.8421)  (+0.6167, +0.0612, +2.7209)  (+0.2770, +0.0436, +2.7924)
3    (P)    (+0.6138, −0.0580, +3.1127)  (+0.9669, −0.0890, +3.0581)  (+0.4444, −0.0477, +3.1210)  (+0.8166, −0.0844, +3.0484)  (+0.2216, −0.0349, +3.1125)
3    (P')   (+0.6117, −0.0612, +3.0989)  (+0.9609, −0.0965, +3.0322)  (+0.4430, −0.0501, +3.1085)  (+0.8109, −0.0920, +3.0196)  (+0.2209, −0.0368, +3.0984)
3    (2d)   (+0.5360, −0.0925, +2.6059)  (+0.8116, −0.1570, +2.4852)  (+0.4365, −0.0532, +2.6948)  (+0.6762, −0.0762, +2.5931)  (+0.1959, −0.0432, +2.6523)
those in the numerical simulation. The same holds true for velocity and energy. However, both Tables 5 and 3 show that the two-dimensional numerical simulation cannot provide support for either condition (P) or (P'). Typically, the difference between these two conditions is smaller than the difference to any numerical result.

Nonzero initial velocity. We compare two-dimensional numerical solutions and theoretically expected values in the case of nonstationary initial data. We also present graphs of the time-dependent solution near the vertex in the following section. We consider j = 1, ..., 5 different sets of initial data given by V̄ᵢʲ = (ρ̄ᵢʲ, ūᵢʲ, p̄ᵢʲ) with V̄₂ʲ = (ρ̄₂ʲ, −1/4, +1), V̄₃ʲ = (ρ̄₃ʲ, −(1 + (j−1)/4)/3, +1), and V̄₁ʲ = (ρ̄₁ʲ, +1, p̄₁ʲ), where ρ̄ᵢʲ and p̄ᵢʲ are such that (M), (E), and (P) are satisfied. We then perturb each state V̄ᵢʲ by a random vector with norm 1/10. We only report the final states on arcs one and three obtained by solving (10) for the perturbed data and the corresponding numerical results in Table 6. The nonzero initial velocities introduce initial dynamics in the two-dimensional case which are not present in the one-dimensional situation. However, we observe that, depending on the random perturbation of the initial data, we have rather good agreement between the condition (P) and the numerical results. In most cases the difference between the absolute predicted values is less than 5%. Further, trends
and qualitative behavior coincide with the theoretically predicted situation. Hence, the one-dimensional model captures at least the qualitative behavior of a full two-dimensional simulation.

Table 6
Final states in (ρ, q, E)ᵀ for random perturbations (ID j) of nonstationary data on arcs one and three. Values in the rows (P) and (2d) correspond to solutions satisfying conditions (M), (E), (P) and to values of the 2d simulation.
Arc  Cplg.  ID 1                         ID 2                         ID 3                         ID 4                         ID 5
1    (P)    (+0.8489, +0.4589, +2.1854)  (+0.6739, +0.3234, +2.6237)  (+0.8133, +0.1645, +2.6366)  (+0.3694, +0.2335, +2.0721)  (+0.2521, +0.1071, +2.2357)
1    (2d)   (+0.8477, +0.4559, +2.2149)  (+0.7270, +0.3002, +2.5408)  (+0.5634, +0.1249, +2.6633)  (+0.3265, +0.1987, +2.2651)  (+0.1710, +0.0794, +2.4879)
3    (P)    (+0.7740, −0.2000, +2.5782)  (+0.4607, −0.1441, +2.6238)  (+0.3463, −0.0146, +2.7019)  (+0.1279, −0.0325, +2.3508)  (+0.0565, −0.0089, +2.3239)
3    (2d)   (+0.7163, −0.1700, +2.4645)  (+0.4279, −0.1001, +2.6031)  (+0.2937, −0.0440, +2.8306)  (+0.1251, −0.0548, +2.4044)  (+0.0491, −0.0219, +2.4397)
Time-evolution of two-dimensional and theoretical solution. We present the time-evolution for the examples given in Table 6. We simulate a tee-shaped domain as indicated in Figure 3 with (x, y) ∈ [0, 1] × [0, 1]. We use the same setting as in Table 6. We stop the simulation once the termination criterion (15) is satisfied.
Fig. 3. 2d geometry of a tee-shaped area for numerical simulations.
To shorten the presentation, we present the time-evolution of the density ρ(x, y, t) on pipe one only. Pipe one is given by the domain (x, y) ∈ Ω := [1/3, 2/3] × [0, y₀] with y₀ = 2/3. We plot graphs of the following function:

(16) \quad \rho_{2d}(y, t) := 3 \int_{1/3}^{2/3} \rho(x, y, t)\, dx,

and of ρ₂d(y₀, t). The latter function should be compared with the theoretical predictions at the intersection, whereas the former depicts the evolution on the pipe. In Figure 4 we compare ρ₂d(y₀, t) with the theoretically predicted values and with the averaged values as reported in Table 6. We give results for the data ID 1 to ID 5. Due to our termination criterion, the time horizon for the simulations depends on
the initial data considered. However, in all test cases we observe that the function ρ₂d(y₀, t) approaches the value presented in the table. This justifies the presentation of time-independent values in the tables.
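In discrete form, the average (16) is a plain trapezoidal-rule integration in x across pipe one; a minimal sketch (ours, assuming ρ is stored on a uniform grid and i0, i1 are the column indices of x = 1/3 and x = 2/3):

import numpy as np

def rho_2d(rho, x, i0, i1):
    # Discrete version of (16): 3 * integral of rho(x, y, t) over x in [1/3, 2/3],
    # evaluated with the trapezoidal rule; rho has shape [Ny, Nx].
    xs = x[i0:i1 + 1]
    seg = rho[:, i0:i1 + 1]
    dx = np.diff(xs)
    return 3.0 * 0.5 * ((seg[:, :-1] + seg[:, 1:]) * dx).sum(axis=1)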
Fig. 4. Graph of the numerical solution ρ2d (y0 , t) obtained by simulation of the two-dimensional situation over time. The volume averaged two-dimensional results listed in Table 6 are indicated by a cross (X). The theoretical predictions are indicated by a dot. The solutions are given for pipe one with data ID 1–ID 5 clockwise starting with ID 1 in the upper left part of the figure.
Finally, we present results on the evolution of the solution on the pipe. For pipe one the two-dimensional simulation results are averaged in the x-direction to be comparable with the theoretical predictions. To obtain graphs of the theoretically predicted solution we solve the one-dimensional polytropic Euler equations. We present results for the test case ID 3. The example is such that a perturbation moves into the vertex, interacts, and leaves the state reported in Table 6 behind. We denote by (ρₐ, qₐ, Eₐ)(y, t) the solution with initial data (17). The vertex is located at y = 0, and the initial and perturbed data is taken as in ID 3.

(17) \quad (\rho_a, q_a, E_a)(y, 0) = \begin{cases} (+0.8133, +0.1645, +2.6366), & y \le 0, \\ (+0.3529, +0.3529, +1.9412), & 0 \le y \le 1/3, \\ (+0.5104, +0.1822, +2.9904), & y \ge 1/3. \end{cases}

We report the graphs of the densities ρₐ(y, t) and ρ₂d(y, t) in Figure 5. Considering the right part of the figure, we observe the perturbation moving towards the vertex. This is expected from the setup of the test case. Then, it interacts with the other states at the intersection (not shown) and a wave moves out of the vertex again. The theoretically predicted state remains as a constant state at y = 0. We observe a qualitatively similar behavior in the left part of the figure. However, in the two-dimensional case the interaction generates a more complex wave pattern. Hence, we cannot expect to have a constant state at y = 0 once all interactions have taken place. Instead we observe ongoing wave interactions along the pipe. At the vertex the solution approaches a constant value as already seen in Figure 4.
Fig. 5. Solution ρ2d (y, t) and ρa (y, t) in the left and right part of the picture, respectively. The vertex is located at y = 0.
4.3. Dependence on the geometry. We present results on the vertex introduced in section 3 with ν₁ = (−1, 0) = −ν₃ and ν₂ = (0, 1) = −ν₄. We present an example on the grid convergence of the numerical simulations of the two-dimensional domain and an example for the values at the vertex obtained by conditions (P), (P'), and by the numerical simulations, respectively.

Grid convergence example. We simulate the dynamics on the geometry of Figure 2 with initial data V̄ᵢ = (ρ̄ᵢ, ūᵢ, p̄ᵢ) = (1, 0, 3) for i = 2, ..., 4 and with V̄₁ = (0.2879, 0.8671, 2.4573) on three different meshes with Nx = Ny ∈ {80, 160, 320}. We calculate up to the time t = +2 and present results on the cross sections of density and velocities in Figure 6: The center of the intersection is at (x, y) = (0, 0) and the width w of each arc is w = 1/4. We observe a qualitatively similar behavior for all mesh sizes. For decreasing grid size the results are also quantitatively roughly similar, which justifies the chosen number of discretization points of Nx = Ny = 250 for the previous calculations.

Perturbation of steady-state solutions. We consider a random perturbation of the following data: V̄ᵢ = (ρ̄ᵢ, q̄ᵢ, Ēᵢ) = (+1, +0, +7.5) for i = 1, ..., 4. This data clearly satisfies (M), (E), (P) and (M), (E), (P'). The perturbed initial data is given by V₁ = (+0.8672, +0.2497, +6.1793), V₂ = (+1.0950, +0.0231, +7.5607), V₃ = (+1.0486, +0.0891, +7.5762), and V₄ = (+1.0456, +0.0019, +7.5821). We report the final states at the vertex for the corresponding coupling conditions in Table 7. We also report the wave speeds in the solution to the coupling conditions (P) and (P'), respectively. In both cases the solution is a superposition of a 3-shock followed by a 2-contact discontinuity on arc one and 3-rarefaction waves on the arcs j = 2, 3, 4. As already observed in the cases of three connected arcs, the numerical results are quite similar to the theoretical predictions. Arcs having higher densities in case of the conditions (P) or (P') correspond to arcs of higher density in the numerical simulations. The sign of the velocities as well as their order of magnitude are the same. The energy in the numerical simulations is roughly 5% larger compared with the theory. The densities are roughly 4% smaller than the theoretical results suggest.

4.4. Discussion of the results. The previous computations do not provide evidence for either the equal momentum or the equal pressure assumption. The graphs of
Fig. 6. Cross section of density ρ(x, 0, t) and ρ(0, y, t) for different mesh sizes in the top part. Cross section of the velocities u(x, 0, t) and v(0, y, t) in the bottom part.
Table 7
Final states in (ρ, q, E)ᵀ and wave speeds for random perturbations of stationary data in the case of four connected arcs. Values in the rows (P), (P'), and (2d) correspond to solutions satisfying conditions (M), (E), (P); conditions (M), (E), (P'); and to values of the 2d simulation, respectively. The two values for the wave speeds on arc one correspond to the 3-shock and the 2-contact discontinuity, respectively. For the 3-rarefaction waves on the other arcs we report the extremal wave speeds.

Cplg.  Arc 1                          Arc 2                          Arc 3                          Arc 4
(P)    (+0.9490, +0.3504, +6.5677)    (+1.0134, −0.1317, +6.7922)    (+0.9727, −0.0632, +6.8182)    (+0.9637, −0.1554, +6.7763)
       speeds +2.3291 / +0.3692       speeds (+1.8061, +1.9874)      speeds (+1.9160, +2.0960)      speeds (+1.8212, +2.0170)
(P')   (+0.9818, +0.4036, +6.7780)    (+1.0039, −0.1487, +6.7061)    (+0.9603, −0.0868, +6.6990)    (+0.9567, −0.1681, +6.7098)
       speeds +2.3549 / +0.4111       speeds (+1.7843, +1.9874)      speeds (+1.8855, +2.0960)      speeds (+1.8040, +2.0170)
(2d)   (+0.9168, +0.2932, +6.7250)    (+0.9874, −0.1448, +6.9210)    (+0.9475, −0.0478, +6.9391)    (+0.9605, −0.1287, +6.9221)
the solutions indicate that even in the two-dimensional case the solution approaches a constant value close to the vertex. The differences in the final values of the two-dimensional situation are partially due to geometry effects which are not covered in the one-dimensional case. However, we note that quantitatively the obtained values are of the same order as the values obtained by both sets of coupling conditions. This suggests that the “correct” coupling conditions cannot be very different from the conditions currently imposed. We give some possibilities to modify the coupling conditions in order to introduce the 2d effects: We numerically determine loss factors fᵢⱼ depending on the inflow, outflow, and geometry data by, e.g., taking the absolute difference of theoretical prediction and numerical simulation. A modified coupling condition obtained by 2d simulations would then, for example, replace (P) or
(P'), respectively, by a condition

(P_{2d}) \quad p_i(t, 0+) = p_j(t, 0+) - f_{ij}.
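As an illustration only (our sketch, not a calibration procedure from the paper), loss factors consistent with (P_{2d}) could be read off from the averaged two-dimensional pressures at the vertex:

import numpy as np

def loss_factors(p_2d):
    # f_ij = p_j - p_i from the averaged 2d vertex pressures, so that the
    # condition (P_2d) holds exactly for the calibrating data set.
    p = np.asarray(p_2d, dtype=float)
    return p[None, :] - p[:, None]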
In the case of water networks and gas networks governed by the isothermal Euler equations, such a modeling has been proposed in several publications in engineering science; see, e.g., [32, 35]. Another possibility would be to replace conditions (P) or (P') by introducing, e.g., a polynomial function of the inflow variables interpolating the available two-dimensional pressure data for the different pipes. Such an approach has not yet been carried out. Recalling again the results of Table 1, we observe that a full two-dimensional simulation of a pipeline is clearly not desirable and up to now computationally too expensive.

5. Summary. We consider coupling conditions for vertices where the polytropic Euler equations govern the dynamics on arcs. We introduce an additional coupling condition and review recently proposed conditions. We introduce a two-dimensional model for vertices using a local zooming. We present numerical results for coupling conditions for the one-dimensional Euler equations as well as for the two-dimensional model. In most cases the numerical results of the two-dimensional model of the vertex coincide well with the expected theoretical results. However, it is not possible to use the numerical results to distinguish between the different theoretically imposed coupling conditions.

REFERENCES

[1] D. Armbruster, P. Degond, and C. Ringhofer, A model for the dynamics of large queuing networks and supply chains, SIAM J. Appl. Math., 66 (2006), pp. 896–920.
[2] D. Armbruster, D. Marthaler, and C. Ringhofer, Kinetic and fluid model hierarchies for supply chains, Multiscale Model. Simul., 2 (2003), pp. 43–61.
[3] D. S. Balsara and C. W. Shu, Monotonicity preserving weighted essentially non-oscillatory schemes with increasingly high order of accuracy, J. Comput. Phys., 160 (2000), pp. 405–452.
[4] M. K. Banda, M. Herty, and A. Klar, Coupling conditions for gas networks governed by the isothermal Euler equations, Netw. Heterog. Media, 1 (2006), pp. 41–56.
[5] M. K. Banda, M. Herty, and A. Klar, Gas flow in pipeline networks, Netw. Heterog. Media, 1 (2006), pp. 41–56.
[6] M. K. Banda and M. Seaid, Higher-order relaxation schemes for hyperbolic systems of conservation laws, J. Numer. Math., 13 (2005), pp. 171–196.
[7] G. M. Coclite, M. Garavello, and B. Piccoli, Traffic flow on road networks, SIAM J. Math. Anal., 36 (2005), pp. 1862–1886.
[8] R. M. Colombo and C. Mauri, Euler system at a junction, J. Hyperbolic Differ. Equ., to appear.
[9] R. M. Colombo and M. Garavello, On the Cauchy problem for the p-system at a junction, SIAM J. Math. Anal., to appear.
[10] R. M. Colombo and M. Garavello, On the p-system at a junction, Contemp. Math. 426, AMS, Providence, RI, 2006, pp. 193–211.
[11] R. M. Colombo and M. Garavello, A well-posed Riemann problem for the p-system at a junction, Netw. Heterog. Media, 1 (2006), pp. 495–511.
[12] Crane Valve Group, Flow of fluids through valves, fittings and pipes, Crane technical paper 410, Stamford, CT, 1998.
[13] C. D'Apice and R. Manzo, Calculation of predicted average packet delay and its application for flow control in data network, J. Inform. Optim. Sci., 27 (2006), pp. 411–423.
[14] C. D'Apice and R. Manzo, A fluid dynamic model for supply chains, Netw. Heterog. Media, 1 (2006), pp. 379–389.
[15] M. Garavello and B. Piccoli, Source-destination flow on a road network, Commun. Math. Sci., 3 (2005), pp. 261–283.
[16] M. Garavello and B. Piccoli, Traffic flow on a road network using the Aw–Rascle model, Comm. Partial Differential Equations, 31 (2006), pp. 243–275.
[17] E. Godlewski and P.-A. Raviart, Numerical Approximation of Hyperbolic Systems of Conservation Laws, Appl. Math. Sci. 118, Springer-Verlag, New York, 1996.
[18] S. Göttlich, M. Herty, and A. Klar, Network models for supply chains, Commun. Math. Sci., 3 (2005), pp. 545–559.
[19] S. Gottlieb, C.-W. Shu, and E. Tadmor, Strong stability-preserving high-order time discretization methods, SIAM Rev., 43 (2001), pp. 89–112.
[20] M. Herty and A. Klar, Modelling, simulation, and optimization of traffic flow networks, SIAM J. Sci. Comput., 25 (2003), pp. 1066–1087.
[21] M. Herty and A. Klar, Simplified dynamics and optimization of large scale traffic networks, Math. Models Methods Appl. Sci., 14 (2004), pp. 579–601.
[22] M. Herty and M. Rascle, Coupling conditions for a class of second-order models for traffic flow, SIAM J. Math. Anal., 38 (2007), pp. 595–616.
[23] M. Herty and M. Seaid, Simulation of transient gas flow at pipe-to-pipe intersections, Internat. J. Numer. Methods Fluids, to appear.
[24] H. Holden and N. H. Risebro, A mathematical model of traffic flow on a network of unidirectional roads, SIAM J. Math. Anal., 26 (1995), pp. 999–1017.
[25] S. Jin, Runge–Kutta methods for hyperbolic conservation laws with stiff relaxation terms, J. Comput. Phys., 122 (1995), pp. 51–67.
[26] S. Jin and Z. Xin, The relaxation schemes for systems of conservation laws in arbitrary space dimensions, Comm. Pure Appl. Math., 48 (1995), pp. 235–276.
[27] A. Kurganov and D. Levy, A third-order semidiscrete central scheme for conservation laws and convection-diffusion equations, SIAM J. Sci. Comput., 22 (2000), pp. 1461–1488.
[28] A. Kurganov and E. Tadmor, New high-resolution central schemes for nonlinear conservation laws and convection-diffusion equations, J. Comput. Phys., 160 (2000), pp. 241–282.
[29] C. Lattanzio and D. Serre, Convergence of a relaxation scheme for hyperbolic systems of conservation laws, Numer. Math., 88 (2001), pp. 121–134.
[30] R. J. LeVeque, Finite Volume Methods for Hyperbolic Problems, Cambridge University Press, Cambridge, New York, 2002.
[31] H. L. Liu and G. Warnecke, Convergence rates for relaxation schemes approximating conservation laws, SIAM J. Numer. Anal., 37 (2000), pp. 1316–1337.
[32] Mapress GmbH & Co. KG, mapress pressfitting system, Technical Report, www.mapress.de, Langenfeld, Germany, 2002.
[33] L. Pareschi and G. Russo, Implicit-explicit Runge–Kutta schemes for stiff systems of differential equations, in Recent Trends in Numerical Analysis, Adv. Theory Comput. Sci. 3, L. Brugnano and D. Trigiante, eds., 2001, pp. 269–288.
[34] C. W. Shu and S. Osher, Efficient implementation of essentially nonoscillatory shock-capturing schemes, J. Comput. Phys., 77 (1988), pp. 439–471.
[35] F. M. White, Fluid Mechanics, McGraw–Hill, New York, 2002.
SIAM J. SCI. COMPUT. Vol. 30, No. 3, pp. 1613–1633
© 2008 Society for Industrial and Applied Mathematics
MICROWAVE TOMOGRAPHY USING TOPOLOGY OPTIMIZATION TECHNIQUES∗ EDDIE WADBRO† AND MARTIN BERGGREN‡ Abstract. Microwave tomography is a technique in which microwaves illuminate a specimen, and measurements of the scattered electrical field are used to determine and depict the specimen’s dielectric and conductive properties. This article presents a new method to perform the reconstruction. The reconstruction method is illustrated by assuming time harmonic scattering in two space dimensions in a setup tailored for medical applications. We prove that the resulting constrained nonlinear least-squares problem admits a solution. The governing Helmholtz equation is discretized by using the finite-element method, and the dielectric properties are allowed to attain different values at each element within a given region. The reconstruction algorithm uses methodologies borrowed from topology optimization of linearly elastic structures. Numerical examples illustrate the reconstruction method in a parameter range typical for human tissue and for the challenging case where the size of the object is in the same order as the wavelength. A reasonable estimate of the dielectric properties is obtained by using one observation per 20 unknowns when the permittivity is allowed to vary continuously within a given interval. Using a priori information that the permittivity attains only certain values results in a good estimate and a sharp image. As opposed to topology optimization for structures, there is no indication of mesh dependency and checkerboarding when forcing the permittivity to attain discrete values. Key words. microwave tomography, topology optimization, inverse problems, Maxwell equations, Helmholtz equation AMS subject classifications. 65K05, 78A46 DOI. 10.1137/070679879
1. Introduction. In medical tomography, images depicting cross sections of the body are reconstructed from measurements of various quantities. There are many types of tomography, using different physical principles, often relying on illuminating the object with x rays, microwaves, or sound waves. X-ray (CT) and ultrasound tomography typically use wavelengths that are much shorter than the dimensions of the smallest object that is being reconstructed, which allows the use of ray theory in the algorithms for image reconstruction. The algorithms for this case are typically fast and produce images with a very high spatial resolution. However, ray theory does not apply when the wavelength is in the same order as the required resolution, a case that complicates the image reconstruction algorithms and reduces the sharpness of the pictures. Still, the long wavelength case is of high interest in applications, particularly in the case of microwave tomography for medical applications. Microwave tomography uses low-power electromagnetic radiation with wavelengths in the centimeter range to reconstruct centimeter-sized details in the dielectric and conductivity properties of tissue. Microwave tomography has many potential benefits compared to x-ray or ultrasound tomography. Important physiological conditions of living tissues, such as ∗ Received by the editors January 11, 2007; accepted for publication (in revised form) October 25, 2007; published electronically April 18, 2008. Funding for this research has been provided by the Swedish Research Council. http://www.siam.org/journals/sisc/30-3/67987.html † Department of Information Technology, Uppsala University, Box 337, SE–751 05 Uppsala, Sweden (
[email protected]). ‡ Department of Computing Science, Umeå Universitet, S-901 87 Umeå, Sweden (martin.[email protected]).
blood flow reduction and infarction zones in the heart [20] and the presence of malignant tissue [14], are accompanied by changes in dielectric and conductivity properties. Moreover, these changes are detected without a contrast agent, the hardware is simple and inexpensive, and microwave radiation, not being ionizing, is safer and not subject to the strict regulatory requirements that complicate the use of x-ray equipment. In spite of these benefits, microwave tomography is not in clinical practice. An important reason is the difficult image reconstruction problem. The mathematical problem is essentially a nonlinear least-squares problem, which consists of fitting a complex permittivity function in the Maxwell equations to measurements of the electric field. Here we consider only frequency domain methods. A motivation for this restriction is the desire for quantitative imaging. Using a broad spectrum would complicate a quantitative imaging algorithm due to the strong frequency dependence of the permittivity.

Low contrasts, that is, when there are only small spatial variations in the permittivity, admit the so-called Born or Rytov approximations, which lead to a linear least-squares problem. There are fast methods, usually labeled diffraction tomography, for the low-contrast case. Kak and Slaney [15] review these methods as well as tomographic techniques for the ray-theory case. However, biological tissues show high contrasts in permittivity, for instance, due to differences in water content. Therefore, medical applications necessitate methods that can handle the high-contrast case. One such method attempts to solve the nonlinear least-squares problem through a sequence of linear least-squares problems; the solution using the Born or Rytov approximation will then be a first step in a kind of stationary iteration procedure. Chew et al. [6] and Semenov et al. [19] discuss such methods, which seem to extend the contrast range that can be handled but not up to the ranges that are needed for medical applications.

The most straightforward—and computationally intensive—way to completely eliminate the contrast restrictions is simply to attack the original nonlinear least-squares problem with a numerical method, as reported, for instance, by Joachimowicz, Pichota, and Hugonin [13], Meaney et al. [16], Souvorov et al. [22], and most recently Semenov et al. [18]. Most authors assume, as we will do below, symmetries so that the Maxwell equations reduce to the scalar Helmholtz equation. Joachimowicz, Pichota, and Hugonin [13] and Semenov et al. [18] use a volume-integral method to solve the inhomogeneous Helmholtz equation. This method generates dense matrices and will therefore be computationally intensive also in two space dimensions. Meaney et al. [16] use a finite-element method for the inhomogeneous Helmholtz equation within the inversion region, coupled with a boundary element method for the exterior region. Both of the above models assume perfect absorption of outgoing waves and do not account for reflections from antennas or the experimental apparatus. The authors above typically apply Tikhonov regularization and solve the resulting problem with a gradient-based algorithm such as Levenberg–Marquardt. Although we concentrate on methods aimed at determining tissue properties, it is worth mentioning that similar problems are also of high interest in geophysics and nondestructive testing.
Haber, Ascher, and Oldenburg [11] and Newman and Hoversten [17] discuss inversion of dielectric and conductivity properties in the context of geophysics. The subject usually denoted topology optimization has its origin in structural optimization and concerns the problem of determining the best arrangement of material within a given domain. Topology optimization has been the subject of intense research since the ground-breaking paper by Bendsøe and Kikuchi [2], where the method for the
first time was applied to a continuum structure. In a recent monograph [3], Bendsøe and Sigmund present a comprehensive review of topology optimization and its applications. Today topology optimization is a well-established technique for structural design, and for some classes of problems there exists commercial software, for example, from Altair Engineering and FE-Design. More recently, researchers have started to apply similar techniques to problems of wave propagation [7, 8, 12, 21, 24, 25]. The nonlinear least-squares problem associated with microwave tomography fits well within the framework of topology optimization. In fact, a variation of the method we use for topology optimization of acoustic horns [24, 25] is what we apply below. To the best of our knowledge this approach to microwave tomography has not been explored in the literature. We consider methods that attempt to determine the material properties at each point in the domain of interest. An interesting alternative, not considered here, is to track the interfaces between regions by using so-called level-set methods. A recent article by Dorn and Lesselier [9] reviews level-set methods applied to inverse scattering problems.

The paper is organized as follows. Section 2 introduces the problem, specifies the mathematical model of the optimization problem, and contains a proof that there exists a solution to our problem. Section 3 discusses the discretization of the problem and contains the derivations of the gradient expressions needed for the optimization algorithm. The experimental and computational setup is discussed in section 4, and the results of our numerical experiments are reported in section 5. Finally, section 6 contains our conclusions and a discussion of the results.

2. Mathematical modeling and analysis.

2.1. Problem description and governing equations. We consider the arrangement illustrated in Figure 2.1. A metallic, hexagonal-shaped container contains objects with unknown dielectric properties embedded in a saline solution with known dielectric properties. Attached to each side of the container is a waveguide filled with
Fig. 2.1. The problem consists of finding the dielectric properties of unknown objects located inside a metallic hexagonal-shaped container.
Fig. 2.2. A truncated waveguide with width d. The outward directed unit normal at the end of the waveguide is denoted by n, and t is orthogonal to n.
a low-loss material with known dielectric properties. The container, the waveguides, and the unknown objects are assumed to extend infinitely in the direction normal to the plane. Our aim is to depict the unknown objects inside the container and determine their dielectric properties. The electric field vector E is assumed to be governed by the Maxwell equations in the form

(2.1) \quad \nabla\times\Bigl(\frac{1}{\mu}\,\nabla\times E\Bigr) + \epsilon\,\frac{\partial^2 E}{\partial t^2} + \sigma\,\frac{\partial E}{\partial t} = 0,

where μ is the permeability, σ the conductivity, and ε the permittivity. The relative permittivity ε_r is given by ε_r = ε/ε₀, where ε₀ is the permittivity in free space. We assume that μ ≡ μ₀, the permeability in free space. Seeking time harmonic solutions of (2.1) for the angular frequency ω, using the ansatz E(x, t) = ℜ{Ê e^{iωt}}, where ℜ denotes the real part and i the imaginary unit, results in the reduced wave equation

(2.2) \quad \nabla\times\bigl(\nabla\times \hat E\bigr) - k_0^2\Bigl(\epsilon_r - i\sqrt{\mu_0/\epsilon_0}\,\frac{\sigma}{k_0}\Bigr)\hat E = 0,

where k₀ = ω/c is the free space wave number and c = (ε₀μ₀)^{−1/2} is the speed of light in free space. We assume that the electric field is polarized normal to the plane, that is, Ê = (0, 0, u). Since also the geometry and the unknown objects are assumed to possess cylindrical symmetry with respect to the direction normal to the plane, (2.2) reduces to the Helmholtz equation

(2.3) \quad \Delta u + \varepsilon k_0^2 u = 0, \qquad \text{where } \varepsilon = \epsilon_r - i\sqrt{\mu_0/\epsilon_0}\,\frac{\sigma}{k_0}.
The computational domain Ω is depicted in Figure 2.1. The outer ends of the waveguides are denoted by Γ_in^{(m)}, m = 1, 2, ..., 6, and their union by Γ_in. The sides of the container and the waveguides are denoted Γ₀, that is, Γ₀ = ∂Ω \ Γ_in, and consist of perfectly conducting material; hence

(2.4) \quad u = 0 \quad \text{on } \Gamma_0.
The width of the waveguides is selected such that the lowest mode is propagating but all higher order modes are evanescent. We want to be able to set the amplitude of the incoming waves at the end of the truncated waveguides without affecting the amplitudes of the outgoing waves. Consider the wave propagation in a single waveguide illustrated in Figure 2.2. In the waveguide, u satisfies the equation (2.5)
\Delta u + \varepsilon_{\mathrm{wg}}\, k_0^2\, u = 0,
where ε_wg is the (constant) complex permittivity of the material filling the waveguide. Let x denote spatial points and n the unit normal at the end of the waveguide, and let t be a vector perpendicular to n. The solution of (2.5) in a waveguide with width d and zero Dirichlet boundary conditions at the upper and lower sides can be found by separation of variables, which, together with the assumption that all except the first nonplanar mode are evanescent, yields

(2.6) \quad u(x) = \sin\Bigl(\frac{x\cdot t\,\pi}{d}\Bigr)\bigl(C_1 e^{i\hat k\, x\cdot n} + C_2 e^{-i\hat k\, x\cdot n}\bigr),

where C₁ is a constant describing the complex amplitude of the incoming wave at the left end of the waveguide, C₂ is the amplitude of the outgoing wave, and k̂ is the reduced wave number defined by

\hat k^2 = k_0^2\,\varepsilon_{\mathrm{wg}} - \Bigl(\frac{\pi}{d}\Bigr)^2, \qquad -\pi < \arg(\hat k^2) \le 0, \quad -\frac{\pi}{2} < \arg(\hat k) \le 0,

where arg denotes the polar argument. The inequalities above ensure that the lowest mode wave is propagating and establish that k̂ is uniquely defined; a justification of this statement is given in section 4.1. Differentiating expression (2.6) in the n direction results in

(2.7) \quad \frac{\partial u}{\partial n} = n\cdot\nabla u = \sin\Bigl(\frac{x\cdot t\,\pi}{d}\Bigr)\bigl(C_1\, i\hat k\, e^{i\hat k\, x\cdot n} - C_2\, i\hat k\, e^{-i\hat k\, x\cdot n}\bigr).

Combining expressions (2.6) and (2.7) yields

i\hat k\, u + \frac{\partial u}{\partial n} = 2 i\hat k\, \sin\Bigl(\frac{x\cdot t\,\pi}{d}\Bigr)\, C_1 e^{i\hat k\, x\cdot n}.

At the outer end of the waveguide x·n is constant, and thus

i\hat k\, u + \frac{\partial u}{\partial n} = 2 i\hat k\, \sin\Bigl(\frac{x\cdot t\,\pi}{d}\Bigr)\, C_1 e^{i\hat k\, x\cdot n} = 2 i\hat k\, C\, \sin\Bigl(\frac{x\cdot t\,\pi}{d}\Bigr),

where the constant C determines the amplitude of the incoming wave at the end of the waveguide. By imposing the boundary conditions

(2.8) \quad i\hat k\, u + \frac{\partial u}{\partial n} = \begin{cases} 2 i\hat k\, C_m \sin\bigl(\frac{x\cdot t_m\,\pi}{d}\bigr) & \text{on } \Gamma_{\mathrm{in}}^{(m)}, \\ 0 & \text{on } \Gamma_{\mathrm{in}}^{(k)},\ k \ne m, \end{cases}

where t_m is the tangential vector on Γ_in^{(m)}, we set the amplitude of the incoming wave at Γ_in^{(m)} (by adjusting C_m) while ensuring vanishing amplitude of the incoming wave at the ends of the other waveguides and perfect absorption of all outgoing waves.

Remark 1. The assumptions leading to boundary conditions (2.8) are adequate for the present proof-of-concept setup. When modeling an actual experimental configuration, a more realistic antenna model is likely required.

For both the mathematical analysis and the finite-element approximation, it is convenient to work with the governing equations in variational form. Letting V = {v ∈ H¹(Ω) | v = 0 on Γ₀}, the variational form of (2.3) with the boundary conditions given in (2.4) and (2.8) is

(2.9) \quad \text{Find } u \in V \text{ such that } a_\varepsilon(u, v) = \ell_m(v) \ \forall\, v \in V,
where (2.10)
(2.10) \quad a_\varepsilon(u, v) = \int_\Omega \nabla u\cdot\nabla v - k_0^2\int_\Omega \varepsilon\, u v + i\hat k\int_{\Gamma_{\mathrm{in}}} u v,

(2.11) \quad \ell_m(v) = 2 i\hat k\, C_m \int_{\Gamma_{\mathrm{in}}^{(m)}} \sin\Bigl(\frac{x\cdot t_m\,\pi}{d}\Bigr)\, v.
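As a small illustration (our sketch, not from the paper), the reduced wave number k̂ with the branch −π/2 < arg(k̂) ≤ 0 can be computed as follows; for a lossy ε_wg the principal square root already lies on this branch:

import numpy as np

def reduced_wavenumber(k0, eps_wg, d):
    # k_hat^2 = k0^2 * eps_wg - (pi/d)^2 with -pi/2 < arg(k_hat) <= 0, which gives
    # Re{k_hat} >= 0 and Im{k_hat} <= 0 as used in the coercivity estimate below.
    k2 = k0 ** 2 * eps_wg - (np.pi / d) ** 2
    k = np.sqrt(complex(k2))       # principal branch has arg in (-pi/2, pi/2]
    if k.imag > 0:                 # flip in the limiting lossless, evanescent case
        k = -k
    return k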
Remark 2. We will throughout the article, as in expressions (2.10) and (2.11), not explicitly state the symbol for measure in the integrals whenever there is no risk for confusion.

2.2. Optimization problem. The objects with unknown dielectric properties are assumed to be located inside the hexagonal-shaped subregion Ω? located inside the container (Figure 2.3). At the end of each waveguide, there is a device able to radiate microwaves as well as measure the electric field. One at a time, the six sources irradiate the object. For each irradiation, the electric field is observed at the end of each waveguide. Thus, there are six observations per irradiation. In order to obtain a larger number of observations, the container can be rotated, as illustrated in Figure 2.3, at angles θ_l ∈ [0, 60°), l = 1, 2, ..., L, with respect to the fixed region Ω?. The process described above can then be repeated for the new location of the container. Each such rotation results in 36 additional observations.

By using the notation u_n^l for the solution of (2.9) to indicate that the active source is located at Γ_in^{(n)} and the dependence on the rotation angle θ_l, the (complex) mean value of u_n^l at Γ_in^{(m)} is

(2.12) \quad \langle u_n^l \rangle_m = \frac{1}{d}\int_{\Gamma_{\mathrm{in}}^{(m)}} u_n^l.

We assume that we have measurements v_{n,m}^l of ⟨u_n^l⟩_m that are associated with the unknown complex permittivity ε. To find the dielectric properties of the objects in Ω?, we define the space of admissible dielectric properties by
U_r = \{\epsilon_r \in L^\infty(\Omega) \mid 0 < \underline\alpha \le \epsilon_r \le \overline\alpha\},
U_i = \{\epsilon_i \in L^\infty(\Omega) \mid \underline\beta \le \epsilon_i \le \overline\beta < 0\},
U = \Bigl\{\varepsilon \in U_r + iU_i \Bigm| \varepsilon = \varepsilon_{\mathrm{wg}} \text{ in the waveguides},\ \varepsilon = \varepsilon_s \text{ outside the waveguides and } \Omega_?\Bigr\},
Fig. 2.3. The container can be rotated with respect to the fixed region Ω? , in which the objects with unknown dielectric properties are located.
where ε_s corresponds to the known dielectric properties of the saline solution and where α̲, ᾱ, β̲, and β̄ are real constants. The problem of determining ε within Ω? can be formulated as
(2.13) \quad \min_{\varepsilon\in U} J(\varepsilon), \qquad J(\varepsilon) = \sum_{l=1}^{L}\sum_{m=1}^{6}\sum_{n=1}^{6} \bigl|\langle u_n^l\rangle_m - v_{n,m}^l\bigr|^2.
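In code, evaluating (2.13) from computed boundary means and measured data is a one-line reduction; a sketch (ours) under the assumption that both are stored as complex arrays of shape [L, 6, 6]:

import numpy as np

def J(u_mean, v_meas):
    # Objective (2.13): u_mean[l, n, m] holds the computed means <u_n^l>_m and
    # v_meas the measured values; both complex arrays of shape [L, 6, 6].
    return float(np.sum(np.abs(np.asarray(u_mean) - np.asarray(v_meas)) ** 2))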
2.3. Existence of solutions to the optimization problem. In this section we prove that there exists a solution to optimization problem (2.13) for any given set of complex numbers v_{n,m}^l. We first show that the forward problem (2.9) is well-posed. The fact that we have an everywhere lossy material (ℑ{ε} < 0 everywhere) facilitates the proof of well-posedness, since Lemma 2.1 shows that the sesquilinear form a_ε defined in (2.10) will then be coercive on V × V for each ε ∈ U.

Lemma 2.1. There exist c ∈ ℂ and C ∈ ℝ such that C > 0 and ℜ{c a_ε(u, u)} ≥ C‖u‖² for all u ∈ V and ε ∈ U.

Proof. Let c = −β̄ − 2iᾱ. Multiplying expression (2.10) with c yields

\Re\{c\,a_\varepsilon(u,u)\} = \Re\Bigl\{ c\int_\Omega |\nabla u|^2 - c k_0^2\int_\Omega \varepsilon|u|^2 + c\, i\hat k\int_{\Gamma_{\mathrm{in}}} |u|^2 \Bigr\}
= -\overline\beta \int_\Omega |\nabla u|^2 - k_0^2 \int_\Omega \bigl(-\overline\beta\,\Re\{\varepsilon\} + 2\overline\alpha\,\Im\{\varepsilon\}\bigr)|u|^2 + \int_{\Gamma_{\mathrm{in}}} \bigl(2\overline\alpha\,\Re\{\hat k\} + \overline\beta\,\Im\{\hat k\}\bigr)|u|^2.

Since for ε ∈ U, ℜ{k̂} ≥ 0 and ℑ{k̂} ≤ 0, we have

\Re\{c\,a_\varepsilon(u,u)\} \ge -\overline\beta\int_\Omega|\nabla u|^2 - k_0^2\bigl(-\overline\beta\,\overline\alpha + 2\overline\alpha\,\overline\beta\bigr)\int_\Omega|u|^2 = |\overline\beta|\int_\Omega|\nabla u|^2 + k_0^2\,|\overline\beta|\,\overline\alpha\int_\Omega|u|^2 \ge C\|u\|^2,

where C = |β̄| min(1, k₀²ᾱ).

Corollary 2.2. For each ε ∈ U there exists a unique solution to problem (2.9).

Proof. This follows from Lemma 2.1 together with the Lax–Milgram theorem.

To prove that there exists a solution to problem (2.13) we will utilize the following lemmas. The first concerns compact inclusion of Sobolev spaces and follows from Theorem 1.4.3.2 in Grisvard [10].

Lemma 2.3. Let s > s′ ≥ 0, and assume that Ω is an open, bounded, and connected subset of ℝⁿ with a Lipschitz boundary. Then the injection of H^s(Ω) into H^{s′}(Ω) is compact.

The second lemma we need is a trace theorem for domains with Lipschitz boundaries, which follows from Theorem 1.5.1.2 in Grisvard [10].

Lemma 2.4. Let Ω be an open, bounded, and connected subset of ℝⁿ with a Lipschitz boundary ∂Ω. Assuming that s ∈ (1/2, 1], each function u ∈ H^s(Ω) has a well-defined trace γu in the space H^{s−1/2}(∂Ω), and there exists a C > 0 such that

\|\gamma u\|_{H^{s-1/2}(\partial\Omega)} \le C\,\|u\|_{H^s(\Omega)} \quad \forall\, u \in H^s(\Omega).
We now have the tools needed to prove the main theorem of this section.

Theorem 2.5. Problem (2.13) has at least one solution.

Proof. Let {ε_j}_{j=0}^∞ ⊂ U be a minimizing sequence of J. Since U is bounded, there is an element ε* ∈ U and a subsequence, still denoted {ε_j}_{j=0}^∞ ⊂ U, such that ε_j → ε* weakly* in L^∞(Ω). Let l correspond to an arbitrary rotation of the domain, and let n correspond to an arbitrary source location. Define u_{n,j}^l as the solution to problem (2.9) for rotation l and source location n with ε replaced by ε_j.

Lemma 2.6. The sequence {u_{n,j}^l}_{j=0}^∞ is bounded.

Proof. From Lemma 2.1 and Corollary 2.2 it follows that

C\,\|u_{n,j}^l\|^2 \le \Re\{c\,a_{\varepsilon_j}(u_{n,j}^l, u_{n,j}^l)\} = \Re\{c\,\ell_n(u_{n,j}^l)\} \le |c\,\ell_n(u_{n,j}^l)| = |c|\,\Bigl| 2 i\hat k\, C_n \int_{\Gamma_{\mathrm{in}}^{(n)}} \sin\Bigl(\frac{x\cdot t_n\,\pi}{d}\Bigr)\, u_{n,j}^l \Bigr| \le \hat c\,\|u_{n,j}^l\|_{H^{1/2}(\partial\Omega)} \le \hat c\,\|u_{n,j}^l\|,

where the last inequality follows from Lemma 2.4. Thus ‖u_{n,j}^l‖ ≤ ĉ/C.
Since the sequence {u_{n,j}^l}_{j=0}^∞ is bounded, there exists a subsequence, still denoted {u_{n,j}^l}_{j=0}^∞ ⊂ V, and a function u_n^{l*} ∈ V—dependent on the rotation l and source location n—such that u_{n,j}^l → u_n^{l*} weakly in H¹(Ω). Furthermore, the inclusion H¹(Ω) ⊂ H^{1−η}(Ω) is compact for any 0 < η ≤ 1 (Lemma 2.3). Thus we select 0 < η₀ < 1/2 and a further subsequence {u_{n,j}^l} such that u_{n,j}^l → u_n^{l*} strongly in H^{1−η₀}(Ω).

Lemma 2.7. For each rotation l and source location n, the function u_n^{l*} solves problem (2.9) with ε replaced by ε*.

Proof. We consider each term in (2.9) separately. First choose 0 < t < 1/2 − η₀, and let v ∈ V be arbitrary. For the first term of a we have

\int_\Omega \nabla u_{n,j}^l\cdot\nabla v = \int_\Omega \nabla u_n^{l*}\cdot\nabla v + \int_\Omega \nabla(u_{n,j}^l - u_n^{l*})\cdot\nabla v \to \int_\Omega \nabla u_n^{l*}\cdot\nabla v \quad\text{as } j\to\infty,

since ∇u_{n,j}^l → ∇u_n^{l*} weakly in L²(Ω). For the second term of a we use that u_{n,j}^l → u_n^{l*} strongly in H^{1−η₀}(Ω) and ε_j → ε* weakly* in L^∞(Ω), and thus

\int_\Omega \varepsilon_j\, u_{n,j}^l\, v = \int_\Omega \varepsilon^*\, u_n^{l*}\, v + \int_\Omega (\varepsilon_j - \varepsilon^*)\, u_n^{l*}\, v + \int_\Omega \varepsilon_j\, (u_{n,j}^l - u_n^{l*})\, v \to \int_\Omega \varepsilon^*\, u_n^{l*}\, v \quad\text{as } j\to\infty.

For the third term of a,

\Bigl|\int_{\Gamma_{\mathrm{in}}} (u_{n,j}^l - u_n^{l*})\, v\Bigr| \le \|u_{n,j}^l - u_n^{l*}\|_{L^2(\Gamma_{\mathrm{in}})}\,\|v\|_{L^2(\Gamma_{\mathrm{in}})} = \|u_{n,j}^l - u_n^{l*}\|_{L^2(\partial\Omega)}\,\|v\|_{L^2(\Gamma_{\mathrm{in}})} \le \|u_{n,j}^l - u_n^{l*}\|_{H^t(\partial\Omega)}\,\|v\|_{L^2(\Gamma_{\mathrm{in}})} \le C\,\|u_{n,j}^l - u_n^{l*}\|_{H^{t+1/2}(\Omega)}\,\|v\|_{L^2(\Gamma_{\mathrm{in}})},

where the last inequality follows from Lemma 2.4. By making use of the fact that 0 < t < 1/2 − η₀ and u_{n,j}^l → u_n^{l*} strongly in H^{1−η₀}(Ω), we find that for each v

\Bigl|\int_{\Gamma_{\mathrm{in}}} (u_{n,j}^l - u_n^{l*})\, v\Bigr| \le C\,\|u_{n,j}^l - u_n^{l*}\|_{H^{t+1/2}(\Omega)}\,\|v\|_{L^2(\Gamma_{\mathrm{in}})} \to 0 \quad\text{as } j\to\infty;
hence

\int_{\Gamma_{\mathrm{in}}} u_{n,j}^l\, v = \int_{\Gamma_{\mathrm{in}}} u_n^{l*}\, v + \int_{\Gamma_{\mathrm{in}}} (u_{n,j}^l - u_n^{l*})\, v \to \int_{\Gamma_{\mathrm{in}}} u_n^{l*}\, v \quad\text{as } j\to\infty.
By combining the limits for the three terms and making use of the fact that v is arbitrary, it follows that

\ell_n(v) = a_{\varepsilon_j}(u_{n,j}^l, v) \to a_{\varepsilon^*}(u_n^{l*}, v),

and hence a_{ε*}(u_n^{l*}, v) = ℓ_n(v) for all v ∈ V.

Let m correspond to an arbitrary receiver position. Following the same arguments as for the third term of a in Lemma 2.7, with v replaced by χ_{Γ^{(m)}} (the characteristic function of Γ^{(m)}), yields

\bigl|\langle u_{n,j}^l\rangle_m - \langle u_n^{l*}\rangle_m\bigr| = \Bigl|\frac{1}{d}\int_{\Gamma^{(m)}} (u_{n,j}^l - u_n^{l*})\Bigr| = \Bigl|\frac{1}{d}\int_{\Gamma} \chi_{\Gamma^{(m)}}\, (u_{n,j}^l - u_n^{l*})\Bigr| \le \hat C\,\|u_{n,j}^l - u_n^{l*}\|_{H^{t+1/2}(\Omega)} \to 0 \quad\text{as } j\to\infty;

hence

\bigl|\langle u_{n,j}^l\rangle_m - v_{n,m}^l\bigr|^2 \to \bigl|\langle u_n^{l*}\rangle_m - v_{n,m}^l\bigr|^2 \quad\text{as } j\to\infty.
The objective function J in optimization problem (2.13) is a finite sum of terms of the type |⟨u_n^l⟩_m − v_{n,m}^l|², and thus

\inf_{\varepsilon\in U} J(\varepsilon) = \lim_{j\to\infty} J(\varepsilon_j) = \lim_{j\to\infty} \sum_{l=1}^{L}\sum_{m=1}^{6}\sum_{n=1}^{6} \bigl|\langle u_{n,j}^l\rangle_m - v_{n,m}^l\bigr|^2 = \sum_{l=1}^{L}\sum_{m=1}^{6}\sum_{n=1}^{6} \bigl|\langle u_n^{l*}\rangle_m - v_{n,m}^l\bigr|^2 = J(\varepsilon^*);
that is, ε* is a solution of problem (2.13).

In some situations one might have a priori information that the dielectric properties in the region Ω? lie in a certain range. In these situations, the problem of interest is problem (2.13) with U replaced by Û defined by

(2.14) \quad \hat U = \Bigl\{\varepsilon \in U \Bigm| \underline\alpha \le \underline\epsilon_r \le \Re\{\varepsilon\} \le \overline\epsilon_r \le \overline\alpha \ \text{a.e. in } \Omega_?,\ \ \underline\beta \le \underline\epsilon_i \le \Im\{\varepsilon\} \le \overline\epsilon_i \le \overline\beta \ \text{a.e. in } \Omega_? \Bigr\}.

Since Û is a bounded subset of U, the arguments in Theorem 2.5 hold with U replaced by Û. We thus have the following corollary.

Corollary 2.8. There exists at least one solution to problem (2.13) with U replaced by Û.

Remark 3. If the target distribution v_{n,m}^l is such that J(ε*) > 0, it is possible to argue, similarly to Dobson [7], that ε* attains at least one of the bound constraints in the admissible class U (or Û) for almost every x ∈ Ω?.

Remark 4. Theorem 2.5 asserts the existence of a solution to problem (2.13), regardless of the quality of the supplied data v_{n,m}^l. A different question, not answered by Theorem 2.5, is whether the solution is an acceptable estimate of an actual permittivity distribution. The quality of the estimated permittivity will depend on particulars, such as the number of observations and the noise levels.
1622
EDDIE WADBRO AND MARTIN BERGGREN
2.4. Penalization. Here we assume that we have a priori information that the dielectric properties attain only the predetermined values {ε1 , ε2 }, εi ∈ C, i = 1, 2, in the region Ω? . In this case we are interested in solving problem (2.13) with U replaced defined by by U, C B U = ε ∈ U& ε ∈ {ε1 , ε2 } a.e. in Ω? , where U& is defined as in (2.14). Unfortunately, the above proof of Theorem 2.5 The rub is that there is does not hold for problem (2.13) with U replaced by U. ∗ no guarantee that the weak* limit ε , for a given minimizing sequence {εj } of J, is We will nevertheless attempt to solve problem (2.13) numerically an element in U. Such a problem can be viewed as a with a discrete-valued permittivity class U. nonlinear integer programming problem. Large-scale integer programming problems are computationally expensive, so we choose instead to consider the problem 2 l uln m − vn,m + Jp (ε), (2.15) min J(ε), J(ε) = ε∈U&
l,m,n
where the second term is the penalty function Jp (ε) = γ |ε − ε1 |2 |ε − ε2 |2 , Ω?
Moreover, in which γ is a penalty constant. The penalty function is zero for all ε ∈ U. if γ > 0, then the penalty function is positive for all ε ∈ U. That is, the penalty function promotes the values ε1 and ε2 , while the intermediate values are penalized. Moreover, assuming that there exists an optimal solution ε∗ to problem (2.13) with then for sufficiently large γ this solution is also an optimum for U replaced by U, problem (2.15). However, it also holds that any ε ∈ U is a local minimum to (2.15) if γ is sufficiently large. Therefore, care is needed when increasing the penalty constant. Remark 5. The above strategy is a generalization to the present application of the penalty strategy used to impose discrete-valued designs in topology optimization [1, 4, 24]. The strategy is limited to the case of two materials and requires accurate estimates of the dielectric properties. In practice, available a priori information may be of a more general type. A typical example is that it is known that the sample consists of a number of materials each with dielectric properties varying within a certain range. We are presently investigating strategies to utilize such a priori information. Note that the penalization adds a concave term to the objective and thus acts in a nonstabilizing manner. Typically the addition of a penalization term needs to be accompanied with a regularizing procedure such as the addition of a filter or a constraint on the total variation of the design. However, as the numerical results show, such a procedure is not required in this case. 3. The discrete setting. 3.1. Discretization. We solve variational problem (2.9) numerically with the finite-element method. The region Ω is triangulated by using the type of mesh depicted in Figure 3.1. The region Ω? is triangulated by using equilateral triangles. When rotations are used to increase the number of observations, a separate mesh is created for each rotation angle by keeping the mesh in Ω? fixed and creating a new unstructured part to fit the revolved container.
MICROWAVE TOMOGRAPHY USING TOPOLOGY OPTIMIZATION
1623
Fig. 3.1. To the left, a hybrid mesh used in the computations. The mesh is a member of mesh group II in Table 4.1. To the right, a closeup on the structured central part of the mesh.
In our numerical experiments we use first- or second-order Lagrangian elements. We thus define Vh to be the space of all continuous, complex-valued functions being linear (respectively, quadratic) on each element and zero along the sides of the container and the waveguides. The function ε is approximated with a function εh ∈ Uh , where Uh is the set of all functions in U being constant on each element. The discretized version of variational problem (2.9) reads: Find uh ∈ Vh such that aεh (uh , vh ) = m (vh ) ∀ vh ∈ Vh .
(3.1)
The discretized version of optimization problem (2.13) is min Jh (εh ),
εh ∈Uh
2 l uh ln m − vn,m ,
Jh (εh ) =
l,m,n
where uh ln is the solution to (3.1) with source location n and rotation l. The discretized version of problem (2.15) is (3.2)
min Jh (εh ),
εh ∈U&h
Jh (εh ) =
2 l uh ln m − vn,m + Jp (εh ), l,m,n
where U&h is the set of all functions in U& being constant on each element. Remark 6. Problem (3.2) covers both the unpenalized and the penalized version of our optimization problem. For the special choice of parameters γ = 0, r = α, r = α, i = β, and i = β, problems (2.13) and (2.15) are equivalent. 3.2. Sensitivity analysis. Consider the function (3.3)
l Il,m,n (εh ) = uh ln m − vn,m ,
1624
EDDIE WADBRO AND MARTIN BERGGREN
where l corresponds to the rotation of the domain, n to the source location, and m to the receiver position. Problem (3.2) can be written as the nonlinear least-squares problem min Jh (εh ), Jh (εh ) = |Il,m,n (εh )|2 + Jp (εh ). εh ∈U&h
l,m,n
When solving the problem numerically, it is advantageous to use algorithms that utilize the least-squares structure. Such algorithms require ∇Il,m,n for all individual l, m, and n as well as ∇Jp (instead of just ∇Jh ). The differentiation of the penalty term is straightforward. For the derivation of the gradient of Il,m,n with respect to changes in εh , we fix εh and let δεh be an arbitrary variation of εh . Differentiating (3.3) with respect to this design variation results in δIl,m,n = δuh ln m ,
(3.4)
where δuh is the first variation of uh corresponding to δεh . Differentiating the variational form in problem (3.1) yields ∇δuh · ∇vh − k02 εh δuh vh − k02 δεh uh vh + i& k δuh vh = 0. (3.5) Ω
Ω
Ω
Γin
Let zh ∈ Vh be the solution of the adjoint equation ∇wh · ∇zh − k02 εh wh zh + i& k wh zh = wh ln m (3.6) Ω
Ω
Since (3.5) holds for all vh ∈ Vh , choosing vh = zh results in ∇δuh · ∇zh − k02 εh δuh zh − k02 δεh uh zh + i& k (3.7) Ω
∀ wh ∈ Vh .
Γin
Ω
Ω
δuh zh = 0.
Γin
By making use of (3.6) with wh = δuh , expression (3.7) reduces to δεh uh zh . (3.8)
δuh ln m = k02 Ω (k) er
(k) ei
Let and denote the real and the imaginary part, respectively, of the restriction of εh on the triangle Tk . Combining expressions (3.8) and (3.4) yields @ @ A A ∂%{Il,m,n } ∂!{Il,m,n } 2 2 = % k u z = ! k u z , h h h h , 0 0 (k) (k) Tk Tk ∂er ∂er A A @ @ (3.9) ∂%{Il,m,n } ∂!{Il,m,n } 2 2 = % ik u z = ! ik u z , h h h h . 0 0 (k) (k) Tk Tk ∂ei ∂ei The adjoint (3.6) and the state (3.1) equations are independent. Furthermore, the adjoint and the state equation are almost the same (only the right-hand side differs). Hence for each rotation of the outer container it suffices to solve an equation of the type A(εh )x = y for twelve right-hand sides in order to be able to compute the values and gradients for the 36 observations corresponding to the current rotation angle θl .
MICROWAVE TOMOGRAPHY USING TOPOLOGY OPTIMIZATION
1625
4. The computational setting. 4.1. Experimental setup. The container considered in our experiments (Figure 2.1) consists of a hexagonal-shaped core with one waveguide attached to each side. The values we use for the dielectric properties and the width of the waveguides are taken from the article by Semenov et al. [18]. In the numerical experiments, we work with a frequency of 900 MHz. For this frequency, the saline solution in the core has the dielectric properties1 ε = 79.0 − 10.5i, and the waveguides are filled with a dielectric material with ε = εwg = 90 − 0.0009i. The side length of the hexagon is 16 cm. The length of each waveguide is 16 cm, and the width of each waveguide is 2.2 cm. The general solution to the Helmholtz equation (2.5) in a waveguide of width d, with zero Dirichlet boundary conditions along the sides, can be expressed as a weighted sum of wave modes , mx · tπ + & & wm (x) = sin C1 eikx·n + C2 e−ikx·n , m = 1, 2, . . . , d where n is parallel to the waveguide, t is orthogonal to n (Figure 2.2), and + mπ ,2 & k 2 = k02 εwg − , d
−π ≤ arg(& k 2 ) ≤ arg(& k) ≤ 0.
For lossless material in the waveguide, that is, !{εwg } = 0, wave mode wm propagates as long as %{& k 2 } > 0, or equivalently d>
ω
πcm . %{εwg }
For the case when a low-loss material (|!{εwg }| |%{εwg }|) occupies the waveguide, the wave propagates as long as %{& k} |!{& k}|, or approximately as long as %{& k2 } > 0, which yields the same bound for d as in the lossless case. For a frequency of 900 MHz and %{εwg } = 90, we find that the mth mode propagates if the width of the waveguide is larger than m · 1.8 cm. Hence by selecting the width to be 2.2 cm, we ensure that the lowest mode propagates while all higher modes are geometrically evanescent. The region Ω is triangulated separately for each rotation used in our computations. We use five different resolutions and denote the meshes corresponding to each resolution mesh group I–mesh group V. The mesh groups are related such that, for each rotation, the mesh in group II is a uniform refinement of corresponding mesh in group I, the mesh in group III is a uniform refinement of corresponding mesh in group II, and so on. Data for the mesh groups can be found in Table 4.1. Note that there is a slight variation in the degrees of freedom and number of elements within each mesh group due to the different positions of Ω? inside the container. The values presented are for the case where Ω? is positioned as to the left in Figure 2.3. 4.2. Computational setup. The method has been implemented in Matlab. In a precomputing step, the meshes for the current mesh group are created, and the state matrices are assembled for the configuration where no unknown object is located in Ω? . 1 The difference in sign for the imaginary part, compared to Semenov et al. [18], is due to the choice of sign ± in the time harmonic ansatz E(x, t) = {Ee±iωt }.
1626
EDDIE WADBRO AND MARTIN BERGGREN
Table 4.1 Data for the meshes, where n1 and n2 are the degrees of freedom using first- (respectively, second-) order elements, N is the number of elements in Ω? , hmax is the length (in m) of the longest element edge in the mesh, and h? is the length of the element edges in Ω? . Mesh group I II III VI V
n1 ≈ 785 2933 11321 44465 176225
n2 ≈ 2933 11321 44465 176225 701633
N 384 1536 6144 24576 98304
0.05
0.05
0
0
−0.05
−0.05
−0.1
−0.05
0
0.05
0.1
−0.1
hmax ≈ 0.0257 0.0128 0.00642 0.00321 0.00160
h? 0.01 0.005 0.0025 0.00125 0.000625
−0.05
0
0.05
0.1
Fig. 4.1. The dielectric properties when the phantom is present in Ω? . To the left, the real part; to the right, the imaginary part. The mesh in the background is the structured part of the meshes in the group mesh group III. The scales on the axes are in meters.
We solve optimization problem (3.2) with the initial guess εh ≡ 79.0 − 10.5i, the dielectric values of the saline solution. In each step of the optimization process, the discrete state (3.1) and adjoint (3.6) equations are solved. The gradient of the first term in the objective function is computed according to (3.9). If penalization is used, the analytical gradient of the penalty term is also evaluated. The method of moving asymptotes (MMA) [23] is used to update the design variables. Thereafter the state matrix is updated according to the changes in the dielectric properties. When penalization is applied, we use a continuation approach for the penalty constant. That is, the optimization problem is first solved without penalty with initial guess εh ≡ 79.0 − 10.5i. Then γ is set to a small number, so that the penalty term has only a minor influence on the optimization. The optimization problem is solved again with the result from the previous optimization as the initial guess. Then γ is increased by a factor of 10, and the process is repeated until the problem is solved with a sufficiently large penalty constant to yield an almost discrete-valued permittivity distribution. 4.3. Accuracy check. The experiments performed in this first study aim to find a triangular-shaped object, our phantom, with dielectric properties ε = 57.5 − 22.6i, depicted in Figure 4.1. In our experiments, we set the frequency of the incoming wave to be 900 MHz. At this frequency, the dielectric properties of the phantom are chosen to correspond to soft tissue. According to Table 1 in Semenov et al. [18] the wavelength at this frequency is about 3.7 cm in the saline solution and 4.3 cm in soft tissue. The fact that the diameter of the phantom is about one wavelength makes l it challenging to reconstruct ε. The values for vn,m used in the experiments are the
MICROWAVE TOMOGRAPHY USING TOPOLOGY OPTIMIZATION
1627
Table 4.2 l Relative difference in the vector with the values uh ln m − vn,m , computed by using 36 observal are computed for the phantom tions for different mesh sizes and element orders. The values of vn,m depicted in Figure 4.1, and uh ln is computed for a container filled only with the saline solution. The value computed on mesh group V with second-order elements is taken as a reference. Mesh group 1st order 2nd order
I 0.865 0.420
II 1.123 0.114
III 0.449 0.040
VI 0.124 0.011
V 0.036 REF
(m)
computed values of uln m , the mean complex amplitude at Γin , where uln is the solution where the phantom is present in Ω? for rotation l and irradiation position n. Our first experiment aims to check the consistency in the finite-element approximation for the different mesh groups and orders of basis functions. Here we solve for the mean complex amplitude at the end of the waveguides when the domain is filled only with the saline solution as well as when the phantom is present. For both of these cases, we compute the mean complex amplitude at the ends of the waveguides and compute the vector r, with components l rj = uh ln m − vn,m ,
where j = 36(l − 1) + 6(n − 1) + m.
Here uh ln is the solution for rotation l and irradiation position n when the domain l is the observation with the phantom filled only with the saline solution, and vn,m in place. The values of r depend on both the mesh size and the order of the basis functions. We consider the value of r computed on mesh group V by using the second-order basis function as our reference value. The relative difference between the computed value and the reference value of r is presented in Table 4.2. Based on these results and the wish to have both a reasonable accuracy and computational cost, we choose to perform the rest of our numerical experiments with second-order basis functions on mesh group III. 5. Computational results. 5.1. Without penalty. Our first experiment aims to find the dielectric properties of our phantom, the triangular object depicted in Figure 4.1, without using any penalty. For the box constraints in the definition of U& we used the values: r = 5, r = 80, i = −25, and i = −1. The results shown in Figure 5.1 are computed by using 6 irradiation positions, giving rise to 36 complex observations. Note that there are 6144 unknown complex permittivity values (Table 4.1), so the least-squares problem is seriously underdetermined. Rotating the container 30 degrees gives rise to 6 new irradiation positions. By making use of all 72 available observations, we get the results in Figure 5.2 and a clearer image than in Figure 5.1. The results from both computations suggest that it is easier to find the real part of the dielectric properties. Furthermore, for the imaginary part there is a visible shadow of the object located about one wavelength below the location of the unknown object. The inclusion of further observations into the optimization process is performed by rotating the chamber such that the new irradiation positions lie in between previous irradiation points, that is, with the notation of section 2.2, θl =
l−1 ◦ 60 , L
l = 1, . . . , L.
1628
EDDIE WADBRO AND MARTIN BERGGREN
0.05
0.05
0
0
−0.05
−0.05
−0.1
−0.05
0
0.05
0.1
−0.1
−0.05
0
0.05
0.1
Fig. 5.1. The reconstructed dielectric properties in the domain Ω? using 6 irradiation positions on mesh group III. To the left, the real part; to the right, the imaginary part. The scales on the axes are in meters.
0.05
0.05
0
0
−0.05
−0.05
−0.1
−0.05
0
0.05
0.1
−0.1
−0.05
0
0.05
0.1
Fig. 5.2. The reconstructed dielectric properties in the domain Ω? using 12 irradiation positions on mesh group III. To the left, the real part; to the right, the imaginary part. The scales on the axes are in meters.
Applying this strategy once more results in a total of 24 irradiation positions. By making use of the resulting 144 observations, the dielectric properties are reconstructed in (only!) 12 MMA iterations. The reconstructed dielectric properties are illustrated in Figure 5.3. The upper and the left sides of the triangle are clearly visible in the image showing the real part. However, the bottom side of the triangle is not well-resolved in this image; there is a small region where the real part of the reconstructed dielectric properties is essentially the same as the real part of those of the saline solution. Concurrently, the imaginary part of the reconstructed dielectric properties attains its minimum in this part. Moreover, a shadow of the object in the imaginary part of the dielectric properties is also in this case clearly visible. Remark 7. The convergence criterion for the optimization algorithms consisted of checking that the necessary conditions for optimality (the KKT conditions) was satisfied within a relative tolerance of 10−8 . The solution of the linear systems associated with the forward and adjoint equations and the calculations required for the parameter update by the MMA algorithm dominate the computational complexity of the algorithm. As outlined in section 3.2, each iteration requires the solution of L sparse linear systems, each with 12 right-hand sides. The computational cost for the MMA update calculations grows proportionally with L2 due to the least-squares
MICROWAVE TOMOGRAPHY USING TOPOLOGY OPTIMIZATION
0.05
0.05
0
0
−0.05
−0.05
−0.1
−0.05
0
0.05
0.1
−0.1
−0.05
0
0.05
1629
0.1
Fig. 5.3. The reconstructed dielectric properties in the domain Ω? using 24 irradiation positions on mesh group III. To the left, the real part, to the right; the imaginary part. The scales on the axes are in meters.
0.05
0.05
0
0
−0.05
−0.05
−0.1
−0.05
0
0.05
0.1
−0.1
−0.05
0
0.05
0.1
Fig. 5.4. The reconstructed dielectric properties in the domain Ω? using noisy data and 24 irradiation positions on mesh group III. To the left, the real part; to the right, the imaginary part. The scales on the axes are in meters.
formulation of the problem. In the worst case for the numerical experiments reported here, the MMA update calculations required about the same time as the time required for the solution of the linear systems. It is important to note that, even though we have proven the existence of solutions to least-squares problem (2.13), this does not imply uniqueness. In fact, there are many distributions of the dielectric properties that give very low values of the objective function. The value of the objective function is essentially zero for all optimized distributions of the dielectric properties presented in this section. To check the robustness of the reconstruction we add artificial measurement noise corresponding to a signal-to-noise ratio of 40 dB—the same noise level as Semenov et al. [18] reported for their physical experiments. The reconstructed dielectric properties of our phantom are presented in Figure 5.4. These images display the same main characteristics as those reconstructed without noise; the size and location of the triangular object are also in this case properly reconstructed. The upper left corner of the triangle is still clearly marked, and the right corner is also visible; however, it is somewhat fainter compared to the image reconstructed by using exact data. The influence of the noise can be seen in that the reconstructed dielectric properties display a more varying behavior than those reconstructed without noise added to the
1630
EDDIE WADBRO AND MARTIN BERGGREN
0.05
0.05
0
0
−0.05
−0.05
−0.1
−0.05
0
0.05
0.1
−0.1
−0.05
0
0.05
0.1
Fig. 5.5. The reconstructed dielectric properties in the domain Ω? using a continuation approach in the penalization process and 6 irradiation positions on mesh group III. To the left, the real part; to the right, the imaginary part. The scales on the axes are in meters.
measurement data. Moreover, the shadow in the imaginary part is still present at about the same location and with the same size as previously. 5.2. With penalty. Our second experiment aims to find the same phantom (Figure 4.1) as in our first experiment. This time we will make use of the fact that we know that the dielectric properties are either ε1 = 79.0 − 10.5i (saline solution) or ε2 = 57.5 − 22.6i (soft tissue) inside Ω? . In order to promote the values ε1 and ε2 , the penalized version (2.15) of the optimization problem is used. For the box constraints, we used the following values: r = 57.5, r = 79.0, i = −22.6, and i = −10.5. As described in section 4.2, we first solve the optimization problem without penalty with initial distribution εh ≡ 79.0 − 10.5i. Then γ is set to a small number. The optimization problem is solved again with the result from the previous optimization as the initial distribution. Then γ is increased by a factor of 10, and the process is repeated, in our case a total of six times. The results shown in Figure 5.5 are reconstructed by using 6 irradiation positions together with the continuation approach. The resulting distribution of the dielectric properties is almost entirely in {ε1 , ε2 }, and the results are markedly sharper than those obtained by using the same number of observations without applying any penalization (Figure 5.1). In Figure 5.5 a triangular-shaped object can be seen. In comparison with the phantom (Figure 4.1), the reconstructed triangular object has approximately the same position and length, while the height is smaller than the height of the object in the phantom. Furthermore, the reconstructed object has a “tail” hanging from its bottom corner. This tail corresponds to the shadow of the object found in the optimization without applying any penalty. The dielectric properties in Figure 5.6 are reconstructed by using the continuation procedure for the penalization and 12 irradiation positions. Here the proportions of the reconstructed object are similar to the ones of the phantom. However, the sides of the reconstructed object are wiggly, and fragments of the shadow from the corresponding unpenalized results (Figure 5.2) are clearly visible. Making use of the continuation penalization procedure and 24 irradiation positions results in the reconstructed dielectric properties illustrated in Figure 5.7. The reconstructed triangular object has the same shape and dimensions as the phantom. There are only minor differences, along the edges of the object, between the reconstructed dielectric properties and the dielectric properties of the phantom.
MICROWAVE TOMOGRAPHY USING TOPOLOGY OPTIMIZATION
0.05
0.05
0
0
−0.05
−0.05
−0.1
−0.05
0
0.05
0.1
−0.1
−0.05
0
0.05
1631
0.1
Fig. 5.6. The reconstructed dielectric properties in the domain Ω? using a continuation approach in the penalization process and 12 irradiation positions on mesh group III. To the left, the real part; to the right, the imaginary part. The scales on the axes are in meters.
0.05
0.05
0
0
−0.05
−0.05
−0.1
−0.05
0
0.05
0.1
−0.1
−0.05
0
0.05
0.1
Fig. 5.7. The reconstructed dielectric properties in the domain Ω? using a continuation approach in the penalization process and 24 irradiation positions on mesh group III. To the left, the real part; to the right, the imaginary part. The scales on the axes are in meters.
6. Discussion and conclusions. For the permittivity reconstruction problem in microwave tomography, this article introduces a method that borrows techniques previously used mainly for topology optimization of elastic structures. The algorithm for topology optimization that we prefer (MMA) uses separable convex approximations of a form that reflects the structure of the problem (that the dielectric properties and the state variable appear bilinearly in the Helmholtz equation). The algorithm easily handles box constraints on the admissible permittivities. The presence of such constraints is physically relevant and is essential for our proof of existence of a leastsquares solution (Theorem 2.5). Most other investigators of microwave tomography algorithms seem instead to rely entirely on the presence of a Tichonov term in the objective function to regularize the problem. Regularization of one type or another may be useful to combat noise, but we believe that enforcement of box constraints is more fundamental to the reconstruction problem. A priori information, such as knowledge that the permittivity can attain only two specific values, can be handled (approximately) in the optimization similarly as the 0–1 values that are imposed in topology optimization in structural mechanics. It is remarkable that there seems to be no need for further regularization when penalization is used. The numerical results do not display instabilities, such as checkerboards and mesh dependency. This rare property, shared with, for example, topology opti-
1632
EDDIE WADBRO AND MARTIN BERGGREN
mization in Stokes flow [5], stands in contrast to topology optimization for structural mechanics [3] as well as topology optimization of acoustic horns [24, 25], where regularization, for instance, in the form of a filter, is essential to obtain meaningful results. Although the results in section 5 are encouraging, many more studies need to be conducted: how to design the apparatus, the sensitivity to frequency and polarization (the full vector Maxwell equations are needed to model a different polarization), how to deal with more general a priori information and more material types, etc. Finally, it is crucial to consider real measured data after the initial studies have been performed. Acknowledgment. The authors are grateful to Daniel Noreland for encouraging us to look into the permittivity reconstruction problem and for valuable feedback throughout the investigations. REFERENCES [1] G. Allaire and R. V. Kohn, Topology optimization and optimal shape design using homogenization, in Topology Design of Structures, M. P. Bendsøe and C. A. M. Soares, eds., Kluwer Academic, Norwell, MA, 1993, pp. 207–218. [2] M. P. Bendsøe and N. Kikuchi, Generating optimal topologies in structural design using a homogenization method, Comput. Methods Appl. Mech. Engrg., 71 (1988), pp. 197–224. [3] M. P. Bendsøe and O. Sigmund, Topology Optimization. Theory, Methods, and Applications, Springer, New York, 2003. [4] T. Borrvall and J. Petersson, Topology optimization using regularized intermediate density control, Comput. Methods Appl. Mech. Engrg., 190 (2001), pp. 4911–4928. [5] T. Borrvall and J. Petersson, Topology optimization of fluids in Stokes flow, Internat. J. Numer. Methods Fluids, 41 (2003), pp. 77–107. [6] W. C. Chew, G. P. Otto, W. H. Weedon, J. H. Lin, C. C. Lu, Y. M. Wang, and M. Moghaddam, Nonlinear diffraction tomography-the use of inverse scattering for imaging, in 1993 Conference Record of The Twenty-Seventh Asilomar Conference on Signals, Systems and Computers, Vol. 1, IEEE, New York, 1993, pp. 120–129. [7] D. C. Dobson, Optimal mode coupling in simple planar waveguides, in IUTAM Symposium on Topological Design Optimization of Structures, Machines and Materials, M. P. Bendsøe, N. Olhoff, and O. Sigmund, eds., Springer, New York, 2006, pp. 301–310. [8] D. C. Dobson and S. J. Cox, Maximizing band gaps in two-dimensional photonic crystals, SIAM J. Appl. Math., 59 (1999), pp. 2108–2120. [9] O. Dorn and D. Lesselier, Level set methods for inverse scattering, Inverse Problems, 22 (2006), pp. R67–R131. [10] P. Grisvard, Elliptic Problems in Non-Smooth Domains, Pitman, London, 1985. [11] E. Haber, U. M. Ascher, and D. W. Oldenburg, Inversion of 3D electromagnetic data in frequency and time domain using an inexact all-at-once approach, Geophysics, 69 (2004), pp. 1216–1228. [12] J. S. Jensen and O. Sigmund, Systematic design of acoustic devices by topology optimization, in Proceedings of the Twelfth International Congress on Sound and Vibration, ICVS12 2005, Lisbon, 2005. [13] N. Joachimowicz, C. Pichota, and J. P. Hugonin, Inverse scattering: An iterative numerical method for electromagnetic imaging, IEEE Trans. Antennas and Propagation, 39 (1991), pp. 1742–1753. [14] W. T. Joines, Y. Zhang, C. Li, and R. L. Jirtle, The measured electrical properties of normal and malignant human tissues from 50 to 900 MHz, Med. Phys., 21 (1994), pp. 547–550. [15] A. C. Kak and M. Slaney, Principles of Computerized Tomographic Imaging, SIAM, Philadelphia, 2001. [16] P. M. Meaney, K. D. Paulsen, A. Hartov, and R. K. 
Crane, Microwave imaging for tissue assessment: Initial evaluation in multitarget tissue-equivalent phantoms, IEEE Trans. Biomed. Eng., 43 (1996), pp. 878–890. [17] G. A. Newman and G. M. Hoversten, Solution strategies for two- and three-dimensional electromagnetic inverse problems, Inverse Problems, 16 (2000), pp. 1357–1375. [18] S. Y. Semenov, A. E. Bulyshev, A. Abubakar, V. G. Posukh, A. E. Souvorov, P. M. van den Berg, and T. C. Williams, Microwave-tomographic imaging of the high dielectric-contrast object using different image-reconstruction approaches, IEEE Trans. Microwave Theory Tech., 53 (2005), pp. 2284–2294.
MICROWAVE TOMOGRAPHY USING TOPOLOGY OPTIMIZATION
1633
[19] S. Y. Semenov, A. E. Bulyshev, A. E. Souvorov, R. H. Svenson, Y. E. Sizov, V. Y. Vorisov, V. G. Posukh, I. M. Kozlov, A. G. Nazarov, and G. P. Tatsis, Microwave tomography: Theoretical and experimental investigation of the iteration reconstruction algorithm, IEEE Trans. Microwave Theory Tech., 46 (1998), pp. 133–141. [20] S. Y. Semenov, R. H. Svenson, and G. P. Tatsis, Microwave spectroscopy of myocardial ischemia and infarction. 1. Experimental study, Ann. Biomed. Eng., 28 (2000), pp. 48–54. [21] O. Sigmund and J. S. Jensen, Design of acoustic devices by topology optimization, in Short Papers of the 5th World Congress on Structural and Multidisciplinary Optimization WCSMO5, Lido de Jesolo, I. C. Cinquini, M. Rovati, P. Venini, and R. Nascimbene, eds., 2003, pp. 267–268. [22] A. E. Souvorov, A. E. Bulyshev, S. Y. Semenov, R. H. Svenson, A. G. Nazarov, Y. E. Sizov, and G. P. Tatsis, Microwave tomography: A two-dimensional Newton iterative scheme, IEEE Trans. Microwave Theory Tech., 46 (1998), pp. 1654–1659. [23] K. Svanberg, The method of moving asymptotes—A new method for structural optimization, Internat. J. Numer. Methods Engrg., 24 (1987), pp. 359–373. [24] E. Wadbro and M. Berggren, Topology optimization of an acoustic horn, Comput. Methods Appl. Mech. Engrg., 196 (2006), pp. 420–436. [25] E. Wadbro and M. Berggren, Topology optimization of wave transducers, in IUTAM Symposium on Topological Design Optimization of Structures, Machines and Materials, M. P. Bendsøe, N. Olhoff, and O. Sigmund, eds., Springer, New York, 2006, pp. 301–310.
SIAM J. SCI. COMPUT. Vol. 30, No. 3, pp. 1634–1657
c 2008 Society for Industrial and Applied Mathematics
ADAPTIVE FINITE ELEMENT METHOD FOR A PHASE FIELD BENDING ELASTICITY MODEL OF VESICLE MEMBRANE DEFORMATIONS∗ QIANG DU† AND JIAN ZHANG† Abstract. In this paper, a three-dimensional adaptive finite element method is developed for a variational phase field bending elasticity model of vesicle membrane deformations. Using a mixed finite element formulation, residual type a posteriori error estimates are derived for the associated nonlinear system of equations and, they are used to introduce the mesh refinement and coarsening. The resulting mesh adaptivity significantly improves the efficiency of the phase field simulation of vesicle membranes and enhances its capability in handling complex shape and topological changes. The effectiveness of the adaptive method is further demonstrated through numerical examples. Key words. vesicle membrane, phase field, elastic bending energy, a posteriori error estimator, adaptive finite element, mixed finite element AMS subject classifications. 65N30, 70G75, 92C05 DOI. 10.1137/060656449
1. Introduction. This paper presents an adaptive finite element method for the numerical simulation of vesicle membrane deformation based on a phase field bending elasticity model. The vesicle membranes, formed by bilayers of lipid molecules, are simple forms of biological membranes which exist everywhere in life and compartmentalize living matter into cells and subcellular structures, and are essential for many biological functions [38]. The equilibrium shapes of bilayer vesicle membranes have been successfully modeled via the minimization of certain shape energy; see, for instance, [14, 36, 41, 48], and the references cited therein. In the isotropic case, the most relevant energetic contribution to the equilibrium membrane geometry is usually the elastic bending energy of the form [11, 12, 41]: k 2 Eelastic = H ds, (1.1) Γ 2 where H is the mean curvature of the membrane surface. The parameter k is the bending rigidity, which can depend on the local heterogeneous concentration of the species (such as protein and cholesterol molecules), but it is mostly assumed to be a constant in this manuscript. Taking a simplified description of the effect of density change and osmotic pressure, it is assumed that the variation of the bending energy is subject to the constraints of specified volume and surface area. More general forms of the bending elastic energy, attributed to Canham and Helfrich, also incorporate effects of surface tension, the Gaussian curvature, and the spontaneous curvature [36, 41]. For the sake of simplicity in our presentation, we only focus on the energy (1.1), though much of our studies can be naturally extended to more general cases including the effect of the spontaneous curvature [20], the Gaussian curvature [21, 24], and the vesicle fluid interactions [17, 18]. ∗ Received by the editors April 5, 2006; accepted for publication (in revised form) October 31, 2007; published electronically April 18, 2008. This research is supported in part by NSF-DMR 0205232 and NSF-DMS 0712744. http://www.siam.org/journals/sisc/30-3/65644.html † Department of Mathematics, Pennsylvania State University, University Park, PA 16802 (qdu@ math.psu.edu, zhang
[email protected]).
1634
ADAPTIVE FEM FOR PHASE FIELD MODEL OF MEMBRANES
1635
Computationally, there are many simulation methods developed for studying various deforming interface problems, such as the boundary integral and boundary element methods, immersed boundary and interface methods, front-tracking methods, and level-set methods (see, for instance, [34, 40, 43, 46, 49], and the references given in [22, 23]). For bending elasticity models, applications of these types of methods can also be found in [6, 28, 32, 53]. The phase field model can be viewed as a physically motivated level-set method; this is by virtue of its energy-based variational formalism. One of the main attractions to use the phase field method is its capability of easily incorporating the complex morphological changes of the interface, in particular, the changes in both topological and geometrical structures. For more detailed discussions, we refer to [22, 23], and the references given there. In [22], a finite difference method was used to study the energy-minimizing vesicle membrane in the three-dimensional axis-symmetric case. In [23], a Fourier spectral method was used to study the full three-dimensional case. Parallel implementation of such a spectral approach was also carried out to improve the computational efficiency. The various simulation examples given in the earlier studies demonstrated the effectiveness of the phase field approach. However, in the three-dimensional case, the high computational cost remains a formidable challenge in making the phase field simulation efficient. Indeed, the phase field function is defined on the whole physical domain, and it changes rapidly only near the transition layer around the membrane surface (the zero level set of the phase field function). Hence, uniform computational grids are generally not optimal, and it is natural to consider the application of adaptive finite element methods based on a posteriori error estimators [2, 3, 50]. We anticipate that adaptivity based on effective error estimations could make the phase field simulation much more efficient computationally and yet retain the advantage of being able to avoid the explicit tracking of the interfaces. This is indeed confirmed by the present work. In the adaptive method presented in this paper, a mixed finite element method (FEM) is used to discretize the phase field bending elasticity model. A residualtype a posteriori error estimator is derived for the development of the adaptive FEM algorithm. Effectively, the nodes of the adaptive mesh are concentrated near the interface (the membrane surface) so that the number of nodes is significantly reduced compared with the number of nodes in the uniform mesh cases, while the resolution of the numerical resolution of the adaptive FEM remains at the same level. These numerical results reveal the great potential of using the adaptive FEM to significantly reduce the computational cost of phase field approaches. Detailed descriptions, analysis, and numerical examples of our adaptive FEM approach are presented in the rest of the paper as follows: in section 2, a brief introduction is given to the phase field bending elasticity model for the vesicle membrane problem. In section 3, we set up a finite element discretization for the model based on a mixed formulation. In section 4, we derive some a posteriori error estimators. In section 5, an adaptive algorithm is outlined along with discussions on other implementation issues involved. In section 6, numerical examples are presented, and finally, in section 7, some concluding remarks are given. 2. 
The phase field bending elasticity model. As in [22, 23], we introduce a phase function φ = φ(x) defined on the physical (computational) domain Ω, which is used to label the inside and the outside of the vesicle Γ. The level set {x : φ(x) = 0} gives the membrane surface Γ, while {x : φ(x) > 0} represents the outside of the membrane and {x : φ(x) < 0} the inside. The original elastic bending energy model
1636
QIANG DU AND JIAN ZHANG
consists of minimizing (1.1) among all surfaces with specified surface area and enclosed volume. Define the following modified elastic energy: 2 k 1 E(φ) = (2.1) φ − 2 (φ2 − 1)φ dx, Ω 2 where is a transition parameter which is taken to be very small compared to the size of the vesicle. In our numerical examples, varies in the same range of that in [23], that is, taking values generally about 1/40 to 1/200 of the size of Ω to ensure a sufficient level of resolution of the interface and the phase field solution profile. For a minimizer φ of E(φ), it has been shown that 1 2 H(φ) = φ − 2 (φ − 1)φ is the phase field approximation of the mean curvature H of the interface, which behaves like a measure concentrated in the diffuse interfacial layer of the zero level set of φ as goes to zero [23, 25]. Notice that the definition of E(φ) is k 2 1 H (φ)dx, E(φ) = Ω2 with the normalizing factor 1/ accounting for the contribution due to the diffuse interfacial layer. The phase field bending elasticity model is then given by minimizing the above energy E(φ) subject to prescribed values of 1 |∇φ|2 + (φ2 − 1)2 dx . (2.2) φ(x)dx and B(φ) = A(φ) = 4 Ω Ω 2 Intuitively, it is insightful to consider a special phase field function of the form √ φ(x) = tanh( d(x,Γ) ), where d(x, Γ) is the signed distance from a point x ∈ Ω to the 2 surface Γ; the geometric meanings of the energy E and the constraints (2.2) would then become clear. More rigorously, the existence of the minimizer to E(φ) subject to prescribed A(φ) and B(φ) has been established in [25]. Moreover, it has been shown in [19, 51] that under some general ansatz assumptions, as → 0, the minimum of the phase field energy E(φ) with the specified constraints approaches to the minimum of the original energy (1.1), with A(φ) approaching the difference of the outside volume √ and the inside volume of the membrane surface and B(φ) approaching to 2 2/3 times the surface area of Γ. The previously developed finite difference, finite element, and spectral methods in [22, 23, 25, 26] are based on the above phase field bending elasticity model. Some convergence analysis and a priori error estimates have been given in [25, 26]. 3. The finite element discretization. The variational phase field bending elasticity model described in the previous section is conveniently stated as: ⎧ 2 ⎪ 1 2 k ⎪ ⎪ arg min E(φ) = (φ − 1)φ dx, φ − ⎪ ⎪ φ 2 ⎪ Ω 2 ⎪ ⎪ ⎪ ⎪ ⎨ A(φ) = φdx = α, Ω (3.1) ⎪ ⎪ 1 2 ⎪ 2 2 ⎪ |∇φ| (φ + − 1) dx = β, B(φ) = ⎪ ⎪ ⎪ 4 Ω 2 ⎪ ⎪ ⎪ ⎩ φ|∂Ω = 1,
ADAPTIVE FEM FOR PHASE FIELD MODEL OF MEMBRANES
1637
where α and β are given constants, and Ω = [−1, 1]3 . The well-posedness of the above problem can be found in [19, 25]. The consistency to the original bending elasticity model when → 0 has also been examined in [19, 51]. To deal with the nonlinear constraints, a penalty formulation can be used. Let G(φ) = E(φ) + M1 [A(φ) − α]2 + M2 [B(φ) − β]2 , where M1 and M2 are penalty constants, and let us introduce λM (φ) = 2M1 [A(φ) − α] ,
μM (φ) = 2M2 [B(φ) − β] .
As the penalty constants M1 and M2 go to infinity, the minimizer of G goes to the solution of the constrained problem, and λM and μM converge to the Lagrange multipliers [22, 25]. For the sake of simplicity, we denote them by λ and μ in the analysis. In the simulations, M1 and M2 are taken to be fixed large numbers that assure the convergence of the Lagrange multipliers to within the given numerical accuracy. To derive the mixed weak formulation [26], we introduce a function f as f=
√
1 k φ − 2 (φ2 − 1)φ .
We note that f is a scaled phase field approximation of the mean curvature, and for small other boundary conditions such as the homogeneous Neumann boundary condition on φ may also be used. Multiplying the equation for f by a test function w ∈ H01 (Ω) and integrating over Ω, after integration by parts, we get
1 2 ∇φ · ∇w + 2 (φ − 1)φw dx Ω
√ f wdx = − k
Ω
for any w ∈ H01 (Ω). Note that the boundary condition φ = 1 is imposed. Taking the variational derivative of the energy functional, we get N 1 2 ∇f · ∇v + 2 f (3φ − 1)v dx + λ−μ f vdx = 0 k Ω Ω
√ − k
for any v ∈ H 1 (Ω). Given any spatial region D ⊆ Ω, let (u, v)D =
1/2
u D = (u, u)D
uvdx, D
denote the standard L2 inner product and the L2 norm on D, respectively, and
∇u, ∇vD =
∇u · ∇vdx . D
1638
QIANG DU AND JIAN ZHANG
Let us define H = H 1 (Ω) × H 1 (Ω)
and
H0 = H 1 (Ω) × H01 (Ω).
The weak form of our problem (3.1) is to find (f, φ) with (f, φ − 1) ∈ H0 such that, for all (v, w) ∈ H0 , we have
(3.2)
⎧ √ ) 1 ( ⎪ 2 ⎪ − f (3φ k
∇f, ∇v + − 1), v + λ(1, v)Ω ⎪ Ω Ω ⎪ 2 ⎪ ⎪ ⎪ ⎪ N ⎨ (f, v)Ω = 0, −μ ⎪ k ⎪ ⎪ ⎪ ⎪ ⎪ √ ) 1 ( 2 ⎪ ⎪ ⎩ k ∇φ, ∇wΩ + 2 (φ − 1)φ, w Ω + (f, w)Ω = 0 .
The above weak form leads naturally to a mixed FEM for its numerical solution [26]. Define the operator F : H → L(H0 , R2 ) by F (f, φ)(v, w) √ ) 1 ( = − k ∇f, ∇vΩ + 2 f (3φ2 − 1), v Ω N (f, v)Ω , +λ(1, v)Ω − μ k T √ ) 1 ( 2 W k ∇φ, ∇wΩ + 2 (φ − 1)φ, w Ω + (f, w)Ω , where W is a weight constant to be determined in simulations. Now the problem becomes that of solving (3.3)
F (f, φ) = 0
in an abstract form, together with the boundary conditions. To construct the finite element approximation to the mixed formulation, we take the discrete function space as Vh = Wh = {v ∈ C 0 (Ω) ∩ H 1 (Ω) | v|K ∈ P1 (K) ∀K ∈ Jh }, where Jh is a triangulation of Ω consisting of tetrahedra K, whose diameters hK are bounded above by h = maxK∈Jh hK , and P1 (K) denotes the linear function space on element K. The mesh is assumed to be regular so that the standard minimum angle condition is satisfied and the number of adjacent elements to any given element is bounded independently of h. We also assume that the family Jh is uniformly regular so that for any K ∈ Jh the ratio of the diameter of K and the diameter of the largest ball enclosed in K is uniformly bounded from above by a constant independent of h. The detailed adaptive construction of Jh will be described later in the paper. Now, let X h = Vh × W h
and Xh0 = Vh0 × Wh ,
ADAPTIVE FEM FOR PHASE FIELD MODEL OF MEMBRANES
1639
where Vh0 = Wh0 = {v | v ∈ Vh , v|∂Ω = 0}. For the trial solution (fh , φh − 1) ∈ Xh0 satisfying the boundary conditions and the test function (vh , wh ) ∈ Xh0 , we define the discrete version of F : Fh (fh , φh )(vh , wh ) = F (fh , φh )(vh , wh ). Hence the discrete problem is to solve (3.4)
Fh (fh , φh ) = 0.
The solutions of (3.3) and (3.4) depend on the parameters α and β. By normalization, we may always take β = 1(some constant surface area); then the solution of (3.3) may be viewed as a solution branch of F (α; x) = 0. The basic mathematical analysis of the discretized system (such as the existence of solutions and an a priori error estimate, specialized to a piecewise linear element) has been made in [26]. In particular, if there exists a C 1 branch {α, x(α)} of nonsingular solutions of the nonlinear variational problem (3.3) for α ∈ Λ which is a compact interval in R, then for small h there is a unique branch {α, xh (α)} of solutions of (3.4) converging to {α, x(α)}. Moreover, if α → x(α) is a C 1 function from Λ into H ∩ (H 2 (Ω) × H 2 (Ω)), we have the optimal order a priori error estimate
x(α) − xh (α) H ≤ Ch, where C is a constant independent of h. While the a priori error analysis can offer theoretical assurance on the convergence of the finite element methods as the mesh size gets smaller and smaller, to implement an adaptive strategy for the finite element approximations, we need a posteriori error estimators. We choose to work with residual-type estimators which are derived in the next section. 4. A posteriori error estimate. Adaptive methods often lead to efficient discretization to problems with solutions that are singular or have large variations in small scales. In phase field models, the sharp interface of physical quantities is replaced by regularized phase field functions. However, for a small interfacial width constant , the phase field solutions may display large gradients within the diffusive interfacial region. Thus, adaptivity in the form of mesh refinement and coarsening as well as mesh transformation can greatly improve the efficiency of the numerical approximations of phase field models [10, 27, 37, 44]. A posteriori error estimators are key ingredients in the design of adaptive methods [3]. There have been many existing studies on deriving such estimators for the finite element approximation of linear and nonlinear variational problems and for standard Galerkin and mixed finite element formulations; see, for example, [1, 8, 45, 50], and the references cited therein. Let x0 = (f0 , φ0 ) be a solution of a nonlinear operator equation F (x0 ) = 0. We call x0 a regular solution if the Frechet derivative DF (x0 ) is well defined and is a linear homeomorphism; that is, DF (x0 ) and its inverse are bijective and continuous linear operators. First, using the abstract approximation results from [50] (see also
1640
QIANG DU AND JIAN ZHANG
[7], and particular applications to Ginzburg–Landau-type models [16], and the phase field bending elasticity model of vesicle membranes presented in [26]), we immediately get the following. Proposition 4.1 (see [50]). Let H0∗ = L(H0 , R2 ) and F ∈ C 1 (H, H0∗ ). Let x0 be a regular solution of F (x) = 0 with Z = ||DF (x0 )||L(H0 ,H∗0 ) and Zˆ = ||DF (x0 )−1 ||L(H∗0 ,H0 ) . Assume, in addition, that DF is Lipschitz continuous at x0 with a constant γ > 0, i.e., there is a M0 such that γ :=
sup ||x−x0 ||H <M0
||DF (x) − DF (x0 )||L(H0 ,H∗0 ) < ∞. ||x − x0 ||H
Set M := min{M0 , γ −1 Zˆ −1 , 2γ −1 Z}. Then, for all x ∈ H with ||x − x0 ||H < M . (4.1)
1 ˆ (x)||H∗ . ||F (x)||H∗0 ≤ ||x − x0 ||H ≤ 2Z||F 0 2Z
To apply the above framework to our problem, one relevant issue is how the ˆ and M depend on . We note that obtaining a very precise various constants γ, Z, Z, dependence in general is very difficult, and it remains largely as an open problem due to the nonlinear nature of the problem (see [29, 39, 42] for related discussion). Some crude estimates can be obtained, for example, by noticing that for given f0 , φ0 the √ components of the vector-valued operator DF (f0 , φ0 ) contain typical terms like − k (Δ + 12 (3φ20 − 1)I). It is then reasonable to take a typical tanh profile for φ0 based on the earlier sharp interface limit analysis in [19] and to get estimates on the spectra of such operators. Similarly, one can estimate the Lipschitz constant γ. These estimates, however, are not sharp in general, and future studies are obviously needed to examine such dependence more carefully. We now continue to derive a posteriori error estimators. Let xh = (fh , φh ) be a solution of Fh (xh ) = 0 and Rh be a restriction or projection operator from H to the finite element space. For y ∈ H0 , by Galerkin orthogonality Fh (xh )Rh y = 0. Hence F (xh )y = Fh (xh )y = F (xh )(y − Rh y). Specifically, let Rh = [Ih , Ih ], where Ih denotes the Clement interpolation [13]. We first note that, for a given element K and a given edge e defined by the triangulation Jh , the operator Ih has the following properties: ||u − Ih u||K ≤ ChK ||∇u||N (K) and 1
||u − Ih u||L2 (e) ≤ Che2 ||∇u||N (e) , where N (K) denotes the union of K and its neighbor elements and N (e) denotes the union of elements that have e as a face.
ADAPTIVE FEM FOR PHASE FIELD MODEL OF MEMBRANES
1641
Next, we have ||F (fh , φh )(v, w)|| = ||F (fh , φh )(v − Ih v, w − Ih w)|| √
∇fh , ∇(v − Ih v)K = − k K
2 √
N − k 2 fh , v − Ih v + fh (3φh − 1) + λ − μ k 3/2 K K √ +W 2 k
∇φh , ∇(w − Ih w)K K
+
≤C
√
K
k
e⊂Ω
√
f+
k
3/2
(φ2h − 1)φh , w − Ih w
1 1 ∂fh he2 |e| 2 ∂n
e
2 ⎫ 12 ⎬ ⎭
K
||∇v||N (e)
2 √ N k fh ||∇v N (K) + hK λ − 3/2 fh (3φ2h − 1) − μ k K K 1 1 ∂φh √ ||∇w||N (e) +W 2 k he2 |e| 2 ∂n e e⊂Ω
2 ⎫ 12 √ ⎬ k + hK fh + 3/2 (φ2h − 1)φh ||∇w N (K) . ⎭ K
ηK
K
Let ⎧⎡ ⎪ ⎨ √ 1 1 = ⎣C1 k he2 |e| 2 ⎪ ⎩ e⊂Ω∩K ⎡
(4.2) +W 2 ⎣C2
√
⎤2 √ N ∂fh k 2 + hK λ − fh ⎦ f (3φ − 1) − μ h ∂n 3/2 h k e K
⎫1 ⎤2 ⎪ 2 √ ⎬ 1 ∂φh k k he |e| 2 , + C3 hK fh + 3/2 (φ2h − 1)φh ⎦ ⎪ ∂n e ⎭ K e⊂Ω∩K
1 2
where C1 , C2 , and C3 are weights depending on constants in the interpolation inequalities. Without loss of generality, we simply take them to be 1, and the choice of the weight W used in the simulations is described later. We have L 12 2
F (fh , φh ) L(H0 ,R2 ) ≤ C ηK . K
Applying Proposition 4.1, we get (4.3)
(fh , φh ) − (f0 , φ0 ) H ≤ C
L 12 2 ηK
.
K
ˆ We would like to point out that the constant C depends on through Z.
1642
QIANG DU AND JIAN ZHANG
Let us now derive the following estimate. Lemma 4.1. Let ηK be defined as in (4.2) and (fh , φh ) be a solution of Fh (fh , φh ) = 0. Then L 12 2 ηK ≤ C ||F (fh , φh )||L(H0 ,R2 ) . K
Proof. Let xh = (fh , φh ) denote a solution to the discrete problem, and let F1 and F2 denote the components of F , that is, T
F (f, φ)(v, w) = (F1 (f, φ)v, F2 (f, φ)w) . Let √ g = λ(φh ) −
k
N
fh (3φ2h − 1) − μ(φh ) 3/2
fh . k
For an element K, let vK denote the element-bubble function given by vK = 256ψ1 ψ2 ψ3 ψ4 , where {ψi }41 are the nodal basis functions on K. Since g is a polynomial of degree 3 in K, 2 (4.4) g 2 vK dK = CF1 (xh )(gvK ). ||g||K ≤ C K
Notice that gvK is a polynomial of degree 7 and vK has maximum value 1, so we have ||gvK ||H 1 (K) ≤ Ch−1 K ||g||K .
(4.5)
Multiplying (4.4) and (4.5), we get hK ||g||K ≤ C||gvk ||−1 H 1 (K) F1 (xh )(gvK ).
(4.6)
For an internal face e = K ∩ K , let ∂fh , ge = ∂n e and denote the face-bubble function by ve = 27ψ1 ψ2 ψ3 , where {ψi }31 are the nodal basis functions of K corresponding to the 3 nodes on the face e; then √ ∇fh · ge ∇ve dx + gge ve dx . F1 (xh )(ge ve ) = − k K∪K
K∪K
Hence, √
k ge2 ||ve ||L1 (e) ≤ ge |F1 (xh )(ve )| + ge ||g||K ||ve ||K + ge ||g||K ||ve ||K .
ADAPTIVE FEM FOR PHASE FIELD MODEL OF MEMBRANES
1643
Let |e| be the area of e. The face-bubble function ve has the following properties: ||ve ||L1 (e) ∼ |e|, 1 ||ve ||K ≤ C(hK |e|) 2 , −1
1
||ve ||H 1 (K) ≤ ChK 2 |e| 2 . Notice that F1 (f, φ)(·) is a linear operator; we can choose the sign of ve such that |F1 (xh )ve | = F1 (xh )ve . Together with (4.6) and the fact that the size of adjacent elements does not change rapidly, these give us B √ 1 1 ∂fh 2 ≤ C ||ve ||−11 F1 (xh )(ve ) 2 k hK |e| H (K) ∂n e +||gvK ||−1 H 1 (K) F1 (xh )(gvK )
C +||gvK ||−1 . H 1 (K ) F1 (xh )(gvK )
Let ue =
ve gvK gvK + + . ||ve ||H 1 (K) ||gvK ||H 1 (K) ||gvK ||H 1 (K )
We have √
∂fh ≤ CF1 (xh )ue . k hK |e| ∂n e 1 2
1 2
Estimation of the other two terms of ηK can be obtained similarly. Hence for every element K, we can find yK = (uK , wK ) ∈ H0 such that 0 ≤ ηK ≤ C(F1 (xh )uK +F2 (xh )wK ),
||yK ||H0 = 1,
and
supp(uK , wK ) ⊂ N (K) .
Let ηK aK = '
2 K ηK
,
y = (u, w) =
aK yK .
K
Then
L 12 2 ηK
=
K
aK ηK
K
≤ C(F1 (xh )u + F2 (xh )w) ≤ C||y||H0 ||F (xh )||L(H0 ,R2 ) . Since the support of yK ’s only overlaps with that of the adjacent ones, we get ||y||H0 ≤ C
K
This completes the proof of Lemma 4.1.
12 a2K ||yK ||2H0
= C.
1644
QIANG DU AND JIAN ZHANG
Combining Proposition 4.1 and Lemma 4.1, we get the following theorem. Theorem 4.1. Let H0∗ = L(H0 , R2 ) and F ∈ C 1 (H, H0∗ ). Let x0 be a regular solution of F (x) = 0 with Z = ||DF (x0 )||L(H0 ,H∗0 ) and Zˆ = ||DF (x0 )−1 ||L(H∗0 ,H0 ) . Assume, in addition, that DF is Lipschitz continuous at x0 with a constant γ > 0, i.e., there is a M0 such that γ :=
sup ||x−x0 ||H <M0
||DF (x) − DF (x0 )||L(H0 ,H∗0 ) < ∞. ||x − x0 ||H
Set M := min{M0 , γ −1 Zˆ −1 , 2γ −1 Z}. Let xh = (fh , φh ) be the finite element solution to Fh (xh ) = 0. There exist positive constants M , Cupper , and Clower , depending on γ and x0 , such that, when h is small enough and ||x0 − xh ||H < M , we have (4.7)
Clower
K
L 12 2 ηK
≤ ||x0 − xh ||H ≤ Cupper
L 12 2 ηK
,
K
where ηK ’s are defined as in (4.2). The above theoretical analysis is used to guide us in the design of effective adaptive algorithms based on the a posteriori error estimator. A few comments are in order: as mentioned earlier, notice that the constants M , γ, Cupper , and Clower depend on since the constants γ, Z = ||DF (x0 )||, and Zˆ = ||DF (x0 )−1 || have such a dependence. Some of these constants rely on a priori information and may not be easily computable; we thus cannot completely assure the reliability and efficiency of the a posteriori error bounds theoretically, though such an issue is typical for many nonlinear variational problems [9] except in some limited cases [54]. Also, the condition ||xh − x0 || < M is satisfied only when h is small enough which may again, theoretically speaking, require a prohibitively small h, especially in comparison to . In addition, with possibly constants for the lower and upper bounds in ' different 2 12 } may not be asymptotically exact in theory [50]. (4.7), the error estimator { K ηK Nevertheless, our numerical experiments demonstrate that a very effective adaptive algorithm can be ' implemented for the phase field simulation based on the a posteriori 2 12 } as defined by (4.2). error estimator { K ηK The details are to be described in the rest of the paper. 5. Adaptive algorithm. From a practical point of view the most interesting objects of the phase field models are related to a narrow transition region near the zero level set of the phase field function, which in our case provides information on the deformed membranes (as well as information on the interactions with external fields where they are present). It is thus easily concluded that some adaptive algorithms may lead to more efficient numerical schemes. We now discuss an adaptive FEM for the phase field bending elasticity model. The common objective of an adaptive FEM is to generate a mesh which is adapted to the problem such that the error between the finite element solution and the exact solution is within the given error tolerance. Using {Mj }j≥0 to represent the various level of meshes generated from the refinement and coarsening procedure, a description of the general adaptive algorithm is outlined below.
ADAPTIVE FEM FOR PHASE FIELD MODEL OF MEMBRANES
1645
Algorithm 5.1 (adaptive FEM).
    Start with mesh M_0 and error tolerance tol
    set j := 0; η := tol + 1
    while η > tol, do
        solve the discrete problem on M_j
        compute local error indicators η_K and the global error estimator η
        if η > tol
            mark elements of M_j for refinement or coarsening
            update M_j to get M_{j+1} and set j := j + 1
        end if
    end while.

Naturally, the objective of the marking strategy implemented in the above algorithm is to produce an efficient mesh. This is generally understood in the sense that one should provide more degrees of freedom where the error is large and fewer where the error is small. For linear elliptic problems, adaptive FEM often generates a quasi-optimal mesh with nearly equidistributed local errors; for nonlinear problems, however, the lack of resolution of the solution on coarse meshes may lead to local refinement that is not needed for the final solution, so the mesh may be coarsened again later. In the numerical implementation, we use the following marking strategy (where N denotes the number of elements of the current mesh and η is the total error); a short code sketch of this rule is given after the gradient flow discussion below.

Algorithm 5.2 (marking strategy).
    Start with given parameters refine_factor > 1 and coarsen_factor < 1
    for all elements K do
        if η_K > (η/√N) · refine_factor
            mark K for refinement
        end if
        if η_K < (η/√N) · coarsen_factor
            mark K for coarsening
        end if
    end for.

In our simulations refine_factor = 1.4 and coarsen_factor = 0.7. We would like to mention that, during the mesh-updating procedure, elements marked for coarsening are actually coarsened only when the resulting mesh remains conforming.

Since the variational problem considered here involves very complex nonlinearities, we next discuss some practical issues encountered in the implementation. On a given mesh, the minimization of G is carried out using a gradient flow approach [22, 25]. Given φ^n and a step size Δt_n, the minimizer φ^{n+1} of
\[
\frac{1}{2\Delta t_n}\,\|\phi^{n+1}-\phi^n\|^2 + G(\phi^{n+1})
\tag{5.1}
\]
satisfies
\[
\frac{\phi^{n+1}-\phi^n}{\Delta t_n} + \frac{\delta G}{\delta \phi}(\phi^{n+1}) = 0,
\]
which is the backward Euler scheme for the gradient flow equation
\[
\frac{\partial \phi}{\partial t} = -\frac{\delta G}{\delta \phi}.
\]
Given an initial guess, as t → ∞ the dynamic solutions {φ(t)} converge to a steady state, which is a critical point of the energy G [4, 15, 35].
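For concreteness, the following is a minimal Python sketch of the marking rule of Algorithm 5.2 (illustrative only; the function name mark_elements and the dict-based element representation are our own devices, not part of ALBERTA or the authors' implementation):

```python
import math

def mark_elements(eta_K, refine_factor=1.4, coarsen_factor=0.7):
    """Marking strategy of Algorithm 5.2.

    eta_K: dict mapping each element K to its local error indicator.
    Returns the sets of elements marked for refinement and coarsening.
    """
    N = len(eta_K)                                       # number of elements
    eta = math.sqrt(sum(e * e for e in eta_K.values()))  # global estimator
    # eta / sqrt(N) is each element's share if the error were equidistributed,
    # since sum_K eta_K^2 = eta^2.
    mean_share = eta / math.sqrt(N)
    refine = {K for K, e in eta_K.items() if e > mean_share * refine_factor}
    coarsen = {K for K, e in eta_K.items() if e < mean_share * coarsen_factor}
    return refine, coarsen
```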
The solution of (5.1) at each step is obtained using the nonlinear conjugate gradient (NCG) method. We adjust the time step Δt_n so that the NCG method converges in 8–15 steps; this allows efficient computation at each time step without assembling Hessian matrices (a small code sketch of this time-stepping strategy follows this discussion). Several other nonlinear solvers can also be considered, such as the built-in Newton solver in ALBERTA [31], BFGS-type methods and augmented Lagrange multipliers [28], multigrid methods [30, 33], and other time-stepping schemes like the second order scheme in [25] and operator splitting schemes [4].

6. Numerical examples. We now present some numerical examples. Our objective here is mainly to illustrate the effectiveness of the adaptive methods; thus, most of the computed vesicle shapes are ones which have also been computed by other methods [14, 22, 23, 28, 32, 41, 47, 53]. We let the computational domain be [−1, 1]³ and, for simplicity, take the elastic bending rigidity k = 1. The mesh refinement and coarsening procedure is implemented with the help of the adaptive FEM toolbox ALBERTA [31].

In the experiments, the weight W in (4.2) is set to W = 100. To illustrate how this weight is chosen, we first calculate the error estimators for a sphere using a tanh profile. That is, let
\[
\phi = \tanh\Bigl(\frac{|x| - R}{\sqrt{2}\,\varepsilon}\Bigr),
\]
and, starting from its finite element approximation (interpolation) on a uniform initial mesh, we refine several times according to the error estimators and then calculate the percentage contributions of the two terms in (4.2). We conduct several experiments with the radii R = 0.8, 0.5, 0.2 and ε = 0.03, 0.02, 0.01, respectively. It is found that, when W = 1, the first term contributes about 98% of the total error estimate in all nine cases. Thus, the choice W = 100 leads to a balance of the two error components. Of course, the tanh function only gives a typical asymptotic profile [19] that provides some hints on the possible scales of the terms in the error estimator. To validate this choice of W in our numerical simulations, a series of tests on numerical solutions is performed for ε = 0.048 with W ranging from 0.01 to 10,000. The solution is taken to be an axis-symmetric disc, and the mesh density over the cross section x_2 = 0 is shown in Figure 6.1, from left to right, for W = 0.01, W = 100, and W = 10,000, respectively. The case W = 100 tends to give a more reasonable mesh distribution. Moreover, the simulation results again show that the choice W = 100 makes the corresponding two terms in the error estimator of comparable magnitude. The same can be verified for the other simulations presented in this paper. The transition width parameter ε is often assigned several different values in the range 0.01 to 0.05, so that the results can be compared to ensure the fidelity of the phase field approximation to the sharp surface model [19, 23].
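Returning to the gradient flow time stepping described above, the following is a minimal sketch of one backward Euler step and the step-size adjustment (a sketch under our own assumptions: a flat numpy vector of degrees of freedom and callables G and dG for the discrete energy and its variational derivative; this is not the authors' ALBERTA-based implementation, and scipy's generic nonlinear CG stands in for their NCG solver):

```python
import numpy as np
from scipy.optimize import minimize

def gradient_flow_step(phi_n, dt, G, dG):
    """One backward Euler step: minimize (5.1) over phi by nonlinear CG."""
    obj = lambda p: 0.5 * np.dot(p - phi_n, p - phi_n) / dt + G(p)
    jac = lambda p: (p - phi_n) / dt + dG(p)
    res = minimize(obj, phi_n, jac=jac, method="CG")
    return res.x, res.nit

def adapt_dt(dt, n_iter, lo=8, hi=15):
    """Adjust the step so the inner solve takes roughly 8-15 iterations."""
    if n_iter < lo:
        return 2.0 * dt   # converging too easily: take larger steps
    if n_iter > hi:
        return 0.5 * dt   # inner solver struggling: take smaller steps
    return dt
```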
Fig. 6.1. Mesh density on cross section x2 = 0 for W = 0.01, W = 100, and W = 10,000.
However, to save space, most of the results shown here are for the case ε = 0.03. Reducing ε further would not affect the numerically simulated vesicle membrane shapes, although the level of adaptivity would certainly be influenced. Such dependence will be examined in more detail later, and the related analysis also serves to quantitatively measure the effectiveness of the adaptivity algorithm.

Simulating a solution branch with changing excess area. In the membrane simulation community, the excess area of a compact surface Γ is defined as the difference between its area and the area of the sphere having the same volume as that enclosed by Γ, motivated by the fact that the surface of minimum area for a given volume is necessarily a sphere. It is known that, for a range of the excess area, a continuous solution branch of energy-minimizing shapes exists, undergoing a deformation from a more elliptical shape to that of the discocytes (biconcave shapes). We first used our adaptive algorithms to calculate such a solution branch. Let us take the three-dimensional ellipsoid
\[
\frac{x_1^2}{a^2} + \frac{x_2^2}{a^2} + \frac{(b x_3)^2}{a^2} = 1
\]
as the initial membrane surface, which in general is not energy-minimizing. The initial value of the phase field function φ is assigned according to the formula
\[
\phi_0(x) = \tanh\Bigl(\frac{d(x)}{\sqrt{2}\,\varepsilon}\Bigr),
\]
where d(x) is the distance from x = (x_1, x_2, x_3) to the membrane surface. Since we are mostly interested in the shape of the energy-minimizing surface, the initial value of φ does not have to satisfy the constraints accurately, nor have a perfect tanh profile. Thus, in our experiments, d(x) is calculated approximately using
\[
d(x) \approx \sqrt{x_1^2 + x_2^2 + (b x_3)^2} - a.
\tag{6.1}
\]
We let a = 0.6; the constraints on volume and surface area are chosen by setting α = A(φ_0) and β = B(φ_0) for different values of b. Beginning with a uniform mesh with 17³ nodes, we refine the mesh several times according to the error estimator calculated from φ_0 to obtain the initial mesh, so that α and β provide accurate values of the volume and surface area of the associated level surface. We have also made various selections of the penalty constants M_1 and M_2. Based on the experiments, we find it adequate to choose M_1 = 100 and M_2 = 10,000, respectively. With this choice, the volume and surface area constraints are satisfied with less than 0.5% error, and the convergence of the Lagrange multipliers can also be observed. Table 6.1 provides the numerical estimates of the constraints and the energy values corresponding to the minimizing shapes for different values of b. Since the bending energy is scale invariant, we may rescale each membrane to unit volume. With such a scaling, the rescaled surface areas (denoted by Area₁) are calculated; they are naturally linked to the excess areas described earlier, so that larger values of b correspond to larger excess areas. The results of Table 6.1 quantitatively illustrate the dependence of the minimum energy on the excess area.
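The rescaled areas in Table 6.1 below can be checked directly: rescaling a shape of volume V to unit volume multiplies its area A by V^{−2/3}, and subtracting the area of the unit-volume sphere, (36π)^{1/3} ≈ 4.836, gives the excess area. A quick check against the Volume and Area columns of the table (a sketch using numpy):

```python
import numpy as np

# Volume and Area columns of Table 6.1, for b = 1.6, ..., 2.6.
V = np.array([0.5873, 0.5221, 0.4702, 0.4265, 0.3912, 0.3614])
A = np.array([3.6644, 3.5391, 3.4832, 3.4364, 3.4294, 3.4644])

area1 = A / V ** (2.0 / 3.0)                     # area after rescaling to unit volume
excess = area1 - (36.0 * np.pi) ** (1.0 / 3.0)   # subtract unit-volume sphere area

print(np.round(area1, 2))   # [5.23 5.46 5.76 6.06 6.41 6.83], as in Table 6.1
```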
Table 6.1
Values of volume, area, rescaled area, and energy.

    b      α        β        Volume    Area      Area₁    Energy
    1.6    6.8253   3.4548   0.5873    3.6644    5.23     30.27
    1.8    6.9559   3.3367   0.5221    3.5391    5.46     33.79
    2.0    7.0596   3.2840   0.4702    3.4832    5.76     38.19
    2.2    7.1470   3.2399   0.4265    3.4364    6.06     42.34
    2.4    7.2175   3.2333   0.3912    3.4294    6.41     45.96
    2.6    7.2772   3.2663   0.3614    3.4644    6.83     51.27
Fig. 6.2. Solution and mesh density for b = 1.8.
In Figures 6.2–6.4, the energy-minimizing membrane shapes (obtained as the zero level sets of the phase field functions) are shown, along with density plots of cross sections of the phase field functions (red indicates φ = 1 in the exterior of the membrane, while blue indicates φ = −1 inside the membrane). Three-dimensional sliced views of the densities of the corresponding adaptive meshes are also provided (with red marking the most refined, or most densely packed, mesh regions and blue the coarsest; note that the color coding differs between the solution profile and the mesh density distribution). The deformation to the discocytes for larger values of b is clearly seen in these figures. To give an idea of what the actual phase field functions look like (not only their zero level sets), Figure 6.5 shows the profile of φ and the mesh density on the cross sections z = 0 and y = 0 for b = 2.6. For a better view, the profiles of −φ are plotted instead. Figure 6.6 shows the profile and mesh density for the same solution but with ε = 0.01. It is clear that the profile is well resolved by the adaptive scheme. We would like to comment on the dependence of the mesh on the curvature of the solution. First, on heuristic grounds, the membrane curvature should play a role in dictating the local mesh refinement, and the impact of the curvature relative to the resolution of the phase field is affected by the choice of W, as seen in Figure 6.1. When W = 100, the magnitudes of the two terms are comparable (in this case about 2:3 for both ε = 0.03 and ε = 0.01). In both Figures 6.5 and 6.6, the mesh density at the two ends is slightly higher than that in the middle.
Fig. 6.3. Solution and mesh density for b = 2.2.
Fig. 6.4. Solution and mesh density for b = 2.6.
To get a better view of the local mesh refinement, we zoom in on the upper left part of the mesh density and plot it in Figure 6.7. The effect of curvature can also be seen clearly in another simulation: Figure 6.8 shows a stomatocyte-shaped energy-minimizing surface and the corresponding mesh density over a cross section, where there is a higher concentration of mesh points in the large-curvature region near the axis.
Fig. 6.5. Profile and mesh of solution for b = 2.6 and ε = 0.03.
Fig. 6.6. Profile and mesh of solution for b = 2.6 and ε = 0.01.
Fig. 6.7. An enlarged view of the upper left part of the mesh density plot in Figure 6.6.
Fig. 6.8. A stomatocyte-shaped solution and the mesh density.

Table 6.2
Number of nodes and estimated error.

    Number of nodes (N)    45,166    63,113    81,603    112,425    140,007    302,107
    Estimated error (E)    464.40    413.88    367.43    317.23     277.68     192.60
Quantitative measures for the efficiency of the adaptive scheme. In many simple contexts, the efficiency of an adaptive method is illustrated by testing against a known exact solution, and the total number of degrees of freedom needed to reach a certain error tolerance is compared as a measure of the effectiveness of adaptivity. For our problem, however, exact solutions are in general not easy to obtain. We designed two series of simulations to address this issue. First, we take the solution shown in Figure 6.8, fix ε, and solve for the same energy-minimizing surface using more and more nodes. Instead of studying the relation between the error and the mesh size, we study the relation between the total number of nodes and the error, since the sizes of the elements vary in an adaptive mesh. Table 6.2 shows the number of nodes and the estimated errors. We speculate that, for the estimated error E, a relation to the number of nodes of the form log(E) ∼ C + δ log(N) may be expected. A linear regression can then be used to estimate C and δ. The estimated error and the fitted line are shown in Figure 6.9. This result shows that the estimated error is of order N^{−0.4723}, which is close to the N^{−1/2} rate projected by theory.
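The regression is straightforward to reproduce from the data of Table 6.2 (a sketch using numpy's polyfit):

```python
import numpy as np

# Data of Table 6.2.
N = np.array([45166, 63113, 81603, 112425, 140007, 302107])
E = np.array([464.40, 413.88, 367.43, 317.23, 277.68, 192.60])

# Fit log(E) ~ C + delta * log(N).
delta, C = np.polyfit(np.log(N), np.log(E), 1)
print(delta)   # about -0.472, close to the theoretical rate of -1/2
```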
Fig. 6.9. Number of nodes vs. estimated error.

Table 6.3
Number of nodes and norms of the differences in solutions.

    Number of nodes (N)       45,166     63,113     81,603     112,425    140,007
    H¹ norm of f_i − f_0      481.0800   399.4072   338.0796   265.6828   217.7660
    H¹ norm of φ_i − φ_0      5.7128     4.3948     3.4876     2.7824     2.3988
Indeed, since we are using isotropic elements, an almost uniformly structured fine mesh may be expected near the membrane; in this case the estimated error should be of order N^{−1/2}, so that the complexity of the three-dimensional (3D) adaptive FEM is comparable to that of a 2D uniform mesh. Of course, in the numerical simulations, as shown earlier, the curvature of the membrane still plays a role in dictating the mesh refinement, so the adaptive mesh in general does not have exactly the same structure over the whole membrane, leading to small deviations from the projected order.

Next, we examine the relation between the actual H¹ error and N. The difficulty in obtaining such a relation is that the actual H¹ error is unknown, since no exact solution is available. We therefore use the solution on the finest mesh as an approximation to the exact solution and compute the differences of the other solutions from it. Table 6.3 shows the H¹ norms of the differences; there, f_0 and φ_0 denote the solution on the mesh with 302,107 nodes. Since the error of the solution at the finest level cannot really be ignored, we check whether asymptotically we have the dependence
\[
\|f_i - f_0\|_{H^1} + \|\phi_i - \phi_0\|_{H^1} \sim \Bigl(\frac{1}{N_i} - \frac{1}{N_0}\Bigr)^{1/2},
\tag{6.2}
\]
where N_0 = 302,107 is the number of nodes of the finest mesh.
The ratios of the left-hand side and the right-hand side of (6.2) are shown in Table 6.4.

Table 6.4
Ratios between left-hand side and right-hand side in (6.2).

    Number of nodes (N)    45,166    63,113    81,603    112,425    140,007
    ratio / 10⁵            1.1218    1.1406    1.1421    1.1360     1.1246
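These ratios can be reproduced directly from Table 6.3 and (6.2) (a sketch using numpy):

```python
import numpy as np

N0 = 302107                      # nodes of the finest mesh
N = np.array([45166, 63113, 81603, 112425, 140007])
f_err = np.array([481.0800, 399.4072, 338.0796, 265.6828, 217.7660])
phi_err = np.array([5.7128, 4.3948, 3.4876, 2.7824, 2.3988])

# Ratio of the left- and right-hand sides of (6.2).
ratio = (f_err + phi_err) / np.sqrt(1.0 / N - 1.0 / N0)
print(np.round(ratio / 1e5, 4))  # [1.1218 1.1406 1.1421 1.136 1.1246]
```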
These nearly constant ratios indicate that the simulation results match the behavior predicted by (6.2) well. Notice that, if we take N_0 to infinity so that (f_0, φ_0) approaches the exact solution, we obtain an asymptotic convergence of the H¹ error of order N^{−1/2}, as demonstrated for the estimated error. Combining the above discussions of the estimated error and of the differences between numerical solutions, we see that, for a given interfacial width parameter, as we increase the level of numerical resolution, the a posteriori error estimators provide good predictions of the quantitative behavior of the actual numerical error.

To demonstrate the efficiency of the adaptive methods more quantitatively as ε gets smaller and smaller, we borrow ideas from the asymptotic sharp interface limit of the phase field model. From the analysis in [19] and the simulations presented earlier in [22, 23], a solution to the phase field model typically has a tanh profile, and the width of the transition layer is proportional to ε for small ε. Suppose we would like to keep the same level of resolution within the transition layer; we then expect the size of the elements near the transition layer to be of order ε (we refer to [29, 39, 42] for more detailed analysis of various numerical approximations of similar phase field models and of the dependence of the mesh size on the interfacial width for quasi-uniform meshes). In that case, if uniform meshes were used, the number of nodes would grow at a rate of at least ε^{−3} in order to maintain asymptotically the same level of numerical resolution. For an isotropic adaptive mesh with nodes concentrated near the transition layer, we naturally expect the number of nodes to be proportional to at least ε^{−2}.

Based on this observation, we conduct another series of tests to check the theoretical prediction. As the errors cannot be controlled exactly as desired, we scale the number of nodes according to the estimated error, under the assumption that the estimated error of the solution is proportional to N^{−1/2}. We now present the results of a series of ten simulations with different ε, for the cases b = 1.6 and b = 2.4, respectively. Figure 6.10 shows the number of nodes needed to achieve the same accuracy (estimated error) for different values of ε. The blue "*" symbols are the experimental data, and the blue dashed line is the estimated number of nodes using the formula N = Cε^{−2} for the case b = 1.6, while the red "◦" symbols are experimental data, and the red solid line is the estimated N corresponding to the case b = 2.4. The plots are in log-log scale, so a slope of −2 (for the dashed and solid lines) is expected, and the actual numerical data largely follow this trend. These simulation results support the theoretical prediction based on the asymptotic analysis and also serve as a quantitative measurement of the efficiency of the adaptive method.

High genus energy-minimizing vesicles. We also conducted experiments with other values of the parameters and obtained other solutions as shown in [22, 23]. Those results are omitted here. Instead, let us briefly describe a new simulation result computed with the adaptive method. Figure 6.11 shows the result of one experiment which followed the path of four merging spheres.
Fig. 6.10. Number of nodes vs. the transition width parameter ε.
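The error-based rescaling of node counts used for Figure 6.10 can be expressed as a small helper (the name rescaled_nodes is hypothetical; a sketch under the stated assumption E ∝ N^{−1/2}, not the authors' code):

```python
import numpy as np

def rescaled_nodes(eps, N, E, E_target):
    """Scale node counts from runs with errors E to a common target error,
    assuming E ~ N^(-1/2); the slope of log N_scaled against log eps
    should then be close to -2."""
    N_scaled = N * (E / E_target) ** 2   # nodes needed to reach E_target
    slope, _ = np.polyfit(np.log(eps), np.log(N_scaled), 1)
    return N_scaled, slope
```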
Fig. 6.11. Initial surface and the energy-minimizing surface with a smaller surface area.
The transition width parameter is taken to be ε = 0.04; the volume is constrained to be the same as the initial value, but the area is set to 90% of the initial value. The initial profile showing the four small spheres appears less smooth; this is due to the coarser meshes used at the beginning of the simulation. The reason for fixing a slightly reduced surface area is to allow larger perturbations of the initial profile and to let the adaptive mesh adjust itself based on the resolution needed. As the deformation begins, meshes are adaptively refined and coarsened so that, in the end, the equilibrium membrane surface is very well resolved. Topological transformation takes place during the deformation as the spheres merge together. We note, however, that the change of topology may lead to contributions from the Gaussian curvature energy, whose effect has been ignored in the simulations reported here. The simulated result not only gives evidence of the capability of the phase field method to handle topological changes in the membranes naturally, but also confirms again the rich variety of bending energy-minimizing membranes subject to the volume and surface area constraints. We refer to [41, 47] for more discussion of this type of non-axis-symmetric, high topological genus energy-minimizing surface.
The experimental findings reported in this section are mostly designed to provide evidence of the effectiveness of the adaptive FEM discussed in this paper. We leave other computational results and their biological relevance to be reported in future works.

7. Conclusion. The study of vesicle membrane deformation is receiving more and more attention due to its fundamental importance and its close connection with many biological functions. The variational phase field bending elasticity model of vesicle membrane deformations has been demonstrated to be an effective approach to studying such deformations [22, 23]. The phase field model requires the simulation of the phase field function in a computational domain enclosing the vesicle membrane. Since the objects of central interest are the membranes themselves, and the phase field function undergoes its most significant variations near the membrane surfaces, it is highly desirable to design adaptive numerical algorithms for the phase field bending elasticity model. The three-dimensional adaptive finite element method developed in this paper meets this need. In our adaptive method, the mesh adaptivity is based on a residual-type a posteriori error estimator which is rigorously derived for the associated nonlinear system of partial differential equations. The effectiveness of the method is demonstrated through the numerical simulation examples presented here, which include simulations of known solution branches and of new high genus vesicles, as well as quantitative experimental investigations of the adaptivity efficiency. This constitutes another step toward making the phase field modeling approach a valuable tool for biophysicists and biologists studying interesting problems related to lipid bilayer membranes.

The phase field bending elasticity models and simulation tools have the potential to be very helpful in the study of the geometry, mechanics, and function of complex lipid membrane systems. Recent extensions have been made to the study of multicomponent vesicles, vesicles with free edges [52], and membranes interacting with background fluid flows [5, 18]. The extension of the adaptive algorithms to more general cases and other types of error estimators, as well as their applications in multiscale modeling of bilayer vesicles and other more biologically realistic membrane models, are currently under investigation.

REFERENCES

[1] S. Adjerid, A posteriori error estimates for fourth-order elliptic problems, Comput. Methods Appl. Mech. Engrg., 191 (2002), pp. 2539–2559.
[2] M. Ainsworth and J. T. Oden, A posteriori error estimation in finite element analysis, Comput. Methods Appl. Mech. Engrg., 142 (1997), pp. 1–88.
[3] I. Babuska and T. Strouboulis, The Finite Element Method and Its Reliability, Oxford University Press, London, 2001.
[4] W. Bao and Q. Du, Computing the ground state solution of Bose–Einstein condensates by a normalized gradient flow, SIAM J. Sci. Comput., 25 (2004), pp. 1674–1697.
[5] T. Biben, K. Kassner, and C. Misbah, Phase-field approach to 3D vesicle dynamics, Phys. Rev. E, 72 (2005), pp. 419–421.
[6] M. Bloor and M. Wilson, Method for efficient shape parameterization of fluid membranes and vesicles, Phys. Rev. E, 61 (2000), pp. 4218–4229.
[7] F. Brezzi, J. Rappaz, and P. Raviart, Finite dimensional approximation of nonlinear problems. Part I: Branches of nonsingular solutions, Numer. Math., 36 (1980), pp. 1–25.
[8] C. Carstensen, A posteriori error estimate for the mixed finite element method, Math.
Comp., 66 (1997), pp. 465–476.
[9] C. Carstensen and D. Praetorius, Numerical analysis for a macroscopic model in micromagnetics, SIAM J. Numer. Anal., 42 (2005), pp. 2633–2651.
[10] Z. Chen, R. Nochetto, and A. Schmidt, Error control and adaptivity for a phase relaxation model, M2AN Math. Model. Numer. Anal., 34 (2000), pp. 775–797.
[11] P. G. Ciarlet, Introduction to Linear Shell Theory, I, Ser. Appl. Math., Gauthier–Villars, Paris, 1998.
[12] P. G. Ciarlet, Mathematical Elasticity, III, Stud. Math. Appl. 29, North–Holland, Amsterdam, 2000.
[13] P. Clement, Approximation by finite element functions using local regularization, RAIRO Anal. Numer., 9 (1975), pp. 77–84.
[14] H. Döbereiner, E. Evans, M. Kraus, U. Seifert, and M. Wortis, Mapping vesicle shapes into the phase diagram: A comparison of experiment and theory, Phys. Rev. E, 55 (1997), pp. 4458–4474.
[15] Q. Du, Finite element methods for the time dependent Ginzburg–Landau model of superconductivity, Comput. Math. Appl., 27 (1994), pp. 119–133.
[16] Q. Du, M. Gunzburger, and J. Peterson, Analysis and approximation of the Ginzburg–Landau model of superconductivity, SIAM Rev., 34 (1992), pp. 54–81.
[17] Q. Du, M. Li, and C. Liu, Analysis of a phase field Navier–Stokes vesicle-fluid interaction model, Discrete Contin. Dyn. Syst. Ser. B, 8 (2007), pp. 539–556.
[18] Q. Du, C. Liu, R. Ryham, and X. Wang, Energetic variational approaches to modeling vesicle and fluid interactions, SIAM J. Appl. Math., submitted.
[19] Q. Du, C. Liu, R. Ryham, and X. Wang, A phase field formulation of the Willmore problem, Nonlinearity, 18 (2005), pp. 1249–1267.
[20] Q. Du, C. Liu, R. Ryham, and X. Wang, Modeling the spontaneous curvature effects in static cell membrane deformations by a phase field formulation, Commun. Pure Appl. Anal., 4 (2005), pp. 537–548.
[21] Q. Du, C. Liu, R. Ryham, and X. Wang, Diffuse interface energies capturing the Euler number: Relaxation and renormalization, Commun. Math. Sci., 5 (2007), pp. 233–242.
[22] Q. Du, C. Liu, and X. Wang, A phase field approach in the numerical study of the elastic bending energy for vesicle membranes, J. Comput. Phys., 198 (2004), pp. 450–468.
[23] Q. Du, C. Liu, and X. Wang, Simulating the deformation of vesicle membranes under elastic bending energy in three dimensions, J. Comput. Phys., 212 (2005), pp. 757–777.
[24] Q. Du, C. Liu, and X. Wang, Retrieving topological information for phase field models, SIAM J. Appl. Math., 65 (2005), pp. 1913–1932.
[25] Q. Du and X. Wang, Convergence of numerical approximations to a phase field bending elasticity model of membrane deformations, Int. J. Numer. Anal. Model., 4 (2006), pp. 441–459.
[26] Q. Du and L. Zhu, Analysis of a mixed finite element method for a phase field elastic bending energy model of vesicle membrane deformation, J. Comput. Math., 24 (2006), pp. 265–280.
[27] W. M. Feng, P. Yu, S. Y. Hu, Z. K. Liu, Q. Du, and L. Q. Chen, Spectral implementation of an adaptive moving mesh method for phase-field equations, J. Comput. Phys., 220 (2006), pp. 498–510.
[28] F. Feng and W. Klug, Finite element modeling of lipid bilayer membranes, J. Comput. Phys., 20 (2006), pp. 394–408.
[29] X. Feng and A. Prohl, Analysis of a fully discrete finite element method for the phase field model and approximation of its sharp interface limits, Math. Comp., 73 (2004), pp. 541–567.
[30] R. Hoppe and R. Kornhuber, Adaptive multilevel methods for obstacle problems, SIAM J. Numer. Anal., 31 (1994), pp. 301–323.
[31] A. Schmidt and K. Siebert, Design of Adaptive Finite Element Software: The Finite Element Toolbox ALBERTA, Lect. Notes Comput. Sci. Engrg. 42, Springer, Berlin, 2005.
[32] M. Jaric, U. Seifert, W. Wintz, and M. Wortis, Vesicular instabilities: The prolate-to-oblate transition and other shape instabilities of fluid bilayer membranes, Phys. Rev. Lett., 52 (1995), 6623.
[33] J. S. Kim, K. K. Kang, and J. S. Lowengrub, Conservative multigrid methods for Cahn–Hilliard fluids, J. Comput. Phys., 193 (2004), pp. 511–543.
[34] R. J. LeVeque and Z. Li, Immersed interface methods for Stokes flow with elastic boundaries or surface tension, SIAM J. Sci. Comput., 18 (1997), pp. 709–735.
[35] F.-H. Lin and Q. Du, Ginzburg–Landau vortices: Dynamics, pinning and hysteresis, SIAM J. Math. Anal., 28 (1997), pp. 1265–1293.
[36] R. Lipowsky, The conformation of membranes, Nature, 349 (1991), pp. 475–481.
[37] J. A. Mackenzie and M. L. Robertson, A moving mesh method for the solution of the one-dimensional phase-field equations, J. Comput. Phys., 181 (2002), pp. 526–544.
[38] O. Mouritsen, Life - As a Matter of Fat: The Emerging Science of Lipidomics, Springer, Berlin, 2005.
[39] X. Jiang and R. Nochetto, A P1–P1 finite element method for a phase relaxation model I: Quasi-uniform mesh, SIAM J. Numer. Anal., 35 (1998), pp. 1176–1190.
[40] S. Osher and J. A. Sethian, Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton–Jacobi formulations, J. Comput. Phys., 79 (1988), pp. 12–49.
[41] Z. Ou-Yang, J. Liu, and Y. Xie, Geometric Methods in the Elastic Theory of Membranes in Liquid Crystal Phases, World Scientific, Singapore, 1999.
[42] M. Paolini and C. Verdi, Asymptotic and numerical analyses of the mean curvature flow with a space-dependent relaxation parameter, Asymptot. Anal., 5 (1992), pp. 553–574.
[43] C. Pozrikidis, Effect of membrane bending stiffness on the deformation of capsules in simple shear flow, J. Fluid Mech., 440 (2001), pp. 269–291.
[44] N. Provatas, N. Goldenfeld, and J. Dantzig, Efficient computation of dendritic microstructures using adaptive mesh refinement, Phys. Rev. Lett., 80 (1998), pp. 3308–3311.
[45] S. Repin, A posteriori error estimation for variational problems with uniformly convex functionals, Math. Comp., 69 (1999), pp. 481–500.
[46] A. Roma, C. Peskin, and M. Berger, An adaptive version of the immersed boundary method, J. Comput. Phys., 153 (1999), pp. 509–534.
[47] U. Seifert, Configurations of fluid membranes and vesicles, Adv. Phys., 46 (1997), pp. 13–137.
[48] U. Seifert, K. Berndl, and R. Lipowsky, Configurations of fluid membranes and vesicles, Phys. Rev. A, 44 (1991), pp. 1182–1202.
[49] S. Sukumaran and U. Seifert, Influence of shear flow on vesicles near a wall: A numerical study, Phys. Rev. E, 64 (2001), pp. 1–11.
[50] R. Verfürth, A Review of A Posteriori Error Estimation and Adaptive Mesh-Refinement Techniques, Wiley–Teubner, Chichester, 1996.
[51] X. Wang, Asymptotic analysis of phase field formulations of bending elasticity models, SIAM J. Math. Anal., 39 (2008), pp. 1367–1401.
[52] X. Wang and Q. Du, Modelling and simulations of multi-component lipid membranes and open membranes via diffusive interface approaches, J. Math. Biol., 56 (2007), pp. 347–371.
[53] W. Wintz, H. Döbereiner, and U. Seifert, Starfish vesicles, Europhys. Lett., 33 (1996), pp. 403–408.
[54] A. Veeser, Efficient and reliable a posteriori error estimators for elliptic obstacle problems, SIAM J. Numer. Anal., 39 (2001), pp. 146–167.