Solving Frontier Problems of Physics: The Decomposition Method
Kluwer Academic Publishers
Solving Frontier Problems of Physics: The Decomposition Method
Fundamental Theories of Physics
An International Book Series on The Fundamental Theories of Physics: Their Clarification, Development and Application
Editor:
ALWYN VAN DER MERWE, University of Denver, U.S.A.
Editorial Advisory Board:
ASIM BARUT, University of Colorado, U.S.A.
BRIAN D. JOSEPHSON, University of Cambridge, U.K.
CLIVE KILMISTER, University of London, U.K.
GÜNTER LUDWIG, Philipps-Universität, Marburg, Germany
NATHAN ROSEN, Israel Institute of Technology, Israel
MENDEL SACHS, State University of New York at Buffalo, U.S.A.
ABDUS SALAM, International Centre for Theoretical Physics, Trieste, Italy
HANS-JÜRGEN TREDER, Zentralinstitut für Astrophysik der Akademie der Wissenschaften, Germany
Volume 60
Solving Frontier Problems of Physics: The Decomposition Method
George Adomian
General Analytics Corporation, Athens, Georgia, U.S.A.
KLUWER ACADEMIC PUBLISHERS
DORDRECHT / BOSTON / LONDON
Library of Congress Cataloging-in-Publication Data

Adomian, G.
Solving frontier problems of physics : the decomposition method / George Adomian.
p. cm. (Fundamental theories of physics ; v. 60)
Includes index.
ISBN 0-7923-2644-X (alk. paper)
1. Decomposition method. 2. Mathematical physics. I. Title. II. Series.
QC20.7.D4A36 1994
530.1'594 dc20   93-39561

ISBN 0-7923-2644-X
Published by Kluwer Academic Publishers, P.O. Box 17, 3300 AA Dordrecht, The Netherlands. Kluwer Academic Publishers incorporates the publishing programmes of D. Reidel, Martinus Nijhoff, Dr W. Junk and MTP Press. Sold and distributed in the U.S.A. and Canada by Kluwer Academic Publishers, 101 Philip Drive, Norwell, MA 02061, U.S.A. In all other countries, sold and distributed by Kluwer Academic Publishers Group, P.O. Box 322, 3300 AH Dordrecht, The Netherlands.
Printed on acid-free paper
All Rights Reserved
© 1994 Kluwer Academic Publishers
No part of the material protected by this copyright notice may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording or by any information storage and retrieval system, without written permission from the copyright owner.
Printed in the Netherlands
IN MEMORY OF MY FATHER AND MOTHER, HAIG AND VARTUHI ADOMIAN
EARLIER WORKS BY THE AUTHOR

Applied Stochastic Processes, Academic Press, 1980.
Stochastic Systems, Academic Press, 1983; also Russian transl. ed. H.G. Volkova, Mir Publications, Moscow, 1987.
Partial Differential Equations with R.E. Bellman, D. Reidel Publishing Co., 1985.
Nonlinear Stochastic Operator Equations, Academic Press, 1986.
Nonlinear Stochastic Systems Theory and Applications to Physics, Kluwer Academic Publishers, 1989.
TABLE OF CONTENTS
PREFACE
FOREWORD
CHAPTER 1    ON MODELLING PHYSICAL PHENOMENA
CHAPTER 2    THE DECOMPOSITION METHOD FOR ORDINARY DIFFERENTIAL EQUATIONS
CHAPTER 3    THE DECOMPOSITION METHOD IN SEVERAL DIMENSIONS
CHAPTER 4    DOUBLE DECOMPOSITION
CHAPTER 5    MODIFIED DECOMPOSITION
CHAPTER 6    APPLICATIONS OF MODIFIED DECOMPOSITION
CHAPTER 7    DECOMPOSITION SOLUTIONS FOR NEUMANN BOUNDARY CONDITIONS
CHAPTER 8    INTEGRAL BOUNDARY CONDITIONS
CHAPTER 9    BOUNDARY CONDITIONS AT INFINITY
CHAPTER 10   INTEGRAL EQUATIONS
CHAPTER 11   NONLINEAR OSCILLATIONS IN PHYSICAL SYSTEMS
CHAPTER 12   SOLUTION OF THE DUFFING EQUATION
CHAPTER 13   BOUNDARY-VALUE PROBLEMS WITH CLOSED IRREGULAR CONTOURS OR SURFACES
CHAPTER 14   APPLICATIONS IN PHYSICS
APPENDIX I   PADÉ AND SHANKS TRANSFORMS
APPENDIX II  ON STAGGERED SUMMATION OF DOUBLE DECOMPOSITION SERIES
APPENDIX III CAUCHY PRODUCTS OF INFINITE SERIES
INDEX
PREFACE
I discovered the very interesting Adomian method and met George Adomian himself some years ago at a conference held in the United States. This new technique was very surprising for me, an applied mathematician, because it allowed the exact solution of nonlinear functional equations of various kinds (algebraic, differential, partial differential, integral, ...) without discretizing the equations or approximating the operators. The solution, when it exists, is found in a rapidly converging series form, and time and space are not discretized. At this time an important question arose: why does this technique, involving special kinds of polynomials (Adomian polynomials), converge? I worked on this subject with some young colleagues at my research institute and found that it was possible to connect the method to more well-known formulations where classical theorems (fixed point theorem, substituted series, ...) could be used. A general framework for decomposition methods has even been proposed by Lionel Gabet, one of my researchers, who has obtained a Ph.D. on this subject. During this period a fruitful cooperation has developed between George Adomian and my research institute. We have frequently discussed advances and difficulties, and we exchange ideas and results. With regard to this new book, I am very impressed by the quality and the importance of the work, in which the author uses the decomposition method for solving frontier problems of physics. Many concrete problems involving differential and partial differential equations (including the Navier-Stokes equations) are solved by means of the decomposition technique developed by Dr. Adomian. The basic ideas are clearly detailed with specific physical examples so that the method can be easily understood and used by researchers of various disciplines. One of the main objectives of this method is to provide a simple and unified technique for solving nonlinear functional equations.
Of course some problems remain open. For instance, practical convergence may be ensured even if the hypotheses of known methods are not satisfied. That means that there still exist opportunities for further theoretical studies to be done by pure or applied mathematicians, such as proving convergence in more general situations. Furthermore, it is not always easy to take into account the boundary conditions for complex domains. In conclusion, I think that this book is a fundamental contribution to the theory and practice of decomposition methods in functional analysis. It
completes and clarifies the previous book of the author published by Kluwer in 1989. The decomposition method has now lost its mystery but it has won in seriousness and power. Dr. Adomian is to be congratulated for his fundamental contribution to functional and numerical analysis of complex systems.
Yves Cherruault
Professor, Director of Medimat
Université Pierre et Marie Curie (Paris VI)
Paris, France
September 9, 1993
FOREWORD
This book is intended for researchers and (primarily graduate) students of physics, applied mathematics, engineering, and other areas such as biomathematics and astrophysics where mathematical models of dynamical systems require quantitative solutions. A major part of the book deals with the necessary theory of the decomposition method and its generalizations since earlier works. A number of topics are not included here because they were dealt with previously. Some of these are delay equations, integro-differential equations, algebraic equations and large matrices, comparisons of decomposition with perturbation and hierarchy methods requiring closure approximation, stochastic differential equations, and stochastic processes [1]. Other topics had to be excluded due to time and space limitations as well as the objective of emphasizing utility in solving physical problems. Recent works, especially by Professor Yves Cherruault in journal articles and by Lionel Gabet in a dissertation, have provided a rigorous theoretical foundation supporting the general effectiveness of the method of decomposition. The author believes that this method is relevant to the field of mathematics as well as physics because mathematics has been essentially a linear operator theory while we deal with a nonlinear world. Applications have shown that accurate and easily computed quantitative solutions can be determined for nonlinear dynamical systems without assumptions of "small" nonlinearity or computer-intensive methods. The evolution of the research has suggested a theory to unify linear and nonlinear, ordinary or partial differential equations for solving initial or boundary-value problems efficiently. As such, it appears to be valuable in the background of applied mathematicians and theoretical or mathematical physicists.
An important objective for physics is a methodology for solution of dynamical systems which yields verifiable and precise quantitative solutions to physical problems modelled by nonlinear partial differential equations in space and time. Analytical methods which do not require a change of the model equation into a mathematically more tractable, but necessarily less realistic, representation are of primary concern. Improvement of analytical methods would in turn allow more sophisticated modelling and possible further progress. The final justification of theories of physics is in the correspondence of predictions with nature rather than in rigorous proofs which may well
restrict the stated problem to a more limited universe. The broad applicability of the methodology is a dividend which may allow a new approach to mathematics courses as well as being useful for the physicists who will shape our future understanding of the world. Recent applications by a growing community of users have included areas such as biology and medicine, hydrology, and semiconductors. In the author's opinion this method offers a fertile field for pure mathematicians and especially for doctoral students looking for dissertation topics. Many possibilities are included directly or indirectly. Some repetition of objectives and motivations (for research on decomposition and connections with standard methods) was believed to be appropriate to make various chapters relatively independent and permit convenient design of courses for different specialties and levels. Partial differential equations are now solved more efficiently, with less computation, than in the author's earlier works. The Duffing oscillator and other generic oscillators are dealt with in depth. The last chapter concentrates on a number of frontier problems. Among these are the Navier-Stokes equations, the N-body problem, and the Yukawa-coupled Klein-Gordon-Schrödinger equation. The solutions of these involve no linearization, perturbation, or limit on stochasticity. The Navier-Stokes solution [2] differs from earlier analyses [3]. The system is fully dynamic, considering pressure changing as the velocity changes. It now allows high velocity and possible prediction of the onset of turbulence. The references listed are not intended to be an exhaustive or even a partial bibliography of the valuable work of many researchers in these general areas. Only those papers are listed which were considered relevant to the precise area and method treated. (New work is appearing now at an accelerating rate by many authors for submission to journals or for dissertations and books.
A continuing bibliography could be valuable to future contributors, and reprints received by the author will be recorded for this purpose.) The author appreciates the advice, questions, comments, and collaboration of early workers in this field such as Professors R.E. Bellman, N. Bellomo, Dr. R. McCarty, and other researchers over the years, the important work by Professor Yves Cherruault on convergence and his much appreciated review of the entire manuscript, the support of my family, and the editing and valuable contributions of collaborator and friend, Randolph Rach, whose insights and willingness to share his time and knowledge on difficult problems have been an important resource. The book contains work originally typeset by Arlette Revells and Karin Haag. The camera-ready manuscript was prepared with the dedicated effort of Karin Haag, assisted by William David. Laura and William David assumed responsibility for office management so that research results could be accelerated. Computer results on the Duffing equation were obtained by Dr. McLowery Elrod with the cooperation of the National Science Center Foundation headed by Dr. Fred C. Davison, who has long supported this work. Gratitude is due to Ronald E. Meyers, U.S. Army Research Laboratories, White Sands Missile Range, who supported much of this research and also contributed to some of the development. Thanks are also due to the Office of Naval Research, Naval Research Laboratories, and Paul Palo of the Naval Civil Engineering Laboratories, who have supported work directed toward applications as well as intensive courses at NRL and NCEL. The author would also like to thank Professor Alwyn Van der Merwe of the University of Denver for his encouragement that led to this book. Most of all, the unfailing support by my wife, Corinne, as well as her meticulous final editing, is deeply appreciated.
REFERENCES
1. G. Adomian, Stochastic Processes, Encyclopedia of Sciences and Technology, Vol. 16, 2nd ed., Academic Press (1992).
2. G. Adomian, An Analytic Solution of the Stochastic Navier-Stokes System, Foundations of Physics, 21, 831-834 (July 1991).
3. G. Adomian, Nonlinear Stochastic Systems Theory and Applications to Physics, Kluwer, 192-216 (1989).
ON MODELLING PHYSICAL PHENOMENA

Our use of the term "mathematical model" or "model" will refer to a set of consistent equations intended to describe the particular features or behavior of a physical system which we seek to understand. Thus, we can have different models of the system dependent on the questions of interest and on the features relevant to those questions. To derive an adequate mathematical description with a consistent set of equations and relevant conditions, we clearly must have in mind a purpose or objective and limit the problem to exclude factors irrelevant to our specific interest. We begin by considering the pertinent physical principles which govern the phenomena of interest along with the constitutive properties of materials with which the phenomena may interact. Depending on the problem, a model may consist of algebraic equations, integral equations, or ordinary, partial, or coupled systems of differential equations. The equations can be nonlinear and stochastic in general, with linear or deterministic equations being special cases. (In some cases, we may have delays as well.) Combinations of these equations such as integro-differential equations also occur. A model using differential equations must also include the initial/boundary conditions. Since nonlinear and nonlinear stochastic equations are extremely sensitive to small changes in inputs or initial conditions, solutions may change rather radically with such changes. Consequently, exact specification of the model is sometimes not a simple matter. Prediction of future behavior is therefore limited by the precision of the initial state. When significant nonlinearity is present, small changes (perhaps only 1%) in the system may make possible one or many different solutions. If small but appreciable randomness, or, possibly, accumulated round-off error in iterative calculation is present, we may observe a random change from one solution to another, an apparently chaotic behavior.
To model the phenomena, process, or system of interest, we first isolate the relevant parameters. From experiments, observations, and known relationships, we seek mathematical descriptions in the form of equations which we can then solve for desired quantities. This process is neither universal nor can it take everything into account; we must tailor the model to fit the questions to which we need answers and neglect extraneous factors. Thus a model necessarily excludes the universe external to the problem and region of interest to simplify as much as possible, and reasonably retains only factors relevant to the desired solution. Modelling is necessarily a compromise between physical realism and our ability to solve the resulting equations. Thus, development of understanding based on verifiable theory involves both modelling and analysis. Any incentive for more accurate or realistic modelling is limited by our ability to solve the equations; customary modelling uses restrictive assumptions so that well-known mathematics can be used. Our objective is to minimize or avoid altogether this compromise for mathematical tractability which requires linearization and superposition, perturbation, etc., and instead, to model the problem with its inherent nonlinearities and random fluctuations or uncertain data. We do this because the decomposition method is intended to solve nonlinear and/or stochastic ordinary or partial differential equations, integro-differential equations, delay equations, matrix equations, etc., avoiding customary restrictive assumptions and methods, to allow solutions of more realistic models. If the deductions resulting from solution of this model differ from accurate observation of physical reality, then this would mean that the model is a poor one and we must re-model the problem. Hence, modelling and the solution procedure ought to be applied interactively. Since we will be dealing with a limited region of space-time which is of interest to the problem at hand, we must consider conditions on the boundaries of the region to specify the problem completely. If we are interested in dynamical problems such as a process evolving over time, then we must consider solutions as time increases from some initial time; i.e., we will require initial conditions.
We will be interested generally in differential equations which express relations between functions and derivatives. These equations may involve use of functions, ordinary or partial derivatives, and nonlinearities and even stochastic processes to describe reality. Also, of course, initial and boundary conditions must be specified to make the problem completely determinable. If the solution is to be valid, it must satisfy the differential equation and the properly specified conditions, so appropriate smoothness must exist. We have generally assumed that nonlinearities are analytic but will discuss some exceptions in a later chapter. An advantage, other than the fact that problems are considered more realistically than by customary constraints, is that
solutions are not obtained here by discretized methods: solutions are continuous and computationally much more efficient, as we shall see. If we can deal with a physical problem as it is, we can expect a useful solution, i.e., one in which the mathematical results correspond to reality. If our model is poor because the data are found from measurements which have some error, it is usual to require that a small change in the data must lead to a small change in the solution. This does not apply to nonlinear equations because small changes in initial data can cause significant changes in the solution, especially in stochastic equations. This is a problem of modelling. If the data are correct and the equation properly describes the problem, we expect a correct and convergent solution. The initial/boundary conditions for a specific partial differential equation, needless to say, cannot be arbitrarily assigned: they must be consistent with the physical problem being modelled. Suppose we consider a solid body where u(x,y,z,t) represents a temperature at x,y,z at time t. If we consider a volume V within the body which is bounded by a smooth closed surface S and consider the change of heat in V during an interval (t1, t2), we have, following the derivation of N.S. Koshlyakov, M.M. Smirnov, and E.B. Gliner [1],

Q1 = ∫ from t1 to t2 dt ∮S k (∂u/∂n) dS
where n is the normal to S in the direction of decreasing temperature and k is the internal heat conductivity, a positive function independent of the direction of the normal. The amount of heat required to change the temperature of V is

Q2 = ∫∫∫V cρ [u(x,y,z,t2) − u(x,y,z,t1)] dV
where c(x,y,z) is the specific heat and ρ(x,y,z) is the density. If heat sources with density g(x,y,z,t) exist in the body, we have

Q3 = ∫ from t1 to t2 dt ∫∫∫V g(x,y,z,t) dV
Since Q2 = Q1 + Q3, it follows that

cρ ∂u/∂t = div(k grad u) + g
If c, ρ, and k are constants, we can write a² = k/cρ and f(x,y,z,t) = g(x,y,z,t)/cρ. Then

∂u/∂t = a²∇²u + f

(which neglects heat exchange between S and the surrounding medium). Now to determine a solution, we require the temperature at an initial instant u(x,y,z,t = 0) and either the temperatures at every point of the surface or the heat flow on the surface. These are constraints or, commonly, the boundary conditions. If we do not neglect heat exchange to the surrounding medium, which is assumed to have uniform temperature u₀, a third boundary condition can be written as α(u − u₀) = −k ∂u/∂n on S (if we assume the coefficient of exchange is uniform for all of S). Thus the solution must satisfy the equation, the initial condition, and one of the above boundary conditions or constraints which make the problem specific. We have assumed a particular model which is formulated using fundamental physical laws such as conservation of energy, so the initial distribution must be physically correct and not arbitrary. If it is correct, it leads to a specific physically correct solution. The conditions and the equation must be consistent and physically correct. The conditions must be smooth, bounded, and physically realizable. The initial conditions must be consistent with the boundary conditions and the model. The derived "solution" is verified to be consistent with the model equation and the conditions and is therefore the solution.
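As a quick symbolic illustration (an example added here, not taken from the text), one can verify that a standard candidate solution of the source-free one-dimensional heat equation ∂u/∂t = a²∂²u/∂x² satisfies the equation identically and read off the initial condition it implies; the choice u = e^(−a²t) sin x is an assumed textbook example.

```python
# Symbolic sanity check (illustrative, not from the book): verify that
# u = exp(-a^2 t) sin(x) satisfies the source-free heat equation u_t = a^2 u_xx.
import sympy as sp

x, t, a = sp.symbols('x t a', positive=True)
u = sp.exp(-a**2 * t) * sp.sin(x)                  # candidate temperature field
residual = sp.diff(u, t) - a**2 * sp.diff(u, x, 2)

print(sp.simplify(residual))   # 0: the PDE is satisfied identically
print(u.subs(t, 0))            # sin(x): the initial temperature distribution
```

The residual vanishes exactly, consistent with the remark above that a valid solution must satisfy both the equation and the initial condition.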
NOTE: Koshlyakov et al. [1] state that we must specify u(t = 0) within the body and one of the boundary conditions such as u on S. However, S is not insulated from the body. The initial condition u(t = 0) fixes u on S also if surroundings are ignored. It seems that either one or the other should be enough in a specific problem, and if you give both, they must be consistent with each other and the model (equation). The same situation arises when, e.g., in a square or rectangular domain, we assign boundary conditions on the four sides, which means that physically we have discontinuity at the corners.
REFERENCE
1. N.S. Koshlyakov, M.M. Smirnov, and E.B. Gliner, Differential Equations of Mathematical Physics, North-Holland Publishers (1964).
SUGGESTED READING
1. Y. Cherruault, Mathematical Modelling in Biomedicine, Reidel (1986).
2. R.P. Feynman, R.B. Leighton, and M. Sands, The Feynman Lectures on Physics, Addison-Wesley (1965).
3. I.S. Sokolnikoff and R.M. Redheffer, Mathematics of Physics and Modern Engineering, 2nd ed., McGraw-Hill (1966).
THE DECOMPOSITION METHOD FOR ORDINARY DIFFERENTIAL EQUATIONS
A critically important problem in frontier science and technology is the physically correct solution of nonlinear and/or stochastic systems modelled by differential equations with prescribed initial/boundary conditions. The usual procedures of analysis necessarily change such problems in essential ways in order to make them mathematically tractable by established methods. Unfortunately these changes necessarily change the solutions; therefore, they can deviate, sometimes seriously, from the actual physical behavior. These procedures include linearization techniques, perturbation methods, and restrictions on the nature and magnitude of stochastic processes. The avoidance of these limitations so that physically correct solutions can be obtained would add in an important way to our insights into the natural behavior of complex systems and would offer a potential for advances in science and technology. The prior art in mathematical analysis as seen in the literature necessarily relies on such limiting procedures. Thus it may well be said that physics is usually perturbative theory and mathematics is essentially linear operator theory. Of course there are some methods of solving nonlinear equations, but not general methods. For example, clever transformation of variables sometimes results in a linear equation; however, this rarely works. The objective of the decomposition method is to make possible physically realistic solutions of complex systems without the usual modelling and solution compromises to achieve tractability. A bonus is that it essentially combines the fields of ordinary and partial differential equations. This chapter will summarize the method and will briefly discuss applications and consequences for analysis and computation. Suppose we think about physical systems described by nonlinear partial differential equations. In the more complicated problems, we ordinarily must resort to discretized methods and numerical computation.
An appropriate example is fluid flow and "computational fluid dynamics" (C.F.D.), an area of intensive research in attempting to develop codes for study of transonic and hypersonic flow. Because of the symbiosis between such existing methodology and supercomputers, as well as the complexity, these methods are computationally intensive. Massive printouts are the result, and functional dependences are difficult to see. We have a constant demand for faster computers, superconductivity, parallelism, etc., because of the necessity to cut down computation time. Thus a continuous solution and considerably decreased computation is evidently a desirable goal. Closed-form analytical solutions are considered ideal when possible. However, they may necessitate changing the actual or real-life problem to a more tractable mathematical problem. Except for a small class of equations in which clever transformations can result in linear equations, it becomes necessary to resort to linearization or statistical linearization techniques, or assumptions of "weak nonlinearity," etc. What we get then is solution of the simpler mathematical problem. The resulting solution can deviate significantly from the solution of the actual problem; nonlinear systems can be extremely sensitive to small changes. These small changes can occur because of inherent stochastic effects or computer errors; the resulting solutions (especially in strongly nonlinear equations) can show violent, erratic (or "chaotic") behavior. Of course, it is clear that considerable progress has been made with the generally used procedures, and, in many problems, these methods remain adequate. Thus, in problems which are close to linear, or where perturbation theory is adequate, excellent solutions are obtained.
In many frontier problems, however, we have strong nonlinearities or stochasticity in parameters, so that it becomes important to find a new approach, and that is our subject here. We begin with the (deterministic) form Fu = g(t) where F is a nonlinear ordinary differential operator with linear and nonlinear terms. We could represent the linear term by Lu where L is the entire linear operator. In this case L must be easily invertible, which may not be the case, i.e., we may have a difficult Green's function and a consequently difficult integration. Instead, we write the linear term as Lu + Ru where we choose L as the highest-ordered derivative. Now L⁻¹ is simply an n-fold integration for an nth-order L. The remainder of the linear operator is R. (In cases where stochastic terms are present in the linear operator, we can include a stochastic operator term.) The nonlinear term is represented by Nu. Thus Lu + Ru + Nu = g and we write

Lu = g − Ru − Nu
For initial-value problems define L⁻¹ for L = dⁿ/dtⁿ as the n-fold definite integration operator from 0 to t. For the operator L = d²/dt², for example, we have L⁻¹Lu = u − u(0) − tu′(0) and therefore

u = u(0) + tu′(0) + L⁻¹g − L⁻¹Ru − L⁻¹Nu
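The action of L⁻¹L can be checked mechanically. The sketch below (helper name `Linv` is mine, not the book's) applies two-fold definite integration from 0 to t to u″ for an arbitrary smooth test function and recovers u − u(0) − tu′(0), as stated above.

```python
# Check that L^{-1}L u = u - u(0) - t*u'(0) for L = d^2/dt^2, where L^{-1} is
# two-fold definite integration from 0 to t. Test function chosen arbitrarily.
import sympy as sp

t, r, s = sp.symbols('t r s')
u = sp.cos(t) + 3*t + 2                    # arbitrary smooth test function

def Linv(f):
    """Two-fold definite integration of f from 0 to t."""
    inner = sp.integrate(f.subs(t, r), (r, 0, s))   # first integration, 0..s
    return sp.integrate(inner, (s, 0, t))           # second integration, 0..t

lhs = Linv(sp.diff(u, t, 2))                        # L^{-1} L u
rhs = u - u.subs(t, 0) - t * sp.diff(u, t).subs(t, 0)
assert sp.expand(lhs - rhs) == 0                    # identity holds exactly
```

Here L⁻¹Lu evaluates to cos t − 1, which is exactly u − u(0) − tu′(0) for this test function.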
For the same operator equation but now considering a boundary-value problem, we let L⁻¹ be an indefinite integral and write u = A + Bt for the first two terms and evaluate A, B from the given conditions. The first three terms are identified as u₀ in the assumed decomposition u = Σn=0..∞ uₙ. Finally, assuming Nu is analytic, we write

Nu = Σn=0..∞ Aₙ(u₀, u₁, ..., uₙ)

where the Aₙ are specially generated (Adomian) polynomials for the specific nonlinearity. They depend only on the u₀ to uₙ components and form a rapidly convergent series. The Aₙ are given as

A₀ = f(u₀)

and can be found from the formula (for n ≥ 1)

Aₙ = Σν=1..n c(ν, n) f⁽ν⁾(u₀)

In the linear case where f(u) = u, the Aₙ reduce to uₙ. Otherwise Aₙ = Aₙ(u₀, u₁, ..., uₙ). For f(u) = u², for example, A₀ = u₀², A₁ = 2u₀u₁, A₂ = u₁² + 2u₀u₂, A₃ = 2u₁u₂ + 2u₀u₃, .... It is to be noted that in this scheme the sums of the subscripts in each term of the Aₙ are equal to n. The c(ν, n) are products (or sums of products) of ν components of u whose subscripts sum to n, divided by the factorial of the number of repeated subscripts. Thus c(1,3) can only be u₃; c(2,3) is u₁u₂ and c(3,3) = (1/3!)u₁³. For a nonlinear equation in
u, one may express any given function f(u) in the Aₙ by f(u) = Σn=0..∞ Aₙ. We have previously pointed out that the polynomials are not unique; e.g., for f(u) = u², A₀ = u₀², A₁ = 2u₀u₁, A₂ = u₁² + 2u₀u₂, .... But A₁ could also be 2u₀u₁ + u₁², i.e., it could include the first term of A₂, since u₀ and u₁ are known when u₂ is to be calculated. It is now established that the sum of the series Σn=0..∞ Aₙ for Nu is equal to the sum of a generalized Taylor series about u₀, that u = Σn=0..∞ uₙ is equal to a generalized Taylor series about the function u₀, and that the series terms approach zero as 1/(mn)! if m is the order of the highest linear differential operator. Since the series converges (in norm) and does so very rapidly, the n-term partial sum φₙ = Σi=0..n−1 uᵢ can serve as a practical solution for synthesis and design. The lim n→∞ φₙ = u. Other convenient algorithms have been developed for composite and multidimensional functions as well as for particular functions of interest; as an example, for solution of the Duffing equation we use the notation Aₙ[f(u)] to display the specific nonlinearity explicitly.
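One standard way to generate the Aₙ in practice (a parametrization equivalent to the definition above; the code and names are a sketch added here, not the book's) is to expand f(Σ uᵢλⁱ) in a grouping parameter λ and take the coefficient of λⁿ, which automatically produces terms whose subscripts sum to n:

```python
# Generate Adomian polynomials via A_n = (1/n!) d^n/dlam^n f(sum_i u_i lam^i)
# at lam = 0, and confirm the f(u) = u^2 examples quoted in the text.
import sympy as sp

def adomian_polynomials(f, n_max):
    """Return [A_0, ..., A_n_max] for the nonlinearity f."""
    lam = sp.Symbol('lam')
    u = sp.symbols(f'u0:{n_max + 1}')                  # u0, u1, ..., u_nmax
    u_lam = sum(ui * lam**i for i, ui in enumerate(u))
    return [sp.expand(sp.diff(f(u_lam), lam, n).subs(lam, 0) / sp.factorial(n))
            for n in range(n_max + 1)]

A = adomian_polynomials(lambda v: v**2, 3)
u0, u1, u2, u3 = sp.symbols('u0:4')
assert A[0] == u0**2
assert A[1] == 2*u0*u1
assert A[2] == sp.expand(u1**2 + 2*u0*u2)
assert A[3] == sp.expand(2*u1*u2 + 2*u0*u3)
```

The output matches the A₀ through A₃ listed above for f(u) = u², and the same routine works for any analytic nonlinearity.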
If we write f(u) = Σn=0..∞ Aₙ[f(u)], or more simply f(u) = Σn=0..∞ Aₙ, and let f(u) = u, we have u = Σn=0..∞ uₙ since then A₀ = u₀, A₁ = u₁, .... Thus we can say u and f(u), i.e., the solution and any nonlinearity, are written in terms of the Aₙ, or, that we do this for the nonlinearity and think of u as simply decomposed into components uᵢ to be evaluated such that the n-term approximation φₙ = Σi=0..n−1 uᵢ approaches u = Σn=0..∞ uₙ as n → ∞. The solution can now be written as

u = Σn=0..∞ uₙ = Φ + L⁻¹g − L⁻¹R Σn=0..∞ uₙ − L⁻¹ Σn=0..∞ Aₙ

so that

u₀ = Φ + L⁻¹g
u₁ = −L⁻¹Ru₀ − L⁻¹A₀
u₂ = −L⁻¹Ru₁ − L⁻¹A₁
uₙ₊₁ = −L⁻¹Ruₙ − L⁻¹Aₙ

etc. All components are determinable since A₀ depends only on u₀, A₁ depends on u₀, u₁, etc. The practical solution will be the n-term approximation or approximant to u, sometimes written φₙ[u] or simply φₙ.
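As a concrete worked example (added here, not the book's; the problem choice is an assumption), the recursion above applied to du/dt + u² = 0, u(0) = 1, with L = d/dt, g = 0 and R = 0 gives u₀ = 1 and uₙ₊₁ = −L⁻¹Aₙ; the partial sums reproduce the geometric series of the exact solution u = 1/(1 + t):

```python
# Decomposition solution of u' + u^2 = 0, u(0) = 1 (exact solution 1/(1+t)).
# Here u0 = u(0) and u_{n+1} = -L^{-1} A_n, with L^{-1} = integration 0..t.
import sympy as sp

t, s, lam = sp.symbols('t s lam')

def A_n(components, n):
    """n-th Adomian polynomial of f(u) = u^2 for the components found so far."""
    u_lam = sum(c * lam**i for i, c in enumerate(components))
    return sp.expand(sp.diff(u_lam**2, lam, n).subs(lam, 0) / sp.factorial(n))

u = [sp.Integer(1)]                                    # u0 from u(0) = 1
for n in range(5):
    An = A_n(u, n)
    u.append(sp.integrate(-An.subs(t, s), (s, 0, t)))  # u_{n+1} = -L^{-1} A_n

phi6 = sp.expand(sum(u))                               # 6-term approximant
assert phi6 == sp.expand(1 - t + t**2 - t**3 + t**4 - t**5)
```

Six terms already agree with the Taylor series of 1/(1 + t) through t⁵, illustrating the term-by-term buildup of the approximant φₙ described above.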
Convergence has been rigorously established by Professor Yves Cherruault [1]. Further rigorous re-examination has most recently been done by Lionel Gabet [2]. The rapidity of this convergence means that few terms are required, as shown in examples, e.g., [1].
BASIS FOR THE EFFECTIVENESS OF DECOMPOSITION:
Let's consider the physical basis for the accuracy and the rapid rate of convergence. The initial term u₀ is an optimal first approximation containing essentially all a priori information about the system. Thus, u₀ = Φ + L⁻¹g contains the given input g (which is bounded in a physical system) and the initial or boundary conditions included in Φ, which is the solution of LΦ = 0. Furthermore, the following terms converge for bounded t as 1/(mn)!, where n is the order of L and m is the number of terms in the approximant φₘ. Hence even with very small m, the φₘ will contain most of the solution. Of the following derived terms, u₁ is particularly simple, since A₀, the first of the polynomials representing the nonlinearity f(u), is f(u₀), which of course is also known. The f(u) need not even be analytic [3] but could be piecewise-differentiable (Sobolev space) and represented by a series of analytic functions. We do require bounded Φ and g, which is physically reasonable, and the uₙ terms must be (Lebesgue) integrable. For nth-order differential operators L, we must have an n-fold differentiability/continuity requirement on solutions in order to assure existence and uniqueness. The development was based on a decomposition into calculable terms of the solution to be found rather than an expansion in a series. We will see that the method works for initial-value or boundary-value problems and for linear or nonlinear, ordinary or partial differential equations and even for stochastic systems. We now prove that the Σ Aₙ is a rearranged Taylor series about this optimal first approximation φ₁ = u₀. To see that the series in Aₙ polynomials forms a generalized Taylor series about a function (rather than a point), we write

f(u) = Σn=0..∞ Aₙ = f(u₀) + u₁f⁽¹⁾(u₀) + [u₂f⁽¹⁾(u₀) + (u₁²/2!)f⁽²⁾(u₀)] + [u₃f⁽¹⁾(u₀) + u₁u₂f⁽²⁾(u₀) + (u₁³/3!)f⁽³⁾(u₀)] + ...
which can be rearranged as

f(u) = f(u_0) + (u_1 + u_2 + ···)f^(1)(u_0) + [(1/2!)u_1^2 + u_1 u_2 + ···]f^(2)(u_0) + ···
     = f(u_0) + [(u − u_0)/1!]f^(1)(u_0) + [(u − u_0)^2/2!]f^(2)(u_0) + ···
     = Σ_{n=0}^∞ [(u − u_0)^n/n!] f^(n)(u_0)
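This rearrangement can be seen concretely on a small problem (an illustrative sketch, assuming sympy is available; the equation u' = u^2, u(0) = 1 is chosen for illustration and is not one of the text's examples): the components generated by u_{n+1} = L^(-1)A_n come out as exactly the Taylor terms of the closed-form solution 1/(1 − t) about the initial term u_0 = 1.

```python
import sympy as sp

t = sp.symbols('t')

# u' = u^2, u(0) = 1  =>  u = 1 + L^{-1} sum(A_n), with L^{-1} = integral from 0 to t
u = [sp.Integer(1)]                                  # u_0 from the initial condition
for n in range(6):
    A_n = sum(u[i] * u[n - i] for i in range(n + 1)) # Adomian polynomials for u^2
    u.append(sp.integrate(A_n, (t, 0, t)))           # u_{n+1} = L^{-1} A_n

print(u)   # components 1, t, t**2, t**3, ... : the Taylor terms of 1/(1 - t)
```

Each component is precisely the next Taylor term of the solution about u_0, illustrating the generalized Taylor expansion above.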
A REFERENCE LIST OF THE ADOMIAN POLYNOMIALS:

A_0 = f(u_0)
A_1 = u_1 f^(1)(u_0)
A_2 = u_2 f^(1)(u_0) + (1/2!)u_1^2 f^(2)(u_0)
A_3 = u_3 f^(1)(u_0) + u_1 u_2 f^(2)(u_0) + (1/3!)u_1^3 f^(3)(u_0)
A_4 = u_4 f^(1)(u_0) + [(1/2!)u_2^2 + u_1 u_3] f^(2)(u_0) + (1/2!)u_1^2 u_2 f^(3)(u_0) + (1/4!)u_1^4 f^(4)(u_0)
A_5 = u_5 f^(1)(u_0) + [u_2 u_3 + u_1 u_4] f^(2)(u_0) + [(1/2!)u_1 u_2^2 + (1/2!)u_1^2 u_3] f^(3)(u_0) + (1/3!)u_1^3 u_2 f^(4)(u_0) + (1/5!)u_1^5 f^(5)(u_0)
A_6 = u_6 f^(1)(u_0) + [(1/2!)u_3^2 + u_2 u_4 + u_1 u_5] f^(2)(u_0) + [(1/3!)u_2^3 + u_1 u_2 u_3 + (1/2!)u_1^2 u_4] f^(3)(u_0) + [(1/2!)u_1^2 (1/2!)u_2^2 + (1/3!)u_1^3 u_3] f^(4)(u_0) + (1/4!)u_1^4 u_2 f^(5)(u_0) + (1/6!)u_1^6 f^(6)(u_0)
A_7 = u_7 f^(1)(u_0) + [u_3 u_4 + u_2 u_5 + u_1 u_6] f^(2)(u_0) + [(1/2!)u_2^2 u_3 + u_1 (1/2!)u_3^2 + u_1 u_2 u_4 + (1/2!)u_1^2 u_5] f^(3)(u_0) + [u_1 (1/3!)u_2^3 + (1/2!)u_1^2 u_2 u_3 + (1/3!)u_1^3 u_4] f^(4)(u_0) + [(1/3!)u_1^3 (1/2!)u_2^2 + (1/4!)u_1^4 u_3] f^(5)(u_0) + (1/5!)u_1^5 u_2 f^(6)(u_0) + (1/7!)u_1^7 f^(7)(u_0)
A_8 = u_8 f^(1)(u_0) + [(1/2!)u_4^2 + u_3 u_5 + u_2 u_6 + u_1 u_7] f^(2)(u_0) + [(1/2!)u_2^2 u_4 + u_2 (1/2!)u_3^2 + u_1 u_3 u_4 + u_1 u_2 u_5 + (1/2!)u_1^2 u_6] f^(3)(u_0) + [(1/4!)u_2^4 + u_1 (1/2!)u_2^2 u_3 + (1/2!)u_1^2 (1/2!)u_3^2 + (1/2!)u_1^2 u_2 u_4 + (1/3!)u_1^3 u_5] f^(4)(u_0) + [(1/2!)u_1^2 (1/3!)u_2^3 + (1/3!)u_1^3 u_2 u_3 + (1/4!)u_1^4 u_4] f^(5)(u_0) + [(1/4!)u_1^4 (1/2!)u_2^2 + (1/5!)u_1^5 u_3] f^(6)(u_0) + (1/6!)u_1^6 u_2 f^(7)(u_0) + (1/8!)u_1^8 f^(8)(u_0)
A_9 = u_9 f^(1)(u_0) + [u_4 u_5 + u_3 u_6 + u_2 u_7 + u_1 u_8] f^(2)(u_0) + [(1/3!)u_3^3 + u_2 u_3 u_4 + (1/2!)u_2^2 u_5 + u_1 (1/2!)u_4^2 + u_1 u_3 u_5 + u_1 u_2 u_6 + (1/2!)u_1^2 u_7] f^(3)(u_0) + [(1/3!)u_2^3 u_3 + u_1 u_2 (1/2!)u_3^2 + u_1 (1/2!)u_2^2 u_4 + (1/2!)u_1^2 u_3 u_4 + (1/2!)u_1^2 u_2 u_5 + (1/3!)u_1^3 u_6] f^(4)(u_0) + [u_1 (1/4!)u_2^4 + (1/2!)u_1^2 (1/2!)u_2^2 u_3 + (1/3!)u_1^3 (1/2!)u_3^2 + (1/3!)u_1^3 u_2 u_4 + (1/4!)u_1^4 u_5] f^(5)(u_0) + [(1/3!)u_1^3 (1/3!)u_2^3 + (1/4!)u_1^4 u_2 u_3 + (1/5!)u_1^5 u_4] f^(6)(u_0) + [(1/5!)u_1^5 (1/2!)u_2^2 + (1/6!)u_1^6 u_3] f^(7)(u_0) + (1/7!)u_1^7 u_2 f^(8)(u_0) + (1/9!)u_1^9 f^(9)(u_0)
A_10 = u_10 f^(1)(u_0) + [(1/2!)u_5^2 + u_4 u_6 + u_3 u_7 + u_2 u_8 + u_1 u_9] f^(2)(u_0) + [(1/2!)u_3^2 u_4 + u_2 (1/2!)u_4^2 + u_2 u_3 u_5 + (1/2!)u_2^2 u_6 + u_1 u_4 u_5 + u_1 u_3 u_6 + u_1 u_2 u_7 + (1/2!)u_1^2 u_8] f^(3)(u_0) + [(1/2!)u_2^2 (1/2!)u_3^2 + (1/3!)u_2^3 u_4 + u_1 (1/3!)u_3^3 + u_1 u_2 u_3 u_4 + u_1 (1/2!)u_2^2 u_5 + (1/2!)u_1^2 (1/2!)u_4^2 + (1/2!)u_1^2 u_3 u_5 + (1/2!)u_1^2 u_2 u_6 + (1/3!)u_1^3 u_7] f^(4)(u_0) + [(1/5!)u_2^5 + u_1 (1/3!)u_2^3 u_3 + (1/2!)u_1^2 u_2 (1/2!)u_3^2 + (1/2!)u_1^2 (1/2!)u_2^2 u_4 + (1/3!)u_1^3 u_3 u_4 + (1/3!)u_1^3 u_2 u_5 + (1/4!)u_1^4 u_6] f^(5)(u_0) + [(1/2!)u_1^2 (1/4!)u_2^4 + (1/3!)u_1^3 (1/2!)u_2^2 u_3 + (1/4!)u_1^4 (1/2!)u_3^2 + (1/4!)u_1^4 u_2 u_4 + (1/5!)u_1^5 u_5] f^(6)(u_0) + [(1/4!)u_1^4 (1/3!)u_2^3 + (1/5!)u_1^5 u_2 u_3 + (1/6!)u_1^6 u_4] f^(7)(u_0) + [(1/6!)u_1^6 (1/2!)u_2^2 + (1/7!)u_1^7 u_3] f^(8)(u_0) + (1/8!)u_1^8 u_2 f^(9)(u_0) + (1/10!)u_1^10 f^(10)(u_0)
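The list above can be regenerated mechanically from the standard generating definition A_n = (1/n!) d^n/dλ^n f(Σ u_k λ^k) evaluated at λ = 0 (a sketch assuming sympy is available):

```python
import sympy as sp

def adomian(f, N):
    """Return A_0, ..., A_{N-1} for the nonlinearity f(u),
    via A_n = (1/n!) d^n/dlam^n f(sum u_k lam^k) at lam = 0."""
    lam = sp.symbols('lam')
    u = sp.symbols('u0:%d' % N)
    f_of_u = f(sum(u[k] * lam**k for k in range(N)))
    return [sp.expand(sp.diff(f_of_u, lam, n).subs(lam, 0) / sp.factorial(n))
            for n in range(N)]

# check against the hand formulas for f(u) = u^3
A = adomian(lambda v: v**3, 4)
print(A[3])   # should match 3 u0^2 u3 + 6 u0 u1 u2 + u1^3
```

Replacing the lambda with any analytic f (e.g., sp.sin or sp.exp) reproduces the corresponding polynomials of the examples below.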
Notice that for f(u) = u^m, each individual term of A_n is the product of m factors; the sum of the exponents in each term is m (or 5 in the case f(u) = u^5), and the sum of the subscripts is n. The second term of A_4, as an example, is 20u_0^3 u_1 u_3, and the sum of its subscripts is 4. A very convenient check on the numerical coefficients in each term is the following: each coefficient is m! divided by the product of the factorials of the multiplicities of the repeated factors. Thus, the second term of A_4 (for u^5) has the coefficient 5!/(3!)(1!)(1!) = 20, and the term u_0^2 u_1^2 u_2 has the coefficient 5!/(2!)(2!)(1!) = 30. Continuing with the A_n for u^3, we have
A_9 = u_3^3 + 3u_1^2 u_7 + 3u_1 u_4^2 + 3u_2^2 u_5 + 3u_0^2 u_9 + 6u_0 u_1 u_8 + 6u_0 u_2 u_7 + 6u_0 u_3 u_6 + 6u_0 u_4 u_5 + 6u_1 u_2 u_6 + 6u_1 u_3 u_5 + 6u_2 u_3 u_4
A_10 = 3u_0^2 u_10 + 3u_1^2 u_8 + 3u_2^2 u_6 + 3u_3^2 u_4 + 3u_2 u_4^2 + 3u_0 u_5^2 + 6u_0 u_1 u_9 + 6u_0 u_2 u_8 + 6u_0 u_3 u_7 + 6u_0 u_4 u_6 + 6u_1 u_2 u_7 + 6u_1 u_3 u_6 + 6u_1 u_4 u_5 + 6u_2 u_3 u_5
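These two expressions can be checked against the same generating definition: for f(u) = u^3 the parametrized sum can simply be cubed and the coefficient of λ^9 or λ^10 read off (a sketch assuming sympy is available):

```python
import sympy as sp

lam = sp.symbols('lam')
u = sp.symbols('u0:11')
# f(u) = u^3 with the parametrization u = sum u_k lam^k
cube = sp.expand(sum(u[k] * lam**k for k in range(11))**3)
A9 = cube.coeff(lam, 9)    # A_9 is the coefficient of lam^9
A10 = cube.coeff(lam, 10)  # A_10 is the coefficient of lam^10
print(A9)
```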
EXAMPLE: Nθ = sin θ.

A_0 = sin θ_0
A_1 = θ_1 cos θ_0
A_2 = θ_2 cos θ_0 − (θ_1^2/2!) sin θ_0
A_3 = θ_3 cos θ_0 − θ_1 θ_2 sin θ_0 − (θ_1^3/3!) cos θ_0
EXAMPLE: f(u) = u^(−m).

A_0 = u_0^(−m)
A_1 = −m u_0^(−(m+1)) u_1
A_2 = +(1/2!)m(m+1) u_0^(−(m+2)) u_1^2 − m u_0^(−(m+1)) u_2
A_3 = −(1/3!)m(m+1)(m+2) u_0^(−(m+3)) u_1^3 + m(m+1) u_0^(−(m+2)) u_1 u_2 − m u_0^(−(m+1)) u_3

EXAMPLE: f(u) = u^γ where γ is a decimal number. Then A_0 = u_0^γ, and the A_n follow from the same general forms with f^(k)(u_0) = γ(γ−1)···(γ−k+1) u_0^(γ−k).
EXAMPLE: Consider the linear (deterministic) ordinary differential equation d^2u/dx^2 − kx^p u = g with u(1) = u(−1) = 0. Write L = d^2/dx^2 so that Lu = g + kx^p u. Operating with L^(-1), we have L^(-1)Lu = L^(-1)g + L^(-1)kx^p u. Then

u = c_1 + c_2 x + L^(-1)g + L^(-1)kx^p u

Let u = Σ_{n=0}^∞ u_n with u_0 = c_1 + c_2 x + gx^2/2. Then u_{m+1} = L^(-1)kx^p u_m for m ≥ 0. Thus

u = c_1 φ_1(x) + c_2 φ_2(x) + (g/2)φ_3(x)

where, for example, the component multiplying c_1 is

φ_1(x) = Σ_{m=0}^∞ k^m x^(m(p+2)) / Π_{j=1}^{m} [j(p+2) − 1][j(p+2)]

Since u(1) = u(−1) = 0, we have two linear equations; hence c_1 and c_2 are determined.

Suppose that in the above example we let k = 40, p = 1, g = 2. Thus we consider the equation d^2u/dx^2 − 40xu = 2 with u(−1) = u(1) = 0. (Other examples appear in the literature.) This is the one-dimensional case of the elliptic equation ∇^2u = g(x,y,z) + k(x,y,z)u arising in problems of physics and engineering. Here L = d^2/dx^2 and we have Lu = 2 + 40xu. This is a relatively stiff case because of the large coefficient of u and the non-zero forcing function, which yields an additional Airy-like function. Operating with L^(-1) yields u = A + Bx + L^(-1)(2) + L^(-1)(40xu). Let u_0 = A + Bx + L^(-1)(2) = A + Bx + x^2 and let u = Σ_{n=0}^∞ u_n with the components to be determined so that the sum is u. We identify u_{m+1} = L^(-1)(40xu_m). Then all components can be determined. An n-term approximant φ_n = Σ_{i=0}^{n−1} u_i with n = 12 for x = 0.2 is given by
−0.135639; for x = 0.4 it is −0.113969; for x = 0.6, −0.083321; for x = 0.8, −0.050944; and for x = 1.0 it is, of course, zero. These easily obtained results are correct to seven digits. We see that a better solution is obtained, and much more easily, than by variational methods. The solution is found just as easily for nonlinear versions without linearization.
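These values are easy to reproduce. The sketch below (assuming sympy is available; A and B are the integration constants carried symbolically, as in the text) builds the 12-term approximant and imposes u(−1) = u(1) = 0 in a single evaluation:

```python
import sympy as sp

x, A, B = sp.symbols('x A B')

def L_inv(f):
    # two-fold indefinite integration; the integration constants live in A, B
    return sp.integrate(sp.integrate(f, x), x)

# d^2u/dx^2 - 40 x u = 2 with u(-1) = u(1) = 0
comps = [A + B*x + x**2]               # u_0 = A + Bx + L^{-1}(2)
for _ in range(11):                    # u_{m+1} = L^{-1}(40 x u_m): 12 components
    comps.append(L_inv(40 * x * comps[-1]))

phi12 = sp.expand(sum(comps))
sol = sp.solve([phi12.subs(x, 1), phi12.subs(x, -1)], [A, B])
u12 = phi12.subs(sol)
print(float(u12.subs(x, sp.Rational(1, 5))))   # approx -0.135639 per the text
```

With exact rational arithmetic throughout, the boundary conditions are satisfied exactly and the interior values stabilize to the figures quoted above.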
ANALYTIC
SIMULANTS OF DECOMPOSITION SOLUTIONS:
We now introduce the convenient concept of "simulants" of solutions obtained by decomposition. The m-term "approximant" φ_m to the solution u, indicated by
φ_m[u], will mean m terms of the convergent series Σ_{n=0}^∞ u_n which represents
u in the decomposition method. If we have an equation Γu = g(t), where Γ is a general differential operator such as, for example,
and we write g(t) = Σ_{n=0}^∞ g_n t^n but only use m terms of the series, we have the m-term approximant

φ_m[g] = Σ_{n=0}^{m−1} g_n t^n
The corresponding solution of the equation is the simulant of the solution u; thus

Γσ_m[u] = φ_m[g]
Analogous to the limit as m → ∞ of φ_m[g] = g, the limit as m → ∞ of σ_m[u] = u. Possible stopping rules arise in computerized calculation when the last computed simulant σ_{m+1}[u] corresponds to σ_m[u] to the number of decimal places of interest to us. We can also conceive of using the entire series for g, for example, writing the sum of the infinite series but using a sequence of approximants to parameters α, β, ... in Γ and a corresponding sequence of simulants σ_m[u] parametrized by φ_m[α] or φ_m[β]. In solving a partial differential equation by decomposition, we may develop a sequence of simulants σ_m[u] for the solution u by concurrently improving the level of approximation of the coefficients, or the given conditions, or the input functions. For example, in
L,o, [u]- R , O[u] ~ = 4. [g] where g(t.x) =
CLoCm=a gt.mt'xrn 0
we can
compute each am(u) for given approximations of g, and of the initial conditions, or finally of coefficients such as
We can also use the concept of simulants with asymptotic decomposition, which is discussed in [4]. Consider the equation

d^2u/dx^2 + u = g(x) = Σ_{n=0}^∞ g_n x^n

with u(0) = c_1 and u′(0) = c_2.
By the asymptotic decomposition technique [5], the equation is written as u = g − (d^2/dx^2)u. Then u_0 = g(x) = Σ_{n=0}^∞ g_n x^n and u_m = −(d^2/dx^2)u_{m−1} for m ≥ 1. We use an approximant of g, or φ_m[g].
Then the simulant, or analytic simulant, of u is the solution of

d^2σ_m[u]/dx^2 + σ_m[u] = φ_m[g]

where σ_m[u] is a series which we write as Σ_{n=0}^{m−1} u_n.
It is straightforward enough that if we don't use all of Σ g_n x^n, we have only φ_m[u], which approaches u in the limit as m → ∞ in φ_m = Σ_{i=0}^{m−1} u_i. In the same equation u = g(x) − (d^2/dx^2)u we can write u_n = −(d^2/dx^2)u_{n−1}, so that

u = Σ_{n=0}^∞ (−1)^n (d^{2n}/dx^{2n}) g

which is the solution for asymptotic decomposition. We make two observations:
1) The method works very well for nonlinear equations, where we solve for the nonlinear term and express it in the above polynomials. 2) Ordinary differential equations with singular coefficients offer no special difficulty with asymptotic decomposition.
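For a polynomial input g the asymptotic scheme terminates after finitely many terms. A minimal sketch (assuming sympy is available; the input g = x^2 is an illustrative choice, and the homogeneous part matching the given conditions is omitted): for d^2u/dx^2 + u = g, write u = g − (d^2/dx^2)u and iterate.

```python
import sympy as sp

x = sp.symbols('x')
g = x**2                       # illustrative polynomial input

u_m = g                        # u_0 = g(x)
total = u_m
while u_m != 0:
    u_m = -sp.diff(u_m, x, 2)  # u_m = -(d^2/dx^2) u_{m-1}
    total += u_m

print(total)                                           # x**2 - 2
print(sp.simplify(sp.diff(total, x, 2) + total - g))   # residual: 0
```

Two derivatives annihilate the quadratic, so the series stops after u_1 = −2 and the sum x^2 − 2 satisfies the equation exactly.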
REFERENCES
1. Y. Cherruault, Convergence of Adomian's Method, Kybernetes, 18, (31-38) (1989).
2. L. Gabet, Esquisse d'une théorie décompositionnelle, Modélisation Mathématique et Analyse Numérique, in publication.
3. G. Adomian and R. Rach, Smooth Polynomial Approximations of Piecewise-differentiable Functions, Appl. Math. Lett., 2, (377-379) (1989).
4. G. Adomian, Nonlinear Stochastic Operator Equations, Academic Press (1986).
5. G. Adomian, A Review of the Decomposition Method and Some Recent Results for Nonlinear Equations, Comp. and Math. with Applic., 21, (101-127) (1991).
SUGGESTED READING
1. G. Adomian, R. E. Meyers, and R. Rach, An Efficient Methodology for the Physical Sciences, Kybernetes, 20, (24-34) (1991).
2. G. Adomian, Nonlinear Stochastic Differential Equations, J. Math. Anal. and Applic., 55, (441-451) (1976).
3. G. Adomian, Solution of General Linear and Nonlinear Stochastic Systems, in Modern Trends in Cybernetics and Systems, J. Rose (ed.), (203-214) (1977).
4. G. Adomian and R. Rach, Linear and Nonlinear Schrödinger Equations, Found. of Physics, 21, (983-991) (1991).
5. N. Bellomo and R. Riganti, Nonlinear Stochastic Systems in Physics and Mechanics, World Scientific (1987).
6. N. Bellomo and R. Monaco, A Comparison between Adomian's Decomposition Method and Perturbation Techniques for Nonlinear Random Differential Equations, J. Math. Anal. and Applic., 110, (1985).
7. N. Bellomo, R. Cafaro, and G. Rizzi, On the Mathematical Modelling of Physical Systems by Ordinary Differential Stochastic Equations, Math. and Comput. in Simul., 4, (361-367) (1984).
8. N. Bellomo and D. Sarafyan, On Adomian's Decomposition Method and Some Comparisons with Picard's Iterative Scheme, J. Math. Anal. and Applic., 123, (389-400) (1987).
9. R. Rach and A. Baghdasarian, On Approximate Solution of a Nonlinear Differential Equation, Appl. Math. Lett., 3, (101-102) (1990).
10. R. Rach, On the Adomian Method and Comparisons with Picard's Method, J. Math. Anal. and Applic., 10, (139-159) (1984).
11. R. Rach, A Convenient Computational Form for the A_n Polynomials, J. Math. Anal. and Applic., 102, (415-419) (1984).
12. A. K. Sen, An Application of the Adomian Decomposition Method to the Transient Behavior of a Model Biochemical Reaction, J. Math. Anal. and Applic., 131, (232-245) (1988).
13. Y. Yang, Convergence of the Adomian Method and an Algorithm for Adomian Polynomials, submitted for publication.
14. K. Abbaoui and Y. Cherruault, Convergence of Adomian's Method Applied to Differential Equations, Comput. & Math. with Applic., to appear.
15. B. K. Datta, Introduction to Partial Differential Equations, New Central Book Agency Ltd., Calcutta (1993).
THE DECOMPOSITION METHOD IN SEVERAL DIMENSIONS
Mathematical physics deals with physical phenomena by modelling the phenomena of interest, generally in the form of nonlinear partial differential equations. It then requires an effective analysis of the mathematical model, such that the processes of modelling and of analysis yield results in accordance with observation and experiment. By this, we mean that the mathematical solution must conform to physical reality, i.e., to the real world of physics. Therefore, we must be able to solve differential equations, in space and time, which may be nonlinear and often stochastic as well, without the concessions to tractability which have been customary both in graduate training and in research in physics and mathematics. Nonlinear partial differential equations are very difficult to solve analytically, so methods such as linearization, statistical linearization, perturbation, quasi-monochromatic approximations, white noise representation of actual stochastic processes, etc. have been customary resorts. Exact solutions in closed form are not a necessity. In fact, for the world of physics only a sufficiently accurate solution matters. All modelling is approximation, so finding an improved general method of analysis of models also contributes to allowing development of more sophisticated modelling [1, 2]. Our objective in this chapter is to see how to use the decomposition method for partial differential equations. (In the next chapter, we will also introduce double decomposition, which offers computational advantages for nonlinear ordinary differential equations and also for nonlinear partial differential equations.) These methods are applicable in problems of interest to theoretical physicists, applied mathematicians, engineers, and other disciplines and suggest developments in pure mathematics. We now consider some generalizations for partial differential equations.
Just as we solved for the linear differential operator term Lu and then operated on both sides with L^(-1), we can now do the same for the highest-ordered linear operator terms in all independent variables. If we have differentiations, for example, with respect to x and t, represented by L_x u and L_t u, we obtain equations for either of these. We can operate on each with the appropriate inverse. We begin by considering some illuminating examples. Consider the example ∂u/∂t + ∂u/∂x + f(u) = 0 with u(t = 0) = 1/2x and u(x = 0) = −1/t. For simplicity assume f(u) = u^2. By decomposition,
writing L_t u = −(∂/∂x)u − u^2, then writing u = Σ_{n=0}^∞ u_n and representing u^2 by Σ_{n=0}^∞ A_n derived for the specific function, we have

Σ_{n=0}^∞ u_n = u(x,0) − L_t^(-1)(∂/∂x)Σ_{n=0}^∞ u_n − L_t^(-1)Σ_{n=0}^∞ A_n
Consequently, u_0 = u(x,0) = 1/2x and u_{n+1} = −L_t^(-1)(∂/∂x)u_n − L_t^(-1)A_n.
Substituting the A_n{u^2} and summing, we have

u = (1/2x)[1 + t/2x + (t/2x)^2 + ···]

which converges if |t/2x| < 1 to u = 1/(2x − t). If we solve for L_x u, we have
or u = −(1/t)[1 + 2x/t + (2x/t)^2 + ···], which converges near the initial condition if |2x/t| < 1 to u = 1/(2x − t). Both operator equations yield distinct series which converge to the same function with different convergence regions and different convergence rates. It is interesting to observe that convergence can be changed by the choice of the operator equation. In earlier work, the solutions (called "partial solutions") of each operator equation were combined to ensure use of all the given conditions. We see that partial differential equations are solvable by looking at each dimension separately, and thus our assertion holds about the connection between the fields of ordinary and partial differential equations. There is, of course, much more to be said about this, and we must leave further discussion to the growing literature, perhaps beginning with the introduction represented by the given references.

Consider the equation u_tt − u_xx + (∂/∂t)f(u) = 0 where f(u(x,t)) is an analytic function. Let L_t represent ∂^2/∂t^2 and let L_x represent ∂^2/∂x^2. We now write the equation in the form
Using the decomposition method, we can solve for either linear term; thus,
Operating with the inverse operators, we have
where the Φ_t, Φ_x are evaluated from the given initial/boundary conditions. Generally, either can be used to get a solution, so solving a partial differential equation is analogous to solving an ordinary differential equation, with the remaining operator in each equation assuming the role of the remainder operator R in an ordinary differential equation. The (∂/∂t)f(u) is a nonlinearity handled as before. The solution depends on the explicit f(u) and the specified conditions on the wave equation. To illustrate the procedure, we first consider the case with f(u) = 0 in order to see the basic procedure most clearly. We have, therefore, u_tt − u_xx = 0, and we will take as given conditions u(0,x) = 0, u(t,0) = 0, u(π/2,x) = sin x, u(t,π/2) = sin t. Let L_t = ∂^2/∂t^2 and L_x = ∂^2/∂x^2 and write the equation as L_t u = L_x u. Following our procedure, we can write either u = c_1 k_1(x) + c_2 k_2(x)t + L_t^(-1)L_x u or u = c_3 k_3(t) + c_4 k_4(t)x + L_x^(-1)L_t u.
Define Φ_t = c_1 k_1(x) + c_2 k_2(x)t and Φ_x = c_3 k_3(t) + c_4 k_4(t)x to rewrite the above as u = Φ_t + L_t^(-1)L_x u and u = Φ_x + L_x^(-1)L_t u. The first approximant φ_1 is u_0 = Φ_t. The two-term approximant φ_2 is
u_0 + u_1, where u_1 = L_t^(-1)L_x u_0. Applying the t conditions u(0,x) = 0 and u(π/2,x) = sin x to the one-term approximant φ_1 = u_0 = c_1 k_1(x) + c_2 k_2(x)t, we have c_1 k_1(x) = 0 and c_2 k_2(x)π/2 = sin x, or c_2 = 2/π and k_2(x) = sin x. The next term is u_1 = L_t^(-1)L_x u_0 = L_t^(-1)L_x[c_2 t sin x], and we continue in the same manner to obtain u_2, u_3, ..., u_n for some n. Clearly, for any n,

u_n = (L_t^(-1)L_x)^n u_0 = c_2 (sin x)(−1)^n t^(2n+1)/(2n + 1)!
If we write φ_m for the m-term approximant, we have for the two cases:

φ_m = c_2 sin x Σ_{n=0}^{m−1} (−1)^n t^(2n+1)/(2n + 1)!

Since φ_m(π/2, x) = sin x and c_1 k_1(x) = 0,

c_2 sin x Σ_{n=0}^{m−1} (−1)^n (π/2)^(2n+1)/(2n + 1)! = sin x
As m → ∞, c_2 → 1. The sum approaches sin t in the limit. Hence our approximation becomes the exact solution u = sin x sin t. (The same result can be found from the other operator equation.) Thus, in this case, the series is summed. In general, it is not, and we get a series with a convergence region in which numerical solutions stabilize quickly to a solution within the range of accuracy needed. Adding the nonlinear term does not change this; the A_n converge rapidly and the procedure amounts to a generalized Taylor expansion for the solution about the function u_0 rather than about a point. We call the solution an approximation because it is usually not a closed-form solution; however, we point out that all modelling is approximation, and a closed form which
necessarily changes the physical problem by employing linearization is not more desirable and is, in fact, generally less desirable in that the problem has been changed to a different one. Recent work by Y. Cherruault and L. Gabet on the mathematical framework has provided the rigorous basis for the decomposition method. The method is clearly useful to physicists and other disciplines in which real problems must be mathematically modelled and solved. The method is also adaptable to systems of nonlinear, stochastic, or coupled boundary conditions (as shown in the author's earlier books). The given conditions must be consistent with the physical problem being solved. Consider the same example u_tt − u_xx = 0 with 0 ≤ x ≤ π and t ≥ 0, assuming now conditions which yield an interesting special case for the methodology:
u(x,0) = sin x
u(0,t) = 0
u(π,t) = 0
u_t(x,0) = 0
Decomposition results in the equations
The one-term approximant φ_1 = u_0 in the first equation is u_0 = c_1 k_1(t) + c_2 k_2(t)x. Satisfying the conditions on x, we have c_1 k_1(t) = 0 and c_2 k_2(t) = 0. Hence u_0 = 0. The first equation clearly does not contribute in this special case; we need only the second. Thus, u_0 = c_3 k_3(x) + c_4 k_4(x)t. Applying conditions on
t, we have c_3 k_3(x) = sin x and c_4 k_4(x)t = 0. Hence

u_0 = sin x

and

u = (1 − t^2/2! + t^4/4! − ···) sin x = sin x cos t
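This computation can be reproduced symbolically (a sketch, assuming sympy is available):

```python
import sympy as sp

x, t = sp.symbols('x t')

u_n = sp.sin(x)                  # u_0 = u(x,0) = sin x
total = u_n
for n in range(6):
    # u_{n+1} = L_t^{-1} L_x u_n : two-fold integration of u_xx from 0 to t
    u_n = sp.integrate(sp.integrate(sp.diff(u_n, x, 2), (t, 0, t)), (t, 0, t))
    total += u_n

# the partial sum reproduces the Taylor terms of sin(x) cos(t)
print(sp.expand(total))
```

Each component supplies the next term (−1)^n t^(2n)/(2n)! of cos t, so the partial sums converge to sin x cos t.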
We are dealing with a methodology for solution of physical systems which have a solution, and we seek to find this solution without changing the problem to make it tractable. The conditions must be known; otherwise, the model is not complete. If the solution is known, but the initial conditions are not, they can be found by mathematical exploration and consequent verification. Finally, we consider the general form
We now let u = Σ_{n=0}^∞ u_n and f(u) = Σ_{n=0}^∞ A_n, and note this is equivalent to letting u as well as f(u) be determined by the A_n generated for the specific f(u). If f(u) = u, we obtain A_0 = u_0, A_1 = u_1, ..., i.e., the A_n reduce to the components u_n.
To go further we must have the conditions on u. Suppose we choose
Satisfying the conditions, we have c_1 k_1(t) = c_2 k_2(t) = 0, or u_0 = 0. Therefore the equation involving L_t^(-1) does not contribute. In the remaining equation we get c_3 k_3(x) = f(x) and c_4 k_4(x)t = 0. Hence,
Thus, components of u are determined and we can write φ_m = Σ_{n=0}^{m−1} u_n as an m-term approximant converging to u as m → ∞. To complete the problem, f(u) must be explicitly given so that we can generate the A_n. We see that the solution depends both on the specified f(x) and on the given conditions.
RESULTS AND POTENTIAL APPLICATIONS: The decomposition method provides a single method for linear and nonlinear multidimensional problems and has now been applied to a wide variety of problems in many disciplines. One application of the decomposition method is in the development of numerical methods for the solution of nonlinear partial differential equations. Decomposition permits us to have an essentially unique numerical method tailored individually for each problem. In a preliminary test of this notion, a numerical decomposition was performed on Burgers' equation. It was found that the same degree of accuracy could be achieved in two percent of the time required to compute a solution using Runge-Kutta procedures. The reasons for this are discussed in [2].

EXAMPLE: Consider the dissipative wave equation

u_tt − u_xx + (∂/∂t)(u^2) = g = −2 sin^2 x sin t cos t
with specified conditions u(0,t) = u(π,t) = 0 and u(x,0) = sin x, u_t(x,0) = 0. We have u_0 = k_1(x) + k_2(x)t + L_t^(-1)g from the L_t u equation and use of the two-fold definite integration L_t^(-1), and a corresponding expression from the L_x u equation and application of the two-fold indefinite integration L_x^(-1). Either solution, which we have called "a partial solution," is already correct; they are equal when the spatial boundary conditions depend on t and the initial conditions depend on x. When conditions on one variable are independent of the other variable, the partial solutions are asymptotically equal. From the specified conditions u(x,0) = sin x and u_t(x,0) = 0,
k_1(x) = sin x
k_2(x) = 0
so that

u_0 = sin x − (sin^2 x)(t/2 − (1/4) sin 2t)

The n-term approximant φ_n is
where the A_n are the polynomials for u^2: A_0 = u_0^2, A_1 = 2u_0 u_1, .... The contribution of the term L_t^(-1)g to u_0 results in self-canceling terms, or "noise" terms. Hence, rather than calculating exactly, we observe that if we use only u_0 = sin x, we get u_1 = (−t^2/2!)sin x, u_2 = (t^4/4!)sin x, etc., which appears to be u = cos t sin x. Thus the solution is u = cos t sin x + other terms. We write u = cos t sin x + N and substitute in the equation for u and find that N = 0, i.e., the neglected terms are self-canceling and u = cos t sin x is the correct solution. It is often useful to look for patterns to minimize or avoid unnecessary work. To summarize the procedure, we can write the two operator equations
Applying the operator L_t^(-1) to the first equation or L_x^(-1) to the second,
Substituting u = Σ_{n=0}^∞ u_n and f(u) = Σ_{n=0}^∞ A_n, where the A_n are defined for f(u), we have

u_0 = k_1(x) + k_2(x)t + L_t^(-1)g
u_{n+1} = L_t^(-1)L_x u_n − L_t^(-1)(∂/∂t)A_n
where, if both make a contribution, either can be solved to get an n-term approximant φ_n satisfying the conditions to get the approximant to u. The partial solution, as noted earlier, is sufficient. The integration "constants" are evaluated from the specific conditions, i.e., each φ_n satisfies the appropriate conditions for any n. The following example illustrates avoidance of often difficult integrations for solution to sufficient accuracy in a particular problem. The exact solution, which will then be used for a solution to two or three decimal places in physical problems, is often unnecessary. We can sometimes guess the sum of the decomposition series in a closed form and sometimes determine it by Euler, Padé, Shanks, or modified Shanks acceleration techniques. However, whether we see this final form or not, the series is the solution we need.

EXAMPLE: u_tt − u_xx + (∂/∂t)f(u) = g(x,t)
Let g = 2e^(-t) sin x − 2e^(-2t) sin x cos x and f(u) = uu_x. The initial/boundary conditions are: u(x,0) = sin x, u_t(x,0) = −sin x, u(0,t) = u(π,t) = 0. Let L_t = ∂^2/∂t^2 and write the equation as
(By the partial solutions technique, we need only the one operator equation; the ∂^2/∂x^2 is treated like the R operator in an ordinary differential equation.) Operating with L_t^(-1), defined as the two-fold integration from 0 to t, and writing u = Σ_{n=0}^∞ u_n and f(u) = Σ_{n=0}^∞ A_n, where the A_n are generated for f(u) = uu_x, we obtain the decomposition components

u_0 = u(x,0) + t u_t(x,0) + L_t^(-1)g
u_{m+1} = −L_t^(-1)(∂/∂t)A_m + L_t^(-1)(∂^2/∂x^2)u_m

for m ≥ 0. Then, since Σ_{n=0}^∞ u_n is a (rapidly) converging series, the partial sum φ_n = Σ_{i=0}^{n−1} u_i is our approximant to the solution. We can calculate the terms u_0, u_1, ..., u_n as given. However, since we calculate approximants, we can simplify the integrations by approximating g by a few terms of its double Maclaurin series representation. Thus we will drop terms involving t^3 and x^3 and higher terms. Then
sin x ≅ x and cos x ≅ 1 − x^2/2, so that g is replaced by its low-order polynomial approximation. Then L_t^(-1)g ≅ 0 to the assumed approximation. Hence

u_0 ≅ sin x − t sin x

Thus the two-term approximation is φ_2 = u_0 + u_1.
Although we can calculate more terms using u_{m+1} for m ≥ 0, substitution verifies that u = e^(-t) sin x is already the correct solution. If we need to recognize the exact solution, we can carry the series for g to a higher approximation to see the clear convergence to e^(-t) sin x. Once we guess the solution may be e^(-t) sin x, we can verify it by substitution, or substitute e^(-t) sin x + N and show that N = 0.
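The final verification by substitution can also be done symbolically (a sketch, assuming sympy is available):

```python
import sympy as sp

x, t = sp.symbols('x t')

u = sp.exp(-t) * sp.sin(x)
g = 2*sp.exp(-t)*sp.sin(x) - 2*sp.exp(-2*t)*sp.sin(x)*sp.cos(x)

# u_tt - u_xx + d/dt (u u_x) - g should vanish identically
residual = sp.diff(u, t, 2) - sp.diff(u, x, 2) + sp.diff(u * sp.diff(u, x), t) - g
print(sp.simplify(residual))   # 0
```

The residual vanishes identically, confirming e^(-t) sin x as the exact solution.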
EQUALITY OF PARTIAL SOLUTIONS: In solving linear or nonlinear partial differential equations by decomposition, we can solve separately for the term involving either of the highest-ordered linear operators* and apply the appropriate inverse operator to each equation. Each equation is solved for an n-term approximant. The solutions of the individual equations (e.g., for L_x u, L_y u, L_z u, or L_t u in a four-dimensional problem) have been called "partial solutions" in earlier work. The reason was that they were to be combined into a general solution using all the conditions. However, it has now been shown [4] that in the general case the partial solutions are equal and each is the solution. This permits a simpler solution which is not essentially different from solution of an ordinary differential equation. The other operators, if we solve for L_x u, for example, simply become the R operator in Lu + Ru + Nu = g. The procedure is now a single procedure for linear or nonlinear, ordinary or partial differential equations. (When the u_0 term in one operator equation is zero, that equation does not contribute to the solution and the complete solution is obtained from the remaining equation or equations.) We will show that the partial solutions from each equation lead to the same solution (and explain why the one partial solution above does not contribute to the solution). Consider the equation L_x u + L_y u + Nu = 0 with L_x = ∂^2/∂x^2 and L_y = ∂^2/∂y^2, although no limitation is implied, and Nu an analytic term accurately representable by the A_n polynomials. We choose conditions:
u(a_1,y) = α_1(y)    u(x,b_1) = β_1(x)
u(a_2,y) = α_2(y)    u(x,b_2) = β_2(x)
Solving for the linear operator terms
Using the "x-solution," we have
* Purely nonlinear equations, or equations in which the highest-ordered operator is nonlinear, require further consideration [3].
where L_x^(-1) is an indefinite two-fold integration and

L_x^(-1)(·) = ∫∫(·) dx dx + Φ_x

where Φ_x = ξ_0(y) + xξ_1(y). The ξ_0(y) and ξ_1(y) are matching coefficients to the boundary conditions; hence L_x^(-1)L_x u = u − Φ_x, where ξ_0 and ξ_1 are the integration "constants." We now have
We let u = Σ_{n=0}^∞ u_n and Nu = Σ_{n=0}^∞ A_n, where the A_n are defined specifically for Nu. The u_0 term is normally taken as Φ_x (or Φ_x + L_x^(-1)g when there is an inhomogeneous term). We can take a somewhat different approach (double decomposition), discussed in Chapter 4, and decompose Φ_x as well. In that case, we write
Φ_x = Σ_{m=0}^∞ Φ_{x,m}

Then, instead of the following components being given by

u_{m+1} = −L_x^(-1)L_y u_m − L_x^(-1)A_m for m ≥ 0

we would have

u_1 = Φ_{x,1} − L_x^(-1)L_y u_0 − L_x^(-1)A_0
u_2 = Φ_{x,2} − L_x^(-1)L_y u_1 − L_x^(-1)A_1
u_3 = Φ_{x,3} − L_x^(-1)L_y u_2 − L_x^(-1)A_2
The boundary conditions φ_{m+1}(a_1,y) = α_1(y) and φ_{m+1}(a_2,y) = α_2(y) determine ξ_{0,m} and ξ_{1,m}. The approximants are then formed as before, and the limit is the solution to the equation in x. We can proceed in the same manner with the y equation; however, we return to the ordinary or regular decomposition for clarity. The additional decomposition is of no advantage for initial-value problems but speeds up convergence in boundary-value problems by giving us results for u_0, u_1, ... that are obtained by correcting the constants of integration as we proceed, so that we can then use a corrected initial value without more matching to conditions. From the x partial solution,
From the y partial solution
The mth approximant φ_m = Σ_{n=0}^{m−1} u_n in each case above. The integration constants are determined by satisfying the given conditions by solution of the matrix equations to determine ξ_0, ξ_1, η_0, η_1. The limits as m → ∞ of φ_m for the x equation and the y equation are, respectively, the x partial solution and the y partial solution, and they are identically equal; either is the actual solution which satisfies the differential equation uniquely for the given initial/boundary conditions.
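This equality can be checked on the first example of the chapter, u_t + u_x + u^2 = 0 with solution u = 1/(2x − t): the t-operator and x-operator equations generate different series, each converging to the same function in its own region (a sketch, assuming sympy is available):

```python
import sympy as sp

x, t = sp.symbols('x t')
N = 9

# t-operator equation: L_t u = -u_x - u^2, with u(x,0) = 1/(2x)
ut = [1 / (2*x)]
for n in range(N - 1):
    A_n = sum(ut[i] * ut[n - i] for i in range(n + 1))   # A_n for u^2
    ut.append(sp.cancel(sp.integrate(-sp.diff(ut[n], x) - A_n, (t, 0, t))))

# x-operator equation: L_x u = -u_t - u^2, with u(0,t) = -1/t
ux = [-1 / t]
for n in range(N - 1):
    A_n = sum(ux[i] * ux[n - i] for i in range(n + 1))
    ux.append(sp.cancel(sp.integrate(-sp.diff(ux[n], t) - A_n, (x, 0, x))))

exact = 1 / (2*x - t)
print(float(sum(ut).subs({x: 2, t: sp.Rational(1, 2)})),    # region |t/2x| < 1
      float(exact.subs({x: 2, t: sp.Rational(1, 2)})))
print(float(sum(ux).subs({x: sp.Rational(1, 10), t: 2})),   # region |2x/t| < 1
      float(exact.subs({x: sp.Rational(1, 10), t: 2})))
```

Each partial sum matches 1/(2x − t) inside its own convergence region, which is the content of the equality of partial solutions.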
REMARKS: Suppose we consider a partial differential equation whose solution is the surface u(x,t) in a Cartesian system. We write this in the form L_t u + L_x u + Ru + Nu = g. The intersection with the u,x plane is u(x,0) = f(x); as t increases from this initial value, the surface u is generated. Similarly, the intersection with the u,t plane is u(0,t) = g(t); as x increases, the surface is generated. The partial solutions represent these two possibilities, i.e., we can determine u either by starting from f(x) and using the t equation (L_t u = g − L_x u − Ru − Nu) or by starting from g(t) and using the x equation (L_x u = g − L_t u − Ru − Nu), with the appropriate inversions for each.
Consider, as an example, the simple heat flow equation u_t = u_xx, given that u(x,0) = sin(πx/ℓ) and u(0,t) = u(ℓ,t) = 0. The solution is

u = e^(−π²t/ℓ²) sin(πx/ℓ)
The equation in t is L_t u = L_x u. Applying the L_t^(-1) operator, we get

u = u(x,0) + L_t^(-1)L_x Σ_{n=0}^∞ u_n
u_0 = sin(πx/ℓ)
u_1 = L_t^(-1)L_x u_0 = −(π²t/ℓ²) sin(πx/ℓ)
u_2 = L_t^(-1)L_x u_1 = (1/2!)(π²t/ℓ²)² sin(πx/ℓ)
···
u = Σ_{n=0}^∞ u_n = e^(−π²t/ℓ²) sin(πx/ℓ)

which is the complete solution, usually obtained more easily than the textbook solutions of this problem. The x equation is L_x u = L_t u. Applying L_x^(-1),
We see u_0 = k_1(t) + xk_2(t) = 0, which means all following components must be zero, so this equation, as previously stated, makes no contribution. Here, the x conditions (boundary conditions) u(0,t) and u(ℓ,t) do not depend on t. Hence the partial solutions are asymptotically equal; both vanish as t → ∞.
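The t-operator computation above can be reproduced symbolically (a sketch, assuming sympy is available; here l stands for the interval length ℓ):

```python
import sympy as sp

x, t, l = sp.symbols('x t l', positive=True)

u_n = sp.sin(sp.pi * x / l)            # u_0 = u(x,0)
total = u_n
for n in range(8):
    # u_{n+1} = L_t^{-1} L_x u_n : integrate u_xx from 0 to t
    u_n = sp.integrate(sp.diff(u_n, x, 2), (t, 0, t))
    total += u_n

exact = sp.exp(-sp.pi**2 * t / l**2) * sp.sin(sp.pi * x / l)
print(sp.expand(total - sp.expand(exact.series(t, 0, 9).removeO())))   # 0
```

The partial sums are precisely the Taylor partial sums of e^(−π²t/ℓ²) sin(πx/ℓ) in t, as the text states.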
Use of the partial solutions technique, as compared with the author's earlier treatments of partial differential equations [4], leads to substantially decreased computation and minimization of extraneous noise [5]. Also, we note that the convergence region can be changed by the choice of the operator equation. Since the partial solutions are equal, we need solve only one operator equation. (Exceptions occur when the u_0 term is zero in one of the equations or the initial/boundary conditions for one operator equation do not involve the remaining variables.) The remaining highest-ordered linear differential operators can now be treated like the remainder operator R. Thus ordinary or partial differential equations are solved by a single method. The decision as to which operator equation to solve in a multidimensional problem is made
on the basis of the best known conditions and possibly also on the basis of the operator of lowest order to minimize integrations. To make the procedure as clear as possible, we consider first the case where Nu = 0, i.e., a linear partial differential equation in R², with L_x = ∂²/∂x² and L_y = ∂²/∂y² and with the boundary conditions specified by boundary-operator equations where
Solving for L_x u, we have L_x u = g − L_y u − Ru, and operating with L_x^(-1), we have
where Φ_x satisfies L_xΦ_x = 0. The inverse operator L_x^(-1) is a two-fold (indefinite) integration operator since L_x is a second-order differential operator. The "constants of integration" are added for each indefinite integration.** (This makes notation consistent with decomposition solution of initial-value problems, where for L_t = d/dt we define L_t^(-1)[·] = ∫₀ᵗ[·] dt, and for L_t = d²/dt² we have a two-fold definite integration from zero to t.) Now the decomposition u = Σ_{n=0}^∞ u_n yields
where we identify
u_0 = Φ_x + L_x^(-1)g
**For linear ordinary differential equations, but not partial differential equations, we can view L_x^(-1) as a pure two-fold integration operator not involving constants and simply add the Φ_x for a general solution.
as the initial term of the decomposition. Since L_x^(-1) is a two-fold integration, Φ_x = c_0(y) + xc_1(y) and u_0 = c_0(y) + xc_1(y) + L_x^(-1)g. Hence
Then, for m ≥ 0,

u_{m+1} = −L_x^(-1)L_y u_m − L_x^(-1)Ru_m
for components after u_0. Consequently, all components of the decomposition are identified and calculable. We can now form successive approximants φ_n = Σ_{m=0}^{n−1} u_m as n increases, which we match to the boundary conditions. Thus φ_1 = u_0, φ_2 = φ_1 + u_1, φ_3 = φ_2 + u_2, ... serve as approximate solutions of increasing accuracy as n → ∞ and must, of course, satisfy the boundary conditions. Beginning with φ_1 = u_0 = c_0(y) + xc_1(y) + L_x^(-1)g, we evaluate c_0 and c_1 from the boundary conditions.
Thus φ_1 is now determined. Since u_0 or φ_1 is now completely known, we form
u_1 = −L_x^{-1}L_y u_0 − L_x^{-1}Ru_0
Then
φ_2 = φ_1 + u_1
which must satisfy the boundary conditions. Continuing to an acceptable approximation φ_n, we must match the conditions at each value of n for a satisfactory solution, as decided either by substitution or by a stabilized numerical answer to a desired number of decimals. For the special case of a linear ordinary differential equation, we have the simpler alternative of using the unevaluated u_0 to get u_1, simply carrying along the constants in u_0 and continuing in this way to some φ_n. Thus, in this case, only one evaluation of the constants is necessary. For nonlinear cases or for partial differential equations, the simpler procedure is not generally possible. To make this clear, consider some examples:
Consider
u'' = 2 + 40xu,   u(−1) = u(1) = 0
Write Lu = 2 + 40xu with L = d²/dx², or
u = c_0 + c_1x + L^{-1}(2) + 40L^{-1}x Σ_{n=0}^∞ u_n
We can identify
u_0 = c_0 + c_1x + L^{-1}(2) = c_0 + c_1x + x²
Now instead of evaluating the constants at this stage, we write
u_{n+1} = 40L^{-1}xu_n
If, for example, φ_3 is sufficient,
φ_3 = u_0 + u_1 + u_2
with the constants still carried along. Imposing the boundary conditions at −1 and 1 on φ_3, we have φ_3(−1) = φ_3(1) = 0, determining c_0 and c_1 and therefore φ_3 in a single evaluation. (Carrying the computation a few terms further yielded seven-digit accuracy.) Another example is d²y/dx² + 2x dy/dx = 0 with y(0) = 0 and y(a) = 1. The solution is y(x) = (erf x)/(erf a), i.e.,
y = ∫_0^x e^{−t²} dt / ∫_0^a e^{−t²} dt
Write Ly = −2x(d/dx)y, or y = y_0 − 2L^{-1}x(d/dx)y with y_0 = c_0 + c_1x. If we satisfy the conditions with φ_1 = y_0, we have y_0 = x/a as our first approximant. If we continue to some φ_n and evaluate only then, we compute the components from y_{m+1} = −2L^{-1}x(d/dx)y_m. If we stop at φ_3 = y_0 + y_1 + y_2,
φ_3 = c_0 + c_1x − c_1x³/3 + c_1x⁵/10
and, now satisfying the conditions, we have c_0 = 0 and c_1 = 1/(a − a³/3 + a⁵/10), so that
φ_3 = (x − x³/3 + x⁵/10)/(a − a³/3 + a⁵/10)
which is (erf x)/(erf a) to this approximation and can, of course, be carried as far as we like. For linear ordinary differential equations, both procedures will work; i.e., we can use the evaluated u_0 to get u_1, add it to u_0 to get φ_2, then satisfy the conditions at the boundaries, or carry the constants along as in the examples. The last procedure is most convenient because of the single evaluation; the first is more general since it applies to nonlinear ordinary differential equations and linear or nonlinear partial differential equations as well [6,7].
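The carried-constants procedure for the second example can be sketched numerically. The code below is an illustrative sketch (not the author's code): polynomials are stored as coefficient dicts, the recursion y_{m+1} = −L^{-1}(2x y_m') is iterated with the constant c_1 factored out (c_0 = 0 is forced by y(0) = 0), and the single evaluation at x = a fixes the normalization.

```python
# Sketch of the carried-constants decomposition for y'' + 2x y' = 0,
# y(0) = 0, y(a) = 1 (illustrative, not the author's code).  Since y(0) = 0
# forces c0 = 0, only the series multiplying c1 is tracked; the single
# evaluation y(a) = 1 then fixes c1 = 1/S(a).
import math

def deriv(p):
    return {k - 1: k * c for k, c in p.items() if k > 0}

def iint2(p):
    """Two-fold indefinite integration; constants are carried separately."""
    once = {k + 1: c / (k + 1) for k, c in p.items()}
    return {k + 1: c / (k + 1) for k, c in once.items()}

def peval(p, x):
    return sum(c * x**k for k, c in p.items())

terms = [{1: 1.0}]                       # y0/c1 = x
for _ in range(6):                       # y_{m+1} = -L^{-1}(2x y_m')
    d = deriv(terms[-1])
    two_x_d = {k + 1: 2.0 * c for k, c in d.items()}
    terms.append({k: -c for k, c in iint2(two_x_d).items()})

S = {}                                   # S(x) = x - x^3/3 + x^5/10 - ...
for t in terms:
    for k, c in t.items():
        S[k] = S.get(k, 0.0) + c

a, x = 1.0, 0.5
approx = peval(S, x) / peval(S, a)       # single evaluation at the boundary
print(approx, math.erf(x) / math.erf(a))
```

The first few series coefficients reproduce the erf expansion x − x³/3 + x⁵/10 − ... exactly, and the ratio agrees closely with (erf x)/(erf a).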
EVALUATION OF COEFFICIENTS FOR A LINEAR PARTIAL DIFFERENTIAL EQUATION:
u_xx − u_yy = 0
for 0 ≤ x ≤ π/2, 0 ≤ y ≤ π/2, with the conditions given as
u(0,y) = u(x,0) = 0
u(π/2,y) = sin y
u(x,π/2) = sin x
Let L_x = ∂²/∂x² and L_y = ∂²/∂y² to write L_x u = L_y u. If we apply inversion to the L_x operator, we have
u = k_1(y) + xk_2(y) + L_x^{-1}L_y u
Now Φ_x = k_1(y) + xk_2(y). Hence u = Φ_x + L_x^{-1}L_y u. The one-term approximant is φ_1 = u_0 = Φ_x. A two-term approximant is φ_2 = φ_1 + u_1 with u_1 = L_x^{-1}L_y u_0. The x conditions are u(0,y) = 0 and u(π/2,y) = sin y. Applying these conditions to k_1(y) + xk_2(y), we see that k_1 = 0 and k_2 = (2/π) sin y. Thus, if the one-term approximant φ_1 were sufficient, the "solution" would be φ_1 = (2/π)x sin y. The next term is u_1 = L_x^{-1}L_y u_0 = L_x^{-1}L_y[(2/π)x sin y]. Then φ_2 = u_0 + u_1 is given by
φ_2 = k_1 + xk_2 − (2/π)(x³/3!) sin y
Because of the condition at x = 0, we have k_1 = 0. From the condition on x at π/2,
(π/2)k_2(y) − (2/π)((π/2)³/3!) sin y = sin y
k_2(y) = (2/π + π/12) sin y
hence
φ_2 = (2/π + π/12)x sin y − (2/π)(x³/3!) sin y
The first coefficient was (2/π) = 0.637. The second (from φ_2) is 0.899. As n → ∞, the coefficient of each term approaches 1.0, so that u = sin x sin y is the solution. Notice that if we try to carry along the constants of integration k_1 and k_2 to some φ_n and do a single evaluation for determination of the constants, we have u_1 = L_x^{-1}L_y u_0 = L_x^{-1}L_y[k_1(y) + xk_2(y)], which we cannot carry out; we must use the evaluated preceding terms rather than a single evaluation at φ_n. We have used only the one operator equation; the same results are obtained from either. Let us consider a more general form for the linear partial differential equation L_x u + L_y u + Ru = g, where L_x = ∂²/∂x² and L_y = ∂²/∂y², with conditions specified by B_1u|_{x=b_1} = β_1(y) and B_2u|_{x=b_2} = β_2(y). Solving for L_x u and applying the inverse operator, we have
where Φ = c_1(y) + xc_2(y) is the solution of L_xΦ = 0. Let u = Σ_{m=0}^∞ u_m and identify the initial term as u_0 = c_1(y) + xc_2(y) + L_x^{-1}g. Now for m ≥ 0, the remaining components are identified by
u_{m+1} = −L_x^{-1}L_y u_m − L_x^{-1}Ru_m
We now form successive approximants φ_n = Σ_{m=0}^{n−1} u_m, which we match to the boundary conditions. Thus φ_1 = u_0, φ_2 = φ_1 + u_1, φ_3 = φ_2 + u_2, ... serve as approximate solutions of increasing accuracy as n approaches infinity and must satisfy the boundary conditions. Beginning with φ_1 = u_0, we use B_1φ_1 = β_1(y) and B_2φ_1 = β_2(y) to determine c_1(y) and c_2(y), so that φ_1 is completely determined. Since u_0 is now known, we can form u_1 = −L_x^{-1}L_y u_0 − L_x^{-1}Ru_0. Then φ_2 = φ_1 + u_1, which must also satisfy the boundary conditions. Continuing to some φ_n, we match the conditions for a sufficient approximation. Thus carrying along constants to φ_n for a single evaluation doesn't work except for linear ordinary differential equations. For linear partial differential equations, we must use the already evaluated preceding terms, and can do so also for nonlinear ordinary differential equations.
COEFFICIENT-GENERATING ALGORITHMS FOR SOLUTION OF PARTIAL DIFFERENTIAL EQUATIONS IN SERIES FORM: Let's consider a model system in the form
∂²u/∂t² + αu + β(∂²/∂x²)u = g(t,x)
assuming conditions given in the form u(0,x) = ψ(x) and ∂u/∂t(0,x) = η(x). Write u = Σ_{m=0}^∞ u_m. We note that Φ = ψ(x) + tη(x) satisfies LΦ = 0 for L = ∂²/∂t² and matches the given conditions.
THE DECOMPOSITION METHOD IN SEVERAL DIMENSIONS

Defining L = ∂²/∂t² and L^{-1} as a two-fold definite integration from 0 to t, we can write
Lu + αu + β(∂²/∂x²)u = g(t,x)
or
Lu = g(t,x) − αu − β(∂²/∂x²)u
Operating with L^{-1}, we have
u = u_0 − L^{-1}αu − L^{-1}β(∂²/∂x²)u
where
u_0 = ψ(x) + tη(x) + L^{-1}g(t,x)
which we will write as a power series in x whose coefficients follow from the Maclaurin expansions of ψ, η, and g. Thus the coefficients of u_0 are known functions of t.
Using the decomposition u = Σ_{m=0}^∞ u_m, we have for m = 0
u_0 = ψ(x) + tη(x) + L^{-1}g(t,x)
and for m > 0,
u_m = −L^{-1}αu_{m−1} − L^{-1}β(∂²/∂x²)u_{m−1}
Since we can also write u_0 as a power series in x, we note that each component is again such a series: the next component u_1 is obtained by applying −L^{-1}α and −L^{-1}β(∂²/∂x²) term by term to u_0, and since L^{-1} is a two-fold definite integration from 0 to t, each term is calculable. Thus u_1 is again a power series in x, which we write with new coefficients. Proceeding in the same way, we rewrite u_2 in the same form, then calculate u_3, u_4, ..., and see that we can write, for the p-th component of u, a recursion generating the series coefficients of u_p from those of u_{p−1} as a coefficient-generating algorithm. Thus for p = 0 the coefficients are those of the expansions of ψ, η, and L^{-1}g. Then the solution is u = Σ_{p=0}^∞ u_p. Consequently,
φ_ν = Σ_{p=0}^{ν−1} u_p
is the ν-th approximant to the solution, which becomes an increasingly accurate representation of u as ν increases.

MIXED DERIVATIVES:
Consider the equation u_xy = −u given the conditions u(x,0) = e^x and u(0,y) = e^{−y}. Let L_x = ∂/∂x and L_y = ∂/∂y. Then L_x^{-1}(·) = ∫_0^x (·) dx and L_y^{-1}(·) = ∫_0^y (·) dy. In operator form, we have L_xL_y u = −u. Operating with L_x^{-1} we have
L_x^{-1}L_x(L_y u) = −L_x^{-1}u
L_y u − L_y u(0,y) = −L_x^{-1}u
L_y u = L_y u(0,y) − L_x^{-1}u
Operating now with L_y^{-1},
u = u(x,0) + u(0,y) − u(0,0) − L_y^{-1}L_x^{-1}u = e^x + e^{−y} − 1 − L_y^{-1}L_x^{-1}u
Let u = Σ_{n=0}^∞ u_n. Then
u_0 = e^x + e^{−y} − 1
u_{n+1} = −L_y^{-1}L_x^{-1}u_n
Since each component is calculable by elementary integrations, u = Σ_{n=0}^∞ u_n is the solution. However, we can rearrange the terms to get a simpler representation using staggered summation:
u = Σ_{m=0}^∞ (1/m!) Σ_{k=0}^m (m!/(k!(m−k)!)) x^{m−k}(−y)^k
where the inner sum collects all terms of total degree m. We recognize the binomial expansion of (x − y)^m and write
u = Σ_{m=0}^∞ (x − y)^m/m!
which is, of course, the exponential series of (x − y), so that u = e^{x−y}, which is the same result in a convenient form.
REMARK: If we write φ_2 = u_0 + u_1, we can recognize the first six terms of (1 + x + x²/2)(1 − y + y²/2) ≈ e^x e^{−y}. Write u = e^x e^{−y} + N, substitute into the original equation, and see that N must vanish in the limit.
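The recursion above can be checked on truncated double Taylor series. A sketch (not from the text) using exact rational arithmetic: each component is stored as a table of coefficients c[i][j] of x^i y^j, the map u_{n+1} = −L_y^{-1}L_x^{-1}u_n shifts both indices up by one with division by (i+1)(j+1), and the summed coefficients reproduce the expansion of e^{x−y}.

```python
# Check of the recursion u_{n+1} = -Ly^{-1} Lx^{-1} u_n for u_xy = -u,
# u(x,0) = e^x, u(0,y) = e^{-y}; truncated Taylor coefficients c[i][j]
# of x^i y^j (exact solution u = e^{x-y}).
from fractions import Fraction
from math import factorial

D = 8                                        # truncation order
def zeros():
    return [[Fraction(0)] * (D + 1) for _ in range(D + 1)]

u = zeros()                                  # u0 = e^x + e^{-y} - 1
for i in range(D + 1):
    u[i][0] += Fraction(1, factorial(i))
for j in range(D + 1):
    u[0][j] += Fraction((-1) ** j, factorial(j))
u[0][0] -= 1

total = [row[:] for row in u]
for _ in range(D):                           # u_{n+1} = -Ly^{-1} Lx^{-1} u_n
    nxt = zeros()
    for i in range(D):
        for j in range(D):
            nxt[i + 1][j + 1] = -u[i][j] / ((i + 1) * (j + 1))
    u = nxt
    for i in range(D + 1):
        for j in range(D + 1):
            total[i][j] += u[i][j]

# Every coefficient matches the expansion of e^{x-y}: (-1)^j / (i! j!)
ok = all(total[i][j] == Fraction((-1) ** j, factorial(i) * factorial(j))
         for i in range(D + 1) for j in range(D + 1))
print(ok)
```

Because the component u_n contributes exactly the coefficients with min(i,j) = n, the partial sums stabilize coefficient by coefficient, which is the staggered-summation rearrangement in series form.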
EXERCISE: u_xy = u_x + u_y − u with u(x,0) = e^x and u(0,y) = e^{−y}. (The solution is u = e^{x−y}.)
EXERCISE: u_xy = [4xy/(1 + x²y²)]u with u(x,0) = u(0,y) = 1. (The solution is u = 1 + x²y².) A generalization to u_xy + k(x,y)u = g(x,y) with u(0,y) = ζ(y) and u(x,0) = η(x) can also be considered using power series expansions of the functions to several terms.
MODIFIED DECOMPOSITION SOLUTION: u_xy = u_x + u_y − u with u(x,0) = e^x and u(0,y) = e^{−y}. Let
u = Σ_{m=0}^∞ Σ_{n=0}^∞ a_{m,n} x^m y^n
We note that u(x,0) = Σ_{m=0}^∞ x^m/m! and u(0,y) = Σ_{n=0}^∞ (−y)^n/n!. Substituting in the equation,
Σ_{m=0}^∞ Σ_{n=0}^∞ (m+1)(n+1)a_{m+1,n+1}x^m y^n = Σ_{m=0}^∞ Σ_{n=0}^∞ (m+1)a_{m+1,n}x^m y^n + Σ_{m=0}^∞ Σ_{n=0}^∞ (n+1)a_{m,n+1}x^m y^n − Σ_{m=0}^∞ Σ_{n=0}^∞ a_{m,n}x^m y^n
Equating like powers,
(m+1)(n+1)a_{m+1,n+1} = (m+1)a_{m+1,n} + (n+1)a_{m,n+1} − a_{m,n}
with
a_{m,0} = 1/m!
a_{0,n} = (−1)^n/n!
We can now compute a table of coefficients in a convenient triangular form:
The first row and column follow directly from the edge values, the recurrence fills in the remaining entries, and by induction,
a_{m,n} = (−1)^n/(m! n!)
Therefore
u = Σ_{m=0}^∞ Σ_{n=0}^∞ (−1)^n x^m y^n/(m! n!) = (Σ_{m=0}^∞ x^m/m!)(Σ_{n=0}^∞ (−y)^n/n!) = e^x e^{−y}
Consequently, u = e^{x−y}.
ADDENDUM: From the table of coefficients in triangular form, we have the edge values by substitution of the conditions:
a_{m,0} = 1/m!
a_{0,n} = (−1)^n/n!
so that the recurrence relation determines every coefficient by substitution, and
u = Σ_{m=0}^∞ Σ_{n=0}^∞ a_{m,n} x^m y^n
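The recurrence and edge values can be tabulated directly. A short sketch (illustrative, not the text's code) builds the table with exact rationals and confirms the induction a_{m,n} = (−1)^n/(m! n!):

```python
# Building the coefficient table a_{m,n} from the recurrence obtained by
# substituting the double series into u_xy = u_x + u_y - u; the closed
# form a_{m,n} = (-1)^n/(m! n!) reproduces u = e^{x-y}.
from fractions import Fraction
from math import factorial

M = 7
a = [[None] * (M + 1) for _ in range(M + 1)]
for m in range(M + 1):
    a[m][0] = Fraction(1, factorial(m))            # from u(x,0) = e^x
for n in range(M + 1):
    a[0][n] = Fraction((-1) ** n, factorial(n))    # from u(0,y) = e^{-y}

# (m+1)(n+1) a_{m+1,n+1} = (m+1) a_{m+1,n} + (n+1) a_{m,n+1} - a_{m,n}
for m in range(M):
    for n in range(M):
        a[m + 1][n + 1] = ((m + 1) * a[m + 1][n] + (n + 1) * a[m][n + 1]
                           - a[m][n]) / ((m + 1) * (n + 1))

check = all(a[m][n] == Fraction((-1) ** n, factorial(m) * factorial(n))
            for m in range(M + 1) for n in range(M + 1))
print(check)
```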
GENERALIZATION OF THE A_n POLYNOMIALS TO FUNCTIONS OF SEVERAL VARIABLES: In applying the decomposition method to nonlinear differential equations arising in physical problems, we may encounter nonlinearities involving several variables [8]. We now generalize the algorithm for the A_n for f(u) to analytic functions of several variables such as f(u,v), where f(u,v) is not factorable into f_1(u)f_2(v). (The latter case, of course, is solvable as a "product nonlinearity" by developing the A_n for each factor and obtaining their product.) Examples appear in [2]. Our objective is to extend the class of solvable systems. In the use of the method, the solution of a differential or partial differential equation is written u = Σ_{n=0}^∞ u_n and f(u) = Σ_{n=0}^∞ A_n(u_0,u_1,...,u_n), where u_0 = Φ is a function involving initial/boundary conditions, the forcing function, and an integral operator. This amounts to the assumption that the solution and functions of the solution are expanded in the A_n polynomials, since A_n reduces to u_n for f(u) = u. For development of an algorithm for the A_n, it is convenient to assume parametrized forms of the u and f(u). The following expressions have been given by the author [2]:
or simply A_n = (1/n!)D^n f|_{λ=0}, where D^n = d^n/dλ^n. The D^n f term for n > 0 can be written as a sum from ν = 1 to n of terms d^ν f/du^ν with coefficients which are polynomials in the d^ν u/dλ^ν. The result for A_n can finally be given in a very convenient form which we have referred to as Rach's Rule:
A_n = Σ_{ν=1}^n c(ν,n) f^{(ν)}(u_0)
Here f^{(ν)}(u_0) means the ν-th derivative of f(u) at u = u_0, and the 1/n! is absorbed in the c(ν,n). The first index of c(ν,n) progresses from 1 to n along with the order of the derivative. The second index is the order of the polynomial.
The A_n is a function of u_0, u_1, ..., u_n, i.e., of the components of the solution u in the decomposition. The c(ν,n) are products (or sums of products) of ν components of u whose subscripts sum to n, with the result divided by the factorial of the number of repeated subscripts. For example, c(1,3) can only be u_3 (a single subscript which must be 3). c(2,3) can only be u_1u_2 (two subscripts adding to 3). c(3,3) = (1/3!)u_1³. c(2,6) has two subscripts adding to 6, for which we have three possibilities: (2,4), (1,5), and (3,3). Hence c(2,6) = u_2u_4 + u_1u_5 + (1/2!)u_3².
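The subscript rule can be stated as a small algorithm. The sketch below (an illustration, not the text's code) generates the c(ν,n) by enumerating the partitions of n into ν parts and verifies Rach's Rule for the particular case f(u) = e^u, for which every f^{(ν)}(u_0) = e^{u_0}, against the independent power-series recursion for the exponential of a series:

```python
# The c(nu, n) rule as an algorithm: partitions of n into nu parts >= 1
# give products of solution components; repeated parts divide by the
# factorial of the repetition count.  Checked for f(u) = e^u against the
# lambda-series of exp(u0 + u1*L + u2*L^2 + ...), whose coefficients obey
# m h_m = sum_{k=1}^{m} k u_k h_{m-k}.
from math import factorial, exp
from itertools import combinations_with_replacement
from collections import Counter

def c(nu, n, u):
    """Sum over all nu subscripts (each >= 1) adding to n of the product of
    components, divided by the factorial of each repetition count."""
    total = 0.0
    for parts in combinations_with_replacement(range(1, n + 1), nu):
        if sum(parts) != n:
            continue
        term = 1.0
        for k in parts:
            term *= u[k]
        for rep in Counter(parts).values():
            term /= factorial(rep)
        total += term
    return total

def A(n, u):
    """Adomian polynomial A_n for f(u) = e^u via Rach's Rule."""
    if n == 0:
        return exp(u[0])
    return sum(c(nu, n, u) * exp(u[0]) for nu in range(1, n + 1))

# Independent check: series coefficients of exp(u0 + u1*L + u2*L^2 + ...)
u = [0.3, -0.7, 0.25, 0.11, -0.05, 0.02]     # arbitrary sample components
h = [exp(u[0])]
for m in range(1, len(u)):
    h.append(sum(k * u[k] * h[m - k] for k in range(1, m + 1)) / m)

for n in range(len(u)):
    print(n, A(n, u), h[n])    # the two columns agree
```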
ANALYTIC FUNCTION OF TWO VARIABLES f(u,v): Proceeding analogously to the case of one variable, the A_n for f(u) are written A_n{f(u)}. Generalizing to A_n{f(u,v)}, or A_n{f(u(λ),v(λ))}, we introduce the notation
f_{μ,ν} = f_{μ,ν}(u_0,v_0) = (∂^μ/∂u_0^μ)(∂^ν/∂v_0^ν) f(u_0,v_0)
Proceeding analogously to the c(ν,n) and f^{(ν)}(u_0) for f(u), we can now write c(μ,ν;n) and f_{μ,ν} for a function f(u,v). Comparing with D¹f, we see that c(1,0;1) = du/dλ, which must be evaluated at λ = 0. Since u = Σ_{n=0}^∞ u_nλ^n, we have du/dλ|_{λ=0} = u_1. Hence c(1,0;1) = u_1. Proceeding in the same way, we can list the A_n{f(u,v)}.
Perhaps more conveniently, we will write the A_n in symmetric form [1,2], where the indices of f_{μ,ν} start from n,0, subtracting 1 from the first index and adding 1 to the second for the next set, ..., and finally reaching 0,n. For A_0 we have c(0,0;0) = 1, so that
A_0 = f_{0,0} = f(u_0,v_0)
For
A_1 = Σ c(μ,ν;1) f_{μ,ν}
we need only c(1,0;1) = u_1 and c(0,1;1) = v_1, so that A_1 = u_1f_{1,0} + v_1f_{0,1}. For
A_2 = Σ c(μ,ν;2) f_{μ,ν}
we need c(1,0;2) = u_2, c(0,1;2) = v_2, c(2,0;2) = u_1²/2!, c(1,1;2) = u_1v_1, and c(0,2;2) = v_1²/2!, so that
A_2 = u_2f_{1,0} + v_2f_{0,1} + (u_1²/2!)f_{2,0} + u_1v_1f_{1,1} + (v_1²/2!)f_{0,2}
CONVENIENT RULES FOR USE: The A_n have been written in detail as a convenient reference and an aid in calculations. However, they can now be written by simply remembering the algorithm. The c(μ,ν;n) are written by considering all possibilities for μ and ν; μ tells us how many times a component of u appears and ν how many times a component of v appears. Further, we see that the sum of all the subscripts is n and, as with functions of a single variable, repeated indices require division by the factorial of the number of repetitions.
ANALYTIC FUNCTION OF SEVERAL VARIABLES: Let's consider f(u,v,w) ≠ f_1(u)f_2(v)f_3(w). Thus N[u,v,w], with N a nonlinear operator, is an analytic function f(u,v,w) which we set equal to Σ_{n=0}^∞ A_n. Now we define
f_{μ,ν,ω} = (∂^μ/∂u_0^μ)(∂^ν/∂v_0^ν)(∂^ω/∂w_0^ω) f(u_0,v_0,w_0)
Now
A_0 = f_{0,0,0}
A_1 = c(1,0,0;1)f_{1,0,0} + c(0,1,0;1)f_{0,1,0} + c(0,0,1;1)f_{0,0,1}
A_2 = c(1,0,0;2)f_{1,0,0} + c(0,1,0;2)f_{0,1,0} + c(0,0,1;2)f_{0,0,1} + c(2,0,0;2)f_{2,0,0} + c(1,1,0;2)f_{1,1,0} + c(1,0,1;2)f_{1,0,1} + c(0,1,1;2)f_{0,1,1} + c(0,2,0;2)f_{0,2,0} + c(0,0,2;2)f_{0,0,2}
The values of the c(μ,ν,ω;n) above are
c(1,0,0;1) = u_1    c(0,1,0;1) = v_1    c(0,0,1;1) = w_1
c(1,0,0;2) = u_2    c(0,1,0;2) = v_2    c(0,0,1;2) = w_2
c(1,1,0;2) = u_1v_1    c(1,0,1;2) = u_1w_1    c(0,1,1;2) = v_1w_1
c(2,0,0;2) = u_1²/2!    c(0,2,0;2) = v_1²/2!    c(0,0,2;2) = w_1²/2!
Thus
A_0 = f(u_0,v_0,w_0)
A_1 = u_1(∂f/∂u_0) + v_1(∂f/∂v_0) + w_1(∂f/∂w_0)
A_2 = u_2(∂f/∂u_0) + v_2(∂f/∂v_0) + w_2(∂f/∂w_0) + u_1v_1(∂²f/∂u_0∂v_0) + u_1w_1(∂²f/∂u_0∂w_0) + v_1w_1(∂²f/∂v_0∂w_0) + (u_1²/2!)(∂²f/∂u_0²) + (v_1²/2!)(∂²f/∂v_0²) + (w_1²/2!)(∂²f/∂w_0²)
etc. for A_3. We can proceed analogously for determination of the A_n for functions f(u_1, u_2, ..., u_m) of m variables.
APPLICATIONS: The A_n for f(u,v,w) are needed to solve three coupled nonlinear differential equations. In the author's form [2] for coupled equations, using decomposition we let
u = Σ_{n=0}^∞ u_n,  v = Σ_{n=0}^∞ v_n,  w = Σ_{n=0}^∞ w_n
and we write
f_i(u,v,w) = Σ_{n=0}^∞ A_n{f_i(u,v,w)}
for i = 1, 2, 3. Then
u_0 = Φ_1 + L_1^{-1}g_1  where L_1Φ_1 = 0
v_0 = Φ_2 + L_2^{-1}g_2  where L_2Φ_2 = 0
w_0 = Φ_3 + L_3^{-1}g_3  where L_3Φ_3 = 0
Similarly, we require A_n{f(u_1,...,u_m)} for m coupled operator equations. An example of a non-factorable nonlinearity f(u,v) is the pair of coupled equations
du/dx + a_1u + b_1v + f_1(u,v) = g_1
dv/dx + a_2u + b_2v + f_2(u,v) = g_2
Finally, we consider N(u,v) = f(u,v) = e^{u+v}. This is an interesting case for comparison purposes since it is a factorable nonlinearity: e^{u+v} = e^u · e^v, so we can solve it as a product nonlinearity using the A_n{f(u)} for each factor, or with the present results for A_n{f(u,v)}. We can now consider a set of two coupled equations in the general form
L_1u + R_1(u,v) + N(u,v) = g_1
L_2v + R_2(u,v) + N(u,v) = g_2
where N(u,v) = e^{u+v}.
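The factorability claim can be checked directly: for N(u,v) = e^{u+v}, the two-variable polynomials A_n{e^{u+v}} must equal the Cauchy product of the one-variable A_i{e^u} with A_j{e^v}. A numerical sketch (sample components chosen here for illustration):

```python
# For the factorable nonlinearity e^{u+v} = e^u * e^v, the polynomials
# A_n{e^{u+v}} equal the convolution of A_i{e^u} with A_j{e^v}.
# Sketch using lambda-series coefficients.
from math import exp

def exp_series(a):
    """Coefficients h_m of exp(a0 + a1*L + a2*L^2 + ...), i.e. A_m{e^a}."""
    h = [exp(a[0])]
    for m in range(1, len(a)):
        h.append(sum(k * a[k] * h[m - k] for k in range(1, m + 1)) / m)
    return h

u = [0.4, -0.3, 0.12, 0.05, -0.02]       # arbitrary sample components
v = [-0.1, 0.2, -0.07, 0.03, 0.01]

# Direct: A_n{e^{u+v}} from the series of exp((u+v)(lambda))
direct = exp_series([ui + vi for ui, vi in zip(u, v)])

# Product nonlinearity: convolution of A_i{e^u} and A_j{e^v}
Au, Av = exp_series(u), exp_series(v)
product = [sum(Au[i] * Av[n - i] for i in range(n + 1)) for n in range(len(u))]

for n in range(len(u)):
    print(n, direct[n], product[n])      # identical up to rounding
```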
SOME FINAL REMARKS: The definition of the L operator avoids difficult integrations involving Green's functions. The use of a finite approximation in series form for the excitation term, and calculation only to necessary accuracy, simplifies integrations still further. (With Maclaurin expansion, for example, of trigonometric terms, one needs only integrals of t^n.) The avoidance of the necessity for perturbation and linearization means physically more correct solutions in many cases. The avoidance of discretized or grid methods avoids the computationally intensive procedures inherent in such methods. The decomposition method is continuous and requires significantly less processing time for the computation of results. It has been demonstrated that very few terms of the decomposition series are necessary for an accurate solution, and also that the integrations can be made simple by the suggested methods, or by symbolic methods, and use quite simple computer codes in comparison with methods such as finite differences or finite elements. As we have shown, partial differential equations can be solved by choosing one operator for the inversion and considering all other derivatives to be included in the R operator. Hence we solve exactly as with an ordinary differential equation. We have the additional advantage of a single global method (for ordinary or partial differential equations as well as many other types of equations). The convergence is always sufficiently rapid to be valuable for numerical work. The initial term must be bounded (a reasonable assumption for a physical system) and L must be the highest-ordered differential operator.
REFERENCES
1. G. Adomian, Stochastic Systems, Academic Press (1983).
2. G. Adomian, Nonlinear Stochastic Operator Equations, Academic Press (1986).
3. G. Adomian and R. Rach, Purely Nonlinear Equations, Comput. Math. Applic., 20 (1-3) (1990).
4. G. Adomian and R. Rach, Equality of Partial Solutions in the Decomposition Method for Linear or Nonlinear Partial Differential Equations, Comput. Math. Applic., 19 (9-12) (1990).
5. G. Adomian and R. Rach, Noise Terms in Decomposition Solution Series, Comput. Math. Applic., 23 (79-83) (1992).
6. G. Adomian, Solving Frontier Problems Modeled by Nonlinear Partial Differential Equations, Comput. Math. Applic., 22 (91-94) (1991).
7. G. Adomian, R. Rach, and M. Elrod, On the Solution of Partial Differential Equations with Specified Boundary Conditions, J. Math. Anal. and Applic., 140 (569-581) (1989).
8. G. Adomian and R. Rach, Generalization of Adomian Polynomials to Functions of Several Variables, Comput. Math. Applic., 24 (11-24) (1992).
SUGGESTED READING
N.S. Koshlyakov, M.M. Smirnov, and E.B. Gliner, Differential Equations of Mathematical Physics, North-Holland (1964).
M.M. Smirnov, Second-order Partial Differential Equations, S. Chomet (ed.), Noordhoff (1964).
E.A. Kraut, Fundamentals of Mathematical Physics, McGraw-Hill (1967).
N. Bellomo, Z. Brzezniak, and L.M. de Socio, Nonlinear Stochastic Evolution Problems in Applied Sciences, Kluwer (1992).
A. Blaquiere, Nonlinear System Analysis, Academic Press (1966).
CHAPTER 4 DOUBLE DECOMPOSITION
In solving boundary-value problems by the decomposition method, we have seen that we can either retain the "constants" of integration in the u_0 term for the case of linear ordinary differential equations, re-evaluating the constants as more terms of the approximate solution φ_n are computed, or we can use the evaluated u_0 to satisfy the boundary conditions and add constants of integration for each successive term u_m. For a linear ordinary differential equation, it is more efficient to calculate an n-term approximation φ_n carrying along the constants in u_0, and finally force φ_n to satisfy the boundary conditions, thus evaluating the constants of integration. We now introduce an effective procedure which allows us decreased computation, especially in partial differential equations. This is done by a further decomposition; i.e., we now decompose the initial term u_0 = Φ into Σ_{m=0}^∞ Φ_m. At first thought, this seems like an ill-advised procedure which can only slow convergence, since the new initial term Φ_0, or u_{0,0}, will be farther from the optimum value for u_0. However, we will see that, as a result, we can use the Φ_m to determine a Φ which can then be used for further terms of φ_n without further evaluations. The boundary-value problem becomes an equivalent initial-value formulation in terms of Φ. This eliminates further matching to boundary conditions. Let us again consider the equation u_xx − u_yy = 0 with u(0,y) = 0, u(x,0) = 0, u(π/2,y) = sin y, and u(x,π/2) = sin x, whose solution by decomposition is u(x,y) = sin x sin y. We will again use decomposition and also the concept of equality of the partial solutions of the operator equations, so only one operator equation needs to be considered. Also, we will decompose the u_0 term of the decomposition series, which means a double decomposition of the solution u. (This is a much preferable method to that of eigenvalue expansion in m dimensions.) We have L_x u = L_y u and can apply the inverse operator L_x^{-1} on both sides. Thus L_x^{-1}L_x u = u − Φ_x, or u = Φ_x + L_x^{-1}L_y u, with u(0,y) = 0 and u(π/2,y) = sin y. Equivalently, we can start with L_y u = L_x u and apply L_y^{-1} to write u = Φ_y + L_y^{-1}L_x u with u(x,0) = 0 and u(x,π/2) = sin x.
As usual, we assume u = Σ_{m=0}^∞ u_m, but now we also decompose Φ_x into Σ_{m=0}^∞ Φ_{x,m}. For the x conditions, we have
u_0 = Φ_{x,0}
u_{m+1} = Φ_{x,m+1} + L_x^{-1}L_y u_m
We can also write, using the y conditions,
u_0 = Φ_{y,0}
u_{m+1} = Φ_{y,m+1} + L_y^{-1}L_x u_m
Since L_xΦ_x = 0 and L_yΦ_y = 0, we have
Φ_{x,m} = ξ_{0,m}(y) + xξ_{1,m}(y)
Φ_{y,m} = η_{0,m}(x) + yη_{1,m}(x)
where the ξ's and η's arise from the indefinite integrations. The conditions given determine these integration "constants" for the approximate solution φ_n = Σ_{m=0}^{n−1} u_m. Thus φ_{m+1}(0,y) = 0 and φ_{m+1}(π/2,y) = sin y determine ξ_{0,m}(y) and ξ_{1,m}(y). Similarly, φ_{m+1}(x,0) = 0 and φ_{m+1}(x,π/2) = sin x determine η_{0,m}(x) and η_{1,m}(x). Let us consider improving approximations to the x-dimensional solution as we calculate increasing terms of the decomposition series. Of course, the approximation is the solution in the limit m → ∞.
Since φ_1(0,y) = 0, ξ_{0,0} = 0. Since φ_1(π/2,y) = sin y, ξ_{1,0} = (2/π) sin y. Therefore φ_1 = u_0 = (2/π)x sin y. To calculate u_1 we have
u_1 = ξ_{0,1} + xξ_{1,1} + L_x^{-1}L_y u_0
A two-term approximation is given by φ_2 = φ_1 + u_1 (or u_0 + u_1); hence, since φ_2(0,y) = 0, we have ξ_{0,1} = 0, and since φ_2(π/2,y) = sin y, we have ξ_{1,1} = (π/2)(sin y)/3!. Then
u_1 = (π/2)(x sin y)/3! − (2/π)(x³ sin y)/3!
u_2 = ξ_{0,2} + xξ_{1,2} + L_x^{-1}L_y u_1
L_y u_1 = (2/π)(x³ sin y)/3! − (π/2)(x sin y)/3!
L_x^{-1}L_y u_1 = (2/π)(x⁵ sin y)/5! − (π/2)(x³ sin y)/(3!)²
u_2 = ξ_{0,2} + xξ_{1,2} + (2/π)(x⁵ sin y)/5! − (π/2)(x³ sin y)/(3!)²
φ_3 = φ_2 + u_2, or u_0 + u_1 + u_2, etc. Summarizing, the components of u are
u_0 = (2/π)x sin y
u_1 = (π/2)(x sin y)/3! − (2/π)(x³ sin y)/3!
u_2 = {−(π/2)³/5! + (π/2)³/(3!)²}x sin y − (π/2)(x³ sin y)/(3!)² + (2/π)(x⁵ sin y)/5!
etc. The approximate solutions φ_1, φ_2, φ_3, ... are:
φ_1 = (2/π)x sin y
φ_2 = {2/π + (π/2)/3!}x sin y + (2/π)(−x³/3!) sin y
φ_3 = {2/π + (π/2)/3! − (π/2)³/5! + (π/2)³/(3!)²}x sin y + {2/π + (π/2)/3!}(−x³/3!) sin y + (2/π)(x⁵/5!) sin y
etc., or
φ_1 = (.6366198)x sin y
φ_2 = (.8984192)x sin y + (.6366198)(−x³/3!) sin y
φ_3 = (.9737817)x sin y + (.8984192)(−x³/3!) sin y + (.6366198)(x⁵/5!) sin y
which converges very rapidly to the given solution. It is interesting to write the result as
φ_m = a_{m,0} x sin y + a_{m,1}(−x³/3!) sin y + a_{m,2}(x⁵/5!) sin y + ...
or
φ_m = Σ_{n=0}^{m−1} a_{m,n}(−1)^n (x^{2n+1}/(2n+1)!) sin y
where the a_{m,n} are numerical series whose sum is 1; each term is delayed behind the preceding term. Now,
lim_{m→∞} φ_m = lim_{m→∞} Σ_{n=0}^{m−1} a_{m,n}(−1)^n (x^{2n+1}/(2n+1)!) sin y
where lim_{m→∞} a_{m,n} = 1 for all n. Then
u = lim_{m→∞} φ_m = Σ_{n=0}^∞ {(−1)^n x^{2n+1}/(2n+1)!} sin y = sin x sin y
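The delayed-coefficient behavior of the a_{m,n} can be reproduced numerically. A sketch of the x-dimensional iteration (tracking only the coefficients of x^{2k+1} sin y in each approximant; not the author's code):

```python
# Double-decomposition iteration for u_xx = u_yy, u(0,y) = 0, u(x,0) = 0,
# u(pi/2,y) = sin y, u(x,pi/2) = sin x, tracking the coefficients of
# x^{2k+1} sin y in each approximant phi_m.
import math

half_pi = math.pi / 2

def val(p):                      # evaluate sum_k p[k] x^{2k+1} at x = pi/2
    return sum(a * half_pi ** (2 * k + 1) for k, a in enumerate(p))

u = [1.0 / half_pi]              # u0 = (2/pi) x sin y
phi = u[:]
leads = [phi[0]]
for m in range(2, 7):
    # Lx^{-1} Ly maps x^{2k+1} sin y to -x^{2k+3} sin y / ((2k+2)(2k+3))
    raw = [0.0] + [-a / ((2 * k + 2) * (2 * k + 3)) for k, a in enumerate(u)]
    raw[0] = -val(raw) / half_pi # homogeneous xi*x keeps phi_m(pi/2,y) = sin y
    u = raw
    phi = [a + b for a, b in zip(phi + [0.0], u)]
    leads.append(phi[0])

print(leads)   # 0.6366..., 0.8984..., 0.9737..., approaching 1
```

The leading coefficients match the values .6366198, .8984192, .9737817 quoted above, and phi[k] tends to (−1)^k/(2k+1)!, i.e., the approximants tend to sin x sin y.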
The y-dimensional solution is u = sin y sin x since, by symmetry, y is interchanged with x; i.e., the partial solutions are identical.
Consider the example u_xx + u_yy = g(x,y) = x² + y² with u(0,y) = 0, u(x,0) = 0, u(1,y) = y²/2, u(x,1) = x²/2. We have shown previously, using decomposition, that the solution u = x²y²/2 can be obtained in only two terms. It is also clear that either the operator equation for L_x u or for L_y u can be used with appropriate inversions. Thus L_x u = x² + y² − L_y u, and operating with L_x^{-1}, we have L_x^{-1}L_x u = L_x^{-1}(x² + y²) − L_x^{-1}L_y u. Since L_x^{-1}L_x u = u − Φ_x,
u = Φ_x + L_x^{-1}(x² + y²) − L_x^{-1}L_y u    (1)
Similarly,
u = Φ_y + L_y^{-1}(x² + y²) − L_y^{-1}L_x u    (2)
Using (1), u_0 = Φ_x + L_x^{-1}(x² + y²)
and
Σ_{m=0}^∞ u_m = u_0 − L_x^{-1}L_y Σ_{m=0}^∞ u_m
so that
u_{m+1} = −L_x^{-1}L_y u_m
for m ≥ 0. Now, if we decompose the u_0 term as well, we write Φ_x = Σ_{m=0}^∞ Φ_{x,m}. Identifying u_0 = Φ_{x,0} + L_x^{-1}(x² + y²), all other components are determined by
u_{m+1} = Φ_{x,m+1} − L_x^{-1}L_y u_m    (3)
Proceeding analogously using (2),
u_0 = Φ_{y,0} + L_y^{-1}(x² + y²),  u_{m+1} = Φ_{y,m+1} − L_y^{-1}L_x u_m    (4)
Continuing with the x equation, i.e., (1) and (3),
Φ_x = ξ_0(y) + xξ_1(y),  Φ_{x,m} = ξ_{0,m}(y) + xξ_{1,m}(y)    (5)
or from (2) and (4),
Φ_y = η_0(x) + yη_1(x),  Φ_{y,m} = η_{0,m}(x) + yη_{1,m}(x)
The "constants" of (indefinite) integration are now matched with the approximate solutions φ_{m+1} = Σ_{μ=0}^m u_μ for m = 0, 1, 2, .... Thus φ_{m+1}(0,y) = 0 and φ_{m+1}(1,y) = y²/2 determine ξ_{0,m}(y) and ξ_{1,m}(y) in (5). Similarly, φ_{m+1}(x,0) = 0 and φ_{m+1}(x,1) = x²/2 determine η_{0,m}(x) and η_{1,m}(x).
Proceeding with the x-dimensional solution, Φ_x = ξ_0(y) + xξ_1(y) and u_0 = ξ_0 + xξ_1 + L_x^{-1}(x² + y²); after decomposition of Φ_x,
u_0 = ξ_{0,0} + xξ_{1,0} + x⁴/12 + x²y²/2
Our first approximation is φ_1 = u_0, where φ_1(0,y) = 0 and φ_1(1,y) = y²/2. Since φ_1(0,y) = 0, ξ_{0,0} = 0. Since φ_1(1,y) = y²/2,
ξ_{1,0} + 1/12 + y²/2 = y²/2
or ξ_{1,0} = −1/12. Hence
u_0 = −x/12 + x⁴/12 + x²y²/2
Then
u_1 = ξ_{0,1} + xξ_{1,1} − L_x^{-1}L_y u_0
Since L_y u_0 = x² and L_x^{-1}L_y u_0 = L_x^{-1}x² = x⁴/12,
u_1 = ξ_{0,1} + xξ_{1,1} − x⁴/12
Then
φ_2 = u_0 + u_1 = φ_1 + u_1
Matching φ_2 to the conditions gives ξ_{0,1} = 0 and ξ_{1,1} = 1/12, which cancels the −x/12 term. We now have φ_2 = x²y²/2, i.e., the exact solution in two terms. If we proceed further,
u_2 = ξ_{0,2} + xξ_{1,2} − L_x^{-1}L_y u_1
We have L_y u_1 = 0 and L_x^{-1}L_y u_1 = 0, and since φ_3(0,y) = 0, ξ_{0,2} = 0. Since φ_3(1,y) = y²/2, ξ_{1,2} = 0; hence u_2 = 0, so φ_3 = x²y²/2. We can continue to see lim_{m→∞} φ_m = u = x²y²/2. The same result is obtained from the y-dimensional solution. We now apply the double decomposition to a linear ordinary differential equation represented by Lu + Ru = g, where L is the highest-ordered linear differential operator (in this example we choose L = d²/dx²) and R is a linear operator (the "remainder" operator) which for this L can contain no derivatives higher than the first; the order of R is always less than the order of L.
Solving for Lu and operating with L^{-1}, we have
u = Φ + L^{-1}g − L^{-1}Ru
where LΦ = 0. Now let u = Σ_{m=0}^∞ u_m and Φ = Σ_{m=0}^∞ Φ_m (where L^{-1} is a pure integration not involving constants). Let Φ_0 = c_{0,0} + xc_{1,0} and define u_0 = Φ_0 + L^{-1}g. Now Φ_m = c_{0,m} + xc_{1,m}. Matching φ_1 to the boundary conditions, c_{0,0} and c_{1,0} are determined by two linear equations. Suppose g = 0 for simplicity. Then, for conditions u(b_1) = β_1 and u(b_2) = β_2,
c_{0,0} + b_1c_{1,0} = β_1
c_{0,0} + b_2c_{1,0} = β_2
or in matrix form
[1  b_1; 1  b_2][c_{0,0}; c_{1,0}] = [β_1; β_2]
which is solvable if the determinant of the first matrix is non-zero. We now go to the next approximation φ_2 by first determining
u_1 = Φ_1 − L^{-1}Ru_0
to get φ_2 = φ_1 + u_1. Matching φ_2 to the boundary conditions to evaluate the constants, φ_2 is determined completely. Continuing in this manner, we determine u_2, u_3, ... until we arrive at a satisfactory φ_m, verifiable by substitution or stabilized numerically to sufficient accuracy. We have
u_{m+1} = Φ_{m+1} − L^{-1}Ru_m
where Φ_m = c_{0,m} + xc_{1,m} and φ_{m+1} = φ_m + u_m. Matching φ_{m+1} to the boundary conditions, we require the new constants to absorb the boundary residuals through terms of the form c_{0,m} + b_1c_{1,m}. Substituting and matching the conditions yields a pair of equations which we write simply as
c_{0,m} + b_1c_{1,m} = β_{1,m}
c_{0,m} + b_2c_{1,m} = β_{2,m}
Thus, solving this pair at each stage, Φ_m, or c_{0,m} and c_{1,m}, are determined, and we remark that Φ = Σ_{m=0}^∞ Φ_m.
The decomposition of the initial term can be used for nonlinear boundary-value problems (for ordinary or partial differential equations) and also for linear partial differential equations. It is not necessary in linear ordinary differential equations, where we can carry along the unevaluated u_0 and evaluate all at once in the φ_n, a simpler procedure. The objective of the decomposition of u_0 is to allow a convenient matching of the boundary conditions to any approximant φ_m, i.e., for any value of m. Each integration involves constants which are added to get a better u_0. This gives us a useful procedure.
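The stage-by-stage matching can be sketched for a concrete linear problem. The example below is chosen here for illustration (it is not the text's worked example): u'' + u = 0 with u(0) = 0 and u(1) = sin 1, so L = d²/dx², Ru = u, g = 0, and the exact solution is u = sin x. Each stage solves the 2×2 boundary system for c_{0,m} and c_{1,m}:

```python
# Double decomposition for a linear BVP (illustrative example, not from
# the text): u'' + u = 0, u(0) = 0, u(1) = sin 1, exact solution sin x.
# Each stage adds Phi_m = c0 + c1*x from the 2x2 system at b1 = 0, b2 = 1.
import math

b1, b2, beta1, beta2 = 0.0, 1.0, 0.0, math.sin(1.0)

def peval(p, x):
    return sum(c * x**k for k, c in enumerate(p))

def iint2(p):                     # pure two-fold integration, no constants
    once = [0.0] + [c / (k + 1) for k, c in enumerate(p)]
    return [0.0] + [c / (k + 1) for k, c in enumerate(once)]

def padd(p, q):
    n = max(len(p), len(q))
    return [(p[k] if k < len(p) else 0.0) + (q[k] if k < len(q) else 0.0)
            for k in range(n)]

phi, u = [], []
for m in range(5):
    tail = [-c for c in iint2(u)]     # -L^{-1} R u_{m-1}; zero at m = 0
    # c0 + b1*c1 and c0 + b2*c1 must absorb the boundary residuals:
    r1 = beta1 - peval(phi, b1) - peval(tail, b1)
    r2 = beta2 - peval(phi, b2) - peval(tail, b2)
    c1 = (r2 - r1) / (b2 - b1)
    c0 = r1 - b1 * c1
    u = padd(tail, [c0, c1])          # u_m = Phi_m - L^{-1} R u_{m-1}
    phi = padd(phi, u)

print(abs(peval(phi, 0.5) - math.sin(0.5)))   # small; phi_5 is near sin x
```

By construction every approximant satisfies the boundary conditions exactly, while the interior error shrinks rapidly from stage to stage.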
EXAMPLE: u_xx + u_yy = 0 with specified boundary conditions. Write L_x u + L_y u = 0. If we solve for L_x u, we have u = Φ_x − L_x^{-1}L_y u with Φ_x = ξ_0(y) + xξ_1(y). Now decompose Φ_x also; thus Φ_x = Σ_{m=0}^∞ Φ_{x,m}. Then
u_0 = Φ_{x,0}
u_{m+1} = Φ_{x,m+1} − L_x^{-1}L_y u_m
where Φ_{x,m} = ξ_{0,m}(y) + xξ_{1,m}(y), and ξ_{0,m} and ξ_{1,m} are determined by satisfying the boundary conditions with the approximate solution φ_{m+1} = Σ_{μ=0}^m u_μ.
The solution is u = Σ_{m=0}^∞ u_m, or, since we decomposed u_0 as well,
u = Σ_{m=0}^∞ Σ_{k=0}^m (−L_x^{-1}L_y)^k Φ_{x,m−k}
so that we obtain the solution in the double-decomposition form.
We have seen that the solution can also be obtained from the equation for L_y u. Thus, if we write L_y u = −L_x u and apply the inverse L_y^{-1}, we have u = Φ_y − L_y^{-1}L_x u, where L_y = ∂²/∂y² and L_y^{-1} is a two-fold indefinite integration with respect to y. Now u = η_0(x) + yη_1(x) − L_y^{-1}L_x u. We let u = Σ_{m=0}^∞ u_m where u_0 is normally given by η_0(x) + yη_1(x). We now decompose the u_0 also, i.e., Φ_y = Σ_{m=0}^∞ Φ_{y,m}, with
u_0 = Φ_{y,0}
u_{m+1} = Φ_{y,m+1} − L_y^{-1}L_x u_m
Now u = lim_{m→∞} φ_{m+1}. In the inhomogeneous case g ≠ 0, u_0 = Φ_{y,0} + L_y^{-1}g. Finally, summing to get the solution and rearranging terms, the value of decomposition of the initial term is the matching of the boundary conditions for each φ_m, for any m. Every integration has new constants to evaluate. Finally, we can add all the c_{0,m} and c_{1,m} separately to form a new c_0 and c_1 or, equivalently, a new Φ which now is close to the final value which would be reached as m → ∞.

NONLINEAR CASE:
Consider the ordinary differential equation Lu + Ru + Nu = g. Solving for Lu and applying L^{-1}:
u = Φ + L^{-1}g − L^{-1}Ru − L^{-1}Nu
(Again L^{-1} is an indefinite integration, in this case two-fold.) Since Φ = c_0 + xc_1 is decomposed, Φ = Σ_{m=0}^∞ Φ_m with Φ_m = c_{0,m} + xc_{1,m}. The solution u = Σ_{m=0}^∞ u_m, with Nu = Σ_{m=0}^∞ A_m, then has components
u_0 = Φ_0 + L^{-1}g
u_{m+1} = Φ_{m+1} − L^{-1}Ru_m − L^{-1}A_m
NONLINEAR CASE - PARTIAL DIFFERENTIAL EQUATION: Consider L_x u + L_y u + Ru + Nu = g. We assume that L_x = ∂²/∂x² and consider the equation for L_x u, with L_y treated as another R term. If there are also operators L_z and L_t, they are treated exactly like L_y; then
u = Φ_x + L_x^{-1}g − L_x^{-1}L_y u − L_x^{-1}Ru − L_x^{-1}Nu
where Φ_x = c_0(y) + xc_1(y). Decomposing Φ_x = Σ_{m=0}^∞ Φ_{x,m}, where Φ_{x,m} = c_{0,m}(y) + xc_{1,m}(y),
u_0 = Φ_{x,0} + L_x^{-1}g
u_{m+1} = Φ_{x,m+1} − L_x^{-1}L_y u_m − L_x^{-1}Ru_m − L_x^{-1}A_m
To evaluate, we have φ_1 = u_0, which is matched to the boundary conditions. Suppose the x conditions are u(b_1,y) = β_1 and u(b_2,y) = β_2. Then
φ_1(b_1,y) = β_1
φ_1(b_2,y) = β_2
so that
c_{0,0} + b_1c_{1,0} = β_1 − [L_x^{-1}g]_{x=b_1}
c_{0,0} + b_2c_{1,0} = β_2 − [L_x^{-1}g]_{x=b_2}
from which we determine c_{0,0} and c_{1,0}, so that φ_1 = u_0 is determined completely. Now u_1 is calculated from
u_1 = Φ_{x,1} − L_x^{-1}L_y u_0 − L_x^{-1}Ru_0 − L_x^{-1}A_0
Since u_0 has been determined, u_1 is calculable, so we obtain φ_2 = φ_1 + u_1, which is matched to the boundary conditions using
φ_2(b_1,y) = β_1
φ_2(b_2,y) = β_2
so that φ_2 is known. The process is continued to a satisfactory φ_n.
Consider the equation Lu + Ru = 0, where L = d^n/dt^n and R is a linear operator possibly involving differentials of lower order. The integral representation is u = Φ − L^{-1}Ru, where L^{-1} is an integral operator defined as an n-fold integration, and u = Σ_{m=0}^∞ u_m yields the solution in series form. It is interesting to consider a double series representation
u = Σ_{m=0}^∞ Σ_{n=0}^∞ u_{m,n}
If L is of n-th order, in the single series we have u_0 = Φ, where LΦ = 0 and Φ has n terms. Suppose then we decompose
Φ = Σ_{m=0}^∞ Φ_m
and let u_0 = Φ_0 only. Now u_1, which was previously identified as −L^{-1}RΦ, becomes
u_1 = Φ_1 − L^{-1}RΦ_0
Now
u_m = Φ_m − L^{-1}RΦ_{m−1} + ... + (−L^{-1}R)^{m−1}Φ_1 + (−L^{-1}R)^mΦ_0
The approximate (m-term) solution is given by φ_m = Σ_{μ=0}^{m−1} u_μ, which we can write as a double summation
φ_m = Σ_{μ=0}^{m−1} Σ_{k=0}^{μ} (−L^{-1}R)^kΦ_{μ−k}
In the limit this sums to Σ_{m=0}^∞ (−L^{-1}R)^mΦ, so we see that our m-term approximation becomes the exact solution in the limit, as expected. We can now view initial-value and boundary-value problems in the same way, offering clear advantages over finite difference or shooting methods. Thus in initial-value problems,
u_m = (−L^{-1}R)^mΦ
The approximation φ_m is given by
φ_m = Σ_{μ=0}^{m−1} (−L^{-1}R)^μΦ
In the limit m → ∞, this becomes
Σ_{m=0}^∞ (−L^{-1}R)^mΦ = u
In the boundary-value representation of the decomposition components of the solution u,
u_0 = u_{0,0} = Φ_0
u_1 = u_{0,1} + u_{1,0} = Φ_1 − L^{-1}RΦ_0
and the approximant to the solution φ_{m+1} is given by the staggered summations. Thus,
φ_{m+1} = Σ_{μ=0}^{m} Σ_{k=0}^{μ} (−L^{-1}R)^kΦ_{μ−k}
Again, in the limit of the approximations we get u; thus
lim_{m→∞} φ_m = Σ_{m=0}^∞ (−L^{-1}R)^mΦ = u
We now have an initial-value format for boundary-value problems. We can determine Φ by evaluating an approximation to it at the boundary conditions, then use it in the initial-value format for a better solution, i.e., one even closer to the final u. The two limiting forms of our approximation are equal. To summarize, in a boundary-value problem we compute φ_{m+1} = φ_m + u_m and evaluate at the boundary conditions. Now we can approximate a good value for Φ from
Φ ≈ Σ_{μ=0}^{m} Φ_μ
Then, using the approximation for Φ, calculate
φ_{m+k} = Σ_{μ=0}^{m+k−1} (−L^{-1}R)^μΦ
for k as large as we wish, without further evaluations at the boundary conditions of the Φ_m for higher values of the index m.
NONLINEAR HOMOGENEOUS ORDINARY DIFFERENTIAL EQUATION WITH GIVEN BOUNDARY CONDITIONS:
Starting from the usual decomposition form Lu + Nu = 0, where Nu is an analytic function f(u), the usual integral representation of the solution is
u = Φ − L^{-1}Nu
with the (single) decomposition u = Σ_{m=0}^∞ u_m and Nu = Σ_{m=0}^∞ A_m. This is now written using double decomposition, with Φ = Σ_{m=0}^∞ Φ_m, so that
u_0 = Φ_0,  u_{m+1} = Φ_{m+1} − L^{-1}A_m
Now our usual polynomials A_n must also be doubly decomposed, since the components on which they depend are built up stage by stage; here we have decomposed the initial, or Φ, term, taking only u_{0,0} = Φ_0 as the first term. It is not essential, but we can also utilize the analytic grouping parameter λ. In such a case,
u = Σ_{m=0}^∞ u_mλ^m,  Nu = Σ_{m=0}^∞ A_mλ^m
Without λ, the same groupings hold with λ set to 1.
An initial-value solution to Lu + Nu = 0 is provided by
u_0 = Φ,  u_{m+1} = −L^{-1}A_m
A boundary-value solution using decomposition of the initial term gives us
u_0 = Φ_0,  u_{m+1} = Φ_{m+1} − L^{-1}A_m
(There are no integration constants implied by L^{-1}; i.e., it represents a pure two-fold integration with no constants.) The A_n in the boundary-value problem are computed from the components determined so far, the partial sums of the Φ_m replacing Φ, demonstrating convergence. The value of Φ_m is determined by evaluating φ_{m+1} = Σ_{μ=0}^m u_μ at the boundary conditions. In the initial-value problem, φ_n = Σ_{m=0}^{n−1} u_m and lim_{n→∞} φ_n = Σ_{m=0}^∞ u_m = u again. What we accomplish, after computing several terms of φ, is to minimize our computation for boundary-value problems by avoiding the further matching to the boundary conditions. Because of the rapid convergence in decomposition, computation was already minimal in comparison to discretized methods, and the above procedure offers a further decrease.
EXAMPLES OF BOUNDARY-VALUE PROBLEMS: We now calculate two boundary-value problems to demonstrate how one can change the boundary-value format to an equivalent initial-value format, decreasing computation and accelerating convergence by using the concept of double decomposition. The first example is an ordinary differential equation. The second is a partial differential equation which is considered both in the temporal format ($t$-coordinate partial solution) and the spatial format ($x$-coordinate partial solution); convergence of the spatial solution is accelerated by transforming it into the initial-value format. The procedure can also be used for nonlinear equations. Consider the ordinary differential equation
CHAPTER 4
$$\frac{d^2u}{dx^2} + \alpha^2 u = \beta(x)$$
with boundary conditions $u(x_1) = b_1$ and $u(x_2) = b_2$, where
$$\beta(x) = \sum_{m=0}^{\infty} \beta_m x^m.$$
In operator format with $L = d^2/dx^2$, we have $Lu + \alpha^2 u = \beta(x)$. Then, solving for $Lu$ and operating with $L^{-1}$, which is always an indefinite integration for boundary-value problems, we have $u = u_0 - L^{-1}\alpha^2 u$, where
$$u_0 = A_0 + B_0 x + \int\!\!\int \beta(x)\,dx\,dx.$$
By writing $u = \sum_{m=0}^{\infty} u_m$, the solution is decomposed into a sum of components to be determined, and $\phi_m = \sum_{n=0}^{m-1} u_n$ is the "approximant" to the solution, i.e., an $m$-term approximation converging to $u$ in the limit. The one-term approximant is $\phi_1 = u_0$, and we must have $\phi_1(x_1) = b_1$ and $\phi_1(x_2) = b_2$. Following components are given by
$$u_{m+1} = A_{m+1} + B_{m+1}x - L^{-1}\alpha^2 u_m,$$
and $\phi_{m+1} = \phi_m + u_m$. The increasingly accurate approximants must still satisfy the boundary conditions, hence
$$\phi_{m+1}(x_1) = b_1, \qquad \phi_{m+1}(x_2) = b_2.$$
Writing $u_0 = \sum_{m=0}^{\infty} a_m^{(0)} x^m$, we have, for $m > 0$,
$$a_{m+2}^{(0)} = \frac{\beta_m}{(m+1)(m+2)},$$
while for $m = 0$ and $m = 1$, $a_0^{(0)} = A_0$ and $a_1^{(0)} = B_0$. Since $\phi_1 = u_0$, and we know that $\phi_1$ must satisfy the boundary conditions, we have $\phi_1(x_1) = b_1$ and $\phi_1(x_2) = b_2$. Let's write
$$A_0 + x_1 B_0 = \xi_1^{(0)}, \qquad A_0 + x_2 B_0 = \xi_2^{(0)},$$
where
$$\xi_i^{(0)} = b_i - \sum_{m=0}^{\infty} \frac{\beta_m\,x_i^{m+2}}{(m+1)(m+2)}, \quad i = 1, 2,$$
which we can symbolize as a simple matrix equation $\mathbf{x}\mathbf{A} = \boldsymbol{\xi}$, or $\mathbf{A} = \mathbf{x}^{-1}\boldsymbol{\xi}$ if $x_1 \neq x_2$, a trivial condition, since the points $x_1, x_2$ must be distinct. We now have $A_0$ and $B_0$; thus
$$u_0 = \sum_{m=0}^{\infty} a_m^{(0)} x^m$$
with $a_0^{(0)} = A_0$, $a_1^{(0)} = B_0$, and
$$a_{m+2}^{(0)} = \frac{\beta_m}{(m+1)(m+2)}.$$
Now we calculate the $u_1$ component to get the $\phi_2$ approximant, recalling that $L^{-1}$ represents indefinite integration:
$$u_1 = A_1 + B_1 x - L^{-1}\alpha^2 u_0 = A_1 + B_1 x - \alpha^2 \sum_{m=0}^{\infty} \frac{a_m^{(0)}\,x^{m+2}}{(m+1)(m+2)}.$$
We have $u_1(x_1) = 0$ and $u_1(x_2) = 0$, and we let
$$A_1 + x_1 B_1 = \xi_1^{(1)}, \qquad A_1 + x_2 B_1 = \xi_2^{(1)}.$$
Proceeding as before to solve for the "constants of integration", which we prefer to call matching coefficients, we obtain
$$u_1 = \sum_{m=0}^{\infty} a_m^{(1)} x^m$$
with $a_0^{(1)} = A_1$, $a_1^{(1)} = B_1$, and
$$a_{m+2}^{(1)} = -\frac{\alpha^2\,a_m^{(0)}}{(m+1)(m+2)}.$$
We now have $\phi_2 = \phi_1 + u_1$ and can proceed in the same manner to a general component
$$u_m = A_m + B_m x - L^{-1}\alpha^2 u_{m-1} = \sum_{n=0}^{\infty} a_n^{(m)} x^n,$$
where $a_0^{(m)} = A_m$, $a_1^{(m)} = B_m$, and
$$a_{n+2}^{(m)} = -\frac{\alpha^2\,a_n^{(m-1)}}{(n+1)(n+2)},$$
so that $u_m(x_1) = u_m(x_2) = 0$ determines $A_m$ and $B_m$ from
$$A_m + x_1 B_m = \xi_1^{(m)}, \qquad A_m + x_2 B_m = \xi_2^{(m)}$$
(where, of course, $x_1, x_2$ are distinct points in a boundary-value problem). Therefore,
$$A_m = \frac{x_2\,\xi_1^{(m)} - x_1\,\xi_2^{(m)}}{x_2 - x_1}, \qquad B_m = \frac{\xi_2^{(m)} - \xi_1^{(m)}}{x_2 - x_1},$$
and finally $\phi_{m+1} = \phi_m + u_m$ is the $(m+1)$-term approximant to the solution $u$, which we can also write as $\phi_{m+1} = \sum_{n=0}^{m} u_n$. We observe that
$$\lim_{m\to\infty} \phi_{m+1}\{u\} = u \qquad\text{and}\qquad \lim_{m\to\infty} \phi_{m+1}\{a_n\} = a_n.$$
Finally, we note that no difficulty exists in extension to nonlinear cases, since it only requires use of our $A_n$ polynomials for the nonlinear term.
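The $A_n$ polynomials used throughout can be generated symbolically from their defining parameter expansion $A_n = \frac{1}{n!}\,\frac{d^n}{d\lambda^n}\,f\bigl(\sum_k u_k\lambda^k\bigr)\big|_{\lambda=0}$. The fragment below is a sketch of ours (it assumes SymPy is available), not the book's own algorithm:

```python
import sympy as sp

def adomian_polynomials(f, n_terms):
    """Adomian polynomials A_n for the nonlinearity f(u), via the parameter
    expansion A_n = (1/n!) d^n/dlam^n f(sum_k u_k lam^k) evaluated at lam=0."""
    lam = sp.Symbol('lam')
    u = sp.symbols(f'u0:{n_terms}')
    expansion = f(sum(uk * lam**k for k, uk in enumerate(u)))
    return [sp.expand(expansion.diff(lam, n).subs(lam, 0) / sp.factorial(n))
            for n in range(n_terms)]

# For f(u) = u**2: A_0 = u0**2, A_1 = 2*u0*u1, A_2 = u1**2 + 2*u0*u2, ...
A = adomian_polynomials(lambda u: u**2, 4)
```

The first few polynomials agree with those quoted later in the text for $f(u) = u^2$.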
We again consider the same ordinary differential equation
but with the initial conditions
The $(m+1)$-term approximant $\phi_{m+1}$ is formed as before. We can now avoid further evaluations at the boundary conditions, as we did earlier, by recasting the problem into the initial-value problem format. We can then continue with less work, i.e., without further matching to the boundary conditions. This acceleration of convergence by recasting boundary-value problems into initial-value format becomes more helpful as the problem complexity grows and matching boundary conditions becomes more painful, because we then have a more accurate initial term to work with. The initial-value format will yield identical analytical and numerical solutions to the boundary-value format solutions. Thus, beginning with $Lu + \alpha^2 u = \beta(x)$,
$$Lu = \beta(x) - \alpha^2 u,$$
where $L^{-1}$ is now the definite integration operator $L^{-1}(\cdot) = \int_0^x\!\!\int_0^x (\cdot)\,dx\,dx$, and
we can write $u_0$ as
$$u_0 = \phi_\infty(0) + x\,\phi_\infty'(0) + L^{-1}\beta(x),$$
where the initial values are taken from the converged boundary-value approximant. We note now that no further boundary-condition evaluations are necessary for computation of the $u_m$ for any $m$. Continuing with $u_{m+1} = -L^{-1}\alpha^2 u_m$, since $u = \sum_{m=0}^{\infty} u_m$ and each $u_m$ is a power series in $x$, staggered summation is applicable. (See Appendix II.)
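Staggered summation itself is easy to sketch. Assuming, as in the second-order examples here, that component $m$ first contributes at power $x^{2m}$, the coefficient of $x^n$ collects contributions only from components $m \le [n/2]$. The fragment below (an illustration of ours, with made-up coefficients) checks that the regrouped single series reproduces the double sum:

```python
import numpy as np
from numpy.polynomial import polynomial as P

def staggered_series(a):
    """Collapse double-decomposition coefficients a[m, n] (m = component
    index, n = power of x) into single-series coefficients c[n].  For a
    second-order operator, component m first contributes at power 2m, so
    only components with m <= n//2 enter c[n]: the staggered summation."""
    M, N = a.shape
    return np.array([sum(a[m, n] for m in range(min(M, n // 2 + 1)))
                     for n in range(N)])

# Hypothetical component table respecting the structure a[m, n] = 0 for n < 2m.
rng = np.random.default_rng(0)
a = rng.standard_normal((4, 8))
for m in range(4):
    a[m, :2 * m] = 0.0
c = staggered_series(a)
```

Only the total sum matters; the regrouping merely reorders the same terms into a series suitable for matching at boundaries.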
SOLUTION OF PARTIAL DIFFERENTIAL EQUATIONS IN SPATIAL AND TEMPORAL FORMATS: Consider the equation
$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial t^2} + \alpha(x,t)u = \beta(x,t)$$
with the initial conditions $u(0,x) = \tau_1(x)$, $\partial u(0,x)/\partial t = \tau_2(x)$, and boundary conditions $u(x_1,t) = \xi_1(t)$, $u(x_2,t) = \xi_2(t)$. We suppose $\alpha$ and $\beta$ are given in the form
$$\alpha(x,t) = \sum_{m=0}^{\infty}\sum_{n=0}^{\infty} \alpha_{m,n}\,x^m t^n, \qquad \beta(x,t) = \sum_{m=0}^{\infty}\sum_{n=0}^{\infty} \beta_{m,n}\,x^m t^n.$$
The boundary conditions $\xi_i(t)$ are also in series form. Let $L = \partial^2/\partial x^2$ and write $u = u_0 - L^{-1}\alpha(x,t)u - L^{-1}(\partial^2/\partial t^2)u$, where
$$u_0 = A_0(t) + x\,B_0(t) + L^{-1}\beta(x,t).$$
The solution $u$ is the decomposition $u = \sum_{m=0}^{\infty} u_m$, and the approximant is $\phi_m = \sum_{n=0}^{m-1} u_n$. Now $\phi_1 = u_0$, and the boundary conditions require
$$\phi_1(x_1,t) = \xi_1(t), \qquad \phi_1(x_2,t) = \xi_2(t).$$
The general component is
$$u_m = A_m(t) + x\,B_m(t) - L^{-1}\alpha(x,t)u_{m-1} - L^{-1}\frac{\partial^2}{\partial t^2}u_{m-1}.$$
Writing $u_0 = \sum_{m=0}^{\infty} a_m^{(0)}(t)\,x^m$ and $\beta(x,t) = \sum_{m=0}^{\infty}\beta_m(t)\,x^m$, we have, for $m > 0$,
$$a_{m+2}^{(0)}(t) = \frac{\beta_m(t)}{(m+1)(m+2)},$$
and for $m = 0$ and $m = 1$, $a_0^{(0)}(t) = A_0(t)$ and $a_1^{(0)}(t) = B_0(t)$. We can also write
$$u_0 = A_0(t) + x\,B_0(t) + \sum_{m=0}^{\infty} \frac{\beta_m(t)\,x^{m+2}}{(m+1)(m+2)} = \phi_1.$$
We now have the first approximant; it must satisfy the boundary conditions, hence,
let's write this as
$$A_0(t) + x_1 B_0(t) = \xi_1^{(0)}(t), \qquad A_0(t) + x_2 B_0(t) = \xi_2^{(0)}(t),$$
where
$$\xi_i^{(0)}(t) = \xi_i(t) - \sum_{m=0}^{\infty} \frac{\beta_m(t)\,x_i^{m+2}}{(m+1)(m+2)}, \quad i = 1, 2,$$
from which we find that
$$A_0(t) = \frac{x_2\,\xi_1^{(0)}(t) - x_1\,\xi_2^{(0)}(t)}{x_2 - x_1}, \qquad B_0(t) = \frac{\xi_2^{(0)}(t) - \xi_1^{(0)}(t)}{x_2 - x_1},$$
so that
$$u_0 = \sum_{m=0}^{\infty} a_m^{(0)}(t)\,x^m$$
becomes fully determined, where $a_0^{(0)}(t) = A_0(t)$ and $a_1^{(0)}(t) = B_0(t)$. Since $A_0(t) = \sum_{n=0}^{\infty} A_n^{(0)} t^n$ and $B_0(t) = \sum_{n=0}^{\infty} B_n^{(0)} t^n$, we can write
$$u_0 = \sum_{m=0}^{\infty}\sum_{n=0}^{\infty} a_{m,n}^{(0)}\,x^m t^n,$$
where the $a_{m,n}^{(0)}$ follow from the series above.
We can now calculate the $u_1$ component and the $\phi_2 = \phi_1 + u_1$ approximant. (We point out that although we explain in considerable detail, the procedure is simple and straightforward and is easily programmed or even calculated by hand.) For the $u_1$ component, we need the product $\alpha(x,t)u_0(x,t)$, which is the Cauchy product of the two double series. (See Appendix III.)
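The double-series Cauchy product used here is a finite convolution in both indices, $c_{m,n} = \sum_{\mu=0}^{m}\sum_{\nu=0}^{n} \alpha_{\mu,\nu}\,b_{m-\mu,n-\nu}$, and, as the text notes, is readily programmed. A direct sketch (truncated to the stored shape):

```python
import numpy as np

def cauchy_product_2d(a, b):
    """Coefficients of the product of two double power series
    (sum a[m,n] x^m t^n) * (sum b[m,n] x^m t^n), truncated to the common
    shape: c[m,n] = sum_{mu<=m} sum_{nu<=n} a[mu,nu]*b[m-mu,n-nu]."""
    M, N = a.shape
    c = np.zeros((M, N))
    for m in range(M):
        for n in range(N):
            for mu in range(m + 1):
                for nu in range(n + 1):
                    c[m, n] += a[mu, nu] * b[m - mu, n - nu]
    return c
```

For example, multiplying the series of $(1+x)$ and $(1+t)$ gives the coefficients of $1 + x + t + xt$.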
Since $u_1(x_1,t)$ and $u_1(x_2,t)$ must be zero,
$$A_1(t) + x_1 B_1(t) = \xi_1^{(1)}(t), \qquad A_1(t) + x_2 B_1(t) = \xi_2^{(1)}(t),$$
where the $\xi_i^{(1)}(t)$ collect the boundary values of $-L^{-1}\alpha u_0 - L^{-1}(\partial^2/\partial t^2)u_0$, so that we obtain
$$A_1(t) = \frac{x_2\,\xi_1^{(1)}(t) - x_1\,\xi_2^{(1)}(t)}{x_2 - x_1}, \qquad B_1(t) = \frac{\xi_2^{(1)}(t) - \xi_1^{(1)}(t)}{x_2 - x_1}.$$
(Of course, $x_1 \neq x_2$.) We can now write
$$u_1 = \sum_{m=0}^{\infty} a_m^{(1)}(t)\,x^m,$$
where $a_0^{(1)}(t) = A_1(t)$, $a_1^{(1)}(t) = B_1(t)$, and
$$a_{m+2}^{(1)}(t) = -\frac{\displaystyle\sum_{n=0}^{\infty} t^n (n+1)(n+2)\,a_{m,n+2}^{(0)} + \sum_{n=0}^{\infty} t^n \sum_{\mu=0}^{m}\sum_{\nu=0}^{n} \alpha_{\mu,\nu}\,a_{m-\mu,n-\nu}^{(0)}}{(m+1)(m+2)}.$$
Note that $A_1(t) = \sum_{n=0}^{\infty} A_n^{(1)} t^n$ and $B_1(t) = \sum_{n=0}^{\infty} B_n^{(1)} t^n$, so finally
$$u_1 = \sum_{m=0}^{\infty}\sum_{n=0}^{\infty} a_{m,n}^{(1)}\,x^m t^n,$$
where $a_{0,n}^{(1)} = A_n^{(1)}$, $a_{1,n}^{(1)} = B_n^{(1)}$, and the higher $a_{m,n}^{(1)}$ follow from the recurrence above.
We can continue in this manner to calculate $u_2, u_3, \ldots$. To compute the general term $u_m$ and the $\phi_{m+1}$ approximant, we write
$$u_m = A_m(t) + x\,B_m(t) - L^{-1}\alpha(x,t)u_{m-1} - L^{-1}\frac{\partial^2}{\partial t^2}u_{m-1},$$
with
$$u_{m-1} = \sum_{\mu=0}^{\infty}\sum_{n=0}^{\infty} a_{\mu,n}^{(m-1)}\,x^\mu t^n, \qquad \frac{\partial^2}{\partial t^2}u_{m-1} = \sum_{\mu=0}^{\infty}\sum_{n=0}^{\infty} (n+1)(n+2)\,a_{\mu,n+2}^{(m-1)}\,x^\mu t^n,$$
and $\alpha(x,t) = \sum_{\mu=0}^{\infty}\sum_{n=0}^{\infty} \alpha_{\mu,n}\,x^\mu t^n$. The Cauchy product $\alpha(x,t)u_{m-1}(x,t)$ is given by
$$\alpha u_{m-1} = \sum_{\mu=0}^{\infty}\sum_{n=0}^{\infty} \left[\sum_{\rho=0}^{\mu}\sum_{\nu=0}^{n} \alpha_{\rho,\nu}\,a_{\mu-\rho,n-\nu}^{(m-1)}\right] x^\mu t^n.$$
Consequently, the coefficients of $u_m$ satisfy the same recurrence as for $u_1$, with superscripts $(m-1)$ in place of $(0)$.
We can now write, since $u_m(x_1,t) = u_m(x_2,t) = 0$,
$$A_m(t) + x_1 B_m(t) = \xi_1^{(m)}(t), \qquad A_m(t) + x_2 B_m(t) = \xi_2^{(m)}(t).$$
Therefore,
$$A_m(t) = \frac{x_2\,\xi_1^{(m)}(t) - x_1\,\xi_2^{(m)}(t)}{x_2 - x_1}, \qquad B_m(t) = \frac{\xi_2^{(m)}(t) - \xi_1^{(m)}(t)}{x_2 - x_1}.$$
We can write the result as
$$u = \sum_{m=0}^{\infty}\sum_{n=0}^{\infty} a_{m,n}\,x^m t^n, \qquad a_{m,n} = \sum_{\mu=0}^{\infty} a_{m,n}^{(\mu)}.$$
We note that
$$\lim_{m\to\infty} \phi_{m+1}\{u\} = u \qquad\text{and}\qquad \lim_{m\to\infty} \phi_{m+1}\{a_{m,n}\} = a_{m,n}.$$
TRANSFORMATION OF SPATIAL SOLUTION: Having calculated the matched initial term, we can now recast the problem into an initial-value problem format, accelerating convergence by simplified further calculation through avoidance of further boundary-condition evaluations:
$$\frac{\partial^2 u}{\partial x^2} = \beta(x,t) - \alpha(x,t)u - \frac{\partial^2 u}{\partial t^2}.$$
We proceed to desired accuracy by choosing
$$L(\cdot) = \frac{\partial^2}{\partial x^2}(\cdot) \qquad\text{and}\qquad L^{-1}(\cdot) = \int_0^x\!\!\int_0^x (\cdot)\,dx\,dx.$$
We emphasize that $L^{-1}$ now represents definite integration. Upon substitution, we have
$$u = A(t) + x\,B(t) + L^{-1}\beta - L^{-1}\alpha u - L^{-1}\frac{\partial^2 u}{\partial t^2},$$
where $A(t)$ and $B(t)$ are the values of $u$ and $\partial u/\partial x$ at $x = 0$, taken from the converged boundary-value approximant.
Therefore we can write $u_0$ as
$$u_0 = \sum_{m=0}^{\infty} a_m^{(0)}(t)\,x^m, \qquad a_0^{(0)}(t) = A(t), \quad a_1^{(0)}(t) = B(t).$$
We can also write
$$u_0 = \sum_{m=0}^{\infty}\sum_{n=0}^{\infty} a_{m,n}^{(0)}\,x^m t^n.$$
To calculate the $m$th component $u_m$ of the decomposition of $u$, we have
$$u_m = -L^{-1}\alpha(x,t)u_{m-1} - L^{-1}\frac{\partial^2}{\partial t^2}u_{m-1},$$
where $L^{-1}(\cdot) = \int_0^x\!\!\int_0^x (\cdot)\,dx\,dx$. Thus
$$a_{m+2,n}^{(\mu)} = -\frac{(n+1)(n+2)\,a_{m,n+2}^{(\mu-1)} + \displaystyle\sum_{\rho=0}^{m}\sum_{\nu=0}^{n} \alpha_{\rho,\nu}\,a_{m-\rho,n-\nu}^{(\mu-1)}}{(m+1)(m+2)},$$
where no boundary matching is now required. The approximants $\phi_{m+1} = \sum_{k=0}^{m} u_k$ and the solution $u = \sum_{m=0}^{\infty} u_m$ are computed as usual. Staggered summation can be applied at this point.
TEMPORAL FORMAT: Consider the same partial differential equation
$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial t^2} + \alpha(x,t)u = \beta(x,t)$$
with the initial conditions $u(x,0) = \tau_0(x)$, $\partial u(x,0)/\partial t = \tau_1(x)$ and the boundary conditions as before. We will suppose that $\alpha$ and $\beta$ are given in the form
$$\alpha(x,t) = \sum_{m=0}^{\infty}\sum_{n=0}^{\infty} \alpha_{m,n}\,x^m t^n, \qquad \beta(x,t) = \sum_{m=0}^{\infty}\sum_{n=0}^{\infty} \beta_{m,n}\,x^m t^n.$$
To derive the solution we define as usual $L = \partial^2/\partial t^2$ and $L^{-1}(\cdot) = \int_0^t\!\!\int_0^t (\cdot)\,dt\,dt$, a (two-fold) definite integration operator. Then, operating with $L^{-1}$, we get
$$u = u_0 - L^{-1}\alpha(x,t)u - L^{-1}\frac{\partial^2}{\partial x^2}u,$$
where
$$u_0 = \tau_0(x) + t\,\tau_1(x) + L^{-1}\beta(x,t).$$
Solving by decomposition, the solution $u$ is written $u = \sum_{m=0}^{\infty} u_m$, and the approximant is $\phi_m = \sum_{n=0}^{m-1} u_n$. Now we calculate the $u_0$ component (and the $\phi_1$ approximant), and we note that we can write $\tau_0(x) = \sum_{m=0}^{\infty} \tau_0^{(m)} x^m$ and $\tau_1(x) = \sum_{m=0}^{\infty} \tau_1^{(m)} x^m$, so that
$$u_0 = \sum_{n=0}^{\infty} a_n^{(0)}(x)\,t^n,$$
where $a_0^{(0)}(x) = \tau_0(x)$, $a_1^{(0)}(x) = \tau_1(x)$, and, with $\beta(x,t) = \sum_{n=0}^{\infty} \beta_n(x)\,t^n$, for $n \geq 2$,
$$a_n^{(0)}(x) = \frac{\beta_{n-2}(x)}{n(n-1)}.$$
Thus we can also write $u_0 = \sum_{m=0}^{\infty}\sum_{n=0}^{\infty} a_{m,n}^{(0)}\,x^m t^n$.
Now we can compute the $u_1$ component and the $\phi_2$ approximant:
$$u_1 = -L^{-1}\alpha(x,t)u_0 - L^{-1}\frac{\partial^2}{\partial x^2}u_0.$$
The Cauchy product $\alpha(x,t)u_0(x,t)$ is computed as
$$\alpha u_0 = \sum_{m=0}^{\infty}\sum_{n=0}^{\infty}\left[\sum_{\rho=0}^{m}\sum_{\nu=0}^{n} \alpha_{\rho,\nu}\,a_{m-\rho,n-\nu}^{(0)}\right] x^m t^n$$
(which can readily be programmed). Carrying out the two-fold $t$-integration, we can finally write
$$u_1 = t^2 \sum_{n=0}^{\infty} a_n^{(1)}(x)\,t^n,$$
with the coefficients determined by the recurrence. We now have $\phi_2 = \phi_1 + u_1$ and can continue in the same manner to the $u_2$ component and the $\phi_3$ approximant. We next compute the Cauchy product indicated by $\alpha(x,t)u_{m-1}(x,t)$, or
$$\alpha u_{m-1} = \sum_{m'=0}^{\infty}\sum_{n=0}^{\infty}\left[\sum_{\rho=0}^{m'}\sum_{\nu=0}^{n} \alpha_{\rho,\nu}\,a_{m'-\rho,n-\nu}^{(m-1)}\right] x^{m'} t^n,$$
and proceed exactly as for $u_1$. Staggered summation can now be applied.
COMPATIBILITY OF EQUATION AND CONDITIONS: If the computed solution satisfies the equation and the conditions, we have the solution. If the physical problem is correctly modelled, no difficulty appears. One cannot arbitrarily assign conditions to an equation; the equations, conditions, and solution must be consistent. If attempts to model a physical system fail to give us correct conditions, one can get both temporal and spatial solutions and find that these solutions are different. If they are close over a finite region, then we realize the modelling needs improvement. This may allow us to develop a predictor-corrector methodology, which we leave to future work.
If incorrect conditions are used for the decomposition solution, we do not have a solution. If the solution is correct, i.e., it satisfies the equation, we can do inverse operations; e.g., in $Lu + Nu = g$ we have $u = \Phi - L^{-1}Nu$ and can solve for $g$. If $L$ is second order, we know $u_0 = \Phi - L^{-1}g$ where $\Phi = \alpha + \beta x$; hence, we know $g$ and have a test for $\alpha$, $\beta$.
NONLINEAR BOUNDARY-VALUE PROBLEMS: We have two alternative, actually equivalent, approaches for boundary-value problems, whether ordinary or partial differential equations are involved. The first is to match each approximant $\phi_n$, for $n = 1, 2, \ldots$, to the boundary conditions. In boundary-value problems modelled by ordinary linear differential equations, only one such matching is necessary. We can carry along the unevaluated initial term without evaluating the integration constants by matching to the boundary conditions, and only do the matching when the $m$-term approximant has been calculated. In nonlinear differential equations or (linear or nonlinear) partial differential equations, this is not possible; the matching must then be done at each level of approximation. The second (or double decomposition) procedure adds decomposition to the initial term. This allows a convenient match of the approximant $\phi_m$ to the boundary conditions, because the constants from each integration are added to give a better initial term. As we will discover later, the solution can then be carried further, if more accuracy is needed, as an initial-value problem. The value of decomposition of the initial term is in the matching of the boundary conditions: the integration constants $c_{0,n}$ and $c_{1,n}$ are added separately to form a new $c_0$ and $c_1$ and a new initial term, which is now close to a final value as $m \to \infty$ in $\phi_m$. Now we can use this $u_0$ term without adding further constants of integration and matching to boundary conditions, so that for high approximants achieving accurate solutions, the computation is much less.
REFERENCE
1. G. Adomian and R. Rach, Analytic Solution of Nonlinear Boundary-value Problems in Several Dimensions, J. Math. Anal. and Applic., 173, 118-137 (March 1993).
CHAPTER 5 MODIFIED DECOMPOSITION
The modelling of physical problems can lead to ordinary or partial differential equations which are quite generally nonlinear. Examples include equations such as the Navier-Stokes equations in fluid mechanics, the Lane-Emden equation for stellar structure, nonlinear Schrödinger equations in quantum theory, soliton equations, etc. We present here a variation of the decomposition method which can also be applied to such equations to obtain accurate quantitative solutions. A mathematical advantage of the various adaptations of decomposition is that linear equations are an easily solved special case and ordinary differential equations are a special case of the theory for partial differential equations, so we have a single unified field. This alternative formulation will be referred to as "modified decomposition" [1,2]. It requires the following result on transformation of series [3]. Normally we write $f(u) = \sum_{n=0}^{\infty} A_n(u_0, \ldots, u_n)$. However, given a convergent series $u = \sum_{n=0}^{\infty} c_n x^n$ and $f(u)$, we can write
$$f(u) = \sum_{n=0}^{\infty} A_n(c_0, \ldots, c_n)\,x^n.$$
We can see this as follows. Let $u_n = c_n x^n$, so that $u = \sum_{n=0}^{\infty} u_n$. We wish to find a transformed series $f(u) = f\bigl(\sum_{n=0}^{\infty} c_n x^n\bigr)$. Since $u_0 = c_0$, $u_1 = c_1 x$, $u_2 = c_2 x^2, \ldots$, the $A_n(u_0, \ldots, u_n)$ become polynomials in the $c_k$ multiplied by powers of $x$. Thus for $f(u) = u^2$, for example, we have $A_0 = c_0^2$, $A_1 = 2c_0c_1 x, \ldots$. As an example consider $u = \tan^{-1}x$ and $f(u) = u^2$. With
$$\tan^{-1}x = x - x^3/3 + x^5/5 - \cdots,$$
taking $u_0 = x$, $u_1 = -x^3/3, \ldots$, we have $A_0 = u_0^2 = x^2$ and $A_1 = 2u_0u_1 = -2x^4/3$, so
$$(\tan^{-1}x)^2 = x^2 - 2x^4/3 + \cdots.$$
Power series solutions of linear homogeneous differential equations in initial-value problems yield simple recurrence relations for the coefficients but generally are not adequate for nonlinear equations. Dealing, for example, with a simple linear inhomogeneous case $Lu + Ru = g$ with second-order $L$, we let
$u = \Phi + I^2 g - I^2 Ru$, where $I$ is an integration and $I^2$ will mean a two-fold integration. Thus we obtain the coefficients from a recursion formula. The technique provides an interesting alternative for equations such as the Duffing equation and the Van der Pol equation.
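As a sketch of such a recursion formula (our illustration, with hypothetical coefficient lists `p` and `g`), the linear inhomogeneous case $u'' + p(t)u = g(t)$ gives $a_n = \bigl(g_{n-2} - \sum_{\nu=0}^{n-2} p_\nu a_{n-2-\nu}\bigr)/n(n-1)$:

```python
import numpy as np

def series_solution(p, g, tau0, tau1, N):
    """Maclaurin coefficients a_n of u'' + p(t)u = g(t), u(0)=tau0,
    u'(0)=tau1, where p and g are given by their series coefficients.
    Recurrence: a_n = (g_{n-2} - sum_{v<=n-2} p_v a_{n-2-v}) / (n(n-1))."""
    a = np.zeros(N)
    a[0], a[1] = tau0, tau1
    for n in range(2, N):
        conv = sum(p[v] * a[n - 2 - v] for v in range(n - 1) if v < len(p))
        gn = g[n - 2] if n - 2 < len(g) else 0.0
        a[n] = (gn - conv) / (n * (n - 1))
    return a

# u'' + u = 0 with u(0)=1, u'(0)=0: the coefficients of cos t.
a = series_solution([1.0], [], 1.0, 0.0, 10)
```

The homogeneous constant-coefficient call reproduces the cosine series, $a_2 = -1/2$, $a_4 = 1/24$, and so on.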
ONE-DIMENSIONAL CASE: Consider the nonlinear inhomogeneous ordinary differential equation $L_t u + Ru + Nu = g$ where $L_t = d^2/dt^2$, $R = p(t)$, $Nu = \alpha(t)f(u)$. We will view this as a special case in one dimension of a multi-dimensional partial differential equation. (In the following sections, we will consider equations in two, three, and four dimensions.) We can write
$$u = \Phi_t + L_t^{-1}g - L_t^{-1}Ru - L_t^{-1}Nu,$$
where $\Phi_t = \tau_0 + t\,\tau_1$ and $L_t^{-1} = \int_0^t\!\!\int_0^t (\cdot)\,dt\,dt$. The substitution of the series for $u$, $f(u)$, $g$, and $p$ yields the integrand; multiplying and collecting like powers of $t$,
we obtain
$$\sum_{n=0}^{\infty} a_n t^n = \tau_0 + t\,\tau_1 + \int_0^t\!\!\int_0^t \sum_{n=0}^{\infty} g_n t^n\,dt\,dt - \int_0^t\!\!\int_0^t \sum_{n=0}^{\infty}\left[\sum_{\nu=0}^{n} p_\nu a_{n-\nu} + \sum_{\nu=0}^{n} \alpha_\nu A_{n-\nu}\right] t^n\,dt\,dt.$$
Carrying out the integrations, we have
$$\sum_{n=0}^{\infty} a_n t^n = \tau_0 + t\,\tau_1 + \sum_{n=0}^{\infty} \frac{t^{n+2}}{(n+1)(n+2)}\left[g_n - \sum_{\nu=0}^{n} p_\nu a_{n-\nu} - \sum_{\nu=0}^{n} \alpha_\nu A_{n-\nu}\right].$$
In the summations on the right, $n$ can be replaced by $n-2$ to write
$$\sum_{n=0}^{\infty} a_n t^n = \tau_0 + t\,\tau_1 + \sum_{n=2}^{\infty} \frac{t^n}{n(n-1)}\left[g_{n-2} - \sum_{\nu=0}^{n-2} p_\nu a_{n-2-\nu} - \sum_{\nu=0}^{n-2} \alpha_\nu A_{n-2-\nu}\right].$$
Finally, we can equate coefficients of like powers of $t$ on the left side and on the right side to arrive at recurrence relations for the coefficients. Thus $a_0 = \tau_0$, $a_1 = \tau_1$, and for $n \geq 2$,
$$a_n = \frac{g_{n-2} - \sum_{\nu=0}^{n-2} p_\nu a_{n-2-\nu} - \sum_{\nu=0}^{n-2} \alpha_\nu A_{n-2-\nu}}{n(n-1)}.$$
This solution, of course, is $u = \sum_{n=0}^{\infty} a_n t^n$.
We now have two linear operators $L_t$ and $L_x$. Let $L_t = \partial^2/\partial t^2$ and $L_x = \partial^2/\partial x^2$. Assume that we can write
$$u = \sum_{n=0}^{\infty} t^n\left[\sum_{k=0}^{\infty} a_{n,k}\,x^k\right] = \sum_{n=0}^{\infty} a_n(x)\,t^n.$$
If we have $u(x,y) = \sum_{m=0}^{\infty}\sum_{n=0}^{\infty} c_{m,n}\,x^m y^n$, we get terms
$$c_{0,0},\ c_{0,1}y,\ c_{0,2}y^2, \ldots, \quad c_{1,0}x,\ c_{1,1}xy,\ c_{1,2}xy^2, \ldots, \quad c_{2,0}x^2,\ c_{2,1}x^2y, \ldots.$$
The first group can be written $x^0\sum_{n=0}^{\infty} c_{0,n}\,y^n$, the second group as $x^1\sum_{n=0}^{\infty} c_{1,n}\,y^n$, the third group as $x^2\sum_{n=0}^{\infty} c_{2,n}\,y^n$, etc. Thus
$$u = \sum_{m=0}^{\infty} c_m(y)\,x^m,$$
where $c_m(y) = \sum_{n=0}^{\infty} c_{m,n}\,y^n$, so that the double series is collapsed into a single
series. We suppose the operator $R$ is $p(x,t)$ with $p = \sum_{m=0}^{\infty} p_m(x)\,t^m$. Let the nonlinear term be $Nu = \alpha(t,x)f(u)$ with $\alpha = \sum_{m=0}^{\infty} \alpha_m(x)\,t^m$, and write
$$f(u) = \sum_{n=0}^{\infty} t^n A_n(a_0(x), \ldots, a_n(x)) = \sum_{n=0}^{\infty} t^n A_n(x).$$
The decomposition solution using the $t$ partial solution is given by
$$u = \Phi_t + L_t^{-1}g - L_t^{-1}L_x u - L_t^{-1}Ru - L_t^{-1}Nu,$$
where $\Phi_t = \tau_0(x) + t\,\tau_1(x)$, with $\tau_0(x) = u(t=0,x)$ and $\tau_1(x) = \partial u/\partial t\,(t=0,x)$, and $L_t^{-1} = \int_0^t\!\!\int_0^t (\cdot)\,dt\,dt$. Substituting for $u$, $f(u)$, $g$, and $p$, we have the series equation whose bracketed products must be expanded.
The bracketed products are the Cauchy products
$$p\,u = \sum_{n=0}^{\infty} t^n \sum_{\nu=0}^{n} p_\nu(x)\,a_{n-\nu}(x), \qquad \alpha f(u) = \sum_{n=0}^{\infty} t^n \sum_{\nu=0}^{n} \alpha_\nu(x)\,A_{n-\nu}(x).$$
Substituting these products, carrying out the integrations, letting $n \to n-2$ on the right side, and finally equating coefficients of like powers of $t$, we derive the recursion formula for the coefficients: $a_0(x) = \tau_0(x)$, $a_1(x) = \tau_1(x)$, and for $n \geq 2$,
$$a_n(x) = \frac{g_{n-2}(x) - L_x a_{n-2}(x) - \sum_{\nu=0}^{n-2} p_\nu(x)\,a_{n-2-\nu}(x) - \sum_{\nu=0}^{n-2} \alpha_\nu(x)\,A_{n-2-\nu}(x)}{n(n-1)}.$$
The final solution is now given by $u(t,x) = \sum_{n=0}^{\infty} a_n(x)\,t^n$. (Whether modified or regular decomposition is used, we can still apply Padé approximants, or the Shanks, Wynn, Euler, or Van Wijngaarden transforms, to accelerate convergence.)
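Of the accelerators mentioned, the Shanks transformation is the simplest to sketch; the fragment below (illustrative only, not from the text) applies two passes of $S(A_n) = (A_{n+1}A_{n-1} - A_n^2)/(A_{n+1} + A_{n-1} - 2A_n)$ to the slowly convergent partial sums of $\ln 2 = 1 - 1/2 + 1/3 - \cdots$:

```python
def shanks(seq):
    """One pass of the Shanks transformation on a list of partial sums:
    S_n = (A_{n+1}*A_{n-1} - A_n**2) / (A_{n+1} + A_{n-1} - 2*A_n)."""
    return [(seq[i + 1] * seq[i - 1] - seq[i] ** 2) /
            (seq[i + 1] + seq[i - 1] - 2 * seq[i])
            for i in range(1, len(seq) - 1)]

# Partial sums of the alternating harmonic series for ln 2.
partial, s = [], 0.0
for n in range(1, 12):
    s += (-1) ** (n + 1) / n
    partial.append(s)
accelerated = shanks(shanks(partial))   # two passes
```

Eleven raw terms are good to only about two digits, while two Shanks passes on the same terms recover several more.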
THREE-DIMENSIONAL CASE: NONLINEAR PARTIAL DIFFERENTIAL EQUATIONS: Assume
$$f(u) = \sum_{n=0}^{\infty} t^n A_n(a_0(x,y), \ldots, a_n(x,y)) = \sum_{n=0}^{\infty} t^n A_n(x,y).$$
The $t$ partial solution is
$$u = \Phi_t + L_t^{-1}g - L_t^{-1}L_x u - L_t^{-1}L_y u - L_t^{-1}Ru - L_t^{-1}Nu,$$
where $\Phi_t = \tau_0(x,y) + t\,\tau_1(x,y)$. We can now write the series for each term, substitute the bracketed Cauchy products, integrate, and replace $n$ by $n-2$ on the right, exactly as in the one-dimensional case. By equating coefficients of like powers of $t$, we have $a_0(x,y) = \tau_0(x,y)$, $a_1(x,y) = \tau_1(x,y)$, and for $n \geq 2$,
$$a_n(x,y) = \frac{g_{n-2} - (L_x + L_y)a_{n-2} - \sum_{\nu=0}^{n-2} p_\nu\,a_{n-2-\nu} - \sum_{\nu=0}^{n-2} \alpha_\nu\,A_{n-2-\nu}}{n(n-1)}.$$
The final solution is now given by $u(t,x,y) = \sum_{n=0}^{\infty} a_n(x,y)\,t^n$.

FOUR-DIMENSIONAL CASE: NONLINEAR PARTIAL DIFFERENTIAL EQUATIONS:
Here we let $L_t = \partial^2/\partial t^2$, $L_x = \partial^2/\partial x^2$, $L_y = \partial^2/\partial y^2$, $L_z = \partial^2/\partial z^2$. We can assume
$$f(u) = \sum_{n=0}^{\infty} t^n A_n(x,y,z).$$
The $t$ equation is
$$u = \Phi_t + L_t^{-1}g - L_t^{-1}(L_x + L_y + L_z)u - L_t^{-1}Ru - L_t^{-1}Nu,$$
with $\Phi_t = u(t=0,x,y,z) + t\,\partial u/\partial t\,(t=0,x,y,z) = \tau_0(x,y,z) + t\,\tau_1(x,y,z)$, and we can proceed as before with substitutions and integrations to get recursion formulas for the coefficients.
REMARK: We have seen that nonlinear partial differential equations are solvable by the modified decomposition procedure using concepts of the decomposition method (partial solutions, the $A_n$, and transformations of series using the $A_n$). We have seen previously that such equations are solvable by straightforward decomposition also. Comparisons can now be made; in general, decomposition solutions converge faster, but the solution is identical. A simple example is $L_t u + L_x u + f(u) = 0$ where $L_t = \partial/\partial t$, $L_x = \partial/\partial x$, $f(u) = u^2$, $u(t=0,x) = 1/(2x)$. The $t$ partial solution is
$$u = \Phi_t - L_t^{-1}L_x u - L_t^{-1}f(u),$$
where $\Phi_t = u(t=0) = 1/(2x)$. Then $u_{n+1} = -L_t^{-1}L_x u_n - L_t^{-1}A_n$, where the $A_n$ are defined for $u^2$. We get the (decomposition) solution.
Using modified decomposition, let $u = \sum_{n=0}^{\infty} t^n a_n(x)$; hence
$$\sum_{n=0}^{\infty} a_n t^n = \frac{1}{2x} - \sum_{n=1}^{\infty} \frac{t^n}{n}\,a_{n-1}' - \sum_{n=1}^{\infty} \frac{t^n}{n}\,A_{n-1},$$
from which we get $a_0 = 1/(2x)$ and, for $n \geq 1$,
$$a_n = -\frac{1}{n}\left(a_{n-1}' + A_{n-1}\right),$$
so that $u = 1/(2x) + t/(2x)^2 + t^2/(2x)^3 + \cdots = 1/(2x - t)$, which is the same solution obtained by decomposition. For homogeneous equations with a constant coefficient of $f(u)$, as in this example, the rate of convergence is equal. For the general case, we find that the decomposition method converges faster. Suppose we consider the solution of a nonlinear equation where the nonlinearity and the excitation are given as power series, or equivalently as graphs from which series are derived by curve-fitting techniques. Let's write $u$ and $f(u)$ in series form.
Computing $\bigl(\sum_{n=0}^{\infty} a_n t^n\bigr)^\nu$, we see that the coefficient of $t^n$ is $A_n[u^\nu]$ evaluated at $u_k = a_k t^k$, for all $n$ and for $\nu > 0$. (Of course, for $\nu = 0$ the quantity on the left is simply equal to 1.) Thus, if we define
$$b_n = A_n[u^\nu]\big|_{u_k = a_k t^k} \quad\text{for } n > 0,$$
the $A_n$ are easily evaluated. For example, $A_0[u^\nu] = a_0^\nu$ and $A_1[u^\nu] = \nu\,a_0^{\nu-1}a_1\,t$. We observe that $A_n[u^{\nu=1}] = u_n$, i.e., the linear case; then
$$A_n[f(u)]\big|_{u_k = a_k t^k} = t^n A_n(a_0, \ldots, a_n).$$
The previous result on transformation of series stated that if $u = \sum_{n=0}^{\infty} a_n t^n$, then
$$f(u) = \sum_{n=0}^{\infty} t^n A_n\{f(u)\}\big|_{u_k = a_k t^k}.$$
The $a_n$ dominate the convergence. For simplicity of notation, let $A_n^{(\nu)}(a_0, \ldots, a_n)$ denote the $A_n$ for $f(u) = u^\nu$. Then, if $f(u) = \sum_{\nu=0}^{\infty} \alpha_\nu u^\nu$,
$$f(u) = \alpha_0 + \sum_{n=0}^{\infty} t^n \sum_{\nu=1}^{\infty} \alpha_\nu\,A_n^{(\nu)}(a_0, \ldots, a_n).$$
THEOREM: If $u = \sum_{n=0}^{\infty} a_n t^n$ and $f(u) = \sum_{\nu=0}^{\infty} \alpha_\nu u^\nu$ are convergent, then $f(u) = \sum_{n=0}^{\infty} b_n t^n$ is convergent, where
$$b_0 = \sum_{\nu=0}^{\infty} \alpha_\nu\,a_0^\nu \qquad\text{and}\qquad b_n = \sum_{\nu=1}^{\infty} \alpha_\nu\,A_n^{(\nu)}(a_0, \ldots, a_n) \quad (n \geq 1).$$
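A minimal computational reading of the theorem (our sketch; `alpha` holds the coefficients of $f$ in powers of $u$, `a` those of $u$ in powers of $t$) is plain truncated composition of power series, reproducing the earlier $(\tan^{-1}t)^2$ example:

```python
import numpy as np
from numpy.polynomial import polynomial as P

def compose_series(alpha, a, N):
    """Coefficients b_n of f(u(t)) where f(u) = sum_v alpha[v]*u**v and
    u(t) = sum_n a[n]*t**n, truncated to N terms; Horner's scheme with
    every product truncated to order N."""
    b = np.zeros(N)
    for av in reversed(alpha):
        b = P.polymul(b, a)[:N]          # b <- b * u, truncated
        b = np.pad(b, (0, N - len(b)))   # keep fixed length N
        b[0] += av                       # b <- b + alpha_v
    return b

# f(u) = u**2 composed with u = arctan t = t - t^3/3 + t^5/5 - ...
b = compose_series([0, 0, 1], [0, 1, 0, -1/3, 0, 1/5], 6)
```

The result, $t^2 - \tfrac{2}{3}t^4 + \cdots$, matches the series transformation quoted earlier in the chapter.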
This is useful in the following problem. Consider (using modified decomposition) the nonlinear equation
$$\frac{d^2u}{dt^2} + \sum_{n=0}^{\infty} \alpha_n u^n = \sum_{n=0}^{\infty} g_n t^n,$$
with $u(0) = c_0$ and $u'(0) = c_1$ and $Nu = \sum_{n=0}^{\infty} \alpha_n u^n$. Substituting
$$\frac{d^2u}{dt^2} = \sum_{n=0}^{\infty} (n+1)(n+2)\,a_{n+2}\,t^n$$
and using the theorem above, $\sum_{n=0}^{\infty} \alpha_n u^n = \sum_{n=0}^{\infty} b_n t^n$ with the given formulas for $b_n$, we have
$$\sum_{n=0}^{\infty} (n+1)(n+2)\,a_{n+2}\,t^n + \sum_{n=0}^{\infty} b_n t^n = \sum_{n=0}^{\infty} g_n t^n.$$
Equating $g_n = (n+1)(n+2)\,a_{n+2} + b_n$, we have found the coefficients, with $u = \sum_{n=0}^{\infty} a_n t^n$. (Of course, we can solve the problem by decomposition as well as by modified decomposition.) Let us consider, as a generic example, a partial differential equation in the operator form of the decomposition method:
$$L_x u + L_y u + Nu = 0,$$
where $L_x$ is a linear differential operator with respect to $x$, $L_y$ is a linear differential operator with respect to $y$, and $Nu$ is a nonlinear term. The $x$-dimension partial solution is
$$u = \Phi_x - L_x^{-1}L_y u - L_x^{-1}Nu,$$
where $L_x\Phi_x = 0$. Assume the solution in the form
$$u = \sum_{m=0}^{\infty} \zeta_m(y)\,x^m.$$
Write also $Nu = f(u) = \sum_{m=0}^{\infty} A_m(y)\,x^m$, where the $A_m(y) = A_m(\zeta_0(y), \ldots, \zeta_m(y))$ are our polynomials. Substituting, and assuming $L_x$ is a second-order differential operator,
$$u = k_0(y) + x\,k_1(y) - L_x^{-1}\sum_{m=0}^{\infty}\left[L_y\,\zeta_m(y) + A_m(y)\right] x^m,$$
where the $k_0$ and $k_1$ can be determined from the given conditions. We finally get recurrence relations for the coefficients $\zeta_m$:
$$\zeta_0(y) = k_0(y), \qquad \zeta_1(y) = k_1(y),$$
and for $m \geq 2$,
$$\zeta_m(y) = -\frac{L_y\,\zeta_{m-2}(y) + A_{m-2}(y)}{m(m-1)},$$
where $A_m(y) = A_m(\zeta_0(y), \ldots, \zeta_m(y))$. The $y$-dimension partial solution is similarly obtained. Assuming
$$u = \sum_{n=0}^{\infty} \eta_n(x)\,y^n, \qquad Nu = \sum_{n=0}^{\infty} A_n(x)\,y^n,$$
where $A_n(x) = A_n(\eta_0(x), \ldots, \eta_n(x))$, and proceeding as for the $x$ partial solution, we now get the recurrence relation
$$\eta_0(x) = c_0(x), \qquad \eta_1(x) = c_1(x),$$
and for $n \geq 2$,
$$\eta_n(x) = -\frac{L_x\,\eta_{n-2}(x) + A_{n-2}(x)}{n(n-1)}.$$
REMARK: When $k_0(y)$ and $k_1(y)$ are expressed as power series in $y$, and $c_0(x)$ and $c_1(x)$ are expressed as power series in $x$, the series solutions can be written as double series in $x$ and $y$.
Consider as an example the equation*
$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + p(x)u + \alpha(x)\,u\,f(u) = g(x).$$
Since we have chosen a two-dimensional case with the conditions $u(0,y) = C_0(y)$ and
$$\frac{\partial u(0,y)}{\partial x} = C_1(y),$$
we assume the solution in the form
$$u = \sum_{n=0}^{\infty}\sum_{m=0}^{\infty} c_{n,m}\,x^n y^m,$$
and similarly write the coefficient functions and the nonlinear term as double series. (An ordinary differential equation such as $d^2u/dx^2 + p(x)u + \alpha(x)f(u) = g(x)$ becomes a special case, as do linear cases of both the ordinary and partial differential equations. We use a single series; for a three-dimensional case, we use a triple series.) We write the differential equation in our usual decomposition form as
$$L_x u + L_y u + Ru + Nu = g.$$
*This could also be solved with boundary conditions.
We can use either the $x$ or the $y$ partial solution. Using the $x$ partial solution, for which we have stated conditions, we operate on both sides with $L_x^{-1}$ to write
$$u = C_0(y) + x\,C_1(y) + L_x^{-1}g - L_x^{-1}L_y u - L_x^{-1}Ru - L_x^{-1}Nu,$$
where $L_x^{-1}$ is the two-fold definite integration from 0 to $x$. We now compute each term as a double series: $L_x^{-1}g$, $L_y u$, $L_x^{-1}L_y u$, $Ru$, and $Nu$, where the polynomials $A_{n,m}(c_{0,0}, \ldots, c_{n,m})$ are obtained by transformation of series. We have
$$Nu = u\,f(u) = \left\{\sum_{n=0}^{\infty}\sum_{m=0}^{\infty} c_{n,m}\,x^n y^m\right\}\left\{\sum_{n=0}^{\infty}\sum_{m=0}^{\infty} A_{n,m}\,x^n y^m\right\},$$
a Cauchy product of double series. We also write $C_0(y) = \sum_{m=0}^{\infty} C_{0,m}\,y^m$ and $C_1(y) = \sum_{m=0}^{\infty} C_{1,m}\,y^m$, so the given conditions determine the coefficients $c_{0,m}$ and $c_{1,m}$. We now equate coefficients of like powers. We are using the $x$ partial solution; hence, we are particularly interested in powers of $x$. For $n = 0$, $c_{0,m} = C_{0,m}$; for $n = 1$, $c_{1,m} = C_{1,m}$; for $n \geq 2$, we have for like powers of $y^m$ a recurrence relation yielding the coefficients, and can now write
$$u(x,y) = \sum_{n=0}^{\infty}\sum_{m=0}^{\infty} c_{n,m}\,x^n y^m,$$
since the coefficients are determined and the $A_{n,m}$ can be found. Since $C_{0,0}, C_{0,1}, C_{0,2}, \ldots$ are known by decomposition of $C_0(y)$, we can find the other components; e.g., $c_{2,1}$ depends on $c_{0,1}$. Similarly, $C_1(y)$ yields components $c_{1,m}$ for all $m$, so $c_{3,1}$, for example, is found from $c_{1,1}$. The linear cases ($\alpha = 0$) are considerably easier, since the $A_{n,m}$ become unnecessary. Also the recurrence relation simplifies if $g = 0$ or $p = 0$. For example, if we consider the equation $\partial^2 u/\partial x^2 + \partial^2 u/\partial y^2 = 0$, we have
$$c_{n+2,m} = -\frac{(m+1)(m+2)\,c_{n,m+2}}{(n+1)(n+2)}.$$
MODIFIED DECOMPOSITION SERIES AND BOUNDARY-VALUE PROBLEMS: The "modified decomposition" series solutions have been found for initial-value problems by incorporating and adapting ideas of the decomposition method. Now, using the double decomposition technique discussed in recent papers, the procedure can be further generalized to treat initial-value and boundary-value problems in a similar and computationally efficient formulation with an acceleration of convergence. We will consider some progressively more complicated problems.
LINEAR (HOMOGENEOUS) ORDINARY DIFFERENTIAL EQUATIONS: Consider the example $d^2u/dx^2 + p\,u = 0$ with Dirichlet conditions $u(x = \xi_i) = b_i$ for $i = 1, 2$. Let $p$ be a constant (to simplify the discussion) and seek the solution in the form of a Maclaurin series $u = \sum_{n=0}^{\infty} a_n x^n$. In the usual operator form for decomposition solutions, this equation is written $Lu + Ru = 0$ where, in this case, $L = d^2/dx^2$ and $R = p$. (Of course, the method was developed for more general equations.) Now write $L^{-1}Lu = -L^{-1}Ru$, where $L^{-1}$ is a two-fold indefinite integration yielding $u - c_0 - c_1x$; hence
$$u = \sum_{n=0}^{\infty} a_n x^n = c_0 + x\,c_1 - p\int\!\!\int \sum_{n=0}^{\infty} a_n x^n\,dx\,dx = c_0 + x\,c_1 - p\sum_{n=0}^{\infty} \frac{a_n\,x^{n+2}}{(n+1)(n+2)}.$$
Equating coefficients, $a_0 = c_0$ and $a_1 = c_1$. For $n \geq 2$ we have the recurrence relation
$$a_n = -\frac{p\,a_{n-2}}{n(n-1)}.$$
Using double decomposition,
$$a_n = \sum_{m=0}^{\infty} a_n^{(m)}, \qquad c_0 = \sum_{m=0}^{\infty} c_0^{(m)}, \qquad c_1 = \sum_{m=0}^{\infty} c_1^{(m)}.$$
Also $a_0^{(m)} = c_0^{(m)}$, $a_1^{(m)} = c_1^{(m)}$, and for $n \geq 2$, $a_n^{(m)} = -p\,a_{n-2}^{(m)}/n(n-1)$, achieving a decomposition of the recurrence relation by staggered summation. (See Appendix II.) We use the staggered summation to rearrange the double decomposition components of $u$ into a new series, achieving a decomposition suitable for boundary-value problems. (The switching of $n$ and $m$ makes the result computable; only the sum matters, since the components are not unique.) We must now determine $c_0^{(m)}$ and $c_1^{(m)}$ by use of the boundary conditions.
Instead of $u(x = \xi_1) = b_1$ and $u(x = \xi_2) = b_2$, we use the approximant $\phi_{m+1}$ to the correct solution $u$. We can write this as $\phi_{m+1}\{u\}$, an operator on $u$. Similarly, the approximant to $a_n$ is
$$\phi_m\{a_n\} = \sum_{\nu=0}^{m-1} a_n^{(\nu)}.$$
We will have determined the solution if we can also compute the values of $c_0^{(m)}$ and $c_1^{(m)}$, matching the (approximant to the) solution with the boundary conditions:
$$\phi_{m+1}\{u\}(x = \xi_1) = b_1, \qquad \phi_{m+1}\{u\}(x = \xi_2) = b_2.$$
For the staggered series of $u$, the double-decomposition components are regrouped so that the $m$th staggered term carries powers of $x$ only up to $x^{2m+1}$, where $[n/2]$, the greatest integer not exceeding $n/2$, governs which components contribute to the coefficient of $x^n$. Thus the staggered summation has resulted in a different decomposition of $u$ suitable for boundary-value problems.

REMARK: The greatest integer function used here is for a second-order equation. For a third-order equation we will have $[n/3]$, and for fourth order we use $[n/4]$. Next, we derive the approximant for the staggered series of the solution; thus $\phi_{m+1}\{u\}$ is given by the partial sum of the staggered terms through index $m$.
We use the approximations of the boundary conditions in order to compute the constants $c_0^{(m)}$ and $c_1^{(m)}$, and consequently the components $a_0^{(m)}$ and $a_1^{(m)}$ of the Maclaurin series for the solution, thus determining the solution $u$.

SUMMARY: The basic steps are:
1) Compute $a_0^{(m)}$ and $a_1^{(m)}$ by matching the solution approximants to the boundaries, i.e.,
$$\phi_{m+1}\{u\}(x = \xi_1) = b_1, \qquad \phi_{m+1}\{u\}(x = \xi_2) = b_2.$$
2) Using the recurrence relations for $n \geq 2$, i.e., $a_n^{(m)} = -p\,a_{n-2}^{(m)}/n(n-1)$, compute more components $a_n^{(m)}$ to improve the accuracy of the solution approximants. Thus we have $a_n = \sum_{m=0}^{\infty} a_n^{(m)}$, and therefore
$$u = \sum_{n=0}^{\infty} a_n x^n,$$
satisfying both the equation $d^2u/dx^2 + p\,u = 0$ and the conditions $u(\xi_1) = b_1$, $u(\xi_2) = b_2$.
We can now accelerate convergence by going to an initial-value formatted solution without further matching of the solution to boundary conditions. Let
$$a_0 = \phi_{m+1}\{a_0\}, \qquad a_1 = \phi_{m+1}\{a_1\},$$
and for $n \geq 2$, $a_n = -p\,a_{n-2}/n(n-1)$, which gives us a new and improved $u_0$ to start as an initial-value problem, where $\phi_{m+1}\{a_0\}$ and $\phi_{m+1}\{a_1\}$ are the approximate initial values. Now $u = \sum_{n=0}^{\infty} u_n = \sum_{n=0}^{\infty} a_n x^n$.
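The summary steps can be sketched for the constant-coefficient case (our illustration, with the hypothetical choice $p = 4$, $u(0) = 0$, $u(\pi/4) = 1$, whose exact solution is $\sin 2x$): since the recurrence is linear in $(c_0, c_1)$, two basis series are generated once and a single boundary matching fixes the constants.

```python
import numpy as np
from numpy.polynomial import polynomial as P

def maclaurin_bvp(p, x1, b1, x2, b2, N=12):
    """u'' + p*u = 0 with u(x1) = b1, u(x2) = b2.  The recurrence
    a_n = -p*a_{n-2}/(n(n-1)) is linear in (c0, c1) = (a_0, a_1), so we
    build the two basis series once and match only at the boundaries."""
    def basis(a0, a1):
        a = np.zeros(N)
        a[0], a[1] = a0, a1
        for n in range(2, N):
            a[n] = -p * a[n - 2] / (n * (n - 1))
        return a
    e0, e1 = basis(1.0, 0.0), basis(0.0, 1.0)
    M = [[P.polyval(x1, e0), P.polyval(x1, e1)],
         [P.polyval(x2, e0), P.polyval(x2, e1)]]
    c0, c1 = np.linalg.solve(M, [b1, b2])
    return c0 * e0 + c1 * e1

a = maclaurin_bvp(4.0, 0.0, 0.0, np.pi / 4, 1.0)   # exact solution: sin(2x)
```

The returned Maclaurin coefficients can then be continued in the initial-value format, exactly as the text prescribes, with no further boundary evaluations.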
NONLINEAR ORDINARY DIFFERENTIAL EQUATIONS WITH CONSTANT COEFFICIENTS: Consider as a specific example $d^2u/dx^2 + \alpha f(u) = 0$, given the conditions $u(\xi_1) = b_1$ and $u(\xi_2) = b_2$. We seek a Maclaurin series solution $u = \sum_{n=0}^{\infty} a_n x^n$. The equation in operator form is $Lu + Nu = 0$ with $L = d^2/dx^2$ and $Nu = \alpha f(u)$. Operating with $L^{-1}$, we have
$$u = c_0 + x\,c_1 - \alpha\int\!\!\int f(u)\,dx\,dx.$$
Using the result for transformation of series,
$$f(u) = \sum_{n=0}^{\infty} A_n x^n,$$
where the $A_n$ are functions of $a_0, \ldots, a_n$ rather than $u_0, \ldots, u_n$, and calculating the integral $\int\!\!\int f(u)\,dx\,dx$, we therefore have
$$\sum_{n=0}^{\infty} a_n x^n = c_0 + x\,c_1 - \alpha\sum_{n=2}^{\infty} \frac{A_{n-2}\,x^n}{n(n-1)}.$$
Equating coefficients, $a_0 = c_0$, $a_1 = c_1$, and for $n \geq 2$,
$$a_n = -\frac{\alpha\,A_{n-2}}{n(n-1)}.$$
To match the boundary conditions, we apply decomposition to the integration constants and double decomposition to the coefficients of the Maclaurin series solution:
$$c_0 = \sum_{m=0}^{\infty} c_0^{(m)}, \qquad c_1 = \sum_{m=0}^{\infty} c_1^{(m)}, \qquad a_n = \sum_{m=0}^{\infty} a_n^{(m)}.$$
Substituting into the recurrence relations, $a_0^{(m)} = c_0^{(m)}$, $a_1^{(m)} = c_1^{(m)}$, and for $n \geq 2$,
$$a_n^{(m)} = -\frac{\alpha\,A_{n-2}^{(m)}}{n(n-1)}.$$
We have achieved a decomposition of the recurrence relations which will determine the solution once the components $c_0^{(m)}$ and $c_1^{(m)}$ are found. Next, we organize the solution into a form suitable for matching at the boundaries by rearranging the double decomposition components of the solution into a staggered series. This organizes the solution into the boundary-value format so that the $c_0^{(m)}$ and $c_1^{(m)}$ components are calculable by use of the approximants to the boundary conditions.
To stagger the series, we regroup the components as
$$\tilde u_m = a_0^{(m)} + a_1^{(m)}x + a_2^{(m-1)}x^2 + \cdots + a_{2m}^{(0)}x^{2m} + a_{2m+1}^{(0)}x^{2m+1},$$
which we can write compactly using $[n/2]$, the greatest integer not exceeding $n/2$ (for second-order equations). For third order we write $[n/3]$, and for fourth order $[n/4]$. We now have a different decomposition of $u$, namely $u = \sum_{m=0}^{\infty} \tilde u_m$, which is suitable for boundary-value problems.
Now we derive the solution approximants for the staggered terms:
$$\phi_{m+1}\{u\} = \sum_{\mu=0}^{m} \tilde u_\mu.$$
Hence, using the boundary-condition approximants
$$\phi_{m+1}\{u\}(x = \xi_1) = b_1, \qquad \phi_{m+1}\{u\}(x = \xi_2) = b_2,$$
we can compute the constants $c_0^{(m)}$, $c_1^{(m)}$ and $a_0^{(m)}$, $a_1^{(m)}$, the coefficients for the Maclaurin series determining the solution.

SUMMARY:
The basic steps are:
1) Compute $a_0^{(m)}$ and $a_1^{(m)}$ by matching the solution approximants to the boundaries.
2) Compute components $a_n^{(m)}$ to improve the accuracy of the solution approximants using the recurrence relations for $n \geq 2$; we get
$$a_n^{(m)} = -\frac{\alpha\,A_{n-2}^{(m)}}{n(n-1)},$$
and finally $u = \sum_{n=0}^{\infty} a_n x^n$.
3) Finally, we can transpose from the boundary-value format to the initial-value format with
$$a_0 = \phi_{m+1}\{a_0\}, \qquad a_1 = \phi_{m+1}\{a_1\},$$
and for $n \geq 2$, $a_n = -\alpha\,A_{n-2}/n(n-1)$. (Note this is for the case of zero input.) This avoids further need to match the solution to the boundary conditions. Now $u = \sum_{n=0}^{\infty} u_n = \sum_{n=0}^{\infty} a_n x^n$, and convergence is accelerated over that of the boundary-value formatted solution.
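Once $a_0$ and $a_1$ have been transposed to the initial-value format (step 3), the remaining coefficients follow from the recurrence alone. A sketch for the particular nonlinearity $f(u) = u^2$ (our choice for illustration; here the $A_n$ reduce to the Cauchy square of the coefficient sequence):

```python
import numpy as np

def nonlinear_series(c0, c1, alpha=1.0, N=8):
    """Maclaurin coefficients for u'' + alpha*u**2 = 0, u(0)=c0, u'(0)=c1.
    For f(u) = u**2 the A_n reduce to the Cauchy square
    A_n = sum_{v<=n} a_v * a_{n-v}, giving a_n = -alpha*A_{n-2}/(n(n-1))."""
    a = np.zeros(N)
    a[0], a[1] = c0, c1
    for n in range(2, N):
        A = sum(a[v] * a[n - 2 - v] for v in range(n - 1))
        a[n] = -alpha * A / (n * (n - 1))
    return a

# With u(0)=1, u'(0)=0: u = 1 - x**2/2 + x**4/12 - ...
a = nonlinear_series(1.0, 0.0)
```

No boundary evaluations occur inside the loop; all the matching work was done once, in fixing $a_0$ and $a_1$.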
LINEAR ORDINARY DIFFERENTIAL EQUATIONS WITH VARIABLE COEFFICIENTS: We consider the example $d^2u/dx^2 + p(x)u = 0$ with the conditions $u(\xi_1) = b_1$ and $u(\xi_2) = b_2$. Let
$$p(x) = \sum_{n=0}^{\infty} p_n x^n$$
and seek the solution in the form $u = \sum_{n=0}^{\infty} a_n x^n$. The solution is
$$u = c_0 + x\,c_1 - \int\!\!\int p(x)u\,dx\,dx.$$
The product $p(x)u$ is the Cauchy product
$$p(x)u = \sum_{n=0}^{\infty}\left[\sum_{\nu=0}^{n} p_\nu\,a_{n-\nu}\right] x^n.$$
We now have
$$\sum_{n=0}^{\infty} a_n x^n = c_0 + x\,c_1 - \sum_{n=2}^{\infty} \frac{x^n}{n(n-1)}\sum_{\nu=0}^{n-2} p_\nu\,a_{n-2-\nu}.$$
Consequently $a_0 = c_0$ and $a_1 = c_1$, and for $n \geq 2$,
$$a_n = -\frac{\sum_{\nu=0}^{n-2} p_\nu\,a_{n-2-\nu}}{n(n-1)}.$$
We proceed with double decomposition: $a_0^{(m)} = c_0^{(m)}$ and $a_1^{(m)} = c_1^{(m)}$. The $c_0^{(m)}$ and $c_1^{(m)}$ are determined using the boundary conditions, and for $n \geq 2$,
$$a_n^{(m)} = -\frac{\sum_{\nu=0}^{n-2} p_\nu\,a_{n-2-\nu}^{(m)}}{n(n-1)},$$
achieving a decomposition of the recurrence relations. Then, computing $a_n = \sum_{m=0}^{\infty} a_n^{(m)}$, we have the solution $u = \sum_{n=0}^{\infty} a_n x^n$.
HOMOGENEOUS NONLINEAR ORDINARY DIFFERENTIAL EQUATIONS WITH VARIABLE COEFFICIENTS: Consider the example $d^2u/dx^2 + \alpha(x)f(u) = 0$ with the conditions $u(\xi_1) = b_1$ and $u(\xi_2) = b_2$. Let $\alpha(x) = \sum_{n=0}^{\infty} \alpha_n x^n$ and seek the solution in the form $u = \sum_{n=0}^{\infty} a_n x^n$, i.e., a Maclaurin series. Let $Nu = \alpha(x)f(u)$ and write $Lu + Nu = 0$, which leads to the solution
$$u = c_0 + x\,c_1 - \int\!\!\int \alpha(x)f(u)\,dx\,dx.$$
To calculate this integral (which is an indefinite integration for every iteration), we first use the result for transformation of series,
$$f(u) = \sum_{n=0}^{\infty} A_n x^n,$$
where $A_n = A_n(a_0, \ldots, a_n)$. Then
$$\alpha(x)f(u) = \sum_{n=0}^{\infty}\left[\sum_{\nu=0}^{n} \alpha_\nu\,A_{n-\nu}\right] x^n,$$
and we now have
$$\sum_{n=0}^{\infty} a_n x^n = c_0 + x\,c_1 - \sum_{n=2}^{\infty} \frac{x^n}{n(n-1)}\sum_{\nu=0}^{n-2} \alpha_\nu\,A_{n-2-\nu}.$$
Equating coefficients, $a_0 = c_0$ and $a_1 = c_1$, and for $n \geq 2$,
$$a_n = -\frac{\sum_{\nu=0}^{n-2} \alpha_\nu\,A_{n-2-\nu}}{n(n-1)}.$$
By double decomposition and matching at the boundaries, we determine $c_0^{(m)}$ and $c_1^{(m)}$, and for $n \geq 2$,
$$a_n^{(m)} = -\frac{\sum_{\nu=0}^{n-2} \alpha_\nu\,A_{n-2-\nu}^{(m)}}{n(n-1)}.$$
Then, computing $a_n = \sum_{m=0}^{\infty} a_n^{(m)}$, we get $u = \sum_{n=0}^{\infty} a_n x^n$.
HOMOGENEOUS NONLINEAR ORDINARY DIFFERENTIAL EQUATIONS WITH VARIABLE COEFFICIENTS FOR LINEAR AND NONLINEAR TERMS: Consider a specific example, $d^2u/dx^2 + p(x)u + \alpha(x)f(u) = 0$, with Dirichlet conditions $u(\xi_1) = b_1$ and $u(\xi_2) = b_2$. We seek a solution in Maclaurin series form $u = \sum_{n=0}^{\infty} a_n x^n$ satisfying the boundary conditions. In the operator format of the decomposition method, the above equation is written $Lu + Ru + Nu = 0$ with $L = d^2/dx^2$, $R = p(x)$, and $Nu = \alpha(x)f(u)$. Operating with $L^{-1}$,
$$u = c_0 + x\,c_1 - \int\!\!\int \left[p(x)u + \alpha(x)f(u)\right] dx\,dx,$$
where this is understood to be an indefinite integration for every iteration, since a nonlinear function is involved. Following the earlier examples, we can write $a_0^{(m)} = c_0^{(m)}$ and $a_1^{(m)} = c_1^{(m)}$, with $c_0^{(m)}$ and $c_1^{(m)}$ to be determined by the specified conditions
$$\phi_{m+1}\{u\}(x = \xi_1) = b_1, \qquad \phi_{m+1}\{u\}(x = \xi_2) = b_2,$$
and for $n \geq 2$,
$$a_n^{(m)} = -\frac{\sum_{\nu=0}^{n-2}\left[p_\nu\,a_{n-2-\nu}^{(m)} + \alpha_\nu\,A_{n-2-\nu}^{(m)}\right]}{n(n-1)},$$
which gives us a decomposition of the recurrence relation. Then, computing $a_n = \sum_{m=0}^{\infty} a_n^{(m)}$, we have the solution $u = \sum_{n=0}^{\infty} a_n x^n$.
COMMENTS: Comparison of initial-value format and boundary-value format:

1) Initial-value problem, formatted solution u^(I.V.) = ∑_{m=0}^∞ φ_m^(I.V.), where the components are determined from the initial data.

2) Boundary-value problem, formatted solution u^(B.V.) = ∑_{m=0}^∞ φ_m^(B.V.), where the components are determined from the matching coefficients.

We remark further that u_n^(B.V.) ≠ u_n^(I.V.) and φ_n^(B.V.) ≠ φ_n^(I.V.); but as n becomes sufficiently large, φ_n^(B.V.) becomes numerically equal to φ_n^(I.V.), i.e.,

u = lim_{n→∞} φ_n^(I.V.) = lim_{n→∞} φ_n^(B.V.).
Thus the evident difference between the two is in the organization of the components of the double decomposition, i.e., the staggering of the double series summations makes it possible to calculate the matching coefficients. The procedure is quite general and will work for a wide class of problems. It is to be emphasized that the series resulting from decomposition is not a Maclaurin series. It is actually a generalized Taylor series about a function rather than about a point, which reduces in trivial cases to the well-known series. Despite the improved applicability of the Maclaurin series with the use of the A_n polynomials and decomposition techniques, the decomposition series is still superior in convergence properties to the modified decomposition series.
SOLVING EQUATIONS WITH DIFFICULT NONLINEARITIES: Consider a second-order (nonlinear) ordinary differential equation in the form u'' + α(t)Γ(u) = β(t) with initial conditions u(0) = c₀ and u'(0) = c₁. We suppose Γ(u) is obtained by curve-fitting or leads to difficult computation of the A_n polynomials. Or, in a function such as Γ(u) = sin u, we may prefer to work with the powers of u. We, therefore, write
Γ(u) = ∑_{n=0}^∞ A_n(u₀, …, u_n).

Using modified decomposition, we have assumed that u is written as a convergent series u = ∑_{n=0}^∞ a_n t^n with u_n = a_n t^n. Hence A_n(u₀, …, u_n) = t^n A_n(a₀, …, a_n), so

Γ(u) = ∑_{n=0}^∞ A_n(a₀, …, a_n) t^n.

If we write for clarity A_n{f(u)} for the A_n representing f(u), the A_n{Γ(u)} are functions of a₀, …, a_n. Since Γ(u) = ∑_{n=0}^∞ γ_n u^n, the A_m{u^n} are also functions of a₀, …, a_n. Substituting, we have

Γ(u) = ∑_{m=0}^∞ [∑_n γ_n A_m{u^n}] t^m,

which we compare with Γ(u) = ∑_{m=0}^∞ A_m{Γ(u)} t^m, so that

A_m{Γ(u)} = ∑_n γ_n A_m{u^n}.
Returning to the differential equation u'' + α(t)Γ(u) = β(t) and substituting the respective series, we have

∑_{n=0}^∞ (n+1)(n+2) a_{n+2} t^n + [∑_{n=0}^∞ α_n t^n][∑_{n=0}^∞ A_n{Γ(u)} t^n] = ∑_{n=0}^∞ β_n t^n.

Performing the Cauchy product,

[∑_{n=0}^∞ α_n t^n][∑_{n=0}^∞ A_n{Γ(u)} t^n] = ∑_{n=0}^∞ t^n ∑_{ν=0}^n α_{n−ν} A_ν{Γ(u)},

so that

∑_{n=0}^∞ (n+1)(n+2) a_{n+2} t^n + ∑_{n=0}^∞ t^n ∑_{ν=0}^n α_{n−ν} A_ν{Γ(u)} = ∑_{n=0}^∞ β_n t^n,

where we can now substitute A_ν{Γ(u)} = ∑_m γ_m A_ν{u^m}, which makes it easy to compute the A_n for Γ(u). We obtain the recursion in the coefficients, where the A_n = A_n(a₀, …, a_n) are functions of the coefficients of the series for u. Equating coefficients of like powers of the independent variable t and solving for a_{n+2}, we have

a_{n+2} = [β_n − ∑_{ν=0}^n α_{n−ν} A_ν{Γ(u)}]/((n+1)(n+2))
with a₀ = c₀ and a₁ = c₁, so we can solve the differential equation for u; the A_n can be obtained by a rapid computer calculation by decomposition into simple integral powers of u, just as the choice of L in the decomposition method led to simple integrations, avoiding difficult Green's functions.
REFERENCES
1. G. Adomian and R. Rach, Modified Decomposition Solution of Nonlinear Partial Differential Equations, Appl. Math. Lett., 5, (29-30) (1992).
2. G. Adomian, R. Rach, and R. Meyers, A Modified Decomposition, Comput. Math. Applic., 23, (17-23) (1992).
3. G. Adomian and R. Rach, Nonlinear Transformation of Series, Appl. Math. Lett., 4, (69-71) (1991).
APPLICATIONS OF MODIFIED DECOMPOSITION

We now consider some applications of modified decomposition. The Duffing equation is an interesting example of an ordinary differential equation; it has important applications further discussed in Chapters 13 and 12. The equation is given as

u'' + αu' + βu + γu³ = g(t)
CONSTANT COEFFICIENT CASE: We assume that the solution, as well as the excitation, is in Maclaurin series form. Then

u = ∑_{m=0}^∞ a_m t^m,  g = ∑_{m=0}^∞ g_m t^m,  u³ = ∑_{m=0}^∞ A_m(a₀, …, a_m) t^m.

Then

(m+1)(m+2) a_{m+2} + α(m+1) a_{m+1} + β a_m + γ A_m = g_m.

We can now write the recursion relations, so the a_m are determined (dependent on the A_m) and we can write u(t) = ∑_{m=0}^∞ a_m t^m. Thus u₀ = a₀, u₁ = a₁t, … .
a₀ = u(0)
a₁ = u'(0)
a₂ = (g₀ − (1)α a₁ − β a₀ − γ A₀)/(1)(2)
a₃ = (g₁ − (2)α a₂ − β a₁ − γ A₁)/(2)(3)
a₄ = (g₂ − (3)α a₃ − β a₂ − γ A₂)/(3)(4)
a₅ = (g₃ − (4)α a₄ − β a₃ − γ A₃)/(4)(5)
a₆ = (g₄ − (5)α a₅ − β a₄ − γ A₄)/(5)(6)
a₇ = (g₅ − (6)α a₆ − β a₅ − γ A₅)/(6)(7)
a₈ = (g₆ − (7)α a₇ − β a₆ − γ A₆)/(7)(8)
a₉ = (g₇ − (8)α a₈ − β a₇ − γ A₇)/(8)(9)
a₁₀ = (g₈ − (9)α a₉ − β a₈ − γ A₈)/(9)(10)
The approximants to the solution will be given by φ_{m+1} = ∑_{n=0}^m a_n t^n.
For convenience of the reader we list the Adomian polynomials for the nonlinearity in the Duffing equation:
A₀ = a₀³
A₁ = 3a₀²a₁
A₂ = 3a₀²a₂ + 3a₀a₁²
A₃ = a₁³ + 3a₀²a₃ + 6a₀a₁a₂
A₄ = 3a₀²a₄ + 6a₀a₁a₃ + 3a₀a₂² + 3a₁²a₂
A₅ = 3a₀²a₅ + 6a₀a₁a₄ + 6a₀a₂a₃ + 3a₁²a₃ + 3a₁a₂²
A₆ = a₂³ + 3a₀²a₆ + 6a₀a₁a₅ + 6a₀a₂a₄ + 3a₀a₃² + 3a₁²a₄ + 6a₁a₂a₃
A₇ = 3a₀²a₇ + 6a₀a₁a₆ + 6a₀a₂a₅ + 6a₀a₃a₄ + 3a₁²a₅ + 6a₁a₂a₄ + 3a₁a₃² + 3a₂²a₃
A₈ = 3a₀²a₈ + 6a₀a₁a₇ + 6a₀a₂a₆ + 6a₀a₃a₅ + 3a₀a₄² + 3a₁²a₆ + 6a₁a₂a₅ + 6a₁a₃a₄ + 3a₂²a₄ + 3a₂a₃²
A₉ = a₃³ + 3a₀²a₉ + 6a₀a₁a₈ + 6a₀a₂a₇ + 6a₀a₃a₆ + 6a₀a₄a₅ + 3a₁²a₇ + 6a₁a₂a₆ + 6a₁a₃a₅ + 3a₁a₄² + 3a₂²a₅ + 6a₂a₃a₄
A₁₀ = 3a₀²a₁₀ + 6a₀a₁a₉ + 6a₀a₂a₈ + 6a₀a₃a₇ + 6a₀a₄a₆ + 3a₀a₅² + 3a₁²a₈ + 6a₁a₂a₇ + 6a₁a₃a₆ + 6a₁a₄a₅ + 3a₂²a₆ + 6a₂a₃a₅ + 3a₂a₄² + 3a₃²a₄
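These tabulated polynomials are simply the coefficients of the Cauchy cube of the series for u, so the table can be generated (or checked) mechanically. The snippet below is our own illustration, not part of the text's algorithm:

```python
def cube_A(a, n):
    """A_n for the nonlinearity u^3 when u = sum a_k t^k:
    the sum of a_i * a_j * a_k over all index triples with i + j + k = n."""
    return sum(a[i] * a[j] * a[n - i - j]
               for i in range(n + 1) for j in range(n - i + 1))
```

For example, cube_A(a, 4) reproduces the tabulated A₄ = 3a₀²a₄ + 6a₀a₁a₃ + 3a₀a₂² + 3a₁²a₂.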
We can substitute the expressions for the A_m into the expressions for the a_m, but it is unnecessary, since it is more practical to numerically evaluate the A_m beforehand and then to determine the a_m. If one prefers to work with the final expressions for the a_m, they are:
a₀ = u(0)
a₁ = u'(0)
a₂ = (g₀ − α(1)a₁ − β a₀ − γ a₀³)/(1)(2)
a₃ = (g₁ − α(2)a₂ − β a₁ − γ(3a₀²a₁))/(2)(3)
a₄ = (g₂ − α(3)a₃ − β a₂ − γ(3a₀²a₂ + 3a₀a₁²))/(3)(4)
a₅ = (g₃ − α(4)a₄ − β a₃ − γ(a₁³ + 3a₀²a₃ + 6a₀a₁a₂))/(4)(5)
a₆ = (g₄ − α(5)a₅ − β a₄ − γ(3a₀²a₄ + 6a₀a₁a₃ + 3a₀a₂² + 3a₁²a₂))/(5)(6)
a₇ = (g₅ − α(6)a₆ − β a₅ − γ(3a₀²a₅ + 6a₀a₁a₄ + 6a₀a₂a₃ + 3a₁²a₃ + 3a₁a₂²))/(6)(7)
a₈ = (g₆ − α(7)a₇ − β a₆ − γ(a₂³ + 3a₀²a₆ + 6a₀a₁a₅ + 6a₀a₂a₄ + 3a₀a₃² + 3a₁²a₄ + 6a₁a₂a₃))/(7)(8)
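The recursion just listed is directly programmable. The following sketch is our own illustration (function names and the numerical check are not from the text) of the constant-coefficient Duffing recursion:

```python
def cube_A(a, m):
    """Adomian polynomial A_m for u^3 (Cauchy-cube coefficient of t^m)."""
    return sum(a[i] * a[j] * a[m - i - j]
               for i in range(m + 1) for j in range(m - i + 1))

def duffing_coeffs(u0, du0, alpha, beta, gamma, g, N):
    """Maclaurin coefficients a_0..a_N of u for
    u'' + alpha*u' + beta*u + gamma*u^3 = g(t),
    where g[m] are the Maclaurin coefficients of the excitation."""
    a = [u0, du0]
    for m in range(N - 1):
        a.append((g[m] - alpha * (m + 1) * a[m + 1] - beta * a[m]
                  - gamma * cube_A(a, m)) / ((m + 1) * (m + 2)))
    return a
```

With γ = 0, β = 1, g = 0 and u(0) = 1, u'(0) = 0 the routine reproduces the cosine series, which is a quick sanity check of the recursion.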
VARIABLE COEFFICIENT CASE OF THE DUFFING EQUATION: With the coefficients now functions of t, write

α(t) = ∑_{m=0}^∞ α_m t^m,  β(t) = ∑_{m=0}^∞ β_m t^m,  γ(t) = ∑_{m=0}^∞ γ_m t^m.

Carrying out the indicated Cauchy products, recursion relations can be given so that u(t) = ∑_{m=0}^∞ a_m t^m is determined, since the A_m are known. The computation is easily programmable. We can list the a_m as follows:
REMARK: Possible areas of further investigation include regions of convergence, numerical algorithms for computation, application of convergence acceleration transforms*, and stochastic versions of the Duffing equation where we solve for first- and second-order statistics of u. We can, of course, consider special cases of our solution such as

(i) u(0) = k₁ and u'(0) = 0 with g = 0
(ii) u(0) = 0 and g = 0 with u'(0) = k₂
(iii) u(0) = 0 and u'(0) = 0 with g = g₀, a constant
(iv) u(0) = u'(0) = 0, g = g₀ + g₁t or g = g₀ + g₁t + g₂t²
(v) u(0) = k₁, u'(0) = k₂, g = ∑_{n=0}^∞ g_n t^n
We might then use double decomposition, writing the components with doubly indexed matching coefficients. We emphasize that these problems are also solvable by usual decomposition. Also, we note that the rate of convergence of the modified decomposition only approaches that of decomposition when the excitation approaches zero. The reason for this is that the initial term contains only the first term of the series for g; only when we go to sufficient terms of u will we have enough of the input to have as good an approximation.
* Such as Padé approximants, Shanks and Wynn transforms, and the Euler and Van Wijngaarden transforms.
APPLICATION TO LINEAR PARTIAL DIFFERENTIAL EQUATIONS: Suppose we begin with the equation L_x u + L_y u = 0 and, to be specific, choose L_x = ∂²/∂x² and L_y = ∂²/∂y². Following the decomposition procedure, we write the equation for the x partial solution:

u = Φ_x − L_x⁻¹ L_y u,

where Φ_x = ξ₀(y) + xξ₁(y) must be found from the given boundary conditions and L_x⁻¹ is defined as the two-fold (indefinite) integral ∫∫(·) dx dx. It has been
demonstrated previously that solutions are easily determined by decomposition. Now, however, we use the modified decomposition method which we have discussed for ordinary differential equations. Thus we let

u = ∑_{m=0}^∞ a_m(y) x^m, where a_m(y) = ∑_{n=0}^∞ c_{m,n} y^n.

We now have

u = Φ_x − L_x⁻¹ ∑_{m=0}^∞ a_m''(y) x^m.

The coefficients are identified by

a₀(y) = ξ₀(y), a₁(y) = ξ₁(y),

and for m ≥ 2 by the recurrence relation

a_m(y) = −(d²/dy²) a_{m−2}(y)/(m(m−1)).

We can equally well consider the y partial solution by writing

u = Φ_y − L_y⁻¹ L_x u,

where Φ_y = η₀(x) + yη₁(x) and u = ∑_{n=0}^∞ b_n(x) y^n with b_n(x) = ∑_{m=0}^∞ c_{m,n} x^m, and L_y⁻¹ = ∫∫(·) dy dy is an indefinite integration operator. Now

u = Φ_y − L_y⁻¹ ∑_{n=0}^∞ b_n''(x) y^n.

We get immediately

b₀(x) = η₀(x), b₁(x) = η₁(x),

and for n ≥ 2,

b_n(x) = −(d²/dx²) b_{n−2}(x)/(n(n−1)).
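For the Laplace case this x-partial recursion reduces to repeated differentiation of the boundary data, and is easy to run. The sketch below is our own illustration, with ξ₀ and ξ₁ given as polynomial coefficient lists in y:

```python
def d2(p):
    """Second derivative of a polynomial p[n] y^n, as a coefficient list."""
    return [n * (n - 1) * p[n] for n in range(2, len(p))]

def x_partial(xi0, xi1, M):
    """Coefficient functions a_m(y) (as polynomial coefficient lists) for
    u = sum_m a_m(y) x^m solving u_xx + u_yy = 0:
    a_0 = xi0, a_1 = xi1, and a_m = -a_{m-2}'' / (m(m-1)) for m >= 2."""
    a = [list(xi0), list(xi1)]
    for m in range(2, M + 1):
        a.append([-c / (m * (m - 1)) for c in d2(a[m - 2])])
    return a
```

For ξ₀ = y² and ξ₁ = 0 the recursion terminates after one step and recovers the harmonic function u = y² − x².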
We now consider the more general linear form

L_x u + L_y u + Ru = 0,

again with L_x = ∂²/∂x² and L_y = ∂²/∂y². For simplicity, we choose R = β. The x partial solution is given by

u = Φ_x − L_x⁻¹ L_y u − β L_x⁻¹ u,

where a_m(y) = ∑_{n=0}^∞ c_{m,n} y^n and L_x⁻¹ = ∫∫(·) dx dx is an indefinite integration operator. Now

u = Φ_x − L_x⁻¹ ∑_{m=0}^∞ [a_m''(y) + β a_m(y)] x^m,

so that

a₀(y) = ξ₀(y), a₁(y) = ξ₁(y),

and for m ≥ 2

a_m(y) = −[a_{m−2}''(y) + β a_{m−2}(y)]/(m(m−1)).

The y partial solution follows similarly:

b₀(x) = η₀(x), b₁(x) = η₁(x),

and for n ≥ 2

b_n(x) = −[b_{n−2}''(x) + β b_{n−2}(x)]/(n(n−1)).
We consider L_x u + L_y u + Ru = 0. Let R = β(x,y) as before. Then

u = Φ_x − L_x⁻¹ L_y u − L_x⁻¹ β(x,y)u,

where Φ_x = ξ₀(y) + xξ₁(y) and u = ∑_{m=0}^∞ a_m(y) x^m. We rewrite the bracketed quantities as series in x, performing the Cauchy product with β(x,y) = ∑_{m=0}^∞ β_m(y) x^m. Now, equating coefficients,

a₀(y) = ξ₀(y), a₁(y) = ξ₁(y),

and for m ≥ 2

a_m(y) = −[a_{m−2}''(y) + ∑_{ν=0}^{m−2} β_ν(y) a_{m−2−ν}(y)]/(m(m−1)).

The y partial solution follows similarly:

b₀(x) = η₀(x), b₁(x) = η₁(x),

and for n ≥ 2

b_n(x) = −[(d²/dx²) b_{n−2}(x) + ∑_{ν=0}^{n−2} β_ν(x) b_{n−2−ν}(x)]/(n(n−1)).

We have used

β(x,y) = ∑_{n=0}^∞ β_n(x) y^n, where β_n(x) = ∑_{m=0}^∞ β_{m,n} x^m.

Also, u = ∑_{n=0}^∞ b_n(x) y^n with b_n(x) = ∑_{m=0}^∞ c_{m,n} x^m.
APPLICATION TO NONLINEAR PARTIAL DIFFERENTIAL EQUATIONS: Consider L_x u + L_y u + Nu = 0 and the x partial solution with L_x = ∂²/∂x², L_y = ∂²/∂y², and L_x⁻¹ = ∫∫(·) dx dx. We let Nu = αf(u). (We have shown algorithms for considering such functions in Chapter 3.) We have now

u = Φ_x − L_x⁻¹ L_y u − L_x⁻¹ Nu,

where Φ_x = ξ₀(y) + xξ₁(y). Let u = ∑_{m=0}^∞ a_m(y) x^m. We have generally written f(u) = ∑_{n=0}^∞ A_n; however, we showed in the previous results on the transformation of series that if u = ∑_{m=0}^∞ a_m x^m we can then write

f(u) = ∑_{m=0}^∞ A_m(y) x^m,

where A_m(y) = A_m(a₀(y), …, a_m(y)), and we do so now. We now have

u = Φ_x − L_x⁻¹ ∑_{m=0}^∞ [a_m''(y) + α A_m(y)] x^m,

so that

a₀(y) = ξ₀(y), a₁(y) = ξ₁(y),

and for m ≥ 2

a_m(y) = −[a_{m−2}''(y) + α A_{m−2}(y)]/(m(m−1)).

The y partial solution is similarly obtained, letting u = ∑_{n=0}^∞ b_n(x) y^n, with B_n(x) = A_n(b₀(x), …, b_n(x)), where we have now used B_n instead of the usual A_n only to distinguish it from the previous set of polynomials.

GENERAL INHOMOGENEOUS PARTIAL DIFFERENTIAL EQUATIONS:
Consider the linear case:

L_x u + L_y u + Ru = g.

Then Φ_x = ξ₀(y) + xξ₁(y), so that a₀(y) = ξ₀(y), a₁(y) = ξ₁(y), and for m ≥ 2 the recurrence relation carries, in addition, the Maclaurin coefficients of g. Now the nonlinear case: again Φ_x = ξ₀(y) + xξ₁(y), with a₀(y) = ξ₀(y), a₁(y) = ξ₁(y), and for m ≥ 2 the recurrence follows as before, with the A_m(y) polynomials replacing the linear term. The solution is

u = ∑_{m=0}^∞ ∑_{n=0}^∞ a_{m,n} t^m x^n,

where a_{0,n} = c_n^(0) and a_{1,n} = c_n^(1). Consider an example of such an equation in the variables t and x; the solution is again u = ∑_{m=0}^∞ ∑_{n=0}^∞ a_{m,n} t^m x^n with the a_{m,n} given recursively.
The algorithm has a duality property since we can calculate either the t partial solution, as we have, or the x partial solution. Let's take β = 1 to write the corresponding recurrence for the x partial solution. If we have a boundary-value problem, we use the double decomposition technique for a few terms to get a good initial value, then rearrange and solve as an initial-value problem. Summarizing, the t-coordinate partial solution (we might call it the temporal format) is

u = ∑_{m=0}^∞ (∑_{n=0}^∞ a_{m,n} x^n) t^m = ∑_{m=0}^∞ a_m(x) t^m,

with the μth approximant

φ_μ = ∑_{m=0}^{μ−1} (∑_{n=0}^∞ a_{m,n} x^n) t^m,

while the spatial format of the solution (or x-coordinate partial solution) is

u = ∑_{n=0}^∞ (∑_{m=0}^∞ a_{m,n} t^m) x^n = ∑_{n=0}^∞ a_n(t) x^n,

with the νth approximant

φ_ν = ∑_{n=0}^{ν−1} (∑_{m=0}^∞ a_{m,n} t^m) x^n = ∑_{n=0}^{ν−1} a_n(t) x^n.
Each sequence of coefficients in the two formats, i.e., the a_m(x) and the a_n(t), has its own radius of convergence, and moreover, the solution itself is unique. As another example of a linear partial differential equation, consider
∂²u/∂t² + α ∂u/∂t + βu + γ ∂u/∂x − δ ∂²u/∂x² = ∑_{m=0}^∞ ∑_{n=0}^∞ g_{m,n} t^m x^n,  α, β, γ, δ constant.

We can now write the approximants

φ_{μ,ν} = ∑_{m=0}^{μ−1} ∑_{n=0}^{ν−1} a_{m,n} t^m x^n.
LINEAR PARTIAL DIFFERENTIAL EQUATION WITH VARIABLE COEFFICIENTS: Using modified decomposition we write the coefficients, as well as the solution and the excitation, in Maclaurin series form and proceed as before.

LINEAR PARTIAL DIFFERENTIAL EQUATION IN TWO SPATIAL DIMENSIONS AND ONE TEMPORAL DIMENSION: The solution by modified decomposition is

u = ∑_{m=0}^∞ ∑_{n=0}^∞ ∑_{ℓ=0}^∞ a_{m,n,ℓ} t^m x^n y^ℓ.
LINEAR PARTIAL DIFFERENTIAL EQUATION IN THREE SPATIAL DIMENSIONS AND ONE TEMPORAL DIMENSION:

∂²u/∂t² + α ∂u/∂t + βu + γ ∂u/∂x + δ ∂u/∂y + ε ∂u/∂z = g.

Solution by modified decomposition proceeds as above with a quadruple series in t, x, y, z.

Consider next an equation with a quadratic nonlinearity,

∂²u/∂t² + α ∂u/∂t + βu + γ ∂u/∂x + δ ∂²u/∂x² + ηu² = ∑_{m=0}^∞ ∑_{n=0}^∞ g_{m,n} t^m x^n.

Assuming constant coefficients, write

u = ∑_{m=0}^∞ ∑_{n=0}^∞ a_{m,n} t^m x^n,  u² = ∑_{m=0}^∞ ∑_{n=0}^∞ A_{m,n} t^m x^n,

where the A_{m,n} are our polynomials. The approximants will be

φ_{μ,ν} = ∑_{m=0}^{μ−1} ∑_{n=0}^{ν−1} a_{m,n} t^m x^n.
APPLICATION TO COUPLED PARTIAL DIFFERENTIAL EQUATIONS: Consider the coupled partial differential equations

∂²u/∂x² + ∂²u/∂t² + αv = β(x,t),

together with a companion equation of the same form for v with coupling term γu, and with (uncoupled) boundary conditions**

u(x₁,t) = ξ₁(t), u(x₂,t) = ξ₂(t)
v(x₁,t) = η₁(t), v(x₂,t) = η₂(t).

We will use the spatial format, hence the initial conditions are not used in this example. We assume α and γ are constants and

ξᵢ(t) = ∑_{n=0}^∞ ξ_{i,n} t^n, ηᵢ(t) = ∑_{n=0}^∞ η_{i,n} t^n.

** Note that the case of coupled boundary conditions is solvable.

Define

L_x⁻¹(·) = A(t) + xB(t) + ∫∫(·) dx dx

for the u solution and C(t) + xD(t) + ∫∫(·) dx dx for the v solution. (We point out also that we can use double decomposition and recast this as an initial-value or temporal format problem to accelerate convergence.)
Now we can write the decompositions u = ∑_{m=0}^∞ u_m and v = ∑_{m=0}^∞ v_m and the approximants

φ_{m+1}{u} = ∑_{ℓ=0}^m u_ℓ, φ_{m+1}{v} = ∑_{ℓ=0}^m v_ℓ.

The general decomposition components u_m and v_m are obtained recursively from the inverse operators above. The approximants must satisfy the boundary conditions; hence

φ_{m+1}{u}(x₁,t) = ξ₁(t), φ_{m+1}{u}(x₂,t) = ξ₂(t)
φ_{m+1}{v}(x₁,t) = η₁(t), φ_{m+1}{v}(x₂,t) = η₂(t).

Since

φ_{m+1}{u} = φ_m{u} + u_m
φ_{m+1}{v} = φ_m{v} + v_m,

we clearly have, for m ≥ 1,

u_m(x₁,t) = u_m(x₂,t) = 0
v_m(x₁,t) = v_m(x₂,t) = 0.
Summarizing,

u₀(x₁,t) = ξ₁(t), u₀(x₂,t) = ξ₂(t)
v₀(x₁,t) = η₁(t), v₀(x₂,t) = η₂(t),

and for m ≥ 1, u_m(x₁,t) = u_m(x₂,t) = v_m(x₁,t) = v_m(x₂,t) = 0. Write

A^(m)(t) = ∫∫(∂²/∂t²) u_{m−1}(x₁,t) dx dx + ∫∫ α v_{m−1}(x₁,t) dx dx,

and similarly at x₂, and

C^(m)(t) = ∫∫(∂²/∂t²) v_{m−1}(x₁,t) dx dx + ∫∫ γ u_{m−1}(x₁,t) dx dx,

and similarly at x₂. Consequently, we write the matrix equations for the pairs A_m(t), B_m(t) and C_m(t), D_m(t). Thus, we find the integration constants for each component.
Now we write the u₀, v₀ components of u and v and the φ₁ approximants for u and v:

u₀ = A₀(t) + xB₀(t) + ∫∫ β(x,t) dx dx.

Substituting

β(x,t) = ∑_{m=0}^∞ ∑_{n=0}^∞ β_{m,n} x^m t^n,

we have the double series for u₀. Similarly, with the corresponding expansion for the v equation, so that v₀ is obtained. We can now write u₀, v₀ in a convenient form as double series in x and t with coefficients determined by the boundary data. Next, we calculate u₁ and v₁ for the φ₂{u} and φ₂{v} approximants. Let A₁(t) = ∑_{n=0}^∞ A_{1,n} t^n.
We can now write

u₁ = ∑_{m=0}^∞ ∑_{n=0}^∞ a^(1)_{m,n} x^m t^n, v₁ = ∑_{m=0}^∞ ∑_{n=0}^∞ b^(1)_{m,n} x^m t^n,

where

a^(1)_{0,n} = c^(1)_{0,n}, a^(1)_{1,n} = c^(1)_{1,n},

and for m ≥ 2

a^(1)_{m,n} = −[(n+1)(n+2) a_{m−2,n+2} + α b_{m−2,n}]/(m(m−1)),

and

b^(1)_{0,n} = c'^(1)_{0,n}, b^(1)_{1,n} = c'^(1)_{1,n},

with the analogous recurrence for the b^(1)_{m,n}. Now we have the approximants φ₂{u} and φ₂{v}.
Now we write the general components u_m and v_m and the approximants φ_{m+1}{u} and φ_{m+1}{v}. Substituting the double series, we have the general recurrences, and finally the approximants

φ_{m+1}{u} = φ_m{u} + u_m, φ_{m+1}{v} = φ_m{v} + v_m.

If we write φ_{m+1}{u} = ∑_{ℓ=0}^m u_ℓ and φ_{m+1}{v} = ∑_{ℓ=0}^m v_ℓ and substitute, we can write the solutions as double series, and we note that the v result follows analogously.
Consider now a nonlinear coupled system:

∂²u/∂x² + ∂²u/∂t² + αuv = β(x,t),

with uncoupled boundary conditions***

u(x₁,t) = ξ₁(t), u(x₂,t) = ξ₂(t), v(x₁,t) = η₁(t), v(x₂,t) = η₂(t).

In operator form,

L_x u = β(x,t) − (∂²/∂t²)u − αuv.

Operating with the inverse operator (an indefinite integral operator),

u = u₀ − L_x⁻¹(∂²/∂t²)u − L_x⁻¹ αuv,

where u₀ = A₀(t) + xB₀(t) + ∫∫ β(x,t) dx dx. Let Nu = uv. This nonlinearity can be expressed in terms of the A_n polynomials:

uv = ∑_{n=0}^∞ Â_n, where Â_n = ∑_{ν=0}^n u_{n−ν} v_ν.

We have changed the notation of the polynomials slightly, i.e., to Â_n, so that it will not be confused with the integration constant A₀(t). The approximants are φ_m{u} = ∑_{ℓ=0}^{m−1} u_ℓ and φ_m{v} = ∑_{ℓ=0}^{m−1} v_ℓ, and the components u_m and v_m are given recursively. Since uv is a Cauchy product of the components, the term u_{m−1−ν} v_ν can be written explicitly, which is substituted into the equations for the u_m and v_m components.

EXERCISE: Generalize the algorithm for the Â_n polynomials to products such as u^M v^N, observing that u = ∑_{ℓ=0}^∞ u_ℓ and v = ∑_{ℓ=0}^∞ v_ℓ.

*** We note that the case of coupled boundary conditions is also solvable by the decomposition method.
EXERCISES:

1) Show the solution of the anharmonic oscillator, with u(0) = c₀ and u'(0) = c₁, is given by u = ∑_{m=0}^∞ a_m t^m, with a₀ = c₀ and a₁ = c₁ for m = 0, and for m > 0 by the appropriate recurrence, where

A_m^(p) = ∑_{ν=0}^m a_{m−ν}^(p−1) a_ν^(1)

gives the polynomials for the power nonlinearity.

2) Generalize the algorithm for the A_n polynomials to products such as u^M v^N, observing that u = ∑_{ℓ=0}^∞ u_ℓ and v = ∑_{ℓ=0}^∞ v_ℓ.
DECOMPOSITION SOLUTIONS FOR NEUMANN BOUNDARY CONDITIONS

For simplicity, consider a linear differential equation Lu + Ru = g where L = d²/dx² (and R can involve no differentiations higher than first-order). Assume conditions are given as

du/dx |_{x=b₁} = β₁
du/dx |_{x=b₂} = β₂.

The decomposition solution is u = Φ + L⁻¹g − L⁻¹Ru, where Φ satisfies LΦ = 0 and L⁻¹ is a pure two-fold integration (not involving constants). The derivative du/dx, or u', is given by

u' = Φ' + Ig − IRu,

where I is a single pure integration and is, of course, not equal to the two-fold integration operator L⁻¹. Returning now to the solution u, we have by decomposition:
u = ∑_{m=0}^∞ u_m, Φ = ∑_{m=0}^∞ Φ_m,

where we note the decomposition not only of u but also of Φ. Other decompositions are also sometimes useful. For example, when integrations such as L⁻¹Ru or L⁻¹g become difficult, R or g can be decomposed into a convenient series so that the individual terms become simpler integrations. Now u₀ = Φ₀ + L⁻¹g and

u_m = Φ_m − L⁻¹Ru_{m−1} = ∑_{n=0}^m (−L⁻¹R)^n Φ_{m−n} + (−L⁻¹R)^m L⁻¹g,

so that

u = ∑_{m=0}^∞ [∑_{n=0}^m (−L⁻¹R)^n Φ_{m−n} + (−L⁻¹R)^m L⁻¹g].
Differentiating u and noting that Ru can be written RI u', we have
We now solve the u' equation by decomposition just as we did for the u equation:

∑_{m=0}^∞ u'_m = ∑_{m=0}^∞ Φ'_m + Ig − IRI ∑_{m=0}^∞ u'_m.

Hence,

u'₀ = Φ'₀ + Ig,

and for m ≥ 1,

u'_m = Φ'_m − IRIu'_{m−1} = ∑_{n=0}^m (−IRI)^n Φ'_{m−n} + (−IRI)^m Ig,

so that

u' = ∑_{m=0}^∞ [∑_{n=0}^m (−IRI)^n Φ'_{m−n} + (−IRI)^m Ig].
We now need to determine Φ' to determine the constants of integration c₀ and c₁ involved in Φ', or c_{0,m} and c_{1,m} involved in Φ_m. For simplicity and clarity we let g = 0 and calculate the series for u. Beginning with Lu + Ru = 0, we have
u = c₀ + c₁x − L⁻¹Rc₀ − L⁻¹Rc₁x + (L⁻¹R)²c₀ + (L⁻¹R)²c₁x − ⋯ .

The series for du/dx is

u' = c₁ − IRc₀ − IRc₁x + IRL⁻¹Rc₀ + IRL⁻¹Rc₁x − ⋯ .

Noting that Ic₁ = xc₁, rearranging and collecting terms, we have

u' = Φ' − IRIΦ' + (IRI)²Φ' − ⋯ = u'₀ + u'₁ + u'₂ + ⋯ .

Thus

Φ' = c₁ − IRc₀

and

Φ'_m = c_{1,m} − IRc_{0,m},
and we note that (d/dx)u_m ≠ u'_m but

∑_{m=0}^∞ (d/dx)u_m = ∑_{m=0}^∞ u'_m,

i.e., the decomposition is not unique. Thus, although du_m/dx is not the same as the corresponding derivative of the mth component of u, the infinite sums are the same. Returning to the computation of the solution in general and matching the solution to the given conditions,
u₀ = c_{0,0} + xc_{1,0} + L⁻¹g
u'₀ = c_{1,0} − IRc_{0,0} + Ig.

Matching to the conditions,

φ'₁(b₁) = β₁, φ'₁(b₂) = β₂,

which determines c_{0,0} and c_{1,0}. For g = 0, the matrix equation determining the integration constants is*
Matching φ'_{m+1} to the conditions determines c_{0,m} and c_{1,m}. Thus
* If R is constant, for instance R = β, the equation for c_{0,m} and c_{1,m} is

[a₁₁ a₁₂] [c_{0,m}]   [β_{1,m}]
[a₂₁ a₂₂] [c_{1,m}] = [β_{2,m}],

where β_{1,m} and β_{2,m} are determined from the conditions, and φ'_m represents the sum ∑_{n=0}^{m−1} u'_n.

Now the constants c_{0,m}, c_{1,m} are determined for all m. Hence all the Φ_m and Φ'_m are determined.
Upon rearranging terms we obtain the solution, where

Φ = ∑_{m=0}^∞ Φ_m.
We have shown a technique for linear operator equations for development of an invertible matrix for the vector equation, which determines all constants of integration. This method is readily extended to the nonlinear case and also to partial differential equations. Consider an example: let R = 1 and g = 0. (Then IRI = I² = L⁻¹.) We have d²u/dx² + u = 0. Substituting g = 0 and R = 1 in the previously derived solution u, we have
We compute

u = c₀ cos x + c₁ sin x
u' = −c₀ sin x + c₁ cos x.
Matching u' at the boundaries,

−c₀ sin b₁ + c₁ cos b₁ = β₁
−c₀ sin b₂ + c₁ cos b₂ = β₂.

The coefficient matrix

[−sin b₁  cos b₁]
[−sin b₂  cos b₂],

for a non-zero determinant, determines c₀ and c₁ and a unique solution u = c₀ cos x + c₁ sin x satisfying the given Neumann conditions.
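The 2×2 matching step can be checked numerically. This is our own illustration of the example just worked (u'' + u = 0 with Neumann data), not code from the text:

```python
import numpy as np

def neumann_constants(b1, b2, beta1, beta2):
    """c0, c1 in u = c0 cos x + c1 sin x from u'(b1) = beta1, u'(b2) = beta2."""
    A = np.array([[-np.sin(b1), np.cos(b1)],
                  [-np.sin(b2), np.cos(b2)]])
    return np.linalg.solve(A, np.array([beta1, beta2]))
```

The determinant of the matrix is sin(b₂ − b₁), so the matching is singular exactly when the two boundary points differ by a multiple of π.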
SUMMARY: We have shown the solution for Neumann conditions of linear ordinary differential equations. The procedure can be simplified considerably for linear differential equations but is a general procedure for nonlinear differential and partial differential equations for boundary-value problems. The procedure of decomposition of the initial term, as well as of the solution, yields faster convergence because when we have found an n-term approximant, the resulting composite initial term u₀ incorporates more of the solution, and hence we accelerate convergence.
INTEGRAL BOUNDARY CONDITIONS

We first consider an expository linear example: d²u/dx² + γu = 0, with conditions given as:

u(ξ₁) = b₁ + ∫_{ξ₁}^{ξ₂} P₁ u(x) dx
u(ξ₂) = b₂ + ∫_{ξ₁}^{ξ₂} P₂ u(x) dx.

γ, P₁, and P₂ are assumed constants here, although they can be functions of x with minor modifications to the procedure given. In decomposition format we have Lu + Ru = 0, or L⁻¹Lu = −L⁻¹Ru, or

u = c₀ + c₁x − γ I_x² u,

where I_x² is a two-fold pure integration with respect to x. Since c₀ + c₁x is identified as u₀, we have u₁ = −γI_x² u₀, u₂ = −γI_x² u₁, … .
Consequently, we write φ_{m+1} = ∑_{n=0}^m u_n. Since the successive approximants φ_m represent u to increasing accuracy as m increases, each φ_m must satisfy the boundary conditions for m = 1, 2, … . When m = 1:

φ₁(ξ₁) = u₀(ξ₁) = b₁
φ₁(ξ₂) = u₀(ξ₂) = b₂,

or

c₀ + c₁ξ₁ = b₁
c₀ + c₁ξ₂ = b₂;

thus the "matching coefficients" c₀ and c₁ must satisfy these equations. The ξ₁ and ξ₂ are distinct points, i.e., ξ₁ ≠ ξ₂, in a two-point boundary problem; hence

c₀ = (ξ₂b₁ − ξ₁b₂)/(ξ₂ − ξ₁)
c₁ = (b₂ − b₁)/(ξ₂ − ξ₁).
Next we determine φ₂ = φ₁ + u₁. Since φ₁(ξ₁) = b₁ and φ₁(ξ₂) = b₂ already, the new matching must account for the integral terms. The next decomposition component is u₁(x) = −γ[c₀x²/2! + c₁x³/3!], and we form φ₂ = φ₁ + u₁, which we match to the given conditions. Therefore, to match the boundary conditions,

c₀^(1) + c₁^(1)ξ₁ = b₁ + ∫_{ξ₁}^{ξ₂} P₁(c₀^(0) + c₁^(0)x) dx + γ[c₀^(0)ξ₁²/2! + c₁^(0)ξ₁³/3!],

and similarly at ξ₂. We have added the superscripts (1) to distinguish new values from the previously calculated values of c₀ and c₁ on the right-hand side. The right-hand sides of the two equations are symbolized as b₁^(1) and b₂^(1). Then

c₀^(1) + c₁^(1)ξ₁ = b₁^(1)
c₀^(1) + c₁^(1)ξ₂ = b₂^(1),

which is solvable as before for a new c₀ and c₁. As m increases, φ_m approaches u very closely, so the error vanishes.
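The successive matching just described can be sketched as a small fixed-point loop. Everything numerical below (γ, ξᵢ, bᵢ, Pᵢ, and the closed forms used for the decomposition sums) is an illustrative instance of ours, not taken from the text:

```python
import numpy as np

# Successive matching for u'' + gamma*u = 0 with integral conditions
# u(xi_i) = b_i + P_i * Integral_{xi1}^{xi2} u(x) dx.
gamma, (xi1, xi2) = 2.0, (0.0, 1.0)
b, P = (1.0, 2.0), (0.3, -0.2)

s = np.sqrt(gamma)
x = np.linspace(xi1, xi2, 4001)
q0, q1 = np.cos(s * x), np.sin(s * x) / s   # closed forms of the two series

def integral(f):
    """Composite trapezoid rule on the grid x."""
    return float(np.sum((f[:-1] + f[1:]) * np.diff(x)) / 2.0)

c = np.zeros(2)
for m in range(50):
    phi = c[0] * q0 + c[1] * q1                 # current approximant
    rhs = np.array([b[i] + P[i] * integral(phi) for i in (0, 1)])
    A = np.array([[q0[0], q1[0]],
                  [q0[-1], q1[-1]]])            # matching at x = xi1, xi2
    c = np.linalg.solve(A, rhs)
```

The iteration contracts here because the integral terms are small perturbations of the point conditions; at convergence both integral boundary conditions hold.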
Consider again d²u/dx² + γu = 0 with the conditions

u(ξ₁) = b₁ + ∫_{ξ₁}^{ξ₂} P₁ u(x) dx
u(ξ₂) = b₂ + ∫_{ξ₁}^{ξ₂} P₂ u(x) dx.

We write L = d²/dx² and L⁻¹ = I_x², where I_x² is the two-fold indefinite integration operator. Then u = ∑_{m=0}^∞ u_m, where u₀ = Φ = c₀ + c₁x and u_m = (−L⁻¹R)^m Φ, or

u_m = (−1)^m γ^m I_x^{2m}(c₀ + c₁x),

so that

u = ∑_{m=0}^∞ (−1)^m γ^m {c₀ x^{2m}/(2m)! + c₁ x^{2m+1}/(2m+1)!}.

Even though the sums are recognizable, write

u = c₀ p₁(x) + c₁ p₂(x),
where p₁ and p₂ represent the sums. Next we can evaluate the constants c₀ and c₁ at the boundary conditions. (We call them matching coefficients.) Thus

c₀ p₁(ξ₁) + c₁ p₂(ξ₁) = b₁ + ∫_{ξ₁}^{ξ₂} P₁[c₀ p₁(x) + c₁ p₂(x)] dx,

and similarly at ξ₂, which we can write as

a₁₁c₀ + a₁₂c₁ = b₁
a₂₁c₀ + a₂₂c₁ = b₂

by defining

a₁₁ = p₁(ξ₁) − P₁∫_{ξ₁}^{ξ₂} p₁(x) dx, a₁₂ = p₂(ξ₁) − P₁∫_{ξ₁}^{ξ₂} p₂(x) dx,

and analogously for a₂₁ and a₂₂. Forming the vector equation for the matching constants, if the a matrix is not singular (a₁₁a₂₂ − a₁₂a₂₁ ≠ 0), then we can determine its inverse, so that, consequently,
c₀ = (a₂₂b₁ − a₁₂b₂)/(a₁₁a₂₂ − a₁₂a₂₁)
c₁ = (a₁₁b₂ − a₂₁b₁)/(a₁₁a₂₂ − a₁₂a₂₁).

We have now computed the matching coefficients, where of course in this case,

p₁(x) = cos(√γ x)
p₂(x) = sin(√γ x)/√γ.
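Since p₁ and p₂ have elementary antiderivatives here, the aᵢⱼ and the matching coefficients can be computed directly. The numbers below are an illustrative instance of ours, not from the text:

```python
import numpy as np

gamma, (xi1, xi2) = 1.0, (0.0, 1.0)
b, P = (1.0, 0.5), (0.2, 0.1)
s = np.sqrt(gamma)

# Definite integrals of p1 = cos(s x) and p2 = sin(s x)/s over [xi1, xi2]
I1 = (np.sin(s * xi2) - np.sin(s * xi1)) / s
I2 = -(np.cos(s * xi2) - np.cos(s * xi1)) / s**2

a = np.array([[np.cos(s * xi1) - P[0] * I1, np.sin(s * xi1) / s - P[0] * I2],
              [np.cos(s * xi2) - P[1] * I1, np.sin(s * xi2) / s - P[1] * I2]])
c0, c1 = np.linalg.solve(a, np.array(b))
```

The resulting u = c₀p₁ + c₁p₂ then satisfies both integral boundary conditions, which can be confirmed by quadrature.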
ANOTHER ALTERNATIVE: Start with the exact integral boundary condition:

u(ξ₁) = b₁ + ∫_{ξ₁}^{ξ₂} P₁ u(x) dx.

Let us decompose the equation and the conditions: u = ∑_{n=0}^∞ u_n, hence

∑_{n=0}^∞ u_n(ξ₁) = b₁ + ∫_{ξ₁}^{ξ₂} P₁ ∑_{n=0}^∞ u_n(x) dx.

Now we use our earlier notation [1], where λ is a grouping parameter for collecting terms. Substituting and equating like powers of λ, we then set λ = 1. Since each u_n(x) has two undetermined coefficients, i.e., the c₀^(n) and c₁^(n), we notice that we have an algorithm to compute c₀^(n) and c₁^(n) without explicit reference to them. However, the earlier way of writing approximate boundary conditions,

φ_{m+1}(ξ₁) = b₁ + ∫_{ξ₁}^{ξ₂} P₁ φ_m(x) dx,

is appealing because the limit as m approaches infinity is explicitly the exact boundary condition.
EXERCISE: Carry out the computations for the matching coefficients and verify solutions.

REMARKS: The single decomposition (where we carry along the constants of integration) is applicable to all linear ordinary differential equations for either initial conditions or linear integral boundary conditions. (However, if the boundary conditions are nonlinear, we need to use double decomposition. If we try to use single decomposition with nonlinear integral boundary conditions, we must solve for the matching coefficients c₀ and c₁ as roots of algebraic or transcendental equations, then find which roots are correct. It will be simpler to use double decomposition in such cases.) We will now use double decomposition for the case already considered by single decomposition.

USING DOUBLE DECOMPOSITION: We have previously shown for boundary-value problems, by application of staggered summation (see Appendix II), that each component u_m carries its own pair of matching coefficients c₀^(m) and c₁^(m).
Substituting into the equation, we have

u₂ = c₀^(2) + xc₁^(2) − γ[c₀^(1) x²/2! + c₁^(1) x³/3!] + γ²[c₀^(0) x⁴/4! + c₁^(0) x⁵/5!],

and in general

u_m = ∑_{n=0}^m (−γ)^n {c₀^(m−n) x^{2n}/(2n)! + c₁^(m−n) x^{2n+1}/(2n+1)!}.

A convenient rearrangement by staggered summation results in the double series for u, since

∑_{m=0}^∞ ∑_{n=0}^m = ∑_{n=0}^∞ ∑_{m=n}^∞.
We now form the approximants to the solution u and form the approximate boundary conditions. For m = 1,

φ₁(ξ₁) = b₁ and φ₁(ξ₂) = b₂,

and for m > 1,

φ_{m+1}(ξ₁) = b₁ + ∫_{ξ₁}^{ξ₂} P₁ φ_m(x) dx
φ_{m+1}(ξ₂) = b₂ + ∫_{ξ₁}^{ξ₂} P₂ φ_m(x) dx

(noting that lim_{m→∞} φ_{m+1} = lim_{m→∞} φ_m = u and that the approximate boundary conditions become the exact boundary conditions in the limit). We have φ₁ = u₀ = c₀^(0) + xc₁^(0). Since φ₁(ξ₁) = b₁ and φ₁(ξ₂) = b₂, we have

u₀(ξ₁) = b₁ and u₀(ξ₂) = b₂.
Since

φ_{m+1}(ξ₁) = b₁ + ∫_{ξ₁}^{ξ₂} P₁ φ_m(x) dx and φ_{m+1} = φ_m + u_m,

then

u₁(ξ₁) = ∫_{ξ₁}^{ξ₂} P₁ u₀(x) dx
u₂(ξ₁) = ∫_{ξ₁}^{ξ₂} P₁ u₁(x) dx,

and generally

u_m(ξ₁) = ∫_{ξ₁}^{ξ₂} P₁ u_{m−1}(x) dx
u_m(ξ₂) = ∫_{ξ₁}^{ξ₂} P₂ u_{m−1}(x) dx.
Now using

u₀ = c₀^(0) + xc₁^(0)
u_m = c₀^(m) + xc₁^(m) − γ I_x² u_{m−1}, m ≥ 1,

we can write

c₀^(0) + ξ₁c₁^(0) = b₁^(0)
c₀^(0) + ξ₂c₁^(0) = b₂^(0)

and

c₀^(1) + ξ₁c₁^(1) = b₁^(1)
c₀^(1) + ξ₂c₁^(1) = b₂^(1).

For computational convenience, we define b₁^(0) = b₁ and b₂^(0) = b₂.
Now we can write, for m ≥ 0,

c₀^(m) + ξ₁c₁^(m) = b₁^(m)
c₀^(m) + ξ₂c₁^(m) = b₂^(m),

which are a set of simultaneous equations for the matching coefficients c₀^(m) and c₁^(m). Equivalently, if ξ₁ and ξ₂ are distinct, as they must be for two-point boundary conditions,

c₀^(m) = (ξ₂b₁^(m) − ξ₁b₂^(m))/(ξ₂ − ξ₁)
c₁^(m) = (b₂^(m) − b₁^(m))/(ξ₂ − ξ₁),

so that the solution is determined.
ACCELERATION OF CONVERGENCE: We can accelerate convergence while minimizing further matching coefficients by now transposing from the boundary-value format, denoted by B.V., to an initial-value format denoted by I.V. The procedure is to first compute a current best estimate of u₀^(I.V.) (the first term of the decomposition series in initial-value formulation), then compute u_m^(I.V.) for m ≥ 1. For a reasonable approximation to u₀^(I.V.) we have

u₀^(I.V.) ≈ φ_{m+1}[u₀^(I.V.)] = ∑_{n=0}^m [c₀^(n) + xc₁^(n)].

Substituting, we obtain the initial-value components.
The limits of the series for the boundary-value solution and the initial-value solution are the same, i.e.,

u = u^(B.V.) = u^(I.V.),

which we can symbolize by the algorithmic form above. Then in the double limit

u = c₀ cos(√γ x) + c₁ sin(√γ x)/√γ,

which satisfies the original linear ordinary differential equation d²u/dx² + γu = 0 as well as the original linear integral two-point boundary conditions.
MATCHING COEFFICIENTS FOR NONLINEAR INTEGRAL CONDITIONS: For linear equations, we wrote φ_{m+1} = ∑_{n=0}^m u_n as an (m+1)-term approximation to u, i.e., for a sufficiently high value of m,

φ_{m+1}[u] ≈ u, or lim_{m→∞} φ_{m+1}[u] = u.

(We note that in practice m does not need to be large.) For a nonlinear function of the solution such as h(u), we expand in the A_n; thus

h(u) = ∑_{n=0}^∞ A_n, φ_{m+1}[h(u)] = ∑_{n=0}^m A_n,

which we can write simply as h(u) ≈ φ_{m+1}[h(u)] if the context is clear. Thus h(u) = lim_{m→∞} φ_{m+1}[h(u)]. The exact boundary conditions
are

u(ξ₁) = b₁ + ∫_{ξ₁}^{ξ₂} P₁ h₁(u) dx
u(ξ₂) = b₂ + ∫_{ξ₁}^{ξ₂} P₂ h₂(u) dx.

The approximate boundary conditions are:

φ_{m+1}(ξ₁) = b₁ + ∫_{ξ₁}^{ξ₂} P₁ φ_m[h₁(u)] dx,

and similarly at ξ₂, which yield the exact boundary conditions in the limit. For m = 1 we have the first matching. The coefficients c₀^(m), c₁^(m) are now calculable by decomposition of u and h(u), which means that nonlinear integral boundary conditions can also be dealt with analogously to linear integral boundary conditions.
SUGGESTIONS FOR RESEARCH:

1) Linear equation, linear integral boundary conditions (P₁ and P₂ can be constants or functions of x).

2) Linear equation, nonlinear integral boundary conditions.

3) Nonlinear equation, linear integral boundary conditions.

4) Nonlinear equation, nonlinear integral boundary conditions:

u(x = ξ₁) = b₁ + ∫_{ξ₁}^{ξ₂} P₁(x) h₁(u(x)) dx
u(x = ξ₂) = b₂ + ∫_{ξ₁}^{ξ₂} P₂(x) h₂(u(x)) dx.

5) Linear (two-dimensional) partial differential equation, linear contour-integral boundary conditions. Note that C(x,y) = 0 implies x = ζ(y).

6) Linear (three-dimensional) partial differential equation, linear surface-integral boundary conditions. S(x,y,z) = 0 implies x = ζ(y,z).
REFERENCE
1. G. Adomian, Nonlinear Stochastic Operator Equations, Academic Press (1986).
SUGGESTED READING
1) G. Adomian and R. Rach, Analytic Solution of Nonlinear Boundary-value Problems in Several Dimensions, J. Math. Anal. Applic., 174, (118-137) (1993).
2) G. Adomian, Partial Differential Equations with Integral Boundary Conditions, Comput. Math. Applic., 9, (1983).
BOUNDARY CONDITIONS AT INFINITY

Solutions of problems with boundary conditions involving a limit at infinity can be difficult. To build some intuition, we will begin with a simple example modelled by a linear differential equation. Consider the function u = e⁻ˣ = ∑_{m=0}^∞ (−x)^m/m!. We obviously have u(0) = 1 and u(∞) = 0. By the latter, we clearly mean that lim_{x→∞} u(x) = 0. This function satisfies the differential equation d²u/dx² − u = 0. Letting L denote d²/dx², we have Lu − u = 0, which is in our standard format Lu + Ru = 0 with R = −1. With L⁻¹ defined as a two-fold indefinite integration operator, we have L⁻¹Lu = L⁻¹u so that u = C₀ + C₁x + L⁻¹u. (Since we are dealing with a linear ordinary differential equation, double decomposition is unnecessary.) We identify u₀ = C₀ + C₁x and u_{m+1} = L⁻¹u_m.
The resulting solution is u = ∑_{m=0}^∞ u_m, or

u = C₀ cosh x + C₁ sinh x.

Even from the first-order approximation φ₁ = u₀, it is clear that C₀ = 1 since u(0) = 1. We also have the condition u(∞) = 0, which might make us jump to the conclusion that C₁ = 0. However, we would soon see that we do not then get a verifiable solution. Since u must approach zero as x → ∞, we have

lim_{x→∞} [cosh x + C₁ sinh x] = 0,

so that

C₁ lim_{x→∞} sinh x = −lim_{x→∞} cosh x,

or

C₁ = −lim_{x→∞} (cosh x)/(sinh x) = −lim_{x→∞} coth x = −1,

and we see that u = cosh x − sinh x. Substitution of the exponential forms
shows that indeed u = e⁻ˣ, which we started with. Equivalently, since

cosh x = ∑_{m=0}^∞ x^{2m}/(2m)!
sinh x = ∑_{m=0}^∞ x^{2m+1}/(2m+1)!,

we have

u = (1 + x²/2! + x⁴/4! + ⋯) − (x + x³/3! + x⁵/5! + ⋯).
It is instructive to write

C₁ = −lim_{x→∞} ζ₁(x)/ζ₂(x) = −lim_{x→∞} y(x),

where

ζ₁ = cosh x
ζ₂ = sinh x
y(x) = coth x,

with |x| < π for the series representing coth x.
EXERCISE: Write the series for coth x. Using the Padé transform, show that the limit, as x → ∞, is 1. (See Appendix I.) For problems in which the decomposition series is not recognized, as it was above, we complete the division, e.g., cosh x/sinh x, then determine the limit with the Padé approximant.
EXERCISE: Consider the equation d²u/dx² − β(x)u = 0 with u(0) = 1 and u(+∞) = 0 and verify the solution.
EXAMPLE: Consider a generic second-order linear homogeneous differential equation with boundary conditions specified at infinity. Let R = α d/dx + β, with

lim_{x→∞} u(x) = 0.

Then we have u = ∑_{m=0}^∞ u_m with u₀ = C₀ + C₁x, which we will write

u = C₀ζ₀(x) + C₁ζ₁(x).

Since lim_{x→∞} u(x) = 0, we have C₁ in terms of the limit of the ratio y(x) = ζ₀(x)/ζ₁(x). The series for y(x) will usually have a finite radius of convergence. Therefore, in order to evaluate the limit at +∞ to compute C₁, we must transform the series for y(x), or a truncated series approximating y(x), to a representation suitable at infinity. The Padé approximant is useful here. Write

y(x) = y₀ + y₁x + y₂x² + ⋯

and calculate T(x), where T(x) is the Padé approximant of y(x). (See Appendix I.) Then C₁ = −lim_{x→∞} T(x).
MODIFIED DECOMPOSITION AND CONDITIONS AT INFINITY: Consider d²u/dx² − u = 0 with given conditions u(0) = 1 and u(+∞) = 0, using the modified decomposition. In some cases, it is possible that the resulting recurrence relation could provide insight and illuminate the matching of the solution to the condition at infinity. A finite decomposition approximant such as φ_m = ∑_{n=0}^{m−1} u_n with a finite radius of convergence might be transformed into a finite fraction which is accurate for large x to obtain the limit at infinity. Of course, other techniques may be applicable, such as Euler transforms, analytic continuation, etc. If we recognize well-known functions, as in our first example, the additional step will be eliminated.
SOLUTION: L⁻¹Lu = L⁻¹u, so that

u = C₀ + C₁x + I_x² u,

where we let u = ∑_{m=0}^∞ a_m x^m instead of u = ∑_{m=0}^∞ u_m, so that we have recurrence relations for the coefficients:

a₀ = C₀, a₁ = C₁, a_{m+2} = a_m/((m+1)(m+2)).

We can now write a_{2m} = C₀/(2m)! and a_{2m+1} = C₁/(2m+1)!, so the solution can be written as

u = ∑_{m=0}^∞ {C₀ x^{2m}/(2m)! + C₁ x^{2m+1}/(2m+1)!}.
At this point, we recognize the summations; but let us assume that we do not see that these are series representations of the hyperbolic trigonometric functions or know their limit values at infinity, and must proceed in a way which is usable when the series are not recognized. Evaluation at the zero boundary u(0) = 1 requires C0 = 1. Evaluating at the second boundary gives
lim_{x→+∞} u(x) = 0

or

lim_{x→+∞} Σ_{m=0}^∞ x^{2m}/(2m)! + C1 lim_{x→+∞} Σ_{m=0}^∞ x^{2m+1}/(2m+1)! = 0

hence

C1 = − lim_{x→+∞} [Σ_{m=0}^∞ x^{2m}/(2m)!] / [Σ_{m=0}^∞ x^{2m+1}/(2m+1)!]
EXERCISE: Verify this series. It can also be written

coth x = 1/x + Σ_{n=1}^∞ (−1)^{n−1} 2^{2n} B_n x^{2n−1}/(2n)!

where the B_n are the Bernoulli numbers*. We list, for convenience of the reader,
B1 = 1/6        B6 = 691/2730
B2 = 1/30       B7 = 7/6
B3 = 1/42       B8 = 3617/510
B4 = 1/30       B9 = 43,867/798
B5 = 5/66       B10 = 174,611/330
The radius of convergence of this series is π. Thus, to evaluate the limit, we need to transform the series, or at least transform the truncated series, to a representation which will be valid as x → +∞.
EXERCISE: Show, using Padé approximants, that the limit of coth x is 1 as x → ∞. (Hint: Ignore the 1/x, which obviously does not contribute.) Write the Padé approximant [2/2] and, since C1 is equal to the negative of this limit, show that C1 is equal to −1. Hence
" H. B. Dwigh!. Table of Integrals and Other Mathematical Data, 4th ed.. MacMilian and C:o..
N.Y. (1961 I .
u = Σ_{m=0}^∞ x^{2m}/(2m)! − {x + x³/3! + x⁵/5! + ···} = 1 − x + x²/2! − x³/3! + ··· = Σ_{m=0}^∞ (−1)^m x^m/m!
Thus the solution is computed. Of course, it is the exponential function, and it is easily verified by substitution. We observe that it may be necessary to transform the solution series to a representation for which we can easily determine the limit as x → ∞. A useful technique is that of Padé approximants, which allows us to evaluate the matching coefficients C0 and C1, i.e., the "constants of integration" (particularly the condition at infinity). Of course, since we do recognize the functions, we can use the fact that the limit as x → ∞ of coth x is 1 and that the limit of e⁻ˣ is zero. But in general, the functions are not well-known, nor will we know a priori their limits at infinity.
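As a numerical sketch of this example (our own illustration, not from the text), the recurrence a_{m+2} = a_m/((m+1)(m+2)) can be run with exact rational coefficients; the C0 part reproduces the cosh series, the C1 part the sinh series, and the matching coefficient C1 = −lim cosh x/sinh x ≈ −1 can be estimated from the truncated series at a moderately large x:

```python
from fractions import Fraction
from math import factorial

# Carry a_m symbolically as the pair (C0-part, C1-part), with a0 = C0, a1 = C1.
N = 40
p = [Fraction(0)] * N   # C0-part of a_m
q = [Fraction(0)] * N   # C1-part of a_m
p[0], q[1] = Fraction(1), Fraction(1)
for m in range(N - 2):
    p[m + 2] = p[m] / ((m + 1) * (m + 2))
    q[m + 2] = q[m] / ((m + 1) * (m + 2))

# The C0-part is the cosh series and the C1-part the sinh series:
assert all(p[2 * k] == Fraction(1, factorial(2 * k)) for k in range(N // 2))
assert all(q[2 * k + 1] == Fraction(1, factorial(2 * k + 1)) for k in range(N // 2 - 1))

# Condition at infinity: C1 = -lim cosh(x)/sinh(x); the true limit is -1.
X = 10.0
C1 = -sum(float(c) * X**m for m, c in enumerate(p)) / \
     sum(float(c) * X**m for m, c in enumerate(q))
print(C1)
```

With C0 = 1 and C1 = −1 the series collapses to cosh x − sinh x = e⁻ˣ, as in the text.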
EXAMPLE: Consider the case d²u/dx² − p(x)u = 0 with u(0) = 1 and lim_{x→+∞} u(x) = 0. Let p(x) = Σ_{m=0}^∞ p_m x^m. Write Lu = p(x)u with L = d²/dx². Operating on both sides with L⁻¹ = ∫₀ˣ∫₀ˣ(·) dx dx,

u = C0 + C1x + L⁻¹p(x)u

Therefore, with u = Σ_{m=0}^∞ a_m x^m, we have a0 = C0, a1 = C1, and for m ≥ 2

a_m = (Σ_{k=0}^{m−2} p_k a_{m−2−k})/((m−1)m)

so that
a2 = p0 a0/(1·2)
a3 = (p0 a1 + p1 a0)/(2·3)
a4 = (p0 a2 + p1 a1 + p2 a0)/(3·4)
a5 = (p0 a3 + p1 a2 + p2 a1 + p3 a0)/(4·5)
Consequently,
a0 = C0
a1 = C1
a2 = p0 C0/2!
a3 = p0 C1/3! + p1 C0/3!
a4 = p0² C0/4! + p1 C1/(3·4) + p2 C0/(3·4)
a5 = p0² C1/5! + p0 p1 C0/5! + p1 p0 C0/(2·4·5) + p2 C1/(4·5) + p3 C0/(4·5)
Thus the solution series is assembled from these coefficients.
Since u(0) = 1, C0 = 1. To evaluate C1 we write
C1 = − lim_{x→+∞} {1 + p0x²/2! + p1x³/3! + p0²x⁴/4! + ···} / {x + p0x³/3! + 2p1x⁴/4! + p0²x⁵/5! + ···}
which is the same as our earlier result if p0 = 1, p1 = p2 = ··· = 0. Of course, if

p = Σ_{m=0}^∞ p_m x^m

we must include the neglected terms of p in the ratio for C1.
EXAMPLE: Linear Third-order Equation: The function u = e⁻ˣ satisfies the equation d³u/dx³ + u = 0 with conditions u(0) = 1, u(∞) = 0, and u'(∞) = 0. Let us investigate how to determine this solution if only the equation and conditions are known. L = d³/dx³ and L⁻¹ is now a three-fold integration. From Lu = −u and L⁻¹Lu = −L⁻¹u,

u = u0 − L⁻¹u

where u0 = C0 + C1x + C2x²/2. Since u = Σ_{m=0}^∞ u_m, we write u_{m+1} = −L⁻¹u_m, and summing the components gives

u = C0 ξ0(x) + C1 ξ1(x) + C2 ξ2(x)

with

ξ0(x) = Σ_{m=0}^∞ (−1)^m x^{3m}/(3m)!
ξ1(x) = Σ_{m=0}^∞ (−1)^m x^{3m+1}/(3m+1)!
ξ2(x) = Σ_{m=0}^∞ (−1)^m x^{3m+2}/(3m+2)!
Since the one-term approximant φ1 = u0 must satisfy u(0) = 1, we know C0 = 1, so that u = ξ0 + C1ξ1 + C2ξ2, where ξ_i(x) = Σ_{m=0}^∞ (−1)^m x^{3m+i}/(3m+i)! (i = 0, 1, 2) are the three component series. Since u and u' must approach zero in the limit as x → ∞, we have

lim_{x→∞} [ξ0 + C1ξ1 + C2ξ2] = 0
lim_{x→∞} [ξ0' + C1ξ1' + C2ξ2'] = 0

Solving for the C1, C2 and writing D = ξ1ξ2' − ξ2ξ1',

C1 = − lim_{x→∞} (ξ0ξ2' − ξ2ξ0')/D
C2 = − lim_{x→∞} (ξ1ξ0' − ξ0ξ1')/D

Symbolizing this as C1 = −lim ρ(x) and C2 = −lim σ(x), we can obtain Padé approximants of ρ and σ to get the limits.

EXERCISE: Show that C1 = −1 and C2 = 1 so that u = e⁻ˣ.

EXAMPLE: Nonlinear 3rd-order equation: As an example we consider the Blasius equation of boundary-layer theory:

u''' + (1/2)uu'' = 0
or, in our format, Lu + Nu = 0 with L = d³/dx³, Nu = (1/2)uu'', L⁻¹ defined as a triple integration, and the A_n calculated for (1/2)uu''. Applying decomposition,
u = u0 − (1/2)L⁻¹ Σ_{n=0}^∞ A_n

with u0 = α + βx + γx²/2. The given conditions are u(0) = 0, u'(0) = 0, and u' → 1 as x → ∞. The first two conditions require α = β = 0 so that u0 = γx²/2.
Thus, u = γx²/2 − (γ²/2)(x⁵/5!) + 11(γ³/4)(x⁸/8!) − 375(γ⁴/8)(x¹¹/11!) + ···
which is the Blasius series! We note that u''(0) = γ, and we have the remaining condition u'(x) → 1 as x → ∞ for the evaluation of γ. Since the u'(x) series Σ_{n=0}^∞ c_n x^n lacks the first term in x (c0 = 0), we translate, using x = z + ζ with ζ taken as 1/2, so that the series for each new coefficient will converge very rapidly. We then have a numerically equal series Σ_{n=0}^∞ b_n z^n with non-zero b0, b1, b2, ... and can find the limit of the series. (See Appendix I.) Since this involves γ and we have u'(∞) = 1, we can evaluate γ directly without use of numerical methods such as shooting techniques.
EXERCISE: Carry out the evaluation and verify the solution u(x). Transform f(x) = Σ_{n=0}^∞ c_n x^n, where c0 = 0 (using z = x − ζ with ζ < 1 and within the radius of convergence), to Σ_{n=0}^∞ b_n z^n with b0 ≠ 0.

If the Blasius problem is given as an initial-value problem, we have u''' + uu'' = 0 with u(0) = u'(0) = 0 and u''(0) = 1 [1]. The first two conditions require α = β = 0 so that u0 = γx²/2.
A four-term approximant φ4 = Σ_{m=0}^3 u_m is given by the series above. Now,

u'' = γ − γ²x³/3! + ···

We have the remaining condition u''(0) = 1, which gives us γ = 1.
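The decomposition for the Blasius problem can be carried out with exact polynomial arithmetic. The sketch below is ours, not from the text (all helper names are our choices); it takes γ = 1, uses A_m = Σ_{i=0}^m u_i u''_{m−i} for the bilinear term uu'' in u''' + (1/2)uu'' = 0, and reproduces the Blasius coefficients −1/240 at x⁵ and 11/161280 at x⁸:

```python
from fractions import Fraction

def integ3(c):  # three-fold integration of sum c[n] x^n from 0
    out = [Fraction(0)] * (len(c) + 3)
    for n, a in enumerate(c):
        out[n + 3] = a / ((n + 1) * (n + 2) * (n + 3))
    return out

def d2(c):      # second derivative of a coefficient list
    return [c[n] * n * (n - 1) for n in range(2, len(c))]

def mul(a, b):  # Cauchy product
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def add(a, b):
    n = max(len(a), len(b))
    a = a + [Fraction(0)] * (n - len(a)); b = b + [Fraction(0)] * (n - len(b))
    return [x + y for x, y in zip(a, b)]

u = [[Fraction(0), Fraction(0), Fraction(1, 2)]]   # u0 = x^2/2 (gamma = 1)
for m in range(3):
    Am = [Fraction(0)]
    for i in range(m + 1):                          # A_m for u u''
        Am = add(Am, mul(u[i], d2(u[m - i])))
    u.append([-c / 2 for c in integ3(Am)])          # u_{m+1} = -(1/2) Linv A_m

series = [Fraction(0)]
for comp in u:
    series = add(series, comp)
print(series[5], series[8])
```

The x⁵ and x⁸ coefficients match −γ²/(2·5!) and 11γ³/(4·8!) with γ = 1, as in the Blasius series above.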
EXERCISE: Substitute the resulting solution, carrying terms through x⁸, to see that the differential equation and given conditions are satisfied.

EXAMPLE: Consider the nonlinear 3rd-order equation

u''' + uu'' − (u')² = 0

for 0 < x < ∞ and given conditions u(0) = 0, u'(0) = β, and u'(∞) = γ. We let L = d³/dx³, L⁻¹ is defined as a 3-fold integration, and we write Σ_{n=0}^∞ A_n{uu''} for the first nonlinear term and Σ_{n=0}^∞ A_n{(u')²} for the second. By decomposition,

u_{n+1} = −L⁻¹A_n{uu''} + L⁻¹A_n{(u')²}

We can now form an n-term approximant φ_n = Σ_{m=0}^{n−1} u_m which converges to u as n approaches ∞. With u0 = ξ + ηx + σx²/2, the condition on u(0) gives ξ = 0 and the condition u'(0) = β gives η = β, so u0 = βx + σx²/2, where σ = u''(0) remains to be determined. Evaluating the A_n for n = 0,

A0{uu''} = u0u0'' = (βx + σx²/2)σ
A0{(u')²} = (u0')² = (β + σx)²

Hence,

u1 = −L⁻¹{(βx + σx²/2)σ} + L⁻¹{(β + σx)²}

EXERCISE: Determine the Padé limit and use it to evaluate the remaining unknown constant σ. Verify the solution for u.
EXERCISE: Consider the solution of u''' + u = 0 using "modified decomposition". Let u = Σ_{m=0}^∞ a_m x^m and

Lu = −u

with L = d³/dx³. Since u0 = C0 + C1x + C2x²/2 (where C0, C1, C2 are determined from the specified conditions), show

a0 = C0
a1 = C1
a2 = C2/2
a_{m+3} = −a_m/((m+1)(m+2)(m+3))   for m = 0, 1, 2, ...

EXERCISE: Show that the solution is

u = C0 Σ_{m=0}^∞ (−1)^m x^{3m}/(3m)! + C1 Σ_{m=0}^∞ (−1)^m x^{3m+1}/(3m+1)! + C2 Σ_{m=0}^∞ (−1)^m x^{3m+2}/(3m+2)!
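The recurrence of this exercise is easily checked numerically (a sketch of ours, not from the text); with C0 = 1, C1 = −1, C2 = 1, the values forced by the conditions at infinity, the series reproduces e⁻ˣ:

```python
from fractions import Fraction
from math import exp

def solve(C0, C1, C2, N):
    """a_{m+3} = -a_m/((m+1)(m+2)(m+3)) with a0 = C0, a1 = C1, a2 = C2/2."""
    a = [Fraction(0)] * N
    a[0], a[1], a[2] = Fraction(C0), Fraction(C1), Fraction(C2, 2)
    for m in range(N - 3):
        a[m + 3] = -a[m] / ((m + 1) * (m + 2) * (m + 3))
    return a

a = solve(1, -1, 1, 25)
x = 0.7
u = sum(float(c) * x**m for m, c in enumerate(a))
print(u, exp(-x))   # the two agree to high accuracy
```

The coefficients are exactly (−1)^m/m!, i.e., the e⁻ˣ series, confirming the preceding exercise.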
EXERCISE: Using the same conditions as previously used for the decomposition method, i.e., u(0) = 1, u(∞) = u'(∞) = 0, show u = e⁻ˣ.
SUGGESTED READING
1. R. E. Meyer, Introduction to Mathematical Fluid Dynamics, Wiley-Interscience (1971).
CHAPTER 10

INTEGRAL EQUATIONS
Integral equations of Volterra type arise quite naturally in physical applications modelled by initial-value problems. Consider the linear Volterra equation of the second kind. (Fredholm equations of the second kind, which are associated with boundary-value problems for a finite interval [a,b], are similar except that the upper limit is b.)
φ(x) = f(x) + λ ∫_a^x K(x,y) φ(y) dy

with a ≤ x, y ≤ b, and let λ = 1. Using decomposition φ = Σ_{n=0}^∞ φ_n(x), we identify φ0 = f(x), assuming f(x) ≠ 0; then

φ_{n+1}(x) = ∫_a^x K(x,y) φ_n(y) dy

and we write φ = Σ_{n=0}^∞ φ_n as the solution, or an m-term approximant Σ_{n=0}^{m−1} φ_n. In some cases, exact solutions are determinable. Consider an example:
K(x,y) = y − x
f(x) = x

Then φ0 = x and

φ1 = ∫₀ˣ (y − x) y dy = −x³/3!

Thus the two-term approximant is

φ2 = x − x³/3!

or φ = sin x, as is easily verified either by calculating more terms or by substitution. Several illuminating further examples on convergence appear in [1].

1) Consider the first equation treated in [1].
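For this kernel the decomposition iterates are polynomials and can be generated exactly. The following illustration is ours (the helper name `integrate_K` is our choice):

```python
from fractions import Fraction

def integrate_K(phi):
    """phi_{n+1}(x) = int_0^x (y - x) phi_n(y) dy for a polynomial phi_n,
    given and returned as coefficient lists in x."""
    out = [Fraction(0)] * (len(phi) + 2)
    for n, c in enumerate(phi):
        out[n + 2] += c / (n + 2)   # from int_0^x y^{n+1} dy
        out[n + 2] -= c / (n + 1)   # from x * int_0^x y^n dy
    return out

phi = [Fraction(0), Fraction(1)]    # phi_0 = f(x) = x
total = list(phi)
for _ in range(3):
    phi = integrate_K(phi)
    total += [Fraction(0)] * (len(phi) - len(total))
    for n, c in enumerate(phi):
        total[n] += c
print(total)   # x - x^3/3! + x^5/5! - x^7/7!: the sin x series
```

Each iterate contributes exactly one further term of the sine series, in agreement with the two-term approximant x − x³/3! above.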
The partial sums of the decomposition series are
S1 = 1.000
S2 = 1.333
S3 = 1.449
S4 = 1.481
S5 = 1.493
and lim_{n→∞} S_n = 1.5, which yields the solution of the problem.
2) Consider the nonlinear integral equation of [1]. We get

u0(x) = 0.75x + 0.20

and the approximant to the solution with three terms is very near to the exact solution, which is x.

3) Consider a nonlinear biological problem: find a real function u defined on ℝ by

u(x) − 0.25x ∫₀¹ K(x,t) g(u(t)) dt = 1

with K(x,t) = 1/(x+t) and g(u) = 1/u. The decomposition gives u0 = 1 and

u1(x) = 0.25x (log(x+1) − log x)

and a three-term approximant results in good accuracy. Of course, we can improve the approximation with more terms until the desired accuracy is achieved.
4) Find a real function u satisfying the integral equation of [1], where λ is a real parameter, λ ∈ [0,1]. With a five-term approximation, decomposition gives accurate results; assuming a value of λ = 1/10, [1] shows an error of 9.7 × 10⁻⁴.
REFERENCE
1. Y. Cherruault, G. Saccomandi, and B. Some, New Results for the Convergence of Adomian's Method Applied to Integral Equations, Math. Comput. Modelling, 16, (85-93) (1992).
SUGGESTED READING
1. G. Adomian, Nonlinear Stochastic Operator Equations, Academic Press, New York (1986).
2. B. Some, Some Recent Numerical Methods for Solving Hammerstein's Integral Equations, Math. Comput. Modelling, to appear.
3. B. Some, A New Computational Method for Solving Integral Equations, submitted for publication.
NONLINEAR OSCILLATIONS IN PHYSICAL SYSTEMS
Nonlinear oscillating systems are generally analyzed by approximation methods which involve some sort of linearization. These replace an actual nonlinear system with a so-called "equivalent" linear system and employ averaging which is not generally valid. While the linearizations commonly used are adequate in some cases, they may be grossly inadequate in others, since essentially new phenomena can occur in nonlinear systems which cannot occur in linear systems. Thus, correct solution of a nonlinear system is a much more significant matter than simply getting more accuracy when we solve the nonlinear system rather than a linearized approximation. If we want to know how a physical system behaves, it is essential to retain the nonlinearity for complete understanding of behavior, despite the convenience of linearity and superposition. Physical problems are nonlinear; linearity is a special case, just as a deterministic system is a special case of a stochastic system. In a linear system, cause and effect are proportional. Such a linear relation sometimes occurs but is the exception rather than the rule. The general case is nonlinear and may be stochastic as well. In such cases, it is natural to make limiting assumptions, which is not always justified. Using decomposition, these become unnecessary even for the strongly nonlinear case and the case of stochastic (large-fluctuation) behavior, as well as in the cases where perturbation would be applicable or in the linear and/or deterministic limits. "Smallness" assumptions, linearized models, or assumption of sometimes physically unrealistic processes may result, of course, in mathematical simplicity, but again may not be justified in all circumstances. Here we are concerned with the study of vibrations, or equivalently with oscillatory motion and the associated forces. Vibrations can occur in any mechanical system having mass and elasticity. Consequently, they can occur in structures and machines of all kinds.
In proposed large space structures containing men and machines, such vibrations will result in difficult and crucial control problems and also lifetime or duration considerations, since vibrations can lead to eventual failure. Oscillations can be regular and periodic, or they can be random as in an earthquake. Randomness leads to stochastic differential equations. In deterministic systems, the special case where randomness vanishes, the equations modelling the phenomena or system provide instantaneous values for any time. When random functions are involved, the instantaneous values are unpredictable and it is necessary to resort to a statistical description. Such random functions of time, or stochastic processes, occur in problems such as pressure gusts encountered by aircraft, jet engine noise, or ground motion in earthquakes, so that ℱ may be a nonlinear stochastic operator in the most general case.* In such cases, we write

ℱu = g(t)

where the script letters indicate stochasticity. Still more generally, Nu may be a function of u, u', ... as well, but this causes no difficulty. In any case Nu and ℛu can be written in terms of the A_n. Although convergence of the decomposition series will be most rapid when we invert the entire linear deterministic operator, computation of the integrals will, of course, be more difficult, since we will not then have simple Green's functions. We will let L denote the highest-order linear differential operator. In an oscillator we have generally an external force or driving term x(t), a restoring force f(u) dependent on the displacement u, and a damping force, since energy is always dissipated in friction or resistance to motion. Usually this is dependent on velocity, and we will write it as g(u'). If we have a free oscillating mass m on a spring with no damping, we can write mu'' + ku = 0 if the spring obeys Hooke's Law, i.e., assuming displacement proportional to force. Of course no spring really behaves this way. Often the force needed for a given compression is not the same as for an extension of the same amount. Such asymmetry is represented by a quadratic force, or force proportional to u² rather than u. We may have a symmetric behavior but proportionality to u³. Then the solution is not the harmonic solution which one gets for the model equation mu'' + ku = 0, though it is still a periodic solution. The damping force g(u') may be cu' where c is constant, or it may be more complicated, such as g(u', u'²), so it depends on u'² as well as u'. By usual methods, analytic solutions then become impossible.
* When the highest derivative appears in a nonlinear term, multiple solutions or branches exist [1].
Suppose we write −f(u) for the restoring force, −h(u') for the damping force, and represent the driving force by g; the resulting equation will be u'' + f(u) + h(u') = g. Suppose the restoring force is represented by an odd function so that f(u) = −f(−u). We have this in most applications; it means simply that if we reverse the displacement then the restoring force reverses its direction. A pendulum, for example, behaves this way. We might take the first two terms of the power series for f(u) and write f(u) = αu + βu³. Then we have u'' + αu + βu³ = g. If we have damping also, we have

u'' + cu' + αu + βu³ = g

assuming the damping force is −cu'. (This is Duffing's equation [2].) The simple case of the harmonic oscillator mu'' + ku = g, or the case avoiding the assumption of limited motion, has been discussed completely in [3], and we will consider more realistic cases here with damping and nonlinearity. We might note, however, that if instead of sin u = u, we go a step further and write sin u = u − u³/3!, we get the Duffing equation with ε as a small parameter, i.e., a perturbation result. It is clear then that we may well have other nonlinearities than u³, which we consider in the following sections. The Duffing oscillator in a random force field, modelled by u'' + αu' + βu + γu³ = g(t), can be analyzed without limiting the force g(t) to a white noise and allowing α, β, γ to be stochastic processes as well. The same applies to the Van der Pol oscillator. These equations are in our standard form ℱu = g(t), which can be solved by the decomposition method [2-5]. If the equation is linear and deterministic, we have simply Lu = g or Lu + Ru = g.
THE DUFFING AND VAN DER POL OSCILLATOR EQUATIONS AND REAL-LIFE PHYSICAL PHENOMENA: Suppose we make measurements in the laboratory and observe a function f(u) in a "Duffing" experiment and find an odd function f(u) = b0u − b1u³ + b2u⁵ + ··· or f(u) = Σ_{n=0}^∞ b_n u^{2n+1}, as shown in Figure 1.

Figure 1
or, on the other hand, we observe a measurement in a "Van der Pol" experiment which yields an even function f(u) = b0 + b1u² + b2u⁴ + ··· or f(u) = Σ_{n=0}^∞ b_n u^{2n}, as in Figure 2.

Figure 2
In the Duffing case our nonlinear oscillator equation is

u'' + Σ_{n=0}^∞ b_n u^{2n+1} = g(t)     (1)

and if we retain only the first terms of the summation, we obtain u'' + b0u + b1u³ = g(t). Thus equation (1) subsumes the Duffing equation. Similarly, we can consider an oscillator equation with nonlinear damping subsuming the Van der Pol oscillator equation:

u'' + {b0 + b1u² + b2u⁴ + ···}u' + βu = g(t)

If we retain only the summation to n = 1, we have the Van der Pol equation

u'' + (b0 + b1u²)u' + βu = g(t)
with u(0) = c0 and u'(0) = c1. We see that these equations, as commonly used, are simply first-order perturbations of the real physical models. Mathematics has progressed considerably using linearity and linear operator theory. Nonlinear differential equations derived for physical phenomena, e.g., in electronic devices, have utilized perturbation theory or linearization of actual behavior. This is so pervasive in the training and acceptance of what is possible that models of physical phenomena may be oversimplified under the assumption that considering the true behavior will represent serious difficulties in the analysis. It is hoped that the decomposition method may contribute to the development of more sophisticated models and result in physically realistic solutions to frontier problems.
DIFFERENTIAL EQUATIONS WITH EMPIRICAL NONLINEARITIES: Nonlinearities which are specified only through experimental measurements require curve-fitting techniques yielding series representations. We will consider a generic anharmonic oscillator (subsuming the cases of the Duffing and Van der Pol oscillators [2])

u'' + α(u, u') + β(u) = g(t)

with u(0) = c0, u'(0) = c1, g(t) = Σ_{n=0}^∞ g_n t^n, and with α(u, u') and β(u) assumed to be given as empirical graphs (i.e., as a plotted surface for α and a plotted contour for β). The curve-fitting procedures result in:

α(u, u') = Σ_{m=0}^∞ Σ_{n=0}^∞ α_{m,n} u^m u'^n
β(u) = Σ_{m=0}^∞ β_m u^m

Then the equation u'' + α(u, u') + β(u) = g(t) becomes

u'' + Σ_{m=0}^∞ Σ_{n=0}^∞ α_{m,n} u^m u'^n + Σ_{m=0}^∞ β_m u^m = g(t)
Using the decomposition and the polynomials A_n{f(u,v)} discussed in Chapter 3, and the customary A_n[f(u)], which we will now call B_n, we can write, for

α(u, u') = Σ_{m=0}^∞ Σ_{n=0}^∞ α_{m,n} u^m u'^n,

A_p[α(u, u')] = Σ_{m=0}^∞ Σ_{n=0}^∞ α_{m,n} Σ_{q=0}^p B_q[u^m] A_{p−q}[u'^n]

where the A_n, B_n are now specified.
We can now use the decomposition method to write

Lu = g(t) − β(u) − α(u, u')

where L = d²/dt² and L⁻¹ is the two-fold integration from 0 to t. Operating with L⁻¹ and substituting

u = u0 − L⁻¹β(u) − L⁻¹α(u, u')
u0 = c0 + c1t + L⁻¹g(t)
u_{m+1} = −L⁻¹B_m − L⁻¹A_m   (m ≥ 0)

we can write u = Σ_{m=0}^∞ u_m and the approximant φ_m[u] = Σ_{n=0}^{m−1} u_n. Now if we approximate the input function g(t) = Σ_{n=0}^∞ g_n t^n by the mth-order approximant

φ_m[g] = Σ_{n=0}^{m−1} g_n t^n

then we can compute the corresponding mth-order simulant to the solution, σ_m[u] or σ_m, which satisfies the equation

σ_m'' + α(σ_m, σ_m') + β(σ_m) = φ_m[g]
Thus the simulant to the solution σ_m is the result when φ_m[g] is used for the input. To summarize: for a given pre-set precision, we need only approximate the input and the nonlinearities to compute a convergent sequence of solution simulants which approach the solution more and more closely as the series for the approximants are carried farther. Obviously, the techniques discussed can be valuable in solid-state or vacuum-tube electronics and device simulation. The outcome should be useful in getting realistic models. The Van der Pol equation, for example, is assumed to have the u'u² nonlinearity, and we have used α(u, u'). By using the "best" empirical nonlinearity, we are in a better position to refine the model. Professor S. N. Venkatarangan (Indian Institute of Technology at Madras) and his students have prepared several papers and dissertations nearing publication using the decomposition concept. In [6] he finds a closed-form solution for a (particular) Duffing equation by applying a Laplace transform to the decomposition series, then converting the transformed series into a meromorphic function by forming its Padé approximant, and finally doing the inversion. The technique was also applied to the Van der Pol equation and the Rayleigh equation. Professors F. Jin-Quing and Y. Wei-Guang (China Institute of Atomic Energy) have developed computer programs using the decomposition method to study accuracy of the solution of the Duffing equation and for the first time to study chaotic behavior [7]. The error is only 0.0001% in four terms, which corresponds closely to our results.
REFERENCES
1. G. Adomian and R. Rach, Purely Nonlinear Equations, Comp. and Math. with Applic., 20, (1-3) (1990).
2. G. Adomian, Decomposition Solution for Duffing and Van der Pol Oscillators, Math. and Math. Sc., 9, (731-32) (1986).
3. G. Adomian, R. Rach, R. Meyers, An Efficient Methodology for the Physical Sciences, Kybernetes, 20, (1991).
4. G. Adomian, Nonlinear Stochastic Operator Equations, Academic Press (1986).
5. G. Adomian, A Review of the Decomposition Method, Comp. and Math. with Applic., 21, (101-127) (1991).
6. S. N. Venkatarangan and K. Rajalakshmi, A Modification of Adomian's Solution for Nonlinear Oscillatory Systems, submitted for publication.
7. F. Jin-Quing and Y. Wei-Guang, Adomian's Decomposition Method for the Solutions of the Generalized Duffing Equation and of Its Coupled Systems, Proc. of the 1992 Int. Workshops on Mathematics Mechanization, China Inst. of Atomic Energy.
SUGGESTED READING
1. V. S. Pugachev and I. N. Sinitsyn, Stochastic Differential Systems, John Wiley and Sons (1987).
2. A. M. Yaglom, Stationary Random Functions, R. A. Silverman, trans. and ed., Prentice-Hall (1962).
3. V. S. Pugachev, Theory of Random Functions, Addison-Wesley (1965).
4. A. Blanc-Lapierre and R. Fortet, Theory of Random Functions, J. Gani, transl., Gordon and Breach (1967).
5. J. Hale, Oscillations in Nonlinear Systems, McGraw-Hill (1963).
6. A. Blaquière, Nonlinear System Analysis, Academic (1966).
SOLUTION OF THE DUFFING EQUATION

THE DUFFING EQUATION:
Consider the Duffing equation with variable excitation and constant coefficients α, β, γ:

u'' + αu' + βu + γu³ = δ(t)
u(0) = c0    u'(0) = c1

δ(t) will be written as a series δ(t) = Σ_{n=0}^∞ δ_n t^n. Let L = d²/dt². Then L⁻¹ will be the two-fold integration from 0 to t.

Lu = δ(t) − αu' − βu − γu³

Operating with L⁻¹,

u = c0 + c1t + L⁻¹δ(t) − αL⁻¹u' − βL⁻¹u − γL⁻¹u³

Replace u by Σ_{m=0}^∞ u_m and the nonlinearity u³ by Σ_{n=0}^∞ A_n. (A0 through A10 are listed in Chapter 3 for reference.) A convenient algorithm which also gives us the correct result for this specific case is

u0 = c0 + c1t + L⁻¹δ(t)
u_{m+1} = −αL⁻¹u_m' − βL⁻¹u_m − γL⁻¹A_m
We identify the m-term approximant φ_m = Σ_{n=0}^{m−1} u_n. The solution is the convergent series Σ_{m=0}^∞ u_m, and φ_m is the approximant to the solution. Although our definition of L as only the highest-ordered derivative rather than the entire linear operator avoids difficult Green's functions, we still have integrations of the function δ(t) and the A_n. If δ(t) is a function such as sin ωt, the nonlinearity results in a third power of sin ωt, and we see that a proliferation of terms and computations can occur (see page 254). We observe, however, that we don't calculate u but a rapidly converging approximant φ_m. Since this is the case, we need not use δ(t) but an approximant of its series,

φ_m[δ] = Σ_{n=0}^{m−1} δ_n t^n

The corresponding solution is called the simulant σ_m[u]. It satisfies, in this problem, the equation

σ_m'' + ασ_m' + βσ_m + γσ_m³ = φ_m[δ]

Of course, lim_{m→∞} φ_m[δ] = δ(t) and lim_{m→∞} σ_m[u] = u.
Consider an example with α = β = γ = 1 and δ(t) = e⁻ᵗ sin x + e⁻³ᵗ sin³x. Solution by decomposition will converge to u = e⁻ᵗ sin x. But we choose to approximate e⁻ᵗ by the terms up to but not including the cubic, so e⁻ᵗ ≈ 1 − t + t²/2. For sin x we write x. Now g = x − xt + xt²/2 and L⁻¹g = xt²/2, since we are dropping cubic terms and beyond. Hence,

u0 = sin x − t sin x + L⁻¹g ≈ x − xt + xt²/2
u1 = 0

so we have σ = x − xt + xt²/2 for our "solution", or simulant for our approximant of δ, which, of course, is e⁻ᵗ sin x to the same approximation as used for the sinusoidal and exponential functions. If σ_m is the mth-order simulant, φ_m[u] = φ_m[σ_m]. If we carry more terms for φ_m[δ], we can identify correspondingly more terms of σ_m[u] or, in the limit, u = e⁻ᵗ sin x. As soon as we recognize, or think we recognize, a known series, we can verify whether the equation and conditions are satisfied. If we do not recognize the series, we can verify that computation of more terms, either for the actual solution u with the excitation δ, or for σ_m[u] with the excitation φ_m[δ], yields results which have converged sufficiently for the decimal places of interest, remembering that we are interested in physical problems. We can plot results for φ_m[u], or σ_m[u], for m = 1, 2, 3, ..., N to show the convergence and establish a solution approximant to a sufficient accuracy. We note that for a specified mth-order approximant of the excitation δ, we obtain the mth-order simulant σ_m of the solution. If we approximate to, and including, cubic terms, we get
Suppose we f i s t write e-3' z 1- 3t. We have
yo = (1/2)(1- t + t2 - t3) , BL-'~: since ~ , ( : y l ) = y,3 yI = - 2 ~ I- d yo - ~ - ' y d ' y, - L - ' ~, ~ u ' A ,(y3) Y2 = - 2 ~ - dt
Substitution into the differential equation shows N = 0 so (1/2)e-' is the (exact) solution. Alternatively, we can compute more t e r n and group together terms of the same power. If we approximate e-' with another term of the
series, we get
so that the Two-term approximation q? is given by
so we have another term of e-' . Finally, we can use numerical results with increasing n for %to show that the results are converging to the solution, i.e., there is no further change w i h n the accuracy of our graph or table. If we calculate the one-term approximant 9 ,= y , for the solution using three terms of the series for the forcing function, we get the following results:
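The procedure just described, approximating the excitation by its truncated series and summing a few components, can be sketched as follows (our own illustration; all helper names are ours). We carry e⁻³ᵗ to 16 terms, compute five components with the product-form Adomian polynomials for y³, and compare with the exact solution (1/2)e⁻ᵗ at t = 0.2:

```python
from fractions import Fraction
from math import factorial, exp

N = 16   # order to which the excitation series is carried

def linv(c):   # two-fold integration from 0 to t
    out = [Fraction(0)] * (len(c) + 2)
    for n, a in enumerate(c):
        out[n + 2] = a / ((n + 1) * (n + 2))
    return out

def deriv(c):
    return [c[n] * n for n in range(1, len(c))]

def mul(a, b):
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def add(*ps):
    n = max(len(p) for p in ps)
    return [sum(p[k] if k < len(p) else 0 for p in ps) for k in range(n)]

g = [Fraction((-3) ** n, factorial(n)) for n in range(N)]   # e^{-3t} series
y = [add([Fraction(1, 2), Fraction(-1, 2)], linv(g))]       # y0 = 1/2 - t/2 + Linv g
for m in range(4):
    # A_m for y^3: sum over i + j + k = m of y_i y_j y_k
    Am = [Fraction(0)]
    for i in range(m + 1):
        for j in range(m + 1 - i):
            Am = add(Am, mul(mul(y[i], y[j]), y[m - i - j]))
    y.append([-c for c in linv(add([2 * c for c in deriv(y[m])], y[m],
                                   [8 * c for c in Am]))])

phi = add(*y)
t = 0.2
approx = sum(float(c) * t**n for n, c in enumerate(phi[:N]))
print(approx, 0.5 * exp(-t))   # the exact solution is (1/2) e^{-t}
```

The two printed values agree closely, illustrating the rapid convergence of the simulants for small t.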
Convergence to the exact solution will be best if the actual forcing function, or at least more terms of its series, are used. In practice, φ_n will consist of very few terms. Another example is given by
The solution by decomposition with the above excitation is u = cos t, obtained more easily if we approximate the series represented by the above excitation. (Those already familiar with the asymptotic decomposition method will see immediately that the solution for t → ∞ is also cos t. Hence, u = cos t is the solution for all t.) There is no problem in carrying out solutions where any or all of the parameters α, β, γ, as well as the excitation δ, are time-varying functions.
ASYMPTOTIC DECOMPOSITION FOR THE DUFFING EQUATION: Consider the case of unity coefficients for convenience:

u'' + u' + u + u³ = δ(t)
Since we are interested in the solution as t → ∞, we introduce the notation

Ω[u] = lim_{t→∞} u(t) = lim_{t→∞} lim_{m→∞} φ_m

for the asymptotic solution. Of course, the φ_m must be computed by the asymptotic decomposition method for the above identity to be valid.
EXAMPLE: Constant δ. Writing u³ = δ − u − u' − u'' and proceeding by asymptotic decomposition, we obtain a series in decreasing powers of δ. Hence, if δ >> 1, Ω[u] ≈ δ^{1/3}; if δ < 1, the series diverges. Thus we get results for δ ≥ 1 in this approach. Asymptotic decomposition is applicable to linear as well as nonlinear equations; hence, writing instead u = δ − u³ − u' − u'', we notice that the series converges for 0 < δ < 1.

It appears therefore that the magnitude of the excitation must be considered for a convergent result. Thus if u'' + u' + u + u³ = δ for 0 < δ < 1, we solve for the u, i.e., we write u = δ − u³ − u' − u''; while if δ > 1, we write u³ = δ − u − u' − u''. Thus we have a choice in asymptotic decomposition which we can use to advantage to obtain a convergent series. (We also notice that if δ(t) = sin t, which is between 0 and 1, Ω[u] behaves like a Fourier sine series.)
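For constant δ the asymptotic solution is the real root of u + u³ = δ, and the ordering u = δ − u³ generates its decomposition series. A small check of ours (function names are our choices), comparing the partial sum with a bisection root for δ = 0.3 < 1:

```python
def asymptotic_series(delta, terms):
    """Components u0 = delta, u_{n+1} = -A_n, with A_n the Adomian
    polynomials of u^3 (sum over i + j + k = n of u_i u_j u_k)."""
    u = [delta]
    for n in range(terms - 1):
        An = sum(u[i] * u[j] * u[n - i - j]
                 for i in range(n + 1) for j in range(n + 1 - i))
        u.append(-An)
    return sum(u)

def root(delta):   # reference value by bisection of u + u^3 - delta on [0, 1]
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if mid + mid**3 < delta:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

d = 0.3
print(asymptotic_series(d, 25), root(d))   # both near 0.278
```

For δ above the radius of convergence of this series, the other ordering (u = (δ − u − u' − u'')^{1/3}, leading term δ^{1/3}) is the convergent choice, as stated in the text.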
STAGGERED SUMMATION TECHNIQUE: Let's first consider the harmonic oscillator with variable excitation to illustrate the procedure:

u'' + αu = p(t)
u(0) = c0    u'(0) = c1

with α a constant. Assume p(t) = Σ_{n=0}^∞ p_n t^n. We now will write u_m as a series; thus,

u_m = Σ_{n=0}^∞ a_n^(m) t^n

With the decomposition u = Σ_{m=0}^∞ u_m, we have u_m = −L⁻¹αu_{m−1} for m ≥ 1 and u0 = c0 + c1t + L⁻¹p(t) = Σ_{n=0}^∞ a_n^(0) t^n. Hence

a_0^(0) = c0,  a_1^(0) = c1,  a_{n+2}^(0) = p_n/((n+1)(n+2))

The solution of the equation we began with is u = Σ_{m=0}^∞ u_m = Σ_{m=0}^∞ Σ_{n=0}^∞ a_n^(m) t^n. If we let a_n = Σ_{m=0}^∞ a_n^(m), we can write u = Σ_{n=0}^∞ a_n t^n. We now rearrange results by staggered summation as shown in the following tabulation.
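The tabulation can be programmed directly. A sketch of ours (the function name `staggered` is our choice), checked against u'' + u = 0, u(0) = 1, u'(0) = 0, whose solution is cos t:

```python
from fractions import Fraction
from math import cos, factorial

def staggered(alpha, p, c0, c1, N, M):
    """u'' + alpha*u = p(t): components u_m as coefficient rows a[m][n],
    then staggered summation a_n = sum_m a[m][n]."""
    a0 = [Fraction(0)] * N
    a0[0], a0[1] = Fraction(c0), Fraction(c1)
    for n, pn in enumerate(p):
        if n + 2 < N:
            a0[n + 2] = Fraction(pn) / ((n + 1) * (n + 2))
    rows = [a0]
    for m in range(M):
        prev = rows[-1]
        nxt = [Fraction(0)] * N
        for n in range(N - 2):
            nxt[n + 2] = -Fraction(alpha) * prev[n] / ((n + 1) * (n + 2))
        rows.append(nxt)
    return [sum(r[n] for r in rows) for n in range(N)]

a = staggered(1, [], 1, 0, 20, 12)
# Component m contributes (-1)^m t^{2m}/(2m)!; the staggered sums are the
# cos t coefficients:
assert all(a[2 * k] == Fraction((-1) ** k, factorial(2 * k)) for k in range(10))
t = 1.3
print(sum(float(c) * t**n for n, c in enumerate(a)), cos(t))
```

Each component here contributes exactly one diagonal of the tabulation, which is what the staggered summation collects.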
Alternatively, the solution components can be computed as follows:

u1 = −L⁻¹αu0 = Σ_{n=0}^∞ a_n^(1) t^n,  where a_{n+2}^(1) = −α a_n^(0)/((n+1)(n+2))
u2 = −L⁻¹αu1 = Σ_{n=0}^∞ a_n^(2) t^n,  where a_{n+4}^(2) = −α a_{n+2}^(1)/((n+3)(n+4))

Continuing, since u = Σ_{m=0}^∞ u_m, we can write u = Σ_{n=0}^∞ a_n t^n, which is the previously derived result but may be more convenient for programming. By staggered summation,

a_n = Σ_{m=0}^{⌊n/2⌋} a_n^(m)

since a_n^(m) = 0 for m > ⌊n/2⌋. We note that we can write the a_n^(m) in terms of the a_n^(0), as expected; since α is constant-valued,

a_{n+2m}^(m) = (−1)^m α^m a_n^(0) / Π_{ν=1}^{2m} (n+ν)
HARMONIC OSCILLATOR WITH VARIABLE EXCITATION AND VARIABLE COEFFICIENT: We have

L⁻¹Lu = L⁻¹P(t) − L⁻¹α(t)u(t)
u = u(0) + u'(0)t + L⁻¹P(t) − L⁻¹α(t)u(t)
u0 = u(0) + u'(0)t + L⁻¹P(t) = c0 + c1t + L⁻¹P(t)

which we can write as u0 = Σ_{n=0}^∞ a_n^(0) t^n, where, with P(t) = Σ_{n=0}^∞ P_n t^n,

a_0^(0) = c0,  a_1^(0) = c1,  a_{n+2}^(0) = P_n/((n+1)(n+2))

The following components are derived from

u_{m+1} = −L⁻¹α(t)u_m

Writing α(t) = Σ_{n=0}^∞ α_n t^n, each product α(t)u_m is a Cauchy product of series, and each component u_m = Σ_{n=0}^∞ a_n^(m) t^n follows by two-fold integration. Substituting and collecting coefficients at every stage, we finally have the decomposition solution

u = Σ_{m=0}^∞ u_m = Σ_{n=0}^∞ a_n t^n

where a_n = Σ_m a_n^(m). This result can also be rearranged by the staggered summation procedure, where for m = 0 the coefficients come from u0 and for m ≥ 1 each set of coefficients follows from the preceding one.
HARMONIC OSCILLATOR WITH VARIABLE EXCITATION AND A DAMPING TERM: Let α, β be numerical constants and χ(t) = Σ_{n=0}^∞ χ_n t^n. Specify

u'' + αu' + βu = χ(t)    u(0) = c0    u'(0) = c1

to get

u0 = c0 + c1t + L⁻¹χ(t) = Σ_{n=0}^∞ a_n^(0) t^n

where

a_0^(0) = c0
a_1^(0) = c1
a_{n+2}^(0) = χ_n/((n+1)(n+2))

Continuing,

u1 = −L⁻¹αu'_0 − L⁻¹βu_0

Since u'_0 = Σ_{n=0}^∞ (n+1)a_{n+1}^(0) t^n = Σ_{n=0}^∞ b_n^(0) t^n, where b_n^(0) = (n+1)a_{n+1}^(0), two-fold integration gives

a_{n+2}^(1) = −(αb_n^(0) + βa_n^(0))/((n+1)(n+2))

The next component u2 is given by u2 = −L⁻¹αu'_1 − L⁻¹βu_1 and, in general, noting that b_n^(m−1) = (n+1)a_{n+1}^(m−1), we can combine these terms to write

a_{n+2}^(m) = −(αb_n^(m−1) + βa_n^(m−1))/((n+1)(n+2))

Thus, for m = 0 the coefficients a_n^(0) are determined by the initial conditions and the excitation, and for m > 0 each set of coefficients follows from the preceding one. Finally, we write

u = Σ_{m=0}^∞ u_m = Σ_{n=0}^∞ a_n t^n,  a_n = Σ_{m=0}^∞ a_n^(m)

which is the solution by decomposition.
EXERCISE: For a damped linear oscillator described by d²u/dt² + 2du/dt + u = 0 with u(0) = a and u'(0) = 0, show that the solution by decomposition is u = a(1 + t)e⁻ᵗ.
EXERCISE: For the undamped nonlinear (Duffing) oscillator described by Lu + u + u³ = 0 with u(0) = a and u'(0) = 0, show that the two-term solution is

φ2 = a − (a + a³)t²/2
EXERCISE: Consider u'' + αu' + βu + γu³ = g(t) with u(0) = a, u'(0) = 0. Assume α = 0, β = γ = 1. For g(t) assume sin t, approximate it with the first two terms of the sine series, and show the resulting decomposition approximant.
PROLIFERATION OF TERMS: In nonlinear equations, where the initial component of the decomposition solution consists of several terms, the nonlinearity may result in a proliferation of terms and consequent increased computation unless proper steps are taken. Also it is sometimes convenient to write the inhomogeneous term as a power series to simplify integrations. It may well be the case that the excitation is known as a power series representation. We consider such a case here since it is a "worst case scenario" from the point of view of proliferation of terms. If one seeks the complete solution (steady-state plus transient), the number of terms in each of the components $u_m$ for $m \geq 1$ can increase rapidly because of the nonlinearity. A number of practical alternatives are possible to solve such apparent problems, and we will show that rapid convergence to the solution will be observed. Summations are utilized to organize the derived results. We now consider the Duffing equation with variable excitation and constant coefficients
$$u'' + \alpha u' + \beta u + \gamma u^3 = g(t)$$
We assume given conditions $u(0) = c_0$ and $u'(0) = c_1$, and assume that $g(t) = \sum_{n=0}^{\infty} g_n t^n$ since it is more general, will frequently simplify integrations (e.g., if $g(t) = \cos \omega t$), and is a worst case from the point of view of computational difficulty arising from the action of the nonlinearity on the initial term ($u_0 = u(0) + tu'(0) + L^{-1}g(t)$). Let $L = d^2/dt^2$ and define $L^{-1}$ as the twofold definite integration from 0 to $t$. We have
$$Lu = g(t) - \alpha u' - \beta u - \gamma u^3$$
Operating with $L^{-1}$,
we identify the initial term
$$u_0 = c_0 + c_1 t + L^{-1}g(t)$$
which we will write as
$$u_0 = \sum_{n=0}^{\infty} a_n^{(0)} t^n$$
where the coefficients are known: $a_0^{(0)} = c_0$, $a_1^{(0)} = c_1$, and $a_{n+2}^{(0)} = g_n/(n+1)(n+2)$. We can now write
$$u = u_0 - L^{-1}\alpha u' - L^{-1}\beta u - L^{-1}\gamma u^3$$
and assume $u = \sum_{n=0}^{\infty} u_n$ and $u^3 = \sum_{n=0}^{\infty} A_n$, where $A_n$, or equivalently $A_n\{u^3\}$, is calculated for the function $u^3$. We can also write both $u$ and $u^3$ as sums of the appropriate $A_n$; then $u$ becomes simply $u = \sum_{n=0}^{\infty} u_n$ and $u'$ becomes $(d/dt)\sum_{n=0}^{\infty} u_n$. The $A_n$ polynomials for $u^3$ are
$$A_0 = u_0^3,\quad A_1 = 3u_0^2 u_1,\quad A_2 = 3u_0^2 u_2 + 3u_0 u_1^2,\quad \ldots$$
Algorithms have been previously given to generate the $A_n$ for general nonlinearities; however, the above form is also convenient for polynomials. We can now write the components. Since $u_0$ is given as a power series, we can use the following result: If $u = \sum_{n=0}^{\infty} a_n t^n$, then $f(u) = \sum_{n=0}^{\infty} A_n t^n$, where the $A_n(a_0, \ldots, a_n)$ are
simply the $A_n$ polynomials expressed in terms of the coefficients $a_n$ instead of the components $u_0, u_1, \ldots$. Continuing, we compute $u_0'$ and $u_1$
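For the cubic nonlinearity this result reduces to a double Cauchy product of coefficient lists, which is easy to mechanize. A minimal Python sketch (the function names are illustrative, not from the text):

```python
def series_mul(a, b):
    """Cauchy product of two truncated power series given as coefficient lists."""
    c = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

def cube_coefficients(a):
    """Coefficients A_n of u^3 for u = sum_n a_n t^n:
    A_n = sum_{v=0}^{n} sum_{mu=0}^{v} a_{n-v} a_{v-mu} a_{mu}."""
    return series_mul(series_mul(a, a), a)
```

For example, `cube_coefficients([1, 1])` returns `[1, 3, 3, 1]`, the coefficients of $(1+t)^3$.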
using the above result. Thus
$$u_0' = \sum_{n=0}^{\infty} (n+1)\,a_{n+1}^{(0)}\, t^n = \sum_{n=0}^{\infty} b_n^{(0)} t^n$$
and
$$u_0^3 = \sum_{n=0}^{\infty} c_n^{(0)} t^n$$
where
$$c_n^{(0)} = \sum_{\nu=0}^{n} \sum_{\mu=0}^{\nu} a_{n-\nu}^{(0)}\, a_{\nu-\mu}^{(0)}\, a_{\mu}^{(0)}$$
We see that $u_0^3$ is known in terms of the $a_n^{(0)}$, and that we can write the equation for $u_1$ in the convenient form
$$u_1 = -L^{-1}\alpha u_0' - L^{-1}\beta u_0 - L^{-1}\gamma u_0^3 = t^2 \sum_{n=0}^{\infty} a_n^{(1)} t^n$$
where
$$a_n^{(1)} = -\frac{\alpha\, b_n^{(0)} + \beta\, a_n^{(0)} + \gamma\, c_n^{(0)}}{(n+1)(n+2)}$$
Going on to $u_2$,
$$u_2 = -L^{-1}\alpha u_1' - L^{-1}\beta u_1 - L^{-1}\gamma A_1$$
Since $u_1 = t^2 \sum_{n=0}^{\infty} a_n^{(1)} t^n$ with the $a_n^{(1)}$ known, and $A_1 = 3u_0^2 u_1$ can again be expanded by Cauchy products, the integrations are elementary and give
$$u_2 = t^3 \sum_{n=0}^{\infty} a_n^{(2)} t^n$$
For $A_2$ we have $A_2 = 3u_0^2 u_2 + 3u_0 u_1^2$, which we write as a power series in the same way, and finally
$$u_3 = -L^{-1}\alpha u_2' - L^{-1}\beta u_2 - L^{-1}\gamma A_2$$
where, from $u_2 = t^3 \sum_{n=0}^{\infty} a_n^{(2)} t^n$, we have
$$u_2' = \sum_{n=0}^{\infty} (n+3)\,a_n^{(2)}\, t^{n+2}$$
SOLUTION OF THE DUFFING EQUATION
Thus,
$$u_m = \sum_{n=0}^{\infty} a_n^{(m)}\, t^{n+m+1}, \qquad m \geq 1$$
Continuing to $u_3, u_4, \ldots$, we can write the $m$th term in the same form. Summarizing, for $m = 0$,
$$u_0 = \sum_{n=0}^{\infty} a_n^{(0)} t^n$$
The decomposition solution is $u = \sum_{m=0}^{\infty} u_m$, or $u = u_0 + \sum_{m=1}^{\infty} u_m$. We can also write the solution by staggered summation, collecting all contributions to each power of $t$, where for $m = 1$,
$$a_n^{(1)} = -\frac{\alpha\, b_n^{(0)} + \beta\, a_n^{(0)} + \gamma\, c_n^{(0)}}{(n+1)(n+2)}, \qquad b_n^{(0)} = (n+1)\,a_{n+1}^{(0)}$$
and for $m \geq 2$,
$$a_n^{(m)} = -\frac{\alpha\, b_n^{(m-1)} + \beta\, a_{n-1}^{(m-1)} + \gamma\, c_{n-1}^{(m-1)}}{(n+m)(n+m+1)}$$
where $b_n^{(m-1)} = (n+m)\,a_n^{(m-1)}$, $a_{-1}^{(m-1)} = c_{-1}^{(m-1)} = 0$, and the $c_n^{(m-1)}$ are the Cauchy-product coefficients of $A_{m-1}$ aligned to the same powers of $t$.
ALGORITHMS FOR THE DUFFING EQUATION: We consider now explicit solution of the Duffing equation
$$u'' + \alpha u' + \beta u + \gamma u^3 = \delta(t)$$
given initial conditions $u(0) = c_0$ and $u'(0) = c_1$. We assume $\delta(t) = \sum_{n=0}^{\infty} \delta_n t^n$. Our objective is an efficient procedure for calculation of the decomposition series. We have
$$u = u_0 - L^{-1}\alpha u' - L^{-1}\beta u - L^{-1}\gamma u^3$$
where
$$u_0 = u(0) + tu'(0) + L^{-1}\delta(t) = c_0 + c_1 t + \int_0^t\!\!\int_0^t \sum_{n=0}^{\infty} \delta_n t^n \,dt\,dt$$
We note that if the input is a sinusoidal function, integrations will soon become difficult because of the effect of the nonlinearity. The result for $\delta(t) = \sin t$ is found by setting $\delta_{2n+1} = (-1)^n/(2n+1)!$ and $\delta_{2n} = 0$. For $\cos t$, let $\delta_{2n} = (-1)^n/(2n)!$ and $\delta_{2n+1} = 0$. For $\delta(t) = c_0\cos t + c_1\sin t$, we can let $\delta_{2n} = (-1)^n c_0/(2n)!$ and $\delta_{2n+1} = (-1)^n c_1/(2n+1)!$. Finally, in many cases, a finite series will be sufficient for the required accuracy and we can write $\delta(t) = \sum_{n=0}^{N} c_n t^n$, i.e., $\delta_n = c_n$ for $0 \leq n \leq N$ and $\delta_n = 0$ for $n > N$. Conveniently, $\delta(t)$, assumed to be uniformly convergent, can be programmed as an $N$-component vector with the series truncated to the precision required. Terms after $u_0$ are given by $u_m$ for $m \geq 1$, where $A_{m-1} = A_{m-1}\{u^3\}$ is the Adomian polynomial of the cubic nonlinearity; $\varphi_m = \sum_{n=0}^{m-1} u_n$ is the approximant to the solution, $\varphi_{m+1} = \varphi_m + u_m$, and $\lim_{m\to\infty}\varphi_m = u$. Now write the initial term $u_0$ in the convenient form
$$u_0 = \sum_{n=0}^{\infty} a_n^{(0)} t^n$$
where $a_0^{(0)} = c_0$, $a_1^{(0)} = c_1$, and $a_{n+2}^{(0)} = \delta_n/(n+1)(n+2)$.
The approximant $\varphi_1$ will be written
$$\varphi_1 = \sum_{n=0}^{\infty} b_n^{(1)} t^n$$
where $b_n^{(1)} = a_n^{(0)}$. The $u_1$ component can now be calculated and therefore the $\varphi_2$ approximant, since $\varphi_2 = \varphi_1 + u_1$.
Computing $A_0$ using the given algorithm, $A_0 = u_0^3$, which we know from our general algorithms for $f(u)$ as well. Using our result for $u^3$, we can find $A_0$ as the Cauchy product, which we will write as
$$A_0 = \sum_{n=0}^{\infty} A_n^{(0)} t^n$$
where
$$A_n^{(0)} = \sum_{\nu=0}^{n} \sum_{\mu=0}^{\nu} a_{n-\nu}^{(0)}\, a_{\nu-\mu}^{(0)}\, a_{\mu}^{(0)}$$
Since $u_1 = -L^{-1}\alpha u_0' - L^{-1}\beta u_0 - L^{-1}\gamma A_0$, we can now write, in succinct form,
$$u_1 = t^2 \sum_{n=0}^{\infty} a_n^{(1)} t^n$$
where
$$a_n^{(1)} = -\frac{\alpha\,(n+1)\,a_{n+1}^{(0)} + \beta\, a_n^{(0)} + \gamma\, A_n^{(0)}}{(n+1)(n+2)}$$
We now have $\varphi_2 = \varphi_1 + u_1$, where $\varphi_1 = \sum_{n=0}^{\infty} b_n^{(1)} t^n$, and hence
$$\varphi_2 = \sum_{n=0}^{\infty} b_n^{(2)} t^n$$
where $b_0^{(2)} = b_0^{(1)}$, $b_1^{(2)} = b_1^{(1)}$, and $b_{n+2}^{(2)} = b_{n+2}^{(1)} + a_n^{(1)}$.
Proceeding to the $u_2$ component and therefore the $\varphi_3$ approximant,
$$u_2 = -L^{-1}\alpha u_1' - L^{-1}\beta u_1 - L^{-1}\gamma A_1$$
so that from $u_1 = t^2 \sum_{n=0}^{\infty} a_n^{(1)} t^n$ we write $u_1' = \sum_{n=0}^{\infty} (n+2)\,a_n^{(1)} t^{n+1}$, and $A_1 = 3u_0^2 u_1$, expanded as the Cauchy product of the coefficients $a^{(0)}$ and $a^{(1)}$. Carrying out the elementary integrations,
$$u_2 = t^3 \sum_{n=0}^{\infty} a_n^{(2)} t^n$$
Now we have the approximant $\varphi_3 = \varphi_2 + u_2$,
$$\varphi_3 = \sum_{n=0}^{\infty} b_n^{(3)} t^n$$
with $b_0^{(3)} = b_0^{(2)}$, $b_1^{(3)} = b_1^{(2)}$, $b_2^{(3)} = b_2^{(2)}$, and $b_{n+3}^{(3)} = b_{n+3}^{(2)} + a_n^{(2)}$.
Proceeding to the $u_3$ component and consequently the $\varphi_4$ approximant, we have
$$u_3 = -L^{-1}\alpha u_2' - L^{-1}\beta u_2 - L^{-1}\gamma A_2$$
using $A_2 = 3u_0^2 u_2 + 3u_0 u_1^2$, which we will write as a Cauchy-product series in the coefficients $a^{(0)}$, $a^{(1)}$, $a^{(2)}$. Thus
$$u_3 = t^4 \sum_{n=0}^{\infty} a_n^{(3)} t^n$$
The next approximant is $\varphi_4 = \varphi_3 + u_3$,
$$\varphi_4 = \sum_{n=0}^{\infty} b_n^{(4)} t^n$$
with $b_0^{(4)} = b_0^{(3)}$, $b_1^{(4)} = b_1^{(3)}$, $b_2^{(4)} = b_2^{(3)}$, $b_3^{(4)} = b_3^{(3)}$, and $b_{n+4}^{(4)} = b_{n+4}^{(3)} + a_n^{(3)}$. Continuing to the $u_4$ component and the $\varphi_5$ approximant,
$$u_4 = -L^{-1}\alpha u_3' - L^{-1}\beta u_3 - L^{-1}\gamma A_3$$
for which we need $A_3 = 3u_0^2 u_3 + 6u_0 u_1 u_2 + u_1^3$, written as a series in the same way. Thus the $u_4$ component is given by
$$u_4 = t^5 \sum_{n=0}^{\infty} a_n^{(4)} t^n$$
and we have the next approximant $\varphi_5 = \varphi_4 + u_4$, which we write as
$$\varphi_5 = \sum_{n=0}^{\infty} b_n^{(5)} t^n$$
with $b_n^{(5)} = b_n^{(4)}$ for $n \leq 4$ and $b_{n+5}^{(5)} = b_{n+5}^{(4)} + a_n^{(4)}$.
Computing the $u_5$ component and $\varphi_6$ approximant,
$$u_5 = -L^{-1}\alpha u_4' - L^{-1}\beta u_4 - L^{-1}\gamma A_4$$
for which we need
$$A_4 = 3u_0^2 u_4 + 6u_0 u_1 u_3 + 3u_0 u_2^2 + 3u_1^2 u_2$$
where each product is expanded as a double Cauchy product in the coefficients $a^{(0)}, \ldots, a^{(4)}$. $u_5$ can now be written, and finally
$$u_5 = t^6 \sum_{n=0}^{\infty} a_n^{(5)} t^n$$
Thus the six-term approximant is $\varphi_6 = \varphi_5 + u_5$, which we conveniently write as
$$\varphi_6 = \sum_{n=0}^{\infty} b_n^{(6)} t^n$$
with $b_n^{(6)} = b_n^{(5)}$ for $n \leq 5$ and $b_{n+6}^{(6)} = b_{n+6}^{(5)} + a_n^{(5)}$.
We now have a six-term approximant to the solution and, if necessary, can continue in the same manner. (The algorithm can easily be programmed for machine computation.)
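One way such a program can be sketched in Python: represent each component $u_m$ as a coefficient list and apply the component recursion directly (this uses the component form rather than the staggered bookkeeping; all names are illustrative, not from the text):

```python
def pmul(a, b):
    """Cauchy product of coefficient lists."""
    c = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

def padd(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0.0) + (b[i] if i < len(b) else 0.0)
            for i in range(n)]

def L_inv(p):
    """Twofold integration from 0: t^n -> t^(n+2)/((n+1)(n+2))."""
    return [0.0, 0.0] + [c / ((n + 1) * (n + 2)) for n, c in enumerate(p)]

def ddt(p):
    return [n * c for n, c in enumerate(p)][1:] or [0.0]

def duffing(alpha, beta, gamma, g, c0, c1, M, order=10):
    """M-component decomposition approximant of
    u'' + alpha u' + beta u + gamma u^3 = g(t), u(0)=c0, u'(0)=c1,
    with g given as a coefficient list; returns the coefficients of phi_M."""
    u = [padd([c0, c1], L_inv(g))[:order]]           # u_0
    for m in range(M - 1):
        # Adomian polynomial for u^3: A_m = sum over i+j+k=m of u_i*u_j*u_k
        A = [0.0]
        for i in range(m + 1):
            for j in range(m + 1 - i):
                A = padd(A, pmul(pmul(u[i], u[j]), u[m - i - j]))
        rhs = padd(padd([alpha * c for c in ddt(u[m])],
                        [beta * c for c in u[m]]),
                   [gamma * c for c in A])
        u.append([-c for c in L_inv(rhs)][:order])   # u_{m+1}
    phi = [0.0]
    for comp in u:
        phi = padd(phi, comp)
    return phi[:order]
```

As a check, with $\alpha = \beta = \gamma = 1$, $u(0) = 0$, $u'(0) = 1$, and $g$ taken as the series of $\cos t + \sin^3 t$ (so that the exact solution is $\sin t$), a six-component run reproduces the sine series through $t^5$.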
SUMMARY: Decomposition components are given by:
$$u_0 = c_0 + c_1 t + L^{-1}\delta(t)$$
$$u_{m+1} = -L^{-1}\alpha u_m' - L^{-1}\beta u_m - L^{-1}\gamma A_m, \qquad m \geq 0$$
The coefficient-generating algorithms are those derived above, with
CHAPTER 12
$$\lim_{m\to\infty} \varphi_m = u$$
REMARKS: A method of approximation does not need infinite precision. We need to know that we have uniform convergence of input and coefficient series, and generally $\delta(t)$ can be truncated after a few terms for sufficient precision. Computation will investigate the effect on increased accuracy of adding terms to the $\delta(t)$ series to develop stopping rules. An example is instructive. Consider
$$u'' + u' + u + u^3 = g(t),\qquad u(0) = 0,\quad u'(0) = 1$$
The solution is $\sin t$. If we calculate $u(t)$ by decomposition, we find the approximant $u(t) = t - (1/3!)t^3$, or $\sin t$ to the same approximation as the series for $g$. If we specify $g$ further, we can get more terms of $u$. Practically, this means that if we are interested in accuracy to $n$ decimal places, but further calculation results in no further change in the $n$ places, the solution is considered known and will be verified to satisfy the equation and the given conditions. Because of the generally rapid convergence, which we have seen in all work on the subject, we would expect that most of the solution is in the early terms of the decomposition series. If we have the $\varphi_m$ approximant to three decimal places and further change exists only in the fourth or fifth place, our result may be sufficiently accurate, so that convergence to a required precision can lead to a stopping rule. As we gain insight from our computation, we can develop stopping rules; investigate convergence rate and error; use acceleration techniques (Padé, Euler, Shanks); check the solution by modified decomposition or asymptotic decomposition and by verification that the solution satisfies the given equation and conditions; determine the region of convergence in initial-value problems with possible use of analytic continuation, our one-step procedure, and other special techniques. Because of our choice of $L$ and $R$ with $R$ a lower-order operator than $L$, we always have convergence, as we saw in evaluation of matrix inverses with $\|L^{-1}R\| < 1$, and our integrations are trivial, i.e., we have Green's functions of unity. Adomian and Rach [1] have previously discussed an initial-value problem with a simple diffusion equation with $u(x,0) = f(x)$ where the operator in the solution series $u(x,t) = \sum_{m=0}^{\infty}(L_t^{-1}L_x)^m f(x)$ annihilates $f(x)$ at a finite $m$; the result then does not satisfy the equation and conditions. No closed-form solutions exist for the Duffing equation, and solutions have always been made using perturbation, discretized methods, etc. The results have shown multiple oscillations, not only at the excitation frequency $\omega$ but also at subharmonics $\omega/n$ or superharmonics $n\omega$ with $n = 1, 2, \ldots$. If the excitation contains several terms, we get combination frequencies also. The nonlinearity causes complex interactions among the input terms; however, the series captures all of the actual result. One could plot the result and do a Fourier analysis to see components in the output. With non-conservative systems where the $u'$ term is non-zero, one would not in general expect sinusoidal outputs unless $g(t)$ compensated appropriately for the damping. We must be certain that we have a correctly modelled problem whose solution is consistent with the given conditions, and then solve it to see if the harmonics do exist in fact.
Finally, solutions are unstable against small changes which can result from round-off error or linearization between grid points in the usual computer methods.
EXAMPLES: 1) To show that the decomposition method yields correct solutions to the degree of approximation that we use for the excitation, we will begin with $u = \sin t \approx t - t^3/3!$ and substitute into the Duffing equation $u'' + u' + u + u^3 = g(t)$. Dropping terms greater than $t^3$, we have
$$g(t) = 1 - \frac{t^2}{2} + \frac{5t^3}{6}$$
Then $u_0 = t + L^{-1}g = t + t^2/2 + \cdots$ to the order of $g$, and the subsequent components cancel the extraneous terms; carrying the decomposition through $t^3$,
$$\varphi = t - \frac{t^3}{3!}$$
or $\sin t$ to the order of $g$. We now have the solution to the same degree of approximation as $g$. Thus, given the problem $u'' + u' + u + u^3 = g(t)$ where $g(t) = 1 - t^2/2 + 5t^3/6$ with $u(0) = 0$ and $u'(0) = 1$, we get $u = \sin t$ to the approximation $t - t^3/3!$. As the forcing function is approximated more closely, our series for $u$ approaches the series for $\sin t$ more closely. Substitution of the approximant into the equation must satisfy the equation to that degree of approximation and satisfy the given conditions as well.
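The consistency claim is checkable mechanically: substituting the truncated $u = t - t^3/3!$ into the left side of the Duffing equation and discarding terms above $t^3$ recovers $g$ to the same order. A sketch (the polynomial helpers are illustrative):

```python
def pmul(a, b):
    """Cauchy product of coefficient lists."""
    c = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

def ddt(p):
    """Termwise derivative of a coefficient list."""
    return [n * c for n, c in enumerate(p)][1:] or [0.0]

u = [0.0, 1.0, 0.0, -1.0 / 6.0]          # u = t - t^3/3!, sin t through t^3
g = [0.0] * 4
for term in (ddt(ddt(u)), ddt(u), u, pmul(pmul(u, u), u)):
    for n, c in enumerate(term[:4]):
        g[n] += c                         # g = u'' + u' + u + u^3 through t^3
```

The computed coefficients are those of $1 - t^2/2 + 5t^3/6$ through $t^3$.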
2) Let $u = e^{-t} + \sin t$ and approximate $u$ with the first two terms of each series. Thus $u = 1 - t^3/3!$, so that $u' = -t^2/2$ and $u'' = -t$. Substituting in the Duffing equation
$$u'' + u' + u + u^3 = g(t)$$
we get $g = 2 - t$, since other terms exceed $t^3$ when operated on by $L^{-1}$. Thus $L^{-1}g = t^2 - t^3/3!$. Since $u(0) = 1$ and $u'(0) = 0$,
$$u_0 = 1 + t^2 - t^3/3!$$
To get $u_1$, we compute $u_1 = -L^{-1}u_0' - L^{-1}u_0 - L^{-1}u_0^3$ by the same elementary integrations; the extraneous $t^2$ term cancels, $u_3$ is zero to the same approximation, and we have $\varphi \approx u$. Thus, given the problem
$$u'' + u' + u + u^3 = 2 - t,\qquad u(0) = 1,\quad u'(0) = 0$$
we get the approximant $1 - t^3/3!$, which is correct even if we do not see that this is $(1 - t) + (t - t^3/3!)$ and recognize or guess $e^{-t} + \sin t$.

A CONVENIENT FORM OF THE SOLUTION OF THE DUFFING
EQUATION IN ASCENDING POWERS OF T: Considering the Duffing equation $u'' + \alpha u' + \beta u + \gamma u^3 = g(t)$ where $\alpha, \beta, \gamma$ are constants, $u(0) = c_0$, $u'(0) = c_1$, and $g = \sum_{n=0}^{\infty} g_n t^n$, we calculate the solution through the $t^6$ term, which should generally be sufficient. Letting $L = d^2/dt^2$, $R = \alpha(d/dt) + \beta$, and $Nu = \gamma u^3$, we have in operator format $Lu + Ru + Nu = g$. Decomposition results in $u = \sum_{n=0}^{\infty} u_n$ with
the following components:
$$u_0 = c_0 + c_1 t + L^{-1}g,\qquad u_{n+1} = -L^{-1}Ru_n - L^{-1}A_n$$
where the $A_n$ for $u^3$ have been given by convenient algorithms. Thus, we can continue to obtain $u_2, u_3, \ldots$. However, it is most convenient to write the solution in ascending powers of $t$ as
$$u = \sum_{k=0}^{\infty} c_k t^k$$
where $c_0$ and $c_1$ are given and the remaining $c_k$ follow from the recursions. The sum $\sum_{k=0}^{6} c_k t^k$ is the solution through the $t^6$ term, and if we have $c_0$ or $c_1$ equal to zero, the result becomes quite simple.
RESPONSE OF NONLINEAR STOCHASTIC OPERATORS: Random vibrations arise, e.g., in space structures and buildings subjected to seismic events. Our objective will be consideration of randomness in physical systems, which are generally nonlinear, without the use of perturbation or linearization which may prevent our seeing real possibilities of catastrophic failure. We will also consider parameters and excitations without the usual restrictive assumptions which are customary but do not necessarily conform to physical reality. For the present, assume the equation $u'' + \alpha u' + \beta u + \gamma f(u) = g$, or $Lu + Ru + Nu = g$, with $L = d^2/dt^2$, $R = \alpha(d/dt) + \beta$, and $Nu = \gamma f(u)$. We can consider cases in which one or more of the $\alpha, \beta, \gamma, g$ may be stochastic processes, without restriction to only $g$ being stochastic, and without further assuming a white noise excitation. Decomposition yields $u = \sum_{n=0}^{\infty} u_n$ with
$$u_0 = u(0) + tu'(0) + L^{-1}g,\qquad u_{n+1} = -L^{-1}Ru_n - L^{-1}A_n$$
where the $A_n$ are determined for $f(u)$. The approximant $\varphi_n = \sum_{i=0}^{n-1} u_i$ serves as the solution. Since the procedure converges rapidly, so that a few terms are sufficient in practical cases, and because the terms depend on preceding terms rather than following terms, avoiding closure problems, ensemble averages can be taken term by term to determine $\langle u \rangle$. We do make the natural assumption that the excitation and parameter processes are uncorrelated. Then, taking the ensemble average of the product $\varphi_n(t)\varphi_n(t')$, we can also get a two-point correlation. A review of the necessary knowledge of stochastic processes for application to solution of physical problems by decomposition appears in [1].
DUFFING'S EQUATION WITHOUT PERTURBATION: Quantitative general solutions of Duffing's equation are easily found using the decomposition method. The motion depends on the initial conditions $u(0)$, $u'(0)$, the parameters, and the inputs. The method of solution makes no assumption on the nature of the output or on smallness of certain parameters, and is not restricted to a single input or closeness of the excitation frequency and the natural (unforced) frequency. This section will show that by seeking solutions to only the necessary accuracy, considerable computation and difficult integrations are avoidable. The appearance of harmonics and subharmonics will be demonstrated. Finally, we will demonstrate, using the Duffing equation, that decomposition subsumes perturbation.
Let $L = d^2/dt^2$ and $L^{-1} = \int_0^t\!\int_0^t(\cdot)\,dt\,dt$, and solve for $Lu$. Thus
$$Lu = g - u' - \alpha u - \beta u^3$$
Let $u$ be decomposed into components $\sum_{n=0}^{\infty} u_n$ with $u_0$ identified as
$$u_0 = u(0) + tu'(0) + L^{-1}g$$
with other components to be determined. (The nonlinear term is written as $\sum_{n=0}^{\infty} A_n\{u^3\}$ or briefly as $\sum_{n=0}^{\infty} A_n$.) The terms after $u_0$ are:
$$u_{m+1} = -L^{-1}u_m' - L^{-1}\alpha u_m - L^{-1}\beta A_m$$
for $m \geq 0$. Then $\varphi_m = \sum_{n=0}^{m-1} u_n$ approximates the solution $u = \sum_{n=0}^{\infty} u_n$ in a rapidly converging series. Although this provides general solutions, there are difficulties with trigonometric inputs, for example, and the $u^3$ term; we can still get difficult integrations despite defining $L$ so that a difficult Green's function can be avoided. Also we can get a proliferation of terms, causing unnecessary computation. Hence, we will assume $g = \sum_{n=0}^{\infty} g_n t^n$, which might appear counter-intuitive because of the proliferation problem. However, we only need to compute to a necessary accuracy in a physical problem, which we demonstrate with some illuminating examples.
EXAMPLE: Consider $u'' + u' + u + u^3 = g(t)$ where $u(0) = 2$ and $u'(0) = -1$, and
$$g = \cos^3 t + 3e^{-t}\cos^2 t + 3e^{-2t}\cos t - \sin t + e^{-3t} + e^{-t}$$
If we approximate each function in $g$ with the terms of its series through $t^2$ and calculate $L^{-1}g$, we get $9t^2/2$ to the above approximation. Hence $u_0 = 2 - t + (9/2)t^2$ and $u_1 = -(9/2)t^2 + \cdots$. Therefore the two-term approximation, correct through the $t^2$ term, is $u \approx 2 - t$, which we can write as
$$u = (1 - t + t^2/2) + (1 - t^2/2) = e^{-t} + \cos t$$
to this approximation, and $u \to \cos t$ for large $t$. Note that this is the exact solution.
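That the closed form satisfies the equation identically is easy to confirm numerically (a sketch):

```python
from math import cos, sin, exp

def residual(t):
    """u'' + u' + u + u^3 - g(t) for u = exp(-t) + cos t, with g as given above."""
    u = exp(-t) + cos(t)
    du = -exp(-t) - sin(t)
    d2u = exp(-t) - cos(t)
    g = (cos(t) ** 3 + 3 * exp(-t) * cos(t) ** 2 + 3 * exp(-2 * t) * cos(t)
         - sin(t) + exp(-3 * t) + exp(-t))
    return d2u + du + u + u ** 3 - g
```

The residual vanishes (to rounding) for all $t$, and $u(0) = 2$, $u'(0) = -1$ match the given conditions.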
EXAMPLE: Show superharmonics are possible in a Duffing equation, where $g$ is a trigonometric function as before. To avoid difficult integrations, we consider the Maclaurin approximation to three terms. Since we have $u_0 = u(0) + tu'(0) + L^{-1}g$, we drop terms greater than $t^2$. Then the two-term approximant $\varphi_2$ follows from the elementary integrations, which we see can be arranged as
$$\varphi_2 = (1 - t^2/2) + 2t$$
and immediately guess
$$u = \cos t + \sin 2t$$
showing existence of a superharmonic. Needless to say, we verify the solution obtained by direct substitution. (Also, we can consider another term of the Maclaurin expansion of $g$ to get the cubic term in our approximant.)
EXAMPLE: Show that subharmonics can arise in a Duffing equation with $u(0) = 2$ and $u'(0) = 0$. $g$ is again a trigonometric form whose Maclaurin expansion is given by
$$g = \frac{79}{9} - \frac{11t}{9} - \frac{11t^2}{18}$$
through quadratic terms. The first term of the decomposition series is $u_0 = u(0) + tu'(0) + L^{-1}g$. Solving the Duffing equation by decomposition, since $u = \sum_{n=0}^{\infty} u_n$, the next term $u_1$ is
$$u_1 = -L^{-1}u_0' - L^{-1}u_0 - L^{-1}A_0$$
The polynomial $A_0\{u^3\}$ is simply $u_0^3$; the elementary integrations then give $u_1$, plus, of course, higher terms. The two-term approximant $\varphi_2$ to the solution is $u_0 + u_1$.
Decomposition yields the solution; it depends on the parameters, given conditions, and the excitation.
The fact that we recognize closed forms here is perhaps interesting but of no real significance. The series can be carried far enough for computation as necessary. The traditional emphasis on closed-form solutions has generally led to replacement of actual problems with more tractable but less realistic models.
PERTURBATION VS. DECOMPOSITION: Consider the homogeneous Duffing equation with no damping and assuming a "small" nonlinear term:
$$u'' + u + \varepsilon u^3 = 0,\qquad u(0) = a,\quad u'(0) = 0$$
Using perturbation, define $u = u_0(t) + \varepsilon u_1(t) + \cdots$. Hence, substituting to $O(\varepsilon)$ and equating powers of $\varepsilon$,
$$u_0'' + u_0 = 0$$
The linear solution ($\varepsilon = 0$) satisfying the given conditions is $u_0 = a\cos t$. The $\varepsilon^1$ terms give us
$$u_1'' + u_1 = -u_0^3 = -a^3\cos^3 t = -a^3\left(\tfrac{3}{4}\cos t + \tfrac{1}{4}\cos 3t\right)$$
with $u_1(0) = u_1'(0) = 0$. Hence we can solve for $u_1$ and write $u \approx u_0 + \varepsilon u_1$. We observe that the $u_0$ is the solution of $u_0'' + u_0 = 0$. In decomposition it is simply $u_0 = u(0) + tu'(0)$. Also, the perturbative term is harder to obtain than the decomposition component $u_1 = -L^{-1}u_0 - \varepsilon L^{-1}A_0$, where the $L^{-1}$ is merely a double integration. The perturbation case involves integration using a Green's function and more difficult integration. Further, the perturbation $u_1$ involves the secular term $t\sin t$, and we do not get a uniformly valid expansion which would allow a bounded $u$ for a finite number of terms. Thus $u_1/u_0 \to \infty$ as $t \to \infty$. The results converge slowly, while decomposition converges rapidly so few terms are required. The totals should be the same but not term by term. Applying decomposition to the equivalent linear system $u'' + u = 0$ with
$u(0) = a$ and $u'(0) = 0$, the decomposition terms are
$$u_0 = a,\qquad u_1 = -L^{-1}u_0 = -\frac{at^2}{2},\qquad u_2 = -L^{-1}u_1 = \frac{at^4}{24}$$
Therefore, the three-term approximant obtained by decomposition is
$$\varphi_3 = a\left(1 - \frac{t^2}{2} + \frac{t^4}{24}\right)$$
which is the approximant to $u = a\cos t$. Now consider $u'' + u + \beta u^3 = 0$ by decomposition without assumption of smallness (and use of perturbation). We get $u_0 = a$.
We have added $-\beta L^{-1}u_0^3$, or $-\beta a^3 t^2/2$, as a first approximation to the previous linear result. When the system is close to linear (weakly nonlinear), we set $\beta = \varepsilon$:
$$u'' + u + \varepsilon Nu = 0,\qquad u(0) = a,\quad u'(0) = 0$$
Now the added term $-\varepsilon L^{-1}Nu$ approaches $-\varepsilon L^{-1}u_0$, or $-\varepsilon at^2/2$, for $Nu \approx u$. For this weakly nonlinear (or small $\varepsilon$) case the first perturbation approximation, or $\varepsilon$ term, satisfies $u_1'' + u_1 = -a\cos t$, so $u_1 \approx -at^2/2$ for small $t$. Thus the first decomposition addition is equal to the first-order perturbation result if and only if the nonlinearity $Nu$ is sufficiently small. In decomposition there is no restriction to "close to linear"; it applies generally to nonlinear systems, so "weakly nonlinear" or "linear" become special cases. The approximants
$$\varphi_m = a\left(1 - \frac{t^2}{2} + \cdots\right) \to a\cos t$$
as the number of terms increases,
which is easily checked by finding more components. In the equation $u'' + u + \varepsilon u^3 = 0$, we had
$$u_0 = a,\qquad u_1 = -L^{-1}u_0 - \varepsilon L^{-1}A_0$$
For small enough $\varepsilon$, we have added $-\varepsilon L^{-1}u_0^3$, or $-\varepsilon a^3 t^2/2$, as a first approximation to the linear result $u = a\cos t$. When $Nu = u$, i.e., we have a weak nonlinearity, we get
$$u'' + u + \varepsilon Nu = 0,\qquad u(0) = a,\quad u'(0) = 0$$
$$u_0 = a,\qquad u_1 = -L^{-1}u_0 - \varepsilon L^{-1}u_0$$
The added term $-\varepsilon L^{-1}A_0$ approaches $-\varepsilon L^{-1}u_0$, or $-\varepsilon at^2/2$. In the perturbation case the first approximation, or $\varepsilon$ term, now satisfies $u_1'' + u_1 = -a\cos t$, so $u_1 \approx -at^2/2$. Thus the first decomposition addition is equal to the first-order perturbation result. Perturbation is effective if and only if the nonlinearity $Nu$ is almost linear. Decomposition is effective for general nonlinearities and includes perturbation as a special case. Discontinuities in frequency response will occur as a result of varying excitation frequency, since the nonlinearity acting on the difference between excitation frequency and natural frequency causes new frequencies to appear and new multiple possible responses. With small damping, the oscillatory motion can suddenly change from slow to fast or vice-versa. In phase space we can have changes from one orbit to another and may find separated regions dependent on initial conditions, parameters, and excitation. If we change excitation frequency to approach the natural frequency, the behavior can change significantly. Decomposition yields the actual quantitative results for
real physical behavior for any given parameters, conditions, and inputs, whether constants or time-varying. However, conditions must be specified. Further, decomposition provides solutions for real oscillators with any nonlinearity as determined from laboratory measurement, not only those with a simple nonlinearity $u^3$ which might be, in actuality, $u^n$.
SUGGESTED READING
1. A. Blaquière, Nonlinear System Analysis, Academic Press (1966).
2. J. Hale, Oscillations in Nonlinear Systems, McGraw-Hill (1963).
3. C. Hayashi, Nonlinear Oscillations in Physical Systems, McGraw-Hill (1964).
4. G. Duffing, Erzwungene Schwingungen bei veränderlicher Eigenfrequenz und ihre technische Bedeutung, Vieweg (1918).
5. J. Guckenheimer and P. Holmes, Nonlinear Oscillations, Dynamical Systems, and Bifurcations, Springer-Verlag (1983).
6. K. Kreith, Oscillation Theory, Springer-Verlag (1973).
7. P. Hagedorn, Nonlinear Oscillations, 2nd ed., Clarendon (1988).
8. J. D. Cole, Perturbation Methods in Applied Mathematics, Blaisdell (1968).
9. A. A. Andronov, A. A. Vitt, and S. E. Khaikin, F. Immirzi, transl., Theory of Oscillators, Addison-Wesley (1966).
BOUNDARY-VALUE PROBLEMS WITH CLOSED IRREGULAR CONTOURS OR SURFACES

The simulant concept can now be used in an extremely valuable application, that of boundary-value problems for differential or partial differential equations modelling physical problems between two closed irregular contours (or surfaces). These are considered using decomposition of the boundary shape and simulation of the solution for each boundary approximant. Our objective is to solve "two-limit" boundary-value problems analogous to two-point boundary-value problems for a second-order ordinary differential equation with Dirichlet conditions. In two dimensions, the corresponding situation is a two-contour second-order partial differential equation. In three dimensions, the analogue is a three-dimensional second-order partial differential equation solved between two surfaces. We can continue to an n-dimensional second-order partial differential equation and n-dimensional manifolds. Our special interest is in solving partial differential equations in regions bounded by contours or surfaces.

TWO-CONTOUR CASE: Thus for the two-point, i.e., two-limit problem for second-order ordinary differential equations, which we can think of as a one-dimensional partial differential equation, we have two-point boundary conditions $u(x)|_{x=\xi_1} = b_1$ and $u(x)|_{x=\xi_2} = b_2$, where $x = \xi_1$ and $x = \xi_2$ are embedded in a line, and we are considering equations such as $d^2u/dx^2 + f(u, u') = 0$. In the two-dimensional case we consider equations such as
$$u_{xx} + u_{yy} + p(x,y)\,f(u, u_x, u_y) = 0$$
with conditions given on the boundary contours. Thus the limits are smooth closed curves or contours.
TWO-SURFACE BOUNDARY-VALUE PROBLEM: Here, we consider equations such as
$$u_{xx} + u_{yy} + u_{zz} + p(x,y,z)\,f(u, u_x, u_y, u_z) = 0$$
in a three-dimensional region bounded by surfaces $S_1(x,y,z) = 0$ and $S_2(x,y,z) = 0$ embedded in the region. Our boundary conditions are
$$u|_{S_1} = b_1,\qquad u|_{S_2} = b_2$$
The surfaces are smooth closed surfaces representing the limits. Obviously the concept can be extended to equations such as
$$\sum_{i=1}^{n}\frac{\partial^2 u}{\partial x_i^2} + p(x_1,\ldots,x_n)\,f(u, u_{x_1},\ldots,u_{x_n}) = 0$$
with conditions $u|_{M_1} = b_1$ and $u|_{M_2} = b_2$,
where $\bar{x} = (x_1,\ldots,x_n)$, and $M_1$ and $M_2$ are smooth closed manifolds representing the limits in $n$ dimensions. Let us begin with the merely illustrative one-dimensional example for comparison, using decomposition [1]. Consider a two-point, simple, one-dimensional boundary-value problem $d^2u/dx^2 + \nu u = 0$ with $\nu$ a numerical constant and Dirichlet conditions $u(x = \xi_1) = b_1$ and $u(x = \xi_2) = b_2$. The first example does not require the technique of analytic simulation but serves as an introduction to the following multidimensional cases. The equation can be written in the decomposition form as $Lu + Ru = 0$. Solving by the decomposition technique yields
$$u = c_0 + c_1 x - L_x^{-1}\nu u$$
where $L_x^{-1}$ is a two-fold pure integration with respect to $x$. By double decomposition, $u = \sum_{m=0}^{\infty} u_m$ with $c_0 = \sum_{m=0}^{\infty} c_0^{(m)}$ and $c_1 = \sum_{m=0}^{\infty} c_1^{(m)}$. Hence each $u_m$ collects the constants $c_0^{(m)}$ and $c_1^{(m)}$ multiplied by partial sums of the series
$$\sum_{n}(-\nu)^n \frac{x^{2n}}{(2n)!}\qquad\text{and}\qquad \sum_{n}(-\nu)^n \frac{x^{2n+1}}{(2n+1)!}$$
(the series of $\cos\sqrt{\nu}\,x$ and of $\sin\sqrt{\nu}\,x/\sqrt{\nu}$). In the decomposition method the $(m+1)$-term approximant to the solution $u$ is symbolized by $\varphi_{m+1} = \sum_{n=0}^{m} u_n$. Thus the exact boundary conditions $u(x = \xi_1) = b_1$ and $u(x = \xi_2) = b_2$ can be approximated successively by the approximate boundary conditions
$$\varphi_{m+1}(\xi_1) = b_1,\qquad \varphi_{m+1}(\xi_2) = b_2$$
But since $\varphi_{m+1} = \varphi_m + u_m$, the conditions at each stage involve only the new constants. Define $b_1^{(0)} = b_1$ and $b_2^{(0)} = b_2$. Then
$$c_0^{(m)} + \xi_1 c_1^{(m)} = b_1^{(m)},\qquad c_0^{(m)} + \xi_2 c_1^{(m)} = b_2^{(m)}$$
Thus for $\xi_1 \neq \xi_2$ the constants $c_0^{(m)}$, $c_1^{(m)}$ are uniquely determined at each stage. Using staggered summation, the resulting series is assembled as before.
where P, (x) = x - 4, and P, (x) = x - t2.We consider the simulants 0, to the approximations of the boundaries and denote them by a, [u] which becomes u in the limit. Thus (d' I ds')a,
+ va,
=0
Successive simulants are a,, a 2 , ... ,a,. The lim a,(x) = u(x) . In the onem+-
dimensional case it is x = 4 = p , a "radius". (We ma): develop a point = 5, and similarly for {:"I. Thus lirn a, = u.) In sequence where lim m+-
m+=.
the two-dimensional case, the limits are not points but closed contours in R' described by C(x?y)= 0 and we can have a contour sequence ~ ' " ' ( x y) . = 0. If the contours are not smooth but consist, for example, of piecewise differentiable functions, we can represent them by smooth continuous functions as accurately as we wish, and without Gibbs phenomena, by a recent combination of techniques for decomposition of algebraic and differential equations [2]. Thus we can assume that the contours (or surfaces) are smooth though irregular in shape.
TWO-DIMENSIONAL CASE: Now we consider a two-dimensional case with the model equation on $R^2$
$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0$$
which we view as a two-dimensional analogue of the first example with $\nu$ replaced by the operator $\partial^2/\partial y^2$. Analogous boundary conditions are
$$u|_{C_1} = b_1,\qquad u|_{C_2} = b_2$$
where $C_1$ and $C_2$ are closed contours representing the boundaries in $R^2$. (We can, if we wish, let the outer contour $\to\infty$ or the inner contour approach the origin.) The model equation is written as $L_x u + L_y u = 0$ where $L_x = \partial^2/\partial x^2$ and $L_y = \partial^2/\partial y^2$. Operating with $L_x^{-1}$, decomposing $u$ into $\sum_{n=0}^{\infty} u_n$, and using double decomposition, we have
$$u_0 = c_0^{(0)}(y) + x\,c_1^{(0)}(y)$$
$$u_1 = c_0^{(1)}(y) + x\,c_1^{(1)}(y) - L_y L_x^{-1} u_0$$
$$u_2 = c_0^{(2)}(y) + x\,c_1^{(2)}(y) - L_y L_x^{-1} u_1$$
$$\vdots$$
$$u_m = c_0^{(m)}(y) + x\,c_1^{(m)}(y) - L_y L_x^{-1} u_{m-1}$$
The approximant to the solution is $\varphi_{m+1} = \sum_{n=0}^{m} u_n$. The exact boundary conditions
$$u(x,y)|_{C_1(x,y)=0} = b_1,\qquad u(x,y)|_{C_2(x,y)=0} = b_2$$
are approximated by
$$\varphi_{m+1}(x,y)|_{C_i(x,y)=0} = b_i$$
for $m = 0, 1, 2, \ldots$. Since $\varphi_{m+1} = \varphi_m + u_m$, the conditions at each stage involve only the new terms. The interior contour or boundary is given as $C_1(x,y) = 0$, so that $x = \xi_1(y)$.* The exterior boundary is $C_2(x,y) = 0$, so that $x = \xi_2(y)$. These can be approximated by $\xi_1^{(m)}(y)$ and $\xi_2^{(m)}(y)$. If we wish, for a particular model, we can consider $\lim \xi_2^{(m)}(y) \to \infty$, so the exterior boundary $\to\infty$. Define $b_1^{(0)} = b_1$ and $b_2^{(0)} = b_2$, so we can write
$$c_0^{(m)} + \xi_1\, c_1^{(m)} = b_1^{(m)},\qquad c_0^{(m)} + \xi_2\, c_1^{(m)} = b_2^{(m)}$$
Thus the $c_i^{(m)}$ are determined at each stage.
THREE-DIMENSIONAL CASE: We can generalize to $R^3$ with two closed surfaces $S_1(x,y,z) = 0$ and $S_2(x,y,z) = 0$ (or even to manifolds in $R^n$) and simulate the solution $u(x,y,z)$ representing the model phenomena. The Dirichlet conditions are
$$u(x,y,z)|_{S_i(x,y,z)=0} = b_i\qquad\text{for } i = 1, 2$$
The simulants are $\sigma_m[u] \to u$ for sufficiently high-order $m$, i.e., $\lim_{m\to\infty}\sigma_m = u$. The boundary shape is subjected to decomposition, and simulants are found for successive boundary approximations to higher and higher order. (*See the implicit function theorem; also see the chapter on decomposition of algebraic equations [1].) Thus $\sigma_m$ satisfies the model equation and the boundary condition
$$\sigma_m|_{S_i^{(m)}=0} = b_i$$
for $i = 1, 2$ and $m = 1, 2, 3, \ldots$. As we increase the dimensionality, we can use the previous results by writing $L_x = \partial^2/\partial x^2$ and $L_{y'} = \partial^2/\partial y^2 + \partial^2/\partial z^2$. Then
$$L_x u + L_{y'} u = 0$$
where $y' = (y, z)$. Decomposition $u = \sum_{n=0}^{\infty} u_n$ yields, with double decomposition and $c_i(y') = \sum_{m=0}^{\infty} c_i^{(m)}(y')$,
$$u_0 = c_0^{(0)}(y') + x\,c_1^{(0)}(y')$$
$$u_1 = c_0^{(1)}(y') + x\,c_1^{(1)}(y') - L_{y'} L_x^{-1} u_0$$
$$\vdots$$
$$u_m = c_0^{(m)}(y') + x\,c_1^{(m)}(y') - L_{y'} L_x^{-1} u_{m-1}$$
Our $(m+1)$-term approximants to the solution are $\varphi_{m+1} = \sum_{n=0}^{m} u_n$. The exact boundary conditions are $u|_{S_i} = b_i$; the approximate conditions are $\varphi_{m+1}|_{S_i^{(m)}} = b_i$ for successive $m$, but since $\varphi_{m+1} = \varphi_m + u_m$, the conditions again involve only the new terms, for $m = 0, 1, 2, 3, \ldots$. $S_1(x,y') = 0$ implies $x = \xi_1(y')$, and $S_2(x,y') = 0$ implies $x = \xi_2(y')$. Hence $x = \xi_1^{(m)}(y')$ and $x = \xi_2^{(m)}(y')$ are the approximate explicit functions. Defining $b_1^{(0)} = b_1$ and $b_2^{(0)} = b_2$, we have
$$c_0^{(m)} + \xi_1\, c_1^{(m)} = b_1^{(m)},\qquad c_0^{(m)} + \xi_2\, c_1^{(m)} = b_2^{(m)}$$
yielding the constants at each stage, provided we exclude the trivial case $\xi_1 = \xi_2$. If $\lim_{m\to\infty} S^{(m)}(x,y,z) = S(x,y,z)$, then $\lim_{m\to\infty}\sigma_m[u] = u$.
The ideas used for the solution by decomposition of algebraic equations can be used to obtain smooth expansions of piecewise-differentiable functions, and do so without Gibbs phenomena. Thus in our consideration of irregular contours and surfaces, we can if necessary go farther and consider nonsmooth contours and surfaces as well. Physically, this means we can approximate shapes of artificial devices. Mathematically, it means broadening of the class of nonlinearities. Consider, for example, the function formed on the domain $0 \leq x \leq 2$ by two simple parabolas, one with vertex at $x = 0$, $y = 0$ and one with vertex at $x = 2$, $y = 0$, i.e., $y = x^2$ and $y = (x-2)^2$. Write $y(x) = P_1(x)$ for $0 \leq x \leq 1$ and $y(x) = P_2(x)$ for $1 \leq x \leq 2$. Now $y(x)$ is non-differentiable at $(1,1)$. We will show that an $A_n$ expansion can be carried out, i.e., the analyticity requirement can be weakened. Write $[y - P_1(x)][y - P_2(x)] = 0$, considering $P_1(x)$ and $P_2(x)$ as roots of a quadratic equation to be solved by decomposition. Of course, the functions need not both be parabolas, or, for that matter, other functions can be used for either one with a discontinuity in the derivative at the intersection. Consider the example
$$y^2 - (x + x^2)y + x^3 = 0$$
The decomposition form of this quadratic equation in $y$ is given as
$$y = \frac{x^3}{x + x^2} + \frac{1}{x + x^2}\sum_{n=0}^{\infty} A_n$$
where the $A_n$ are the $A_n$ polynomials for the function $y^2$. The decomposition of $y$ into components results in $y = \sum_{n=0}^{\infty} y_n$, determining the $n$-term approximation $\varphi_n = \sum_{i=0}^{n-1} y_i$. The result can be written
$$y(x) = \frac{x^2}{1+x}\sum_{m=0}^{\infty} k_m \frac{x^m}{(1+x)^{2m}}$$
if we define $k_0 = 1$ and $k_m = \sum_{i=0}^{m-1} k_i\, k_{m-1-i}$ ($k_0 = 1$, $k_1 = 1$, $k_2 = 2$, $k_3 = 5$, $k_4 = 14$, $k_5 = 42, \ldots$, the Catalan numbers). This result now represents the function of interest; thus, it is analogous to a Fourier expansion. Note that the limit of $y$ as $x \to \infty$ is $x$, i.e., in the region $1 \leq x < \infty$; the limit of $y$ as $x \to 0$ is $x^2$. In [1] it is shown that the smallest root of a quadratic (or higher-degree polynomial) is obtained first, and we see the smallest root values in $0 < x < 1$ are from the $y = x^2$ section, while in the $1 \leq x < \infty$ region, the smallest root values are from $y = x$. The farther apart the roots are, the faster the convergence will be. The second root of $y^2 - (x + x^2)y + x^3 = 0$ is obtained by dividing this by the root already obtained; e.g., if we consider only $\varphi_1$, a one-term approximant of the first root, or $\varphi_1 = x^2/(1+x)$, we have
$$\left\{y^2 - (x + x^2)y + x^3\right\}/\left\{y - x^2/(1+x)\right\} = 0$$
Thus, neglecting the remainder, the second root is $y = (x + x^2) - x^2/(1+x)$, whose limit for small $x$ is $x$ and whose limit for large $x$ is $x^2$. If the first root is $f_1(x)$, the second root is $\{y^2 - (x + x^2)y + x^3\}/\{y - f_1(x)\}$, or
which is solvable by decomposition. We can now do numerical calculations to see how well the result approximates the function of interest. We choose one value of x in each region and one at the discontinuity. At x = 1/2 we must have y = 1/4, at x = 2 we must have y = 2, and at x = 1 we must have y = 1; lim_{n→∞} φ_n = Σ_{i=0}^∞ y_i gives exactly those values.

  x = 1/2              x = 2                x = 1
  φ_3  = .2201646      φ_3  = 1.7613169
  φ_4  = .2293096      φ_4  = 1.8344764     φ_4  = .7265625
  φ_5  = .2349998      φ_5  = 1.879998      φ_5  = .7539063
  φ_6  = .2387932      φ_6  = 1.9103456     φ_6  = .7744141
  φ_7  = .2414426      φ_7  = 1.9315408     φ_7  = .79052174
  φ_8  = .2433561      φ_8  = 1.94682485    φ_8  = .8036194
  φ_9  = .2447734      φ_9  = 1.9591975     φ_9  = .8145204
  φ_10 = .2458444      φ_10 = 1.9667548     φ_10 = .8238030
  lim φ_n = 1/4        lim φ_n = 2          lim φ_n = 1
  (1.6% error by φ_10) (1.6% error by φ_10)
We note from observation of the values at x = 1 that the expansion in the A_n polynomials does not display the Gibbs phenomenon which we see in Fourier series [2]. Instead, we get a blending effect at the point of discontinuity in the derivative, an interesting application of the decomposition method.
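The computation above is easy to reproduce. The following sketch (our own illustration, not from the text) generates the decomposition components of the smallest root of y^2 - (x + x^2)y + x^3 = 0; for N(y) = y^2 the Adomian polynomials reduce to the Cauchy products A_n = Σ_{i+j=n} y_i y_j.

```python
from fractions import Fraction

def smallest_root_terms(x, n_terms):
    """Decomposition components y_0, y_1, ... for the smallest root of
    y^2 - (x + x^2) y + x^3 = 0, rewritten as y = x^3/b + (1/b) y^2
    with b = x + x^2.  For N(y) = y^2 the Adomian polynomials are the
    Cauchy products A_n = sum_{i+j=n} y_i y_j."""
    b = Fraction(x) + Fraction(x) ** 2
    y = [Fraction(x) ** 3 / b]                  # y_0 = x^2/(1 + x)
    for n in range(n_terms - 1):
        A_n = sum(y[i] * y[n - i] for i in range(n + 1))
        y.append(A_n / b)                       # y_{n+1} = A_n / b
    return y

def phi(x, n):
    """n-term approximant phi_n = y_0 + ... + y_{n-1}."""
    return float(sum(smallest_root_terms(x, n)))
```

At x = 1/2 this reproduces the tabulated values (phi_3 = .2201646, phi_10 ≈ .245844) climbing toward the smaller root 1/4, and at x = 2 the approximants climb toward 2.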
REFERENCES
1. G. Adomian, Nonlinear Stochastic Operator Equations, Academic Press (1986).
2. G. Adomian and R. Rach, Smooth Polynomial Expansions of Piecewise-differentiable Functions, Appl. Math. Lett., 2, 377-379 (1989).
APPLICATIONS IN PHYSICS

Real problems of physics are generally nonlinear and often stochastic as well. Linearity and determinism should be viewed as special cases only. The general practices of linearization, perturbation, white noise, and quasi-monochromatic approximations necessarily change the problems whose solutions are desired into problems tractable by convenient mathematics; the results are then not identical to the physical solutions which we seek. The alternative of using the decomposition method will be explored here as we consider examples of problems of physics. These problems are often quite difficult because nonlinearity and stochasticity are involved. Decomposition makes unnecessary such procedures as closure approximations [1] and perturbation and white-noise assumptions in differential equations which involve stochastic process parameters, inputs, or initial/boundary conditions. Decomposition also avoids discretization and the consequent intensive computer calculation, and it yields analytic expressions rather than tables of numbers. Thus quantitative solutions are obtained for dynamical systems. (When the systems are stochastic as well, the decomposition series involves stochastic terms from which statistics can be calculated.) The method applies to linear or nonlinear, ordinary or partial differential equations and is useful for many algebraic, integral, and delay-differential equations [2]. This chapter will outline procedures for typical applications.

ANALYTICAL SOLUTION OF THE NAVIER-STOKES EQUATIONS:
The Navier-Stokes model* [1, 3] for an incompressible fluid of kinematic viscosity ν and constant density ρ is given as:

∂ū/∂t + (ū · ∇)ū = -(1/ρ)∇p + ν∇^2 ū + F̄,   ∇ · ū = 0

where ū is a vector with components u, v, w.

* This treatment differs from the earlier work presented in [2] in that the pressure is dynamic, allowing large velocity and turbulence.
We assume that the velocity ū(x, y, z, t, ω), the pressure p(x, y, z, t, ω), and the external force F̄ are stochastic processes. In terms of the velocity components u, v, w, we write

∂u/∂t + u ∂u/∂x + v ∂u/∂y + w ∂u/∂z - ν(∂^2 u/∂x^2 + ∂^2 u/∂y^2 + ∂^2 u/∂z^2) + (1/ρ)(∂p/∂x) = F_x    (1)

with similar equations for v (replacing F_x by F_y and ∂p/∂x by ∂p/∂y) and for w (replacing F_y by F_z and ∂p/∂y by ∂p/∂z). We define an initial/boundary problem by specifying initial conditions for u, v, w and, for t >= 0, specifying u, v, w on the boundary. Let us rewrite the system (1) in decomposition form.
We have some choice in the definition of the nonlinear terms. Let us consider

To complete the specification of g_1, g_2, g_3 we must know the pressure function. We can assume an initial pressure which will, of course, become a function of x, y, z, t as any disturbance occurs. However, we must determine the functional dependence of the pressure on the velocities u, v, w so that g_1, g_2, g_3 are calculable.
If we take the divergence of each term in the Navier-Stokes equation, the ∇p term becomes ∇^2 p or (1/ρ)∇^2 p, depending on the definition used for p. The first and third terms vanish by the divergence condition. The second term gives us ∇ · (ū · ∇)ū. Thus

∇^2 p = ∇ · F̄ - ∇ · (ū · ∇)ū

Thus,
Symbolizing the right side by f, solving for L_x p, and inverting the operator L_x, we have

p = A + Bx + L_x^{-1} f - L_x^{-1} L_y p - L_x^{-1} L_z p

Writing p = Σ_{n=0}^∞ p_n and identifying

p_0 = A + Bx + L_x^{-1} f

we have, for n >= 0,

p_{n+1} = -L_x^{-1} L_y p_n - L_x^{-1} L_z p_n

and we can write an n-term approximation for p by φ_n = Σ_{i=0}^{n-1} p_i, which converges to Σ_{n=0}^∞ p_n, or p. (Similar equations can be written for L_y p and L_z p.)
We assumed an initial pressure, which gives us the A in our equation for p_0. The coefficient B is zero, since the disturbance vanishes as x → ∞. We use this p to find u, v, w. The resulting velocities are used in our equation for p, as a function of the velocities, to yield an improved p = p_0 + p_1 (which re-calculates p_0 because of the change in f). This is used to improve the results for the velocities u, v, w. These calculations can proceed until we have sufficiently accurate results for u, v, w, p. We have
from which
Four similar equations can be written for v, with g_1 and N_1 replaced by g_2 and N_2, and also four equations for w, using g_3 and N_3. It has been shown in Chapter 3 that when the boundary conditions are general (when conditions on any one variable depend upon all the others), to solve for u, v, w we can use any of the four operator equations, depending on the given conditions and the integrations required. If we know initial conditions, the equations involving the operator L_t on the left side will be simplest, since only a single integration will be required. We can also solve the system as a boundary-value problem using any of the equations involving L_x, L_y, or L_z on the left side, as discussed in Chapter 4. Hence, using the first equation of each set above and operating with L_t^{-1}, we have
Now write the decompositions

u = Σ_{n=0}^∞ u_n,   v = Σ_{n=0}^∞ v_n,   w = Σ_{n=0}^∞ w_n.

Also, write N_1, N_2, N_3 in terms of the A_n polynomials and finally identify:
The remaining components of u, v, w for n >= 0 can now be determined

where the notation A_n{·} refers to the A_n for the quantity in brackets. We now have a completely calculable system which, if we ignore stochasticity for the moment and approximate by the n-term sums

φ_n[u] = Σ_{i=0}^{n-1} u_i,   φ_n[v] = Σ_{i=0}^{n-1} v_i,   φ_n[w] = Σ_{i=0}^{n-1} w_i,

gives u, v, w to n-term approximations. In the stochastic case, the expressions for u, v, w are stochastic series, i.e., series containing stochastic processes, from which we must obtain first- and second-order statistics; each velocity component is then a sum of a deterministic component and a stochastic component. We do this now in spite of the fact that the equation obtained by replacing velocities with stochastic processes may not be the correct stochastic model, since the equation was derived deterministically. An example of this is the problem of wave propagation in a random medium, where it is incorrect simply to replace the velocity in the d'Alembertian operator with a stochastic quantity; i.e., a stochastic model must be derived which has the deterministic model as a limit, rather than using the deterministic model to obtain a stochastic model [3]. Thus, we must obtain

⟨u⟩ = ⟨u_0⟩ + ⟨u_1⟩ + ⟨u_2⟩ + ···
⟨v⟩ = ⟨v_0⟩ + ⟨v_1⟩ + ⟨v_2⟩ + ···
⟨w⟩ = ⟨w_0⟩ + ⟨w_1⟩ + ⟨w_2⟩ + ···
remembering that g_1, g_2, g_3 are stochastic (since F̄ and p are stochastic) and the A_n are stochastic. The two-point correlation for each velocity component u, v, w is obtained by averaging the product of the series for the velocity component at two space-time points. If we consider, for example, a fixed space position and time scales such that stationarity can be assumed, the ergodic hypothesis may hold, so that ensemble averages can be replaced with time averages of observations. Since nonlinear terms can contain both functions of a single variable, such as f(u), and functions of several variables, such as f(u, v) and f(u, v, w), we have listed these generalized A_n in Chapter 3. We can use the A_n{f(u, v)} since N_1, N_2, N_3 can be considered term by term. The coefficients in the generalized algorithm for f(u, v) involve partial derivatives of f with respect to both arguments, with the summation running over both indices. Using the resulting generalized A_n, we obtain a general solution. Smooth solutions (to the incompressible problem under consideration) do exist for short times and depend continuously on the initial data. A basic question is whether the Navier-Stokes equations are an adequate model for real turbulent fluids. The linear constitutive law used in the derivation means that derivatives of the velocity components u, v, w are necessarily small. Secondly, stochasticity cannot be considered as an afterthought; it must be considered in the initial modelling. A more general model due to Ladyzhenskaya has partially addressed this issue by allowing nonlinearity in the constitutive law, which leads to global uniqueness for nonstationary three-dimensional flow. A truly nonlinear stochastic model coupled with the decomposition method of solution may resolve the remaining difficulties.
SOME THOUGHTS ON THE ONSET OF TURBULENCE:
Consider first a very simple equation whose solution is trivial. Thus consider dy/dx = (y - 1)^2, which obviously is satisfied by y = 1. Now consider the effect of a 1% change in a parameter by writing dy/dx = (y - 1)^2 + .01.** This now yields a periodic solution y = 1 + 0.1 tan(x/10), which has vertical asymptotes at x = (2k + 1)·5π, k = 0, ±1, ±2, .... Now let us make a 1% change in the initial condition instead, or y(0) = 1.01. We now have a hyperbola y = 1 - 1/(x - 100)*** with only one vertical asymptote, at x = 100. Thus, in a nonlinear equation, even very small changes in inputs or parameters can result in large effects on the solution. Suppose now that very small fluctuations are present in the input and the parameters because of small inherent randomness. Then the solution could change randomly between the possibilities above and appear very complex indeed. Now, considering the Navier-Stokes system with its nonlinear terms, where there could be small fluctuations in density, pressure, viscosity, and velocities, it is clear that we can expect similar effects and a "chaotic-looking" or turbulent case. The nonlinear terms cause small fluctuations to become large fluctuations,
** We have considered y′ = (y - 1)^2 + a with a > 0. If a < 0, the solution varies between two horizontal asymptotes with an inflection point at (0, 1). The asymptotes coincide if a = 0. The solution y = 1 is a singular solution not derivable from the general solution.
*** If y(0) = 0.99, the asymptote moves to x = -100.
while friction terms tend to remove differences in velocities. The Reynolds number is a measure of the ratio of the nonlinear terms to the frictional terms, so it is reasonable that if this number becomes large, the tendency to turbulence increases. However, factors such as the smoothness of the boundaries and the magnitude of the initial fluctuations also influence the resulting flow. In the simple deterministic case, consider one nonlinear term u ∂u/∂x divided by a molecular friction term ν ∂^2 u/∂x^2. If u is assumed to be of the order U, and L is a typical distance over which the velocity varies by U, the ratio is of the order (U^2/L)/(νU/L^2) = UL/ν, or the Reynolds number. In the general case, if we have a fluctuation in ν or u, we can see that large changes can occur in the tendency to turbulent behavior. The best way, apparently, to determine when turbulence starts is to solve the stochastic Navier-Stokes system as we have outlined and study the behavior as a function of the parameters of the flow. A comparison of a deterministic solution and a stochastic solution with varying conditions should illuminate the problem of the onset of turbulence. Suppose we consider flow in a flat channel as an idealization of a pipe in a plane. We have
Replace the transverse coordinate by its ratio to the half-width, so that the half-width is unity, and assume ∂p/∂x = 0. Write

where the A_n polynomials are generated for the nonlinear term. Then

for n >= 0. If ν is constant and u_0 is deterministic, u is deterministic. If u_0 has a random component, this component will cause new terms to keep appearing because of the expressions on the right side of the equation for u_{n+1}, for any n >= 0, especially from the term involving A_n. This is obvious by inspection of the A_n for increasing n. The physically unrealistic change in the solution caused by a linearization is also clear. Consequently, as a result of any randomness and the nonlinearity, the flow is radically altered, the effect increasing as the fluctuation becomes larger. Random boundary conditions resulting from roughness in the walls will have the same effect. The general problem may have random initial/boundary conditions. ρ is generally taken as a constant and set equal to unity; however, compressibility becomes a factor with increasing depth, and ρ may be not only a function of z but random in turbulent conditions.
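The sensitivity claims above for dy/dx = (y - 1)^2 + a are easy to verify numerically. The following sketch (our own check, not part of the text) integrates the two perturbed equations with a standard fourth-order Runge-Kutta step and compares the results with the closed forms y = 1 + 0.1 tan(x/10) and y = 1 - 1/(x - 100):

```python
import math

def rk4(f, y0, x0, x1, n=4000):
    """Classical fourth-order Runge-Kutta integration of dy/dx = f(x, y)."""
    h = (x1 - x0) / n
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

# unperturbed: dy/dx = (y - 1)^2 with y(0) = 1 stays on the solution y = 1
flat = rk4(lambda x, y: (y - 1) ** 2, 1.0, 0.0, 10.0)

# 1% change in the parameter: dy/dx = (y - 1)^2 + 0.01, y(0) = 1
param = rk4(lambda x, y: (y - 1) ** 2 + 0.01, 1.0, 0.0, 10.0)

# 1% change in the initial condition: dy/dx = (y - 1)^2, y(0) = 1.01
shifted = rk4(lambda x, y: (y - 1) ** 2, 1.01, 0.0, 10.0)
```

The first run returns exactly 1; the other two land on the tangent and hyperbola branches, whose asymptote structures ((2k + 1)·5π versus the single point x = 100) differ completely.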
Let

with

Let u = Σ_{m=0}^∞ u_m and write u′u^2 = Σ_{n=0}^∞ A_n. The u_n can be found from

Since we have u_0, all the following components are determined from
The approximants to the solution are given by

φ_{m+1} = φ_m + u_m,   lim_{m→∞} φ_m = u
The A_n are
A_0 = u_0′u_0^2
A_1 = u_1′u_0^2 + 2u_0′u_0u_1
A_2 = u_2′u_0^2 + 2u_1′u_0u_1 + u_0′u_1^2 + 2u_0′u_0u_2
A_3 = u_3′u_0^2 + 2u_2′u_0u_1 + u_1′u_1^2 + 2u_1′u_0u_2 + 2u_0′u_1u_2 + 2u_0′u_0u_3
A_4 = u_4′u_0^2 + 2u_3′u_0u_1 + u_2′u_1^2 + 2u_2′u_0u_2 + 2u_1′u_1u_2 + 2u_1′u_0u_3 + u_0′u_2^2 + 2u_0′u_1u_3 + 2u_0′u_0u_4
A_5 = u_5′u_0^2 + 2u_4′u_0u_1 + u_3′u_1^2 + 2u_3′u_0u_2 + ···
NOTE: This is done by writing u′u^2 = Nu = N_1 · N_2, writing B_n for the A_n of u′ and C_n for the A_n of u^2, and considering the possible products, e.g., A_n = B_0C_n + B_1C_{n-1} + ··· + B_{n-1}C_1 + B_nC_0. (See Chapter 2.) Given the A_n, the u_n can be calculated and rearranged in ascending powers of t (see Appendix II) to get solutions to any required accuracy. The same procedure as with the Duffing equation can be used to write u = Σ_{n=0}^∞ c_n t^n and calculate the c_n. However, we can get a quick approximation, as in the following example.
with u(0) = 1 and u′(0) = 0. Approximating sin t by t and cos t by 1 - t^2/2, we find that the L^{-1} term does not contribute, u_0 = 1, and u_1 = -t^2/2, so φ_2 = 1 - t^2/2, which we recognize as a two-term approximant of u = cos t; we can verify this by showing that it satisfies the equation and the given conditions.
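The product rule quoted in the NOTE can be checked against the defining property of the A_n, namely that Σ_n A_n λ^n is the power-series expansion of u′(λ)u(λ)^2 for u(λ) = Σ_k u_k λ^k. A minimal sketch, with arbitrary numeric stand-ins (our choices) for the components u_k and their derivatives u_k′ at a fixed point:

```python
def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (index = power)."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def adomian_du_usq(u, du, N):
    """A_0..A_N for N(u) = u'u^2 by the product rule: B_i = u_i', and C_j the
    Adomian polynomials of u^2 (Cauchy products); A_n = sum_i B_i C_{n-i}."""
    C = [sum(u[k] * u[j - k] for k in range(j + 1)) for j in range(N + 1)]
    return [sum(du[i] * C[n - i] for i in range(n + 1)) for n in range(N + 1)]

# arbitrary stand-ins for u_0..u_3 and u_0'..u_3' at some fixed point
u = [0.7, -0.3, 0.2, 0.05]
du = [1.1, 0.4, -0.6, 0.25]

A = adomian_du_usq(u, du, 3)
direct = poly_mul(du, poly_mul(u, u))   # series coefficients of u'(L) u(L)^2
```

The first four coefficients of `direct` agree with A_0..A_3, and A_1 reproduces the tabulated u_1′u_0^2 + 2u_0′u_0u_1.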
BURGERS' EQUATION: The equation is u_t + uu_x = ν u_xx for x >= 0 and t >= 0, where the necessary conditions must, of course, be given. We write
with L_t = ∂/∂t and L_x = ∂^2/∂x^2. Operating with L_t^{-1} = ∫_0^t (·) dt, we have
We identify u(x, 0) = f(x) as the u_0 term of the decomposition u = Σ_{n=0}^∞ u_n and write the nonlinearity uu_x as Σ_{n=0}^∞ A_n, where
Now the components after u_0 are given by

and we can write the m-term approximant, which converges rapidly to the correct solution, as φ_m = Σ_{n=0}^{m-1} u_n. Since either of the possible operator equations for L_t u and L_x u can yield the solution in the general case, where the conditions for t = 0 depend on x and the conditions on x depend on t, it is no longer necessary to use both operator equations as in earlier work. (When the conditions are not general in this way, we have asymptotic equality.) Integrations for a difficult f(x) can be made trivial by writing f(x) in series form and carrying a limited number of terms. If we use the L_x u equation, u_0 = A + Bx, where A and B are evaluated from the boundary conditions, and we note that L_x^{-1} represents a two-fold indefinite integration. If we have a non-zero u(x, t = 0) = f(x), the problem is simply solved. If f(x) = 0, we must use the L_x u equation.
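As a concrete check of the recursion, take the illustrative initial profile f(x) = x (our choice; any differentiable f works). The components can then be generated exactly with polynomial arithmetic, and they sum to the closed form u = x/(1 + t), which satisfies Burgers' equation for every ν since u_xx = 0:

```python
from collections import defaultdict

def mul(p, q):
    """Multiply polynomials in x, t stored as {(i, j): coeff} for x^i t^j."""
    out = defaultdict(float)
    for (i, j), a in p.items():
        for (k, l), b in q.items():
            out[(i + k, j + l)] += a * b
    return dict(out)

def ddx(p):
    return {(i - 1, j): a * i for (i, j), a in p.items() if i > 0}

def int_t(p):
    """L_t^{-1}: definite t-integration from 0 to t."""
    return {(i, j + 1): a / (j + 1) for (i, j), a in p.items()}

def burgers_components(nu, n_terms):
    """u_0 = x; u_{n+1} = nu L_t^{-1} u_n,xx - L_t^{-1} A_n, where the
    Adomian polynomials for u u_x are A_n = sum_{i=0}^n u_i (u_{n-i})_x."""
    u = [{(1, 0): 1.0}]
    for n in range(n_terms - 1):
        A_n = defaultdict(float)
        for i in range(n + 1):
            for key, val in mul(u[i], ddx(u[n - i])).items():
                A_n[key] += val
        nxt = defaultdict(float)
        for key, val in int_t(ddx(ddx(u[n]))).items():
            nxt[key] += nu * val
        for key, val in int_t(dict(A_n)).items():
            nxt[key] -= val
        u.append({k: v for k, v in nxt.items() if v != 0.0})
    return u
```

The components come out as u_n = (-1)^n x t^n, i.e., the partial sums reproduce the geometric-series expansion of x/(1 + t).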
KURAMOTO-SIVASHINSKY EQUATION: The K-S equation is given as

In the operator notation of the decomposition method, this is

L_t u + Lu + Ru + Nu = 0

where

L_t = ∂/∂t
Lu = ν ∂^4 u/∂x^4
Ru = μ ∂^2 u/∂x^2
Nu = u(∂/∂x)u
This equation describes problems in fluid motion, fluctuations in the position of a flame front, and oscillating chemical reactions. There are a number of possibilities, depending on the stated conditions. Suppose we know u(x, 0) = f(x) explicitly. Then we write

and define L_t^{-1} = ∫_0^t (·) dt. Now
Substituting u = Σ_{n=0}^∞ u_n and Nu = Σ_{n=0}^∞ A_n, where the A_n are generated for u(∂/∂x)u, or uu′. These are found as
In the decomposition of u into Σ_{n=0}^∞ u_n, we identify u_0 = f(x). Then from

we have

u_0 = f(x)
u_{n+1} = -L_t^{-1} μ(∂^2/∂x^2)u_n - L_t^{-1} ν(∂^4/∂x^4)u_n - L_t^{-1} A_n
for n >= 0, so all components are calculable. We compute φ_n = Σ_{i=0}^{n-1} u_i as an n-term approximant to the solution u = Σ_{n=0}^∞ u_n. The results are sufficient for a complete solution if f(x) is continuous and n-times differentiable, or f(x) may be appropriately transformed by Fourier series. Now φ_n must satisfy the equation to nth approximation, and exactly as n → ∞. Of course, a numerical result depends on an explicit f(x). Given boundary conditions on x, such as:
we can also solve the Lu equation, which will require four-fold (indefinite) integrations. Then

(If ν is a function of x, it must, of course, be inside the integration.) We write

where I_x(·) = ∫ (·) dx. Now

u_0 = c_0 + c_1 x + c_2 x^2/2 + c_3 x^3/6

By decomposition
Since this is a nonlinear equation, we must evaluate the coefficients for each approximant φ_m, for each m = 1, 2, .... Alternatively, we can use double decomposition. In this case we write

c_i(t) = Σ_{n=0}^∞ c_i^{(n)}(t),   i = 0, 1, 2, 3

Then

Matching the solution approximants at the boundaries determines the components of the integration constants, as discussed in Chapters 3 and 4.
THE LANE-EMDEN EQUATION: This is one of the basic equations in the theory of stellar structure in astrophysics; it was recently solved by N.T. Shawagfeh [4] using the decomposition method. It is given by

d^2 T/dr^2 + (2/r)(dT/dr) + λ T^m = 0

where m is of particular interest in the range from 0 to 5, with the conditions T(0) = 1 and [dT/dr]_{r=0} = 0. What was needed was an approximation which did not require λ to be small. Such a non-perturbative solution follows. First, the dependent and independent variables are transformed* using θ = ξT and ξ = λ^{1/2} r to obtain
which is written as

Lθ = -ξ^{1-m} θ^m

where L = d^2/dξ^2 and L^{-1} is a two-fold integration with respect to ξ. Now

θ = θ_0 - L^{-1} ξ^{1-m} θ^m

where θ_0 = θ(0) + ξ[dθ/dξ]_{ξ=0} = ξ. The nonlinearity is f(θ) = θ^m = Σ_{n=0}^∞ A_n.

* This step is eliminated and the results generalized in work to be published.
Now θ = Σ_{n=0}^∞ θ_n, where

θ_0 = ξ
θ_{n+1} = -L^{-1} ξ^{1-m} A_n

for n >= 0. θ can now be written as a series in the form
where the c's are determined as follows:

Finally, since θ = ξT and ξ = λ^{1/2} r, the solution T(r) follows.
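For m = 5 (and λ = 1) the Lane-Emden equation has the well-known closed solution T = (1 + r^2/3)^{-1/2}, so the decomposition above can be checked exactly. The sketch below (our own illustration) represents each θ_n as a polynomial in ξ with rational coefficients and computes the A_n for θ^5 directly as λ^n coefficients of θ(λ)^5:

```python
from fractions import Fraction

def pmul(p, q):
    """Multiply polynomials in xi stored as {degree: Fraction}."""
    out = {}
    for i, a in p.items():
        for j, b in q.items():
            out[i + j] = out.get(i + j, 0) + a * b
    return out

def lane_emden_theta(n_comp):
    """Components of theta = xi T for m = 5: theta'' = -xi^{-4} theta^5,
    theta_0 = xi, theta_{n+1} = -L^{-1} xi^{-4} A_n, where A_n is the
    lambda^n coefficient of (sum_k theta_k lambda^k)^5."""
    theta = [{1: Fraction(1)}]
    for n in range(n_comp - 1):
        # lambda-series of theta(lambda)^5 truncated at lambda^n;
        # the entries are xi-polynomials
        series = {0: {0: Fraction(1)}}
        for _ in range(5):
            nxt = {}
            for a, pa in series.items():
                for b, pb in enumerate(theta):
                    if a + b <= n:
                        acc = nxt.setdefault(a + b, {})
                        for d, c in pmul(pa, pb).items():
                            acc[d] = acc.get(d, 0) + c
            series = nxt
        A_n = series.get(n, {})
        # theta_{n+1} = -L^{-1} xi^{-4} A_n; L^{-1} xi^d = xi^{d+2}/((d+1)(d+2))
        theta.append({d - 2: -c / ((d - 3) * (d - 2)) for d, c in A_n.items()})
    return theta
```

The components reproduce the Taylor expansion ξ(1 + ξ^2/3)^{-1/2} = ξ - ξ^3/6 + ξ^5/24 - 5ξ^7/432 + ··· term by term.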
A new approach to the time-dependent spread of contaminants in moving fluids is provided by decomposition, which is easily extended to nonlinear and stochastic partial differential equations as well. First we consider the one-dimensional advection equation:

∂u/∂t + α ∂u/∂x = 0
u(x, 0) = f(x)
u(0, t) = g(t)
0 < t <= T,  0 < x < 1,  α > 0
By decomposition, using the partial solution for t, we have

where f(x) is identified as u_0 and is assumed differentiable as necessary. Then

u_n = (-1)^n (α^n t^n/n!) f^{(n)}(x)

so that

u = Σ_{n=0}^∞ (-1)^n (α^n t^n/n!) f^{(n)}(x) = f(x - αt)

and

φ_{m+1} = Σ_{n=0}^m u_n

is an (m + 1)-term approximation to u, satisfying the equation and the condition at t = 0, using the t-dimension "partial solution". The x-dimension partial solution is derived by

Consequently,

Either the t equation or the x equation represents the solution under general conditions.
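For an illustrative f, the t-dimension partial solution can be summed numerically. The sketch below uses our choice f(x) = sin x, whose derivatives cycle, and shows the approximants converging to f(x - αt):

```python
import math

def advection_phi(x, t, alpha, m):
    """phi_m = sum_{n<m} (-1)^n (alpha^n t^n / n!) f^(n)(x) for f = sin;
    the derivatives of sin cycle through sin, cos, -sin, -cos."""
    derivs = [math.sin, math.cos,
              lambda s: -math.sin(s), lambda s: -math.cos(s)]
    total, coef = 0.0, 1.0
    for n in range(m):
        total += coef * derivs[n % 4](x)
        coef *= -alpha * t / (n + 1)       # build (-alpha t)^n / n!
    return total
```

With α = 0.5 and t = 0.3, the 15-term approximant matches sin(x - αt) to machine precision, while a two-term approximant is visibly off.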
ADVECTION-DIFFUSION EQUATION: Let ξ(x, y, z, t) represent the concentration. Let the fluid velocity ū have components u, v, w in R^3 and assume an incompressible fluid

where D is the diffusion constant (which is a constant for a particular fluid or contaminant, temperature, and pressure), ξ(x, y, z, 0) is a given initial condition, and various boundary conditions are possible; e.g., ξ → 0 as x, y, z → ∞, or ξ(t) is specified on a boundary Γ, or we have a preassigned flux at Γ. We have

∂ξ/∂t = D(∂^2 ξ/∂x^2 + ∂^2 ξ/∂y^2 + ∂^2 ξ/∂z^2) - u ∂ξ/∂x - v ∂ξ/∂y - w ∂ξ/∂z

By decomposition, using L_t = ∂/∂t and L_t^{-1} = ∫_0^t (·) dt,
for m >= 0. Now all components are determined, and we can write φ_N(ξ) = Σ_{m=0}^{N-1} ξ_m as an approximation to ξ, improving as N increases. If we have turbulent motion of the fluid, we can have random fluctuations of the concentration and of the hydrodynamical variables; hence a statistical description becomes necessary. φ_N(ξ) becomes a series of stochastic terms, and we form ⟨φ_N(ξ)⟩ to get the expectation as a function of the average velocity components. The customary treatments of turbulent motion lead to a lack of closure and concomitant assumptions, which are avoided by using decomposition. Thus, in the above method, u, v, w, and ξ are replaced by their corresponding steady-state values plus quantities representing fluctuations from the steady state: ξ = ξ̄ + ξ′, u = ū + u′, v = v̄ + v′, w = w̄ + w′. Statistical averaging causes the terms linear in the fluctuations to vanish. We then have
The last three correlation terms involve correlations of velocities and concentration which are unknown. The usual procedure is then to let u_i for i = 1, 2, 3 denote u, v, w and x_j for j = 1, 2, 3 represent x, y, z, and to write such terms as proportional to a mean gradient of the concentration in terms of a "turbulent diffusion tensor" -D_{ij}(x_j, t) ∂/∂x_j. To clarify the difficulty, consider the operator format Lu + Ru = g, or u = L^{-1} g - L^{-1} Ru. If we average, we have ⟨u⟩ = L^{-1}⟨g⟩ - L^{-1}⟨Ru⟩. We can think of g as an input to a system containing R. The output u can be statistically independent of g but not of R. To achieve closure, one must approximate. By decomposition one writes

Averaging is no problem, since g is statistically independent of R. We have
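The decomposition series itself is easy to exercise on a single Fourier mode. With the illustrative choices ξ(x, 0) = e^{ikx} and constant coefficients (our assumptions, not from the text), each application of L_t^{-1}(D ∂_xx - u ∂_x) multiplies the previous component by (-Dk^2 - iuk)t/(m + 1), so the approximants build the exact decay-and-transport factor e^{-(Dk^2 + iuk)t}:

```python
import cmath

def mode_phi(D, u, k, x, t, n_terms):
    """phi_N = sum_{m<N} xi_m with xi_0 = exp(ikx); the spatial operator
    D d2/dx2 - u d/dx acts on exp(ikx) as the eigenvalue -D k^2 - i u k,
    and each L_t^{-1} integration contributes a factor t/(m + 1)."""
    lam = -D * k * k - 1j * u * k
    xi_m = cmath.exp(1j * k * x)
    total = 0j
    for m in range(n_terms):
        total += xi_m
        xi_m *= lam * t / (m + 1)
    return total

def mode_exact(D, u, k, x, t):
    return cmath.exp(1j * k * x) * cmath.exp((-D * k * k - 1j * u * k) * t)
```

A 30-term approximant agrees with the exact mode solution to machine precision for moderate Dk^2 t and ukt.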
NONLINEAR TRANSPORT: Let us consider the equation Lξ + Rξ + Nξ = g, where

Nξ = f(ξ)
R = ū · ∇ - D∇^2

Let ξ = Σ_{n=0}^∞ ξ_n and Nξ = Σ_{n=0}^∞ A_n. Then

where
for m >= 0. Then φ_{m+1} = Σ_{n=0}^m ξ_n, which converges to ξ = Σ_{n=0}^∞ ξ_n. Further generalizations are straightforward. We can, for example, consider Fu = g, where

Fu = L_t u + L_x u + L_y u + L_z u + Ru + Nu = g

and solve for L_t u, L_x u, L_y u, or L_z u, which would simply treat the other operator terms as the remainder operator R and would require the appropriate given boundary conditions. The case of stochastic g, or of stochastic processes in the R term, leads to a stochastic φ_m which can be averaged, or from which expectations and covariances can be found. The solutions are verifiable by checking that the original equation and the given conditions are satisfied. Since the concern here is the solution of physical systems, inputs and conditions are assumed to be bounded. If the model equation and the conditions are physically correct and consistent, a solution is obtained which is unique and accurate. If numerical results are calculated, one sees the approach to a stable solution for the desired number of decimal places. If the conditions on one variable are better known than the others, we consider the corresponding equation, which can yield the solution most accurately.
THE KDV EQUATION: The equation is u_t + α uu_x + β u_xxx = 0. In decomposition form, we write

where u(x, t = 0) = f(x) is given. This equation was previously solved [2] also using the operator equation L_x u = -β^{-1} u_t - αβ^{-1} uu_x, where L_x = ∂^3/∂x^3. This was done to ensure the use of all the appropriate initial/boundary conditions. However, in Chapter 3 we saw that the solutions from each of the operator equations (called "partial solutions") are actually the solution, and are identical in the general case, when the t conditions depend on x, as above, and the x conditions depend on t. Therefore, we can simply use one or the other, saving considerable computation. Since solving the L_t u equation involves a single integration and the L_x u equation involves three integrations, the optimal procedure is clear. We proceed by application of L_t^{-1}, which is a simple definite integration from 0 to t. Thus
Identifying u(t = 0) as u_0 in the decomposition u = Σ_{n=0}^∞ u_n, and writing the nonlinear term uu_x as Σ_{n=0}^∞ A_n{uu_x}, we have:

if α and β are constants. We can now write the decomposition components
Indicating ∂u/∂x as u′, we can write the A_n for uu′ as:

A_0 = u_0 u_0′
A_1 = u_1 u_0′ + u_0 u_1′
A_2 = u_2 u_0′ + u_1 u_1′ + u_0 u_2′
A_3 = u_3 u_0′ + u_2 u_1′ + u_1 u_2′ + u_0 u_3′

We can now write the m-term approximant to the solution as φ_m = Σ_{n=0}^{m-1} u_n, which converges to u.
THE NONLINEAR KLEIN-GORDON EQUATION:

Now L_t = ∂^2/∂t^2 and L_t^{-1} is a two-fold definite integration from 0 to t, and we can write

L_t u = ∇^2 u + f(u)
u = A + Bt + L_t^{-1} ∇^2 u + L_t^{-1} f(u)

Then

u_0 = A + Bt
u_1 = L_t^{-1} ∇^2 u_0 + L_t^{-1} A_0{f(u)}
u_2 = L_t^{-1} ∇^2 u_1 + L_t^{-1} A_1{f(u)}
Thus, with specification of the explicit function f(u) and of the conditions on t (assuming dependence on x, y, z), we can calculate φ_n. We can use one of the other possible operator equations if the appropriate conditions on x, y, or z are better known and do not give a vanishing u_0 term. If a forcing function g is also present, the u_0 term becomes u_0 = A + Bt + L_t^{-1} g.
With L_t = ∂/∂t and L_t^{-1} the definite integration from 0 to t,

We need to know u(t = 0), which we identify as u_0 in the decomposition, and we also need the function k(u) for the specific model being studied. For illustration, we use k(u) = 1 + u, which gives us a nonlinear term uu_x, or uu′. Now

and φ_n = Σ_{i=0}^{n-1} u_i.
RANDOM NONLINEAR HEAT EQUATION: We can write, using L_t = ∂/∂t,

and assuming

For either random g or random α, for stationary or nonstationary cases, ⟨u⟩ is found by writing out the series and taking term-by-term expectations without closure approximations or perturbation. If g is random,

where we note that the input g would not be correlated with the system parameter α. If α is random,
∂^2 u/∂t∂x = sin u. Letting L_t = ∂/∂t and u_x = u′, we have

L_t u′ = sin u
u′ = u′(0) + L_t^{-1} sin u = u′(0) + L_t^{-1} Σ_{n=0}^∞ A_n{sin u}

Thus the system is calculable using the A_n for sin u:

A_0 = sin u_0
A_1 = u_1 cos u_0
A_2 = -(u_1^2/2) sin u_0 + u_2 cos u_0
A_3 = -(u_1^3/6) cos u_0 - u_1 u_2 sin u_0 + u_3 cos u_0
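These A_n can be verified against the defining expansion sin(u_0 + u_1 λ + u_2 λ^2 + u_3 λ^3) = Σ_n A_n λ^n by composing truncated series. A sketch with arbitrary numeric component values (our stand-ins):

```python
import math

def trunc_mul(p, q, N):
    """Multiply coefficient lists, truncating at degree N."""
    out = [0.0] * (N + 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j <= N:
                out[i + j] += a * b
    return out

def adomian_sin(u, N):
    """A_0..A_N for f(u) = sin u, from the identity
    sin(u0 + v) = sin(u0) cos(v) + cos(u0) sin(v),
    with v = u1 L + u2 L^2 + ... expanded as a truncated series in L."""
    v = [0.0] + list(u[1:N + 1])
    cos_v = [1.0] + [0.0] * N
    sin_v = [0.0] * (N + 1)
    term = [1.0] + [0.0] * N                 # holds v^k / k!
    for k in range(1, N + 1):
        term = [c / k for c in trunc_mul(term, v, N)]
        sign = (-1.0) ** (k // 2)
        if k % 2:
            sin_v = [s + sign * c for s, c in zip(sin_v, term)]
        else:
            cos_v = [s + sign * c for s, c in zip(cos_v, term)]
    return [math.sin(u[0]) * c + math.cos(u[0]) * s
            for c, s in zip(cos_v, sin_v)]

u = [0.5, 0.3, -0.2, 0.1]    # stand-ins for u_0..u_3 at a fixed point
A = adomian_sin(u, 3)
```

The four coefficients reproduce the listed formulas exactly, e.g., A_2 = -(u_1^2/2) sin u_0 + u_2 cos u_0.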
SCHRÖDINGER EQUATION WITH QUARTIC POTENTIAL:

describes the case of a particle in a quartic potential V(x) ∝ x^4. It can be written

d^2 y/dx^2 + α x^4 y + β y = 0

If L = d^2/dx^2 and L^{-1} = ∫∫ (·) dx dx, then

where y_0 satisfies the conditions. Then, for n >= 0,

and y = Σ_{n=0}^∞ y_n is the solution.
We have, using decomposition (for the equation d^2 u/dx^2 = g + kx^p u),

u = c_1 + c_2 x + gx^2/2 + L^{-1} kx^p u

assuming g is a constant. This is decomposed into

u_0 = c_1 + c_2 x + gx^2/2
u_{m+1} = L^{-1} kx^p u_m   (m >= 0)
Since u = Σ_{m=0}^∞ u_m, we can write the result in the form

u = c_1 α(x) + c_2 β(x) + γ(x)

where

We can use the conditions u(1) = u(-1) = 0 to evaluate c_1 and c_2, so that

(The case g = 2, k = 40, p = 1 was verified to seven-digit accuracy.)
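A sketch of this computation for the quoted case g = 2, k = 40, p = 1 (our own reconstruction, in exact rational arithmetic; the constants c_1, c_2 are re-evaluated for the truncated series, as the text prescribes):

```python
from fractions import Fraction

def iterate(p0, k, n_iter):
    """Partial sum u_0 + u_1 + ... with u_{m+1} = L^{-1} k x u_m, for
    polynomials {degree: coeff}; L^{-1} x^d = x^{d+2}/((d+1)(d+2))."""
    total, p = dict(p0), dict(p0)
    for _ in range(n_iter):
        p = {d + 3: k * c / ((d + 2) * (d + 3)) for d, c in p.items()}
        for d, c in p.items():
            total[d] = total.get(d, 0) + c
    return total

def peval(p, x):
    return sum(c * x ** d for d, c in p.items())

g, k, n = 2, 40, 12
alpha = iterate({0: Fraction(1)}, k, n)        # series seeded by 1
beta = iterate({1: Fraction(1)}, k, n)         # series seeded by x
gamma = iterate({2: Fraction(g, 2)}, k, n)     # series seeded by g x^2 / 2

# impose u(1) = u(-1) = 0 on u = c1*alpha + c2*beta + gamma
a1, b1, g1 = peval(alpha, 1), peval(beta, 1), peval(gamma, 1)
a2, b2, g2 = peval(alpha, -1), peval(beta, -1), peval(gamma, -1)
det = a1 * b2 - a2 * b1
c1 = (-g1 * b2 + g2 * b1) / det
c2 = (-a1 * g2 + a2 * g1) / det
u = {d: c1 * alpha.get(d, 0) + c2 * beta.get(d, 0) + gamma.get(d, 0)
     for d in set(alpha) | set(beta) | set(gamma)}
```

The approximant vanishes at x = ±1 by construction, and the residual u'' - g - kxu at interior points shrinks rapidly with the truncation order.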
iu_t + 2u|u|^2 + u_xx = 0 can be written

iL_t u + Nu + L_x u = 0

where L_t = ∂/∂t, L_x = ∂^2/∂x^2, and Nu = 2u|u|^2. We can solve either for the L_t operator or for the L_x operator. If we solve for L_t u, we get immediately
and φ_n = Σ_{i=0}^{n-1} u_i is an n-term approximation converging to u. If we solve for L_x u, we have

u = u_0 - iL_x^{-1} L_t u - L_x^{-1} Nu
u_0 = α + βx
u_{n+1} = -iL_x^{-1} L_t u_n - L_x^{-1} A_n

for n >= 0. The given conditions determine α, β. If instead of a finite interval we have a limit, such as u → 0 as x → ∞, then we set lim_{x→∞} u = 0 and find the limit of the series.
The equations involve operators in x, y, z, and t. Our specific solution depends on the given initial/boundary conditions. Considering the equation for V, and assuming conditions on t are given,

The A and B must satisfy the given conditions, whether initial or boundary or limit conditions at infinity. The ψ equation, now using L_t = ∂/∂t, results in

To solve the system we find (ψ_0, V_0), then (ψ_1, V_1), then (ψ_2, V_2), etc. Difficult integrations will arise from difficult initial conditions for which the solution is desired. This problem essentially vanishes upon approximation of these functions by a few terms of an equivalent series, e.g., a Maclaurin series, or by decomposition of the functions, to arrive at elementary integrations.
THE N-BODY PROBLEM: The problem of the interaction of N bodies is important in many connections and for many force laws, which can involve attraction or repulsion, or even collisions. The problem is soluble by decomposition because the nonlinearity can be represented by the generalized polynomials (Chapter 3), which can be calculated. Here we will consider N-body dynamics in a gravitational field. Assume a system of N point masses m_i, i = 1, 2, ..., N, with positions specified by vectors r̄_i from a chosen origin. The distance between m_i and m_j will be denoted by

r_ij = |r̄_i - r̄_j| = [(r̄_i - r̄_j) · (r̄_i - r̄_j)]^{1/2}

where the indicated multiplication is a scalar product.

The trivial one-body case is described simply by m_1 d^2 r̄_1/dt^2 = 0 with initial conditions
TWO BODIES:

For two bodies in a gravitational field, we have two coupled equations

with initial conditions

noting that r̄_{12}/|r̄_{12}| is the unit vector. We can, for example, let one mass be sufficiently large, say M, to be assumed a fixed origin, and then find the motion of the small mass m. Or, the origin can be the observer on earth considering the effect of the sun on the moon. Or, finally, the origin can be in a rocket traveling through the solar system.
THREE BODIES: For three bodies, we have:

with initial conditions:
For four bodies, we now have four coupled equations and the corresponding initial conditions to consider,

with initial conditions r̄_i(0) = p̄_i and prescribed r̄_i′(0), for i = 1, 2, 3, 4.
GENERALIZING TO N-BODY DYNAMICS: In a gravitational field, we first simplify notation by writing

which is a convenient form for use of the generalized A_n for f(u, v) discussed in Chapter 3. The indicated multiplication is a scalar product. Hence, for N bodies,

Let L = d^2/dt^2 to write the left side as m_i L r̄_i, and apply L^{-1}, a two-fold definite integration from 0 to t, i.e., ∫_0^t ∫_0^t (·) dt dt, to both sides. Also let
c'j
as the solution of ~ 7 ; ' ' = 0,or,
Analogously, 7";
=
pj+ tT]
The first decomposition component is determined and the following components are calculable from
Thus, for n >= 0,

so all components are calculable by determining the A_n using the methods of Chapter 3. For convenience, the necessary quantities are listed. For scalar problems, the A_n for f(r) = r^{-2} = Σ_{n=0}^∞ A_n are:

The A_n for f(r) = r^{-3} = Σ_{n=0}^∞ A_n are:

The A_n for f(u, v) are given by

where
For the nonlinearity h(r̄_i, r̄_j) we define the necessary polynomials by

i.e.,

Since

we can now write

so that

which can be rewritten as

We can decompose the r̄_i and r̄_j thus:

Then

Since the h function is a nonlinear function of two variables, we can write it in terms of the generalized polynomials A_n{f(u, v)} of Chapter 3 (and also of reference [4]). Thus

Thus we have a system of N coupled nonlinear second-order equations, or of Volterra integral equations when the L^{-1} operator is applied, which are solvable by the decomposition method, since the nonlinearity, though difficult, can be expressed by the methods of Chapter 3 and reference [2]. Details for specific special cases will appear in a forthcoming paper as they are programmed and calculated.
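As the simplest instance of these formulas, consider a scalar radial fall r'' = -μ/r^2 from rest (our illustrative reduction, with units μ = r_0 = 1; it is not the full vector scheme). Each component is a monomial c_n t^{2n}, and the A_n for r^{-2} follow from (1 + w)^{-2} = Σ_j (-1)^j (j + 1) w^j with w = r - r_0:

```python
from fractions import Fraction

def radial_fall(N):
    """Decomposition of r'' = -1/r^2, r(0) = 1, r'(0) = 0.  Component n is
    c_n t^{2n}; A_n (for r^{-2}) is the lambda^n coefficient of
    (sum_k c_k lambda^k)^{-2}, extracted via powers of w = sum_{k>=1} c_k L^k."""
    c = [Fraction(1)]
    for n in range(N):
        A_n = Fraction(1) if n == 0 else Fraction(0)   # w^0 contribution
        wpow = [Fraction(0)] * (n + 1)
        for k in range(1, n + 1):
            wpow[k] = c[k]                             # wpow = w^1
        for j in range(1, n + 1):
            A_n += (-1) ** j * (j + 1) * wpow[n]
            nxt = [Fraction(0)] * (n + 1)              # wpow <- wpow * w
            for a in range(n + 1):
                if wpow[a]:
                    for b in range(1, n - a + 1):
                        nxt[a + b] += wpow[a] * c[b]
            wpow = nxt
        # r_{n+1} = -L^{-1} A_n: integrate t^{2n} twice from 0
        c.append(-A_n / ((2 * n + 1) * (2 * n + 2)))
    return c
```

The components c_1 = -1/2, c_2 = -1/12, c_3 = -11/360 agree with the Taylor coefficients of the exact trajectory; e.g., differentiating the equation twice gives r''''(0) = -2, matching 4!·c_2.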
NONLINEAR RELATIVISTIC PARTIAL DIFFERENTIAL EQUATIONS: A class of nonlinear equations occurring in mathematical approaches to elementary particle theory [5] is given by

The objective is to obtain a non-perturbative time development of the field for this and similar equations involving other nonlinear interactions, without the use of cutoff functions or truncations. In decomposition form [2-5],

where L_x = ∂^2/∂x^2, L_y = ∂^2/∂y^2, L_z = ∂^2/∂z^2, L_t = -∂^2/∂t^2. The possible operator equations for partial solutions [4] are:
The solution φ(x, y, z, t) will be approximated by the n-term decomposition series φ_n. We visualize a manifold in a four-dimensional coordinate system with x, y, z, t axes. On the coordinate planes we have curves φ(x), φ(y), φ(z), and φ(t) representing intersections of the φ "surface" with the planes. We can start from any of these functions (i.e., the initial/boundary conditions) to generate φ(x, y, z, t). We can begin with any one of these equations to yield the solution, and in general all of the results are identical. (In special cases, they are asymptotically equal.) Suppose we consider the fourth equation. Then, operating with L_t^{-1}, the two-fold definite integration from 0 to t, and identifying the initial term
we have

φ = φ_0 - L_t^{-1} m^2 φ - L_t^{-1} g φ^3 + L_t^{-1}[L_x + L_y + L_z]φ

We now apply the decomposition

We can now determine all components of φ; thus,

φ_{m+1} = -L_t^{-1} m^2 φ_m - L_t^{-1} g A_m + L_t^{-1}(L_x + L_y + L_z) φ_m
Now $\varphi_n = \sum_{m=0}^{n-1} \varphi_m$ is a convergent approximation to $\varphi = \sum_{m=0}^{\infty} \varphi_m$, i.e., to the solution. Which of
the four equations we use depends on which initial/boundary conditions are best known or measured. When the nonlinearity is of higher degree, e.g., in the scalar relativistic equation:
the same procedure applies with the appropriate $A_n$ for $\varphi^p$, for which rules are given in [2].
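The component recursion can be carried out mechanically once the $A_n$ are generated. As a minimal runnable sketch (not the relativistic computation itself; the helper names are our own), consider the same scheme applied to the model problem $u' + u^2 = 0$, $u(0) = 1$, with exact solution $1/(1+t)$; for the quadratic nonlinearity $f(u) = u^2$ the Adomian polynomials reduce to $A_m = \sum_{i+j=m} u_i u_j$:

```python
def poly_mul(p, q, n):
    """Product of two coefficient lists in t, truncated to degree n - 1."""
    r = [0.0] * n
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            if i + j < n:
                r[i + j] += pi * qj
    return r

def inv_L(p):
    """L^{-1}: definite integration of a polynomial in t from 0 to t."""
    return [0.0] + [ck / (k + 1) for k, ck in enumerate(p)]

# u' + u^2 = 0, u(0) = 1: u_0 = 1, u_{m+1} = -L^{-1} A_m,
# with A_m = sum over i + j = m of (u_i u_j) for f(u) = u^2.
N = 6
u = [[1.0]]                               # u_0 as a coefficient list in t
for m in range(N - 1):
    A_m = [0.0] * N
    for i in range(m + 1):                # pairs of components with i + j = m
        for k, ck in enumerate(poly_mul(u[i], u[m - i], N)):
            A_m[k] += ck
    u.append([-ck for ck in inv_L(A_m)][:N])

# The n-term approximant phi_n, summed component-wise.
phi = [sum(comp[k] if k < len(comp) else 0.0 for comp in u) for k in range(N)]
print(phi)  # [1.0, -1.0, 1.0, -1.0, 1.0, -1.0]: partial sums of 1/(1 + t)
```

Each pass generates one new component from the preceding ones, exactly as in the recursion above; the six-term approximant reproduces the Taylor expansion of the exact solution.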
TIME-DEPENDENT SCHRÖDINGER EQUATION IN CONFIGURATION SPACE: The equation is [6]
The vector $\bar{r}$ is the position vector of the particle referred to a convenient origin. We can introduce unit vectors along axes of rectangular (x, y, z) or spherical (r, θ, φ) coordinates. In terms of rectangular Cartesian coordinates, the operator $\nabla^2 = L_x + L_y + L_z$ where $L_x = \partial^2/\partial x^2$, $L_y = \partial^2/\partial y^2$, $L_z = \partial^2/\partial z^2$. Then
Using decomposition, we can solve for any of the operators $L_x$, $L_y$, $L_z$, or $L_t = \partial/\partial t$ as long as we know the appropriate conditions on x, y, z, or t. The inverse operators $L_x^{-1}$, $L_y^{-1}$, $L_z^{-1}$ are two-fold indefinite integrations respectively in x, y, z. The inverse $L_t^{-1}$ is a single definite integration from zero to t, representing an initial-condition problem for which $\Psi(t = 0)$ is required. In the other cases, we have boundary-value problems for which we need values of $\Psi$ at two values of x or y or z. Suppose we solve in terms of $L_x$; then
where $\alpha = 2m/i\hbar$ and $\beta = 2m/\hbar^2$. Then we must operate on all terms with $L_x^{-1}$. The left side becomes $L_x^{-1} L_x \Psi = \Psi - A - Bx$. Rearranging, we have
$\Psi = A + Bx + \alpha L_x^{-1}\,\partial\Psi/\partial t + \beta L_x^{-1} V \Psi - L_x^{-1}[L_y + L_z]\Psi$. Let $\Psi = \sum_{n=0}^{\infty} \Psi_n$ and identify $\Psi_0$ as $A + Bx$. Then an n-term approximant to $\Psi$, denoted by $\varphi_n$,
which becomes $\Psi$ in the limit as $n \to \infty$. Because of rapid convergence, a few terms are generally sufficient. (When this is not the case, one can use Padé approximants or other acceleration techniques, or the method of asymptotic decomposition.) The decomposition components are
and we can now write $\varphi_n[\Psi]$. For a particular potential, e.g., an isotropic harmonic oscillator, $V = \frac{1}{2} m\omega^2(x^2 + y^2 + z^2)$, we can now calculate the solution. Evidently, we can also deal with nonlinear potential functions or nonlinear Schrödinger equations. The decomposition method is not restricted to potentials varying slowly over a de Broglie wavelength. The problem solution is complete when A and B are evaluated by matching to the given conditions. This requires matching the $\varphi_n$ to the boundary conditions for each value of n, as previously discussed. Other interesting examples (Ginzburg-Landau equation, Euler equations for inviscid flow, isentropic flow) as well as further results on Navier-Stokes equations and on numerical computation will appear in future publications.
REFERENCES
1. G. Adomian, Analytic Solution of the Stochastic Navier-Stokes System, Found. of Physics, 21, (831-843) (July 1991).
2. G. Adomian, Nonlinear Stochastic Systems Theory and Applications to Physics, Kluwer (1989).
3. G. Adomian, Linear Random Operator Equations in Mathematical Physics I, II, III, J. Math. Phys., 11, 3, (1069-1084) (1970); 12, 9, (1944-1955) (1971).
4. N.T. Shawagfeh, Lane-Emden Equation, J. Math. Phys., 34, 9, (4364-4369) (Sept. 1993).
5. I. Segal, Quantization and Dispersion for Nonlinear Relativistic Equations, in The Mathematical Theory of Elementary Particles, R. Goodman and I. Segal (eds.), MIT Press (1965).
6. D.S. Saxon, Elementary Quantum Mechanics, Holden-Day (1968).

SUGGESTED READING
1. A.S. Monin and A.M. Yaglom, Statistical Fluid Mechanics I-II, J. Lumley (ed.), MIT Press (1971).
2. L.D. Landau and E.M. Lifshitz, Quantum Mechanics, J.B. Sykes and J.S. Bell (transl.), Addison-Wesley (1958).
3. L.D. Landau and E.M. Lifshitz, Mechanics, J.B. Sykes and J.S. Bell (transl.), Addison-Wesley (1960).
APPENDIX I
PADÉ AND SHANKS TRANSFORMS
PADÉ APPROXIMANTS: The objective here is to find a solution in the large, i.e., in the range $(0,\infty)$, from the decomposition series, which normally has a finite circle of convergence for initial-value problems. The procedure is to seek a rational function for the series. Given a function f(z) expanded in a Maclaurin series $f(z) = \sum_{n=0}^{\infty} c_n z^n$, we can use the coefficients of the series to represent the function by a ratio of two polynomials, symbolized by [L/M] and called the Padé approximant. The basic idea is to match the series coefficients as far as possible. Even though the series has a finite region of convergence, we can obtain the limit of the function as $x \to \infty$ if L = M. Notice that if we are satisfied with [1/1], we will have
so that the coefficients of $z^2$ are zero, i.e., $b_1 c_1 + b_0 c_2 = 0$.
Taking $b_0 = 1$, we have $b_1 c_1 + c_2 = 0$. Now consider [2/2], or
Clearly the coefficients of $z^3$ are zero, so that we can write
In general, we note that there are L + 1 independent coefficients in the numerator and M + 1 coefficients in the denominator. To make the system determinate, it is customary to let $b_0 = 1$. We then have M independent coefficients in the denominator and L + M + 1 independent coefficients in all. Now the [L/M] approximant can fit the power series through orders $1, z, z^2, \ldots, z^{L+M}$ with an error of $O(z^{L+M+1})$. For example, for
we have
Consequently,
Equating coefficients of $z^{L+1}, z^{L+2}, \ldots, z^{L+M}$ in turn, we can write
Setting $b_0 = 1$, we have M linear equations for the M coefficients in the denominator.
We invert the matrix on the left and solve for the $b_i$ for $i = 1, \ldots, M$. Since we know $c_0, c_1, c_2, \ldots$, we can equate coefficients of $1, z, z^2, \ldots, z^L$ to get $a_0, a_1, \ldots, a_L$. Thus
$a_0 = c_0$
$a_1 = c_1 + b_1 c_0$
Thus the numerator and denominator of the Padé approximant are determined, and we have agreement with the original series through order $z^{L+M}$. From the matrix equation, we can write the lower-order approximants. (For higher orders, one can use symbolic programs.) 1) [L/M] = [1/1]
where $D = c_1 c_3 - c_2^2$, so that we have
For $f(z) = c_0 + c_1 z + c_2 z^2 + \cdots$ we have
$\lim_{z\to\infty} [1/1] = a_1/b_1$
$\lim_{z\to\infty} [2/2] = a_2/b_2$
$\lim_{z\to\infty} [3/3] = a_3/b_3$
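The M linear equations for the $b_i$ are simple to set up and solve numerically. A sketch (the function name `pade` is our own; assumes NumPy), checked against the classical [2/2] approximant of $e^x$, $(1 + x/2 + x^2/12)/(1 - x/2 + x^2/12)$:

```python
import numpy as np

def pade(c, L, M):
    """[L/M] Pade coefficients from Maclaurin coefficients c (needs L+M+1 of them).

    With b_0 = 1, solves the M linear equations
        sum over j = 1..M of b_j c_{L+i-j} = -c_{L+i},  i = 1..M,
    then back-substitutes a_k = sum over j = 0..min(k,M) of b_j c_{k-j}.
    """
    c = np.asarray(c, dtype=float)
    C = np.array([[c[L + i - j] if L + i - j >= 0 else 0.0
                   for j in range(1, M + 1)]
                  for i in range(1, M + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(C, -c[L + 1:L + M + 1])))
    a = np.array([sum(b[j] * c[k - j] for j in range(min(k, M) + 1))
                  for k in range(L + 1)])
    return a, b

from math import factorial
c = [1 / factorial(n) for n in range(6)]   # Maclaurin coefficients of e^x
a, b = pade(c, 2, 2)
print(a)  # approximately [1, 1/2, 1/12]
print(b)  # approximately [1, -1/2, 1/12]; so lim [2/2] = a_2/b_2 = 1
```

Note that $a_2/b_2 = 1$ here, in agreement with the limit formulas above (and with the fluctuating limits for $e^{\pm x}$ discussed in the PROBLEM below).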
EXAMPLE: Find the limit for $e^{-x}$ at x = 1 (i.e., approximate $e^{-1}$): [1/1] = .333..., [2/2] = .368..., [3/3] = .368...
EXAMPLE: For $e^x$, we have
Note that if we let x = 1 to consider the series for e, we get the correct limit.
PROBLEM: Noting that the limit is correct for x = 1 but that the limit at $\infty$ for [1/1], [2/2], [3/3] fluctuates, i.e., $a_m/b_m = \pm 1$ as m increases for both $e^x$ and $e^{-x}$, explain the lack of convergence to a limit, since we know $e^{-x} \to 0$ as $x \to \infty$ and $e^x \to \infty$ as $x \to \infty$. Try the Shanks (or Wynn) transformation. These transformations are effective in accelerating convergence of many slowly convergent series. For cases where the Padé approximant appears inapplicable, we can sometimes use transformations of the series. Consider
i.e., $c_{3m} \ne 0$ but $c_{3m+1} = c_{3m+2} = 0$. We let $y = z^3$ so that
is now in a form for Padé transformation. In general, for a series
where N is a positive integer (or "skip factor") with $c_{Nm} \ne 0$ but $c_{Nm+\nu} = 0$ for $1 \le \nu \le N - 1$, we substitute $y = z^N$ to get
where n = Nm. Other useful transforms used to accelerate convergence are the Euler, Wynn, and Van Wijngaarden transforms.
To approximate by [1/1] we have
Consequently
(which is within 8% of the correct limit of 0.5 for the function).
EXERCISE: Calculate [2/2] and [3/3] to show that the limit approaches 0.5 more and more closely as we go to higher-order [L/M].
We note that even though the series has a limited region of convergence ($x \le 1/2$), the function is smooth for $0 \le x < \infty$. If we write a ratio $(a + bx)/(c + dx)$, it is clear that we get a finite limit as x approaches $\infty$. Calculating [L/M] = [1/1], we have
and carrying out the division, we get
which exactly matches the first three terms of the Taylor series. If we go as high as [5/5], we match the first 11 terms of the Taylor series
which is the correct limit of $2^{1/2}$ to three-place accuracy.
EXERCISE: Since $\cos x = \sum_{n=0}^{\infty} (-1)^n x^{2n}/(2n)!$, show that [2/2]
uses the first five terms of the Taylor series. Show that [2/2] is closer to the exact value of $\cos x$ than the sum of the first 5 terms.
SERIES NOT SUITABLE FOR PADÉ TRANSFORM: Sometimes the series is not in a convenient form for the Padé transform, which is designed for a series $\sum_{n=0}^{\infty} c_n x^n$ with non-zero $c_0, c_1, c_2$. If we have missing terms, e.g., $f(x) = \sum_{n=0}^{\infty} c_{Nn} x^{Nn}$ where N is a positive integer and $c_0 \ne 0$, e.g., $c_0 x^0 + c_3 x^3 + c_6 x^6 + \cdots$, then we use the transform $z = x^N$ (or $z = x^3$ in this specific case) and let $b_n = c_{Nn}$ to write $f(z) = \sum_{n=0}^{\infty} b_n z^n$ with $b_n \ne 0$, which is now suitable for the Padé transform.
If $f(x) = \sum c_n x^n$ and $c_1 = 0$, we can apply a translation $z = x - \delta$,
with $\rho$ symbolizing the radius of convergence. If we use $|\delta| < 1$, the result will be simpler for manual computation because the resulting series for each new coefficient in $f(z) = \sum_{n=0}^{\infty} b_n z^n$ will converge rapidly. The result will be $b_1 \ne 0$, which is equivalent to an analytic continuation.
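As a concrete check of the skip-factor substitution described above (the illustrative series is our own choice), take $f(x) = 1 - x^3 + x^6 - \cdots = 1/(1 + x^3)$ with N = 3; after $z = x^3$ the series is geometric, and even [1/1] is exact because f is rational in z:

```python
# f(x) = 1 - x^3 + x^6 - ... = 1/(1 + x^3); skip factor N = 3.
c = [1, 0, 0, -1, 0, 0, 1, 0, 0, -1]      # c_{3m+1} = c_{3m+2} = 0
N = 3
b = c[::N]                                 # b_n = c_{Nn} after z = x**N
# [1/1] in z with b_0 = 1: q_1 = -b_2/b_1, a_0 = b_0, a_1 = b_1 + q_1*b_0
q1 = -b[2] / b[1]
a0, a1 = b[0], b[1] + q1 * b[0]
x = 2.0
approx = (a0 + a1 * x**N) / (1 + q1 * x**N)
print(approx, 1 / (1 + x**3))  # both 0.111..., since [1/1](z) = 1/(1 + z)
```

Without the substitution, [1/1] in x cannot even be formed here, since $c_1 = c_2 = 0$.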
THE SHANKS TRANSFORM: This is a nonlinear transform which can be very effective, particularly in accelerating convergence of slowly converging series. It has even been applied to diverging series, which seems contradictory. However, if a power series has been obtained by dividing out a rational function, this transform is a nonlinear means of inverting the procedure to obtain the rational function. The Shanks transform is related to the Padé approximant. It is more accurate; however, the Padé approximant is more explicitly expressed in terms of the coefficients of the original series. Let us write the sequence of partial sums $\{S_n\}$ for a series and define the Shanks transform by
We often want repeated transforms, called the iterated Shanks transform, so it is convenient to write $\{A_n\}$ for the $\{S_n\}$. The iterated transforms will often lead to an extremely accurate solution. The first-order transform is written as:
We now write the sequence simply as
where $A_n = S_n$.
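The transform and its iteration are short to program. A sketch (the function name is our own) applied to the slowly converging Leibnitz series $\pi = 4 - 4/3 + 4/5 - 4/7 + \cdots$:

```python
from itertools import accumulate

def shanks(S):
    """One Shanks transform of a sequence of partial sums:
    T(S_n) = (S_{n+1} S_{n-1} - S_n^2) / (S_{n+1} + S_{n-1} - 2 S_n)."""
    return [(S[n + 1] * S[n - 1] - S[n] ** 2)
            / (S[n + 1] + S[n - 1] - 2 * S[n])
            for n in range(1, len(S) - 1)]

# Partial sums of the Leibnitz series pi = 4 - 4/3 + 4/5 - 4/7 + ...
terms = [4 * (-1) ** n / (2 * n + 1) for n in range(10)]
S = list(accumulate(terms))

T = list(S)
while len(T) > 2:          # iterate the transform on each shorter sequence
    T = shanks(T)
print(S[-1])               # S_10 = 3.0418..., correct to only one figure
print(T[0])                # iterated transform: accurate to several more figures
```

Each application shortens the sequence by two, so ten partial sums admit four iterations; the iterated value is dramatically closer to $\pi$ than $S_{10}$.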
EXAMPLE: In the continued fraction representing $\sqrt{2}$, the sequence of partial sums $\{S_n\}$ is $S_1 = 1$, $S_2 = 3/2$, $S_3 = 7/5$. Find the limit. Calculation yields:
We note that $A_5 = 1.4137$, which is correct to 3 figures, while $C_3 = 1.414213564$ is correct to 9 figures. EXERCISE: Verify all results.
EXAMPLE: The Leibnitz series for $\pi$ is $\pi = 4 - 4/3 + 4/5 - 4/7 + \cdots$. The results are:
We see that the tenth partial sum $A_{10}$, or $S_{10}$, is correct only to one figure. Shanks [1] points out that to give an answer correct to eight figures would require n = 40 million in $S_n$, while we note that $e_3(S_5)$ is already correct to eight figures. EXERCISE:
In the example
in the previous section on Padé transforms, we obtained [1/1] = .54, which was close to the correct limit of .5. Show that the Shanks transform $T(S_2) = .54$ also. Investigate the relationship between the two procedures. Another related transform (which can also be iterated like the Shanks transform) is the Aitken transform defined by
The first-order Shanks transform is equivalent to the Aitken ($\delta^2$) process, and the mth-order Shanks transform of the nth partial sum is equivalent to the [m/n] Padé approximant. We can view the Shanks transform as a unifying concept subsuming the Aitken process and the Padé approximant.
SUGGESTED READING
1. Daniel Shanks, Nonlinear Transformations of Divergent and Slowly Convergent Series, J. Math. and Physics, 34, (1-42) (1955).
2. A.C. Aitken, On Bernoulli's Numerical Solution of Algebraic Equations, Proc. Roy. Soc. Edinburgh, (289-305) (1926).
APPENDIX II
The first column on the right of the equation is equal to $u_0$. The second column is equal to $u_1$. The third column is equal to $u_2$, etc. The sums of the columns will be denoted by $u_0^i, u_1^i, u_2^i, \ldots$, respectively, where i indicates the initial-value format. The sum of the column to the left of the equal sign is denoted by $u_n^b$ for the boundary-value format. We have
Thus in writing the approximants $\varphi_m$ we can write
which can be written
Referring again to (1), we have
or $u_m = \sum_{n=0}^{m} u_n^{m-n}$ in our boundary-value format. Returning to (1) we can write, in initial-value format,
or $u_m = \sum_{n=0}^{m} u_n^{m-n}$ (in initial-value format). By decomposition $u = \sum_{n=0}^{\infty} u_n$; by double decomposition $u = \sum_{n=0}^{\infty} \sum_{m=0}^{\infty} u_n^{m}$.
APPENDIX III
CAUCHY PRODUCTS OF INFINITE SERIES
In one dimension, for $u = \sum_{n=0}^{\infty} a_n x^n$ and $v = \sum_{m=0}^{\infty} \beta_m x^m$, we can write
In two dimensions,
$u = \sum_{n_1=0}^{\infty} \sum_{n_2=0}^{\infty} a_{n_1 n_2}\, x_1^{n_1} x_2^{n_2}$
In three dimensions,
In four dimensions,
In N dimensions,
These can all be programmed using "do-loops" and stopping rules. These product rules should be useful in programming solutions where the system input and system coefficients are known only as power series.
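As a sketch of the one-dimensional product in the "do-loop" style just described (the function name is our own; truncation of both input series serves as the stopping rule):

```python
def cauchy_product(a, b):
    """Coefficients of (sum a_n x^n)(sum b_m x^m):
    c_k = sum over j = 0..k of a_j b_{k-j}, truncated to the available terms."""
    c = [0.0] * (len(a) + len(b) - 1)
    for n, an in enumerate(a):        # the outer "do-loop"
        for m, bm in enumerate(b):    # the inner "do-loop"
            c[n + m] += an * bm
    return c

# (1 + x)(1 + x) = 1 + 2x + x^2
print(cauchy_product([1.0, 1.0], [1.0, 1.0]))  # [1.0, 2.0, 1.0]
```

The higher-dimensional products add one nested loop per dimension over the multi-indices, exactly as in the formulas above.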
INDEX

Acceleration Techniques 30, 206, 338
(Adomian) Polynomials 11
Analytic Simulants 17, 289
Applications
  Advection 316
  Advection-Diffusion 318
  Burger's Equation 311
  Dissipative Wave Equation 28, 30
  KdV Equation 321
  Kuramoto-Sivashinsky Equation 312
  Lane-Emden Equation 315
  N-body Problem 328
  Navier-Stokes Equation 302
  Nonlinear Heat Equation 322
  Nonlinear Klein-Gordon Equation 322
  Nonlinear Relativistic Partial Differential Equation 334
  Nonlinear Transport in Moving Fluids 316
  Random Nonlinear Heat Equation 323
  Schrödinger Equation
    Nonlinear 327
    Quartic Potential 325
    Yukawa-coupled Klein-Gordon 327
  Sine-Gordon Equation 324
  Turbulence 307
  Van der Pol Equation 230, 231, 309
Applications of Modified Decomposition 154
Asymptotic Decomposition 18, 241
Boundary Conditions at Infinity 211
Boundary-value Problems 87, 114, 138, 288
Cauchy Products 350
Convergence Regions 23, 25
Decomposition
  for ordinary differential equations 6, 28
  for partial differential equations 22, 28
Difficult Nonlinearities 150
Dirichlet Conditions 75
Double Decomposition 22, 69, 87
Duffing Equation 154, 157, 230, 231, 235, 236, 263, 277, 280
Generalized (Adomian) Polynomials 50
Generalized Taylor Series 10
Gibbs Phenomena 301
Harmonic Oscillator 247, 251
Integral Boundary Conditions 196
Integral Equations 224
Irregular Contours or Surfaces 288
Mixed Derivatives 46
Modified Decomposition 115, 131, 138, 214
Neumann Boundary Conditions 190
Noise Terms 29
Nonlinear Ordinary Differential Equations 85
Nonlinear Oscillations 228
Nonlinear Partial Differential Equations 80
Partial Solutions 23, 28, 30, 33, 35, 36
Perturbation 284
Proliferation of Terms 254
Smooth Expansions of Piecewise-Differentiable Functions 298
Spatial and Temporal Formats 97, 106, 109
Staggered Summation 243, 348
Stopping Rules 18